My last column, in HERDSA News 29(3), concluded with a worry, "...I'm a bit jittery about the way the RQF is ticking!" [1] Well, relax, it stopped ticking. As announced tersely on the DEST/DEEWR websites:
A new Government, led by the Leader of the Australian Labor Party, the Hon Kevin Rudd MP, was sworn in by the Governor-General on 3 December 2007. [2]
...
The Australian Government announced on 21 December 2007 that it would not be proceeding with the former Government's Research Quality Framework (RQF) project. [3]
Given that the principal recent stimulus for an interest in bibliometrics, the RQF, will "not be proceeding", why revisit bibliometrics, and why an IT perspective? To begin with, there are the "I'll be back" [4] and "Be prepared" [5] concerns. Here is a succinct potential early warning from a medical researcher in the UK:
From 2008, state funding for academic research in the UK will be calculated differently. The research assessment exercise, which is based substantially on (high intensity, high cost) peer review, will transfer to bibliometric scoring. Such metrics include journal impact factors ... [6]
So bibliometrics should remain on the agenda, though this brief column will concentrate on highlighting and awareness raising rather than a systematic exposition, recognising that deep interest in the topic will come mainly from librarians, editors [7], publishers, and specialist researchers, for example in cybermetrics and webometrics [8]. An IT perspective is especially relevant because information technology could aptly be described as the "engine room" for modern bibliometrics, in particular for the two main topics I wish to explore. Firstly, in recent developments in citation analysis services, Scopus [9] and Google Scholar [10] are challenging the ISI Web of Knowledge [11] and related products, including Web of Science, from Thomson Scientific, owners of the well known ISI journal Impact Factor [11]. Secondly, many publishers have developed website services which automatically deliver links to articles that cite the article or abstract you are viewing on screen, and to related articles. These two areas are linked in the usual titillating ways you might expect: money, because Scopus and Web of Knowledge cost you (or your poor library) serious amounts whilst Google Scholar is free, and 'academic ego', because most of us feel it's nice to know who out there has or has not cited our papers.
To begin at a straightforward level, a growing number of papers in the library and information sciences literature are investigating comparisons between the 'majors' in bibliometrics, and other topics in citation reporting and analysis. Staying with my 'highlighting and awareness' purpose, here is a very, very small sample of titles of articles from recent browsings:
Going back to the main topic, one impression is that a 'bibliometric skirmish' could be brewing. For example:
The combination of the inflated citation count values dispensed by Google Scholar (GS) with the ignorance and shallowness of some GS enthusiasts can be a real mix for real scholars. [13, Jacso, 2006]
...Google Scholar's wider coverage of Open Access (OA) web documents is likely to give a boost to the impact of OA research and the OA movement. [14, Kousha & Thelwall, 2008]
...[Google Scholar] contains all of the elements of the sort of search service which we in our libraries are trying to provide by purchasing federated search tools. ...for known item searching - for that paper by this author on this topic for instance - it is often as good as any of the abstracting and indexing services we take, and better in that it is Google - easy and free and used by everyone. [15, MacColl, 2006]
...Scopus offers the best coverage from amongst these databases and could be used as an alternative to the Web of Science as a tool to evaluate the research impact in the social sciences. [17, Norris & Oppenheim, 2007]
Because Google Scholar is freely accessible from the Google site, students and faculty are finding and using it. They are beginning to ask librarians for their professional opinions of its efficacy. [18, Schroeder, 2007]
Whether this kind of "skirmishing" becomes a "war" remains to be seen. Some may hope that it does develop into the only kind of war I like to see, namely a price war. After the bibliometric heat has been on authors, it could be time for publishers to have a turn. An executive with Elsevier's Scopus made a sympathetic comment about authors, when writing on new alternatives to the Thomson ISI Impact Factor [11]:
Originally it [the Impact Factor] was intended as a collection management tool, but has since evolved into a metric used for evaluation of science at all levels as well as evaluation of authors. This can have far-reaching consequences for an author's grant applications, promotion and tenure since the metric is directly influenced by the performance of specific journals and is thus for a large part beyond the author's control. [24, de Mooij, 2007]
However, the publishing and bibliometric scenes are also displaying some interesting trends towards collaboration. This takes us to the second of my main topics, publishers developing web page links to a citation service. To begin with, here is a simple but important illustration from SAGE Journals Online, which publishes the AERA's Review of Educational Research. At this point we really need screen delivery for this column, but let's try anyway. For an example I selected Hattie and Timperley (2008) [26], partly because many members of HERDSA could be, or perhaps should be, interested in its content. The website display of the article's abstract [26] includes a menu comprising about 22 items, among them four links to Google Scholar. For example, click upon "Articles by Hattie, J." and the underlying HTML enables the HTTP call:
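(The literal query string generated by SAGE's page is not reproduced here; purely as an illustrative sketch, an author search link to Google Scholar takes roughly the following form.)
http://scholar.google.com/scholar?q=author:%22Hattie+J%22
Google Scholar then does the rest, returning its usual results list for that author, with each item carrying its own "Cited by" count and links - in effect, the publisher's page hands the citation tracking work over to Google's "engine room".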
For another example, consider van Raan (2005b), the "Fatal attraction..." article in Scientometrics [23]. The menu accompanying the display of the abstract [23] includes the information, "Referenced by 19 newer articles" (maybe higher if and when you look), each of these listed in a conventional way (first author, year, title, journal) with a hypertext link to an abstract for the citing article. For example, one of the links is to Yang & Meho (2007) [20] (though the date is given incorrectly as 2006), published in a Wiley InterScience journal, Journal of the American Society for Information Science and Technology. That's a rather nice "one click" thing for the reader, delivered in this case via CrossRef [27], a collaborative service created by numerous publishers. As in the previous example, citation counts and links to citing articles are created and updated automatically by IT "engine room" processes.
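For readers curious about the mechanics, and speaking in general terms rather than about this particular menu: participating publishers deposit their articles' Digital Object Identifiers (DOIs) and reference lists with CrossRef, and each "referenced by" link is then simply a DOI resolution, schematically of the form http://dx.doi.org/10.nnnn/suffix, which the DOI system redirects to the citing article's abstract page on whichever publisher's website holds it.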
To conclude, I should make a confession about the subtitle, "An IT perspective". This column is really "A search engine perspective", mainly Google and Google Scholar, and occasionally a publisher's website search engine. Without IT, and with only very rare visits to a university library, how else could I have found such interesting and relevant reading on a topic that I knew little about?
Author: Roger Atkinson retired from Murdoch University's Teaching and Learning Centre in June 2001. His current activities include publishing AJET and honorary work on TL Forum, ascilite Melbourne 2008 and other academic conference support and publishing activities. Website (including this article in html format): http://www.roger-atkinson.id.au/ Contact: rjatkinson@bigpond.com
Please cite as: Atkinson, R. J. (2008). Bibliometrics: An IT perspective. HERDSA News, 30(1). http://www.roger-atkinson.id.au/pubs/herdsa-news/30-1.html