Measuring citations: Calculations can vary widely
By Janet Raloff
Although the impact of a published study can be measured many ways, the most common tactic has been to tally how often, over the years, others cite the study in their published works. A small industry has emerged over the past half century to quantify these citations.
A new analysis has now compared citation counts from three different companies and shown that their performance differs. At least when it comes to published biomedical studies, some citation indices may make a given piece of work appear substantially more — or less — influential than do others.
And high tallies are not just a source of bragging rights. With academic-promotion committees, granting agencies, and others increasingly asking scientists for their research papers’ citation stats, these figures have the potential to boost or torpedo careers.
For their new analysis, Abhaya Kulkarni of the Hospital for Sick Children, in Toronto, and his colleagues compared three indexing services: the Web of Science, Scopus and Google Scholar. All regularly index thousands of biomedical and medical journals, books and more.
The researchers used each indexing service to tally citations through June 2008 for 328 papers, all published between Oct. 1, 1999, and March 30, 2000, in one of three peer-reviewed general medical journals: the Journal of the American Medical Association, Lancet or the New England Journal of Medicine.
Reporting in the Sept. 2 JAMA, Kulkarni’s team found that the three counting houses “produced quantitatively and qualitatively different citation counts.” The median count per indexed paper was 121 by the Web of Science, 149 by Scopus and 160 by Google Scholar. Depending on the type of paper tracked, Google Scholar picked up 30 to 37 percent more cites than did the Web of Science; Scopus picked up about 19 percent more.
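As a rough illustration only, and not the authors' own calculation (their percentages were computed per paper type, not from the overall medians), a minimal Python sketch shows the gaps those median counts imply:

# Back-of-the-envelope comparison using the median citation counts reported
# in the JAMA analysis: 121 (Web of Science), 149 (Scopus), 160 (Google Scholar).
medians = {"Web of Science": 121, "Scopus": 149, "Google Scholar": 160}
baseline = medians["Web of Science"]

for service, count in medians.items():
    extra = 100 * (count - baseline) / baseline
    print(f"{service}: {count} median citations ({extra:+.0f}% vs. Web of Science)")

Run as written, this puts Scopus roughly 23 percent and Google Scholar roughly 32 percent above the Web of Science, in the same ballpark as, though not identical to, the per-paper-type figures reported in the paper.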
The fact that there was a difference isn’t surprising, Kulkarni says, because each citation counter scans a different mix of publications. Whereas Web of Science tracks some 10,000 peer-reviewed journals, Scopus covers half again as many. (No comparison to Google Scholar was possible in this particular area, the researchers note, because details on the Google system’s methods for finding cites “have not been made public and it does not provide a list of all publishers with whom it has content agreements.”) Normally, tracked journals supply their contents to the indexing companies that are following them.
Web of Science’s indexing makes no claim to being all-inclusive. Quite the contrary. It maintains that its goal is to identify the most influential journals and to preferentially track the original papers that they publish. Its sources are primarily English-language, North American publications.
Scopus, by contrast, notes that publications in Europe, Latin America and Asia are the source of more than half of the material it tracks. This indexing service also covers conference proceedings, trade publications and some web sources. Google Scholar includes even less-scholarly offerings, such as student handbooks and administrative notes, the new JAMA analysis reports.
The first major science-indexing service, the Science Citation Index, was founded nearly a half century ago by Eugene Garfield’s Institute for Scientific Information. That index, now the Web of Science, has long been viewed as the gold standard, Kulkarni says. “But many of us have not fully appreciated the implications of not counting many of the other citations out there,” he says.
For instance, imagine that a new alternative to a drug banned in Japan has emerged, one that is safer and at least as effective as the original. References to the new alternative might be spreading like wildfire in Asian journals, especially Japanese-language publications — ones totally ignored by the “gold standard” Web of Science. A Western paper would get no credit for much of its influence there.
Cite counters that largely ignore particular types of published materials may also fail to gauge the true influence of a given piece of research. The new analysis found, for instance, that Web of Science excelled in finding citations to papers described as “articles,” editorials and letters. Scopus retrieved more non-English and review papers. Both of these services retrieved more citations than Google Scholar did to studies that acknowledged industry funding, that investigated the value of a drug or medical device, or that had a large group of authors.
There are limits to what citation tallies can reveal, of course. Scores of papers may cite a study only to dismiss it as lame, or worse. A high cite count for such a paper would justify embarrassment, not jubilation, a distinction that the raw numbers alone would never reveal.
These counting houses also perform cumulative citation tallies for everything that a journal has published during a year, then run their findings through some proprietary algorithms. What results is a figure representing a journal’s putative “impact factor.”
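The best-known such figure, the two-year impact factor published in Journal Citation Reports, has a widely documented definition: citations received in a given year to a journal’s papers from the previous two years, divided by the number of citable items the journal published in those two years. A minimal Python sketch with invented numbers, glossing over the proprietary judgment calls about what counts as a citable item:

# Two-year impact factor, per the widely published definition. The figures
# below are invented for illustration; real calculations hinge on proprietary
# decisions about which items count as "citable."
def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 900 citations in 2008 to its 2006-2007 papers,
# of which 300 counted as citable items.
print(impact_factor(900, 300))  # prints 3.0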
In an editorial accompanying the new JAMA paper, Marie McVeigh and Stephen Mann argue that many research analysts have “misappropriated” this impact factor as a quick and dirty estimate of the quality of a scientist’s endeavors. The idea: If the work is really good, it will get into the highest impact journals. If it only appears in low-impact journals, it must be weak.
The resulting pressure to publish in the most influential venues drives ever more authors to submit their papers to the same relatively few high-impact journals, observe McVeigh and Mann, staff members of Journal Citation Reports (which, like Web of Science, is published by Thomson Reuters). Ignoring the lower-impact journals won’t always be in the best interests of science, however. This could be especially true if those other journals are better at reaching the people who would best understand the new data — and use them.