Something has just happened that will almost certainly end the tyranny of impact factors and may well mark another step towards the extinction of most scientific journals. Did you notice it? Probably not, and even if you did you may not have understood what it was or what it may lead to.
It was the appearance of something called rather clunkily “article-level metrics.” These are a variety of scores and other bits of information attached to each article in the publications of the Public Library of Science (where I’m on the board). They shift attention from journals to articles, particularly for the academic bean counters anxious to find a convenient and low-cost way of ranking academics.
To illustrate the metrics let’s consider the article “Why most published research findings are false” by John Ioannidis, the most popular article ever published in PLoS Medicine, which has just celebrated its fifth birthday. (If you haven’t read the article, you should: it’s very important. As you read it you will add to its metrics.)
You can click on the tab at the top of the article entitled “metrics.” The first thing you’ll see is that the article has been viewed 239 697 times since it was published in August 2005. The count will be higher by the time you look because the data are updated every 24 hours, and a graph of page views over time shows that views of this article are still growing. The shape of that graph matters: many articles cease to be viewed after a while, and their curves flatten. John’s article continues to command attention.
We can also see that there have been 48 680 downloads of the PDF of the article. This probably reflects the number of people printing out the article to read and keep it. A high ratio of PDF downloads to page views probably means that many people have found the article valuable.
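For the numerically minded, here is a minimal sketch in Python of that ratio, using the figures quoted above. The threshold for what counts as a “high” ratio is my own illustrative assumption, not a PLoS definition.

```python
# The PDF-download-to-page-view ratio for the Ioannidis article,
# using the figures quoted in the text. The 0.1 threshold below is
# an illustrative assumption, not a PLoS definition.
page_views = 239_697
pdf_downloads = 48_680

ratio = pdf_downloads / page_views
print(f"PDF downloads per page view: {ratio:.2f}")  # ~0.20, about 1 in 5

if ratio > 0.1:
    print("Many readers are keeping a copy, a sign the article is valued.")
```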
Next you can see that the article has been cited 110 times in the Scopus database, 58 times in PubMed Central, and 98 times in CrossRef. Many of these citations will be the same, but the databases cover different journals, so it’s better to consult more than one. The impact factor, in contrast, is calculated from citations in a single (expensive) database. The number of citations for John’s article is very high, especially when we remember that many articles are never cited.
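To see why overlapping databases should be deduplicated rather than simply summed, here is a sketch that merges citing-article identifiers by DOI. The DOI lists are hypothetical placeholders; real Scopus, PubMed Central, and CrossRef records are retrieved quite differently.

```python
# Combining citation counts from several databases by deduplicating
# on the citing article's DOI. These DOI sets are hypothetical; real
# database records would be fetched from each provider.
scopus = {"10.1000/a1", "10.1000/a2", "10.1000/a3"}
pubmed_central = {"10.1000/a2", "10.1000/a4"}
crossref = {"10.1000/a1", "10.1000/a4", "10.1000/a5"}

unique_citations = scopus | pubmed_central | crossref  # set union drops duplicates
print(f"Raw sum: {len(scopus) + len(pubmed_central) + len(crossref)}")  # 8
print(f"Unique citing articles: {len(unique_citations)}")               # 5
```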
Citing an article usually indicates that other authors have seen value in it, although they could be citing it to point out its many flaws. Citations are obviously driven by researchers and other authors, and many doctors publish little or nothing—so when assessing the value of a piece of medical research it makes a lot of sense to consider data on readers as well as citations.
But there is still more. You can see that the article has been mentioned 17 times on Postgenomic, a site that aggregates science blogs from many sources. Academic bean counters may be snotty about blogs, but blogs are increasingly how scientists communicate with each other, avoiding the misery of peer review and the wild inaccuracies of journalists.
You can also see that 105 people have bookmarked the article in CiteULike, a site for collecting references, which probably means that the article has been or will be cited. Eighteen more have bookmarked it in Connotea.
This is just a beginning. PLoS plans to add more metrics. What is crucial is that the metrics can be collected automatically. It may become possible, for example, to count references in parliaments, official reports, Cochrane reviews, or the news media.
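The automation is the point, so here is a sketch of what a collector might look like. The source functions below return hard-coded placeholder numbers taken from the figures above; a real collector would query each provider’s API on a schedule (say, every 24 hours) and store the results, and adding parliaments or Cochrane reviews would mean adding one more entry.

```python
# Sketch of an automatic collector for article-level metrics. Each "source"
# is a function that, given a DOI, returns a count. These placeholders
# return the numbers quoted in the text; a real collector would call each
# provider's actual service.
from typing import Callable, Dict

DOI = "10.1371/journal.pmed.0020124"  # the Ioannidis article

def page_views(doi: str) -> int:
    return 239_697      # placeholder: would come from the publisher's logs

def blog_mentions(doi: str) -> int:
    return 17           # placeholder: would come from a blog aggregator

def bookmarks(doi: str) -> int:
    return 105 + 18     # placeholder: CiteULike plus Connotea

SOURCES: Dict[str, Callable[[str], int]] = {
    "page views": page_views,
    "blog mentions": blog_mentions,
    "bookmarks": bookmarks,
}

def collect_metrics(doi: str) -> Dict[str, int]:
    """Gather every metric for one article; a new source needs only one entry."""
    return {name: fetch(doi) for name, fetch in SOURCES.items()}

print(collect_metrics(DOI))
```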
Slowly but surely these metrics will become far superior to using the impact factor of the journal in which an article is published as a surrogate for the impact of the article itself. Although a routine practice, this is wholly unscientific: there is very little correlation between the impact of a journal and the impact of the articles it publishes, because a journal’s impact factor is driven by a few very highly cited articles.
The metrics also give a real-time and much broader measure of an article’s influence. Increasingly, governments and research funders are interested not just in the number of times an article is cited in other publications (an incestuous and self-serving measure) but in the impact it has in the real world, the changes it leads to.
So that’s why article-level metrics might doom the impact factor, but why might they signal the end of many journals? Because they make articles, rather than journals, what matters, and articles can then be published quickly in databases rather than in journals. PLoS One is already publishing around 500 papers a month, and other publishers are beginning to copy it.
The edifice of journals is beginning to crack—and not before time.
Competing interest: Richard Smith is on the board of the Public Library of Science and has been an enthusiast for open access publishing for 15 years.