June is a month of some trepidation for the editorial office of a journal. While, like any profession, editors have key performance indicators with which to calibrate and benchmark their performance, one of the most visible and influential measures of our success is the annual assessment published in ISI’s Journal Citation Reports (JCR), best known for the Impact Factor. Every year at the start of June, nervous speculation begins to seep through the office – up, down, or stay the same?
But what does it really mean? To begin with, the JCR publishes more than just the Impact Factor: the report contains a number of other metrics, each offering a different insight. Essentially, though, all of these metrics boil down to one concept – a measure of how influential the articles published in a given journal are, or, more crudely, how often those articles are cited.
The Impact Factor is calculated as follows: count all the citations received in a given year by articles the journal published over the preceding two years, then divide by the number of citable articles published in those two years. By taking the same data across the same time period for all journals, you can get a picture of how one journal is ranked relative to another in the same discipline. And that’s important caveat number one when quoting Impact Factors – different disciplines can have quite different ranges of Impact Factor, depending on the citation behavior of the community they serve (some communities have traditions of citing quickly from journals, others are more likely to use different sources, such as preprint repositories, which then affects the numbers of citations). As an example of the contrast, medicine is a high-Impact Factor discipline, its highest-ranked title in the general medicine category being the New England Journal of Medicine, Impact Factor 47.050. Mathematics, on the other hand, is a much lower Impact Factor field, with Annals of Mathematics taking top spot with 4.174. It’s not really fair to suggest that the former is a “better” journal than the latter – both are the “best” journal in their field. Or, more accurately, the highest-ranked journal in their field.
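The calculation above can be sketched in a few lines. Note that the journal figures below are purely illustrative, not real JCR data:

```python
def impact_factor(citations_in_year: int, citable_items_prev_two_years: int) -> float:
    """Impact Factor for year Y: citations received in Y by articles
    published in years Y-1 and Y-2, divided by the number of citable
    articles published in Y-1 and Y-2."""
    return citations_in_year / citable_items_prev_two_years

# Hypothetical journal: 600 citable articles published in 2007-2008,
# collectively receiving 5,028 citations during 2009.
print(round(impact_factor(5028, 600), 2))  # 8.38
```

Because the article count sits in the denominator, a journal that publishes many more papers must attract proportionally more citations just to keep its Impact Factor level – a point that matters later in this piece.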
Which brings me to caveat number two – the Impact Factor doesn’t tell you which journal is the “best”. That really is something you can’t measure, since, like subject disciplines, there are many different types of journals, each serving a different function and a different need for their academic communities, all of which are equally important in the grand scheme. Publishing in Nature or Science is undoubtedly a career goal for most scientists, as these journals are highly visible and extremely influential, but their multidisciplinary focus means that only a few papers a year may be published in your subject area. Some journals that are more specialized in a certain topic might seek to publish the most important breakthroughs in a field, those most likely to be highly cited, while others may adopt a more archival approach, choosing to report more incremental advances, detailed characterizations, or work that refines and optimizes previously reported results. Such journals may publish many thousands of articles in a year, and since the Impact Factor is divided by the number of articles, this inevitably means a lower value than for a journal publishing a select number of new breakthroughs. But it doesn’t mean that an archival journal performs any less important a function in documenting the scientific record, which is, ultimately, the purpose of the scientific literature. So, the idea of a “best journal” is over-simplistic (but then again, so are most other “best (insert item) in the world” claims).
Both of the above issues mean that it is not a good idea to consider the Impact Factor an absolute measure of journal standards. This is an easy trap to fall into, not helped by the ever-increasing pressure on authors to publish in high-impact journals. However, for the moment the Impact Factor is the ranking system most widely accepted by the academic community – one could consider it, with apologies to Churchill, “the worst form of journal ranking except for all the others that have been tried”. So, with due consideration of the above ideas, what do I make of Advanced Materials’ performance this year? Well, first things first: we were pleased to see that the Impact Factor rose from 8.19 to 8.38. To put that into context, over that two-year period we had published a substantially larger number of papers than in previous years, so an increased Impact Factor against that backdrop shows that the papers we published were cited more often on average. So although we published more, our selection of papers was also more critical (a process aided immeasurably by the expertise of our reviewers), resulting in more influential publications. What I find interesting, looking into this a little more deeply, is which papers contributed significantly to this result, as those papers deemed most influential by our readers will certainly give an insight into which fields in materials science have been the most read about over the last year.
What topics were covered by our top ten papers published in 2007–2008, i.e., those most cited in 2009? Top of the list was thermoelectric materials, reviewed by Dresselhaus et al. Materials for energy applications accounted for two more of the ten, with Blom et al.’s discussion of device physics in fullerene bulk-heterojunction (BHJ) solar cells, and Leclerc et al.’s report of a new low-bandgap poly(carbazole) for solar cell applications. Also featuring, in no particular order, were organic nonvolatile memory, boron nitride nanotubes, click chemistry, superparamagnetic colloids, nanolithography, carbon-nanotube-based electronics, and light-emitting polymers for display applications. That’s quite a broad range, and a nice mix of applications, techniques, and fundamentals, which is good to see, as Advanced Materials is a general materials science journal and one of our goals as editors is to make sure we cover as wide a range of topics as we can.
All in all, then, we can reflect on a good performance in 2009, thanks to our authors submitting excellent, influential work, and our reviewers making sure we could select the best. But there’s always room for improvement, and come next June, it’ll be back to gnawing our fingernails and pacing the office, waiting for a (hopefully) higher Impact Factor.