Metrics. Scopus, Google Scholar, Web of Science and H-index.
When I fire up WordPress it asks me 'What's on your mind?'. Usually I don't just bang on about my current bugbear without having a point, but today is an exception.
Even though Web of Science and Scopus give me the same H-index, they disagree on which papers fall inside and outside the band that defines it. That's no surprise; all sources agree on the well-cited papers and on the poorly cited ones, but there's a great middle ground where just one or two citations might put a paper ahead of ten or a dozen others.
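To make the point concrete, here is a minimal sketch of how the H-index is computed: the largest h such that h papers each have at least h citations. The two citation lists below are invented for illustration, not real database figures; they show how two sources can land on the same H-index while disagreeing about which papers sit inside the band.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper is inside the band
        else:
            break  # everything after this is the 'tail'
    return h

# Hypothetical counts for the same eight papers from two databases:
source_a = [30, 12, 9, 5, 5, 4, 2, 1]
source_b = [28, 11, 8, 6, 4, 5, 3, 0]
print(h_index(source_a))  # 5
print(h_index(source_b))  # 5 -- same index, different papers at the boundary
```

Note how the papers near the cut-off have only four, five, or six citations each, so shifting a single citation between sources is enough to swap which paper makes the band.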
Metrics are the bane of many a researcher's life. They vary from field to field, from sub-discipline to sub-discipline. Someone doing work way ahead of the curve will not be cited as much as someone doing routine work on a hot topic. A junior author on a few papers by a big name gets an artificial boost. Some fields publish more than others, but managers may not know this.
One thing I don't like is the idea that having a lot of low-cited papers should count as a negative. Now, yes, I have quite a few, but some of those were written to give a student experience of the publication process. The work was definitely publishable, just perhaps not as exciting as some other stuff I've done, but I felt there was value both in putting the results into circulation and in getting a student to go through the process of writing a paper (and therefore working out what was important, how to say it and how to illustrate the points they were making), responding to reviewers' remarks, and so on, through to seeing it in print. Penalising researchers for having a long 'tail' will partly mean less material out there waiting to be discovered, and less chance that that student will get to see how the process works.

Conversely, I suppose it might prevent some 'stamp collecting' research. But even the stamp collecting is useful: from great masses of routine crystal structures, useful guiding principles can arise, like the bond valence sum approach to evaluating and predicting crystal structures. It is like crowd-sourcing. The individual contributions may seem trivial, or at least of very limited appeal, yet the trends across the whole lot put together add up to something important.
Picking winners is always fraught, and the great problem in science management is often seen as making sure that all staff are 'productive' when they are very diverse and their outputs are contributions to knowledge rather than material products. So we look at patents and funding obtained and papers cited, all of which work well (or at least 'badly but not appallingly badly') when the person in question is working in a large field with lots of peers. Anyone who strays too far off the curve is penalised, and it is very difficult to tell someone who is hewing out real new territory from someone who is achieving nothing, partly because in science one can turn into the other remarkably quickly. As science has mainstreamed as a career, gone from the province of a relatively few gifted amateurs to an established career path, and as economies have come to depend on it as a source of new ideas, there is a natural tendency to try to shape it to produce useful outcomes rather than let the useful applications arise naturally from all those clever people following their noses. This is not unreasonable, especially as so much of it is funded by tax dollars; but I do wonder what we're missing.
Thus ends my ramble.