Why I Dislike Metrics

Metrics are used to measure a researcher’s output. How many publications? Patents? Students? Where are they publishing? Are they being cited? How many dollars in grants are they pulling in?

It’s tricky, because researchers at universities do need to be held accountable for the money invested in them, and for the opportunity they were given that might otherwise have gone to someone else. Yet the outcomes of research can be diffuse, slow to materialise and hard to evaluate. A great conceptual breakthrough may have little impact initially; the investigator may have been fired by the time it is recognised. How does a non-expert administrator (who holds the purse strings) distinguish between a researcher who is not being cited because they are ahead of the curve, with few others yet working on similar ideas, and one who is poorly cited because they are simply dull? Both are likely to have a tough time getting grant money, too.

Such an administrator falls back on metrics. Impact factors, grant dollars accrued, and so on. Complex formulas are developed. So much for a publication in one of these journals, less for one in those; citation rates are multiplied by this and divided by that, papers with a lot of authors [are|are not] (choose one) down-rated…and when the government agencies that dole out grant money choose a particular metric, there’s really no choice.

Just looking at publications: once, sheer numbers were the ‘in’ thing. Then it was citations. Then the H-index, the M-index, the insert-your-own-clever-metric-here index, who knows. Now there are scores under which publications in lower-ranked journals actually count against a researcher: comparing one researcher with four papers in ‘top’ journals against another with four in top journals and three in middle-ranked ones, the latter would be penalised relative to the first.
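To make concrete just how mechanical these measures are, here is a minimal sketch of one of them, the H-index, in Python; the citation counts are invented purely for illustration:

```python
def h_index(citations):
    """Return the largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented citation counts for two hypothetical researchers.
print(h_index([50, 30, 20, 10]))           # 4
print(h_index([50, 30, 20, 10, 3, 2, 1]))  # 4 -- the extra minor papers do no harm
```

Under the H-index, the three small extra papers change nothing; under the journal-rank scores described above, they would actively count against the second researcher, which is precisely the problem.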

I cannot understand how this can be considered equitable, reasonable or sensible. I recognise that it is better to have high impact than low. I recognise that staff who consistently fail to have high impact need to improve that record. I have no problem with that. But the idea that a tail of papers in lower-ranked journals is to be penalised is short-sighted, counter-productive and shows a lack of understanding about how science works. I will not speak for other fields.

(1) If I have a postgraduate student, or even an honours student, who has produced a nice result, a novel result, but not a high-impact result, I must now deny them the right to publish that result and build their publication record. They will finish their studies with fewer papers, less experience in writing up their work, a poorer publication record, and less chance of employment. Writing for publication is a valuable part of a student’s training. By publishing a (possibly minor) paper extracted from their thesis before the thesis is submitted, a scholar gets feedback on their work and their writing ability from a wide audience, begins to build a profile, and can be more confident that the thesis will be passed because a component of it has already passed peer review.

(2) It would be easy for any such rules to be biased against staff publishing in certain areas. Who decides what is a ‘top’ journal? How is this harmonised across fields? Some fields are oddly replete with high-ranking journals and some have a dearth. This needs to be recognised.

(3) Science is a dialogue, a discussion. Many important results come from bringing together many small results. If staff are forced to publish only their highest-impact work, many results that might be useful to other workers in the field will never see the light of day and will never contribute to the debate. This holds back the field. To give a simple example, databases like the Inorganic Crystal Structure Database are populated by thousands of individually minor results. Most of these were not published in high-impact journals, yet data mining across that database and others has produced powerful results that are of great value. Great cathedrals can be made from many small bricks. This policy prevents those bricks from accumulating. It works against the fundamental (okay, and idealised) nature of science as a transparent, collaborative enterprise.

(4) Building collaborations will be inhibited. If I have a colleague at a university or facility (a synchrotron, say) who is not subject to the same rules, they will quite reasonably say that a piece of work may not be ‘high-impact’ but is worth publishing nonetheless, and I will have to either accept the impact on my own record or deny publication. That is hardly a great way to build a relationship.

(5) Metrics have a habit of being applied retrospectively. Evaluating my performance in 2015 or 2016 (or even further back) against criteria that were not in force, or even available, at the time is simply unethical. If organisations are going to use metrics, it is because they want to (a) select staff who are performing at a high level and (b) encourage staff to perform at what is considered to be a high level. Evaluating staff who have been trying to satisfy one regime against totally new criteria is unfair and unreasonable, yet it happens all the time. There need to be grandfather clauses.

I fully agree that we need to do high-impact science. I fully agree that staff need to be encouraged to publish in top journals. But actively precluding publication in lesser, but still sound, journals is short-sighted and dangerous, and an example of how the careless use of metrics is destructive. Perhaps metrics are a necessary evil, but I have yet to be convinced that they do more good than harm.


Pontification over.
