SNIP — what a destructive load of nonsense.

I feel compelled to make a few comments on recent changes to the way journal publications are to be evaluated in many research organisations and funding bodies.

There is a thing called the SNIP (Source Normalized Impact per Paper). It sounds very plausible, but sadly it is just more nonsense used to berate researchers. For example, it says that “the impact of a single citation is given higher value in subject areas where citations are less likely”, which seems to make sense, since it is harder to be highly cited in those areas. But maybe some areas have low citation counts because citations are simply not the traditional measure of success there.

More important for researchers is the question of granularity. Is it harder to be highly cited in biological crystallography or in solid-state crystallography? Or do you lump them all into a single heading called ‘crystallography’, even though solid-state crystallography borders on physics and protein crystallography on biology? Maybe you normalise a journal’s score according to the fields it says it publishes in, which opens the way for a journal to ‘tune’ its stated field to maximise its score. Suddenly we have more options for manipulating the terms of reference to get the result we want. The very fact that the normalisation is attempted adds a whole new layer where graft, misdirection and manipulation can happen. And does. For example…

Here are three journals that cover condensed matter physics. They have effectively the same mandate, and researchers think of them as part of the same cohort, even if they are distinctly not considered to be of equal quality.

  • Physical Review B: IF 3.7, SNIP 1.204, IF/SNIP ratio 3.1
  • J. Phys.: Condensed Matter: IF 2.2, SNIP 0.901, IF/SNIP ratio 2.4
  • Physica B: IF 1.4, SNIP 0.918, IF/SNIP ratio 1.5

So Physica B gets a SNIP higher than JPCM despite having a much lower impact factor. Why? Presumably because it is being normalised against a different subset of journals. But there is a more insidious reason: Physica B is published by the same publisher that hosts the SNIP data. No doubt they can completely justify the scores, but the bottom line remains that the SNIP is misleading and more open to manipulation than the impact factor. Physica B’s SNIP score suggests that a citation in Physica B is about twice as valuable as one in Physical Review B (because it takes about 3 PRB cites to earn a point of SNIP but only about 1.5 Physica B cites), which is a complete and utter lie. It should be the other way around, if anything.
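For anyone who wants to check the arithmetic, here is a minimal sketch that reproduces the IF/SNIP ratios quoted above. The figures are just the ones listed in this post, and the ‘citations per point of SNIP’ reading is simply the IF/SNIP ratio — nothing more sophisticated than that.

    # Back-of-the-envelope check of the IF/SNIP ratios quoted in this post.
    # The numbers are the ones listed above; treat them as illustrative only.
    journals = {
        "Physical Review B":          {"IF": 3.7, "SNIP": 1.204},
        "J. Phys.: Condensed Matter": {"IF": 2.2, "SNIP": 0.901},
        "Physica B":                  {"IF": 1.4, "SNIP": 0.918},
    }

    for name, m in journals.items():
        # Roughly: how many citations it takes to earn one 'point' of SNIP.
        ratio = m["IF"] / m["SNIP"]
        print(f"{name}: IF/SNIP = {ratio:.1f}")

    # Prints approximately 3.1, 2.4 and 1.5 — i.e. on this reading a Physica B
    # citation is 'worth' about twice a Physical Review B citation.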

It’s all rubbish, but it is dangerous rubbish, because I know that people’s careers are being evaluated against numbers like these. People will be hired and fired on that basis, though more likely the latter.

At least a bloody toss of a coin isn’t rigged.

End rant.
