I feel compelled to make a few comments on the recent changes to the way in which journal publications are to be evaluated in many research organisations and funding bodies.
There is a thing called the SNIP (Source Normalized Impact per Paper). It sounds very plausible, but sadly it is just more nonsense used to berate researchers. For example, we are told that “the impact of a single citation is given higher value in subject areas where citations are less likely” — which seems to make sense, since it is harder to get highly cited in those areas. But maybe some areas have low citation counts because citations are simply not the traditional measure of success there.
More important for researchers is the question of granularity. Is it harder to get highly cited in biological crystallography or in solid state? Or do you lump them all under a single heading called ‘crystallography’, even though solid state crystallography borders on physics and protein crystallography on biology? Maybe you normalise a journal’s score according to the fields it says it publishes in — opening the way for a journal to ‘tune’ its stated field to maximise its score. Suddenly we have more options for manipulating the terms of reference to get the result we want. The very fact that the normalisation is attempted adds a whole new layer where graft, misdirection and manipulation can happen. And does. For example…
Here are three journals that cover condensed matter physics. They have effectively the same mandate, and researchers think of them as part of the same cohort, even if they are distinctly not considered to be of equal quality.
- Physical Review B: IF: 3.7 SNIP: 1.204 Ratio IF/SNIP: 3.1
- J. Phys.: Condensed Matter: IF: 2.2 SNIP: 0.901 Ratio IF/SNIP: 2.4
- Physica B: IF: 1.4 SNIP: 0.918 Ratio IF/SNIP: 1.5
So, Physica B gets a SNIP higher than JPCM despite having a much lower impact factor. Why? Presumably because it is being normalised against a different subset of journals. But there is a more insidious reason… Physica B is published by the same publisher that hosts the SNIP data. No doubt they can completely justify the scores, but the bottom line remains that the SNIP is clearly misleading and all the more open to manipulation. Physica B’s SNIP score suggests that a citation in Physica B is about twice as valuable as one in Physical Review B (because it takes about 3 PRB cites to get a point of SNIP but only 1.5 Physica B cites), which is a complete and utter lie. It should be the other way around, if anything.
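For what it’s worth, the ratios in that list are just the impact factor divided by the SNIP; a quick check of the arithmetic, using the figures quoted above:

```shell
# IF divided by SNIP for the three journals listed above
awk 'BEGIN {
  printf "Physical Review B:          %.1f\n", 3.7 / 1.204
  printf "J. Phys.: Condensed Matter: %.1f\n", 2.2 / 0.901
  printf "Physica B:                  %.1f\n", 1.4 / 0.918
}'
```

So it takes roughly three PRB citations, but only about one and a half Physica B citations, per point of SNIP.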
It’s all rubbish, but it is dangerous rubbish, because I know that people’s careers are being evaluated by reference to numbers like these. People will get fired and hired, though more likely the former, on the strength of them.
At least a bloody toss of a coin isn’t rigged.
I wanted to get the exact same bit out of a lot (over 1000) of PostScript files. I’m sure there are better ways to do this — called ‘pscrop’, probably — but for some reason I did it as outlined below. First I converted all my ps files to eps:
$ cat auto/pstoeps.sh
for f in *.ps; do
  ls $f
  ps2epsi $f
done
then I created a little ‘new header’ with the bounding box I wanted (got bounding box coords from opening the file in gv):
$ cat auto/header.txt
%!PS-Adobe-2.0 EPSF-1.2
%%BoundingBox: 370 263 420 319
%%HiResBoundingBox: 370.0 263.0 420.0 319.0
Then I used sed to remove the old bounding box and header from all the many eps files, and replaced them with the new header:
$ cat auto/extract.sh
for f in *.epsi; do
  ls $f
  sed -i '/Bounding/d' $f
  sed -i '/EPSF/d' $f
  cat header.txt $f > extract_$f
done
(Note that this screws up the original eps(i) files, since ‘-i’ overwrites. Also, it deletes all lines with ‘Bounding’ or ‘EPSF’ in them; I checked that the ones I wanted to remove were the only ones in the file that had that text, but another file might coincidentally have that text in an important line, so care is required!)
so the top of the epsi file went from this:
%!PS-Adobe-2.0 EPSF-1.2
%%Title: 0.02_p02000.ps
%%Creator: Ghostscript ps2epsi from 0.02_p02000.ps
%%CreationDate: Oct 26 11:42
%%For:dgoossensdgoossens dgoossens
%%Pages: 1
%%DocumentFonts: Courier
%%BoundingBox: 5 6 566 565
%%HiResBoundingBox: 5.460047 6.860039 565.739983 564.339991
%%EndComments
%%BeginProlog
to this:
%!PS-Adobe-2.0 EPSF-1.2
%%BoundingBox: 370 263 420 319
%%HiResBoundingBox: 370.0 263.0 420.0 319.0
%%Title: 0.02_p02000.ps
%%Creator: Ghostscript ps2epsi from 0.02_p02000.ps
%%CreationDate: Oct 26 11:42
%%For:dgoossensdgoossens dgoossens
%%Pages: 1
%%DocumentFonts: Courier
%%BeginProlog
and now, rather than getting the whole page, when I view it in gv I get just the little window I wanted…
So yes, I’m sure there are better ways to do this, but this works pretty well. I can use various tools to then convert the eps files to some other format. All good. The handy thing about this method is that you can fiddle with other bits of the file, like the metadata, duplexing instructions, font commands, and so on. A small change to the sed commands and, rather than overwriting, it is possible to create a new eps file, such that multiple bits could be cut from one file. And of course the whole point is that this is scriptable, so I can do oodles of files. FWIW.
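That non-destructive variant might look something like the following (a sketch only, with a made-up demo file, keeping the ‘extract_’ naming from the script above; piping sed to a temporary file instead of using ‘-i’ leaves the originals untouched):

```shell
# Demo setup: a tiny fake .epsi file and the replacement header
# (file names and contents here are illustrative).
printf '%%!PS-Adobe-2.0 EPSF-1.2\n%%%%BoundingBox: 5 6 566 565\n<drawing commands>\n' > demo.epsi
printf '%%!PS-Adobe-2.0 EPSF-1.2\n%%%%BoundingBox: 370 263 420 319\n' > header.txt

# Non-destructive version of extract.sh: no 'sed -i', so the
# original .epsi files keep their headers.
for f in *.epsi; do
  sed -e '/Bounding/d' -e '/EPSF/d' "$f" > body.tmp
  cat header.txt body.tmp > "extract_$f"
done
rm -f body.tmp
```

The same caveat applies as before: any line containing ‘Bounding’ or ‘EPSF’ gets deleted, so check the file first.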
Here’s a novelty; I’m talking about a kids’ book. Originally published in French as Les Contraires, Elephant Elements is that rare beast, a work aimed at kids which pleases adults but without being sly or condescending.
So many children’s books illustrate one word per page, to build vocabulary and teach those first few words. How could one word offer scope for wit, style and panache? This is where creative people can find new ground where there seems to be none. This book does it by, as the French title suggests, pairing opposites. But it does it in the context of what they would mean for an elephant. And it does the unexpected. ‘Big/Small’ is commonplace, but how about ‘Solid/Liquid’? How does that work for an elephant? Below is a personal favourite, which captures the impish wit of the book.
There’s a kind of comedian’s timing to the entries. A few commonplace pairings, like ‘Big/Small’, then something a little unexpected. Then a few more plain ones and then… Until you are turning every page with a little thrill of anticipation. Will it be conventional (though still illustrated in those charmingly simple drawings)? Or will it be completely out of left field?
Well, I don’t want to spoil it. But I can heartily recommend this book to anyone who likes one word per page.
As someone working in a technical field, I often feel like designers do not really appreciate the subtleties of notation and how to make it clear. In the title of this post, ‘I and l’ is upper case ‘eye’ and lower case ‘el’. Not that you can tell.
and here is the same formula set in some sans serif fonts, using Microsoft Word…
Now, this is not to criticise these fonts. They are just not designed for this job. It is the chooser of the font who is being a wee bit silly if these fonts are used in a mathematical document. An even trickier example is…
which I have produced in LaTeX, and the nu and vee are well-differentiated, but that is because the font was designed by someone (Knuth) with the express purpose of laying out mathematics.
If I were able to give advice to anyone out there designing a text with mathematics in it, it would be this: look at the two letter/symbol pairs I have shown here, and make sure they can be told apart. If not, the font choice is a poor one and needs to be changed. And what is fashionable at the moment is irrelevant beside the need for clarity and the fight against ambiguity and imprecision.
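That check can be made with a throwaway test document; a minimal sketch in LaTeX (the formulas are just familiar examples, not taken from any particular text):

```latex
\documentclass{article}
\begin{document}
% Upper-case eye, lower-case el and the digit one, side by side:
I l 1

% Greek nu against italic vee, in text and in formulas:
$\nu$ versus $v$, as in $E = h\nu$ and $v = \lambda f$.
\end{document}
```

In Computer Modern these all come out clearly distinct; swap in a sans serif text font and the difference often evaporates.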
Science and movies. A classic combination. I’m not talking science fiction, but actual science. As datasets get bigger and more and more imaging and modelling is done in three dimensions, being able to represent data graphically is more and more important. This is a problem for me because I am not really into computer graphics and the like — my scientific programming is all done in a text-based interface, with text files as inputs and outputs, and I do not have the time to spend learning GUI programming.
But, I do have some old PostScript routines that I use for printing stuff out when I have a simulation that is highly geometric in nature, for example something that involves atomic coordinates.
These routines are just old Fortran 77 subroutines, and I have bunged them up on the web for reference and because they are still kind of useful. [[I have also created a small program that uses them as a bit of a demo, but that is a story for another post, and not yet on the web.]]
Systematic naming of the input files is very important. Let’s say all my files have names of the form pXXXXX.ps, where XXXXX goes from 00001 to 99999. Now, the subroutine outputs them as PostScript files, so first I do some conversions. I use two steps. I’m sure you can do better. Here is my script, movieB.sh:
for f in *.ps; do
  ls $f
  ps2epsi $f
done
for f in *.epsi; do
  ls $f
  convert -alpha opaque $f $f.gif
  convert $f $f.pnm
done
gifsicle --delay=10 --loop *.gif > myanimatedgif.gif
ppmtompeg mpeg.inp
So what happens here is that I convert all the PostScript files to encapsulated PostScript, then I convert them to GIF and also to PNM. I use two loops and keep all the intermediate files, for no good reason. Then I run gifsicle and produce my animated GIF, and I run ppmtompeg, which takes its input from mpeg.inp. Let’s have a look at that magical incantation:
PATTERN IBBPBBPBBPBBPBB
OUTPUT mympeg.mpg
INPUT_DIR .
INPUT
p*.epsi.pnm [00001-02000]
END_INPUT
BASE_FILE_FORMAT PNM
INPUT_CONVERT *
GOP_SIZE 15
SLICES_PER_FRAME 1
PIXEL HALF
RANGE 3
PSEARCH_ALG LOGARITHMIC
BSEARCH_ALG SIMPLE
IQSCALE 3
PQSCALE 3
BQSCALE 3
REFERENCE_FRAME ORIGINAL
ASPECT_RATIO 1
Okay, so this is reading in PNM files. Note the wildcard format for the filenames, which is why I chose such a simple input filename format, with a fixed number of characters. It is usually a lot easier if files are named ‘p00001.ps’ to ‘p99999.ps’ than ‘p1.ps’ to ‘p99999.ps’.
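If the frames start out without the zero padding, they can be renamed in bulk first. Here is a sketch (the file names are made up, and the leading zeros are stripped before the printf because ‘%d’ would otherwise read ‘00007’ as octal):

```shell
# Demo setup: a few unpadded frame files (names illustrative).
touch p1.ps p42.ps p00007.ps

# Zero-pad the frame numbers so wildcard ranges sort and match
# cleanly: p1.ps -> p00001.ps, p42.ps -> p00042.ps
for f in p[0-9]*.ps; do
  num=${f#p}; num=${num%.ps}           # pull the digits out of the name
  num=$(echo "$num" | sed 's/^0*//')   # drop leading zeros
  padded=$(printf 'p%05d.ps' "${num:-0}")
  [ "$f" = "$padded" ] || mv "$f" "$padded"
done
```

Already-padded files come out unchanged, so the script is safe to rerun.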
Most of the fields I don’t change. They may not be optimal but they work. I am not embedding an example because the files are too big and I don’t want to waste your bandwidth and mine.
On Saturday 12 Nov in 2016 I attended the inaugural Goulburn Readers Writers Festival. It was small but very well done, and bodes well for the future. The two events I went to were the bookbinding workshop, run by Erika Mordek of the National Library of Australia, and a talk on his career so far by George Ivanoff. Both were excellent. George is an enthusiastic and engaging speaker, and extremely down-to-earth about the writing business. I’d recommend any aspiring author who wants to make some actual money out of it (a tough gig!) listen to him speak if the opportunity arises. He did not waffle on about literary theory or self expression. He talked about how to make a living by writing stories. Writing for the education market, taking the opportunities when they come and running with them as hard as you can, making connections and then delivering on time as promised every time, so you get a reputation as reliable. And writing and writing and writing so you keep getting better.
And I spent about four hours learning about bookbinding, sewing pages together. I have not concentrated so hard for so long in ages. The presenter, Erika, was enthusiastic, and relentless in her encouragement and in pushing us through the task. Key phrase — ‘trust your eye’. Erika runs courses at CIT, and works as a book conservator at the NLA — so she knows her stuff. We went from a pile of pages to a completed little bound notebook. We started, though, by sewing together our notes. Here is the cover of the notes booklet:
So lesson #1 was a sort of ‘booklet stitch’, which was remarkably simple once you were told what to do, and potentially quite useful in itself. Then we started the main project — sewing six signatures of three folded A4 sheets each into a hardback notebook. There were a lot of tricky little things, but mostly it takes patience and method — like making sure there are no blobs of glue on the work surface before you put the book down…. Anyway, the final result looks like this:
So you can see the cloth spine, the decorated paper that was used for the cover, and some of the inner bits showing at the bottom edge, where I did not make the blue bit quite big enough….
What you can’t see are the stitches that hold it together, the endpapers, the card that stiffens the covers, the card that stiffens the spine…. There’s a lot in there that is not immediately apparent to the eye.
It was a fascinating experience. I fully intend to try to make a few more little volumes, and sooner rather than later so I don’t forget too much.
Oh, and the sordid subject of money? George’s talk was free and the workshop was $10 — for four hours tutoring by an expert, and all tools and materials supplied and a little notebook to keep at the end.
Impossibly good value.
Just posting a sample of text from reinked ribbon a week or two after doing it. Looks OK to me.
So, it’s not very blue, but it is blue. It is pretty even, and has actually improved since I did it. And it has had plenty of time to dry out and it hasn’t, even the bit of ribbon near the paper that is exposed to the air. Maybe if I left the machine unused for weeks it would dry out, but so far it looks like the glycerine works.
There’ve been some comments in the light of the recent US election about reforming or abolishing the electoral college — after all, apparently, like Gore in 2000, Clinton got more votes, but they were in the wrong places and so Trump won. Votes in less populous states count more, and so on.
I would argue, based on what I’ve read and seen, that there are more important things. The US needs an electoral commission, independent of Federal and State governments, that makes sure polling booths are equally available to all groups across the country. Voting systems need to be uniform. There need to be as many booths in non-white communities as in white, for example, and they need to be open. It needs to be easy to vote by post, in advance and from overseas. Recall Florida 2000 — Federal votes need to be immune from state-based interests.
That is a far bigger factor than the EC. I mean, sure, reform that — but you need to fix the bigger problems first. It needs to be easier to vote, equally easy everywhere. Once everyone gets to vote under a more uniform system, then you can get the inputs into the EC system to be more representative. At the moment the EC is GIGO. Fix the garbage going in first.
Second, once you’ve made it easy for everyone to vote, you would (ideally, though this will never ever ever ever ever ever ever ever ever ever ever happen) introduce what I call ‘compulsory attendance’. Put simply, you can fail to vote if you want to, but you have to show up (or postal vote or whatever) and tick the Brewster’s Millions box (draw a funny face on the ballot paper, whatever) or the Feds come after you. Failing to vote through apathy, and showing up but choosing not to vote for anybody because they all suck, are two very different things, and send two very different messages. What this does — and this is very important — is remove the effect of voter turnout. Getting out the vote is discounted as a factor. That means there can be more focus on policy and genuine comparison of the parties. And you have to appeal to a wider range of voters, which tends to cut down on the more extreme ideas like Mexican walls. It is not undemocratic, because you do not have to vote; you just have to actually choose not to vote, rather than just be lazy.
But there’s no way you can have a law like that until everyone has an equal chance to vote, and right now that’s not the case.
As I write this, Trump looks like winning the presidency of the USA. His capturing so much of the vote says much about the mindset of many people, and not just in the USA.
A happy people would not vote for Trump. A hopeful people would not vote for Trump. Forward looking people would not vote to Brexit, and generous people would not vote for governments that persecute people fleeing persecution — as both sides of politics do in my own country of Australia.
Clearly, the world, even (especially? no) the bits of it that are supposed to be wealthy, is not a happy place. Not confident. ‘Progress’ has let us down. Globalisation has given us cheap TVs but lousy job prospects, or so the narrative goes. The climate is about to make our way of life a lot more difficult. Things do not look good, whether you are following your gut or thinking very carefully. It looks more and more like the baby boomers will be the peak of western affluence, with the seemingly endless climb finally cresting and falling away as we spend our time dealing with the world as we have made it and they have left it.
And so countries want to retreat, to blame somebody then keep them out.
Trump is not a cause, but a symptom. Despite all the interconnections in the world, the web being pre-eminent these days, we either have not come to understand each other any better, or if we have we don’t like what we see.
His win is not the cataclysm some would suggest. He is such a policy-free zone (except for thought bubbles) that what really matters is which GOP figures end up pulling his strings. When, after the election, he is desperately casting about for actual programs and policies, the GOP establishment can swoop in and present him with a ready-to-go program which he can preside over. The question is: what will that program be?
The real impact of a Trump presidency depends on who gets to put the agenda in front of him. We are in the hands of a Republican Party that has the House, the Senate and the Oval Office.
This follows on from part I.
So the next thing to do was to get a clean glass jar, drop in twenty or so drops of ink and one of glycerine, then mix. Well, here is what I did, step by step:
(1) I made up a small jig to hold two spools. The spools were not parallel enough, but it was okay for a first pass.
(2) Between the spools I put a stamp pad, and I loaded the stamp pad with some Artline stamp pad ink that had been mixed with glycerine, the latter simply bought from a supermarket, from the cake-making aisle. The ratio of ink to glycerine was about 10:1, possibly richer in ink than that.
(3) I used a glass jar held on its side to push the ribbon against the pad as I wound the ribbon from one spool to the other. I recharged the pad a couple of times on the way.
(4) When I was done I dropped a little excess ink on the tightly-wound spool and let it soak in, just because I had some left. I wanted to see if the glycerine stopped the ink from drying out too quickly, so it was better to err on the side of having extra ink, so that I could be sure that a simple lack of ink was not the problem.
(5) I note that the ribbon is quite old (metal spools) and rather frayed which leads to stray strands of nylon flopping around and giving unwanted spidery lines on the page. Can’t blame the reinking for that.
(6) I found that immediately after reinking it worked pretty well. An hour later it still worked just as well. Next day even the exposed bit of ribbon was still usable, so it seems to work! I seem to have put too much ink into the ribbon, and not as uniformly as I would have liked, so it is probably not much good for serious work, but for notes and whatnot it would be fine, and with a bit of trial and error I think I’ll be able to get ribbons that do almost as well as a bought one, and in some funky colours. I could even experiment with buying a lightweight half-inch nylon ribbon and inking it.
I am wondering if a different brand of ink — Horse brand comes to mind — would not need the glycerine added. I’d be interested to hear.