Archive | science RSS for this section

A Paper! Good God, a Paper: ‘Synchrotron X-ray diffuse scattering from a stable polymorphic material: terephthalic acid, C8H6O4’

I’ve been doing science for a long time, and while I’m in a bit of a career transition at the moment (see here for example), I’ve still got a few fingers in a few pies, and a few pieces of work slowly wending their ways through the system. Most recently, Eric Chan and I put out ‘Synchrotron X-ray diffuse scattering from a stable polymorphic material: terephthalic acid, C8H6O4’. It’s a paper about the fuzzy, diffuse scattering from two polymorphs of the title compound.

It’s out in Acta Crystallographica Section B: Structural Science, Crystal Engineering and Materials, a highly reputable but not open access journal, although they do allow authors to self-archive. At the moment, what that means is that if you want a copy, send me a message and I’ll punt one back to you.

Terephthalic acid molecule, drawn in Mercury.

What is terephthalic acid (TPA)? Well, it is a chemical used a lot in industry (plastics and such), and at room temperature it can crystallise out of solution in two forms, called (wait for it) form I and form II. (Actually, the word ‘form’ is poorly defined in this context, technically, and it’s better to just say ‘polymorph I’ and ‘polymorph II’.) In this context, a molecule is polymorphic if it can form more than one crystal structure and these structures can co-exist. Many materials change structure as you heat them up or squash them, but in a polymorphic system separate crystals of the different structures can sit there side by side, under the same conditions. In most cases, those conditions are room temperature and one atmosphere of pressure.

The two room temperature polymorphs are both triclinic, so of low symmetry. The difference is in how the molecules are arranged relative to each other. In both cases the -COOH groups on the ends of the molecules connect strongly to those on neighbouring molecules, so long chains of molecules form. (In the picture here, the -COOH groups are the ones at the ends of the molecule, each consisting of two red (oxygen) atoms, one white (hydrogen) atom and the grey (carbon) atom attached to the two reds.) These chains are sort of like one-dimensional crystals, and they are then stacked up (like logs or a pile of pipes); but you can stack them up with, say, the -COOH in neighbouring chains close together, or you might have the phenyl rings (that is, the hexagon of grey carbon atoms) in one chain adjacent to the -COOH in the next. So in that sort of way you can get different crystal structures depending on how you stack things up.

Anyway, the paper looks at these polymorphs and how they are similar and how they differ. It uses my old ZMC program, which you can download from here (it comes with an example simulation, though not the one I’m talking about now). (That link goes to a paper I wrote and published in an Open Access journal, which I chose specifically so that you could go and download ZMC and everything for free…)

So in doing this I think about the connectivity of the molecule — how do the atoms depend on each other and where does the molecule need to be able to flex and twist? That means I end up drawing diagrams like this one:



That’s exciting, isn’t it? I start at the middle (X) and then each atom is positioned relative to the ones that went before. Here’s another picture (because I happen to have it handy). This shows how the atoms were numbered, and how, by numbering them correctly and building the molecule up in the right order, it is easy to let the -COOH groups spin around.
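That build-order idea is essentially a z-matrix: each new atom is placed using a bond length, a bond angle and a torsion angle relative to three atoms already placed, so spinning a -COOH group is just a change in one torsion. Here is a toy sketch in Python (my own illustration of the internal-coordinate idea, not ZMC’s actual code; the sign convention for the torsion is arbitrary):

```python
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def place_atom(a, b, c, r, theta, phi):
    """Place atom D a bond length r from C, with bond angle theta (B-C-D)
    and torsion phi about the B-C axis, measured relative to atom A."""
    bc = unit(tuple(ci - bi for bi, ci in zip(b, c)))
    ab = tuple(bi - ai for ai, bi in zip(a, b))
    n = unit(cross(ab, bc))   # normal to the A-B-C plane
    m = cross(n, bc)          # completes a local orthonormal frame
    d = tuple(r * (-math.cos(theta) * e1
                   + math.sin(theta) * math.cos(phi) * e2
                   + math.sin(theta) * math.sin(phi) * e3)
              for e1, e2, e3 in zip(bc, m, n))
    return tuple(ci + di for ci, di in zip(c, d))

# With theta = 90 degrees and phi = 0, the new atom lies in the A-B-C plane:
print(place_atom((0, 0, 0), (1, 0, 0), (1, 1, 0), 1.0, math.pi / 2, 0.0))
```

Building the whole molecule is then just repeated calls to `place_atom`, walking outwards from the starting atom X, which is why the numbering order matters.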

The X-ray diffuse scattering in the h0l layer of reciprocal space of TPA. Measured at the Advanced Photon Source.

Here I show typical data. You can see the little white spots — these are the sharp diffraction peaks, Bragg peaks, and they indicate where a lot of X-rays were reflected off the crystal. They are what is used to work out what is usually called the ‘crystal structure’, which consists of the unit cell (the repeating unit) from which the crystal is built. But you can also see blobs and streaks and stuff, and these are wider (‘diffuse’) features; they tell us about how the molecules interact and shuffle each other around, and stuff like that.
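If you like, you can see the distinction in a toy one-dimensional model (a throwaway sketch of my own, nothing to do with the actual analysis in the paper): a perfectly repeating chain gives only sharp Bragg peaks, while small random displacements steal intensity into a weak, broad diffuse background between them.

```python
import cmath
import math
import random

# Toy 1-D chain: unit scatterers at integer positions, each nudged by a
# small random displacement. Sharp Bragg peaks survive at integer q;
# the disorder shows up as weak diffuse intensity in between.
random.seed(1)
N = 200        # atoms in the chain
SIGMA = 0.05   # rms displacement, as a fraction of the repeat distance

positions = [n + random.gauss(0.0, SIGMA) for n in range(N)]

def intensity(q):
    """Scattered intensity per atom at wavevector q (in units of 2*pi/repeat)."""
    amp = sum(cmath.exp(2j * math.pi * q * x) for x in positions)
    return abs(amp) ** 2 / N

print(f"Bragg position (q=1.00): I = {intensity(1.00):7.1f}")
print(f"between peaks  (q=1.17): I = {intensity(1.17):7.1f}")
```

The Bragg intensity comes out orders of magnitude stronger than the diffuse signal, which is exactly why measuring the diffuse stuff well is hard and why synchrotrons help.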

Anyway, the paper is online now. One thing I really like about it is that it’s got a mathematical appendix. I always wanted to write an article with a mathematical appendix. I think I might post on that separately.



Is it better to go off-line when teaching?

Students, just like most of us (including me), are too distractible, especially younger ones lacking self-discipline, and by younger I mean first year university, not genuinely young. These days we put the content and the tutorial questions on the Learning Management System (LMS, really just a website), and we tell them to use the LMS to access the questions and the supporting materials and such. Once upon a time they’d just get a bunch of photocopies (‘photostats’), or before that roneos (mimeographs), or just “copy this down off the board.” I’m not pining for the past; I’m trying to work out how we can combine the best of then and now.

What happened then was we’d come to class having not looked at anything beforehand, we’d copy down a bunch of questions or question numbers off the blackboard (it wasn’t a whiteboard), like ‘Ch 8 Q 12-18’, then we’d have the book open in front of us and we’d whisper to each other while we were supposed to be working out the answers. Hmm.

What happens now is this:

They come to class having not looked at anything beforehand (just like in the old days), because they know they can access it when they get there (we knew we’d be given it when we got there, back in the day, so no difference there). But, and this is different now, they then spend ten minutes getting onto the university network and getting distracted by Facebook or whatever, and don’t download the questions until the tutorial is half over. Then they get out their notebook (or tablet and stylus) and read the question and… check their messages. Then they show the guy sitting next to them a cat video. Then they laugh and eat some Skittles (fine, fine, that is not the internet’s fault), then they look at Pinterest or for all I know Tinder, and then I ask them how they’re going and they mumble, and we’re over halfway through now and they have written down a few bits of data pertaining to the first question and that’s it.

Okay, maybe I’m overstating, but I have seen it happen that way. I’m not just fighting any innate apathy or disinterest (or depression or sense of futility) to get them to do the work, I am fighting the single most interesting thing the human race has ever constructed — a world wide distraction machine that has everything on it and available at the touch of a screen.

At best, even when they are doing some physics or mathematics, their attention is divided — they are always ready to pounce on an alert from whatever bit of social media they use, so their brain is never really thinking about the questions we give them to (we hope) help them learn.

Now, in the past, when you copied a question off the board, it went in your eyes, through your brain and out your fingers onto the paper. I’m not sure that’s much better than not engaging with it at all, but it can’t be worse. You could only really talk to the people on either side of you, just as students can now; all the distractions I had as a student still exist, plus smart phones, so at the very least students now have more distractions. Do they deal with them better than I used to? Valid question. Maybe these days they have extra information, extra connectivity, and the ability to use that without being consumed by it.

I’m not sure.

I started thinking about this post while I stood there watching students flick away from Snapchat (or whatever it was) and back to the LMS whenever they saw me coming. A few were able to use the ‘net to find useful information, or a website with some helpful content, and that’s good because a working scientist or problem solver (engineer, IT, whatever) does just that, calling on the info around them as well as what they know. But those students were a small minority.

I recall thinking how I would really, really like to give them all a paper copy of the questions or, better, ask them to bring their own copies (then at least they would have looked at it to the extent of downloading and printing it off and getting it from the printer with their own actual physical fingers before they got there — does that count as ‘engagement’?), and then use just their notebook, their bog-basic calculator and their textbook (they still exist, they do!) to tackle the problems.

I don’t say the web is useless. It is great for communication, for extra activities and resources. They can use the web to access the material easily and flexibly when they are not in my class. I use it to distribute videos to buttress the material, to direct them to external resources, though Britney Spears’ Guide to Semiconductor Physics is getting a little behind the zeitgeist now… The WWW ought to be great for collaboration, for ready access to what the students have not internalised. For simulations, for VR, for virtual laboratories, for Skype visits to major laboratories, for feedback, for interaction, for… the sky is the limit.

But not if you can’t sit still long enough to actually do it.

We’ve tried to engage the students, to make them want to be there. I mean, that should solve everything. And there are always a few who do want to be there, and that’s great; they learn almost regardless of what the teachers do. But some students are in the class because they have been told to be there, because the subject is a prerequisite for what they really want, because they thought they would like it and now it’s too late to drop out without recording a fail, whatever. By giving them the option to more easily be mentally elsewhere when they have not developed the self-discipline to choose to do what needs to be done, I’m not sure we’re helping. I wonder if more distraction-free classroom time would have its benefits as part of a broader suite of learning opportunities. Some of the environments would use all the tech at our disposal, and some would just have the student, their brain and the stuff to be tackled.

I just want the best of both worlds; is that too much to ask?


Old fart, I am.

Irrationality: The Enemy Within by Stuart Sutherland. Too true.

Penguin, 1994, 357 pages.

Well. This book is replete with summaries of studies that on the whole show that we are creatures of habit, instinct and fear more than thought and reason. We suffer from the illusion of control. We make emotional decisions and then convince ourselves they were carefully reasoned. We avoid data that might prove us wrong, even when being proved wrong is the best thing that could happen to us.

The cover of Irrationality by Stuart Sutherland.

I can’t say I was shocked. There’s a time and a place for aiming for the utmost in rationality, of course, and times when that’s not sensible, and it is useful to know the difference. If you’re being chased by a bear a quick but sub-optimal decision may be better than making the right one too late. And it’s useful to know when it doesn’t really matter and you can just please your inner reptile, and when you really do need to sit down and analyse things properly.

And in a sense that is the key point. He basically says that only by understanding statistics and by essentially falling back on some means of scoring the alternatives and then picking the one with the best score can we really make rational decisions. Otherwise we rely on impressions, feelings and hunches, none of which are actually reliable. In the end, only by breaking down the problem and applying some kind of rigorous-as-possible analysis, generally relying on mathematics, can a really rational decision be made. And what fraction of decisions are made like that? In my life, relatively few.
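The ‘score the alternatives, pick the best’ recipe is easy enough to mechanise; here is a minimal sketch (the criteria, weights and scores are entirely made up for illustration):

```python
# Weighted-scoring decision: rate each option against each criterion,
# weight the criteria, and pick the option with the highest total.
weights = {"cost": 0.5, "quality": 0.3, "speed": 0.2}

options = {
    "option A": {"cost": 7, "quality": 5, "speed": 9},
    "option B": {"cost": 4, "quality": 9, "speed": 6},
}

def score(ratings):
    return sum(weights[c] * ratings[c] for c in weights)

best = max(options, key=lambda name: score(options[name]))
print(best, round(score(options[best]), 2))  # option A 6.8
```

Of course, the irrational part just moves upstream: picking the weights is where the hunches sneak back in.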

Each chapter tackles various forms of irrationality, and each ends with a ‘moral’ which is really a bullet-point summary, the last one of which is usually humorous/facetious. (‘Eat what you fancy.’)

There is some repetition, but the points being made deserve hammering home. There are some lovely little ‘try this yourself’ puzzles, where even though I knew there was a trick and I desperately did not want to answer like an irrational creature, I still got it wrong. The simple two-card trick, for example, which I won’t describe in detail here since it would be too much like giving away the twist in the tale.

In summary, if you think you are good at making decisions, you might find this book useful. If you already believe that we’re basically animals in clothes, this will not disabuse you. It’s funny, opinionated, amusing and entertaining, but a little, I repeat, repetitive. Some of the case studies of how really really really important ‘decisions’ were made are a little worrisome, especially because (of course) human nature has not really changed in the meantime. I sometimes look around at a skyscraper, or read about a decision to go to war or spend billions of dollars on a useless aeroplane, and this book comes to mind. Will the building fall down? Is the war really worthwhile? Will the aeroplane get off the ground, and if it does will it stay up?

In some ways the book makes our achievements all the greater. Okay, the planet is in trouble. Okay, we don’t always elect great leaders or do the right thing by our neighbours, family, friends. Yet so much has been done. We’re not always rational, no, and neither should we be. Would more people be happier if the balance shifted towards more rationality? Probably. Yet on the whole we go forward, stumbling sometimes, by accident sometimes, yet we do live longer, we have sent people (okay, men) to the moon, vastly fewer children and mothers die in childbirth. It’s not all bad, this world.

Anyway, it’s a good book.


Book book book.

Why I Dislike Metrics

Metrics are used to measure a researcher’s output. How many publications? Patents? Students? Where are they publishing? Are they being cited? How many dollars in grants are they pulling in?

It’s tricky, because researchers at universities do need to be held accountable for the money invested in them — and the opportunity given to them that may have been given to another. Yet the outcomes of research can be diffuse, slow to materialise and hard to evaluate. A great conceptual breakthrough may have little impact initially. The investigator may have been fired by the time it is recognised. How does a non-expert administrator (who holds the purse strings) distinguish between a researcher who is ahead of the curve, and so not being cited because there are few others working on similar ideas, and one who is poorly cited because they are simply dull? Both are likely to have a tough time getting grant money, too.

Such an administrator falls back on metrics. Impact factors, grant dollars accrued, and so on. Complex formulas are developed. So much for a publication in one of these journals, less for one in those; citation rates are multiplied by this and divided by that; papers with a lot of authors [are|are not] (choose one) down-rated… and when government agencies that dole out grant money choose a particular metric, there’s really no choice.

Just looking at publications, once sheer numbers were the ‘in’ thing. Then it was citations. Then the H-index, the M-index, the insert-your-own-clever-metric-here index, who knows. Now there are scores that mean publications in lower-ranked journals will actually count against a researcher, such that when comparing two researchers, one with four papers in ‘top’ journals and one with four in top and three in middle, the latter would actually be penalised relative to the former.
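For concreteness, the H-index at least is easy to state: the largest h such that h of your papers have at least h citations each. A quick sketch (the citation counts are invented):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Two five-paper careers: citation counts per paper.
print(h_index([10, 8, 5, 4, 3]))    # 4
print(h_index([100, 90, 2, 1, 0]))  # 2
```

Which neatly illustrates one of the quirks: the second researcher has vastly more citations and the lower H-index.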

I cannot understand how this can be considered equitable, reasonable or sensible. I recognise that it is better to have high impact than low. I recognise that staff who consistently fail to have high impact need to improve that record. I have no problem with that. But the idea that a tail of papers in lower ranked journals is to be penalised is short-sighted, counter-productive and shows a lack of understanding about how science works. I will not speak for other fields.

(1) If I have a postgraduate student, or even an honours student, who has produced a nice result, a novel result, but not a high-impact result, I must now deny them the right to publish that result and build their publication record. They will finish their studies with fewer papers, less experience in writing up their work, a poor publication record, and less chance of employment. Writing for publication is a valuable part of a student’s training. By publishing a (possibly minor) paper extracted from their thesis before the thesis is submitted, a scholar gets feedback on their work and their writing ability from a wide audience, begins to build a profile, and can be more confident that the thesis will be passed because a component of it has already passed peer review.

(2) It would be easy for any such rules to be biased against staff publishing in certain areas. Who decides what is a ‘top’ journal? How is this harmonised across fields? Some fields are oddly replete with high-ranking journals and some have a dearth. This needs to be recognised.

(3) Science is a dialogue, a discussion. Many important results come from bringing together many small results. By forcing staff to only publish their highest-impact work, many results that might be useful to other workers in the field will never see the light of day, will never contribute to the debate. This holds back the field. To give a simple example, databases like the Inorganic Crystal Structure Database are populated by thousands of individually minor results. Most of these were not published in high-impact journals, yet data mining across that database and others has produced powerful results that are of great value. Great cathedrals can be made from many small bricks. This policy prevents those bricks from accumulating. It works against the fundamental (okay, and idealised) nature of science as a transparent, collaborative enterprise.

(4) The building of collaborations will be inhibited. If I have a colleague at a university or facility (like a synchrotron, say) who is not subject to the same rules, they will quite reasonably say that a piece of work may not be ‘high-impact’ but is worth publishing nonetheless, and I will have to either accept the impact on my own record or deny publication. That is hardly a great way to build a relationship.

(5) Metrics have a habit of being applied retrospectively. Evaluating my performance in 2015 or 2016 (or even further back) against criteria that were not in force or even available at the time is simply unethical. If organisations are going to use metrics, it is because they want to (a) select staff that are performing at a high level and (b) encourage staff to perform at what is considered to be a high level. Evaluating staff who have been trying to satisfy one regime against totally new criteria is unfair and unreasonable, yet happens all the time. There need to be grandfather clauses.

I fully agree that we need to do high-impact science. I fully agree that staff need to be encouraged to publish in top journals. But actively precluding publishing in lesser, but still sound, journals is short-sighted and dangerous, and an example of how the careless use of metrics is destructive. Perhaps metrics are a necessary evil, but I have yet to see whether they do more good than harm.


Pontification over.

SNIP — what a destructive load of nonsense.

I feel compelled to make a few comments on the recent changes to the way in which journal publications are to be evaluated in many research organisations and funding bodies.

There is a thing called a SNIP (Source Normalized Impact per Paper). It sounds very plausible, but sadly it is just more nonsense used to berate researchers. For example, it says, “The impact of a single citation is given higher value in subject areas where citations are less likely”, which seems to make sense, since it is harder to get highly cited in those areas. But maybe some areas have low citation rates because citations are simply not the traditional measure of success in those areas.

More important for researchers is the question of granularity. Is it harder to get highly cited in biological crystallography or in solid state? Or do you lump them all into a single heading called ‘crystallography’, even though solid state crystallography borders on physics and protein crystallography on biology? Maybe you normalise a journal’s score according to the fields it says it publishes in, opening the way for a journal to ‘tune’ its stated field to maximise its score. Suddenly, we have more options for manipulating the terms of reference to get the result we want. The very fact that the normalisation is attempted adds a whole new layer where graft, misdirection and manipulation can happen. And it does. For example…

Here are three journals that cover condensed matter physics. They have the same mandate, effectively, and researchers think of them as part of the same cohort, even if they are distinctly not considered to be of equal quality.

  • Physical Review B: IF: 3.7 SNIP: 1.204  Ratio IF/SNIP: 3.1
  • J. Phys.: Condensed Matter: IF: 2.2 SNIP: 0.901 Ratio IF/SNIP: 2.4
  • Physica B: IF: 1.4 SNIP: 0.918 Ratio IF/SNIP: 1.5

So, Physica B gets a SNIP higher than JPCM despite having a much lower impact factor. Why? Presumably because it is being normalised against a different subset of journals. But there is a more insidious reason… Physica B is published by the same publisher that hosts the SNIP data. No doubt they can completely justify the scores, but the bottom line remains that the SNIP is clearly misleading and more open to manipulation. Physica B‘s SNIP score suggests that a citation in Physica B is about twice as valuable as one in Physical Review B (because it takes about 3 PRB cites to get a point of SNIP but only 1.5 Physica B cites), which is a complete and utter lie. It should be the other way around, if anything.
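For what it’s worth, the ratios in the list above are just the impact factor divided by the SNIP, which you can read as roughly ‘how many citations in this journal buy one point of SNIP’:

```python
# IF and SNIP values as quoted in the list above; the ratio IF/SNIP is
# roughly how many citations it takes to earn one point of SNIP.
journals = {
    "Physical Review B":          {"IF": 3.7, "SNIP": 1.204},
    "J. Phys.: Condensed Matter": {"IF": 2.2, "SNIP": 0.901},
    "Physica B":                  {"IF": 1.4, "SNIP": 0.918},
}

for name, scores in journals.items():
    ratio = scores["IF"] / scores["SNIP"]
    print(f"{name:28s} IF/SNIP = {ratio:.1f}")
```

The smaller the ratio, the ‘cheaper’ a point of SNIP, which is the inversion I’m complaining about.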

It’s all rubbish, but it is dangerous rubbish because I know that people’s careers are being evaluated by reference to numbers like these. People will get fired and hired, though more likely the former, based on numbers like these.

At least a bloody toss of a coin isn’t rigged.

End rant.

Random Thoughts on Open Access Publishing.

There are a lot of problems with Open Access (OA) academic publishing. The biggest one is simple: if authors are paying to get their work out there, there is a financial incentive to publish everything that can be paid for. This has resulted in a vast explosion of completely crap online journals springing up, which effectively take money, post a pdf on a website and do little else. There are decent OA journals, but they virtually all come from established publishers. I have even used them myself. A new nadir was reached recently when it was pointed out that some journals are even charging people to be on their editorial staff, presumably because such things are seen as valuable on a CV or something. It is hideous to behold. Browse an old library and look at the standard of papers in pre-internet era journals: it is on average much, much higher than now. I don’t think the good journals (say, those of the American Physical Society, IOP, IUCr, etc.) have deteriorated, but the scientific literature is so diluted now.

The internet has enabled rapid search, but it has also made searching essential. New authors (and perhaps older ones too) must research the places they publish. I repeat: the best place to publish is the place where you find the most useful papers.

BUT… I agree it is undesirable that publicly funded science is published in subscriber-only journals. How, though, do we avoid the current problem that ‘open access’ has become a synonym for rubbish?

The DOAJ website is something of a clearing house. They have a list of journals and a list of ones they have delisted. They link to places like Beall’s list, which can also help out. Having said that, DOAJ is funded by memberships, and these include publishers, which is definitely a conflict of interest. It may be a necessary evil in getting the organisation running, but it is not a good look. A few quick, non-exhaustive spot-checks suggest that the publishers on the DOAJ website are mostly not listed as dodgy at Beall’s list. So that’s a good thing.

DOAJ is meant to be a kind of ‘white list’ for open access. That’s a good idea. Ideally, though, it would be beneficial if labs and universities took more interest in the white list. They (largely, though governments matter too) control the metrics by which researchers are measured, and they produce the research and use the results.

I can imagine a parallel world where the OA journals are run by consortia of labs and universities. They could do it with minimal duplication of effort, host a network of mirrored servers, not charge a fee because they would be paying themselves anyway, base publication purely on merit, and probably save a lot of money that would otherwise be funnelled into the pockets of crappy OA journals.

Clearly this is impossible.

It would potentially send the current good publishers to the wall, it would be prey to things getting published because the people in the research labs have closer links to the publishers (though governance could probably deal with that, and even now publications have to have editors and boards and referees who may know the authors, so it’s not that different — there could be rules about submitting your paper to a non-local editor with non-local reviewers, which would be easier if the whole thing was done through a wide, multinational network such as that proposed). And it is against the modern trend of outsourcing everything (though the labs could get together and outsource the whole exercise in order to satisfy that modern fantasy).

What can I say? I have my doubts but I am not convinced it is unworkable. How something like would fold into it, I’m not sure. Anyway, just some thinking aloud.


If thinking’s allowed.



Wagga Wagga Wagga Wagga Wagga Wagga Wagga Wagga

So, all you sciency people out there who look at materials, whether you’re a solid state chemist or a condensed matter physicist or a materials engineer or whether you work with organic materials or metals or ceramics or… well, let’s face it, everything you can touch, sit on or turn into a useful gadget is made of stuff and stuff means materials and materials means the annual Wagga Condensed Matter and Materials Meeting…

Go to this magisterial website:

Or go direct to this one:

And take a look. It’s cheap! Just a few hundred dollars for accommodation, meals and the meeting. It’s cheap, but it’s not nasty, though it might be bloody hot. Here are the important dates:

Key Dates

  • Abstract submission opens: Monday 12th September 2016
  • Abstract submission closes: Friday 11th November 2016
  • Notification of acceptance: Friday 23rd December 2016
  • Conference begins: Tuesday 31st January 2017


Here’s the conference flyer as a png file, stolen directly from the conference website:

Wagga Wagga Condensed Matter and Materials Meeting announcement.



AANSS 2016 — it’s approximately that time of year again, again.

Get that neutron feeling.

The AANSS is a great mix of formality and informality, quality science in a relaxed atmosphere. Anyone who has or might or ought to use neutron scattering in their work (and isn’t that all of us, really?) is invited. And here’s a trick: Registration is $50 cheaper for ANBUG members but ANBUG membership is free! So join up!




Science inaction. No, wait, “Science in ACTion.”

So on August 12 and 13 we (myself and numerous colleagues from the UNSW Canberra campus) took part in Science in ACTion, advertising the wonders of Science to the good people of the ACT (Canberra) and a few surrounding towns. It was held at the Old Bus Depot Markets, and we presented a liquid nitrogen show (mostly just freezing balloons…) and some other stuff: a Van de Graaff generator (very effective — I got a spark off a nearby table frame…), some UV fluorescence, mathematical puzzles and mazes, and some cheap chromatography using filter paper and felt-tipped pens:

Unmixing colours using filter paper.

It was all part of Science Week 2016, and I don’t have the photos back from the chemist yet, so I can’t show you anything else. But if you look in this image, you can see our purple and yellow stand in the background on the left, and some coloured balloons.

So there.



Diffuse in HgBa2CuO4+δ

It has long been an intention of mine to take our techniques for exploring the way the atoms are arranged in complicated materials and apply them to superconductors. The crystal structures of the oxide (high-temperature) superconductors are similar to those found in ferroelectric materials, which we have looked at in some detail. The difference is that in ferroelectrics the positions of the atoms relate directly to the interesting properties, since the ferroelectricity arises from atomic displacements (that is, from atoms moving around), whereas in superconductors the useful property shows up in how the electrons behave, and while this must be enabled by the crystal structure, the link is less direct. Even so, it seems to me that if we want to have a good idea of how the properties arise from the structure, then we need to know what the structure is.

One of the high-temperature superconductors is HgBa2CuO4+δ, a classic ‘copper oxide layer’ superconductor, descended from the original high-TC materials discovered in the late 1980s. We found some data on it in the literature, and decided that while the modelling there was a useful place to start, the model that was developed did not really do a great job of mimicking the observed scattering. Hence, we decided to re-analyse their data.

The paper came out recently in IUCrJ, which is open access, which means you can download it now, without a subscription… so here it is (or click on the image below).


In summary, we find that when the extra oxygen atoms are added to the structure (that’s the ‘+δ’ in the chemical formula), they go into the structure as long strings of atoms, as correctly identified by the authors of the paper with the original data, which is behind a paywall. What we have done that is new is improve the agreement between model and data by adjusting the positions of the surrounding atoms; it makes sense that when you stuff new atoms into a structure, the ones already there have to adjust to accommodate them. Based on things like bond valence sums, we can get some idea of what these adjustments should be, and then create a model crystal in which the atoms are pushed around in sensible ways in response to the added oxygens. These new atomic positions will then influence the environments of other atoms, and of electrons moving through the structure. Here is an image to break up the text:


An image to break up the text. On the left we see a row of added (‘interstitial’) oxygen atoms [‘O(3)’], moving between rows of mercury (Hg) atoms, and dragging the barium (Ba) atoms along with them. On the right we see a diffuse scattering pattern calculated from our model; X, Y and Z indicate important features on the plots, discussed in the paper.
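The bond valence sum idea mentioned above is simple enough to sketch: each bond of length R contributes exp((R0 − R)/b) to an atom’s valence, with b ≈ 0.37 Å and R0 a tabulated parameter for the atom pair, and the total should land near the atom’s formal oxidation state. A toy example in Python (the 1.94 Å bond lengths are illustrative only, not taken from our model):

```python
import math

# Bond valence sum (Brown-Altermatt form): each bond of length R
# contributes s = exp((R0 - R) / b). R0 = 1.679 A is the tabulated
# Cu(II)-O parameter; b = 0.37 A is the usual universal constant.
def bond_valence_sum(bond_lengths, r0, b=0.37):
    return sum(math.exp((r0 - r) / b) for r in bond_lengths)

# Square-planar copper with four ~1.94 A Cu-O bonds (illustrative lengths):
v = bond_valence_sum([1.94] * 4, r0=1.679)
print(f"BVS = {v:.2f}  (formal Cu oxidation state is +2)")
```

If stuffing an interstitial oxygen into the structure squeezes those bonds, the sum drifts away from the formal valence, which is the sort of signal telling you the neighbouring atoms must relax.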

Since the paper is open access, I won’t go into massive detail here, but when it comes to modelling the streaks of scattering in the pattern the results are pretty solid. There are some other, subtle details we continue to work on, but so far I think we can conclude that the methods of Monte Carlo analysis of single crystal diffuse scattering promise to deepen our understanding of superconductors and maybe — maybe! — will help us design ones that work at ever-higher temperatures.

More of the similar.