This paper and its deposited material explore clustering of 2 × 1 dimers (dominoes) subject to simple interactions and temperature. Much of the work on domino tilings has been statistical, combinatorial and thermodynamic in nature. Instead, here, the domino is used as a simple model of a non-spherical molecule to explore aggregation, rather as if the molecules were interacting in solution. As a result, the work does not look at how many ways there are to tile a plane, but at how the cluster evolves with different parameters in the potential that governs the clustering. These parameters include the rules used to select which of the many possible dominoes will be added to the cluster, and temperature. It is shown that qualitative changes in clustering behaviour occur with temperature, including effects on the shape of the cluster, on vacancies and on the domain structure.
The paper is on the web, open access, at http://dx.doi.org/10.3390/condmat2020015 and http://www.mdpi.com/2410-3896/2/2/15. It comes with a bundle of software anyone can use to play with the model, modify it, whatever. Please do!
It’s basically a toy model, but it shows some nice behaviour. Apologies to the red/green colour-blind.
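To give a flavour of the kind of temperature-dependent rule involved, here is a minimal sketch of a Metropolis-style acceptance test. This is my own illustration of the general idea, not the actual algorithm or energy function from the paper or its software bundle:

```python
import math
import random

def accept_addition(delta_e, temperature, rng=random.random):
    """Metropolis-style rule: always accept an addition that lowers the
    energy; accept an energy-raising one with Boltzmann probability.
    Higher temperature means more 'bad' additions get through, which is
    the sort of thing that changes cluster shape and defect content."""
    if delta_e <= 0:
        return True
    if temperature <= 0:
        return False
    return rng() < math.exp(-delta_e / temperature)

# At low temperature an unfavourable addition is almost never accepted;
# at high temperature it very often is.
random.seed(1)
low = sum(accept_addition(1.0, 0.1) for _ in range(10000))
high = sum(accept_addition(1.0, 10.0) for _ in range(10000))
print(low < high)  # True
```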
In a very recent post, I mentioned an appendix to an article I wrote. I rather like it. The appendix grew out of a little document I put together. That document is longer, vaguer and a little different from the published appendix, and so I am putting it here. Now, the article was written in LaTeX, and this is a website, so I tried running htlatex on the file. It was very complicated:
$ htlatex planes
$ firefox planes.html
And it worked. Next thing is to get it into WordPress… Easy enough to cut and paste the HTML code into the window here, but what about all the graphics that were turned into png files? Ah well… a bit of manual fiddling. Equations and symbols seem to sit high, and some of the inline equations have been broken into a mix of graphics and characters… still, not too bad. The PDF version is available here.
Planes perpendicular to vectors
Say you have a vector in real space, expressed, say, in direct lattice terms as r = ua + vb + wc, where a, b and c are the direct space basis vectors and [u v w] gives the direction.
You may want the reciprocal plane(s) perpendicular to this vector.
Why? Because correlations in a crystal collapse the scattering into features perpendicular to the direction of the correlation. In a normal, fully ordered, three-dimensional (3D) crystal, this collapsing happens in all three directions, so the scattered intensity coming off the atoms gets concentrated at points, the reciprocal lattice points, usually denoted hkl.
If you have only two-dimensional ordering, the scattering is collapsed down in two directions but not the third, giving rise to rods or lines of scattering in reciprocal space (that is, in diffraction space). If there are only one-dimensional correlations, the scattering collapses into sheets; that is, it is delocalised in two dimensions and only localised in one (because there are only correlations in one dimension).
In diffuse scattering studies, the crystal is typically long-range ordered in three dimensions, and the diffraction pattern shows nice Bragg peaks (hkl reflections). However, there can also be disorder, for example in the motions of the molecules or the chemical substitution of one species of atom or molecule for another.
In a molecular crystal, one can sometimes identify a chain of molecules running through the crystal, and interactions within these chains are likely to be much stronger than those between them. That tends to mean that the motions of the molecules along the direction of the chain (call that ‘longitudinal’ motion) are highly correlated, while they are not well correlated laterally.
In such a situation, the single crystal diffuse scattering will show ‘sheets’ of scattering perpendicular to the length of the chain.
Then we can say that a reciprocal space vector q = qa⋆a⋆ + qb⋆b⋆ + qc⋆c⋆ lies in the plane(s) we want when

q · r = 0,    (1)

where r = ua + vb + wc is our direct space vector, and these reciprocal vectors are defined in terms of the direct space vectors like this:

a⋆ = 2π(b × c) / (a · (b × c)),

and similarly for the other reciprocal vectors. The important thing for us to note is that this means a⋆ is perpendicular to b and c. This is important when we go to take dot products later on. The bottom line here is basically the volume of the unit cell, and 2π is just a scalar, so from the point of view of defining the plane that we want, these are not important.
and since we have more variables than we need if we are to satisfy eq. 1, we can arbitrarily set qc⋆ = 0.
and this is useful because, to take a term like b · (b × c) as an example, b is perpendicular to (b × c) by the very nature of the cross product, so the dot product vanishes. This means that any term with a repeated vector goes to zero. Further, in the remaining terms the vector part is just of the form a · (b × c), which is the unit cell volume and a constant, which we can also factor out to be left with

qa⋆u + qb⋆v = 0,
which is nice and simple. This is not a surprise but still…
The next step is to find another vector in that plane. We can use the same logic but, to make the new vector non-collinear with the first, this time choose qb⋆ rather than qc⋆ to be zero; we get an equation analogous to eq. 6, namely qa⋆u + qc⋆w = 0. These can be summed up by saying that the plane perpendicular to [u v w] is spanned by the reciprocal space vectors (v, −u, 0) and (w, 0, −u).
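The algebra can be checked numerically. Here is a sketch in Python with NumPy; the triclinic cell parameters below are made up purely for illustration (they are not TPA's actual cell):

```python
import numpy as np

# Direct lattice vectors for a made-up triclinic cell, in Cartesian terms.
a = np.array([7.7, 0.0, 0.0])
b = np.array([1.2, 6.4, 0.0])
c = np.array([0.9, 1.1, 5.8])

V = np.dot(a, np.cross(b, c))           # unit cell volume
astar = 2 * np.pi * np.cross(b, c) / V  # reciprocal basis vectors
bstar = 2 * np.pi * np.cross(c, a) / V
cstar = 2 * np.pi * np.cross(a, b) / V

# a* is perpendicular to b and c (and similarly for the others), which
# is what kills the 'repeated vector' terms in the dot product.
assert abs(np.dot(astar, b)) < 1e-9 and abs(np.dot(astar, c)) < 1e-9

# Direct-space chain direction [u v w], e.g. [-1 1 1] as for TPA:
u, v, w = -1, 1, 1
r = u * a + v * b + w * c

# Two non-collinear reciprocal vectors q with q . r = 0:
q1 = v * astar - u * bstar   # (v, -u, 0) in reciprocal coordinates
q2 = w * astar - u * cstar   # (w, 0, -u) in reciprocal coordinates
print(np.dot(q1, r), np.dot(q2, r))  # both ~0
```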
Now, in form II of terephthalic acid (TPA), a triclinic polymorph, each molecule has a -COOH group at each end. These H-bond strongly with the groups on neighbouring molecules, and you get strongly correlated chains of molecules running along the [-111] (direct space) direction. This then suggests that the planes of scattering perpendicular to these chains will extend along the reciprocal space directions (1, 1, 0) and (1, 0, 1), both of which have zero dot product with [-111].
Now, does this work? Figure 1 shows some diffuse scattering data from TPA, measured on a synchrotron. It also shows the reciprocal axes, and the white, double-headed arrows show the directions of the diffuse planes; by counting Bragg spots it can be seen that these agree with the calculation above.
This means that we can ascribe these features to correlations in the displacements of the TPA molecules linked by the -COOH groups.
A Paper! Good God, a Paper: ‘Synchrotron X-ray diffuse scattering from a stable polymorphic material: terephthalic acid, C8H6O4’
I’ve been doing science for a long time, and while I’m in a bit of a career transition at the moment (see here for example), I’ve still got a few fingers in a few pies, and a few pieces of work slowly wending their ways through the system. Most recently, Eric Chan and I put out ‘Synchrotron X-ray diffuse scattering from a stable polymorphic material: terephthalic acid, C8H6O4‘. It’s a paper about the fuzzy, diffuse scattering from two polymorphs of the title compound.
It’s out in Acta Crystallographica Section B: Structural Science, Crystal Engineering and Materials, a highly reputable but not open access journal, although they do allow authors to self-archive. At the moment, what that means is that if you want a copy, send me a message and I’ll punt one back to you.
What is terephthalic acid (TPA)? Well, it is a chemical used a lot in industry (plastics and such), and at room temperature it can crystallise out of solution in two forms, called (wait for it) form I and form II. (Well, actually the word ‘form’ is poorly defined in this context, technically, and it’s better to just say ‘polymorph I’ and ‘polymorph II’.) In this context, a molecule is polymorphic if it can form more than one crystal structure and these structures can co-exist. Many materials change structure as you heat them up or squash them, but in a polymorphic system separate crystals of the different structures can sit there side by side, under the same conditions. In most cases, those conditions are room temperature and one atmosphere of pressure.
The two room temperature polymorphs are both triclinic, so of low symmetry. The difference is in how the molecules are arranged relative to each other. In both cases the -COOH groups on the ends of the molecules connect strongly to those on neighbouring molecules, so long chains of molecules form. (In the picture here, the -COOH groups are those at the ends of the molecule consisting of two red (oxygen) atoms, one white (hydrogen) and the grey (carbon) atom attached to the two whites.) These chains are sort of like one dimensional crystals, and then they are stacked up (like logs or a pile of pipes), but you can stack them up with, say, the -COOH in neighbouring chains close together, or you might have the phenyl rings (that is, the hexagon of grey carbon atoms) in one chain adjacent to the -COOH in the next. So in that sort of way you can get different crystal structures depending on how you stack things up.
Anyway, the paper looks at these polymorphs and how they are similar and how they differ. It uses my old ZMC program, which you can download from here (it comes with an example simulation, though not this one I’m talking about now). (That link goes to a paper I wrote and published for an Open Access journal, which I chose specifically so that you could go and download ZMC and everything for free…)
So in doing this I think about the connectivity of the molecule — how do the atoms depend on each other and where does the molecule need to be able to flex and twist? That means I end up drawing diagrams like this one:
That’s exciting, isn’t it? I start at the middle (X) and then each atom is positioned relative to the ones that went before. Here’s another picture (because I happen to have it handy)…. This shows how the atoms were numbered, and how by numbering them correctly and building the molecule up in the right order it is easy to let the -COOH groups spin around.
Here I show typical data. You can see the little white spots — these are the sharp diffraction peaks, Bragg peaks, and they indicate where a lot of X-rays were reflected off the crystal. They are what is used to work out what is usually called the ‘crystal structure’ which consists of the unit cell (the repeating unit) that the crystal is made up from. But you can also see blobs and streaks and stuff, and these are wider (‘diffuse’) features, and these tell us about how the molecules interact and shuffle each other around, and stuff like that.
Anyway, the paper is online now. The DOI link is https://doi.org/10.1107/S2052520616018801. One thing I really like about it is it’s got a mathematical appendix. I always wanted to write an article with a mathematical appendix. I think I might post on that separately.
I feel compelled to make a few comments on the recent changes to the way in which journal publications are to be evaluated in many research organisations and funding bodies.
There is a thing called a SNIP (Source Normalized Impact per Paper). It sounds very plausible, but sadly it is just more nonsense used to berate researchers. For example, it says, “The impact of a single citation is given higher value in subject areas where citations are less likely” — which seems to make sense, since it is harder to get highly cited in those areas. But maybe some areas have low citation rates because citations are not their traditional measure of success.
More important for researchers is the question of granularity. Is it harder to get highly cited in biological crystallography or in solid state crystallography? Or do you lump them all into a single heading called ‘crystallography’, even though solid state crystallography borders on physics and protein crystallography on biology? Maybe you normalise a journal’s score according to the fields it says it publishes in — opening the way for a journal to ‘tune’ its stated field to maximise its score. Suddenly, we have more options for manipulating the terms of reference to get the result we want. The very fact that the normalisation is attempted adds a whole new layer where graft, misdirection and manipulation can happen. And does. For example…
Here are three journals that cover condensed matter physics. They have the same mandate, effectively, and researchers think of them as part of the same cohort, even if they are distinctly not considered as of equal quality.
- Physical Review B: IF: 3.7 SNIP: 1.204 Ratio IF/SNIP: 3.1
- J. Phys.: Condensed Matter: IF: 2.2 SNIP: 0.901 Ratio IF/SNIP: 2.4
- Physica B: IF: 1.4 SNIP: 0.918 Ratio IF/SNIP: 1.5
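The ratios in that list are just the impact factor divided by the SNIP; a quick recomputation confirms the numbers quoted:

```python
# IF and SNIP values as quoted in the list above.
journals = {
    "Physical Review B": (3.7, 1.204),
    "J. Phys.: Condensed Matter": (2.2, 0.901),
    "Physica B": (1.4, 0.918),
}

# Ratio IF/SNIP: roughly how many cites it takes to buy a point of SNIP.
for name, (impact_factor, snip) in journals.items():
    print(f"{name}: IF/SNIP = {impact_factor / snip:.1f}")
```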
So, Physica B gets a SNIP higher than JPCM despite having a much lower impact factor. Why? Presumably because it is being normalised against a different subset of journals. But there is a more insidious reason… Physica B is published by the same publisher that hosts the SNIP data. No doubt they can completely justify the scores, but the bottom line remains that the SNIP is clearly misleading and more open to manipulation. Physica B‘s SNIP score suggests that a citation in Physica B is about twice as valuable as one in Physical Review B (because it takes about 3 PRB cites to get a point of SNIP but only about 1.5 Physica B cites), which is a complete and utter lie. It should be the other way around, if anything.
It’s all rubbish, but it is dangerous rubbish because I know that people’s careers are being evaluated by reference to numbers like these. People will get fired and hired, though more likely the former, based on numbers like these.
At least a bloody toss of a coin isn’t rigged.
The AANSS is a great mix of formality and informality, quality science in a relaxed atmosphere. Anyone who has or might or ought to use neutron scattering in their work (and isn’t that all of us, really?) is invited. And here’s a trick: Registration is $50 cheaper for ANBUG members but ANBUG membership is free! So join up!
It has long been an intention of mine to take our techniques for exploring the way the atoms are arranged in complicated materials and apply them to superconductors. The crystal structures of the oxide (high-temperature) superconductors are similar to those found in ferroelectric materials, which we have looked at in some detail. The difference is that in ferroelectrics the positions of the atoms relate directly to the interesting properties, since the ferroelectricity arises from atomic displacements (that is, from atoms moving around), whereas in superconductors the useful property shows up in how the electrons behave, and while this must be enabled by the crystal structure, the link is less direct. Even so, it seems to me that if we want to have a good idea of how the properties arise from the structure, then we need to know what the structure is.
One of the high-temperature superconductors is HgBa2CuO4+δ, a classic ‘copper oxide layer’ superconductor, descended from the original high-Tc materials discovered in the late 1980s. We found some data on it in the literature, and decided that while the modelling there was a useful place to start, the model that was developed did not really do a great job of mimicking the observed scattering. Hence, we decided to re-analyse their data.
In summary, we find that when the extra oxygen atoms are added to the structure (that’s the ‘+δ’ in the chemical formula), they go into the structure as long strings of atoms, as correctly identified by the authors of the paper with the original data, which is behind a paywall. What we have done that is new is improve the agreement between model and data by adjusting the positions of the surrounding atoms; it makes sense that when you stuff new atoms into a structure, the ones already there have to adjust to accommodate them. Based on things like bond valence sums, we can get some idea of what these adjustments should be, and then create a model crystal in which the atoms are pushed around in sensible ways in response to the added oxygens. These new atomic positions will then influence the environments of other atoms, and of electrons moving through the structure. Here is an image to break up the text:

Since the paper is open access, I won’t go into massive detail here, but when it comes to modelling the streaks of scattering in the pattern, the results are pretty solid. There are some other, subtle details we continue to work on, but so far I think we can conclude that the methods of Monte Carlo analysis of single crystal diffuse scattering promise to deepen our understanding of superconductors and maybe — maybe! — will help us design ones that work at ever-higher temperatures.
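For readers unfamiliar with bond valence sums: each bond to an atom contributes a valence exp((R0 − R)/b), and the contributions should sum to roughly the atom's oxidation state. A small sketch (my own illustration, not our actual analysis code; the R0 value is the commonly tabulated one for Cu²⁺–O, and the distances are invented):

```python
import math

def bond_valence_sum(distances, r0, b=0.37):
    """Sum of exp((r0 - r)/b) over the bonds to one atom.  r0 and b are
    empirical bond-valence parameters; b is conventionally taken as
    0.37 Angstrom for most bond types."""
    return sum(math.exp((r0 - r) / b) for r in distances)

# Illustrative only: a Cu atom with four equal in-plane Cu-O bonds.
# r0 for Cu(2+)-O is about 1.679 Angstrom in the standard tables.
v = bond_valence_sum([1.94, 1.94, 1.94, 1.94], r0=1.679)
print(round(v, 2))  # close to the expected oxidation state of ~2
```

If stuffing in an interstitial oxygen drives a neighbour's sum well away from its expected valence, that is a hint the surrounding atoms need to relax, which is the physical reasoning behind the adjustments described above.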
Methuen, 1946. 126 pages.
Methuen’s Monographs on Physical Subjects was a long-running series of slim volumes dealing with a wide range of subjects, from AC power transmission to cosmology. This particular example is the 1946 revision of Worsnop’s 1930 volume. It covers quite fundamental topics, including the properties and generation of X-rays (pre-synchrotron, of course), scattering (Thomson and Compton), refraction, diffraction, spectroscopy (including Auger) and the importance of X-ray studies in supporting the development of quantum theory.
It may seem on the surface that a book from seventy years ago would be of nothing but historical interest. This is in fact not true. The volume gives a very clear account of how an X-ray tube works — and these are still the most common sources of X-rays — and explains how the X-ray spectrum is obtained, with its continuous background and characteristic radiation. It also traces out how X-rays were first characterised, their wavelengths determined, and their properties explored in early important experiments. Both of these give a sense of the history of the field, and also present some important physics in a very accessible way. Yes, it does in places use the ‘X-unit’, which was not destined to remain part of the field, and refers to ‘centrifugal force’ in a way which I think suggests that the author had not thought clearly about some fundamental aspects of mechanics (or that word usages have changed a little).
These little books show up here and there in jumble sales and book shops, and I’ve accumulated a small subset of them. They are very readable, though pitched at a fairly high level — this is not popular science! — and I continue to pick them up when I see them.
For workers in the field.
Pan 2003, 497 pages.
This is a fascinating book. Sheer detail brings Hooke’s remarkable career into sharp focus.
Inwood is not a prose stylist, I would venture to say. Perhaps it is due to the nature of Hooke’s career — he pursued many themes for a long time — but the text comes to be rather repetitive. List-like. But my interest never flagged because of the subject, because of the pains taken over the research, and because of the enormous significance of Hooke’s work.
Hooke was one of the key figures of the 17th century, at least in England. He left no field of natural philosophy untouched, yes — but was also second only to Wren in shaping the rebuilt London that rose after the great fire. His contributions were perhaps rarely fundamental. He was part of the debate that laid the groundwork for Newton’s Laws, and stated some of Newton’s results before Newton, but from intuition; and without Newton’s impeccable mathematical foundations, his comments were more in the form of opinions in a debate, rather than laws carved in stone.
Why is he so often merely a footnote to the Newton story?
There are several reasons.
One is that Hooke was a professional research scientist — possibly the first in the land. Newton inherited and was gifted enough money to allow him to develop his ideas in a lofty isolation, giving his perfunctory lectures at Cambridge but essentially able to think and dig deep. Hooke was employed by The Royal Society to provide them with demonstrations every week, some titbit to fascinate the dilettantes. One week he was inflating an animal’s lungs or evacuating vessels, the next demonstrating a new pendulum or sextant. He did not have the luxury of time and resources for deep, fundamental study. But I suspect Hooke would have thrived in today’s scientific environment, where entrepreneurship is all the fashion, though would have found many of us far too narrow for his liking.
Related to that was his need to maintain his reputation. Hooke was not poor — but he relied on his own efforts for his money. Forty pounds a year for this, fifty for that, a fee for designing a mansion, and so on. This meant that, again, the need to make a living got in the way of really grappling with the essence of a field. Further, it explains his irritating and ultimately counter-productive mania about priority for various discoveries. Only by ensuring that everybody knew that he was the mind behind various ideas could he be sure that the employment would continue. This led him to claim he had achieved things he had not — or to prematurely claim achievements that never came to fruition, or to play odd games like using a code to present results he wanted to claim as his own but was not yet ready to reveal. The end result was a great deal of scepticism toward his every word from certain figures, in particular partisans of other great figures of the time like Newton and Huygens.
But I suspect it was in his nature to flit from topic to topic. His was a restless energy. He did fundamental work in chemistry — where he was Boyle’s right hand man — and made some statements that presage the ideal gas law; and in physics, where he invented early vacuum pumps and made important strides in time-keeping (work which led to his most persistent memorial — Hooke’s Law of the force due to the extension of a spring), in astronomy and in optics. In biology he did early work on the nature of respiration and published Micrographia, one of the most important texts of its time and a key work in the history of microscopy and biology. He coined the term ‘cell’ in biology, by analogy with a monk’s cell, when he was looking at the structure of cork under one of his own microscopes. In my own field of crystallography he proposed the idea that crystals were made of stacked identical building blocks, and that this explained their regular facets. Typically, this is rarely mentioned in crystallography texts.
Another reason for Hooke’s lower fame is, I suspect, that no portraits of him remain. No little marginal bio with a photo appears in a history or text book. It adds up.
Yet he was in some ways the most modern of all the figures of his time; he was a scientist by career rather than as a gentlemanly pursuit, and a firm believer in the primacy of reason and evidence. Newton explored alchemy and magic, and has aptly been described as an early scientist and a late sorcerer. Hooke saw petrified shells high up in the mountains and, rather than convince himself they were ‘figured stones’ (what? decoys buried by God?), insisted that they had once been in the sea and the sea bed must have risen, and if that meant that the world was older than the Bible indicated then… so be it. He found the conclusions difficult to stomach, but he did not bury his head in the sand, unlike so many around him. And he came to these ideas a century before Hutton came on the scene and two before Lyell. But, typically, he did not bury himself in the work; he threw off ideas, argued in their favour, and moved on. Part of the greatness of Darwin is that he buttressed his theory and made it impossible to ignore. Similarly, Newton underpinned his ideas about gravitation — most of which had been mooted previously by someone else, Hooke included — with a unifying mathematical treatment that made them more than a matter for debate. It is remarkable how often figures we venerate for their originality were in fact not as original as we think, but more rigorous. We should not underestimate the importance of this! We all tend to cling onto old ideas as long as we can. They are comfortable, familiar, accepted. To displace them takes fortitude and thoroughness. Especially in earlier times, when religion retained its grip.
He also invented the universal joint.
This book is essential reading for anyone interested in the history of science, or in Newton or the 17th century. It offers lessons on the parlousness of reputation and legacy, and is testament to Inwood’s inkling that there was a story here to be told. Even the workmanlike nature of the prose, which I began by criticising, seems like the only language suitable for the topic; forthright, truthful and putting content above form.
Here begins a technicalish, science-y post.
This post is all about a paper we recently published in IUCrJ, here is the link: http://dx.doi.org/10.1107/S2052252515018722.
When X-rays or neutrons scatter off a sample of crystalline powder, the result is a powder diffraction pattern. Usually the intensity of the scattering is measured as a function of the angle of scattering for radiation of a fixed wavelength. The angle can be converted to the more universal ‘scattering vector’, whose magnitude is q = 4π sin(θ)/λ, where 2θ is the scattering angle and λ is the wavelength.
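The conversion is one line of code. A quick sketch (the wavelength is the standard Cu Kα value, used here just as an example):

```python
import math

def two_theta_to_q(two_theta_deg, wavelength):
    """Convert scattering angle 2-theta (degrees) to the magnitude of
    the scattering vector, q = 4 pi sin(theta) / lambda."""
    theta = math.radians(two_theta_deg) / 2.0
    return 4.0 * math.pi * math.sin(theta) / wavelength

# Cu K-alpha radiation, wavelength ~1.5406 Angstrom:
q = two_theta_to_q(30.0, 1.5406)
print(round(q, 3))  # q in inverse Angstroms, about 2.111
```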
Now, when analysing a pattern like this, the most common method is Rietveld refinement, in which a possible unit cell is posited, and its diffraction pattern calculated and compared to the observed.
Now, this is very useful indeed, but there are a couple of issues. The first is that this sort of analysis only uses the strong Bragg reflections in the pattern — the big sharp peaks. Mathematically, this means it finds the single-body average, which is to say that it can show what is going on on each atomic site, but not how one site relates to another. For example, it might say that a site has a 50% chance of having an atom of type A on it and 50% of type B, but it can’t say how this influences a neighbouring site. Do A atoms cluster? Do they like to stay apart? This information, if we can get it, tells of the short-range order (SRO) in a crystalline material, where the Bragg peaks tell of the long-range order. SRO is important, interesting, and rather difficult to get a handle on.
Now, the flat, broad (‘diffuse’) scattering between the Bragg peaks — stuff that looks rather like background, and is often mixed up with background — contains two-body information. If the non-sample scattering is carefully removed, then what is left is all the scattering from the sample, and only scattering from the sample. This is called the total scattering (TS). This can then be analysed to try to understand what is going on. The most common way of doing that is to calculate the pair distribution function (PDF) from the TS. This essentially shows the probabilities of finding scatterers at different separations — a two-body probability, which helps us ‘get inside’ the average structure that we get from Bragg peak (Rietveld) analysis.
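To make "two-body probability" concrete, here is a toy calculation of a PDF-like histogram of pair separations. It is a made-up one-dimensional chain of scatterers, purely my own illustration (nothing to do with the actual data or analysis in the paper):

```python
import numpy as np

# Toy model: a 1D chain of 50 scatterers with a 3.0 Angstrom repeat and
# a small random displacement on each site (the kind of disorder a PDF
# probes).
rng = np.random.default_rng(0)
x = 3.0 * np.arange(50) + rng.normal(0.0, 0.05, 50)

# The two-body statistic: a histogram of all pairwise separations.
d = np.abs(x[:, None] - x[None, :])
d = d[np.triu_indices(50, k=1)]   # each pair counted once
hist, edges = np.histogram(d, bins=np.arange(0.25, 20.0, 0.5))

# Peaks sit at multiples of the 3.0 repeat, broadened by the disorder;
# that broadening is what a PDF reads as thermal/static displacement.
nearest = ((d > 2.5) & (d < 3.5)).sum()
print(nearest)  # 49 nearest-neighbour pairs
```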
Now, this is all talking about powders. The main issue is that a powder is a collection of randomly oriented crystallites/grains which means the pattern is averaged. Ideally, it would be nice to have a single crystal, to measure the total scattering in a way that is not averaged by random orientation. This is Single Crystal Diffuse Scattering, SCDS. It is (in my opinion) rather a gold standard in structural studies, but is pretty tricky to do…
What the paper we have just published in IUCrJ does is take a system we have studied using SCDS, and then study it using the PDF, to show what things the PDF can reasonably be expected to reveal and what features are hidden from it (but apparent in the SCDS). We did this because we felt that the PDF, powerful as it is, was perhaps being over-interpreted and treated as more definitive than it is; in many cases it is the only viable technique, so it is hard to gauge when it is being over-interpreted. Hence we look at it for a case where it is not the only available method.
What we found was that PDF is very good for showing the magnitudes of the spacings between atoms, and for showing the population of the spacings between atoms, but is not good for showing how these spacings might be correlated (ie, are the closely spaced atoms clustering together?). Similarly, it was not good at showing up the ordering of atoms (…ABABA… vs …AAABBBB… for example).
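The …ABAB… versus …AABB… point can be made concrete with a toy statistic. The sequences and the nearest-neighbour measure below are my own illustration, not the analysis from the paper:

```python
import random

def nn_unlike_fraction(seq):
    """Fraction of nearest-neighbour pairs that are unlike (an A next
    to a B) -- a simple short-range-order statistic."""
    pairs = list(zip(seq, seq[1:]))
    return sum(a != b for a, b in pairs) / len(pairs)

alternating = "AB" * 500            # ...ABABAB...
blocky = "A" * 500 + "B" * 500      # ...AAABBB...
random.seed(0)
shuffled = "".join(random.sample(alternating, len(alternating)))

# All three have the same 50:50 composition, so a one-body (site-average)
# description cannot tell them apart; the two-body statistic can.
print(nn_unlike_fraction(alternating))  # 1.0
print(nn_unlike_fraction(blocky))       # ~0.001 (one AB boundary)
print(nn_unlike_fraction(shuffled))     # ~0.5
```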
The PDF is in real space — it is a plot of probability against separation, with separation measured in metres, like distances in the world we experience. The SCDS and the TS exist in reciprocal space, where distances are measured in inverse metres (m⁻¹). Some atomic orderings give rise to features that are highly localised in reciprocal space, so they are best explored in that space. Also, if the ordering in question only affects a small region of reciprocal space, and that is getting smeared out by the powder averaging, then it won’t show up very well in the TS or, in turn, in the PDF.
For example, above is a cut of SCDS calculated from an analysis of the PDF, whereas below is our model for the SCDS. Clearly the latter should be a lot better — and it is. No surprise. Now, this is not making the PDF fight with one hand tied behind its back, and it is not setting up a straw man, either. The point is not to show that SCDS is a more definitive measurement; the point is to show what the PDF can be expected to tell us, so that when we are studying the many systems that we cannot do with SCDS because we cannot get a single crystal, we know when we are stretching the data too far.