There are a lot of problems with Open Access (OA) academic publishing. The biggest one is simple: if authors are paying to get their work out there, there is a financial incentive to publish everything that can be paid for. This has resulted in an explosion of utterly crap online journals, which effectively take money, post a pdf on a website and do little else. There are decent OA journals (I have even used them myself), but virtually all of them come from established publishers. A new nadir was reached recently when it was pointed out that some journals are even charging people to be on their editorial staff, presumably because such positions are seen as valuable on a CV or something. It is hideous to behold. Browse an old library and look at the standard of papers in pre-internet era journals: it is on average much, much higher than now. I don’t think the good journals (say, those of the American Physical Society, IoP, IUCr, etc) have deteriorated, but the scientific literature is so diluted now.
The internet has enabled rapid search, but has also made it essential. New authors (and perhaps older ones too) must research the places they publish. I repeat: the best place to publish is the place where you find the most useful papers.
BUT… I agree it is undesirable that publicly-funded science is published in subscriber-only journals. But how do we avoid the current problem that open access has become a synonym for rubbish?
The DOAJ website is something of a clearing house. It maintains a list of journals and a list of ones it has delisted, and links to places like http://thinkchecksubmit.org/, which can also help out. Having said that, DOAJ is funded by memberships, and these include publishers, which is definitely a conflict of interest. It may be a necessary evil in getting the organisation running, but it is not a good look. A few quick, non-exhaustive spot-checks suggest that the publishers on the DOAJ website mostly do not appear on Beall’s list of dodgy publishers. So that’s a good thing.
DOAJ is meant to be a kind of ‘white list’ for open access. That’s a good idea. Ideally, though, labs and universities would take more interest in the white list. They (largely, though governments matter too) control the metrics by which researchers are measured; they produce the research and they use the results.
I can imagine a parallel world where the OA journals are run by consortia of labs and universities. They could do it with minimal duplication of effort, host a network of mirrored servers, not charge a fee because they would be paying themselves anyway, base publication purely on merit, and probably save a lot of money that would otherwise be funnelled into the pockets of crappy OA journals.
Clearly this is impossible.
It would potentially send the current good publishers to the wall, and it would be vulnerable to cronyism, with papers getting published because the people in the research labs have close links to the publishers (though governance could probably deal with that; even now publications have editors and boards and referees who may know the authors, so it’s not that different. There could be rules about submitting your paper to a non-local editor with non-local reviewers, which would be easier if the whole thing were run through a wide, multinational network such as that proposed). And it is against the modern trend of outsourcing everything (though the labs could get together and outsource the whole exercise in order to satisfy that modern fantasy).
What can I say? I have my doubts but I am not convinced it is unworkable. How something like http://arxiv.org/ would fold into it, I’m not sure. Anyway, just some thinking aloud.
So, all you sciency people out there who look at materials, whether you’re a solid state chemist or a condensed matter physicist or a materials engineer or whether you work with organic materials or metals or ceramics or… well, let’s face it, everything you can touch, sit on or turn into a useful gadget is made of stuff and stuff means materials and materials means the annual Wagga Condensed Matter and Materials Meeting…
Go to this magisterial website: http://cmm-group.com.au/
Or go direct to this one: http://www.wagga2017.unsw.edu.au/
And take a look. It’s cheap! Just a few hundred dollars for accommodation, meals and the meeting. It’s cheap, but it’s not nasty, though it might be bloody hot. Here are the important dates:
Abstract Submission Open: Monday 12th September 2016
Abstract Submission Close: Friday 11th November 2016
Notification of Acceptance: Friday 23rd December 2016
Conference Begins: Tuesday 31st January 2017
Here’s the conference flyer as a png file, stolen directly from the conference website:
The AANSS is a great mix of formality and informality, quality science in a relaxed atmosphere. Anyone who has or might or ought to use neutron scattering in their work (and isn’t that all of us, really?) is invited. And here’s a trick: Registration is $50 cheaper for ANBUG members but ANBUG membership is free! So join up!
So on August 12 and 13 we (myself and numerous colleagues from the UNSW Canberra campus) took part in Science in ACTion, advertising the wonders of Science to the good people of the ACT (Canberra) and a few surrounding towns. It was held at the Old Bus Depot markets, and we presented a liquid nitrogen show (mostly just freezing balloons…) and some other stuff: a Van de Graaff generator (very effective — I got a spark off a nearby table frame…), some UV fluorescence, mathematical puzzles and mazes, and some cheap chromatography using filter paper and felt-tipped pens:
It was all part of Science Week 2016, and I don’t have the photos back from the chemist yet, so I can’t show you anything else. But if you look in this image, you can see our purple and yellow stand in the background on the left, and some coloured balloons.
It has long been an intention of mine to take our techniques for exploring the way the atoms are arranged in complicated materials and apply them to superconductors. The crystal structures of the oxide (high-temperature) superconductors are similar to those found in ferroelectric materials, which we have looked at in some detail. The difference is that in ferroelectrics the positions of the atoms relate directly to the interesting properties, since the ferroelectricity arises from atomic displacements (that is, from atoms moving around), whereas in superconductors the useful property shows up in how the electrons behave, and while this must be enabled by the crystal structure, the link is less direct. Even so, it seems to me that if we want to have a good idea of how the properties arise from the structure, then we need to know what the structure is.
One of the high-temperature superconductors is HgBa2CuO4+δ, a classic ‘copper oxide layer’ superconductor, descended from the original high-TC materials discovered in the late 1980s. We found some data on it in the literature, and decided that while the modelling there was a useful place to start, the model that was developed did not really do a great job of mimicking the observed scattering. Hence, we decided to re-analyse their data.
In summary, we find that when the extra oxygen atoms are added to the structure (that’s the ‘+δ’ in the chemical formula), they go in as long strings of atoms, as correctly identified by the authors of the paper with the original data, which is behind a paywall. What we have done that is new is improve the agreement between model and data by adjusting the positions of the surrounding atoms; it makes sense that when you stuff new atoms into a structure, the ones already there have to adjust to accommodate them. Based on things like bond valence sums, we can get some idea of what these adjustments should be, and then create a model crystal in which the atoms are pushed around in sensible ways in response to the added oxygens. These new atomic positions will then influence the environments of other atoms, and of electrons moving through the structure. Here is an image to break up the text:

Since the paper is open access, I won’t go into massive detail here, but when it comes to modelling the streaks of scattering in the pattern the results are pretty solid. There are some other, subtle details we continue to work on, but so far I think we can conclude that the methods of Monte Carlo analysis of single crystal diffuse scattering promise to deepen our understanding of superconductors and maybe — maybe! — will help us design ones that work at ever-higher temperatures.
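As an aside, the bond valence sum mentioned above has a simple empirical form: each bond of length r contributes exp((R0 - r)/b), with R0 and b tabulated for each atom pair, and the contributions are summed to estimate an atom’s valence. Here is a minimal sketch; the parameter values are illustrative, typical of a Cu(II)-O bond, and not the specific values used in our paper:

```python
import math

def bond_valence_sum(bond_lengths, r0=1.679, b=0.37):
    """Bond valence sum for one atom: V = sum(exp((r0 - r)/b)).

    r0 and b are tabulated empirical parameters; the defaults here are
    typical of a Cu(II)-O bond and are for illustration only.
    """
    return sum(math.exp((r0 - r) / b) for r in bond_lengths)

# Four short in-plane Cu-O bonds plus two long apical ones (lengths in Angstroms):
v = bond_valence_sum([1.94] * 4 + [2.78] * 2)
print(round(v, 2))  # comes out near the expected Cu valence of +2
```

If the sum comes out far from the expected oxidation state, the local geometry in the model is probably strained, which is exactly the sort of signal that can guide the atomic adjustments.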
Methuen, 1946. 126 pages.
Methuen’s Monographs on Physical Subjects was a long-running series of slim volumes dealing with a wide range of subjects, from AC power transmission to cosmology. This particular example is the 1946 revision of Worsnop’s 1930 volume. It covers quite fundamental topics, including the properties and generation of X-rays (pre-synchrotron, of course), scattering (Thomson and Compton), refraction, diffraction, spectroscopy (including Auger) and the importance of X-ray studies in supporting the development of quantum theory.
It may seem on the surface that a book from seventy years ago would be of nothing but historical interest. This is in fact not true. The volume gives a very clear account of how an X-ray tube works — and these are still the most common sources of X-rays — and explains how the X-ray spectrum is obtained, with its continuous background and characteristic radiation. It also traces out how X-rays were first characterised, their wavelengths determined, and their properties explored in early important experiments. These accounts both give a sense of the history of the field and present some important physics in a very accessible way. Yes, it does in places use the ‘X-unit’, which was not destined to remain part of the field, and refers to ‘centrifugal force’ in a way which I think suggests that the author had not thought clearly about some fundamental aspects of mechanics (or that word usages have changed a little).
These little books show up here and there in jumble sales and book shops, and I’ve accumulated a small subset of them. They are very readable, though pitched at a fairly high level — this is not popular science! — and I continue to pick them up when I see them.
For workers in the field.
Pan 2003, 497 pages.
This is a fascinating book. Sheer detail brings Hooke’s remarkable career into sharp focus.
Inwood is not a prose stylist, I would venture to say. Perhaps it is due to the nature of Hooke’s career — he pursued many themes for a long time — but the text comes to be rather repetitive. List-like. But my interest never flagged because of the subject, because of the pains taken over the research, and because of the enormous significance of Hooke’s work.
Hooke was one of the key figures of the 17th century, at least in England. He left no field of natural philosophy untouched, yes — but was also second only to Wren in shaping the rebuilt London that rose after the great fire. His contributions were perhaps rarely fundamental. He was part of the debate that laid the groundwork for Newton’s Laws, and stated some of Newton’s results before Newton, but from intuition; and without Newton’s impeccable mathematical foundations, his comments were more in the form of opinions in a debate, rather than laws carved in stone.
Why is he so often merely a footnote to the Newton story?
There are several reasons.
One is that Hooke was a professional research scientist — possibly the first in the land. Newton inherited and was gifted enough money to allow him to develop his ideas in a lofty isolation, giving his perfunctory lectures at Cambridge but essentially able to think and dig deep. Hooke was employed by The Royal Society to provide them with demonstrations every week, some titbit to fascinate the dilettantes. One week he was inflating an animal’s lungs or evacuating vessels, the next demonstrating a new pendulum or sextant. He did not have the luxury of time and resources for deep, fundamental study. But I suspect Hooke would have thrived in today’s scientific environment, where entrepreneurship is all the fashion, though would have found many of us far too narrow for his liking.
Related to that was his need to maintain his reputation. Hooke was not poor — but he relied on his own efforts for his money. Forty pounds a year for this, fifty for that, a fee for designing a mansion, and so on. This meant that, again, the need to make a living got in the way of really grappling with the essence of a field. Further, it explains his irritating and ultimately counter-productive mania about priority in various discoveries. Only by ensuring that everybody knew that he was the mind behind various ideas could he be sure that the employment would continue. This led him to claim he had achieved things he had not, to prematurely claim achievements that never came to fruition, and to play odd games like using a code to present results he wanted to claim as his own but was not yet ready to reveal. The end result was a great deal of scepticism toward his every word from certain figures, in particular partisans of other great figures of the time like Newton and Huygens.
But I suspect it was in his nature to flit from topic to topic. His was a restless energy. He did fundamental work in chemistry — where he was Boyle’s right-hand man — and made some statements that presage the ideal gas law; and in physics, where he invented early vacuum pumps and made important strides in time-keeping (work which led to his most persistent memorial — Hooke’s Law of the force due to the extension of a spring), in astronomy and in optics. In biology he did early work on the nature of respiration and published Micrographia, one of the most important texts of its time and a key work in the history of microscopy and biology. He coined the term ‘cell’ in biology, by analogy with a monk’s cell, when he was looking at the structure of cork under one of his own microscopes. In my own field of crystallography he proposed the idea that crystals were made of stacked identical building blocks, and that this explained their regular facets. Typically, this is rarely mentioned in crystallography texts.
Another reason for Hooke’s lower fame is, I suspect, that no portraits of him remain. No little marginal bio with a photo appears in a history or text book. It adds up.
Yet he was in some ways the most modern of all the figures of his time; he was a scientist by career rather than as a gentlemanly pursuit, and a firm believer in the primacy of reason and evidence. Newton explored alchemy and magic, and has aptly been described as an early scientist and a late sorcerer. Hooke saw petrified shells high up in the mountains and, rather than convince himself they were ‘figured stones’ (what? decoys buried by God?), insisted that they had once been in the sea and the sea bed must have risen, and if that meant that the world was older than the bible indicated then… so be it. He found the conclusions difficult to stomach, but he did not bury his head in the sand, unlike so many around him. And he came to these ideas a century before Hutton came on the scene and two before Lyell. But, typically, he did not bury himself in the work; he threw off ideas, argued in their favour, and moved on. Part of the greatness of Darwin is that he buttressed his theory and made it impossible to ignore. Similarly, Newton underpinned his ideas about gravitation — most of which had been stated previously by someone else, Hooke included — with a unifying mathematical treatment that made them more than a matter for debate. It is remarkable how often figures we venerate for their originality were in fact not as original as we think, but more rigorous. We should not underestimate the importance of this! We all tend to cling to old ideas as long as we can. They are comfortable, familiar, accepted. To displace them takes fortitude and thoroughness. Especially in earlier times, when religion retained its grip.
He also invented the universal joint.
This book is essential reading for anyone interested in the history of science, or in Newton or the 17th century. It offers lessons on the parlousness of reputation and legacy, and is testament to Inwood’s inkling that there was a story here to be told. Even the workmanlike nature of the prose, which I began by criticising, seems like the only language suitable for the topic; forthright, truthful and putting content above form.
Intel 6th-Gen i7 6700K SSD DDR4 4.0GHz CPU, 16GB DDR4 RAM, 2TB SATA III 6GB/s HDD, N600 Wireless Dual Band PCI-Express Network Adapter with 2 Antennae. (Just a cut and paste from the specs.)
Ordered it from D&D Computer Technology Pty Ltd, and delivery was pretty quick. At my work the standard Linux ‘solution’ is RHEL, so it is running RHEL 6.7 (the IT guys here don’t like 7 — it uses the controversial systemd, for one thing…)
Wireless internet so I can put it wherever I want to.
Compared to our previous generation of boxen (4+ years old), it runs a fairly typical Monte Carlo simulation in 20m55s instead of 27m21s, a useful but not massive improvement. The code is a single, single-threaded process, so the run time scales with clock speed more than anything else.
I’ve put LaTeX on the box, but I am going to manage it via TeXLive’s tlmgr rather than RHEL’s package management, so we’ll see how that works out…
Here begins a technicalish, science-y post.
This post is all about a paper we recently published in IUCrJ; here is the link: http://dx.doi.org/10.1107/S2052252515018722.
When X-rays or neutrons scatter off a sample of crystalline powder, the result is a powder diffraction pattern. Usually the intensity of the scattering is measured as a function of the angle of scattering for radiation of a fixed wavelength. The angle can be converted to the more universal ‘scattering vector’, Q = 4π sin(θ)/λ, where 2θ is the scattering angle and λ is the wavelength.
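The conversion from angle to scattering vector is standard crystallography (nothing specific to the paper), and can be sketched as:

```python
import math

def q_from_angle(two_theta_deg, wavelength):
    """Convert a scattering angle 2-theta (in degrees) to the magnitude of
    the scattering vector, Q = 4*pi*sin(theta)/lambda.

    With the wavelength in Angstroms, Q comes out in inverse Angstroms.
    """
    theta = math.radians(two_theta_deg / 2.0)
    return 4.0 * math.pi * math.sin(theta) / wavelength

# e.g. Cu K-alpha radiation (wavelength 1.5406 A) scattered through 30 degrees:
q = q_from_angle(30.0, 1.5406)
print(round(q, 3))  # → 2.111
```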
Now, when analysing a pattern like this, the most common method is Rietveld refinement, in which a possible unit cell is posited, and its diffraction pattern calculated and compared to the observed.
Now, this is very useful indeed, but there are a couple of issues. The first is that this sort of analysis only uses the strong Bragg reflections in the pattern — the big sharp peaks. Mathematically, this means it finds the single-body average, which is to say that it can show what is going on on each atomic site but not how one site relates to another. For example, it might say that a site has a 50% chance of having an atom of type A on it and 50% of type B, but it can’t say how this influences a neighbouring site. Do A atoms cluster? Do they like to stay apart? This information, if we can get it, tells of the short-range order (SRO) in a crystalline material, where the Bragg peaks tell of the long-range order. SRO is important, interesting, and rather difficult to get a handle on.
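To make the distinction concrete, here is a toy example (my own illustration, not from the paper) using the standard Warren-Cowley parameter for nearest neighbours in a one-dimensional 50:50 chain of A and B atoms. A random chain and a perfectly alternating chain have exactly the same average site occupancies, but very different short-range order:

```python
import random

def warren_cowley_alpha(chain):
    """Nearest-neighbour Warren-Cowley parameter for a 50:50 two-species
    chain (treated as periodic): alpha = 1 - P(unlike neighbour) / 0.5.

    alpha ~ 0 for a random arrangement, -1 for perfect alternation
    (unlike neighbours favoured), positive when like atoms cluster.
    """
    n = len(chain)
    unlike = sum(chain[i] != chain[(i + 1) % n] for i in range(n))
    return 1.0 - (unlike / n) / 0.5

random.seed(1)
random_chain = [random.choice("AB") for _ in range(100000)]
alternating = list("AB") * 50000

print(round(warren_cowley_alpha(random_chain), 2))  # close to 0.0
print(warren_cowley_alpha(alternating))             # exactly -1.0
```

The point is that a Bragg-only (Rietveld) analysis sees just the 50:50 occupancies, so both chains look identical to it; only the diffuse scattering can tell them apart.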
Now, the flat, broad (‘diffuse’) scattering between the Bragg peaks — stuff that looks rather like background, and is often mixed up with background — contains two-body information. If the non-sample scattering is carefully removed, then what is left is all the scattering from the sample, and only scattering from the sample. This is called the total scattering (TS). This can then be analysed to try to understand what is going on. The most common way of doing that is to calculate the pair distribution function (PDF) from the TS. This essentially shows the probabilities of finding scatterers at different separations — a two-body probability, which helps us ‘get inside’ the average structure that we get from Bragg peak (Rietveld) analysis.
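In outline (this is the textbook sine-transform relation, not our actual analysis software), the reduced PDF comes from the structure function S(Q) via G(r) = (2/π) ∫ Q[S(Q) - 1] sin(Qr) dQ. A toy implementation, fed with a made-up S(Q) for a single dominant interatomic spacing:

```python
import math

def pdf_from_total_scattering(q, s_of_q, r_values):
    """Reduced PDF via G(r) = (2/pi) * integral of Q [S(Q) - 1] sin(Q r) dQ,
    evaluated as a simple trapezoidal sum over the sampled S(Q)."""
    g = []
    for r in r_values:
        f = [qi * (si - 1.0) * math.sin(qi * r) for qi, si in zip(q, s_of_q)]
        area = sum(0.5 * (f[i] + f[i + 1]) * (q[i + 1] - q[i])
                   for i in range(len(q) - 1))
        g.append(2.0 / math.pi * area)
    return g

# Toy structure function for one dominant spacing d: S(Q) = 1 + sin(Qd)/(Qd)
d = 2.5                                   # spacing in Angstroms
q = [0.02 * i for i in range(1, 1501)]    # Q from 0.02 to 30 inverse Angstroms
s = [1.0 + math.sin(qi * d) / (qi * d) for qi in q]
r = [0.05 * i for i in range(1, 101)]     # r from 0.05 to 5 Angstroms
g = pdf_from_total_scattering(q, s, r)
print(r[g.index(max(g))])  # the peak in G(r) lands at r = d
```

With a finite Q range the peak is broadened and ringed with termination ripples, which is one of the practical limitations of the transform.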
Now, this is all talking about powders. The main issue is that a powder is a collection of randomly oriented crystallites/grains which means the pattern is averaged. Ideally, it would be nice to have a single crystal, to measure the total scattering in a way that is not averaged by random orientation. This is Single Crystal Diffuse Scattering, SCDS. It is (in my opinion) rather a gold standard in structural studies, but is pretty tricky to do…
What the paper we have just published in IUCrJ does is take a system we have studied using SCDS, and then study it using the PDF, to show what things the PDF can reasonably be expected to reveal and what features are hidden from it (but apparent in the SCDS). We did this because we felt that the PDF, powerful as it is, was perhaps being over-interpreted and treated as more definitive than it is; in many cases it is the only viable technique, so it is hard to gauge when it is being over-interpreted. Hence we look at it for a case where it is not the only available method.
What we found was that PDF is very good for showing the magnitudes of the spacings between atoms, and for showing the population of the spacings between atoms, but is not good for showing how these spacings might be correlated (ie, are the closely spaced atoms clustering together?). Similarly, it was not good at showing up the ordering of atoms (…ABABA… vs …AAABBBB… for example).
The PDF is in real space — it is a plot of probability against separation, and separation is measured in metres, like distances in the world we experience. The SCDS and the TS exist in reciprocal space, where distances are measured in inverse metres (m⁻¹). Some atomic orderings give rise to features that are highly localised in reciprocal space, so they are best explored in that space. Also, if the ordering in question only affects a small region of reciprocal space, and that is getting smeared out by the powder averaging, then it won’t show up very well in the TS or in the PDF.
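A toy calculation (again my own illustration, not from the paper) shows why: take a one-dimensional 50:50 chain and compute the diffraction pattern of its chemical ordering with a naive discrete Fourier sum. An alternating …ABAB… chain throws all of its ordering intensity into one sharp superlattice peak, while a clustered …AAABBB… chain piles it up near Q = 0; the same composition, but very different reciprocal-space signatures:

```python
import cmath

def ordering_intensity(chain):
    """|sum_j f_j exp(2 pi i k j / N)|^2 / N for each allowed k, using a
    scattering 'contrast' of +1 for A and -1 for B (a naive 1D pattern)."""
    n = len(chain)
    f = [1.0 if s == "A" else -1.0 for s in chain]
    return [abs(sum(fj * cmath.exp(2j * cmath.pi * k * j / n)
                    for j, fj in enumerate(f))) ** 2 / n
            for k in range(n)]

n = 64
i_alt = ordering_intensity("AB" * (n // 2))                  # ...ABABAB...
i_blk = ordering_intensity("A" * (n // 2) + "B" * (n // 2))  # ...AAABBB...

print(i_alt.index(max(i_alt)))  # → 32: one sharp superlattice peak at k = n/2
print(round(i_blk[1], 1), round(i_blk[32], 1))  # low-Q intensity, none at k = n/2
```

A feature like the single sharp peak at k = n/2 is exactly the sort of localised reciprocal-space signal that powder averaging can smear into near-invisibility.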
For example, above is a cut of SCDS calculated from an analysis of the PDF, whereas below is our model for the SCDS. Clearly the latter should be a lot better — and it is. No surprise. This is not making the PDF fight with one hand tied behind its back, and it is not setting up a straw man, either. The point is not to show that SCDS is a more definitive measurement; the point is to show what the PDF can be expected to tell us, so that when we are studying the many systems that we cannot do with SCDS because we cannot get a single crystal, we know when we are stretching the data too far.