Saturday, January 13, 2018

About grants: What is cost sharing?

In addition to science, I occasionally use this forum to try to explain to students and the public how sponsored research works in academia.  Previously I wrote about the somewhat mysterious indirect costs.  This time I'd like to discuss cost sharing.

Cost sharing is what it sounds like:  when researchers at a university propose a research project, the funding agency or foundation wants to see the university kick in funding as well (beyond obvious contributions like the lab space where the investigators work).  Many grants, such as NSF single-investigator awards, expressly forbid explicit cost sharing.  That has certain virtues:  to some extent, it levels the playing field, so that particularly wealthy universities don't have an even larger advantage.  Agencies would all like to see their money leveraged as far as possible, and if cost sharing were unrestricted on grants, you could imagine a situation where wealthy institutions would effectively have an incentive to try to buy their way to grant success by offering big matching funds.

In other programs, such as the NSF's major research instrumentation program, cost sharing is mandated, but the level is set at a fixed percentage of the total budget.  Similarly, some foundations make it known that they expect university matching at a certain percentage level.  While that might be a reach for some smaller, less-well-off universities when the budget is large, at least it's well-defined.    
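
As a purely hypothetical worked example (the 30% level here is for illustration, not quoted from any particular solicitation):  if a program mandates cost sharing at 30% of the total project cost and the total budget is $1,000,000, the split is

    0.30 \times \$1{,}000{,}000 = \$300{,}000 \;\text{(university share)}, \qquad \$700{,}000 \;\text{(agency share)}.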

Sometimes agencies try to finesse things, forbidding explicit cost sharing but still trying to get universities to put some "skin in the game".  For the NSF materials research science and engineering center program, for example, cost sharing is forbidden (in the sense that explicit promises of $N in matching or institutional funding are not allowed), but proposals are required to include a discussion of "organizational commitment":  "Provide a description of the resources that the organization will provide to the project, should it be funded. Resources such as space, faculty release time, faculty and staff positions, capital equipment, access to existing facilities, collaborations, and support of outreach programs should be discussed, but not given as dollar equivalents."

First and foremost, the science and broader impacts drive the merit review, but there's no question that an institution that happens to be investing synergistically with the topic of such a proposal would look good.

The big challenge for universities is grants where cost sharing is not forbidden and no guidance is given about expectations.  There is a game-theory dilemma at work, with institutions trying to guess what level of cost sharing is really needed to be competitive.

So where does the money for cost sharing come from on the university side?  Good question.  The details depend on the university.  Departments, deans, and the central administration typically have some financial resources that they can use to support cost sharing, but how these responsibilities get arranged and distributed varies.  

For the open-ended cost sharing situations, one question that comes up is, how much is too much?  As I'd discussed before, university administrations often argue that research is already a money-losing proposition, in the sense that the amount of indirect costs that they bring in does not actually come close to covering the true expenses of supporting the research enterprise.  That would argue in favor of minimizing cost sharing offers, except that schools really do want to land some of these awards.  (Clearly there are non-financial or indirect benefits to doing research, such as scholarly reputation, or universities would stop supporting that kind of work.)  It would be very interesting if someone would set up a rumor-mill-style site, so that institutions could share with peers roughly what they are offering up for certain programs - it would be revealing to see what it takes to be competitive.  

Sunday, January 07, 2018

Selected items

A few recent items that caught my eye:

  • The ever-creative McEuen and Cohen groups at Cornell worked together to make graphene-based origami widgets.   Access to the paper seems limited right now, but here is a link that has some of the figures.
  • Something else the Cohen group has worked on in the past is complex fluids, such as colloidal suspensions.  The general statistical physics problem of large ensembles of interacting classical objects (e.g., short-range rigid interactions, as in grains of sand, or perhaps M&Ms) is incredibly rich.  Sure, there are no quantum effects, but often you have to throw out the key simplifying assumption of statistical physics (that your system can readily explore all microscopic states compatible with overall constraints).  This can lead to some really weird effects, like dice packing themselves into an ordered array when stirred properly.
  • When an ensemble of (relatively) hard classical objects really locks up collectively and starts acting like a solid, that's called jamming (see the sketch after this list).  It's still a very active subject of study, and it's of huge industrial importance.  It also explains why mayonnaise gets much more viscous all of a sudden as oil is added.
  • I'd be remiss if I didn't highlight a really nice article in Quanta about one of the grand challenges of (condensed matter) physics:  classifying all possible thermodynamic phases of matter.  While a popular audience thinks of a handful of phases (solid, liquid, gas, maybe plasma), the physics perspective is broader, because of ideas about order and symmetries.  Now we understand more than ever before that we need to consider phases with different topological properties as well.  Classification is not just "stamp collecting".
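
As a concrete toy example of that lost ergodicity, here is a minimal sketch (my own illustration, not something from the items above) of one-dimensional "random sequential adsorption", the classic car parking problem:  unit-length objects land at random positions and then never move, so the system cannot relax toward the optimal packing.  The jammed coverage comes out near Rényi's parking constant, about 0.7476, well short of the 1.0 an ordered arrangement would reach.

    import random

    def fill(length):
        """Park unit-length "cars" at random on an empty interval until jammed.

        Returns the number of cars parked (1d random sequential adsorption).
        """
        if length < 1.0:
            return 0  # no room for another car
        x = random.uniform(0.0, length - 1.0)  # left edge of the new car
        # recurse on the empty stretches left and right of the parked car
        return 1 + fill(x) + fill(length - 1.0 - x)

    L, trials = 10_000.0, 20
    coverage = sum(fill(L) for _ in range(trials)) / (trials * L)
    print(f"jammed coverage ~ {coverage:.4f}  (Renyi's constant ~ 0.7476)")

Because every "car" is frozen where it lands, the ensemble never visits the better-packed configurations that an equilibrium average would assume are accessible - exactly the failure of the usual statistical physics assumption mentioned above.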

Monday, January 01, 2018

The new year and another arbitrary milestone

Happy new year to all!  I'm sure 2018 will bring some exciting developments in the discipline - at minimum, there will surely be a lot of talk about quantum computing.  I will attempt to post more often, and to work further on ways to bring condensed matter and nanoscale physics to a broader audience, though other responsibilities continue to make that a challenge.  Still, to modify a quote from Winston Churchill, "Writing a [blog] is like having a friend and companion at your side, to whom you can always turn for comfort and amusement, and whose society becomes more attractive as a new and widening field of interest is lighted in the mind."

By the way, this is the 1000th post on Nanoscale Views.  As we all know, this has special significance because 1000 is a big, round number.

Wednesday, December 27, 2017

The Quantum Labyrinth - a review

Because of real-life constraints I'm a bit slow off the mark compared to others, but I've just finished reading The Quantum Labyrinth by Paul Halpern, and I wanted to get some thoughts down about it.  The book is a bit of a superposition of a dual biography of Feynman and Wheeler and a general history of the long-term impact of what started out as their absorber theory.

The biographical aspects of Feynman have been well covered before by many, including Feynman himself and, rather more objectively, James Gleick.  Feynman helped create his own legend (safecracking, being a mathematically prodigious, bongo-playing smart-ass).  The bits called back in the present work that resonate with me now (perhaps because of my age) are how lost he was after his first wife's death, his insecurity about whether he was really getting anything done after QED, his embrace of family life with his third wife, and his love of teaching - both as theater and as a way to feel accomplishment when research is slow going.

From other books I'd known a bit about Wheeler, who was still occasionally supervising physics senior theses at Princeton when I was an undergrad.  The backstory about his brother's death in WWII as motivation for Wheeler's continued defense work after the war was new to me.  Halpern does a very good job conveying Wheeler's style - coining pithy epigrams ("Spacetime tells matter how to move; matter tells spacetime how to curve.", "The boundary of a boundary is zero.") and jumping from topic to topic with way-outside-the-box thinking.  We also see him editing his students' theses and papers to avoid antagonizing people.  Interesting.

From the Feynman side, the absorber theory morphed into path integrals, his eponymous diagrams, and his treatment of quantum electrodynamics.   The book does a good job discussing this, though like nearly every popularization, occasionally the analogies, similes, and metaphors end up sacrificing accuracy for the sake of trying to convey physical intuition.    From the Wheeler angle, we get to learn about attempts at quantizing gravity, geons, wormholes, and the many worlds interpretation of quantum mechanics.

It's a fun read that gives you a sense of the personalities and the times for a big chunk of twentieth century theoretical physics, and I'm impressed with Halpern's ability to convey these things without being a professional historian.  

Tuesday, December 19, 2017

The state of science - hyperbole doesn't help.

It seems like every few weeks these days there is a breathless essay or editorial saying science is broken, or that science as a whole is in the midst of a terrible crisis, or that science is both broken and in the midst of a terrible crisis.  These articles do have a point, and I'm not trying to trivialize anything they say, but come on - get a grip.  Science, and its cousin engineering, have literally reshaped society in the last couple of hundred years.  We live in an age of miracles so ubiquitous we don't notice how miraculous they are.  More people (in absolute numbers and as a percentage of the population) are involved in some flavor of science or engineering than ever before.

That does mean that yes, there will be more problems in absolute numbers than before, too, because the practice of science and engineering is a human endeavor.  Like anything else done by humans, that means there will be a broad spectrum of personalities involved, that not everyone will agree with interpretations or ideas, that some people will make mistakes, and that occasionally some objectionable people will behave unethically.  Decisions will be made and incentives set up that may have unintended consequences (e.g., trying to boost Chinese science by rewarding high-impact papers creates a perverse incentive to cheat).  This does not imply that the entire practice of science is hopelessly flawed and riddled with rot, any more than a nonzero malpractice rate implies that all of medicine is a disaster.

Why is there such a sense of unease right now about the state of science and the research enterprise?  I'm not a sociologist, but here's my take.

Spreading information, good and bad, can happen more readily than ever before.  People look at sites like PubPeer and come away with the impression that the sky is falling, when in fact we should be happy that, for the first time ever, there exists a venue for pointing out potential problems.  We are now able to learn about flawed studies and misconduct far more effectively than even twenty years ago, and that changes perceptions.  This seems similar to the disconnect between perceived crime rates and actual crime rates.

Science is, in fact, often difficult.  People can be working with complex systems, perhaps more complicated than their models assume.   This means that sometimes there can be good (that is, legitimate) reasons why reproducing someone's results can be difficult.  Correlation doesn't equal causation; biological and social phenomena can be incredibly complex, with many underlying degrees of freedom and often only a few quantifiable parameters.  In the physical sciences we often look askance at those fields and think that we are much better, but laboratory science in physics and chemistry can be genuinely challenging.  (An example from my own career:  We were working with a collaborator whose postdoc was making some very interesting nanoparticles, and we saw exciting results with them, including features that coincided with a known property of the target material.  The postdoc went on to a faculty position and the synthesis got taken over by a senior grad student.  Even following very clear directions, it took over 6 months before the grad student's particles had the target composition and we reproduced the original results, because of some incredibly subtle issue with the synthesis procedure that had changed unintentionally and "shouldn't" have mattered.)

Hyperbolic self-promotion and reporting are bad.   Not everything is a breakthrough of cosmic significance, not every advance is transformative, and that's ok.  Acting otherwise sets scientists and engineers up for a public backlash from years of overpromising and underdelivering.   The public ends up with the perception that scientists and engineers are hucksters.  Just as bad, the public ends up with the idea that "science" is just as valid a way of looking at the world as astrology, despite the fact that science and engineering have actually resulted in technological society.  Even worse, in the US it is becoming very difficult to disentangle science from politics, again despite the fact that one is (at least in principle) a way of looking at the world and trying to determine what the rules are, while the other can be driven entirely by ideology.  This discussion of permissible vocabulary is indicative of a far graver threat to science as a means of learning about the universe than actual structural problems with science itself.  Philosophical definitions aside and practical ones to the fore, facts are real, and have meaning, and science is a way of constraining what those facts are.

We can and should do better.  Better at being rigorous, better at making sure our conclusions are justified and knowing their limits of validity, better at explaining ourselves to each other and the public, better at policing ourselves when people transgress in their scientific ethics or code of conduct.

None of these issues, however, imply that science itself as a whole is hopelessly flawed or broken, and I am concerned that by repeatedly stating that science is broken, we are giving aid and comfort to those who don't understand it and feel threatened by it.


Saturday, December 16, 2017

Finding a quantum phase transition, part 2

See here for part 1.  Recall that we had been studying electrical conduction in V5S8, a funky material that is metallic, but that on one type of vanadium site has local magnetic moments that order in a form of antiferromagnetism (AFM) below around 32 K.  We had found a surprising hysteresis in the electrical resistance as a function of applied magnetic field.  That is, at a given temperature, over some magnetic field range, the resistance takes different values depending on whether the magnitude of H is being swept up or back down.

One possibility that springs to mind when seeing hysteresis in a magnetic material is domains - the idea that the magnetic order in the material has broken up into regions, and that the hysteresis is due to the domains rearranging themselves.  What speaks against that in this case is that the hysteresis happens over the same field range whether the field is in the plane of the layered material or perpendicular to the layers.  That would be very weird for domain motion, but it makes much more sense if the hysteresis is actually the signature of a first-order metamagnetic transition, a field-driven change from one kind of magnetic order to another.  First-order phase transitions are the ones that show hysteresis, as when water is supercooled below zero Celsius.

That's also consistent with the fact that the field scale for the hysteresis starts at low fields just below the onset of antiferromagnetism and very rapidly moves to higher fields as the temperature falls and the antiferromagnetic state becomes increasingly stable.  Right at the ordering transition, when the AFM state is only barely favored over the paramagnetic state, it doesn't take much of a push to destabilize the AFM order....

There was one more clue lingering in the literature.  In 2000, a paper reported a mysterious hysteresis in the magnetization as a function of H down at 4.2 K and way out near 17-18 T.  Could this be connected to our hysteresis?  Well, in the figure here, at each temperature we plot a dot for the field at the middle of our hysteresis and a horizontal bar to show the width of the hysteresis, including data for multiple samples.  The red data point is from the magnetization data of that 2000 paper.

A couple of things are interesting here.  Notice that the magnetic field apparently required to kill the AFM state extrapolates to a finite value, around 18 T, as T goes to zero.  That means this system has a quantum phase transition (as promised in the post title).  Moreover, in our experiments we found that the hysteresis seemed to get suppressed as the crystal thickness was reduced toward the few-layer limit.  That may suggest the transition trends toward second order in thin crystals, though that would require further study.  If true, that would be interesting, since second-order quantum phase transitions are the ones that can show quantum criticality.  It would be fun to do more work on this system, looking out there at high fields and thin samples for signatures of quantum fluctuations....
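
For illustration, here is roughly how such an extrapolation can be done numerically.  Everything below is a sketch:  the data points are made up, and the power-law form is a phenomenological guess, not the analysis from our actual paper.

    import numpy as np
    from scipy.optimize import curve_fit

    def h_c(T, H0, TN, alpha):
        """Phenomenological boundary: H_c -> H0 as T -> 0, and H_c = 0 at T = TN."""
        return H0 * (1.0 - (T / TN) ** alpha)

    # hypothetical (temperature [K], hysteresis midpoint field [T]) data
    T_data = np.array([4.2, 10.0, 15.0, 20.0, 25.0, 30.0])
    H_data = np.array([17.5, 16.0, 14.0, 11.0, 7.0, 2.5])

    (H0, TN, alpha), _ = curve_fit(h_c, T_data, H_data, p0=[18.0, 32.0, 2.0])
    print(f"extrapolated H_c(T -> 0) ~ {H0:.1f} T; ordering temperature ~ {TN:.1f} K")

The point of the exercise:  if the fitted critical field remains finite as T goes to zero, the field-driven transition survives to zero temperature - the hallmark of a quantum phase transition.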

The bottom line:  There is almost certainly a lot of interesting physics to be done with magnetic materials approaching the 2d limit, and there are likely other phases and transitions lurking out there waiting to be found.

Saturday, December 09, 2017

Finding a quantum phase transition, part 1

I am going to try to get the post frequency back up now that some tasks are getting off the to-do list....

Last year, we found what seems to be a previously undiscovered quantum phase transition, and I think it's kind of a fun example of how this kind of science gets done, with a few take-away lessons for students.  The paper itself is here.

My colleague Jun Lou and I had been interested in low-dimensional materials with interesting magnetic properties for a while (back before it was cool, as the hipsters say).  The 2d materials craze continues, and a number of these are expected to have magnetic ordering of various kinds.  For example, even down to atomically thin single layers, Cr2Ge2Te6 is a ferromagnetic insulator (see here), as is CrI3 (see here).  The 2d material VS2 had been predicted to be a ferromagnet in the single-layer limit.  

In the pursuit of VS2, Prof. Lou's student Jiangtan Yuan found that the vanadium-sulphur phase diagram is rather finicky, and we ended up with a variety of crystals of V5S8 with thicknesses down to about 10 nm (a few unit cells).  

[Lesson 1:  Just because they're not the samples you want doesn't mean that they're uninteresting.]   

It turns out that V5S8 had been investigated in bulk form (that is, mm-cm sized crystals) rather heavily by several Japanese groups starting in the mid-1970s.  They discovered and figured out quite a bit.  Using typical x-ray methods they found the material's structure:  It's better to think of V5S8 as V0.25VS2.  There are VS2 layers with an ordered arrangement of vanadium atoms intercalated in the interlayer space.  By measuring electrical conduction, they found that the system as a whole is metallic.  Using neutron scattering, they showed that there are unpaired 3d electrons that are localized to those intercalated vanadium atoms, and that those local magnetic moments order antiferromagnetically below a Néel temperature of 32 K in the bulk.  The moments like to align (antialign) along a direction close to perpendicular to the VS2 layers, as shown in the top panel of the figure.  (Antiferromagnetism can be tough to detect, as it does not produce the big stray magnetic fields that we all associate with ferromagnetism.)

If a large magnetic field is applied perpendicular to the layers, the configuration with spins anti-aligned with the field becomes very energetically unfavorable.  It becomes favorable for the spins to find some way to avoid that anti-alignment while still keeping the antiferromagnetism.  The result is a spin-flop transition, in which the moments keep their antiferromagnetic order but flop down toward the plane, as in the lower panel of the figure.  What's particularly nice in this system is that this ends up producing a kink in the electrical resistance vs. magnetic field that is a clear, unambiguous signature of the spin flop, and therefore a way of spotting antiferromagnetism electrically.
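
For readers who want the energy balance in equation form, the textbook mean-field estimate (a generic two-sublattice result, not a calculation specific to V5S8) goes as follows:  with an exchange field H_E holding the sublattices antiparallel and a uniaxial anisotropy field H_A pinning them to the easy axis, the axial and flopped configurations swap stability at

    H_{\mathrm{sf}} = \sqrt{2 H_E H_A - H_A^2} \approx \sqrt{2 H_E H_A} \qquad (H_A \ll H_E),

so even a modest anisotropy, leveraged by the much larger exchange, can place the spin flop at a sizable field.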

My student Will Hardy figured out how to make reliable electrical contact to the little, thin V5S8 crystals (not a trivial task), and we found the physics described above.  However, we also stumbled on a mystery that I'll leave as a cliffhanger until the next post:  Just below the Néel temperature, we didn't just find the spin-flop kink.  Instead, we found hysteresis in the magnetoresistance over an extremely narrow temperature range, as shown here.

[Lesson 2:  New kinds of samples can make "old" materials young again.]

[Lesson 3:  Don't explore too coarsely.  We could easily have missed the entire ~2.5 K temperature window in which the hysteresis is visible within our magnetic field range.]

Tune in next time for the rest of the story....