Monday, December 29, 2014

Great elasticity demonstration

I've linked to Destin Sandlin before.  He does a fantastic series of YouTube videos that are readily accessible to a lay audience and show why science is fun.  Here is his look, using ultra-high-speed video, at why dry spaghetti tends to break into at least three pieces when flexed from the ends.  As he says, this is something that Feynman himself couldn't readily unravel.  Watch the video before you scroll down and read my spoiler description of the mechanism.








...







...






This video shows some great elasticity concepts that we generally don't teach in the undergrad physics curriculum.  Flexing the noodle puts the top in tension and the bottom in compression.  If you assume simple elasticity (e.g., stress = (Young's modulus)(strain)) and consider the resulting forces and torques on a little segment of the noodle, you can calculate the shape (e.g., vertical deflection and tilt angle as a function of position along the noodle) of the bent spaghetti, though you have to assume certain boundary conditions (what happens to the displacement and tilt at the ends).  As the flexing is increased, at some point along its length (more on this in a minute) the noodle fractures, because the local strain has exceeded the material strength of the pasta.

One way to think about this is that the boundary condition on one end of each piece of the noodle has now changed abruptly.  Each piece of noodle starts changing shape, since the previous strained configuration isn't statically stable anymore.  That shape change - the elastic information that the boundary condition has changed - propagates away from the fracture point at the speed of transverse sound in spaghetti (probably a couple of km/s).  The result is a propagating kink in the noodle, with the severity of the kink depending on the local curvature of the pre-fracture shape.  If the local strain again exceeds the critical threshold, the noodle fractures again.

The fact that we need really high speed photography to see the order of breaking shouldn't be that surprising - the time interval between fractures should be the size of the noodle fragment (around 3 cm) divided by the speed of sound in the pasta (say 2000 m/s), or around 15 microseconds!  (If I were really serious, I'd go the other way and use the video record of the propagating kink to measure the speed of transverse sound in pasta.)
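As a sanity check on that estimate, here's a minimal Python sketch using the same round numbers assumed above (3 cm fragments, 2 km/s transverse sound speed - both rough guesses, not measurements):

```python
# Rough estimate of the time between successive spaghetti fractures:
# the kink from the first break must propagate one fragment length
# before it can trigger the next break.

fragment_length = 0.03   # m, assumed ~3 cm fragment
v_transverse = 2000.0    # m/s, assumed transverse sound speed in dry pasta

dt = fragment_length / v_transverse
print(f"time between fractures ~ {dt * 1e6:.0f} microseconds")  # ~15 us
```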

This problem is actually somewhat related to another mechanics question: why do falling chimneys tend to break into three pieces?  Again, treating the chimney as some kind of elastic beam clamped at the bottom but free at the top, one can find the shape of the flexing chimney quasi-statically (because the time it takes sound to propagate through the chimney material is much shorter than the time for the chimney to fall).  There are two local maxima in the strain, and that's where the chimney tends to break.  Note that the chimney case is quasi-static, while the spaghetti case really involves the dynamics of the flexing noodle after fracture. 

The bottom line:  I want one of those cameras.

Sunday, December 21, 2014

Lack of self-awareness, thy name is John Horgan.

I see that Scientific American is reorganizing its blogging efforts.  I hope it works out well for them.  Call me if you want someone to blog about condensed matter and nanoscale science.  I'd really enjoy talking to a wider audience and would, of course, tailor my style accordingly.

When looking at their site, though, I came upon this piece by John Horgan, about whom I have written previously.  This latest essay is meant to be advice for young science writers.  Because he is a smart person with great experience in science journalism, his basic advice does have some kernels of merit (be skeptical of claims by scientists; pay attention to who is talking about science and their possible agendas).  His other points strike me as odd or beside the point to varying degrees.  (For example, scientists are people and therefore have a human context to their work, but claiming that the majority of US science is shaped by capitalism and militarism is just nutty; inequality, our screwed-up healthcare system, and militarism are all distressing, but what does that have to do with talking about a large part of science?)

The very first point that Horgan makes got my attention, though, and nearly broke my irony-meter.  He writes (his emphasis): "Most scientific claims are bogus. Researchers competing for grants, fame, glory and tenure often—indeed usually—make exaggerated or false claims, which scientific journals and other media vying for readers eagerly disseminate."  While I recognize that there have been claims to this effect in recent years, I think it is pretty hilarious that Horgan can warn about this with a straight face.  This is the guy who vaulted onto the larger, international stage by writing a book called The End of Science back in 1996.  Yeah, that wasn't at all an exaggerated or false claim made with the intent of capturing as much media attention as possible.  Nope.

Tuesday, December 16, 2014

Long odds: Proposals and how we spend our time

We just completed the two-day kickoff symposium of the Rice Center for Quantum Materials.  It was a good meeting, and the concluding panel discussion ended up spending a fair bit of time talking about the public policy challenges facing basic research funding in the US (with some discussion of industry, but largely talking about government support).  Neal Lane is an impressive resource, and lately he and Norm Augustine have been making the rounds in Washington trying to persuade people that it's a dire mistake to let basic research support continue to languish for the foreseeable future.

Over the December/January timeframe, I'm spending time on several grant proposals.  Three of them have a priori odds of success (based on past years, dividing awards by the number of initial proposals) of less than 5%.  Now, obviously long shots have their place - you can't win if you don't play, and there is no question that thinking, planning, and writing about your ideas has utility even if you don't end up getting that particular award.  Still, it seems like more and more programs are trending in this awful positive-feedback direction (low percentage chance per program = have to write more grants = larger applicant pool = lower percentage chance).  Many of these are prestigious center and group programs that are greatly desired by universities as badges of success and sources of indirect costs, and by investigators as sources of longer-term/not-single-investigator support.  When yields drift below 5%, it really does raise questions:  How should we be spending our time, the one resource we can never replenish?  Does this funding approach make sense?  When the number of potentially "conflicted" people (e.g., coauthors/collaborators over the last four years for every person affiliated with a big center grant) exceeds 1000 (!), who the heck is left to review these things that has any real expertise?
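For a sense of what those odds mean in aggregate, here's a minimal Python sketch; the 5% per-proposal figure is the round number above, and treating the outcomes as independent is itself an optimistic assumption:

```python
# Probability of landing at least one award from several long-shot
# proposals, assuming (optimistically) independent outcomes.

p_success = 0.05        # assumed a priori odds per proposal
n_proposals = 3

p_none = (1.0 - p_success) ** n_proposals
print(f"P(at least one award) = {1.0 - p_none:.1%}")  # ~14.3% for three tries
```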

Thursday, December 11, 2014

Science and sensationalism: The allure of superlatives and bogus metrics

I helped out a colleague of mine today who was fact-checking a couple of sentences in a story that's going to come out in a large-circulation magazine (which shall remain nameless).  The article is about graphene, and in draft form it included a sentence along the lines of "Graphene is 1000x better at conducting electricity than copper."  That sounds great and exciting.  It's short, simple, and easy to remember.  Unfortunately, it's just not true unless accompanied by a huge asterisk that links to a page full of qualifications and disclaimers. 

The challenge:  Come up with a replacement that gets the main point across (graphene is a remarkable material) without being a gross distortion or dissolving into scientific jargon. 

My response:  "Graphene is an electrical conductor that rivals copper and silver, and is much lighter and stronger."  At least this is true (or more so, anyway), though it's longer and doesn't have an easy-to-remember number in it. 

The search for a simple, one-sentence, exclamatory pronouncement can lead science journalists (and university public relations people) down a dangerous path.  Often really great science is simply more complicated than a sound bite can convey.  Moreover, the complications can be fascinating and important.  It takes a special journalist to recognize this.


Friday, December 05, 2014

Interesting superconductivity developments

Three superconductivity-related things during the crazy end-of-semester time crunch. 

First, the paper that I'd mentioned here has been accepted and published in Nature Materials here.  That one reports signatures of superconductivity in a single atomic layer of FeSe on SrTiO3 at around 100 K.  This result is not without controversy, since it's very hard to do standard transport measurements on single layers of material like this in UHV, and usually people want multiple signatures besides resistivity when claiming superconductivity.

In that vein, there is a recent preprint that reports superconductivity above 190 K (!) in H2S under high pressure.  The authors believe that this is conventional superconductivity, related to classic work over many years (see here and here for example) by Neil Ashcroft and others discussing superconductivity in metallic hydrogen (possibly responsible for things like Jupiter's large magnetic field).  Because of the challenges of doing ultrahigh pressure measurements in diamond anvil cells, this, too, has only resistivity apparently dropping to zero as its main evidence for superconductivity.  It looks pretty cool, and it will be interesting to see where this goes from here.

Lastly, in Nature there is a paper that looks at trying to understand recent measurements of copper oxide superconductors when hit with ultrafast laser pulses.  The argument in those pump-probe experiments is that smacking the cuprates while in the normal state is enough to produce apparent transient superconductivity (as inferred on picosecond timescales with another optical pulse used to measure a quantity related to the conductivity).  The new paper claims that the initial pulse produces lattice distortions that should favor higher temperature superconductivity.

The common thread here:  There continue to be tantalizing hints of possible higher temperature superconductors, but in all of these cases it's really darn hard to do the measurements (or at least to bring multiple tools to bear).  For a nice look at this topic, see these recent words of wisdom.

Tuesday, November 25, 2014

Writing style, "grand visions", and impact

It's hard for me to believe that over eight years (!) have passed since I wrote this.  Recently I've been thinking about this again.  When writing proposals, it's clearly important to articulate a Big Picture vision - why are you working on a problem, where does that problem fit in the scheme of things, and what would the consequences be if you achieved your goals?  Some people's writing styles tilt more in this direction (e.g., our team is smart and highly accomplished, and we have a grand vision of a world set free by room temperature superconductors - this path will lead us there) and others lean more toward the concrete (e.g., our team is smart and highly accomplished, and we've thought carefully about an important problem - here is what we are going to do in some detail, what the challenges are, and what it will mean).  I tend to lean toward the latter.  It's not that I lack a grand vision - I'd just rather underpromise and overperform.  Still, it's clear that this doesn't always pay dividends.  (Of course, the best of all possible worlds is to promise a grand vision and actually achieve it, but that's extremely rare.)

Sunday, November 16, 2014

Beautiful mechanical design, + excellent outreach

Yesterday I came across this video series, put up by "EngineerGuy" Bill Hammack.   It shows a mechanical analog computer originally designed by Michelson for building up Fourier series (sums of sinusoids) of up to twenty integer multiples of a fundamental frequency.   Moreover, you could use this machine to go backwards and mechanically perform Fourier decomposition of periodic waveforms.  It's really wonderful.  I would love to have one to use as a teaching tool, and I'm sure some enterprising person will figure out how to 3D print all the relevant parts (except the springs and cables), or perhaps build one out of Lego. 
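In the same spirit as the machine, here's a minimal Python sketch that builds up a square wave from its first twenty harmonics; the choice of waveform is mine, purely for illustration:

```python
import numpy as np

# Fourier synthesis in the spirit of Michelson's harmonic analyzer: sum
# the first twenty harmonics of a square wave (only the odd ones survive).

t = np.linspace(0.0, 2.0 * np.pi, 1000)
wave = sum((4.0 / (np.pi * n)) * np.sin(n * t)
           for n in range(1, 21) if n % 2 == 1)    # harmonics 1..20

# `wave` is already recognizably square; the machine did this same sum
# with springs, gears, and rocker arms, and could run in reverse to pull
# the coefficients back out of a measured waveform.
print(wave[250], wave[750])   # ~ +1 and -1, the two plateaus
```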

I also wanted to point out Hammack's other videos.  This is great outreach - really accessible, clear, well-produced content.

Friday, November 14, 2014

Chapter epigraphs from my book

Because of the vagaries of British copyright law and its lack of the concept of "fair use", I am not allowed to use clever little quotes to start the chapters of my nano textbook unless I have explicit permission from the quoted person or their estate.  Rather than chasing those permissions, I've sacrificed the quotes (with one exception, which I won't reveal here - you'll have to buy the book).  However, on my blog I'm free to display these quotes, so here they are.

  • "I would like to describe a field, in which little has been done, but in which an enormous amount can be done in principle. This field is not quite the same as the others in that it will not tell us much of fundamental physics (in the sense of, “What are the strange particles?”) but it is more like solid-state physics in the sense that it might tell us much of great interest about the strange phenomena that occur in complex situations. Furthermore, a point that is most important is that it would have an enormous number of technical applications. What I want to talk about is the problem of manipulating and controlling things on a small scale."  - Richard Feynman, "There's Plenty of Room at the Bottom" lecture, Engineering and Science 23, 22 (1960)
  • "More is different." - Phil Anderson, Science 177, 393 (1972).
  • "Solid state I don’t like, even though I started it." - Wolfgang Pauli, from AIP's oral history project
  • "How do we write small? ... We have no standard technique to do this now, but let me argue that it’s not as difficult as it first appears to be." - Richard Feynman, "There's Plenty of Room at the Bottom" lecture, Engineering and Science 23, 22 (1960)
  • "God made solids, but surfaces were the work of the devil." - Wolfgang Pauli,  quoted in Growth, Dissolution, and Pattern Formation in Geosystems (1999) by Bjørn Jamtveit and Paul Meakin, p. 291.
  • "The importance of the infinitely little is incalculable." - Dr. Joseph Bell, 1892 introduction to Arthur Conan Doyle's A Study in Scarlet
  • "Magnetism, as you recall from physics class, is a powerful force that causes certain items to be attracted to refrigerators." - Dave Barry, 1997
  • "If I were creating the world I wouldn’t mess about with butterflies and daffodils. I would have started with lasers, eight o’clock, Day One!" - Evil, Time Bandits
  • "Make big money!  Be a Quantum Mechanic!" - Tom Weller, Science Made Stupid (1985). 
  • "I am an old man now, and when I die and go to Heaven there are two matters on which I hope for enlightement. One is quantum electrodynamics and the other is the turbulent motion of fluids. And about the former I am rather more optimistic." - Horace Lamb, 1932 address to the British Association for the Advancement of Science, as cited in Eames, I., and J. B. Flor. "New developments in understanding interfacial processes in turbulent flows." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 369.1937 (2011): 702-705
  • "Almost all aspects of life are engineered at the molecular level, and without understanding molecules we can only have a very sketchy understanding of life itself." - Francis Crick, What Mad Pursuit: A Personal View of Scientific Discovery (1988), p. 61
  • "Be a scientist, save the world!" - Rick Smalley 

Thursday, November 06, 2014

Refereeing redux.

I was also asked:

"I would like to see a post addressing how to handle a paper sent back by the editor for another round of reviews. Of particular interest to me: what do you do if you notice errors that escaped notice (or weren't present) in the original manuscript? What if the authors answered your issues well in the response letter, but didn't include those modifications in the manuscript? What advice would you have if the authors have clearly done the experiments and theory well, and the results are worth publishing, but the writing/figures are still not at a publishable level following their revisions?"

My answers are probably what you'd guess.  I try hard to identify possible errors the first time through refereeing a paper.  If I spot something on the second round that I'd missed, I try to be clear about this by writing something like "Also, on this pass through the paper, I had the realization that [blah blah] - my apologies for not catching this on the first round, but I think it's important that this issue be addressed."  Again, I try hard not to miss things on the first pass, since I know how annoying it is from the author side to be hit with apparently new objections that could have been addressed in the first revisions.

I've definitely had cases where the authors wrote a great response and then made almost no changes to the manuscript.  In this situation, I usually say, "The response letter was very convincing/clarifying regarding these points, and I think it is important that these issues are discussed in the manuscript itself."  I would then, in the "comments to the editor" part of the report, re-emphasize this, in the hopes that the editor will push the authors about it.

If the manuscript contains good science but is written at an unpublishable level (rare, but not unheard of), I try to point this out diplomatically (e.g., "The science here is very interesting and deserving of publication, but I strongly recommend that the presentation be revamped substantially.  I think swapping the order of the figures would make the story much clearer.").  Again, to the editors, I might make more specific recommendations (e.g., "This manuscript absolutely needs to be closely edited by a native speaker of English" if it's full of truly poor grammar).

The basic strategy I follow is to try to evaluate the science and offer feedback that is as useful and constructive as possible (given that I can't spend tons of time on each refereeing assignment), in the kind of professional and courteous tone I'd like to read in reports on my own work.

Tuesday, November 04, 2014

"What happened to PRL?"

A commenter wrote the following:  "About PRL, I do have a concrete question. You've been around for some time, I am new in the business. Can you explain what happened to it? Twenty years ago it used to be the journal to publish in, now it is an afterthought."

Physical Review Letters remains a premier place to publish high impact physics results in letter-format (that is, typically 4-ish page papers with around 4 figures).   I think that the recently arrived editor in chief Pierre Meystre is working hard to revitalize PRL as a "destination journal" for physics results, where you know that the primary audience comprises physicists. 

That being said, the origins of some of PRL's (possible) loss of cachet are immediately obvious.  Twenty years ago, Nature and Science were not as highly regarded within the physics community as places to publish.  Nature Publishing Group did not have all of its various progeny (Nature Physics, Nature Materials, Nature Nanotechnology, Nature Photonics being the four most relevant here).  Likewise, the American Chemical Society's journal offerings used to be a lot less friendly to physicists.  Now there are Nano Lett., ACS Nano, ACS Photonics, ACS Applied Materials and Interfaces.   It's a far more competitive journal marketplace, and the Phys Rev journals have been slow to innovate.  Furthermore, I think there is a broad perception that PRL's refereeing can be long, arduous, contentious, and distressingly random.  Some of the competing journals somehow are able to be rapid and on-target in terms of the expertise of the referees.  If you have a hot result, and you think that refereeing at PRL is highly likely to take a long time and require a protracted fight with referees, other alternatives have room to make inroads. 

Somehow PRL needs to improve its reviewing reputation in terms of accuracy and timeliness.   That, I think, would be the best way to be more competitive.  That, and a redesign of their webpage, which is neither particularly attractive nor functional.

Sunday, November 02, 2014

What are skyrmions?

Skyrmions are described somewhat impenetrably here on Wikipedia, and they are rather difficult beasts to summarize briefly, but I'll give it a go.  There are a number of physical systems with internal degrees of freedom that can be described mathematically by some kind of vector field.  For example, in a magnetically ordered system, this could be the local magnetization, and we can imagine assigning a little vector \( \mathbf{m}\) (that you can think of as a little arrow pointing in some direction) at every point in the system.  The local orientation of the vector \( \mathbf{m} \) depends on position \(\mathbf{r}\), and there is some energy cost for having the orientation of \( \mathbf{m} \) vary between neighboring locations.   In this scenario, the lowest energy situation is to have the direction of \(\mathbf{m}\) be uniform in space.

Now, there are some configurations of \( \mathbf{m}(\mathbf{r})\) that would be energetically extremely expensive, such as having \( \mathbf{m}\) at one point be oppositely directed to that at the neighboring sites.  Relatively low energy configurations can be found by spreading out the changes in \(\mathbf{m}\) so that they are gradual with position.  Some of these are topologically equivalent to each other, but some configurations of \(\mathbf{m}\) are really topologically distinct, like a vortex pattern.  Examples of these topological excitations are shown here.   With a lone vortex, you can't trivially deform the local orientations to get rid of the vortex.  However, if you combine a vortex with an antivortex, it is possible to annihilate both.

Skyrmions (in ferromagnets) are one kind of topological excitation of a system like this.  They are topologically nontrivial "spin textures", and in real magnetic systems they can be detected through techniques such as magnetic resonance.   It's worth noting that other topological defects are possible (domain walls that can be soliton-like; defects called "cosmic strings" if they are defects in the structure of spacetime, or "line defects" if they are in nematic liquid crystals; monopoles (all arrows pointing outward from a central point), sometimes called "hedgehogs"; and other textures like the boojum, a monopole pinned to the surface of a system, relevant in liquid crystals and in superfluid \(^{3}\)He).  With regard to the last of these, I highly recommend reading this article, which further cemented David Mermin as a physics and science communication idol of mine.
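To make "topologically nontrivial spin texture" slightly more concrete, here's a minimal sketch (my own illustration, not from any of the linked work) that constructs a standard Néel-type skyrmion on a grid and evaluates its winding (skyrmion) number, which should come out near \(\pm 1\) (the sign depends on conventions):

```python
import numpy as np

# Build a Neel-type skyrmion texture m(r) on a grid and evaluate its
# topological charge Q = (1/4 pi) \int m . (dm/dx x dm/dy) dx dy.

N, L, R = 256, 10.0, 2.0               # grid points, box half-width, core radius
x = np.linspace(-L, L, N)
X, Y = np.meshgrid(x, x)               # X varies along axis 2, Y along axis 1 of m
r, phi = np.hypot(X, Y), np.arctan2(Y, X)
theta = 2.0 * np.arctan2(R, r)         # m points down at the center, up far away

m = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])          # shape (3, N, N), unit vectors everywhere

dx = x[1] - x[0]
dm_dx = np.gradient(m, dx, axis=2)
dm_dy = np.gradient(m, dx, axis=1)
density = np.einsum('ijk,ijk->jk', m, np.cross(dm_dx, dm_dy, axis=0))
Q = density.sum() * dx * dx / (4.0 * np.pi)
print(f"skyrmion number Q = {Q:+.3f}")  # near +/-1; no smooth deformation
                                        # of m can change this integer
```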

Thursday, October 23, 2014

Ask me something.

I haven't done this in a while.  Are there any particular subjects that you would like me to address, or concepts to explain?  It's a busy semester, but I can try....

Thursday, October 16, 2014

Some interesting links: Books and news

Here are some things that I wanted to share with my readership:
  • My friend Arjendu Pattanayak (founder of a very good blog) pointed out to me this book by Kittel.  It's really nice - it is very concise and tightly written without being incomplete, and it's cheap.
  • On a lighter note, Science...For Her! is a book by a friend of a friend.  The introductory video is here.  Attention Physics Today:  I volunteer to review this book when it comes out.  Seriously, I'd be happy to do it, and I think it would be great for some amount of wry humor to make its way into the pages of PT.  
  • Similarly, Randall Munroe's book What If? is magnificent.   Attention Physics Today:  I volunteer to review this one, also.  If you don't review this book, you are entirely humorless.
  • The MIT Technology Review has a fun article in it about topological quantum computing with non-Abelian anyons.  The reason it's fun is that it talks about the people involved (including my postdoctoral mentor) and manages to avoid becoming overly technical. 
  • A few people have pointed out to me that Lockheed Martin has made a rather strong press statement regarding a fusion reactor scheme being developed by Skunk Works (the folks who brought us the SR-71 and the F-117, among other things).   This is potentially interesting, but it's really hard to tell whether this is all vaporware so far.  It looks like a magnetic mirror configuration, something that has been explored extensively over the last few decades, and they don't provide enough technical discussion to figure out what they're doing that's different.  Still, there seem to be many takers trying alternatives to tokamaks (Washington, and what Nature termed "fusion upstarts"), and it's surely worth a shot.  
  • I listened in on a conference call today from Benefunder.  These people are trying to come up with an alternative philanthropic approach to research funding that isn't crowdsourcing.  Have any commenters already signed up with them?

Tuesday, October 14, 2014

Quantitatively, how amazing are modern electronics technologies and materials?

I've talked before about how condensed matter/materials physics/engineering is so ubiquitous that it somehow fades into the background, and people don't appreciate how truly wondrous it is.  I thought I'd compile a few stats to try to drive this home.
  • A typical car contains something like 30,000 discrete parts, if you count down to the smallest individual screw.  By comparison, a typical microprocessor has around (to make the numbers work out conveniently) 3 billion transistors.  That's a factor of a million more constituents.  Bear in mind that essentially all of those transistors work, without fail, for a decade or more.  (When was the last time you actually had a processor failure, rather than a power supply or hard drive issue?)   Imagine taking a million cars, and claiming that they will all run, flawlessly, with no broken parts, for a decade.  
  • Parallel manufacturing is a wonderful thing.  If you built the 3 billion transistors serially at a rate of one per second, it would take around 95 years to put together a processor.  
  • There is a famous study that proved that Kansas is actually flatter than a pancake.  Perfect flatness would correspond to their flatness metric equaling 1, and they found that Kansas has a flatness of 0.9997.  By that measure, a 300 mm silicon wafer used to fabricate chips would have a flatness on the order of \(1 - (30~\mathrm{nm}/300~\mathrm{mm}) = 1 - 10^{-7}\).  If your dining room table were that flat, the typical height of a surface defect would be well under the wavelength of visible light.  If Kansas were that flat, the tallest feature in the state would be a few cm high.
  • The maximum impurity concentration acceptable for Si electronics processing is around 0.1 parts per billion.  That means that a single impurity atom in such silicon is rarer than, well, you as a member of the population of the earth.  
  • We have the ability to position particular devices with (roughly) few nm precision and accuracy on a processor of cm scale.  That's equivalent to being able to place an item on your desk in a particular place to within about 1/50th the diameter of a human hair.
If none of this impresses you, you're pretty jaded.
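For the skeptical, here's a minimal sketch checking the arithmetic behind a couple of those claims; the Kansas width is my own round number:

```python
# Sanity-check a couple of the numbers above.

transistors = 3e9
seconds_per_year = 3600 * 24 * 365
print(f"serial assembly at 1/s: {transistors / seconds_per_year:.0f} years")  # ~95

wafer_bow = 30e-9        # m, assumed height variation across a wafer
wafer_diameter = 0.3     # m, a 300 mm wafer
print(f"wafer 'flatness': 1 - {wafer_bow / wafer_diameter:.0e}")  # 1 - 1e-07

kansas_width = 600e3     # m, rough east-west extent of Kansas (my assumption)
bump = kansas_width * wafer_bow / wafer_diameter
print(f"Kansas at wafer flatness: tallest feature ~ {bump * 100:.0f} cm")  # ~6 cm
```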

Thursday, October 09, 2014

Chapman Lecture - Paul McEuen

We were very fortunate last week to host Paul McEuen for our Chapman Lecture series (previous speakers here).  NAS member, successful novelist, director of LASSP at Cornell - typical underachiever.  The talk was tremendous fun, a look at several cool experiments going on in his lab examining the mechanical properties of graphene (which acts surprisingly like paper, and taught me about the Foppl/von Karman number, \(YL^2/\kappa\), where \(Y\) is the Young's modulus, \(L\) is a relevant length scale, and \(\kappa\) is the bending stiffness) and nanotubes.  The best part of the talk (apart from the rendition of the Cornell alma mater as played by electrically plucked carbon nanotubes) was the palpable sense of joy that he conveyed to the students in the audience.  He clearly really enjoys the playing-with-toys aspect of research!

Sunday, October 05, 2014

Annual Nobel speculation

It's that time of year again - go ahead and speculate away in the comments about possible Nobel laureates in physics or chemistry.  Natural suggestions in physics include Aharonov and Berry for geometric phases, Vera Rubin for dark matter/galaxy rotation curves, Charlie Kane and Shoucheng Zhang (and possibly Molenkamp) for topological insulators, Pendry, Smith, and Yablonovitch and John (oh dear that's four) for metamaterials and/or photonic bandgaps.

Update:  Check out Slate's article on deserving women candidates.  Dresselhaus would be a good choice.  (I'm not as big a fan of, e.g., Lisa Randall, who is extremely smart but is in the space of high energy theorists who have not yet had predictions of exotic physics verified by experiment.)

Thursday, October 02, 2014

AAAS, Science magazine, and figure permissions

Hello readers - As I'd mentioned previously, I've written a nano textbook that's going to come out next year.  I'd like to ask my readership, on the off-chance that someone has a suggested contact:  Please email me if you can suggest a good contact at AAAS/Science, with whom I could have a discussion regarding figure permission fees.  (I'd like to try talking to someone first before turning this into a major blogging topic.)  Thanks.
Update:  I've made contact with an actual person.  We will see what happens....

Monday, September 29, 2014

Penny-wise, pound-foolish: Krulwich blog

I just read that NPR is getting rid of Robert Krulwich's excellent science blog, allegedly as part of cost-cutting.  Cost-cutting?  Really?  Does anyone actually think that it costs a huge sum of money to run that blog?  Surely the most expensive part of the blog is Robert Krulwich's time, which he seems more than willing to give.  Seriously, we should find out what the costs are and run a Kickstarter project to finance it.  Come on, NPR.

Thursday, September 25, 2014

The persistent regional nature of physics

In the 21st century, with the prevalence of air travel, global near-instantaneous communications, and active cultures of well-financed scientific research on several continents, you would think that the physics enterprise would be thoroughly homogenized, at least across places with similar levels of resources.  Sure, really expensive endeavors would be localized to a few places (e.g., CERN), but the comparatively cheap subfields like condensed matter physics would be rather uniformly spread out.

Strangely, in my (anecdotal, by necessity) experience, that doesn't seem to be the case.  One area of my research, looking at electronic/optical/thermal properties of atomic and molecular-scale junctions, has a very small number of experimental practitioners in the US (I can think of a handful), though there are several more groups in Europe and Asia.  Similarly, the relevant theory community for this work, with a few notable exceptions, is largely in Europe.   This imbalance has become clear in terms of both who I talk with about this work, and where I'm asked to speak.  Interestingly, there are also strong regional tendencies in some of the citation patterns (e.g., European theorists tend to cite European experimentalists), and I'm told this is true in other areas of physics (and presumably chemistry and biology).  I'm sure this has a lot to do with proximity and familiarity - it's much more likely for me to see talks by geographically proximal people, even if it's equally easy for me to read papers from people all over the world.

Basically, physics areas of pursuit have a (surprising to me) large amount of regional specialization.  There's been a major emphasis historically on new materials growth and discovery in, e.g., Germany, China, and Japan compared to the US (though this is being rectified, in part thanks to reports like this one).  Atomic physics with cold atoms has historically been dominated by the US and Europe.   I'm sure some of these trends are the result of funding decisions by governments.   Others are due to the effect of particularly influential, talented individuals who end up having long-lasting effects because the natural timescale for change at universities is measured in decades.  It will be interesting to see whether these inhomogeneities smooth out or persist over the long term.

Tuesday, September 23, 2014

Hype, BICEP2, and all that.

It's been a few years since I've written a post slamming some piece of hype about nanoscience.  In part, I decided that this-is-hype posts all start to sound the same and therefore aren't worth writing unless the situation is truly egregious or somehow otherwise special.  In part, I also felt like I was preaching to the choir, so to speak.  That being said, I think the recent dustup over the BICEP2 experiment is worth mentioning as an object lesson.   
  • If the BICEP2 collaboration had only posted their paper on the arxiv and said that the validity of their interpretation depended on further checks of the background by, e.g., the PLANCK collaboration, no one would have batted an eye.  They could have said that they were excited but cautious, and that, too, would have been fine.  
  • Where they (in my view) crossed the line is when they orchestrated a major media extravaganza around their results, including showing up at Andrei Linde's house and filming his reaction on being told about the data.  Sure, they were excited, but it seems pretty clear that they went well beyond the norm in terms of trying to whip up attention and recognition.
  • While not catastrophic for science or anything hyperbolic like that by itself, this is just another of the death-by-1000-cuts events that erodes public confidence in science.  "Why believe what scientists say?  They drum up attention all the time, and then turn out to be wrong!  That's why low fat diets were good for me before they were bad for me!"
  • Bottom line:  If you are thinking of staging a press conference and a big announcement before your paper has even been sent out to referees, please do us all a favor and think again. 

Thursday, September 18, 2014

When freshman physics models fail

When we teach basic ac circuits in second-semester freshman physics, or for that matter in intro to electrical engineering, we introduce the idea of an impedance, \(Z\), so that we can make ac circuit problems look like a generalization of Ohm's law.  For dc currents, we say that \(V = I R\): the voltage dropped across a resistor is linearly proportional to the current.  For reactive circuit elements and ac currents, we use complex numbers to keep track of phase shifts between the current and voltage.  Calling \(j \equiv \sqrt{-1}\), we assume that the ac current has a time dependence \(\exp(j \omega t)\).  Then we can say that the impedance \(Z\) of an inductor is \(j \omega L\), and write \(V = Z I\) for the case of an ac voltage across the inductor.

Where does that come from, though?  Well, it's really Faraday's law.  The magnetic flux through an inductor carrying current \(I\) is given by \(\Phi = LI\).  Up to a sign convention, the voltage between the ends of such a coil is given by \(d\Phi/dt = L (dI/dt) + (dL/dt) I\), and in an ordinary inductor, \(dL/dt\) is simply zero.  But not always!

Last fall and into the spring, two undergrads in my lab (aided by two grad students) were doing some measurements of inductors filled with vanadium dioxide powder, a material that goes through a sharp first-order phase transition at about 65 \(^{\circ}\)C from a low temperature insulator to a high temperature poor metal.  At the transition, there is also a change in the magnetic susceptibility of the material.  What I rather expected to see was a step-like change in the inductive response going across the transition, and an accompanying step-like change in the loss (due to resistive heating in the metal).  Both of these effects should be small (just at the edge of detection in our scheme).  Instead, the students found something very different - a big peak in the lossy response on warming, and an accompanying dip in the lossy response on cooling.  We stared at this data for weeks, and I asked them to run a whole variety of checks and control experiments to make sure we didn't have something wrong with the setup.  We also found that if we held the temperature fixed in the middle of the peak/dip, the response would drop off to what you'd expect in the absence of any peak/dip.   No, this was clearly a real effect, requiring a time-varying temperature to be apparent, and eventually it dawned on me what was going on:  we were seeing the other contribution to \(d\Phi/dt\)!  As each grain flicks into the new phase, it makes a nearly singular contribution to \(dL/dt\) because the transition for each grain is so rapid.

This is analogous to the Barkhausen effect, where a pickup coil wrapped around a piece of, e.g., iron and wired into speakers produces pops and crackling sounds as an external magnetic field is swept.  In the Barkhausen case, individual magnetic domains reorient or domain walls propagate suddenly, also giving a big \(d\Phi/dt\).  In our version, temperature is causing sudden changes in susceptibility, but it's the same basic idea.
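Here's a minimal toy-model sketch of the inductor version; every parameter is invented for illustration, and the smooth step in \(L(T)\) stands in for the much sharper per-grain transitions:

```python
import numpy as np

# Toy model of V = L dI/dt + (dL/dt) I with an inductance L(T) that steps
# as the temperature ramps through a phase transition near 65 C.
# Every number here is made up for illustration.

t = np.linspace(0.0, 1.0, 200000)                  # s
omega = 2.0 * np.pi * 1.0e4                        # 10 kHz drive
I = 1.0e-3 * np.sin(omega * t)                     # A
T = 40.0 + 50.0 * t                                # C, ramp from 40 to 90
L = 1.0e-3 * (1.0 + 0.01 / (1.0 + np.exp(-(T - 65.0) / 0.5)))  # H, ~1% step

dt = t[1] - t[0]
V = L * np.gradient(I, dt) + np.gradient(L, dt) * I

# The (dL/dt) I piece is nonzero only while T sweeps through the transition;
# hold the temperature fixed and it vanishes, just as in the experiment.
extra = np.gradient(L, dt) * I
print(f"max |(dL/dt) I| = {np.abs(extra).max():.2e} V, near T ~ 65 C")
```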

This was great fun to figure out, and I really enjoy that it shows how the simple model of the impedance of an inductor can fail dramatically if the material in the coil does interesting things.  The paper is available here.


Monday, September 15, 2014

What is a "bad metal"? What is a "strange metal"?

Way back in the mists of time, I wrote about what physicists mean when they say that some material is a metal.  In brief, a metal is a material that has an electrical resistivity that decreases with decreasing temperature, and in bulk has low energy excitations of the electron system down to arbitrarily low energies (no energy gap in the spectrum).  In a conventional or good metal, it makes sense to think about the electrons in terms of a classical picture often called the Drude model, or a semiclassical (more quantum mechanical) picture called the Sommerfeld model.  In the former, you can think of the electrons as a gas, with the idea that the electrons travel some typical distance scale, \(\ell\), the mean free path, between scattering events that randomize the direction of the electron motion.  In the latter, you can think of a typical electronic state as a plane-wave-like object with some characteristic wavelength (of the highest occupied state) \(\lambda_{\mathrm{F}}\) that propagates effortlessly through the lattice, until it comes to a defect (a break in the lattice symmetry) that causes it to scatter.  In a good metal, \(\ell \gg \lambda_{\mathrm{F}}\), or equivalently \( (2\pi/\lambda_{\mathrm{F}})\ell \gg 1\).  Electrons propagate many wavelengths between scattering events.  Moreover, it also follows (given how many valence electrons come from each atom in the lattice) that \(\ell \gg a\), where \(a\) is the lattice constant, the atomic-scale distance between adjacent atoms.

Another property of a conventional metal:  At low temperatures, the temperature-dependent part of the resistivity is dominated by electron-electron scattering, which in turn is limited by the number of empty electronic states that are accessible (i.e., not already filled and thus forbidden as final states by the Pauli principle).    The number of excited electrons (which in a conventional metal, called a Fermi liquid, act roughly like ordinary electrons, with charge \(-e\) and spin 1/2) is proportional to \(T\), and therefore the number of empty states available at low energies as "targets" for scattering is also proportional to \(T\), leading to a temperature-varying contribution to the resistivity proportional to \(T^{2}\).

A bad metal is one in which some or all of these assumptions fail, empirically.  That is, a bad metal has gapless excitations, but if you analyze its electrical properties and try to model them conventionally, you might find that the \(\ell\) you infer from the data is small compared to a lattice spacing.   This is called violating the Ioffe-Mott-Regel limit, and it can happen in metals like rutile VO2 or LaSrCuO4 at high temperatures.
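To make that kind of analysis concrete, here's a minimal sketch that backs out the mean free path and \(k_{\mathrm{F}}\ell\) from a resistivity using the Drude/Sommerfeld expressions; the carrier density and resistivity are round numbers I've assumed, not data for any particular material:

```python
import numpy as np

# Drude/Sommerfeld estimate of the mean free path and k_F * l from a
# measured resistivity.  Assumed round numbers, not real data.

hbar = 1.0546e-34   # J s
e = 1.602e-19       # C
m = 9.109e-31       # kg, free-electron mass

n = 1e28            # carriers/m^3 (assumed; roughly 10x below copper)
rho = 5e-6          # ohm m (assumed "bad metal" scale, 500 microohm cm)

kF = (3.0 * np.pi**2 * n) ** (1.0 / 3.0)   # Fermi wavevector
tau = m / (n * e**2 * rho)                 # Drude scattering time
ell = (hbar * kF / m) * tau                # mean free path l = v_F * tau

print(f"l = {ell * 1e9:.2f} nm, k_F l = {kF * ell:.1f}")
# Here l ~ 0.5 nm, comparable to a lattice spacing, and k_F l is only a
# few: approaching the Ioffe-Mott-Regel limit, where the quasiparticle
# picture stops making sense.
```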

A strange metal is a more specific term.  In a variety of systems, instead of having the resistivity scale like \(T^{2}\) at low temperatures, the resistivity scales like \(T\).  This happens in the copper oxide superconductors near optimal doping.  This happens in the related ruthenium oxides.  This happens in some heavy fermion metals right in the "quantum critical" regime.  This happens in some of the iron pnictide superconductors.  In some of these materials, when a technique like photoemission is applied, instead of finding ordinary electron-like quasiparticles, one detects a big, smeared-out "incoherent" signal.  The idea is that in these systems there are no well-defined (in the sense of long-lived) electron-like quasiparticles, and these systems are not Fermi liquids.

There are many open questions remaining - what is the best way to think about such systems?  If an electron is injected from a boring metal into one of these, does it "fractionalize", in the sense of producing a huge number of complicated many-body excitations of the strange metal?  Are all strange metals the same deep down?  Can one really connect these systems with quantum gravity?  Fun stuff.

Saturday, September 06, 2014

What is the Casimir effect?

This is another in an occasional series of posts where I try to explain some physical phenomena and concepts in a comparatively accessible way.  I'm going to try hard to lean toward a lay audience here, with the very real possibility that this will fail.

You may have heard of the Casimir effect, or the Casimir force - it's usually presented in language that refers to "quantum fluctuations of the electromagnetic field", and phrases like "zero point energy" waft around.  The traditional idea is that two electrically neutral, perfectly conducting plates, parallel to each other, will experience an attractive force per unit area given by \( \hbar c \pi^{2}/(240 a^{4})\), where \(a \) is the distance between the plates.  For realistic conductors (and even dielectrics) it is possible to derive analogous expressions.  For a recent, serious scientific review, see here (though I think it's behind a paywall).
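To put a number on that expression, here's a minimal sketch evaluating the ideal-plate result; the separations are just representative choices:

```python
import numpy as np

# Casimir pressure between ideal parallel plates: P = hbar c pi^2 / (240 a^4)

hbar = 1.0546e-34   # J s
c = 2.998e8         # m/s

for a in (100e-9, 1e-6):            # plate separations: 100 nm and 1 micron
    P = hbar * c * np.pi**2 / (240.0 * a**4)
    print(f"a = {a * 1e9:6.0f} nm: P = {P:.3g} Pa")
# ~13 Pa at 100 nm, falling as 1/a^4 to ~1.3 mPa at 1 micron.
```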

To get some sense of where these forces come from, we need to think about van der Waals forces.  It turns out that there is an attractive force between neutral atoms - say helium atoms, for simplicity.  We are taught to think about the electrons in helium as "looking" like puffy, spherical clouds - that's one way to visualize the electron's quantum wave function, related to the probability of finding the electron in a given spot if you decided to look through some experimental means.  If you imagine using some scattering experiment to "take a snapshot" of the helium atom, you'd find the two electrons located at particular locations, probably away from the nucleus.  In that sense, the helium atom would have an "instantaneous electric dipole moment".  To use an analogy with magnetic dipoles, imagine that there are little bar magnets pointing from the nucleus to each electron.  The influence (electric field in the real atom; magnetic field in the bar magnet analogy) of those dipoles drops off with distance like \(1/r^{3}\).  Now, if there were a second nearby atom, its electrons would experience the fields from the first atom.  This would tend to influence its own dipole (in the magnet analogy, instead of the bar magnets pointing on average in all directions, they would tend to align with the field from the first atom, rather like how a compass needle is influenced by a nearby bar magnet).   The result is an attraction, with an interaction energy proportional to \(1/r^{6}\).

In this description, we ignored that it takes time for the fields from the first atom to propagate to the second atom.  This is called retardation, and it's one key difference between the van der Waals interaction (when retardation is basically assumed to be unimportant) and so-called Casimir-Polder forces.   

Now we can ask, what about having more than two atoms?  What happens to the forces then?  Is it enough just to think of them as a bunch of pairs and add up the contributions?  The short answer is, no, you can't just think about pair-wise interactions (interference effects and retardation make it necessary to treat extended objects carefully).

What about exotic quantum vacuum fluctuations, you might ask.  Well, in some sense, you can think about those fluctuations and interactions with them as helping to set the randomized flipping dipole orientations in the first place, though that's not necessary.  It has been shown that you can do full, relativistic, retarded calculations of these fluctuating dipole effects, and you can reproduce the Casimir results (with greater generality) without saying much of anything about zero point stuff.  That is why, while it is fun to speculate about zero point energy and so forth (see here for an entertaining and informative article - again, sorry about the paywall), there really doesn't seem to be any way to get net energy "out of the vacuum".

Thursday, August 28, 2014

Two cool papers on the arxiv

The beginning of the semester is a crazy time, so blogging is a little light right now.  Still, here are a couple of recent papers from the arxiv that struck my fancy.

arxiv:1408.4831 - "Self-replicating cracks:  A collaborative fracture mode in thin films," by Marthelot et al.
This is very cool classical physics.  In thin, brittle films moderately adhering to a substrate, there can be a competition between the stresses involved with crack propagation and the stresses involved with delamination of the film.  The result can be very pretty pattern formation and impressively rich behavior.  A side note:  All cracks are really nanoscale phenomena - the actual breaking of bonds at the tip of the propagating crack is firmly in the nano regime.

arxiv:1408.6496 - "Non-equilibrium probing of two-level charge fluctuators using the step response of a single electron transistor," by Pourkabirian et al.
I've written previously (wow, I've been blogging a while) about "two-level systems", the local dynamic degrees of freedom that are ubiquitous in disordered materials.  These little fluctuators have a statistically broad distribution of level asymmetries and tunneling times.  As a result, when perturbed, the ensemble of these TLSs responds not with a simple exponential decay (as would a system with a single characteristic time scale).  Instead, the TLS ensemble leads to a decaying response that is logarithmic in time.  For my PhD I studied such (agonizingly) slow relaxations in the dielectric response and acoustic response of glasses (like SiO2) at cryogenic temperatures.   Here, the authors use the incredible charge sensitivity of a single-electron transistor (SET) to look at the relaxation of the local charge environment near such disordered dielectrics.  The TLSs often have electric dipole moments, so their relaxation changes the local electrostatic potential near the SET. Guess what:  logarithmic relaxations.  Cute, and brings back memories of loooooong experiments from grad school.
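The logarithmic response falls right out of that broad distribution; here's a minimal sketch (the rate window is my own arbitrary choice) showing that exponential relaxers spread uniformly in log-rate add up to a log-in-time decay:

```python
import numpy as np

# An ensemble of independent exponential relaxers, with rates spread
# uniformly in log (the standard TLS assumption), relaxes logarithmically
# in time over the window set by the fastest and slowest rates.

rng = np.random.default_rng(0)
rates = 10.0 ** rng.uniform(-4, 4, 20000)          # s^-1, eight decades

t = np.logspace(-3, 3, 25)                         # s, observation times
response = np.array([np.exp(-rates * ti).mean() for ti in t])

# Within the window, response ~ a - b * log10(t): a straight line on a
# semilog plot, like the slow relaxations seen in glasses.
slope, intercept = np.polyfit(np.log10(t), response, 1)
print(f"response drops ~{-slope:.3f} per decade of time")  # ~1/8 here
```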

Wednesday, August 20, 2014

Science and engineering research infrastructure - quo vadis?

I've returned from the NSF's workshop regarding the successor program to the NNIN.  While there, I learned a few interesting things, and I want to point out a serious issue facing science and engineering education and research (at least in the US).
  • The NNIN has been (since 2010) essentially level-funded at $16M/yr for the whole program, and there are no indications that this will change in the foreseeable future.  (Inflation erodes the value of that sum as well over time.)  The NNIN serves approximately 6000 users per year (with turnover of about 2200 users/yr).  For perspective, a truly cutting edge transmission electron microscope, one instrument, costs about $8M.  The idea that the NNIN program can directly create bleeding edge shared research hardware across the nation is misguided.
  • For comparison, the US DOE has five nano centers.  The typical budget for each one is about $20M/yr.  Each nano center can handle around 450 users/yr.  Note that these nano centers are very different things from NNIN sites - they do not charge user fees, and they are co-located with some truly unique characterization facilities (synchrotrons, neutron sources).  Still, the DOE is spending seventeen times as much per user per year in their program as the NNIN (see the arithmetic sketch after this list).
  • Even the DOE, with their much larger investment, doesn't really know how to handle "recapitalization".  That is, there was money available to buy cutting edge tools to set up their centers initially, but there is no clear, sustainable financial path to be able to replace aging instrumentation.  This is exactly the same problem faced by essentially every research university in the US.  Welcome to the party.  
  • Along those lines:  As far as I can tell (and please correct me if I'm wrong about this!), every US federal granting program intended to increase shared research infrastructure at universities (this includes the NSF MRI program, MRSEC, STC, ERC, CCI; DOE instrumentation grants; DOE centers like EFRCs; DOD equipment programs like DURIPs) is either level-funded or facing declining funding levels.  Programs like these often favor acquisition of new, unusual tools over standard "bread-and-butter" instruments as well.  Universities are going to have to rely increasingly on internal investment to acquire and replace instrumentation.  Given that there is already considerable resentment/concern about perceived stratification of research universities into "haves" and "have-nots", it's hard to see how this is going to get much better any time soon.
  • To potential donors who are really interested in the problem of graduate (and advanced undergrad) science and engineering hands-on education:  PLEASE consider this situation.  A consortium of donors who raised, say, $300M in an endowment could support the equivalent of the NNIN on the investment returns for decades to come.  This can have an impact on thousands of students/postdocs per year, for years at a time.  The idea that this is something of a return to the medieval system of rich patrons supporting the sciences is distressing.  However, given the constraints of government finances and the enormous sums of money out there in the hands of some brilliant, tech-savvy people who appreciate the importance of an educated workforce, I hope someone will take this possibility seriously.  To put this in further perspective:  I heard on the radio yesterday that the college athletics complex being built at Texas A&M University costs $400M.  Think about that.  A university athletic booster organization was able to raise that kind of money for something as narrowly focused (sorry, Aggies, but you know it's true). 
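Here's the arithmetic sketch promised above; the 5% endowment payout is my assumption:

```python
# Per-user arithmetic behind the bullets above, plus the endowment estimate.

nnin_budget, nnin_users = 16e6, 6000        # $/yr, users/yr
doe_budget, doe_users = 20e6, 450           # $/yr per center, users/yr per center

nnin_per_user = nnin_budget / nnin_users
doe_per_user = doe_budget / doe_users
print(f"NNIN: ${nnin_per_user:,.0f}/user/yr; DOE: ${doe_per_user:,.0f}/user/yr; "
      f"ratio ~ {doe_per_user / nnin_per_user:.0f}x")          # ~17x

endowment, payout = 300e6, 0.05             # assumed 5% annual payout
print(f"${endowment / 1e6:.0f}M endowment at {payout:.0%} -> "
      f"${endowment * payout / 1e6:.0f}M/yr, about one NNIN")
```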

Sunday, August 17, 2014

Distinguishable from magic?

Arthur C. Clarke's most famous epigram is that "Any sufficiently advanced technology is indistinguishable from magic."  A question that I've heard debated in recent years is, have we gone far enough down that road that it's adversely affecting the science and engineering education pipeline?  There was a time when young people interested in technology could rip things apart and actually get a moderately good sense of how those gadgets worked.  This learning-through-disassembly approach is still encouraged, but the scope is much more limited. 

For example, when I was a kid (back in the dim mists of time known as the 1970s and early 80s), I ripped apart transistor radios and at least one old, busted TV.  Inside the radios, I saw how the AM tuner worked by sliding a metal contact along a wire solenoid - I learned later that this was tuning an inductor-capacitor resonator, and that the then-mysterious diodes in there (the only parts on the circuit board with some kind of polarity stamped on them, aside from the electrolytic capacitors on the power supply side) somehow were important at getting the signal out.  Inside the TV, I saw that there was a whopping big transformer, some electromagnets, and that the screen was actually the front face of a big (13 inch diagonal!) vacuum tube.  My dad explained to me that the electromagnets helped raster an electron beam back and forth in there, which smacked on phosphors on the inside of the screen.  Putting a big permanent magnet up against the front of a screen distorted the picture and warped the colors in a cool way that depended strongly on the distance between the magnet and the screen, and on the magnet's orientation, thanks to the magnet screwing with the electron beam's trajectory. 

Now, a kid opening up an iPod or little portable radio will find undocumented integrated circuits that do the digital tuning.  Flat screen LCD TVs are also much more black-box-like (though the light source is obvious), again containing lots of integrated circuits.  Touch screens, the accelerometers that determine which way to orient the image on a cell phone's screen, the chip that actually takes the pictures in a cell phone camera - all of these seem almost magical, and they are either packaged monolithically (and inscrutably), or all the really cool bits are too small to see without a high-power optical microscope.  Even automobiles are harder to figure out, with lots of sensors, solid-state electronics, and an architecture that often actively hampers investigation. 

I fully realize that I'm verging on sounding like a grumpy old man with an onion on his belt (non-US readers: see transcript here).  Still, the fact that understanding of everyday technology is becoming increasingly inaccessible, disconnected from common sense and daily experience, does seem like a cause for concern.  Chemistry sets, electronics kits, Arduinos and Raspberry Pis - these are all ways to fight this trend, and their use should be encouraged!

Tuesday, August 12, 2014

Some quick cool science links

Here are a few neat things that have cropped up recently:
  • The New Horizons spacecraft is finally getting close enough to Pluto to be able to image Pluto and Charon orbiting about their common (approximate, because of other moons) center of mass.
  • The Moore Foundation announced the awardees in the materials synthesis component of their big program titled Emergent Phenomena in Quantum Systems.  Congratulations all around.  
  • Here's a shock:  congressmen in the pockets of the United Launch Alliance don't like SpaceX.
  • Cute toy.
  • The Fields Medal finally goes to a woman, Maryam Mirzakhani.  Also getting a share, Manjul Bhargava, who gave the single clearest math talk I've ever seen, using only a blank transparency and a felt-tip pen.

Saturday, August 09, 2014

Nanotubes by design

There is a paper in this week's issue of Nature (with an accompanying news commentary by my colleague Jim Tour) in which the authors appear to have solved a major, two-decade-plus challenge: growing single-walled carbon nanotubes of a specific type.   For a general audience:  You can imagine rolling up a single graphene sheet and joining the edges to make a cylinder.  There are many different ways to do this.  The issue is, different ways of rolling up the sheet lead to different electronic properties, and the energetic differences between the different tube types are very small.  When people try to grow nanotubes by any number of methods, they tend to end up with a bunch of tube types of similar diameters, rather than just the one they want.

The authors of this new paper have taken an approach that has great visual appeal.  They have used synthetic chemistry to make a planar hydrocarbon molecule that looks like they've taken the geodesic hemisphere end-cap of their desired tube and cut it to lay flat - like making a funky projection to create a flat map of a globe.  When placed on a catalytically active Pt surface at elevated temperatures, this molecular seed can fold up into an endcap and start growing as a nanotube.  The authors show Raman spectroscopic evidence that they produce only the desired tube type (in this case, a metallic nanotube).  The picture is nice, and the authors imply that they could do this for other desired tube types.  It's not clear whether this is scalable to large volumes, but it's certainly encouraging.

This is very cute.  People in the nanotube game have been trying to do selective synthesis for twenty years.  Earlier this summer, a competing group showed progress in this direction using nanoparticle seeds, an approach pursued by many over the years with limited success.  It will be fun to see where this goes.  This is a good example of how long it can take to solve some materials problems.

Monday, August 04, 2014

Does being a physicist ruin science fiction for me? Generally, no.

For the past few years, as I've been teaching honors freshman mechanics, I've tried to work in at least one homework problem based on a popular sci-fi movie.  Broadening that definition to include the Marvel Cinematic Universe, I've done Iron Man, Captain America, and the Avengers.  Yesterday I saw Guardians of the Galaxy, and I've already got a problem in mind.

I've been asked before, does being a physicist just ruin science fiction books and movies for me?  Does bad physics in movies or sci-fi books annoy me since I can't not see it?  Generally, the answer is "no".  I don't expect Star Trek or Star Wars to be a documentary, and I completely understand why bending physics rules can make a story more fun.  Iron Man would be a lot less entertaining if Tony Stark couldn't build an arc reactor with enough storage capacity and power density to fly long distances.  Trips through outer space that require years of narrative time just to get to Jupiter are less fun than superluminal travel.  If anything, I think well-done science fiction can be creatively inspiring.

One thing that does bug me is internally inconsistent bad physics or bad science.  For example, in the book Prey by Michael Crichton, it's established early on that any tiny breach in a window, etc., is enough for the malevolent nanocritters to get in, yet at the climax of the book the author miraculously forgets this (because if he'd remembered it, the protagonist would've died).  Another thing that gets me is trivially avoidable science mistakes.  For example, in a Star Trek: TNG episode (I looked it up - it was this one), they quote a surface temperature below absolute zero.  I'm happy to serve as a Hollywood science advisor to avoid these problems :-)

Monday, July 28, 2014

A book, + NNIN

Sorry for the posting drought.  There is a good reason:  I'm in the final stages of a textbook based on courses I developed about nanostructures and nanotechnology.  It's been an embarrassingly long time in the making, but I'm finally at the index-plus-final-touches stage.  I'll say more when it's off to the publisher.

One other thing:  In three weeks I'm going to a 1.5-day workshop at NSF about the next steps for the NNIN.  I've been given copies of the feedback that NSF received during their request-for-comment period, but if you have additional opinions or information that you'd like aired there, please let me know, either in the comments or via email.

Monday, July 14, 2014

My Nerd Nite talk - video

I mentioned back in February that I'd had the chance to speak at Nerd Nite Houston (facebook link - it's updated more frequently than the website).  It was a blast, and I encourage people in the area to check it out on the last Thursday of each month; the location is announced on the page, though so far they've all been at Notsuoh.

Thanks to the fantastic videographic efforts of Jon Martensen, the video of my talk is now available on YouTube here.  The talk is about 20 minutes, and the rest is the audience Q&A.  All in all, a very fun experience - thanks again to Amado Guloy and the rest of the Nerd Nite folks for giving me the opportunity.

Sunday, July 13, 2014

Interesting links: peer review, falsifiability

Slow blogging - I've got the usual papers plus work on finishing a really big writing project (more about that soon), combined w/ summer travel.  The posting rate will pick up again in another week and a half.  In the meantime, here are a few interesting links from the last couple of weeks.
  • A thoroughly dishonest scientist (and, I guess, a couple of other people) was exposed as running an awful peer review scam.  More about this here.  The scam involved creating fake email addresses and identities so that people were essentially reviewing their own and friends' papers.  The worst thing about this whole mess is that it gives ammunition to the anti-science crowd who are convinced that scientific research is a corrupt enterprise - people like the person I wrote about here.
  • Peter Woit has written an interesting review of a book about string theory and whether the scientific method needs to be revised to deal with "post-empirical" theory verification, whatever that means.  I haven't read the book, but the idea of post-empiricism seems pretty sketchy to me.
  • Natalie Wolchover has written an article about some fluid droplet experiments that show quantum-like behavior (interference-fringe-like distributions, for example).  The physics here is that the droplets interact with associated surface waves of an underlying fluid, and the mechanics of those waves self-consistently guides the droplets.  This is similar in spirit to Bohm's ideas about pilot waves as a way of thinking about quantum mechanics.  The authors of the fluid paper are clearly high on this idea.  These are very cool experiments, but it's a huge stretch to say that they should motivate re-thinking our interpretations of quantum mechanics.

Friday, July 04, 2014

An expression of concern about an expression of concern

There has been a big kerfuffle about Facebook conducting a mass social psychology experiment.  At heart is the issue of informed consent.  By clicking "ok" on a vaguely worded license agreement, did users really give true informed consent to participate in experiments designed to manipulate them?  The study was published in the Proceedings of the National Academy of Sciences here.  Now, in hindsight, PNAS has published an "Expression of Concern" here about whether the study was in compliance with the Common Rule regarding informed consent by human subjects.  The PNAS editors point out that, as a privately funded, for-profit corporation not taking federal funding for this work, Facebook isn't technically bound by this constraint.

This is technically correct (the best kind of correct), but doesn't this have frightening implications?  Does this mean that private companies are free to perform experiments on human subjects without asking for informed consent, so long as they don't violate obvious laws (against, say, killing people)?  Seems like there must be some statutes out there about human experimentation, right?  Perhaps one of my readers knows this issue....

Monday, June 30, 2014

What are universal conductance fluctuations?

Another realization I had at the Gordon Conference:  there are plenty of younger people in condensed matter physics who have never heard about some mesoscopic physics topics.   Presumably those topics are now in that awkward purgatory of being so established that they're "boring" from the research standpoint, but they are beyond what is taught in standard solid state physics classes (i.e., they're not in Ashcroft and Mermin or Kittel).  Here is my attempt to talk at a reasonably popular level about one of these, so-called "Universal Conductance Fluctuations" (UCF).

In physics parlance, sometimes it can be very useful to think about electrons in solids as semiclassical, a kind of middle ground between picturing them as little classical specks whizzing around and visualizing them as fuzzy, entirely wavelike quantum states.  In the semiclassical picture, you can think of the electrons as following particular trajectories, and still keep in mind their wavelike aspect by saying that the particles rack up phase as they propagate along.  In a typical metal like gold or copper, the effective wavelength of the electrons is the Fermi wavelength, \(\lambda_{\mathrm{F}} \sim 0.1\) nm.  That means that an electron propagating one Fermi wavelength changes its quantum phase by about \(2 \pi\).  In a relatively "clean" metal, electrons propagate over long distances - many Fermi wavelengths - before scattering.  At low temperatures, that scattering is mostly from disorder (grain boundaries, vacancies, impurities).

The point of keeping track of the quantum phase \(\phi\) is that this is how we find probabilities for quantum processes.  In quantum mechanics, if there are two paths to do something, with (complex) amplitudes \(A_{1}\) and \(A_{2}\), the probability of that something is \(|A_{1} + A_{2}|^{2}\), which is different from just adding the probabilities of each path, \(|A_{1}|^{2}\) and \(|A_{2}|^{2}\).  For a propagating electron, we can assign each trajectory an amplitude that includes the phase.  We add up the (complex) amplitudes for all the possible trajectories, and then take the (magnitude) square of the sum.  The cross terms are what give quantum interference effects, such as the wavy diffraction pattern in the famous two-slit experiment.  This is how Feynman describes interference in his great little book, QED.
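
(If numbers help, here's a trivial illustration of that cross term - my own toy example, not anything from the book:)

    # Two-path interference: summing amplitudes vs. summing probabilities.
    # The difference is the cross term 2*Re[A1 * conj(A2)].
    import numpy as np

    A1 = 1 / np.sqrt(2)                      # amplitude for path 1
    for phi in (0, np.pi/2, np.pi):          # relative phase of path 2
        A2 = np.exp(1j * phi) / np.sqrt(2)
        coherent = abs(A1 + A2)**2           # quantum: add amplitudes
        classical = abs(A1)**2 + abs(A2)**2  # classical: add probabilities
        print(f"phi = {phi:.2f}: |A1+A2|^2 = {coherent:.2f}, "
              f"|A1|^2 + |A2|^2 = {classical:.2f}")

The classical sum is always 1/2 + 1/2 = 1, while the coherent sum swings between 2 and 0 as the relative phase varies - that swing is the interference.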

Electronic conduction in a disordered metal then becomes a quantum interference experiment.  An electron can bounce off various impurities or defects in different sequences, with each trajectory having some phase.  The exact phases are set by the details of the disorder, so while they differ from sample to sample, they are the same within a given sample as long as the disorder doesn't change.  The conductance is then something like a speckle pattern, and the typical scale of that speckle is a change in the conductance \(G\) of something like \(\delta G \sim e^{2}/h\).  Note that inelastic processes can change the electronic wavelength (by altering the electron energy and hence the magnitude of its momentum) and also randomize the phase - these "dephasing" effects mean that on length scales large compared to some coherence length \(L_{\phi}\), it doesn't make sense to worry about quantum interference.

Now, anything that alters the relative phases of the different trajectories will lead to fluctuations in the conductance on that scale (within a coherent region).  A magnetic field can do this, because the amount of phase racked up by propagating electrons depends not just on their wavelength (basically their momentum), but also on the vector potential, a funny quantity discussed further here.  So, ramping a magnetic field through a (weakly disordered) metal (at low temperatures) can generate sample-specific, random-looking but reproducible, fluctuations in the conductance on the order of \(e^{2}/h\).  These are the UCF. 
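
(Here's a crude numerical cartoon of that sample-specific reproducibility - emphatically a toy of my own, not a real transport calculation.  Freeze a set of random "disorder" phases, give each trajectory an additional Aharonov-Bohm-like phase \(2\pi B A_{i}/\Phi_{0}\) from a random enclosed area \(A_{i}\), and watch the interference sum as \(B\) is ramped:)

    # Toy cartoon of UCF: fixed random "disorder" phases plus a
    # field-dependent Aharonov-Bohm-like phase for each trajectory.
    # Not a real transport calculation - just the interference idea.
    import numpy as np

    rng = np.random.default_rng(42)           # fixed seed = fixed disorder
    n_traj = 200
    phi0 = 4.136e-15                          # flux quantum h/e, in T m^2
    areas = rng.uniform(-1, 1, n_traj) * (100e-9)**2   # enclosed areas, m^2
    static = rng.uniform(0, 2*np.pi, n_traj)  # sample-specific phases

    def g(B):
        """Interference sum standing in for G, roughly in units of e^2/h."""
        amps = np.exp(1j * (static + 2*np.pi * B * areas / phi0))
        return abs(amps.sum())**2 / n_traj

    fields = np.linspace(0, 0.5, 6)           # magnetic field, tesla
    print([round(g(B), 2) for B in fields])
    print([round(g(B), 2) for B in fields])   # identical list: reproducible

Rerun with a different seed and you get a different, equally reproducible, random-looking "magnetofingerprint" - and the fluctuations are of order one in these units, i.e., \(\delta G \sim e^{2}/h\).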

By looking at the UCF (their variation with magnetic field, temperature, gate voltage in a semiconductor, etc.), one can infer \(L_{\phi}\), for example.  These kinds of experiments were all the rage in ordinary metals and semiconductors in the late 1980s and early 1990s.  They enjoyed a resurgence in the late '90s during a controversy about coherence and the fate of quasiparticles as \(T \rightarrow 0\), and are still used as a tool to examine coherence in new systems as they come along (graphene, atomically thin semiconductors, 2d electron gases in oxide heterostructures, etc.). 

Thursday, June 26, 2014

Gordon Conference thoughts

Because of travel constraints I'm missing the last day of the meeting, but here are some thoughts, non-science first:
  • These meetings remain a great format - not too big, a good mix of topics, real opportunities for students and postdocs to interact w/ lots of people, chances for older researchers to play soccer and ultimate frisbee, etc.  As travel costs rise and internet connectivity improves, there are going to be sensible reasons to have fewer in-person meetings of otherwise distant participants, but there remains no substitute for a good conversation face-to-face over a coffee or a beer.
  • College dorm rooms, while better than when I was a student, are still not high on ambiance.  Generic fitted and top sheets for bedding appear to be made from dryer lint.
  • Food options have become progressively healthier and tastier in general.
  • Mount Holyoke is a lovely campus, with very loud and happy frogs.
  • A session about cuprate superconductors correlated with the literal gathering of storm clouds in an otherwise sunny week.
  • About 30% of the audience got the reference (after about a 5 second delay) when, on a slide about magnetic interactions (\(J_{zz} S^{z}_{i} S^{z}_{j}\)), there was an unlabeled picture of Jay-Z.
A few science thoughts (kept deliberately brief to avoid violating the GRC policy about discussing conference talks and posters):
  • Cuprate superconductors remain amazingly complicated, even after years of improving sample quality and experimental techniques. 
  • Looking at driven systems is becoming very exciting.  Basically, under some circumstances you can use light to switch topological features of the band structure on and off, for example.
  • It remains very challenging to figure out how to think about systems with low energy excitations that don't look like long-lived quasiparticles. 

Sunday, June 22, 2014

Gordon Conference

I am going to be at the Gordon Research Conference on correlated electrons for the next few days.  Should be fun, but blogging about such meetings is generally frowned upon - these conferences have explicit confidentiality rules, to avoid discouraging frank discussion and the showing of brand new, untried results.  I'll write more later in the week on other topics.

Sunday, June 15, 2014

FeSe on SrTiO3: report of 100 K superconductivity

I'd heard rumors about this for a while.  I presume that the posting of this on the arxiv means that some form of this paper is in submission out there to a suitably glossy, high impact journal that requires reference citations in its abstracts.  Background:  Bulk FeSe superconducts below around 8 K at ambient pressure (see here).  Under pressure, that transition can be squeezed up beyond 35 K (see here).  The mechanism for superconductivity in this material is up for debate, as far as I know (please feel free to add a reference or two in the comments). 

These investigators have a very fancy ultrahigh vacuum system, in which they are able to grow single-layer FeSe on top of SrTiO3 (with the substrate doped with niobium in this case).  This material is not stable in air, and apparently doesn't do terribly well even when coated with some protective layer.  However, these folks have a multi-probe scanning tunneling microscope system in their chamber, along with a cold stage, so that they can perform electrical measurements in situ without ever exposing their single layer to air.  They find that the electrical resistance measured in their four-point-probe configuration drops to zero below around 100 K (as high as 109 K, depending on the sample).  One subtle point that clearly worried them:  SrTiO3 is known to have a structural phase transition (the onset of ferroelasticity - see here) at around 105 K, so they wanted to be sure that what they saw wasn't somehow an artifact of that substrate effect.  (Makes me wonder what happens to superconductivity in the FeSe depending on the ferroelastic domain orientation underneath it.)  For the lay audience:  liquid nitrogen boils at 77 K at ambient pressure.  This would be the first iron-based superconductor to cross that threshold, a domain previously limited to the copper oxides.  Remember, if the bulk transition is at 8 K and the single-layer case exceeds 100 K, it doesn't seem crazy to hope for some related system with an additional factor of three or four that takes us beyond room temperature.

Important caveats:  Right now, they have resistance measurements and tunneling spectroscopy measurements.  Because of the need for in situ measurement they don't have Meissner data.  It's also important to realize that the restrictions here (not air stable; only happens in single layer material when ultraclean) are not small.  At the same time, this is potentially very exciting, and hopefully it holds up well and can be the foundation for more exciting materials.

Saturday, June 14, 2014

750th post - blog demographics

This is the 750th post since this blog's inception.  Fun facts gleaned from google analytics:

1) Unsurprisingly, the US leads in blog hits over that time, with 270,648.  In second place, the UK with 26,698.

2) According to google's tracking, over the last nine years there have been hits from every country in North, Central, and South America, as well as Europe.  In Asia, the only countries with zero hits are Turkmenistan and Papua New Guinea.  In Africa, I'm missing about a dozen, basically the sub-Saharan region plus Somalia.

3) In the US, the state with the fewest hits is South Dakota (84 visits over nine years), narrowly edging out Wyoming (88) and Alaska (91).  The states with the most hits are Texas, California, New York, and Massachusetts.

4) The most common browser, by a wide margin, is Firefox, followed by Chrome.  I like the idea that someone has read the blog on a PlayStation 3, and someone else on a PlayStation Portable.  Disappointed (and showing my age by that fact) that no one used Lynx or Emacs.

5) The most-viewed post of all time was the meme contest.  The most-viewed physics posts were these on plasmons and polarons.

Thank you all for reading!

Friday, June 13, 2014

"Seeing" chemical bonds with sub-molecular resolution

Chemists (and physicists) often draw molecular bonds as little lines connecting atoms, but actually imaging the bonds themselves is very hard.  With the advent of the scanning tunneling microscope, it's become almost commonplace to image the positions of atoms.  STM images the ability of electrons to enter or leave a conducting surface, and since an atom on the surface carries electrons of its own, its presence strongly modulates the STM signal.  This doesn't show anything direct about bonding between atoms, however.

Wilson Ho's group at UC Irvine has published another gem.  The paper is here (unfortunately behind the Science paywall), and the news release is here.  The new STM-based imaging technique, "itProbe", is based on inelastic tunneling, which I've described before.  (One advantage of being an ancient blogger - I can now refer back to my old stuff, with google helping me remember what I wrote.)  The Ho group deliberately attaches a CO molecule to their STM tip.  The CO molecule has a couple of very sharp vibrational (and "hindered translational") modes at low energies that can be seen electrically through inelastic electron tunneling spectroscopy (IETS) - basically sharp features in (the second derivative of) the tunneling current-voltage curve.  In the itProbe technique, the experimenters map out spatially what happens to those modes.  The idea is that as the CO molecule interacts with the nearby sample, the precise energies of those vibrational modes shift - the environment of the CO molecule tweaks the effective spring constant for the CO's motion.  Imaging in this way, they find that maps of the inelastic signal seem to show the bonds between the atoms in an underlying molecule, rather than the atom positions themselves.  I admit I don't understand the precise mechanism here, but the images are eye-popping.  A similar idea, involving atomic force microscopy with CO attached to an AFM tip, was demonstrated before (here and here, for example).  In those experiments, the investigators looked at how interactions between the CO on the tip and the sample affected the mechanical properties of the tip as a whole.
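
(To make the IETS part concrete: when the bias reaches a vibrational mode energy, \(|eV| = \hbar \omega\), an extra inelastic channel opens, giving a small step up in \(dI/dV\) and hence a peak in \(d^{2}I/dV^{2}\) at positive bias and a dip at negative bias.  Below is a schematic with entirely made-up numbers - in real experiments the second derivative comes from a lock-in amplifier's second harmonic, not from brute-force numerical differentiation:)

    # Schematic IETS bookkeeping with synthetic data: a fake 20 meV mode
    # gives steps in dI/dV at +/- 20 mV, i.e., a peak/dip pair in d2I/dV2.
    import numpy as np

    V = np.linspace(-0.05, 0.05, 2001)   # bias, volts
    mode, width = 0.020, 0.002           # mode energy (eV) and broadening
    didv = 1.0 + 0.05 * (np.tanh((V - mode)/width)
                         - np.tanh((V + mode)/width)) / 2
    I = np.cumsum(didv) * (V[1] - V[0])  # crude integration to get I(V)
    d2 = np.gradient(np.gradient(I, V), V)

    print("peak at V =", V[np.argmax(d2)])  # ~ +0.020 V
    print("dip  at V =", V[np.argmin(d2)])  # ~ -0.020 V

The itProbe trick is then to record how the energies of such features shift as the tip (with its CO molecule) is scanned over the sample.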

This is an example of a tour de force experiment that can be accomplished by long, sustained effort - the Ho group has been refining their IETS measurements for nearly two decades, and it's really paid off.  Hopefully these kinds of efforts will not become even less common as research funding seems to be focused increasingly on short time horizons and rapid changes in fashion.

Sunday, June 08, 2014

Bad physics as a marker for tracking text recycling

A colleague of mine was depressed to find, in a reasonably high impact journal, a statement that magnetic nanoparticles obey Coulomb's law, and thus can be manipulated by external magnetic fields.  As far as physics goes, this is just wrong.  Coulomb's law is the mathematical relationship that says that the force between two charges is proportional to the product of the charges and inversely proportional to the square of the distance between them.  It has nothing to do with magnetic nanoparticles.
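
For the record, the two relevant expressions look nothing alike.  Coulomb's law for point charges is
\[ \mathbf{F} = \frac{1}{4 \pi \epsilon_{0}} \frac{q_{1} q_{2}}{r^{2}} \hat{\mathbf{r}}, \]
while the force on a small particle with magnetic moment \(\mathbf{m}\) is
\[ \mathbf{F} = \nabla \left( \mathbf{m} \cdot \mathbf{B} \right). \]
That is, a magnetic nanoparticle responds to a field gradient, not to the field itself, and no Coulomb-like expression enters anywhere.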

I was curious - where did this weird, incorrect statement come from?  I turned to google to find out.  The earliest instance I can find is in this paper by Pankhurst, Connolly, Jones, and Dobson.  The paper seems quite good, and the (strange to me) Coulomb's law language appears to be shorthand for a physically sound description of the interactions of magnetic materials with magnetic fields.  The Pankhurst paper includes the following sentence:  "Second, the nanoparticles are magnetic, which means that they obey Coulomb’s law, and can be manipulated by an external magnetic field gradient."  This is part of a paragraph that lists three virtues of magnetic nanoparticles for biological applications.

For fun, try copy/pasting that sentence into google.  Look at how many times that sentence (indeed, that whole introductory paragraph, with very minimal changes) shows up nearly verbatim in other publications.  At the risk of saying something actionable: this is plagiarism.  It tends to happen in obscure proceedings, edited book chapters, etc., rather than in the high impact literature.  The proliferation of shady publication houses and vanity press journals only aggravates the situation.  Very depressing.

Wednesday, June 04, 2014

My views on teaching "nano"

Blatant self-promotion time:  I was grateful for the invitation to write an editorial about teaching "nano" for Nature Nanotechnology.  The full text is available for free at the above link, and comments and feedback are invited below.  (As a blog reader, you get the added bonus of reading the analogy I made that was cut due to space constraints.  When I advise becoming an expert in a traditional discipline first before tackling an interdisciplinary field, I had written:  "To make a food analogy, it would be very difficult to become an expert at Korean/Mexican fusion cuisine if you did not first know Korean and/or Mexican cooking at a high level.") 

Monday, June 02, 2014

What is chemical potential?

I've been meaning to do a post on this for a long time.  Five years ago (!) I wrote a post about the meaning of temperature, where I tried to go from the intuitive understanding given by common experience (temperature has something to do with energy content, and that energy flows from hot things to cold things) to a deeper view (that flow of energy comes from the tendency of the universe to take on macroscopic configurations that correspond to the most common ways of arranging microscopic degrees of freedom - the 2nd law of thermodynamics, basically).  I wasn't very satisfied with how the post turned out, but c'est la vie.

Chemical potential is a similar idea, but with added complications - while touch gives us an intuition for relative temperatures, we have no corresponding sense for chemical potential; and the rigorous definition of chemical potential is more complicated.  (For another take on this, see this article, available in full text via google from a variety of sources.)

Let's reason by analogy with temperature.  Energy tends to flow from a system at high temperature to a system at low temperature; when systems with identical temperatures are brought into contact so that they may exchange energy (e.g., by thermal conduction), there is no net flow of energy.  Now suppose systems are able to exchange particles as well as energy.  If two systems are at the same temperature, then particles will tend to flow from the system of higher chemical potential (one of the several parameters denoted by the symbol \(\mu\)) to that of lower chemical potential.  If two systems have identical chemical potentials for a particular kind of particle, there will be no net flow of particles.  In general, particles tend to flow from regions of high \(\mu/T\) to regions of low \(\mu/T\).  The classic example of this is the case of a closed bottle of perfume in a room full of (non-perfumed) air.  The perfume molecules have a high \(\mu\) in the bottle relative to the rest of the room.  When the bottle is opened, perfume molecules will tend to diffuse out of the bottle, simultaneously lowering their \(\mu\) in the bottle and increasing their \(\mu\) in the room.  This will continue until the chemical potentials equalize.  From the point of view of entropy, there are clearly very many more arrangements of molecules with them roughly spread throughout the room+bottle than the number of arrangements with the molecules happening to occupy just the bottle.  Hence, the universe tends toward the macroscopic configuration corresponding to the most microscopic configurations.  Bottom line:  equilibrium between two systems that can exchange particles requires equal temperatures and equal chemical potentials.
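
(For those who want the usual two-line argument behind that bottom line: let two systems exchange energy and particles at fixed total energy and particle number, with total entropy \(S = S_{1} + S_{2}\).  Using \((\partial S/\partial E)_{V,N} = 1/T\) and \((\partial S/\partial N)_{E,V} = -\mu/T\),
\[ dS = \left( \frac{1}{T_{1}} - \frac{1}{T_{2}} \right) dE_{1} + \left( \frac{\mu_{2}}{T_{2}} - \frac{\mu_{1}}{T_{1}} \right) dN_{1} . \]
Equilibrium means \(S\) is maximized, so \(dS = 0\) for arbitrary \(dE_{1}\) and \(dN_{1}\), which forces \(T_{1} = T_{2}\) and \(\mu_{1} = \mu_{2}\).  The same expression shows why, more generally, particles flow from high \(\mu/T\) to low \(\mu/T\): that's the direction that increases \(S\).)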

Where this also gets tricky is that thermodynamics tells us that \(\mu\) also corresponds to the energy required to add (or remove) one particle from the system at constant temperature and pressure (!).  This identity is not at all obvious from the description above, but it's nevertheless true.  This latter way of thinking about chemical potential means that when particles couple to some "real" potential (gravitational, electrical), it is possible to tune their total \(\mu\).  The connection to the entropic picture is the idea that particles will tend to "fall downhill" (there are usually fewer configurations of the combined system that have some particles "stacked up" in a region of high potential energy with others in a region of low potential energy, than when the energy gets spread around among all the particles).
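
(A concrete version of "falling downhill":  for a dilute classical gas in an external potential \(U(\mathbf{r})\),
\[ \mu(\mathbf{r}) = k_{\mathrm{B}} T \ln \left[ n(\mathbf{r}) \lambda_{T}^{3} \right] + U(\mathbf{r}), \]
with \(n\) the number density and \(\lambda_{T}\) the thermal de Broglie wavelength.  Requiring \(\mu\) to be uniform in equilibrium immediately gives \(n(\mathbf{r}) = n_{0} e^{-U(\mathbf{r})/k_{\mathrm{B}} T}\) - the familiar Boltzmann/barometric distribution.)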

Tuesday, May 27, 2014

Prize season again - updated w/ Kavli winners

Once again, the Breakthrough Prize and New Horizons Prize in fundamental physics are seeking nominations.  See here.  I have very mixed feelings about these prizes, given how the high energy theory community seems increasingly disconnected from experiment (and seems to consider that a feature rather than a bug).

On a related note, the Kavli Prizes are being awarded this Thursday.  Past nanoscience winners are Millie Dresselhaus (2012), Don Eigler (love his current affiliation) and Nadrian Seeman (2010), and Louis Brus and Sumio Iijima (2008).  Not exactly a bunch of underachievers.  Place your bets.  Whitesides?  Alivisatos and Bawendi?

Update:  Thomas Ebbesen (extraordinary transmission through deep sub-wavelength apertures, thanks to plasmons), Stefan Hell (stimulated emission depletion microscopy, for deep subwavelength fluorescence microscopy resolution), and John Pendry (perfect lenses and cloaking).  Congratulations all around - all richly deserved.  I do think that the Kavli folks are in a sweet spot for nano prizes, as there is a good-sized pool of outstanding people that has built up, few of whom have already been honored with a Nobel.  This is a bit like the early days of the Nobel prize, though hopefully with much less political infighting (see this book if you really want to be disillusioned about the Nobel process in the early years).