Tuesday, May 29, 2012

Buying out of teaching - opinions?

This is a topic that comes up at many research universities, and I'd be curious to hear your opinions.  Some institutions formally allow researchers to "buy" out of teaching responsibilities, and some actively encourage the practice as a way to boost research output and standing.  Does this work overall?  Faculty who spend all their time on research should generally be more research-productive, though it would be interesting to see quantitatively how much so.  Of course, undergraduate and graduate classroom education is also an essential part of university life, and often (though certainly not always) productive researchers are among the better teachers.  It's a fair question to ask whether teaching buyouts are a net good for the university as a whole.  What do you think?

Sunday, May 27, 2012

Work functions - a challenge of molecular-scale electronics

This past week I was fortunate enough to attend this workshop at Trinity College, Dublin, all about the physics of atomic- and molecular-scale electronics.  It was a great meeting, and I feel like I really learned several new things (some of which I may elaborate upon in future posts).  One topic that comes up persistently in this subject is the concept of the work function, typically defined as the minimum amount of energy it takes to kick an electron completely out of a material (so that it can go "all the way to infinity", rather than being bound to the material somehow).  As Einstein and others pointed out when trying to understand the photoelectric effect, each material has an intrinsic work function that can be measured, in principle, using photoemission.  You can hit a material surface with ultraviolet light and measure the energy of the electrons that get kicked out (for example, by slowing them down with an electric field and seeing how long they take to arrive at a detector).  Alternatively, with a fancy tunable light source like a synchrotron, you can dial around the energy of the incident light and see when electrons start getting kicked out.  As you might imagine, if you are trying to understand electronic transport, where an electron has to leave one electrode, traverse a system such as a molecule, and end up in another electrode, the work function is important to know.
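
Just to attach some numbers: here is a toy back-of-the-envelope sketch (my own illustrative values, not from any particular experiment) of such a time-of-flight photoemission estimate, using the textbook photoelectric relation and a nominal gold work function:

```python
# Back-of-the-envelope photoemission estimate (illustrative numbers only):
# a UV photon kicks an electron out of gold, and the electron's kinetic
# energy gives the work function via Einstein's relation KE = hf - W.
e = 1.602e-19      # elementary charge [C]
m_e = 9.109e-31    # electron mass [kg]
h = 6.626e-34      # Planck's constant [J s]
c = 3.0e8          # speed of light [m/s]

wavelength = 200e-9                # assumed 200 nm UV source
E_photon = h * c / wavelength / e  # photon energy [eV], about 6.2 eV
W_gold = 5.1                       # nominal Au work function [eV]; face-dependent!

KE_max = E_photon - W_gold           # maximum kinetic energy [eV]
v = (2 * KE_max * e / m_e) ** 0.5    # corresponding electron speed [m/s]
flight_path = 0.10                   # assumed 10 cm drift distance

print(f"photon energy : {E_photon:.2f} eV")
print(f"max KE        : {KE_max:.2f} eV")
print(f"time of flight: {flight_path / v * 1e9:.0f} ns")
```

The answer, roughly 160 ns for a ~1.1 eV electron over 10 cm, shows why time-resolved detection of these electrons is quite feasible.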

One problem with work functions is that they are extremely sensitive to the atomic-scale details of a surface.  For example, different crystallographic faces of even the same material (e.g., gold) can have work functions that differ by a couple of hundred millielectronvolts (meV).  Remember, the thermal energy scale at room temperature is 25 meV or so, so these are not small differences.  Moreover, anything that messes with the electron cloud that spills a little way out of the surface of a material at the atomic scale can alter the work function.  Adsorbed impurities on metal surfaces can change the effective work function by more than 1 eV (!).  To see how tricky this gets, imagine chemically assembling a layer of covalently bound molecules on a metal surface.  There is some charge transfer where each molecule bonds to the metal, leading to an electric dipole moment and a corresponding change in work function.  The molecule itself can also polarize, or be inherently polar based on its structure.  In the end, ordinary photoemission measures just the sum of all of these effects.  Finally, ponder what happens if the other ends of the molecules are also tethered chemically to a piece of metal.  How big are all the dipole shifts?  What is the actual energy landscape "seen" by an electron going from one metal to the other, and is there any way to measure it experimentally, let alone compute it reliably with quantum chemistry methods?  Really understanding the details is difficult, yet ultimately essential for progress here.
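
To get a feel for the size of these dipole-layer shifts, here is a crude sketch based on the standard Helmholtz-layer relation; the coverage, dipole moment, and effective dielectric constant below are generic "typical self-assembled monolayer" assumptions on my part, not measured values:

```python
# Helmholtz-layer estimate of the work function shift from surface dipoles:
# delta_Phi = N * mu_perp / (eps0 * eps_eff).
# All input numbers are assumed, "typical SAM" values for illustration.
eps0 = 8.854e-12       # vacuum permittivity [F/m]
debye = 3.336e-30      # 1 Debye in C*m

N = 4.6e18             # assumed molecular coverage [molecules/m^2]
mu_perp = 1.0 * debye  # assumed dipole component normal to the surface
eps_eff = 2.5          # assumed effective dielectric constant of the layer

delta_Phi = N * mu_perp / (eps0 * eps_eff)  # shift in volts = eV per electron
print(f"work function shift ~ {delta_Phi:.2f} eV")
```

Even with these mild numbers the shift comes out around 0.7 eV, comparable to the adsorbate effects mentioned above.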

Monday, May 21, 2012

Catalysis seems like magic.

In our most recent paper, we found that we could dope a particularly interesting material, vanadium dioxide, with atomic hydrogen via "catalytic spillover". By getting hydrogen into interstitial sites, we could dramatically alter the electrical properties of the material, allowing us to stabilize its unusual metallic state down to low temperatures. The funkiest part of this, to me, is the catalysis. The metal electrodes that we use for electronic measurements have enough catalytic activity that they can split hydrogen molecules into atomic hydrogen at an appreciable rate even under very modest conditions (e.g., not much warmer than the boiling point of water). This paper (sorry, it is subscription-only) shows an elegant experimental demonstration of this, in which gold is exposed to H2 and D2 gas and HD molecules are then detected. I would love to understand the physics at work here better. Any recommendations for a physics-based discussion would be appreciated - I know there is enormous empirical and phenomenological knowledge about this stuff, but something closer to an underlying physics description would be excellent.
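
One quantitative aside: thermally activated rates are extremely steep functions of temperature and barrier height, which is part of why "modest conditions" can still give appreciable dissociation rates once a catalyst lowers the barrier. A toy Arrhenius comparison (the barrier value here is invented for illustration, not a measured number for any real surface):

```python
import math

# Toy Arrhenius comparison: how much a modest temperature increase boosts
# a thermally activated rate. The barrier is an assumed, illustrative value.
k_B = 8.617e-5   # Boltzmann constant [eV/K]

def arrhenius_ratio(T1, T2, E_a=0.4):
    """Ratio of rates at T2 vs T1 for barrier E_a [eV]; prefactors cancel."""
    return math.exp(-E_a / (k_B * T2)) / math.exp(-E_a / (k_B * T1))

# Room temperature vs. "not much warmer than boiling water":
print(f"rate(400 K) / rate(300 K) ~ {arrhenius_ratio(300, 400):.0f}x")
```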


Wednesday, May 16, 2012

Vanity journals: you've got to be kidding me.

I just received the following email:
Dear Pro. ,
Considering your research in related areas, we cordially invite you to submit a paper to Modern Internet of Things (MIOT).

The Journal of Modern Internet of Things (MIOT) is published in English, and is a peer reviewed free-access journal which provides rapid publications and a forum for researchers, research results, and knowledge on Internet of Things. It serves the objective of international academic exchange.
Wow!  I feel so honored, given my vast research experience connected to "Internet of Things". 

The publisher should be ashamed of this.  It's absurd, and, while amusing, it shows that there is something deeply sick about some parts of academic publishing.

Monday, May 14, 2012

The unreasonable clarity of E. M. Purcell

Edward Purcell was one of the great physicists of the 20th century.  He won the Nobel Prize in physics for his (independent) discovery of nuclear magnetic resonance, and was justifiably known for the extraordinary clarity of his writing.  He went on to author the incredibly good second volume of the Berkeley Physics Course (soon to be re-issued in updated form by Cambridge University Press), and late in life became interested in biophysics, writing the evocative "Life at Low Reynolds Number" (pdf).

Purcell is also known for the Purcell Factor, a really neat bit of physics.  As I mentioned previously, Einstein showed through a brilliant thermodynamic argument that it's possible to infer the spontaneous transition rate for an emitter in an excited state dropping down to the ground state and spitting out a photon.  The spontaneous emission rate is related to the stimulated rate and the absorption rate.  Both of the latter two may be calculated using "Fermi's Golden Rule", which explains (with some specific caveats that I won't list here) that the rate of a quantum mechanical radiative transition for electrons (for example) is proportional to (among other things) the density of states (number of states per unit energy per unit volume) of the electrons and the density of states of the photons.  The density of states for photons in 3d can be calculated readily, and is quadratic in frequency.  
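
In symbols, for the record (these are the standard textbook expressions, nothing specific to Purcell's argument): the golden-rule rate and the free-space photon density of states per unit volume, counting both polarizations, are

```latex
\Gamma_{i \to f} = \frac{2\pi}{\hbar}\,\bigl|\langle f | H' | i \rangle\bigr|^{2}\,\rho(E_f),
\qquad
\rho_{\mathrm{photon}}(\omega) = \frac{\omega^{2}}{\pi^{2} c^{3}},
```

and that \omega^{2} is the quadratic frequency dependence I just mentioned.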

Purcell had the insight that in a cavity, the number of states available for photons is no longer quadratic in frequency.  Instead, a cavity on resonance has a photon density of states that is proportional to the "quality factor", Q, of the cavity, and inversely proportional to the cavity's mode volume.  The better the cavity and the smaller the cavity, the higher the density of states at the cavity resonance frequency, while off resonance the photon density of states approaches zero.  This means that the spontaneous emission rate of atoms, a property that seems like it should be fundamental, can actually be tuned by the local environment of the radiating system.  The Purcell factor is the ratio of the spontaneous emission rate with the cavity to that in free space.
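
For the numerically inclined, here is a little sketch of the standard on-resonance result. The cavity parameters below are invented but representative of a small semiconductor microcavity:

```python
import math

# Standard on-resonance Purcell factor: F = (3/4pi^2) (lambda/n)^3 (Q/V).
# Cavity numbers are assumed, representative values, not from a real device.
def purcell_factor(wavelength, n, Q, V):
    """Spontaneous emission enhancement for an ideal on-resonance emitter."""
    return (3.0 / (4.0 * math.pi**2)) * (wavelength / n) ** 3 * (Q / V)

lam = 1.0e-6        # free-space emission wavelength: 1 micron (assumed)
n = 3.5             # refractive index of the cavity material (assumed)
Q = 1.0e4           # assumed quality factor
V = (lam / n) ** 3  # assumed mode volume: one cubic wavelength in the medium

print(f"Purcell factor ~ {purcell_factor(lam, n, Q, V):.0f}")  # ~760
```

A Q of 10^4 and a wavelength-cubed mode volume already enhance the spontaneous emission rate by a factor of several hundred.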

While I was doing some writing today, I decided to look up the original citation for this idea.  Remarkably, the "paper" turned out to be just an abstract!  See here, page 681, abstract B10.  That one paragraph explains the essential result better than most textbooks, and it's been cited a couple of thousand times.  This takes over as my new favorite piece of clear, brief physics writing by a famous scientist, displacing my long-time favorite, Nyquist's derivation of thermal noise.  Anyone who can be both an outstanding scientist and a clear writer gets bonus points in my view.

Saturday, May 05, 2012

Models and how physics works

Thanks to ZapperZ for bringing this to my attention. This paper is about to appear in Phys Rev Letters, and argues that the Lorentz force law (as written to apply to magnetic materials, not isolated point charges) is incompatible with Special Relativity. The argument includes a simple thought experiment. In one reference frame, you have a point charge and a little piece of magnetic material. Because the magnet is neutral (and for now we ignore any dielectric polarization of the magnet), there is no net force on the charge or the magnet, and no net torque on the magnet either. Now consider the situation when viewed from a frame moving along a line perpendicular to the line between the magnet and the charge. In the moving frame, the charge seems to be moving, so that produces a current. However (and this is the essential bit!), in first year physics, we model permanent magnetization as a collection of current loops. If we then consider what those current loops look like in the moving frame, the result involves an electric dipole moment, meaning that the charge should now exert a net torque on the magnet when all is said and done. Since observers in the two frames of reference disagree on whether a torque exists, there is a problem! Now, the author points out that there is a way to fix this, and it involves modifying the Lorentz force law in terms of how it treats magnetization, M (and electric polarization, P). This modification was already suggested by Einstein and a coauthor back in 1908.
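
For concreteness, the frame-transformation result at the heart of the thought experiment, to first order in v/c (standard textbook electrodynamics): a magnetic dipole m moving with velocity v carries an electric dipole moment, which the charge's field E then appears to torque,

```latex
\mathbf{p}' \simeq \frac{\mathbf{v} \times \mathbf{m}}{c^{2}},
\qquad
\boldsymbol{\tau}' = \mathbf{p}' \times \mathbf{E},
```

while in the original rest frame there is no torque at all. That mismatch is the paradox.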

I think (and invite comments one way or the other) that the real issue here is that our traditional way of modeling magnetization is unphysical at the semiclassical level. You really shouldn't be able to have a current loop that persists, classically. A charge moving in a loop is accelerating all the time, and should therefore radiate. By postulating no radiation and permanent current loops, we are already inserting something fishy into our treatment of energy and momentum in electrodynamics right at the beginning. The argument by the author of the paper seems right to me, though I do wonder (as did a commenter on ZZ's post) whether this all would have been much clearer if it had been written out in four-vector/covariant notation rather than in conventional 3-vectors.
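
To see just how fishy, here is the classic Larmor-power estimate (standard hydrogen-atom numbers, used purely as an order-of-magnitude illustration of the classical collapse problem):

```python
import math

# Larmor-power sanity check: a classical electron circulating at Bohr-orbit
# scales radiates so strongly that the "orbit" decays almost instantly.
e = 1.602e-19     # elementary charge [C]
eps0 = 8.854e-12  # vacuum permittivity [F/m]
c = 3.0e8         # speed of light [m/s]

r = 5.29e-11      # orbit radius ~ Bohr radius [m]
v = 2.19e6        # orbital speed ~ alpha * c [m/s]

a = v**2 / r                                   # centripetal acceleration
P = e**2 * a**2 / (6 * math.pi * eps0 * c**3)  # Larmor radiated power [W]

E_scale = 13.6 * e  # hydrogen binding energy [J] sets the energy scale
print(f"radiated power  : {P:.2e} W")
print(f"decay timescale ~ {E_scale / P:.1e} s")  # tens of picoseconds
```

Classically, such a loop would radiate away its energy in tens of picoseconds; only quantum mechanics rescues the persistent "current loops" behind M.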

This raises a valuable point about models in physics, though. Our model of M as resulting from current loops is extremely useful for many situations, even though it is a wee bit unphysical. We only run into trouble when we push the model beyond where it should ever have been expected to be valid. The general public doesn't always understand this distinction - that something can be a little wrong in some sense yet still be useful. Science journalists and scientists trying to reach the public need to keep this in mind. Simplistically declaring something to be wrong, period, is often neither accurate nor helpful.