Wednesday, November 30, 2011

Antennas for light + ionics at the nanoscale

A particularly excellent (recently revised) review article was posted on the arxiv the other day, about metal nanostructures as antennas for light. This seems to be an extremely complete and at the same time reasonably pedagogical treatment of the subject. While in some sense there are no shocking surprises (the basic physics underlying all of this is, after all, Maxwell's equations with complicated boundary conditions and dielectric functions for the metal), there are some great ideas and motifs: the importance of the optical "near field"; the emergence of plasmons, the collective modes of the electrons, which are relevant at the nanoscale but not in macroscopic antennas for, e.g., radio frequencies; the use of such antennas in real quantum optics applications. Great stuff.

I also feel the need for a little bit of shameless self-promotion. My colleague Massimo Di Ventra (http://physics.ucsd.edu/~diventra/) and I have an article appearing in this month's MRS Bulletin, talking about the importance of ion motion and electrochemistry in nanoscale structures. (Sorry about not having a version on the arxiv at this time. Email me if you'd like a copy.) This article was prompted in part by a growing realization among a number of researchers that the consequences of the motion of ions (often neglected at first glance!) are apparent in a number of nanoscale systems. Working at the nanoscale, it's possible to establish very large electric fields and concentration/chemical potential gradients that can drive diffusion. At the same time, there are large accessible surface areas, and inherently small system dimensions mean that diffusion over physically relevant distances is much faster than in macroscale materials. While ionic motion can be an annoyance or an unintended complication, there are likely situations where it can be embraced and engineered for useful applications.
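To see why the nanoscale is special here, a quick back-of-the-envelope sketch helps. The voltages, gap sizes, and diffusivity below are illustrative assumptions of mine, not numbers from the MRS Bulletin article:

```python
# Why ionic effects loom large at the nanoscale: modest voltages across
# nanometer gaps give enormous fields, and diffusion times scale as L^2/D,
# so small distances are crossed quickly.

V = 1.0        # applied voltage, volts (illustrative)
d = 10e-9      # electrode gap, meters (10 nm)
E = V / d      # electric field, V/m
print(f"Field across a 10 nm gap at 1 V: {E:.1e} V/m")  # ~1e8 V/m

D = 1e-14      # ionic diffusivity in a solid, m^2/s (illustrative)
for L in (1e-8, 1e-5):  # 10 nm vs 10 micron
    t = L**2 / D        # characteristic diffusion time
    print(f"Diffusion time over {L:.0e} m: {t:.1e} s")
```

Shrinking the relevant length by a factor of a thousand cuts the diffusion time by a factor of a million, which is the point about "inherently small system dimensions" above.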

Saturday, November 26, 2011

Nano"machines" and dissipation

There's an article (subscription only, unfortunately) out that has gotten some attention, discussing whether artificial molecular machines will "deliver on their promise". The groups that wrote the article have an extensive track record in synthesizing and characterizing molecules that can undergo directed "mechanical" motion (e.g., translation of a rod-like portion through a ring) under external stimuli (e.g., changes in temperature or pH, redox reactions, optical excitation). There is no question that this is some pretty cool stuff, and the chemistry here (both synthetic organic and physical) is quite sophisticated.

Two points strike me, though.  First, the "promise" mentioned in the title is connected, particularly in the press writeup, with Drexlerian nanoassembler visions.  Synthetic molecules that can move are impressive, but they are far, far away from the idea of actually constructing arbitrary designer materials one atom at a time (a goal that is likely impossible, in my opinion, for reasons stated convincingly here, among others).  They are, however, a possible step on the road to designer, synthetic enzymes, a neat idea.

Second, the writeup particularly mentions how "efficient" the mechanical motions of these molecules are.  That is, there is comparatively little dissipation relative to macroscopic machines.  This is actually not very surprising, if you think about the microscopic picture of what we think of as macroscopic irreversibility.  "Loss" of mechanical energy takes place because energy is transferred from macroscopic degrees of freedom (the motion of a piston) to microscopic degrees of freedom (the near-continuum of vibrational and electronic modes in the metal in the piston and cylinder walls).  When the whole system of interest is microscopic, there just aren't many places for the energy to go.  This is an example of the finite-phase-space aspect that shows up all the time in truly nanoscale systems. 

Thursday, November 17, 2011

Superluminal neutrinos - follow-up

The OPERA collaboration, or at least a large subset of it, has a revised preprint out (and apparently submitted somewhere), with more data on their time-of-flight studies of neutrinos produced at CERN. Tommaso has a nice write-up here. Their previous preprint created quite a stir, since it purported to show evidence of neutrino motion faster than c, the speed of light in vacuum. The general reaction among physicists was, that's really weird, and it's exceedingly likely that something is wrong somewhere in the analysis. One complaint that came up repeatedly was that the pulses used by the group were about 10000 nanoseconds long, while the group was arguing about timing at the 60 ns level. You could readily imagine issues with their statistics or the functioning of the detector, since the pulses were so long compared to the effect being reported. To deal with this, the group has now been running for a while with much shorter pulses (a few ns in duration). While they don't have nearly as much data so far (in only a few weeks of running), they do have enough to do some analysis, and so far the results are completely consistent with their earlier report. Funky. Clearly pulse duration systematics or statistics aren't the source of the apparent superluminality, then. So, either neutrinos really are superluminal (still bloody unlikely for a host of reasons), or there is still some weird systematic error in the detector somewhere. (For what it's worth, I'm sure they've looked a million ways at the clock synchronization, etc. by now, so that's not likely to be the problem either.)
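For a sense of scale, here's a quick sanity check on the claimed effect, using the round numbers commonly quoted in the preprint discussion (a ~60 ns early arrival over the roughly 730 km CERN-to-Gran Sasso baseline):

```python
# Rough scale of the OPERA claim: a ~60 ns early arrival over ~730 km.
# Baseline and timing are the approximate figures from public discussion
# of the preprint, used here only to estimate the fractional speed excess.

c = 299_792_458.0          # speed of light in vacuum, m/s
baseline = 730e3           # CERN to Gran Sasso, meters (approximate)
early = 60e-9              # reported early arrival, seconds

t_light = baseline / c     # light travel time over the baseline, ~2.4 ms
excess = early / t_light   # fractional speed excess, (v - c)/c
print(f"Light travel time: {t_light*1e3:.2f} ms")
print(f"Implied (v - c)/c ~ {excess:.1e}")  # ~2.5e-5
```

The tiny fractional excess (a few parts in 100,000) is exactly why the 60 ns timing claim demanded pulses much shorter than the original 10000 ns ones.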

Update:  Matt Strassler has an excellent summary of the situation.

So you want to compete w/ fossil fuels (or silicon)

Yesterday I went to an interesting talk here by Eric Toone, deputy director of ARPA-E, which is supposed to be the blue-sky, high-risk/high-reward development portion of the US Department of Energy. He summarized some basic messages about energy globally and in the US, gave quite a number of examples of projects funded by ARPA-E, and had a series of take-home messages. He also gave the most concise (single-graph) explanation for the failure of Solyndra: they bet on a technology based on CIGS solar cells, and then the price of silicon (an essential component of the main competing technology) fell by 80% over a few months. It was made very clear that ARPA-E aims at a particular stage in the tech transfer process, when the basic science is known and a technology is right at the edge of development.

The general energy picture was its usual fairly depressing self. There are plenty of fossil fuels (particularly natural gas and coal), but if you think that CO2 is a concern, then using those blindly is risky. Capital costs make nuclear comparatively uncompetitive (to say nothing of political difficulties following Fukushima). Solar is too expensive to compete w/ fossil fuels. Other renewables are also too expensive and/or not scalable. Biomass is too expensive. Batteries don't come remotely close to competing with, e.g., gasoline in terms of energy density and effective refueling times.

The one thing that really struck me was the similarity of the replacing-fossil-fuels challenge and the replacing-silicon-electronics challenge. Fossil fuels have problems, but they're sooooooo cheap. Likewise, there is a great desire to prolong Moore's law by eventually replacing Si, but Si devices are sooooooo cheap that there's an incredible economic barrier to surmount. When you're competing against a transistor that costs less than a millionth of a cent and has a one-per-billion failure rate over ten years, your non-Si gizmo better be really darn special if you want anyone to take it seriously....

Monday, November 14, 2011

Bad Astronomy day at Rice

Today we hosted Phil Plait for our annual Rorschach Lecture (see here), a series in honor of Bud Rorschach dedicated to public outreach and science policy. He kept us fully entertained with his Death from the Skies! talk, with a particularly amusing litany of (a small subset of) the scientific flaws in "Armageddon". There was a full house in our big lecture hall - there's no question that astro has very broad popular appeal (though it did bring out the "Obama should be impeached immediately because he's not protecting us from possible asteroid impacts!" crowd).

Sunday, November 06, 2011

Teaching - Coleman vs. Feynman

As pointed out by Peter Woit, Steve Hsu recently posted a link to an interview with (the late) Sidney Coleman, generally viewed as one of the premier theoretical physicists of his generation. Ironically, for someone known as an excellent lecturer, Coleman apparently hated teaching, likening it to "washing dishes" or "waxing floors" - two activities he could do well, from which he derived a small amount of "job well done" satisfaction, but which he would never choose to do voluntarily.

It's fun to contrast this with the view of Richard Feynman, as he put it in Surely You're Joking, Mr. Feynman!:
I don't believe I can really do without teaching. The reason is, I have to have something so that when I don't have any ideas and I'm not getting anywhere I can say to myself, "At least I'm living; at least I'm doing something; I am making some contribution" -- it's just psychological.... The questions of the students are often the source of new research. They often ask profound questions that I've thought about at times and then given up on, so to speak, for a while. It wouldn't do me any harm to think about them again and see if I can go any further now. The students may not be able to see the thing I want to answer, or the subtleties I want to think about, but they remind me of a problem by asking questions in the neighborhood of that problem. It's not so easy to remind yourself of these things. So I find that teaching and the students keep life going, and I would never accept any position in which somebody has invented a happy situation for me where I don't have to teach. Never.
I definitely lean toward the Feynman attitude. Teaching - explaining science to others - is fun, important, and helpful to my own work. Perhaps Coleman was simply so powerful in terms of creativity in research that teaching always seemed like an annoying distraction. In these days when there are so many expectations on faculty members beyond teaching, I hope we're not culturally rewarding a drift toward the Coleman position.

Tuesday, November 01, 2011

Science - what is it up to?

Hat tip to Phil Plait, the Bad Astronomer, for linking to this video from The Daily Show. My apologies to non-US readers who won't be able to watch this. It's a special report from Aasif Mandvi, complete with remarks from a Republican "strategist" / Fox News talking head, who explains how science is inherently corrupt, because only scientists are really qualified to review the work of scientists. Seriously, she really makes that argument, and more.

Update:  I've decided to ditch the embedded video.  Here's a link to the video on the Daily Show's site, and here's a link that works internationally.