Sunday, December 30, 2007

Nanotechnology - how to get into it, and where it's going

This post is in response to a comment here seeking some advice about nanotechnology, and is relatively brief.

What is nanotechnology? Nanotechnology is a vague, overly broad term. The most commonly accepted definition is something like "nanotechnology is any technology making use of the unique properties of matter structured on length scales smaller than 100 nm." By this definition the semiconductor industry has been doing nanotechnology for a long time now. The point is, in the last ten to twenty years, we've learned a lot about how to engineer materials and structure them in all three dimensions (under the right circumstances) on scales much smaller than 100 nm. This capability has a real chance of having a major impact on a large number of industries, from biomedical sensing and treatment, to light, strong structural composites, to energy generation, to waste remediation.

What should I study if I'm interested in nanotechnology? Nanoscale science and engineering is broad and interdisciplinary. The main avenues for getting into cutting edge work at these scales remain condensed matter physics, physical chemistry, and electrical engineering programs, though there are exceptionally good people working at the nanoscale in bio, bioengineering, chemical engineering, and mechanical engineering programs as well. The best approach, in my opinion, is to get a first-rate education in one of these traditional disciplines and focus on the nano, if you want to make scientific or engineering research contributions. Broad nano overview programs right now are better suited to people who want to be scientifically literate for decision-making (e.g. managers or patent lawyers) rather than those who want to do the science and engineering.

Is there really substance behind the hype? Is nanotechnology actually going somewhere? There is definitely substance behind some of the hype. As a very recent example, this new paper in Nature Nanotechnology reports a way of making lithium ion battery electrodes from silicon nanowires. Because it's in nanowire form, the Si can take up huge amounts of Li without the resulting strain pulverizing the Si. Between that and the huge specific surface area of the nanowires, real gains over conventional batteries should be possible. Best of all, industrial scaleup of Si nanowire growth looks achievable.

That's just one example from the past week. There is an awful lot of silliness out there, too, however. We're not going to have nanorobots swimming through our bodies repairing our capillaries. We're not going to have self-reproducing nanomachines assembling rocket engines one atom at a time out of single-crystal diamond. Getting a real science or engineering education gives you the critical skills and knowledge to tell the difference between credible and incredible claims.

Is going into nanotechnology a stable career path relative to alternatives? Another reason to get a solid education in a traditional science or engineering discipline is that you shouldn't be limited to just "nano" stuff. Frankly, I think this would be far more useful in just about any career path (including law or medicine) than an undergrad degree in business. Still, there are no guarantees - learn to be flexible, learn to think critically, and learn to solve problems.

Texas and "creation science"

Is it a coincidence that every state panel staffed with Rick Perry appointees does something to undermine science education and science literacy in this state? The latest ridiculousness comes from an advisory panel to the Texas Higher Education Coordinating Board, which has recommended in favor of recognizing Master of Science Education degrees granted by the Institute for Creation Research. Yes, that's right - this isn't even the subtle creationism of "Intelligent Design". This is full-on young Earth creationism, as explained on the ICR's own FAQ page:
All things in the universe were created and made by God in the six literal days of the creation week described in Genesis 1:1-2:3, and confirmed in Exodus 20:8-11. The creation record is factual, historical and perspicuous; thus all theories of origins or development which involve evolution in any form are false. All things which now exist are sustained and ordered by God's providential care. However, a part of the spiritual creation, Satan and his angels, rebelled against God after the creation and are attempting to thwart His divine purposes in creation.
Remember, if you believe in evolution in any form, or that the universe is actually about 13 billion years old, then according to these folks, who want to train the science teachers of Texas, you have been corrupted by Satan and his agents. Great.

To the arguments of the Houston Chronicle against granting the ICR request, let me add two more (both admittedly self-serving): this seriously hurts our ability to recruit high tech professionals to this state, and this puts Texas science and engineering faculty at a competitive disadvantage for funding. For example, ordinarily it would be a plus for a large center proposal to the NSF to be coupled to the state's education initiatives. In Texas, that's not clear.

Saturday, December 22, 2007

Books on science this holiday season

From his new book, I am America (and so can you!), part of Stephen Colbert's view of science:
" 'Why?' -- The question scientists are always asking. You know who else is always asking 'why?' ? Five year olds! That's the kind of intellectual level we're dealing with here."

That's the best justification for my desire to be a scientist that I've seen since I read Tom Weller's book Science Made Stupid back when I was in high school:
"What is science? Put most simply, science is a way of dealing with the world around us. It is a way of baffling the uninitiated with incomprehensible jargon. It is a way of obtaining fat government grants. It is a way of achieving mastery over the physical world by threatening it with destruction."

I've also been reading Uncertainty, a very well-written book about the birth of quantum mechanics that focuses mostly on the personalities of the major players. It's a compelling story, though there are no major surprises: Heisenberg was ludicrously bright; Bohr was incapable of writing a short, declarative statement; Pauli was a sarcastic bastard who could get away with it because he was brilliant; Einstein was already the grand old man.

I can also recommend American Prometheus: The Triumph and Tragedy of J. Robert Oppenheimer. I'm not done with this one yet, but it's extremely interesting. If you thought that socially awkward, neurotic people going into science was a recent phenomenon, think again.



Tuesday, December 18, 2007

Research and corporations

This article from the NY Times is worth reading. Basically it points out something I've said repeatedly in the past: some corporations think that they can substitute funding university research for the work that used to get done in big industrial research labs (like Bell Labs, IBM, Xerox, GM, Westinghouse, Ford Scientific, RCA, etc.). I think that there's definitely a place for corporate funding of university research, provided the company goes into it with eyes open, conscious of the realities of academic work. However, I don't think university research projects will ever be able to approach the total intellectual effort that IBM or Bell could bring to bear on an important problem. If Bell decided that some piece of solid state physics was important, they could put 35 PhDs onto the problem. There just isn't that kind of concentration of expertise at a university - we're all working on different areas, for the most part.

Monday, December 17, 2007

Magnetite

Now that it's been published online, I can talk about our new paper about magnetite. Back in August I wrote a post about the different types of papers that I've been involved with. This one fits the third category that I'd mentioned, the (Well-Motivated) Surprise, and it's been a fun one.

Background: Magnetite is Fe3O4, also known as lodestone. This material is a ferrimagnet, meaning that it has two interpenetrating lattices of magnetic ions with oppositely directed polarizations of different magnitudes. Since one polarization wins, the material acts in many ways like a ferromagnet, which is how it was first used in technology: to make primitive compasses. The magnetic ordering temperature for magnetite is about 860 K. Anyway, at room temperature magnetite has a crystal structure called inverse spinel, with two kinds of lattice sites for iron atoms. The A sites are each in the center of a tetrahedron with oxygen atoms at the corners, and are occupied by Fe(3+). The B sites (there are twice as many as A sites) are each in the center of an oxygen octahedron, and are occupied by a 50/50 mix of Fe(3+) and Fe(2+), according to chemical formal charges.
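As a quick sanity check on that site assignment, the formal charges balance per formula unit - trivial bookkeeping, sketched here in Python:

```python
# Formal-charge bookkeeping for one Fe3O4 formula unit in the inverse spinel
# structure: one tetrahedral A site, two octahedral B sites, four oxygens.
a_site = [+3]          # A site: Fe(3+)
b_sites = [+3, +2]     # B sites: 50/50 mix of Fe(3+) and Fe(2+)
oxygens = [-2] * 4     # O(2-)

total = sum(a_site) + sum(b_sites) + sum(oxygens)
print(total)  # 0 - the formula unit is charge neutral
```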

It's been known for nearly 70 years that the simple single-electron band theory of solids (so good at describing Si, for example) does a lousy job at describing magnetite. Fe3O4 is a classic example of a strongly correlated material, meaning that electron-electron interactions aren't negligible. At room temperature it's moderately conducting, with a resistivity of a few milli-Ohm-cm. That's 1000 times worse than Cu, but still not too bad. When cooled, the resistivity goes weakly up with decreasing temperature (not a standard metal or semiconductor!), and at about 120 K the material goes through the Verwey transition, below which it becomes much more insulating. Verwey first noticed this in 1939, and suggested that conduction at high temperatures was through shifting valence of the B-site irons, while below the transition the B-site irons formed a charge ordered state. People have been arguing about this ever since, sometimes with amusing juxtapositions (hint: look at the titles and publication dates on those links).
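To put the "few milli-Ohm-cm" figure in context, here's the comparison to copper spelled out, using a textbook value for Cu's room-temperature resistivity and an assumed 4 milli-Ohm-cm for magnetite:

```python
rho_cu = 1.7e-6      # Ohm-cm, copper at room temperature (textbook value)
rho_fe3o4 = 4e-3     # Ohm-cm, an assumed "few milli-Ohm-cm" for magnetite

ratio = rho_fe3o4 / rho_cu
print(ratio)         # order 10^3: magnetite is roughly 1000x more resistive
```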

Motivation: I'd been interested for a while in trying to do some nanoscale transport measurements in strongly correlated systems. The problem is, most relevant materials are very difficult to work with - not air stable, difficult to prepare, etc. Magnetite is at least a well-defined compound, and the Verwey transition acts as something of a gauge of material quality, at least in bulk. Screw up the oxygen content by a couple of percent, and the transition temperature falls through the floor.

What did we find: In two different kinds of magnetite nanostructures, we found that the I-V characteristics become dramatically hysteretic once the sample is cooled below the Verwey transition. This was completely unexpected! Basically it looks like you can take the system, which wants to be a decent insulator in equilibrium at low temperatures, and kick it back into a conducting state by applying a large enough electric field. Reduce the field back down, and the system remains in the conducting state until you pass a lower threshold, and then the magnetite snaps back into being an insulator. We worked very hard to check that this was not just some weird self-heating problem, and that's all described in the paper. I should point out that other strongly correlated insulators (vanadium oxides; some perovskite oxides) seem to be capable of qualitatively similar transitions. Hopefully we'll be able to use this transition as a way of getting a better handle on the nature of the Verwey transition itself - in particular, the role of structural degrees of freedom as well as electronic correlations.

Tuesday, December 11, 2007

Hahvahd and the burden of financial excess.

As pointed out by Julianne at Cosmic Variance, the president of Harvard had this to say about the combined issue of declining federal science research (in real dollars) and Harvard's soul-crushing dilemma of extreme wealth:
"One thing we all must worry about — I certainly do — is the federal support for scientific research. And are we all going to be chasing increasingly scarce dollars?" says Drew Gilpin Faust, Harvard's new president.

Not that Faust seems worried about Harvard or other top-tier research schools. "They're going to be—we hope, we trust, we assume—the survivors in this race," she says. As for the many lesser universities likely to lose market share, she adds, they would be wise "to really emphasize social science or humanities and have science endeavors that are not as ambitious" as those of Harvard and its peers.

Wow. So much for thinking that Larry Summers' arrogance was anomalous.


Thursday, December 06, 2007

Abstract fun

I spent my day at APS headquarters sorting abstracts for the March Meeting, the big condensed matter gathering that now approaches 7000 talks and posters. This is the second time I've done this, and it's always an interesting experience. When people submit abstracts they are supposed to choose a sorting category so that their talk ends up in an appropriate session - that way the audience will hopefully include people who are actually interested in the subject of the work. The contributed talks at the March Meeting are each 10 minutes, with 2 minutes for questions. Often these talks are the first chance a graduate student gets to present their work in a public forum before other scientists. Unfortunately 10 minutes is very short, so much so that often only near-experts in an area can get much out of such a brief explanation of results. There are also invited talks that are 30 minutes with 6 minutes for questions. These can be arranged in Invited Sessions, where all the talks are invited, and the session theme and potential speakers are nominated and voted upon by the program committee. Alternatively, there are mixed Focus Topic sessions that typically have one or two invited talks mixed in with contributed ones.

The first big challenge in sorting the abstracts is that the sorting categories often overlap. For example, there were at least four different categories where people could have submitted abstracts about electronic properties of quantum dots. Surprisingly, about 80 people pushing around 7000 slips of barcoded paper is a reasonably efficient way of sorting. The second major issue in organizing the meeting is that space is very limited, and sessions are highly constrained - you don't want a contributed session to take place at the same time as an invited session on a closely related area, for example.

Helping to put together meetings like this is a bit like the scientific equivalent of jury duty. You want to make sure that it gets done well by people whose judgment you trust, but you don't want to have to do it yourself very often. It is a good way to meet your fellow physicists, though.

Saturday, December 01, 2007

Texas, you're not making this any easier.

Well, looks like it's time for another of my once-every-few-months occasions to be severely disappointed in public agencies in Texas. This time the director of the state's public school science curriculum has been forced out, apparently because she prefers evolution to "intelligent design". This is just pathetic. While I appreciate Eric Berger's spirited defense of Texas (in short, we're not all antiscience zealots), the steady stream of this stuff from Austin is unquestionably depressing.

Tuesday, November 27, 2007

"Unparticles" and condensed matter

At the risk of contributing to what has recently been called the intellectual wasteland that is the physics blogosphere, I want to point out a nice review paper on the arxiv, and its connection to high energy physics. Subir Sachdev at Harvard has put up a relatively pedagogical review about quantum magnetism and criticality. Back when I was a grad student, I didn't appreciate that quantum magnetic systems were so interesting - I thought that they were a zoo or menagerie of semi-random compounds that happened to have effective model Hamiltonians of interest only to rather esoteric theorists. Now I understand the appeal - the relevant Hamiltonians can have some truly bizarre solutions that can be relevant (intellectually if not directly) to whole classes of systems. One class of such systems is the heavy fermion compounds that are non-Fermi liquids, and another comprises some exotic "spin liquids". The low energy excitations of these strongly correlated quantum systems are not readily described as particle-like or wave-like. They don't have simple quantum numbers and simple dispersion relations, and they result from complicated, correlated motion of electrons (or spins, or both). This has been known in condensed matter circles for some time, and is very neat. Much exciting theory work is being done to come up with good ways to treat such systems.

What I don't understand, and perhaps a reader can enlighten me, is how these ideas relate to "unparticles". Howard Georgi, also of Harvard, made a pretty big splash this past year by publishing a PRL (linked above in free form) about the possibility that there may be fundamental excitations of quantum fields (like the ones thought to be relevant in high energy physics) that are not simply described as particles. Since this paper came out, there are now 78 papers on the arxiv that deal with unparticles. So, is this a case of high energy physics reinventing an idea that's been known conceptually for some time in condensed matter? Or is there really a basic underlying difference here? I should point out that at present, while there is experimental evidence for non-particle-like excitations in condensed matter, there is not yet any evidence for such things in high energy experiments as far as I know.

Friday, November 23, 2007

Really? Seriously?

Sometimes I read a science article online or in the newspaper that I think is poor. This one I just don't know how to interpret. Lawrence Krauss is a solid guy, a very strong public advocate for science, and a very good popularizer of physics. Still, the idea that our observations of dark energy have somehow collapsed the quantum state of the entire universe is, umm, nuts on the same level as saying that the moon doesn't exist if no one is looking at it. There's no question that there are subtleties in worrying about applying quantum mechanics to the universe as a whole. Still, this carries the "spooky action at a distance" idea a bit far.

Saturday, November 17, 2007

This week in cond-mat

Two papers this week. I'll write about our own at a later date. These two are both connected to on-going long-term controversies in condensed matter/mesoscopic/nanoscale physics.

arxiv:0711.1810 - Capron et al., Low temperature dephasing in irradiated metallic wires
In the orthodox picture of metals (thought to be valid for relatively weak disorder), the quantum coherence time of electrons is expected to diverge toward infinity as the temperature approaches zero. Think about a two-slit experiment for electrons. If the electrons are well isolated from their environment, they can diffract off the slits and land on the screen, producing an interference pattern. If the electrons are coupled to environmental degrees of freedom that can change their state when the electron goes by, the relative phase of the electron wavefunctions going through each of the slits gets scrambled by that interaction, washing out the interference. In the usual 2-slit experiment, the degrees of freedom are those of detectors at the slits. Within a disordered metal, those environmental degrees of freedom can be lattice vibrations, other electrons, or magnetic impurities. For a decade now there has been an ongoing controversy about whether the coherence time (as inferred from some quantum correction to the classical electronic conductance) really does diverge, or whether it saturates as T -> 0. Intrinsic saturation would be a big deal - it would imply that the quasiparticle picture of electrons (Fermi liquid theory) fails at the low T limit. In this paper, the authors perform a very careful control experiment, looking at whether structural damage to silver nanowires can, by itself, introduce extra degrees of freedom that cause decoherence. They get this damage by ion-implanting Ag ions into Ag nanowires. The results show no sign of extra decoherence due to this irradiation.
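The washing out of interference by environmental phase scrambling can be illustrated with a toy two-slit model - entirely schematic, with made-up numbers rather than anything from the experiment: add a random phase kick to one path and average over many shots.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 201)      # screen position (arbitrary units)
delta = 20 * x                   # path-length phase difference across the screen

def pattern(phase_noise_std, shots=5000):
    # two-slit intensity, averaged over random environmental phase kicks
    kicks = rng.normal(0.0, phase_noise_std, size=(shots, 1))
    amp = 1 + np.exp(1j * (delta + kicks))   # sum of the two path amplitudes
    return np.mean(np.abs(amp) ** 2, axis=0)

coherent = pattern(0.0)    # isolated electron: full fringes, intensity 0 to 4
scrambled = pattern(10.0)  # strong dephasing: fringes wash out to a flat ~2
print(coherent.max(), scrambled.max())
```

The fringe contrast decays as exp(-sigma^2/2) with the phase-kick spread sigma, which is the sense in which coupling to the environment "scrambles" the relative phase.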

arxiv:0711.1464 - Baenninger et al., Low-temperature collapse of electron localisation (sic) in two dimensions
Another ongoing brouhaha has been about whether electrons confined to two dimensions have an insulating or metallic ground state in the presence of any disorder. Without interactions, the "Gang of Four" (Anderson, Abrahams, Ramakrishnan, and Licciardello) showed that even infinitesimal disorder leads to localization and an insulating ground state for an infinite 2d system. Of course, real electrons do interact with each other, and real systems are of finite size. One big complication in this whole discussion is in trying to tell the difference between a real, uniform, insulating state and the breakup of your system into inhomogeneous "puddles" of electrons due to the disorder potential. The Cambridge group has done some careful experiments on mesoscopic samples of rather clean 2d electron gas, and they've found that small regions whose higher-temperature resistances far exceed the quantum of resistance (~ h/e^2 ~ 26 kOhm) can show a crossover at low temperatures to what looks like a metallic state. I haven't been following this controversy in detail, but these data look very interesting, and I will have to read this closely.
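For reference, the quoted quantum of resistance is easy to check from fundamental constants:

```python
h = 6.62607015e-34    # Planck constant (J s)
e = 1.602176634e-19   # elementary charge (C)

R_q = h / e**2        # quantum of resistance
print(round(R_q), "Ohm")  # 25813 Ohm, i.e. ~26 kOhm
```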


Monday, November 12, 2007

Potpourri

A small selection of links....

This game is very addictive, educational, and as you play, you feed the hungry (albeit extremely slowly).

Now this is a nanotube radio! Rather than having the nanotube just be the nonlinear element responsible for demodulating the AM signal on the carrier wave, this one has the nanotube acting as the antenna and amplifier as well, effectively. I heard Alex Zettl get interviewed on NPR about it.

The FSP has an interesting post about ambition. Physics as a discipline has issues with this, with an historical attitude that anything less than a tenured job at Harvard is somehow inadequate - a notion that's wrongheaded and sociologically unhealthy.

Schlupp has a post about some frustrating science journalism. It is a shame that sometimes the media can't tell the difference between good science or engineering and crackpottery. On a plane last week I had someone (who realized I was a physicist from my reading material) ask me about the guy who can get hydrogen from seawater by hitting it with microwaves. Kind of cool, yes. Source of energy? Of course not - it takes more microwave power to break the water into hydrogen and oxygen than you can get back by burning the resulting hydrogen. It's called thermodynamics.

Monday, November 05, 2007

This week in cond-mat

Several entries from the arxiv this week. My descriptions here are a bit brief because of continued real-world constraints.

arxiv:0711.0343 - Dietl, Origin and control of ferromagnetism in dilute magnetic semiconductors and oxides
arxiv:0711.0340 - Dietl, Origin of ferromagnetic response in diluted magnetic semiconductors and oxides
These are two review articles by Tomasz Dietl, one of the big names in the dilute magnetic semiconductor (DMS) game. DMS are semiconductor materials that exhibit ferromagnetic order, usually because of doping with transition metal atoms that contain unpaired d electrons, such as manganese. The idea of integrating magnetic materials directly with semiconductor devices, and ideally controlling magnetism via electrical or optical means, is quite appealing. However, it is very challenging to achieve high magnetic ordering temperatures (e.g., room temperature) and decent electronic properties at the same time. In many systems the high doping levels required for the magnetism go hand in hand with lots of disorder, in part because crystal growth must be performed under nonequilibrium conditions to force enough transition metal atoms to sit on the appropriate lattice sites. Anyway, these articles (one coming out in J. Phys.: Cond. Matt. and the other in J. Appl. Phys.) should give you plenty of reading material if you're interested in this area.

arxiv:0711.0218 - Leek et al., Observation of Berry's phase in a solid state qubit
In basic quantum mechanics we learn that particles are described by a complex wavefunction that has a phase factor. Propagation of a particle in space racks up phase at a rate proportional to the particle's momentum. As Feynman would tell us, each possible trajectory of a particle from A to B then contributes some complex amplitude (with a phase). The total probability of finding the particle at B is the squared magnitude of the sum of all of those amplitudes, rather than the classical sum of the probabilities of each path. Phase differences between paths lead to interference terms, and are the sort of thing responsible for electron diffraction, for example. Besides propagating through space, there are other ways of accumulating phase. In the case of the Aharonov-Bohm effect, the vector potential leads to an additional phase factor that depends on trajectory. In the general case of Berry's phase, the slow variation of some external parameters (such as electric fields) can lead to a similar geometrical phase factor. The intro to this paper gives a nice discussion of the classical analog of this in terms of moving a little vector on the surface of a sphere. Anyway, this team has used a solid-state superconducting qubit to demonstrate this geometric phase explicitly. Quite nice.
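The simplest version of this - a spin-1/2 dragged slowly around a closed loop on the sphere - can even be checked numerically: the accumulated geometric phase comes out to half the solid angle enclosed by the loop. A small sketch (my own illustration, not anything from the paper):

```python
import numpy as np

def spin_state(theta, phi):
    # spin-1/2 coherent state pointing along (theta, phi) on the Bloch sphere
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

theta = 1.0                                 # polar angle of the loop (radians)
phis = np.linspace(0.0, 2 * np.pi, 20001)   # closed loop in azimuthal angle
states = [spin_state(theta, p) for p in phis]

# discrete Berry phase: (minus) the phase of the product of successive overlaps
overlaps = [np.vdot(states[k], states[k + 1]) for k in range(len(states) - 1)]
gamma = -np.angle(np.prod(overlaps))

# for spin-1/2 this magnitude should equal half the enclosed solid angle
print(abs(gamma), np.pi * (1 - np.cos(theta)))
```

The discrete product of overlaps is the standard gauge-invariant way to extract a geometric phase, since any phase convention chosen for the individual states cancels around the closed loop.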

arxiv:0710.5515 - Castelnovo et al., Magnetic monopoles in spin ice
One of the things that I find so interesting about condensed matter physics is the idea of emergent degrees of freedom. For example, phonons (quantized sound waves) are quantum mechanical quasiparticles in solids that can have well-defined quantum numbers, and arise because of the collective motion of large numbers of atoms. In a more exotic example, Cooper pairs in ordinary superconductors are objects with spin 0, charge -2e, yet are "built" out of electrons plus phonons. In a very exotic example, the quasiparticles in the fractional quantum Hall effect can have fractional charges and obey exotic statistics. In an even more extreme case, these authors propose that there are quasiparticle excitations in a kind of magnetically ordered insulator that act like magnetic monopoles. It seems that magnetic monopoles do not exist as elementary particles. Indeed, they would require a modification of Maxwell's equations. (In this solid state system the argument is that they exist as monopole/antimonopole pairs, so that the net divergence of the magnetic field is still zero). "Forbidden" particles emerging from the collective action of many electrons - a very neat idea, and it would appear that there may even be some experimental evidence for this already.

Wednesday, October 31, 2007

In honor of Halloween....

Three of my favorite science-related quotes from the movies, all from Ghostbusters:

Dean Yeager: Your theories are the worst kind of popular tripe; your methods are sloppy, and your conclusions are highly questionable. You are a poor scientist, Dr. Venkman.
---
Ray Stantz: Personally, I like the University. They gave us money and facilities, we didn't have to produce anything. You've never been out of college. You don't know what it's like out there. I've worked in the private sector. They expect results.
---
Peter Venkman: Back off, man! I'm a scientist!

Any other good ones to share? (Real science post coming in a day or two....)

Friday, October 26, 2007

Jobs jobs jobs

I figure it's probably a good idea to take advantage of the staggeringly enormous readership of this blog to point out several searches going on at Rice right now.

First, three searches are going on here at Rice in the Physics and Astronomy department at the moment. These are:
There is also an experimental nanophotonics search going on in Electrical and Computer Engineering.

Finally, the Chemistry department is doing a search for inorganic or physical chemists, broadly defined. The ad is on the departmental homepage.

Share and enjoy! If you want to discuss what Rice is like as a faculty member, please feel free to contact me and I'll be happy to talk.



Friday, October 19, 2007

Three papers and a video.

Three interesting papers on ASAP at Nano Letters at the moment:

http://dx.doi.org/10.1021/nl0717715 and http://dx.doi.org/10.1021/nl072090c are both papers where people have taken graphite flakes, oxidized them to make graphite oxide, and then suspended the resulting graphene oxide sheets in solvent. They then deposited the sheets onto substrates and made electronic devices out of them after trying to reduce the graphene oxide back to plain graphene. There are a couple of people here at Rice trying similar things from the chemistry side. It's interesting that a number of groups are all working on this at about the same time. That's one reason why it can be dangerous to try to jump into a rapidly evolving hot topic - it's easy to get scooped.

This one is a cute paper titled "Carbon nanotube radio". The science is nicely done, though not exactly surprising. AM radio works by taking an rf carrier signal and demodulating it to get back just the envelope of that carrier signal. Back in the early 20th century (or more recently, if you bought an old kit somewhere), people used to do the demodulating using a diode made semi-reliably by jamming a metal needle (a "cat's whisker") into a lead sulfide crystal - hence the term "crystal radio". It's simple trig math to see that a nonlinear IV curve (one with a nonzero d^2I/dV^2) can rectify an ac signal of amplitude V0 to give a dc signal of (1/4)(d^2I/dV^2)V0^2. Well, in this case the nonlinear element is a nanotube device. Cute, though I have to admit that I found the media hype a bit much. Wilson Ho did the same essential thing very nicely with an STM, but didn't talk about atomic-scale radio receivers....
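That (1/4)(d^2I/dV^2)V0^2 rectification result is easy to verify numerically for any smooth nonlinear I-V curve. A quick check with a diode-like toy characteristic (my own example - nothing nanotube-specific about it):

```python
import numpy as np

# toy nonlinear element: diode-like I-V curve (an assumed example, not the
# nanotube device's actual characteristic)
I0, Vt = 1e-9, 0.025                  # saturation current (A), thermal voltage (V)

def I(V):
    return I0 * (np.exp(V / Vt) - 1)

Vb, V0 = 0.2, 1e-3                    # dc bias point and small rf amplitude (V)
t = np.linspace(0.0, 2 * np.pi, 100000, endpoint=False)

# time-average the current over one rf cycle and subtract the static value
dc_shift = np.mean(I(Vb + V0 * np.cos(t))) - I(Vb)

# analytic second derivative at the bias point: I'' = I0 * exp(Vb/Vt) / Vt**2
d2I = I0 * np.exp(Vb / Vt) / Vt**2
print(dc_shift, 0.25 * d2I * V0**2)   # the two agree to leading order in V0
```

The agreement follows from Taylor expanding I(Vb + V0 cos wt) and using the fact that cos^2 averages to 1/2 over a cycle while odd powers average to zero.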

Lastly, via Scott Aaronson, a link to a fantastic math presentation. Watch the whole thing - this really is a model of clarity and public outreach. On a bitter-sweet note, in the credits at the end I realized that one of the people responsible for this was an acquaintance from college who has since passed away. Small world.

Tuesday, October 16, 2007

This week in cond-mat

Real life continues to be very busy this semester. Two interesting papers on the arxiv this week....

arxiv:0710.2845 - Fratini et al., Current saturation and Coulomb interactions in organic single-crystal transistors
The technology finally exists to do what He Who Must Not Be Named claimed to have done: use a field-effect geometry to gate significant charge densities (that is, a good fraction of a charge carrier per molecule) into the surface of a clean single crystal of an organic semiconductor. The Delft group has used Ta2O5 as a high-k gate dielectric, and is able to get 0.1 holes per rubrene molecule in a single-crystal FET geometry. In typical organic FETs, increasing the charge density in the channel improves transport by filling trap states and by moving the chemical potential in the channel toward the mobility edge in the density of states. Surprisingly, Fratini et al. have found that the channel conductance actually saturates at very high charge densities instead of continuing to increase. The reason for this appears to be Coulomb interactions in the channel due to the high carrier density and the polaronic nature of the holes. The strong coupling between the carriers and the dielectric layer leads to a tendency toward self-trapping; add strong repulsion and poor screening into the mix, and you have a more insulating state induced by this combination of effects. Very interesting!

arxiv:0710.2323 - Degen et al., Controlling spin noise in nanoscale ensembles of nuclear spins
Dan Rugar at IBM has been working on magnetic resonance force microscopy (MRFM) for a long time, and they've got the sensitivity to the point where they can detect hundreds of nuclear spins (!). (That may not seem impressive if you haven't been following this, but it's a tour de force experiment that's come very far from the initial work.) The basic idea of MRFM is to have a high-Q cantilever that is mechanically resonant at the spin resonance frequency and coupled via magnetic interactions to the sample - that way, as the polarized spins precess, they drive the cantilever's resonant mode. When they look at such a small number of spins, the statistical fluctuations in the spin polarization are readily detected. This is a problem for imaging, actually - the timescale for the natural fluctuations is long enough that the signal bops around quite a bit during a line scan. Fortunately, Degen et al. have demonstrated in this paper that one can deliberately randomize the magnetization with bursts of rf pi/2 pulses, and thus suppress the impact of the fluctuations on imaging by making the effective fluctuations much more rapid. This is a nice mix of pretty physics and very clever experimental technique.

Wednesday, October 10, 2007

Giant magnetoresistance

I think it's great that the physics Nobel this year went for giant magnetoresistance (GMR). GMR is intrinsically a quantum mechanical effect, an example of a nanoscale technology that's made it out of the lab and into products, and one of the big reasons that you can buy a 500GB hard drive for $100. (Good job, Sujit, for the advance pick!)

The story in brief: Back in the ancient past (that is, the 1980s), the read heads on hard drives operated based on the anisotropic magnetoresistance (AMR). For band structure reasons, the electrical resistivity of ferromagnetic metals depends a bit on the relative orientations of M, the magnetization, and J, the current density. In the common NiFe alloy permalloy, for example, the resistivity is about 2% larger when M is parallel to J than when M is perpendicular to J. To read out the bits on magnetic media, a strip of magnetically soft material (one whose M is easily reoriented) was used, and the fringing fields from the disk media could alter the direction of that strip's M, leading to changes in the resistance that were translated into voltage changes corresponding to 1s and 0s.

In the late 1980s, Fert and Grünberg demonstrated that stacks of nanoscale layers of alternating magnetic and nonmagnetic metals had remarkable magnetoresistive properties. When the magnetizations of the FM layers are aligned, the mobile electrons can move smoothly between the layers, leading to relatively low resistance. However, when the magnetizations of the FM layers are anti-aligned, there is a mismatch between the densities of states for spin-up and spin-down electrons in adjacent layers. The result is enhanced scattering of spin-polarized electrons at the interfaces between the normal and FM layers. (Crudely, a spin-down electron that comes from being the majority spin in one FM layer goes through the normal metal and runs into the anti-aligned FM layer, where that spin orientation is now the minority spin - there are too few empty states available for that electron in the new FM layer, so it is likely to be reflected from the interface.) More scattering = higher resistance. The resulting GMR effect can be 10x larger than AMR, meaning that read heads based on GMR multilayers could read much smaller bits (with smaller fringing fields) at the same signal-to-noise ratio.
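That two-channel picture can be captured in a toy resistor model. This is a sketch for illustration only - the resistance values r and R below are made up, and a real multilayer calculation is far more involved. Each spin species sees a series chain of layer resistances, and the two spin channels conduct in parallel.

```python
# Toy two-current (Mott) model of GMR. Illustrative only:
# r = resistance a spin sees in a layer where it is the majority spin,
# R = resistance where it is the minority spin (R > r). Made-up values.

def parallel(a, b):
    """Two resistances in parallel."""
    return a * b / (a + b)

r, R = 1.0, 10.0  # arbitrary units

# Parallel (aligned) magnetizations: one spin channel is majority in
# both layers (r + r), the other is minority in both (R + R).
R_P = parallel(r + r, R + R)

# Antiparallel magnetizations: each spin channel is majority in one
# layer and minority in the other (r + R for both channels).
R_AP = parallel(r + R, r + R)

gmr = (R_AP - R_P) / R_P   # standard GMR ratio
print(f"R_P = {R_P:.3f}, R_AP = {R_AP:.3f}, GMR ratio = {gmr:.3f}")
# A little algebra gives (R_AP - R_P)/R_P = (R - r)^2 / (4 r R):
# the effect grows with the spin asymmetry of the scattering.
```

With a 10:1 spin asymmetry in the layer resistances, this toy model already gives a magnetoresistance of order 200%, which is why "giant" was the right word compared to the ~2% AMR effect.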

Thursday, October 04, 2007

Challenges in measurement

This post is only going to be relevant directly for those people working on the same kind of stuff that my group does. Still, it gives a flavor of the challenges that can pop up unexpectedly in doing experimental work.

Often we are interested in measuring the electronic conductance of some nanodevice. One approach to doing this is to apply a small AC voltage to one end of the device, and connect the other end to something called a current preamplifier (or a current-to-voltage converter, or a glorified ammeter) to measure the amount of current that flows. It's possible to build your own current preamp, but many nanodevice labs have a couple of general purpose ones lying around. A common one is the SR570, made by Stanford Research. This gadget is pretty nice - it has up to a 1 MHz bandwidth, it has built-in filter stages, it is remotely programmable, and it has various different gain settings depending on whether you want to measure microamps or picoamps of current.

Here's the problem, though. One of my students observed that his devices seemed to fail at a surprisingly high rate when using the SR570, while the failure rate was dramatically lower when using a different (though more expensive) preamp, the Keithley 428. After careful testing he found that when the SR570 changes gain ranges (there is an audible click of an internal relay when this happens, as the input stage of the amplifier is switched), spikes of > 1 V (!) lasting tens of microseconds show up at the input of the amplifier (the part directly connected to the device), as seen by monitoring that input with an oscilloscope. Our nanoscale junctions are very fragile, and these spikes irreversibly damage the devices. The Keithley, on the other hand, doesn't do this and is very quiet. According to SRS, this appears to be an unavoidable trait of the SR570. We're working to mitigate this problem, but it's probably good for people out there in the community using these things to know about it.

Sunday, September 30, 2007

This week in cond-mat

Two recent papers in cond-mat this time, both rather thermodynamics-related. That's appropriate, since I'm teaching undergrad stat mech these days.

arxiv:0709.4181 - Kubala et al., Violation of Wiedemann-Franz law in a single-electron transistor
The Wiedemann-Franz law is one of those things taught in nearly every undergraduate solid-state physics class. It also happens to be extremely useful for doing cryogenic engineering, as I learned during my grad school days. The idea is simple: kinetic theory arguments (and dimensional analysis) imply that the conductivity for some quantity carried by some excitations is given by the product (capacity of the excitations to carry that quantity, per unit volume)*(speed of the excitations)*(mean free path of the excitations), with some geometric factor out in front (e.g., 1/3 for three-dimensional diffusive motion of the excitations). For example, the electrical conductivity in a 3d, diffusive, ordinary metal is (1/3) e^2 g(E_F) v_F \ell, where e is the electronic charge, g(E_F) is the density of states at the Fermi level, v_F is the Fermi velocity for conduction electrons, and \ell is the mean free path for those electrons (at low T, \ell is set by impurity scattering or boundary scattering). Similarly, electrons in a normal metal also carry thermal energy, with a heat capacity per electron that scales like T, while the speed and mean free path of the electrons are as above. This implies the Wiedemann-Franz law: the ratio of the thermal conductivity to (electrical conductivity*T) in an ordinary metal should be a constant, the Lorenz number, ~25 nanoOhm W/K^2. Deviations from the W-F law are indicators of interesting physics - basically that simple metallic electrons either aren't the dominant carriers of the electrical current, or that the charge carriers don't carry thermal energy in the usual way. This paper is a theory piece by the Helsinki group showing that the W-F law fails badly for single-electron transistors. In particular, in the co-tunneling regime, when current is carried via quantum coherent processes, the Lorenz number is predicted to be renormalized upward by a factor of 9/5.
This will be challenging to measure in experiments, but exquisite thermal conductivity measurements have been performed in similar systems in the past.
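For the numerically inclined, here's a quick sanity check of the Lorenz number and the predicted cotunneling enhancement (the 9/5 factor is the paper's result; the constants are standard CODATA values):

```python
import math

# Sommerfeld value of the Lorenz number, L0 = (pi^2 / 3) (k_B / e)^2,
# which sets the Wiedemann-Franz ratio kappa / (sigma * T) in an
# ordinary metal.
k_B = 1.380649e-23    # Boltzmann constant, J/K
e = 1.602176634e-19   # electron charge, C

L0 = (math.pi**2 / 3) * (k_B / e)**2
print(f"L0 = {L0:.3e} W Ohm / K^2")   # ~2.44e-8, i.e. ~24.4 nanoOhm W/K^2

# Prediction for the cotunneling regime of a single-electron transistor:
# the effective Lorenz number is renormalized upward by a factor of 9/5.
L_cotunneling = (9 / 5) * L0
print(f"L (cotunneling) = {L_cotunneling:.3e} W Ohm / K^2")
```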

arxiv:0709.4125 - Allahverdyan et al., Work extremum principle: structure and function of quantum heat engines
Marlan Scully (also here) caused a bit of a flurry of excitement a few years ago by proposing a form of heat engine that uses quantum coherence and its destruction to do work, in addition to the conventional approach of using two thermal baths at different temperatures. This paper is a theoretical analysis of some such quantum heat engines. Carnot can sleep easy - in the end you can't violate the Carnot efficiency even with quantum heat engines, if you dot all the "i"s and cross all the "t"s. Neat to think about, though, and of some experimental relevance to the cold atom community, who can prepare highly coherent atomic gases at very low temperatures. This paper is long and detailed and I don't claim to have read it in depth, but it looks interesting.

Tuesday, September 25, 2007

Revised: Primer on faculty searches, part I

It's that time of year again, with Chad Orzel and the Incoherent Ponderer both posting about the faculty job market and job hunting. So, I'm recycling a post of mine from last year describing the search process, at least the way it's done at Rice. I'm going to insert some revisions that are essentially tips to would-be candidates, though I think the IP has already done a good job on this, and some are basically common sense. An obvious disclaimer: this is based on my experience, and may not generalize well to other departments with vastly differing cultures or circumstances.

Here are the main steps in a search:
  • The search gets authorized. This is a big step - it determines what the position is, exactly: junior-only vs. open to junior or senior candidates; a new faculty line vs. a replacement vs. a bridging position (i.e. we'll hire now, and when X retires in three years, we won't look for a replacement then).
  • The search committee gets put together. In my dept., the chair asks people to serve. If the search is in condensed matter, for example, there will be several condensed matter people on the committee, as well as representation from the other major groups in the department, and one knowledgeable person from outside the department (in chemistry or ECE, for example). The chairperson or chairpeople of the committee meet with the committee or at least those in the focus area, and come up with draft text for the ad.
  • The ad gets placed, and canvassing of lots of people who might know promising candidates begins. A special effort is made to make sure that all qualified women and underrepresented minority candidates know about the position and are asked to apply (the APS has mailing lists to help with this, and direct recommendations are always appreciated - this is in the search plan). Generally, the ad really does list what the department is interested in. It's a huge waste of everyone's time to have an ad that draws a large number of inappropriate (i.e. don't fit the dept.'s needs) applicants. The exception to this is the generic ad typically placed by MIT and Berkeley: "We are looking for smart folks. Doing good stuff. In some area." They run the same ad every year, trolling for talent. They seem to do ok. The other exception is when a university already knows who they want to get for a senior position, and writes an ad so narrow that only one person is really qualified. I've never seen this personally, but I've heard anecdotes.
  • In the meantime, a search plan is formulated and approved by the dean. The plan details how the search will work, what the timeline is, etc. This plan is largely a checklist to make sure that we follow all the right procedures and don't screw anything up. It also brings to the fore the importance of "beating the bushes" - see above. A couple of people on the search committee will be particularly in charge of oversight on affirmative action/equal opportunity issues.
  • The dean meets with the committee and we go over the plan, including a refresher for everyone on what is or is not appropriate for discussion in an interview (for an obvious example, you can't ask about someone's religion).
  • Applications come in and are sorted; rec letters are collated. Each candidate has a folder.
  • The committee begins to review the applications. Generally the members of the committee who are from the target discipline do a first pass, to at least weed out the inevitable applications from people who are not qualified according to the ad (i.e. no PhD; senior people wanting a senior position even though the ad is explicitly for a junior slot; people with research interests or expertise in the wrong area). Applications are roughly rated by everyone into a top, middle, and bottom category. Each committee member comes up with their own ratings, so there is naturally some variability from person to person. Some people are "harsh graders". Some value high impact publications more than numbers of papers. Others place more of an emphasis on the research plan, the teaching statement, or the rec letters. Yes, people do value the teaching statement - we wouldn't waste everyone's time with it if we didn't care. Interestingly, often (not always) the people who are the strongest researchers also have very good ideas and actually care about teaching. This shouldn't be that surprising. As a friend of mine at a large state school once half-joked to me: 15% of the faculty in any department do the best research; 15% do the best teaching; 15% do the most service and committee work; and it's often the same 15%.
  • Once all the folders have been reviewed and rated, a relatively short list (say 20-25 or so out of 120 applications) is arrived at, and the committee meets to hash that down to, in the end, five or so to invite for interviews. In my experience, this happens by consensus, with the target discipline members having a bit more sway in practice since they know the area and can appreciate subtleties - the feasibility and originality of the proposed research, the calibration of the letter writers (are they first-rate folks? Do they always claim every candidate is the best postdoc they've ever seen?). I'm not kidding about consensus; I can't recall a case where there really was a big, hard argument within the committee. I know I've been lucky in this respect, and that other institutions can be much more feisty. The best, meaning most useful, letters, by the way, are the ones that say things like "This candidate is very much like CCC and DDD were at this stage in their careers." Real comparisons like that are much more helpful than "The candidate is bright, creative, and a good communicator." Regarding research plans, the best ones (for me, anyway) give a good sense of near-term plans, medium-term ideas, and the long-term big picture, all while being relatively brief and written so that a general committee member can understand much of it (why the work is important, what is new) without being an expert in the target field. It's also good to know that, at least at my university, if we come across an applicant that doesn't really fit our needs, but meshes well with an open search in another department, we send over the file. This, like the consensus stuff above, is a benefit of good, nonpathological communication within the department and between departments.
That's pretty much it up to the interview stage. No big secrets. No automated ranking schemes based exclusively on h numbers or citation counts.

Tips for candidates:
  • Don't wrap your self-worth up in this any more than is unavoidable. It's a game of small numbers, and who gets interviewed where can easily be dominated by factors extrinsic to the candidates - what a department's pressing needs are, what the demographics of a subdiscipline are like, etc. Every candidate takes job searches personally to some degree because of our culture, but don't feel like this is some evaluation of you as a human being.
  • Don't automatically limit your job search because of geography unless you have some overwhelming personal reasons. I almost didn't apply to Rice because neither my wife nor I were particularly thrilled about Texas, despite the fact that neither of us had ever actually visited the place. Limiting my search that way would've been a really poor decision.
  • Really read the ads carefully and make sure that you don't leave anything out. If a place asks for a teaching statement, put some real thought into what you say - they want to see that you have actually given this some thought, or they wouldn't have asked for it.
  • Research statements are challenging because you need to appeal to both the specialists on the committee and the people who are way outside your area. My own research statement back in the day was around three pages. If you want to write a lot more, I recommend having a brief (2-3 page) summary at the beginning followed by more details for the specialists. It's good to identify near-term, mid-range, and long-term goals - you need to think about those timescales anyway. Don't get bogged down in specific technique details unless they're essential. You need committee members to come away from the proposal knowing "These are the Scientific Questions I'm trying to answer", not just "These are the kinds of techniques I know".
  • Be realistic about what undergrads, grad students, and postdocs are each capable of doing. If you're applying for a job at a four-year college, don't propose to do work that would require an experienced grad student putting in 60 hours a week.
  • Even if they don't ask for it, you need to think about what resources you'll need to accomplish your research goals. This includes equipment for your lab as well as space and shared facilities. Talk to colleagues and get a sense of what the going rate is for start-up in your area. Remember that four-year colleges do not have the resources of major research universities. Start-up packages at a four-year college are likely to be 1/4 of what they would be at a big research school (though there are occasional exceptions). Don't shave pennies - this is the one prime chance you get to ask for stuff! On the other hand, don't make unreasonable requests. No one is going to give a junior person a start-up package comparable to a mid-career scientist.
  • Pick letter-writers intelligently. Actually check with them that they're willing to write you a nice letter - it's polite and it's common sense. Beyond the obvious two (thesis advisor, postdoctoral mentor), it can sometimes be tough finding an additional person who can really say something about your research or teaching abilities. Sometimes you can ask those two for advice about this. Make sure your letter-writers know the deadlines and the addresses.
I'll revise more later if I have the time.

Monday, September 24, 2007

2007 Nobel Prize in Physics

Time for pointless speculation. I suggest Michael Berry and Yakir Aharonov for the 2007 physics Nobel, because of their seminal work on nonclassical phase factors in quantum mechanics. Thoughts?

Saturday, September 22, 2007

Two seminars this past week

I've been remiss by not posting more interesting physics, either arxiv or published. I'll try to be better about that, though usually those aren't the posts that actually seem to generate comments. For starters, I'll write a little about two interesting condensed matter seminars that we had this week. (We actually ended up with three in one week, which is highly unusual, but I was only able to go to two.)

First, my old friend Mike Manfra from Bell Labs came and gave a talk about the interesting things that one sees in two-dimensional hole systems (2dhs) on GaAs (100). Over the last 25 years, practically a whole subdiscipline (including two Nobel prizes) has sprung up out of our ability to make high quality two-dimensional electron systems (2des). If you have a single interface between GaAs below and AlxGa(1-x)As above, and you put silicon dopants in the AlGaAs close to the interface, charge transfer plus band alignment plus band bending combine to give you a layer of mobile electrons confined in a roughly triangular potential well at the interface. Those electrons are free to move within the plane of the interface, but they typically have no ability to move out of the plane. (That is, the energy to excite momentum in the z direction is greater than their Fermi energy.) Now it's become possible to grow extremely high quality 2dhs, using carbon as a dopant rather than silicon. The physics of these systems is more complicated than the electron case, because holes live in the valence band and experience strong spin-orbit effects (in contrast to electrons in the conduction band). In the electron system, it's known that at relatively low densities, low temperatures, and moderate magnetic fields, there is a competition between different possible ground states, including ones where the electron density is spatially complicated ("stripes", "bubbles", "nematics"). Manfra presented some nice work on the analogous case with holes, where the spin-orbit complications make things even more rich.

Then yesterday we had a talk by Satoru Nakatsuji from the ISSP at the University of Tokyo. He was talking about an extremely cool material, Pr2Ir2O7. This material is a metal, but because of its structure it has very complicated low temperature properties. For example, the Pr ions live on a pyrochlore lattice, which consists of corner-sharing tetrahedra. The ions are ferromagnetically coupled (they want to align their spins), but the lattice structure is a problem because it results in geometric frustration - not all the spins can be satisfied. As a result, the spins never order at nonzero temperature (at least, down to the milliKelvin range) despite having relatively strong couplings. This kind of frustration is important in things like water ice, too. In water ice, the hydrogens can be thought of as being at the corners of such tetrahedra, but the O-H bond lengths can't all be the same. For each tetrahedron, two are short (the covalent O-H bonds) and two are long (hydrogen bonds). The result is a ground state for water ice that is highly degenerate, leading to an unusual "extra" residual entropy at T = 0 of (R/2) ln(3/2) per mole of hydrogen atoms (in contrast to the classical third law of thermodynamics, which says entropy goes to zero at T = 0). The same kind of thing happens in Pr2Ir2O7 - the spins on the tetrahedron corners have to be "two-in" and "two-out" (see the link above), leading to the same kind of residual entropy as in water ice. This frustration physics is just the tip of the iceberg (sorry) of what Nakatsuji discussed. Very neat.
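The counting behind that residual entropy is short enough to sketch. This is just Pauling's back-of-the-envelope argument, not anything specific to the talk: per tetrahedron, only the "two-in, two-out" configurations are allowed, and each spin is shared between two tetrahedra.

```python
import math
from itertools import product

R_gas = 8.314462618  # gas constant, J/(mol K)

# Ice rules: each tetrahedron carries 4 Ising degrees of freedom
# (proton near/far, or spin in/out), and the ice rule demands
# exactly "two-in, two-out".
configs = list(product([0, 1], repeat=4))
allowed = sum(1 for c in configs if sum(c) == 2)
print(allowed, "of", len(configs), "tetrahedron configurations allowed")

# Pauling estimate: N spins, each shared between two tetrahedra, so
# N/2 tetrahedra. W ~ 2^N * (6/16)^(N/2) = (3/2)^(N/2), giving a
# residual entropy S = (R/2) ln(3/2) per mole of spins.
S_residual = (R_gas / 2) * math.log(3 / 2)
print(f"S_residual = {S_residual:.3f} J/(mol K)")
```

That ~1.7 J/(mol K) is essentially what's measured in spin ice materials, which is why the water-ice analogy is more than just a cute name.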

Friday, September 14, 2007

The secret joys of running a lab II: equipment

The good news is that we're getting a cool new piece of equipment to be installed here next week. The bad news (apart from the fact that it uses liquid helium - see previous post) is that I've been spending my morning sifting through US import tariff codes trying to come up with a number that will make the shipping agent happy. You might think that the tariff code supplied by the vendor would be good enough. Apparently you'd be wrong. You might think that this would be the job of a customs broker. Again, apparently you'd be wrong. As the Incoherent Ponderer pointed out, there are many aspects of our jobs for which we never receive formal training. Customs agent is one. By the way: can anyone explain to me why US tariff codes are maintained by the US Census Bureau? Ok, so they're part of the Department of Commerce, but this is just odd.

Thursday, September 13, 2007

The secret joys of running a lab: helium.

In my lab, and in many condensed matter physics labs around the world, we use liquid helium to run many of our experiments. At low temperatures, many complicating effects in condensed matter systems are "frozen out", and it becomes easier to understand the effects that remain. Often we are interested in the ground state of some system and want to reduce thermal excitations. Quantum effects are usually more apparent at low temperatures because the inelastic processes that lead to decoherence are suppressed as T approaches zero. For example, the quantum coherence length (the distance scale over which the phase of an electron's wavefunction is well defined before it gets messed up due to inelastic effects of the environment) of an electron in a metal like silver at room temperature is on the order of 1 nm, while that length can be thousands of times longer at 4.2 K, the boiling point of liquid helium at atmospheric pressure. Those kinds of temperatures are also necessary for running good superconducting magnet systems.

The downside of liquid helium is that it's damned expensive, and getting more so by the minute. Running at full capacity I could blow through several thousand liters in a year, and at several dollars a liter minimum plus overhead, that's real money. As a bonus, lately our supplier of helium has become incredibly unreliable, missing orders and generally flaking out, while simultaneously raising prices because of actual production shortages. I just had to read the sales guy the riot act, and if service doesn't improve darn fast, we'll take our business elsewhere, as will the other users on campus. (Helium comes from the radioactive decay of uranium and other alpha emitters deep in the earth, and comes out of natural gas wells.) The long-term solutions are (a) set up as many cryogen-free systems as possible, and (b) get a helium liquefier to recycle the helium that we do use. Unfortunately, (a) requires, for each system, an upfront cost comparable to about eight years of that system's helium consumption, and (b) also necessitates big capital expenses as well as an ongoing maintenance issue. Of course none of these kinds of costs are the sort of thing that it's easy to convince a funding agency to support. Too boring and pedestrian.

Fortunately, when you work at really nanometer scales, interesting physics often happens at higher temperatures. I've been lucky that two major things going on in my lab right now don't require helium at all. Still, it's bad enough worrying about paying students without the added fun of helium concerns.

UPDATE: See here.

Sunday, September 09, 2007

Other Packard meeting highlights

I'm back from California, and the remainder of the Packard meeting was just as much intellectual fun as the first day. It's great to see so much good science and engineering outside my own discipline. Some fun things I learned:
  • Plants really can communicate by smell (that is, by giving off and detecting volatile compounds).
  • Many flying insects have evolutionarily found wing flap patterns that optimize for minimum energy consumption when hovering.
  • Most of the huge number of insect species in tropical rainforests (at least in New Guinea) are specialist feeders, preferring to eat only one type of plant.
  • When you split a molecular ion (say I2-) into a neutral atom and an atomic ion, the coherent superposition (in this case, 1/\sqrt(2) [(I + I-) + (I- + I)]) can persist even when the atom and ion are separated by more than 10 atomic diameters.
  • Super fancy mass spec plus amazing statistical capabilities can let you do serious proteomics.
  • There may have been as many as four supercontinent phases and two "snowball earth" phases in the last three billion years.
  • If you come up with a computationally efficient way to model viscoelastic materials (e.g. jello, human skin), you can develop virtual surgery tools for reconstructive surgeons, and win an Oscar for special effects by modeling Davy Jones for POTC II.
  • If you develop a DNA microarray chip that lets you cheaply and reliably identify any known virus or the nearest relative of an unknown virus, and you want to use this clinically, the established medical testing companies will react in a very negative way (because they're afraid that if you're successful, they won't be able to keep charging insurers $3K per possibly unnecessary blood test). The fact that you can save lives won't be of interest to them.
  • Comparing different measurement techniques can really tell you a lot about how cells sense and respond to touch.
  • You can design a Si photonic crystal to act as a superprism and show negative refraction and negative diffraction, all at the same time, over a useful bandwidth near 1.55 microns wavelength (the standard telecommunications band).
I know I'm leaving some out, too. Very fun stuff.

Friday, September 07, 2007

Packard meeting

I'm currently in Monterey thanking the Packard Foundation for their generous support. They're fantastic, and their fellowship has been a godsend that's really given me the flexibility in my research that I've needed. The best part about their annual meetings is that it's a chance for me to listen to good talks pitched to a general audience on an enormously broad set of science and engineering subjects. Some things that I learned yesterday:
  • It's possible to do successful astronomical planet-hunting surveys using 300mm camera lenses to make a telescope array.
  • There are molecules and molecular ions in astronomical gas clouds that are extremely difficult to make and study on earth (e.g., CH5-; C6H7+).
  • The human brain is 2% of the body's mass but uses 20% of the body's oxygen. It also has roughly 10x the concentration of iron, copper, and zinc as other soft tissues in the body.
  • Chemical "noise" (e.g., concentration fluctuations) is essential for some kinds of cell differentiation.
  • There are other photoactive parts in your eye besides rods and cones, and if those other parts are intact, your body clock can still re-set itself even in the absence of vision.
  • Soft tissue can (pretty convincingly) survive inside fossil bones dating back tens of millions of years.
  • Viral phylogeny shows convincingly that HIV did not start from contaminated polio vaccines grown in monkeys, and that HIV came from Africa first to Haiti, and then from Haiti to the US in the late 1960s.
  • Lots of microbes live as biofilms on the ocean floor via chemical energy gained from the decomposition of basaltic rock.

Wednesday, August 29, 2007

Invited talk suggestions, APS March Meeting 2008

Along with Eric Isaacs, I am co-organizing a focus topic at the March Meeting of the APS this year on "Fundamental Challenges in Transport Properties of Nanostructures". The description is:
This focus topic will address the fundamental issues that are critical to our understanding, characterization and control of electronic transport in electronic, optical, or mechanical nanostructures. Contributions are solicited in areas that reflect recent advances in our ability to synthesize, characterize and calculate the transport properties of individual quantum dots, molecules and self-assembled functional systems. Resolving open questions regarding transport in nanostructures can have a huge impact on a broad range of future technologies, from quantum computation to light harvesting for energy. Specific topics of interest include: fabrication or synthesis of nanostructures involved with charge transport; nanoscale structural characterization of materials and interfaces related to transport properties; advances in the theoretical treatment of electronic transport at the nanoscale; and experimental studies of charge transport in electronic, optical, or mechanical nanostructures.
The sorting category is 13.6.2, if you would like to submit a contributed talk. Until Friday, August 31, we're still soliciting suggestions for invited speakers for this topic, and I would like to hear what you out there would want to see. If you've got a suggestion, feel free either to post below in the comments, or to email me with it, including the name of the suggested speaker and a brief description of why you think they'd be appropriate. The main restriction is that suggested speakers can't have given an invited talk at the 2007 meeting. Beyond that, while talks by senior people can be illuminating, this is also a great opportunity for postdocs or senior students to present their work to a broad audience. Obviously space is limited, and I can make no promises, but suggestions would be appreciated. Thanks.

Tuesday, August 28, 2007

Quantum impurities from Germany II

A recurring theme at the workshop in Dresden last week was quantum impurities driven out of equilibrium. In general this is an extremely difficult problem! One of the approaches discussed was that of Natan Andrei's group, presented here and here. I don't claim to understand the details, but schematically the idea is to remap the general problem into a scattering language. You set up the nonequilibrium aspect (in the case of a quantum dot under bias, this corresponds to setting the chemical potentials of the leads at unequal values) as a boundary condition. By recasting things this way, you can use a clever ansatz to find eigenstates of the scattering form of the problem, and with enough effort you can do this for different initial conditions and map out the full nonequilibrium response. Entropy production and eventual relaxation of the charge carriers far from the dot happens "at infinity". Andrei gives a good (if dense) talk, and this formalism seems very promising, though it also seems like actually calculating anything for a realistic system requires really solving for the many-body wavefunctions.

Tuesday, August 21, 2007

Quantum impurities from Germany

I'm currently at a workshop on quantum impurity problems in nanostructures and molecular systems, sponsored by the Max Planck Institute for Complex Systems here in Dresden. A quantum impurity problem is defined by a localized subsystem (the impurity) with some specific quantum numbers (e.g. charge; spin) coupled to nonlocal degrees of freedom (e.g. a sea of delocalized conduction electrons; spin waves; phonons). The whole coupled system of impurity (or impurities) + environment can have extremely rich properties that are very challenging to deduce, even if the individual subsystems are relatively simple.

A classic example is the Kondo problem, with a localized impurity site coupled via tunneling to ordinary conduction electrons. The Coulomb repulsion is strong enough that the local site can really be occupied by only one electron at a time. However, the total energy of the system can be reduced if the localized electron can undergo high-order virtual processes where it can pop into the conduction electron sea and back. The result is an effective magnetic exchange between the impurity site and the conduction electrons, as well as an enhanced density of states at the Fermi level for the conduction electrons. The ground state of this coupled system involves correlations between many electrons, and results in a net spin singlet. Like many impurity problems, the Kondo problem can't be solved by perturbation theory.
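To get a feel for why perturbation theory fails here, consider the emergent energy scale, the Kondo temperature, which to leading order goes like T_K ~ D exp(-1/(J rho)), where D is the conduction bandwidth and J rho is the dimensionless exchange coupling. A minimal numerical sketch (prefactors dropped, and the numbers below are purely illustrative):

```python
import math

def kondo_temperature(J_rho, D=1.0):
    """Leading-order estimate T_K ~ D * exp(-1/(J*rho)), prefactors dropped.
    J_rho is the dimensionless exchange coupling, D the conduction bandwidth."""
    return D * math.exp(-1.0 / J_rho)

# T_K depends on the coupling non-analytically: exp(-1/x) has a Taylor
# series about x = 0 that vanishes to all orders, which is why no finite
# order of perturbation theory in J captures the Kondo scale.
for J_rho in [0.05, 0.1, 0.2, 0.4]:
    print(f"J*rho = {J_rho:4.2f}  ->  T_K/D ~ {kondo_temperature(J_rho):.3e}")
```

Doubling the coupling changes T_K by many orders of magnitude, which is part of why gate-tunable nanostructure implementations of these problems are so appealing.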

The point is, with nanostructures it is now possible to implement all kinds of impurity problems experimentally. What is really exciting is the prospect of using these kinds of tunable model systems to study strong correlation physics (e.g. quantum phase transitions in heavy fermion compounds; non-Fermi liquid "bad metals") in a very controlled setting, or in regimes that are otherwise hard to probe (e.g., impurities driven out of equilibrium). This workshop is about 70 or 80 people, a mix of theorists and experimentalists, all interested in this stuff. When I get back I'll highlight a couple of the talks.

Thursday, August 16, 2007

Superluminality

Today this blurb from the New Scientist caused a bit of excitement around the web. While it sounds at first glance like complete crackpottery, and is almost certainly a case of terrible science journalism, it does involve an interesting physics story that I first encountered back when I was looking at grad schools. I visited Berkeley as a prospective student and got to meet Ray Chiao, who asked me how long it takes a particle with energy E to tunnel through a rectangular barrier of energetic height U > E and thickness d. He went to get a glass of water, and wanted me to give a quick answer when he got back a couple of minutes later. Well, if I wasn't supposed to do a real calculation, I figured there were three obvious guesses: (1) \( d/c\); (2) \(d/(\hbar k/m)\), where \(k = \sqrt{2 m (U-E)}/\hbar\) - basically solving for the (magnitude of the imaginary) classical velocity and using that; (3) 0. It turns out that this tunneling time controversy is actually very subtle. When you think about it, it's a funny question from the standpoint of quantum mechanics. You're asking, of the particles that successfully traversed the barrier, how long were they in the classically forbidden region? This has a long, glorious history that is discussed in detail here. Amazingly, the answer is that the tunneling velocity (d / the tunneling time) can exceed c, the speed of light in a vacuum, depending on how it's defined. For example, you can consider a Gaussian wave packet incident on a barrier, and ask how fast the packet makes it through. There will be some (smaller than incident) transmitted wavepacket, and if you look at how long it takes the center of the transmitted wave packet to emerge from the barrier after the center of the incident packet hits the barrier, you can get superluminal speeds out for the center of the wavepacket. (You can build up these distributions statistically by doing lots of single-photon counting experiments.)
Remarkably, you can actually have a situation where the exiting pulse leaves the barrier before the peak of the entering pulse hits the barrier. This would correspond to a negative (average) velocity (!), and has actually been demonstrated in the lab. So, shouldn't this bother you? Why doesn't this violate causality and break special relativity? The conventional answer is that no information is actually going faster than light here. The wavepackets we've been considering are all smooth, analytic functions, so that the very leading tail of the incident packet contains all the information. Since that leading tail is, in Gaussian packets anyway, infinite in extent, all that's going on here is some kind of pulse re-shaping. The exiting pulse is, in some sense, just a modified version of information that was already present there. It all comes down to how one defines a signal velocity, as opposed to a phase velocity, group velocity, energy velocity, or any of the other concepts dreamed up by Sommerfeld back in the early 20th century when people first worried about this. Now, this kind of argument from analyticity isn't satisfying to everyone, particularly Prof. Nimtz. He has long argued that something more subtle is at work here - that superluminal signaling is possible, but tradeoffs between bandwidth and message duration ensure that causality can't be violated. Well, according to his quotes in today's news, apparently related to this 2-page thing on the arxiv, he is now making very strong statements about violating special relativity. The preprint is woefully brief and shows no actual data - for such an extraordinary claim in the popular press, this paper is completely inadequate. Anyway, it's a fun topic, and it really forces you to think about what causality and information transfer really mean.
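If you want to play with Chiao's question yourself, one standard answer is the "phase time" (Wigner group delay), computable from the textbook transmission amplitude for a rectangular barrier. The sketch below uses illustrative electron-scale parameters (not a model of any particular experiment) and shows the Hartman effect: the tunneling time saturates as the barrier thickens, so the effective velocity d/tau eventually exceeds c.

```python
import numpy as np

hbar = 1.0546e-34   # J s
me = 9.109e-31      # electron mass, kg
eV = 1.602e-19      # J per eV
c = 2.998e8         # m/s

def barrier_phase(E, U, d):
    """Phase of the transmission amplitude (with the free-propagation phase
    k*d divided out) for a rectangular barrier: height U (eV), width d (m), E < U.
    Textbook result: t = e^{-ikd} / (cosh(kappa d) + i*eps*sinh(kappa d)),
    so arg(t) + k d = -arctan(eps * tanh(kappa d)); tanh avoids cosh overflow."""
    k = np.sqrt(2*me*E*eV)/hbar
    kappa = np.sqrt(2*me*(U - E)*eV)/hbar
    eps = (kappa**2 - k**2)/(2*k*kappa)
    return -np.arctan(eps*np.tanh(kappa*d))

def phase_time(E, U, d, dE=1e-5):
    """Wigner (group-delay) tunneling time tau = hbar * d(phase)/dE, seconds."""
    return hbar*(barrier_phase(E + dE, U, d) - barrier_phase(E - dE, U, d))/(2*dE*eV)

U, E = 1.0, 0.3  # eV; purely illustrative numbers
for d in [0.2e-9, 1e-9, 10e-9, 1e-6]:
    tau = phase_time(E, U, d)
    print(f"d = {d*1e9:8.1f} nm  tau = {tau*1e15:6.2f} fs  (d/tau)/c = {d/tau/c:8.3f}")
```

Note that tau stops growing once kappa*d >> 1, so for a thick enough barrier the nominal traversal speed is superluminal - which is exactly why the "which velocity carries information?" question above matters.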

Sunday, August 12, 2007

Kinds of papers

I've seen some recent writings about how theory papers come to be, and it got me thinking a bit about how experimental condensed matter papers come about, at least in my experience. Papers, or more accurately, scientific research projects and their results, seem to fall into three rough groupings for me:
  • The Specific Question. There's some particular piece of physics in an established area that isn't well understood, and after reading the literature and thinking hard, you've come up with an approach for getting the answer. Alternately, you may think that previous approaches that others have tried are inadequate, or are chasing the wrong idea. Either way, you've got a very specific physics goal in mind, a well-defined (in advance) set of experiments that will elucidate the situation, and a plan in place for the data analysis and how different types of data will allow you to distinguish between alternative physics explanations.
  • The New Capability. You've got an idea about a new experimental capability or technique, and you're out to develop and test this. If successful, you'll have a new tool in your kit for doing physics that you (and ideally everyone else) have never had before. While you can do cool science at this stage (and often you need to, if you want to publish in a good journal), pulling off this kind of project really sets the stage for a whole line of work along the lines of The Specific Question - applying your new skill to answer a variety of physics questions. The ideal examples of this would be the development of the scanning tunneling microscope or the atomic force microscope.
  • The (Well-Motivated) Surprise. You're trying to do either The Specific Question or The New Capability, and then all of a sudden you see something very intriguing, and that leads to a beautiful (to you, at least, and ideally to everyone else) piece of physics. This is the one that can get people hooked on doing research: you can know something about the universe that no one else knows. Luck naturally can play a role here, but "well-motivated" means that you make your own luck to some degree: you're much more likely to get this kind of surprise if you're looking at a system that is known to be physically interesting or rich, and/or using a new technique or tool.
Hopefully sometime in the future I'll give an anecdote or two about these. In the mean time, does anyone have suggestions on other categories that I've missed?

Behold the power of google

I am easily amused. They just put up google street-view maps of Houston, and while they didn't do every little road, they did index the driving routes through Rice University. In fact, you can clearly see my car here (it's the silver Saturn station wagon just to the right of the oak tree). Kind of cool, if a bit disturbing in terms of privacy.

Tuesday, August 07, 2007

This week in cond-mat

Another couple of papers that caught my eye recently....

arxiv:0707.2946 - Reilly et al., Fast single-charge sensing with an rf quantum point contact
arxiv:0708.0861 - Thalakulam et al., Shot-noise-limited operation of a fast quantum-point-contact charge sensor
It has become possible relatively recently to use the exquisite charge sensitivity of single-electron transistors (SETs) to detect motion of single electrons at MHz rates. The tricky bit is that a SET usually has a characteristic impedance on the order of tens of kOhms, much higher than either free space (377 Ohms) or typical radio-frequency hardware (50 Ohms). The standard approach that has developed is to terminate a coax line with an rf-SET; as the charge environment of the rf-SET changes, so does its impedance, and therefore so does the rf power reflected back up the coax. One can improve signal to noise by making an LC resonant circuit down at the rf-SET that has a resonance tuned to the carrier frequency used in the measurement. With some work, one can use a 1 GHz carrier wave and detect single charge motion near the rf-SET with MHz bandwidths. Well, these two papers use a gate-defined quantum point contact in a 2d electron gas instead of an rf-SET. See, rf-SETs are tough to make, are fragile, and have stability problems, all because they rely on ultrathin (2-3 nm) aluminum oxide tunnel barriers for their properties. In contrast, quantum point contacts (formed when a 2d electron gas is laterally constricted down to a size scale comparable to the Fermi wavelength of the electrons) are tunable, and like rf-SETs can be configured to have an impedance (typically 13 kOhms) that can be strongly dependent on the local charge configuration. Both the Harvard and Dartmouth groups have implemented these rf-QPCs, and the Dartmouth folks have demonstrated very nicely that theirs is as optimized as possible - its performance is limited by the fact that the current flowing through the QPC is composed of discrete electrons.
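Schematically, the reflectometry trick is an impedance-matching problem. Here's a minimal lumped-element sketch with hypothetical component values (not taken from either paper): a series inductor and shunt capacitance transform the device resistance R down toward 50 Ohms at resonance, where Z_in ~ L/(C*R), so the reflected rf power swings as the gate-tuned QPC resistance changes.

```python
import numpy as np

Z0 = 50.0  # coax characteristic impedance, ohms

def reflected_power(R, L, C, f):
    """|Gamma|^2 for a series-L / shunt-C tank terminated by a resistance R
    (a crude lumped-element model of the device at the end of the coax)."""
    w = 2*np.pi*f
    Z_in = 1j*w*L + 1/(1j*w*C + 1/R)   # L in series with (C parallel R)
    gamma = (Z_in - Z0)/(Z_in + Z0)
    return abs(gamma)**2

# hypothetical matching network: at resonance Z_in ~ L/(C*R), so a device
# resistance R = L/(C*Z0) = 26 kOhm is matched to the 50 Ohm line
L, C = 390e-9, 0.3e-12
f0 = 1/(2*np.pi*np.sqrt(L*C))
for R in [13e3, 26e3, 52e3]:
    print(f"R = {R/1e3:5.1f} kOhm: reflected power fraction {reflected_power(R, L, C, f0):.3f}")
```

A charge hopping near the QPC shifts R away from the matched value, and the change in reflected carrier power is what gets demodulated at MHz rates.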

arxiv:0708.0646 - Hirsch, Does the h-index have predictive power?
*sigh*. The h-index is, like all attempts to quantify something inherently complex and multidimensional (in this case, scientific productivity and impact) in a single number, of limited utility. Here, Hirsch argues that the h-index is a good predictor of future scientific performance, and takes the opportunity to rebut criticisms that other metrics (e.g. average citations per paper) are better. This paper is a bit depressing to me. First, I think things like the citation index, etc. are a blessing and a curse. It's great to be able to follow reference trails around and learn new things. It's of questionable sociological and psychological value to be able to check on the impact of your own work and any competitor whose name you can spell. Second, Hirsch actually cites Wikipedia as an authoritative source on how great the h-index is in academic fields beyond physics. I love Wikipedia and use it all the time, but citing it in a serious context is silly. Ahh well. Back to trying to boost my own h-index by submitting papers.
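For what it's worth, the h-index itself is trivial to compute - h is the largest n such that n of your papers each have at least n citations. A quick sketch, with a made-up citation record:

```python
def h_index(citations):
    """h = largest n such that at least n papers each have >= n citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1, 0]))  # made-up record; prints 3
```

Note that the single highly cited paper contributes no more to h than a modestly cited one, which is exactly the kind of information a one-number metric throws away.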

Tuesday, July 31, 2007

Recent ACS + cond-mat

A couple of interesting recent results - a busy summer has really cut into my non-essential paper-reading, unfortunately.

One sideline that has popped up with the recent graphene feeding frenzy is trying to understand its optical properties. I don't mean anything terribly exotic - I mean just trying to get a good understanding of why it is possible, in a simple optical microscope, to see any optical contrast from atomically thin single layers of graphene. Papers that have looked at this include:
arxiv:0705.0259 - Blake et al., Making graphene visible
arxiv:0706.0029 - Jung et al., Simple approach for high-contrast optical imaging and characterization of graphene-based sheets
doi:10.1021/nl071254m (Nano Lett., in press) - Ni et al., Graphene thickness determination using reflection and contrast spectroscopy
UPDATE: Here's another one:
doi:10.1021/nl071158l (Nano Lett., in press) - Roddaro et al., The optical visibility of graphene: interference colors of ultrathin graphite on SiO2
It all comes down to the dielectric function of graphene sheets, how that evolves with thickness, and how that ultrathin dielectric layer interacts optically with the oxide coating on the substrate.
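The kind of calculation these papers do is standard thin-film interference. Here's a sketch using the characteristic-matrix method, with rough optical constants near 550 nm assumed purely for illustration (not values fitted from any of these papers): a single graphene layer on 300 nm of SiO2 on Si changes the reflectance by a few percent, which is why you can see it in a microscope.

```python
import numpy as np

def reflectance(layers, lam, n_sub, n0=1.0):
    """Normal-incidence reflectance of a thin-film stack on a substrate,
    via the standard characteristic-matrix method. layers = [(n, d), ...],
    listed from the top surface down; absorbing media have n = n' - i*k."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2*np.pi*n*d/lam
        M = M @ np.array([[np.cos(delta), 1j*np.sin(delta)/n],
                          [1j*n*np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    r = (n0*B - C)/(n0*B + C)
    return abs(r)**2

# rough optical constants near 550 nm, assumed for illustration:
lam = 550e-9
n_gr, d_gr = 2.6 - 1.3j, 0.34e-9    # single graphene layer
n_ox, d_ox = 1.46, 300e-9           # thermal SiO2
n_si = 4.15 - 0.044j                # Si substrate

R_bare = reflectance([(n_ox, d_ox)], lam, n_si)
R_graphene = reflectance([(n_gr, d_gr), (n_ox, d_ox)], lam, n_si)
contrast = (R_bare - R_graphene)/R_bare
print(f"R without graphene {R_bare:.4f}, with {R_graphene:.4f}, contrast {contrast:.3f}")
```

Sweeping the wavelength or the oxide thickness in this sketch reproduces the central point of these papers: the contrast depends strongly on both, which is why 300 nm oxide (viewed in green light) became the standard substrate for spotting flakes.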

Another paper that looks important at a quick read is:
doi: 10.1021/nl071486l (Nano Lett., in press) - Beard et al., Multiple exciton generation in colloidal silicon nanocrystals
To excite the charge carriers in a (direct gap) semiconductor optically typically requires a photon with an energy exceeding the band gap, Eg, between the top of the valence band and the bottom of the conduction band. If an incident photon has excess energy, say 2Eg, what ordinarily happens is that a single electron-hole pair is produced, but that pair has excess kinetic energy. It's been shown recently that in certain direct-gap semiconductor nanocrystals, it's possible to generate multiple e-h pairs with single photons. That is, a photon with energy 3Eg might be able to make three e-h pairs. That's potentially big news for photovoltaics. In this new paper, Beard and coauthors have demonstrated the same sort of effect in Si nanocrystals. This is even more remarkable because bulk Si is an indirect gap semiconductor (this means that, because of the crystal structure of Si, taking an electron from the top of the valence band to the bottom of the conduction band requires a change in momentum larger than a photon with energy Eg can supply). At a quick read, I don't quite get how this works in this material, but the data are pretty exciting.
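As a back-of-the-envelope aside, energy conservation alone bounds how many pairs one photon could possibly make. The sketch below uses the bulk Si gap; real multiple-exciton-generation thresholds sit well above this ideal bound, and quantum confinement widens the gap in nanocrystals anyway:

```python
import math

def max_pairs(E_photon, E_gap):
    """Energy-conservation upper bound on e-h pairs per absorbed photon."""
    return math.floor(E_photon / E_gap)

E_gap = 1.12  # bulk Si gap in eV; nanocrystal gaps are larger
for E_photon in [1.5, 2.5, 3.5]:
    print(f"{E_photon} eV photon -> at most {max_pairs(E_photon, E_gap)} pair(s)")
```

The photovoltaic appeal is that the energy above Eg, which ordinarily ends up as heat via the carriers' excess kinetic energy, could instead show up as extra photocurrent.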

Thursday, July 26, 2007

Texas and education

Governor Perry, why did you have to go and ruin my week? It's bad enough that the Texas Republican Party platform explicitly declares that "America is a Christian nation" - so much for not establishing a preferred religion. Now our governor has gone and appointed a creationist anti-intellectual to be the head of the state board of education. Frankly I don't care what his personal religious beliefs are, but I am extremely bothered that the governor has appointed a man who believes that education and intellectualism are essentially useless ("The belief seems to be spreading that intellectuals are no wiser as mentors, or worthier as exemplars, than the witch doctors or priests of old. I share that scepticism.") to run the state educational system. Great move, Governor. Ever wonder why it's hard to convince high tech industry to create jobs here?

Wednesday, July 25, 2007

Ob: Potter

This is the obligatory Harry Potter post. Yes, I read the 7th book, and while it's got a few narrative problems (characters sometimes behaving in deliberately obtuse ways for dramatic necessity - like nearly every episode of Lost), on the whole it was a satisfying wrap-up of the series. If you don't care about spoilers, here is a great parody of the whole thing (via Chad Orzel).

Thursday, July 19, 2007

This week in cond-mat

It's been a busy summer, hence the sparseness of my recent postings. Here are a couple of papers that caught my eye this past week.

arxiv:0707.1923 - Hogele et al., Quantum light from a carbon nanotube
Here the authors do careful time-resolved photoluminescence experiments on individual single-walled carbon nanotubes. By studying the time distribution of photon production, they can get insights into the exciton (bound electron-hole) dynamics that lead to light emission. They find evidence that photons are produced one-at-a-time in these structures, and that multiphoton processes are strongly suppressed. Perhaps nanotubes could be useful as sources of single photons, strongly desired for quantum cryptography applications.

arxiv:0707.2091 - Quek et al., Amine-gold linked single-molecule junctions: experiment and theory
This is a nice example of a mixed experiment/calculation paper in molecular electronics that actually has an interesting point. Very pretty experimental work by Venkataraman et al. at Columbia has shown that NH2-terminated molecules form better-defined contacts with Au electrodes than the conventional thiol (sulfur)-based chemistry. For example, looking at huge data sets from thousands of junction configurations, benzene diamine glommed into a Au break junction has a well-defined most likely conductance of around 0.0064 x 2e^2/h. Now theory collaborators have done a detailed examination via density functional theory of more than a dozen likely contact geometries and configurations for comparison. The calculations do show a well-defined junction conductance that's robust - however, the calculations overestimate the conductance by a factor of seven compared to experiment. The authors say that this shows that DFT likely misses important electronic correlation effects. Hmmm. It's a neat result, and now that they mention it, almost every non-resonant molecular conduction calculation I've ever seen based on DFT overestimates the conduction by nearly an order of magnitude. The only underestimates of molecular conduction that come to mind are in the case of Kondo-based mechanisms, which can strongly boost conductance and are always missed by ordinary DFT.
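To put those numbers in lab units, quick arithmetic (the DFT line simply restates the factor-of-seven overestimate quoted above):

```python
G0 = 7.7481e-5          # conductance quantum 2e^2/h, in siemens
G_exp = 0.0064 * G0     # most-probable benzenediamine junction conductance
G_dft = 7 * G_exp       # the paper's quoted DFT overestimate
print(f"experiment: {G_exp*1e6:.2f} uS  ({1/G_exp/1e6:.2f} MOhm)")
print(f"DFT:        {G_dft*1e6:.2f} uS  ({1/G_dft/1e6:.2f} MOhm)")
```

So the measured junction is roughly a 2 MOhm resistor - small enough to measure comfortably, but a long way below a fully transmitting quantum channel.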
This is a nice example of a mixed experiment/calculation paper in molecular electronics that actually has an interesting point. Very pretty experimental work by Venkataraman et al. at Columbia has shown that NH2-terminated molecules form better-defined contacts with Au electrodes than the conventional thiol (sulfur)-based chemistry. For example, looking at huge data sets from thousands of junction configurations, benzene diamine glommed into a Au break junction has a well-defined most likely conductance of around 0.0064 x 2e^2/h. Now theory collaborators have done a detailed examination via density functional theory of more than a dozen likely contact geometries and configurations for comparison. The calculations do show a well-defined junction conductance that's robust - however, the calculations overestimate the conductance by a factor of seven compared to experiment. The authors say that this shows that DFT likely misses important electronic correlation effects. Hmmm. It's a neat result, and now that they mention it, the almost every non-resonant molecular conduction calculation I've ever seen based on DFT overestimates the conduction by nearly an order of magnitude. The only underestimates of molecular conduction that come to mind are in the case of Kondo-based mechanisms, which can strongly boost conductance and are always missed by ordinary DFT.