Tuesday, December 30, 2014

A new proposal for a fifth force experiment

Milla Jovovich in “The Fifth Element”
I still find it amazing that all I see around me is made up of only a few dozen particles and four interactions. For all we know. But maybe this isn’t all there is? Physicists have been speculating for a while now that our universe needs a fifth force to maintain the observed expansion rate, but this has turned out to be very difficult to test. A new paper by Burrage, Copeland and Hinds from the UK now proposes a test based on measuring the gravitational attraction felt by single atoms.
    Probing Dark Energy with Atom Interferometry
    Clare Burrage, Edmund J. Copeland, E. A. Hinds
    arXiv:1408.1409

Dark energy is often portrayed as mysterious stuff that fills the universe and pushes it apart, but stuff and forces aren’t separate things. Stuff can be a force carrier that communicates an interaction between other particles. In its simplest form, dark energy is an unspecified smooth, inert, and unchanging constant, the “cosmological constant”. But for many theorists such a constant is unsatisfactory because its origin is left unexplained. A more satisfactory explanation would be a dark-energy-field that fills the universe and has the desired effect of accelerating the expansion by modifying the gravitational interaction on long, super-galactic, distances.

The problem with using fields to modify the gravitational interaction on long distances and to thus explain the observations is that one quickly runs into problems at shorter distances. The same field that needs to be present between galaxies to push them apart should not be present within the galaxies, or within solar systems, because we should have noticed that already.

About a decade ago, Weltman and Khoury pointed out that a dark energy field would not affect gravity on short distances if it was suppressed by the density of matter (arXiv:astro-ph/0309411). The higher the density of matter, the smaller the value of the dark energy field, and the less it would affect the gravitational attraction. Such a field thus would be very weak within our galaxies, and only make itself noticeable between galaxies where the matter density is very low. They called this type of dark energy field the “chameleon field” because it seems to hide itself and merges into the background.
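
For the record, the way this density dependence comes about can be written down in one line. The following is a schematic form of the chameleon’s effective potential as it appears in Khoury-Weltman type models; the specific potential and coupling are model-dependent, so take it as an illustration rather than the exact choice made in any particular paper:

```latex
% Schematic chameleon mechanism: the field phi sits in an effective potential
% that depends on the local matter density rho,
%   V_eff(phi) = V(phi) + rho * exp(beta * phi / M_Pl),
% for example with a runaway potential V(phi) = Lambda^5 / phi.
% Both the minimum phi_min(rho) and the effective mass at that minimum depend
% on rho: the denser the environment, the heavier the field and the shorter
% the range of the fifth force, which is why it "hides" inside dense probes.
\[
  V_{\rm eff}(\phi) \;=\; V(\phi) \,+\, \rho\, e^{\beta \phi / M_{\rm Pl}},
  \qquad
  m_{\rm eff}^2(\rho) \;=\; \left.\frac{d^2 V_{\rm eff}}{d\phi^2}\right|_{\phi_{\rm min}(\rho)} .
\]
```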

The very same property that makes the chameleon field such an appealing explanation for dark energy is also what makes it so hard to test. Fifth force experiments in the laboratory measure the gravitational interaction with very high precision, and they have so far reproduced standard gravity to ever increasing accuracy. These experiments are however not sensitive to the chameleon field, at least not in the parameter range in which it might explain dark energy. That is because the existing fifth force experiments measure the gravitational force between two macroscopic probes, for example two metallic plates, and the high density of the probes themselves suppresses the field one is trying to measure.

In their new paper, Burrage et al show that one does not run into this problem if one uses a different setting. To begin with, they say the experiment should be done in a vacuum chamber so as to get the background density as small as possible, and the value of the chameleon field as high as possible. The authors then show that the value of the field inside the chamber depends on the size of the chamber and the quality of the vacuum, and that the field increases towards the middle of the chamber.

They calculate the force between a very small, for example atomic, sample and a larger sample, and show that the atom is too small to cause a large suppression of the chameleon field. The gravitational attraction between two atoms is too feeble to be measurable, so one still needs one macroscopic body. But when one looks at the numbers, replacing one macroscopic probe with a microscopic one would be enough to make the experiment sensitive enough to find out whether dark energy is a chameleon field, or at least some of it.

One way to realize such an experiment would be by using atom interferometry which has previously been demonstrated to be sensitive to the gravitational force. In these experiments, an atom beam is split in two, one half of it is subjected to some field, and then the beams are combined again. From the resulting interference pattern one can extract the force that acted on the beams. A similar setting could be used to test the chameleon field.
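
For orientation, the quantity one reads off in such a gravity-sensitive atom interferometer is the phase difference between the two arms. The leading-order textbook expression for the common three-pulse setup is below; this is the standard result, not a formula taken from the Burrage et al paper:

```latex
% Leading-order phase shift in a three-pulse light-pulse atom interferometer
% for an atom subject to a constant acceleration a:
%   Delta phi = k_eff * a * T^2
% where k_eff is the effective wave number of the laser pulses that split and
% recombine the atomic wave packet, and T is the time between pulses. An extra
% acceleration from a chameleon-type force would appear as a small additional
% contribution to a.
\[
  \Delta\varphi \;=\; k_{\rm eff}\, a\, T^2 .
\]
```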

Holger Müller from the University of California at Berkeley, an experimentalist who works on atom interferometry, thinks it is possible to do the experiment. “It’s amazing to see how an experiment that is very realistic with current technology is able to probe dark energy. The technology should even allow surpassing the sensitivity expected by Burrage et al.,” he said.

I find this a very interesting paper, and also a hopeful one. It shows that while sending satellites into orbit and building multi-billion dollar colliders are promising ways to search for new physics, they are not the only ways. New physics can also hide in high precision measurements in your university lab, just ask the theorists. Who knows, there might be a chameleon hidden in your vacuum chamber.

This post first appeared on Starts with a Bang as "The Chameleon in the Vacuum Chamber".

Monday, December 29, 2014

The 2014 non-news: Where do these highly energetic cosmic rays come from?

As the year 2014 is nearing its end, lists with the most read stories are making the rounds. Everything is in there, from dinosaurs and miracle cures to disease scares, Schadenfreude, suicide, the relic gravitational wave signal that wasn't, and space-traffic accidents, all the way to a comet landing.

For the high energy physicists, this was another year of non-news though, not counting the occasional baryon that I have a hard time getting excited about. No susy, no dark matter detection, no quantum gravity, nothing beyond the standard model whatsoever.

My non-news of the year that probably passed you by is that the origin of highly energetic cosmic rays descended back into mystery. If you recall, in 2007, the Pierre Auger Collaboration announced that they had found a correlation between the directions from which they saw the highly energetic particles coming and the positions of galaxies with supermassive black holes, more generally referred to as active galactic nuclei. (Yes, I've been writing this blog for that long!)

This correlation came with some fine print, because highly energetic particles will eventually, after sufficiently long travel, scatter off one of the photons of the cosmic microwave background. So you would not expect a correlation with these active galactic nuclei beyond a certain distance, and that seemed to be exactly what they saw. They didn't at this point have a lot of data, so the statistical significance wasn't very high. However, many people thought this correlation would become stronger with more data, and the collaboration probably thought so too, otherwise they wouldn't have published it.
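
The fine print is essentially the GZK cutoff. Here is a back-of-the-envelope version with rounded numbers of my own, not taken from the Auger publications: above a threshold energy, protons scatter off CMB photons through the Δ resonance and rapidly lose energy, so the sources one could hope to correlate with must lie within roughly a hundred megaparsecs.

```latex
% Photopion production on the cosmic microwave background (the GZK process):
%   p + gamma_CMB -> Delta^+ -> p + pi^0   (or n + pi^+)
% For a head-on collision with a photon of energy E_gamma, the proton
% threshold energy is
%   E_p ~ (m_Delta^2 - m_p^2) / (4 E_gamma) ~ 10^20 eV
% for E_gamma of order 10^-3 eV; folding in the actual photon spectrum gives
% the usually quoted cutoff of a few times 10^19 eV and a source horizon of
% order 100 Mpc.
\[
  p + \gamma_{\rm CMB} \to \Delta^{+} \to p + \pi^{0}\ \ (\text{or } n + \pi^{+}),
  \qquad
  E_p^{\rm th} \;\simeq\; \frac{m_\Delta^2 - m_p^2}{4\, E_\gamma} .
\]
```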

But it didn't turn out this way. The correlation didn't become stronger. Instead by now it's pretty much entirely gone. In October, Katia Moskvitch at Nature News summed it up:

"Working with three-and-a-half years of data gleaned from 27 rays, Auger researchers reported that the rays seemed to preferentially come from points in the sky occupied by supermassive black holes in nearby galaxies. The implication was that the particles were being accelerated to their ultra-high energies by some mechanism associated with the giant black holes. The announcement generated a media frenzy, with reporters claiming that the mystery of the origin of cosmic rays had been solved at last.

But it had not. As the years went on and as the data accumulated, the correlations got weaker and weaker. Eventually, the researchers had to admit that they could not unambiguously identify any sources. Maybe those random intergalactic fields were muddying the results after all. Auger “should have been more careful” before publishing the 2007 paper, says Avi Loeb, an astrophysicist at Harvard University in Cambridge, Massachusetts."

So we're back to speculation on the origin of the ultra-high-energy cosmic rays. It's a puzzle that I've scratched my head over for some while - more scratching is due.

Wednesday, December 24, 2014

Merry Christmas :)

I have a post about "The rising star of science" over at Starts with a Bang. It collects some of my thoughts on science and religion, fear and wonder. I will not repost this here next month, so if you're interested check it out over there. According to Medium it's a 6 minute read. You can get a 3 minute summary in my recent video:


We wish you all happy holidays :)


From left to right: Inga the elephant, Lara the noisy one, me, Gloria the nosy one, and Bo the moose. Stefan is fine and says hi too, he isn't in the photo because his wife couldn't find the setting for the self-timer.

Tuesday, December 23, 2014

Book review: "The Edge of the Sky" by Roberto Trotta

The Edge of the Sky: All You Need to Know about the All-There-Is
Roberto Trotta
Basic Books (October 9, 2014)

It's two days before Christmas and you need a last-minute gift for that third-degree-uncle, heretofore completely unknown to you, who just announced a drop-in for the holidays? I know just the right thing for you: "The Edge of the Sky" by Roberto Trotta, which I found as free review copy in my mailbox one morning.

According to the back flap, Roberto Trotta is a lecturer in astrophysics at Imperial College. He has very blue eyes and very white teeth, but I have more twitter followers, so I win. Roberto set out to explain modern cosmology with only the thousand most used words of the English language. Unfortunately, neither "cosmology" nor "thousand" is among these words, and certainly not "heretofore", which might or might not mean what I think it means.

The result is a nice little booklet telling a story about "big-seers" (telescopes) and "star-crowds" (galaxies) and the "early push" (inflation), with a couple of drawings for illustration. It's pretty and kinda artsy, which probably isn't a word at all. The book is also as useless as that prize-winning designer chair in which one can't sit, but better than the chair because it's very slim and will not take up much space, or money. It's just the right thing to give to your uncle who will probably not read it and so will never find out that you think he's too dumb to know the word "particle". It is, in summary, the perfect re-gift, so go and stuff it into somebody's under-shoe-clothes - how am I doing?

Saturday, December 20, 2014

Has Loop Quantum Gravity been proved wrong?

Logo of the site by the name of Loop Insight. The insight to take away is that you have to carefully look for those infinities.
[Fast track to wisdom: Probably not. But then.]

The Unruh effect is the predicted, but so far unobserved, particle production seen by an accelerated observer in flat space. It is a result obtained using quantum field theory that does not include gravity; the particles are thermally distributed with a temperature that is proportional to the acceleration. The origin of the particle production is that the notion of particles, like the passage of time, is observer-dependent, and so what is Bob’s vacuum might be Alice’s thermal bath.
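
For reference, the temperature in question is the Unruh temperature. This is the standard flat-space expression, nothing specific to any approach to quantum gravity:

```latex
% Unruh temperature for an observer with proper acceleration a:
%   T_U = hbar a / (2 pi c k_B)
% For a = 9.8 m/s^2 this is about 4 x 10^-20 K, which is why the effect has
% not been observed directly.
\[
  T_U \;=\; \frac{\hbar\, a}{2\pi\, c\, k_B}
  \;\approx\; 4\times10^{-20}\,\mathrm{K}\;\times\;\frac{a}{9.8\ \mathrm{m/s^2}} .
\]
```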

The Unruh effect can be related to the Hawking effect, that is the particle production in the gravitational field of a black hole, by use of the equivalence principle. Neither of the two effects has anything to do with quantum gravity. In these calculations, space-time is treated as a fixed background field that has no quantum properties.

Loop Quantum Gravity (LQG) is an approach to quantum gravity that relies on a new identification of space-time degrees of freedom, which can then be quantized without running into the same problems as one does when quantizing perturbations of the metric. Or at least that’s the idea. The quantization prescription depends on two parameters: one is a length scale normally assumed to be of the order of the Planck length, the other is a parameter that everybody wishes wasn’t there and which will not be relevant in the following. The point is that LQG is basically a modification of the quantization procedure that depends on the Planck length.

In a recent paper, Hossain and Sardar from India now claim that using the loop quantization method does not reproduce the Unruh effect.

If this was correct, this would be really bad news for LQG. So of course I had to read the paper, and I am here to report back to you.

The Unruh effect has not been measured yet, but experiments have been done for some while to measure the non-gravitational analog of the Hawking effect. Since the Hawking effect is a consequence of certain transformations in quantum field theory that also apply to other systems, it can be studied in the laboratory. There is some ongoing controversy whether or not it has been measured already, but in my opinion it’s really just a matter of time until they’ve pinned down the experimental uncertainties and will confirm this. It would be theoretically difficult to claim that the Unruh effect does not exist when the Hawking effect does. So, if it’s true what they claim in the paper, then Loop Quantum Gravity, or its quantization method respectively, would be pretty much ruled out, or at least in deep trouble.

What they do in the paper is apply the two quantization methods to quantum fields in a fixed background. As is usual in this calculation, the background remains classical. Then they calculate the particle flux that an accelerated observer would see. For this they have to define some operators as limiting cases because they don’t exist the same way for the loop quantization method. They find in the end that while the normal quantization leads to the expected thermal spectrum, the result for the loop quantization method is just zero.

I kinda want to believe it, because then at least something would be happening in quantum gravity! But I see a big problem with this computation. To understand it, you first have to know that the result with the normal quantization method isn’t actually a nice thermal distribution, it is infinity. This infinity can be isolated by a suitable mathematical procedure, in which case one finds that it is a delta function in momentum space evaluated at zero. Once identified, it can be factored out, and the prefactor of the delta function is the thermal spectrum that you’ve been looking for. One can trace back the physical origin of this infinity to find it is, roughly speaking, that you’ve looked at the flux for an infinite volume.
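
Schematically, and sweeping all the careful limiting procedures under the rug, the standard-quantization result for the number of particles of frequency ω seen by an observer with acceleration a has the form below (units ħ = c = k_B = 1); this is my shorthand for the textbook calculation, not an equation copied from the paper:

```latex
% Expectation value of the (Rindler) number operator in the Minkowski vacuum,
% schematically, with hbar = c = k_B = 1:
%   <N_omega>  ~  delta(0) * 1/(exp(2 pi omega / a) - 1)
% The delta function evaluated at zero is the divergent volume/time factor;
% once it is factored out, the prefactor is a thermal Bose-Einstein spectrum
% at the Unruh temperature T = a / (2 pi).
\[
  \langle N_\omega \rangle \;\sim\; \delta(0)\,\frac{1}{e^{2\pi\omega/a}-1} .
\]
```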

These types of infinities appear in quantum field theory all over the place, and they can be dealt with by a procedure called regularization, that is, the introduction of a parameter, the “regulator”, whose purpose is to capture the divergences so that they can be cleanly discarded. The important thing about regularization is that you have to identify the divergences first before you can get rid of them. If you try to divide out an infinite factor from a result that wasn’t divergent, all you get is zero.

What the authors do in the paper is take a standard regularization method for the Unruh effect that is commonly used with the normal quantization, and apply this regularization also to the other quantization. Now the loop quantization in some sense already has a regulator, namely the finite length scale that, when the quantization is applied to space-time, results in a smallest unit of area and volume. If this length scale is first set to zero, and then the regulator is removed, one gets the normal Unruh effect. If one first removes the regulator, the result is apparently zero. (Or so they claim in the paper. I didn’t really check all their approximations of special functions and so on.)
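
Put differently, the whole issue comes down to an ordering of limits. With ℓ the loop quantization’s length scale and ε the additional regulator, the claim is effectively that the two orderings do not commute; this is my paraphrase of the situation, not notation used in the paper:

```latex
% The two limits do not commute:
%   lim_{eps -> 0} lim_{ell -> 0} F(ell, eps) = thermal Unruh spectrum
%   lim_{ell -> 0} lim_{eps -> 0} F(ell, eps) = 0   (the paper's claim)
% where ell is the loop/polymer length scale, eps the additional regulator,
% and F stands for the regularized flux.
\[
  \lim_{\epsilon\to 0}\,\lim_{\ell\to 0} F(\ell,\epsilon)
  \;\neq\;
  \lim_{\ell\to 0}\,\lim_{\epsilon\to 0} F(\ell,\epsilon) .
\]
```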

My suspicion therefore is that the result would have been finite to begin with and that the additional regularization is overkill. The result is zero, basically, because they’ve divided out an infinity too much.

The paper however is very confusingly written, and at least I don’t see at first sight what’s wrong with their calculation. I’ve now consulted three people who work on related things and none of them saw an obvious mistake. I myself don’t care enough about Loop Quantum Gravity to spend more time on this than I already have. The reason I am telling you about this is that there has been absolutely no reaction to this paper. You’d think if colleagues go about and allegedly prove wrong the theory you’re working on, they’d be shouted down in no time! But everybody in loop quantum gravity just seems to have ignored this.

So if you’re working on loop quantum gravity, I would appreciate a pointer to a calculation of the Unruh effect that either confirms this result or proves it wrong. And the rest of you I suggest spread word that loop quantum gravity has been proved wrong, because then I’m sure we will get a clarification of this very very quickly ;)

Saturday, December 13, 2014

The remote Maxwell Demon

During the summer, I wrote a paper that I dumped in an arxiv category called cond-mat.stat-mech, and then managed to entirely forget about it. So somewhat belatedly, here is a summary.

Pretty much the only recollection I have of my stat mech lectures is that every single one of them was inevitably accompanied by the same old divided box with two sides labeled A and B. Let me draw this for you:


Maxwell’s demon in its original version sits in this box. The demon’s story is a thought experiment meant to highlight the following paradox with the 2nd law of thermodynamics.

Imagine the above box is filled with a gas, and the gas is at a low temperature on side A and at a higher temperature on side B. The second law of thermodynamics says that if you open a window in the dividing wall, the temperatures will come to an average equilibrium value, and in this process entropy is maximized. Temperature is basically average kinetic energy, so the average speed of the gas atoms approaches the same value everywhere, just because this is the most likely thing to happen.
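
To be concrete, "basically average kinetic energy" here is the usual equipartition statement for an ideal monatomic gas:

```latex
% Equipartition for an ideal monatomic gas: each of the three translational
% degrees of freedom carries (1/2) k_B T on average, so
\[
  \left\langle \tfrac{1}{2}\, m v^2 \right\rangle \;=\; \tfrac{3}{2}\, k_B T .
\]
```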

The system can only do work on the way to equilibrium, but no longer once it’s arrived there. Once you’ve reached this state of maximum entropy, nothing happens any more, except for fluctuations. Unless you have a Maxwell demon...

Maxwell’s demon sits at the dividing wall between A and B when both sides are at the same temperature. He opens the window every time a fast atom comes from the left or a slow atom comes from the right, otherwise he keeps it closed. This has the effect of sorting fast and slow atoms so that, after a while, more fast atoms are on the right side than on the left side. This means the temperatures are not in equilibrium anymore and entropy has decreased. The demon thus has violated the second law of thermodynamics!
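
If you want to see the sorting at work, here is a little toy simulation. It is a minimal sketch with an arbitrary stand-in for the speed distribution and made-up numbers, not a model of a real gas:

```python
import random

# Toy Maxwell demon: two sides A and B of a divided box, filled with "atoms"
# that carry a single number, their speed. Both sides start with the same
# speed distribution (same temperature). The demon only lets fast atoms pass
# from A to B and slow atoms pass from B to A, so B heats up and A cools down.
random.seed(1)

N = 10000
A = [random.expovariate(1.0) for _ in range(N)]  # arbitrary stand-in distribution
B = [random.expovariate(1.0) for _ in range(N)]

def mean(xs):
    return sum(xs) / len(xs)

initial_mean = mean(A + B)
threshold = initial_mean  # the demon's cut between "slow" and "fast"

for _ in range(5 * N):  # atoms arriving at the demon's window, at random
    if random.random() < 0.5 and A:
        i = random.randrange(len(A))
        if A[i] > threshold:      # fast atom arriving from A: open the window
            B.append(A.pop(i))
    elif B:
        i = random.randrange(len(B))
        if B[i] < threshold:      # slow atom arriving from B: open the window
            A.append(B.pop(i))

print("initial mean speed:", round(initial_mean, 3))
print("mean speed on A   :", round(mean(A), 3))   # drops: A gets colder
print("mean speed on B   :", round(mean(B), 3))   # rises: B gets hotter
```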

Well, of course he hasn’t, but it took a century for physicists to pin down the exact reason why. In brief it’s that the demon must be able to obtain, store, and use information. And he can only do that if he either starts at a low entropy that then increases, or brings along an infinite reservoir of low entropy. The total entropy never decreases, and the second law is well and fine.
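
The quantitative statement behind this resolution is usually attributed to Landauer and Bennett: handling information has a thermodynamic cost. In its commonly cited form,

```latex
% Landauer's bound: erasing one bit of memory in an environment at
% temperature T dissipates at least
\[
  W_{\rm erase} \;\ge\; k_B T \ln 2 ,
\]
% which matches the maximum work a Szilard-type demon can extract per bit of
% information it acquires, so the books balance and the second law survives.
```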

It has only been during recent years that some versions of Maxwell’s demon have been experimentally realized in the laboratory. These demons essentially use information to drive a system out of equilibrium, which can then, in principle, do work.

It occurred to me that this must mean it should be possible to replace transfer of energy from a sender to a receiver by transfer of information, and this information transfer could take place with a much smaller energy than what the receiver gets out of the information. In essence this would mean one can down-convert energy during transmission.

The reason this is possible is that the relevant energy here is not the total energy – a system in thermal equilibrium has lots of energy. The relevant energy that we want at the receiving end is free energy – energy that can be used to do work. The signal does not need to contain the energy itself; it only needs to contain the information that allows one to drive the system out of equilibrium.

In my paper, I have constructed a concrete example for how this could work. The full process must include remote measuring, extraction of information from the measurement, sending of the signal, and finally making use of the signal to actually extract energy. The devil, or in this case the demon, is in the details. It took me some while to come up with a system simple enough so one could in the end compute the energy conversion and also show that the whole thing, remote demon included, obeys the Carnot limit on the efficiency of heat engines.
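
For completeness, the Carnot limit that the whole setup, remote demon included, has to respect is the usual one:

```latex
% Carnot limit on the efficiency of any heat engine operating between a hot
% reservoir at temperature T_hot and a cold one at T_cold:
\[
  \eta \;\le\; \eta_{\rm Carnot} \;=\; 1 - \frac{T_{\rm cold}}{T_{\rm hot}} .
\]
```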

In the classical example of Maxwell’s demon, the necessary information is the velocity of the particles approaching the dividing wall, but I chose a simpler system with discrete energy levels, just because the probability distributions are then easier to deal with. The energy extraction that my demon works with is a variant of stimulated emission that is also used in lasers.

The atoms in a laser are being “pumped” into an out-of-equilibrium state, which has the property that as you inject light (i.e., energy) with the right frequency, you get out more light of the same frequency than you sent in. This does not work if the system is in equilibrium though; it is then always more likely that the injected signal is absorbed rather than that it stimulates a net emission.

However, a system in equilibrium always has fluctuations. The atoms have some probability to be in an excited state, a state in which they could be stimulated to emit light. If you just knew which atoms were in the excited state, then you could target them specifically, and end up with twice the energy that you sent in.
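
How often such a lucky fluctuation occurs is governed by the Boltzmann factor. For a two-level atom with energy gap ΔE in equilibrium at temperature T, the probability to find it excited is:

```latex
% Probability to find a two-level system with energy gap Delta E in its
% excited state, in thermal equilibrium at temperature T:
\[
  p_{\rm exc} \;=\; \frac{e^{-\Delta E/k_B T}}{1 + e^{-\Delta E/k_B T}} ,
\]
% which is exponentially small for Delta E >> k_B T. The demon's advantage
% comes from knowing which atoms happen to be excited at a given moment.
```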

So that’s what my remote demon does: It measures fluctuations away from equilibrium in some atomic system and targets these to extract energy. The main point is that the energy sent to the system can be much smaller than the extracted energy. It is, in essence, a wireless battery recharger. Except that the energies in question are, in my example, so tiny that it’s practically entirely useless.

I’ve never worked on anything in statistical mechanics before. Apparently I don’t even have a blog label to tag it! This was a fun project and I learned a lot. I even made a drawing to accompany it.


Saturday, December 06, 2014

10 things you didn’t know about the Anthropic Principle

“The anthropic principle – the idea that our universe has the properties it does because we are here to say so and that if it were any different, we wouldn’t be around commenting on it – infuriates many physicists, including [Marc Davis from UC Berkeley]. It smacks of defeatism, as if we were acknowledging that we could not explain the universe from first principles. It also appears unscientific. For how do you verify the multiverse? Moreover, the anthropic principle is a tautology. “I think this explanation is ridiculous. Anthropic principle… bah,” said Davis. “I’m hoping they are wrong [about the multiverse] and that there is a better explanation.””
~Anil Ananthaswamy, in “The Edge of Physics”
Are we really so special?
Starting in the mid 70s, the anthropic principle has been employed in physics as an explanation for values of parameters in the theories, but in 2014 I still come across ill-informed statements like the one above in Anil Ananthaswamy’s (otherwise very recommendable) book “The Edge of Physics”. I’m no fan of the anthropic principle because I don’t think it will lead to big insights. But it’s neither useless nor a tautology nor does it acknowledge that the universe can’t be explained from first principles.

Below are the most important facts about the anthropic principle, where I am referring to the definition from Ananthaswamy’s quote: “Our universe has the properties it does because if it were any different we wouldn’t be here to comment on it.”
  1. The anthropic principle doesn’t necessarily have something to do with the multiverse.

    The anthropic principle is correct regardless of whether there is a multiverse or not and regardless of what is the underlying explanation for the values of parameters in our theories, if there is one. The reason it is often brought up by multiverse proponents is that they claim the anthropic principle is the only explanation, and there is no other selection principle for the parameters that we observe. One then needs to show though that the value of parameters we observe is indeed the only one (or at least a very probable one) if one requires that life is possible. This is however highly controversial, see 2.

  2. The anthropic principle cannot explain the values of all parameters in our theories.

    The typical claim that the anthropic principle explains the value of parameters in the multiverse goes like this: If parameter x was just a little larger or smaller we wouldn’t exist. The problem with this argument is that small variations in one out of two dozen parameters do not consider the bulk of possible combinations. You’d really have to consider independent modifications of all parameters to be able to conclude there is only one combination supportive of life. This however is not a presently feasible calculation.

    Though we cannot presently scan the whole parameter space to find out which combinations might be supportive of life, we can do a little better than checking only one and try at least a few. This has been done, and thus we know that the claim that there is really only one combination of parameters that will create a universe hospitable to life is on very shaky ground.

    In their 2006 paper “A Universe Without Weak Interactions”, published in PRD, Harnik, Kribs, and Perez put forward a universe that seems capable of creating life and yet is entirely different from our own [arXiv:hep-ph/0604027]. Don Page argues that the universe would be more hospitable for life if the cosmological constant was smaller than the observed value [arXiv:1101.2444], and recently it was claimed that life might have been possible already in the early universe [arXiv:1312.0613]. All these arguments show that a chemistry complex enough to support life can arise under circumstances that, while still special, are not anything like the ones we experience today.

  3. Even so, the anthropic principle might still explain some parameters.

    The anthropic principle might however still work for some parameters if their effect is almost independent of what the other parameters do. That is, even if one cannot use the anthropic principle to explain all values of parameters because one knows there are other combinations allowing for the preconditions of life, some of these parameters might need to have the same value in all cases. The cosmological constant is often claimed to be of this type.

  4. The anthropic principle is trivial but that doesn’t mean it’s obvious.

    Mathematical theorems, lemmas, and corollaries are results of derivations following from assumptions and definitions. They essentially are the assumptions, just expressed differently. They are always true and sometimes trivial. But often, they are surprising and far from obvious, though that is inevitably a subjective statement. Complaining that something is trivial is like saying “It’s just sound waves” and referring to everything from engine noise to Mozart.

  5. The anthropic principle isn’t useless.

    While the anthropic principle might strike you as somewhat silly and trivially true, it can be useful for example to rule out values of certain parameters. The most prominent example is probably the cosmological constant which, if it was too large, wouldn’t allow the formation of structures large enough to support life. This is not an empty conclusion. It’s like when I see you drive to work by car every morning and conclude you must be old enough to have a driver’s license. (You might just be stubbornly disobeying laws, but the universe can’t do that.) The anthropic principle is in its core function a consistency constraint on the parameters in our theories. One could derive from it predictions on the possible combinations of parameters, but since we have already measured them these are now merely post-dictions.

    Fred Hoyle's prediction of properties of the carbon nucleus that make possible the synthesis of carbon in stellar interiors — properties that were later discovered as predicted — is often quoted as a successful application of the anthropic principle because Hoyle is said to have exploited the fact that carbon is central to life on Earth. Some historians have questioned whether this was indeed Hoyle's reasoning, but the mere fact that it could have been shows that anthropic reasoning can be a useful extrapolation of observation - in this case the abundance of carbon on our planet.

  6. The anthropic principle does not imply a causal relation.

    Though “because” suggests it, there is no causation in the anthropic principle. An everyday example for “because” not implying an actual cause: I know you’re sick because you’ve got a cough and a runny nose. This doesn’t mean the runny nose caused you to be sick. Instead, it was probably some virus. Alas, you can carry a virus without showing symptoms, so it’s not like the virus is the actual “cause” of my knowing. Likewise, that there is somebody here to observe the universe did not cause a life-friendly universe into existence. (And the reverse, that a life-friendly universe caused our existence, doesn’t work because it’s not like the life-friendly universe sat somewhere out there and then decided to come into existence to produce some humans.)

  7. The applications of the anthropic principle in physics have actually nothing to do with life.

    As Lee Smolin likes to point out, the mentioning of “life” in the anthropic principle is entirely superfluous verbal baggage (my words, not his). Physicists don’t usually have a lot of business with the science of self-aware conscious beings. They talk about the formation of large-scale structures or atoms that are preconditions for biochemistry, but you shouldn’t even expect physicists to discuss large molecules. Talking about “life” is arguably catchier, but that’s really all there is to it.

  8. The anthropic principle is not a tautology in the rhetorical sense.

    It does not use different words to say the same thing: A universe might be hospitable to life and yet life might not feel like coming to the party, or none of that life might ever ask a why-question. In other words, getting the parameters right is a necessary but not a sufficient condition for the evolution of intelligent life. The rhetorically tautological version would be “Since you are here asking why the universe is hospitable to life, life must have evolved in that universe that now asks why the universe is hospitable to life.” Which you can easily identify as rhetorical tautology because now it sounds entirely stupid.

  9. It’s not a new or unique application.

    Anthropic-type arguments, based on the observation that there exists somebody in this universe capable of making an observation, are not only used to explain free parameters in our theories. They sometimes appear as “physical” requirements. For example: we assume there are no negative energies because otherwise the vacuum would be unstable and we wouldn’t be here to worry about it. And requirements like locality, separation of scales, and well-defined initial value problems are essentially based on the observation that otherwise we wouldn’t be able to do any science, if there was anybody to do anything at all. Logically, these requirements are the same as anthropic arguments, they just aren’t referred to as such.

  10. Other variants of the anthropic principle have questionable scientific value

    The anthropic principle becomes speculative, not to say unscientific, once you try to go beyond the definition that I referred to here. If one does not understand that a consistency constraint does not imply a causal relation, then one comes to the strange conclusion that humans caused the universe into existence. And if one does not accept that the anthropic principle is just a requirement that a viable theory has to fulfil, one is then stuck with the question why the parameter values are what they are. Here is where the multiverse comes back, for you can then argue that we are forced to believe in the “existence” of universes with all possible combinations. Or you can go off the deep end and argue that our universe was designed for the existence of life.

    Personally I feel the urge to wash my hands after having been in touch with these kinds of arguments. I prefer my principles trivially true.


This post previously appeared October 21st 2014 on Starts with a Bang.