Monday, September 30, 2013

Gauge Symmetry Violation (Short film)


Symmetry (Short Film) from Apostolos Vasileiadis on Vimeo.
A physics professor loses control over a false theory of his. A student is there to set things right.


Filmed at Nordita/AlbaNova and in the tunnel system between the buildings, respectively. Apparently some of the students here have, ahem, dark fantasies.

Thursday, September 26, 2013

The multiverse is not a paradigm and it’s not shifting anything.

Google “multiverse paradigm” and you get more than a thousand hits. According to Wikipedia, a paradigm “describes distinct concepts or thought patterns”. Unfortunately, the multiverse is pretty much the opposite: There’s no distinct concept, but instead a variety of loosely related properties of existing theories that are being construed to have a common theme which, we are then told, is a sign of an impending paradigm shift.

I’m starting to take offense at this forward defense. If the spread of multiversal “thought patterns” is sold as a paradigm shift, everybody opposed to the multiverse is dismissed as being stuck in yesterday. It’s only the enlightened who are ahead of their time and understand the significance. I really don’t think there’s any paradigm here and certainly nothing is shifting. To see why, it’s helpful to distinguish two different classes of multiverses that are presently being discussed, usually thrown together.

1. The Multiverse of Disappointed Hopes

Science works by constructing models for real world systems. These models can then be used to understand what happens in the real world, and to make predictions. A theory is a map from a model to the real world. The model should not be confused with the theory itself. The theory is what tells you how to identify properties of the model with the real world. The model is the actual stand-in for the real world system.


Einstein’s General Relativity, for example, is a theory: it’s a prescription for how to deal with space-time and particles moving in it. A model is the space-time of a star or an approximately homogeneous matter distribution: General Relativity is the theory, ΛCDM is the model. Likewise, there’s quantum field theory, and there’s the standard model. Needless to say, not everybody uses this terminology all the time, but that’s how I want to use it.

Models and theories are not only used in physics and don’t necessarily have to be mathematical. Psychologists have models for human behavior that they apply to patients – the ‘real world’. A drawing is a model, in this case the “theory” that connects it to the real world comes for free with your visual cortex. A story is a model, the “theory” is your knowledge of the language that relates letters to real world objects or actions. And so on. The merit of mathematical models is that they have a very strict quality control, which is self-consistency.

And then there are toy models.

Toy models are models that do not have real world counterparts. They’re drawings of creatures that don’t exist or stories of people that have never lived. They’re playgrounds of creativity that can teach us lessons about the theory, which is why studying toy models is a very common and often fruitful exercise. There’s an infinite number of such toy models. You could say there’s a whole multiverse of them, all these toy models that don’t map to any part of the universe we know. Asking whether what they describe is real is like asking if Harry Potter really exists because a story has been written about him. The difference between fantasy novels and physicists’ toy models is the size of the interested audience, but in spirit they’re the same exercises in creativity.



So, sure, there are models that don’t describe the real world, in physics as well as in painting. That’s because mathematical consistency alone does not imply a model describes what we observe, much like using English does not imply you talk about real people. Additional requirements are needed besides consistency to construct a useful model, and these requirements always come down to agreement with observations, though this isn’t always explicitly phrased this way. When we assume Lorentz-invariance or renormalizability or absence of ghosts, these are physical requirements ultimately based on our experience.

This means a multiverse that you can get rid of by adding the requirement that the model needs to describe observation is neither new, nor surprising, nor something to worry about. It just means that mathematical consistency of whatever theory you’re dealing with is not sufficient to make a particular prediction. The string theory landscape is a multiverse of this type. The only reason people talk about this now is that many of them had been hoping string theory would render unnecessary some of the requirements one needs in the standard model. Alas, these hopes were disappointed, though the last word might not be spoken yet.

Does it make sense to instead talk about probability distributions over the models you get when you refuse to use existing ties to observations, here specifically the values of certain parameters? No. Because that’s cherry picking the observations you want to neglect.

In the construction of the model, many other observations always enter that are being neglected if one considers such probability distributions, such as the number of (large) dimensions, Lorentz-invariance, or the existence of space-time to begin with – these are not requirements of mathematical consistency, these are physical requirements based on observations. If you wanted to be serious about asking for the probability of particular models, you should sample over all models, in the end over everything that is mathematically consistent. You’d be left with Tegmark’s mathematical multiverse. And in that mathematical universe you’d have replaced the question “Which model describes the real world?” with “Where are we in the mathematical universe?” You don’t gain anything.


Once you have seen the power of mathematical models to describe natural systems, it is natural to ask if there is a mathematical model that describes “everything” we see. I believe there is. But people who search for a “theory of everything” today mean more than that. They want in particular a theory that delivers the parameters of the standard model. But even if that were achieved, we would still have to use other axioms that are ultimately based on observations. So while it is worthwhile to try to find a simpler model that reduces the number of axioms, including values of parameters, we can never avoid using input from observation. If we try, we’ll end up with a multiverse, which just tells us that mathematical consistency isn’t sufficient.

So if you have a multiverse that can be eliminated by the requirement that the model is consistent with observation, this isn’t a paradigm shift, it’s just disappointed hopes.

2. The multiverse package deal

But there’s a different type of multiverse, one that you cannot get rid of by requiring match to observation. It’s the case in which a theory applied to a model that describes a real world system necessarily maps into a space that is larger than what we observe. Eternal inflation and the many worlds interpretation of quantum mechanics are of this type. Or, more mundanely, there is nothing in ΛCDM that predicts the universe just ends beyond the distance that we can (presently) observe, so you have a multiverse beyond our observations.



This opens a can of interpretational worms because we can now endlessly discuss whether the non-observable images of the map are real or not. Personally, I find this a rather fruitless debate about the meaning of the word ‘real’. To me a model is a tool to describe the real world, and if it does that, and if it’s an improvement over other models, I don’t care if there are mathematical elements in the model that don’t correspond to real world observables. Mathematics is full of structures that for all we know don’t correspond to anything we observe anyway. I don’t see a reason why we must be able to observe them all.

But, no, I don’t think you should just shut up and calculate. Because we might be mistaken in thinking that what the theory predicts beyond our observable universe is indeed unobservable. Maybe we just haven’t asked the right questions and there are ways to observe it after all.

So it’s an interesting feature that theories can display, but it’s certainly not a new concept. There’s been a century of discussion about the presence of mathematical objects in quantum mechanics that for all we presently know are fundamentally non-observable. So if that’s a paradigm shift it’s one that has already happened long ago.

3. Wilczek’s Multiversality

Frank Wilczek recently had a paper on the arXiv titled “Multiversality”. The first half of the article is a nicely written general introduction, the second half is about axion cosmology, and then the paper ends quite abruptly. The most interesting part of the paper is three positive answers to the question

“Are there aspects of observable reality, i.e. the universe, that can be explained by multiversality, but not otherwise?”


It is fruitful to look at the answers to gauge the depth of the existing arguments in favor of the multiverse:

“Yes – one is the apparent indeterminism of quantum mechanics, despite its deterministic equations.”

Wilczek claims here the apparent indeterminism of quantum mechanics can be explained by the many worlds interpretation but not otherwise. That’s an objectionable claim, in particular because the qualifier didn’t include anything about locality.
“Yes – the outrageously small, but non-zero, value of the dark energy density.”


Here he is claiming that there is no other way to explain the measured value of the dark energy density than anthropic reasoning, and that anthropic reasoning necessarily implies a multiverse. There are many people who would object to the former, and the latter is manifestly wrong. You don’t need a multiverse to do anthropic reasoning, see my post Misconceptions about the anthropic principle.

“Yes – the opaque and scattered values of many standard model parameters that are not subject to the discipline of selection.”
An interesting answer because it is phrased to suggest that the values of the standard model parameters are scattered to begin with. Even if they were, however, that wouldn’t force us to believe that every possible distribution of values actually exists in a more meaningful sense than Harry Potter exists.

Taken together, these answers aptly demonstrate just how weak the case for a multiverse really is.

Summary

We should distinguish between multiverses that you can eliminate by adding axioms to the theory that tie the model to the real world, and those that you can’t eliminate this way. The string theory landscape is of the former type, you “just” have to find the right vacuum, and good luck with finding that. Eternal inflation and the many worlds interpretation are of the latter type. In this case you get more than you asked for. One can interpret this type of multiverse as a calculation device which might have its uses. It might also turn out that these multiverses aren’t unobservable after all, so these ideas certainly merit some investigation. In any case however, there’s no paradigm shifting here.

Monday, September 23, 2013

Book Review: “You are not so Smart” by David McRaney

You Are Not So Smart: Why You Have Too Many Friends on Facebook, Why Your Memory Is Mostly Fiction, and 46 Other Ways You're Deluding Yourself
By David McRaney
Gotham; Reprint edition (November 6, 2012)

I know I said no more brain books, but this one’s been in the pipeline. I’ll make this review short. McRaney in his book goes through 48 ‘brain bugs’, shortcomings of human cognition where evolutionarily advantageous procedures are inappropriate to present-day situations. Having meanwhile read several books on the topic, I knew about 40 of these brain bugs, and the rest are very similar to the ones I already knew.

What I was hoping for in McRaney’s book was some kind of structure, maybe a classification or categories, a big picture – some insight as to how it all ties together or where it’s going and what’s next. But the book is really just a selection of little essays, apparently the result of a blog by the same name, and that’s also what it reads like.

The 48 sections of the book do come with selected references and summaries of research studies, but a discussion of how well-established any particular result is, and whether there is maybe contradictory evidence, is entirely lacking. Also lacking is space to address the question of how these studies relate to behavior in the real world, what the evidence for this is, and, most importantly, whether people change their behavior when being educated about shortcomings in their default mode of thinking.

In summary, the book is an easy read, but it’s not terribly insightful and somewhat uninspired. If you follow the popular cognitive science literature you’ll know pretty much everything that is in the book. The book might be useful for you, however, if you want to get a quick overview of what topics are presently being discussed in this area, without too much skepticism or scientific background. Also, the essays all probably make good conversation starters.

Saturday, September 21, 2013

Dear Mr. President

Two weeks ago, Barack Obama visited Stockholm and spent half an hour or so on the KTH campus. Since Nordita is officially part of KTH, safety regulations went through the employee email list weeks in advance. Luckily, I was away on the great day. I was told later the Swedes were so successful at scaring people with the impending traffic disaster that Stockholm was basically deserted during the President’s visit and elks were seen chewing licorice in front of the royal palace.

I flew back to Stockholm the following day. Lufthansa online check-in suffered an interesting technical glitch and produced a boarding pass for seat 1A, business class. Yay for software bugs. As I was sitting in business class with leg space I don’t actually need (I’m not socialist, just short), I couldn’t help but wonder: if I had 15 minutes, what would I tell the President? Hell, what would you tell the man?

From the German perspective, the American political system looks strange, which is ironic given the history of Germany’s representative democracy. The strong role of the US President in particular, and the focus on individuals rather than programs in general, is the most obvious difference. Stranger still is that the political landscape in America is in practice a two party system. This has created a situation where, instead of different parties offering a spectrum of alternatives, the two parties morph to fit their potential clientele, or make it fit. And, needless to say, the wealthy part of the clientele lobbies for their interests, an influence that’s amplified by the almost complete lack of labor unions.

Yes, from a German perspective it seems strange that a country which values democracy so dearly practices it so badly. But then I’m not a political scientist, I just hope I know enough to put my two X in the right places on Sunday.

During the years I spent overseas, Academic America seemed to be overwhelmingly on the side of Obama’s Democratic Party. I recall many seminars in which a US American speaker would make jokes or political statements that clearly showed they were confident the majority of the audience would sympathize with their political views. And they were right of course. (Provided the audience was mostly American. These jokes don’t fly in Europe.) But during the last year I’ve sensed this support base faltering as the conditions for scientific research gradually worsen under Obama’s watch.

There are many things the man must shoulder and I’m sure they weigh heavily. Among all these weighty boulders, there’s a tiny little pebble that made me lose my faith that the USA will overcome its anti-scientific congestion. It came with this headline:

    “Last month [March 2013], President Obama signed 600 pages of legislation to keep the government from shutting down, while shutting down much of the nation’s [political science] studies. Senator Tom Coburn (R-Okla.) secured Democrats’ approval for an amendment to the bill that eliminates the National Science Foundation’s political science studies, except those the NSF director deems relevant to national security or U.S. economic interests.”
By now, the NSF has cancelled the political-science grant cycle.

Dear Mr. President, how could you have let that happen?

Every major problem that this planet presently faces is primarily about organizing human life and negotiating complex problems with uncertain solutions. The existing political, social, and economic systems are insufficient to deal with these problems, and scientific knowledge is insufficiently integrated into decision making procedures. As societies and economies have become more interconnected, political institutions have not kept pace. The technology is there, the knowledge of how to use it isn’t. This realization lies behind initiatives like FuturICT and attempts to predict political unrest. Yes, that’s political science for you.

Today riots are organized on Twitter, wars are waged on YouTube, and election results are predicted on online futures markets. Nobody knows what this means for the future of democracy. Do Facebook and Twitter help spread Democracy and Human Rights? Are the White House Petitions a good idea or do they just create noise? Yes, that’s political science for you. We all have too much information and not the faintest idea how to intelligently aggregate it and use it within our political systems. We need a scientific approach to institutional design. Trial and error is an archaic procedure that takes time we don’t have, and errors have become too costly.

Just the situation to scrap funding for the political sciences, I see.

I am trying to imagine Angela Merkel suspending all governmental funding for political science. Germany is the land of poets and thinkers, the land of Kant, Hegel, Marx, Engels and Weber. Besides inventing compound nouns, Germans are also good with solidarity, strikes, and nudity. The Americans made very sure each German receives a solid education about the merits of democracy. I can see the outrage. I see the ‘68 students, now at retirement age, clogging the streets, “academic freedom” scrawled over their flopping breasts. “Censorship!” they shout. “Thoughts are free” they sing. Then the President of the United States calls. “Angie,” he says, “Wtf?”

The great advantage of the American political system over the German one is however that the US President can only serve two terms, while the German chancellor can run till he or she drops dead.

Dear Mr President: I hope you tried a handful of the salty licorice that the Swedes chew down by the pound. Because that’d make you as sick as I feel when I read what American scientists must endure these days.

Tuesday, September 17, 2013

Quantum Gravity in Gamma Ray Bursts: Still Nothing

[Image: Small wavelength photons (blue) travel faster than their long wavelength companions (red). Image Source]
Gamma ray bursts emit highly energetic photons that, by the time they reach us, have a long journey behind them. That makes these photons excellent candidates to test new physics: Because both the energies and the distance are extreme they potentially give us access to so-far undiscovered effects.

However, in contrast to supernovae of type Ia, gamma ray bursts are each one of a kind – they’re not so much standard candles as surprise fireworks. That makes these photons not quite so excellent candidates to test new physics.

A story that dates back now more than a decade, and that has been hailed as a ‘test of quantum gravity’, is that certain quantum gravitational effects could lead to an energy-dependence of the speed of light. In this case, photons of high energy would travel either faster or slower than low-energy photons (depending on the sign of a parameter). Such an effect is not allowed by the presently established theories, and looking for a signal of an energy-dependent speed of light therefore tests deviations from Einstein’s theory.
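
To get a sense of the magnitude: to first order, a photon of energy E arrives delayed (or advanced) relative to its low-energy companions by roughly Δt ≈ (E/E_QG)·D/c over a travel distance D, where E_QG is the scale at which the new effects kick in. Here’s a minimal back-of-the-envelope sketch, with illustrative numbers of my own choosing; a proper analysis would also integrate over cosmological redshift, which this ignores:

```python
# Back-of-the-envelope: arrival-time delay from an energy-dependent speed
# of light, v(E) ~ c*(1 - E/E_QG), to first order in E/E_QG.
# Illustrative numbers only; real analyses integrate over redshift.

C = 3.0e8            # speed of light in m/s
GPC_IN_M = 3.086e25  # one gigaparsec in meters
E_PLANCK = 1.22e19   # Planck energy in GeV

def delay_seconds(photon_energy_gev, distance_gpc, e_qg_gev=E_PLANCK):
    """Delay of a high-energy photon relative to its low-energy companions."""
    travel_time = distance_gpc * GPC_IN_M / C  # light travel time in seconds
    return (photon_energy_gev / e_qg_gev) * travel_time

# A 30 GeV photon from a burst 5 Gpc away, with E_QG at the Planck scale:
print(delay_seconds(30.0, 5.0))  # ~1.3 seconds
```

A delay of a second or so, accumulated over billions of years of travel, is tiny but in principle measurable against the sharp time structure of a burst; a bound like the one discussed below, which pushes E_QG beyond several times the Planck scale, shrinks it accordingly.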

Theoretically, there are two different ways this could happen, either by a breaking of Lorentz-invariance or by a deformation of Lorentz-invariance, and these cases have to be carefully distinguished. Both cases lead to an energy-dependent speed of light, but if Lorentz-invariance is broken, meaning there is a preferred restframe, then this would also lead to other effects that we should have seen already. This means if we do see such an unexpected effect in the emissions of gamma ray bursts, we’d know it’s not a breaking of Lorentz-invariance but a deformation. This would be considerably more exciting, but is also much more speculative.

My position on this has been, and still is, that a deformation of Lorentz-invariance is not well motivated and theoretically highly problematic, thus I don’t think an energy-dependent speed of light is plausible. But in the end the question is what the data says.

Data however is a reserved companion who just politely asks to be analyzed, and given that no two gamma ray bursts are alike it’s not at all clear how to do the analysis. It seems to me experimentalists are still poking around and trying out new methods. Occasionally a constraint comes out of this. The most recent constraint came out in two papers by Vlasios Vasileiou and a whole list of other people in no particular alphabetic order (if somebody can fill me in on the authorship order in that part of the community, please enlighten me).

To make a long story short, they propose three new methods to arrive at bounds, all with advantages and disadvantages, and constrain new quantum gravitational effects to lie beyond 7.6 times the Planck scale, at 95% confidence level. This means the new bound is both weaker and at a lower confidence level than the bound by Nemiroff et al that we previously discussed, so it’s non-news really. And that doesn’t even take into account that the more ways you try to extract a signal from the data, the less likely it will eventually be a real effect.

In a footnote in the discussion, the authors of the new paper criticize the Nemiroff et al result basically for the same reasons that I put forward in my earlier blogpost: The constraint hinges very strongly on a few pairs of photons. But the advantage of the Nemiroff analysis is that it’s a clear and clean method that can rapidly increase to higher statistical relevance with more observations, provided we see just a couple more of such pairs. It merely relies on the statement that it’s quantifiably unlikely that a few photons arrive almost simultaneously unless they were emitted simultaneously and traveled together – at the same speed. Unfortunately, the significance of that result could also decrease, and that for reasons that have nothing to do with the energy-dependence of the speed of light, just with the physics at the source.
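
To illustrate the logic with a toy version (my own sketch, not the actual Nemiroff et al analysis): if photon arrival times were random over the burst duration, tight clusters of arrivals would be quantifiably rare.

```python
# Toy Monte Carlo for the logic behind such coincidence arguments (my own
# illustration, not the actual Nemiroff et al analysis): how likely is it
# that k photons, arriving at random times during a burst of duration T,
# cluster within a small window delta purely by chance?

import random

def chance_coincidence(k, T, delta, trials=200_000):
    hits = 0
    for _ in range(trials):
        times = sorted(random.uniform(0.0, T) for _ in range(k))
        if times[-1] - times[0] < delta:  # all k photons within the window
            hits += 1
    return hits / trials

# Three photons within 1 second of each other during a 100 second burst:
print(chance_coincidence(3, T=100.0, delta=1.0))
# ~3e-4, in agreement with the small-delta estimate k*(delta/T)**(k-1)
```

With numbers like these, a handful of near-simultaneous photons carries real statistical weight – which is also why the result hinges so strongly on those few pairs.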

The new approach in the Vasileiou et al paper is valuable however for trying to take into account an intrinsic dispersion of the source. But I think the great weakness of this bound is the same as the previous bounds: low statistics with results that strongly depend on one or a few gamma ray bursts. I doubt we’ll ever get rid of the possibility that source effects play a role unless red-shift is taken into account and different distances are sampled over. That’s because an energy-dependent speed of light should yield a stronger effect the farther away the source, while a source-dependent effect does not get stronger.

Either way, for me it’s a win-win situation :o) There’s either quantum gravity in the gamma ray burst measurements or there isn’t. If there is, it’s a huge boost for the field I work in. If not, I was right all along saying that there is no effect. At the moment however the situation isn’t entirely settled, so stay tuned.

Monday, September 16, 2013

Book Review: The Universe in the Rearview Mirror

The Universe in the Rearview Mirror: How Hidden Symmetries Shape Reality
By Dave Goldberg
Dutton Adult (July 11, 2013)

In his new book “The Universe in the Rearview Mirror,” Dave Goldberg expounds the important role of symmetries in the fundamental laws of physics. He starts with the discrete operations of charge-conjugation, parity, and time reversal, and their combinations. After introducing the reader to Emmy Noether and her work, he discusses continuous symmetries, homogeneity and isotropy, as well as Lorentz-invariance, before continuing with gravity. The later chapters deal with gauge symmetries and symmetry breaking. The book finishes with existing proposals for physics beyond the standard model, grand unification, supersymmetry, and the missing theory of quantum gravity.

Goldberg does a remarkably good job conveying a very technical topic in non-technical terms and with only a handful of equations (yes, E=mc² among them). He works mostly with analogies and writes in an engagingly colloquial way with a large dose of humor, though some readers might find the high density of jokes more distracting than helpful*. The bibliography and the guide to further reading provide helpful references for readers who wish for more details, and the book also has a brief glossary.

Symmetries that “shape reality,” as the subtitle of the book says, are a vast topic of course. Goldberg has focused on those symmetries that (for all we know) shape reality on the most basic level. He does not (except for the purpose of a brief analogy) touch upon the much broader topic of emergent symmetry and order in condensed matter systems, or in other areas of physics and science more generally. This focus has the benefit that the book is relatively lean (291 pages, hardcover) and maintains its momentum, but the blurb could have been more descriptive.

On the downside, the book is confusingly structured, and the reader who doesn’t bring prior knowledge might become frustrated in several places. For example, the WMAP mission is mentioned in the first chapter, without explanation of what exactly it measures and without an image. The radiation of the cosmic microwave background is again introduced in the third chapter, without reference to the earlier mention of WMAP, and temperature anisotropies are briefly mentioned here. Temperature anisotropies are then again introduced at the end of this chapter, and here the WMAP image finally appears (low-resolution, black and white), alas without the image being mentioned in the text and without explanation of what it shows.

In fact, while the graphics that have specifically been produced to accompany the text are well done and helpful, the book also contains a number of images that are useless and only loosely connected to the text. An image on page 41, I guess, shows the Venus transit which is mentioned in the text on this page, or maybe it shows an exercise to find one’s blind spot. On page 109, the reader encounters a Klein bottle, and the only reason I can infer is that the next page mentions Emmy Noether “took classes with Hilbert and Klein.” An image on page 118 (no caption) shows Einstein arcs, and the explanation in the text amounts to “massive bodies bend light”. The image on page 250 remained a mystery to me until I found it in the Wikipedia entry on “Sisyphus” (mentioned on that page).

The book is also confusing and unstructured in other ways. Goldberg begins to talk about “the elusive dark matter particle” (in itself a questionable phrase) in Chapter 9 without so much as mentioning what dark matter is or what evidence we have for it. He uses the Planck length in Chapter 6, but only explains it in Chapter 10. The cosmological constant problem is introduced twice. The elaboration on the twin paradox somehow fails to spell out what the resolution of the paradox is. It is mentioned that inflation was proposed “to get around the horizon problem” but the reader is not actually told how inflation solves the problem. Evidence for inflation amounts to “we’re reasonably certain that it is [correct]”. Goldberg elaborates on the multiverse and later on the compactified dimensions of M-theory, but does not connect the two topics. He speaks about the entropy of matter in the early universe before explaining what happened in the early universe. On page 167/168, I came across possibly the most opaque motivation for quantum gravity that I’ve ever encountered. Luckily there is a considerably better one on page 269. A quotation from Stephen Hawking expressing the opinion that information is not lost in black holes is dumped onto the reader in a description of black holes as “entropy-producing machines” without so much as mentioning the black hole information problem.

I’ll not go down the full list of similar notes that I took while reading; you get the picture.

Goldberg has to be credited for making his text timely by referring to very recent works; for example, he mentions Verlinde’s contribution on entropic gravity. This reference (the only reference on the topic) appears in a section on the arrow of time, and at least I could not infer the direct connection, besides both having something to do with entropy. Goldberg uses Max Tegmark’s proposed level structure of the multiverse, and in the last chapter on physics beyond the standard model we meet Garrett Lisi, the surfer without university affiliation who allegedly stunned everybody by proposing his theory of everything. It somehow goes unmentioned that Lisi has a PhD in physics. I’m picking at this point not because I don’t think the E8 root diagram is pretty, but because the reader is left with the unfortunate impression that surfing is all you need to understand modern physics. Towards the end of the book the reader can find a very good summary of the recent discovery of the Higgs particle and its relevance.

In summary, the book is valuable for the selection of topics and for conveying the relevance of symmetries in the laws of nature, but the execution leaves something to be desired. Sean Carroll’s two books, for example, cover a substantial part of the physics built upon Goldberg’s hidden symmetries, but the reader who does not bring prior knowledge about modern physics will learn a great deal more from Carroll’s more didactic approach. Goldberg however succeeds in inspiring a sense of awe for the power of symmetries, not least because “awesome” seems to be one of his favorite words.

*Humor, of course, is always a matter of taste. So let me just say that messages like “science nerds… spend … many nights alone”, or that physicists don’t know how to dress elegantly and don’t get invited to dinner parties, strike me as more funny-peculiar than funny-ha-ha.

Thursday, September 12, 2013

Whatever happened to AdS/CFT and the Quark Gluon Plasma?

A decade ago, the AdS/CFT correspondence was celebrated as a possible description of the quark gluon plasma. RHIC measurements of heavy ion collisions at that time showed a surprisingly small viscosity that led to a revision of the previous models. Excitingly, a small viscosity appears naturally in the gauge-theory dual of the AdS/CFT correspondence, never mind that QCD is neither conformal nor supersymmetric. This development was all the more welcome as it served to demonstrate that string theory is not useless, as critics claimed, but that it can provide insights which improve our understanding of physical processes in the real world.

The gauge-gravity correspondence rapidly became a boom area in high energy physics. After the viscosity, people looked at other observables, notably the energy loss of particles going through the plasma. In highly energetic particle collisions, quarks are produced in pairs, but due to confinement individual quarks are never measured. What is measured instead are color-neutral hadrons that the quarks fragment into and that are bundled along the direction of the original quarks. These bundles of hadrons are called jets, and in the simplest case there are two of them, with total momenta that are back-to-back correlated owing to their common origin from the quark pair.

In a heavy ion collision, one of the quarks may have to pass through the quark gluon plasma and thereby lose energy. This leads to what is known as ‘jet quenching’, a pair of back-to-back correlated jets where the total energy on one side is reduced. The energy loss in the plasma can be and has been calculated in different models for heavy ion collisions. There are about a handful of such models, and in the days before the LHC all of them tried to get in their predictions for the jet quenching at LHC energies, the central question being how the energy loss scales with the increase in collision energy.
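
As an aside, one standard quantity the LHC experiments use to measure this imbalance is the dijet energy asymmetry. Here’s a minimal sketch (my illustration; the post doesn’t single out this particular observable):

```python
# Dijet energy asymmetry, a standard measure of jet quenching
# (my illustration; not an observable singled out in the text above):
#
#     A_J = (E_1 - E_2) / (E_1 + E_2)
#
# where E_1 >= E_2 are the energies of the leading and subleading jets.
# Balanced back-to-back jets give A_J ~ 0; energy lost by one jet in
# the plasma pushes A_J toward larger values.

def dijet_asymmetry(e_leading, e_subleading):
    return (e_leading - e_subleading) / (e_leading + e_subleading)

print(dijet_asymmetry(100.0, 95.0))  # ~0.03 -- essentially unquenched pair
print(dijet_asymmetry(100.0, 60.0))  # 0.25  -- one jet lost energy in the medium
```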

After the LHC heavy ion runs, it turned out the data do not agree very well with the scaling expected for energy loss from the AdS/CFT correspondence – in fact, of all the models, it made the worst prediction. As we discussed in an earlier post, AdS/CFT predicts too much energy loss; the plasma is too strongly coupled.

AdS/CFT confronts data. Image Credits: Thorsten Renk.
For details and references, please refer to this earlier post.

That the scaling doesn’t fit well with the data need not be too much of a worry because these scaling arguments were quite general and in reality the process of propagation through the quark gluon plasma isn’t quite as simple. But clearly the new data called on theoretical physicists working on AdS/CFT to study the observables and improve their model or to call it a failure and move on. Alas, nothing like that happened.

For two years or so now, since the LHC data came in, I’ve been sitting through AdS/CFT talks that would inevitably be motivated by the low viscosity of the quark gluon plasma and the RHIC data, frog spawn picture and all. And every time I’d raise my hand at the end of the seminar and ask for the speaker’s opinion on the recent LHC data, expecting an update on the work on that matter and a reassurance that there is no need to worry because the models can be improved to accommodate the data. Instead, it was like the LHC never happened. I don’t work in this field and don’t even follow the literature closely, but it seemed that I knew more about the problems with the LHC results than the people who got paid for talks motivated by yesterday’s data.

What they’d typically say is that nobody really expected AdS/CFT to make quantitative predictions. Alas, even the qualitative prediction, the mere slope of the curve, is wrong. The only prediction that is “qualitatively” correct is that there is some energy loss. Besides this, it’s all well and good that a new model doesn’t make quantitative predictions, but that’s not a status that should become permanent.

It’s not that the data went entirely unnoticed. A few brave souls took on the issue. In this paper, Ficnar, Noronha and Gyulassy looked at the effects of higher derivative corrections to the gravity sector. It’s somewhat ad hoc, but apparently does reduce the energy loss. There is however no fit to the data, and I’m not sure what this does to other observables. In another work, Ficnar also took into account a time-dependence of the configuration, but the conclusions with respect to the jet quenching and LHC data remain vague and amount to “a more thorough numerical analysis is needed.” In a recent paper, William Horowitz summarized the situation as follows:

“Despite significant efforts, AdS/CFT estimates for light quark and gluon energy loss are qualitative at best… it is difficult to imagine that a relatively sophisticated estimate of the suppression would be consistent with data.”

I was thus thrilled when I heard a talk by Stephen Gubser (about recent work with Ficnar) at a conference in Frankfurt this July, because he spoke about a possibility to improve the AdS/CFT model to accommodate the LHC data. Unfortunately, Gubser and collaborators don’t have a paper about this on the arXiv yet, so all I can do is refer you to the slides. My vague recollection is that he said one needs to take into account the momentum on the endpoints of the strings and that this does improve the scaling of the energy loss and fits considerably better with the LHC measurements. Though, if I recall correctly, getting the slope to match the data requires pushing the parameter into a range where one actually shouldn’t trust the model anymore. So in the end this might not solve the problem either.

If that explanation sounds like I don’t really understand the details it’s because I don’t really understand the details. I didn’t take notes, and two months later that’s as much as I can recall when looking at the slides and the Princeton professor has not been very communicative upon my inquiry. I thus just want to draw your attention to this development – if you’re interested in the topic, I recommend you have an eye on Ficnar and Gubser’s next arXiv uploads. For all I can tell, these guys are the only ones who take the issue seriously and so far it doesn’t sound too promising to me. If I’m missing some references, please let me know.

I don’t know enough about the topic to tell how likely it is that the AdS/CFT model can be improved to fit the data, and personally I find the applications to condensed matter systems better motivated. What annoys me about this situation is that people working in the field continue to decorate themselves with false achievements when they use the viscosity of the quark gluon plasma to justify the relevance of their own work and that of string theory at large.

It’s time the community comes clean and draws a conclusion. Either AdS/CFT cannot describe the quark gluon plasma, in which case please bury this episode in the history books and move on. Or it can, and then I expect to see a curve that fits the LHC data. At the very least I want to hear it’s on the to-do list. Yes, the LHC really happened.


Sunday, September 08, 2013

The Limits of Science

There’s been some buzz going through the blogosphere, following an essay by Steven Pinker on “Scientism”. On the one side of the debate are those who believe scientism is a higher state of consciousness, and on the other side those who think it’s a scientifically transmitted disease with a symptomatic itch that shouldn’t be scratched publicly. And I think they all failed to address the main point: Where are the limits of science? And how do we find them?

If I read essays by philosophers and social scientists and other academics in what is vaguely considered “soft science”, I often can’t help but sense a certain ring of panic. The physicists are coming, is what I read; they’re planning to take over with mathematics. Then they rush to assure themselves and everybody else that no, no, some things can’t be described by mathematics. This appeals to the public because nobody likes to be predictable and many people are afraid of math. The softies side with the masses and end up being the good guys for perpetuating cognitive illusions, while the physicists are branded delusional reductionists. Welcome to the 21st century.

Oh yes, the softies will admit, there are by now many mathematical models in the social sciences, and neuroscience has already made some discussions about consciousness entirely redundant. But look, they’ll say, these can only tell you something about statistics (scary math word). Human behavior can’t be modeled mathematically. After all, we’ve got free will (unproved). We understand the models about us (irrelevant). Humans are special (said the human), the brain is complex (whatever that means), and it’s got qualia (defined by being unmeasurable). And, most importantly, humans are not elementary particles. (Always good to finish an argument with a completely irrelevant statement that everybody must agree on.)

The problem with these elaborations, besides making me wonder what these people get paid for, is that we presently know of no reason why some observations, like human behavior, cannot be modeled mathematically. But neither does anybody know for sure that it is possible. What we do know however is that it certainly is not presently possible. And that’s what determines the limits of science: our present possibilities, what we can do in practice, and not unknown and quite possibly unknowable principles.

It shouldn’t be relevant to my argument, because I’m telling you what matters is what we can do in practice and not what we can do in principle. But just so you don’t misread me: I don’t believe that everything can be described by mathematics (for reasons I’ve laid out here). I’m just saying that we presently don’t know of any reason why it should not be possible to describe human behavior mathematically.

Personally, I think it is possible but useless, in that such a model would in the best case be a copy of the real system and would not deliver predictions. It would be like trying to understand the sun by simulating it in true resolution and real time on a computer cluster. Then you can either watch the sun or your computer, but besides this you haven’t really gained anything. If you believe that we live in a matrix as study objects, then we live here because it was not possible to find a simpler way to analyze the behavior of human societies than just creating and watching them.

So much for my beliefs. But these are beliefs because we don’t know whether they’re true, and in any case these limits that might exist in principle are far beyond the limits that presently exist in practice.

And of course science has limits. It has limits because our understanding is incomplete. These limits of science aren’t fixed and they are constantly shifting as we learn more about the world that we live in. In that process, topics that were previously inaccessible to the scientific method become accessible, and that creates friction in the communities.

Imagine the world of knowledge as having a core of hard science, surrounded by a belt of soft science that shades into interpretations, narrative, opinions, speculations, and eventually fantasy. The hard core expands as we learn: What once was a matter of interpretation becomes measurable. What once seemed beyond computational possibility becomes computable. What once was merely a story becomes supported by evidence. Problems arise if researchers refuse to use the best scientific methods of the day in their field. Then they are simply acting unprofessionally. And when they notice they’ve missed the boat, they panic.

Where are the limits of science right now? That’s the discussion that we should have. And it’s not an easy one.

Let me give you three examples of what’s presently off-limits for mathematical modeling.

One is history. You could in principle imagine that it was possible to create a model of human behavior, say, in wartime, that produces outcomes that we can observe today, for example how people expressed themselves in the literature. Then you could analyze the literature to draw conclusions about the circumstances back then. Needless to say, converting experience into writing is so difficult to model mathematically that nobody can do this, and nobody even knows if it is possible. So instead historians go and read the literature and study the paintings, and try to interpret them by taking into account as much as they know, most notably about being human, something that they can do better than any software or equation. At least for now.

The second one is personal identity. Nobody really knows what it takes for a human brain to create a sense of self-awareness and the experience of being an individual actor in possession of a body. Neuroscience has collected a lot of information on that matter, but we’re far off from being able to mathematically model these processes. Much of the literature on the subject is interpretation of data or case-studies. But wait some decades and I’m sure we’ll know much more about what enables “you” to think of “yourself”.

The third example is politics. One often hears that science can only deliver the facts, but humans still have to make the decisions because they have to take into account “morals” and “values” that are off-limits for science. This is however empty vocabulary. Values and morals are just simplifying concepts that arise in our cultures. They are first and foremost words that serve the purpose of communicating opinions. Morals and values change over time, people tend to interpret them individually differently, and they might find them more or less helpful for their self-expression. But there is nothing – in principle! – that prohibits science from predicting the emergence of certain morals and values. Again though, in practice, nobody can do this.

And that’s why science cannot replace politics. Because when you express your opinion about a possible change, you are projecting yourself into the future and trying to find out whether or not it would be an improvement. Or, in the Darwinian mindset, whether you’ll be more or less well adapted to your environment. For this projection you need to know some facts, and these facts science can provide – with error bars. But what science cannot do is project you and your experience into the future. The best way that we presently know to do this projection is to ask people to do it themselves. There are pitfalls to this, because we are not actually good at predicting what we will think ten years from now. But presently it’s the best we can do.

The last example also tells you why it is important to know the present limits of science. Because it raises the question what we know about human decision making and whether we can use that knowledge to make better decisions.

In summary. Science has limits, but they change over time. Knowing where the present limits of science are is important because that’s where opinions and interpretations become relevant to decision making. Excuse me for publicly scratching my itches.

Tuesday, September 03, 2013

What is Special Relativity?

I got issues. Here’s one. I don’t like what people say about special relativity. Because we’re friends, special relativity and I.

I got issues with certain people in particular, those writing popular science books. Sometimes I feel like I have to thank every physicist who takes the time to write a book. But, well, I got issues. Also, I got sunglasses and a haircut, see photo.

I’m presently reading “The Universe in the Rearview Mirror” (disclaimer: free copy) and here we go again. Yet another writer who gives special relativity a bad name.

Here’s the issue.

Ask some theoretical physicist what special relativity is and they’ll say something like “It’s the dynamics in Minkowski space” or “It’s the special case of general relativity in flat space”. (Representative survey taken among our household members, p=0.0003). But open a pop science book and they’ll try to tell you special relativity applies only to inertial frames, only to observers moving with constant velocities.

Now, as with all nomenclature it’s of course a matter of definition, but referring to special relativity as being only good for inertial frames is bad terminology, and not only because it doesn’t agree with the modern use of the word. The problem is that general relativity is commonly, both among physicists and in the pop sci literature, referred to as Einstein’s theory of gravity, rubber sheet and all. Einstein famously used the equivalence principle to arrive at his theory of gravity, and that principle says essentially: “The effects of gravity are locally indistinguishable from acceleration in flat space.” With the equivalence principle, all you need to do is take acceleration in flat space and glue it locally to a curved space, and voila, there’s general relativity. I’m oversimplifying somewhat, all right, but if you know a thing or two about tensor bundles that’s essentially it.

The issue is, if you don’t know how to describe acceleration in flat space then the equivalence principle doesn’t gain you anything. So if you’ve been told special relativity works only for constant velocities, it’s impossible to understand all the stuff about angels pulling lifts and so on. You also mistakenly come to believe that to resolve the twin paradox you need to take into account gravity, which is nonsense.

Yes, historically Einstein first published special relativity for inertial frames, after all that’s the simplest case, and that’s where the name comes from. But the essence of special relativity isn’t inertial frames, it’s the symmetry of Minkowski space. It’s absolutely no problem to apply special relativity to accelerated bodies. Heck, you can do Galilean relativity for accelerated bodies! All you need is to know what a derivative is. You can also, for that matter, do Galilean relativity in arbitrary coordinate frames. In fact, most first semester exercises seem to consist basically of such coordinate transformations, or at least that’s my recollection. So don’t try to tell me that the ‘general’ of relativity has something to do with the choice of coordinates.
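
To make that concrete, here’s a minimal numerical sketch (with a toy velocity profile of my own choosing): for an accelerating traveler in flat space, one simply integrates the proper time dτ = √(1 − v(t)²/c²) dt along the worldline, and gravity appears nowhere.

```python
# Proper time along an accelerated worldline in flat space -- plain special
# relativity, no gravity anywhere. The velocity profile is a toy example:
# the traveler smoothly speeds up to 0.8c and slows back down to rest.

import math

T_HOME = 10.0  # coordinate time elapsed for the stay-at-home twin

def speed(t):
    # Smooth acceleration and deceleration; only |v(t)| enters the time dilation.
    return 0.8 * math.sin(math.pi * t / T_HOME)

def proper_time(v_of_t, t_total, steps=100_000):
    """Integrate d(tau) = sqrt(1 - v^2/c^2) dt, in units where c = 1."""
    dt = t_total / steps
    return sum(math.sqrt(1.0 - v_of_t(i * dt) ** 2) * dt for i in range(steps))

print(proper_time(speed, T_HOME))  # ~8.1 < 10: the accelerated twin ages less
```

That’s the twin paradox, resolved with nothing but a derivative and an integral in flat space.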

So yes, historically special relativity started out being about constant velocities. But insisting – more than 100 years later – that special relativity is about inertial frames, and only about inertial frames, is like insisting a telephone is a device to transfer messages about cucumber salad, just because that happened to be the first thing that ever went through a phone line. It’s an unnecessarily confusing terminology.

Since special relativity is busy boosting your rocket ships with laser cannons and so on, on her* behalf I want to ask you for somewhat more respect. Special relativity is perfectly able to deal with accelerated observers.


*German nouns come in three genders: male, female and neuter. Special relativity, or theory in general, is a female noun. Time is female, space is male. The singularity is female, the horizon is male. Intelligence is female, insanity male. Don’t shoot the messenger.