Sunday, June 28, 2015

I wasn’t born a scientist. And you weren’t either.

There’s a photo that keeps cropping up in my Facebook feed, and it bothers me. It shows a white girl, maybe three years old, kissing a black boy of the same age. The caption says “No one is born racist.” It’s adorable. It’s inspirational. But the problem is, it’s not true.

Children aren’t saints. We’re born mistrusting people who look different from us, and we treat those who look like us better. Toddlers already show this “in-group bias,” research says. Though I have to admit that, as a physicist, I am generally not impressed by what psychologists consider statistically significant, and I acknowledge it is generally hard to distinguish nature from nurture. But that a preference for people of similar appearance should be a result of evolution isn’t so surprising. We are more supportive of those we share genes with, family first of all, and looks are a giveaway.

As we grow up, we should become aware that our bias is both unnecessary and unfair, and take measures to prevent it from being institutionalized. But since we are born extra suspicious of anybody not from our own clan, it takes conscious educational effort to act against the preference we give to people “like us.” Racist thoughts don’t go away by themselves, though one can work to address them – or at least I hope so. But it starts with recognizing one is biased to begin with. And that’s why this photo bothers me. Denying a problem rarely helps solve it.

By the same romantic reasoning I often read that infants are all little scientists, and it’s only our terrible school education that kills curiosity and prevents adults from still thinking scientifically. That is wrong too. Yes, we are born curious, and as children we learn a lot by trial and error. Ask my daughter, who recently learned to make rainbows with the water sprinkler, mostly without soaking herself. But our brains didn’t develop to serve science, they developed to serve us in the first place.

My daughters for example haven’t yet learned to question authority. What mommy speaks is true, period. When the girls were beginning to walk I told them to never, ever, touch the stove when I’m in the kitchen because it’s hot and it hurts and don’t, just don’t. They took this so seriously that for years they were afraid to come anywhere near the stove at any time. Yes, good for them. But if I had told them rainbows are made by garden fairies they’d have believed this too. And to be honest, the stove isn’t hot all that often in our household. Still today much of my daughters’ reasoning begins with “mommy says.” Sooner or later they will move beyond M-theory, or so I hope, but trust in authorities is a cognitive bias that remains with us through adulthood. I have it. You have it. It doesn’t go away by denying it.

Let me be clear that human cognitive biases aren’t generally a bad thing. Most of them developed because they are, or at least have been, of advantage to us. We are for example more likely to put forward opinions that we believe will be well received by others. This “social desirability bias” is a side effect of our need to fit into a group for survival. You don’t tell the tribal chief his tent stinks if he has a dozen fellows with spears at his back. How smart of you. But while opportunism might benefit our survival, it rarely benefits knowledge discovery.

It is because of our cognitive shortcomings that scientists have put into place many checks and methods designed to prevent us from lying to ourselves. Experimental groups, for example, go to great lengths to prevent bias in data analysis. If your experimental data are questionnaire replies then that’s that, but in physics data aren’t normally very self-revealing. They have to be suitably processed and analyzed with numerical tools to arrive at useful results. The data have to be binned, cuts have to be made, backgrounds have to be subtracted.

There are usually many different ways to process the data, and the more ways you try the more likely you are to find one that delivers an interesting result, just by coincidence. It is pretty much impossible to account for trying different methods, because one doesn’t know how strongly these methods are correlated. So to prevent themselves from inadvertently running multiple searches for a signal that isn’t there, many experimental collaborations agree on a method for data analysis before the data is in, then proceed according to plan.
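You can see this trap in a few lines of code. The sketch below is a toy example of my own (not any collaboration’s actual pipeline): it histograms pure noise with several different binnings and keeps the most significant excess found anywhere, and the “discoveries” pile up even though there is no signal by construction:

```python
# Toy demonstration of the multiple-trials problem: analyze pure noise
# in several different ways and keep the best-looking excess.
import numpy as np

rng = np.random.default_rng(42)
n_events, n_experiments = 10_000, 1000
bin_choices = [20, 35, 50, 80]  # different (correlated) ways to bin the data

best_excesses = []
for _ in range(n_experiments):
    data = rng.uniform(-4, 4, size=n_events)  # flat noise, no signal
    best = 0.0
    for n_bins in bin_choices:
        counts, _ = np.histogram(data, bins=n_bins, range=(-4, 4))
        expected = n_events / n_bins          # exact flat expectation
        z = (counts - expected) / np.sqrt(expected)
        best = max(best, z.max())             # keep the biggest bump seen
    best_excesses.append(best)

# A single pre-chosen bin crosses 3 sigma only ~0.1% of the time, but the
# best bump over all bins and binnings does so far more often:
print(np.mean(np.array(best_excesses) > 3.0))
```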

(Of course if the data are made public this won’t prevent other people from reanalyzing the same numbers over and over again. And every once in a while they’ll find some signal whose statistical significance they overestimate because they’re not accounting, can’t account, for all the failed trials. Thus all the CMB anomalies.)

In science as in everyday life, though, the major problems are the biases we do not account for. Confirmation bias is probably the most prevalent one. If you search the literature for support of your argument, there it is. If you try to avoid that person who asked a nasty question during your seminar, there it is. If you just know you’re right, there it is.

Even though it often isn’t explicitly taught to students, everyone who has succeeded in making a career in research has learned to work against their own confirmation bias. Failing to list contradicting evidence or the shortcomings of one’s own ideas is the easiest way to spot a pseudoscientist. A scientist’s best friend is their inner voice saying: “You are wrong. You are wrong, wrong, W.R.O.N.G.” Try to prove yourself wrong. Then try it again. Try to find someone willing to tell you why you are wrong. Listen. Learn. Look for literature that explains why you are wrong. Then go back to your idea. That’s the way science operates. It’s not the way humans normally operate.

(And lest you want to go meta on me, the title of this post is of course also wrong. We are scientists in some regards but not in others. We like to construct new theories, but we don’t like being proved wrong.)

But there are other cognitive and social biases that affect science which are not as well known and accounted for as confirmation bias. “Motivated cognition” (aka “wishful thinking”) is one of them. It makes you believe positive outcomes are more likely than they really are. Do you recall them saying the LHC would find evidence for physics beyond the standard model? Oh, they are still saying it will?

Then there is the “sunk cost fallacy”: The more time and effort you’ve spent on SUSY, the less likely you are to call it quits, even though the odds look worse and worse. I had a case of that when I refused to sign up for the Scandinavian Airlines frequent flyer program after I realized that I’d be a gold member now had I done so six years ago.

I already mentioned the social desirability bias that discourages us from speaking unwelcome truths, but there are other social biases that you can see in action in science.

The “false consensus effect” is one of them. We tend to overestimate how much and how many other people agree with us. Certainly nobody can disagree that string theory is the correct theory of quantum gravity. Right. Or, as Joseph Lykken and Maria Spiropulu put it:
“It is not an exaggeration to say that most of the world’s particle physicists believe that supersymmetry must be true.” (Their emphasis.)
The “halo effect” is the reason we pay more attention to literally every piece of crap a Nobel Prize winner utters. The above-mentioned “in-group bias” is what makes us think researchers in our own field are more intelligent than others. It’s the way people end up studying psychology because they were too stupid for physics. The “shared information bias” is the one by which we discuss the same “known problems” over and over and over again and fail to pay attention to new information held only by a few people.

One of the most problematic distortions in science is that we consider a fact more likely the more often we have heard of it, called the “attentional bias” or the “mere exposure effect”. Oh, and then there is the mother of all biases, the “bias blind spot,” the insistence that we certainly are not biased.

Cognitive biases we’ve always had, of course. Science has progressed regardless, so why should we start paying attention now? (Btw, that reasoning is called the “status quo bias.”) We should pay attention now because shortcomings in argumentation become more relevant the more we rely on logical reasoning detached from experimental guidance. This is a problem that affects some areas of theoretical physics more than any other field of science.

The more pressing problem, though, are the social biases, whose effects become more pronounced the larger the groups are, the tighter they are connected, and the more information is shared. This is why these biases are so much more relevant today than a century, or even two decades, ago.

You can see these problems in pretty much all areas of science. Everybody seems to be thinking and talking about the same things. We’re not able to leave behind research directions that turn out to be fruitless, we’re bad at integrating new information, we don’t criticize our colleagues’ ideas because we are afraid of becoming “socially undesirable” when we mention the tent’s stink. We disregard ideas off the mainstream because they come from people “not like us.” And we insist our behavior is good scientific conduct, purely based on our unbiased judgement, because we cannot possibly be influenced by social and psychological effects, no matter how well established these are.

These are behaviors we have developed not because they are stupid, but because they are beneficial in some situations. In others, though, they become a hurdle to progress. We weren’t born to be objective and rational. Being a good scientist requires constant self-monitoring and learning about the ways we fool ourselves. Denying the problem doesn’t solve it.

What I really wanted to say is that I’ve finally signed up for the SAS frequent flyer program.

Wednesday, June 24, 2015

Does faster-than-light travel lead to a grandfather paradox?

Whatever you do, don’t f*ck with mom.
Fast track to wisdom: Not necessarily.

I stopped going to church around the same time I started reading science fiction. Because who really needs god if you can instead believe in alien civilizations, wormholes, and cell rejuvenation? Oh, yes, I wanted to leave behind this planet for a better place. But my space travel enthusiasm suffered significantly once I moved from the library’s fiction aisle to popular science, and learned that the speed of light is the absolute limit. For all we know. And ever since I have of course wondered just how well we know this.

Fact is, we’ve never seen anything move faster than the speed of light (except for illusions of motion), and it is both theoretically understood and experimentally confirmed that we cannot accelerate anything to become faster than light. That doesn’t bode well for our chances of visiting the aliens, but it isn’t the main problem. It could just be that we haven’t looked in the right places or haven’t tried hard enough. No, the main problem is that it is very hard to make sense of faster-than-light travel at all within the context of our existing theories. And if you can’t make sense of it, how can you build it?

Special relativity doesn’t forbid motion faster than light. It just tells you that you’d need an infinite amount of energy to accelerate something which is slower than light (“subluminal”) to become faster than light (“superluminal”). Ok, the infinite energy need won’t fly with the environmentalists, I know. But if you have a particle that always moves faster than light, its existence isn’t prohibited in principle. These particles are called “tachyons,” have never been observed, and are believed to not exist for two reasons. First, they have the awkward property of accelerating when they lose energy, which lets them induce instabilities that have to be fixed somehow. (In quantum field theory one can deal with tachyonic fields, and they play an important role, but they don’t actually transmit any information faster than light. So these are not so relevant to our purposes.) Second, tachyons seem to lead to causality problems.

The causality problems with superluminal travel come about as follows. Special relativity is based on the axiom that all observers have the same laws of physics, and these are converted from one observer to another by a well-defined procedure called a Lorentz transformation. This transformation from one observer to the other preserves lightcones, because the speed of light doesn’t change. The locations of objects relative to an observer can change when the observer changes velocity. But two observers at the same location with different velocities who look at an object inside the lightcone will agree on whether it is in the past or in the future.

Not so, however, with objects outside the lightcone. For these, what is in the future for one observer can be in the past of another observer. This means that a particle that for one observer moves faster than light – i.e. to a point outside the lightcone – actually moves backwards in time for another observer! And since in special relativity all observers have equal rights, neither of them is wrong. So once you accept superluminal travel, you are forced to also accept travel back in time.
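To make this explicit: under a Lorentz transformation (standard textbook algebra, written here in my notation) the time difference between two events transforms as

$$\Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^2}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}.$$

Taking both $\Delta t$ and $\Delta x$ positive, a spacelike separation means $\Delta x > c\,\Delta t$, and then any subluminal boost with $c^2\,\Delta t/\Delta x < v < c$ flips the sign of $\Delta t'$: the time order of the two events is reversed. Inside the lightcone, $\Delta x < c\,\Delta t$, no boost with $v < c$ can do this, which is why observers always agree on the ordering of causally connectable events.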

At least that’s what the popular science books said. It’s nonsense of course, because what does it mean for a particle to move backwards in time anyway? Nothing, really. If you saw a particle move faster than light to the left, you could just as well say it moved backwards in time to the right. The particle doesn’t move in any particular direction on a curve in space-time because the particle’s curve has no orientation. Superluminal particle travel is logically perfectly possible as long as it leads to a consistent story that unfolds in time, and there is nothing preventing such a story.

Take as an example the image below, showing the worldline of a particle that is produced, scatters twice to change direction, travels superluminally, and goes back in time to meet itself. You could interpret the very same arrangement as saying you have produced a pair of particles, one of which scatters and then annihilates again.

No, there is no problem with the travel of superluminal particles in principle. The problems start once we think of macroscopic objects, like spaceships. We attach to their curves an arrow of time, pointing in the direction in which the travelers age. And it’s here that the trouble starts. Now special relativity indeed tells you that somebody who travels faster than light will move backwards in time for another observer, because a change of reference frame will not reverse the travelers’ arrow of time. This is what creates the grandfather paradox, in which you can travel back in time to kill your own grandfather, resulting in you never being born. Here, requiring consistency would necessitate that it is somehow impossible for you to kill your grandfather, and it is hard to see how this would be ensured by the laws of physics.

While it’s hard to see what conspiracy would prevent you from killing your grandpa, it is fairly easy to see that closing the loop backwards in time is prevented by the known laws of physics. We age because entropy increases. It increases in some direction that we can, for lack of a better word, call “forward” in time. This entropy increase is ultimately correlated with decoherence, and thus probably also with the rest frame of the microwave background, but for our purposes it doesn’t matter so much exactly in which direction it increases, just that it increases in some direction.

Now whenever you have a closed curve that is oriented in the direction in which the travelers presumably experience the passage of time, then the arrow of time on the curve must necessarily run against the increase of entropy somewhere. Any propulsion system able to do this would have to decrease entropy against the universe’s push to increase it. And that’s what ultimately prevents time travel. In the image below I have drawn the same worldline as above with an intrinsic arrow of time (the direction in which passengers age), and how it is necessarily incompatible with any existing arrow of time along one of the curves, which is thus forbidden.

There is no propulsion system that would be able to produce the necessary finetuning to decrease entropy along the route. But even if such a propulsion existed it would just mean that time in the spaceship now runs backwards. In other words, the passengers wouldn’t actually experience moving backwards in time, but instead moving forwards in time in the opposite direction. This would force us to buy into an instance of a grandfather pair creation, later followed by a grandchild pair annihilation. It doesn’t seem very plausible, and it violates energy conservation, but besides this it’s at least a consistent story.

I briefly elaborated on this as a side note in a paper I wrote some years ago (see page 6). But just last month there was a longer paper on the arXiv, by Nemiroff and Russell, that studied the problems with superluminal travel in a very concrete scenario. In their example, a spaceship leaves Earth, visits an exoplanet that moves with some velocity relative to Earth, and then returns. The velocity of the spaceship at both launches is the same relative to the planet from which the ship launches, which means it’s a different velocity on the return trip.

The authors then calculate explicitly at which velocity the curves start going back in time. They arrive at the conclusion that the necessity of a consistent time evolution for the Earth observer would then require interpreting the closed loop in time as a pair creation event, followed by a later pair annihilation, much like I argued above. Note that singling out the Earth observer as the one demanding consistency with their arrow of time is in this case what introduces a preferred frame relative to which “forward in time” is defined.

The relevant point to take away from this is that superluminal travel in and by itself is not inconsistent. Leaving aside the stability problems with superluminal particles, they do not lead to causal paradoxes. What leads to causal paradoxes is allowing travel against the arrow of time which we, for better or worse, experience. This means that superluminal travel is possible in principle, even though travel backwards in time is not.

That travel faster than light is not prevented by the existing laws of nature doesn’t of course mean that it’s practically possible. There is also still the minor problem that nobody has the faintest clue how to do it... Maybe it’s easier to wait for the aliens to come visit us.

Thursday, June 18, 2015

No, Gravity hasn’t killed Schrödinger’s cat

There is a paper making the rounds which was just published in Nature Physics, but has been on the arXiv for two years:
    Universal decoherence due to gravitational time dilation
    Igor Pikovski, Magdalena Zych, Fabio Costa, Caslav Brukner
    arXiv:1311.1095 [quant-ph]
According to an article in New Scientist, the authors have shown that gravitationally induced decoherence solves the Schrödinger’s cat problem, i.e. explains why we never observe cats that are both dead and alive. Had they achieved this, it would be remarkable indeed, because the problem was solved half a century ago. New Scientist also quotes the first author as saying that the effect discussed in the paper induces a “kind of observer.”

New Scientist further tries to make a connection to quantum gravity, even though everyone involved told the journalist it’s got nothing to do with quantum gravity whatsoever. There is also a Nature News article, which is more careful about the connection to quantum gravity, or the absence thereof, but still wants you to believe the authors have shown that “completely isolated objects” can “collapse into one state,” which would contradict quantum mechanics. If that could happen, it would be essentially the same as the information loss problem in black hole evaporation.

So what did they actually do in the paper?

It’s a straightforward calculation which shows that if you have a composite system in thermal equilibrium and you push it into a gravitational field, then the degrees of freedom of the center of mass (com) get entangled with the remaining degrees of freedom (those of the system’s particles relative to the center of mass). The reason for this is that the energies of the particles become dependent on their position in the gravitational field by the standard redshift effect. This means that if the system’s particles had quantum properties, then these quantum properties mix together with the com position, basically.
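Schematically (my notation, not necessarily the paper’s), gravitational time dilation makes the internal energy position-dependent, so the Hamiltonian of a system in a weak homogeneous field picks up a coupling term:

$$H \approx H_{\rm com} + H_{\rm int}\left(1 + \frac{g\,x}{c^2}\right),$$

where $x$ is the vertical com coordinate and $H_{\rm int}$ the internal Hamiltonian. The cross term $g\,x\,H_{\rm int}/c^2$ means the internal dynamics tick at a rate that depends on where the system sits in the field, and it is exactly this that entangles the com position with the internal degrees of freedom.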

Now, decoherence normally works as follows. If you have a system (the cat) that is in a quantum state, and you get it in contact with some environment (a heat bath, the cosmic microwave background, any type of measurement apparatus, etc), then the cat becomes entangled with the environment. Since you don’t know the details of the environment, however, you have to remove (“trace out”) its information to see what the cat is doing, which leaves you with a system that now has a classical probability distribution. One says the system has “decohered” because it has lost its quantum properties (or at least some of them, those that are affected by the interaction with the environment).
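In formulas, a minimal sketch of this standard mechanism looks like this. The interaction with the environment turns a product state into an entangled one,

$$\big(\alpha\,|{\rm alive}\rangle + \beta\,|{\rm dead}\rangle\big)\otimes|E_0\rangle \;\longrightarrow\; \alpha\,|{\rm alive}\rangle|E_a\rangle + \beta\,|{\rm dead}\rangle|E_d\rangle,$$

and tracing out the environment leaves the reduced density matrix

$$\rho_{\rm cat} = |\alpha|^2\,|{\rm alive}\rangle\langle{\rm alive}| + |\beta|^2\,|{\rm dead}\rangle\langle{\rm dead}| + \Big(\alpha\beta^*\,\langle E_d|E_a\rangle\,|{\rm alive}\rangle\langle{\rm dead}| + {\rm h.c.}\Big).$$

Since environment states that correlate with macroscopically distinct cat states rapidly become nearly orthogonal, $\langle E_d|E_a\rangle \to 0$, the interference terms are suppressed and only the classical probabilities $|\alpha|^2$ and $|\beta|^2$ survive.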

Three things are important to notice about this environmentally induced decoherence. First, the effect happens extremely quickly for macroscopic objects, even for the most feeble interactions with the environment. This is why we never see cats that are both dead and alive, and also why building a functioning quantum computer is so damned hard. Second, while decoherence provides a reason we don’t see quantum superpositions, it doesn’t solve the measurement problem, in the sense that it just results in a probability distribution of possible outcomes. It does not result in any one particular outcome. Third, nothing of this requires an actually conscious observer; that’s an entirely superfluous complication of a quite well understood process.

Back to the new paper then. The authors do not deal with environmentally induced decoherence but with an internal decoherence. There is no environment, there is only a linear gravitational potential; it’s a static external field that doesn’t carry any degrees of freedom. What they show is that if you trace out the particles’ degrees of freedom relative to the com, then the com decoheres. The com motion, essentially, becomes classical. It can no longer be in a superposition once decohered. They calculate the time it takes for this to happen, which depends on the number of particles in the system and its extension.

Why is this effect relevant? Well, if you are trying to measure interference it is relevant, because interference relies on the center of mass moving on two different paths – one going through the left slit, the other through the right one. So the decoherence of the center of mass puts a limit on what you can measure in such interference experiments. Alas, the effect is exceedingly tiny, smaller even than the decoherence induced by the cosmic microwave background. In the paper they estimate the time it takes for 10^23 particles to decohere is about 10^-3 seconds. But the number of particles in composite systems that can presently be made to interfere is more like 10^2 or maybe 10^3. For these systems, the decoherence time is roughly 10^7 seconds – that’s about a year. If that were the only decoherence effect for quantum systems, experimentalists would be happy!

Besides this, the center of mass isn’t the only quantum property of a system, because there are many ways you can bring a system into superpositions that don’t affect the com at all. Any rotation around the com, for example, would do. In fact there are many more degrees of freedom in the system that remain quantum than there are ones that decohere by the effect discussed in the paper. The system itself doesn’t decohere at all, it’s really just this particular degree of freedom that does. The Nature News feature states that
“But even if physicists could completely isolate a large object in a quantum superposition, according to researchers at the University of Vienna, it would still collapse into one state — on Earth's surface, at least.”
This is just wrong. The object could still have many different states, as long as they share the same center of mass variable. A pure state left in isolation will remain in a pure state.

I think the argument in the paper is basically correct, though I am somewhat confused about the assumption that the thermal distribution doesn’t change if the system is pushed into a gravitational field. One would expect that in this case the temperature also depends on the gradient.

So in summary, it is a nice paper that points out an effect of macroscopic quantum systems in gravitational fields that had not previously been studied. This may become relevant for interferometry of large composite objects at some point. But it is an exceedingly weak effect, and I for sure am very skeptical that it can be measured any time soon. This effect doesn’t teach us anything about Schrödinger’s cat or the measurement problem that we didn’t know already, and it for sure has nothing to do with quantum gravity.

Science journalists work in funny ways. Even though I am quoted in the New Scientist article, the journalist didn’t bother sending me a link. Instead I got the link from Igor Pikovski, one of the authors of the paper, who wrote to me to apologize for the garble that he was quoted with. He would like to pass on the following clarification:
“To clarify a few quotes used in the article: The effect we describe is not related to quantum gravity in any way, but it is an effect where both, quantum theory and gravitational time dilation, are relevant. It is thus an effect based on the interplay between the two. But it follows from physics as we know it.

In the context of decoherence, the 'observer' are just other degrees of freedom to which the system becomes correlated, but has of course nothing to do with any conscious being. In the scenario that we consider, the center of mass becomes correlated with all the internal constituents. This takes place due to time dilation, which correlates any dynamics to the position in the gravitational field and results in decoherence of the center of mass of the composite system.

For current experiments this effect is very weak. Once superposition experiments can be done with very large and complex systems, this effect may become more relevant. In the end, the simple prediction is that it only depends on how much proper time difference is acquired by the interfering amplitudes of the system. If it's exactly zero, no decoherence takes place, as for example in a perfectly horizontal setup or in space (neglecting special relativistic time dilation). The latter was used as an example in the article. But of course there are other means to make sure the proper time difference is minimized. How hard or easy that will be depends on the experimental techniques. Maybe an easier route to experimentally probe this effect is to probe the underlying Hamiltonian. This could be done by placing clocks in superposition, which we discussed in a paper in 2011. The important point is that these predictions follow from physics as we know, without any modification to quantum theory or relativity. It is thus 'regular' decoherence that follows from gravitational time dilation.”

Tuesday, June 16, 2015

The plight of the postdocs: Academia and mental health

This is the story of a friend of a friend, a man by the name of Francis who took his life at age 34. Francis had been struggling with manic depression through most of his years as a postdoc in theoretical physics.

It is not a secret that short-term contracts and frequent moves are the norm in this area of research, but rarely do we spell out the toll it takes on our mental health. In fact, most of my tenured colleagues who profit from cheap and replaceable postdocs praise the virtue of the nomadic lifestyle which, so we are told, is supposed to broaden our horizon. But the truth is that moving is a necessary, though not sufficient, condition to build your network. It isn’t about broadening your horizon, it’s about making the contacts for which you are later being bought in. It’s not optional, it’s a misery you are expected to pretend to enjoy.

I didn’t know Francis personally, and I would never have heard of him if it wasn’t for the acknowledgements in Oliver Roston’s recent paper:

“This paper is dedicated to the memory of my friend, Francis Dolan, who died, tragically, in 2011. It is gratifying that I have been able to honour him with work which substantially overlaps with his research interests and also that some of the inspiration came from a long dialogue with his mentor and collaborator, Hugh Osborn. In addition, I am indebted to Hugh for numerous perceptive comments on various drafts of the manuscript and for bringing to my attention gaps in my knowledge and holes in my logic. Following the appearance of the first version on the arXiv, I would like to thank Yu Nakayama for insightful correspondence.

I am firmly of the conviction that the psychological brutality of the post-doctoral system played a strong underlying role in Francis’ death. I would like to take this opportunity, should anyone be listening, to urge those within academia in roles of leadership to do far more to protect members of the community suffering from mental health problems, particularly during the most vulnerable stages of their careers.”
As a postdoc, Francis lived separated from his partner and had trouble integrating into a new group. Due to difficulties with the health insurance after an international move, he couldn’t continue his therapy. And even though he was highly gifted, he must have known that no matter how hard he worked, a secure position in the area of research he loved was a matter of luck.

I found myself in a very similar situation after I moved to the US for my first postdoc. I didn’t fully realize just how good the German health insurance system is until I suddenly was on a scholarship without any insurance at all. When I read the fine print, it became pretty clear that I wouldn’t be able to afford insurance that covered psychotherapy or medical treatment for mental disorders, certainly not if I disclosed a history of chronic depression and various cycles of previous therapy.

With my move, I had left behind literally everybody I knew, including my boyfriend who I had intended to marry. For several months, the only piece of furniture in my apartment was a mattress because thinking any further was too much. I lost 30 pounds in six months, and sometimes went weeks without talking to a human being, other than myself.

The main reason I’m still here is that I’m by nature a loner. When I wasn’t working, I was hiking in the canyons, and that was pretty much all I did for the better part of the first year. Then, when I had just found some sort of equilibrium, I had to move again to take on another position. And then another. And another. It still seems a miracle that somewhere along the line I managed to not only marry the boyfriend I had left behind, but to also produce two wonderful children.

Yes, I was lucky. But Francis wasn’t. And just statistically some of you are in that dark place right now. If so, then you, as I, have heard them talk about people who “managed to get diagnosed” as if depression was a theater performance in which successful actors win a certificate to henceforth stay in bed. You, as I, know damned well that the last thing you want is that anybody who you may have to ask for a letter sees anything but the “hard working” and “very promising” researcher who is “recommended without hesitation.” There isn’t much advice I can give, except that you don’t forget it’s in the nature of the disease to underestimate one’s chances of recovery, and that mental health is worth more than the next paper. Please ask for help if you need it.

Like Oliver, I believe that the conditions under which postdoctoral researchers must presently sell their skills are not conducive to mental health. Postdocs see friends the same age in other professions having families, working independently, getting permanent contracts, pension plans, and houses with tricycles in the yard. Postdoctoral research collects some of the most intelligent and creative people on the planet, but in the present circumstances many are unable to follow their own interests, and get little appreciation for their work, if they get feedback at all. There are lots of reasons why being a postdoc sucks, and most of them we can do little about, like those supervisors who’d rather die than say you did a good job, just once. But what we can do is improve employment conditions and lower the pressure to constantly move.

Even in the richest countries on the planet, like Germany and Sweden, it is very common to park postdocs on scholarships without benefits. These scholarships are tax-free and come, for the employer, at low cost. Since the tax exemption is regulated by law, the scholarships can typically last only one or two years. It’s not that one couldn’t hire postdocs on longer, regular contracts with social and health benefits, it’s just that in current thinking quantity counts more than quality: More postdocs produce more papers, which looks better in the statistics. That’s practiced, among many other places, at my own workplace.

There are some fields of research which lend themselves to short projects, and in these fields one- or two-year gigs work just fine. In other fields that isn’t so. What you get from people on short-term contracts is short-term thinking. It isn’t only that this situation is stressful for postdocs, it isn’t good for science either. You might be saving money with these scholarships, but there is always a price to pay.

We will probably never know exactly what Francis went through. But for me just the possibility that the isolation and financial insecurity, which are all too often part of postdoc life, may have contributed to his suffering is sufficient reason to draw attention to this.

The last time I met Francis’ friend Oliver, he was a postdoc too. He now has two children, a beautiful garden, and has left academia for a saner profession. Oliver sends the following message to our readers:
“I think maybe the best thing I can think of is advising never to be ashamed of depression and to make sure you keep talking to your friends and that you get medical help. As for academia, one thing I have discovered is that it is possible to do research as a hobby. It isn't always easy to find the time (and motivation!) but leaving academia needn't be the end of one's research career. So for people wondering whether academia will ultimately take too high a toll on their (mental) health, the decision to leave academia needn't necessarily equate with the decision to stop doing research; it's just that a different balance in one's life has to be found!”

[If you speak German or trust Google translate, the FAZ blogs also wrote about this.]

Friday, June 12, 2015

Where are we on the road to quantum gravity?

Damned if I know! But I got to ask some questions to Lee Smolin which he kindly replied to, and you can read his answers over at Starts with a Bang. If you’re a string theorist you don’t have to read it of course because we already know you’ll hate it.

But I would be acting out of character if not having an answer to the question posed in the title prevented me from going on and distributing opinions, so here we go. On my postdoctoral path through institutions I’ve passed by string theory and loop quantum gravity, and after some closer inspection stayed at a distance from both because I wanted to do physics and not math. I wanted to describe something in the real world and not spend my days proving convergence theorems or doing stability analyses of imaginary things. I wanted to do something meaningful with my life, and I was – still am – deeply disturbed by how detached quantum gravity is from experiment. So detached, in fact, that one has to wonder if it’s science at all.

That’s why I’ve worked for years on quantum gravity phenomenology. The recent developments in string theory to apply the AdS/CFT duality to the description of strongly coupled systems are another way to make this contact with reality – but then, we were talking about quantum gravity.

For me the most interesting theoretical developments in quantum gravity are the ones Lee hasn’t mentioned. There are various emergent gravity scenarios, and though I don’t find any of them too convincing, there might be something to the idea that gravity is a statistical effect. And then there is Achim Kempf’s spectral geometry which, for all I can see, would fit together very nicely with causal sets. But yeah, there are like two people in the world working on this and they’re flying below the pop sci radar. So you’d probably never have heard of them if it wasn’t for my awesome blog, so listen: Have an eye on Achim Kempf and Rafael Sorkin, they’re both brilliant and their work is totally underappreciated.

Personally, I am not so secretly convinced that the actual reason we haven’t yet figured out which theory of quantum gravity describes our universe is that we haven’t understood quantization. The so-called “problem of time”, the past hypothesis, the measurement problem, the cosmological constant – all this signals to me the problem isn’t gravity, the problem is the quantization prescription itself. And what a strange procedure this is, to take a classical theory and then quantize and second quantize it to obtain something more fundamental. How do we know this procedure isn’t scale dependent? How do we know it works the same at the Planck scale as in our labs? We don’t. Unfortunately, this topic rests at the intersection of quantum gravity and quantum foundations and is dismissed by both sides, unless you count my own small contribution. It’s a research area with only one paper!

Having said that, I found Lee’s answers interesting because I understand better now the optimism behind the quote from his 2001 book, that predicted we’d know the theory of quantum gravity by 2015.

I originally studied mathematics, and it just so happened that the first journal club I ever attended, in ’97 or ’98, was held by a professor of mathematical physics on the topic of Ashtekar’s variables. I knew some General Relativity and was just taking a class on quantum field theory, and this fit in nicely. It was somewhat over my head, but basically the same math and not too difficult to follow. And it all seemed to make so much sense! I switched from math to physics, and in fact for several years to come I lived under the impression that gravity had been quantized and it wouldn’t take long until somebody calculated exactly what is inside a black hole and how the big bang works. That, however, never happened. And here we are in 2015, still looking to answer the same questions.

I’ll refrain from making a prediction, because predicting when we’ll know the theory of quantum gravity is more difficult than finding it in the first place ;o)

Tuesday, June 09, 2015

What is cosmological hysteresis?

Last week there were two new papers on the arXiv discussing an effect dubbed “cosmological hysteresis” which, so the authors argue, would make cyclic cosmological models viable alternatives to inflation.

Hysteresis is an effect more commonly known from solid state materials: a material doesn’t return to its original state when an external control parameter does. The textbook example is a ferromagnet’s average magnetization, whose orientation can be changed by applying an external magnetic field. Turn up the magnetic field and it drags the magnetization with it, but turn back the magnetic field and the magnetization lags behind. So for the same value of the magnetic field you can have two different values of magnetization, depending on whether you were increasing or decreasing the field.

Hysteresis in ferromagnets. Image credit: Hyperphysics.

This hysteresis is accompanied by the loss of energy into the material in the form of heat, because one constantly has to work to turn the magnets, and in this cycle entropy increases. In fact I don’t know of any example of hysteresis in which entropy does not increase.
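If you want to play with this, the loop is easy to reproduce in a mean-field toy model (my own illustrative sketch, nothing to do with the papers discussed below): below the critical temperature the self-consistent magnetization m = tanh((Jm+h)/T) is bistable, and sweeping the field h up and then back down traces two different branches:

```python
# Toy mean-field Ising ferromagnet: sweeping the external field h up and
# then back down gives two different magnetization branches (hysteresis).
import numpy as np

J, T = 1.0, 0.5                                # coupling and temperature; T < J
h_up = np.linspace(-1.0, 1.0, 400)
h_sweep = np.concatenate([h_up, h_up[::-1]])   # field up, then back down

m, loop = -1.0, []
for h in h_sweep:
    for _ in range(200):                       # relax to the nearest stable solution
        m = np.tanh((J * m + h) / T)
    loop.append(m)

# At (nearly) the same field h ~ 0, the two sweep directions disagree:
print(loop[200], loop[-200])                   # about -0.96 on the way up, +0.96 on the way down
```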

What does this have to do with cosmology? Well, nothing really, except that it’s an analogy that the authors of the mentioned papers are drawing upon. They argue that a simple type of cyclic cosmological model with a scalar field has a similar type of hysteresis, but one which is not accompanied by entropy increase, and that this serves to make cyclic cosmology more appealing.

Cyclic cosmological models have been around since the early days of General Relativity. In such a model, each phase of expansion of the universe ends in a turnaround and subsequent contraction, followed by a bounce and a new phase of expansion. These models are periodic, but note that this doesn’t necessarily mean they are time-reversal invariant. (A sine curve is periodic and time-reversal invariant around its maxima and minima. A sawtooth is periodic but not invariant under time-reversal.)

In any case, that the behavior of a system isn’t time-reversal invariant doesn’t mean its time evolution cannot be inverted. It just means it isn’t symmetric under this inversion. To the best of our present knowledge the time dependence of all existing systems can be inverted – theoretically. Practically this is normally not possible, because such an inversion would require an extremely precise choice of initial conditions. It is easy enough to mix flour, sugar, and eggs to make a dough, but you could mix until we run out of oil (and Roberts) and would never see an egg separate from the sugar again.

Statistical mechanics quantifies the improbability of reversing a time dependence by the increase of entropy. A system is likely to develop into a state of higher entropy, but, except for fluctuations that are normally tiny, entropy doesn’t decrease, because this is exceedingly unlikely to happen. That’s the second law of thermodynamics.

This second law of thermodynamics is also the main problem with cyclic cosmologies. Since entropy increases throughout each cycle, the next cycle cannot start from the same initial conditions. Entropy gradually builds up, and this is generally a bad thing if you want conditions in which life can develop, because for that you need to maintain some type of order. The major obstacle to making convincing cyclic models is therefore to find a way to indeed reproduce the initial conditions. I don’t really know of a good solution to this. Maybe the most appealing idea is that the next cycle isn’t actually initiated by the whole universe but only by a small part of it, leading to “baby universe” scenarios. I toyed for a while with the idea of coupling two universes that periodically push entropy back and forth, but this ended up in my dead drafts drawer, and ever since I’ve disliked cyclic cosmologies.

In the mentioned papers the authors observe that a cosmology coupled to a scalar field has two different attractors (solutions towards which the field develops), depending on whether the universe is expanding or contracting. In the expanding phase, a scalar field in a potential gets decelerated and slows down, which makes its behavior stable under perturbations because these get damped. In the contracting phase, the field gets accelerated instead, continues to grow, and becomes very sensitive to the smallest perturbations because they get blown up. The time dependence of this system is still reversible in theory, but not in practice, for the same reason that you can’t unmix your dough. Since the unstable period is very sensitive to the smallest mistakes, you will not be able to reverse it perfectly.
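The mechanism can be read off the standard equation of motion for a homogeneous scalar field in an expanding or contracting universe (written here in common cosmology notation):

$$\ddot\phi + 3H\dot\phi + V'(\phi) = 0,$$

where $H$ is the Hubble parameter. During expansion $H > 0$ and the term $3H\dot\phi$ acts like friction, damping perturbations onto the attractor; during contraction $H < 0$ and the very same term acts like anti-friction, blowing perturbations up.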

Figure 2 from arXiv:1506.01247. Φ is the scalar field, a dot denotes the time derivative. During expansion of the universe the field evolves along the arrows on the curves in the left figure. Different curves correspond to different initial conditions, but they all converge together. During contraction the field evolves along the curves as shown in the right figure, where the curves diverge with an increasing value of the field.


For the cyclic model this means that, basically, noise from small fluctuations builds up through each cycle. After the turnaround, the field will not exactly retrace its path but will diverge from the time-reversed one. That is why they refer to it as hysteresis.

Figure 1 from arXiv:1506.02260. a is the scale factor of the universe – a larger a means a larger universe – and w encodes the equation of state of the scalar field. In this scenario, the scalar field doesn’t retrace the path it came along after the turnaround.


It also has the effect that the next cycle starts from a different initial condition (provided there is some mechanism that allows the universe to bounce, which necessitates some quantum gravity theory). In the studies in the paper, the noise is mimicked in a computer simulation by some small random number that is added to the field. More realistically you might think of it as quantum fluctuations.

Now, this all sounds plausible to me. There are two things though I don’t understand about this.

First, I don’t think it’s justified to say that in this case entropy doesn’t increase. The problem of having to finetune initial conditions to reverse the process demonstrates instead that entropy does increase – this is essentially the very definition of entropy increase! Second, and more importantly, I have no idea why that would make cyclic cosmological models more interesting, because the papers are demonstrating exactly that it’s not possible to make these models periodic, and that one doesn’t return anywhere close to the initial state.

In summary, cosmological hysteresis seems to exist under quite general circumstances, so there you have another cool word for your next dinner party that will make you sound very sciency. However, I don’t see how that effect makes cyclic cosmologies more appealing. What I learned from the papers though is that this very simple scalar field model already captures the increase in entropy through the cycles, which had not previously been clear to me. In fact this model is so instructive that maybe I should open that drawer with the dead drafts again...

Thursday, June 04, 2015

Social Media for Scientists

I recently gave a seminar on the use of social media for scientists at an internal event at Stockholm University. It was an opportunity to collect my thoughts, and also to summarize what shortcomings the presently available platforms have. It would be odd if I didn’t share this with you, wouldn’t it?

There are many uses of social media, but three of them are particularly important for science: Networking, Communication, and Outreach.

Networking can best be described as the making and maintaining of contacts and the exchange of information. It blends into science communication, which is more generally about discussing your own or others’ research, with your community or with the public. And then there is public outreach, which has a broader aim, because you may also want to draw attention to your institution or yourself, or to generally get people engaged in science.


Networking is really unavoidable if you want to work in science today – and you are almost certainly doing it already. Communication is essential for research, and so I think using social media to this end is part of being a good scientist. And I strongly encourage you to give public outreach a try because of its many benefits. I don’t think every scientist must engage in public outreach; first and foremost scientists should do science. But it can be very rewarding and helpful to your science too, so if you have both the time and the interest, you should definitely consider it.

A lot of scientists I know shy away from using social media for no good reason and seem to believe Twitter and Facebook are somehow not intellectual enough. Or maybe they mistrust their own abilities to withstand the temptation of cat videos. To me Twitter, Facebook, Blogger and, to a lesser extent, Google+ and ResearchGate are simply tools that help me stay up to date, keep in touch with colleagues, discuss science, get feedback and advice, and share my own research.

There are many other reasons to use social media, but they are all driven by the underlying changes in the communities: There are more people in science today than ever before, collaborations are becoming more international, and more and more papers are being published. Social media is a good way to manage this. If you’re not using social media, you are putting yourself at a disadvantage, it’s as simple as that.

I have put the slides of my talk online here, where I have some remarks on the social media platforms presently most widely used by scientists: Twitter, Facebook, ResearchGate, LinkedIn and Google+. In physics in particular there are also the PhysicsForums and the Physics Stack Exchange, which are well frequented and can be really useful to ask questions, and give or get answers. These services differ somewhat in their aim and use, so if you are new to this you might want to check them out and see what suits you best.

The existing platforms leave me wanting for the following reasons:

  • None of the existing social media sites covers the spectrum from professional to personal contacts.

    Presently we either have pages like LinkedIn and ResearchGate that focus entirely on job experience and skills and, in the case of ResearchGate, publications. Or we have sites like Facebook and Google+ where you don’t have this at all. But I know most of my colleagues personally, and I am also interested to hear what is going on in their lives beyond the publication record or changes of affiliation. For me, like for many scientists I know, work life and private life blur together. Maybe it’s the shared experience of losing friends during the postdoc time that creates these ties; be that as it may, it’s a reality of research.

    I mostly use Facebook, because there we also talk about the human side of science: the frustration with peer review, the nuisance of writing proposals, the inevitable rejections, the difficulties of balancing work with family, travel stress, conference experiences, and so on. To me this is part of life as a scientist. Every one of us goes through difficult times every now and then, and using social media is a way to both get and give support. So even if I were using a site like ResearchGate, I would still use other sites in addition to it.
  • None of the existing sites integrates a useful archiving function.

    This is something I really don’t understand: Why isn’t there a way to tag posts on any of these platforms with keywords, or move them into folders for your own reference? On Facebook you can now at least do a keyword search on your timeline, but it works badly. On Twitter too you can search your posts, though you have to use a third-party service for that. Still, I, and others I have talked to, often get frustrated at not being able to find a particular post or reference or comment.

    There are of course apps, like Evernote for example, that allow you to archive basically anything you want in categories of your choice with keywords, and to a lesser extent you can do this on Feedly too. But then if you archive a reference, you will not have the discussion about it in the same place.
  • The professional sites are too public.

    Michael Nielsen in his book “Reinventing Discovery” has a charming analogy in which he describes a scientist with an unfinished idea as someone owning only one shoe, looking for a match without wanting to show the shoe to anybody. Nielsen describes how awkward and hesitant scientists can be before they start talking about their lonely shoes, and I find much truth in this analogy. It’s all well and fine if you have a question and ask it at the Stack Exchange or ResearchGate or facebook or wherever. But that really isn’t how it works if you are looking for a collaborator.

    To begin with you might not know exactly what the question is, or what you are looking for isn’t somebody explaining how to do a calculation but somebody interested enough to actually do it in exchange for being coauthor. There is also the prevalent academic paranoia of getting scooped. Especially in fields where competition is high, people don’t normally go around and publicly distribute their half-done research projects. And then, maybe most importantly, both the person asking and the person answering might have some misunderstandings and they might be afraid of their mistakes being publicly documented. Michael in his book lays out a vision for the matching of shoes, and I think what is really important in this is to allow scientists to find others with similar interests and then give them a private space to discuss off the record.
  • None of the existing services has an integration of bibliometric or scientometric data.

    There is plenty of data about coauthor networks within communities and also their evolution over time, mutually cited references, and various ways to visualize research topics and their relations. I think it is relevant for scientists to know how many other people are working in their area, how it connects to other fields, who is working on making these connections, and how the field develops. Research has shown that many breakthroughs in science originate at the intersection of fields that weren’t previously known to be related, so these maps are interesting from a purely scientific perspective already. But they are also of personal use, because they give researchers an idea of how their own research fits into the larger picture.

    Here is for example an interesting paper about pivot points in string theory. I know, the visualization isn’t all that great, but note that the paper is more than a decade old!
I could go on about how I hope that webinars will become better integrated and that conferences will come to have a better online presence, since I find it extremely annoying and cumbersome that every institution is using its own registration system, but I can see that, as far as the software solutions are concerned, these issues are difficult for any one service to address.

What social media do you use to discuss science and how has your experience been with that?