Wednesday, June 29, 2011

This and That

Some random things that caught my attention recently:

Monday, June 27, 2011

Interna

So we're back in Germany. For the next few months, I'm on parental leave again and Stefan works 9 to 5. Lara and Gloria are now almost 6 months old. They can both roll over from back to belly, though not the other way round, and they've discovered their feet, which make good toys that don't fall out of reach. They can grab and hold things, pass them from one hand to the other, and bang them not only into their own but also into other people's faces. They meanwhile eat quite well from a spoon, though they try to grab it, which inevitably makes feeding a mess.

Lara entertains us with a large variety of funny sounds, ranging from moo-moo to uee-wee to fffff. The latter is particularly amusing when executed with a mouth full of mashed carrots. Gloria too finds distraction in her sister, who is 5 minutes older, and often turns to look at her or rolls in her direction. If Lara burps, Gloria laughs. Lara's hair finally seems to be starting to grow, and it turns out to be lighter than it was at birth. Her eye color, on the other hand, is turning more brownish by the day. Gloria is still blue-eyed and has a hint of blond hair.



Yes, my life has become very pink.

The girls now sleep reasonably well at night, but are more demanding during the day. Lara in particular manages to move around without actually being able to crawl and then gets stuck in all sorts of impossible positions. Gloria apparently loves to chew on cables, and it's good she doesn't have teeth yet. In the coming weeks, we'll have to childproof the apartment.

I have, to my great delight, meanwhile received parental benefits from the Swedish Försäkringskassan, at least for a couple of months, after I managed to convincingly explain that I'm indeed still insured with them. The problem seems to have been caused by some EU agreement that assigns me to a German health insurance during my stay here. On the Swedish side, however, health and social insurance are both in the domain of the same institution, so they seem to have concluded I'm back in Germany for good, never mind that I'm paying taxes in Sweden. Now they have some difficulty figuring out how many days I'm eligible for, since Stefan doesn't live in Sweden. The Germans, on the other hand, have so far refused to pay a single cent of Stefan's benefits since they don't know what the Swedes will pay for me. The bottom line is we're still sitting on piles of paperwork and money is short. We've also learned of several people who've had similar difficulties, which is both comforting and frustrating.

Our Saab's oil leak caused us more headache than anticipated. Here in Germany we were told the broken part, some rusty hose, would have to be shipped from Sweden. Since we were on the way to Sweden anyway, we contacted a repair shop there after arrival, just to be told that Saab has only one warehouse for spare parts left, in Nyköping, and the part we need is out of stock. They could put in an order for four-hundred-something Euro, and it might arrive anywhere between next month and never. With the car making more insulted noises by the day, I had the great idea to Google for 'Saab spare parts' in Swedish. Two days later I picked the part up from the post office; it came to about 25 Euro. To my amazement, it was indeed the right part and it's being replaced right now. Lesson learned: If you need a spare part for your car, buy it online yourself and bring it to your dealer.

The weather here in Germany is brilliant (36 degrees, it keeps getting hotter, it's summer!), and the women's soccer world cup has just begun.

Wednesday, June 22, 2011

No, I won't agree to disagree

In a recent NYT article, I learned about the "argumentative theory of reasoning," suggested by Dan Sperber, a French social and cognitive scientist, who is director of the International Cognition and Culture Institute. The essence of his theory seems to be that the evolutionary purpose of argumentation is to win an argument. That, apparently, is a groundbreaking hypothesis as his colleagues mostly argue that the purpose of reasoning is to find the truth, leaving them puzzled why the human brain works so inefficiently to that end. Sperber's postdoc Hugo Mercier has a website that lists the predictions of this theory, most of which are actually postdictions.

I think they've forgotten to disentangle argumentation by subject. There are arguably arguments where, for the sake of natural selection, you're better off finding out the truth. You can convince me all you want that drinking distilled water will cleanse your soul; you're not going to reproduce 6 feet under. But if the argument is about getting your way (what's for dinner?), then you might indeed be better off packing on arguments in your favor and leaving out those that contradict you. The problem is of course that it's difficult to switch from one mode of argumentation to the other. That's why it's beneficial if scientists have some formal training in which they learn, if not the names of well-known cognitive biases, then at least procedures that have proven efficient in avoiding pitfalls of human cognition, cognition that has evolved for purposes other than, say, finding evidence for dark matter.

In any case, this reminded me of a little book I once saw in a bargain bin, "50 ways to stall a discussion" ("50 Arten, sich quer zu stellen" by Frans Krips; you can download it here). If you've ever sat in the 5th installment of yet another seemingly endless committee meeting, consider that everybody else has read the book and taken the advice very seriously. Here's a sample from the 50 ways:

  1. This was not sufficiently discussed
  2. We don't have enough information
  3. We should first find out how the matter has been dealt with elsewhere
  4. This is much too fast
  5. Deficient use of language
  6. Inadequate standard
  7. We first have to discuss some other problem
  8. There are other problems of higher societal relevance
  9. One just can't do it this way
  10. You can't expect that from the people
  11. We've discarded so many plans, who cares if we discard yet another
  12. We tried this already in 1976
  13. We haven't yet assessed the impact of our last decision
  14. Who exactly is responsible?
  15. We should contact an expert
  16. We have to set priorities straight
  17. We need a committee on this aspect

And then there is of course the Web 2.0 deadlock: we have to agree to disagree. It too fails to differentiate between seeking truth and seeking compromise. We can agree to disagree on all matters of taste: Pizza or sushi? Pink or blue? NIN or RHCP? But when it comes to science, disagreement means one of us is wrong. Finding the right answer is what science is all about. So it's pizza tonight, dammit.

[Img Src: Very Demotivational]

Monday, June 20, 2011

Exploring Self-perception: Zakaryah Abdulkarim

[Last month, I volunteered for a study at the Department of Neuroscience at Karolinska Institute, if only out of curiosity to see the place. In the end my participation didn't work out, but I got to meet Zakaryah, a student at the Institute, who kindly agreed to tell us a little about his work there. I certainly learned some new vocabulary. Enjoy!]

I read that you are looking for volunteers for a project. Can you tell us what this is all about?

Yes. The project that I am currently involved in is one in the field of cognitive neuroscience. It is part of the research conducted in the lab of Dr. Henrik Ehrsson at the Department of Neuroscience, Karolinska Institute. In this project we use an established perceptual illusion called ‘the body swap illusion’ (Petkova & Ehrsson, 2008), in which healthy participants experience the body of a shop mannequin as their own, in order to understand the behavioral and neural mechanisms underlying the self-attribution of a whole body to oneself. In particular, we are interested in understanding the neural mechanisms underlying the unitary experience of owning an entire body rather than a set of fragmented body parts. My project will contribute important behavioral and physiological data in support of a neuroimaging study conducted by my direct supervisor, PhD candidate Valeria Petkova.

In my experiment the participants wear head-mounted virtual reality displays, through which they see the mannequin’s body. They then receive simultaneous visual and tactile stimulation of various body parts and fill out a questionnaire regarding their experience. Alternatively, they might see a knife approaching the mannequin, in which case the sweating of their palms, the so-called galvanic skin response, a measure of the sympathetic nervous system's response to threatening stimuli, is recorded via electrodes attached to the participant's fingers. Since the knife approaches the mannequin and not the participant's own body, the sweating of the palms is used as an objective measure of the body ownership illusion.

What is that sort of research good for?

Understanding the perceptual and neural mechanisms involved in how we perceive our own body might be useful for the development of neuroprosthetics. Further, understanding the mechanisms underlying the healthy perception of body ownership can help develop diagnostic and therapeutic tools for the treatment of pathological disturbances of bodily self-perception in different groups of patients (e.g. stroke, paraplegia, schizophrenia, anorexia, etc.). Finally, the results of this type of research are beneficial for some industrial applications, for example in the fields of virtual reality, telerobotics, or telepresence.

What future studies would you like to do?

I would probably want to investigate more exactly which areas of the brain are involved in producing this feeling of body ownership and various ways to manipulate it. In particular, it would be interesting to see whether one could affect this illusion pharmacologically, and how the illusion correlates with features of the subjects, because interestingly, not everyone experiences this illusion.

What are the presently most pressing open questions in the field?

Here are some examples:

- What are the exact characteristics (e.g. type, receptive field, etc.) of the neuronal populations involved in the neural computation of body ownership?

- What is the exact role of each node in the neural network identified as being associated with the sense of owning a body? In other words, what is the specific role of the ventral premotor cortex, the intraparietal cortex, the putamen, and the cerebellum?

- What is the interplay between body ownership and the sense of agency in the mechanism of self-awareness?

Do you see any relevance for physics or a role for physicists in that kind of research? If so, what?

Of course! Aside from all the technical equipment that is needed to perform these studies, e.g. MRI-scanners, galvanic skin electrodes etc., this research brings up a lot of fundamental questions about how we perceive ourselves and our surroundings, how we make decisions, how effects on different scales interplay, and I believe physics can contribute a lot to those discussions.

For somebody interested in this research, what further reading can you recommend?

One could read some scientific articles about it; however, those can be hard to understand if you do not have a background in medicine or neuroscience. I would recommend that those who are interested read book chapters about this kind of research, which can be found in most new books on cognitive neuroscience, for example this.

If I'm in Stockholm and interested in volunteering for your or similar research, how do I get in contact?

If you are in Stockholm and interested in participating, the best thing to do would probably be to send me an email; my email address is zakaryah.abdulkarim[at]stud.ki.se. The requirements differ depending on the study, but usually there is some experiment in our lab that one can participate in.


Zakaryah is a medical student at Karolinska Institute. In his free time, when he isn’t at Alba Nova taking some evening course that is, he likes exercising, hanging out with friends, and enjoying what the vegetarian cuisine has to offer.

Wednesday, June 15, 2011

Nonlocal correlations between the Canary Islands

Bell's inequality is the itch on the back of all believers in hidden variables. Based on only a few assumptions, it states that some correlations in quantum mechanics cannot be achieved by local realistic hidden variables theories. The correlations in hidden variables theories of that type have to fulfill an inequality, now named after John Bell, violations of which have been observed in experiment; thus hidden variables don't describe reality. But as always, the devil is in the details, and if one doesn't pay attention to the details, loopholes remain. For Bell's inequality, there are actually quite a few of them, and to date no experiment has managed to close them all.

The typical experiment testing Bell's theorem makes use of a pair of photons (or electrons), entangled in polarization (spin). The two particles are sent in different directions and their polarizations are measured along different axes. The correlation among the pairs in repeated measurements is subject to Bell's inequality (or the more general CHSH inequality).
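
Since the CHSH version is what actual experiments test, here's a quick numerical sketch of the statement (my own toy code, not from any of the papers mentioned here): the quantum correlations of a spin singlet push the CHSH combination up to 2√2, while a simple local hidden-variable model can't get above 2.

```python
import numpy as np

# Quantum prediction for a spin singlet: E(a, b) = -cos(a - b).
def E_quantum(a, b):
    return -np.cos(a - b)

# A toy deterministic local hidden-variable model: each pair carries a hidden
# direction lam; outcomes are +/-1 given by the sign of cos(setting - lam),
# perfectly anti-correlated between the two sides.
def E_local(a, b, n=200_000, rng=np.random.default_rng(1)):
    lam = rng.uniform(0.0, 2.0 * np.pi, n)
    A = np.sign(np.cos(a - lam))
    B = -np.sign(np.cos(b - lam))
    return np.mean(A * B)

# CHSH combination: local realism requires |S| <= 2.
def S(E, a, ap, b, bp):
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
print("quantum |S| =", abs(S(E_quantum, a, ap, b, bp)))  # 2*sqrt(2) ~ 2.83
print("local   |S| =", abs(S(E_local, a, ap, b, bp)))    # ~2, never more (up to Monte Carlo noise)
```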

Maybe the most obvious loophole, called the locality loophole, is that information could be locally communicated from one measurement to the other. Since information can travel at most at the speed of light, this is the case if, for example, the second measurement is made with a delay relative to the first, such that the second measurement is in the forward lightcone of the first. Another loophole is that the detector settings may be correlated with the prepared state, without any violation of locality, if they are in the forward lightcone of the preparation. Since in this case the experimenter cannot actually set the detector as he wishes, it's called the freedom-of-choice loophole.

A case where both loopholes are present is depicted in the space-time diagram below. The event marked "E" is the emission of the photons. The red lines are the worldlines of the entangled electrons or photons (in an optical fiber). "A" and "B" are the two measurements and "a" and "b" are the events at which the detector settings are chosen. Also shown in the diagram are the forward lightcones of the events "E" and "A".


So that's how you don't want to design your experiment if you're aiming to disprove locally realistic hidden variables. Instead, what you want is an experiment as in the second figure below, where not only the measurement events "A" and "B" are spacelike to each other (ie they are not in each other's lightcone), but also the events "a" and "b" at which the detector settings are chosen are spacelike to each other and to the emission of the photons.

Let us also recall that the lightcone is invariant under Lorentz transformations, and thus whether two events are spacelike, timelike or lightlike to each other does not depend on the reference frame. If you manage to do it in one frame, it's good for all frames.

Looks simple enough in a diagram, less simple to actually do: Entanglement is a fragile state, and the speed of light, which is the maximum speed at which (hidden) information might travel, is really, really fast. It helps if you let the entangled particles travel over long distances before you make the measurement, but then you have to be very careful to get the timing right.

And that's exactly what a group of experimentalists around Anton Zeilinger did and published in November in their paper "Violation of local realism with freedom of choice" (arXiv version here). They closed for the first time both of the two above-mentioned loopholes by choosing a setup that disabled communication between the measurement events as well as between the preparation of the photons and the choice of detector settings. The test was performed between two Canary Islands, La Palma and Tenerife.


[Image Source: Lonely Planet]

The polarization-entangled pairs of photons were produced on La Palma. One photon was guided to a transmitter telescope and sent over a distance of 144 km to Tenerife, where it was received by another telescope. The other photon made 6 km of circles in a coiled optical fiber on La Palma. The detector settings on La Palma were chosen by a quantum random number generator 1.2 km away from the source, and on Tenerife by another, similar but independent random number generator. The measurements violated Bell's inequality by more than 16 standard deviations.
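
Just to get a feeling for the numbers (my own back-of-the-envelope estimate, not from the paper, assuming a fiber refractive index of about 1.5), here are the light travel times that set the timing windows:

```python
# Rough light travel times for the La Palma - Tenerife setup described above.
c = 3.0e8  # speed of light in m/s

free_space_link = 144e3 / c          # photon to Tenerife: ~0.48 ms
fiber_delay     = 6e3 / (c / 1.5)    # 6 km of coiled fiber on La Palma: ~30 microseconds
rng_distance    = 1.2e3 / c          # random number generator 1.2 km away: ~4 microseconds

print(f"free-space link to Tenerife: {free_space_link * 1e6:.0f} microseconds")
print(f"delay in the coiled fiber:   {fiber_delay * 1e6:.0f} microseconds")
print(f"light time to the RNG:       {rng_distance * 1e6:.1f} microseconds")
```

The choice of setting and the measurements have to fit into these microsecond-scale windows for the relevant events to remain spacelike separated.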

What a beautiful experiment!

But if you're a believer in local realistic hidden variable theories, let me scratch your itch: you can't close the freedom-of-choice loophole for superdeterministic hidden variables theories with this method, because there's no true randomness in that case. It doesn't matter where you locate your "random" generator; its outcome was determined arbitrarily long ago in the backward lightcone of the emission.

Monday, June 13, 2011

New Painting

Okay, it's not really new. I actually started it last fall but only finished it this week. It's called "Herbstschatten" (Shadow of Fall). Click to enlarge.

Saturday, June 11, 2011

Extra Dimensions at the LHC: Status Update

The Planck scale is the scale at which quantum gravitational effects are expected to become important. An extrapolation of the strength of gravity gives a value of 10^16 TeV, which is far out of reach for collider experiments. In the late 90s, however, it was pointed out by Arkani-Hamed, Dimopoulos and Dvali that this extrapolation does not hold if our spacetime has additional spacelike dimensions with certain properties. If that were the case, the true Planck scale could actually be at a TeV, an idea that is appealing because it does away with the question why the Planck scale is so large, or why gravity is so weak, to begin with. The answer would be: well, it isn't, it only appears so. Our naive extrapolation doesn't hold because space-time isn't four-dimensional. (For more details, read my earlier post.)
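
To get a feeling for what this requires, here's a rough sketch of the usual back-of-the-envelope estimate (my own, not from the papers below), using the relation M_Pl^2 ~ M_D^(n+2) R^n between the apparent Planck mass M_Pl, the true scale M_D, and the size R of n equally large extra dimensions. I'm dropping all numerical factors of order one, which can shift R by an order of magnitude depending on conventions:

```python
# Required size of n extra dimensions for a true gravitational scale M_D ~ 1 TeV,
# from M_Pl^2 ~ M_D^(n+2) * R^n, ignoring all order-one factors.
M_Pl   = 1.22e19    # apparent (4-dimensional) Planck mass in GeV
hbar_c = 1.973e-16  # GeV * m, to convert GeV^-1 into meters

def radius(n, M_D=1000.0):
    R_natural = (M_Pl**2 / M_D**(n + 2))**(1.0 / n)  # in GeV^-1
    return R_natural * hbar_c                         # in meters

for n in (2, 4, 6):
    print(f"n = {n}: R ~ {radius(n):.1e} m")
# roughly: millimeters for n = 2, ~1e-11 m for n = 4, ~1e-14 m for n = 6
```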

These (and other) extra-dimensional models with a lowered Planck scale were very popular at the beginning of the last decade and caused an extraordinarily high paper production, which reflects not only the number of theoretical particle physicists but also their desperation to put their skills to work. The most thoroughly analysed consequences of such models are the modification of Standard Model cross-sections through virtual graviton exchange and the production of black holes at the LHC. The latter possibility in particular received a lot of attention in the media, due to some folks who accused physicists of planning the end of the world just to increase their citation count. (For more details, read these earlier posts.)

In any case, the LHC is running now, data is coming in and models are being sorted out, so what's the status?

In arXiv:1101.4919, Franceschini et al have summarized the constraints from the LHC's CMS and ATLAS experiments on virtual graviton production. For the calculation of the contributions from virtual gravitons one needs to introduce a cut-off Λ of dimension energy, which, next to the lowered Planck scale, becomes another parameter of the result. The constraints are then shown as contour plots in a two-parameter space, one parameter being the 'true' fundamental Planck scale, here denoted M_D, and the other being the mentioned cut-off, or its ratio to M_D respectively. One would expect the cut-off to be in the range of the lowered Planck scale, though it might be off by a factor of 2π or so, so the ratio should be of order one. The figure below (Fig. 6 from arXiv:1101.4919) shows the bounds for the case of 4 additional spacelike dimensions:

The continuous line is the constraint from CMS data (after 36/pb integrated luminosity. Don't know what that means? Read this), and the dashed line is the constraint from ATLAS. The shaded region is excluded. As you can see, a big part of the parameter space for values in the popular TeV range is by now excluded.

Now what about the black holes? A black hole with a mass a few times the lowered Planck mass would already be well described by Hawking's calculation for particle emission, usually called Hawking radiation. It would have a temperature (or average energy of primary emitted particles) of some hundred GeV. Just statistically, a big fraction of the emitted particles carry color charge and are not directly detected; instead they form color strings that subsequently decay into a shower of hadrons, ie color-neutral particles (pions, protons, etc). This process is called hadronization, and the resulting spray of hadrons is called a jet. Depending on how many jets you get, it's a di-jet, tri-jet or multi-jet event. The black hole's Hawking radiation would typically produce a lot of particles and thus contribute to the multi-jets. One expects some multi-jets already from usual Standard Model processes ("the background"), but the production of black holes should significantly increase their number. The figure below (from this paper by the CMS collaboration) shows an actual multi-jet event at the LHC:


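Incidentally, if you wonder where the "temperature of some hundred GeV" above comes from, here's a crude estimate (my own, dropping the order-one geometric factors of the exact higher-dimensional black hole solution), with the horizon radius r_H ~ (1/M_D)(M_BH/M_D)^(1/(n+1)) and T_H ~ (n+1)/(4π r_H):

```python
import math

# Ballpark Hawking temperature of a higher-dimensional black hole with n extra
# dimensions and a lowered Planck scale M_D, ignoring order-one factors.
def hawking_temperature(M_BH, M_D=1.0, n=4):
    r_H = (1.0 / M_D) * (M_BH / M_D)**(1.0 / (n + 1))  # horizon radius in TeV^-1
    return (n + 1) / (4 * math.pi * r_H)               # temperature in TeV

for M_BH in (3.0, 5.0, 8.0):
    print(f"M_BH = {M_BH} TeV: T_H ~ {1000 * hawking_temperature(M_BH):.0f} GeV")
# comes out at a few hundred GeV for black holes a few times heavier than M_D = 1 TeV
```
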
In the paper arXiv:1012.3375 [hep-ex], the CMS collaboration summarized constraints on the minimum mass of black holes in models with extra dimensions. For this, they analyzed the number of multi-jet events in their data. The figure below (Fig 2 from arXiv:1012.3375) contrasts the predictions of the Standard Model with those of models with black hole production, for events with multiplicity N larger than 3 (that includes jets, but also photons, electrons and muons, which don't hadronize).

On the vertical axis is the number of multi-jet events per bin of 100 GeV, on the horizontal axis the total transverse energy of the event (if you don't know what that means, think of it as just the total energy). The solid blue line is the Standard Model prediction, the shaded area depicts its uncertainty. The various dotted and dashed lines are the predicted numbers of such events for different values of the minimal black hole mass, usually assumed to be in the range of the lowered Planck scale. These lines are created with event generators, ie numerical simulations. From this and similar data, the CMS collaboration is able to conclude that they haven't seen any black holes, and to exclude minimum black hole masses up to 4.5 TeV. CMS has an update on these constraints here, where they've pushed the limit up to 5 TeV, though not with an amazingly high confidence level.

Some comments are in order, though, for the latter analysis. It relies on the production of multi-jets by black holes. This is a reliable prediction only for black holes produced with masses at least a few times above the lowered Planck scale. The reason is that a black hole of Planck mass is a quantum gravitational object and is not correctly described by Hawking's semi-classical calculation. How to correctly describe it, nobody really knows. For the sake of numerics it is typically assumed that a black hole of Planck mass makes a final decay into a few particles. But that's got nothing to do with theory; it is literally just a subroutine in a code that randomly chooses some particles and their momenta such that all conservation laws are fulfilled. (The codes are shareware, look it up if you don't believe it.)

That procedure wouldn't be a problem if it were just a pragmatic measure with no impact on the prediction. Unfortunately, almost all black holes that would be produced at the LHC would be produced in exactly this quantum gravitational regime. The reason is simply that the LHC is a hadron collider, and the energy of the protons is distributed over their constituents (called partons). As a result, the vast majority of the black holes produced would have masses as low as possible, ie close to the new Planck scale.

What that means is that it is actually far from clear what the CMS constraints on excess of multi-jets mean for the production of black holes. A similar argument was recently made by Seong Chan Park in Critical comment on the recent microscopic black hole search at the LHC, arXiv:1104.5129.

Summary: It clearly doesn't look good for models with a lowered Planck scale. While in many cases it is not possible to falsify a model, but only to implausify it, large extra dimensions are becoming less plausible by the day. Nevertheless, one should exert scientific caution and not jump to conclusions. The relevance of the CMS constraints on multi-jets depends in part on assumptions about the black holes' final decay that are not theoretically justified.

Question for the experts: Why do the curves in Fig 2 of the CMS paper seem to have a bump around the minimum black hole mass even though N > N_min?

Sunday, June 05, 2011

Stronger than the universe

Two weeks ago, we had hail here in Stockholm. At the time I was homeward bound on the highway, and that's where I would be staying for half an hour while a rescue crew scraped a motorbike off the middle lane. On the radio ran "Heartbreaker" by Dionne Warwick. It's one of these songs I've heard a million times but never listened to: girl in love, guy who doesn't call, same old story. "Why do you have to be a heartbreaker, when I was bein' what you want me to be?" I probably wouldn't call her either. There's Swedish "nyheter" (news) on the other frequencies, but I already knew the weather was sucking greatly, the highway was clogged, and the rest I wouldn't understand anyway, that being the state of my Swedish. Hail drumming on the car roof, Dionne sang "My love is stronger than the universe," and the physicist in me couldn't avoid asking WTF is that supposed to mean? (It's not a four letter word. No, it isn't.)

Okay, so the universe is supposed to have a strength. What springs to mind is the gravitational force exerted by all the mass in the universe. Since you can't place yourself outside the universe (probably where Dionne's guy sits), the question is: what's the force acting on you while inside, caused by the expansion of the universe? Well, we know that bound systems up to galactic scales don't take part in the expansion, but let's forget that for a moment and pretend the universe would try to rip lovers apart on planetary surfaces. If Dionne's non-caller was as far away from her as he could possibly get on Earth, ie 10,000 km or so, the force comes to about 10^-26 N. Not very impressive. The laws of attraction might get you into trouble, but actually gravity is even weaker than the weak force.
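
In case you want to check that number, here's a sketch of the estimate I have in mind, taking the force needed to hold a mass m at fixed distance d against the accelerated expansion to be roughly m H_0^2 d and dropping cosmological prefactors of order one:

```python
# Back-of-the-envelope force from cosmic expansion on two people on Earth,
# F ~ m * H0^2 * d, with all order-one cosmological factors dropped.
H0 = 2.3e-18   # Hubble rate in 1/s (~70 km/s/Mpc)
m  = 60.0      # kg, roughly one heartbroken person
d  = 1.0e7     # m, about as far apart as two people can get on Earth

print(f"F ~ {m * H0**2 * d:.1e} N")  # of order 1e-27 to 1e-26 N
```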

No, we have to think about this differently. We should be asking: what's the strength of the structure of the universe? So, as everybody knows, the universe is made of strings, and a string has a tension which is something like the square of the Planck mass, give or take some orders of magnitude. Putting all dimensionful units back in, that comes out to be about 10^44 N. We could compare this to the force acting on Dionne on the surface of a neutron star, which is a measly 10^14 N. Yes, clearly, there's string theory on the radio. Though I suspect you'd get pretty much the same answer asking what it takes to break a link in a fundamental spin network.
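
And the other two numbers, equally roughly: the string tension of order the Planck mass squared corresponds, in ordinary units, to the Planck force c^4/G, and for the neutron star I'm assuming 1.4 solar masses, a 10 km radius, and a 60 kg Dionne:

```python
# Planck force versus the gravitational force on a person on a neutron star.
G    = 6.674e-11  # gravitational constant in m^3 / (kg s^2)
c    = 3.0e8      # speed of light in m/s
M_ns = 2.8e30     # kg, ~1.4 solar masses
R_ns = 1.0e4      # m, ~10 km
m    = 60.0       # kg

F_planck       = c**4 / G                # ~1.2e44 N
F_neutron_star = G * M_ns * m / R_ns**2  # ~1e14 N
print(f"Planck force:        {F_planck:.1e} N")
print(f"on the neutron star: {F_neutron_star:.1e} N")
```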

Passing by the accident zone I contemplate the lack of friction and the forces at work. The radio plays Tori Amos, Little Earthquakes. It doesn't take much to rip us into pieces.

Wednesday, June 01, 2011

Four links to Paul Dirac

The other day I was wondering out loud whether somebody had ever checked the average number of coauthor links to the nearest Nobel Prize winner, because sometimes it seems to me like everybody knows everybody in theoretical physics. And it's not even a small community. Well, I don't know if anybody has actually measured the diameter of the physics coauthor network, but I saw this morning that the AMS has a tool to calculate 'collaboration distance,' which is pretty much self-explanatory:


So, let's see how far away I am from Paul Dirac, coauthor-wise...


Not so far actually, thanks to Lee. Dirac's paper on the list above is a Nature article from 1952 on the question "Is there an Aether?" What about Albert Einstein then?

And go:


Five links to Albert Einstein! That's fewer than I would have guessed. With 6 links you can probably connect any two authors.

Unfortunately, the AMS database doesn't seem to contain experimentalists. Neither could I find any description of the algorithm used. It runs amazingly fast, and it makes me a little suspicious that in no query I tried did I get two paths of the same length, though that might have been a coincidence.
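
I'd guess the tool does something like a breadth-first search on the coauthor graph. Here's a toy sketch (with made-up names) of how such a collaboration distance would be computed:

```python
from collections import deque

# A toy coauthor graph: nodes are authors, edges connect people who have
# written a paper together. The names are made up for illustration.
coauthors = {
    "Alice": ["Bob", "Carol"],
    "Bob":   ["Alice", "Dave"],
    "Carol": ["Alice", "Dave"],
    "Dave":  ["Bob", "Carol", "Erin"],
    "Erin":  ["Dave"],
}

def collaboration_distance(graph, start, goal):
    """Breadth-first search: returns the shortest chain of coauthors, or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph.get(path[-1], []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

print(collaboration_distance(coauthors, "Alice", "Erin"))
# ['Alice', 'Bob', 'Dave', 'Erin'] -> collaboration distance 3
```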

So, have fun playing around.