Thursday, February 28, 2013

The simulation hypothesis and other things I don’t believe

Some years ago at SciFoo I sat through a session by Nick Bostrom, director of the Future of Humanity Institute, who elaborated on the risk that we live in a computer simulation and that somebody might pull the plug, thereby deliberately or accidentally erasing all of mankind.

My mind keeps wandering back to Bostrom’s session. You might think that discussing the probability of human extinction by war, disease or accident would be a likely cause of insomnia. The simulation hypothesis in particular is the stuff that dreams and nightmares are made of - a modern religion with an omnipotent programmer. In this light, it is not so surprising that the simulation hypothesis is popular on the internet, though Keanu Reeves clearly had a role in this popularity, which now gives me an excuse to decorate my blog with his photo.

But while I do sometimes get headaches over questions concerning the nature of reality, the simulation hypothesis is not among the things that keep me up at night (neither is Keanu Reeves, thanks for asking). After some soul searching, I realized that I don’t believe in the simulation hypothesis for the same reason I don’t believe in alien abductions. Before science fiction literature and its alien characters became popular, there was no such thing as alien abduction; instead, people commonly thought they were possessed by demons. It is believed today that sleep paralysis is a likely origin of such hallucinations and out-of-body experiences, an interesting topic in its own right, but the point here is that popular culture creates hypotheses, and the present culture is a collective limit on our imagination.

People today ponder the idea that reality is a computer simulation in the same way that post-Newtonian intellectuals thought of the universe as a clockwork. The clockwork universe seems bizarre today, now that we know many things that Newtonian mechanics cannot describe. But then people also used to wear strange wigs, and women stood around in dresses in which they could barely walk, let alone breathe, so what did they know. Chances are, 200 years from now the simulation hypothesis will seem as bizarre as the idea of transferring fat from the butt to the lips, or of taking notes by rubbing graphite on paper.

A more scientific way to phrase this is that the simulation hypothesis creates a coincidence problem, much like the coincidence problem for the cosmological constant. For the cosmological constant the coincidence problem is this: throughout the expansion of the universe, the matter density dilutes while the cosmological constant stays constant. Why do we just happen to live in the period when both have about the same value? For the simulation hypothesis the coincidence problem is this: Why do we just happen to live in the period in which we discover the very means by which the universe is run?
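For the record, the cosmological statement fits in one line of standard textbook cosmology (nothing here is specific to this post): in a universe with scale factor a(t), the matter density dilutes while the vacuum energy density stays put,

```latex
\rho_{\rm matter} \propto a(t)^{-3}, \qquad
\rho_\Lambda = \frac{\Lambda c^2}{8 \pi G} = \mathrm{const},
```

and yet today, after billions of years of expansion, the measured density parameters are of the same order, Ω_Λ ≈ 0.7 and Ω_matter ≈ 0.3. That the two curves cross in just our cosmic epoch is the coincidence.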

To me, it’s too much of a coincidence to be plausible. I will put this down as a corollary of the Principle of Finite Imagination: “Just because humans do not or cannot imagine something doesn’t mean it does not or cannot exist.” Corollary: “If humans put forward a hypothesis based on something they have only just learned to imagine, it is most likely a cultural artifact and not of fundamental relevance.” Though the possibility exists that present-day human imagination marks the pinnacle of scientific insight, the wish to be special vastly amplifies belief in this possibility.

That having been said, another way to approach the question is to ask for scientific evidence of the simulation hypothesis. There has been some work on this, and occasionally it appears on the arxiv, such as this paper from last year, which studied the possibility that The Simulator runs state-of-the-art lattice QCD. I find it peripherally interesting and applaud the authors for applying scientific thought to vagueness (for other attempts at this, check their reference list). Alas, the scenario that Bostrom has in mind is infinitely meaner than theirs. As he explains in this paper, to save on computational power only that part of reality is simulated that is currently observed:
“In order to get a realistic simulation of human experience, much less [than simulating the entire universe] is needed – only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don’t notice any irregularities.”
So you’d never observe any effects of a finite lattice spacing because whenever you look, all symmetries are restored. Wicked. But it also creates other scientific problems.

To begin with, unless you want to populate the simulation by hand, you need a process by which self-awareness is created out of simpler bits. And to prevent the self-aware beings from noticing the simulation’s limits, you then need a monitoring program that identifies when the self-aware parts attempt to make an observation, and exactly which observation. Then you have to provide them with this observation, such that it is the same observation they would have gotten had you run the full simulation. This might work fine in some cases, say, vacuum fluctuations, because nobody really cares what a vacuum fluctuation does when you’re not looking. If you have a complex system, however, reducing the complexity systematically and blowing it back up on demand is difficult if not impossible.

Take a system that’s still fairly simple, like a galaxy. If nobody is pointing a telescope at it, you don’t want to bother with its time evolution. But then how do you make sure that observations at different times are consistent? And then there’s the possibility that, somewhere in a galaxy that humans weren’t observing, intelligent life developed that would one day land on planet Earth. If your simulation by design doesn’t take into account events like this, it’s strangely anthropocentric. It also raises the question of why to bother with 7 billion people to begin with. Wouldn’t an island do, with the rest of us popping in and out of existence to amuse the islanders? This reminds me, I have to book a flight to Iceland.
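To make the bookkeeping problem concrete, here is a toy sketch of such an observation-driven simulation. Everything in it, the class and the hash trick included, is my own invention for illustration, not anything Bostrom specifies:

```python
# Toy sketch of a lazily evaluated universe: regions are only computed
# when somebody observes them, and results are cached so that repeated
# observations stay consistent.
import hashlib

class LazyUniverse:
    def __init__(self, seed: int):
        self.seed = seed
        self.cache = {}  # (region, time) -> state

    def _evolve(self, region: str, time: int) -> int:
        # Stand-in for the actual physics: a deterministic function of
        # the global seed, so unobserved history need not be stored.
        h = hashlib.sha256(f"{self.seed}:{region}:{time}".encode())
        return int.from_bytes(h.digest()[:4], "big")

    def observe(self, region: str, time: int) -> int:
        # The "monitoring program": compute only upon first observation.
        if (region, time) not in self.cache:
            self.cache[(region, time)] = self._evolve(region, time)
        return self.cache[(region, time)]

universe = LazyUniverse(seed=42)
assert universe.observe("andromeda", 1000) == universe.observe("andromeda", 1000)
```

The cache keeps repeated observations of the same region at the same time consistent. Consistency between different times is the hard part: to make the galaxy at time 2000 look like it evolved from whatever was served up at time 1000, you have to actually run the dynamics in between, which is precisely the expense the lazy scheme was supposed to save.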

To avoid these problems, The Simulator would use a much simpler method: deter observations that might test the limits, much like it is difficult to reach the boundary of Dark City. And suddenly it makes sense, doesn’t it? All the recent budget cuts to research funding, even in areas like theoretical physics, possibly the most cost-efficient insight engine, running on little more than graphite rubbed on paper. It’s all to deter us from discovering the boundaries of our simulation. Now if saying hello to the programmer who runs the simulation we live in isn’t an argument in support of basic research, then I don’t know what is. I’ll leave you with this thought and book my flight before I pop out of existence again.

Saturday, February 23, 2013

Book review: "The Theoretical Minimum" by Susskind and Hrabovsky

The Theoretical Minimum: What You Need to Know to Start Doing Physics
By Leonard Susskind, George Hrabovsky
Basic Books (January 29, 2013)

Susskind made his lecture notes into a book and did a great job. His book is explicitly not aimed at students but at everybody with an interest in physics who wants to expand their toolkit and start speaking the language of physicists.

The book primarily covers classical mechanics: momentum and forces, energy and potentials, up to the principle of least action, Hamiltonian mechanics and Poisson brackets. In content it is very similar to the lecture notes that I learned from; it might also remind you of Goldstein's classic book on classical mechanics. However, what's special about Susskind's book is that he introduces along the way all the mathematical concepts that are needed, from vectors and functions to differentiation and integration. The book is thus very self-contained and yet really brief and to the point, which is quite an achievement.

It seems pretty obvious that there will be a sequel to this book that continues this educational effort.

I appreciate this book very much. It would have been enormously useful for me when I was a teenager, because there is a gap in the physics literature between the high school level and the level aimed at university students, a gap this book can bridge. However, if you think this book will bring you up to speed with modern physics, you've got it wrong. It's a long way to quantum field theory and there really are no shortcuts. Susskind's book, and the ones that will probably follow, might however be the shortest route, the path of least action so to speak.

That having been said, I'm not a teenager anymore and frankly don't have much use for the book, which is why I'll give away my copy for free. The book will go to the first person who has a mailing address in Europe and leaves a comment on this blogpost telling us why you want the book and what your interest in physics is.

Update: The book is gone.

Wednesday, February 20, 2013

Thumbs up for the Cambridge University Press Customer Service

Some years ago, I bought a copy of Stephani et al's book "Exact Solutions of Einstein's Field Equations" from Cambridge University Press. It's pretty much an encyclopedia of all that's known about exact solutions of Einstein's field equations. It's the type of book you turn to for advice when you've got a problem, not a textbook you read front to back. So I hope you'll forgive me when I say it took me a few months to notice that the copy I bought was a misprint, with several empty pages towards the middle. These are the more obscure parts of the book, whose physical applications are, at least to me, somewhat unclear, and I thought I would just never need whatever should have been printed on these pages anyway.

Over the years, however, I developed the distinct paranoia that whenever I was looking for something I could not find in Stephani's book, it was certainly printed on the missing pages. Some time last week, frustrated by yet another intractable set of equations of the kind one gets without a good ansatz for the metric, I wrote to the Cambridge University Press customer service, complaining about the misprint, with a photo of the blank pages attached.

Needless to say, several years after purchasing the book I no longer have a receipt. Nevertheless, I got a reply within 24 hours, with an apology for the misprint. Alas, the hardcover version that I have is out of print, they said, and asked if a paperback would be okay. "Sure," I wrote back. They asked for my shipping address, and a week later I had a brand new copy, all for free. Now when I don't find an answer to a problem I'm looking for, I have no empty pages to blame anymore.

Sunday, February 17, 2013

The Future of Peer Review

[Image: this week's cover of The Economist.]
A year ago, I told you what I think is the future of scientific peer review: peer review that is conducted independently of the submission of a manuscript to a journal. You would get a report from an institution offering such a service, possibly an already existing publisher, possibly a new institution created specifically for this purpose. You could then use this report when submitting your paper to a journal, but you could also use it with open-access databases. You could even include it with your grant proposals if that seems suitable. I call it pre-print peer review.

I argued earlier that, irrespective of what you think about this, it's going to happen. You just have to extrapolate the present situation: there is a lot of anger among scientists about publishers who charge high subscription fees. And while I know some tenured people who simply don't bother with journal publication anymore and just upload their papers to the arXiv, most scientists need the approval stamp that a journal publication presently provides: it shows that peer review has taken place. The easiest way to break this dependence on journals is to offer peer review by other means. This will make the peer review process more to the point and more effective.

The benefit of this change over other, more radical, changes that have been proposed is that it stays very close to the present model: the procedure of peer review itself need not change, only the provider does.

I am thus really excited that the recent issue of Nature reports that one such service exists now and another one is about to be created:
The one that already exists is called Peerage of Science, based in Jyväskylä, Finland. Yeah, right, the Nordic people, they're always a little faster than the rest of the world. Peerage of Science seems to have launched a little more than a year ago, but this is the first time I've heard of it. The one in the making is US-based, and the project is managed by a guy called Keith Collier.

Of course it's difficult to say whether such a change will catch on. Academia has a large inertia, and much depends on whether people will accept independent reviews. But I am confident, so let me make a prediction, just for the fun of it: in 5 years there will be a dozen such services, some run by publishers. In ten years, most peer review will take place this way.

Tuesday, February 12, 2013

The end of science is near, again.

The recent Nature issue has a comment titled “After Einstein: Scientific genius is extinct” by Dean Keith Simonton, who is professor of psychology at UC Davis. Ah, wait, according to his website he isn't just professor, he is Distinguished Professor. His piece is subscription only, so let me briefly summarize what he writes. Simonton notes that it has become rare for new disciplines of science to be created:
“Our theories and instruments now probe the earliest seconds and farthest reaches of the Universe, and we can investigate the tiniest of life forms and the shortest-lived of subatomic particles. It is difficult to imagine that scientists have overlooked some phenomenon worthy of its own discipline alongside astronomy, physics, chemistry and biology. For more than a century, any new discipline has been a hybrid of one of these, such as astrophysics, biochemistry or astrobiology. Future advances are likely to build on what is already known rather than alter the foundations of knowledge. One of the biggest recent scientific accomplishments is the discovery of the Higgs boson – the existence of which was predicted decades ago.”
He argues that scientific progress will not stall, but what’s going to happen is that we’ll be filling in the dots in a landscape whose rough features are now known:
“Just as athletes can win an Olympic gold medal by beating the world record only by a fraction of a second, scientists can continue to receive Nobel prizes for improving the explanatory breadth of theories or the preciseness of measurements.”
I have some issues with his argument.

First, he doesn’t actually discuss scientific genius or any other type of genius. He is instead talking about the foundations of knowledge, which he seems to imagine as the building blocks of scientific disciplines. While it seems fair to say that the creation of a new scientific discipline scores high on the genius scale, it’s not a necessary criterion. Simonton acknowledges
“[I]f anything, scientists today might require more raw intelligence to become a first-rate researcher than it took to become a genius during… the scientific revolution in the sixteenth and seventeenth century, given how much information and experience researchers must now acquire to become proficient.”
but one is still left wondering what he means by genius to begin with, or why it appears in the title of his comment if he doesn’t explain or discuss it.

Second, I am unhappy with his imagery of the foundations of knowledge, an unhappiness I probably owe to my belief in reductionism. The foundation is always the currently most fundamental theory, and it presently resides in physics. Other disciplines have their own “knowledge” that exists independently of physics only because deriving that knowledge from physics is not presently possible, or, where it is possible, entirely impractical.

The difference between these two images matters: in Simonton’s image there is each discipline and its knowledge. In my image there is physics, and the presently unknown relations between physics and other theories (and thereby between these theories themselves). You see then what Simonton is missing: yes, we know the very large and the very small quite well. But our understanding of complex systems and their behavior has only just begun. If we come to better understand the complex systems that are the subject of study in disciplines like biology, neuroscience and politics, this might not create a new discipline in the sense that the name would probably not change. But it has the potential to vastly increase our understanding of the world around us, in stark contrast to the incremental improvements that Simonton believes we’re headed towards. Simonton’s argument is akin to saying that once one knows the anatomy of the human body, the rest of medicine is just details.

Third, he has a very limited imagination. I am imagining extraterrestrial life making use of chemistry entirely alien to ours, with cultures entirely different from ours, or disembodied conscious beings floating through the multiverse. You can see what I’m saying: there’s more to the universe than we have seen so far and there is really no telling what we’ll find if we keep on looking.

Fourth, he is underestimating the relevance of what we don’t know. Simonton writes
“The core disciplines have accumulated not so much anomalies as mere loose ends that will be tidied up one way or another. A possible exception is theoretical physics, which is as yet unable to integrate gravity with the other three forces of nature.”
I guess he deserves credit for having heard of quantum gravity. Yes, the foundations are incomplete. But that's not a small missing piece, it's huge, and nobody knows how huge.

To draw upon an example I used earlier, imagine that our improved knowledge of the fundamental ingredients of our theories would allow us to create synthetic nuclei (molecei) that would not have been produced by any natural processes anywhere in the universe. They would have their own chemistry, their own biology, and would interact with the matter we already have in novel ways. Now you could complain that this would be just another type of chemistry rather than a new discipline, but that’s just nomenclature. The relevant point is that this would be a dramatic discovery affecting all of the natural sciences. You never know what you’ll find if you follow the loose ends.

In summary: it might be that Simonton is right and we have made pretty much all major discoveries, so that everything still to come will be incremental. Or it might not be true. I really do not see what evidence his “thesis”, as he calls it, is based upon, other than stating the obvious: that the low-hanging fruits are the first to be eaten.

Aside: John Barrow in his book “Impossibility” discussed the three different scenarios of scientific progress: progress ending, asymptotically stagnating, or forever expanding. I found it considerably more insightful than Simonton’s vague comment.

Friday, February 08, 2013

Book review "The Edge of Physics" by Anil Ananthaswamy

The Edge of Physics: A Journey to Earth's Extremes to Unlock the Secrets of the Universe
By Anil Ananthaswamy
Mariner Books (January 14, 2011)

In "The Edge of Physics", Ananthaswamy takes the reader on a trip to some of the presently most exciting experiments in physics: the Soudan Mine, where physicists are looking for the direct detection of dark matter; Lake Baikal with its underwater neutrino detectors; the Square Kilometre Array in South Africa; the VLT in Chile; the IceCube Neutrino Observatory at the South Pole; and more, before he finishes his travels at CERN in Geneva.

Along this trip one learns a lot, not only about the scenery, but also about physics and the history of physics. Ananthaswamy doesn't add the experiments as an afterthought to elaborations on quantum mechanics and special relativity; the experiments and the people working on them take the lead. His theoretical explanations are brief but to the point. The appendix contains the shortest summaries of the Standard Model and the Concordance Model that I've ever seen. He explains enough that the reader can understand which new physics the experiments are looking for and why it is relevant, but always quickly comes back to show how this search proceeds in reality.

I found this book hugely enjoyable because it is not your typical popular science book. I didn't have to make my way through yet another chapter that promises to explain general relativity without equations, and I learned quite a few things along the way. It's amazing how many details experimentalists have to think about that would never have occurred to me. Ananthaswamy tells stories of people who found their destiny, stories of courage, stories of trial and error, and of some quite dramatic accidents and near-accidents. It's a very well written narrative.

I have only one complaint about this book, which is that it would have very much benefited from some illustrations, be it to explain the CMB power spectrum, the generations and families of the Standard Model, the thermal history of the universe, or to sketch the experiments and their parts.

In summary, I can recommend this book to everybody with an interest in contemporary physics or the history of physics. If you have no clue about particle physics or cosmology whatsoever, you might not be able to follow some of the explanations, which are really brief. But even then you'll still take something away from this book. I'd give "The Edge of Physics" 5 out of 5 stars.

Tuesday, February 05, 2013

Consequences of using the journal impact factor

An interesting paper that should be mandatory literature for everybody making decisions on grant or job applications, especially for those impressed by high-profile journals on publication lists: "Deep Impact: Unintended consequences of journal rank" by Björn Brembs and Marcus Munafò. It's a literature review that sends a clear message about the journal impact factor. The authors argue that the impact factor is useless in the best case and harmful to science in the worst case.

The annually updated Thomson Reuters journal impact factor (IF) is, in principle, the number of citations in a given year to the articles a journal published in the two preceding years, divided by the number of articles published in those years. In practice, there is some ambiguity about what counts as an "article", which is subject to negotiation with Thomson Reuters. For example, journals that publish editorials will not want them to count among the articles, because editorials are rarely cited in the scientific literature. Unfortunately, this freedom of negotiation results in a lack of transparency that casts doubt on the objectivity of the IF. While I knew that, the problem seems to be worse than I thought. Brembs and Munafò quote some findings:
"For instance, the numerator and denominator values for Current Biology in 2002 and 2003 indicate that while the number of citations remained relatively constant, the number of published articles dropped...

In an attempt to test the accuracy of the ranking of some of their journals by IF, Rockefeller University Press purchased access to the citation data of their journals and some competitors. They found numerous discrepancies between the data they received and the published rankings, sometimes leading to differences of up to 19% [86]. When asked to explain this discrepancy, Thomson Reuters replied that they routinely use several different databases and had accidentally sent Rockefeller University Press the wrong one. Despite this, a second database sent also did not match the published records. This is only one of a number of reported errors and inconsistencies [87,88]."
(For references in this and the following quotes, please see Brembs and Munafò's paper.)
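Back to the definition for a moment. Here is a toy illustration, with made-up numbers and my own function names, of how negotiating the denominator moves the IF:

```python
def impact_factor(citations: int, citable_items: int) -> float:
    # Citations in year Y to items from years Y-1 and Y-2, divided by
    # the number of "citable items" published in those two years.
    return citations / citable_items

citations = 3000   # hypothetical citations to the journal's content
articles = 1000    # research articles published in the window
editorials = 500   # editorials, news items, etc. in the same window

print(impact_factor(citations, articles + editorials))  # 2.0
print(impact_factor(citations, articles))  # 3.0 if editorials don't count
```

Same journal, same citations, and a 50% higher IF, depending only on what one convinces Thomson Reuters to exclude from the denominator.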

That is already a bad starting point. But more interesting is that, even though there are surveys confirming that the IF captures quite well researchers' perception of high impact, if one looks at the numbers it actually doesn't tell much about the promise of the articles in these journals:

"[J]ournal rank is a measurable, but unexpectedly weak predictor of future citations [26,55–59]... The data presented in a recent analysis of the development of [the] correlations between journal rank and future citations over the period from 1902-2009 reveal[s that]... the coefficient of determination between journal rank and citations was always in the range of ~0.1 to 0.3 (i.e., very low)."
And that is despite there being reasons to expect a correlation: high-profile journals put some effort into publicizing articles, and you can expect people to cite high-IF journals just to polish their reference lists. However,
"The only measure of citation count that does correlate strongly with journal rank (negatively) is the number of articles without any citations at all [63], supporting the argument that fewer articles in high-ranking journals go unread...

Even the assumption that selectivity might confer a citation advantage is challenged by evidence that, in the citation analysis by Google Scholar, only the most highly selective journals such as Nature and Science come out ahead over unselective preprint repositories such as ArXiv and RePEc (Research Papers in Economics) [64]."
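To get a feeling for what the coefficient of determination of ~0.1 to 0.3 quoted above actually means, here is a quick toy calculation, mine and not the paper's, generating fake data in which "rank" explains 20% of the variance in "citations":

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
rank = rng.normal(size=n)    # stand-in for (normalized) journal rank
noise = rng.normal(size=n)   # everything else that drives citations

# Mix so that rank accounts for 20% of the citation variance.
citations = np.sqrt(0.2) * rank + np.sqrt(0.8) * noise

r = np.corrcoef(rank, citations)[0, 1]
print(f"R^2 = {r**2:.2f}")   # prints roughly 0.20
```

A predictor that leaves 80% of the variance unexplained is not something you would want to base hiring decisions on.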
So the IFs of the journals on a publication list don't tell you much. That scores as useless, but what's the harm? Well, there are some indications that studies published in high-IF journals are less reliable, i.e. more likely to contain exaggerated claims and findings that cannot later be reproduced.
"There are several converging lines of evidence which indicate that publications in high ranking journals are not only more likely to be fraudulent than articles in lower ranking journals, but also more likely to present discoveries which are less reliable (i.e., are inflated, or cannot subsequently be replicated).

Some of the sociological mechanisms behind these correlations have been documented, such as pressure to publish (preferably positive results in high-ranking journals), leading to the potential for decreased ethical standards [51] and increased publication bias in highly competitive fields [16]. The general increase in competitiveness, and the precariousness of scientific careers [52], may also lead to an increased publication bias across the sciences [53]. This evidence supports earlier propositions about social pressure being a major factor driving misconduct and publication bias [54], eventually culminating in retractions in the most extreme cases."
The "decline effect" (effects becoming less pronounced in replications) and the problems with the reproducibility of published research findings have recently gotten quite some attention. The consequences for science that Brembs and Munafò warn of are:
"It is conceivable that, for the last few decades, research institutions world-wide may have been hiring and promoting scientists who excel at marketing their work to top journals, but who are not necessarily equally good at conducting their research. Conversely, these institutions may have purged excellent scientists from their ranks, whose marketing skills did not meet institutional requirements. If this interpretation of the data is correct, we now have a generation of excellent marketers (possibly, but not necessarily also excellent scientists) as the leading figures of the scientific enterprise, constituting another potentially major contributing factor to the rise in retractions. This generation is now in charge of training the next generation of scientists, with all the foreseeable consequences for the reliability of scientific publications in the future."
Or, as I like to put it, you really have to be careful which secondary criteria (publications in journals with high impact factor) you use as a substitute for the primary goal (good science). If you use the wrong criteria, you'll not only fail to reach an optimal configuration, you'll make it increasingly harder to ever get there, because you're changing the background on which you're optimizing (selecting for people with non-optimal strategies).
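A toy model makes the mechanism vivid. Everything below is my own construction, not anything from the Brembs and Munafò paper: give agents independent "science" and "marketing" skills, hire each generation by the proxy criterion, and watch what the selection actually optimizes:

```python
# Toy model of hiring by a proxy criterion. Agents have independent
# "science" and "marketing" skills; each generation, the top 20% by
# marketing are hired and pass both skills on with some noise.
import random

random.seed(1)

def offspring(parent):
    return {k: v + random.gauss(0, 0.3) for k, v in parent.items()}

population = [{"science": random.gauss(0, 1), "marketing": random.gauss(0, 1)}
              for _ in range(10_000)]

for gen in range(6):
    population.sort(key=lambda a: a["marketing"], reverse=True)
    hired = population[:len(population) // 5]  # top 20% by the proxy
    avg = {k: sum(a[k] for a in hired) / len(hired)
           for k in ("science", "marketing")}
    print(f"gen {gen}: marketing {avg['marketing']:+.2f}, "
          f"science {avg['science']:+.2f}")
    population = [offspring(a) for a in hired for _ in range(5)]
```

The proxy score climbs generation after generation while the mean science skill barely moves: the selection produces ever better marketers who are not, on average, better scientists. And because the next generation descends from the hired marketers, the pool itself drifts, which is the changing background mentioned above.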

It should clearly give us something to think about that even Gordon Macomber, the new head of Thomson Reuters, warns of depending on publication and citation statistics.

Thanks to Jorge for drawing my attention to this paper.