Thursday, February 15, 2018

What does it mean for string theory that the LHC has not seen supersymmetric particles?



The LHC data so far have not revealed any evidence for supersymmetric particles, or any other new particles. For all we know at present, the standard model of particle physics suffices to explain observations.

There is some chance that better statistics which come with more data will reveal some less obvious signal, so the game isn’t yet over. But it’s not looking good for susy and her friends.
Simulated signal of black hole
production and decay at the LHC.
[Credits: CERN/ATLAS]

What are the consequences? The consequences for supersymmetry itself are few. The reason is that supersymmetry by itself is not a very predictive theory.

To begin with, there are various versions of supersymmetry. But more importantly, the theory doesn’t tell us what the masses of the supersymmetric particles are. We know they must be heavier than something we would have observed already, but that’s it. There is nothing in supersymmetric extensions of the standard model which prevents theorists from raising the masses of the supersymmetric partners until they are out of the reach of the LHC.

This is also the reason why the no-show of supersymmetry has no consequences for string theory. String theory requires supersymmetry, but it makes no requirements about the masses of supersymmetric particles either.

Yes, I know the headlines said the LHC would probe string theory, and the LHC would probe supersymmetry. The headlines were wrong. I am sorry they lied to you.

But the LHC, despite not finding supersymmetry or extra dimensions or black holes or unparticles or what have you, has taught us an important lesson. That’s because it is clear now that the Higgs mass is not “natural”, in contrast to all the other particle masses in the standard model. That the mass is natural means, roughly speaking, that calculating it should not require the input of finely tuned numbers.

The idea that the Higgs-mass should be natural is why many particle physicists were confident the LHC would see something beyond the Higgs. This didn’t happen, so the present state of affairs forces them to rethink their methods. There are those who cling to naturalness, hoping it might still be correct, just in a more difficult form. Some are willing to throw it out and replace it instead with appealing to random chance in a multiverse. But most just don’t know what to do.

Personally I hope they’ll finally come around and see that they have tried for several decades to solve a problem that doesn’t exist. There is nothing wrong with the mass of the Higgs. What’s wrong with the standard model is the missing connection to gravity and a Landau pole.

Be that as it may, the community of theoretical particle physicists is currently in a phase of rethinking. There are of course those who already argue a next larger collider is needed because supersymmetry is just around the corner. But the main impression that I get when looking at recent publications is a state of confusion.

Fresh ideas are needed. The next years, I am sure, will be interesting.



I explain all about supersymmetry, string theory, the problem with the Higgs-mass, naturalness, the multiverse, and what they have to do with each other in my upcoming book “Lost in Math.”

Monday, February 12, 2018

Book Update: First Review!

The final proofs are done and review copies were sent out. One of the happy recipients, Emmanuel Rayner, read the book within two days, and so we have a first review on Goodreads now. That’s not counting the two-star review by someone who I am very sure hasn’t read the book, because he “reviewed” it before there were review copies. Tells you all you need to know about online ratings.

The German publisher, Fischer, is still waiting for the final manuscript which has not yet left the US publisher’s rear end. Fischer wants to get started on the translation so that the German edition appears in early fall, only a few months later than the US edition.

Since I get this question a lot, no, I will not translate the book myself. To begin with, it seemed like a rather stupid thing to do: agreeing to translate an 80k-word manuscript when someone else can do it instead. Maybe more importantly, my German writing is miserable, owing to a grammar reform which struck the country the year after I moved overseas, and which therefore entirely passed me by. It adds to this that the German spell-check on my laptop isn’t working (it’s complicated), I have an English keyboard, hence no umlauts, and also, did I mention I didn’t have to do it in the first place?

Problems start with the title. “Lost in Math” doesn’t translate well to German, so the Fischer people are searching for a new title. They have been searching for two months, for all I can tell. I imagine them randomly opening pages of a dictionary, looking for inspiration.

Meanwhile, they have recruited and scheduled an appointment for me with a photographer to take headshots. Because in Germany you leave nothing to chance. So next week I’ll be photographed.

In other news, end of February I will give a talk at a workshop on “Naturalness, Hierarchy, and Fine Tuning” in Aachen, and I agreed to give a seminar in Heidelberg end of April, both of which will be more or less about the topic of the book. So stop by if you are interested and in the area.

And do not forget to preorder a copy if you haven’t yet done so!

Wednesday, February 07, 2018

Which problems make good research problems?

mini-problem [answer here]
Scientists solve problems; that’s their job. But which problems are promising topics of research? This is the question I set out to answer in Lost in Math at least concerning the foundations of physics.

A first, rough, classification of research problems can be made using Thomas Kuhn’s cycle of scientific theories. Kuhn’s cycle consists of a phase of “normal science” followed by “crisis” leading to a paradigm change, after which a new phase of “normal science” begins. This grossly oversimplifies reality, but it will be good enough for what follows.

Normal Problems

During the phase of normal science, research questions usually can be phrased as “How do we measure this?” (for the experimentalists) or “How do we calculate this?” (for the theorists).

The Kuhn Cycle.
[Img Src: thwink.org]
In the foundations of physics, we have a lot of these “normal problems.” For the experimentalists it’s because the low-hanging fruits have been picked and measuring anything new becomes increasingly challenging. For the theorists it’s because in physics predictions don’t just fall out of hypotheses. We often need many steps of argumentation and lengthy calculations to derive quantitative consequences from a theory’s premises.

A good example for a normal problem in the foundations of physics is cold dark matter. The hypothesis is easy enough: There’s some cold, dark, stuff in the cosmos that behaves like a fluid and interacts weakly both with itself and other matter. But that by itself isn’t a useful prediction. A concrete research problem would instead be: “What is the effect of cold dark matter on the temperature fluctuations of the cosmic microwave background?” And then the experimental question “How can we measure this?”

Other problems of this type in the foundations of physics are “What is the gravitational contribution to the magnetic moment of the muon?,” or “What is the photon background for proton scattering at the Large Hadron Collider?”

Answering such normal problems expands our understanding of existing theories. These are calculations that can be done within the frameworks we have, but the calculations can be challenging.

The examples in the previous paragraphs are solved problems, or at least problems that we know how to solve, though you can always ask for higher precision. But we also have unsolved problems in this category.

The quantum theory of the strong nuclear force, for example, should largely predict the masses of particles that are composed of several quarks, like neutrons, protons, and other similar (but unstable) composites. Such calculations, however, are hideously difficult. They are today made by use of sophisticated computer code – “lattice calculations” – and even so the predictions aren’t all that great. A related question is how nuclear matter behaves in the core of neutron stars.

These are but some randomly picked examples for the many open questions in physics that are “normal problems,” believed to be answerable with the theories we know already, but I think they serve to illustrate the case.

Looking beyond the foundations, we have normal problems like predicting the solar cycle and solar weather – difficult because the system is highly nonlinear and partly turbulent, but nothing that we expect to be in conflict with existing theories. Then there is high-temperature superconductivity, a well-studied but theoretically not well-understood phenomenon, due to the lack of quasi-particles in such materials. And so on.

So these are the problems we study when business goes as normal. But then there are problems that can potentially change paradigms, problems that signal a “crisis” in the Kuhnian terminology.

Crisis Problems

The obvious crisis problems are observations that cannot be explained with the known theories.

I do not count most of the observations attributed to dark matter and dark energy as crisis problems. That’s because most of this data can be explained well enough by just adding two new contributions to the universe’s energy budget. You will undoubtedly complain that this does not give us a microscopic description, but there’s no data for the microscopic structure either, so no problem to pinpoint.

But some dark matter observations really are “crisis problems.” These are unexplained correlations, regularities in galaxies that are hard to account for with cold dark matter, such as the Tully-Fisher-relation or the strange ability of dark matter to seemingly track the distribution of matter. There is as yet no satisfactory explanation for these observations using the known theories. Modifying gravity successfully explains some of it but that brings other problems. So here is a crisis! And it’s a good crisis, I dare to say, because we have data and that data is getting better by the day.

This isn’t the only good observational crisis problem we presently have in the foundations of physics. One of the oldest ones, but still alive and kicking, is the magnetic moment of the muon. Here we have a long-standing mismatch between theoretical prediction and measurement that has still not been resolved. Many theorists take this as an indication that this cannot be explained with the standard model and a new, better, theory is needed.

A couple more such problems exist, or maybe I should say persist. The DAMA measurements for example. DAMA is an experiment that searches for dark matter. They have been getting a signal of unknown origin with an annual modulation, and have kept track of it for more than a decade. The signal is clearly there, but if it was dark matter that would conflict with other experimental results. So DAMA sees something, but no one knows what it is.

There is also the still-perplexing LSND data on neutrino oscillation that doesn’t want to agree with any other global parameter fit. Then there is the strange discrepancy in the measurement results for the proton radius using two different methods, and a similar story for the lifetime of the neutron. And there are the recent tensions in the measurement of the Hubble rate using different methods, which may or may not be something to worry about.

Of course each of these data anomalies might have a “normal” explanation in the end. It could be a systematic measurement error or a mistake in a calculation or an overlooked additional contribution. But maybe, just maybe, there’s more to it.

So that’s one type of “crisis problem” – a conflict between theory and observations. But besides these there is an utterly different type of crisis problem, which is entirely on the side of theory-development. These are problems of internal consistency.

A problem of internal consistency occurs if you have a theory that predicts conflicting, ambiguous, or just nonsense observations. A typical example for this would be probabilities that become larger than one, which is inconsistent with a probabilistic interpretation. Indeed, this problem was the reason physicists were very certain the LHC would see some new physics. They couldn’t know it would be the Higgs, and it could have been something else – like an unexpected change to the weak nuclear force – but the Higgs it was. It was restoring internal consistency that led to this successful prediction.

Historically, studying problems of consistency has led to many stunning breakthroughs.

The “UV catastrophe” in which a thermal source emits an infinite amount of light at small wavelength is such a problem. Clearly that’s not consistent with a meaningful physical theory in which observable quantities should be finite. (Note, though, that this is a conflict with an assumption. Mathematically there is nothing wrong with infinity.) Planck solved this problem, and the solution eventually led to the development of quantum mechanics.
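To see the catastrophe numerically, here is a small comparison (my own sketch, not from the post; the temperature and wavelengths are arbitrary choices) of the classical Rayleigh-Jeans spectrum with Planck’s law:

```python
import math

# Spectral energy density of thermal radiation (SI units). The classical
# Rayleigh-Jeans formula diverges as the wavelength goes to zero -- the
# "UV catastrophe" -- while Planck's law stays finite.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def rayleigh_jeans(lam, T=5000.0):
    # classical result: grows without bound as lam -> 0
    return 8 * math.pi * k * T / lam**4

def planck(lam, T=5000.0):
    # quantum result: exponentially suppressed at short wavelengths
    x = h * c / (lam * k * T)
    return (8 * math.pi * h * c / lam**5) / math.expm1(x)

for lam in (1e-6, 1e-7, 1e-8):   # shorter and shorter wavelengths
    print(lam, rayleigh_jeans(lam), planck(lam))
```

At long wavelengths the two formulas agree; at short ones the classical curve blows up while Planck’s goes to zero, which is why the total emitted energy stays finite.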

Another famous problem of consistency is that Newtonian mechanics was not compatible with the space-time symmetries of electrodynamics. Einstein resolved this disagreement, and got special relativity. Dirac later resolved the contradiction between quantum mechanics and special relativity which, eventually, gave rise to quantum field theory. Einstein further removed contradictions between special relativity and Newtonian gravity, getting general relativity.

All these have been well-defined, concrete, problems.

But most theoretical problems in the foundations of physics today are not of this sort. Yes, it would be nice if the three forces of the standard model could be unified to one. It would be nice, but it’s not necessary for consistency. Yes, it would be nice if the universe was supersymmetric. But it’s not necessary for consistency. Yes, it would be nice if we could explain why the Higgs mass is not technically natural. But it’s not inconsistent if the Higgs mass is just what it is.

It is well documented that Einstein and even more so Dirac were guided by the beauty of their theories. Dirac in particular was fond of praising the use of mathematical elegance in theory-development. Their personal motivation, however, is only of secondary interest. In hindsight, the reason they succeeded was that they were working on good problems to begin with.

Real theory-problems in the foundations of physics today are few, but they exist. One is the lacking quantization of gravity. Just lumping the standard model together with general relativity doesn’t work mathematically, and we don’t know how to do it properly.

Another serious problem with the standard model alone is the Landau pole in one of the coupling constants. That means that the strength of one of the forces becomes infinitely large. This is non-physical for the same reason the UV catastrophe was, so something must happen there. This problem has received little attention because most theorists presently believe that the standard model becomes unified long before the Landau pole is reached, making the extrapolation redundant.

And then there are some cases in which it’s not clear what type of problem we’re dealing with. The non-convergence of the perturbative expansion is one of these. Maybe it’s just a question of developing better math, or maybe there’s something we get really wrong about quantum field theory. The case is similar for Haag’s theorem. Also the measurement problem in quantum mechanics I find hard to classify. Appealing to a macroscopic process in the theory’s axioms isn’t compatible with the reductionist ideal, but then again that is not a fundamental problem, but a conceptual worry. So I’m torn about this one.

But as far as crisis problems in theory development are concerned, the lesson from the history of physics is clear: Problems are promising research topics if they really are problems, which means you must be able to formulate a mathematical disagreement. If, in contrast, the supposed problem is that you simply do not like a particular aspect of a theory, chances are you will just waste your time.



Homework assignment: Convince yourself that the mini-problem shown in the top image is mathematically ill-posed unless you appeal to Occam’s razor.

Wednesday, January 31, 2018

Physics Facts and Figures

Physics is old. Together with astronomy, it’s the oldest scientific discipline. And the age shows. Compared to other scientific areas, physics is a slowly growing field. I learned this from a 2010 paper by Larsen and van Ins. The authors counted the number of publications per scientific area. In physics, the number of publications grows at an annual rate of 3.8%. This means it currently takes 18 years for the body of physics literature to double. For comparison, the growth rates for publications in electrical engineering (technology) are 9% (7.5%), with doubling times of 8 years (9.6 years).
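The quoted doubling times follow directly from the growth rates; a quick check (my own, using the numbers above):

```python
import math

# Doubling time T implied by an annual growth rate r: solve (1 + r)**T = 2.
def doubling_time(r):
    return math.log(2) / math.log(1 + r)

print(doubling_time(0.038))  # physics: ~18.6 years
print(doubling_time(0.09))   # electrical engineering: ~8 years
print(doubling_time(0.075))  # technology: ~9.6 years
```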

The total number of scientific papers closely tracks the total number of authors, irrespective of discipline. The relation between the two can be approximately fit by a power law, so that the number of papers is equal to the number of authors to the power of β. But this number, β, turns out to be field-specific, which I learned from a more recent paper: “Allometric Scaling in Scientific Fields” by Dong et al.

In mathematics the exponent β is close to one, which means that the number of papers increases linearly with the number of authors. In physics, the exponent is smaller than one, approximately 0.877. And not only this, it has been decreasing in the last ten years or so. This means we are seeing here diminishing returns: More physicists result in a less than proportional growth of output.
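To see what an exponent below one means in practice, here is a toy calculation (my own illustration, with an arbitrary normalization) using the quoted β ≈ 0.877:

```python
# Allometric scaling: papers grow as authors**beta. With beta < 1,
# doubling the workforce less than doubles the output.
beta = 0.877

def papers(authors, norm=1.0):
    return norm * authors ** beta

growth = papers(2000) / papers(1000)   # effect of doubling the authors
print(growth)                          # about 1.84, not 2
```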

Figure 2 from Dong et al, Scientometrics 112, 1 (2017) 583.
β is the exponent by which the number of papers
scales with the number of authors.
The paper also found some fun facts. For example, a few sub-fields of physics are statistical outliers in that their researchers produce more than the average number of papers. Dong et al quantified this by a statistical measure that unfortunately doesn’t have an easy interpretation. Either way, they offer a ranking of the most productive sub-fields in physics, which is (in order):

(1) Physics of black holes, (2) Cosmology, (3) Classical general relativity, (4) Quantum information, (5) Matter waves, (6) Quantum mechanics, (7) Quantum field theory in curved space time, (8) General theory and models of magnetic ordering, (9) Theories and models of many electron systems, (10) Quantum gravity.

Isn’t it interesting that this closely matches the fields that tend to attract media attention?

Another interesting piece of information that I found in the Dong et al paper is that in all sub-fields the exponent relating the number of citations to the number of authors is larger than one, approximately 1.1. This means that, on average, the more people work in a sub-field, the more citations they receive. I think this is relevant information for anyone who wants to make sense of citation indices.

A third paper that I found very insightful for understanding the research dynamics in physics is “A Century of Physics” by Sinatra et al. Among other things, they analyzed how frequently sub-fields of physics reference their own or other sub-fields. The most self-referential sub-fields, they conclude, are nuclear physics and the physics of elementary particles and fields.

Papers from these two sub-fields also have by far the lowest expected “ultimate impact,” which the authors define as the typical number of citations a paper attracts over its lifetime, where the lifetime is the typical number of years in which the paper attracts citations (see figure below). In nuclear physics (labeled NP in the figure) and particle physics (EPF), interest in papers is short-lived and the overall impact remains low. By this measure, the category with the highest impact is electromagnetism, optics, acoustics, heat transfer, classical mechanics and fluid dynamics (labeled EOAHCF).

Figure 3 e from Sinatra et al, Nature Physics 11, 791–796 (2015).

A final graph from the Sinatra et al paper I want to draw your attention to concerns the productivity of physicists. As we saw earlier, the exponent relating the total number of papers to the total number of authors is somewhat below 1 and has been falling in the recent decade. However, if you look at the number of papers per author, you find that it has been rising sharply since the early 1990s, ie, basically ever since there was email.

Figure 1 e from Sinatra et al, Nature Physics 11, 791–796 (2015)

This means that the reason physicists seem so much more productive today than when you were young is that they collaborate more. And maybe it’s not so surprising because there is a strong incentive for that: If you and I both write a paper, we both have one paper. But if we agree to co-author each other’s paper, we’ll both have two. I don’t mean to accuse scientists of deliberate gaming, but it’s obvious that accounting for papers by the number puts single-authors at a disadvantage.

So this is what physics is, in 2018. An ageing field that doesn’t want to accept its dwindling relevance.

Thursday, January 25, 2018

More Multiverse Madness

The “multiverse” – the idea that our universe is only one of infinitely many – enjoys some credibility, at least in the weirder corners of theoretical physics. But there are good reasons to be skeptical, and I’m here to tell you all of them.

Before we get started, let us be clear what we are talking about because there isn’t only one but multiple multiverses. The most commonly discussed ones are: (a) The many worlds interpretation of quantum mechanics, (b) eternal inflation, and (c) the string theory landscape.

The many worlds interpretation is, guess what, an interpretation. At least to date, it makes no predictions that differ from other interpretations of quantum mechanics. So it’s up to you whether you believe it. And that’s all I have to say about this.

Eternal inflation is an extrapolation of inflation, which is an extrapolation of the concordance model, which is an extrapolation of the present-day universe back in time. Eternal inflation, like inflation, works by inventing a new field (the “inflaton”) that no one has ever seen because we are told it vanished long ago. Eternal inflation is a story about the quantum fluctuations of the now-vanished field and what these fluctuations did to gravity, which no one really knows, but that’s the game.

There is little evidence for inflation, and zero evidence for eternal inflation. But there is a huge number of models for both because available data don’t constrain the models much. Consequently, theorists theorize the hell out of it. And the more papers they write about it, the more credible the whole thing looks.

And then there’s the string theory landscape, the graveyard of disappointed hopes. It’s what you get if you refuse to accept that string theory does not predict which particles we observe.

String theorists originally hoped that their theory would explain everything. When it became clear that didn’t work, some string theorists declared if they can’t do it then it’s not possible, hence everything that string theory allows must exist – and there’s your multiverse. But you could do the same thing with any other theory if you don’t draw on sufficient observational input to define a concrete model. The landscape, therefore, isn’t so much a prediction of string theory as a consequence of string theorists’ insistence that theirs is a theory of everything.

Why then, does anyone take the multiverse seriously? Multiverse proponents usually offer the following four arguments in favor of the idea:

1. It’s falsifiable!

Our Bubble Universe.
Img: NASA/WMAP.
There are certain cases in which some version of the multiverse leads to observable predictions. The most commonly named example is that our universe could have collided with another one in the past, which could have left an imprint in the cosmic microwave background. There is no evidence for this, but of course this doesn’t rule out the multiverse. It just means we are unlikely to live in this particular version of the multiverse.

But (as I explained here) just because a theory makes falsifiable predictions doesn’t mean it’s scientific. A scientific theory should at least have a plausible chance of being correct. If there are infinitely many ways to fudge a theory so that the alleged prediction is no more, that’s not scientific. This malleability is a problem already with inflation, and extrapolating this to eternal inflation only makes things worse. Lumping the string landscape and/or many worlds on top of it doesn’t help parsimony either.

So don’t get fooled by this argument, it’s just wrong.

2. Ok, so it’s not falsifiable, but it’s sound logic!

Step two is the claim that the multiverse is a logical consequence of well-established theories. But science isn’t math. And even if you trust the math, no deduction is better than the assumptions you started from and neither string theory nor inflation are well-established. (If you think they are you’ve been reading the wrong blogs.)

I would agree that inflation is a good effective model, but so is approximating the human body as a bag of water, and see how far that gets you making sense of the evening news.

But the problem with the claim that logic suffices to deduce what’s real runs deeper than personal attachment to pretty ideas. The much bigger problem which looms here is that scientists mistake the purpose of science. This can nicely be demonstrated by a phrase in Sean Carroll’s recent paper. In defense of the multiverse he writes “Science is about what is true.” But, no, it’s not. Science is about describing what we observe. Science is about what is useful. Mathematics is about what is true.

Fact is, the multiverse extrapolates known physics by at least 13 orders of magnitude (in energy) beyond what we have tested and then adds unproved assumptions, like strings and inflatons. That’s not science, that’s math fiction.

So don’t buy it. Just because they can calculate something doesn’t mean they describe nature.

3. Ok, then. So it’s neither falsifiable nor sound logic, but it’s still business as usual.

The gist of this argument, also represented in Sean Carroll’s recent paper, is that we can assess the multiverse hypothesis just like any other hypothesis, by using Bayesian inference.

Bayesian inference is a method of probability assessment in which you update your information to arrive at the most likely hypothesis. Eg, suppose you want to know how many people on this planet have curly hair. For starters you would estimate it’s probably less than the total world-population. Next, you might assign equal probability to all possible percentages to quantify your lack of knowledge. This is called a “prior.”

You would then probably think of people you know and give a lower probability for very large or very small percentages. After that, you could go and look at photos of people from different countries and count the curly-haired fraction, scale this up by population, and update your estimate. In the end you would get reasonably accurate numbers.

If you replace words with equations, that’s how Bayesian inference works.
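For the curly-hair example, the equations might look like this (a minimal sketch with made-up counts; a uniform prior on the fraction is a Beta(1,1) distribution, which binomial data update to another Beta):

```python
# Beta-binomial updating: with prior Beta(a, b) and data "k curly-haired
# out of n observed people", the posterior is Beta(a + k, b + n - k).
a, b = 1.0, 1.0      # Beta(1, 1): uniform prior over the fraction
k, n = 35, 100       # hypothetical counts from the photo survey

a_post, b_post = a + k, b + (n - k)
posterior_mean = a_post / (a_post + b_post)
print(posterior_mean)   # 36/102, about 0.35
```

More photos shift and narrow the posterior; the probability here quantifies your state of knowledge, nothing more.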

You can do pretty much the same for the cosmological constant. Make some guess for the prior, take into account observational constraints, and you will get some estimate for a likely value. Indeed, that’s what Steven Weinberg famously did, and he ended up with a result that wasn’t too badly wrong. Awesome.

But just because you can do Bayesian inference doesn’t mean there must be a planet Earth for each fraction of curly-haired people. You don’t need all these different Earths because in a Bayesian assessment the probability represents your state of knowledge, not the distribution of an actual ensemble. Likewise, you don’t need a multiverse to update the likelihood of parameters when taking into account observations.

So to the extent that it’s science as usual you don’t need the multiverse.

4. So what? We’ll do it anyway.

The fourth, and usually final, line of defense is that if we just assume the multiverse exists, we might learn something, and that could lead to new insights. It’s the good, old Gospel of Serendipity.

In practice this means that multiverse proponents insist on interpreting probabilities for parameters as those of an actual ensemble of universes, ie the multiverse. Then they have the problem of where to get the probability distribution from, a thorny issue since the ensemble is infinitely large. This is known as the “measure problem” of the multiverse.

To solve the problem, they have to construct a probability distribution, which means they must invent a meta-theory for the landscape. Of course that’s just another turtle in the tower and will not help find a theory of everything. And worse, since there are infinitely many such distributions, you better hope they’ll find one that doesn’t need more assumptions than the standard model already has, because if it did, the multiverse would be shaved off by Occam’s razor.

But let us assume the best possible outcome, that they find a measure for the multiverse according to which the parameters of the standard model are likely, and this measure indeed needs fewer assumptions than just postulating the standard model parameters. That would be pretty cool and I would be duly impressed. But even in this case we don’t need the multiverse! All we need is the equation to calculate what’s presumably a maximum of a probability distribution. Thus, again, Occam’s razor should remove the multiverse.

You could then of course insist that the multiverse is a possible interpretation, so you are allowed to believe in it. And that’s all fine by me. Believe whatever you want, but don’t confuse it with science.


The multiverse and other wild things that physicists believe in are subject of my upcoming book “Lost in Math” which is now available for preorder.

Wednesday, January 17, 2018

Pure Nerd Fun: The Grasshopper Problem

illustration of grasshopper.
[image: awesomedude.com]
It’s a sunny afternoon in July and a grasshopper lands on your lawn. The lawn has an area of a square meter. The grasshopper lands at a random place and then jumps 30 centimeters. Which shape must the lawn have so that the grasshopper is most likely to land on the lawn again after jumping?

I know, sounds like one of these contrived but irrelevant math problems that no one cares about unless you can get famous solving it. But the answer to this question is more interesting than it seems. And it’s more about physics than it is about math or grasshoppers.

It turns out the optimal shape of the lawn greatly depends on how far the grasshopper jumps compared to the square root of the area. In my opening example this ratio would have been 0.3, in which case the optimal lawn-shape looks like an inkblot:

From Figure 3 of arXiv:1705.07621



No, it’s not round! I learned this from a paper by Olga Goulko and Adrian Kent, which was published in the Proceedings of the Royal Society (arXiv version here). You can of course rotate the lawn around its center without changing the probability of the grasshopper landing on it again. So, the space of all solutions has the symmetry of a disk. But the individual solutions don’t – the symmetry is broken.

You might know Adrian Kent from his work on quantum foundations, so how come his sudden interest in landscaping? The reason is that problems similar to this appear in certain types of Bell-inequalities. These inequalities, which are commonly employed to identify truly quantum behavior, often end up being combinatorial problems on the unit sphere. I can just imagine the authors sitting in front of this inequality, thinking, damn, there must be a way to calculate this.

As so often, the problem isn’t mathematically difficult to state but dang hard to solve. Indeed, the authors haven’t been able to derive a full solution; in their paper they offer estimates and bounds. Instead, what they did (you will love this) is map the problem to a physical system, which they configure so that it settles on the optimal solution (ie, the optimal lawn-shape) at zero temperature. Then they simulate this system on the computer.

Concretely, they simulate the lawn of fixed area by randomly scattering little squares over a template space that is much larger than the lawn. They allow a certain interaction between the pieces of lawn, and then they calculate the probability for a piece to move, depending on whether or not the move improves the grasshopper’s chance of staying on the green. The lawn is allowed to temporarily pass through less optimal configurations so that it does not get stuck in a local minimum. In the computer simulation, the temperature is then gradually decreased, which means that the lawn freezes and thereby approaches its optimal configuration.
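The procedure they describe is a form of simulated annealing. A minimal sketch of the idea (my own toy implementation on a coarse grid with made-up parameters, not the authors’ code) might look like this:

```python
import math
import random

def lawn_quality(cells, jump, n_dirs=24):
    """Fraction of (landing cell, jump direction) pairs for which the
    grasshopper ends up on an occupied cell again (nearest-cell rounding)."""
    occupied = set(cells)
    hits = 0
    for x, y in cells:
        for k in range(n_dirs):
            theta = 2.0 * math.pi * k / n_dirs
            target = (round(x + jump * math.cos(theta)),
                      round(y + jump * math.sin(theta)))
            if target in occupied:
                hits += 1
    return hits / (len(cells) * n_dirs)

def anneal(n_cells=60, jump=3.0, steps=2000, t0=0.1, seed=2):
    """Anneal a lawn of n_cells grid cells toward an optimal shape,
    starting from a square blob."""
    rng = random.Random(seed)
    side = math.isqrt(n_cells) + 1
    cells = [(x, y) for x in range(side) for y in range(side)][:n_cells]
    q = lawn_quality(cells, jump)
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-6  # linear cooling schedule
        i = rng.randrange(n_cells)
        old = cells[i]
        # propose moving one piece of lawn to a nearby empty grid cell
        new = (old[0] + rng.randint(-2, 2), old[1] + rng.randint(-2, 2))
        if new == old or new in set(cells):
            continue
        cells[i] = new
        q_new = lawn_quality(cells, jump)
        # Metropolis rule: accept worse configurations with Boltzmann probability
        if q_new >= q or rng.random() < math.exp((q_new - q) / temp):
            q = q_new
        else:
            cells[i] = old  # reject: undo the move
    return cells, q

cells, q = anneal()
print(round(q, 3))
```

The essential ingredients are the Metropolis acceptance rule, which lets the lawn temporarily get worse, and the cooling schedule, which gradually freezes it into a (near-)optimal shape.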

In the video below you see examples for different values of d, the above-mentioned ratio between the distance the grasshopper jumps and the square root of the lawn-area:





For very small d, the optimal lawn is almost a disc (not shown in the video). For larger d, it becomes a cogwheel, where the number of cogs depends on d. If d increases above approximately 0.56 (the inverse square root of π), the lawn starts falling apart into disconnected pieces. There is a transition range in which the lawn doesn’t seem to settle on any particular shape. Beyond d ≈ 0.65 comes a shape the authors refer to as a “three-bladed fan,” and after that come stripes of varying lengths.
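The first threshold has a simple geometric reading: a disc of unit area has radius 1/√π, so at d ≈ 0.56 the jump length reaches the radius of a disc-shaped lawn. A one-line check:

```python
import math

# a disc with area 1 satisfies pi * r**2 = 1, hence r = 1/sqrt(pi)
r = 1.0 / math.sqrt(math.pi)
print(round(r, 4))  # → 0.5642
```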

This is summarized in the figure below, where the red line is the probability that the grasshopper stays on the optimally shaped lawn:
Figure 12 of arXiv:1705.07621

The authors did a number of checks to make sure the results aren’t numerical artifacts. For example, they checked that the lawn’s shape doesn’t depend on the square grid used for the simulation; a hexagonal grid gives the same results. They told me by email they are also looking into whether the limited resolution might hide that the lawn shapes are actually fractal, but there doesn’t seem to be any indication of that.

I find this a super-cute example of how many surprises seemingly dull and simple math problems can harbor!

As a bonus, you can get a short explanation of the paper from the authors themselves in this video.

Tuesday, January 16, 2018

Book Review: “The Dialogues” by Clifford Johnson

Clifford Johnson is a veteran of the science blogosphere, a long-term survivor, around already when I began blogging and one of the few still at it today. He is a professor in the Department of Physics and Astronomy at the University of Southern California (in LA).

I had the pleasure of meeting Clifford in 2007. Who’d have thought back then that 10 years later we would both be in the midst of publishing a popular science book?

Clifford’s book was published by MIT Press just two months ago. It’s titled The Dialogues: Conversations about the Nature of the Universe and it’s not just a book, it’s a graphic novel! Yes, that’s right. Clifford doesn’t only write, he also draws.

His book is a collection of short stories which are mostly physics-themed, but also touch on overarching questions like how science works or what the purpose of basic research is to begin with. I would characterize these stories as conversation starters. They are supposed to make you wonder.

But just because it contains a lot of pictures doesn’t mean The Dialogues is a shallow book. On the contrary, a huge amount of physics is packed into it, from electrodynamics to the multiverse, the cosmological constant, a theory of everything, and gravitational waves. The reader also finds references for further reading in case they wish to learn more.

The drawings are put to good use and often add to the explanation. The Dialogues is also, I must add, a big book. With more than 200 illustrated pages, it seems to me that offering it for less than $30 is a real bargain!

I would recommend this book to everyone who has an interest in the foundations of physics. Even if you don’t read it, it will still look good on your coffee table ;)




Win a copy!

I bought the book when it appeared, but later received a free review copy. Now I have two and I am giving one away for free!

The book will go to the first person who submits a comment to this blogpost (not elsewhere) listing 10 songs that use physics-themed phrases in the lyrics (not just in the title). Overly general words (such as “moon” or “light”) or words that are non-physics terms which just happen to have a technical meaning (such as “force” or “power”) don’t count.

The time-stamp of your comment will decide who was first, so please do not send your list to me per email. Also, please only make a submission if you are willing to provide me with a mailing address.

Good luck!

Update:
The book is gone.

Wednesday, January 10, 2018

Superfluid dark matter gets seriously into business

very dark fluid
Most matter in the universe isn’t like the stuff we are made of. Instead, it’s a thinly distributed, cold medium that rarely interacts, either with itself or with other kinds of matter. It also doesn’t emit light, which is why physicists refer to it as “dark matter.”

A recently proposed idea, according to which dark matter may be superfluid, has now become more concrete, thanks to a new paper by Justin Khoury and collaborators.

Astrophysicists invented dark matter because a whole bunch of observations of the cosmos do not fit with Einstein’s theory of general relativity.

According to general relativity, matter curves space-time and, in return, the curvature dictates the motion of matter. Problem is, if you calculate the response of space-time to all the matter we know, then the observed motions don’t fit the prediction from the calculation.

This problem exists for galactic rotation curves, velocity distributions in galaxy clusters, for the properties of the cosmic microwave background, for galactic structure formation, gravitational lensing, and probably some more that I’ve forgotten or never heard about in the first place.

But dark matter is only one way to explain the observations. We measure the amount of matter and we observe its motion, but the two pieces of information don’t match up with the equations of general relativity. One way to fix this mismatch is to invent dark matter. The other way is to change the equations. This second option has become known as “modified gravity.”

There are many types of modified gravity and most of them work badly. That’s because it’s easy to break general relativity and produce a mess that’s badly inconsistent with the high-precision tests of gravity that we have done within our solar system.

However, it has been known since the 1980s that some types of modified gravity explain observations that dark matter does not explain. For example, the effects of dark matter in galaxies become relevant not at a certain distance from the galactic center, but below a certain acceleration. Even more perplexing, this threshold of acceleration is related to the cosmological constant. Both of these features are difficult to account for with dark matter. Astrophysicists have also established a relation between the brightness of certain galaxies and the velocities of their outermost stars. Named the “Baryonic Tully-Fisher Relation” after its discoverers, it is also difficult to explain with dark matter.

On the other hand, modified gravity works badly in other cases, notably in the early universe where dark matter is necessary to get the cosmic microwave background right, and to set up structure formation so that the result agrees with what we see.

For a long time I have been rather agnostic about this, because I am more interested in the structure of fundamental laws than in the laws themselves. Dark matter works by adding particles to the standard model of particle physics. Modified gravity works by adding fields to general relativity. But particles are fields and fields are particles. And in both cases, the structure of the laws remains the same. Sure, it would be great to settle just exactly what it is, but so what if there’s one more particle or field.

It was a detour that got me interested in this: Fluid analogies for gravity, a topic I have worked on for a few years now. Turns out that certain kinds of fluids can mimic curved space-time, so that perturbations (say, density fluctuations) in the fluid travel just like they would travel under the influence of gravity.

The fluids under consideration here are usually superfluid condensates with an (almost) vanishing viscosity. The funny thing is that if you look at the mathematical description of some of these fluids, they look just like the extra fields you need for modified gravity! So maybe, then, modified gravity is really a type of matter in the end?

I learned about this amazing link three years ago from a paper by Lasha Berezhiani and Justin Khoury. They have a type of dark matter which can condense (like vapor on glass, if you want a visual aid) if a gravitational potential is deep enough. This condensation happens within galaxies, but not in interstellar space because the potential isn’t deep enough. The effect that we assign to dark matter, then, comes partly from the gravitational pull of the fluid and partly from the actual interaction with the fluid.

If the dark matter is superfluid, it has long range correlations that give rise to the observed regularities like the Tully-Fisher relation and the trends in rotation curves. In galaxy clusters, on the other hand, the average density of (normal) matter is much lower and most of the dark matter is not in the superfluid phase. It then behaves just like normal dark matter.

The main reason I find this idea convincing is that it explains why some observations are easier to account for with dark matter and others with modified gravity: It’s because dark matter has phase transitions! It behaves differently at different temperatures and densities.

In solar systems, for example, the density of (normal) matter is strongly peaked and the gradient of the gravitational field near a sun is much larger than in a galaxy on the average. In this case, the coherence in the dark matter fluid is destroyed, which is why we do not observe effects of modified gravity in our solar system. And in the early universe, the temperature is too high and dark matter just behaves like a normal fluid.

In 2015, the superfluid dark matter idea was still lacking details. But two months ago, Khoury and his collaborators came out with a new paper that fills in some of the missing pieces.

Their new calculations take into account that in general the dark matter will be a mixture of superfluid and normal fluid, and both phases will make a contribution to the gravitational pull. Just what the composition is depends on the gravitational potential (caused by all types of matter) and the equation of state of the superfluid. In the new paper, the authors parameterize the general effects and then constrain the parameters so that they fit observations.

Yes, there are new parameters, but not many. They claim that the model can account for all the achievements of normal particle dark matter, plus the benefits of modified gravity on top.

And while this approach very much looks like modified gravity in the superfluid phase, it is immune to the constraint from the measurement of gravitational waves with an optical counterpart. That is because both gravitational waves and photons couple the same way to the additional stuff and hence should arrive at the same time – as observed.

It seems to me, however, that in the superfluid model one would in general get a different dark matter density if one reconstructs it from gravitational lensing than if one reconstructs it from kinematic measurements. That is because the additional interaction with the superfluid is felt only by the baryons. Indeed, this discrepancy could be used to test whether the idea is correct.

Khoury et al don’t discuss the possible origin of the fluid, but I like the interpretation put forward by Erik Verlinde. According to Verlinde, the extra fields which give rise to the effects of dark matter are really low-energy relics of the quantum behavior of space-time. I will admit that this link is presently somewhat loose, but I am hopeful that it will become tighter in the coming years. If so, this would mean that dark matter might be the key to unlocking the -- still secret -- quantum nature of gravity.

I consider this one of the most interesting developments in the foundations of physics I have seen in my lifetime. Superfluid dark matter is without doubt a pretty cool idea.

Tuesday, January 09, 2018

Me, elsewhere

Beginning 2018, I will no longer write for Ethan Siegel’s Forbes collection “Starts With a Bang.” Instead, I will write a semi-regular column for Quanta Magazine, the first of which -- about asymptotically safe gravity -- appeared yesterday.

In contrast to Forbes, Quanta Magazine keeps the copyright, which means that the articles I write for them will not be mirrored on this blog. You actually have to go over to their site to read them. But if you are interested in the foundations of physics, take my word that subscribing to Quanta Magazine is well worth your time, not so much because of me, but because their staff writers have so far done an awesome job of covering relevant topics without succumbing to hype.

I also wrote a review of Jim Baggott’s book “Origins: The Scientific Story of Creation,” which appeared in the January issue of Physics World. I much enjoyed Baggott’s writing and promptly bought another one of his books. Physics World doesn’t want me to repost the review in text, but you can read the PDF here.

Finally, I wrote a contribution to the proceedings of a philosophy workshop I attended last year. In this paper, I summarize my misgivings with arguments from finetuning. You can now find it on the arXiv.

If you want to stay up to date on my writing, follow me on Twitter or on Facebook.

Wednesday, January 03, 2018

Sometimes I believe in string theory. Then I wake up.

They talk about me.
Grumpy Rainbow Unicorn.
[Image Source.]

And I can’t blame them. Because nothing else is happening on this planet. There’s just me and my attempt to convince physicists that beauty isn’t truth.

Yes, I know it’s not much of an insight that pretty ideas aren’t always correct. That’s why I objected when my editor suggested I title my book “Why Beauty isn’t Truth.” Because, duh, it’s been said before and if I wanted to be stale I could have written about how we’re all made of stardust, aah-choir, chimes, fade and cut.

Nature has no obligation to be pretty, that much is sure. But the truth seems hard to swallow. “Certainly she doesn’t mean that,” they say. Or “She doesn’t know what she’s doing.” Then they explain things to me. Because surely I didn’t mean to say that much of what goes on in the foundations of physics these days is a waste of time, did I? And even if, could I please not do this publicly, because some people have to earn a living from it.

They are “good friends,” you see? Good friends who want me to believe what they believe. Because believing has bettered their lives.

And certainly I can be fixed! It’s just that I haven’t yet seen the elegance of string theory and supersymmetry. Don’t I know that elegance is a sign of all successful theories? It must be that I haven’t understood how beauty has been such a great guide for physicists in the past. Think of Einstein and Dirac and, erm, there must have been others, right? Or maybe it’s that I haven’t yet grasped that pretty, natural theories are so much better. Except possibly for the cosmological constant, which isn’t pretty. And the Higgs-mass. And, oh yeah, the axion. Almost forgot about that, sorry.

But it’s not that I don’t think unified symmetry is a beautiful idea. It’s a shame, really, that we have these three different symmetries in particle physics. It would be so much nicer if we could merge them into one large symmetry. Too bad that the first theories of unification led to the prediction of proton decay and were ruled out. But there are a lot of other beautiful unification ideas left to work on. Not all is lost!

And it’s not that I don’t think supersymmetry is elegant. It combines two different types of particles and how cool is that? It has candidates for dark matter. It alleviates the problem with the cosmological constant. And it aids gauge coupling unification. Or at least it did until LHC data interfered with our plans to prettify the laws of nature. Dang.

And it’s not that I don’t see why string theory is appealing. I once set out to become a string theorist. I kid you not. I ate my way through textbooks and it was all totally amazing how much you get out of the rather simple idea that particles shouldn’t be points but strings. Look how consistency dictates the construction of the theory. And note how neatly it fits with all that we already know.

But then I got distracted by a disturbing question: Do we actually have evidence that elegance is a good guide to the laws of nature?

The brief answer is no, we have no evidence. The long answer is in my book and, yes, I will mention the-damned-book until everyone is sick of it. The summary is: Beautiful ideas sometimes work, sometimes they don’t. It’s just that many physicists prefer to recall the beautiful ideas which did work.

And not only is there no historical evidence that beauty and elegance are good guides to find correct theories, there isn’t even a theory for why that should be so. There’s no reason to think that our sense of beauty has any relevance for discovering new fundamental laws of nature.

Sure, if you ask those who believe in string theory and supersymmetry and in grand unification, they will say that of course they know there is no reason to believe a beautiful theory is more likely to be correct. They still work on them anyway. Because what better could they do with their lives? Or with their grants, respectively. And if you work on it, you better believe in it.

I concede, not all math is equally beautiful and not all math is equally elegant. I have yet to find anyone, for example, who thinks Loop Quantum Gravity is more beautiful than string theory. And isn’t it interesting that we share this sense of what is and isn’t beautiful? Shouldn’t it mean something that so many theoretical physicists agree beautiful math is better? Shouldn’t it mean something that so many people believe in the existence of an omniscient god?

But science isn’t about belief, it’s about facts, so here are the facts: This trust in beauty as a guide, it’s not working. There’s no evidence for grand unification. There’s no evidence for supersymmetry, no evidence for axions, no evidence for moduli, for WIMPs, or for dozens of other particles that were invented to prettify theories which work just fine without them. After decades of search, there’s no evidence for any of these.

It’s not working. I know it hurts. But now please wake up.

Let me assure you I usually mean what I say and know what I do. Could I be wrong? Of course. Maybe tomorrow we’ll discover supersymmetry. Not all is lost.