WIMP #9: Is Einstein’s gravity better than his miracle year?

In 1905 Einstein published four groundbreaking papers, which is why 1905 is referred to as his “Annus mirabilis” or “miraculous year”. His theory of gravity, “General Relativity”, was published later (in 1915) and isn’t part of his miracle year. Nonetheless, General Relativity underlies our model of the entire Universe, and predicted things like gravitational waves that we are still learning about 100 years after Einstein proposed it. I guess most of us would be happy having either of these things on our CV… but imagine having both!

WIMP #8: The artist who was decades ahead of his time

The most popular alternative to Einstein’s gravity was first written down by Gregory Horndeski in 1974; it received little attention at the time, and Horndeski left physics to become an artist. The theory was rediscovered in the last decade and now bears his name.

Figure: One of Horndeski’s paintings (LINK). His website can be found HERE.

Seeing red

Nearly every galaxy we look at is redder than it should be.

Not only that, but how much redder they are tells us how far away they are. This is very convenient for us, and makes our lives a lot easier.

Redshifts

Some of you may have heard about redshifts and blueshifts. The basic idea is quite simple: light coming from things travelling away from us appears redder. Due to the expansion of the Universe*, (most) galaxies appear to be travelling away from us, and therefore their light appears redder.

If this seems a bit weird, forget galaxies for a moment, and remember the last time an emergency vehicle whizzed past you with its siren going. As it travels towards you, then passes and moves away from you, the sound of the siren changes: it goes from a higher pitch coming towards you to a lower pitch going away. Light does the same thing, but the effect is usually far too small for humans to notice in everyday situations. For light, “higher pitched” means that it appears bluer, and “lower pitched” means that it appears redder. The latter is what we call “redshift”: the reddening of light because it was emitted by something travelling away from us.

Galaxies emit light at every frequency, but not quite the same amount at each, because the different chemical elements present in a galaxy each emit and absorb light at specific frequencies. All of the frequencies are shifted towards the red by the same fraction, so, if you observe enough different frequencies of light from a given galaxy, you can work out how it would look if it weren’t moving away from you, and therefore how much it has been redshifted.
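As a concrete (and entirely illustrative) example, here is how a redshift can be read off once a single known spectral line has been identified, such as the hydrogen-alpha line. The observed wavelength below is made up for the example, not taken from any real galaxy:

```python
# Reading a redshift off a spectrum, assuming one known line can be identified.
# The observed wavelength is made up for illustration.

H_ALPHA_REST = 656.3         # rest-frame (lab) wavelength of hydrogen-alpha, in nanometres
observed_wavelength = 721.9  # hypothetical wavelength measured in a galaxy's spectrum

# Redshift z is the fractional stretch in wavelength: z = (observed - emitted) / emitted.
z = (observed_wavelength - H_ALPHA_REST) / H_ALPHA_REST
print(f"redshift z = {z:.3f}")  # every feature in the spectrum is stretched by the same factor (1 + z)
```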

Figure: A spectrum from a galaxy. The x-axis shows the frequency of the light, and the height of the black line shows how much light is received at that frequency. Some of the peaks and troughs have been labelled with the elements that cause them.

Why are redshifts important?

Not only does the expansion of the Universe mean that most galaxies appear to be travelling away from us (and therefore look redder), but the further away a galaxy is, the faster it is travelling away from us. So light from galaxies that are further away is “redshifted” (made redder) by more. This means that an accurate measurement of a galaxy’s redshift is an accurate measurement of how far away that galaxy is.
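For nearby galaxies this redshift-to-distance conversion is roughly Hubble’s law. Here is a minimal sketch, using a ballpark value of the Hubble constant rather than a number taken from the paper, and valid only for small redshifts:

```python
# Approximate redshift-to-distance conversion for nearby galaxies (small z).
# Hubble's law: recession speed v = H0 * d, and for small z, v ≈ c * z,
# so d ≈ c * z / H0.  At larger redshifts a full cosmological calculation is needed.

C_KM_S = 299_792.458  # speed of light in km/s
H0 = 70.0             # Hubble constant in km/s per megaparsec (ballpark value)

def approx_distance_mpc(z):
    """Approximate distance in megaparsecs for a small redshift z."""
    return C_KM_S * z / H0

print(f"{approx_distance_mpc(0.023):.0f} Mpc")  # a galaxy at z = 0.023 is roughly 100 Mpc away
```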

Knowing the distances to galaxies accurately is a really important part of testing our models of the Universe, and if we get them wrong it could mean that we draw the wrong conclusions from our data. This includes our conclusions about the dark sector.

Historically, when astrophysicists do big surveys (i.e. scan a lot of the sky with a big telescope for a long time) looking for lots of galaxies, there have been two ways to do it.

Firstly, there is spectroscopy: the light from each galaxy is carefully split into very specific frequencies, so there is lots of information and the redshift of each galaxy can be computed very accurately. The problem with this is that it is very time consuming to collect all of this information for each individual galaxy.

The alternative is called photometry. This involves collecting light through a few big filters, each of which has lots of frequencies contributing to it, so all the frequencies within each filter get muddled together. This means that the redshift of each galaxy can’t be determined anywhere near as precisely as with spectroscopy. But, on the bright side (pun intended), many more galaxies can be found and measured much faster.

A new survey

A recent paper (HERE) talks about the “J-PAS” survey, a proposed survey using a 2.5 metre telescope at the Javalambre Astrophysics Observatory in Spain (about two hours inland from Valencia), sitting on top of a mountain at about 2000 m above sea level.

Figure: Photo of the Javalambre Observatory site. Imagine having that view and then being paid to look up instead of around!

One of the innovative things about this survey is that it is a hybrid of the two usual approaches described above. They call this hybrid survey “spectro-photometric”, again showing that we scientists are really bad at interesting names! The approach uses lots of narrow filters to simultaneously find lots of galaxies and get good data (and accurate redshifts) for those galaxies. This will allow them to do a good survey relatively quickly and cheaply.

The reason for doing surveys like these to find lots of galaxies is that, once we have found them, we can calculate specific statistical quantities from the data, quantities that our theories also predict. This allows us to see which theories’ predictions agree best with the data. The two main things that will be computed from the J-PAS survey are “galaxy clustering” and “weak gravitational lensing”.

“Galaxy clustering” basically just means making a big catalogue of where each galaxy is on the sky and how far away it is from us (i.e. its redshift, hence why having accurate values for these is so important). Then, from this catalogue, we calculate some numbers that represent how clustered (i.e. how clumped together) the matter in the Universe is, and how it is distributed. In different theories of gravity, the matter will be distributed in different ways, and will be clumped together more in some theories than in others.
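To make “some numbers that represent how clustered the matter is” a bit less mysterious, here is a toy Python sketch of the most common such statistic, the two-point correlation function. It simply asks how much more often pairs of galaxies occur at a given separation than they would in a completely random catalogue. Real analyses use 3D positions built from redshifts and more careful estimators; this is only the counting idea:

```python
import numpy as np

def pair_separations(points):
    """All pairwise distances between points (array of shape n x 3)."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    upper = np.triu_indices(len(points), k=1)  # count each pair once
    return dist[upper]

def correlation_function(data, randoms, bins):
    """Toy DD/RR - 1 estimator of the two-point correlation function."""
    dd, _ = np.histogram(pair_separations(data), bins=bins)
    rr, _ = np.histogram(pair_separations(randoms), bins=bins)
    # normalise each histogram by the total number of pairs in its catalogue
    n_d, n_r = len(data), len(randoms)
    dd_norm = dd / (n_d * (n_d - 1) / 2)
    rr_norm = rr / (n_r * (n_r - 1) / 2)
    return dd_norm / rr_norm - 1.0

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Stand-in "galaxy catalogue": uniform random points, so the answer should
    # hover around zero.  A genuinely clustered catalogue gives a signal that
    # rises towards small separations.
    galaxies = rng.uniform(0, 100, size=(1000, 3))
    randoms = rng.uniform(0, 100, size=(1000, 3))
    bins = np.linspace(2, 20, 10)
    print(correlation_function(galaxies, randoms, bins))
```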

“Gravitational lensing” is just the statement that gravity affects light. As a result, the images that we see of each galaxy are distorted slightly. Because these distortions are caused by gravity, which is itself caused by the matter in the Universe, the distortions also tell us about the clumpiness and distribution of matter in the Universe. The images of all the galaxies in the survey are used to measure how much distortion is happening.
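For the curious, the trick behind measuring this distortion can be written down quite compactly (in standard weak-lensing notation, which isn’t necessarily the notation of the paper): the observed shape (ellipticity) of a galaxy is, to a good approximation, its intrinsic shape plus a small gravitational “shear”. Intrinsic shapes point in random directions, so averaging over many galaxies makes them cancel out and leaves the gravitational part:

$$ e_{\rm observed} \;\approx\; e_{\rm intrinsic} + \gamma, \qquad \langle e_{\rm observed} \rangle \;\approx\; \gamma \quad \text{(because } \langle e_{\rm intrinsic} \rangle \approx 0\text{)}. $$

This averaging is why weak lensing needs so many galaxies: the signal only emerges statistically.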

(Aside: this distortion is called “weak” gravitational lensing because it is a relatively minor effect. Gravitational lensing can manifest in many other ways, see for example WIMP #4 HERE.)

How does this all relate to the dark sector?

Combining the measurements of how clumpy the Universe is, and how much gravitational lensing there has been, allows us to infer how gravity works and how strong it is. By calculating these statistics in different theories of gravity, and comparing each of them to the data, we can see if there is any preference to replace Einstein’s gravity with a different theory.

In this paper they use something that is sometimes called “Model Independent Modified Gravity”. This is a topic close to my heart, which I worked on during my PhD, so I will doubtless say more about it in a future post. For now, suffice to say that the idea is, rather than writing down a full alternative theory, to just write down some general ways in which gravity could differ from Einstein’s theory. Then one uses the data to see if there is any evidence for any of these possible modifications to Einstein’s theory.

In this work they choose two parameters, which represent (loosely speaking) (A) the strength of gravity, and (B) how much gravity affects massless things (light) vs how much it affects things with mass (planets, stars, galaxies, dark matter, people and so on). These encompass many of the ways that gravity could be modified.
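To make this slightly more concrete: in the model-independent literature these two ingredients are often packaged into two functions, commonly written as μ and Σ (different papers use different symbols and sign conventions, so take this as a sketch of the idea rather than the exact equations of the J-PAS paper). Loosely, μ rescales the strength of gravity felt by matter, and Σ rescales the gravity felt by light:

$$ k^2 \Psi = -4\pi G\, a^2\, \mu(a,k)\, \bar{\rho}\, \Delta, \qquad k^2 (\Phi + \Psi) = -8\pi G\, a^2\, \Sigma(a,k)\, \bar{\rho}\, \Delta, $$

where Ψ and Φ are the two gravitational potentials and Δ is the density contrast. Einstein’s gravity corresponds to μ = Σ = 1, so any measured deviation from 1 would be a hint of modified gravity.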

The bottom line of this paper is that they predict that the J-PAS survey will be able to detect such differences from Einstein’s gravity if they are at a level of about 5% (or more).

It would be very exciting if a deviation from Einstein’s gravity were found in this way, but even if one isn’t, we will have tested it in a new way, and knowing that gravity can’t be more than 5% different from Einstein’s gravity is a very useful piece of information all by itself.

*Correctly and rigorously interpreting the “expansion” of the Universe is a thorny topic, in which stretchy space, galaxies stuck on inflating balloons and all sorts of other things are invoked. In this post I give a very simple picture, which captures the relevant details for what I am talking about. Don’t get too hung up on the details; maybe I will write about interpreting the expansion in more detail in a future post.

Figure credits:
1) Galaxy spectrum as provided on the zooniverse site HERE. Specific image available HERE.
2) OAJ Observatory, from their website. Specific image available HERE.

WIMP #7: We are the 4.8%

According to our best current data, dark matter and dark energy comprise 95.2% of the energy in the Universe. This means that the sum of the Earth, me, you, every star, every planet, the light given out by every star, everything in the periodic table, and basically everything in the Universe that we humans can touch and consider to be “stuff”, represents only 4.8% of the energy in the Universe.

Simulating the Universe

The entire Universe is just an elaborate simulation running on someone’s computer… my computer. And the computers of many other cosmologists too of course.

Figure: The Universe in a box (inside a computer somewhere on the Earth).

I’m cheating a little with this post, because it is about a recent review article, not new science. Reviews are used to consolidate and concisely present the state of play in a given field, making it easier for scientists to learn a new area or check the status quo.

The review article in question (HERE) is about cosmological simulations, often referred to as “N-body” simulations, because they were first used to solve the “N-body problem”. This is simply the problem of working out the trajectories of a number (N) of bodies interacting only through gravity.

A wonderful story I heard as a PhD student is about one of the earliest N-body simulations. In the 1940s, long before the computing resources we have now, a scientist named Erik Holmberg took 74 light bulbs and arranged them into 2 groups of 37, representing two galaxies. The clever thing he had spotted is that the intensity of light, like the gravitational force, falls off in proportion to one over the distance squared. So he just needed to measure the intensity of light from the other 73 light bulbs at the position of each individual light bulb, and he essentially had a measure of the gravitational force. Then he just moved the light bulbs accordingly. Genius!

These days there is an entire sub-field of cosmology (and astrophysics) based on running and interpreting very sophisticated computer simulations of the Universe.

Why do these big simulations?

The N-body simulations discussed in this review are very computationally expensive (i.e. they require lots of computing resources). I once ran a single simulation that used the equivalent of more than 500 desktop computers for a week, and this is much smaller than the biggest simulations that the community has run. These simulations are also quite a crude way of solving the equations involved. So why do we do them? Simply put, on the smaller scales in the Universe, we have (almost) no alternatives.

When we think about the largest scales in the Universe, we describe things in terms of two Universes: an “average Universe” (i.e. with everything averaged out, so the Universe is the same everywhere) and a “perturbed Universe”, meaning a Universe with small differences from the average Universe, where those differences are allowed to vary from place to place.

It is a bit like describing the surface of the Earth: the average Universe is “sea level” and the height/depth of each point on the Earth’s surface is described as either “above sea level” or “below sea level”. In the same way, the amount of stuff at each point in the Universe is described as “more than average” or “less than average”.

Importantly, on large scales, it seems that this “average Universe” plus “small differences from average” description of the Universe works well. This makes the (very complicated) equations of Einstein’s gravity much simpler, and therefore we can solve the equations describing the Universe relatively easily. In the same way, the description of the Earth is also a good one: if one zooms out, the Earth is very similar to a ball, with even the biggest deviations (like Everest) being quite small on the scale of the whole ball.
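In symbols (using the standard notation, which may not be exactly the review’s), the “difference from average” at each point is usually called the density contrast:

$$ \delta(\vec{x}) \;=\; \frac{\rho(\vec{x}) - \bar{\rho}}{\bar{\rho}}, $$

where ρ(x) is the density at a given point and ρ̄ is the average density. “Large scales” are the scales on which δ stays small, and that smallness is what makes the equations tractable.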

However, this approach fails on smaller scales, i.e. if one zooms in. For example, a galaxy, and the patch of empty “outer space” next to the galaxy, are both very different from average. This means that on smaller scales in the Universe, we can’t describe the structures in the Universe in terms of small differences from the average Universe. As a result, Einstein’s equations become too complicated for us to solve in the usual way and we have to come up with a different solution.

This different solution is to run these “N-body simulations”: big computer simulations that track the gravitational evolution of billions of particles, where each particle might have the mass of an entire galaxy! These simulations mostly rely on using Newtonian gravitational laws rather than Einstein’s gravity, but this is a reasonable approximation for what is being studied here.
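To give a feel for what is going on under the hood, here is a minimal toy version of a direct-summation N-body code in Python. It is only a sketch: real cosmological codes use clever tree and mesh algorithms, periodic boxes, an expanding background, and billions of particles, none of which appear here.

```python
import numpy as np

# Toy direct-summation N-body sketch (Newtonian gravity only).

G = 1.0           # gravitational constant in arbitrary "code units"
SOFTENING = 0.05  # softening length, avoids infinite forces at zero separation

def accelerations(positions, masses):
    """Newtonian acceleration on every particle due to every other particle."""
    n = len(positions)
    acc = np.zeros_like(positions)
    for i in range(n):
        dx = positions - positions[i]            # vectors from particle i to all others
        r2 = np.sum(dx ** 2, axis=1) + SOFTENING ** 2
        r2[i] = np.inf                           # no self-force
        acc[i] = G * np.sum((masses / r2 ** 1.5)[:, None] * dx, axis=0)
    return acc

def evolve(positions, velocities, masses, dt, n_steps):
    """Advance the system with a simple kick-drift-kick leapfrog integrator."""
    acc = accelerations(positions, masses)
    for _ in range(n_steps):
        velocities += 0.5 * dt * acc             # half kick
        positions += dt * velocities             # drift
        acc = accelerations(positions, masses)
        velocities += 0.5 * dt * acc             # half kick
    return positions, velocities

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    n = 200                                      # real simulations use billions of particles
    pos = rng.uniform(-1, 1, size=(n, 3))
    vel = np.zeros((n, 3))
    mass = np.full(n, 1.0 / n)
    pos, vel = evolve(pos, vel, mass, dt=0.01, n_steps=100)
    print("final spread of the particles:", pos.std())
```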

How do these simulations relate to dark matter and dark energy?

The review catalogues many of the ways that these simulations have helped us understand the effects of different dark matter, dark energy, and alternative gravity models, as well as showing us why some of them probably aren’t correct.

One of the things highlighted in the review is the so-called “small scale problems” of our current cosmological model. The standard Cold Dark Matter approach works well on the large and intermediate scales in the simulations, but on sub-galactic scales (i.e. phenomena on scales below the size of a typical galaxy), a bunch of problems seem to be present. These include too many mini “satellite” galaxies appearing in our simulations, and not having the right amount of dark matter near the centres of some types of galaxies.

These issues are one of the motivations for considering alternative explanations of the dark sector: we hope that at least one of these alternative theories will not have these problems.

However, it is worth noting that we aren’t yet sure to what extent these problems are caused by simplifications and approximations in our simulations. It may be that with better simulations all of these problems vanish, and with them this particular justification for looking at alternative models of dark matter and dark energy.

Some interesting alternatives for which cosmologists have run simulations of the Universe are Warm Dark Matter and dynamical dark energy.

Warm Dark Matter: In the usual paradigm, the “Cold” in Cold Dark Matter means that, early in the Universe’s history, the speeds of the dark matter particles were so small that we can basically ignore them. Warm Dark Matter allows these speeds to be a bit higher, such that they aren’t really fast, but neither can they be ignored. These speeds mean that below a certain length-scale, the particles are moving too fast to clump together. The hope is that this reduces the number of small galaxies, and also reduces the amount of dark matter at the centres of some galaxies, thus solving some of the problems seen in the simulations. However, in practice it doesn’t seem to be working out so well so far.

Dynamical dark energy: this means allowing the amount of dark energy, and maybe even some of its properties, to change over time. This is different to the standard picture containing the cosmological constant, which does not evolve with time. The physical idea here is that there is an extra field in the Universe, whose properties are affected by the evolution of the Universe, and the other things in the Universe. Despite being quite simple (because it only involves the “average Universe” discussed above), this model can have a surprisingly large impact on galaxy properties. This is because the structure in the Universe is the end product of things happening over a long time, so even small differences can accumulate.
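One common way to write down such a model (often called the CPL parametrization; the simulations discussed in the review may well use other choices) is to let w, the ratio of dark energy’s pressure to its energy density, change with the scale factor a of the Universe:

$$ w(a) \;=\; w_0 + w_a (1 - a), $$

where a = 1 today. The cosmological constant corresponds to w₀ = −1 and wₐ = 0, so measuring either parameter away from those values would point towards dynamical dark energy.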

Conclusions

The really important thing to take away from this is that cosmologists run sophisticated simulations of the Universe, which can be used to make cool pictures. A selection of these is included in the paper.

There are currently differences between our simulations and our observations, and these might be due to either imperfect simulations or because our model of dark matter and dark energy is wrong. We are working hard to find out which of these it is.

Apologies, this post is a day or so late because I was ill over the weekend. Hopefully things will run to time in future 🙂

Figure credits:
First: From the centre for cosmological simulations, http://cosmicweb.uchicago.edu/filaments.html, http://cosmicweb.uchicago.edu/images/bnr0647.gif
Second: Screenshot from the paper under discussion, available HERE.