PHY418 Particle Astrophysics · Susan Cartwright · 2015. 10. 29.

  • PHY418 Particle Astrophysics

    Susan Cartwright

  • Contents

    1 Introduction 7
        1.1 What is particle astrophysics? 7
        1.2 Early-universe cosmology 8
        1.2.1 Inflation 9
        1.2.2 Baryogenesis 11
        1.3 The physics of dark energy 17
        1.4 High-energy processes in astrophysics 20
        1.4.1 The non-thermal universe 21
        1.4.2 Detection techniques 22
        1.5 Neutrinos 25
        1.5.1 Neutrinos in cosmology 25
        1.5.2 Solar neutrinos 26
        1.5.3 Supernova neutrinos 26
        1.5.4 Atmospheric neutrinos 28
        1.5.5 High-energy neutrinos 28
        1.6 Dark matter 29
        1.6.1 Astrophysical and cosmological evidence for dark matter 29
        1.6.2 Dark matter candidates 30
        1.6.3 Detection of dark matter 34
        1.7 Summary 40
        1.8 Questions and Problems 40

    2 Astrophysical Accelerators: The Observational Evidence 43
        2.1 Introduction 43
        2.2 Cosmic rays 44
        2.2.1 A brief history 44
        2.2.2 Detection of cosmic rays 45
        2.2.3 Observed properties 56
        2.3 Radio emission 69
        2.3.1 The radio sky 69
        2.3.2 Radio emission mechanisms 71
        2.3.3 Electromagnetic radiation from an accelerated charge 73
        2.3.4 Bremsstrahlung 76
        2.3.5 Synchrotron radiation 80
        2.3.6 Self absorption 86
        2.3.7 Summary 88
        2.4 High energy photons 89
        2.4.1 High energy photons and particle astrophysics 89
        2.4.2 Mechanisms of high-energy photon emission 89
        2.4.3 X-rays 94
        2.4.4 Soft γ-rays 97
        2.4.5 Intermediate energy γ-rays 103
        2.4.6 High-energy γ-rays 106
        2.5 High-energy neutrinos 118
        2.5.1 Neutrino production and the Waxman–Bahcall bound 119
        2.5.2 Interaction of high-energy neutrinos with matter 121
        2.5.3 Detection of high-energy neutrinos 123
        2.5.4 Future prospects 126
        2.5.5 Summary 127
        2.6 An overview of sources 127
        2.6.1 Galactic sources 128
        2.6.2 Extragalactic sources 131
        2.6.3 Transient sources 133
        2.7 Summary 137
        2.8 Questions and Problems 138

    3 Astrophysical Accelerators: Acceleration Mechanisms 141
        3.1 Introduction 141
        3.2 The diffusion-loss equation 142
        3.3 Fermi second-order acceleration 143
        3.4 Astrophysical shocks 146
        3.4.1 Shock jump conditions 147
        3.4.2 The role of collisionless shocks 149
        3.4.3 Effect of magnetic fields 150
        3.4.4 Observations of astrophysical shocks 152
        3.5 Diffusive shock acceleration 154
        3.5.1 Test particle approach 154
        3.5.2 Beyond the test-particle approach 158
        3.5.3 Shock drift acceleration 161
        3.6 Relativistic shocks 162
        3.7 Particle acceleration by magnetic reconnection 165
        3.8 Propagation of cosmic rays through the Galaxy 167
        3.9 Summary 170
        3.10 Questions and Problems 172

    4 Astrophysical Accelerators: Sources 175
        4.1 Introduction 175
        4.2 Particle acceleration in the solar system 177
        4.2.1 Solar flares and coronal mass ejections 178
        4.2.2 Planetary bow shocks 179
        4.2.3 The termination shock 181
        4.3 Galactic sources 182
        4.3.1 Supernovae and supernova remnants 182
        4.3.2 Evolution of a supernova remnant 184
        4.3.3 Observational evidence of particle acceleration 186
        4.3.4 Pulsar wind nebulae 191
        4.3.5 Evolution of pulsar wind nebulae 193
        4.3.6 Particle acceleration in PWNe 194
        4.4 Extragalactic sources 198
        4.4.1 Gamma-ray bursts 199
        4.4.2 Gamma-ray bursts as sources of UHE cosmic rays 225
        4.4.3 Radio-loud active galactic nuclei 230
        4.4.4 Particle acceleration in AGN 243
        4.5 Summary 253
        4.6 Questions and Problems 258


  • Chapter 1

    Introduction

    1.1 What is particle astrophysics?

    Particle astrophysics, also known as astroparticle physics, is essentially the use of particle physics techniques, either experimental or theoretical, to address astrophysical questions, or conversely the use of astrophysical data to constrain theories of particle physics. Examples of the former include gamma-ray astronomy and the development of the theory of inflation as an outgrowth from Grand Unified Theories; an example of the latter is the use of solar neutrinos to measure neutrino oscillation parameters.

    Particle astrophysics as a discipline in its own right is a relatively recent development, and the topics included under its umbrella vary from place to place. The journal Astroparticle Physics defines its subject matter as[1]

    • High-energy cosmic-ray physics and astrophysics;

    • Particle cosmology;

    • Particle astrophysics;

    • Related astrophysics: supernova, AGN, cosmic abundances, dark matter etc.;

    • High-energy, VHE and UHE gamma-ray astronomy;

    • High- and low-energy neutrino astronomy;

    • Instrumentation and detector developments related to the above-mentioned fields

    (a somewhat unsatisfactory definition, since it includes “particle astrophysics” as a topic in its own right!). The Science and Technology Facilities Council (STFC) defines particle astrophysics as “that branch of particle physics that studies elementary particles of astronomical origin, and their relation to astrophysics and cosmology” [2], but its description of the activities funded under this heading[3] includes gravitational waves, which do not seem to fit this definition. The 2008 and 2011 Roadmap documents of the Astroparticle Physics European Consortium (ApPEC) [4] define their subject as “the intersection of astrophysics, particle and nuclear physics and cosmology. It addresses questions like the nature of dark matter and dark energy, the physics of the Big Bang, the stability of protons, the properties of neutrinos and their role in cosmic evolution, the interior of the Sun or supernovae as seen with neutrinos, the origin of cosmic rays, the nature of the Universe at extreme energies and violent cosmic



    processes as seen with gravitational waves.” The table of contents of the 2011 roadmap includes, as chapter or section headings,

    • charged cosmic rays;

    • gamma-ray astrophysics;

    • high-energy neutrinos;

    • dark matter;

    • neutrino mass measurements (direct and via double beta decay);

    • low-energy neutrino astronomy;

    • proton decay;

    • dark energy;

    • gravitational waves.

    Despite the minor variations, a fairly coherent picture of particle astrophysics emerges from these definitions. Essentially, the core disciplines of particle astrophysics are

    1. early-universe cosmology;

    2. the physics of dark energy;

    3. high-energy processes in astrophysics;

    4. neutrinos;

    5. dark matter.

    I have omitted gravitational waves from this list, despite their inclusion by both the STFC and ApPEC, because neither their production nor their detection involves particle physics (essentially, this is a classical phenomenon described by general relativity). However, the special case of primordial gravitational waves, detected via the imprint they leave on the polarisation of the cosmic microwave background, does belong in particle astrophysics because of its relevance to early-universe cosmology.

    In the rest of this chapter, we will briefly introduce each of the topics listed above. The rest of the course, however, will focus almost exclusively on the third and fourth items. Particle cosmology and the physics of dark energy will not be discussed in detail because they are very technical theoretical topics which would require a whole module to cover in adequate depth, while dark matter is covered in PHY326/426 Dark Matter and the Universe [5], and therefore will only be summarised here.

    1.2 Early-universe cosmology

    The temperature of the universe now, as measured by the cosmic microwave background, is 2.72548 ± 0.00057 K [6]. Temperature scales as (1 + z), where z is the redshift (see PHY306/406 Introduction to Cosmology [7]), so the early universe was much hotter than this, and hence had higher characteristic energies (E ≃ kBT, where Boltzmann’s constant kB = 8.617 × 10⁻⁵ eV K⁻¹). Big Bang nucleosynthesis (see PHY306/406 and PHY320 Nuclear Astrophysics [8]) takes


    place a few minutes after the Big Bang, at temperatures of order 10⁹ K and energies of order 0.1 MeV. Energies above a few MeV, corresponding to times before 1 s or so after the Big Bang, are too high for nuclear physics and fall into the domain of particle physics, so early-universe cosmology is in many ways an application of theoretical particle physics, and is often referred to as particle cosmology.
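
    These scales are easy to check numerically. The sketch below is mine, not from the notes; the function name and the rounded value of kB are my own choices:

```python
# Characteristic thermal energy E ~ kB*T in the early universe (illustrative sketch).
KB_EV_PER_K = 8.617e-5  # Boltzmann's constant in eV/K

def thermal_energy_ev(temperature_k):
    """Characteristic particle energy E ~ kB*T, in eV."""
    return KB_EV_PER_K * temperature_k

e_today = thermal_energy_ev(2.72548)  # CMB today: ~2.3e-4 eV
e_bbn = thermal_energy_ev(1e9)        # BBN epoch: ~8.6e4 eV, i.e. ~0.1 MeV

# Temperature scales as (1+z), so BBN temperatures correspond to z ~ 4e8:
z_bbn = 1e9 / 2.72548 - 1
print(e_today, e_bbn, z_bbn)
```

    The same one-liner recovers the ~MeV boundary between nuclear and particle physics quoted above: a few times 10¹⁰ K corresponds to a few MeV.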

    1.2.1 Inflation

    One of the first applications of theoretical particle physics to early-universe cosmology was the development of the idea of inflation [9, 10], which used the physics of Grand Unified Theories at very high energies (E ∼ 10¹⁶ GeV, t ∼ 10⁻³⁵ s after the Big Bang) to drive a brief period of extremely rapid expansion (usually assumed to be exponential, a ∝ exp(Ht), although a steep power law, a ∝ tⁿ where n > 1, will also work). Inflation was originally postulated to account for two observations which are otherwise difficult to reconcile with the classical Big Bang model: the fact that the universe is observed to have a flat (Euclidean) geometry, and the extremely high level of isotropy displayed by the cosmic microwave background (see PHY306/406 for further details).

    Inflation also accounts for the small (∼10⁻⁵) anisotropies of the microwave background, which arise from quantum fluctuations of the vacuum “frozen in” and expanded to macroscopic size by the rapid expansion. Inflation models predict that the spectral index of the fluctuations, n, should be about 0.95, in good agreement with the fitted value of 0.9603 ± 0.0073 from Planck [11] (and 0.968 ± 0.012 from the 9-year WMAP results[12]; this is not one of the parameters on which Planck and WMAP disagree).
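
    The level of agreement with Planck can be quantified in one line (a sketch of my own, treating 0.95 as an exact prediction):

```python
# How far is the Planck fit from the generic inflationary prediction n ~ 0.95?
n_pred = 0.95
n_planck, err_planck = 0.9603, 0.0073

tension_sigma = abs(n_planck - n_pred) / err_planck
print(tension_sigma)  # about 1.4 standard deviations: comfortable agreement
```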

    Further support for inflation has recently been provided by the BICEP2 experiment[13], which reported the detection of B-mode polarisation in the cosmic microwave background. In the early universe, B-mode polarisation must be generated by gravitational waves: density fluctuations can only produce so-called E-mode polarisation (patterns with even parity, i.e. symmetric under reflection, unlike the odd-parity patterns of B-mode). E-mode polarisation can be converted to B-mode later in the history of the universe, by the distortions introduced by gravitational lensing, but this occurs at much smaller angular scales than the primordial B modes. The existence of such primordial gravitational waves is a solid prediction of inflation—they arise because of fluctuations in the gravitational field being “blown up” to macroscopic scale, in much the same way as the density fluctuations—and is not expected in some competing models such as those based on extra dimensions, so the BICEP2 results, if confirmed, will provide strong evidence for the reality of inflation. However, the level of polarisation observed by BICEP2 is surprisingly high: if this is not an accident of statistics (the statistical error of the result is large), it will put severe constraints on many theoretical models of inflation.

    The key theoretical ingredient of inflation is the existence of a scalar field, the inflaton φ, which has a non-zero potential energy V(φ) when φ = 0, and a minimum (zero?) value at some non-zero value of φ. At high energies, the energy density of the universe is dominated by V(φ), but as the universe cools φ must eventually settle down to its minimum. For inflation to work, the high-V region near φ = 0 must take the form of a nearly flat plateau, terminated by a sharp drop-off to the minimum: the inflationary period occurs while φ slowly rolls off its plateau, and ends at the sharp drop. The energy released at the drop reheats the universe, producing large numbers of particle-antiparticle


    pairs: this is essential, because the pre-inflation number density of particles has been diluted to essentially zero by the inflation (a visible universe containing only one particle is not consistent with observations!).

    The equation of state of a scalar field is given by

    Eφ =1

    2~c3φ̇2 + V (φ),

    Pφ =1

    2~c3φ̇2 − V (φ),

    (1.1)

    where Eφ is the energy density, Pφ is the pressure, and φ̇ = dφ/dt. Exponentialinflation corresponds to the case where V φ ≫ φ̇2/(2~c3), in which case theequation of state is approximately that of a cosmological constant, P = −E .As shown in PHY306/406[7], a universe dominated by a cosmological constantexpands exponentially, a(t) ∝ exp(Ht) where a(t) is the scale factor and H =ȧ/a is the rate of expansion.
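
    Written out explicitly, the slow-roll limit follows in one line from (1.1) (using w for the equation-of-state parameter, my notation):

```latex
w \equiv \frac{P_\phi}{\mathcal{E}_\phi}
  = \frac{\dot\phi^2/(2\hbar c^3) - V(\phi)}{\dot\phi^2/(2\hbar c^3) + V(\phi)}
  \;\longrightarrow\; -1
  \quad\text{as}\quad \dot\phi^2/(2\hbar c^3) \ll V(\phi),
```

    which is exactly the cosmological-constant equation of state P = −E.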

    The properties of the inflaton field are reminiscent of those of the Higgs field [14], which is also a scalar field permeating all of space, and also has its minimum value at a non-zero value of the field. It is tempting to suggest that the inflaton field might actually be the Higgs field, which would be an elegant solution to the problem. Unfortunately, the constraints on the inflaton potential required for inflation to work lead to a naïve prediction that the mass of the inflaton should be around 10¹³ GeV, which is certainly not consistent with the Higgs. It is possible to persuade the Higgs field to drive inflation (see, for example, [15]) by giving it non-standard couplings, but the resulting model predicts a very low level of primordial gravitational waves, in contrast to the rather high level observed by BICEP2. However, extensions to the Standard Model generally require extensions to the Higgs sector—for example, supersymmetry has two Higgs doublets and five physical Higgs bosons, as opposed to one doublet and one boson in the Standard Model—so the lack of a fit with our one known Higgs boson is not a disaster.

    Although inflation provides a conceptually elegant solution to the horizon and flatness problems of the classical Big Bang, and makes predictions (the geometry of the universe should be extremely close to flat; the spectral index of the anisotropies should be ∼0.95; there should be primordial gravitational waves) that are borne out by observation, the detailed particle physics underlying the idea appears problematic. The particular form of the inflaton potential necessary to make inflation work does not emerge naturally from the theory, but is put in “by hand,” and the small coupling of the inflaton field makes it difficult to achieve thermal equilibrium. As the original motivation for introducing inflation was to avoid the need to fine-tune initial conditions, it is not satisfactory to find that one then has to fine-tune the properties of the inflaton field!

    These problems are addressed by the chaotic inflation model (see, e.g., [16]), which works for a much wider range of potentials—the potential just has to be sufficiently flat—and initial conditions. The basic idea of chaotic inflation is that if the initial value of the scalar field φ is large, so that it dominates the energy density of the universe, the natural evolution of the Friedmann equation will automatically lead to quasi-exponential inflation (see [16], pp 6–7).

    Unlike the original inflation models, chaotic inflation is not intimately linked to GUT phase transitions and does not require fine-tuning of the properties of the inflaton field; from the argument in the previous paragraph, nor should it require fine-tuning of the initial conditions (this point is highly debated, but Linde[16] claims that the criticisms are based on invalid assumptions).


    As discussed by Guth in his original paper[9], the minimum amount of inflation needed to solve the horizon and flatness problems is about 60 e-foldings (i.e. an expansion factor of e⁶⁰, or ∼10²⁶). Chaotic inflation typically leads to much larger factors—Linde[16] quotes factors of order 10^(10¹⁰)! This implies that our visible universe is a very tiny part of a much larger cosmos. In addition, many inflation models lead to the scenario of eternal inflation, in which large quantum fluctuations during the inflation phase spawn separate “mini-universes,” possibly with different low-energy physics, e.g. as a result of different compactification of the extra dimensions in string theories. This aspect of inflation provides an “escape” from fine-tuning problems such as the size of the cosmological constant: as a cosmological constant of “natural” size (∼10¹²⁰ times larger than what we observe) would make life impossible, we must necessarily live in a mini-universe with an unusually small value. (This is an example of the Weak Anthropic Principle [17]; such arguments are generally disliked by scientists because they are not very fruitful from a scientific perspective, but the basic logic of the argument—“we exist, therefore the laws of physics must be such as to permit us to exist”—is hard to fault.)
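
    The quoted expansion factors are easy to verify (a minimal sketch; the helper name is mine). Note that a chaotic-inflation factor like 10^(10¹⁰) overflows ordinary floating point, which is why only the 60 e-folding case is evaluated:

```python
import math

def expansion_factor(n_efolds):
    """Scale-factor growth a_end/a_start = e^N for N e-foldings of inflation."""
    return math.exp(n_efolds)

print(expansion_factor(60))   # ~1.1e26, Guth's minimum for the horizon/flatness problems
print(60 / math.log(10))      # ~26: 60 e-foldings is about 26 decades of expansion
```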

    Inflation is certainly particle cosmology: scalar fields and quantum fluctuations belong to theoretical particle physics rather than classical cosmology. However, it is somewhat detached from the rest of theoretical particle physics: the inflaton field is introduced ad hoc rather than being deduced from the wider context of particle physics (except in so far as extensions to the Standard Model of particle physics do tend to predict additional scalar fields). This contrasts with other applications of theoretical particle physics to astrophysics and cosmology, such as baryogenesis (see next section) and non-baryonic dark matter (see below and PHY326/426), where the relevant theories also have implications for “traditional” experimental particle physics. Partly for this reason, and partly because the theory of inflation rapidly becomes very technical, we shall not cover inflation any further in this course. Interested students should refer to Linde’s lecture notes[16], bearing in mind that Linde, as a co-inventor and vocal proponent of the theory, may not be exactly unbiased in his assessment of the arguments!

    1.2.2 Baryogenesis

    The other aspect of early-universe cosmology with clear links to theoretical particle physics is the problem of baryogenesis—why does the universe contain matter, but not antimatter?

    When we create particles in terrestrial accelerators, we always create particle-antiparticle pairs (e.g. e⁺e⁻ → qq̄), in accordance with the empirical conservation laws for baryon number B and lepton number L. However, the universe appears to contain baryons but no antibaryons¹, since (1) we do not observe any significant amount of antimatter locally—only a very small proportion of the cosmic-ray flux is antiparticles, consistent with recent production by high-energy particle collisions—and (2) nor do we observe the γ-ray flux from intergalactic space that would be expected if some galaxies were entirely matter while others were entirely antimatter.

    In terms of number densities, though not of energy densities, the universe today is entirely dominated by the photons of the cosmic microwave background:

    ¹Note that cosmologists tend to regard all Standard Model particles, with the possible exception of neutrinos, as “baryons”. The reason for this is that the baryons completely dominate the mass: as the universe is electrically neutral on large scales, there’s an electron for every proton, but the electron mass is only about 1/1800 of the proton mass.


    there are about 1.6 billion photons for every proton. At temperatures where pair production (γγ → ff̄) and annihilation (ff̄ → γγ) are in equilibrium, we would expect the numbers of photons and fermions to be approximately equal, so this huge disparity strongly suggests that most of the particles and antiparticles did indeed annihilate in the early universe, but some asymmetry in this process led to a remnant population of baryons and leptons which we now see (and of which we are made, so this one-in-a-billion imbalance is rather important to us!). The production of this remnant population is known as baryogenesis, and is one of the great unsolved problems of early-universe cosmology.
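
    The quoted ratio follows directly from the measured baryon-to-photon ratio η; the numerical value used below is my assumption (roughly the CMB-derived value), not stated in the notes:

```python
# Photons per baryon from the baryon-to-photon number ratio eta.
ETA = 6.1e-10  # assumed value, approximately the CMB-derived measurement

photons_per_baryon = 1 / ETA
print(photons_per_baryon)  # ~1.6e9, i.e. about 1.6 billion photons per proton
```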

    The conditions necessary for baryogenesis were laid out by Andrei Sakharov[18] in 1967: they are

    1. interactions that violate baryon number conservation must exist;

    2. C (charge conjugation) and CP (charge conjugation and parity) symmetries must both be violated;

    3. the reactions must take place out of thermal equilibrium.

    The first condition is obvious: if the universe starts from a matter-antimatter symmetric state in which B = 0, it cannot reach a state in which B > 0 without violating baryon-number conservation! The third is also obvious: in thermal equilibrium, forward and reverse reactions proceed at equal rates, so our hypothetical B-violating reaction would go equally in both directions, with no net gain. The argument for the second is similar to this: if C is conserved, reactions which increase B will be balanced by antireactions that decrease B, and if CP is conserved (even if C is violated), B-increasing reactions will be balanced by mirror-image B-decreasing antireactions.

    As the early universe is expanding and cooling at a very rapid rate, the third condition is easily satisfied, as was first pointed out explicitly by Gamow [19] in 1946 (in the context of nucleosynthesis). Surprisingly, the first condition is also satisfied in the Standard Model: conservation of B and L is an “accidental” property of SM interactions, not a consequence of a fundamental symmetry of the Lagrangian. In 1976, Gerard ’t Hooft [20] pointed out that a certain class of non-perturbative transitions violate B (though they conserve B − L). These non-perturbative processes, known as sphalerons, can convert three baryons into three antileptons or vice versa (the number has to be an integral multiple of the number of families, so the smallest possibility is 3). Sphalerons are a quantum tunnelling phenomenon: at today’s low energies, they are suppressed to unobservably tiny levels, but they would have occurred readily at the very high energies of the early universe. This means that lepton-number-violating interactions can be bootstrapped into baryon production through such processes, a concept known as leptogenesis.

    It is very possible that lepton number violation occurs and can be observed today, albeit at low levels. The key to this possibility is the existence of neutrinos—electrically neutral fundamental fermions. For charged particles, the particle and the antiparticle are observationally distinguished by their opposite charges, e.g. the electron and the positron. Even neutral baryons like the neutron are composed of charged constituents and are therefore distinguishable: electron scattering experiments would see a difference between the neutron (with one charge +2/3 up quark and two charge −1/3 down quarks) and the antineutron (one −2/3 anti-up and two +1/3 anti-downs). For the neutrino, on the other hand, there is no obvious difference between the particle and the


    antiparticle, except that one produces the charged lepton when it interacts, e.g. νµ + n → µ⁻ + p, and the other the charged antilepton, ν̄µ + p → µ⁺ + n. This might seem like a perfectly adequate distinction, but the weak interaction has the interesting property of being left-handed: only particles with left-handed chirality, and antiparticles with right-handed chirality, can interact weakly. Therefore the apparent distinction between neutrino and antineutrino might really be a distinction between the two chiral states of the same particle, and thus the neutrino and the antineutrino would be different states of the same particle. Fermions with this property are called Majorana particles, after the Italian theoretical physicist Ettore Majorana².

    This would be a purely academic distinction if the neutrino were massless, because a massless neutrino has a well-defined handedness. However, the neutrino is not massless, and therefore a neutrino which is produced as left-handed may have a very small probability of subsequently interacting as a right-handed object, i.e. an antineutrino. This could be observed through the rare process of double beta decay.

    In nuclear physics, we find that even-A nuclei are more tightly bound, i.e. have lower masses, if they have even Z than they are if they have odd Z. This is a result of the pairing up of nucleons: odd-odd nuclei have two unpaired nucleons (one proton and one neutron), and thus a lower binding energy than even-even nuclei. As a consequence, it is possible for an even-even nucleus (A, Z) to have a lower mass than either of its immediate neighbours (A, Z ± 1), but a higher mass than a next-to-nearest neighbour (A, Z ± 2). An example of this is ⁷⁶₃₂Ge (atomic mass 75.921402 u), which is lighter than ⁷⁶₃₃As (75.922393 u) but heavier than ⁷⁶₃₄Se (75.919212 u).

    Isotopes like ⁷⁶₃₂Ge are stable to single beta decay, but in principle unstable to double beta decay,

        ⁷⁶₃₂Ge → ⁷⁶₃₄Se + 2e⁻ + 2ν̄ₑ.    (1.2)

    This is a perfectly legitimate decay mode, obeying all the rules of nuclear and particle physics, but the probability of two simultaneous weak decays is so small that ⁷⁶₃₂Ge is to all intents and purposes an entirely stable isotope (it makes up 7.8% of natural germanium). The two-neutrino double beta decay (2νββ) described by equation 1.2 has in fact been observed for this isotope: the measured half-life is (1.74 ± 0.01 +0.18/−0.16) × 10²¹ years[21].
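
    The energetics can be checked directly from the atomic masses quoted above (a sketch; the unit conversion is the standard value of 1 u in MeV/c²). Atomic masses can be compared directly here because the electron masses cancel for β⁻ and double-β⁻ transitions:

```python
U_TO_MEV = 931.494  # 1 atomic mass unit in MeV/c^2

m_ge76 = 75.921402  # atomic masses in u, as quoted in the text
m_as76 = 75.922393
m_se76 = 75.919212

# Single beta decay 76Ge -> 76As: negative Q-value, energetically forbidden
q_single = (m_ge76 - m_as76) * U_TO_MEV   # ~ -0.92 MeV
# Double beta decay 76Ge -> 76Se: positive Q-value, allowed but extremely slow
q_double = (m_ge76 - m_se76) * U_TO_MEV   # ~ +2.04 MeV

print(q_single, q_double)
```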

    From the point of view of baryogenesis, the interesting process is not 2νββ but neutrinoless double beta decay (0νββ), a variant which can occur only if neutrinos are Majorana particles. In this case, the neutrino is an internal line in the Feynman diagram, being produced at one vertex as a neutrino and absorbed at the other as an antineutrino. The result is

        ⁷⁶₃₂Ge → ⁷⁶₃₄Se + 2e⁻,    (1.3)

    which violates lepton number by 2. The signature of 0νββ is that the two electrons come out back to back, each with energy equal to half the Q-value of the decay, since there are no neutrinos to carry off energy and momentum. Unfortunately, since neutrinos are very nearly purely left-handed particles, the probability of the right-handed (antineutrino) interaction is extremely small, so the half-life of the 0νββ decay mode is expected to be long even compared to the 2νββ mode. No confirmed positive results have yet been reported: the most recent limit for ⁷⁶₃₂Ge, from the GERDA experiment[22], is t₁/₂ > 2.1 × 10²⁵

    ²Another example of a—hypothetical—Majorana particle is the neutralino, a leading dark matter candidate.


    years (at 90% confidence level). Calculated rates depend on the nuclear matrix element for the decay (which is a challenging theoretical calculation and subject to large errors) and the effective neutrino mass; if neutrino masses follow the inverted hierarchy, where at least two neutrino mass eigenstates must have masses of order 0.05 eV/c², the expected rates should be observable with the next generation of detectors. Such an observation would both disprove lepton number conservation and establish that the neutrino is indeed a Majorana particle, as well as providing the first ever measurement of an absolute neutrino mass (as opposed to the squared mass differences measured in neutrino oscillation experiments).

    Although 0νββ violates lepton number, this process itself cannot be responsible for baryogenesis: it is much too slow. We want a process that will generate baryon number efficiently in the early universe, and then shut down (since we do not currently observe large-scale violation of B or L). It turns out that the concept of neutrinos as Majorana particles not only predicts 0νββ, but also leads to such a mechanism.

    One of the most appealing aspects of the Majorana picture of neutrinos is that it provides a natural explanation for the fact that their masses, while non-zero, are many orders of magnitude less than the masses of the other fundamental fermions (tritium beta-decay currently limits the effective mass of the electron neutrino to < 2.2 eV/c², and Planck finds that the sum of all 3 neutrino masses must be < 0.23 eV/c², though the latter limit has some model dependence). The trick relies on the existence of right-handed (and therefore non-interacting) neutrinos, which decouple from the rest of the fundamental particles when the grand unified theory breaks down to the Standard Model. They would therefore naturally have a mass M corresponding to the GUT scale of 10¹⁵ GeV or so. If we assume that the mass term for neutrinos in the Standard Model Lagrangian contains both a Dirac term like those for the charged fermions and a Majorana term, we wind up with a combined mass term[23]

    \[
      \begin{pmatrix} \bar{\nu}_L & \bar{\nu}^C_R \end{pmatrix}
      \begin{pmatrix} 0 & m \\ m & M \end{pmatrix}
      \begin{pmatrix} \nu^C_L \\ \nu_R \end{pmatrix} , \qquad (1.4)
    \]

    where m is the Dirac mass, which we assume is similar to those of the other fermions, say 1–100 GeV/c², M is the Majorana mass and ν and νᶜ are the neutrino and antineutrino wavefunctions respectively (the C superscript stands for charge conjugation). The off-diagonal Dirac terms couple left- and right-handed states, while the on-diagonal Majorana terms couple particle and antiparticle. Therefore, for a purely Dirac particle like the electron, the left- and right-handed states must have equal mass, whereas for a Majorana particle their masses can be quite different: in this case, 0 and M.

    To get from (1.4) to the physical neutrino mass eigenstates, we need to diagonalise the matrix, which gives us one predominantly right-handed state with mass M and one predominantly left-handed state with mass m²/M. If M is large, the mass of this second state (which is the one that couples to the weak interaction) is therefore automatically very small. This is the seesaw mechanism (so called because the higher the right-handed mass M goes, the lower the left-handed mass m²/M becomes).
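    The seesaw arithmetic is easy to check numerically. The sketch below (Python; the mass values are illustrative round numbers, not measurements) diagonalises the matrix of (1.4) for a modest hierarchy, where floating point can resolve both eigenvalues, and then applies the analytic result m²/M at GUT-scale values:

```python
import numpy as np

# Seesaw mass matrix from equation (1.4): [[0, m], [m, M]].  For M >> m its
# eigenvalue magnitudes are approximately M and m^2/M.
def seesaw_masses(m, M):
    """Return (light, heavy) eigenvalue magnitudes of the seesaw matrix."""
    vals = np.linalg.eigvalsh(np.array([[0.0, m], [m, M]]))
    light, heavy = sorted(abs(v) for v in vals)
    return light, heavy

# A modest hierarchy keeps the small eigenvalue within float64 precision,
# and it agrees with the analytic approximation m^2/M:
light, heavy = seesaw_masses(1.0, 1.0e6)
print(light)                  # ~1e-6, i.e. m^2/M

# The GUT-scale hierarchy is too large for floating point, so apply the
# analytic formula directly (illustrative values: m near the electroweak
# scale, M near the GUT scale):
m, M = 100.0, 1.0e15          # GeV
print(m**2 / M * 1e9)         # light neutrino mass in eV: ~0.01 eV
```

    Note that the GUT-scale case is done analytically on purpose: with M/m ∼ 10¹³ the small eigenvalue lies below double-precision resolution of the matrix norm, so a direct numerical diagonalisation would return zero.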

    If the seesaw mechanism is the correct explanation of the light neutrino masses, it necessarily implies the existence of at least two massive, predominantly right-handed “neutrino” states N (because neutrino oscillation experiments guarantee that at least two of the three light, predominantly left-handed neutrinos have non-zero masses). Being very massive, these states couple strongly to the Higgs field and will decay by N → ℓ(ℓ̄) + h, where ℓ is a lepton and h is a Higgs boson. These are lepton-number-violating decays (the N, being a Majorana particle, does not have a well-defined lepton number), and leptogenesis occurs if the decay rates to ℓ and ℓ̄ are different. (In general, these decays also violate CP symmetry, so all three Sakharov conditions are satisfied.) Because the masses of the Ns are large, these decays occur in the very early universe when sphaleron transitions are common, so the lepton number asymmetry is partially transformed into a baryon number asymmetry.

    Leptogenesis is an attractive way to generate the baryon asymmetry because of its close link to the seesaw mechanism, and because the proposition that lepton number conservation can be violated in the neutrino sector is experimentally testable, as is the existence of CP violation in the neutrino sector (although, in general, the CP-violating phase in neutrino oscillations, which is measurable, does not provide any useful constraints on the CP-violating phases in the heavy sector that are relevant to leptogenesis). There is also a link between leptogenesis and axions[24], relating the mass of the lightest right-handed neutrino to the axion symmetry-breaking scale. Axions are a possible candidate for cold dark matter (see below), so this might provide a link between two of the major unsolved problems of cosmology: the origin of the baryon asymmetry and the nature of dark matter.

    Despite these advantages, leptogenesis is not without problems (for example, if the dark matter is supersymmetric particles rather than axions, leptogenesis tends to be associated with overproduction of gravitinos) and is far from the only available model of baryogenesis. CP-violating processes are known to occur in the quark sector, and sphaleron transitions in the early universe can generate non-zero B and L (while conserving B − L) as discussed above. Therefore it is possible to envision processes which generate non-zero B directly, in hadronic interactions, rather than indirectly through the neutrino sector, and indeed all the earliest models of baryogenesis were of this type. (Leptogenesis is only viable if neutrinos have mass, and therefore was not seriously considered until neutrino oscillations were established in the late 1990s.) There are three main mechanisms for hadronic baryogenesis: GUT baryogenesis, Standard Model baryogenesis and electroweak baryogenesis.

    In GUT baryogenesis, the baryon number violation occurs through the GUT interactions rather than via sphalerons. Because grand unified theories explicitly unite the quark and lepton sectors of the Standard Model, there are heavy gauge bosons X (with spin 1) and Higgs bosons Y (with spin 0) which directly couple quarks to leptons, producing B and L violating interactions such as X → q_L e_R. The Y decays in particular can occur at temperatures low compared to the mass of the Y, and therefore out of thermal equilibrium as required by the Sakharov conditions.

    The trouble with GUT baryogenesis is that it obviously takes place at GUT-scale energies. This requires that the reheating phase after inflation reach high enough energies to produce X and Y bosons (since any such production before inflation gets diluted to nothing by the inflationary expansion). But one of the original motivations for inflation was to dilute away undesirable relics of the GUT scale such as magnetic monopoles, which are massive enough to overclose the universe and result in a rapid Big Crunch (clearly contrary to observation). Therefore, we would rather have baryogenesis taking place at a lower energy scale, too low to risk producing large numbers of such GUT relics.

    At the other extreme, the observed hadron-sector CP violation in the Standard Model, together with B violation via sphaleron processes, should produce some level of baryon asymmetry in the early universe. This obviously has the advantage of requiring no new physics whatsoever, and therefore being in principle testable at existing energies. Unfortunately, the level of CP violation in the Standard Model appears to be too low to produce the observed baryon asymmetry[25], so this minimalist approach does not work: new physics is required to introduce new sources of CP violation. At least there is no requirement that the new physics must live at GUT energy scales, so this could still be subject to experimental verification.

    The other problem with Standard Model baryogenesis is the out-of-equilibrium requirement imposed by the Sakharov conditions. This usually means that the strength of the interactions has to be ≪ m/M_Pl, where M_Pl is the Planck mass, ∼ 10¹⁹ GeV/c²[25] (this arises because the Hubble parameter H ∝ 1/M_Pl). Given that the Standard Model mass scale is of order 100 GeV/c², corresponding to the masses of the W, Z and Higgs, this implies an unreasonably weak interaction (recall that the electromagnetic coupling constant α = 1/137). A possible way round this is provided by the electroweak phase transition, i.e. the point at which, as the universe cools, the combined electroweak interaction breaks down into separate weak and electromagnetic components. If this occurs sufficiently abruptly, that is, if it is a first order phase transition, it can provide the necessary departure from equilibrium; this is electroweak baryogenesis.

    A first order phase transition tends to proceed by forming bubbles of the new phase, as in boiling water. The bubble walls can provide sites for out-of-equilibrium reactions. In contrast, second-order phase transitions are typically smooth and continuous, and are much less likely to induce out-of-equilibrium conditions.

    As the electroweak phase transition is closely related to the behaviour of the Higgs field[25] (it is, after all, the Higgs field that generates the masses of the W and Z), the critical issue in determining the order of the transition is the shape of the Higgs potential. Unfortunately it appears that a first-order transition requires a Higgs mass of < 75 GeV/c² or so[25], so a Higgs mass of 125 GeV/c² suggests a smooth transition and no baryogenesis. Rescuing the situation requires new physics, such as an additional Higgs doublet.

    Adding supersymmetry to the Standard Model changes the picture, because we now have to consider not only an additional Higgs doublet, but also the effects of sparticle interactions. There are also many different versions of supersymmetry, some much more strongly constrained than others. The most studied version is the Minimal Supersymmetric Standard Model (MSSM). As its name suggests, this model contains only the minimum number of additional particles needed to provide supersymmetric partners (one per Standard Model particle, and one additional Higgs doublet, which produces four additional Higgs bosons and their SUSY partners).

    Electroweak baryogenesis in the MSSM turns out to be difficult to realise[25]: generally the SUSY particle masses need to be rather high (several TeV/c²), which is not “natural”, since the motivation for SUSY (keeping the Higgs mass light by cancellation of correction terms) suggests that SUSY masses should be closer to the electroweak scale. On the other hand, the failure to find SUSY at the LHC argues for higher masses, so perhaps this is less of a problem than it was perceived to be in 2006.

    “Next-to-minimal supersymmetry” (NMSSM), which adds an extra scalar field to the MSSM particle content, has considerably more freedom of manoeuvre than minimal SUSY and can surely accommodate baryogenesis, but it is not obvious that this theory is well motivated. All of these arguments would become much more concrete if SUSY particles were actually discovered, either at the LHC or by direct dark matter searches.

    Considered as particle astrophysics, baryogenesis has much clearer links to the rest of theoretical particle physics than inflation, and tests of baryogenesis models often involve “conventional” particle physics such as LHC experiments. Here we have focused on leptogenesis, because the connection with the neutrino sector links it more closely to the rest of particle astrophysics, but the various models of hadronic baryogenesis sketched above are by no means ruled out (see [25] for more information), and in many cases offer testable predictions for LHC physics or dark matter searches. A title search for “baryogenesis” in the arXiv preprint server[26] will demonstrate the wide range of baryogenesis models still under active consideration. Unfortunately, doing justice to baryogenesis requires at least graduate-level understanding of theoretical particle physics, and for this reason we will not be covering it in more depth in this course.

    1.3 The physics of dark energy

    In general relativity, the expansion of the universe is described by the Friedmann equation,

    \[
      H^2 = \frac{8\pi G}{3c^2}E - \frac{kc^2}{R_0^2 a^2} , \qquad (1.5)
    \]

    where H is the Hubble parameter, E is the energy density in matter and radiation (the latter is negligible at the present time, but dominates in the early universe), k is the curvature (+1, 0 or −1), R₀ is the radius of curvature, and a is the scale factor, defined to be equal to 1 at the present time. The energy density is often expressed in terms of the density parameter Ω = E/E_crit, where the critical density E_crit is given by

    \[
      E_{\rm crit} = \frac{3c^2 H^2}{8\pi G} . \qquad (1.6)
    \]

    The subscripts r and m distinguish the contributions to the density of radiation and (non-relativistic) matter; the subscript 0 indicates the value of the quantity at the present time.
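    Equation (1.6) is straightforward to evaluate. A minimal sketch in Python (SI units; the value H₀ = 70 km s⁻¹ Mpc⁻¹ is an illustrative round number, not a value quoted in these notes):

```python
import math

# Critical density from equation (1.6), evaluated in SI units.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8              # speed of light, m/s
Mpc = 3.086e22           # megaparsec in metres
H0 = 70e3 / Mpc          # assumed H0 = 70 km/s/Mpc, converted to s^-1

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # mass density, kg/m^3
E_crit = rho_crit * c**2                   # energy density, J/m^3

print(rho_crit)   # ~9.2e-27 kg/m^3: a few hydrogen atoms per cubic metre
print(E_crit)     # ~8.3e-10 J/m^3
```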

    Dark energy, in the form of the cosmological constant Λ, was first introduced into cosmology by Einstein himself, in 1917. Einstein’s intent was to modify the equations of general relativity so as to permit them to describe a static universe, in agreement with the observational data of the time. With Hubble’s establishment of the expansion of the universe in 1929–31, this motivation for the introduction of Λ disappeared, and for most of the following 60 years or so it was generally assumed to be zero, despite the lack of any theoretical justification for this. Ironically, given that it was introduced to make the universe static, it was the discovery that the expansion of the universe is actually accelerating, rather than slowing down as all Λ = 0 models predict, that returned the cosmological constant to favour in the late 1990s[27].

    The observational evidence for Λ > 0 comes from astrophysics and cosmology rather than particle astrophysics, and is discussed in PHY306/406. Briefly, the principal points include:

    • the Hubble diagram for Type Ia supernovae, showing accelerating expansion in recent times (z < 0.6 or so);


    • analysis of the anisotropies of the cosmic microwave background, which indicates that the universe is geometrically flat (Ωtot = 1), but that the matter density Ωm0 is only ∼0.3;

    • simulations of large scale structure, which show good agreement with observations only if Λ > 0;

    • analysis of the X-ray emission from rich clusters of galaxies, which shows a consistent ratio of gas mass to total mass only if ΩΛ ∼ 0.65;

    • comparison of the age of the universe derived from H₀ with the ages of old objects, e.g. globular clusters (ages derived from stellar evolution fits to the Hertzsprung-Russell diagram) and individual metal-poor stars (radiometrically dated using uranium and thorium).

    These independent lines of evidence are all consistent with a cosmological model in which Ωm0 ≃ 0.3 and ΩΛ0 ≃ 0.7. It is fair to say that the existence of a Λ-like component dominating the present energy density of the universe is well established. What that component actually is, however, is very far from well established, and this topic certainly does fall within the remit of particle astrophysics.

    Physically, the standard cosmological constant, with equation of state PΛ = −EΛ, represents the energy density of the vacuum. The idea that this should be non-zero is entirely reasonable in the context of quantum mechanics: according to the Uncertainty Principle, empty space should be full of virtual particle-antiparticle pairs that spontaneously appear and then re-annihilate (after a time short enough that ∆E∆t < ℏ), so on average its energy should not be zero. The problem is actually the opposite: calculations of the expected vacuum energy give values that are too large by many orders of magnitude (a factor of 10¹²⁰ in the Standard Model, reduced to “only” 10⁶⁰ in supersymmetric models). This is because the momenta of the virtual particles are unknown (since they re-annihilate without being observed), so one has to integrate over all possible values of the momentum and all possible types of particle, giving[27]

    \[
      E_\Lambda = \frac{1}{2} \sum_{\rm fields} g_i \int_0^\infty \sqrt{k^2 + m^2}\,\frac{{\rm d}^3 k}{(2\pi)^3} , \qquad (1.7)
    \]

    where k and m are the momentum and mass of the particle being created and gᵢ is the number of degrees of freedom of the field (e.g. 2 for a photon, which has two possible polarisation states). As it stands, this integral is infinite: it diverges at the upper limit. We can make it finite by only integrating up to some cut-off factor k_max: it is then ∝ k_max⁴. The justification for such an apparently arbitrary cut-off is normally the appearance of new physics; unfortunately, the cut-off value one would need to impose to get close to the observed value of Λ is about 0.01 eV, whereas the natural cut-off scales for new physics are the Planck mass (∼ 10¹⁹ GeV) for the Standard Model and around 1 TeV for supersymmetric models.
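    The ∝ k_max⁴ scaling makes the size of the mismatch easy to reproduce. A quick sketch (Python; the cut-off values are the ones quoted in the paragraph above):

```python
import math

# E_Lambda from (1.7) scales as k_max^4 once a cut-off is imposed, so the
# mismatch between two cut-off choices is just (k1/k2)^4.
k_obs = 0.01            # eV: cut-off that would reproduce the observed Lambda
k_planck = 1e19 * 1e9   # eV: Planck mass, ~1e19 GeV

# log10 of the ratio of the two vacuum energy densities:
discrepancy = 4 * math.log10(k_planck / k_obs)
print(discrepancy)      # ~120: the famous factor of 10^120
```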

    There is no very obvious escape from this problem. Supersymmetry helps because the gᵢ factor has a positive sign for bosons and a negative one for fermions, so if it were an exact symmetry the contributions from particle and sparticle would cancel each other out. Unfortunately supersymmetry is clearly not an exact symmetry (the masses of SUSY particles are much greater than those of their Standard Model partners, with the possible exception of the stop squark), so the net contribution is ∝ M⁴ where M is the mass scale at which the symmetry is broken, assumed to be of order 1 TeV as stated above (perhaps a bit higher, given that the LHC has so far failed to find SUSY). This has motivated theorists to seek alternative models for this component of the universe; hence the introduction of the less-specific term dark energy in place of “cosmological constant”. Another possibility is suggested by the anthropic principle: a universe with the natural value of Λ would expand too rapidly for galaxies to form, and so we must live in a universe with an anomalously low value of Λ. This argument makes most sense in a “multiverse” model such as that produced by eternal inflation (see page 11): if it is assumed that the value of Λ, while constant for any given mini-universe, varies randomly from one mini-universe to the next, it could be that the mini-universe in which we live has a quite exceptionally small value of Λ (whereas the overwhelming majority of mini-universes have “natural” values of Λ and are uninhabitable). However, as noted earlier, most theorists have an aversion to such anthropic-principle arguments because they are scientifically unproductive; in addition, it is not at all obvious from the argument presented above that Λ should behave like a random variable.

    If we do not rely on the anthropic principle and instead seek alternative models, the obvious approach, as adopted in inflation (see section 1.2.1), is to postulate a scalar field[27]. From equation (1.1), the equation of state of a scalar field is P_S = wE_S where

    \[
      w = \frac{-1 + \dot\phi^2/2\hbar c^3 V}{1 + \dot\phi^2/2\hbar c^3 V} . \qquad (1.8)
    \]

    As noted in section 1.2.1, in the case where φ̇² ≪ 2ℏc³V, w ≃ −1: a slowly varying scalar field will look very like a cosmological constant. In general, the value of w will change with time as the field evolves: depending on the shape of V(φ), models can “freeze” (w evolves towards −1) or “thaw” (w is initially ∼ −1 but evolves away from −1) [27]. Such a time-varying w is usually parameterised as w(a) = w₀ + (1 − a)w_a, where w₀ and w_a are constants and a(t) is the scale factor (normalised to 1 at the present time). Observational data are beginning to constrain the values of w₀ and w_a (see, e.g., figures 35 and 36 of [11]), but so far the constraints are not very strong.
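    The parameterisation is simple enough to sketch directly (Python; the w₀ and w_a values are purely illustrative, not fitted values):

```python
# The w0-wa parameterisation of a time-varying equation of state,
# w(a) = w0 + (1 - a) * wa, with the scale factor a = 1 today.
def w(a, w0=-1.0, wa=0.3):
    return w0 + (1.0 - a) * wa

print(w(1.0))    # -1.0: today, w reduces to w0
print(w(0.5))    # -0.85: less negative in the past (a freezing-type trajectory)
print(w(0.0))    # -0.7: the early-time limit, w0 + wa
```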

    An attractive feature of some freezing models, so-called “tracker models”, is that the energy density of the scalar field tracks the dominant conventional energy density (radiation or matter) at early times before starting to diverge: this makes the coincidence that we happen to live in the epoch at which both Ωm and ΩΛ are comparable in size less improbable than it is in the straightforward vacuum energy model. Attributing dark energy to a scalar field also raises the possibility that the accelerated expansion of the present epoch could be related somehow to the accelerated expansion of inflation, since in this scenario both are driven by scalar fields, albeit with dramatically different energy scales.

    The principal issue with scalar-field models of dark energy is, again, the extremely small value of EΛ at the present time. This requires a very flat potential V(φ), a very small effective mass, and also an extremely weak coupling of the scalar field to other particles (to avoid introducing unobserved and thus unwanted long-range forces) [27]. It is difficult to incorporate this peculiar field into the wider context of theoretical particle physics. The only known particles that operate at a similar energy scale are neutrinos, and unsurprisingly some theorists have attempted to treat this as a meaningful relationship rather than a coincidence. In neutrino dark energy models (see, e.g., [28]), the scalar field couples to neutrinos, and its energy density is causally related to the neutrino mass (which in these models is generated dynamically and changes with time). The behaviour of neutrino dark energy has a tendency to become unstable when the neutrinos of the cosmic neutrino background become non-relativistic (certainly the case at the present time, for neutrino masses of order 0.05 eV/c²), but models avoiding this problem can be constructed[28]. As the neutrino mass grows with time in these models, a possible experimental signature would be a conflict between a measured neutrino mass (e.g. a positive result from the KATRIN tritium beta decay experiment) and the upper limit on the sum of neutrino masses derived from the CMB.

    In the context of Einstein’s field equations, vacuum energy and scalar fields modify the stress-energy tensor, i.e. the matter side of the equation. An alternative approach is to attack the geometric side, i.e. to modify gravity. This approach can be motivated by extra-dimension models, since in many such models gravity (unlike the other forces) also propagates in the extra dimensions and hence is not perfectly described by general relativity. An example discussed in [27] modifies the Friedmann equation to

    \[
      H^2 = \frac{8\pi G E}{3c^2} + \frac{H}{r_i} , \qquad (1.9)
    \]

    where r_i is a length scale. The extra term H/r_i causes acceleration at late times when the energy density is small. Unfortunately, such models tend to have unphysical features such as tachyons; Frieman, Turner and Huterer[27] conclude that “it is not clear that a self-consistent model with this dynamical behaviour exists.”

    In conclusion, the physics of dark energy certainly belongs in the field of particle astrophysics, but so far is proving a rather intractable problem. None of the possible approaches (vacuum energy, dynamically generated dark energy from scalar fields, modified gravity, or simply assuming that we live in a universe which is inhomogeneous on large scales³) has yet yielded a good explanation of the observations: so far, the data are all consistent with a simple cosmological constant (w = −1 at all times), but there is no theoretical motivation for its small value. Better observational data, more strongly constraining the dark energy equation of state and its possible time variation, should help to decide which avenues of theoretical speculation to pursue.

    1.4 High-energy processes in astrophysics

    Through most of its history, astronomy has been the study of starlight (reflected starlight, in the case of planets). Starlight is thermal (approximately blackbody) radiation with an effective temperature ranging from about 3000 to 30000 K, corresponding to energies of order 1 eV. The nuclear fusion processes that power starlight take place at temperatures of around 10⁷ K for hydrogen burning, going up to 10⁸ K for helium and a few times 10⁹ K for the short period of heavy-element fusion prior to supernova explosion: these temperatures correspond to nuclear physics energies of 1–100 keV. The iron peak in nuclear abundances (see PHY320) is evidence that the elements around iron are made in conditions of nuclear statistical equilibrium, indicating temperatures of order a few MeV (the binding energy of the most stable elements is about 9 MeV per nucleon), but this is still more the domain of nuclear than of particle physics. However, the advent of radio astronomy after the second world war, followed by the rest of the electromagnetic spectrum up to γ rays from the 1960s onwards, provided clear evidence that thermal emission is not the only source of radiation in the cosmos. Further evidence comes in the form of cosmic rays, energetic charged particles first unambiguously detected by Victor Hess in 1911. The cosmic ray energy spectrum goes up to extraordinarily high energies (∼ 10²⁰ eV or more: that’s over a joule of kinetic energy in a single proton!), clearly demonstrating the existence of astrophysical particle accelerators. Unfortunately, as we shall discuss later, the Galactic magnetic field deflects even energetic charged particles to such an extent that the sources of these ultra-high-energy cosmic rays still remain unidentified.

    ³The idea here is that we happen to live in a region which is underdense compared to the rest of the universe; the extra gravitational forces introduced by this can mimic the effect of a cosmological constant. In order for this to be consistent with the highly isotropic nature of the CMB, the underdense region must be very large and the Milky Way must be rather close to the centre of it. This looks decidedly contrived.

    1.4.1 The non-thermal universe

    Thermal radiation is described, at least approximately, by the Planck function,

    \[
      B_\nu(T) = \frac{2h\nu^3}{c^2}\,\frac{1}{\exp\left(\frac{h\nu}{k_B T}\right) - 1} , \qquad (1.10)
    \]

    where ν is frequency, h is Planck’s constant and k_B is Boltzmann’s constant. For low frequencies,

    \[
      \exp\left(\frac{h\nu}{k_B T}\right) - 1 \simeq \frac{h\nu}{k_B T} ,
    \]

    giving the Rayleigh-Jeans approximation

    \[
      B_\nu(T) \simeq \frac{2\nu^2 k_B T}{c^2} . \qquad (1.11)
    \]

    If astrophysical radio sources were thermal in nature, one would therefore expect them to have a spectral energy distribution with flux ∝ ν². In fact, the spectra of radio galaxies usually follow power laws with negative spectral indices, S ∝ ν^α, where S is the flux and the spectral index α ∼ −1 (within about a factor of 2). Therefore the radio emission cannot be thermal. It is in fact synchrotron radiation, produced by a population of relativistic electrons gyrating in a magnetic field (the name comes from the observation of this radiation in terrestrial particle accelerators, i.e. synchrotrons). Supernova remnants such as the Crab Nebula also emit synchrotron radiation at radio frequencies, and therefore must also accelerate electrons to relativistic speeds. It is ironic that the lowest-energy radiation in the electromagnetic spectrum provided the first evidence of high-energy processes in astrophysical sources.
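    The low-frequency behaviour is easy to verify numerically. A sketch (Python, SI constants; the 10⁴ K temperature and 1.4 GHz frequency are illustrative choices):

```python
import math

h = 6.626e-34    # Planck constant, J s
kB = 1.381e-23   # Boltzmann constant, J/K
c = 2.998e8      # speed of light, m/s

def planck(nu, T):
    """Planck function B_nu(T), equation (1.10)."""
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (kB * T))

def rayleigh_jeans(nu, T):
    """Low-frequency limit, equation (1.11)."""
    return 2 * nu**2 * kB * T / c**2

# At radio frequencies h*nu << kB*T, so the two agree closely:
T, nu = 1.0e4, 1.4e9     # a 10^4 K thermal source observed at 1.4 GHz
print(planck(nu, T) / rayleigh_jeans(nu, T))   # ~1: thermal flux goes as nu^2
```

    Using `math.expm1` rather than `exp(x) - 1` avoids loss of precision when hν/k_BT is tiny, which is exactly the regime of interest here.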

    X-ray and γ-ray emission also provides evidence of high-energy processes at work in sources such as supernova remnants and active galaxies. Many sources have spectral energy distributions consistent with inverse Compton scattering, where photons gain energy by back-scattering off fast electrons (in contrast to “normal” Compton scattering, where X-rays lose energy by scattering off stationary or slowly moving electrons). This again requires a population of relativistic electrons in the source. The same sources frequently emit in both the radio and the X-ray/γ-ray regime, as the same population of fast electrons can both emit (radio-frequency) synchrotron photons and back-scatter them to much higher frequencies: this is known as synchrotron-self-Compton (SSC) emission. The relative normalisation of the synchrotron radiation and the inverse Compton flux is set by the magnetic field strength (generally not measured independently, but fitted from the flux).

    Synchrotron radiation and inverse Compton emission require populations of relativistic electrons (associated, in the first case, with magnetic fields), but do not require fast protons or ions. However, the observation of cosmic ray fluxes extending to extremely high energies unambiguously demonstrates that hadrons are also accelerated by some (unidentified) type(s) of astrophysical source. The presence of high-energy protons in γ-ray sources could be signalled by a different spectral shape: high-energy protons colliding with ambient gas or radiation would be expected to produce large numbers of pions, and the decay π⁰ → γγ would convert these into a γ-ray signal with a much flatter spectrum than that of inverse Compton scattering. In addition, charged pions would decay through π⁺ → µ⁺νµ (π⁻ → µ⁻ν̄µ): observations of high-energy neutrinos would unambiguously tag a source as accelerating hadrons. Unfortunately, although very-high-energy neutrinos have recently been observed by IceCube[29], no point sources have yet been identified.
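    A sketch of the kinematics behind this hadronic γ-ray signature (Python; the in-flight pion energy is an illustrative value):

```python
import math

# pi0 -> gamma gamma: in the pion rest frame each photon carries half the
# pi0 rest energy, the origin of the "pion bump" in hadronic gamma spectra.
m_pi0 = 134.98                  # pi0 mass, MeV/c^2

E_gamma_rest = m_pi0 / 2.0
print(E_gamma_rest)             # 67.49 MeV per photon in the rest frame

# For a pion in flight with total energy E, the isotropic two-body decay
# spreads the photons uniformly between (E - p)/2 and (E + p)/2 in the lab:
E = 1000.0                      # illustrative pion energy, MeV
p = math.sqrt(E**2 - m_pi0**2)  # pion momentum, MeV/c
print((E - p) / 2, (E + p) / 2) # min and max lab-frame photon energies
```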

    In summary, observations outside the optical waveband reveal that several types of astrophysical object are in effect particle accelerators, capable of accelerating electrons, at least, up to very high energies. Cosmic-ray observations supplement this by demonstrating a need for hadron accelerators as well. These observations pose questions to particle astrophysicists: what is the acceleration mechanism (or mechanisms); where does the acceleration take place; what is the origin (or origins) of high-energy cosmic rays; and what do the answers to these questions tell us about the nature of the astrophysical sources?

    1.4.2 Detection techniques

    So far, we have covered topics relevant to theoretical particle astrophysics: inflation and baryogenesis, the physics of dark energy, and the nature and location of particle acceleration in astrophysical sources. However, high-energy particle astrophysics also includes experimental aspects: while radio astronomy and the lower-energy end of X-ray astronomy qualify as conventional astronomy with focusing paraboloid optics (albeit, in the case of X-ray telescopes, with unconventional geometry), very high energy photons and charged particles require technology more usually associated with particle physics experiments.

    High-energy photons (γ-rays) do not reflect from materials, so conventional astronomical imaging optics are not possible. Instead, a variety of techniques are used, as listed below.

    • Coded mask telescopes (see, e.g., [30]) work by placing a patterned mask in front of the instrument. The mask consists of a complex and carefully designed pattern of opaque and transparent sections, such that the shadow it casts on the instrument depends on the direction of the incoming flux. A deconvolution algorithm is used to construct an image of the field being viewed. Although the coded mask technique seems wasteful (you are deliberately blocking off quite a large fraction of your collecting area), it is useful in the hard X-ray/soft γ-ray regime (∼3 keV–20 MeV), where reflecting optics do not work but the incident photon is too soft for a tracking calorimeter (see below). They also have the advantage of a large field of view, and hence make good survey or transient-finding instruments. Examples of coded mask telescopes include the IBIS telescope and SPI spectrometer on INTEGRAL, the Burst Alert Telescope (BAT) on Swift and the Wide Field Camera on BeppoSAX[31].

    • Compton imaging uses Compton scattering to produce an image. If both the scattered particles, the electron and the photon, are detected, and their energies and positions measured, relativistic kinematics can be used to reconstruct the energy and direction of the incoming photon. This technique was used in the COMPTEL instrument[32] on the Compton Gamma Ray Observatory (CGRO) satellite, and is also used in medical imaging. COMPTEL covered an energy range of 0.8–30 MeV with an angular resolution of 1.7–4.4° for individual photons (the source itself could be located with a precision of 5–30 arcmin). The field of view was about one steradian.

    • Pair-conversion tracking calorimeters are used for higher energy γ-rays, which will readily convert to e⁺e⁻ pairs when passing through material. These are genuine particle physics experiments, much more comprehensible to an LHC physicist than to a conventional astronomer! The ingredients are (1) thin plates of absorber to encourage the γs to convert, interspersed with (2) tracking elements to detect and reconstruct the e⁺e⁻ pair, followed by (3) calorimetry to measure the energy. The first such instrument was EGRET (the Energetic Gamma Ray Experiment Telescope) [33] aboard CGRO. The EGRET pair conversion spectrometer consisted of metal plates alternating with spark chambers, followed by thallium-doped sodium iodide (NaI(Tl)) scintillating crystals for energy measurement. The instrument was covered with a plastic scintillator dome in anticoincidence for background rejection (to veto incoming charged particles). EGRET was sensitive to γ-rays with energies between 20 MeV and 30 GeV.

    The successor to EGRET is the Large Area Telescope (LAT) on board the Fermi satellite[34]. The LAT has much more modern particle physics technology: the converter-tracker consists of tungsten absorber interleaved with silicon strip detectors for tracking, and the calorimeter section is thallium-doped caesium iodide scintillating crystals. The anticoincidence detector consists of plastic scintillator tiles. The LAT is sensitive to photons in the energy range 20 MeV–300 GeV, with an energy resolution of order 10% and a single-photon angular resolution ranging from 0.15° above 10 GeV to 3.5° at 100 MeV. The field of view is 2.4 steradians, and point sources can be located to better than 0.5′.
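    The Compton-imaging reconstruction described above can be sketched with straightforward kinematics (Python; the event energies are invented for illustration, and the helper function is ours, not actual COMPTEL code):

```python
import math

M_E = 0.511  # electron rest energy, MeV

def compton_reconstruct(E_scattered, E_electron):
    """Reconstruct a Compton event (hypothetical helper for illustration).

    E_scattered: energy of the scattered photon (MeV)
    E_electron:  energy deposited by the recoil electron (MeV)
    Returns (E0, theta_deg): the incident photon energy and the scattering
    angle; the source direction lies on a cone of half-angle theta about
    the scattered-photon direction, which is why each event constrains the
    source to a circle on the sky.
    """
    E0 = E_scattered + E_electron                           # energy conservation
    cos_theta = 1.0 - M_E * (1.0 / E_scattered - 1.0 / E0)  # Compton formula
    return E0, math.degrees(math.acos(cos_theta))

# Illustrative event: a photon scatters, leaving 1.5 MeV in the photon and
# 0.5 MeV on the electron.
E0, theta = compton_reconstruct(1.5, 0.5)
print(E0, theta)   # 2.0 MeV and the scattering angle in degrees
```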

For energies above 300 GeV, space-based experiments are not practical because the calorimeter needed to contain such high-energy showers would be too heavy, and because high-energy events are rare enough to require larger collecting areas (and thus even heavier calorimeters). Therefore, the very highest energy γs are detected using ground-based instruments.

• Imaging air Cherenkov telescopes (IACTs) detect the electromagnetic shower produced when a very-high-energy γ enters the atmosphere. The secondary e± produced in the shower have high enough energies that they are travelling at speeds greater than c/n, where n is the refractive index of air, and therefore generate Cherenkov radiation[35] in a narrow cone about the direction of the incoming photon. This light is collected by a parabolic mirror and focused on to a “camera” consisting of an array of small photomultiplier tubes. Examples of IACTs include H.E.S.S. in Namibia, MAGIC in the Canary Islands and VERITAS in the USA. The low energy threshold depends on the size of the telescope, but is typically 30–100 GeV; γs are detected up to energies of many TeV. The main problem is that the Cherenkov emission is very faint, so these telescopes have a relatively poor duty cycle: they can operate only on clear, dark nights.
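The Cherenkov condition quoted above (speed greater than c/n) implies a threshold energy for the shower electrons. A quick sketch of the arithmetic, using an assumed sea-level refractive index n ≈ 1.0003 (the true value falls with altitude, so the threshold is higher where the shower actually develops):

```python
import math

# Cherenkov radiation requires beta > 1/n; at threshold, beta = 1/n exactly.
# n = 1.0003 is an assumed sea-level value for air; m_e c^2 = 0.511 MeV.
n = 1.0003
m_e = 0.511  # MeV

beta_thr = 1.0 / n
gamma_thr = 1.0 / math.sqrt(1.0 - beta_thr**2)
E_thr = gamma_thr * m_e  # total electron energy at threshold

print(f"gamma_thr = {gamma_thr:.1f}, E_thr = {E_thr:.1f} MeV")
```

This gives a threshold of roughly 20 MeV for electrons at sea level, far below the typical energies of the e± in a TeV shower, so essentially the whole shower radiates.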

The detection of cosmic rays presents similar challenges. Cosmic rays are generally protons or heavier nuclei, and therefore the primary cosmic rays themselves are not detected at ground level, only the secondary cosmic rays such as muons, which are the products of the interactions of primary cosmic rays with the atmosphere.

Early cosmic-ray experiments were flown on high-altitude balloons or rockets. As with γ-ray detection, modern experiments divide into relatively small space-based detectors concentrating on the lower-energy part of the spectrum, and much larger ground-based arrays to detect the rarer ultra-high-energy cosmics.

The orbiting cosmic-ray observatories PAMELA[36] (a satellite) and AMS-02[37] (on the International Space Station) are both magnetic spectrometers, similar to many accelerator-based particle physics experiments. Both instruments have similar aims, namely to study the antimatter component of cosmic rays (positrons, antiprotons and perhaps heavier antinuclei such as antideuterons and antihelium), to conduct indirect searches for dark matter (see below), and to provide precise measurements of the primary cosmic ray flux and spectrum, and its variation over time.

Ground-based cosmic ray experiments, like air Cherenkov telescopes, detect the extensive air shower (EAS) produced when a high-energy primary cosmic ray interacts with the Earth’s atmosphere. There are two principal approaches.

• Nitrogen fluorescence is produced when the secondaries from the interaction (particularly e±) excite nitrogen molecules in the atmosphere. The de-excitation produces line emission in the near UV (300–400 nm), which is detected using telescopes very similar to the Cherenkov telescopes described above. Unlike Cherenkov radiation, the fluorescence is emitted isotropically, so fluorescence telescopes generally see the shower “side-on” rather than “head-on”; like Cherenkov radiation, it is very faint and therefore detectable only on clear, dark nights.

• Ground arrays are, as the name suggests, arrays of small, semi-autonomous detectors designed to sample the fraction of the EAS secondaries that reach the ground. Each small detector triggers independently and sends its time-stamped data to a central facility which combines the data from all detectors to reconstruct the shower. The small detectors need to be simple, robust and cheap to construct (since you want to instrument as much area as possible): the preferred technologies are Cherenkov radiation (using small, self-contained water tanks) or scintillators. Some arrays have also included specialised muon detectors (underground, or underneath the main detectors, so that only muons reach them) to study the particle content of the shower.

Ground arrays have the advantage of a 24-hour duty cycle, but the disadvantage that in order to cover a large area you must physically distribute detectors over a large area (in contrast to fluorescence telescopes, which can detect fluorescence originating a long way from the actual telescope). The largest ground array, the Pierre Auger Observatory[38], is a hybrid instrument combining a very large ground array (1600 water Cherenkov tanks) with a set of fluorescence telescopes arranged to look out over the array, so that on suitable nights both fluorescence and ground sampling data will be available.

Both high-energy γ-rays and cosmic rays are classic particle astrophysics: particle physics technology harnessed to astrophysical applications. It is also worth noting that high-energy particle physics began as cosmic-ray physics: the early discoveries such as the positron, the muon, the pion and strange particles were all made in cosmic rays, before terrestrial particle accelerators were developed.

    1.5 Neutrinos

Neutrinos are probably the second most abundant particle in the universe, after photons (and possibly axions, if dark matter consists of axions). In view of their weak interactions, nothing is “optically thick” to most neutrinos: for example, the solar neutrinos we detect on Earth have come directly from fusion reactions in the core of the Sun, whereas the photon diffusion time from the core to the photosphere is of the order of 200000 years. In principle, therefore, neutrinos can carry information about processes occurring deep inside astrophysical objects, which cannot possibly be directly observed using photons. Unfortunately, neutrinos are equally reluctant to interact with detectors, so only extremely intense neutrino fluxes provide useful numbers of events in terrestrial detectors. To date, only two astrophysical sources of neutrinos have been identified: the Sun, and Supernova 1987A in the Large Magellanic Cloud.
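The contrast between the photon and neutrino escape times can be estimated with a simple random walk: a photon takes roughly R²/(lc) to diffuse out, where l is its mean free path. The numbers below are illustrative assumptions (a single averaged l ∼ 0.1 cm, whereas the real value varies strongly with depth), but they give the right order of magnitude:

```python
R = 6.96e10   # solar radius, cm
c = 3.0e10    # speed of light, cm/s
l = 0.1       # assumed average photon mean free path, cm (varies with depth)

t_photon_yr = R**2 / (l * c) / 3.156e7  # random-walk diffusion time, years
t_nu_s = R / c                          # a neutrino simply streams out

print(f"photon: ~{t_photon_yr:.0e} yr; neutrino: ~{t_nu_s:.1f} s")
```

This crude estimate gives a few times 10⁴ years; more careful treatments using the actual solar density profile give times of order 10⁵ years, consistent with the figure quoted in the text.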

Astrophysical neutrinos are produced in many contexts, from the early universe to the interiors of main-sequence stars, and span a wide range of energies. Their properties are important in many branches of astrophysics and cosmology.

    1.5.1 Neutrinos in cosmology

Like the other fundamental particles, neutrinos are produced in large numbers during the reheating period immediately after inflation. Because of their weak interactions, they decouple from the rest of the matter in the universe at a temperature ∼1 MeV, a second or so after the Big Bang, and we expect that both neutrinos and antineutrinos will have survived to the present day.

The Cosmic Neutrino Background (CνB) is very similar to the cosmic microwave background (CMB), except that

    • it has a Fermi-Dirac distribution rather than a blackbody distribution;

• it is at a slightly lower temperature (1.95 K rather than 2.725 K), because the photons gain extra energy when electrons and positrons annihilate in the early universe (at T ∼ 0.3 MeV) whereas the neutrinos, which have already decoupled at that point, do not.

The number of relic neutrinos is predicted to be about 340 per cubic centimetre, split equally among six types (three neutrinos and three antineutrinos). This would be extremely difficult to verify experimentally, because the interaction cross-section of such a low-energy neutrino is tiny even by weak-interaction standards.
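Both of these numbers follow from the CMB in a couple of lines: the temperature ratio is the standard (4/11)^(1/3) factor from e⁺e⁻ annihilation, and the number density per species carries a 3/4 Fermi-Dirac factor relative to photons. A sketch, assuming the standard present-day CMB values:

```python
T_gamma = 2.725  # CMB temperature today, K
n_gamma = 411.0  # CMB photon number density, cm^-3 (assumed standard value)

# e+e- annihilation heats the photons but not the already-decoupled
# neutrinos, giving T_nu = (4/11)^(1/3) T_gamma.
T_nu = (4.0 / 11.0) ** (1.0 / 3.0) * T_gamma

# Number density: 3/4 (Fermi-Dirac vs Bose-Einstein) x 4/11 (ratio of
# temperatures cubed) per flavour (nu + nubar together), times 3 flavours.
n_nu = 3 * (3.0 / 4.0) * (4.0 / 11.0) * n_gamma

print(f"T_nu = {T_nu:.2f} K, n_nu = {n_nu:.0f} per cm^3")
```

This reproduces T ≈ 1.95 K and ≈340 neutrinos per cm³, i.e. about 56 of each of the six types.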

In the early universe, the CνB is a significant part of the total energy density of the universe. This has a number of consequences:


• as H² ∝ ε, where ε is the total energy density, the neutrinos contribute to the early expansion of the universe, and therefore affect the outcome of big bang nucleosynthesis (faster expansion implies faster cooling, so the neutrons have less time to decay before nucleosynthesis begins, and hence more ⁴He is made);

• since neutrinos are not massless, they act as hot dark matter, which affects the formation of large-scale structure and the anisotropies of the cosmic microwave background.

These effects can be used to place limits on the number of neutrino species and the total mass of all species of neutrinos, ∑i mνi. Planck[11] quotes Neff = 3.30 ± 0.27 for the effective number of neutrino species and ∑i mνi < 0.23 eV/c² for the total mass; the latter is a much stronger limit than any currently obtained by direct experiments, but there is some model dependence.

In addition, as discussed in section 1.3, attempts have been made to connect neutrinos with dark energy, on the grounds that they have a similar energy scale.

    1.5.2 Solar neutrinos

Certainly the most intensively studied astrophysical neutrinos are those produced by solar fusion reactions. These have fairly low energies, ranging from 400 keV or so for the most numerous pp neutrinos (from p + p → d + e⁺ + νe) up to around 15 MeV for the rare ⁸B neutrinos (from ⁸B → ⁸Be + e⁺ + νe). The reactions that produce solar neutrinos are discussed in more detail in PHY320.

Solar neutrinos have the potential to probe the fusion reactions in the Sun’s interior; for example, it might be possible to make a direct measurement of the fraction of the Sun’s luminosity produced by the CNO reaction cycle, which produces a different set of neutrinos with different energies. However, so far their principal application has been in understanding the physics of neutrinos.

All solar neutrinos are originally produced as νe: the Q-values of the reactions are not more than a few MeV, precluding the production of the heavier charged leptons (µ and τ), and therefore of their associated neutrinos. However, experiments which detect only νe consistently detect too few, by a factor of 2–3 depending on the energy range to which they are sensitive. This is the so-called Solar Neutrino Problem, which remained unresolved for many years. Its resolution in terms of neutrino oscillations was finally definitively demonstrated in 2002 by the SNO experiment[39], which used neutrino interactions on heavy water (D2O) to prove that the total neutrino flux was as predicted by theorists, the deficit being due to transformation of νe into some other flavour.

    1.5.3 Supernova neutrinos

The other confirmed source of astrophysical neutrinos is the Type II core-collapse supernova SN 1987A. A total of 24 neutrinos were observed by three experiments (Kamiokande-II, IMB, and Baksan) about three hours before the optical explosion was detected. This slight time difference is expected, because the first stages of the explosion are opaque to photons, though not (of course) to neutrinos4. The number of neutrinos observed, and their energies, were consistent with the expectation that about 99% of the energy of a core-collapse supernova is emitted in the neutrino burst, with only about 1% going into the visible explosion.

4Given that the LMC is about 50 kpc away, this almost-negligible time difference was hard to reconcile with the September 2011 claim by the OPERA experiment that their neutrinos were travelling faster than light: neutrinos travelling at the speed implied by the OPERA results would have arrived four years early, not three hours! Much theoretical fudging went into attempting to reconcile these results, but the OPERA measurement was simply wrong.
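The footnote’s “four years” is easy to check. Taking the LMC distance as 50 kpc and OPERA’s reported fractional speed excess (v − c)/c ≈ 2.5 × 10⁻⁵ (round values, for illustration):

```python
kpc_m = 3.086e19  # metres per kiloparsec
c = 2.998e8       # speed of light, m/s
yr_s = 3.156e7    # seconds per year

d = 50 * kpc_m                   # distance to the LMC
tof_yr = d / c / yr_s            # light travel time, ~1.6e5 years
head_start_yr = tof_yr * 2.5e-5  # lead an "OPERA-speed" neutrino builds up

print(f"travel time ~{tof_yr:.2e} yr, OPERA-speed lead ~{head_start_yr:.1f} yr")
```

Over ∼160 000 years of flight, even a part-in-10⁵ speed excess accumulates into a four-year head start, so the three-hour observation rules it out by a huge margin.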

Simulations of core-collapse supernovae suggest that the intense neutrino emission is essential to the physics of the supernova itself. The explosion is initiated when infalling material bounces off the surface of the newly-formed neutron star, creating a shock front; however, in early simulations the shock promptly stalled, causing the rest of the stellar material to fall back on to the neutron star. This produced a black hole and no visible explosion, in contradiction to observations (core-collapse supernovae definitely do explode!). Part of the problem was deficiencies in the simulations: supernova ignition seems to be quite asymmetric, so the early models which assumed spherical symmetry (to reduce a 3D problem to a 1D one, with an enormous saving in computing power) were not reproducing the physics properly. However, this alone is not enough. It appears that the shock is revived by neutrino heating: the density of the material is so high, and the neutrino flux so great, that a significant amount of energy is dumped by the neutrinos into the stalled shock, reinvigorating the explosion.

The number of neutrinos detected from SN 1987A was not large enough to do more than order-of-magnitude calculations (not that this is reflected in the enormous number of theoretical papers on the subject...). However, should a supernova explode in our Galaxy, the number of neutrinos that would be observed by the current generation of detectors would be well into the thousands (a Galactic supernova would be a factor of 5 closer than SN 1987A, and Super-Kamiokande is about an order of magnitude larger than Kamiokande-II). Such a data sample would provide opportunities for both neutrino physics (the initial “neutronisation pulse” of νe, generated by the formation of the neutron star, is sharp enough that correlations between arrival time and neutrino energy could be used to set limits on, or perhaps even measure, the neutrino mass) and the astrophysics of supernova explosions (from the time and energy spectra of the subsequent “thermal” neutrinos produced in the early stages of the explosion). Of course, we do not know when the next such event will occur; arguably, given the observations of Tycho’s supernova in 1572 and Kepler’s in 1604, and the dating of the Cas A explosion to ∼1670, we have been unlucky to observe no Galactic supernovae at all in the last 300 years (admittedly, both Tycho’s and Kepler’s supernovae seem to have been of Type Ia, and would not have produced neutrino bursts). We can but hope.
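The “well into the thousands” estimate is just inverse-square scaling plus detector size. The inputs below are assumed round numbers (Kamiokande-II recorded about a dozen of the 24 events; a Galactic supernova at ∼10 kpc versus ∼50 kpc for the LMC; Super-Kamiokande roughly ten times the fiducial mass of Kamiokande-II):

```python
n_kam2_1987a = 12   # SN 1987A events in Kamiokande-II (approximate)
distance_ratio = 5  # LMC (~50 kpc) vs a Galactic supernova (~10 kpc)
mass_ratio = 10     # Super-Kamiokande vs Kamiokande-II fiducial mass

flux_gain = distance_ratio ** 2  # inverse-square law
n_galactic = n_kam2_1987a * flux_gain * mass_ratio

print(f"expected events ~ {n_galactic}")
```

The factor 25 in flux and factor 10 in target mass turn a dozen events into a few thousand.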

Of course, neutrinos from past core-collapse supernovae still exist, and are still travelling outwards from the original explosion at approximately the speed of light. The flux from such supernova relic neutrinos is much lower than the burst from a Galactic supernova, but it is continuous and detectable at all times (if detectable at all). Calculations indicate[40] that there might be a “window” of observability between 20 and 30 MeV: below 20 MeV, the signal is drowned out by solar neutrinos, and above 30 MeV by atmospheric neutrinos from cosmic-ray interactions. Searches by Super-Kamiokande have so far been unsuccessful[41], but the next generation of still larger neutrino detectors might do better. The detection of supernova relic neutrinos could be used to constrain the history of the star formation rate, and would also provide information about neutrino properties (e.g. oscillations, and limits on neutrino lifetimes).


    1.5.4 Atmospheric neutrinos

When primary cosmic rays interact in the atmosphere, they produce pions. Charged pions subsequently decay to muons and νµ, and the muons then decay by µ− → e− ν̄e νµ (or the equivalent for µ+). Therefore, cosmic ray interactions produce a flux of atmospheric neutrinos. At low energies, essentially all of the muons decay, and the atmospheric neutrino flux should consist of νµ and νe in the ratio 2:1 (ignoring the distinction between neutrinos and antineutrinos); for higher energies, time dilation effects will allow some muons to reach the ground before decaying, and the νµ to νe ratio should be greater than 2.
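The 2:1 ratio is just bookkeeping of the low-energy decay chain, in which every muon decays in flight. A tally for a single π⁺ (the π⁻ chain is the mirror image, and ν/ν̄ are lumped together as in the text):

```python
from collections import Counter

# pi+ -> mu+ nu_mu, then mu+ -> e+ nu_e nubar_mu.
# Neutrinos and antineutrinos are counted together, as in the text.
chain = [
    "nu_mu",          # from pi+ -> mu+ nu_mu
    "nu_e", "nu_mu",  # from mu+ -> e+ nu_e nubar_mu
]
counts = Counter(chain)
ratio = counts["nu_mu"] / counts["nu_e"]

print(f"nu_mu : nu_e = {counts['nu_mu']}:{counts['nu_e']} (ratio {ratio})")
```

Each charged pion thus yields two muon-type neutrinos for every electron-type neutrino.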

In fact, we find that the νµ : νe ratio depends on the zenith angle of the neutrinos: it is as predicted for neutrinos coming straight down (and therefore travelling about 20 km), but decreases with increasing zenith angle, reaching a minimum for neutrinos coming straight up (and therefore travelling about 12800 km)[42]. This is an effect of neutrino oscillations: the νµ are oscillating into ντ over the longer distances. Atmospheric neutrino measurements provided the first generally accepted evidence for neutrino oscillations[43] (in fact, the solar neutrino problem (see above) had been providing such evidence for two decades, but its reliance on calculations of the solar neutrino flux based on theoretical models made people reluctant to accept it as definitive).
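The zenith-angle dependence can be sketched with the standard two-flavour survival probability P(νµ → νµ) = 1 − sin²2θ sin²(1.27 Δm² L/E), with L in km, E in GeV and Δm² in eV². The parameter values below are assumptions, roughly the atmospheric best fit (Δm² ≈ 2.4 × 10⁻³ eV², maximal mixing):

```python
import math

def p_survival(L_km, E_GeV, dm2_eV2=2.4e-3, sin2_2theta=1.0):
    """Two-flavour nu_mu survival probability (assumed atmospheric values)."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

E = 1.0                        # GeV, a typical atmospheric-neutrino energy
p_down = p_survival(20.0, E)   # straight down: essentially no oscillation
p_up = p_survival(12800.0, E)  # straight up: strongly oscillated

print(f"P(down, 20 km) = {p_down:.3f}; P(up, 12800 km) = {p_up:.3f}")
```

Averaged over a realistic energy spectrum, the upward survival probability tends to 1 − ½sin²2θ ≈ 0.5, which is why the up/down asymmetry is such a robust observable.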

Atmospheric neutrinos qualify as particle astrophysics, since they are secondary products of cosmic rays, but are not generally regarded as such, because the analysis of atmospheric neutrino data provides information about the properties of neutrinos, not about cosmic rays. Their principal significance for neutrino astronomy is as an irreducible background in searches for high-energy neutrinos from astrophysical sources.

    1.5.5 High-energy neutrinos

Observations of cosmic rays (see above) provide conclusive proof that some astrophysical sources emit ultra-high-energy protons. Such protons will interact with ambient gas and/or photons in the source to produce pions, and the charged pions will decay into muons and neutrinos. (This is a well-established process, which is responsible for the atmospheric neutrino flux discussed in the preceding section, and also for the production of neutrino beams from terrestrial particle accelerators.) Therefore, all sources of high-energy cosmic rays should also be sources of high-energy neutrinos. As a consequence of the decay kinematics, the neutrino energies will typically be about 5–10% of the proton energies: still very high, given that the proton energies range up to > 10²⁰ eV. The great advantage of the neutrinos is that, being uncharged, they will not be deflected by the Galactic magnetic field and will therefore point back to their place of origin. The great disadvantage is that they are weakly interacting and will therefore be very difficult to detect in the first place. It is therefore not at all surprising that point sources of high-energy astrophysical neutrinos have not yet been identified, despite a couple of decades of searching.
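The “5–10%” figure comes from two rough kinematic factors, shown here with assumed round values: the pion typically carries of order 20% of the proton energy, and the π → µνµ, µ → e νe ν̄µ chain shares the pion energy roughly equally among the four light final-state particles:

```python
E_p = 1e20   # eV, near the top of the cosmic-ray spectrum (illustrative)
f_pion = 0.2 # assumed fraction of proton energy carried by the pion
n_leptons = 4  # pi -> mu nu_mu, mu -> e nu_e nubar_mu: four light particles

E_nu = f_pion * E_p / n_leptons  # typical energy per neutrino
frac = E_nu / E_p                # fraction of the proton energy per neutrino

print(f"E_nu ~ {E_nu:.1e} eV ({frac:.0%} of the proton energy)")
```

So a 10²⁰ eV proton yields neutrinos of order 5 × 10¹⁸ eV, consistent with the 5–10% quoted.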

Given the small interaction cross-section, the paramount design criterion for “neutrino telescopes” is that they must be as large as possible. As a result, the usual approach is not to build a structure, but instead to instrument a naturally-occurring target medium. So far, the technique of choice is Cherenkov radiation (from the charged lepton produced when a neutrino interacts by W exchange) in natural bodies of water, either liquid (Lake Baikal or the Mediterranean) or solid (the Antarctic icecap). Strings of “optical modules” (consisting of a large photomultiplier tube and its associated electronics, housed in a pressure-resistant glass sphere) are lowered into the water or ice, and the charge and timing information used to reconstruct the Cherenkov cone. A number of neutrino telescopes are currently in operation: the most successful, simply because it has the largest instrumented volume, is the IceCube experiment at the South Pole[44]. IceCube has detected high-energy neutrinos at a rate above the expectation from the atmospheric neutrino background[29], but the number of events to date is small and there is no statistically significant evidence for point sources. This situation will doubtless improve over time.

Other methods of detecting high-energy neutrinos have been proposed, although most are presently still at the stage of R&D or feasibility studies.

• The Askaryan effect is a transient radio signal produced when fast particles travel through a dielectric medium (it’s a form of Cherenkov radiation). It should in principle be possible to use this effect to detect the electromagnetic shower produced when a very-high-energy neutrino interacts in a radio-transparent medium such as ice or rock (but not liquid water). The ANITA balloon experiment[45], for example, uses the Antarctic icecap as the radiator and is sensitive to neutrinos with energies > 10¹⁸ eV; to date (after two flights), no significant signal has been observed[46]. Other Askaryan-based searches have used radio telescopes as detectors and the Moon as the radiator.

• Acoustic detection of neutrinos relies on the fact that at extreme energies (∼10²⁰ eV), neutrino interactions are not weak (the W and Z are effectively massless at these energies), so neutrinos will initiate an electromagnetic shower when they penetrate material. In the ocean, the energy dumped by the shower into a narrow cylinder of water will result in a pressure pulse, which can be detected by hydrophones. This has the advantage that the range of sound in water is very large (so a large volume can be instrumented with a small number of detectors) and that hydrophones are off-the-shelf equipment; the disadvantage is that the ocean is a very noisy place, and sophisticated signal processing is required to pick out the characteristic bipolar pulse shape of a neutrino event. Also, the threshold is very high, so the expected rates are correspondingly low. Nevertheless, this is such an attractive idea that several feasibility studies have been conducted, including the Sheffield-led ACORNE[47] experiment using a hydrophone array off the west coast of Scotland.

    1.6 Dark matter

Dark matter is the classic example of particle astrophysics: it is a dominant constituent of the universe and has important effects in cosmology and astrophysics, but both its theoretical explanation and its detection and identification rely on particle physics. However, dark matter is covered in detail in PHY326[5], so I will only summarise the main points here. For a good review article on this material, consult Feng[48].

    1.6.1 Astrophysical and cosmological evidence for dark matter

The original astrophysical evidence for dark matter was dynamical: the orbital motions of stars and gas in galaxies, and of galaxies in clusters of galaxies, are too fast to be accounted for by the luminous material. This was first noted by Fritz Zwicky in 1933 (galaxies in the Coma cluster), and subsequently studied in detail by Vera Rubin and colleagues (rotation curves of spiral galaxies).

This original evidence has now been supplemented through a number of independent routes:

• the temperature profile of the intracluster medium (extremely hot, low-density gas) that pervades rich clusters of galaxies, measured using its X-ray emission, shows that the gas mass (which greatly exceeds the mass of the galaxies themselves) accounts for only about one-sixth of the total gravitational mass;

• studies of both weak and strong gravitational lensing show that the lensing mass is larger and more widely distributed than the luminous mass;

• simulations of large-scale structure cannot reproduce the observed distribution of galaxies without incorporating dark matter;

• analysis of the power spectrum of the cosmic microwave background shows that cold dark matter must account for about 25% of the total energy density of the universe.

The astrophysical and cosmological evidence also provides information about the nature of dark matter. The abundances of the light isotopes ²H, ⁴He and ⁷Li, which are produced in the early universe, determine the baryon-to-photon ratio η, or equivalently the density of baryonic matter Ωb0, where Ω is the density in units of the critical density. This is found to be Ωb0 ≃ 0.04, which is about 10 times greater than the stellar density (so most baryonic matter is not luminous), but about 6 times less than the matter density inferred from the cosmic microwave background or the gravitational potentials of rich clusters. Thi