13 Lecture in physics: Quantum physics, Elementary particles, Astrophysics, Cosmology


13 Lecture in physics

Quantum physics
Elementary particles

Astrophysics
Cosmology

In the tutorial on 12.12.2014 we revised geometric optics and covered some new topics: the quantum physics of molecules, nuclear physics (alpha decay, beta decay), etc.

Attend the tutorials on Thursday and Friday of this week, the week before Christmas. Check the web site for the details.

We need to revise quantum physics (matter waves, Schrödinger's cat, etc.), continue nuclear physics (gamma decay, nuclear energy, etc.), and get to elementary particles, astrophysics and cosmology.

More homework is due on 17 and 24 December 2014; it is posted on the web sites.

Study the solutions on the web sites for math, calculus and physics.

Geometric optics

A plane mirror need only be half the height of the object for the observer to see the object's full image.

27. Basics of quantum physics

28. Quantum Mechanics of Atoms

Eigenvalues and eigenvectors

An eigenvector or characteristic vector of a square matrix A is a non-zero vector v that, when multiplied with A, yields a scalar multiple of itself; the scalar multiplier is often denoted by λ.
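As a hedged illustration (not part of the lecture), the defining relation A v = λ v can be checked numerically with NumPy, using an arbitrarily chosen symmetric matrix:

```python
import numpy as np

# Arbitrary example matrix (not from the lecture); being symmetric,
# it has real eigenvalues and orthogonal eigenvectors.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# Each column of `eigenvectors` satisfies the defining relation A v = lam v.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

# For this matrix the eigenvalues are 1 and 3.
```

In quantum mechanics the same relation appears with a Hermitian operator in place of the matrix A, and the eigenvalues λ are the possible results of a measurement.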

Schrödinger's cat

Schrödinger's cat (continued)

Schrödinger's cat is a thought experiment, sometimes described as a paradox, devised by Austrian physicist Erwin Schrödinger in 1935. It illustrates what he saw as the problem of the Copenhagen interpretation of quantum mechanics applied to everyday objects. The scenario presents a cat that is randomly put in a state where alive and dead are both possibilities, requiring further observation to determine which. The thought experiment is also often featured in theoretical discussions of the interpretations of quantum mechanics. In the course of developing this experiment, Schrödinger coined the term Verschränkung (entanglement).

Wave function

A wave function or wavefunction (also called a state function) in quantum mechanics describes the quantum state of a system of one or more particles, and contains all the information about the system considered in isolation. Quantities associated with measurements, such as the average momentum of a particle, are derived from the wavefunction by mathematical operations that describe its interaction with observational devices. Thus it is a central entity in quantum mechanics. The most common symbols for a wave function are the Greek letters ψ or Ψ (lower-case and capital psi).

The Schrödinger equation determines how the wave function evolves over time; that is, the wavefunction is the solution of the Schrödinger equation. The wave function behaves qualitatively like other waves, such as water waves or waves on a string, because the Schrödinger equation is mathematically a type of wave equation. This explains the name "wave function" and gives rise to wave–particle duality. The wave of the wave function, however, is not a wave in physical space; it is a wave in an abstract mathematical "space", and in this respect it differs fundamentally from water waves or waves on a string.

Wave function (continued)

The wave function for a given system does not have a unique representation. Most commonly, it is taken to be a function of all the position coordinates of the particles and time; that is, the wavefunction is in "position space". However, we could also consider a wave function in "momentum space": a function of all the momenta of the particles and time instead. In general, the wave function of a system is a function of the continuous and discrete variables characterizing the system's degrees of freedom, and there is one wavefunction for the entire system, not a separate wavefunction for each particle in the system.

Some particles, like electrons, have nonzero spin, and the wavefunction must include this fundamental property as an intrinsic degree of freedom. In general, the wavefunction for a particle with half-integer spin is a spinor. Particles with spin zero are called scalar particles, those with spin 1 vector particles, and, more generally for higher integer spin, tensor particles. The latter are always composite. No elementary particle with spin 3⁄2 or higher is known, except for the hypothesized spin-2 graviton.

Wave function (continued)

In most treatments of quantum mechanics, the wavefunction is complex-valued. In one important interpretation of quantum mechanics, the Copenhagen interpretation, the modulus squared of the wavefunction, |ψ|², is a real number interpreted as the probability density of finding a particle in a given place at a given time, if the particle's position is measured. Since the wavefunction is complex-valued, only its relative phase and relative magnitude can be measured. The wavefunction does not directly tell anything about the magnitudes or directions of measurable observables; one has to apply quantum operators to the wave function ψ and find the eigenvalues, which correspond to the sets of possible results of measurement.

Wave function (continued)

However, complex numbers are not necessarily used in all treatments. Louis de Broglie in his later years proposed a real-valued wave function connected to the complex wave function by a proportionality constant and developed the de Broglie–Bohm theory.

Wave function (continued)

The unit of measurement for ψ depends on the system. For one particle in three dimensions, its units are [length]^(−3/2). These unusual units are required so that an integral of |ψ|² over a region of three-dimensional space is a unitless probability (the probability that the particle is in that region). For different numbers of particles and/or dimensions, the units may be different and can be found by dimensional analysis.
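A one-dimensional analogue of this dimensional argument can be checked numerically. As an illustrative choice (not from the lecture), take the ground state of an infinite square well of width L, for which ψ carries units of [length]^(−1/2) so that the integral of |ψ|² over x is a unitless probability:

```python
import numpy as np

# Ground state of an infinite square well of width L: psi = sqrt(2/L) sin(pi x / L).
L = 2.0                                   # well width, arbitrary length units
N = 100_000
x = np.linspace(0.0, L, N + 1)
psi = np.sqrt(2.0 / L) * np.sin(np.pi * x / L)

# Numerical integral of |psi|^2 over the well: should be a unitless 1.
dx = x[1] - x[0]
prob = np.sum(np.abs(psi) ** 2) * dx
print(round(prob, 6))                     # -> 1.0
```

The factor sqrt(2/L) is exactly what normalization demands; any other prefactor would make the total probability differ from 1.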

Schrödinger's equation

In quantum mechanics, the Schrödinger equation is a partial differential equation that describes how the quantum state of a physical system changes with time. It was formulated in late 1925, and published in 1926, by the Austrian physicist Erwin Schrödinger.

Schrödinger's equation (continued)

In classical mechanics, the equation of motion is Newton's second law, (F = ma), used to mathematically predict what the system will do at any time after the initial conditions of the system. In quantum mechanics, the analogue of Newton's law is Schrödinger's equation for a quantum system (usually atoms, molecules, and subatomic particles whether free, bound, or localized). It is not a simple algebraic equation, but in general a linear partial differential equation, describing the time-evolution of the system's wave function (also called a "state function").
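As a minimal numerical sketch (an illustration under simplifying assumptions, not the lecture's method), the time-independent form Hψ = Eψ can be solved for a particle in an infinite square well by replacing the second derivative with a finite-difference matrix; with ħ = m = 1 and well width 1, the exact energies are E_n = n²π²/2:

```python
import numpy as np

# Interior grid points of the well (psi = 0 at both walls).
N = 500
dx = 1.0 / (N + 1)

# H = -(1/2) d^2/dx^2 with V = 0 inside the well, via central differences:
# diagonal 1/dx^2, off-diagonals -0.5/dx^2.
H = (np.diag(np.full(N, 1.0))
     + np.diag(np.full(N - 1, -0.5), 1)
     + np.diag(np.full(N - 1, -0.5), -1)) / dx**2

energies = np.sort(np.linalg.eigvalsh(H))[:3]
exact = np.array([1.0, 4.0, 9.0]) * np.pi**2 / 2.0
print(np.round(energies, 2))  # close to [4.93, 19.74, 44.41]
```

The lowest numerical eigenvalues agree with n²π²/2 to within the discretization error, which shrinks as the grid is refined.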

Schrödinger's equation (continued)

The concept of a wavefunction is a fundamental postulate of quantum mechanics. Schrödinger's equation is also often presented as a separate postulate, but some authors assert it can be derived from symmetry principles. Generally, "derivations" of the Schrödinger equation demonstrate its mathematical plausibility for describing wave–particle duality.

Schrödinger's equation (continued)

In the standard interpretation of quantum mechanics, the wave function is the most complete description that can be given to a physical system. Solutions to Schrödinger's equation describe not only molecular, atomic, and subatomic systems, but also macroscopic systems, possibly even the whole universe. The Schrödinger equation, in its most general form, is consistent with either classical mechanics or special relativity, but the original formulation by Schrödinger himself was non-relativistic.

Schrödinger's equation (continued)

The Schrödinger equation is not the only way to make predictions in quantum mechanics - other formulations can be used, such as Werner Heisenberg's matrix mechanics, and Richard Feynman's path integral formulation.

Schrödinger's equation (continued)

The solutions of the wave equation are mathematically similar to those found in classical mechanics and electrodynamics.

Dirac equation

Spinor

The Schrödinger equation and the Dirac equation are both linear equations.

Pauli equation

In quantum mechanics, the Pauli equation or Schrödinger–Pauli equation is the formulation of the Schrödinger equation for spin-½ particles, which takes into account the interaction of the particle's spin with an external electromagnetic field. It is the non-relativistic limit of the Dirac equation and can be used where particles are moving at speeds much less than the speed of light, so that relativistic effects can be neglected. It was formulated by Wolfgang Pauli in 1927.

Laser

A laser is a device that emits light through a process of optical amplification based on the stimulated emission of electromagnetic radiation. The term "laser" originated as an acronym for "light amplification by stimulated emission of radiation". A laser differs from other sources of light because it emits light coherently. Spatial coherence allows a laser to be focused to a tight spot, enabling applications like laser cutting and lithography. Spatial coherence also allows a laser beam to stay narrow over long distances (collimation), enabling applications such as laser pointers. Lasers can also have high temporal coherence which allows them to have a very narrow spectrum, i.e., they only emit a single color of light. Temporal coherence can be used to produce pulses of light—as short as a femtosecond.

Laser (continued)

Lasers have many important applications. They are used in common consumer devices such as optical disk drives, laser printers, and barcode scanners. Lasers are used for both fiber-optic and free-space optical communication. They are used in medicine for laser surgery and various skin treatments, and in industry for cutting and welding materials. They are used in military and law enforcement devices for marking targets and measuring range and speed. Laser lighting displays use laser light as an entertainment medium.

Pair production

Quantum teleportation

Information without energy transfer

29. Molecules and solids

Logical gates

30. Nuclear Physics and Radioactivity

Half-life

Exercises:

Find the half-life of an isotope with 1% decay rate per year.
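A hedged worked solution for this exercise (assuming "1% decay rate per year" means 99% of the sample remains after each year):

```python
import math

# exp(-lam * 1 yr) = 0.99  =>  lam = -ln(0.99) per year.
lam = -math.log(0.99)            # decay constant, ~0.01005 per year

# Half-life: the time at which exp(-lam * t) = 1/2.
half_life = math.log(2) / lam    # years until half the sample remains
print(round(half_life, 1))       # -> 69.0
```

If the exercise instead intends the decay constant λ = 0.01 per year exactly, the same formula gives t½ = ln 2 / 0.01 ≈ 69.3 years; the two readings agree to within half a percent.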

31. Nuclear Energy; Effects and Uses of Radiation

Main nuclear reaction in nuclear energy production

32. Elementary particles

Quark

A quark (/ˈkwɔrk/ or /ˈkwɑrk/) is an elementary particle and a fundamental constituent of matter. Quarks combine to form composite particles called hadrons, the most stable of which are protons and neutrons, the components of atomic nuclei. Due to a phenomenon known as color confinement, quarks are never directly observed or found in isolation; they can be found only within hadrons, such as baryons (of which protons and neutrons are examples), and mesons. For this reason, much of what is known about quarks has been drawn from observations of the hadrons themselves.

Quark (continued)

There are six types of quarks, known as flavors: up, down, strange, charm, bottom, and top. Up and down quarks have the lowest masses of all quarks. The heavier quarks rapidly change into up and down quarks through a process of particle decay: the transformation from a higher mass state to a lower mass state. Because of this, up and down quarks are generally stable and the most common in the universe, whereas strange, charm, bottom, and top quarks can only be produced in high energy collisions (such as those involving cosmic rays and in particle accelerators).

Quark (continued)

Quarks have various intrinsic properties, including electric charge, mass, color charge and spin. Quarks are the only elementary particles in the Standard Model of particle physics to experience all four fundamental interactions, also known as fundamental forces (electromagnetism, gravitation, strong interaction, and weak interaction), as well as the only known particles whose electric charges are not integer multiples of the elementary charge. For every quark flavor there is a corresponding type of antiparticle, known as an antiquark, that differs from the quark only in that some of its properties have equal magnitude but opposite sign.
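The fractional charges can be made concrete with a short check (an illustration, not from the source): summing the constituent quark charges of the proton (uud) and neutron (udd) reproduces their integer total charges.

```python
from fractions import Fraction

# Quark electric charges in units of the elementary charge e.
charge = {
    "up": Fraction(2, 3),    "charm": Fraction(2, 3),    "top": Fraction(2, 3),
    "down": Fraction(-1, 3), "strange": Fraction(-1, 3), "bottom": Fraction(-1, 3),
}

proton = charge["up"] * 2 + charge["down"]    # uud: 2/3 + 2/3 - 1/3
neutron = charge["up"] + charge["down"] * 2   # udd: 2/3 - 1/3 - 1/3
print(proton, neutron)                        # -> 1 0
```

Exact rational arithmetic makes the point cleanly: only in combinations do the non-integer quark charges add up to the integer charges observed for hadrons.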

Quark (continued)

The quark model was independently proposed by physicists Murray Gell-Mann and George Zweig in 1964. Quarks were introduced as parts of an ordering scheme for hadrons, and there was little evidence for their physical existence until deep inelastic scattering experiments at the Stanford Linear Accelerator Center in 1968. Accelerator experiments have provided evidence for all six flavors. The top quark was the last to be discovered at Fermilab in 1995.

Lepton

A lepton is an elementary, spin 1⁄2 particle that does not undergo strong interactions, but is subject to the Pauli exclusion principle. The best known of all leptons is the electron, which governs nearly all of chemistry as it is found in atoms and is directly tied to all chemical properties. Two main classes of leptons exist: charged leptons (also known as the electron-like leptons), and neutral leptons (better known as neutrinos). Charged leptons can combine with other particles to form various composite particles such as atoms and positronium, while neutrinos rarely interact with anything, and are consequently rarely observed.

Lepton (continued)

There are six types of leptons, known as flavours, forming three generations. The first generation is the electronic leptons, comprising the electron (e−) and electron neutrino (νe); the second is the muonic leptons, comprising the muon (μ−) and muon neutrino (νμ); and the third is the tauonic leptons, comprising the tau (τ−) and the tau neutrino (ντ). Electrons have the least mass of all the charged leptons. The heavier muons and taus will rapidly change into electrons through a process of particle decay: the transformation from a higher mass state to a lower mass state. Thus electrons are stable and the most common charged lepton in the universe, whereas muons and taus can only be produced in high energy collisions (such as those involving cosmic rays and those carried out in particle accelerators).

Lepton (continued)

Leptons have various intrinsic properties, including electric charge, spin, and mass. Unlike quarks however, leptons are not subject to the strong interaction, but they are subject to the other three fundamental interactions: gravitation, electromagnetism (excluding neutrinos, which are electrically neutral), and the weak interaction. For every lepton flavor there is a corresponding type of antiparticle, known as antilepton, that differs from the lepton only in that some of its properties have equal magnitude but opposite sign. However, according to certain theories, neutrinos may be their own antiparticle, but it is not currently known whether this is the case or not.

Lepton (continued)

The first charged lepton, the electron, was theorized in the mid-19th century by several scientists and was discovered in 1897 by J. J. Thomson. The next lepton to be observed was the muon, discovered by Carl D. Anderson in 1936, which was classified as a meson at the time. After investigation, it was realized that the muon did not have the expected properties of a meson, but rather behaved like an electron, only with higher mass. It took until 1947 for the concept of "leptons" as a family of particles to be proposed. The first neutrino, the electron neutrino, was proposed by Wolfgang Pauli in 1930 to explain certain characteristics of beta decay. It was first observed in the Cowan–Reines neutrino experiment conducted by Clyde Cowan and Frederick Reines in 1956. The muon neutrino was discovered in 1962 by Leon M. Lederman, Melvin Schwartz and Jack Steinberger, and the tau was discovered between 1974 and 1977 by Martin Lewis Perl and his colleagues from the Stanford Linear Accelerator Center and Lawrence Berkeley National Laboratory. The tau neutrino remained elusive until July 2000, when the DONUT collaboration from Fermilab announced its discovery.

Lepton (continued)

Leptons are an important part of the Standard Model. Electrons are one of the components of atoms, alongside protons and neutrons. Exotic atoms with muons and taus instead of electrons can also be synthesized, as well as lepton–antilepton particles such as positronium.

Feynman diagram

Standard Model

The Standard Model of particle physics is a theory concerning the electromagnetic, weak, and strong nuclear interactions, which mediate the dynamics of the known subatomic particles. It was developed throughout the latter half of the 20th century, as a collaborative effort of scientists around the world. The current formulation was finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, discoveries of the top quark (1995), the tau neutrino (2000), and more recently the Higgs boson (2012) have given further credence to the Standard Model. Because of its success in explaining a wide variety of experimental results, the Standard Model is sometimes regarded as a "theory of almost everything".

Standard Model (continued)

The Standard Model falls short of being a complete theory of fundamental interactions. It does not incorporate the full theory of gravitation as described by general relativity, or account for the accelerating expansion of the universe (as possibly described by dark energy). The model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology. It also does not incorporate neutrino oscillations (and their non-zero masses). Although the Standard Model is believed to be theoretically self-consistent and has demonstrated huge and continued successes in providing experimental predictions, it does leave some phenomena unexplained.

Standard Model (continued)

The development of the Standard Model was driven by theoretical and experimental particle physicists alike. For theorists, the Standard Model is a paradigm of a quantum field theory, which exhibits a wide range of physics including spontaneous symmetry breaking, anomalies, non-perturbative behavior, etc. It is used as a basis for building more exotic models that incorporate hypothetical particles, extra dimensions, and elaborate symmetries (such as supersymmetry) in an attempt to explain experimental results at variance with the Standard Model, such as the existence of dark matter and neutrino oscillations.

Standard Model (continued)

This is the Lagrangian of the Standard Model of Particle Physics.

Antimatter

In particle physics, antimatter is material composed of antiparticles, which have the same mass as particles of ordinary matter but have opposite charge and other particle properties such as lepton and baryon number, quantum spin, etc. Encounters between particles and antiparticles lead to the annihilation of both, giving rise to varying proportions of high-energy photons (gamma rays), neutrinos, and lower-mass particle–antiparticle pairs. Setting aside the mass of any product neutrinos, which represent released energy which generally continues to be unavailable, the end result of annihilation is a release of energy available to do work, proportional to the total matter and antimatter mass, in accord with the mass-energy equivalence equation, E=mc2.

Antimatter (continued)

Antiparticles bind with each other to form antimatter just as ordinary particles bind to form normal matter. For example, a positron (the antiparticle of the electron) and an antiproton can form an antihydrogen atom. Physical principles indicate that complex antimatter atomic nuclei are possible, as well as anti-atoms corresponding to the known chemical elements. To date, however, anti-atoms more complex than antihelium have neither been artificially produced nor observed in nature. Studies of cosmic rays have identified both positrons and antiprotons, presumably produced by high-energy collisions between particles of ordinary matter.

Antimatter (continued)

There is considerable speculation as to why the observable universe is apparently composed almost entirely of ordinary matter, as opposed to a more symmetric combination of matter and antimatter. This asymmetry of matter and antimatter in the visible universe is one of the greatest unsolved problems in physics. The process by which this asymmetry between particles and antiparticles developed is called baryogenesis.

Antimatter (continued)

Antimatter in the form of anti-atoms is one of the most difficult materials to produce. Antimatter in the form of individual anti-particles, however, is commonly produced by particle accelerators and in some types of radioactive decay.

Neutrino

Neutrinos do not carry electric charge, which means that they are not affected by the electromagnetic forces that act on charged particles such as electrons and protons. Neutrinos are affected only by the weak sub-atomic force, of much shorter range than electromagnetism, and gravity, which is relatively weak on the subatomic scale. Therefore a typical neutrino passes through normal matter unimpeded.

Neutrino (continued)

Neutrinos are created as a result of certain types of radioactive decay, or nuclear reactions such as those that take place in the Sun, in nuclear reactors, or when cosmic rays hit atoms. There are three types, or "flavors", of neutrinos: electron neutrinos, muon neutrinos and tau neutrinos. Each type is associated with an antiparticle, called an "antineutrino", which also has neutral electric charge and half-integer spin. Whether or not the neutrino and its corresponding antineutrino are identical particles has not yet been resolved, even though the antineutrino has an opposite chirality to the neutrino.

Most neutrinos passing through the Earth emanate from the Sun. About 65 billion (6.5×10^10) solar neutrinos per second pass through every square centimeter perpendicular to the direction of the Sun in the region of the Earth.

Quantum field theory

In theoretical physics, quantum field theory (QFT) is a theoretical framework for constructing quantum mechanical models of subatomic particles in particle physics and quasiparticles in condensed matter physics. A QFT treats particles as excited states of an underlying physical field, so these are called field quanta.

Quantum field theory (continued)

For example, quantum electrodynamics (QED) has one electron field and one photon field; quantum chromodynamics (QCD) has one field for each type of quark; and, in condensed matter, there is an atomic displacement field that gives rise to phonon particles. Edward Witten describes QFT as "by far" the most difficult theory in modern physics.

Quantum field theory (continued)

In QFT, quantum mechanical interactions between particles are described by interaction terms between the corresponding underlying fields. QFT interaction terms are similar in spirit to those between charges with electric and magnetic fields in Maxwell's equations. However, unlike the classical fields of Maxwell's theory, fields in QFT generally exist in quantum superpositions of states and are subject to the laws of quantum mechanics.

Quantum field theory (continued)

Quantum mechanical systems have a fixed number of particles, with each particle having a finite number of degrees of freedom. In contrast, the excited states of a QFT can represent any number of particles. This makes quantum field theories especially useful for describing systems where the particle number may change over time, a crucial feature of relativistic dynamics.

Quantum field theory (continued)

Because the fields are continuous quantities over space, there exist excited states with arbitrarily large numbers of particles in them, providing QFT systems with an effectively infinite number of degrees of freedom. Infinite degrees of freedom can easily lead to divergences of calculated quantities (i.e., the quantities become infinite). Techniques such as renormalization of QFT parameters or discretization of spacetime, as in lattice QCD, are often used to avoid such infinities so as to yield physically meaningful results.

Quantum field theory (continued)

Most theories in standard particle physics are formulated as relativistic quantum field theories, such as QED, QCD, and the Standard Model. QED, the quantum field-theoretic description of the electromagnetic field, approximately reproduces Maxwell's theory of electrodynamics in the low-energy limit, with small non-linear corrections to the Maxwell equations required due to virtual electron–positron pairs.

Quantum field theory (continued)

In the perturbative approach to quantum field theory, the full field interaction terms are approximated as a perturbative expansion in the number of particles involved. Each term in the expansion can be thought of as forces between particles being mediated by other particles. In QED, the electromagnetic force between two electrons is caused by an exchange of photons. Similarly, intermediate vector bosons mediate the weak force and gluons mediate the strong force in QCD. The notion of a force-mediating particle comes from perturbation theory, and does not make sense in the context of non-perturbative approaches to QFT, such as with bound states.

Quantum field theory (continued)

The gravitational field and the electromagnetic field are the only two fundamental fields in nature that have infinite range and a corresponding classical low-energy limit, which greatly diminishes and hides their "particle-like" excitations. Albert Einstein, in 1905, attributed "particle-like" and discrete exchanges of momenta and energy, characteristic of "field quanta", to the electromagnetic field. Originally, his principal motivation was to explain the thermodynamics of radiation.

Quantum field theory (continued)

Although the photoelectric effect and Compton scattering strongly suggest the existence of the photon, it is now understood that they can be explained without invoking a quantum electromagnetic field; therefore, a more definitive proof of the quantum nature of radiation is now taken up into modern quantum optics as in the antibunching effect.

Quantum field theory (continued)

There is currently no complete quantum theory of the remaining fundamental force, gravity. Many of the proposed theories to describe gravity as a QFT postulate the existence of a graviton particle that mediates the gravitational force. Presumably, the as yet unknown correct quantum field-theoretic treatment of the gravitational field will behave like Einstein's general theory of relativity in the low-energy limit. Quantum field theory of the fundamental forces itself has been postulated to be the low-energy effective field theory limit of a more fundamental theory such as superstring theory.

Quantum electrodynamics

In particle physics, quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics. In essence, it describes how light and matter interact, and it is the first theory in which full agreement between quantum mechanics and special relativity is achieved. QED mathematically describes all phenomena involving electrically charged particles interacting by means of exchange of photons, and represents the quantum counterpart of classical electromagnetism, giving a complete account of matter and light interaction.

Quantum electrodynamics (continued)

In technical terms, QED can be described as a perturbation theory of the electromagnetic quantum vacuum. Richard Feynman called it "the jewel of physics" for its extremely accurate predictions of quantities like the anomalous magnetic moment of the electron and the Lamb shift of the energy levels of hydrogen.

Quantum chromodynamics

In theoretical physics, quantum chromodynamics (QCD) is the theory of strong interactions, a fundamental force describing the interactions between quarks and gluons which make up hadrons such as the proton, neutron and pion. QCD is a type of quantum field theory called a non-abelian gauge theory with symmetry group SU(3). The QCD analog of electric charge is a property called color. Gluons are the force carrier of the theory, like photons are for the electromagnetic force in quantum electrodynamics. The theory is an important part of the Standard Model of particle physics. A huge body of experimental evidence for QCD has been gathered over the years.

Quantum chromodynamics (continued)

QCD enjoys two peculiar properties:

• Confinement, which means that the force between quarks does not diminish as they are separated. Because of this, when a quark is pulled away from other quarks, the energy in the gluon field is enough to create another quark pair; quarks are thus forever bound into hadrons such as the proton and the neutron, or the pion and kaon. Although analytically unproven, confinement is widely believed to be true because it explains the consistent failure of free quark searches, and it is easy to demonstrate in lattice QCD.

• Asymptotic freedom, which means that in very high-energy reactions quarks and gluons interact very weakly, creating a quark–gluon plasma. This prediction of QCD was first discovered in the early 1970s by David Politzer and by Frank Wilczek and David Gross. For this work they were awarded the 2004 Nobel Prize in Physics.

Quantum chromodynamics (continued)

The phase transition temperature between these two regimes has been measured by the ALICE experiment to be around 160 MeV. Below this temperature, confinement is dominant; above it, asymptotic freedom becomes dominant.

Electroweak interaction

In particle physics, the electroweak interaction is the unified description of two of the four known fundamental interactions of nature: electromagnetism and the weak interaction. Although these two forces appear very different at everyday low energies, the theory models them as two different aspects of the same force. Above the unification energy, on the order of 100 GeV, they would merge into a single electroweak force. Thus, if the universe is hot enough (approximately 10^15 K, a temperature not exceeded since shortly after the Big Bang), the electromagnetic force and weak force merge into a combined electroweak force. During the electroweak epoch, the electroweak force separated from the strong force. During the quark epoch, the electroweak force split into the electromagnetic and weak forces.
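The two quoted numbers can be cross-checked with a quick unit conversion (a sketch, not part of the source): a thermal energy k_B·T equal to the ~100 GeV unification scale corresponds to a temperature of order 10^15 K.

```python
k_B = 8.617333e-5        # Boltzmann constant in eV/K
E_unification = 100e9    # ~100 GeV expressed in eV

# Temperature at which typical thermal energies reach the unification scale.
T = E_unification / k_B
print(f"{T:.2e} K")      # -> 1.16e+15 K
```

This matches the ~10^15 K figure quoted for the early universe shortly after the Big Bang.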

Electroweak interaction (continued)

For contributions to the unification of the weak and electromagnetic interaction between elementary particles, Abdus Salam, Sheldon Glashow and Steven Weinberg were awarded the Nobel Prize in Physics in 1979. The existence of the electroweak interactions was experimentally established in two stages, the first being the discovery of neutral currents in neutrino scattering by the Gargamelle collaboration in 1973, and the second in 1983 by the UA1 and the UA2 collaborations that involved the discovery of the W and Z gauge bosons in proton–antiproton collisions at the converted Super Proton Synchrotron. In 1999, Gerardus 't Hooft and Martinus Veltman were awarded the Nobel prize for showing that the electroweak theory is renormalizable.

Higgs boson

The Higgs boson or Higgs particle is an elementary particle in the Standard Model of particle physics. Its main relevance is that it is the smallest possible excitation of the Higgs field – a field that unlike the more familiar electromagnetic field cannot be "turned off", but instead takes a non-zero constant value almost everywhere. The presence of this field explains why some fundamental particles have mass while the symmetries controlling their interactions should require them to be massless, and why the weak force has a much shorter range than the electromagnetic force.

Higgs boson (continued)

Despite being present everywhere, the existence of the Higgs field has been very hard to confirm, because it is extremely hard to create its excitations (i.e. Higgs particles). The search for this elusive particle took more than 40 years and led to the construction of one of the world's most expensive and complex experimental facilities to date, CERN's Large Hadron Collider, able to create Higgs bosons and other particles for observation and study. On 4 July 2012, the discovery of a new particle with a mass between 125 and 127 GeV/c² was announced; physicists suspected that it was the Higgs boson. By March 2013, the particle had been proven to behave, interact and decay in many of the ways predicted by the Standard Model, and was also tentatively confirmed to have positive parity and zero spin, two fundamental attributes of a Higgs boson. This appears to be the first elementary scalar particle discovered in nature. More data are needed to determine whether the discovered particle exactly matches the predictions of the Standard Model, or whether, as predicted by some theories, multiple Higgs bosons exist.

Higgs boson (continued)

The Higgs boson is named after Peter Higgs, one of six physicists who, in 1964, proposed the mechanism that suggested the existence of such a particle. Although Higgs's name has come to be associated with this theory, several researchers between about 1960 and 1972 each independently developed different parts of it. In mainstream media the Higgs boson has often been called the "God particle", from a 1993 book on the topic; the nickname is strongly disliked by many physicists, including Higgs, who regard it as inappropriate sensationalism. On December 10, 2013, two of the original researchers, Peter Higgs and François Englert, were awarded the Nobel Prize in Physics for their work and prediction. Englert's co-researcher Robert Brout had died in 2011 and the Nobel Prize is not ordinarily given posthumously.

Higgs boson (continued)

In the Standard Model, the Higgs particle is a boson with no spin, electric charge, or colour charge. It is also very unstable, decaying into other particles almost immediately. It is a quantum excitation of one of the four components of the Higgs field. The latter constitutes a scalar field, with two neutral and two electrically charged components, and forms a complex doublet of the weak isospin SU(2) symmetry. The field has a "Mexican hat" shaped potential with nonzero strength everywhere (including otherwise empty space), which in its vacuum state breaks the weak isospin symmetry of the electroweak interaction. When this happens, three components of the Higgs field are "absorbed" by the SU(2) and U(1) gauge bosons (the "Higgs mechanism") to become the longitudinal components of the now-massive W and Z bosons of the weak force. The remaining electrically neutral component separately couples to other particles known as fermions (via Yukawa couplings), causing these to acquire mass as well. Some versions of the theory predict more than one kind of Higgs field and boson. Alternative "Higgsless" models would have been considered if the Higgs boson had not been discovered.

Yang–Mills theory

Yang–Mills theory is a gauge theory based on the SU(N) group, or more generally any compact, semi-simple Lie group. Yang–Mills theory seeks to describe the behavior of elementary particles using these non-Abelian Lie groups and is at the core of the unification of the electromagnetic and weak forces (i.e. U(1) × SU(2)) as well as quantum chromodynamics, the theory of the strong force (based on SU(3)). Thus it forms the basis of our understanding of particle physics, the Standard Model.

Yang–Mills–Higgs equations

In mathematics, the Yang–Mills–Higgs equations are a set of non-linear partial differential equations for a Yang–Mills field, given by a connection, and a Higgs field, given by a section of a vector bundle.

Klein–Gordon equation

The Klein–Gordon equation (Klein–Fock–Gordon equation or sometimes Klein–Gordon–Fock equation) is a relativistic version of the Schrödinger equation. Its solutions include a quantum scalar or pseudoscalar field, a field whose quanta are spinless particles. It cannot be straightforwardly interpreted as a Schrödinger equation for a quantum state, because it is second order in time and because it does not admit a positive definite conserved probability density. Still, with the appropriate interpretation, it does describe the quantum amplitude for finding a point particle in various places, the relativistic wavefunction, but the particle propagates both forwards and backwards in time. Any solution to the Dirac equation is automatically a solution to the Klein–Gordon equation, but the converse is not true.
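In the notation of this section, the free Klein–Gordon equation for a field ψ of mass m can be written as

```latex
\frac{1}{c^{2}}\frac{\partial^{2}\psi}{\partial t^{2}}
  - \nabla^{2}\psi + \frac{m^{2}c^{2}}{\hbar^{2}}\,\psi = 0
```

The second-order time derivative is what distinguishes it from the Schrödinger equation, which is first order in time, and is the source of the interpretational difficulties described above.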

Grand Unified Theory

A Grand Unified Theory (GUT) is a model in particle physics in which at high energy, the three gauge interactions of the Standard Model which define the electromagnetic, weak, and strong interactions or forces, are merged into one single force. This unified interaction is characterized by one larger gauge symmetry and thus several force carriers, but one unified coupling constant. If Grand Unification is realized in nature, there is the possibility of a grand unification epoch in the early universe in which the fundamental forces are not yet distinct.

Grand Unified Theory (continued)

Models that do not unify all interactions using one simple Lie group as the gauge symmetry, but do so using semisimple groups, can exhibit similar properties and are sometimes referred to as Grand Unified Theories as well.

Grand Unified Theory (continued)

Unifying gravity with the other three interactions would provide a theory of everything (TOE), rather than a GUT. Nevertheless, GUTs are often seen as an intermediate step towards a TOE.

Grand Unified Theory (continued)

Because their masses are predicted to lie at the GUT scale, just a few orders of magnitude below the Planck scale and well beyond the reach of any foreseen particle collider experiment, novel particles predicted by GUT models cannot be observed directly. Instead, effects of grand unification might be detected through indirect observations such as proton decay, electric dipole moments of elementary particles, or the properties of neutrinos.[1] Some grand unified theories predict the existence of magnetic monopoles.

Grand Unified Theory (continued)

As of 2012, all GUT models which aim to be completely realistic are quite complicated, even compared to the Standard Model, because they need to introduce additional fields and interactions, or even additional dimensions of space. The main reason for this complexity lies in the difficulty of reproducing the observed fermion masses and mixing angles. Due to this difficulty, and due to the lack of any observed effect of grand unification so far, there is no generally accepted GUT model.

Exercises

7. High resolution with electrons: What is the wavelength, and hence the expected resolution, for a beam of 1.3 GeV electrons?

Approach: The rest energy of the electron is about 0.5 MeV, negligible compared with 1.3 GeV, so we can ignore the term (m₀c²)² in E² = p²c² + (m₀c²)². Then pc ≈ E and

λ = h/p ≈ hc/E
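The estimate can be checked numerically (a sketch; hc ≈ 1240 MeV·fm is the standard shortcut, valid when pc ≈ E for an ultra-relativistic beam):

```python
# De Broglie wavelength of a highly relativistic electron beam.
# For E >> m0*c^2 (0.5 MeV << 1.3 GeV), p*c ~ E, so lambda = h/p = h*c/E.
hc_MeV_fm = 1239.84        # h*c in MeV*fm
E_MeV = 1300.0             # beam energy, 1.3 GeV

lam_fm = hc_MeV_fm / E_MeV     # wavelength in femtometres
lam_m = lam_fm * 1e-15         # same, in metres
print(f"lambda = {lam_fm:.2f} fm")
```

The result, about 0.95 fm, is smaller than a nucleon, which is why GeV electron beams can resolve sub-nuclear structure.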

Exercises (continued)

8. What is the wavelength of a proton with KE = 1 TeV?

Approach: The rest energy of the proton is about 1 GeV, negligible compared with 1 TeV, so we can ignore the term (m₀c²)² in E² = p²c² + (m₀c²)². Then

λ = h/p ≈ hc/E
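Numerically (a sketch under the same ultra-relativistic approximation pc ≈ E):

```python
# De Broglie wavelength of a 1 TeV proton.
# The rest energy (~0.938 GeV) is negligible next to 1 TeV, so pc ~ E.
hc_MeV_fm = 1239.84          # h*c in MeV*fm
E_MeV = 1.0e6                # ~1 TeV in MeV

lam_fm = hc_MeV_fm / E_MeV   # ~1.24e-3 fm
lam_m = lam_fm * 1e-15       # ~1.24e-18 m
```

This is roughly a thousandth of a proton radius, the scale probed by TeV colliders.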

Exercises (continued)

9. A small cyclotron of maximum radius R = 0.25 m accelerates protons in a 1.7-T magnetic field. Calculate the frequency needed for the applied alternating voltage and the kinetic energy of the protons when they leave the cyclotron.

Approach:
• F = qvB
• F = ma = mv²/r
• T = D/v = 2πr/v
• f = 1/T
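Combining the relations above gives f = qB/(2πm) (independent of the orbit radius) and, at exit, KE = (qBR)²/(2m). A numerical sketch with standard constants:

```python
import math

# Cyclotron frequency and exit kinetic energy for the exercise's numbers:
# protons, B = 1.7 T, maximum radius R = 0.25 m.
q = 1.602e-19    # proton charge, C
m = 1.673e-27    # proton mass, kg
B = 1.7          # magnetic field, T
R = 0.25         # maximum radius, m

f = q * B / (2 * math.pi * m)        # revolution frequency, same for every orbit
KE_J = (q * B * R) ** 2 / (2 * m)    # KE = p^2/(2m) with p = qBR at r = R
KE_MeV = KE_J / 1.602e-13

print(f"f  = {f/1e6:.1f} MHz")
print(f"KE = {KE_MeV:.1f} MeV")
```

The frequency comes out near 26 MHz and the exit energy near 9 MeV, low enough that the non-relativistic treatment is adequate.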

Exercises (continued)

10. What is the speed of a 1 TeV proton?

Approach:
• The rest energy of the proton is about 1 GeV, far below KE = 1 TeV, thus relativistic calculations must be used
• KE = (γ − 1)m₀c²
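Plugging in numbers (a sketch; m₀c² ≈ 938 MeV for the proton):

```python
import math

# Speed of a 1 TeV proton: KE = (gamma - 1) * m0 * c^2.
KE_MeV = 1.0e6       # 1 TeV in MeV
mc2_MeV = 938.27     # proton rest energy in MeV

gamma = 1 + KE_MeV / mc2_MeV          # ~1067
beta = math.sqrt(1 - 1 / gamma**2)    # v/c, extremely close to 1
print(f"gamma = {gamma:.0f}, 1 - beta = {1 - beta:.2e}")
```

The proton moves at more than 0.9999995 c; quoting 1 − β (about 4×10⁻⁷) is more informative than quoting β itself.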

Exercises (continued)

11. Proton decay: An experiment uses 3300 tons of water waiting to see a proton decay of the type p → π⁰ + e⁺. If the experiment is run for 4 years without detecting a decay, estimate the lower limit on the proton half-life.

• Approach:
• ΔN = λNΔt = (ln 2 / T½)NΔt
• Require ΔN < 1
• T½ > NΔt ln 2 ≈ (0.7)N(4 years)

• Every molecule of water H2O contains 2 + 8 = 10 protons.

• 1 mol of water has the approximate mass of 18 g and contains NA molecules of water.

• NA = 6×10²³ molecules/mol.
• There are approximately 10³³ protons in 3300 tons of water, thus, N = 10³³.
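The whole estimate fits in a few lines (a sketch; the proton count N ≈ 10³³ is the rounded figure from the bullets above):

```python
import math

# Lower limit on the proton half-life from seeing zero decays in 4 years.
# Expected decays: dN = (ln 2 / T_half) * N * dt; demanding dN < 1 gives
# T_half > ln 2 * N * dt.
N = 1e33          # protons in 3300 tons of water
dt_yr = 4.0       # observation time in years

T_half_yr = math.log(2) * N * dt_yr   # ~2.8e33 years
print(f"T_half > {T_half_yr:.1e} years")
```

Even this modest experiment bounds the proton half-life at more than 10²³ times the age of the universe.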

33. Astrophysics and cosmology

Galaxy

A galaxy is a massive, gravitationally bound system consisting of stars, stellar remnants, an interstellar medium of gas and dust, and dark matter, an important but poorly understood component. The word galaxy is derived from the Greek galaxias (γαλαξίας), literally "milky", a reference to the Milky Way. Examples of galaxies range from dwarfs with as few as ten million (10⁷) stars to giants with one hundred trillion (10¹⁴) stars, all orbiting their galaxy's center of mass.

Galaxy (continued)

Galaxies contain varying numbers of planets, star systems, star clusters and types of interstellar clouds. In between these objects is a sparse interstellar medium of gas, dust, and cosmic rays. Many galaxies are believed to have supermassive black holes (SMBH) at their center. These are thought to be the primary driver of active galactic nuclei found at the core of such galaxies. The Milky Way's presumed SMBH, known as Sagittarius A*, has a mass four million times that of our sun.

Galaxy (continued)

Galaxies have been historically categorized according to their apparent shape, usually referred to as their visual morphology. A common form is the elliptical galaxy, which has an ellipse-shaped light profile. Spiral galaxies are disk-shaped with dusty, curving arms. Those with irregular or unusual shapes are known as irregular galaxies and typically originate from disruption by the gravitational pull of neighboring galaxies. Such interactions between nearby galaxies, which may ultimately result in a merger, sometimes induce a significantly increased incidence of star formation, leading to starburst galaxies.

Galaxy (continued)

There are probably more than 170 billion galaxies in the observable universe. Most are 1,000 to 100,000 parsecs in diameter and usually separated by distances on the order of millions of parsecs (or megaparsecs). Intergalactic space (the space between galaxies) is filled with a tenuous gas of an average density less than one atom per cubic meter. The majority of galaxies are organized into a neither fully random nor fully deterministic set of associations known as galaxy groups and clusters, which, in turn usually form larger superclusters. At the largest scale, these associations are generally arranged into sheets and filaments, which are surrounded by immense voids.

Light-year

A light-year (symbol: ly), sometimes written light year or lightyear, is a unit of length used informally to express astronomical distances. It is equal to just under 10 trillion kilometres (or about 6 trillion miles). As defined by the International Astronomical Union (IAU), a light-year is the distance that light travels in vacuum in one Julian year. Because it includes the word year, the term light-year is sometimes misinterpreted as a unit of time. The light-year is most often used when expressing distances to stars and other distances on a galactic scale, especially in non-specialist and popular science publications. The unit usually used in professional astrometry is the parsec (symbol: pc, approximately 3.26 light-years; the distance at which one astronomical unit subtends an angle of one second of arc).
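The IAU definition makes the conversion a one-line calculation (the Julian year of exactly 365.25 days is the IAU convention):

```python
# One light-year: the distance light travels in vacuum in one Julian year.
c = 299_792_458                    # speed of light in m/s, exact by definition
julian_year_s = 365.25 * 86_400    # Julian year in seconds: 31,557,600 s

ly_m = c * julian_year_s           # ~9.461e15 m
ly_km = ly_m / 1000                # "just under 10 trillion kilometres"
print(f"1 ly = {ly_km:.4e} km")
```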

Stellar evolution

Stellar evolution is the process by which a star undergoes a sequence of radical changes during its lifetime. Depending on the mass of the star, this lifetime ranges from only a few million years for the most massive to trillions of years for the least massive, which is considerably longer than the age of the universe. The table shows the lifetimes of stars as a function of their masses. All stars are born from collapsing clouds of gas and dust, often called nebulae or molecular clouds. Over the course of millions of years, these protostars settle down into a state of equilibrium, becoming what is known as a main-sequence star.

Stellar evolution (continued)

Nuclear fusion powers a star for most of its life. Initially the energy is generated by the fusion of hydrogen atoms at the core of the main-sequence star. Later, as the preponderance of atoms at the core becomes helium, stars like the Sun begin to fuse hydrogen along a spherical shell surrounding the core. This process causes the star to gradually grow in size, passing through the subgiant stage until it reaches the red giant phase. Stars with at least half the mass of the Sun can also begin to generate energy through the fusion of helium at their core, whereas more massive stars can fuse heavier elements along a series of concentric shells. Once a star like the Sun has exhausted its nuclear fuel, its core collapses into a dense white dwarf and the outer layers are expelled as a planetary nebula. Stars with around ten or more times the mass of the Sun can explode in a supernova as their inert iron cores collapse into an extremely dense neutron star or black hole. Although the universe is not old enough for any of the smallest red dwarfs to have reached the end of their lives, stellar models suggest they will slowly become brighter and hotter before running out of hydrogen fuel and becoming low-mass white dwarfs.

Stellar evolution (continued)

Stellar evolution is not studied by observing the life of a single star, as most stellar changes occur too slowly to be detected, even over many centuries. Instead, astrophysicists come to understand how stars evolve by observing numerous stars at various points in their lifetime, and by simulating stellar structure using computer models.

Nucleosynthesis

Nucleosynthesis is the process that creates new atomic nuclei from pre-existing nucleons, primarily protons and neutrons. The first nuclei were formed about three minutes after the Big Bang, through the process called Big Bang nucleosynthesis. It was then that hydrogen and helium formed to become the content of the first stars, and this process is responsible for the present hydrogen/helium ratio of the cosmos.

Nucleosynthesis (continued)

With the formation of stars, heavier nuclei were created from hydrogen and helium by stellar nucleosynthesis, a process that continues today. Some of these elements, particularly those lighter than iron, continue to be delivered to the interstellar medium when low mass stars eject their outer envelope before they collapse to form white dwarfs. The remains of their ejected mass form the planetary nebulae observable throughout our galaxy.

Nucleosynthesis (continued)

Supernova nucleosynthesis within exploding stars by fusing carbon and oxygen is responsible for the abundances of elements between magnesium (atomic number 12) and nickel (atomic number 28). Supernova nucleosynthesis is also thought to be responsible for the creation of rarer elements heavier than iron and nickel, in the last few seconds of a type II supernova event. The synthesis of these heavier elements absorbs energy (endothermic) as they are created, from the energy produced during the supernova explosion. Some of those elements are created from the absorption of multiple neutrons (the R process) in the period of a few seconds during the explosion. The elements formed in supernovas include the heaviest elements known, such as the long-lived elements uranium and thorium.

Nucleosynthesis (continued)

Cosmic ray spallation, caused when cosmic rays impact the interstellar medium and fragment larger atomic species, is a significant source of the lighter nuclei, particularly ³He, ⁹Be, ¹⁰B and ¹¹B, that are not created by stellar nucleosynthesis.

Nucleosynthesis (continued)

In addition to the fusion processes responsible for the growing abundances of elements in the universe, a few minor natural processes continue to produce very small numbers of new nuclides on Earth. These nuclides contribute little to their abundances, but may account for the presence of specific new nuclei. They are produced via radiogenesis (decay) of long-lived, heavy, primordial radionuclides such as uranium and thorium. Cosmic ray bombardment of elements on Earth also contributes to the presence of rare, short-lived atomic species called cosmogenic nuclides.

General relativity

General relativity, or the general theory of relativity, is the geometric theory of gravitation published by Albert Einstein in 1916 and the current description of gravitation in modern physics. General relativity generalizes special relativity and Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or spacetime. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present. The relation is specified by the Einstein field equations, a system of partial differential equations.

General relativity (continued)

Some predictions of general relativity differ significantly from those of classical physics, especially concerning the passage of time, the geometry of space, the motion of bodies in free fall, and the propagation of light. Examples of such differences include gravitational time dilation, gravitational lensing, the gravitational redshift of light, and the gravitational time delay. The predictions of general relativity have been confirmed in all observations and experiments to date. Although general relativity is not the only relativistic theory of gravity, it is the simplest theory that is consistent with experimental data. However, unanswered questions remain, the most fundamental being how general relativity can be reconciled with the laws of quantum physics to produce a complete and self-consistent theory of quantum gravity.

General relativity (continued)

Einstein's theory has important astrophysical implications. For example, it implies the existence of black holes—regions of space in which space and time are distorted in such a way that nothing, not even light, can escape—as an end-state for massive stars. There is ample evidence that the intense radiation emitted by certain kinds of astronomical objects is due to black holes; for example, microquasars and active galactic nuclei result from the presence of stellar black holes and black holes of a much more massive type, respectively. The bending of light by gravity can lead to the phenomenon of gravitational lensing, in which multiple images of the same distant astronomical object are visible in the sky. General relativity also predicts the existence of gravitational waves, which have since been observed indirectly; a direct measurement is the aim of projects such as LIGO, the NASA/ESA Laser Interferometer Space Antenna and various pulsar timing arrays. In addition, general relativity is the basis of current cosmological models of a consistently expanding universe.

General relativity (continued)

The Λ (lambda) term in the Einstein field equations accounts for dark energy

Equivalence principle:

Inertial mass = gravitational mass

Black hole

A black hole is a region of spacetime where gravity is so strong that no particle or light ray entering that region can ever escape from it. The theory of general relativity predicts that a sufficiently compact mass will deform spacetime to form a black hole. The boundary of the region from which no escape is possible is called the event horizon. Although crossing the event horizon has enormous effect on the fate of the object crossing it, it appears to have no locally detectable features. In many ways a black hole acts like an ideal black body, as it reflects no light. Moreover, quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is on the order of billionths of a kelvin for black holes of stellar mass, making it essentially impossible to observe.

Black hole (continued)

Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. The first modern solution of general relativity that would characterize a black hole was found by Karl Schwarzschild in 1916, although its interpretation as a region of space from which nothing can escape was first published by David Finkelstein in 1958. Long considered a mathematical curiosity, it was during the 1960s that theoretical work showed black holes were a generic prediction of general relativity. The discovery of neutron stars sparked interest in gravitationally collapsed compact objects as a possible astrophysical reality.

Black hole (continued)

Black holes of stellar mass are expected to form when very massive stars collapse at the end of their life cycle. After a black hole has formed, it can continue to grow by absorbing mass from its surroundings. By absorbing other stars and merging with other black holes, supermassive black holes of millions of solar masses (M☉) may form. There is general consensus that supermassive black holes exist in the centers of most galaxies.

Black hole (continued)

Despite its invisible interior, the presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as light. Matter falling onto a black hole can form an accretion disk heated by friction, forming some of the brightest objects in the universe. If there are other stars orbiting a black hole, their orbit can be used to determine its mass and location. Such observations can be used to exclude possible alternatives (such as neutron stars). In this way, astronomers have identified numerous stellar black hole candidates in binary systems, and established that the core of the Milky Way contains a supermassive black hole of about 4.3 million M☉.

The mass of the universe is often quoted as 10⁵⁰ tonnes or 10⁵³ kg.

Schwarzschild radius

Exercises

Find the Schwarzschild radius for your textbook of mass 1 kg.

Approach: Rs = 2GM/c², with G ≈ 6.7×10⁻¹¹ N·m²/kg².
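A numerical sketch of the estimate (standard values of G and c):

```python
# Schwarzschild radius R_s = 2GM/c^2 for a 1 kg textbook.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s
M = 1.0          # mass of the textbook, kg

R_s = 2 * G * M / c**2   # ~1.5e-27 m
print(f"R_s = {R_s:.2e} m")
```

At roughly 10⁻²⁷ m, this is about a trillion times smaller than a proton: everyday objects are nowhere near becoming black holes.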

Hubble's law

Hubble's law is the name for the observation in physical cosmology that: (1) objects observed in deep space (extragalactic space, ~10 megaparsecs or more) are found to have a Doppler shift interpretable as relative velocity away from the Earth; and (2) that this Doppler-shift-measured velocity, of various galaxies receding from the Earth, is approximately proportional to their distance from the Earth for galaxies up to a few hundred megaparsecs away. This is normally interpreted as a direct, physical observation of the expansion of the spatial volume of the observable universe.

Hubble's law (continued)

The motion of astronomical objects due solely to this expansion is known as the Hubble flow. Hubble's law is considered the first observational basis for the expanding space paradigm and today serves as one of the pieces of evidence most often cited in support of the Big Bang model.

Hubble's law (continued)

Although widely attributed to Edwin Hubble, the law was first derived from the general relativity equations by Georges Lemaître in a 1927 article where he proposed the expansion of the universe and suggested an estimated value of the rate of expansion, now called the Hubble constant. Two years later Edwin Hubble confirmed the existence of that law and determined a more accurate value for the constant that now bears his name. Hubble inferred the recession velocity of the objects from their redshifts, many of which were earlier measured and related to velocity by Vesto Slipher in 1917.

Hubble's law (continued)

The law is often expressed by the equation v = H0D, with H0 the constant of proportionality (Hubble constant) between the "proper distance" D to a galaxy (which can change over time, unlike the comoving distance) and its velocity v (i.e. the derivative of proper distance with respect to cosmological time coordinate; see Uses of the proper distance for some discussion of the subtleties of this definition of 'velocity'). The SI unit of H0 is s⁻¹ but it is most frequently quoted in (km/s)/Mpc, thus giving the speed in km/s of a galaxy 1 megaparsec (3.09×10¹⁹ km) away. The reciprocal of H0 is the Hubble time.
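A worked example of both quantities (a sketch; the round value H0 = 70 (km/s)/Mpc and the 100 Mpc distance are assumed for illustration):

```python
# Hubble's law v = H0 * D, and the Hubble time 1/H0.
H0 = 70.0          # Hubble constant in (km/s)/Mpc, assumed round value
D_Mpc = 100.0      # distance to a hypothetical galaxy, in Mpc

v_km_s = H0 * D_Mpc          # recession velocity: 7000 km/s

Mpc_km = 3.0857e19           # one megaparsec in km
year_s = 3.156e7             # one year in seconds
t_hubble_yr = (Mpc_km / H0) / year_s   # ~1.4e10 years
print(f"v = {v_km_s:.0f} km/s, Hubble time = {t_hubble_yr:.2e} yr")
```

The Hubble time, about 14 billion years, is of the same order as the measured age of the universe, which is no coincidence in an expanding-universe model.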

Big Bang

The Big Bang theory is the prevailing cosmological model for the birth of the universe. The model postulates that at some moment all of space was contained in a single point from which the universe has been expanding ever since. Modern measurements place this moment at approximately 13.8 billion years ago, which is thus considered the age of the universe. After the initial expansion, the universe cooled sufficiently to allow the formation of subatomic particles, and later simple atoms. Giant clouds of these primordial elements later coalesced through gravity to form stars and galaxies. The Big Bang theory does not provide any explanation for the initial conditions of the universe; rather, it describes and explains the general evolution of the universe going forward from that point on.

Big Bang (continued)

Since Georges Lemaître first noted, in 1927, that an expanding universe might be traced back in time to an originating single point, scientists have built on his idea of cosmic expansion. While the scientific community was once divided between supporters of two different expanding-universe theories, the Big Bang and the Steady State theory, accumulated empirical evidence provides strong support for the former. In 1929, Edwin Hubble discovered indications that all galaxies are drifting apart at high speeds. In 1964, the cosmic microwave background radiation was discovered, which was crucial evidence in favor of the Big Bang model, since that theory predicted the existence of background radiation throughout the universe before it was discovered. The known physical laws of nature can be used to calculate the characteristics of the universe in detail back in time to an initial state of extreme density and temperature.

Lambda-CDM model

The ΛCDM (Lambda Cold Dark Matter) or Lambda-CDM model is a parametrization of the Big Bang cosmological model in which the universe contains a cosmological constant, denoted by Lambda (Greek Λ), associated with dark energy, and cold dark matter (abbreviated CDM). It is frequently referred to as the standard model of Big Bang cosmology, since it is the simplest model that provides a reasonably good account of the following properties of the cosmos:
• the existence and structure of the cosmic microwave background
• the large-scale structure in the distribution of galaxies
• the abundances of hydrogen (including deuterium), helium, and lithium
• the accelerating expansion of the universe observed in the light from distant galaxies and supernovas

Lambda-CDM model (continued)

The model assumes that general relativity is the correct theory of gravity on cosmological scales. It emerged in the late 1990s as a concordance cosmology, after a period of time when disparate observed properties of the universe appeared mutually inconsistent, and there was no consensus on the makeup of the energy density of the universe.

The ΛCDM model can be extended by adding cosmological inflation, quintessence and other elements that are current areas of speculation and research in cosmology.

Some alternative models challenge the assumptions of the ΛCDM model. Examples of these are modified Newtonian dynamics, modified gravity and theories of large-scale variations in the matter density of the universe.

Equivalence principle

In the physics of general relativity, the equivalence principle is any of several related concepts dealing with the equivalence of gravitational and inertial mass, and to Albert Einstein's observation that the gravitational "force" as experienced locally while standing on a massive body (such as the Earth) is actually the same as the pseudo-force experienced by an observer in a non-inertial (accelerated) frame of reference.

Dark matter

Dark matter is a hypothesized kind of matter invoked in astronomy and cosmology to account for gravitational effects that appear to be the result of invisible mass. Dark matter cannot be seen directly with telescopes; evidently it neither emits nor absorbs light or other electromagnetic radiation at any significant level. It is hypothesized to be matter that simply does not interact with light. Instead, the existence and properties of dark matter are inferred from its gravitational effects on visible matter, radiation, and the large-scale structure of the universe. According to the Planck mission team, and based on the standard model of cosmology, the total mass–energy of the known universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy. Thus, dark matter is estimated to constitute 84.5% of the total matter in the universe, while dark energy plus dark matter constitute 95.1% of the total content of the universe.

Dark matter (continued)

Astrophysicists hypothesized dark matter because of discrepancies between the mass of large astronomical objects determined from their gravitational effects and the mass calculated from the "luminous matter" they contain: stars, gas, and dust. Dark matter was postulated, albeit on evidence later judged flawed or inadequate, by Jan Oort in 1932 to account for the orbital velocities of stars in the Milky Way and by Fritz Zwicky in 1933 to account for evidence of "missing mass" in the orbital velocities of galaxies in clusters. Adequate evidence from galaxy rotation curves was discovered by Horace W. Babcock in 1939, but was not attributed to dark matter. The first to postulate dark matter based upon robust evidence was Vera Rubin in the 1960s–1970s, using galaxy rotation curves.

Dark matter (continued)

Subsequently, many other observations have indicated the presence of dark matter in the universe, including gravitational lensing of background objects by galaxy clusters such as the Bullet Cluster, the temperature distribution of hot gas in galaxies and clusters of galaxies, and more recently the pattern of anisotropies in the cosmic microwave background. According to consensus among cosmologists, dark matter is composed primarily of a not yet characterized type of subatomic particle. The search for this particle, by a variety of means, is one of the major efforts in particle physics today.

Although the existence of dark matter is generally accepted by the mainstream scientific community, some alternative theories of gravity have been proposed, such as MOND and TeVeS, which try to account for the anomalous observations without requiring additional matter.

Dark energy

In physical cosmology and astronomy, dark energy is a hypothetical form of energy which permeates all of space and tends to accelerate the expansion of the universe. Dark energy is the most accepted hypothesis to explain the observations since the 1990s indicating that the universe is expanding at an accelerating rate. According to the Planck mission team, and based on the standard model of cosmology, on a mass–energy equivalence basis, the observable universe contains 26.8% dark matter, 68.3% dark energy (for a total of 95.1%) and 4.9% ordinary matter. Again on a mass–energy equivalence basis, the density of dark energy (6.91×10⁻²⁷ kg/m³) is very low: in the solar system, only about 6×10¹² kg of dark energy would be found within the radius of Pluto's orbit. However, it comes to dominate the mass–energy of the universe because it is uniform across space.

Dark energy (continued)

Two proposed forms for dark energy are the cosmological constant, a constant energy density filling space homogeneously, and scalar fields such as quintessence or moduli, dynamic quantities whose energy density can vary in time and space. Contributions from scalar fields that are constant in space are usually also included in the cosmological constant. The cosmological constant can be formulated to be equivalent to vacuum energy. Scalar fields that do change in space can be difficult to distinguish from a cosmological constant because the change may be extremely slow.

Dark energy (continued)

High-precision measurements of the expansion of the universe are required to understand how the expansion rate changes over time and space. In general relativity, the evolution of the expansion rate is parameterized by the cosmological equation of state (the relationship between temperature, pressure, and combined matter, energy, and vacuum energy density for any region of space). Measuring the equation of state for dark energy is one of the biggest efforts in observational cosmology today.

Dark energy (continued)

Adding the cosmological constant to cosmology's standard FLRW metric leads to the Lambda-CDM model, which has been referred to as the "standard model" of cosmology because of its precise agreement with observations. Dark energy has been used as a crucial ingredient in a recent attempt to formulate a cyclic model for the universe.

Friedmann equations

The Friedmann equations are a set of equations in physical cosmology that govern the expansion of space in homogeneous and isotropic models of the universe within the context of general relativity. They were first derived by Alexander Friedmann in 1922 from Einstein's field equations of gravitation for the Friedmann–Lemaître–Robertson–Walker metric and a perfect fluid with a given mass density ρ and pressure p. The equations for negative spatial curvature were given by Friedmann in 1924.
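
As an illustration of what the first Friedmann equation buys us, the sketch below integrates it numerically to estimate the age of a flat universe. The parameter values (H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7) are round illustrative assumptions, not figures from the slide: H(a) = H0·√(Ωm/a³ + ΩΛ), and the age is t₀ = ∫₀¹ da / (a·H(a)).

```python
import math

H0 = 70e3 / 3.086e22       # assumed Hubble constant, s^-1
Om, OL = 0.3, 0.7          # assumed matter and dark-energy density parameters

def H(a):
    """Hubble parameter at scale factor a for a flat matter + lambda universe."""
    return H0 * math.sqrt(Om / a**3 + OL)

# Midpoint-rule integration of da / (a * H(a)) from a ~ 0 to a = 1.
n = 100000
t0 = sum(1.0 / (((i + 0.5) / n) * H((i + 0.5) / n)) for i in range(n)) / n

print(f"age of universe ~ {t0 / 3.156e16:.1f} Gyr")
```

With these parameters the integral gives roughly 13.5 Gyr, consistent with the accepted age of the universe.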

Anthropic principle

In astrophysics and cosmology, the anthropic principle (from Greek anthropos, meaning "human") is the philosophical consideration that observations of the physical Universe must be compatible with the conscious and sapient life that observes it. Some proponents of the anthropic principle reason that it explains why the Universe has the age and the fundamental physical constants necessary to accommodate conscious life. As a result, they believe it is unremarkable that the universe's fundamental constants happen to fall within the narrow range thought to be compatible with life.

Anthropic principle (continued)

The strong anthropic principle (SAP), as explained by John D. Barrow and Frank Tipler (see variants), states that this is all the case because the Universe is compelled, in some sense, to eventually have conscious and sapient life emerge within it. Some critics of the SAP argue in favor of a weak anthropic principle (WAP) similar to the one defined by Brandon Carter, which states that the universe's ostensible fine tuning is the result of selection bias: only in a universe capable of eventually supporting life will there be living beings capable of observing and reflecting upon any such fine tuning, while a universe less compatible with life will go unobserved. Most often such arguments draw on some notion of the multiverse to supply a statistical population of universes from which selection bias (our observing only this Universe, apparently compatible with life) could occur.

Theories of Everything (TOE)

Fine tuning of the physical constants

Exercises

12. Estimate the total mass of our Galaxy using the orbital data of the Sun (and hence our solar system) about the galactic center. Assume that most of the Galaxy's mass is concentrated near its center.
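
A worked sketch of this estimate, with assumed textbook values for the Sun's orbit (r ≈ 26,000 ly, v ≈ 230 km/s): treating the enclosed mass as a point at the center, circular-orbit balance GMm/r² = mv²/r gives M = v²r/G.

```python
G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
LY = 9.461e15                 # metres per light-year
r = 26000 * LY                # assumed Sun-to-galactic-center distance
v = 230e3                     # assumed orbital speed of the Sun, m/s

M = v**2 * r / G              # enclosed mass of the Galaxy, kg
M_SUN = 1.989e30
print(f"M ~ {M:.1e} kg ~ {M / M_SUN:.1e} solar masses")
```

The result is of order 2 × 10⁴¹ kg, i.e. roughly 10¹¹ solar masses interior to the Sun's orbit.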

Exercises (continued)

13. Suppose we could place a huge mirror 1 light-year away from us, facing Earth. What would we see in this mirror? How long ago would the events we see have taken place? (Such a mirror might be called a time machine.)
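
The arithmetic behind this exercise is just a round-trip light travel time, sketched below:

```python
d_ly = 1.0                    # mirror distance in light-years
round_trip_years = 2 * d_ly   # light takes 1 yr out and 1 yr back
print(f"The mirror shows Earth as it was {round_trip_years:.0f} years ago.")
```

So we would see Earth as it was two years in the past: one year for our light to reach the mirror, and one year for the reflection to return.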

Exercises (continued)

14. Suppose a particular star has absolute luminosity equal to that of the Sun but is 10 ly away from Earth. By what factor will it appear dimmer than the Sun?
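
A worked sketch: with equal absolute luminosity, apparent brightness falls off as 1/d², so the dimming factor is (d_star/d_sun)². The Sun–Earth distance (1 AU) and the conversion for a light-year are standard values assumed here.

```python
AU = 1.496e11                 # Sun-Earth distance, m
LY = 9.461e15                 # metres per light-year

d_sun = AU
d_star = 10 * LY
factor = (d_star / d_sun) ** 2   # inverse-square dimming factor
print(f"The star appears dimmer than the Sun by a factor of ~ {factor:.1e}")
```

The star appears dimmer by a factor of about 4 × 10¹¹.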

Exercises (continued)

15. Given z, what is v? What is the speed of a galaxy whose redshift parameter is measured to be z = 5?
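
A sketch using the relativistic Doppler relation 1 + z = √((1 + β)/(1 − β)) with β = v/c; solving for β gives β = ((1+z)² − 1)/((1+z)² + 1). (This assumes the redshift is treated as a Doppler shift, as is usual in introductory courses.)

```python
def z_to_beta(z):
    """Recession speed as a fraction of c for redshift parameter z."""
    s = (1 + z) ** 2
    return (s - 1) / (s + 1)

print(f"z = 5  ->  v = {z_to_beta(5):.3f} c")
```

For z = 5 this gives β = 35/37 ≈ 0.946, i.e. the galaxy recedes at about 95% of the speed of light.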

Exercises (continued)

16. If v = 0.7c, what is z?
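
A sketch for the reverse direction, again assuming the relativistic Doppler relation: z = √((1 + β)/(1 − β)) − 1 for a source receding at β = v/c.

```python
import math

def beta_to_z(beta):
    """Redshift parameter z for a source receding at speed beta * c."""
    return math.sqrt((1 + beta) / (1 - beta)) - 1

print(f"v = 0.7c  ->  z = {beta_to_z(0.7):.2f}")
```

For v = 0.7c this gives z = √(1.7/0.3) − 1 ≈ 1.38.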