Exploring parameter constraints on quintessential dark energy: The pseudo-Nambu-Goldstone-boson model

Augusta Abrahamse, Andreas Albrecht, Michael Barnard, and Brandon Bozek
Physics Department, University of California, Davis, California, USA
(Received 17 January 2008; published 5 May 2008)

We analyze the constraining power of future dark energy experiments for pseudo-Nambu-Goldstone-boson (PNGB) quintessence. Following the Dark Energy Task Force (DETF) methodology, we forecast data for three experimental stages: Stage 2 represents in-progress projects relevant to dark energy; Stage 3 refers to medium-sized experiments; Stage 4 comprises larger projects. We determine the posterior probability distribution for the parameters of the PNGB model using Markov Chain Monte Carlo analysis. Utilizing data generated on a ΛCDM cosmology, we find that the relative power of the different data stages on PNGB quintessence is roughly comparable to the DETF results for the w0–wa parametrization of dark energy. We also generate data based on a PNGB cosmological model that is consistent with a ΛCDM fiducial model at Stage 2. We find that Stage 4 data based on this PNGB fiducial model will rule out a cosmological constant by at least 3σ.

DOI: 10.1103/PhysRevD.77.103503    PACS numbers: 95.36.+x, 98.80.Es

I. INTRODUCTION

A growing number of observations indicate that the expansion of the universe is accelerating. Given our current understanding of physics, this phenomenon is a mystery. If Einstein's gravity is correct, then it appears that approximately 70% of the energy density of the universe is in the form of a dark energy. Although there are many ideas of what the dark energy could be, as of yet none of them stands out as being particularly compelling.
Future observations will be crucial to developing a theoretical understanding of dark energy. A number of new observational efforts have been proposed to probe the nature of dark energy, but evaluating the impact of a given proposal is complicated by our poor theoretical understanding of dark energy. In light of this issue, a number of model-independent methods have been used to explore these issues (see, for example, [1,2]). Notably, the Dark Energy Task Force (DETF) produced a report in which they used the w0–wa parametrization of the equation of state evolution in terms of the scale factor,

  w(a) = w0 + wa(1 − a),

[3]. The constraints on the parameters w0 and wa were interpreted in terms of a figure of merit (FoM) designed to quantify the power of future data and guide the selection and planning of different observational programs. Improvements in the DETF FoM between experiments indicate increased sensitivity to possible dynamical evolution of the dark energy. This is crucial information, since current data are consistent with a cosmological constant and any detection of a deviation from a cosmological constant would have a tremendous impact. There are, however, a number of questions left unanswered when the dark energy is modeled using abstract parameters such as w0–wa that are perhaps better addressed with the analysis of actual theoretical models of dark energy.

First of all, the w0–wa parametrization is simplistic and not based on a physically motivated model of dark energy. Although simplicity is part of the appeal of this parametrization, some of the most popular dark energy models exhibit behavior that cannot be described by the w0–wa parametrization. The pseudo-Nambu-Goldstone-boson (PNGB) quintessence model considered in this paper, for instance, allows equations of state that cannot be approximated by the w0–wa parametrization, as shown in Fig. 1. Conversely, the w0–wa parametrization may allow solutions that do not correspond to a physically motivated model of dark energy.
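For concreteness, the DETF parametrization quoted above is trivial to write out in code (a sketch of ours, not from the paper; the parameter values below are illustrative only):

```python
# Sketch: the DETF w0-wa equation of state, w(a) = w0 + wa*(1 - a).
# Default parameter values are illustrative, not from the paper.
def w_of_a(a, w0=-1.0, wa=0.3):
    """Equation of state at scale factor a."""
    return w0 + wa * (1.0 - a)

def w_of_z(z, w0=-1.0, wa=0.3):
    """Same, as a function of redshift z, using a = 1/(1 + z)."""
    return w_of_a(1.0 / (1.0 + z), w0, wa)
```

Being monotonic in a, this form cannot track the oscillatory equations of state that the PNGB field can produce (Fig. 1).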
Because of these issues, one could wonder whether the DETF FoMs are somehow misleading. Various concerns with the DETF analysis have already been explored. Albrecht and Bernstein used a more complex parametrization of the scale factor to check the validity of the w0–wa approximation [4], and Huterer and Peiris considered a generalized class of scalar field models [5]. Additionally, it has been suggested that the DETF data models might be improved (see, for instance, [6]). As of yet, however, no actual proposed models of dynamical dark energy have been considered in terms of future data. Given the issues above, such an analysis is an important complement to existing work. Of course all specific models of dark energy are suspect for various reasons, and one can just as well argue that it is better to make the case for new experiments in a more abstract parameter space rather than tying our future efforts to specific models that themselves are problematic. Rather than take a side in this discussion, our position is that, given the diversity of views on the subject, a model-based assessment will have an important role in an overall assessment of an observational program.

In this paper we consider the pseudo-Nambu-Goldstone-boson quintessence model of dark energy [7]. As one of the most well-motivated quintessence models from a particle physics perspective, it is a worthwhile one to study. We use the data models forecasted by the DETF and generate two types of data sets, one based on a ΛCDM background cosmology and one based on a background cosmology with PNGB dark energy using a specific fiducial set of PNGB parameters. We determine the probability space for the PNGB parameters given the data using Markov Chain Monte Carlo analysis.
This paper is part of a series of papers in which a number of quintessence models are analyzed in this manner [8,9]. We show that the allowed regions of parameter space shrink as we progress from Stage 2 to Stage 3 to Stage 4 data in much the same manner as was seen by the DETF in the w0–wa space. This result holds for both ΛCDM and the PNGB data models. Additionally, with our choice of PNGB fiducial background model, we demonstrate the ability of Stage 4 data to discriminate between a universe described by a cosmological constant and one containing an evolving PNGB field. As cosmological data continue to improve, careful analysis of specific dark energy models using real data will become more and more relevant.

MCMC analysis can be computationally intensive and time consuming. Since future work in this area is likely to encounter similar challenges, we discuss some of the difficulties we discovered and solutions we implemented in our MCMC exploration of PNGB parameter space.

II. PNGB QUINTESSENCE

Quintessence models of dark energy are popular contenders for explaining the current acceleration of the universe [10–12]. Although the cosmological constant is regarded by many to be the simplest theory of the dark energy, the required value of the cosmological constant appears to be many orders of magnitude too small in naive particle theory estimates. In quintessence models this problem is not solved. Instead it is generally sidestepped by assuming some unknown mechanism sets the vacuum energy to exactly zero, and the dark energy is due to a scalar field evolving in a pressure dominated state.
As such fields can appear in many proposed fundamental theories, and as the mechanism mimics ideas familiar from cosmic inflation, quintessence models are regarded by many (but certainly not by everyone [13]) to be at least as plausible as a cosmological constant [14,15].

Here the quintessence field is presumed to be homogeneous in space, and is described by some scalar degree of freedom φ and a potential V(φ) which governs the field's evolution. In a Friedmann-Robertson-Walker (FRW) spacetime, the field's evolution is given by

  φ̈ + 3Hφ̇ + dV/dφ = 0,  (1)

where

  H = ȧ/a  (2)

and

  H² = (1/3M_P²)(ρ_r + ρ_m + ρ_φ + ρ_k),  (3)

where M_P is the reduced Planck mass, ρ_r is the energy density of radiation, ρ_m is the energy density of nonrelativistic matter, and ρ_k is the effective energy density of spacetime curvature. The energy density and pressure associated with the field are

  ρ_φ = ½φ̇² + V(φ),  P_φ = ½φ̇² − V(φ)  (4)

and the equation of state w is given by

  w = P_φ/ρ_φ.  (5)

If the potential energy dominates the energy of the field, then as can be seen in Eq. (4) the pressure will be negative and in some cases can be sufficiently so to give rise to acceleration as the universe expands.

FIG. 1 (color online). Examples of possible equations of state for PNGB quintessence (left panel). Attempts to imitate this behavior with w(z) curves for the w0–wa parametrization are depicted on the right.

The PNGB model of quintessence is considered compelling because it is one of the few models that seems natural from the perspective of 4D effective field theory. In order to fit current observations, the quintessence field must behave at late times approximately as a cosmological constant.
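Equations (1)–(3) can be integrated numerically once a potential is specified. The following is a minimal sketch of ours (not the paper's code) for a PNGB potential V(φ) = M⁴[cos(φ/f) + 1] in a flat matter-plus-field universe, in units with M_P = 1 and H0 = 1; all parameter values are illustrative, not the paper's fiducial choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (NOT the paper's fiducial values).
# Units: reduced Planck mass M_P = 1 and H0 = 1.
M4, f = 1.118, 0.8          # potential height and decay constant
phi_I = 0.5 * f             # initial field value, phi_I / f = 0.5
omega_m = 0.3               # matter fraction today; flat universe assumed
rho_m0 = 3.0 * omega_m      # 3 * H0^2 * Omega_m with H0 = M_P = 1

V = lambda p: M4 * (np.cos(p / f) + 1.0)
dV = lambda p: -(M4 / f) * np.sin(p / f)

def rhs(t, y):
    a, phi, phidot = y
    rho_phi = 0.5 * phidot**2 + V(phi)
    H = np.sqrt((rho_m0 * a**-3 + rho_phi) / 3.0)   # Eq. (3), flat, late times
    return [a * H,                                   # Eq. (2)
            phidot,
            -3.0 * H * phidot - dV(phi)]             # Eq. (1)

today = lambda t, y: y[0] - 1.0                      # stop when a = 1
today.terminal = True

sol = solve_ivp(rhs, (0.0, 10.0), [1e-2, phi_I, 0.0],
                events=today, rtol=1e-8, atol=1e-10)
a, phi, phidot = sol.y[:, -1]
w = (0.5 * phidot**2 - V(phi)) / (0.5 * phidot**2 + V(phi))  # Eqs. (4)-(5)
```

With these values the field stays nearly frozen while H exceeds the effective mass scale, so w today comes out close to −1, illustrating the "hang on the potential" behavior described below.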
It must be rolling on its potential without too much contribution from kinetic energy, and the value of the potential must be close to the observed value of the dark energy density, which is on the order of the critical density of the universe, ρ_c = 3H0²M_P² ≈ 1.88 × 10⁻²⁶ h² kg/m³, or ≈ 2.3 × 10⁻¹²⁰ h² in reduced Planck units, where h ≡ H0/(100 km s⁻¹ Mpc⁻¹). These considerations require the field to be nearly massless and the potential to be extraordinarily flat from the point of view of particle physics. In general, radiative corrections generate large mass renormalizations at each order of perturbation theory unless symmetries exist to suppress this effect [16–18]. In order for such fields to seem reasonable, at least on a technical level, their small masses must be protected by symmetries such that when the masses are set to zero they cannot be generated in any order of perturbation theory. Many believe that pseudo-Nambu-Goldstone bosons are the simplest way to have ultralow mass, spin-0 particles that are natural in a quantum field theory sense. An additional attraction of the model is that the parameters of the PNGB model might be related to the fundamental Planck and electroweak scales in a way that solves the cosmic coincidence problem [19].

The potential of the PNGB field is well approximated by

  V(φ) = M⁴[cos(φ/f) + 1]  (6)

(where higher derivative terms and instanton corrections are ignored) and is illustrated in Fig. 2. The evolution of the dark energy is controlled by the two parameters of the PNGB potential, M⁴ and f, and the initial conditions, φ_I and φ̇_I. We take φ̇_I = 0, since we expect the high expansion rate of the early universe to rapidly damp nonzero values of φ̇_I. The initial value of the field, φ_I, takes values between 0 and πf. This is because the potential is periodic and symmetric: since a starting point of φ_I/f on the potential is equivalent to starting at 2πn − φ_I/f and rolling in the opposite direction down the potential, we require 0 ≤ φ_I/f ≤ π.

We generate and analyze two versions of data.
One is built around a cosmological constant model of the universe. The other is based on a PNGB fiducial model. The latter is chosen to be consistent with a cosmological constant for Stage 2 data.

We use Markov Chain Monte Carlo analysis with a Metropolis-Hastings stepping algorithm [22–24] to evaluate the likelihood function for the parameters of our model. The details of our methods are discussed in Appendix B. MCMC lends itself to our analysis because our probability space is both non-Gaussian and also depends on a large number of parameters. These include the PNGB model parameters (M⁴, f, and φ_I), the cosmological parameters (ω_m, ω_k, ω_B, δζ, and n_s, as defined by the DETF), and the various nuisance and/or photo-z parameters accounting for error and uncertainties in the data.

In order for the results of an MCMC chain to be meaningful, there must exist a finite, stationary distribution to which the Markov chain may converge in a finite number of steps. Degeneracies between parameters, i.e., combinations of different parameters that give rise to identical cosmologies, correspond to unconstrained directions in the probability distribution. Unless some transformation of parameters is found and/or a cutoff placed on these parameters, the MCMC will step infinitely in this direction and can never converge to a stationary distribution. Additionally, the shape of the probability distribution can drastically affect the efficiency of the chain. A large portion of the task of analyzing the PNGB model, therefore, involves finding convenient parametrizations and cutoffs to facilitate MCMC exploration of the posterior distribution.

The probability space of the PNGB model becomes more tractable from an MCMC standpoint if we transform from the original variables to ones that are more directly related to cosmological observables constrained by the data. Such parametrizations make it easier to identify degeneracies and also tend to make the shape of the probability distribution more Gaussian. As discussed in Sec.
II, the dynamics of the PNGB field depend on its potential V(φ) = M⁴[cos(φ/f) + 1] and the specific values of M⁴, f, and φ_I. In order to fit current data the field must hang on the potential, approximating a cosmological constant, for most of the expansion history of the universe. If the field never rolls it acts as a cosmological constant for all times with a value corresponding to the initial energy density of the field, V_I ≡ V(φ_I). To first order, then, V_I sets the overall scale of the dark energy density. Since V_I has more physical significance than M⁴, it is a more efficient choice for our MCMC analysis.

Additionally, the phase φ_I/f of the field's starting point in the cosine potential is closely related to the initial slope of the potential. The slope affects the time scale on which the field will evolve. If, for instance, φ_I/f = 0, the field starts out on the very top of the potential where the slope is exactly zero, and the field will not evolve. Starting closer to the inflection point of the potential results in a steeper initial slope, and the field will roll faster toward the minimum.

FIG. 3 (color online). 2D confidence regions for f vs M⁴ (left panel) and f vs V_I (right panel). The concave feature of the f vs M⁴ contours means it is inefficiently explored by MCMC. Contours in the f–V_I space are nearly Gaussian and better facilitate convergence.

FIG. 4 (color online). PNGB potentials (dashed lines) with the entire field evolution shown in thick solid curves. The different curves show increasing values of f from left to right. The smallest value of f (bottom curve) gives a nearly static dark energy and fits a cosmological constant well. Any larger value for f will also fit the data because the potential will be flatter, and the field will evolve even less.
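The change of variables described above, from the theory parameters (M⁴, φ_I) to the more observable-oriented pair (V_I, φ_I/f), is simple to write down; the following sketch (our illustration, with our own function names) shows the map and its inverse:

```python
import math

# Sketch of the reparametrization discussed above: the MCMC steps in
# (V_I, phi_I/f, f) rather than (M4, phi_I, f). Function names are ours.
def to_mcmc_params(M4, f, phi_I):
    """(M4, f, phi_I) -> (V_I, theta), with theta = phi_I / f."""
    theta = phi_I / f
    V_I = M4 * (math.cos(theta) + 1.0)   # Eq. (6) evaluated at phi_I
    return V_I, theta

def from_mcmc_params(V_I, theta, f):
    """Inverse map back to the potential parameters."""
    M4 = V_I / (math.cos(theta) + 1.0)   # undefined at theta = pi, where V = 0
    phi_I = theta * f
    return M4, phi_I
```

Because the map is invertible (away from θ = π), stepping in the new variables changes only the geometry of the posterior surface, not its content.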
Since the variable φ_I can correspond to both flat initial slopes (if f is large) or steep ones (if f is small), φ_I/f is more directly related to the dynamics of the field and is therefore a superior parameter choice. These new parameters, as illustrated in Fig. 3, also result in a probability distribution that is more Gaussian and thus is more easily explored by our MCMC algorithm.

Even more important than choosing physically relevant parameters is deciding how to handle divergent directions in probability space. For PNGB quintessence the parameter f must be cut off in some way because it can become arbitrarily large without being constrained by the data. For the fiducial model based on a ΛCDM universe, solutions where the field does not evolve for the entire expansion history of the universe, i.e., behaves as a cosmological constant, can fit the forecast data perfectly. If such a choice of parameters is found, then larger and larger values of f will only make the potential flatter and flatter. If the field did not evolve significantly for the smaller values of f, this will be even more true as the potential flattens. Hence, f can become arbitrarily large and the cosmological observables will remain identical, as shown in Fig. 4.

In general, it is possible to achieve identical deviations from a cosmological constant by increasing f while at the same time moving φ_I/f towards the inflection point of the potential. Increasing f flattens the potential, but by changing φ_I/f, the slope of the potential can be held nearly constant and the evolution of the field will not change significantly. In order to achieve results with MCMC, it is necessary to choose some cutoff so that this infinite direction is bounded. We choose a cutoff on f.

FIG. 6 (color online).
f–φ_I/f 1, 2, and 3 sigma confidence regions for DETF optimistic combined data sets.

FIG. 5 (color online). V_I–φ_I/f 1, 2, and 3 sigma confidence regions for DETF optimistic combined data sets.

Because we expect quantum fluctuations to displace the field from the top of the potential, it is more reasonable to expect the PNGB field to start after the inflection point of the potential [16]. If either this argument or the theoretical reasons for the bound on f …

FIG. 9 (color online). V_I–φ_I/f 1, 2, and 3 sigma confidence regions for DETF optimistic combined data sets using the PNGB background cosmological model.

FIG. 10 (color online). f–φ_I/f 1, 2, and 3 sigma confidence regions for DETF optimistic combined data sets using the PNGB background cosmological model.

… [9] are somewhat better constrained by the DETF Stage 4 ground data models than by DETF Stage 4 space.

B. PNGB fiducial model

In addition to considering a ΛCDM fiducial model, we evaluate the power of future experiments assuming the dark energy is really due to PNGB quintessence.
Our PNGB fiducial parameter values (shown in Table I in units of h²) were chosen such that the fiducial model lies within the 95% confidence region for Stage 2 ΛCDM data, but demonstrates a small amount of dark energy evolution that can be resolved by Stage 4 experiments. The left panel of Fig. 8 shows the potential and evolution of the field for this model. The right panel depicts w(z). It can be seen that today (z = 0) the deviation of the field from w(z) = −1 is only about 10%.

Repeating our MCMC analysis for the PNGB fiducial model, we again marginalize over all but two parameters to depict the 2D confidence regions for the dark energy parameters. Figure 9 depicts the V_I–φ_I/f contours. It can be seen that the φ_I/f = 0 axis, corresponding to the field sitting on the top of its potential and not evolving, is allowed at Stage 2 but becomes less favored by subsequent stages of the data. By Stage 4 it is ruled out by more than 3σ.

Figure 10 depicts the φ_I/f–f contours. Again it can be seen that for larger values of f, φ_I/f must be nonzero. By Stage 4 optimistic, only extreme fine-tuning with f allows φ_I/f to approach zero, so that the field will be displaced from the top of the potential and have started to roll by just the right amount by late times.

Figure 11 depicts V_I versus Δω_DE. At Stage 2, Δω_DE = 0 is still within the 1σ confidence region. But subsequent stages of the data disfavor this result. Stage 4 optimistic rules out zero evolution of the dark energy by more than 3σ.

V. DISCUSSION AND CONCLUSIONS

With experiments such as the ones considered by the DETF on the horizon, data sets will be precise enough to make it both feasible and important to analyze dynamic models of dark energy. The analysis of such models, therefore, should play a role in the planning of these experiments. With our analysis of PNGB quintessence, we have shown how future data can constrain the parameter space of this model.
We have shown likelihood contours for a selection of combined DETF data models, and found the increase in parameter constraints with increasing data quality to be broadly consistent with the DETF results in w0–wa space. Direct comparison with the DETF figures of merit is nontrivial because PNGB quintessence depends on three parameters, whereas the DETF FoM were calculated on the basis of two, but in our two-dimensional projections we saw changes in the area that are consistent with DETF results. Specifically, the DETF demonstrated a factor of roughly three decrease in allowed parameter area when moving from Stage 2 to good combinations of Stage 3 data, and a factor of about ten in area reduction when going from Stage 2 to Stage 4. We saw decreases by similar factors in our two-dimensional projections. We have presented likelihood contour plots for specific projected data sets as an illustration. In the course of this work we produced many more such contour plots to explore the other possible data combinations considered by the DETF, including the data with pessimistic estimates of systematic errors.

FIG. 11 (color online). V_I–Δω_DE 1, 2, and 3 sigma confidence regions for DETF optimistic combined data, where Δω_DE is the amount of change in the dark energy density from the radiation era until today.
We found no significant conflict between our results in the PNGB parameter space and those of the DETF in w0–wa space.

As discussed in [15], we believe the fact that we have demonstrated (here and elsewhere [8,9]) results that are broadly similar to those of the DETF, despite the very different families of functions w(a) considered, is related to the fact pointed out in [4] that overall the good DETF data sets will be able to constrain many more features of w(a) than are present in the w0–wa ansatz alone.

As data continue to improve, MCMC analysis of dynamic dark energy models will likely become more popular. Our experience with the PNGB model could be relevant to future work. We find that the theoretical parameters of the model are not in general the best choice for MCMC. Transforming to variables that are closely related to the physical observables can help MCMC converge more efficiently. Additionally, it is necessary to cut off unconstrained directions in parameter space. It would be desirable to find bounds that have some physical motivation. For PNGB quintessence, we find that the initial value of the potential, V_I, and the initial phase of the field, φ_I/f, are more convenient than the original model parameters, and that there is some motivation for placing a bound on f.

APPENDIX A: DATA MODELS

1. Supernovae

…

  X(z_i) = (1/√|κ|) sinh(√|κ| Z(z_i)),  κ < 0;
  X(z_i) = Z(z_i),                      κ = 0;  (A3)
  X(z_i) = (1/√|κ|) sin(√|κ| Z(z_i)),   κ > 0;

and

  Z(z_i) = ∫_{a_i}^{1} da / [a² H(a)]  (A4)

with |κ| = H0²|Ω_k| = (H0/h)²|ω_k|.

Uncertainties in the absolute magnitude M, as well as the absolute scale of the distance modulus, lead to the introduction of an offset nuisance parameter μ_off in all SNe data sets, giving μ(z_i) → μ(z_i) + μ_off. Other systematic errors are modeled by more nuisance parameters. The peak brightness of supernovae, for instance, may have some z-dependent behavior that is not fully understood. We include this uncertainty in our analysis by allowing the addition of small linear and quadratic offsets in z. Additionally, each SNe data model combines a collection of nearby supernovae with a collection of more distant ones.
Possible differences between the two groups are modeled by considering the addition of a constant offset to the near group. The distance modulus becomes

  μ_calc(z_i) = μ(z_i) + μ_off + μ_lin z_i + μ_quad z_i² + μ_near (near sample).  (A5)

In addition, some experiments will measure supernova redshifts photometrically instead of spectroscopically. There may be a bias in the measurement of the photo-z's in each bin. This uncertainty is expressed by another set of nuisance parameters, δz_i, that can shift the values of each z_i. The observables become μ_i = μ(z_i + δz_i).

Priors are assigned to each of the nuisance parameters (except for μ_off, which is left unconstrained) which reflect the projected strength of the various observational programs. Additionally, statistical errors are presumed to be Gaussian and are given by the diagonal covariance matrix C_ij = δ_ij σ_i², where σ_i reflects the uncertainty in the ith observable for each data set.

The likelihood function L for the supernova data can be calculated from the chi-squared statistic, where χ² = −2 ln L. For data sets with photometrically determined redshifts, chi-squared is

  χ² = Σ_i [μ(z_i + δz_i) − μ_data(z_i)]²/σ_i² + μ_lin²/σ_lin² + μ_quad²/σ_quad² + μ_near²/σ_near² + Σ_i δz_i²/σ_{δz_i}².  (A6)

For data sets with spectroscopic redshifts the chi-squared is the same minus the contribution from the redshift shift parameters.

2. Baryon acoustic oscillations

Large scale variations in the baryon density in the universe have a signature in the matter power spectrum that, when calibrated via the CMB, provides a standard ruler for probing the expansion history of the universe. The observables for BAO data (after extraction from the mass power spectrum) are the comoving angular diameter distance, d_A^co(z_i), and the expansion rate, H(z_i), where

  d_A^co = a d_L  (A7)

and z_i indicates the z bin for each data point. The quality of the data probe is modeled by the covariance matrix for each observable type, as described in Sec. 4 of the DETF technical appendix.
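The distance quantities entering Eqs. (A3), (A4), and (A7) are straightforward to compute numerically. The following sketch (ours, not the paper's code) does so for an illustrative flat ΛCDM background in units with H0 = 1:

```python
import numpy as np
from scipy.integrate import quad

# Sketch of Eqs. (A3), (A4), and (A7) for an illustrative flat LCDM
# background. Units: H0 = 1; Omega_m = 0.3 is our choice, not the paper's.
H0, Om = 1.0, 0.3

def H(a):
    return H0 * np.sqrt(Om * a**-3 + (1.0 - Om))

def Z(z):
    """Eq. (A4): Z(z) = integral from a_i to 1 of da / (a^2 H(a))."""
    a_i = 1.0 / (1.0 + z)
    val, _ = quad(lambda a: 1.0 / (a**2 * H(a)), a_i, 1.0)
    return val

def X(z, kappa=0.0):
    """Eq. (A3): curvature-dependent coordinate distance."""
    if kappa < 0:
        s = np.sqrt(abs(kappa))
        return np.sinh(s * Z(z)) / s
    if kappa > 0:
        s = np.sqrt(kappa)
        return np.sin(s * Z(z)) / s
    return Z(z)

def d_L(z, kappa=0.0):
    """Luminosity distance in terms of the coordinate distance."""
    return (1.0 + z) * X(z, kappa)

def d_A_co(z, kappa=0.0):
    """Eq. (A7): comoving angular diameter distance, a * d_L."""
    return d_L(z, kappa) / (1.0 + z)
```

In the flat case the three quantities Z, X, and d_A^co coincide, which provides a convenient internal check.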
Additionally, some BAO observations use photometrically determined redshifts, in which case δz_i are added as nuisance parameters, as for the supernovae, to describe the uncertainty in each redshift bin. The chi-squared for BAO observations is

  χ² = Σ_i [d_A^co(z_i + δz_i) − d_A^co,data(z_i)]²/σ_{d,i}² + Σ_i [H(z_i + δz_i) − H_data(z_i)]²/σ_{H,i}² + Σ_i δz_i²/σ_{δz_i}².  (A8)

3. Weak gravitational lensing

Light from background sources is deflected from a straight path to the observer by mass in the foreground. From high resolution imaging of large numbers of galaxies, it is possible to detect statistical correlations in the stretching of galaxies, "cosmic shear." From this, foreground mass distributions can be determined. The mass distribution as a function of redshift provides a probe of the growth history of density perturbations, g(z), where g(z) (in linear perturbation theory) depends on dark energy via

  g̈ + 2H ġ − (3/2) Ω_m H0² a⁻³ g = 0.  (A9)

Additionally, because the amount of lensing depends on the ratios of distances between the observer, the lens, and the source, gravitational lensing also probes the expansion history of the universe, D(z).

The direct observables of weak lensing surveys considered by the DETF are the power spectrum of the lensing signal and the cross correlation of the lensing signal with foreground structure. Systematic and statistical uncertainties are described by a Fisher matrix in this space. As is detailed in the DETF appendix, it is possible to transform from this parameter space to the variables directly dependent on dark energy, g(z) and D(z). These become the weak lensing observables we use in our analysis.

In addition to depending on the dark energy model, weak lensing observations depend on the cosmological parameters: matter density ω_m, baryon density ω_B, effective curvature density ω_k, the spectral index n_s, and the amplitude of the primordial power spectrum δζ.
These parameters are treated as nuisance parameters with priors imposed by the Fisher matrix.

Last, since ground-based lensing surveys will photometrically determine redshifts, as for SNe and BAO data, we must model the uncertainty in redshift bins. Again this is done by allowing each z_i bin to vary by some amount δz_i. The weak lensing observables are given by the vector

  X_obs = (ω_m, ω_k, ω_B, n_s, δζ, d_A^co(z_i), g(z_i), ln a(z_i)),  (A10)

where a(z_i) is the scale factor corresponding to the redshift bins for the data. The error matrix is nondiagonal in the space of these observables, so chi-squared is given by

  χ² = (X_obs − X_obs,data) F_lensing (X_obs − X_obs,data)ᵀ.  (A11)

4. Planck CMB

As with baryon oscillations, observations of anisotropies in the cosmic microwave background probe the expansion history of the universe by providing a characteristic length scale at the time of last scattering. As with weak lensing, our Planck observables are extrapolated from the CMB temperature and polarization maps. The observable space constrained by Planck becomes n_s, ω_m, ω_B, δζ, and ln Θ_S. These variables are constrained via the Fisher matrix in this space (we use the same one used in [4]). The chi-squared is calculated as

  χ² = (X_obs − X_obs,data) F_Planck (X_obs − X_obs,data)ᵀ.  (A12)

APPENDIX B: MCMC

Markov Chain Monte Carlo simulates the likelihood surface for a set of parameters by sampling from the posterior distribution via a series of random draws. The chain steps semistochastically in parameter space via the Metropolis-Hastings algorithm such that more probable values of the space are stepped to more often. When the chain has converged it is considered a fair sample of the posterior distribution, and the density of points represents the true likelihood surface. (Explanations of this technique can be found in [22,26,27].)

With the Metropolis-Hastings algorithm, the chain starts at an arbitrary position in parameter space.
A candidate position θ′ for the next step in the chain is drawn from a proposal probability density q(θ, θ′). The candidate point in parameter space is accepted and becomes the next step in the chain with the probability

  α(θ, θ′) = min[1, P(θ′)q(θ′, θ) / P(θ)q(θ, θ′)],  (B1)

where P(θ) is the likelihood of the parameters given the data. If the proposal step θ′ is rejected, the point θ becomes the next step in the chain. Although many distributions are viable for the proposal density q(θ, θ′), for simplicity we have chosen to use a Gaussian normal distribution. (It should be noted that, in general, the dark energy parameters of the model are not Gaussian distributed. The power of the MCMC procedure lies in the fact that it can probe posterior distributions that are quite different from the proposal density q(θ, θ′).) Since this is symmetric, q(θ, θ′) = q(θ′, θ), we need only consider the ratios of the posteriors in the above stepping criterion.

For the results of the Markov chain to be valid, it must equilibrate, i.e., converge to the stationary distribution. If such a distribution exists, the Metropolis-Hastings algorithm guarantees that the chain will converge as the chain length goes to infinity. In practice, however, we must work with chains of finite length. Moreover, from the standpoint of computational efficiency, the shorter our chains can be and still reflect the true posterior distribution of the parameters, the better. Hence a key concern is assuring that our chains have equilibrated. Though there are many convergence diagnostics, chains may only fail such tests in the case of nonequilibrium; none guarantee that the chain has converged [28]. We therefore monitor the chains in a variety of ways to convince ourselves that they actually reflect the underlying probability space.

Our first check involves updating our proposal distribution q(θ, θ′), which we have already chosen to be Gaussian normal. Each proposal step is drawn randomly from this distribution.
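The stepping rule of Eq. (B1) with a symmetric Gaussian proposal reduces to comparing posterior ratios, and can be sketched as follows (a toy illustration of ours, with a stand-in log-likelihood; none of the names come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_like(theta):
    # Toy stand-in for the real likelihood of the cosmological parameters:
    # a unit Gaussian centered at the origin.
    return -0.5 * np.sum(theta**2)

def metropolis_hastings(log_like, theta0, cov, n_steps=20000):
    """Symmetric Gaussian proposal, so q cancels and only P ratios appear."""
    theta = np.asarray(theta0, dtype=float)
    logp = log_like(theta)
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        prop = rng.multivariate_normal(theta, cov)
        logp_prop = log_like(prop)
        # Accept with probability min(1, P(theta') / P(theta)), Eq. (B1).
        if np.log(rng.random()) < logp_prop - logp:
            theta, logp = prop, logp_prop
        chain[i] = theta        # a rejected proposal repeats the current point
    return chain

chain = metropolis_hastings(log_like, [2.0, -2.0], 0.5 * np.eye(2))
```

Note that rejected proposals still contribute a (repeated) sample; dropping them would bias the chain.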
The size of the changes generated in any given parameter direction depends on the covariance matrix we use to define q(θ, θ′). We start by guessing the form of the covariance matrix and run a short chain (O(10⁵) steps), after which we calculate the covariance matrix of the Markov chain. We then use this covariance matrix to define the Gaussian proposal distribution for the next chain. We repeat this process until the covariance matrix stops changing systematically. This implies that the Gaussian approximation to the posterior has been found. In addition to indicating convergence, this also assists the efficiency of our chains. The more the proposal distribution reflects the posterior, the quicker the Markov chain will approximate the underlying distribution.

One convergence concern is that we might not be exploring the entire probability space. Since too large a proposal step causes excessive rejection, we shrink our covariance matrix by a fixed amount to ensure that the small scale features of the posterior are thoroughly probed. However, it is then possible that if we started our chains at a random point in probability space and our step sizes are too small, the chain may have wandered to a local maximum from which it will not exit in a finite time. We could be missing other features of the underlying probability space. We convince ourselves that this is not the case by starting chains at different points in parameter space. We find that the chains consistently reflect the same probability distribution, and, hence, we conclude that we are truly sampling from the full posterior.

After we have determined our chains are fully exploring probability space and we have optimized our Gaussian proposal distribution, we run a longer chain to better represent the probability space of our variables. We consider the chain to be long enough when the 95% contour is reasonably smooth.
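The iterative proposal-tuning described above (run a short chain, replace the proposal covariance with the chain's sample covariance, repeat) can be sketched like this. This is a toy version of ours: the target, chain lengths, and shrink factor are illustrative, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_like(theta):
    # Toy correlated-Gaussian target standing in for the real posterior.
    inv = np.array([[2.0, -1.0], [-1.0, 2.0]])
    return -0.5 * theta @ inv @ theta

def run_chain(cov, theta0, n_steps):
    """Minimal Metropolis-Hastings chain with Gaussian proposal `cov`."""
    theta = np.asarray(theta0, dtype=float)
    logp = log_like(theta)
    out = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        prop = rng.multivariate_normal(theta, cov)
        lp = log_like(prop)
        if np.log(rng.random()) < lp - logp:
            theta, logp = prop, lp
        out[i] = theta
    return out

# Iterate: short chain -> sample covariance (slightly shrunk) -> new proposal.
# Stop when the covariance stops changing systematically; here we simply run
# a fixed number of rounds. The 0.7 shrink factor is an illustrative choice.
cov = 0.1 * np.eye(2)
for _ in range(4):
    chain = run_chain(cov, [0.0, 0.0], 5000)
    cov = 0.7 * np.cov(chain.T)
```

After a few rounds the proposal covariance settles near the (shrunk) posterior covariance, which is the point of the procedure: proposals shaped like the posterior mix faster.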
For most data sets, chains of O(10^6) steps are sufficient, although the larger the probability space, the longer the chains must be. (In particular, Stage 4 ground data involves a large number of nuisance parameters and may take two to three times longer to return smooth contours.) With the final chains, we must control for both burn-in and correlations between parameter steps. Burn-in refers to the number of steps a chain must take before it starts sampling from the stationary distribution. Because we have already run a number of preliminary chains, we know approximately the mean parameters of our model. We find that the means refer to a point in probability space close to the maximum of the distribution. (Generically, this does not have to be true if the probability space is asymmetrical.) If we use this as our starting point, our chains do not have to wander long before they appear to sample from the stationary distribution.

ABRAHAMSE, ALBRECHT, BARNARD, AND BOZEK, PHYSICAL REVIEW D 77, 103503 (2008)

We control for this by removing different amounts from the start of the chain. For instance, if we cut out the first 1000 steps, calculate the contours, and compare these to contours calculated with the first 100 000 steps removed, we find that the shapes of the 2D contours remain essentially the same. We can conclude, therefore, that the chains very quickly begin sampling the posterior distribution and we need not worry about burn-in.

Correlations between steps may also affect the representativeness of the samples generated via MCMC. These effects, however, may be controlled for either by thinning the chains by a given amount or by running chains of sufficient length that the correlations become unimportant. We experiment with different thin factors (taking every step, every 10th step, and every 50th step) and find very little difference in our results.
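The burn-in and thinning checks above amount to comparing summaries of the chain under different truncations. A minimal stand-in sketch: the "chain" here is uncorrelated noise with known zero mean (a real MCMC chain is correlated, so the thinning comparison is purely illustrative), and the function name `summarize` and the specific burn/thin values are assumptions for the example.

```python
import numpy as np

def summarize(chain, burn, thin):
    """Drop `burn` initial steps, keep every `thin`-th step, return the means.

    Comparing such summaries for different burn/thin choices is a cheap
    stand-in for comparing full 2D contours: if the numbers barely move,
    neither burn-in nor step-to-step correlation is biasing the sample.
    """
    kept = chain[burn::thin]
    return kept.mean(axis=0)

# Stand-in "chain": independent draws around a known mean of zero
rng = np.random.default_rng(2)
chain = rng.standard_normal((200000, 2))

short_burn = summarize(chain, burn=1000, thin=1)
long_burn = summarize(chain, burn=100000, thin=1)
thinned = summarize(chain, burn=1000, thin=50)
```

For a converged chain all three summaries agree to within sampling noise, mirroring the comparison of contours with 1000 versus 100 000 steps removed described above.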
Hence we conclude that the sampling of our chains is not greatly affected by correlations.

Last, we apply a numerical diagnostic similar to that used by Dick et al. [29] to test the convergence of our chains. (This technique is a modification of the Geweke diagnostic [30].) We compare the means calculated from the first 10% of the chain (after a burn-in of 1000 steps) to the means calculated from the last 10%. If the chain has converged to the stationary distribution, then these values should be approximately equal. If |mean1_i − mean2_i| / σ_i is large, where σ_i is the standard deviation determined by the chain for parameter i, then the chain is likely still drifting. We find that for our chains |mean1_i − mean2_i| / σ_i < 0.1 for 95% of the parameters, and the remaining parameters differ by no more than σ_i/5. Coupling this with the qualitative monitoring of the chains described above, we are confident that our chains do a good job of reflecting the posterior probability distribution of our model.

[1] D. Huterer and M. S. Turner, Phys. Rev. D 60, 081301(R) (1999).
[2] D. Huterer and M. S. Turner, Phys. Rev. D 64, 123527 (2001).
[3] A. Albrecht et al., arXiv:astro-ph/0609591.
[4] A. Albrecht and G. Bernstein, Phys. Rev. D 75, 103003 (2007).
[5] D. Huterer and H. V. Peiris, Phys. Rev. D 75, 083503 (2007).
[6] M. Schneider, L. Knox, H. Zhan, and A. Connolly, Astrophys. J. 651, 14 (2006).
[7] J. A. Frieman, C. T. Hill, A. Stebbins, and I. Waga, Phys. Rev. Lett. 75, 2077 (1995).
[8] B. Bozek, A. Abrahamse, A. Albrecht, and M. Barnard, following Article, Phys. Rev. D 77, 103504 (2008).
[9] M. Barnard, A. Abrahamse, A. Albrecht, B. Bozek, and M. Yashar, preceding Article, Phys. Rev. D 77, 103502 (2008).
[10] S. M. Carroll, arXiv:astro-ph/0107571.
[11] E. J. Copeland, M. Sami, and S. Tsujikawa, Int. J. Mod. Phys. D 15, 1753 (2006).
[12] V. Sahni, Classical Quantum Gravity 19, 3435 (2002).
[13] R. Bousso, Gen. Relativ. Gravit. 40, 607 (2008).
[14] P. J. Steinhardt, Phys. Scr. T117, 34 (2005).
[15] A. Albrecht, AIP Conf. Proc. 957, 3 (2007).
[16] N. Kaloper and L. Sorbo, J. Cosmol. Astropart. Phys. 04 (2006) 007.
[17] C. F. Kolda and D. H. Lyth, Phys. Lett. B 458, 197 (1999).
[18] S. M. Carroll, Phys. Rev. Lett. 81, 3067 (1998).
[19] L. J. Hall, Y. Nomura, and S. J. Oliver, Phys. Rev. Lett. 95, 141302 (2005).
[20] M. Dine, arXiv:hep-th/0107259.
[21] T. Banks, M. Dine, P. J. Fox, and E. Gorbatov, J. Cosmol. Astropart. Phys. 06 (2003) 001.
[22] D. Gamerman, Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference (Chapman & Hall, London, 1997).
[23] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, J. Chem. Phys. 21, 1087 (1953).
[24] W. K. Hastings, Biometrika 57, 97 (1970).
[25] R. R. Caldwell and E. V. Linder, Phys. Rev. Lett. 95, 141301 (2005).
[26] A. Lewis and S. Bridle, Phys. Rev. D 66, 103511 (2002).
[27] N. Christensen, R. Meyer, L. Knox, and B. Luey, Classical Quantum Gravity 18, 2677 (2001).
[28] B. Carlin and M. K. Cowles, J. Am. Stat. Assoc. 91, 883 (1996).
[29] J. Dick, L. Knox, and M. Chu, J. Cosmol. Astropart. Phys. 07 (2006) 001.
[30] J. Geweke, "Evaluating the Accuracy of Sampling-Based Approaches to the Calculation of Posterior Moments" (1992).

