
Targeting utility customers to improve energy savings from conservation and efficiency programs


Applied Energy 115 (2014) 25–36


Nicholas W. Taylor *, Pierce H. Jones, M. Jennison Kipp
University of Florida, Program for Resource Efficient Communities, PO Box 110940, 2295 Mowry Rd, Gainesville, FL 32611-2081, United States

* Corresponding author. Tel.: +1 352 392 3121; fax: +1 352 392 9033. E-mail addresses: nwtaylor@ufl.edu (N.W. Taylor), piercejones@ufl.edu (P.H. Jones), mjkipp@ufl.edu (M.J. Kipp).

0306-2619/$ - see front matter © 2013 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.apenergy.2013.10.012

Highlights

• Improving DSM program impacts by targeting high energy users.
• DSM energy savings potential hinges on pre-participation performance.
• Targeting can benefit different utilities and energy efficiency programs.
• Overall performance can be improved by up to 250% via targeting strategies.

Article info

Article history:
Received 3 April 2013
Received in revised form 25 July 2013
Accepted 6 October 2013
Available online 25 November 2013

Keywords:
Demand-side management (DSM)
Residential buildings
Performance evaluation
Heating, ventilation and air conditioning (HVAC)
Energy audit
Insulation

Abstract

Electric utilities, government agencies, and private interests in the US have committed and continue to invest substantial resources – including billions of dollars of financial capital – in the pursuit of energy efficiency and conservation through demand-side management (DSM) programs. While most of these programs are deemed to be cost effective, and therefore in the public interest, opportunities exist to improve cost effectiveness by targeting programs to those customers with the greatest potential for energy savings. This article details an analysis of three DSM programs offered by three Florida municipal electric utilities to explore such opportunities. First, we estimate programs' energy savings impacts; second, we measure and compare energy savings across subgroups of program participants as determined by their pre-intervention energy performance, and third, we explore potential changes in program impacts that might be realized by targeting specific customers for participation in the DSM programs. All three programs resulted in statistically significant average (per-participant) energy savings, yet average savings varied widely, with the customers who performed best (i.e., most efficient) before the intervention saving the least energy and those who performed worst (i.e., least efficient) before the intervention saving the most. Assessment of alternative program participation scenarios with varying levels of customer targeting suggests that program impacts could be increased by as much as 80% for a professional energy audit program, just over 100% for a high-efficiency heat pump upgrade program, and nearly 250% for an attic insulation upgrade program. Findings are directly relevant for utility program administrators seeking to improve program outcomes.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

In the United States, there are over 1400 different rebate, loan or grant programs offered by utilities or federal, state, and local governments for various energy efficiency upgrades [1]. The vast majority of these programs are rebates administered by utility providers. Nationally, utilities spent over 45 billion dollars between 1989 and 2010 on customer energy-efficiency and conservation, or demand-side management (DSM), programs [2]. These DSM programs are usually funded by utility customers (ratepayers), either directly through on-bill fees and surcharges, or indirectly through rate increases [3]. Therefore, ratepayers have reasonable expectations and interest in such programs being implemented in a cost-effective, if not least-cost manner [4]. Consequently, significant effort has been invested in measuring the cost effectiveness and efficiency of DSM programs [5]. These program investments – and the demand for impact evaluations that accompanies them – are projected to grow in the coming years due to increased pressure from state-level energy regulation, costs and challenges of building additional production capacity, fuel costs and potential carbon or renewable energy regulation [6].

Utilities have a number of motivations for engaging in DSM, depending on their ownership and regulatory structure [7]. State regulatory authorities generally require that utilities (investor-owned and in some cases municipal and cooperative) meet specific peak demand and energy conservation goals [3].


Requirements for cost-effective and efficient implementation typically accompany these goals, and program cost effectiveness depends largely on maximizing energy and demand savings (i.e., energy impacts). In most cases, peak demand goals are aligned with utility objectives (as peak demand drives utilities' overall costs) whereas energy consumption goals are aligned with customer objectives (as energy savings lead directly to lower energy bills). "Although reducing peak demand is an important goal of many DSM programs, state regulatory authorities that mandate such programs rely heavily on the presumed positive effect of DSM on energy efficiency to justify their continued existence" [3], p. 20.

Expectations of demand and energy savings are established by utility staff and consultants who use various sources, data and methodological approaches. Deemed savings estimates can range from those based on manufacturer claims to those derived using advanced engineering models and statistical billing analyses. Given these expected savings potentials, DSM administrators must work within resource and technical constraints to achieve energy and demand savings targets, largely by maximizing customer participation. All utility customers are not equal in their energy savings potential, however. Among the population of eligible participants, there is a wide range of technical, achievable, and economic potentials for efficiency gains from different DSM programs and energy conservation measures. Furthermore, the law of diminishing marginal returns – well documented in economic and engineering literature – suggests that customers who have already made substantial investments in energy efficiency or those who already perform well will achieve less additional savings with added investment [8]. Empirical data from DSM impact evaluations validate theoretical expectations of diminishing returns to energy-efficiency investments [9].

In addition to managing the realities of diminishing returns to efficiency interventions, DSM administrators can have trouble achieving target participation rates [10]. This may lead to increased marketing efforts and revision of rebate structures and/or allocations to meet program goals. Some argue that targeting programs to non-traditional markets (such as multi-family and rental housing) is the key to increasing customer participation [9] and improving DSM program performance. Others have suggested that conservation programs could be more appropriately targeted within customer classes to increase their average and overall energy impacts [11].

Ultimately, which customers participate in DSM programs may be just as critical a driver of program cost effectiveness, or more so, than how many participate. In this study, we explore this possibility by analyzing three energy conservation and efficiency programs implemented in different utility territories and assessing the potential impacts of customer targeting on energy savings and overall program cost effectiveness. We propose a targeting strategy that could be adopted by utilities to quickly assess the whole-house energy performance and energy savings potential of each residential customer and identify ideal candidates for participation in DSM programs.

While the U.S. Energy Service Company (ESCO) industry has expanded rapidly in recent years [12] and a large suite of whole-house modeling tools are available to homeowners to inform decisions about energy-efficiency investments [13], few such resources – particularly those in the public domain – are available to guide utility decisions about targeting specific energy efficiency measures to select residential customers. The analysis presented here addresses this apparent gap with the ultimate, overarching goal of increasing both total energy savings from and overall cost effectiveness of utility DSM programs.

2. Objectives

Using the Annual Community Baselines (ACB) approach to measure residential energy performance [14], the three objectives and associated research questions of this study are to:

(1) Assess energy-efficiency program impacts (energy savings) for three energy conservation measures in three utility territories, measuring average savings for the first year post-participation (empirical; impact evaluation). Are there statistically significant first-year energy savings from each of the three programs?

(2) Measure and compare energy savings across subgroups of program participants as determined by their pre-intervention energy performance (empirical; ex-post analysis). Are there statistically significant differences and/or trends in first-year energy savings between pairs of participant subgroups?

(3) Explore potential changes in program outcomes that could be realized by targeting customers based on their past energy performance (scenario-based, ex-ante analysis). Could energy savings and program cost effectiveness be improved by using a targeting strategy to shift participation from those customers with the least potential for savings to those with greater potential to benefit?

To achieve these objectives, we study three common utility residential energy conservation and efficiency programs (high-efficiency heat pump, attic insulation upgrade, and professional energy audit) offered by three municipal utilities.

The three utilities studied – Orlando Utilities Commission (OUC); Gainesville Regional Utilities (GRU); and JEA, which serves the City of Jacksonville – are all in Florida, which has a cooling-dominated climate. Average HDD calculated using a balance point of 18 °C in the utility territories over the study period (2008–2012) was 544 (°F HDD = 980) while CDD was 1784 (°F CDD = 3211). Residential housing stocks in all three utility territories are predominately single-family, single-story, detached housing. The majority of houses are of relatively recent vintage (late 1970s); slab-on-grade construction with wood frame insulated walls (RSI 2.29 or R-13) and ceilings (RSI 3.35 or R-19); single-pane windows; ventilated attic spaces and asphalt/fiberglass shingle roofs. Nearly all homes have central, forced-air heating and cooling systems.
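The degree-day figures above follow the standard balance-point calculation. As a minimal illustration only (this is not the authors' code, and the temperature values are made up), heating and cooling degree days can be computed from daily mean outdoor temperatures as follows:

def degree_days(daily_mean_temps_c, balance_point_c=18.0):
    """Return (HDD, CDD) in °C-days for a sequence of daily mean temperatures."""
    hdd = sum(max(balance_point_c - t, 0.0) for t in daily_mean_temps_c)
    cdd = sum(max(t - balance_point_c, 0.0) for t in daily_mean_temps_c)
    return hdd, cdd

# Example with two hypothetical days: a mild winter day (12 °C) and a hot summer day (29.5 °C).
print(degree_days([12.0, 29.5]))  # (6.0, 11.5)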

3. Methods

An end-use billing analysis approach is typically more reliable for estimating net incremental energy savings than relying solely on deemed savings from predictive engineering or simulation models. Metcalf and Hassett found that engineering estimates were 50% higher than savings found from an analysis of monthly end-use billing data [15]. Loughran and Kulick noted that utilities commonly use engineering models to estimate potential energy savings and found that when savings are measured using actual billing data, the original deemed values for savings "are rarely achieved" [3], p. 22. In a study of California's utility conservation programs, Kaufman and Palmer concluded that "reported savings estimates are systematically higher than the evaluated savings estimates" [16], p. 263.

The ACB analysis method adopts key elements of conventional comparison-group, regression, and difference-in-difference techniques for measuring energy performance and combines them in an innovative, yet fairly simple way that can generate reliable results while reducing cost of implementation.


To estimate DSM program impacts (energy savings), end-use billing, program participation, and building characteristic data are used.

3.1. Program impact analysis using annual community baselines

The ACB analysis approach involves four main steps. The first step applies a naïve multiple regression [17] to energy consumption and building characteristic data to generate normalized consumption baselines for every single-family home/customer in a given utility territory in a given year [18]. In practical terms, this creates a "comparison-group" within each home's Annual Community Baseline, or ACB. Unlike bottom-up calculations of performance baselines [19], ACB measures are derived using a top-down, whole-house approach.

The second step calculates the difference between each home's actual annual consumption and its ACB-predicted consumption to generate a snapshot of its energy performance. In this instance, a home's performance is measured relative to what we might expect for that particular home based on the energy consumption and building characteristics of all homes in the community. Each snapshot is a static ACB performance measure, estimated for each home in the population, in both relative (percent) and absolute (total energy) terms for a given time period (i.e., a year).

The third step is a simple difference-in-differences approach, measuring the change in static ACB performance between one time period and the next. This generates dynamic ACB performance measures, or change in performance, for each home in the community. These measures are applied to analyses where a known energy-efficiency intervention has occurred during a given time period for a subgroup of the population and we are interested in measuring pre-post changes in performance.

The final step in the ACB method is to measure energy savings for a group of homes of interest. This is done by applying statistical techniques (generally a t-test or analysis of variance) to compare differences in static or dynamic energy performance between participant (treatment) and non-participant (control) groups of homes. This step can be applied to measure energy savings both within years (static) and between years (dynamic) and hinges on the assumption that the within-year percent static performance measure for control homes remains stable from one year to the next, all else held constant. It is also used to test statistical significance and generate confidence intervals for savings estimates. For the analysis presented here, we consider savings estimates with a confidence level of 95% or greater (or a p-value of 0.05 or less) to be statistically significant.
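For illustration only (this is not the authors' implementation), the four steps can be sketched in a few lines of Python. The sketch assumes a pandas DataFrame named homes with one row per home per year and hypothetical column names, uses ordinary least squares for the naive regression, and uses Welch's t-test for the final treatment-versus-control comparison:

# Illustrative sketch of the four ACB steps (not the authors' code).
# Assumes a pandas DataFrame `homes` with hypothetical columns: year, home_id,
# annual_kwh, floor_area_m2, bedrooms, bathrooms, year_built, participant (bool).
import statsmodels.formula.api as smf
from scipy import stats

def static_acb_performance(df_year):
    # Step 1: naive multiple regression -> normalized baseline (ACB) per home.
    model = smf.ols(
        "annual_kwh ~ floor_area_m2 + bedrooms + bathrooms + year_built",
        data=df_year,
    ).fit()
    df_year = df_year.copy()
    df_year["acb_kwh"] = model.fittedvalues
    # Step 2: static performance = actual minus baseline, also as a percentage.
    df_year["static_kwh"] = df_year["annual_kwh"] - df_year["acb_kwh"]
    df_year["static_pct"] = df_year["static_kwh"] / df_year["acb_kwh"]
    return df_year

pre = static_acb_performance(homes[homes.year == 2009])   # example pre-intervention year
post = static_acb_performance(homes[homes.year == 2010])  # example post-intervention year

# Step 3: dynamic performance = change in static performance between years.
merged = pre.merge(post, on="home_id", suffixes=("_pre", "_post"))
merged["dynamic_pct"] = merged["static_pct_post"] - merged["static_pct_pre"]

# Step 4: compare participants (treatment) to non-participants (control).
treat = merged.loc[merged.participant_pre, "dynamic_pct"]
control = merged.loc[~merged.participant_pre, "dynamic_pct"]
t_stat, p_value = stats.ttest_ind(treat, control, equal_var=False)
print(f"mean savings vs. control: {control.mean() - treat.mean():.1%}, p = {p_value:.4f}")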

Table 1 summarizes each step of the ACB methodology, including inputs, outputs and mathematical or statistical techniques applied in each step. Fig. 1 illustrates output from the first step of the method, plotting each home's actual consumption and its corresponding ACB.

Table 1
Overview of the ACB method.

Step: output | Inputs | Mathematical or statistical technique | Summary
Step 1: Annual Community Baseline (ACB) | Annual energy consumption; home size; home age; number of bedrooms; number of bathrooms | Naive multiple regression | Use energy consumption and housing characteristics to create a normalized baseline for each home in the community
Step 2: Static ACB performance | Actual consumption; baseline (normalized) consumption | Difference | Compare the home's actual consumption to its baseline
Step 3: Dynamic ACB performance | Pre-participation static ACB performance; post-participation static ACB performance | Difference of differences | Compare performance before participation to performance after participation
Step 4: Energy savings | Treatment group dynamic ACB performance; control group dynamic ACB performance | t-test or analysis of variance | Compare savings in the treatment group to savings in the control group to test for significant differences

The ACB data points appear collectively as a fitted line, yet this line is actually made up of the baseline performance measures (ACBs) for thousands of individual homes. Jones et al. (2010) present a detailed explanation of the ACB method and an applied analysis that compares its results directly to those of other common methods of estimating energy savings [20], including time series, time series with normalized annual consumption, and time series with comparison groups [14]. This methodology was applied to estimate residential DSM program impacts [18].

3.1.1. Conservation and efficiency programs evaluated
For the first program, we examined savings for utility customers in Orlando, Florida who received rebates for replacing existing heat pump (HVAC) systems with high-efficiency (coefficient of performance of 3.7 or SEER-15) equipment during the 2009 calendar year [21]. There were 169 customers who took advantage of this rebate program during the test period. For the second program, we measured savings for customers in Gainesville, Florida who received rebates for installing additional attic insulation in the period between November 1, 2009 and October 31, 2010. Participating customers were required to add insulation of at least RSI 3.35 (R-19) to any existing attic insulation to qualify for the incentive [22]. There were 181 customers who took advantage of this rebate program during the test period. For the third program, we assessed savings for utility customers in Jacksonville, Florida who received an energy audit by a Building Performance Institute (BPI) or Residential Energy Services Network (RESNET) certified professional in the period between November 1, 2010 and March 31, 2011. Audits included a full walk-through inspection and tests of building envelope and ductwork leakage. Auditors provided homeowners with a list of (a) behaviors to improve home operations, maintenance and energy efficiency; (b) recommended upgrades; and (c) applicable utility, state and federal incentives [23]. There were 232 customers who took advantage of this rebate program during the test period. Fig. 2 shows the pre-intervention, participation, and post-intervention periods for the three programs evaluated. Note that for all three programs, the pre- and post-participation periods are each one full calendar year.

3.1.2. Data collection and screening
Data collection: OUC, GRU and JEA provided electric consumption data, GRU also provided natural gas consumption data, and the Orange, Alachua, and Duval County Property Appraisers provided housing characteristics data. Table 2 lists all fields included in the raw datasets used to generate the primary (clean) analysis dataset. First, monthly household energy consumption data were combined and expressed as total annual energy use for each home in each utility service area. Within JEA and OUC service territories there is no natural gas fuel service and total annual energy use is expressed in kilowatt hours. For GRU service territory natural gas consumption (originally measured in therms) was converted to British Thermal Units (BTUs) and then to equivalent kilowatt hours (ekWh).



Fig. 1. Example of total annual energy consumption for individual homes plotted along with Annual Community Baseline values for each home. Homes are ordered by ascending ACB value.

Fig. 2. Pre-participation, participation, and post-participation time periods for programs evaluated.

Table 2
Data sources and fields in primary dataset.

Data source | Data field
Utility billing | Customer number; premise number; address; meter read date; number of days in read cycle; energy usage type; energy usage value
Utility DSM | Customer number; premise number; program participation date; program type
Property appraiser | Parcel number; address; year built; conditioned area; number of bedrooms; number of bathrooms


Electric (kWh) and natural gas consumption (ekWh) were then combined to measure total annual energy consumption (ekWh).

Utility billing cycles do not reflect exact calendar months and often fluctuate in the number of days of service recorded in each billing cycle. Customers whose billing records showed less than 350 days of electrical service per year, within the study period, were removed from the study population. To account for differences in the number of billed days for each customer, total annual energy consumption values were normalized to reflect the full calendar year by multiplying the average daily use for each customer, for the number of billed days, by 365 days. Next, utility billing and property appraiser datasets were merged using customer and location identifiers to ensure that only homes consistently occupied over the study period were included in the analysis. Finally, the annual consumption and property appraiser dataset was merged with the DSM program participation dataset.
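As a minimal sketch of the annualization and fuel-conversion arithmetic just described (not the authors' code; the record values are made up, and the exact conversion factors the authors used are not stated, so standard values of 100,000 BTU per therm and 3412.14 BTU per kWh are assumed), a single GRU-style customer record could be processed as follows:

THERM_TO_EKWH = 100_000 / 3412.14  # 1 therm = 100,000 BTU; 1 kWh = 3,412.14 BTU

def annualize(total_use, billed_days, min_days=350):
    """Scale billed consumption to a full 365-day year; drop short records."""
    if billed_days < min_days:
        return None  # customer excluded from the study population
    return (total_use / billed_days) * 365

# Hypothetical example: 13,800 kWh of electricity and 120 therms of gas over 358 billed days.
elec_ekwh = annualize(13_800, 358)
gas_ekwh = annualize(120 * THERM_TO_EKWH, 358)
total_ekwh = elec_ekwh + gas_ekwh  # total annual energy use in equivalent kWh
print(round(total_ekwh))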

Data screening: The primary dataset was screened on building and energy use characteristics to generate the final analysis dataset. Screening criteria were chosen to ensure that the final sample of homes in the analysis included only those with reliable housing and energy use records for single-family detached housing in the study region. For the building characteristics, square meters of conditioned area and year built were screened to reflect "typical" regional housing stock size and age. We retained homes with between 46.5 and 465.5 m² (500 and 5000 square feet) of conditioned space and those built between 1920 and the year prior to the pre-intervention study period. Next, we screened records against energy consumption, measured in terms of both total annual energy use (kWh or ekWh) and energy intensity (kWh or ekWh per square meter of conditioned area), to ensure that homes in the final analysis dataset were consistently occupied during the study period and had measured energy intensity within a reasonable range for single-family homes in the study area. We applied objective screening criteria to the energy use characteristics by removing homes in the "tails" of the statistical distributions for the study populations. Specifically, we removed homes in the upper and lower 2.5% of pre-intervention total energy use and energy intensity.

Next, we screened for change in total energy consumption between pre- and post-intervention periods beyond what we can reasonably expect for the given conservation and energy-efficiency interventions. Recognizing that energy consumption changes from year to year as weather, economic conditions, etc. change, we removed homes that fall outside of a reasonable range of change in energy from one year to the next. For this analysis, we removed homes where energy consumption changed by ±40% of the median change in energy consumption for the population as a whole. A 40% drop in normalized annual energy consumption is the most that we would reasonably expect for the most impactful program in the analysis, high-efficiency heat pump replacement. Similarly, a 40% increase is the most we could reasonably expect for the population of control homes. We use this balanced screening measure to identify both extremely high and extremely low energy use changes and to avoid a bias in the population of ACB homes (DSM participants and non-participants) toward those with greater pre-post savings.


As an example, if the median change in consumption was +3% for the study population, then homes that increased their energy use by more than 43% or decreased it by more than 37% were excluded from the final analysis dataset.
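A minimal sketch of the two consumption screens just described (the distribution-tail screen and the 40-percentage-point band around the median change) is shown below. It is illustrative only, assumes a pandas DataFrame with hypothetical column names, and omits the other criteria listed in Table 3:

import pandas as pd

def screen_energy_use(df, tail=0.025, band=0.40):
    # Drop the upper and lower 2.5% tails of pre-intervention use and intensity.
    for col in ["pre_kwh", "pre_kwh_per_m2"]:
        lo, hi = df[col].quantile([tail, 1 - tail])
        df = df[df[col].between(lo, hi)]
    # Drop homes whose pre-to-post change is more than 40 percentage points
    # above or below the population median change (e.g., median +3% means
    # only changes between -37% and +43% are kept).
    change = df["post_kwh"] / df["pre_kwh"] - 1
    median_change = change.median()
    return df[change.between(median_change - band, median_change + band)]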

As a final step, we removed homes that were known to have taken part in energy conservation programs other than the program of interest during the study period. We also removed homes that participated in the program of interest during the pre- or post-participation time periods. This step helped to ensure that energy savings of program participants are measured relative to those with no known interventions during the full analysis period. For the JEA professional energy audit participants, this step provided some degree of confidence that we were not measuring effects of major equipment retrofits as opposed to behavioral changes or low-cost modifications induced by the audit itself. Table 3 lists the criteria (general, building characteristics, and energy consumption) against which homes were screened.

3.1.3. Statistical model
This study applies the same basic statistical model to homes in the final analysis dataset from each utility territory to assess program impacts of each of the three separate DSM interventions. ACBs for each of the utility's customer populations were calculated for the pre- and post-intervention years using a consistent set of data fields for the entire population of single-family detached homes. The ACB regression model for these analyses used conditioned floor area (square meters), number of bedrooms, number of bathrooms, and year built (age) as predictors of annual energy consumption and actual (billed) annual energy consumption as the dependent variable. From the regression output, normalized (predicted) consumption represents each home's performance baseline (ACB) for a given year. Residuals, or error terms from each annual regression represent each home's static ACB performance measure. Changes in these ACB performance measures were compared across participant and non-participant homes to estimate DSM program impacts and address the first objective of our study.
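In equation form, the annual ACB regression and the derived performance measures described above can be written as follows (the notation is ours, not the authors'):

\[
E_{i,t} = \beta_0 + \beta_1\,\mathrm{Area}_i + \beta_2\,\mathrm{Bedrooms}_i + \beta_3\,\mathrm{Bathrooms}_i + \beta_4\,\mathrm{YearBuilt}_i + \varepsilon_{i,t}
\]
\[
\mathrm{ACB}_{i,t} = \hat{E}_{i,t}, \qquad
s_{i,t} = E_{i,t} - \hat{E}_{i,t}, \qquad
s^{\%}_{i,t} = \frac{E_{i,t} - \hat{E}_{i,t}}{\hat{E}_{i,t}}, \qquad
\Delta s_i = s^{\%}_{i,\mathrm{post}} - s^{\%}_{i,\mathrm{pre}}
\]

where \(E_{i,t}\) is the billed annual consumption of home \(i\) in year \(t\), \(\hat{E}_{i,t}\) is its fitted (normalized) consumption, \(s_{i,t}\) and \(s^{\%}_{i,t}\) are the absolute and relative static ACB performance measures, and \(\Delta s_i\) is the dynamic ACB performance compared between participant and non-participant homes in the final step.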

3.2. Program impact analysis by participant subgroups

The second and third objectives of our study evaluate differences in energy savings across subgroups of participants and explore the potential for use of static ACB performance measures to target conservation and energy-efficiency interventions to certain groups of the customer population, in turn improving program performance (i.e., overall energy savings and cost effectiveness).

Table 3
Data screening criteria applied to primary analysis dataset for each utility and program evaluated.

General:
- 1 account and 1 building on 1 parcel
- Account must be the same over the entire study period
- More than 350 days of consumption records in each calendar year

Building characteristics:
- Homes built after 1920 and at least 1 year prior to the study period
- Between 46.5 and 465.5 square meters of conditioned space (500 and 5000 square feet)
- No missing characteristic data

Energy consumption:
- Remove homes in the upper and lower 2.5% of pre-intervention consumption records
- Remove homes in the lower 2.5% of post-intervention consumption records
- Remove upper and lower 2.5% of pre-intervention energy intensity records
- Remove lower 2.5% of post-intervention energy intensity records
- Remove homes where percent difference in total energy consumption pre- vs. post-intervention changes by more than 40% above or below the population median

Program participation:
- Remove homes that participated in other energy conservation programs during the analysis period
- Remove homes that participated in the energy conservation program in either the pre-intervention or post-intervention analysis period

After the initial test of average first-year energy savings from each program, subpopulations of the treatment groups were tested to see if variations in program-related savings were significant. The goal of this portion of the analysis was to assess whether or not targeting certain customers for participation in energy-efficiency programs might result in greater energy savings per dollar of program expenditure. The selected targeting regime breaks all utility customers into quartiles (Q1–Q4) of pre-participation relative static ACB energy performance (step 2 of the ACB method described previously) and tests subgroups of the participant population to determine the relationship between pre-participation ACB performance and savings. To clarify, Quartile 1 (Q1) includes those customers who are in the lowest 25% and Quartile 4 (Q4) includes those customers who are in the highest 25% of energy consumption pre-participation versus their static ACB. Fig. 3 illustrates the directional relationships between each participant group quartile, pre-participation static ACB performance, and expected energy savings (or energy savings potential). We expect that customers with the best static ACB performance (i.e., low energy consumers) prior to participation are likely to save the least energy and those with the worst static ACB performance (i.e., high energy consumers) are likely to save the most. With this approach, we are effectively using static ACB performance measures as proxies for customers' energy savings potential.

Due to the nature of the targeting method (separating homes by pre-intervention performance quartile), variance in savings estimates was significantly different between the treatment and control populations. To properly account for this variability and provide a reasonably conservative measure of statistical significance with respect to differences in percent savings between groups, we employ a non-parametric comparison method. To determine if there were statistically significant differences in percent energy savings between targeted subgroups, their savings estimates were tested against those of the control population and against each other using the Wilcoxon method [24] and the Hodges–Lehmann difference estimate [25].
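For illustration (not the authors' code), such a comparison can be run with SciPy's Wilcoxon rank-sum test together with a Hodges–Lehmann shift estimate computed as the median of all pairwise differences between the two groups; the sample values below are made up:

import itertools
import numpy as np
from scipy import stats

def compare_subgroups(savings_a, savings_b):
    """Wilcoxon rank-sum test and Hodges-Lehmann estimate of the shift in
    percent savings between two groups (e.g., a participant subgroup and the
    control population)."""
    stat, p_value = stats.ranksums(savings_a, savings_b)
    hl_shift = np.median([a - b for a, b in itertools.product(savings_a, savings_b)])
    return hl_shift, p_value

# Hypothetical first-year percent savings for a Q4 and a Q1 participant subgroup.
q4 = [0.25, 0.18, 0.30, 0.12, 0.22]
q1 = [0.05, 0.02, 0.08, -0.01, 0.04]
print(compare_subgroups(q4, q1))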

3.3. Potential improvements from customer targeting

Next, we extrapolate findings from the analysis of DSM program impacts across participant subgroups and use a scenario-based approach to explore the potential for improvements to program impacts via customer targeting (study objective three). The first scenario, no change or "Status Quo", assumes equivalent overall conditions and impacts of the three energy-efficiency programs as measured by results from study objectives one and two: same timeline; control population; total number of participants; participating customers; and total and percent energy savings realized.



Fig. 3. Participant subgroup quartiles, ACB performance and energy savings.

Table 4
Program participation scenarios for evaluation of targeting potential.

Program participation scenario name | Scenario type | Pre-participation static ACB performance quartile(s)
Status Quo | Actual | Q1, Q2, Q3, Q4
Low-Hanging Fruit | Hypothetical | Q2, Q3, Q4
Reaching | Hypothetical | Q3, Q4
Great Expectations | Hypothetical | Q4


The second scenario, minimal gains via targeting or "Low-Hanging Fruit", assumes that all participants from the Q1 pre-participation subgroup are substituted entirely by participants from the Q4 subgroup. The third scenario, moderate gains via targeting or "Reaching", assumes that all participants from the Q1 and Q2 subgroups are substituted entirely by participants from the Q4 subgroup. The fourth scenario, maximum gains via targeting or "Great Expectations", assumes all of the participants from the Q1, Q2, and Q3 subgroups are substituted by participants from the Q4 subgroup. Table 4 summarizes the four program participation scenarios defined to address the third study objective. Note that an increasing share of program participants is drawn from the Q4 pre-participation static ACB performance quartile in each successive scenario, although the total size of the participant population is constant across scenarios.

Table 5
Data collection and screening results.

Conservation/efficiency program | Primary dataset | Control population | Program participants | Analysis dataset (after screening) | Control group (after screening) | Participant group (after screening)
Orlando, Florida high-efficiency heat pump | 42,420 | 42,251 | 169 | 35,202 (83%) | 35,053 (83%) | 149 (88%)
Gainesville, Florida attic insulation upgrade | 23,254 | 23,073 | 181 | 20,024 (86%) | 19,898 (86%) | 126 (70%)
Jacksonville, Florida professional energy audit | 119,943 | 119,711 | 232 | 106,346 (89%) | 106,130 (89%) | 162 (70%)

Fig. 4. Program impact evaluation results: pre-post change in static ACB performance measures.

4. Results and discussion

4.1. Objective 1: program impact analysis – overall energy savings

This section summarizes results of the program impact analysis (study objective one). Table 5 gives results of the data screening process showing sample sizes for the study population including the total number of households used to create Annual Community Baselines (Analysis Dataset), those in the control group and the participant group for each of the utility territories. Analysis datasets for this study included between 83% and 89% of all of the customers in single-family, detached homes in each utility territory. The participant groups included between 70% and 88% of all of the customers in single-family, detached homes that participated in the programs tested in each utility territory during the defined intervention test period.

Relative (percent) and absolute (kWh/ekWh) savings, levels of statistical significance, and 95% confidence intervals for savings were calculated for each DSM intervention in each utility territory. Fig. 4 summarizes results of the overall program impact evaluation.

4.1.1. OUC/HVAC
Average savings for the OUC high-efficiency heat pump participants during calendar year 2010 were 11.4% (2054 kWh). The 95% confidence interval for the average savings was 8.9% to 13.9% (1574 to 2534 kWh).



4.1.2. GRU/Insulation
Average savings for the GRU attic insulation rebate participants during the post-participation period (November 1, 2010 to October 31, 2011) were 3.7% (688 ekWh). The 95% confidence interval for the average savings was 1.1% to 6.3% (198 to 1178 ekWh).

4.1.3. JEA/Professional energy audit
Average savings for the JEA energy audit participants during the post-participation period (May 1, 2011 to April 30, 2012) were 3.2% (533 kWh). The 95% confidence interval for the average savings was 1.0% to 5.5% (188 to 878 kWh).

Fig. 5. Annual energy use of Q1 and Q4 program participants in each utility/program plotted along with ACB for pre- and post-intervention periods. Homes are ordered by ascending ACB value.

4.1.4. Discussion
All savings estimates are statistically significant not only at the 95% threshold, but also at the 99% level. In other words, we are 99% confident that, on average, the program participants saved energy (i.e., their ACB performance improved significantly) relative to non-participants in the first year following the energy conservation or efficiency intervention.

4.2. Objective 2: program impact analysis – energy savings by pre-intervention ACB quartile

To evaluate variation in program impacts across subgroups of the customer populations, each of the pre-participation quartile subgroups was tested against the control population and against each of the other subgroups.



Fig. 5 plots the pre- and post-participation static performance measures of the first quartile (Q1) and fourth quartile (Q4) program participant subgroups. Notice that for all three conservation and efficiency programs, the position of the Q1 participants' actual energy consumption relative to their baseline (normalized/expected) consumption changes very little between pre- and post-intervention periods whereas the position of the Q4 participants shifts noticeably.

We next test whether the differences in these shifts (dynamic ACB performance) across pre-intervention quartiles are statistically significant. Fig. 6 shows average relative and absolute savings and 95% confidence intervals for the entire participant group and for each of the quartile subgroups against the control (non-participant) population of homes. It also illustrates the magnitude and statistical significance of the Q1 and Q4 participant shifts illustrated in Fig. 5. Table 6 gives relative (percent), and Table 7 gives absolute (kWh/ekWh) savings estimates, 95% confidence intervals and statistical significance levels for comparisons between each pair of quartile subgroups and between quartiles and the control population in each utility territory.

Fig. 6. Relative and absolute annual energy savings and 95% confidence intervals for the entire participant groups and each of the pre-intervention quartile subgroups. Note that if the zero axis falls within the confidence interval for any treatment group, no statistically significant savings were found at the 95% confidence level.

Information presented in these tables can be extrapolated to characterize the difference in program savings that could be anticipated had the participant group come entirely from one subgroup versus any other (i.e., the expected savings advantage of targeting one customer subgroup over another for participation in the DSM program).

4.2.1. OUC/HVAC
All of the participant subgroups had statistically significant savings relative to the control group. In addition, there were statistically significant differences in savings between each of the participant subgroups except for between Q1 and Q2. The Q4 subgroup had the highest average savings compared to the control group, 22.2% (4162 kWh). The Q4 subgroup also had the highest savings relative to any other treatment subgroup. The Q4 subgroup saved 17.8% (3467 kWh) more energy than the Q1 treatment subgroup.

4.2.2. GRU/Insulation
No statistically significant savings occurred for the Q1 and Q2 subgroups. The Q3 subgroup saved an average of 6.7% (1268 kWh) and the 95% confidence interval for these savings was 1.3% to 11.8% (257 to 2220 kWh). The Q4 subgroup had the highest savings with an average of 12.6% (2382 kWh) and the 95% confidence interval for these savings was 6.9% to 19.0% (1320–3562 kWh).



Both Quartiles 3 and 4 saved significantly more energy, 9.8% (1685 kWh) and 14.4% (2716 kWh) respectively, than the Q1 subgroup. In addition, the Q4 subgroup saved significantly more energy (11.8% or 2116 kWh) than the Q2 subgroup.

4.2.3. JEA/Professional energy audit
No statistically significant savings were found for the Q1 and Q2 subgroups. Savings for the Q3 subgroup, 4.7% (830 kWh), were significant at the 90% confidence level. The Q4 subgroup had an average savings of 5.8% (960 kWh) and the 95% confidence interval for these savings was 0.8% to 11.1% (156–1806 kWh). The Q3 and Q4 subgroups also saved significantly more energy than the Q1 subgroup, 4.7% (827 kWh) and 4.5% (811 kWh) respectively.

4.2.4. Discussion
Savings across quartile subgroups followed similar trends for each of the DSM programs tested. Table 8 summarizes results of the analysis of savings by pre-intervention static performance quartile, including statistical significance levels of differences between quartile subgroups. The subgroups with the best energy performance in the year before participation (Q1) saved the least energy in the year following participation. The subgroups with the worst energy performance in the year before participation (Q4) saved the most. In addition, the Q4 subgroup saved significantly more energy (kWh) than the Q1 and Q2 subgroups in each of the DSM programs tested. Differences in energy savings were also greater for programs that we expect to save more energy.

Table 6
Pairwise comparison of relative energy savings across all treatment groups.

Level A vs. Level B | n (A) | n (B) | p-value | Savings (%) | Lower 95% | Upper 95% | Sig.

Orlando, Florida (OUC) high-efficiency HVAC upgrade
ALL vs. Control | 149 | 35,053 | <.0001 | 11.4 | 8.9 | 13.9 | ***
Q1 vs. Control | 32 | 35,053 | 0.0300 | 4.5 | 0.5 | 8.4 | **
Q2 vs. Control | 45 | 35,053 | 0.0004 | 6.9 | 3.2 | 10.6 | ***
Q3 vs. Control | 41 | 35,053 | <.0001 | 15.2 | 11.3 | 19.0 | ***
Q4 vs. Control | 31 | 35,053 | <.0001 | 22.2 | 16.1 | 28.6 | ***
Q2 vs. Q1 | 45 | 32 | 0.2621 | 2.6 | -2.1 | 7.1 |
Q3 vs. Q1 | 41 | 32 | <.0001 | 10.6 | 5.8 | 15.7 | ***
Q4 vs. Q1 | 31 | 32 | <.0001 | 17.8 | 9.7 | 25.9 | ***
Q3 vs. Q2 | 41 | 45 | 0.0007 | 8.2 | 3.5 | 13.0 | ***
Q4 vs. Q2 | 31 | 45 | 0.0001 | 15.3 | 7.7 | 23.7 | ***
Q4 vs. Q3 | 31 | 41 | 0.0671 | 7.0 | -0.4 | 15.2 | *

Gainesville, Florida (GRU) attic insulation upgrade
ALL vs. Control | 126 | 20,024 | 0.0056 | 3.7 | 1.1 | 6.3 | ***
Q1 vs. Control | 27 | 20,024 | 0.3811 | -2.0 | -6.7 | 2.5 |
Q2 vs. Control | 40 | 20,024 | 0.3823 | 2.0 | -2.5 | 6.6 |
Q3 vs. Control | 30 | 20,024 | 0.0169 | 6.7 | 1.3 | 11.8 | **
Q4 vs. Control | 29 | 20,024 | <.0001 | 12.6 | 6.9 | 19.0 | ***
Q2 vs. Q1 | 40 | 27 | 0.1578 | 4.6 | -2.0 | 11.0 |
Q3 vs. Q1 | 30 | 27 | 0.0064 | 9.8 | 2.0 | 15.4 | ***
Q4 vs. Q1 | 29 | 27 | 0.0002 | 14.4 | 7.3 | 25.2 | ***
Q3 vs. Q2 | 30 | 40 | 0.2149 | 4.1 | -2.4 | 10.6 |
Q4 vs. Q2 | 29 | 40 | 0.0063 | 11.8 | 3.8 | 19.6 | ***
Q4 vs. Q3 | 29 | 30 | 0.1276 | 6.9 | -2.2 | 17.5 |

Jacksonville, Florida (JEA) professional energy audit
ALL vs. Control | 162 | 106,130 | 0.0054 | 3.2 | 1.0 | 5.5 | ***
Q1 vs. Control | 45 | 106,130 | 0.8814 | 0.2 | -2.8 | 3.3 |
Q2 vs. Control | 43 | 106,130 | 0.2303 | 2.3 | -1.5 | 6.0 |
Q3 vs. Control | 42 | 106,130 | 0.0267 | 4.7 | 0.6 | 8.9 | **
Q4 vs. Control | 32 | 106,130 | 0.0241 | 5.8 | 0.8 | 11.1 | **
Q2 vs. Q1 | 43 | 45 | 0.3808 | 1.9 | -2.4 | 6.3 |
Q3 vs. Q1 | 42 | 45 | 0.0512 | 4.7 | -0.1 | 9.1 | *
Q4 vs. Q1 | 32 | 45 | 0.0650 | 4.5 | -0.5 | 10.7 | *
Q3 vs. Q2 | 42 | 43 | 0.4212 | 1.8 | -2.9 | 8.7 |
Q4 vs. Q2 | 32 | 43 | 0.2365 | 3.5 | -3.1 | 10.0 |
Q4 vs. Q3 | 32 | 42 | 0.6428 | 1.7 | -6.4 | 9.0 |

Note: Statistical confidence level indicated by * at 90%, ** at 95%, *** at 99%.

The range of savings among the subgroups in the high-efficiency heat pump upgrade program, attic insulation upgrade, and professional energy audit programs was 17.8%, 14.4%, and 4.5% respectively. Savings presented here can be extrapolated to characterize average savings that would have been achieved had the actual participant group been comprised of greater proportions of high energy consumers (i.e., those with relatively poor pre-intervention static ACB performance).

4.3. Objective 3: extrapolation of impact analysis to estimate potential benefits of targeting

Table 9 shows, compares, and extrapolates from the four alternative program participation scenarios to estimate average (per-participant) energy savings, total program energy savings and overall potential for improvement in program impacts. As noted previously, the "Status Quo" scenario represents actual participation and measured energy savings for each of the three programs – an ex-post analysis – and all other scenarios are hypothetically defined based on different levels of targeting success – an ex-ante analysis – and the total number of program participants in each program is held constant across the four scenarios.

4.3.1. OUC/HVAC
For the high-efficiency heat pump upgrade program, the potential benefit of targeting at the "Low-Hanging Fruit" level vs. the "Status Quo" is a 41% marginal improvement in program impact, totaling an additional 126 megawatt-hours (MWh) of energy saved.



Table 7
Pairwise comparison of absolute energy savings across all treatment groups.

Level A vs. Level B | n (A) | n (B) | p-value | Savings (kWh/ekWh) | Lower 95% | Upper 95% | Sig.

Orlando, Florida (OUC) high-efficiency HVAC upgrade
ALL vs. Control | 149 | 35,053 | <.0001 | 2054 | 1574 | 2534 | ***
Q1 vs. Control | 32 | 35,053 | 0.0410 | 780 | 33 | 1511 | **
Q2 vs. Control | 45 | 35,053 | 0.0010 | 1217 | 512 | 1912 | ***
Q3 vs. Control | 41 | 35,053 | <.0001 | 2818 | 2116 | 3514 | ***
Q4 vs. Control | 31 | 35,053 | <.0001 | 4162 | 3032 | 5280 | ***
Q2 vs. Q1 | 45 | 32 | 0.2621 | 485 | -396 | 1347 |
Q3 vs. Q1 | 41 | 32 | <.0001 | 2040 | 1171 | 2927 | ***
Q4 vs. Q1 | 31 | 32 | <.0001 | 3467 | 1983 | 4903 | ***
Q3 vs. Q2 | 41 | 45 | 0.0006 | 1584 | 657 | 2496 | ***
Q4 vs. Q2 | 31 | 45 | <.0001 | 2963 | 1627 | 4409 | ***
Q4 vs. Q3 | 31 | 41 | 0.0395 | 1344 | 95 | 2723 | **

Gainesville, Florida (GRU) attic insulation upgrade
ALL vs. Control | 126 | 20,024 | 0.0060 | 688 | 198 | 1178 |
Q1 vs. Control | 27 | 20,024 | 0.6184 | -353 | -1204 | 471 |
Q2 vs. Control | 40 | 20,024 | 0.4529 | 370 | -469 | 1204 |
Q3 vs. Control | 30 | 20,024 | 0.0621 | 1268 | 257 | 2220 | *
Q4 vs. Control | 29 | 20,024 | <.0001 | 2382 | 1320 | 3562 | ***
Q2 vs. Q1 | 40 | 27 | 0.2839 | 799 | -343 | 1991 |
Q3 vs. Q1 | 30 | 27 | 0.0733 | 1685 | 483 | 2962 | *
Q4 vs. Q1 | 29 | 27 | 0.0005 | 2716 | 1271 | 4591 | ***
Q3 vs. Q2 | 30 | 40 | 0.3230 | 813 | -401 | 2108 |
Q4 vs. Q2 | 29 | 40 | 0.0169 | 2116 | 711 | 3691 | **
Q4 vs. Q3 | 29 | 30 | 0.1636 | 1248 | -382 | 3261 |

Jacksonville, Florida (JEA) professional energy audit
ALL vs. Control | 162 | 106,130 | 0.0024 | 533 | 188 | 878 | ***
Q1 vs. Control | 45 | 106,130 | 0.9847 | 5 | -471 | 480 |
Q2 vs. Control | 43 | 106,130 | 0.2140 | 383 | -226 | 988 |
Q3 vs. Control | 42 | 106,130 | 0.0177 | 830 | 150 | 1504 | **
Q4 vs. Control | 32 | 106,130 | 0.0189 | 960 | 156 | 1806 | **
Q2 vs. Q1 | 43 | 45 | 0.2967 | 363 | -335 | 1082 |
Q3 vs. Q1 | 42 | 45 | 0.0300 | 827 | 109 | 1599 | **
Q4 vs. Q1 | 32 | 45 | 0.0592 | 811 | -53 | 1869 | *
Q3 vs. Q2 | 42 | 43 | 0.3963 | 391 | -522 | 1529 |
Q4 vs. Q2 | 32 | 43 | 0.2323 | 590 | -446 | 1654 |
Q4 vs. Q3 | 32 | 42 | 0.6273 | 301 | -1074 | 1383 |

Note: Statistical confidence level indicated by * at 90%, ** at 95%, *** at 99%.

Table 8
Pre-intervention ACB quartile subgroup energy savings differences: summary results for relative and absolute savings.

Compared with Q1 (relative | absolute):
Orlando/HVAC upgrade: Q2 2.6% | 485 kWh; Q3 10.6%*** | 2040 kWh***; Q4 17.8%*** | 3467 kWh***
Gainesville/insulation upgrade: Q2 4.6% | 799 kWh; Q3 9.8%*** | 1685 kWh*; Q4 14.4%*** | 2716 kWh***
Jacksonville/energy audit: Q2 1.9% | 363 kWh; Q3 4.7%* | 827 kWh*; Q4 4.5%* | 811 kWh***

Compared with Q2 (relative | absolute):
Orlando/HVAC upgrade: Q3 8.2%*** | 1584 kWh***; Q4 15.3%*** | 2963 kWh***
Gainesville/insulation upgrade: Q3 4.1% | 813 kWh; Q4 11.8%*** | 2116 kWh*
Jacksonville/energy audit: Q3 1.8% | 391 kWh; Q4 3.5% | 590 kWh**

Compared with Q3 (relative | absolute):
Orlando/HVAC upgrade: Q4 7.0%* | 1344 kWh**
Gainesville/insulation upgrade: Q4 6.9% | 1248 kWh
Jacksonville/energy audit: Q4 1.7% | 301 kWh

Note: Statistical confidence level indicated by * at 90%, ** at 95%, *** at 99%.


Achieving the "Reaching" level of program participation is expected to yield 85% improvement relative to the "Status Quo", a marginal gain of nearly 259 MWh in energy savings. At the most ambitious level of participation achieved via targeting, "Great Expectations", program impacts are increased by 103%, or over 314 MWh beyond the "Status Quo" level of savings.
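The scenario arithmetic behind these figures can be reproduced, to within rounding of the published estimates, by reweighting the per-quartile savings from Table 7 by each scenario's participant counts from Table 9. The short sketch below (not the authors' code) does this for the OUC heat pump program:

# Illustrative reconstruction of the Table 9 scenario arithmetic for the OUC
# high-efficiency heat pump program. The Status Quo total uses the published
# overall mean (2054 kWh x 149 participants); the hypothetical scenarios
# reweight the published per-quartile savings estimates by each scenario's
# participant counts. Small differences from Table 9 remain because the
# published estimates are rounded.
N_PARTICIPANTS = 149
OVERALL_MEAN_KWH = 2054
quartile_mean_kwh = {"Q1": 780, "Q2": 1217, "Q3": 2818, "Q4": 4162}

scenario_counts = {
    "Low Hanging Fruit":  {"Q1": 0, "Q2": 45, "Q3": 41, "Q4": 63},
    "Reaching":           {"Q1": 0, "Q2": 0,  "Q3": 41, "Q4": 108},
    "Great Expectations": {"Q1": 0, "Q2": 0,  "Q3": 0,  "Q4": 149},
}

status_quo_total = OVERALL_MEAN_KWH * N_PARTICIPANTS  # ~306,000 kWh
print(f"Status Quo: {status_quo_total:,.0f} kWh")
for name, counts in scenario_counts.items():
    total = sum(n * quartile_mean_kwh[q] for q, n in counts.items())
    gain = total - status_quo_total
    print(f"{name}: {total:,.0f} kWh, +{gain:,.0f} kWh ({gain / status_quo_total:+.0%})")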

4.3.2. GRU/Insulation
For the attic insulation upgrade program, the potential benefit of targeting at the "Low-Hanging Fruit" level vs. the "Status Quo" is a 115% marginal improvement in program impact, totaling an additional 100 MWh of energy saved. Achieving the "Reaching" level of program participation is expected to yield 208% improvement relative to the "Status Quo", a marginal gain of over 180 MWh in energy savings. At the most ambitious level of participation achieved via targeting, "Great Expectations", program impacts are increased by 246%, or nearly 214 MWh beyond the "Status Quo" level of savings.

4.3.3. JEA/Professional energy audit
For the only behavior change program evaluated (the professional energy audit) the potential benefit of targeting at the "Low-Hanging Fruit" level vs. the "Status Quo" is a 45% marginal improvement in program impact, totaling additional energy savings of nearly 39 MWh. Achieving the "Reaching" level of program participation is expected to yield 74% improvement relative to the "Status Quo", a marginal energy savings gain of nearly 64 MWh.


Table 9
Potential for improvements in program impact via targeting under alternative customer participation scenarios.

Scenario | Participants by pre-participation quartile (Q1 / Q2 / Q3 / Q4) | Average savings (kWh) | Total savings (kWh) | Improvement from Status Quo (kWh, %)

Orlando, Florida (OUC) high-efficiency HVAC upgrade
Status Quo | 32 / 45 / 41 / 31 | 2054 | 306,037 | 0 (0%)
Low Hanging Fruit | 0 / 45 / 41 / 63 | 2903 | 432,477 | 126,440 (41%)
Reaching | 0 / 0 / 41 / 108 | 3792 | 564,998 | 258,961 (85%)
Great Expectations | 0 / 0 / 0 / 149 | 4162 | 620,072 | 314,035 (103%)

Gainesville, Florida (GRU) attic insulation upgrade
Status Quo | 27 / 40 / 30 / 29 | 688 | 86,646 | 0 (0%)
Low Hanging Fruit | 0 / 40 / 30 / 56 | 1478 | 186,249 | 99,603 (115%)
Reaching | 0 / 0 / 30 / 96 | 2117 | 266,739 | 180,092 (208%)
Great Expectations | 0 / 0 / 0 / 126 | 2382 | 300,167 | 213,521 (246%)

Jacksonville, Florida (JEA) professional energy audit
Status Quo | 45 / 43 / 42 / 32 | 533 | 86,409 | 0 (0%)
Low Hanging Fruit | 0 / 43 / 42 / 77 | 774 | 125,313 | 38,904 (45%)
Reaching | 0 / 0 / 42 / 120 | 927 | 150,124 | 63,715 (74%)
Great Expectations | 0 / 0 / 0 / 162 | 960 | 155,582 | 69,173 (80%)


At the most ambitious level of participation achieved via targeting, "Great Expectations", program impacts are increased by 80%, or nearly 70 MWh beyond the "Status Quo" level of savings.

4.3.4. Discussion
While the technical, achievable, and economic potentials for energy-efficiency and conservation program impacts vary across utility territories and customer subpopulations in our study, findings from the customer targeting analysis indicate significant opportunities to improve program outcomes across all three programs evaluated. Results also suggest that there may be an optimal program participation scenario that could guide program implementation priorities: for all three programs, moving from the "Reaching" to the "Great Expectations" participation scenario, which is likely to entail significant effort by program administrators, yields significantly lower marginal savings than moving from "Low-Hanging Fruit" to "Reaching" or from "Status Quo" to "Low-Hanging Fruit". As with diminishing returns to technical efficiency, program benefits from targeting are likely to also experience diminishing marginal returns to investment.

5. Conclusions and recommendations

This study tested three common utility demand-side management programs, each in different utility territories and over different time periods, to assess the potential impact of using the Annual Community Baseline methodology to target DSM participants. Findings suggest that increased marketing to poor performing households could lead to far greater energy savings for energy efficiency and conservation programs for both technology and behavioral interventions. They also suggest that current methods, including blanket marketing campaigns, may induce/encourage program participation and monetary investment by those customers who are likely to save very little energy and therefore realize relatively low benefit-to-cost ratios for their investments.

From these findings, we anticipate that programs with higher deemed, or expected, savings may find greater impact from using the ACB targeting method. Based on the literature referenced in this article and the study outcomes, we propose that marginal differences in energy savings across participant subgroups would be even greater in climates where homes use more energy, such as northern climates with greater heating demand, and in areas where energy efficiency and conservation programs are less common (i.e., DSM programs are relatively new). Additional research should be conducted to further test the applicability of this method in other climatic regions.

Findings presented here illustrate that appropriate, strategic customer targeting for participation in DSM programs might lead to significantly greater energy savings that could result in improved program cost-effectiveness for utilities and shorter payback periods for participating customers. We propose a strategy of targeting program marketing and recruitment materials to the cohort of customers with the greatest expected savings potential. This strategy is expected to improve DSM program impact given a fixed budget constraint (or alternatively, to minimize the cost to achieve a target level of program impact). If adopted by utilities and DSM program administrators as a key component of comprehensive energy information management systems – similar to those developed for commercial and industrial applications [26] – the marginal benefits for both utilities and residential customers are expected to be even greater. Future studies will include utility and participant costs to determine the potential impact of this type of targeting strategy on financial incentives. Additional work will also explore enhanced participant targeting that goes beyond the simple, quartile-based categorization used in this study to further strengthen program outcomes.

In summary, the authors of this study suggest implementation of appropriate participant targeting protocols, such as the Annual Community Baseline method, to enhance the energy savings benefits of energy efficiency and conservation programs.

References

[1] DOE. Database of State Incentives for Renewables & Efficiency (DSIRE); n.d.
[2] DOE. Annual Energy Review 2011. Washington, DC: U.S. Department of Energy, Energy Information Administration; 2012.
[3] Loughran DS, Kulick J. Demand-side management and energy efficiency in the United States. Energy J 2004;25:19–43.
[4] Hirst E. The effects of utility DSM programs on electricity costs and prices. Oak Ridge, Tennessee: Oak Ridge National Laboratory; 1991.
[5] Arimura TH, Li S, Newell RG, Palmer K. Cost-effectiveness of electricity energy efficiency programs. Energy J 2012;33:63–99.
[6] Barbose GL, Goldman CA, Hoffman IM, Billingsley M. The future of utility customer-funded energy efficiency programs in the USA: projected spending and savings to 2025. Energy Efficiency 2013:1–19.
[7] Nadel S. Utility demand-side management experience and potential – a critical review. Annu Rev Energy Environ 1992;17:507–35.
[8] Kelly S. Do homes that are more energy efficient consume less energy? A structural equation model of the English residential sector. Energy 2011;36:5610–20.
[9] Wall LW, Goldman CA, Rosenfeld AH, Dutt GS. Building energy use compilation and analysis (BECA). Part B: Retrofit of existing North American residential buildings. Energy Buildings 1983;5:151–70.
[10] York D, Molina M, Neubauer M, Nowak S, Nadel S, Chittum A, et al. Frontiers of energy efficiency: next generation programs reach for high energy savings. Am Council Energy-Efficient Econ 2013.
[11] Allcott H. Social norms and energy conservation. J Public Econom 2011;95:1082–95.
[12] Larsen PH, Goldman CA, Satchwell A. Evolution of the U.S. energy service company industry: market size and project performance from 1990 to 2008. Energy Policy 2012;50:802–20.
[13] Mills E. Inter-comparison of North American residential energy analysis tools. Energy Buildings 2004;36:865–80.
[14] Jones PH, Taylor NW, Kipp MJ, Knowles HS. Quantifying household energy performance using annual community baselines. Int J Energy Sector Manage 2010;4:593–613.
[15] Metcalf GE, Hassett KA. Measuring the energy savings from home improvement investments: evidence from monthly billing data. Rev Econom Statist 1999;81:516–28.
[16] Kaufman N, Palmer KL. Energy efficiency program evaluations: opportunities for learning and inputs to incentive mechanisms. Energy Efficiency 2012;5:243–68.
[17] Hong T, Wang P, Willis HL. A naive multiple linear regression benchmark for short term load forecasting. In: 2011 IEEE Power and Energy Society General Meeting; 2011. p. 1–6.
[18] Jones P, Taylor N, Kipp J. Housing stock characterization study: an innovative approach to measuring retrofit impact. US Department of Energy; 2012.
[19] Reichl J, Kollmann A. The baseline in bottom-up energy efficiency and saving calculations – a concept for its formalisation and a discussion of relevant options. Appl Energy 2011;88:422–31.
[20] Schiller SR. National Energy Efficiency Evaluation, Measurement and Verification (EM&V) Standard: scoping study of issues and implementation requirements; 2011.
[21] OUC. Residential Rebate Form; n.d.
[22] GRU. Added Insulation Rebate for Homes; n.d.
[23] JEA. Energy Efficiency Evaluations, JEA Energy Audits; 2012.
[24] Hájek J. Theory of rank tests. New York: Academic Press; 1967.
[25] Hodges JL, Lehmann EL. Estimates of location based on rank tests. Ann Math Statist 1963;34:598–611.
[26] Swords B, Coyle E, Norton B. An enterprise energy-information system. Appl Energy 2008;85:61–9.