
R&D Comes to Services



Bank of America's Pathbreaking Experiments

Service companies have long suffered from haphazard approaches to innovation. But a leading bank is proving that service development can be as rigorous as product development.

by Stefan Thomke

AT THE HEART of business today lies a dilemma: Our economy is increasingly dependent on services, yet our innovation processes remain oriented toward products. We have well-tested, scientific methods for developing and refining manufactured goods - methods that date back to the industrial laboratories of Thomas Edison - but many of them don't seem applicable to the world of services. Companies looking for breakthroughs in service development tend to fall back on informal and largely haphazard efforts, from brainstorming, to trial and error, to innovation teams. Such programs can produce occasional successes, but they offer little opportunity for the kind of systematic learning required to strengthen the consistency and productivity of service development - and innovation in general - over time.

The challenges in applying the discipline of formal R&D processes to services are readily apparent. Because a service is intangible, often existing only in the moment of its delivery to a customer, it is difficult to isolate in a traditional laboratory. And since many services are tailored to individual buyers at the point of purchase, they can't be tested through large samples. As a result, experiments with new services are most useful when they are conducted live - with real customers engaged in real transactions. Live tests magnify the cost of failure, however; an experiment that doesn't work may harm customer relationships and even the brand. Live experiments are also harder to execute and measure. Once you leave a laboratory and enter the hurly-burly of a commercial setting, the whole notion of experimental controls has to be rethought. The noise can drown out the signal, making it hard to determine whether the variable you're testing for is the one that actually causes the effect you observe.

APRIL 2003 71

Given such challenges, it's no surprise that most service companies have not established rigorous, ongoing R&D processes. But now there is an important exception to that rule. Over the past three years, Bank of America has been running a series of formal experiments aimed at creating new service concepts for retail banking. The company has turned a set of its branches into, in effect, a laboratory where a corporate research team conducts service experiments with actual customers during regular business hours, measures results precisely and compares them with those of control branches, and pinpoints attractive innovations for broader rollout.

Bank of America's program is a work in progress - it is in itself an experiment. Refinements have been made at every stage of its development. Some of its elements have proved successful; some haven't. But through its successes and its failures, the effort has revealed an enormous amount about what a true R&D operation might look like inside a service business.

The Growth Challenge

The end of the twentieth century was a time of rapid consolidation in the U.S. banking industry, and Bank of America was an eager participant. Through a three-decade M&A effort, culminating in a $60 billion merger with NationsBank in 1998, the bank transformed itself from a regional West Coast operation into one of the country's largest national banks, operating some 4,500 banking centers in 21 states and serving approximately 27 million households and 2 million businesses. But as the twenty-first century dawned, Bank of America, like other large U.S. banks, faced a new challenge: With the opportunities for further acquisitions narrowing, it would need to find ways to grow organically, to expand its existing business by attracting more customers and fulfilling a greater share of their banking needs.

When Kenneth Lewis became the bank's chief executive in 1999, he saw that winning the battle for customers would require fresh approaches to service development and delivery. The old modus operandi of the banking industry - provide the same services in the same ways as your competitors - was a recipe for stagnation. But Lewis faced a major obstacle in achieving his vision: The bank had never made innovation a priority, and as a result, it lacked any formal infrastructure for developing new services. Innovation, Lewis saw, would require a revolution in thinking and in practice.

The instrument of that revolution was the Innovation & Development (I&D) Team, a corporate unit charged with spearheading product and service development at the bank. The team's immediate goal was to pioneer new services and service-delivery techniques that would strengthen the bank's relationships with branch customers while also achieving a high degree of efficiency in transactions. Recognizing that service innovations should be tested in the field, I&D Team members, together with senior executives, decided to take an unprecedented step in the conservative banking industry: They would create an "innovation market" within the bank's existing network - a set of branches that would provide, as bank executive and I&D Team leader Amy Brady put it, "a test bed for creative ideas to increase customer satisfaction and grow revenues." The test market, the team realized, would have to be large enough to support a wide range of experiments but small enough to limit the risks to the business.

The bank settled on Atlanta as the site for its innovation market. Atlanta represented a stable region for the bank - its last major acquisition in the area occurred in 1996 - and it was near the national headquarters in Charlotte, North Carolina. The Atlanta branches were also technologically advanced, even equipped with broadband communication lines. Twenty of Bank of America's 200 branches in Atlanta were initially dedicated to the innovation market, most in wealthier neighborhoods with sophisticated banking customers interested in a wide range of services. (Five more branches were later added to the project.) The managers of each of these branches agreed to work closely with the I&D Team in carrying out the research effort, and they also agreed to provide much of the required funding out of their own budgets in order to get the early benefits of the resulting innovations. Integrating the program into normal operations at the branches was a risky step - experiments, after all, necessarily carry the potential for disruption - but the team saw it as essential. Only by carrying out experiments under realistic conditions - organizationally, operationally, and economically - could the I&D Team ensure the reliability of the results.

Designing Experiments

The I&D Team quickly realized that it would be very difficult to conduct a diverse array of experiments within the confines of a traditionally designed bank branch. Experiments require frequent changes in practices and processes, which neither the branch employees nor the

Stefan Thomke is an associate professor of technology and operations management at Harvard Business School in Boston. He is the author of three other HBR articles, including "Enlightened Experimentation: The New Imperative for Innovation" (February 2001), and the forthcoming book Experimentation Matters: Unlocking the Potential of New Technologies for Innovation (Harvard Business School Press, 2003).

72 HARVARD BUSINESS REVIEW


A Process for Service Innovation

Bank of America created a rigorous five-stage process for conceiving and executing innovation experiments (shown below). Each stage of the process had carefully delineated steps, desired outcomes, success factors, and measures of performance. For a detailed discussion of the process, see "Bank of America (A) and (B)," HBS cases 9-603-022 and 9-603-023.

1. Evaluate Ideas
Steps:
- Conceive Ideas (input: ideas and information; output: updated idea queue)
- Assess Ideas (input: updated idea queue; output: approved ideas)
- Prioritize Ideas (input: approved ideas; output: list of prioritized ideas)
Desired Outcome: Generate innovative ideas from internal and external sources.
Success Factors: Awareness and commitment by bank personnel and management.
Key Measures: Number of total ideas logged into dedicated spreadsheet. Percentage of approved ideas.

2. Plan and Design
Steps:
- Assign and Scope (input: prioritized ideas; output: design needs)
- Complete Design (input: design needs; output: design plan)
- Build Rollout Plan (input: design plan; output: rollout plan)
- Develop Test Plan (input: individual rollout plan; output: integrated rollout plan)
Desired Outcome: Quickly plan the design, build, and rollout of idea.
Success Factors: Minimal planning time. Timing and quality of design.
Key Measures: Cycle time (by category of experiment). Quality of the experiment's design.

3. Implement
Steps:
- Implement Idea (input: integrated rollout plan; output: implemented ideas)
Desired Outcome: Successfully implement ideas.
Success Factors: Successful integration of ideas. No overload of experiments at test branches.
Key Measures: Cycle time. Market readiness of the idea. On-time implementation.

4. Manage the Market
Steps:
- Monitor Performance (input: implemented ideas; output: data results)
- Report Results (input: data results; output: test-market reports)
- Improve Process (input: test-market reports; output: enhancements)
Desired Outcome: Create a stable operating environment for testing new concepts and ideas.
Success Factors: Fast feedback. Meeting test and market goals.
Key Measures: Test cycle of no less than 90 days. Operating results.

5. Recommend
Steps:
- Complete Recommendation (input: test results; output: recommendation)
- Review and Approve Recommendation (input: recommendation; output: approval)
- Communicate Recommendation (input: approval; output: communication)
Desired Outcome: Evaluate ideas and roll them out to test markets nationwide.
Success Factors: Quality of measurement results.
Key Measures: Cycle time. Clarity and completeness of the recommendation.


physical facilities were prepared for. So the team decided to reconfigure the 20 Atlanta branches into three alternative models: Five branches were redesigned as "express centers," efficient, modernistic buildings where consumers could quickly perform routine transactions such as deposits and withdrawals. Five were turned into "financial centers," spacious, relaxed outlets where customers would have access to the trained staff and advanced technologies required for sophisticated services such as stock trading and portfolio management. The remaining ten branches were configured as "traditional centers," familiar-looking branches that provided conventional banking services, though often supported by new technologies and redesigned processes.

Bank of America decided to take an unprecedented step in the conservative banking industry: It would create an "innovation market" within the bank's existing branches.

The group unveiled its first redesigned branch - a financial center - in the posh Buckhead section of Atlanta in the fall of 2000. A customer entering the new center was immediately greeted at the door by a host - an idea borrowed from Wal-Mart and other retail stores. At freestanding kiosks, associates stood ready to help the customer open accounts, set up loans, retrieve copies of old checks, or even buy and sell stocks and mutual funds. An "investment bar" offered personal computers where the customer could do her banking, check her investment portfolio, or just surf the Internet. There were comfortable couches, where she could relax, sip free coffee, and read financial magazines and other investment literature. And if she had to wait for a teller, she could pass the few minutes in line watching television news monitors or electronic stock tickers. What that customer probably wouldn't have realized was that all of these new services were actually discrete experiments, and her reactions to them were being carefully monitored and measured.

To select and execute the experiments in the test branches, the I&D Team followed a detailed five-step process, as illustrated in the exhibit "A Process for Service Innovation." The critical first step was coming up with ideas for possible experiments and then assessing and prioritizing them. Ideas were submitted by team members and by branch staff and were often inspired by reviews of past customer-satisfaction studies and other market research. Every potential experiment was entered into an "idea portfolio," a spreadsheet that described the experiment, the process or problem it addressed, the customer segments it targeted, and its status. The team categorized each experiment as a high, medium, or low priority, based primarily on its projected impact on customers but also taking into account its fit with the bank's strategy and goals and its funding requirements. In some cases, focus groups were conducted to provide a rough sense of an idea's likely effect on customers. By May 2002, more than 200 new ideas had been generated, and 40 of them had been launched as formal experiments.
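The "idea portfolio" described above can be sketched as a simple record type plus a prioritization step. This is only an illustration: the field names, status values, and sample ideas are my own stand-ins, not the bank's actual spreadsheet columns.

```python
from dataclasses import dataclass, field
from enum import Enum

class Priority(Enum):
    HIGH = 1
    MEDIUM = 2
    LOW = 3

@dataclass
class IdeaRecord:
    """One row of the 'idea portfolio' spreadsheet (fields are illustrative)."""
    description: str                              # what the experiment would do
    problem: str                                  # the process or problem it addresses
    segments: list = field(default_factory=list)  # customer segments targeted
    status: str = "logged"                        # e.g., logged -> approved -> launched
    priority: Priority = Priority.LOW

def next_candidates(portfolio):
    """Return approved ideas, highest priority first, mirroring the team's
    practice of moving high-impact experiments ahead of others."""
    approved = [idea for idea in portfolio if idea.status == "approved"]
    return sorted(approved, key=lambda idea: idea.priority.value)

portfolio = [
    IdeaRecord("TV monitors over tellers", "long perceived waits",
               ["branch visitors"], "approved", Priority.HIGH),
    IdeaRecord("Investment bar PCs", "self-service demand",
               ["affluent customers"], "logged", Priority.MEDIUM),
]
print([idea.description for idea in next_candidates(portfolio)])
```

Only the approved, high-priority idea is surfaced; ideas still in the "logged" state wait for assessment, as in the process exhibit.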

Once an idea was given a green light, the actual experiment had to be designed. The I&D Team wanted to perform as many tests as possible, so it strove to plan each experiment quickly. To aid in this effort, the group created a prototype branch in the bank's Charlotte headquarters where team members could rehearse the steps involved in an experiment and work out any process problems before going live with customers. The team would, for example, time each activity required in processing a particular transaction. When an experiment required the involvement of a specialist - a mortgage underwriter, say - the team would enlist an actual specialist from the bank's staff and have him or her perform the required task. By the time an experiment was rolled out in one of the Atlanta branches, most of the kinks had been worked out. The use of the prototype center reflects an important tenet of service experiments: Design and production problems should be worked out off-line, in a lab setting without customers, before the service delivery is tested in a live environment.

Going Live

An experiment is only as good as the learning it produces. Through hundreds of years of experience in the sciences, and decades in commercial product development, researchers have discovered a lot about how to design experiments to maximize learning. We know, for example, that an effective experiment has to isolate the particular factors being investigated; that it must faithfully replicate the real-world situation it's testing; that it has to be conducted efficiently, at a reasonable cost; and that its results have to be accurately measured and used, in turn, to refine its design. (An overview of the qualities of good experiments is provided in the sidebar "Learning Through Experiments.") These are always complex challenges, and, as Bank of America found out, many of them become further complicated when experiments are moved out of a laboratory and into a bank branch filled with real employees serving real customers in real time. To its credit, the I&D Team thought carefully about ways to increase the learning produced by its experiments, with a particular focus on enhancing the reliability of the tests' results and the accuracy of their measurement. As Milton Jones, one of the bank's group presidents, constantly reminded the team: "At the end of the day, the most critical


Learning Through Experiments

The objective of all experiments is to learn what does and does not work. Experiments should be designed not so much to maximize their odds of success but to maximize the information and insights they produce. The rate at which companies can learn by experimentation will depend on many factors, some of which will be unique to a given company. But there are seven factors that tend to be common to all experiments.

- Fidelity: The degree to which a model and its testing conditions represent a final product, process, or service under conditions of actual use.
- Cost: The total cost of designing, building, running, and analyzing an experiment, including expenses for prototypes, laboratory use, and so on.
- Iteration time: The time from the initial planning of an experiment to when the analyzed results are available and used for planning another iteration.
- Capacity: The number of experiments that can be carried out with some fidelity during a given time period.
- Sequence: The extent to which experiments are run in parallel or series.
- Signal-to-noise ratio: The extent to which the variable of interest is obscured by other variables.
- Type: The degree to which a variable is manipulated, from incremental change to radical change.

aspect of experimentation and learning is measurement. Measurements will defend you if done right; otherwise they will inhibit you."

Minimizing the Effect of Noise. Experiments can be distorted when "noise" - variables other than the one being tested - influences results in ways that can't be controlled or measured. Managing noise is a particular challenge in service experiments conducted in business settings. At the bank branches, for example, extraneous factors like seasonal fluctuations in demand, changing market conditions, staff turnover, or even bad weather could alter customers' perceptions and behavior, distorting the results of the tests. To temper the effects of noise, the I&D Team made heavy use of two techniques, repetition of trials and experimental controls. Repeating the same experiment in the same branch served to average out noise effects, and repeating the same experiment in different branches helped the team determine which factors were unique to a given branch. Setting up a control branch for each experiment - one with similar characteristics and customer demographics to the branch actually conducting the experiment - enabled the team to further isolate the variable being tested. If the team members wanted to test, say, a new piece of software for making transfers between accounts, they would install the software on the terminals at one express center but not on the terminals at another, similar express center. In this way, any differences in customers' performance in carrying out a transfer at the two branches could be attributed to the change in the software. The team was able to draw control branches not only from the three different types of branches in the innovation market but also from other branches in Atlanta and in nearby regions.
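The logic of repeated trials plus a matched control can be sketched with synthetic data. This is a toy simulation, not the bank's data: both branches share the same "noise" distribution, only the test branch gets a (hypothetical) treatment effect, and averaging over many trials lets the difference in means recover that effect.

```python
import random
import statistics

random.seed(0)

def simulate_branch(base_minutes, effect=0.0, trials=200):
    """Synthetic transaction times: a shared baseline plus random noise
    (standing in for staffing, weather, seasonality) plus an optional
    treatment effect from the variable under test."""
    return [base_minutes + random.gauss(0, 1.5) + effect for _ in range(trials)]

# Identical underlying conditions; only the test branch runs the new software.
test_branch    = simulate_branch(6.0, effect=-0.8)  # hypothetical 0.8-minute speedup
control_branch = simulate_branch(6.0, effect=0.0)

# Averaging over repeated trials cancels much of the noise, so the difference
# in branch means estimates the treatment effect itself.
estimated_effect = statistics.mean(test_branch) - statistics.mean(control_branch)
print(f"estimated effect: {estimated_effect:.2f} minutes")
```

With enough trials the estimate lands near the true -0.8 minutes even though any single transaction is dominated by noise - which is the point of the team's repetition-plus-control design.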

Achieving High Fidelity. In the development of products, experiments are often, for cost or feasibility reasons, conducted with models or prototypes. This always raises a question of fidelity - How reliable is the model in representing the real world? That question becomes, in some ways, less of a concern in service experiments like Bank of America's. While real-world testing has the drawback of amplifying noise, it has the advantage of increasing fidelity. But the bank did have to grapple with the cost issue. Achieving high fidelity required substantial investment - in bank remodeling, personnel training, technology, and the like. By requiring that the experiments be funded out of the branches' operating budgets, the bank imposed fiscal discipline on the program: Branch management would naturally demand that experiments have a high likelihood of providing attractive returns. To attain high fidelity, the bank also had to deal with the so-called Hawthorne effect. A well-documented phenomenon, the Hawthorne effect refers to the way that people who know they are under observation - such as those participating in an experiment - tend to change their behavior and thus distort the results. The I&D Team was aware that such distortion was possible - given the direct and indirect pressure on branch staff to perform well in the experiments - and that it could damage the fidelity of the experiments. Here, too, the careful use of controls helped. By comparing the results of experiments susceptible to the Hawthorne effect to the results achieved by control branches within the innovation market, the team was able to filter out some of the effect. The team also instituted "washout periods." Measurements of an experiment's results did not begin until a week or two after its start, allowing time for any novelty effects to wear off among the staff.

Attaining Rapid Feedback. People learn best when they receive immediate feedback on the results of their actions. (Imagine how hard it would be to learn to play the piano if there were a long delay between striking a key and hearing the note.) But, in many circumstances, it takes time to ensure that results are accurate - giving misleading feedback is even worse than giving no feedback. Finding the right balance between speed and reliability in providing feedback is crucial to effective experimentation. The I&D Team specified that every experiment would run for 90 days (not including the washout period) before it could be adjusted or discontinued based on the results generated. The team believed that three months would provide enough time to gain reliable measures without unduly delaying modifications. In practice, there were some exceptions to this rule. The team revamped, for example, a mortgage loan experiment after just 30 days, primarily because it became clear that it was taking much too long to get credit approvals. The quick revision of this experiment paid off by leading to the launch of a successful new mortgage program.

Once an experiment was completed and measured, a decision had to be made about whether it was a success and warranted a broader rollout. This required a straightforward, two-part analysis. First, performance data from the test locations and the control branches were analyzed to determine whether the experiment had enhanced customer satisfaction, revenue generation, productivity, or any other relevant measure of performance. Second, a cost-benefit analysis was carried out to ascertain whether the performance gain outweighed the expense required to introduce the new process or technology throughout the broader branch network. To date, the program has posted strong results: Of the 40 experiments conducted as of May 2002, 36 were deemed successes, and 20 had been recommended for national rollout. But, as we'll soon see, this high success rate posed its own challenges for the I&D Team. (For a detailed discussion of one of the bank's successful experiments, see the sidebar "In the Transaction Zone.")

In the Transaction Zone

The transaction zone media (TZM) experiment provides a useful example of how Bank of America's innovation process works. The experiment had its origins in an earlier study in which market researchers "intercepted" some 1,000 customers standing in bank lines and asked them a series of questions. The study revealed that after a person stands in line for about three minutes, a wide gap opens between actual and perceived wait times. A two-minute wait, for example, usually feels like a two-minute wait, but a five-minute wait may feel like a ten-minute wait. Two subsequent focus groups with sales associates and a formal analysis by the Gallup organization provided further corroboration of this effect. When the I&D Team reviewed the data, they realized there might be opportunities to reduce perceived wait times without reducing actual wait times. Psychological studies have revealed, after all, that if you distract a person from a boring chore, time seems to pass much faster. So the team came up with a hypothesis to test: If you entertain people in line by putting television monitors in the "transaction zone" - above the row of tellers in a branch lobby - you will reduce perceived wait times by at least 15%.

Because long waits have a direct impact on customer satisfaction, the team gave the transaction zone media experiment a high priority. In the summer of 2001, the bank installed monitors set to the Atlanta-based news station CNN over the teller booths in one traditional center. Another traditional center serving a similar clientele was used as a control branch. After a week's washout period, the team began to carefully measure actual and perceived wait times at the two branches. The results were significant, as shown in the chart "Actual Versus Perceived Waiting Time." The degree of overestimation of wait times dropped from 32% to 15% at the test branch. During the same period, the control branch actually saw an increase in overestimated wait times, from 15% to 26%.

Although these were encouraging results, the team still had to prove to senior management that the installation of television monitors would ultimately boost the bank's bottom line. The team knew, from prior studies, that improvements in the bank's customer-satisfaction index (based on a standard 30-question survey) correlated with increases in future sales. Every one-point improvement in the index added $1.40 in annual revenue per household, mainly through increased customer purchases and retention. So a branch with a customer base of 10,000 households would increase its annual revenues by $28,000 if the index increased by just two points. The team carried out a statistical analysis of the test branch's results and projected that the reductions in perceived wait times would translate into a 5.9-point increase in overall banking-center customer satisfaction.

While the benefits were substantial, the team had to consider whether they outweighed the costs of buying and installing the monitors. The team determined that it would cost some $22,000 to upgrade a branch in the Atlanta innovation market but that, for a national rollout, economies of scale would bring the per-branch cost down to about $10,000 per site. Any branch with more than a few thousand households in its customer base would therefore be able to recoup the up-front cost in less than a year. Encouraged by the program's apparent economic viability, the team recently launched a second phase of the TZM experiment, in which it is measuring the impact of more varied television programming, different sound levels, and even advertising.
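The cost-benefit arithmetic behind the TZM rollout decision can be replayed directly from the figures in the sidebar ($1.40 of annual revenue per household per satisfaction point, roughly $10,000 per branch at national scale). The function names are mine; only the constants come from the article.

```python
REVENUE_PER_POINT = 1.40   # annual $ per household per satisfaction-index point
ROLLOUT_COST = 10_000      # approximate per-branch cost of TZM at national scale

def annual_gain(households, index_points):
    """Projected extra annual revenue from a satisfaction-index improvement."""
    return REVENUE_PER_POINT * index_points * households

def payback_years(households, index_points):
    """Years to recoup the up-front monitor installation cost."""
    return ROLLOUT_COST / annual_gain(households, index_points)

# The article's worked example: 10,000 households and a two-point gain.
gain = annual_gain(10_000, 2)       # $28,000 per year
years = payback_years(10_000, 2)    # well under one year

# Smallest customer base that still pays back within a year at a two-point
# gain -- the article's "more than a few thousand households."
breakeven_households = ROLLOUT_COST / (REVENUE_PER_POINT * 2)
print(gain, round(years, 2), round(breakeven_households))
```

The break-even comes out to roughly 3,600 households at a two-point gain, consistent with the team's conclusion that most branches would recoup the cost in less than a year.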

Actual Versus Perceived Waiting Time

Bank of America's I&D Team asked customers who had waited in line more than five minutes: How long have you been waiting?

Experimental site: Before the implementation of Bank of America's transaction zone media experiment, customers who waited longer than five minutes significantly overestimated their waiting time (32%). But after the installation of TV monitors in the bank lobby, overestimates for the same customer group dropped to 15%. Post-TZM, the average perceived wait was 7.04 minutes against an actual wait of 6.14 minutes (6.17 minutes pre-TZM).

Control branch: In this same testing period, no intervention was carried out at the control branch, which had demographics very similar to those of the experimental site. Overestimates at this site actually increased from 15% to 26%. Post-TZM, the average perceived wait there was 9.27 minutes against an actual wait of 7.37 minutes (7.38 minutes pre-TZM).

Rewards and Challenges

Bank of America has reaped important rewards from its experiments. The initiative has generated an unprecedented surge of creative thinking about branch banking, as manifested not only in the dozens of innovative proposals for service experiments but in the establishment of the three very different models of physical branches. Within the innovation market, customer satisfaction has improved substantially, and the experimental branches have attracted many new customers, some of whom travel considerable distances to do their banking at the new centers. More broadly, many of the experimental processes and technologies are now being adopted throughout the bank's national branch network, with promising results.

As important as the business benefits is the enormous amount of learning that the bank has gained. Carrying out experiments in a service setting poses many difficult problems. In grappling with the challenges, the bank has deepened its understanding of the unique dynamics of service innovation. It may not have solved all the problems, but it has gained valuable insights into the process of service development - insights that will likely provide Bank of America with an important edge over its less adventurous competitors.
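The overestimation percentages in the waiting-time exhibit follow from a simple formula: the gap between perceived and actual wait, as a share of the actual wait. The pairing of the minute values below is my reading of the chart residue, so treat it as an assumption; the arithmetic does reproduce the 15% and 26% figures reported in the text.

```python
def overestimate_pct(perceived, actual):
    """Percentage by which the perceived wait exceeds the measured wait,
    rounded to the nearest whole percent."""
    return round(100 * (perceived - actual) / actual)

# Post-TZM readings (minutes), as I interpret the exhibit's values:
test_branch = overestimate_pct(7.04, 6.14)      # experimental site after TZM
control_branch = overestimate_pct(9.27, 7.37)   # control branch, same period
print(test_branch, control_branch)              # 15 26
```

These match the article's reported drop to 15% at the test branch and rise to 26% at the control branch.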

In a lab, experiments are routinely undertaken with the expectation that they'll fail but still produce value. In the real world, there is pressure to avoid failure.

Let's look more closely at some of the toughest challenges the bank has faced. First of all, carrying out experiments in a live setting inevitably entails disruptions. Customers may at times be confused by unfamiliar processes, and employees may be distracted as they learn new ways to work. During the early months of the Bank of America program, for example, the I&D Team found that tellers and other staff members had to spend between 30% and 50% of their time in meetings and training sessions related to the experiments. Branches often had to bring in temporary workers to take up the slack, a practice that sometimes introduced problems of its own. Although employees spent less time in training and meetings as they became used to the program, they continued to feel added time pressure throughout the experiments. Any company pursuing a similar experimentation initiative will need to plan staffing requirements and work schedules carefully.

In addition to the new demands on their time, Bank of America employees also faced different and sometimes conflicting incentives, which raised some hard questions about compensation. Sales associates at the branches traditionally earned between 30% and 50% of their total pay from performance bonuses tied to a point system. An associate would earn points for meeting various sales quotas, and the number of points would vary according to, among other things, the products sold, the branch's customer-satisfaction levels, and local market demographics.

For the first several months of the I&D program, the test branches maintained the conventional incentive scheme. At first, sales associates seemed to relish the additional activities; their involvement in the program made them feel "special" (as a number of them put it), and they made extra efforts to get the experiments up and running.

But the time pressures inevitably took their toll on the associates. They soon realized that all the time they had to dedicate to meetings and training reduced their opportunities to earn bonus points. In a number of branches, moreover, associates had to take turns greeting customers as the host, an activity that, again, offered no chances to earn points. Because their monthly sales quotas hadn't changed, some associates became frustrated with the new arrangement. When acting as the host, for example, some associates would leave their post to help a client open an account (and to gain the associated bonus points) rather than refer the client to another specialist as the experiment's design dictated. The desire to earn points conflicted with the desire to participate in the experiments.

To address this unanticipated effect of the program, senior management abandoned the traditional bonus system in the test branches in January 2001, switching all associates to fixed incentives based on team performance. Most associates welcomed the change, which amplified their feeling of being special while also underscoring top management's commitment to the experimentation process. But, again, not all staff members thrived under the new scheme. Without the lure of points, some associates lost their motivation to sell. Resentment from bank personnel outside the innovation market also intensified as a result of the special compensation program. One bank executive pointed out that "those in the I&D branches now thought they didn't have to chin to the same level as others." Doubts about the broader applicability of test-market findings also grew. As Allen Jones, a regional executive, said, "If a test is successful only under fixed-incentive schemes, then we can't roll it out elsewhere."

In response to the problems, senior management switched the staff back to the old point-based incentive system after just six months. Not surprisingly, tensions between earning bonus points and assisting in experiments quickly returned. Further, the about-face disheartened some staffers, leading them to question management's commitment to the program. The bank's difficulties over determining incentive pay underscore the problems inherent in having employees participate in a corporate innovation initiative while also pursuing their everyday jobs: This is an unavoidable consequence of real-time experimentation. In the long run, the best solution will likely turn out to be a hybrid bonus system, one that includes both individual sales commissions and a fixed, team-based component. But every company will have to go through its own testing period to arrive at the balance that is right for its people and that doesn't undermine the fidelity of its experiments. It's important to note, also, that staff turnover in the test branches dropped considerably during the program. The difficulties employees faced paled in comparison to the enthusiasm they felt about participating in the effort.

Another challenge Bank of America struggled with was managing its capacity for experimentation. Any laboratory has limits to the number of experiments it can carry out at any time, and if that capacity isn't carefully planned for, feedback slows and learning diminishes. The bank had to manage capacity both in individual branches (one branch sometimes had as many as 15 active experiments) and across all the branches undertaking experiments. If the capacity of the entire market was not well managed, too many experiments would have to be performed at a single branch, increasing the amount of noise surrounding each one. And if the team were to run out of capacity, it would be forced to do tests sequentially rather than simultaneously, which would delay the process.

Conducting the experiments in a commercial setting added another wrinkle to capacity management. If customers loved a new service, branch managers would naturally demand that it be continued beyond the 90-day trial period. This being the real world, after all, the branches could not simply pull the plug on something customers had grown to relish. But that made it more difficult to start new experiments; the noise from the continuing experiments would begin to obscure the results of the new ones. Scott Arcure, who led the I&D Team's measurement efforts, admitted, "We often worry about changing too many chemicals in the mix and wonder about which one made it explode. As bankers, we're not experts at this type of measurement." The team ultimately decided to bring in a statistics expert to help it sort out the effects of multiple variables.
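The measurement problem Arcure describes, separating the effect of each concurrent experiment from the noise of the others, is in essence a multiple-regression problem. The article does not describe the statistician's actual methods, but a minimal, hypothetical sketch of the idea looks like this: treat each branch-week as an observation, record which experiments were active, and fit a linear model so that overlapping experiments stop confounding one another. The data below are simulated for illustration.

```python
# Hypothetical sketch: disentangling the effects of two overlapping
# branch experiments with ordinary least squares. Each row is one
# simulated branch-week; indicator columns mark active experiments.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # branch-week observations

# 0/1 indicators: was experiment A (or B) running in that branch-week?
exp_a = rng.integers(0, 2, n)
exp_b = rng.integers(0, 2, n)

# Simulated customer-satisfaction scores: baseline 70, experiment A
# adds ~3 points, experiment B adds ~1 point, plus measurement noise.
scores = 70 + 3.0 * exp_a + 1.0 * exp_b + rng.normal(0, 2, n)

# Fit scores = b0 + b1*exp_a + b2*exp_b by least squares; the fitted
# coefficients recover each experiment's effect despite the overlap.
X = np.column_stack([np.ones(n), exp_a, exp_b])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)

print(f"baseline ~{coef[0]:.1f}, effect of A ~{coef[1]:.1f}, "
      f"effect of B ~{coef[2]:.1f}")
```

The same logic explains why capacity matters: the more experiments run simultaneously at a branch, the more observations are needed before the model can attribute a change in scores to any one of them with confidence.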

Finally, there was the issue of failure. In any program of experimentation, the greatest learning comes from the most radical experiments, which also have the highest likelihood of failure. In a laboratory, radical experiments are routinely undertaken with the expectation that they'll fail but still produce extraordinarily valuable insights, often opening up entirely new frontiers for investigation. In the real world, however, there are inevitably pressures to avoid failures, particularly dramatic ones. First, there's the fear you'll alienate customers. Second, there's the fear you'll damage the bottom line, or at least shrink your own source of funding. Finally, there's the fear you'll alienate top managers, leading them to pull the plug on your program.

The I&D Team felt all these pressures. Although it knew the value of radical experiments, it also knew that extravagant failures in a live setting could put its entire effort at risk. As a result, the team tended to take an incrementalist approach, often pursuing experiments that simply validated ideas that were likely to succeed. Although the team's original plan called for a 30% failure rate, the actual rate in the first year was just 10%. The team turned out to be much less adventurous than it had hoped to be. As Warren Butler, a bank executive and team leader, explained, "We're trying to sell ourselves to the bank. If we have too many failures, we just won't be accepted. Currently, we may have failure within concepts, but not failure of the concepts." In the tradition-bound and risk-averse world of banking, it's no surprise that the team would end up erring on the side of caution. And the conservatism of some of the experiments should not obscure the radical nature of the overall program. Any service company designing an experimentation program will have to carefully weigh the risks involved against the learning that may be generated.

For hundreds, if not thousands, of years, systematic experimentation has been at the heart of all innovation. Advances in products, the tools that make them, and the value they create for both producers and consumers have emerged through carefully designed and executed experiments. Similar advances in services have lagged because, as we have seen, experimentation in a live setting entails particular challenges. But however daunting they may initially seem, the challenges can be met, as Bank of America's effort suggests. In just a few years, the bank has achieved important benefits that are having a real impact on its business and its future. By applying the lessons from the bank's experience to their own businesses, other service companies will be able to make the transition from haphazard to systematic approaches to innovation, from guesswork to true R&D.

Reprint R0304E; HBR OnPoint 3426. To order, see page 125.

