
Online foresight platforms: Evidence for their impact on scenario planning & strategic foresight

Noah Raford
Department of Science, Technology, Engineering and Public Policy (STEaPP), University College London, Gower Street, London, WC1E 6BT, UK

Article history: Received 20 February 2013; Received in revised form 9 March 2014; Accepted 12 March 2014; Available online xxxx

Abstract

Developments in social media, Web 2.0 and crowdsourcing have enabled new forms of methodological innovation in both the social and natural sciences. To date, relatively little attention has been given to how these approaches impact scenario planning and strategic foresight, especially in public projects designed to engage multiple stakeholders. This article explores the role that online approaches may play in qualitative scenario planning, using data from five empirical case studies. Two categories of measures were used to compare results between cases: participation characteristics, such as the number and type of participants involved, and interaction characteristics, such as the number of variables and opinions incorporated, the mechanisms of analysis, etc. The systems studied were found to have substantial impact on the early stages of the scenario process, in particular: increased participation in terms of both amount and diversity, increased volume and speed of data collected and analyzed, increased transparency around driver selection and analysis, and decreased overall cost of project administration. These results are discussed in the context of emerging issues and opportunities for scenario planning, particularly for public scenario projects, and how such tools and platforms might change scenario practice over time.

© 2014 Elsevier Inc. All rights reserved.

Keywords: Scenario planning; Strategic foresight; Crowdsourcing; Online participation; Public policy


1. Introduction: the rise of the Social Web as an engine for methodological innovation

Developments in social media, Web 2.0 and crowdsourcing (herein described as "large-scale collective intelligence systems") have enabled new forms of methodological innovation in both the social and natural sciences. Examples such as "FoldIt", the protein folding game, or the DARPA red balloon challenge, illustrate how large numbers of diverse participants can tackle complex problems in a coherent fashion. To date, most scenario planning methods rely on a handful of expert interviews and a small number of in-person workshops to produce results. Web-based participatory systems, by contrast, offer the possibility of engaging hundreds, thousands, or even more stakeholders, interest groups and geographies. What impact might this have on scenario planning practice and method?

O'Reilly [1] defines Web 2.0 as a way of "harnessing collective intelligence" by providing "architectures of participation" that embrace experimental "perpetual beta" applications in a way that provides for easy experimentation and collaboration between diverse communities. Anderson [2] later expanded upon this definition, adding that Web 2.0 approaches must include:

• Individual production of user-generated content, including amateur contributions
• "Folksonomic" tagging, i.e., user-signification of data, shared with a community [3]
• Data aggregation and social filtering
• Participation and openness in terms of data, APIs and intellectual property

Technological Forecasting & Social Change xxx (2014) xxx–xxx

⁎ Tel.: +971 50 857 6316. E-mail address: [email protected].

TFS-17980; No of Pages 12

http://dx.doi.org/10.1016/j.techfore.2014.03.008
0040-1625/© 2014 Elsevier Inc. All rights reserved.

Contents lists available at ScienceDirect

Technological Forecasting & Social Change

Please cite this article as: N. Raford, Online foresight platforms: Evidence for their impact on scenario planning & strategic foresight, Technol. Forecast. Soc. Change (2014), http://dx.doi.org/10.1016/j.techfore.2014.03.008


Cast in this light, Web 2.0 approaches allow skilled experts to create easily accessible frameworks for collaboration, which the general public can then populate with their own content and analysis. This approach is typified by services such as Facebook and user-generated "mash-ups", which combine data from different sources to provide unique services of interest to specific communities.

Within the Web 2.0 umbrella, a range of different approaches have emerged which may have more utility for academics and practitioners. These include crowdsourcing, social computing, human computation and collective intelligence. Crowdsourcing is often defined as a subset of activities and systems within the broader ecosystem of Web 2.0 services. Jeff Howe, the originator of the term crowdsourcing, is explicit about his definition. Howe [4] writes,

Crowdsourcing is the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call… The crucial prerequisite is the use of the open call format and the large network of potential laborers. (Howe, 2006)

This definition emphasizes the distribution of discrete elements of labor to a large group of people outside a traditional organization (thus the etymological connection to the phrase, "outsourcing"). Services such as Innocentive, which helps organizations post challenges and reward those who offer promising solutions, or Amazon Mechanical Turk, which breaks complex tasks into discrete steps for distribution and completion by a crowd, are examples of such an outsourcing approach. Wikipedia is another popular example, where distributed contributors add, edit and debate content to create an emergent product.

Web-based approaches are often praised for their ability to accomplish things which face-to-face groups cannot. In the context of public policy and participatory governance, for example, Brabham [5] suggests that enhanced "speed, reach, asynchrony, anonymity, interactivity and the ability to carry every other form of mediated content" enables planners to engage people in ways never before possible. Schenk and Guittard [6] add that such approaches have the potential to produce better analytical outcomes as well, leveraging positive network externalities, enhancing participation and creating greater stakeholder buy-in.

1.1. Applications to Scenario Planning and Public Policy

Relatively little scholarly work has been devoted to exploring how these systems might impact the scenario planning process, either positively or negatively. This is complicated by a lack of empirical evidence for evaluating the impact of scenario planning itself.

Historically, scenario planning developed as a facilitated process for overcoming individual and group decision making biases in the face of long term uncertainty [7–9]. It has since evolved into a range of diverse approaches for helping managers and policy-makers understand change in their respective fields [9–11]. Unlike forecasting or quantitative trend analysis, which attempt to reduce uncertainty and project estimates of future outcomes, scenario planning attempts to uncover and exploit

uncertainties within the strategic environment as a tool for learning and awareness-building. Its goal is to expand the range of parameters taken into account, thereby helping participants better understand their assumptions about the future and test these against a range of possible outcomes.

Despite widespread application of qualitative scenario techniques, there is a growing body of methodological criticism about how it is most often practiced [12]. Leaving aside the cultural and social critiques of how the process is often used (such as the extensive work of Slaughter [13,14] or Inayatullah [15,16]), there are at least three methodological limitations which warrant consideration.

First, the process is labor-intensive, involving significant investment in background interviews, data collection, face-to-face discussion, and group workshops. This creates a limit on the number of people who can participate in, and benefit from, the process. Next, it commonly involves a predominance of senior decision-makers and subject matter experts, many of whom exhibit conscious or unconscious biases towards vested interests or the status quo. By reducing the range of sources considered and relying upon the input of established figures and subject experts, important perspectives and information sources can be excluded [17].
Finally, scenario planning is highly dependent upon the skill and experience of the workshop facilitators and scenario writers. Different futures consultants working with the same group may produce very different outcomes, a fact which makes the process highly idiosyncratic [18,19].

The combination of participation limits, participant bias, facilitator bias, and author subjectivity can cause important viewpoints to be missed, important data or trends to be ignored, or unpopular and unpleasant futures to be dropped. More importantly, the very nature of a workshop-based process may limit the scalability of such an approach as an economical, robust and large-scale tool for increasing strategic flexibility and stakeholder involvement. This is particularly important in public scenario projects, such as those run by governments, foundations or other multi-stakeholder organizations, where socialization to the issues and buy-in is often an essential aspect of the project design. Finally, the focus on small-group, business-environment decision-making suggests that elements of the process may need to be adapted for public policy settings, in which more participants need to be involved, the goals of the exercise are often contested and the outcomes must communicate to a wide variety of interests and values. These challenges are compounded by the relative lack of formal evaluation studies on the effects and outcomes of scenario planning.
Some of the better recent research in this area has been conducted by Ringland [20], Bezold [21] and Burt [22], as well as the ongoing work of Chermack and colleagues in developing preliminary survey instruments for the perceived impact of scenario projects on participants' "mental models" [23–25].

If policy makers and scenario practitioners are to take the claims of digital participation more seriously, it is necessary to create more robust evidence for the value and impact of digital tools in scenario creation and, ultimately, better decision-making. This paper makes a small step towards that larger goal by asking what impact web-based approaches have on:

185• The number and type of participants involved in the scenario186planning process



• The geographic scope of participation enabled
• The range of expert professional disciplines consulted
• The number of variables and opinions incorporated
• The mechanism of analysis, ranking and clustering
• The time spent on data collection and analysis
• The amount of user debate and reflection

This paper stops short of asking what value these changes may have in the scenario planning process, nor does it make claims about their impact on scenario planning outcomes (such as learning, attitudinal change, changes in subjective probability of perceived events, etc.). Such claims require further research and exploration beyond the scope of this paper. It does, however, present empirical evidence that may be useful for more formal efforts to test these relationships more rigorously. It is therefore a small but valuable contribution to the larger scholarly effort of building understanding of the role that Foresight Support Systems (FSS) may play in scenario planning and foresight, particularly in the public domain.

2. Methodology

2.1. Research design

A mixed method approach was employed to investigate the research questions. Specifically, two novel online platforms were developed and deployed as case studies. A variety of quantitative and qualitative data were then collected. Three additional cases conducted by others were then evaluated using the same criteria. These cases (five in total) were then compared to each other and to a base case: a representative face-to-face scenario planning process typical of those used in public policy settings and other multi-stakeholder environments. In-depth qualitative interviews were also used to add context and aid interpretation and comparison of these results.

An ideal research design would allow for the specification of dependent and independent variables based on a specific hypothesis derived from well understood theory. This proved challenging for such a novel topic for three reasons: 1) the relevant categories and variables for measurement were unknown in advance; 2) there was little empirical evidence for, or agreement on, the key outcome variables of scenario planning; and 3) there were no standard measurement instruments or protocols that could be applied in their testing (the recent work of Chermack et al. [23–25] had not yet been published during the time of this research).

Yin [26] suggests that "exploratory case studies" are useful in situations such as this, where a field is still developing or the relevant variables are either contested, undefined and/or unavailable for traditional experimental design. Answering these fundamental questions was beyond the scope of this work, so a descriptive, exploratory approach was taken in order to focus directly on measures of participation, process and engagement. Cases were selected using a theoretical sampling approach [27].
Although a more formal experimental design with controls and a representative sample would have been preferable, such an approach was infeasible at the time of the research. This decision and its implications for the interpretation of the findings of this research are discussed in more detail in Section 2.4, Limitations, below.

2.2. Cases

2.2.1. Base case: the future of a Northern Spanish region

The base case selected for comparison was a face-to-face scenario planning exercise conducted for a regional urban planning think-tank in the north of Spain. This project was conducted on behalf of the regional government by a widely recognized scenario planning consultancy using an industry-recognized scenario development approach. The method employed was a standard qualitative scenario generation process, typified by Schwartz's [16] eight-step process generating a deductive two-by-two scenario matrix. A total of 15 experts were interviewed and a total of 20 stakeholders participated in the scenario creation workshop (out of approximately 40 who were invited). The workshop took place over two days at a central location. From start to finish, the entire eight-step project took approximately 12 weeks to complete. The case was selected through the recommendation of several scenario planning academics interviewed at the start of the research, who felt it to be representative. The limitations of using this as a base case are described in detail in Section 2.4, Limitations.
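The deductive step underlying this method can be sketched in a few lines: crossing the poles of two critical uncertainties yields the four quadrants of a two-by-two scenario matrix. The sketch below is illustrative only; the uncertainty names and poles are invented for the example, not taken from the Spanish case.

```python
from itertools import product

def scenario_matrix(uncertainty_a, uncertainty_b):
    """Cross the two poles of each critical uncertainty into four scenario skeletons.

    Each argument is a (name, (pole_1, pole_2)) tuple.
    """
    name_a, poles_a = uncertainty_a
    name_b, poles_b = uncertainty_b
    return [{name_a: pa, name_b: pb} for pa, pb in product(poles_a, poles_b)]

# Invented example uncertainties, purely for illustration:
quadrants = scenario_matrix(
    ("economic growth", ("stagnant", "booming")),
    ("governance", ("centralised", "devolved")),
)
print(len(quadrants))  # 4 scenario skeletons, one per quadrant
```

Each of the four dictionaries then serves as the logical skeleton that facilitators and writers flesh out into a full scenario narrative.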

2.2.2. Case 1: Futurescaper, the impact of climate change impacts on the UK government

This case used a bespoke online data generation platform developed as part of a project with Tony Hodgson and the International Futures Forum (IFF). The purpose of this project was to identify the systemic linkages between climate change impacts in other parts of the world and the secondary and tertiary impacts on the United Kingdom. An online data collection platform, dubbed "Futurescaper", was created and deployed to explore the role of such platforms during the early and middle stages of the scenario creation process.

The system used a structured, form-based approach to the collection of trends and drivers that might affect the future of the research topic. It stored these trends in an online database and provided basic analytical tools to aid in their analysis. The system was designed specifically to address the task of generating trends and drivers, exploring their interactions, ranking them, clustering them into high-level themes, and then assembling them into analytically useful visualizations (including network maps, systems diagrams, influence charts and cross-impact analysis). The system was built using a standard PHP/HTML front-end and a MySQL backend for database storage, with export options to a third-party, Flash-based network visualization engine based on the Mapquation platform [28].

An expert panel of 12 members and an analytic team of 4 analysts used the system to identify 186 representative scientific articles, trends, news clippings and sources. These were then uploaded into the system for analysis and clustering. Analysts could browse this data, add new trends and drivers, explore how they interact, and download them for subsequent visualization. The system was not designed to address the latter stages of scenario creation, including scenario logic creation, detailing, or narrativization.
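A driver-centric design of this kind lends itself to a simple record-plus-links data model. The following is a hedged sketch of that general idea, not Futurescaper's actual schema: the `Driver` class, its fields, the example driver names and the influence strengths are all invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Driver:
    """A discrete trend/driver record with analyst-rated influence links."""
    name: str
    theme: str
    influences: dict = field(default_factory=dict)  # target name -> strength

def cross_impact_matrix(drivers):
    """Export the pairwise influence links as a cross-impact matrix (rows influence columns)."""
    names = sorted(drivers)
    return [[drivers[row].influences.get(col, 0) for col in names] for row in names]

# Invented example records, purely for illustration:
drivers = {
    "arctic ice loss": Driver("arctic ice loss", "climate"),
    "global food prices": Driver("global food prices", "economy"),
}
drivers["arctic ice loss"].influences["global food prices"] = 2  # analyst-rated link

print(cross_impact_matrix(drivers))  # [[0, 2], [0, 0]]
```

Storing drivers as discrete, linked records is what makes the later steps possible: the same data can be re-sorted into rankings, clustered into themes, or exported as a network edge list for visualization.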

2.2.3. Case 2: SenseMaker scenarios, the impact of financial uncertainty on government public services

The second online case adapted an existing commercial software platform to build upon lessons from the first case.


This system sought to address several issues raised by the Futurescaper case; notably, a desire to involve a greater number of participants, to explore new formats of data collection, and to improve the user interface to facilitate collective analysis. This case was developed with Dave Snowden and Dr. Wendy Schultz and deployed in an online engagement for the 2010 International Risk Assessment and Horizon Scanning Conference for the Government of Singapore.

The SenseMaker platform was adapted to collect free-form textual data on drivers and uncertainties in the form of stories, anecdotes, or narratives about the topic or theme. Respondents were asked to relate a story about the subject that would shed light on the topic. A total of 265 participants from around the world contributed mini-scenarios, narratives, anecdotes and opinions as part of this case. These were clustered using a structured evaluation process, summarized by the researchers into three representative scenarios and presented online.

This platform differed significantly from Case 1 in that it did not explicitly capture drivers as discrete objects for subsequent combination. It also did not employ any algorithmic sorting or clustering mechanisms. Instead, it relied on user input to code stories against quantitative extremes, then sorted and ranked stories that were most representative of key dimensions of each scenario archetype. This subset of narratives was subsequently used to build scenario logics directly in an inductive fashion. This approach was based upon the Manoa "scenario archetype method" pioneered by Jim Dator at the University of Hawaii and later developed by Wendy Schultz and colleagues [29,30], which theorizes that all scenario stories can be categorized by a handful of archetypical examples following similar narrative structures and outlines.
According to this approach, the details often vary but the overall significance of each archetype remains constant. Examples include story structures such as "the hero's quest", "decline", "collapse", "continued growth", etc. While this approach differed from the specifics of the classic deductive method employed in the Base Case (using two critical uncertainties to generate a scenario matrix composed of four distinct scenarios), it nonetheless followed the same overarching eight-step process of data collection, analysis, synthesis and narrativization, and was therefore judged to be comparable. See Curry and Schultz [18] for a more detailed comparison of the similarities and differences between such approaches.
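The coding-and-ranking step used here can be sketched as follows. This is a hedged illustration of the general idea rather than the SenseMaker implementation: the dimension name, the 0-to-1 coding scale, and the example stories are all assumptions made for the sketch.

```python
def most_representative(stories, dimension, pole, n=2):
    """Rank coded stories by proximity to one pole (0.0 or 1.0) of a coded dimension."""
    return sorted(stories, key=lambda s: abs(s[dimension] - pole))[:n]

# Invented, pre-coded stories; in practice the dimension scores
# would come from participants coding their own contributions:
stories = [
    {"text": "Budgets collapse overnight", "growth_vs_collapse": 0.9},
    {"text": "Services digitise and thrive", "growth_vs_collapse": 0.1},
    {"text": "A slow, managed decline", "growth_vs_collapse": 0.6},
]

# Stories nearest the "collapse" pole become seeds for that archetype:
seeds = most_representative(stories, "growth_vs_collapse", pole=1.0)
print([s["text"] for s in seeds])
```

The point of the sketch is that no algorithmic clustering is needed: once participants have coded their own stories, selecting archetype seeds reduces to sorting by distance from a pole.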

2.2.4. Case 3: The Institute for the Future's Foresight Engine

The third case examined the Foresight Engine, an interactive gaming platform developed by the Palo Alto-based technology forecasting non-profit, The Institute for the Future (IFTF). The Foresight Engine uses a card-game-like interface, in which thousands of players submit ideas about the future of a subject during a specific engagement period. These stimulate conversation and responses amongst "players", who are rewarded for their interaction through points and a ranking against other players. The example chosen for the research comparison was an engagement exploring the future of the United States utility network, entitled "Smart Grid 2025". The event, sponsored by the Institute of Electrical and Electronics Engineers (IEEE), engaged almost 700 participants from 81 different countries over a 24-hour period, generating nearly

5,000 submissions and interactions. Aside from participants, over 26,000 people viewed the project website and associated content.

2.2.5. Case 4: The Wikistrat collaborative geo-strategy forecasting platform

The fourth case examined an online geo-strategy platform called Wikistrat. The platform operates as a for-profit strategy consultancy, using a distributed network of analysts and subject matter experts who contribute piecework or competition-based analysis in a crowdsourced format. Compared to the Foresight Engine, Wikistrat uses a fairly simple Content Management System (CMS)/wiki platform. It supports a complex community of experts who participate over time for both recognition and financial reward. The example chosen for this research was Wikistrat's "International Grand Strategy Competition", a four-week invitation-only engagement exploring geopolitical scenarios around the world, for a cash prize of $10,000. This event engaged approximately 30 teams from universities in 13 countries who produced an average of 7,000–8,000 words of content per week on a range of subjects.

2.2.6. Case 5: OpenForesight's future of Facebook project

The fifth and final case explored the future of the social media platform Facebook through an "open foresight" process led by popular blogger Venessa Miemis. This project used existing free services such as Facebook, Twitter, YouTube, Quora and Kickstarter to conduct an "open source" scenario planning exercise. The process began with a video on Kickstarter (the crowdfunding platform) to generate interest and funds to execute the project. This announcement was promoted via Facebook, Twitter, blogs and emails and received significant social media coverage. The second phase engaged approximately 25 thinkers in the field through in-depth video interviews over Skype. The administrators also created a Quora page, an interactive, user-driven question and answer site, with which users posed and responded to various questions raised by the interviews. The results were presented back to the open community of users in the form of several blog posts and videos, resulting in a series of scenarios describing several possible futures for the Facebook platform. In addition to the 25 experts interviewed via video (which received over 17,000 views on YouTube), the project received 109 responses from over 220 subscribers to the Quora page and extensive interaction on Facebook from over 50 users.

2.3. Data

A range of data was collected from these cases. Data for each case were reviewed relative to each stage of the scenario planning process and the appropriate research question. Where quantitative data were available, numeric comparisons were made to demonstrate difference. Where no quantitative data were available, qualitative data from discussions with participants and external experts were used. This included the use of all relevant quotations, coded themes and subsequent textual analysis.

Data from a variety of activities were divided into two categories: data on participation characteristics and data on interaction characteristics. The first described measures


associated with levels and type of participation in various stages of the scenario process. The second described measures associated with the kinds of actions and activities undertaken by those participating in the process. Table 1 presents the activities that were measured, using Schwartz's standard 8-step model [10]. A full list of the data categories collected during this process is also displayed in Table 2, along with their availability and source.

Data for all categories were not always available for direct measurement across all cases. In this situation, data were either estimated or not included in the analysis, as indicated in Table 2.

In addition to structured data collection in the categories listed above, over 45 semi-structured interviews were conducted with experts in both scenario planning and online participation, of which 30 were substantially transcribed (notes were taken on the remainder but not transcribed verbatim). These interviews served two purposes: first, to solicit input on the platform design and measurement protocol used for this research and, second, to help interpret the meaning and context of the data generated from them and the comparative examples. The interviews were conducted over a period of six months with the aim of generating a range of themes about how online participatory collective intelligence systems may work, as well as a series of methodological insights related to their study. These themes informed Section 4: Discussion and particularly Section 5: Conclusion, which include representative quotes throughout.
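The two measure categories lend themselves to a simple comparative structure, sketched below. All names and numeric values here are invented placeholders illustrating the shape of such a comparison, not figures from Tables 1 and 2.

```python
def compare(base, case, category):
    """Ratio of each measure in `case` relative to `base` for one measure category."""
    return {k: case[category][k] / base[category][k] for k in base[category]}

# Placeholder values, purely illustrative of the data structure:
base_case = {
    "participation": {"participants": 20, "countries": 1},
    "interaction": {"inputs_collected": 40, "weeks_elapsed": 12},
}
online_case = {
    "participation": {"participants": 265, "countries": 30},
    "interaction": {"inputs_collected": 265, "weeks_elapsed": 4},
}

print(compare(base_case, online_case, "participation"))
# {'participants': 13.25, 'countries': 30.0}
```

Keeping participation and interaction measures in separate categories makes it straightforward to report each case against the base case on like-for-like terms, estimating or omitting a measure where no data were available.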

2.4. Limitations

An exploratory mixed method approach departs from traditional experimental design in several ways. An idealized experiment allows for the isolation of key outcome variables, manipulation of specific independent variables through a controlled set of randomized or semi-random tests, and subsequent measurement of their impact on dependent variables through standardized measurement techniques and instruments. This includes adequate control for error, variance and exogenous factors, thereby providing evidence of whether such approaches are "better" or "worse".

Additional limitations were that, due to the novelty of each online platform, various design factors known to influence user behavior and participation in online environments could not be tested or controlled. These include factors such as visual look and feel, language and jargon issues, usability concerns and other similar factors. The same variability may apply to the Base Case, which, although recognized as a standard example of best-practice scenario generation, may also have suffered from its own idiosyncrasies for a variety of reasons.
It was also not possible to control for the nature of participation in each system. Case 1 comprised a small group, while the others (Cases 2 through 5) were open-invitation professional communities and the general public. It was therefore not possible to use representative sampling from controlled populations, an approach which would have allowed for stricter comparison and potentially more useful insight.
These limitations suggest that the findings below should be regarded as descriptive only. While they may serve as the basis for more formal evaluation at a later date, they do not support causal claims about the nature of the systems evaluated, the design of the process used, their influence on the variables measured, or their impact on the scenario planning process itself (vis-à-vis "better" or "worse" outcomes). It is therefore necessary to consider these findings as a limited (but necessary) step towards the larger goal of more rigorous hypothesis testing and evaluation.

3. Findings

These cases generated a range of findings grouped into four main themes, below. These include an increase in the volume and diversity of participation observed, an increase in the amount and speed of data generated, changes in the mechanisms of driver clustering and analysis, and evidence for participant socialization and interaction.

3.1. Participation

Data generated through these case studies revealed a significant increase in participation along three key dimensions:

Table 1. Tasks performed and measured.

Driver identification: The process of scanning for trends, drivers and uncertainties that may influence the focal question of the research. This includes entry of these drivers into the system.
Driver exploration: The process of exploring, reading, sorting through and making sense of drivers. Essentially a pedagogical engagement exercise, sometimes performed through video or web resources.
Driver ranking & selection: The process of ranking and selecting key drivers, from a larger list, from which to build scenarios.
Driver clustering & aggregation: After exploring interactions between drivers, this phase involves synthesizing lower-level trends into higher-level themes and issues, both for communication and for scenario building.
Scenario logic creation: The process of creating draft logical frameworks and causal arguments for later refinement and filtration.
Scenario logic selection: The act of choosing from competing alternative scenario logics to define the driver characteristics and uncertainties that drive final scenario creation.
Scenario logic detailing: An intermediate step involving the fleshing out of basic plot elements, story arcs, characters, actors and events in a scenario logic framework (but before writing up as a narrative).
Implications development: Preliminary exploration of implications, including a high-level review of winners and losers, impacts on policy, etc.
Implications detailing: Fleshing out these implications in significantly more detail for the purposes of scenario narrative writing and working-group pedagogy.
Full scenario narrative creation: The process (usually consultant-led) of converting the aggregated drivers of change, the final scenario logic and draft implications into full, text-based stories.

N. Raford / Technological Forecasting & Social Change xxx (2014) xxx–xxx

Please cite this article as: N. Raford, Online foresight platforms: Evidence for their impact on scenario planning & strategicforesight, Technol. Forecast. Soc. Change (2014), http://dx.doi.org/10.1016/j.techfore.2014.03.008


1) numerically, in terms of the absolute number of participants involved; 2) geographically, in terms of the distribution of participants; and 3) professionally, in terms of the range of disciplines and expertise able to be involved. Each is discussed in detail below.
From a purely numerical standpoint, Case 2, which used a modified form of the SenseMaker Suite, provided the most obvious evidence for increased participation, involving over 265 participants in the scenario creation process. Case 3, the Institute for the Future's SmartGrid 2025, had nearly 700 registered participants, of which 166 participated substantially. Discussions on the Quora online question-and-answer platform for the Future of Facebook project (Case 4) solicited over 100 responses, with nearly twice as many registered users following the conversation as observers. Finally, over 30 teams participated in the Wikistrat Grand Strategy competition (Case 5). Compared to the Base Case, in which a total of 35 participants were involved from beginning to end, it is clear that such systems are capable of facilitating a significantly larger participating population in the scenario planning process than face-to-face means alone.
The same appears true from a geographic standpoint. All cases indicated the ability to focus diverse participants from around the world on scenario creation tasks. The IFTF's Foresight Engine involved users from 82 different countries, for example. Case 2 involved participants from the Americas, Europe and Asia. Wikistrat's Grand Strategy competition

involved teams from over 30 countries. While it is clear that global or even national contribution may not be appropriate for every project, the evidence presented here demonstrates that such online systems have the ability to convene participants from a much larger geographic area than is possible in the traditional process.
A similar pattern was found from a professional diversity standpoint. While participation in these cases did not bridge the "digital divide" in its totality (by involving a representative sample of a given population, for example), the systems were able to successfully convene a wide range of subject matter experts and professional disciplines in almost every case. Case 1 drew explicitly from published literature spanning over 35 different academic disciplines and peer-reviewed research communities. Over 70% of participants in Case 2 had post-graduate education, and nearly 65% classified themselves as either "expert" or as having "significant professional experience" in the subject. The Wikistrat Grand Strategy competition also brought together more in-depth expertise in foreign policy, regional history, economics, military affairs and sociology.
To a lesser degree, and with the exception of Case 1 and the Wikistrat example, each system also involved members of the general public. Anecdotal evidence suggests that these participants also had a deep subject-matter interest or local expertise in the topic at hand, without which they would probably not have chosen to participate. While none of the cases and examples achieved true demographic or statistical

Table 2. Data categories and availability, by source.
Columns: Base Case | Case 1: Futurescaper | Case 2: SenseMaker | Case 3: IFTF | Case 4: Wikistrat | Case 5: Open Foresight. (M = Measured, E = Estimated, N = None, N/A = Not applicable.)

Participation characteristics
  Degree of public openness (including promotion & recruitment efforts): M | M | M | M | E | M
  Amount of preparation required: M | M | M | M | M | M
  The number of participants involved: M | M | M | M | M | M
  Reasons for participation: M | M | M | E | E | E
  Degree of user anonymity: M | M | M | M | M | M
  Type of participants involved
    Level of education: E | E | M | N | E | N
    Professional experience: E | E | M | N | E | N
    Professional discipline: M | E | M | M | E | N
    Age: E | E | M | N | E | N
    Geographic origin: M | E | M | M | E | N

Interaction characteristics
  Tasks performed
    Driver identification: M | M | M | M | M | M
    Driver exploration: M | M | M | M | M | M
    Driver ranking & selection: M | M | M | M | M | M
    Driver clustering & aggregation: M | M | M | N/A | M | M
    Scenario logic creation: M | N/A | M | N/A | M | M
    Scenario logic selection: M | N/A | M | N/A | M | M
    Scenario logic detailing: M | N/A | M | N/A | M | M
    Implications development: M | N/A | N/A | M | M | M
    Implications detailing: M | N/A | N/A | N/A | M | M
    Full scenario narrative creation: M | N/A | M | N/A | M | M
  Types of input considered: M | M | M | M | M | M
  Amount and type of visualization tools used: M | M | M | N/A | N/A | N/A
  Amount and type of analytical tools used: M | M | M | M | M | M
  Amount of socialization enabled: E | M | M | M | M | M
  Amount and kinds of feedback provided: M | M | M | M | M | M



representativity of a given community (which was not their aim), they do appear to be successful at attracting a wide range of professional disciplines and levels of experience.
The combination of increased numerical participation, increased geographic participation, and increased diversity of subject matter expertise helps to address the main concerns raised with the scenario planning process. The following section explores the nature of this participation in more depth, reflecting specifically on the role of participants and the depth of participation in various stages of the process.

3.2. Data generation

While increased participation was clearly demonstrated, what was the nature of this participation? Was it uniform throughout the process? Was it substantive and deep? Did it add value, and if so, where and in what ways?
Looking at the nature of participation in more detail, it is clear that most instances of participation were fairly limited for any given participant. In Case 2, each person contributed an average of a single time, consisting of short essays, stories or opinions about what factors might influence the future, with an estimated involvement of 6–12 minutes. The median number of contributions per user for the IFTF example was six, which varied between original content creation (1.5 median submissions per user) and responses to others' submissions (4.5 median responses per user). Participation was also heavily skewed towards a small group of very active participants in this case: less than 20% of the total users (48 out of 237) contributed over 70% of the content. The same patterns appeared to apply in the OpenForesight example as well.
In almost all cases, participation was focused on the early stages of the scenario process, specifically the driver generation and analysis phases. Case 1, Futurescaper, focused exclusively on driver entry and analysis, building open-ended relationships between drivers, and the creation of emergent systems maps as analytical tools. Case 2 took a different approach, asking users to submit complete stories of the future or stories they thought would influence the future. The IFTF and OpenForesight examples did the same, asking users to both submit and discuss drivers and forces of change in various ways.
All of this activity was focused on building the early-stage data and interpretation necessary for draft scenario creation.
The Wikistrat platform offered an interesting contrast, in that it engaged teams of users throughout the entire process to produce content at every stage. While the final outputs were not narrative scenarios per se, they were geopolitical forecasts of different regions and countries. This focus on "end-to-end" full-text submission is distinct from the other cases and examples reviewed here, but does not detract from the majority emphasis on early stage data generation and analysis.
How did the use of these tools as data generation platforms compare to the Base Case? From a purely numerical standpoint, the Base Case generated 17 major drivers from in-depth interviews, divided into three categories (Political, Economic and Social). The process took between 80 and 90 working hours to conduct and analyze, including the logistics associated with arranging and conducting interviews (but

not counting travel time to and from the client's location or time spent developing the final presentation documents). This amounts to an average of approximately five hours per driver. In the workshop, an additional 90 drivers were also identified, which took approximately 120 minutes to brainstorm and cluster into a final set of three to four "critical uncertainties".
In comparison, Case 1 brought this time down to approximately 15 minutes per variable, while Case 2 brought it down to less than 10 minutes. The IFTF's Foresight Engine generated over 900 drivers in less than 24 hours, which equates to approximately 90 seconds per driver. Although imprecise (the definition of "driver" varied between cases), this data suggests the potential for both a large reduction in the time taken to generate initial drivers and forces of change, as well as an increase in the diversity of drivers considered. This is not to claim that such an approach is better or worse, but instead that it is different from face-to-face workshop generation. This is discussed in more detail in Section 4, Discussion, below.
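The per-driver figures quoted above follow from simple division. The short script below reproduces the arithmetic purely for illustration, taking 85 working hours as the midpoint of the reported 80–90 hour range for the Base Case:

```python
# Illustrative reproduction of the per-driver arithmetic reported above.
# Assumption: 85 working hours is the midpoint of the 80-90 hour range
# reported for the Base Case interview process.
cases = {
    "Base Case (interviews)": (17, 85 * 60),   # (drivers, minutes spent)
    "IFTF Foresight Engine": (900, 24 * 60),   # 900+ drivers in under 24 h
}
for name, (drivers, minutes) in cases.items():
    print(f"{name}: ~{minutes / drivers:.1f} min per driver")
```

The 300 minutes per driver for the Base Case matches the "approximately five hours" figure, and 1.6 minutes for the Foresight Engine corresponds to the roughly 90 seconds quoted above.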

3.3. Methods of clustering and analysis

Each case also demonstrated a different approach to the clustering and synthesis of trends and drivers, which makes them difficult to compare. However, certain observations suggest that the platforms may be successfully used in clustering and ranking data generated in earlier stages.
Because clustering and ranking was done in a single afternoon for the Base Case, comparison along temporal dimensions is less appropriate. However, one of the main criticisms of a workshop-based approach is that the amount of time devoted to exploration of these trends and their interactions is often insufficient. One expert interviewed suggested that, "you often spend all your time in the build-up, just synchronizing vocabulary and ideas. Then the critical discussions about uncertainties and their interaction is jammed into a quick afternoon, when everyone is rushing to get back to their real lives."
The more appropriate dimensions of comparison may therefore be whether the cases offered either: a) greater processing time for the analysis of variables and their interactions, or b) new mechanisms for analysis and exploration that helped to more effectively leverage the existing time available.
Compared on these dimensions, Case 1 fell primarily under the latter category. By allowing users to specify the relationships between drivers with folksonomic subject tags, the platform divided the burden of analyzing complex systemic relationships into a variety of micro-tasks, performed in real time by each user at the point of data entry. Thus, while participants entered a single trend or driver on their own, they also provided the input necessary for the system to automatically cluster topics into a systems map in real time.
This enabled rapid summary and analysis of drivers and trends, vis-à-vis their systemic relationships, in a way that provides greater analytical depth than would be possible in an unaided workshop session.
Case 2 followed a related approach to distributed, user-based analysis. Users coded their stories relative to subject tags and a series of pre-determined archetypal values. These were



then auto-aggregated for easier clustering and synthesis through a variety of means. Although the mechanism of clustering was quite different from Case 1, the same principle of dividing analytical tasks into small units and shifting them to the user was still quite successful. Both of these approaches illustrate ways that online tools can help automate or distribute basic analytical tasks amongst many users, allowing for more complex analysis of their interaction in a shorter period of time.
Cases 3, 4 and 5, however, offered no mechanisms for automated clustering or analysis. In Case 3, the IFTF example, the system allowed for rapid summarization of the most discussed ideas and trends, as well as those which the organizers deemed most interesting or provocative. But this was limited to metrics of user discussion alone and did not offer any additional facility for automated or semi-automated analysis of content and relationships. The same is true for Case 4, Wikistrat. That platform's open, collaborative nature may nonetheless facilitate faster synthesis and analysis amongst analyst teams, given that multiple teams work simultaneously in parallel while being able to see and benefit from one another's work.
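The micro-tasking principle described above can be sketched in a few lines. The following is a minimal illustration only, not the actual Futurescaper or SenseMaker implementation (whose internals are not described here); the driver names and links are hypothetical. Each participant contributes only a single local "A relates to B" link, and clusters emerge automatically from aggregating those links:

```python
from collections import defaultdict

def cluster_drivers(links):
    """Group drivers into emergent themes from user-entered pairs.

    Each (a, b) pair is one participant's micro-contribution stating
    that driver `a` is related to driver `b`; a simple union-find
    merges these local links into global clusters."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in links:
        parent[find(a)] = find(b)

    clusters = defaultdict(list)
    for node in list(parent):
        clusters[find(node)].append(node)
    return sorted(sorted(c) for c in clusters.values())

# Hypothetical contributions from three separate participants.
links = [
    ("ageing population", "healthcare costs"),
    ("healthcare costs", "public debt"),
    ("solar prices", "grid decentralisation"),
]
print(cluster_drivers(links))
# [['ageing population', 'healthcare costs', 'public debt'],
#  ['grid decentralisation', 'solar prices']]
```

No single participant stated that an ageing population relates to public debt, yet the aggregate links place them in the same cluster, which is the essence of shifting analytical work to the point of data entry.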

3.4. Degree of socialization among participants

One of the most marked differences between the Base Case and the online cases was the role and type of socialization involved. Although several of the cases demonstrated enhanced volumes of participation, the Base Case provided clear advantages in the social dimension of the scenario process. Although more people participated in the online cases, participation was almost entirely one-to-many, meaning that individuals sat alone at their computers, interacting with the system or with other users remotely. This was particularly the situation with Cases 1 and 2, whereby participants generated data and insights that a separate analytic team incorporated into their scenario process. The Wikistrat example was the only one in which participants came together as teams, acting as both "users" of the system and as analysts of the data they and others created. The IFTF example did encourage a high degree of interactivity and dialogue between users, however, and allowed for a limited amount of analysis amongst users. That is not to say that the experience of participation in these systems was not emotional, particularly the IFTF's game-like platform and Wikistrat's collaborative competitive platform, but the amount and kind of social engagement was far less than that of the Base Case and other similar exercises.
It is widely believed that one of the goals of the scenario planning process should be building relationships across organizational and stakeholder "silos", thereby helping to build trust and social capital to tackle difficult issues. This, along with the goal of helping to influence decision-makers through visceral, emotional, and creative means, is most often achieved through the social experience of the process itself (if at all).
One UK-based academic noted that, "scenario workshops are most effective when they help people play through their emotional experiences of uncomfortable ideas and new issues. If you can tap into this process online, you will be more likely to achieve your goals."
None of the five cases, with the possible exception of the IFTF's social game, demonstrated the kind of powerful

emotional response sought after in the best scenario workshops. Providing opportunities for, or even requiring, participants to interact with each other and with novel ideas is one way that face-to-face scenario workshops help 'stretch' participants' thinking and comfort levels. The different kind and level of social intensity produced by the online cases explored here suggests that they are not, at present, capable of achieving the emotional and social goals sought after in the best scenario workshops. That said, it should be noted that the Base Case itself failed to produce such intense emotional engagement in its participants, a result that could be either idiosyncratic of the workshop or its facilitation, or indicative of the fact that even face-to-face workshops do not always produce their stated social goals.

3.5. Cost of implementation

Although direct cost comparison between cases is challenging, the face-to-face Base Case involved an expensive, globally recognized consultancy and cost between 250,000 and 300,000 Euros in total. Case 1 was an experimental research platform which cost less than 5,000 GBP to develop, while Case 2 used a modified version of a commercial platform costing between $50,000 and $60,000 USD. The OpenForesight example was launched on a $5,000 USD Kickstarter grant. It is unknown exactly how much time was spent on setting up and facilitating the online platforms. Although these costs are not directly comparable for a variety of reasons, it is clear that online approaches are significantly less expensive than traditional face-to-face scenario methods. Their cost-effectiveness, however, must be measured against their intended results and their ability to deliver them, which is discussed in more detail below.

4. Discussion

4.1. Speed and high volume at the early stage

It is clear that, at the time of this research, the main area of influence of each system is on the early stages of the traditional scenario planning process. Cases 1 and 2 were deemed to have the greatest subjective utility at the early stages of the process, notably in generating key themes, identifying drivers, ranking forces and (to a larger extent in Case 2 than Case 1) helping develop draft scenario logics. A similar pattern of utility was found in IFTF's Foresight Engine and the Future of Facebook examples. By comparison, Wikistrat focused on engaging users primarily in the write-up of more traditional "essay-like" contributions. While this was relevant and applicable in the early stages, it also had more perceived utility in later stages traditionally more conducive to long-form narrative composition.
This focus on early stage driver identification and analysis has several advantages. It increases the likelihood that a wide variety of forces and factors will be included. This is further enhanced when combined with the increased geographic, professional, and numerical participation explored above. Diversity is a critical component of the scenario planning process, and such a wide approach to collecting input from diverse sources appears to meet this goal. Expanding the scope of participation beyond those familiar to the client or



consultant helps increase the probability that diverse viewpoints will be heard, and suggests that a more robust set of drivers will be captured.
This increased participation and diversity also implies that individual and group biases may be less dominant at the early driver exploration stage. One of the phenomena observed in the Base Case (and in many other studies of group interaction) was that a few dominant personalities often had disproportionate influence on the direction and tone of the discussion, potentially biasing scenario outcomes.
This challenge reemerges and becomes more pronounced in the driver clustering and synthesis phase. The Base Case spent less than two hours in total exploring the connections between factors and clustering them into meaningful categories. The initial round of clustering, highly influential in steering the overall process, was done entirely by the facilitators in private while participants were at lunch. Although this clustering was based on the number of votes received for similar drivers, there was nonetheless very little time or inclination to modify the clusters after participants had returned and substantive debate had begun. Furthermore, the debate was characterized by quite detailed argument over specific phrases and their meaning, perhaps converging too rapidly on "local minima or maxima" and thereby missing important discussions and possibilities.
The use of computer-aided or semi-automated clustering techniques in Case 1, and the robust discussion and commenting mechanisms in the IFTF, Future of Facebook and Wikistrat examples, suggest that these platforms offer both more sophisticated tools for analyzing drivers and their interactions, as well as more transparent mechanisms for discussing and deciding which ones to include in later scenario-building stages.
Case 1 allowed for the exploration of second- and third-order effects quite easily, while the other examples allowed for robust discussion of variables and their implications through interactive reflection or transparent debate and dialogue. This is in contrast to the Base Case (and other similar workshops), where a great deal of the discussion was not recorded and the final decision as to which variables to include was made or modified after the workshop by the consultants. Thus the second area where these systems may be of value is in adding transparency and depth to the driver clustering and analysis process.
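As an illustration of what exploring "second- and third-order effects" means computationally, the sketch below walks an influence graph a fixed number of steps outward from a driver. The drivers and links are hypothetical, and this is not Case 1's actual implementation; it simply shows the kind of traversal such a systems map makes trivial, where a workshop would require manual tracing:

```python
def nth_order_effects(graph, driver, depth=2):
    """Drivers reachable from `driver` in exactly `depth` influence steps
    (depth=2 yields second-order effects, depth=3 third-order, etc.)."""
    frontier = {driver}
    for _ in range(depth):
        frontier = {nxt for node in frontier for nxt in graph.get(node, ())}
    return frontier

# Hypothetical driver influence map aggregated from participant entries.
influence = {
    "fuel prices": ["food prices", "transport costs"],
    "food prices": ["urban unrest"],
    "transport costs": ["supply chain regionalization"],
}
print(sorted(nth_order_effects(influence, "fuel prices", depth=2)))
# ['supply chain regionalization', 'urban unrest']
```

Because each participant's entries extend the same shared graph, higher-order consequences of any driver can be surfaced immediately rather than in a rushed workshop afternoon.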

4.2. Impact on the scenario planning process: more participation, better outcomes?

Given the lack of objective metrics for evaluating scenario quality or outcomes, it is difficult to speculate about the impact that these platforms have on the outcome of the scenario process itself. Building on the earlier work of Wack [8], van der Heijden [9], Argyris and Schon [28] and others, Chermack's theory of scenario planning [29] posits that scenarios can help organizations learn about environmental change by increasing their awareness of external forces and factors. This in turn is purported to produce more accurate "mental models", which lead to better decision making and better performance. If we accept this basic theoretical model, what impact might increased participation have on this process?

Many scholars argue that increased participation is necessary to understand and act upon complex environmental changes in dynamic, fast-moving environments [30,31]. They argue that more parties must be consulted, more data and perspectives must be considered, and more differences need to be bridged in order to reach desired outcomes on the key forces and factors affecting an industry, community, or nation's future.
But does participation translate into better scenario outcomes specifically? Without a reliable framework for measuring the quality and impact of the scenario process, it is impossible to tell. When asked this question, however, the experts interviewed disagreed significantly. One experienced practitioner and academic from France asked, "Do you need large groups? No, definitely not. But do they add something? Definitely – especially in the public policy context. If you can figure out a way to involve more people in the process, it might not help the actual process but will certainly improve the acceptance of its results."
Others were more optimistic about the value of increased participation as a way to add content value, not just contextual acceptance. A South African academic and well-respected practitioner observed that, "these workshops often involve the great and the good, but no one even knows the marginal or fringe perspectives that could still be important in the future. Providing a means to involve these players and ideas can only improve the output, if done well." Others explicitly acknowledged that environmental complexity was so challenging and dynamic that the only way to effectively understand the world was through as many diverse perspectives as possible.
The evidence from this research suggests that use of such systems on their own does not produce the stated social and cognitive outcomes of the scenario planning process.
That said, each system did effectively demonstrate the ability to significantly enhance various aspects of the process in different ways. The use of automated systems visualization tools and clustering approaches in Case 1, for example, as well as Case 2's qualitative and emotional narrative capture techniques, clearly demonstrate ways that early stage scenario processes can be enhanced and improved. The IFTF's game-like interface and playful experience attracted a wide and more diverse audience to the subject, using sophisticated visuals and multimedia to help communicate ideas and concepts to an audience who would otherwise never have been involved. The Wikistrat and Future of Facebook examples both demonstrate different methods of engaging larger groups in dialogue and debate around key issues as well.
This suggests that, with proper design and attention, a hybrid form of online and face-to-face engagement could be developed that would leverage the benefits of both virtual and in-person collaboration more effectively. New experiments developing various approaches to rapid prototyping of futures online, then testing and exploring them in more depth in person, may prove particularly successful.

4.3. Impacts on professional standards and practice

A second implication of the development of such systems is their potential impact on the field's future, as well as on the larger topic of public participation in general. Pang [12]



suggests that, "the futures profession is decentralized, eclectic and intellectually varied: there are no schools that train its elite, few barriers to entry, no certification or regulatory body." Although this research dealt only with scenario planning (which is just one part of the larger futures field), it is possible that its findings may still be relevant to other aspects of the field, as well.

Just as Amazon, eBay and other online sites have created reputation mechanisms that differentiate experienced, trustworthy participants from others, so too could professional futurists and scenario planners gain evidence-based reputations based on their performance in public scenario processes. Several experts interviewed suggested that the large-scale deployment of such systems would have a fundamental leveling effect on the industry, which is currently characterized by a wide variety of specialty practitioners employing various methodologies. Should such systems enable more transparent reputation tracking, both amateur and professional participants could be evaluated more effectively by their scores over time. This would help prevent the "hedgehog effect" [17], by which loud, overly confident forecasters and pundits attract short-term media attention, regardless of their past record of performance. Such a common set of transparent standards could significantly improve the professional quality of the scenarios and foresight industry, which one expert characterized as "heavily influenced by attention-seeking media impresarios who operate without verification, accountability, or professional validation."

Aside from professional standards, the low cost of such platforms could have a fundamental commoditizing effect on the scenario planning marketplace.
While many non-specialist management consultancies now offer scenario planning services, it is conceivable that such systems would produce an even more profound "deskilling" of the practice beyond its current state. Just as tax accountants and HR services have dissolved into the cloud in many markets, so too might basic scenario planning and forecasting services. If true, this would both level the playing field for most practitioners (thereby lowering their fees and prestige), as well as create a smaller, more exclusive niche market for the true scenario "stars" (who may even reject the use of such tools on principle and trade on their reputation alone). Regardless, the widespread introduction and roll-out of such tools, if it were to occur at a large scale, could have a fundamentally destabilizing effect on the scenario planning market as currently structured, with both positive and negative outcomes for nearly all players involved.
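The evidence-based reputation tracking discussed above already has a standard building block: the Brier score, the proper scoring rule used in forecasting tournaments such as the Good Judgment Project and in Tetlock's work on expert judgment [17]. The sketch below shows how a platform might rank participants by track record; the function names are invented for illustration, and a real system would also need to handle small samples, question difficulty, and forecast timing.

```python
def brier_score(forecasts):
    """Mean Brier score over binary-event forecasts. Each item is
    (stated_probability, outcome) with outcome 0 or 1. A score of
    0.0 is perfect; an uninformative 50/50 forecaster scores 0.25."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

def rank_forecasters(records):
    """records maps forecaster name -> list of (probability, outcome).
    Returns names ordered best (lowest mean Brier score) first."""
    return sorted(records, key=lambda name: brier_score(records[name]))
```

A consistently overconfident "hedgehog" who assigns 0.99 to events that then fail to occur accumulates nearly 1.0 per miss, while a well-calibrated forecaster stays far lower, so the contrast becomes visible regardless of media profile.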

5. Conclusion

5.1. How such platforms may evolve

This research makes a small contribution to the growing body of scholarly work exploring issues in the design and impact of Foresight Support Systems (FSS) in corporate and public policy settings. Examples include recent reviews of the accuracy of large-scale expert prediction markets [32,33], as well as formal evaluations of methods such as real-time Delphi and other related approaches [34–36]. Bañuls and Salmeron [40] summarize some of the issues coming out of this research and raise a variety of important research directions going forward.

This work suggests that as the nature and sophistication of these platforms continues to develop, they will likely evolve in a number of different directions. Two of these directions are discussed here: their use as real-time horizon-scanning, monitoring, and rapid-futures prototyping systems, and their use as consumer-grade "personal futures systems".

5.1.1. Real-time horizon scanning and rapid futures prototyping

The combination of such systems with other forms of predictive trend monitoring, data mining, and algorithmic processing offers particular promise. Analyses built on platforms such as Google Trends, or on sentiment analysis of Twitter, have found that search term volume and positive/negative sentiment correlate quite well with near-term outcomes such as movie sales, election standings, or flu outbreaks. Software such as Palantir is in widespread use throughout the intelligence communities of the US and other governments, and data mining is routine in nearly all large-scale corporate activities. It is therefore likely that, over time, more sophisticated and large-scale versions of the platforms explored in this research may merge with such approaches to create extreme-scale, real-time monitoring and trend-tracking systems.

This combination could offer three advantages. First, the massive sample size of hundreds of thousands, if not millions, of participants would provide a much greater source of data and material for scenario building. Second, it would enable real-time monitoring of changes and trends, such that a common base of opinions and perspectives could be compared against rapid movements and surprising outcomes. Third, it would allow for more rapid (potentially real-time) testing of solutions and scenario spaces.

Access to the mental processing power of millions of participants, combined with suitably sophisticated mechanisms for tracking and synthesizing the data they create, would enable a fundamentally different kind of foresight practice, based on "rolling, constantly updated images of the future", to quote Dave Snowden, a UK-based academic and practitioner.
Such an ongoing process would go even further to overcome the limitations of individual, group and facilitator bias, helping to identify surprising events "as they emerged from the future". This makes possible a new approach to corporate strategic planning and public sector policy-making, one based on sensing and interacting with emerging trends, as opposed to trying to forecast and predict them over time.
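As a toy illustration of this "sensing" posture, the fragment below flags surprising movements in a monitored signal (search volume for a driver, mentions of an emerging issue, etc.) against a rolling baseline. This is an assumption-laden sketch, not a description of any system studied here: the window-and-threshold scheme is the simplest possible anomaly test, where a real platform would combine many signals and more robust statistics.

```python
from collections import deque
import statistics

class TrendMonitor:
    """Rolling-baseline monitor: a reading counts as 'surprising' when
    it deviates from the recent window mean by more than k standard
    deviations of that window."""

    def __init__(self, window=30, k=3.0):
        self.history = deque(maxlen=window)  # recent readings only
        self.k = k

    def observe(self, value):
        surprising = False
        if len(self.history) >= 5:  # need a minimal baseline first
            mean = statistics.fmean(self.history)
            sd = statistics.pstdev(self.history)
            if sd > 0 and abs(value - mean) > self.k * sd:
                surprising = True
        self.history.append(value)
        return surprising
```

Fed a steady stream of readings, the monitor stays quiet; a sudden spike is flagged immediately, which is the event a scenario team would then interrogate against its existing driver set.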

5.2. Personal futures systems

The second area of possible development is in consumer-grade personal futures. Personal futures is, at present, a niche sub-topic within the larger literature on qualitative scenario planning. Wheelwright [41] defines personal futures as a process of using scenario planning methods at the scale of individual life decisions and pathways. This involves a combination of methods similar to those presented in this research, adapted to various life stages and life events. The result is a series of short- to medium-term qualitative scenarios exploring the different branching points around an important life decision.
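The branching structure Wheelwright describes can be illustrated with the classic two-axes scenario cross, sketched below at personal scale. The `scenario_matrix` function and the example axes are invented for illustration; a consumer-grade tool would presumably populate such skeletons automatically from a user's own data.

```python
from itertools import product

def scenario_matrix(axis_a, axis_b):
    """Classic two-axes scenario cross: each axis is a tuple of
    (driver_name, low_pole, high_pole). Returns four scenario
    skeletons, one per quadrant, ready for narrative fleshing-out."""
    name_a, *poles_a = axis_a
    name_b, *poles_b = axis_b
    return [
        {name_a: pa, name_b: pb, "title": f"{pa} / {pb}"}
        for pa, pb in product(poles_a, poles_b)
    ]
```

For a career decision, for example, crossing a "job market" axis against a "family mobility" axis yields four quadrant skeletons, each a branching point to be explored as a short qualitative narrative.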



The combination of web-based, social-network-driven mechanisms for generating future scenarios with a widespread desire to understand and comment on one's own and others' lives could become a powerful tool, taking many forms. On the trivial side, automated or semi-automated services can easily be imagined that would provide real-time micro-scenario forecasts for your day, your week, or your month, derived from information culled from your social network and that of your friends. We can easily posit the invention of more powerful digital "personal assistants" such as Apple's Siri or a personal IBM Watson supercomputer on your phone. Such services may evolve into robust predictive tools for personal, professional and social advice over a variety of time scales. Designer and artist Jessica Charlesworth explored some of this territory in her research on personal futures "Delphi Parties" and "microtrend diaries" for friends, making it relatively easy to imagine how these services could evolve into consumer products.

5.3. Areas for future research

Several areas of future research would benefit from further attention. Three are considered below. First, this research suggests that it will be important to develop more rigorous empirical measurements to capture the real outcomes of the scenario planning process. Although concepts such as "shared mental models" and "collective learning" are difficult to define, proxy descriptive measures or even in-depth ethnographic study could help shed more light on the specific social and psychological mechanisms that occur. Online platforms, which offer the ability to log and capture multiple dimensions of human interaction, have notable promise in this regard. As mentioned, the preliminary work of Chermack and colleagues in designing such instruments may be an important step in this direction.

A second area of research worth exploring, specifically with regard to the efficacy and design of online scenario planning systems, is how to encourage greater interactive socialization, both in face-to-face and online settings. This could take the form of blended workshops (using Twitter back-channels, for example) or curated online engagement with more explicit social goals (teams, periodic meet-ups, etc.). Although illuminating in their own right, such hybrids may also bear the most fruit in the context of a common measurement platform, described above.

Finally, as large-scale data mining and real-time trend monitoring become more widespread, it is likely that there will be significant value in joining them with the scenario planning process (and vice-versa). The research communities exploring large-scale data mining and real-time pattern matching are largely divorced from the scenario planning and public engagement communities at the moment, although some research initiatives are taking tentative steps towards combining them.
Significant synthetic work would therefore be valuable to explore how such real-time systems might interact with, complement or replace the scenario process (if at all). Together, the combination of more reliable measurement instruments, more social online systems, and more robust, integrated data mining and trend monitoring systems is likely to generate exciting changes in the scenario planning and decision-support fields in the coming decade (especially

if developed as affordable, high-quality consumer products). This work represents a small step in that exciting direction.


Acknowledgements

The author would like to thank Professors Joe Ferreira, Michael Flaxman and Andres Sevtsuk for their support during this research. Dr. Wendy Schultz, Dave Snowden, Tony Hodgson, Graham Leicester, Dr. Jake Dunagan, Dr. Anthony Townsend, and Venessa Miemis were also invaluable in providing data, access and interpretation for several of the case studies. Finally, Nathan Koren, Dr. Alexander Stahle and Kevin Marzec provided assistance in the ongoing programming and development of the Futurescaper platform.

References

1088[1] T. O’Rielly, What is Web 2.0?, accessed online at: http://oreilly.com/1089web2/archive/what-is-web-20.html 2005.1090[2] P. Anderson, What is Web 2.0? Ideas, technologies and implications for1091education, JISC Technology and Standards Watch, February 2007.1092[3] T. Vander Wal, Folksonomy coinage and definition, accessed online at:1093http://vanderwal.net/folksonomy.html 2005.1094[4] C. Howe, Crowdsourcing: why the power of the crowd is driving the1095future of business, Random House, New York, 2006.1096[5] D. Brabham, Crowdsourcing as a model for problem solving: An1097introduction and cases, Convergence 14 (1) (2008) 75–90.1098[6] E. Schenk, C. Guittard, Towards a characterization of crowdsourcing1099practices, J. Innov. Econ. 1 (7) (2011) 93–107.1100[7] F. Emery, E. Trist, The causal texture of organizational environments,1101Hum. Relat. 18 (1965) 21–32.1102[8] P.Wack, Scenarios: Unchartedwaters ahead, Harv. Bus. Rev. (September –1103October, 1985). Q31104[9] K. van der Heijden, Scenarios: The art of strategic conversation, Wiley1105and Sons, New York, 1996.1106[10] P. Schwartz, The art of the long view: Planning for the future in an1107uncertain world, Currency Doubleday, New York, 1996.1108[11] B. Sharpe, K. van der Heijden, Scenarios for success, Wiley and Sons,11092007.1110[12] A. Pang, Future 2.0: Rethinking the discipline, Foresight 12 (1) (2010)11115–20.1112[13] R. Slaughter, Futures beyond dystopia: Creating social foresight,1113Routledge Falmer, London, 2004.1114[14] R. Slaughter, Towards a critical futurism, World Future Soc. Bull. 18 (5)1115(1984) 17–21.1116[15] S. Inayatullah, Questioning scenarios, J. Futures Stud. 13 (3) (2009)111775–80.1118[16] S. Inayatullah, Deconstructing and reconstructing the future, Predictive1119Cult. Crit. Epistemol. Futur. 22 (2) (1990) 115–141.1120[17] P. Tetlock, Expert political judgment: How good is it? How can we1121know? Princeton University Press, Princeton, NJ, 2006.1122[18] A. Curry, W. 
Schultz, Roads less travelled: Different methods, different1123futures, J. Futures Stud. 13 (4) (2006) 35–60.1124[19] P. Shoemaker, Scenario planning: A tool for strategic thinking, Sloan1125Manag. Rev. (Winter, 1996). Q41126[20] G. Ringland, The role of scenarios in strategic foresight, Technol.1127Forecast. Soc. Chang. 77 (9) (2011) 1493–1498.1128[21] C. Bezold, Lessons from using scenarios for strategic foresight, Technol.1129Forecast. Soc. Chang. 77 (9) (2011) 1513–1518.1130[22] G. Burt, Why are we surprised at surprises? Integrating disruption1131theory and system analysis with the scenario methodology to help1132identify disruptions and discontinuities, Technol. Forecast. Soc. Chang.113374 (6) (2007) 731–749.1134[23] M. Glick, T. Chermack, H. Luckel, B. Gauck, The effects of scenario1135planning on participant mental model styles, Eur. J. Train. Dev. 36 (5)1136(2012) 488–507.1137[24] M. Haeffner, D. Leone, L. Coons, T. Chermack, The effects of scenario1138planning on perceptions of learning organization characteristics, Hum.1139Resour. Dev. Q. 23 (4) (2012) 519–542.



[25] A. Veliquette, L. Coons, S. Mace, T. Coates, T. Chermack, J. Song, The effects of scenario planning on perceptions of strategic conversation quality and engagement, Int. J. Technol. Intell. Plan. 8 (3) (2012) 254–275.
[26] R. Yin, Case study research: Design and methods, Sage, London, 1994.
[27] B. Glaser, A. Strauss, The discovery of grounded theory, Aldine, Chicago, 1967.
[28] M. Rosvall, C. Bergstrom, Maps of information flow reveal community structure in complex networks, PNAS 105 (2008) 1118.
[29] J. Dator, Futures studies as applied knowledge, in: Richard Slaughter (Ed.), New thinking for a new millennium, Routledge, London, 1996.
[30] W. Schultz, N. George, Scenarios compendium: Natural England Commissioned Report NECR031, retrieved August 12, 2011 from http://publications.naturalengland.org.uk/publication/41011.
[31] C. Argyris, D. Schön, Organizational learning, Addison-Wesley, Reading, MA, 1978.
[32] T. Chermack, A theoretical model of scenario planning, Hum. Resour. Dev. Rev. 3 (4) (2004) 301–325.
[33] P. Healey, Collaborative planning: Shaping places in fragmented societies, MacMillan Press, London, 1997.
[34] J. Innes, D. Booher, Reframing public participation: Strategies for the 21st century, Plan. Theory Pract. 5 (4) (2004) 419–436.
[35] L. Ungar, B. Mellers, V. Satopää, J. Baron, P. Tetlock, J. Ramos, S. Swift, The Good Judgment Project: A large scale test of different methods of combining expert predictions, AAAI Technical Report FS-12-06, 2012.
[36] J. Wolfers, E. Zitzewitz, Prediction markets, J. Econ. Perspect. 18 (2) (2004) 107–126.

[37] T. Gordon, A. Pease, RT Delphi: An efficient, "round-less" almost real time Delphi method, Technol. Forecast. Soc. Chang. 73 (4) (2006) 321–333.
[38] T. Gnatzy, J. Warth, H. von der Gracht, I.-L. Darkow, Validating an innovative real-time Delphi approach – A methodological comparison between real-time and conventional Delphi studies, Technol. Forecast. Soc. Chang. 78 (9) (2011) 1681–1694.
[39] S. Dalal, D. Khodyakov, R. Srinivasan, S. Straus, J. Adams, ExpertLens: A system for eliciting opinions from a large pool of non-collocated experts with diverse knowledge, Technol. Forecast. Soc. Chang. 78 (8) (2011) 1426–1444.
[40] V.A. Bañuls, J.L. Salmeron, Scope and design issues in foresight support systems, Int. J. Foresight Innov. Policy 7 (4) (2011) 338–351.
[41] V. Wheelwright, Futures for everyone, J. Futures Stud. 13 (4) (2009) 91–104.

Dr. Noah Raford is a strategist and public policy advisor focusing on issues of foresight, scenario planning, and organizational change. He received his PhD from MIT, where he developed new approaches to stakeholder engagement and collaboration using crowdsourcing and web-based tools, and conducts ongoing research on 21st century approaches to governance and management. He is a lecturer on foresight and public policy in the Department of Science, Technology, Engineering and Public Policy at University College London and a former advisor to the UAE Prime Minister's Office, where he helped design and run the country's first national foresight and policy planning unit.

