



Journal of Interactive Marketing 27 (2013) 62–73. www.elsevier.com/locate/intmar

Using Blogs to Solicit Consumer Feedback: The Role of Directive Questioning Versus No Questioning☆

Christine Balagué a,⁎ & Kristine de Valck b

a Institut Mines-Telecom, Telecom School of Management, 9 rue Charles Fourier, 91000 Evry, France
b HEC-Paris, 1, rue de la Libération, 78351 Jouy-en-Josas, France

Available online 30 August 2012

Abstract

Despite increasing adoption of social media for market research, the effect of the design of Web 2.0 platforms on the quantity and quality of market insights obtained is unclear. With a field experiment, this article addresses the effect of participant interaction and the role of questioning on the performance of blog platforms that aim to solicit online consumer feedback. We show that the role of questioning is a key determinant of the protocol design decision process. In contrast with the industry standard of directive questioning and the intuitive appeal of a collective protocol in a social media setting, this study shows that no questioning, combined with an individual protocol, results in the best feedback quality. The analyses also highlight the value of an individual, no questioning protocol for performance over time and insights into consumers' experiential consumption and personal backgrounds. In terms of feedback quantity, protocols that combine directive questioning with a collective setting are best. These actionable recommendations indicate how market researchers can design online blog platforms to improve consumer feedback quantity and quality.
© 2012 Direct Marketing Educational Foundation, Inc. Published by Elsevier Inc. All rights reserved.

Keywords: Directive questioning; Online consumer feedback; Blogs; Market research

☆ The authors are listed alphabetically and contributed equally to the research. The authors would like to thank the editor and two anonymous reviewers for their very valuable feedback.
⁎ Corresponding author. E-mail addresses: [email protected] (C. Balagué), [email protected] (K. de Valck).

Introduction

The interactive Web 2.0 has radically changed how companies communicate with consumers. Rather than traditional, one-way communication practices, modern consumers empowered by the Internet demand ongoing interactions and two-way conversations. They no longer wait for companies to solicit their opinions but actively express their concerns, disapproval, suggestions for improvement, and novel ideas, using the various communication spaces available online. In turn, companies are experimenting with blog tools, online social networks, and virtual worlds to engage with customers throughout the product development process (e.g., Nambisan and Baron 2007; Prahalad and Ramaswamy 2004).


Although company–consumer contacts through social media are usually public (i.e., accessible to anyone), companies also adopt Web 2.0 platforms for non-public interactions (Barber, Reitsma, and Sorensen 2010). As these interactions occur in a secure environment, i.e., without the risk of competitors listening in, they are particularly effective for soliciting consumer feedback during concept and product testing.

In this paper, we examine how the design of such a blog platform, in terms of interactions between consumers and between consumers and the company, influences the quantity and quality of consumer feedback. Blogs are Web-based communication platforms consisting of "frequently updated Web sites where content (text, pictures, sound files, etc.) is posted on a regular basis and displayed in reverse chronological order" (Schmidt 2007, p. 1409). Blogs are attractive means to interact with consumers because of their cost efficiency, ease of use, potential reach, and opportunity to solicit consumer feedback during product tests that take place in a natural, in-home setting (Sawhney, Verona, and Prandelli 2005). Blogs can be individual, diary-like Web sites, or they can be maintained by a group of people who take turns contributing posts to develop a collective conversation.

In a field experiment, we used non-public blogs to solicit consumer feedback during a two-week, in-home product test.



We manipulated whether participants gave feedback in an individual or collective setting, as well as whether the feedback process was directed by specific questions about the product and consumption experience, or if participants could address any topic they wanted. Both interaction dimensions are critical to the protocol design process, in terms of coordination, costs, and performance. Market research practitioners often advocate for the use of a qualified moderator (e.g., Stevens 2007), arguing that active direction through questions guides the discussion and maintains respondent interest. However, many consumers are accustomed to reviewing, rating, and freely expressing themselves about products and services on blogs and forums, without company probing, which leads us to question the common practice of using an active, and costly, online moderator (Fern 1982). Moreover, the trade-off between face-to-face focus groups and individual interviews has been well studied (e.g., Crabtree et al. 1993; Fern 1982, 2001), but we know less about the relative performance of review groups and individuals online.

Based on the electronic brainstorming literature, we believe that it is theoretically unclear whether a collective blog performs better or worse than an individual blog in terms of the quantity and quality of feedback generated. We introduce the role of questioning (directive or no questioning) as a key determinant and predict that online directive questioning increases the quantity of consumer feedback but harms its quality. The best performance, in terms of quality, instead should be obtained with an individual blog without questions, which enables consumers to express themselves freely. Our study validates these hypotheses while also deepening our understanding of the factors that contribute to the results through an analysis of feedback processes in the tested protocols.

Contextual and Theoretical Background

Using Social Media for Market Research

A recent report, "Predictions 2011: What Will Happen in Market Research," mentions the rise of do-it-yourself (DIY) online market research, which any department within a company can execute without the assistance of external or internal market research agencies or teams (Reitsma et al. 2010). This trend has been facilitated by the growing market of specialized online survey and social media listening platform vendors. Thus any brand manager can go online at any time to discover what consumers are saying about her products, or she can use social media to engage in a conversation with current or potential customers. As a consequence, there is an ongoing debate within the market research industry about whether market research will still exist as a profession 20 years from now (see, for example, the Advertising Research Foundation). Barber, Reitsma, and Sorensen (2010, p. 2) assess the situation as follows:

While consumers continue their march toward social experiences online, the traditionally conservative practice of market research has been slow to adopt social tools and research practices. The industry at large has historically defended its risk-averse approach to research by falling back on time-tested measures of representativeness and the science of sampling. However, with consumers living increasingly more of their lives online, other research methods such as phone and mail are under pressure. And the very practice of online survey research is being challenged as consumers expect more interactive and engaging experiences in this channel.

Furthermore, "while most market insights professionals are interested in the concept of using social media as a source for insights, the actual application and practice of social media market research still remains a black box" (Barber et al. 2011, p. 1). Considering the apparent need to understand ways to capitalize on social media for market research, we started this study with two basic questions: First, when soliciting online consumer feedback about a (new) product, is it better to tap into the wisdom of a collective or to solicit feedback individually? Intuitively, social media rely on networks and communities, so a collective approach seems appropriate. Yet existing literature on electronic brainstorming challenges this assumption. Second, is it better to work with an experienced moderator who directs the feedback process with questions, or can equally good results be obtained by offering informants the freedom to decide what they want to discuss? In the latter case, DIY research appears to provide an attractive alternative to working with professional market researchers.

We address these questions with a field experiment in which blogs provide interaction platforms. Blogs are ubiquitous social media, adopted by a variety of consumers; therefore, they provide a good channel for reaching mainstream markets. In the next section, we combine insights from the electronic brainstorming literature and focus group research to develop our hypotheses.

Collective versus Individual Blogs

In the context of our study, consumers who give product feedback via a collective blog can see what other consumers write and they are encouraged to respond to that, whereas in an individual blog consumers do not see anyone else's feedback. In the past several decades, researchers have investigated whether people are more productive and creative working alone or in a group (for overviews, see Diehl and Stroebe 1987; Pinsonneault et al. 1999). Laboratory and field studies have established that in face-to-face brainstorming tasks, individuals outperform groups (e.g., Fern 1982; Paulus, Larey, and Ortega 1995; Sutton and Hargadon 1996). Dennis and Valacich (1993) show, though, that groups can outperform individuals when the brainstorming is electronic, instead of face-to-face. In effect, electronic brainstorming overcomes some of the factors that cause productivity losses among groups in face-to-face settings (Pinsonneault et al. 1999). Most importantly, electronic brainstorming enables participants to generate ideas in parallel and enter them simultaneously into computers, which overcomes the problem of production blocking that occurs when participants must wait their turn to speak (e.g., Diehl and Stroebe 1991). A computer-mediated setting offers some anonymity because participant-generated ideas appear on computer screens without indicating their origination. Thus participants may be less concerned about communicating ideas that others may reject, which mitigates the problem of evaluation apprehension (Camacho and Paulus 1995).


The electronic brainstorming literature has focused primarily on the process of cognitive stimulation to explain why groups outperform individuals (e.g., Dennis and Valacich 1999; Dugosh Leggett et al. 2000; Paulus and Yang 2000). Because group participants are exposed to the ideas of others, they are inspired to think in novel directions, so groups tend to be more creative and productive than individuals. Cognitive stimulation is important, but it is not the only factor that explains differences in group and individual settings. Pinsonneault et al. (1999) offer a comprehensive inventory of factors that enhance or inhibit group idea generation; in addition to cognitive stimulation versus interference, they list two relevant opposing forces. That is, unlike individuals, groups may benefit from observational learning because members can imitate best performers, which increases group productivity (e.g., Hill 1982). However, group members also may adjust their individual productivity to the least performing members, thus generating a downward social comparison process (Paulus et al. 1996). Furthermore, many studies have shown that working in groups motivates individuals and increases their desire to contribute to group outcomes (e.g., Camacho and Paulus 1995; Diehl and Stroebe 1987). At the same time, group brainstorming might encourage free riding. In the latter case, members limit effort and rely on others to accomplish the task because they believe their efforts are dispensable (Harkins and Petty 1982; Kerr and Bruun 1983), perceive diffused responsibility (Latané, Williams, and Harkins 1979), or prefer social or cognitive loafing (Albanese and Van Fleet 1985; Diehl and Stroebe 1987).

Although the effects of social comparison and motivation may not emerge in brief brainstorming sessions, focus group literature asserts that they arise in groups that interact for longer periods of time (Fern 2001; Morgan 1993). Thus it is unclear if the finding that groups outperform individuals during electronic brainstorming holds in a multiple-day, online product feedback task that takes place through blogs. We argue that the role of questioning determines whether the outcome is positive or negative. Moreover, we posit that questioning has differential effects on the two main indicators of blog performance: quantity and quality of participant feedback.

Questioning and its Effect on Feedback Quantity

In traditional market research, direction over the data collection process varies in the control that moderators and interviewers exercise. Control ranges from nondirective to directive. Nondirective moderators ask few questions and engage in limited probing; they do not actively participate in questioning. In contrast, directive moderators exercise considerable control using (semi-)structured questions (Frey and Fontana 1991). Although many market research practitioners consider direction through questioning essential to obtain good results (Morgan 1996), Fern (1982) has shown that the quantity and quality of ideas generated by directed versus undirected groups did not differ significantly. In contrast, individuals interviewed by questioning performed better than individuals brainstorming alone. These results pertained to an offline idea-generation task, a relatively straightforward assignment for which participants did not need many instructions. But what happens when participants face a more complicated task, such as product evaluation and feedback (Fern 1983), especially if that task occurs online over the course of multiple days?

A multiple-day, online product feedback context should create a two-sided effect for directive questioning. A directive moderator who asks questions at regular intervals provides guidance about an appropriate completion plan, so participants gain an idea of the kinds of feedback expected at various points in time. Without regular questions, perceptions of their competence to complete the task may be low (Deci and Ryan 2002), which could prevent them from freely contributing product evaluations. This effect is particularly prevalent online because participants primarily interact with a computer and thus lack immediate social pressure to contribute, as exists in a face-to-face market research setting. By posing regular questions, a directive moderator attempts to initiate and maintain online contributions. We thus expect a main effect of questioning on feedback quantity, measured as the volume of posts and number of distinctive topics addressed, in both collective and individual blogs.

H1. Blogs that feature directive questioning outperform blogs without questioning in terms of (a) volume of posts and (b) number of topics addressed.

Combined Effect of Questioning and Participant Interaction on Feedback Quantity

Participants in a collective blog directed with regular questions likely benefit most from the process gains that have been attributed to electronic brainstorming groups. In this condition, perceived competence and certainty about expected feedback combine with cognitive stimulation, positive social comparison, and the hedonic and learning pleasure of interacting with other participants, which should lead participants to contribute more posts and address more topics than participants in other conditions. We hypothesize:

H2. A collective blog with directive questioning outperforms the other protocols in terms of (a) volume of posts and (b) number of topics addressed.

Questioning and its Effect on Feedback Quality

Quantity of feedback is only one indicator of the value of blogs as social media research platforms. Quality is another and perhaps even more important performance measure, as determined by the depth and usefulness of obtained information to provide insight into consumers' opinions and behaviors. We argue that directive questioning on blogs exerts a negative effect on the quality of consumer feedback.

Cohesion and rapport among participants and between the participant and the moderator help participants express themselves (Fern 2001). In face-to-face settings, people placed together start to bond instantly (Edmunds 1999; Walther 1995), such as by smiling and exchanging looks to start the relationship-building process, even before saying a word. Online though, relationships with others are mediated through a technological interface (Hoffman and Novak 1996).


Computer interaction reduces social cues and information richness, which makes it more difficult for participants and the moderator to get to know one another or feel close. In this respect, Walther (1995) shows that it takes more time to build social bonds with others in a computer-mediated environment than in face-to-face settings.

We argue therefore that online, the exchange may stay relatively "cold" because the questions and incitements cannot be accompanied by smiles, gestures, and encouraging nods or utterances. Therefore, online directive questioning may result in a rather rigid question-and-answer format, rather than an intimate conversation. Answering questions posed by the moderator gives participants the satisfaction of task completion; considering the job done, they may not volunteer more information (Stewart and Shamdasani 1990). In turn, we expect that in an online context, directive questioning incites participants to react to questions, but not necessarily to expand on them, so it is detrimental to usefulness and depth.

Without regular questions, participants have autonomy in how they express themselves. They may be uncertain about the feedback expected, but the freedom offered to them in blogs that lack directive questioning gives them opportunities to tell their own story – "this is who I am, and this is how I experienced the product discovery" – which features many details and self-disclosure. We predict a main effect of no questioning on contribution quality, which we operationalize as usefulness and self-disclosure, in both collective and individual blogs.

H3. Blogs that feature no questions and allow participants to express themselves freely outperform blogs with directive questioning in terms of (a) percentage of useful posts and (b) percentage of self-disclosure posts.

Combined Effect of Questioning and Participant Interaction on Feedback Quality

The positive effect of no questioning also should be stronger for individuals than for groups. We argue that participants in a collective blog without regular directive questions suffer from negative group dynamic processes, such as free-riding and downward social comparison. That is, participants share the freedom of the nondirective format with other participants, which alleviates pressures on them to perform. The knowledge that others may give feedback lessens the need to expand on one's own experiences. This situation may be aggravated by the mechanism of social comparison: When participants see that others contribute minimally, a downward spiral may begin. Finally, because it takes time to build bonds online, participants may feel less comfortable about disclosing personal information to an unknown collective of others.


In contrast, in an individual setting, each respondent is solely responsible for providing the online feedback. Respondents know that their contributions can be directly attributed to them, so the pressure of personal exposure should increase their effort levels and thus increase the usefulness of their feedback. Moreover, an individual blog format often resembles a personal diary, which should trigger reflection and self-disclosure. Therefore, we hypothesize:

H4. Individual blogs that feature no questions and allow participants to express themselves freely outperform the other protocols in terms of (a) percentage of useful posts and (b) percentage of self-disclosure posts.

We summarize these hypotheses in Table 1.

Table 1
Overview of hypothesized results.

                                  Best performance:             Best performance:
                                  feedback quantity             feedback quality
Directive versus no questioning   Directive questioning (H1)    No questioning (H3)
Collective versus individual      Collective, directive         Individual, no
                                  questioning (H2)              questioning (H4)

Field Experiment

Participants

To conduct our field experiment, we cooperated with an international foods manufacturer interested in receiving consumer feedback about a recently launched energy drink. We used a survey and telephone interviews to recruit 60 participants from a panel. The selection criteria were designed to match the brand target, including age (18–35 years), lifestyle (active, sporting), and product category interest. We also controlled for Web 2.0 experience (participation in online forums, blogs, social networks, and online communities), attitude toward the computer-mediated environment, and online interaction propensity (adapted from Wiertz and De Ruyter 2007). Consumers with extremely high or low social media literacy were excluded. Selected participants were semi-randomly divided to create four homogeneous groups in terms of gender (50% female) and age (50% between 18 and 24 years, 50% between 25 and 35 years).1 We then randomly assigned 15 participants to each of the four experimental conditions.
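As an illustration of how such a semi-random, stratified assignment can be operationalized, consider the sketch below. The participant records, field names, and rotation scheme are our own illustrative assumptions, not the authors' actual procedure:

```python
import random
from itertools import product

def assign_participants(participants, conditions, seed=42):
    """Stratify by gender and age band, then deal each stratum across
    conditions round-robin. Rotating the starting condition per stratum
    spreads the remainders, so 4 strata of 15 yield 15 per condition."""
    random.seed(seed)
    groups = {c: [] for c in conditions}
    strata = list(product(("F", "M"), ("18-24", "25-35")))
    for s, (gender, age_band) in enumerate(strata):
        stratum = [p for p in participants
                   if p["gender"] == gender and p["age_band"] == age_band]
        random.shuffle(stratum)  # random order within each stratum
        for i, p in enumerate(stratum):
            groups[conditions[(i + s) % len(conditions)]].append(p)
    return groups

# Hypothetical usage with the four protocols of the experiment:
conditions = ["CDQ", "CNQ", "IDQ", "INQ"]
participants = [{"id": n, "gender": "F" if n < 30 else "M",
                 "age_band": "18-24" if n % 2 else "25-35"}
                for n in range(60)]
groups = assign_participants(participants, conditions)
print({c: len(g) for c, g in groups.items()})  # 15 participants each
```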

Design and Procedure

Before the start of the experiment, all participants received 36 bottles of the energy drink at their home address. An enclosed letter explained the product testing and feedback procedure. Participants had to wait for the moderator's signal before tasting the product; then they were to drink at least one bottle of the beverage per day for the duration of the experiment (15 days). The surplus of bottles could be consumed at will or shared with others, with no obligation to finish the entire supply before the end of the experiment. Participants were instructed to log on to a designated blog platform each day to provide feedback about the product. Those in the collective settings also received explicit instructions to read and react to the contributions of the other participants. The instructions also detailed how to contribute posts.

1 Homogeneity was warranted because we wanted to limit the potential influence of sociodemographic profile effects related to social media usage. We tested for the effects of these control variables and found no significant changes in the results.


Furthermore, to provide some overall guidance, the introductory letter in all four protocols suggested six main themes: Describe yourself; Food and you; Drinks and you; The product before tasting it; First tasting; and Everyday life with the product. The entire set of instructions and guidelines also appeared posted on the blogs.

We manipulated two factors across protocols: (1) participant interaction (individual versus collective setting) and (2) role of questioning (directive versus no questioning). In the directive questioning format, the moderator regularly posted open questions about the product (e.g., "how do you like the taste?" "what are your suggestions for improvement?"). These questions had been developed in advance, and their wording and pacing were similar for both protocols with directive questioning. In the no-questioning format, the moderator asked no particular questions but encouraged participants to give feedback (e.g., "hi everyone, it is great to read your input so far. Don't forget that you can respond to what others have to say"). In the individual setting, participants never saw others' posts; we thus gathered 30 individual blogs. In the two collective cohorts, participants in each protocol logged on to a shared blog platform, where they could read and react to the posts of others.

To keep any other moderator influences to a minimum, we worked with the same moderator for all 32 blogs. This experienced market research professional worked for an agency specializing in online focus group research. She thus is representative of professional moderators commonly used by marketing practitioners. The moderator was blind to the hypotheses. Her objective was similar for all protocols: to elicit feedback. There was no incentive or reward to generate particular results. In all protocols, the moderator started the experiment by welcoming participants, reminding them of the guidelines, and inviting them to give their feedback. Moreover, in all protocols, the moderator used public and private messages to encourage nonresponsive participants to contribute. The online interface makes it much easier to manage any moderator characteristics that might influence the interaction process (e.g., word choice, reactivity, speaking time) or even eliminate them (e.g., tone of voice, subtle gestures, facial expressions). We engaged in daily interactions with the moderator to discuss the feedback process across the four protocols. By keeping her actions and wording identical for all protocols (apart from the presence or absence of questioning), we ensured that her moderation behavior did not compromise the experimental setting.

We kept the online interface simple and similar for all protocols. The upper part contained tags for navigation and text manipulation (e.g., letter type, font, color, image). The middle listed the titles of posts, in reverse order with the last contribution appearing first. The name of the contributor and the date and time of the contribution also appeared. Participants clicked on the title to open the post or to reply and/or read subsequent reactions. The right side of the interface showed a calendar, so participants could access all posts by day of the experiment. To make contributions, they clicked on a link in the upper-left corner, labeled "new post." They also could easily react to posts (whether their own, the moderator's, or other participants').
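For readers who want to prototype a comparable platform, here is a minimal sketch of the underlying data model; the class and method names are hypothetical and simply mirror the interface features just described (reverse chronological listing, reactions, calendar filter):

```python
from dataclasses import dataclass
from datetime import datetime, date
from typing import Optional

@dataclass
class Post:
    author: str
    title: str
    body: str
    created: datetime
    parent_id: Optional[int] = None  # set when the post reacts to another post

class Blog:
    """Posts listed in reverse chronological order, reactions attached to
    a parent post, and a calendar-style filter by experiment day."""
    def __init__(self):
        self.posts: dict[int, Post] = {}
        self._next_id = 1

    def add(self, post: Post) -> int:
        pid, self._next_id = self._next_id, self._next_id + 1
        self.posts[pid] = post
        return pid

    def front_page(self) -> list[Post]:
        # Last contribution appears first, as on the experimental blogs.
        return sorted(self.posts.values(), key=lambda p: p.created, reverse=True)

    def by_day(self, day: date) -> list[Post]:
        # Calendar view: all posts contributed on a given experiment day.
        return [p for p in self.posts.values() if p.created.date() == day]
```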

After the experiment ended, participants completed a questionnaire to measure their product interest and brand attitude, as well as to collect their feedback about the moderator, the other participants (in the collective protocols), and the blog platform. Finally, they received a letter with a small gift voucher to thank them for their participation.

Dependent Measures

We define two dependent variables. The first is feedback quantity, which encompasses (1) the volume of posts and (2) the number of distinctive topics addressed (Füller, Jawecki, and Mühlbacher 2006; Moon 2000). To measure volume, we counted the total number of posts by each participant (Nambisan and Baron 2007; Wiertz and De Ruyter 2007). We eliminated non-topical posts, such as administrative or technical questions. To measure the number of distinctive topics addressed, three judges double-coded all contributions by participants in the four protocols to identify the range of topics evoked throughout the experiment. In total, the judges defined 63 topics; their intercoder reliability was 84%. Cohen's (1960) kappa varied between .77 and .86, which indicated excellent agreement (Fleiss 1981). The 63 topics then were categorized into product topics (e.g., taste, color, packaging, ingredients), consumption experience topics (e.g., when and where the product is consumed), and self-disclosure topics (e.g., classification data such as age or profession; insights into personal lives, such as meal-preparation habits, pets, commutes; insights into social lives, such as friends, hobbies, meal-sharing habits). To avoid an artificial elevation of the number of topics, a topic quoted in several posts by the same author was counted once.
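Cohen's kappa corrects raw agreement for chance: kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected from the coders' marginal label frequencies. A minimal sketch, using hypothetical topic labels rather than the study's 63-topic scheme:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's (1960) kappa for two coders assigning one label per post."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # observed agreement
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(freq_a) | set(freq_b)
    # Chance agreement from each coder's marginal label frequencies:
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels for six posts coded by two judges -> kappa ~ .77
print(cohens_kappa(["taste", "color", "taste", "usage", "taste", "cap"],
                   ["taste", "color", "usage", "usage", "taste", "cap"]))
```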

The second dependent variable is feedback quality, or how useful posts were, as well as how much self-disclosure occurred. For this study, we defined usefulness as product-related feedback. A post thus was useful if it addressed at least one of the following types of content: product features (e.g., bottle cap convenience, bottle shape, color of the drink), an insight into product usage (e.g., effect after drinking, description of a typical drinker), a suggestion for improvement (e.g., improvements to the bottle cap), or competitor-related feedback. To avoid the effect of different volumes of posts per protocol, we measured feedback quality as the percentages of useful posts and self-disclosure posts. Looking at the percentage of posts that contain quality feedback, rather than absolute numbers, is also managerially relevant: in terms of the time, and thus money, needed for analysis, it is better to have a lower volume of contributions with a high density of quality posts than to wade through a high number of contributions of which many are not useful, which creates a lot of 'noise' in the data collected.

The skewness and kurtosis statistics for each variable (i.e., number of posts, number of topics, percentage of useful posts, percentage of self-disclosure posts) indicated that the analysis of variance assumption of normality is not met. A Kolmogorov–Smirnov test (significance < .05) confirmed this result. Therefore, we used non-parametric Mann–Whitney U tests to examine our hypotheses.
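This screening-then-testing pipeline can be reproduced with standard tools; the sketch below uses simulated post counts rather than the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated per-participant post counts for the two pooled conditions:
directive = rng.poisson(14, size=30)    # stand-in for CDQ/IDQ
no_question = rng.poisson(6, size=30)   # stand-in for CNQ/INQ

for name, x in [("directive", directive), ("no questioning", no_question)]:
    print(name, "skew=%.2f kurtosis=%.2f" % (stats.skew(x), stats.kurtosis(x)))
    # Kolmogorov-Smirnov test against a normal fitted to the sample:
    print("  KS p =", stats.kstest(x, "norm", args=(x.mean(), x.std())).pvalue)

# Non-parametric group comparison once normality is rejected:
u, p = stats.mannwhitneyu(directive, no_question, alternative="two-sided")
print("Mann-Whitney U=%.1f, p=%.4f" % (u, p))
```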


Results for Consumer Feedback Quantity

As we hypothesized in H1a, participants contributed more posts in the directive questioning protocols than in no-questioning protocols. For directive questioning protocols (CDQ/IDQ), the average number of contributions per participant was 13.82 versus 5.68 contributions for the no-questioning protocols (CNQ/INQ) (mean ranks = 41.21 versus 18.96; z = −4.89, p < .05). Thus H1a receives support: There is a main effect of directive questioning on the volume of posts.

Regarding the number of distinctive topics addressed, results show that protocols with directive questioning outperformed those with no questioning in terms of the number of topics addressed, in support of H1b. For directive questioning protocols (CDQ/IDQ), the average number of topics per participant was 34.03 versus 20.79 for the no-questioning protocols (CNQ/INQ) (mean ranks = 39.15 versus 21.39; z = −3.89, p < .05).

Beyond a main effect of directive questioning on feedback quantity, hypothesis 2 stated that a collective blog with directive questioning would outperform the other protocols in terms of both (a) volume of posts and (b) number of topics addressed. In support of H2a, Fig. 1 shows that CDQ participants provided significantly more contributions (on average 16.38 contributions per participant) than all the other respondents (IDQ = 11.41 contributions, mean ranks = 12.76 versus 21.50; z = −2.60, p < .05; INQ = 6.47 contributions, mean ranks = 9.37 versus 22.22; z = −3.94, p < .05; CNQ = 4.77 contributions, mean ranks = 7.42 versus 21.16; z = −4.33, p < .05). To test the interaction effect, we used the Shoemaker extended median test (Sawilowsky 1990) for the variable volume of posts. The significance of the Pearson chi-square was .263 (> .05). Therefore, there was no interaction effect.

Regarding the number of topics addressed, participants in the CDQ protocol brought up 35.25 topics on average (see Fig. 2), significantly more than the 22.60 topics in the INQ setting (mean ranks = 20.47 versus 11.23; z = −2.833, p < .05) and 18.69 topics in the CNQ group (mean ranks = 19.69 versus 9.23; z = −3.295, p < .05). However, we found no significant difference between the CDQ and the IDQ participants (IDQ = 32.88 topics, mean ranks = 16.26 versus 17.78; z = −.461, p > .05). Thus, H2b receives only partial support. The Shoemaker extended median test again revealed no interaction effect (Pearson chi-square significance .971 > .05).
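We do not reproduce the exact computational steps of the Shoemaker extended median test here; the sketch below instead illustrates the closely related Mood's median test (pool all observations, split at the grand median, and chi-square the above/below counts per protocol), which SciPy implements directly, with simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated per-participant post counts for the four protocols:
cdq, idq = rng.poisson(16, 15), rng.poisson(11, 15)
cnq, inq = rng.poisson(5, 15), rng.poisson(6, 15)

# Mood's median test: chi-square the counts above/below the grand median.
stat, p, grand_median, table = stats.median_test(cdq, idq, cnq, inq)
print("chi2=%.2f, p=%.4f, grand median=%.1f" % (stat, p, grand_median))
```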

Fig. 1. Interaction between role of questioning and participant interaction on feedback quantity (number of posts). [Figure omitted: average number of posts per participant, collective versus individual setting, under directive versus no questioning.]


Results for Consumer Feedback Quality

We hypothesized that blogs that feature no questions and allow participants to express themselves freely would outperform blogs with directive questioning in terms of (a) percentage of useful posts and (b) percentage of self-disclosure posts. In support of H3a, we find that for no-questioning protocols (CNQ/INQ), the percentage of useful posts was 66.25 versus 63.83 for the directive questioning protocols (CDQ/IDQ) (mean ranks = 31.54 versus 26.53; z = −2.22, p < .05). For the percentage of self-disclosure posts, our results also indicate significantly better performance of protocols without questioning. For no-questioning protocols (CNQ/INQ), the percentage of self-disclosure posts was 69.84 versus 29.85 for the directive questioning protocols (CDQ/IDQ) (mean ranks = 20.00 versus 43.96; z = −5.27, p < .05). Thus, H3b is accepted.

Hypothesis 4 stated that individual blogs that feature no questions and allow participants to express themselves freely outperform the other protocols in terms of feedback quality. In support of H4a, Fig. 3 shows that the percentage of useful posts in the INQ protocol (82.25%) is the highest of all protocols (CDQ = 76.93%, mean ranks = 20.07 versus 12.19; z = −2.425, p < .05; IDQ = 61.22%, mean ranks = 22.30 versus 11.38; z = −3.302, p < .05; CNQ = 47.78%, mean ranks = 18.97 versus 9.35; z = −3.119, p < .05). We used Shoemaker's extended median test to determine the interaction effect for the percentage of useful posts variable. The Pearson chi-square significance value was .000 (< .05), which indicates an interaction effect between questioning and participant interaction.

Finally, for the percentage of self-disclosure posts, INQ (80.11%) outperformed all other protocols (CNQ = 57.99%, mean ranks = 17.30 versus 11.27; z = −1.970, p < .05; CDQ = 31.88%, mean ranks = 23.50 versus 8.97; z = −4.467, p < .05; IDQ = 27.95%, mean ranks = 24.37 versus 9.56; z = −4.474, p < .05). Results are presented in Fig. 4. Thus, H4b was supported. The Shoemaker extended median test showed a Pearson chi-square significance of .467 (> .05), so no interaction effect occurred for the percentage of self-disclosure posts.

Fig. 2. Interaction between role of questioning and participant interaction on feedback quantity (number of topics). [Figure omitted: average number of topics per participant, by protocol.]

Fig. 3. Interaction between role of questioning and participant interaction on feedback quality (percentage of useful posts). [Figure omitted: percentage of useful posts, by protocol.]


Fig. 4. Interaction between role of questioning and participant interaction on feedback quality (percentage of self-disclosure posts). [Figure omitted: percentage of self-disclosure posts, by protocol.]

Discussion

The picture that emerges from our findings depicts questioning as key for explaining online protocol performance, in terms of both the quantity and the quality of participant feedback. In contrast with the widespread assumption by market research practitioners that directive questioning is necessary to obtain good results in individual and group interviews, we find that it has both positive and negative effects on participants' online contribution behavior. When participants give feedback on the basis of directive questions, they perform better in terms of quantity than if the protocols lack any questions. However, for consumer feedback quality, the protocols without questioning offer the best results for self-disclosure information; the individual protocol without questioning gives the best results in terms of usefulness. Thus, the decision to use directive questioning versus no questioning in an online context depends on the trade-off between feedback quantity and feedback quality.

Moreover, we highlight an interesting interplay between the role of questioning and the level of participant interaction.


The strong outcomes of the collective, directive questioning protocol can be explained by online group dynamics; the presence of directive questioning supports group process gains such as cognitive stimulation, upward social comparison, and motivation. However, the absence of questioning causes group process losses because cognitive interference, downward social comparison, and free-riding emerge (Pinsonneault et al. 1999).

In the next section, we report on a post-hoc process analysis that confirms that directive questioning contributed to the relative performance of groups versus individuals. The protocols without questions clearly outperform the other protocols in terms of quality. Because the good performance of the individual, no-questioning protocol offers a promising alternative to the industry standard collective, directive questioning protocol, we also offer insights into conditions that determine the value of this protocol. Finally, the post-hoc analyses reveal participants' relative appreciation for the protocols, which suggests an interesting paradox between the protocols that perform best managerially and strategically and the protocol preferred by participants.

Process Analysis

Process Gains from Directive Questioning

Unsurprisingly, with regard to participants' evaluations of questioning, we found significant differences between the directive and no-questioning protocols. In the former, questions were posed in five waves, namely, on the first, fourth, seventh, tenth, and thirteenth day of product testing. In the no-questioning protocols, the moderator welcomed participants to the online feedback protocol and encouraged them to contribute using private and public posts but did not pose any regular questions. When asked if they would have liked more specific questions about the product testing, on a seven-point scale, participants in no-questioning protocols indicated they would, whereas participants in the directive questioning protocols were neutral (INQ = 5.71, CNQ = 5.60; IDQ = 3.23, CDQ = 3.14; Mann–Whitney tests p < .05 for CDQ versus CNQ and INQ, and for IDQ versus CNQ and INQ).

We have argued that for the 15-day online feedback task, directive questions would serve an important role by providing guidance about appropriate task completion and that without such guidance, participants would face uncertainty. In line with this prediction, participants in the individual, no-questioning protocol reported they felt unsure about what they should write (5.50, Mann–Whitney tests p < .05). This finding could explain why this protocol scored relatively low on feedback quantity. Participants in the collective, no-questioning protocol did not agree with the statement "I often felt unsure about what I should write" though (3.40), similar to participants in the directive questioning protocols (IDQ = 3.92, mean ranks = 10.60 versus 13.08; z = −.899, p > .05; CDQ = 3.07, mean ranks = 12.04 versus 13.15; z = −.389, p > .05). Apparently, the group setting contributed to a feeling of competency. In this respect, our post-test survey revealed that participants in the two collective protocols were unanimously positive about the presence and contributions of others.


In particular, our survey replicated previous findings showing that group settings offer both learning and cognitive stimulation advantages (Table 2). Thus, the contributions of the other participants added to or made up for questions posed by the moderator, serving as alternative indicators of what to post.

Table 3
Protocol performance over time.

                           Week 1    Week 2    z (Wilcoxon test)    p
Number of posts
  CDQ                       7.44      8.94          −.79           .44
  CNQ                       3.31      1.83         −1.82           .08
  IDQ                       8.65      3.13         −3.33           .00*
  INQ                       4.07      2.86          −.86           .40
Number of topics
  CDQ                      30.80     25.00         −1.48           .14
  CNQ                      16.56     15.56         −1.05           .29
  IDQ                      37.47     12.87         −3.18           .00*
  INQ                      25.91     18.00         −1.25           .21
% useful posts
  CDQ                      65.87     84.94         −2.98           .00*
  CNQ                      50.90     75.00         −1.44           .15
  IDQ                      48.16     94.00         −4.26           .00*
  INQ                      75.02     97.62         −2.66           .03*
% self-disclosure posts
  CDQ                      41.58     21.62         −2.56           .01*
  CNQ                      68.85     11.11         −2.81           .00*
  IDQ                      33.98     14.33         −2.36           .02*
  INQ                      77.09     83.33          −.99           .40

* Significant (p < .05).

Process Losses of Directive Questioning

Although directive questioning may be useful for guiding participants through the feedback process, it also could reduce participants' willingness or ability to volunteer information that has not been requested explicitly (Stewart and Shamdasani 1990). Specifically, we expected that participants in the individual, directive questioning protocol might take a task orientation and only answer posed questions because they had no opportunity to engage in social interactions with other participants. Accordingly, participants in the IDQ condition indicated that they felt less free to write whatever they wanted, compared with participants in the other protocols (seven-point scale, IDQ = 2.46, INQ = 5.36, CDQ = 6.36, CNQ = 6.40; Mann–Whitney tests p < .05). The protocol scored well in terms of quantity because participants duly replied to all directive questions, but it was not optimal for spontaneity or high-quality contributions.

A challenge for collecting consumer feedback through an online interface is motivating informants to contribute to an audience that remains relatively unknown and without personal contact. Therefore, the moderator worked to assure informants that their contributions had not been lost in cyberspace but instead were received and appreciated. Yet only in the collective, directive questioning protocol did participants strongly disagree with the statement, "I had the idea that no one read my contributions" (seven-point scale, CDQ = 1.64, IDQ = 3.69, CNQ = 3.90, INQ = 4.43; Mann–Whitney tests p < .05). Thus, for online feedback, it is difficult to create a true feeling of interaction, which underscores our sense of the relative "coldness" of computer-mediated communication.

It is noteworthy that participants in the collective, no-questioning protocol indicated that they were not convinced that their contributions had been read. Therefore, the collective setting per se was not enough to induce active exchanges among participants. Rather, questions raised by participants in the collective, no-questioning protocol rarely resulted in participant interactions, unlike the collective, directive questioning protocol, in which participants actively reacted to queries.

Table 2
Learning and cognitive stimulation benefits of collective protocols.

Statements in collective protocols                                          Mean
Contributions of others enhanced my knowledge about the product
and its usage                                                               5.60
Contributions of others were useful to me                                   5.44
Contributions of others were helpful in answering product-related
questions I had                                                             5.21
Contributions of others encouraged me to post                               5.88
Contributions of others made me feel inspired                               5.79

Notes: Scores are measured on seven-point Likert scales, ranging from 1 = "strongly disagree" to 7 = "strongly agree."

Active questioning by the moderator seemed to set the tone on this platform, whereas in the collective, no-questioning protocol, participants read one another's contributions (according to the survey results) but did not react to them, despite repetitive moderator probing. Free-riding and downward social comparison could explain such behavior.

Strong Performance of the Individual, No-Questioning Protocol

Comparing the performance of the protocols in the first versus the second week of the feedback process (see Table 3) revealed that performance levels changed in the second week. The protocols remained on the same level or decreased in the number of posts, number of topics, and percentage of self-disclosure posts. That is, there was a duration impact on performance that may indicate an exhaustion phenomenon. In contrast, the percentage of useful posts increased, implying that an initiation period was necessary to prompt a full contribution process. In the individual, no-questioning protocol though, performance on all measures (volume of posts, number of topics, percentage of self-disclosure posts, and percentage of useful posts) remained stable or even increased from the first to the second week. This surprising finding conflicts with existing literature, which would predict that the lack of cognitive stimulation, learning opportunities, or social benefits, coupled with perceived uncertainty about the feedback task, causes members in this protocol to perform poorly in 15-minute, let alone 15-day, sessions.
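The week-over-week comparisons in Table 3 rely on Wilcoxon tests. A paired signed-rank comparison of each participant's week 1 versus week 2 output can be sketched as follows; the counts are simulated, loosely echoing the drop in IDQ posts, and SciPy reports the signed-rank statistic rather than the z approximation shown in Table 3:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Simulated paired post counts for 15 participants in one protocol:
week1 = rng.poisson(8, size=15)
week2 = rng.poisson(3, size=15)

res = stats.wilcoxon(week1, week2)  # paired, non-parametric
print("Wilcoxon statistic=%.1f, p=%.4f" % (res.statistic, res.pvalue))
```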

Paradox

The post-test survey brought to light an interesting paradox between perceptions of the feedback process and participants' performance in the collective, no-questioning protocol.



Overall, it was the worst performing protocol, yet these participants appreciated the product feedback task significantly more than participants in the collective, directive questioning protocol (mean ranks = 16.40 versus 9.71; z = −2.610, p < .05), and the individual, no-questioning protocol (mean ranks = 17.10 versus 9.21; z = −2.960, p < .05). Their appreciation was similar to that of respondents in the individual, directive questioning protocol (Mann–Whitney test p > .05). Although participants in the collective, no-questioning protocol indicated they would have liked more specific questions about the product testing, they enjoyed the freedom to write whatever they wanted, and they did not feel unsure about what to write. Finally, they thought the contributions of others were helpful and inspiring, despite the lack of direct exchanges. Thus, the participants in the collective, no-questioning protocol took the feedback task in the spirit of so many Web 2.0 users: consuming the revelations and contributions of others, but enjoying the anonymity and lack of social obligation to participate. More research is needed to understand how marketers might increase performance in this protocol, without sacrificing the elements that participants particularly appreciated (i.e., freedom and a collective setting).

Theoretical Implications

This study is the first to systematically examine the combined effects of level of participant interaction (individual versus collective) and the role of questioning (directive versus no questioning) in the context of online blogs for product feedback. We offer new insights into the vast literature pertaining to performance by individuals versus a collective (Diehl and Stroebe 1987) because we show that questioning plays a key role in supporting or suppressing the process gains that have been ascribed to the dynamics of online groups versus individuals (Pinsonneault et al. 1999). Our study builds on Fern's (1982) exceptional effort to challenge and refute some common market research practices, such as the use of directive questioning. Whereas Fern showed that directive questioning had no impact on face-to-face group brainstorming, his study reported that it did improve the performance of individuals in terms of breadth of contributions. Our findings expand these insights by showing that, in blogs, directive questioning in both collective and individual protocols benefitted feedback quantity but hindered its quality. We find that the individual, no-questioning protocol is best for quality; it also outperforms all the other protocols in terms of its consistent effectiveness over time. Furthermore, we show that the industry standard of collective, directive questioning is the optimal protocol in terms of consumer feedback quantity.

This study also offers directions for research. First, we focused on the presence and absence of questioning, but other moderator-related variables also might be relevant. Online research is particularly appealing because the "persona" of the moderator is easy to create and control. Thus, researchers should determine what type of persona generates the best performance in terms of feedback quantity and quality. Does it matter whether the moderator presents as a man or a woman? Does it matter how much personal information the moderator gives?

What is the optimal frequency and tone of interactions? How can the moderator best probe for responses without asking direct questions? Finally, is a moderator even needed? We have shown that quality feedback can be obtained without questioning. Maybe the moderator's welcoming and facilitating roles also are superfluous.

Second, it is relevant to examine how we can have the 'best of two worlds', i.e., combine both high quantity and high quality feedback. In particular, this means that for collective, directive questioning protocols the percentage of quality posts needs to be increased, whereas for individual, no-questioning protocols we should find ways to increase the overall volume of posts contributed and the number of topics addressed. Similarly, research should determine how to increase the effectiveness of the worst performing protocols (i.e., collective, no-questioning and individual, directive questioning protocols). Perhaps the Web site layout or the interface design of the platform exerts significant influences on participant performance. In our study, the simple layout and interface remained similar in all protocols, to eliminate unwanted differences. However, the negative effects of free-riding (CNQ) and the rigidness and coldness of the question-and-answer format (IDQ) could be attenuated by good interface designs. For example, photos and videos could render the interface more personal and social and thus encourage participants to disclose personal information and interact more actively.

Third, such technological solutions warrant further research in another direction as well. We measured consumer feedback quantity and quality as number of posts, number of topics, and percentages of useful and self-disclosure posts. However, telepresence in computer-mediated environments is rich, featuring visual, textual, audio, animated, and even haptic sensations (Schau and Gilly 2005). Therefore, further research should address how to capture these elements as potential measures of social media platform performance.

Fourth, our results emphasize the need for measures based on analyses of consumer-generated content. Emerging research suggests ways firms might use automated text mining for market research (Aggarwal, Vaidyanathan, and Venkatesh 2009; Archak, Ghose, and Ipeirotis 2011; Escales 2007; Sonnier, McAlister, and Rutz 2011).

These techniques usually involve crawling thousands of consumer or review comments and analyzing them with text mining techniques to build a model. To generalize our research results, researchers could capture massive amounts of data from numerous Web users and use text mining techniques. The integration of text mining methods also might improve the measure of content usefulness. For example, a potential measure might integrate a valence measure of participants' posts (positive, negative, or neutral comments), and text mining analyses could support the inclusion of different valence levels (i.e., strongly positive to strongly negative). This approach would provide a more precise measure of the usefulness of product feature evaluations, insights into product usage, and suggestions for improvements.
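To make the valence idea concrete, the toy scorer below maps a post to one of five valence levels; the word lists are illustrative stand-ins, not a validated sentiment lexicon, and a production system would rely on trained text mining models instead:

```python
import string

# Illustrative word lists only; not a validated sentiment lexicon.
POSITIVE = {"great", "love", "refreshing", "tasty", "convenient"}
NEGATIVE = {"bland", "hate", "sticky", "expensive", "flat"}

def valence(post: str) -> str:
    """Map a post to one of five valence levels based on the difference
    between positive and negative word counts."""
    words = [w.strip(string.punctuation) for w in post.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score >= 2:
        return "strongly positive"
    if score == 1:
        return "positive"
    if score == 0:
        return "neutral"
    if score == -1:
        return "negative"
    return "strongly negative"

print(valence("I love the taste; really refreshing and convenient."))  # strongly positive
print(valence("The drink was bland and the cap was sticky."))          # strongly negative
```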

Fifth, more sophisticated measures are needed to quantify data quality in blog feedback. Textual data constitute a broad research area, offering rich descriptions, detailed emotions, self-referencing in personal narratives and storytelling, and cognitive elaboration about product benefits and product usage.


Advertising and social psychology researchers also are exploring new frontiers in the analysis of textual evaluation (Escales 2007).

Sixth, the participants in our study accessed blogs through their computers. However, according to a recent Gartner Group (2009) study, by 2013, mobile phones will overtake personal computers as the primary Web access device. Market research via mobile phones suggests new environments to define, understand, and analyze. If consumers can give their feedback on the go, their freedom of expression may become increasingly important, which would imply a more marginal role for directive questioning. Yet directive questioning also could prove indispensable, if the possibility of continuous feedback makes the task too complex or daunting. Finally, the limited screen size and the tendency to use short messaging language could jeopardize feedback quality. It would be interesting to investigate whether a longer data collection period could counterbalance this disadvantage.

Managerial Implications

Although our data reflect a market research study about consumer feedback to an existing product, we contend that our findings may generalize to other online marketer–consumer interactions. Nambisan and Baron (2007, p. 43) describe how "virtual customer environments" are used for product design, product testing, product support, and customer relationship management. Our study should help marketers make more informed decisions about the design of such environments. First, our findings confirm the advantages of using blogs as online consumer feedback platforms: blogs are simple to implement, easy for consumers to use, and inexpensive. Second, rather than choosing an intuitively attractive collective setting, combined with the industry standard of directive questioning, marketers should select an interaction design that leads to their specific desired outcome, in terms of feedback quantity versus quality. Those who hope to capture deep, qualitative insights for concept testing, product repositioning, or competitive analyses should rely on individual, no-questioning formats. Collective, directive questioning formats instead should be preferred for encouraging many ideas and comments about a product and its characteristics. However, neither individual, directive questioning nor collective, no questioning is optimal if the manager strives for high quantity or quality of contributions. Third, the main effects of questioning are greater than those of participant interaction. Contrary to current industry practice, no questioning is a viable option for the design of virtual customer environments.

Platform design choices also depend on time issues. If there is no time pressure to obtain results, managers should opt for an individual, no-questioning platform and a longer data collection period that can capture both high quantity and high quality participant contributions. Some market research companies combine collective and individual formats into one study (Bô 2007); our findings indicate that this option is likely a good strategy, especially if the collective part is actively guided by directive questions, and the individual part contains no questioning.

This notion also raises questions about the optimal order: Should one start with the collective or the individual format, or should both be run in parallel? Beginning with a collective, directive questioning format and ending with an individual, no-questioning format probably generates quantity in terms of product topics, followed by quality in terms of information about experiential usage and self-disclosure. In contrast, starting with an individual, no-questioning format followed by a collective, directive questioning format might offer a more progressive way to collect feedback. The individual, no-questioning format allows participants to address those aspects that they find most relevant first, then lets them express themselves without the normative and social influence of other participants. Once everyone has given his or her individual opinion, the forum can be opened for interaction and discussion among participants, under the active guidance of a moderator. These ideas merit further research.

Another managerial recommendation pertains to the platform organization over time. According to distributed cognition theory (Hutchins 1995), cognitive phenomena generally can best be understood as distributed processes. The operation of cognitive systems in a social group may be distributed over time, with precise tasks defined for each process step. Applying this theory to blogs or other Web 2.0 social media platforms used to solicit consumer feedback about product design, testing, and support, we recommend clearly separating the feedback process into phases, to counteract the decline in motivation over time that we noted in our experiment. Each step would be organized around participants' tasks, rewards, levels of interaction with other participants, and levels of questioning, to maximize the quantity and quality of feedback throughout the entire data collection period. The optimal duration of social media platforms for market research remains an open avenue for research.
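To illustrate, the sketch below encodes such a phased design as an explicit configuration. It is a hypothetical example only: the Phase structure, its field names, and the three example phases are our own illustrative assumptions, not a validated protocol.

```python
from dataclasses import dataclass

# Hypothetical sketch of a phased feedback protocol: each phase fixes
# the participants' task, reward, interaction level, and questioning
# level, as the distributed-cognition argument above recommends.

@dataclass
class Phase:
    name: str
    days: int
    task: str          # what participants are asked to do
    reward: str        # incentive offered in this phase
    interaction: str   # "individual" or "collective"
    questioning: str   # "none" or "directive"

protocol = [
    Phase("exploration", 5, "free-form diary posts", "entry gift",
          interaction="individual", questioning="none"),
    Phase("discussion", 5, "react to other participants", "peer ratings",
          interaction="collective", questioning="directive"),
    Phase("wrap-up", 4, "summarize overall evaluation", "prize draw",
          interaction="individual", questioning="directive"),
]

for phase in protocol:
    print(f"{phase.name}: {phase.days} days, {phase.interaction}, "
          f"questioning={phase.questioning}")
```

Making each phase an explicit object forces the researcher to decide, for every step, the interaction and questioning levels whose effects our experiment documented.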

More generally, the performance of our protocols must be seen in a Web 2.0 context, characterized by the contribution economy (Stiegler 2009) and Web users' exhibitionism. In light of this prominent trend, we suggest some ideas that could enhance the usefulness of blogs and other social media as online customer environments. First, the Web 2.0 spirit of exhibitionism supports soliciting more information about participants' profiles. People are what they post (Schau and Gilly 2005), so we suggest including a profile page, with a design similar to an "about" page in blogs or social networks. Participants can offer information about their activities, interests, and preferences, as well as select objects, photos, and avatars to identify themselves. Marketers could then collect additional information about potential customers that they can use to contextualize consumer input.

Second, to improve blogs' effectiveness for market research and online customer relationship management, participants should be motivated to give feedback with the assurance that their contributions have been read and appreciated. In our experiment, we found that this impression is difficult to achieve, even with a dedicated moderator. One solution would be technical: the system could automatically show contributors whether their contributions have been read and by whom. Another solution would be more social. Because participant contributions in online communities are motivated by social exchange and social capital (Wasko and Faraj 2005), we suggest implementing a rating process, such that participants in collective platforms could rate one another's contributions in terms of, for example, originality, usefulness, or creativity. The moderator also could give ratings. These concrete signs of (public) recognition could increase feedback quantity and quality, though more research is needed to understand their full implications for participants' willingness to express themselves freely and honestly, as well as for the group dynamics of competition.
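As a sketch of how both recognition mechanisms might be represented on the platform side, the following hypothetical Contribution structure records read receipts and peer ratings on named criteria; the structure, its methods, and the criteria are illustrative assumptions, not a description of any existing system.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the two recognition mechanisms suggested above:
# automatic read receipts and peer ratings on named criteria.

@dataclass
class Contribution:
    author: str
    text: str
    read_by: set = field(default_factory=set)    # read receipts
    ratings: dict = field(default_factory=dict)  # criterion -> list of scores

    def mark_read(self, reader: str) -> None:
        """Record that a participant other than the author read this post."""
        if reader != self.author:
            self.read_by.add(reader)

    def rate(self, criterion: str, score: int) -> None:
        """Append a peer or moderator rating on a given criterion."""
        self.ratings.setdefault(criterion, []).append(score)

    def mean_rating(self, criterion: str) -> float:
        scores = self.ratings.get(criterion, [])
        return sum(scores) / len(scores) if scores else 0.0

post = Contribution("anna", "The pump leaks when the bottle is half empty.")
post.mark_read("moderator")
post.rate("usefulness", 5)
post.rate("usefulness", 4)
print(len(post.read_by), "readers; usefulness =", post.mean_rating("usefulness"))
```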

In conclusion, our paper highlights multiple possibilities for improving online customer environments, as well as new ways to maximize the quantity and quality of consumer feedback. A promising field of research, with direct consequences for product development, innovation, and better knowledge of markets and consumers in general, thus lies ahead.

References

Aggarwal, Praveen, Rajiv Vaidyanathan, and Alladi Venkatesh (2009), "Using Lexical Semantic Analysis to Derive Online Brand Positions: An Application to Retail Marketing Research," Journal of Retailing, 85, 2, 145–58.

Albanese, Robert and David D. Van Fleet (1985), "Rational Behavior in Groups: The Free Riding Tendency," Academy of Management Review, 10, 2, 244–55.

Archak, Nikolay, Anindya Ghose, and Panagiotis G. Ipeirotis (2011), "Deriving the Pricing Power of Product Features by Mining Consumer Reviews," Management Science, 57, 8, 1485–509.

Barber, Tamara, Reineke Reitsma, and Erica Sorensen (2010), How Can Market Researchers Get Social? Cambridge, MA: Forrester Research, Inc.

———, ———, Zach Hofer-Shall, Elise Godfrey, and Anjali Lai (2011), How to Incorporate Social Media in Market Insights. Cambridge, MA: Forrester Research, Inc.

Bô, Daniel (2007), Le Forum Quali Online; La Nouvelle Frontière Online. Paris: QualiQuanti.

Camacho, L. Mabel and Paul B. Paulus (1995), "The Role of Social Anxiousness in Group Brainstorming," Journal of Personality and Social Psychology, 68, 6, 1071–80.

Cohen, Jacob (1960), "A Coefficient of Agreement for Nominal Scales," Educational and Psychological Measurement, 20, 37–46.

Crabtree, Benjamin F., M. Kim Yanoski, William L. Miller, and Patrick J. O'Connor (1993), "Selecting Individual or Group Interviews," in Successful Focus Groups: Advancing the State of the Art, David L. Morgan, editor. Newbury Park, CA: Sage Publications.

Deci, Edward L. and Richard M. Ryan (2002), Handbook of Self-Determination Research. Rochester, NY: University of Rochester Press.

Dennis, Alan R. and Joseph S. Valacich (1993), "Computer Brainstorms: More Heads Are Better than One," The Journal of Applied Psychology, 78, 4, 531–7.

——— and ——— (1999), "Research Note. Electronic Brainstorming: Illusions and Patterns of Productivity," Information Systems Research, 10, 4, 375–7.

Diehl, Michael and Wolfgang Stroebe (1987), "Productivity Loss in Brainstorming Groups: Toward the Solution of a Riddle," Journal of Personality and Social Psychology, 53, 3, 497–509.

——— and ——— (1991), "Productivity Loss in Idea-generating Groups: Tracking Down the Blocking Effect," Journal of Personality and Social Psychology, 61, 3, 392–403.

Dugosh, Karen Leggett, Paul B. Paulus, Evelyn J. Roland, and Huei-Chang Yang (2000), "Cognitive Stimulation in Brainstorming," Journal of Personality and Social Psychology, 79, 5, 722–35.

Edmunds, Holly (1999), The Focus Group Research Handbook. Illinois: NTC.

Escalas, Jennifer E. (2007), "Self-Referencing and Persuasion: Narrative Transportation Versus Analytical Elaboration," Journal of Consumer Research, 33, 421–9.

Fern, Edward F. (1982), "The Use of Focus Groups for Idea Generation: The Effects of Group Size, Acquaintanceship, and Moderator on Response Quantity and Quality," Journal of Marketing Research, 19, February, 1–13.

——— (1983), "Focus Groups: A Review of Some Contradictory Evidence, Implications, and Suggestions for Future Research," in Advances in Consumer Research, Volume 10, Richard P. Bagozzi and Alice M. Tybout, editors. Ann Arbor, MI: Association for Consumer Research, 121–6.

——— (2001), Advanced Focus Group Research. Thousand Oaks, CA: Sage Publications.

Fleiss, Joseph L. (1981), Statistical Methods for Rates and Proportions. New York: John Wiley.

Frey, James H. and Andrea Fontana (1991), "The Group Interview in Social Research," The Social Science Journal, 28, 2, 175–87.

Füller, Johann, Gregor Jawecki, and Hans Mühlbacher (2006), "Innovation Creation by Online Basketball Communities," Journal of Business Research, 60, 60–71.

Gartner Group (2009), Gartner's Top Predictions for IT Organizations and Users, 2010 and Beyond: A New Balance. Gartner Group Reports.

Harkins, Stephen G. and Richard E. Petty (1982), "Effects of Task Difficulty and Task Uniqueness on Social Loafing," Journal of Personality and Social Psychology, 43, 6, 1214–29.

Hill, Gayle W. (1982), "Group versus Individual Performance: Are N + 1 Heads Better than One?," Psychological Bulletin, 91, 3, 517–39.

Hoffman, Donna L. and Thomas P. Novak (1996), "Marketing in Hypermedia Computer-Mediated Environments: Conceptual Foundations," Journal of Marketing, 60, July, 50–68.

Hutchins, Edwin (1995), Cognition in the Wild. Cambridge, MA: MIT Press.

Kerr, Norbert L. and Steven E. Bruun (1983), "Dispensability of Member Effort and Group Motivation Losses: Free-Rider Effects," Journal of Personality and Social Psychology, 44, 1, 78–94.

Latané, Bibb, Kipling Williams, and Stephen Harkins (1979), "Many Hands Make Light the Work: The Causes and Consequences of Social Loafing," Journal of Personality and Social Psychology, 37, 6, 822–32.

Moon, Youngme (2000), "Intimate Exchanges: Using Computers to Elicit Self-Disclosure from Consumers," Journal of Consumer Research, 26, March, 323–39.

Morgan, David L., editor (1993), Successful Focus Groups: Advancing the State of the Art. Newbury Park, CA: Sage Publications.

Morgan, David L. (1996), "Focus Groups," Annual Review of Sociology, 22, 129–52.

Nambisan, Satish and Robert A. Baron (2007), "Interactions in Virtual Customer Environments: Implications for Product Support and Customer Relationship Management," Journal of Interactive Marketing, 21, 2, 42–62.

Paulus, Paul B., Timothy S. Larey, and Anita H. Ortega (1995), "Performance and Perceptions of Brainstormers in an Organizational Setting," Basic and Applied Social Psychology, 17, 1/2, 249–65.

———, Timothy S. Larey, Vicky L. Putman, Karen L. Leggett, and Evelyn J. Roland (1996), "Social Influence Process in Computer Brainstorming," Basic and Applied Social Psychology, 18, 1, 3–14.

——— and Huei-Chang Yang (2000), "Idea Generation in Groups: A Basis for Creativity in Organizations," Organizational Behavior and Human Decision Processes, 82, 1, 76–87.

Pinsonneault, Alain, Henri Barki, R. Brent Gallupe, and Norberto Hoppen (1999), "Electronic Brainstorming: The Illusion of Productivity," Information Systems Research, 10, 2, 110–33.

Prahalad, C.K. and Venkat Ramaswamy (2004), "Co-creation Experiences: The Next Practice in Value Creation," Journal of Interactive Marketing, 18, 3, 5–14.

Reitsma, Reineke, Jacqueline Anderson, Tamara Barber, and Roxana Strohmenger (2010), Predictions 2011: What Will Happen in Market Research. Cambridge, MA: Forrester Research, Inc.

Sawhney, Mohanbir, Gianmario Verona, and Emanuela Prandelli (2005), "Collaborating to Create: The Internet as a Platform for Customer Engagement in Product Innovation," Journal of Interactive Marketing, 19, 4, 4–17.


Sawilowsky, Shlomo S. (1990), "Nonparametric Tests of Interaction in Experimental Design," Review of Educational Research, 60, 1, 91–126.

Schau, Hope J. and Mary C. Gilly (2005), "We Are What We Post? Self-Presentation in Personal Web Space," Journal of Consumer Research, 30, December, 383–404.

Schmidt, Jan (2007), "Blogging Practices: An Analytical Framework," Journal of Computer-Mediated Communication, 12, 1409–27.

Sonnier, Garrett P., Leigh McAlister, and Oliver J. Rutz (2011), "A Dynamic Model of the Effect of Online Communications on Firm Sales," Marketing Science, 30, 4, 702–16.

Stevens, Berni (2007), "Best Practices of Online Qualitative Research," (accessed March 2009), [available at] http://www.quirks.com/articles/2007.

Stewart, David W. and Prem N. Shamdasani (1990), Focus Groups: Theory and Practice. Newbury Park, CA: Sage Publications.

Stiegler, Bernard (2009), For a New Critique of Political Economy. Paris: Editions Lavoisier.

Sutton, Robert I. and Andrew Hargadon (1996), "Brainstorming Groups in Context: Effectiveness in a Product Design Firm," Administrative Science Quarterly, 41, 4, 685–718.

Walther, Joseph B. (1995), "Relational Aspects of Computer-mediated Communication: Experimental Observations Over Time," Organization Science, 6, 2, 186–203.

Wasko, Molly and Samer Faraj (2005), "Why Should I Share? Examining Social Capital and Knowledge Contribution in Electronic Networks of Practice," MIS Quarterly, 29, 1, 35–57.

Wiertz, Caroline and Ko de Ruyter (2007), "Beyond the Call of Duty: Why Customers Contribute to Firm-hosted Commercial Online Communities," Organization Studies, 28, 3, 347–76.