A Collective Intelligence Tool for Evidence Based Policy & Practice
Anna De Liddo & Simon Buckingham Shum Knowledge Media Institute
The Open University, Milton Keynes, MK7 6AA, UK
Jurix 2012 Workshop on
Argumentation Technology for Policy Deliberations
http://evidence-hub.net/
[Overview diagram: Human-Centred Computing for CCI (Argumentation-based Collective Intelligence). Strands: Computational Services/Analytics; UX design; Human Dynamics of Engagement. Technologies: NLP/XIP discourse analysis; structured searches/agents; Cohere/free semantics; Evidence Hub simplified model; widget/threaded interface. Application domains: Policy (OLnet), Research (RCYP Hub), Practice (CoPHV); plus future work.]
Collective intelligence (CI) is an umbrella term for the augmented capabilities that the existence of a community can enable. It refers to the intelligence that emerges from the coexistence of multiple people in the same environment, whether a real-world environment or a virtual one. We look at the Web, and at what intelligent hints, supports or behaviours can be tracked, and can emerge, from the coexistence of a collective of people online.
Collective Intelligence: How do we Crowdsource Policy Deliberation?
Successful CI tools have been developed, especially in the IT business and e-commerce sector, that use these user 'traces' to build user profiles and suggest actions based on those profiles. These CI tools support quite simple user objectives, such as deciding what book to buy (Amazon), finding the picture or video that matches one's needs (Flickr and YouTube), or deciding what music to listen to (Last.fm). Collecting fragmented user traces seems to work for gathering and exploiting CI in the business and commerce sector.
Collective Intelligence
On the other hand, if we look at the social and political sector, or at higher-level organizational and business strategy issues, we need to support more complex user goals, such as: understanding policy actions; learning environmental responses in order to adapt organizational actions; or understanding the economic crisis and its possible implications for the community.
Contested Collective Intelligence
CI tools that aim to support users in more complex knowledge work need to be conceived and designed so that users' collective intelligence can be captured and shared in a much richer and more explicit way.
In the design space of CI systems:
• where there is insufficient data to confidently compute an answer,
• when there is ambiguity about the trustworthiness of environmental signals,
• and uncertainty about the impact of actions,
…then more powerful scaffolding for thinking and discourse is required, in order to support the emergence of CI around complex socio-political dilemmas.
Contested Collective Intelligence
With Cohere users can make their thinking visible and sharable with online communities by:
✓ collaboratively annotating the Web,
✓ leveraging lists of annotations into meaningful knowledge maps, and
✓ engaging in structured online discussions.
First Prototype Tool for Contested Collective Intelligence (CCI)
Cohere Conceptual Model
Cohere builds on a conceptual model consisting of four main user activities through which users can make their thinking visible and contribute to the development of Collective Intelligence around specific issues:
Collaboratively Annotate Web Resources
Make Semantic Connections
Watch the demo video at: http://www.youtube.com/watch?v=Fcn2ab9PYo4 Watch the Open Deliberation model video at: http://www.youtube.com/watch?v=vthygbKA2Mg
Explore, Filter and Make Sense
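To make the conceptual model concrete, here is a minimal sketch of the activities as data structures. The class and field names (Idea, Connection, link_type) are illustrative assumptions, not Cohere's actual schema or API.

```python
# A minimal sketch of Cohere's conceptual model, using hypothetical class
# names (Idea, Connection) -- not Cohere's actual API or schema.
from dataclasses import dataclass, field

@dataclass
class Idea:
    """An annotation on a web resource, promoted to a node in a knowledge map."""
    label: str
    url: str | None = None        # the annotated web resource, if any
    tags: list[str] = field(default_factory=list)

@dataclass
class Connection:
    """A semantic link between two Ideas, e.g. 'supports', 'challenges'."""
    source: Idea
    link_type: str
    target: Idea

# 1. collaboratively annotate web resources
claim = Idea("OER improves informal learning", url="http://example.org/report")
evidence = Idea("Survey of 500 self-directed learners", url="http://example.org/survey")

# 2. make semantic connections
conn = Connection(evidence, "supports", claim)

# 3. explore, filter and make sense
network = [conn]
supporting = [c for c in network if c.link_type == "supports"]
```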
Despite the success of the web annotation paradigm, people seem to struggle to make semantic connections; moreover, too many semantic types often produce redundancy and duplication.
This led to the second design iteration:
A new simplified data model and a new interface for connection making….
The issue: lack of evidence of OER effectiveness
Experimenting CCI in a real case of Educational Policy: The OLnet project
olnet.org
OLnet project: The wider research question
RQ: How can we help researchers and practitioners in the OER field to contribute evidence of OER effectiveness and to investigate that evidence collaboratively?
Our approach to CI focuses on capturing the hidden knowledge of the OER movement and leveraging it so that it can be:
✓ debated (building and confronting arguments),
✓ evaluated (assessing evidence), and
✓ put to use (distilling claims that inform OER policy and practice)
Approach: Contested Collective Intelligence
What & Why: The Evidence Hub provides
✓ the OER community with a space to harvest the evidence of OER effectiveness
✓ policy makers with a community-generated knowledge base for making evidence-based decisions on educational policy.
Social Ecosystem
• People (Contributors)
• Projects
• Organizations
Discourse Ecosystem
• Key Challenges
• Issues
• Solutions
• Claims
• Evidence
• Resources
The Evidence Hub: Mapping the social and discourse ecosystem
Themes
Social Ecosystems (Org and Projects)
The Discourse Ecosystem Elements: A Simplified Data Model
Evidence Types: from simple data, anecdotes and stories to literature analysis and experimental results
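As a rough illustration of how this simplified data model might be represented, here is a sketch in Python. The class names, fields, and any EvidenceType values beyond those on the slide are assumptions, not the Evidence Hub's actual schema.

```python
# A minimal sketch of the Evidence Hub's simplified discourse data model
# (Issues, Solutions, Claims, Evidence). Names are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class EvidenceType(Enum):
    """From simple data, anecdotes and stories to literature analysis
    and experimental results (as on the slide)."""
    DATA = "data"
    ANECDOTE = "anecdote"
    STORY = "story"
    LITERATURE_ANALYSIS = "literature analysis"
    EXPERIMENTAL_RESULT = "experimental result"

@dataclass
class Node:
    title: str
    node_type: str                              # "challenge" | "issue" | "solution" | "claim"
    theme: str | None = None                    # OER theme (Theming action)
    geo: tuple[float, float] | None = None      # geo-location (Locating action)

@dataclass
class Evidence(Node):
    evidence_type: EvidenceType = EvidenceType.DATA
    polarity: str = "supports"                  # supports or challenges a claim

# Discourse ecosystem: an issue, a claim, and a piece of evidence
issue = Node("Lack of evidence of OER effectiveness", "issue")
claim = Node("OER widens participation", "claim", theme="Access")
ev = Evidence("Pilot study results", "evidence",
              evidence_type=EvidenceType.EXPERIMENTAL_RESULT)
```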
CI: Community Actions
• Theming: adding OER Themes
• Connecting: adding to the Widget
• Locating: adding geo-location info
• Following: expressing interest
• Promoting: ordering/prioritizing lists, voting on connections
A simplified UI for connection making….
Widget Interface for Connection Making
The Evidence Hub: Some Facts and Figures. The Evidence Hub alpha version launched in April 2011 with 50 users from 35 different countries, including key people in the OER field.
Some Facts and Figures
Opened to the public at OpenEd11 in Utah.
Some facts and figures: Engagement
- 108 contributors
- 3,054 visits from 1,053 unique visitors in 57 different countries
Some facts and figures on Content
- 304 OER projects and organizations
- 129 OER research claims
- 79 OER issues
- 89 proposed solutions
- 323 pieces of evidence
- 553 resources
Reflection on initial User Testing & Interviews
Feedback from users shows that the EH is perceived as:
“relevant”, “organized”, “desirable” and “engaging”
but sometimes "sophisticated" and "complex".
Suggested improvements: improving the user experience by creating summary views, facilitating and simplifying content seeding, and better displays and filters on the content.
Feedback from Lab-Based User Testing
Fragmented approach to argument construction (widget interface):
• easy to contribute to, but
• increases miscategorization: interpretation biases on how content should be labeled under specific argumentation categories;
• increases duplication of content;
• decreases argumentation coherence.
This led to the third design iteration…
Third design Iteration: The CoPHV Evidence Hub
Research by Children and Young People Evidence Hub: A mixed threaded/widget interface
Collective Intelligence
Development Trajectories: Facilitating content seeding
1) Web annotation to support seeding: an Evidence Hub bookmarklet allows people to capture evidence by annotating free web resources and OERs.
Allows users to highlight and annotate Web resources through an Evidence Hub bookmarklet
2) Combining Human and Machine Annotation: The Hewlett Grant Reports Project
[Figure: annotation pipeline from report and template to the XIP-annotated report and results]
De Liddo, A., Sándor, Á. and Buckingham Shum, S. (2012). Contested Collective Intelligence: Rationale, Technologies, and a Human-Machine Annotation Study. Computer Supported Cooperative Work (CSCW) Journal, 21(4), pp. 417-448.
Discourse analysis with the Xerox Incremental Parser
Detection of salient sentences based on rhetorical markers:
BACKGROUND KNOWLEDGE: "Recent studies indicate…", "…the previously proposed…", "…is universally accepted…"
NOVELTY: "…new insights provide direct evidence…", "…we suggest a new … approach…", "…results define a novel role…"
OPEN QUESTION: "…little is known…", "…role … has been elusive…", "Current data is insufficient…"
GENERALIZING: "…emerging as a promising approach…", "Our understanding … has grown exponentially…", "…growing recognition of the importance…"
CONTRASTING IDEAS: "…unorthodox view resolves … paradoxes…", "In contrast with previous hypotheses…", "…inconsistent with past findings…"
SIGNIFICANCE: "studies … have provided important advances", "Knowledge … is crucial for … understanding", "valuable information … from studies"
SURPRISE: "We have recently observed … surprisingly", "We have identified … unusual", "The recent discovery … suggests intriguing roles"
SUMMARIZING: "The goal of this study…", "Here, we show…", "Altogether, our results … indicate…"
De Liddo, A., Sándor, Á. and Buckingham Shum, S. (2012). Contested Collective Intelligence: Rationale, Technologies, and a Human-Machine Annotation Study. Computer Supported Cooperative Work (CSCW) Journal, 21(4), pp. 417-448.
Example XIP output — PROBLEM_CONTRAST_: "First, we discovered that there is no empirically based understanding of the challenges of using OER in K-12 settings."
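XIP itself is a proprietary deep parser; purely as an illustration of the idea of marker-based detection of salient sentences, here is a crude keyword approximation. The patterns below are invented for the sketch, not XIP's actual rules.

```python
# Crude keyword approximation of rhetorical-marker detection (NOT XIP).
# Patterns are invented illustrations of the categories on the slide.
import re

MARKERS = {
    "PROBLEM":  re.compile(r"(little is known|no empirically based|insufficient)", re.I),
    "CONTRAST": re.compile(r"(in contrast|inconsistent with|unorthodox view)", re.I),
    "NOVELTY":  re.compile(r"(new insights|novel role|we suggest a new)", re.I),
}

def detect_salient(sentence: str) -> list[str]:
    """Return the rhetorical labels whose markers the sentence matches."""
    return [label for label, pat in MARKERS.items() if pat.search(sentence)]

s = ("First, we discovered that there is no empirically based understanding "
     "of the challenges of using OER in K-12 settings.")
print(detect_salient(s))  # ['PROBLEM'] in this crude sketch
                          # (XIP itself labels it PROBLEM_CONTRAST_)
```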
XIP annotations to Cohere: each XIP-annotated sentence becomes an Annotation node connected to its Report node.
• Named entities extracted by XIP convert into tags
• CONTRAST converts into the semantic connection label "describes contrasting ideas in"
• PROBLEM converts into the Annotation node icon: "Issue" = "Light Bulb"
De Liddo, A., Sándor, Á. and Buckingham Shum, S. (2012). Contested Collective Intelligence: Rationale, Technologies, and a Human-Machine Annotation Study. Computer Supported Cooperative Work (CSCW) Journal, 21(4), pp. 417-448.
Human annotation and machine annotation
Report 1: ~19 vs. 22 sentences annotated; 11 sentences coincide with the human annotation, plus 2 that fall within consecutive sentences of a human annotation.
Report 2: 71 vs. 59 sentences annotated; 42 sentences coincide with the human annotation.
De Liddo, A., Sándor, Á. and Buckingham Shum, S. (2012). Contested Collective Intelligence: Rationale, Technologies, and a Human-Machine Annotation Study. Computer Supported Cooperative Work (CSCW) Journal, 21(4), pp. 417-448.
3) Collaborative PDF annotation
Future development could extend the Evidence Hub with PDF annotation, so that users can share evidence of policy arguments and impact while working directly with PDFs.
A high percentage of policy reports and documents are in PDF format; we have a concept demo of direct PDF annotation shared back to Cohere.
Steve Pettifer, Utopia: http://getutopia.com/
Collective Intelligence Development Trajectories:
4) Better visualization and filtering of content
5) Discourse Analytics
When social and discourse elements become too many in number and complexity, how can we make sense of them?
Toward CI visualization and analysis: adding more formal logics to evaluate arguments, and developing discourse analytics to create summaries, identify gaps, localize interests, and focus contributions.
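As a sketch of what such discourse analytics could look like, the following uses networkx to find "gaps": issues nobody has addressed and claims with no supporting evidence. The node kinds and edge labels are assumptions loosely based on the Evidence Hub model, not its actual implementation.

```python
# Toy discourse analytics over an Evidence-Hub-style network:
# report issues with no "addresses" edge and claims with no "supports" edge.
import networkx as nx

g = nx.DiGraph()
g.add_node("lack of OER evidence", kind="issue")
g.add_node("evidence hub", kind="solution")
g.add_node("OER widens participation", kind="claim")
g.add_node("OER cuts costs", kind="claim")
g.add_node("pilot study", kind="evidence")
g.add_edge("evidence hub", "lack of OER evidence", label="addresses")
g.add_edge("pilot study", "OER widens participation", label="supports")

def gaps(graph: nx.DiGraph) -> dict[str, list[str]]:
    """Issues nobody has addressed, and claims with no supporting evidence."""
    open_issues = [n for n, d in graph.nodes(data=True)
                   if d["kind"] == "issue"
                   and not any(e.get("label") == "addresses"
                               for _, _, e in graph.in_edges(n, data=True))]
    unevidenced = [n for n, d in graph.nodes(data=True)
                   if d["kind"] == "claim"
                   and not any(e.get("label") == "supports"
                               for _, _, e in graph.in_edges(n, data=True))]
    return {"open_issues": open_issues, "unevidenced_claims": unevidenced}

print(gaps(g))  # {'open_issues': [], 'unevidenced_claims': ['OER cuts costs']}
```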
Discourse Network Visualization
Social Network Visualization
Theoretical questions for future work
• How to evaluate arguments? Automatic mechanisms (based on argument computation) vs. community-led mechanisms (such as voting and reputation systems; see the sketch after this list).
• How to make optimal use of both human and machine annotation and argumentation skills?
  – How to exploit machine consistency while reducing information overload and noise?
  – How to exploit the unique human capacities to abstract, filter for relevance, etc.?
• How to cope with visual complexity (new search interfaces, focused and structured network searches, collective filtering, identifying argument structures)?
• How do we crowdsource policy deliberation? What is the right interface? What is the architecture of participation?
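To make the community-led option in the first question concrete, here is one toy scheme: score a claim from the votes its supporting and challenging connections receive. This is a hypothetical mechanism for illustration, not something the Evidence Hub implements.

```python
# Toy community-led argument evaluation: net vote score for a claim.
# A hypothetical scheme, not an Evidence Hub mechanism.

def claim_score(connections: list[dict]) -> float:
    """Net community confidence in a claim: each connection carries the
    community votes it received; challenges count negatively."""
    score = 0.0
    for c in connections:
        weight = 1.0 if c["type"] == "supports" else -1.0
        score += weight * c["votes"]
    return score

connections = [
    {"type": "supports",   "votes": 12},   # e.g. experimental results
    {"type": "supports",   "votes": 3},    # e.g. an anecdote
    {"type": "challenges", "votes": 5},
]
print(claim_score(connections))  # 10.0
```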
References
• De Liddo, A., Sándor, Á. and Buckingham Shum, S. (2012). Contested Collective Intelligence: Rationale, Technologies, and a Human-Machine Annotation Study. Computer Supported Cooperative Work (CSCW) Journal, 21(4), pp. 417-448.
• Buckingham Shum, Simon (2008). Cohere: Towards Web 2.0 Argumentation. In: Proc. COMMA'08: 2nd International Conference on Computational Models of Argument, 28-30 May 2008, Toulouse, France. Available at: http://oro.open.ac.uk/10421/
• De Liddo, Anna and Buckingham Shum, Simon (2010). Cohere: A prototype for contested collective intelligence. In: ACM Computer Supported Cooperative Work (CSCW 2010) - Workshop: Collective Intelligence in Organizations - Toward a Research Agenda, February 6-10, 2010, Savannah, Georgia, USA. Available at: http://oro.open.ac.uk/19554/
• Buckingham Shum, Simon and De Liddo, Anna (2010). Collective intelligence for OER sustainability. In: OpenED2010: Seventh Annual Open Education Conference, 2-4 Nov 2010, Barcelona, Spain. Available at: http://oro.open.ac.uk/23352/
• De Liddo, Anna (2010). From open content to open thinking. In: World Conference on Educational Multimedia, Hypermedia and Telecommunications (Ed-Media 2010), 29 Jun 2010, Toronto, Canada. Available at: http://oro.open.ac.uk/22283/
• De Liddo, Anna and Alevizou, Panagiota (2010). A method and tool to support the analysis and enhance the understanding of peer-to-peer learning experiences. In: OpenED2010: Seventh Annual Open Education Conference, 2-4 Nov 2010, Barcelona, Spain. Available at: http://oro.open.ac.uk/23392/
• Buckingham Shum, Simon (2007). Hypermedia Discourse: Contesting networks of ideas and arguments. In: Priss, U.; Polovina, S. and Hill, R. (eds.) Conceptual Structures: Knowledge Architectures for Smart Applications. Berlin: Springer, pp. 29-44.
Thanks for Your Attention! Anna De Liddo
http://people.kmi.open.ac.uk/anna/