Beyond the Factor: Talking about Research Impact
Claire Stewart, Associate University Librarian for Research & Learning
Senate Library Committee, February 10, 2016





Interest in metrics

• In hiring, in support of promotion & tenure
• Funders and publishers, evaluating proposals
• Institutional productivity
• Impact on our communities

Abbreviated metrics overview

Alphabet Miso. https://www.flickr.com/photos/bean/322616749/

JIF: Journal Impact Factor

Source: The Thomson Reuters Impact Factor

Significant variance across disciplines:
• Top-ranked journal overall: JIF = 144.800
• Top-ranked journal in history: JIF = 2.615

Not based on any single author/article

Often criticized (DORA, Leiden Manifesto, HEFCE statements)
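The standard two-year JIF divides citations received in a given year to a journal's content from the previous two years by the number of citable items published in those two years. A minimal sketch with hypothetical numbers (not real journal data):

```python
def journal_impact_factor(citations_this_year, citable_items_prev_two_years):
    """Two-year JIF: citations received in year Y to items published in
    years Y-1 and Y-2, divided by citable items published in Y-1 and Y-2."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 1,200 citations in 2015 to articles from 2013-2014,
# which together contained 400 citable items.
jif = journal_impact_factor(1200, 400)
print(round(jif, 3))  # 3.0
```

Note that because the numerator and denominator are journal-wide totals, a small number of highly cited articles can dominate the score, which is one reason the JIF says little about any single article.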

Eigenfactor

Based on same citation source as the Impact Factor (Thomson’s Journal Citation Reports)

Weights journals by importance based on citation frequency, similar to Google's PageRank

Also calculates an Article Influence score, a per-article measure of influence over an article's first five years
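The actual Eigenfactor algorithm is more involved (it discards self-citations and uses a five-year window), but its core idea, weighting journals by citations received from other highly weighted journals, is PageRank-style power iteration. A toy sketch with a made-up three-journal citation matrix:

```python
# Toy citation matrix: C[i][j] = citations from journal j to journal i,
# for three hypothetical journals A, B, C (made-up numbers).
C = [[0, 5, 1],
     [3, 0, 2],
     [1, 1, 0]]

n = len(C)
col_sums = [sum(C[i][j] for i in range(n)) for j in range(n)]
# Column-normalize: each journal distributes one unit of influence.
P = [[C[i][j] / col_sums[j] for j in range(n)] for i in range(n)]

alpha = 0.85                     # damping factor, as in PageRank
w = [1.0 / n] * n                # start from uniform weights
for _ in range(100):             # power iteration toward the leading eigenvector
    w = [alpha * sum(P[i][j] * w[j] for j in range(n)) + (1 - alpha) / n
         for i in range(n)]

print([round(x, 3) for x in w])  # relative influence of journals A, B, C
```

A citation from a heavily cited journal thus counts for more than one from a rarely cited journal, which is the key difference from the raw citation counting behind the Impact Factor.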

h-index

Scholar-specific: “A scientist has index h if h of his or her Np papers have at least h citations each and the other (Np – h) papers have ≤h citations each.”

Dependent on citation index source (Google Scholar and Scopus may report different values)

Doesn’t account for differing citation and usage patterns across fields

Source: Hirsch, J. E. “An Index to Quantify an Individual’s Scientific Research Output.” Proceedings of the National Academy of Sciences of the United States of America 102, no. 46 (November 15, 2005): 16569–72. doi:10.1073/pnas.0507655102.
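Hirsch's definition translates directly into a small computation: sort a scholar's citation counts in descending order and find the largest h such that the h-th paper still has at least h citations. A minimal sketch:

```python
def h_index(citation_counts):
    """h is the largest number such that h papers have at least h citations each.
    Takes a list of per-paper citation counts in any order."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank          # this paper still meets the threshold
        else:
            break             # all later papers have fewer citations
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # 3: one huge citation count can't raise h alone
```

The second example illustrates the field-dependence problem noted above: because h grows only with breadth of citation, scholars in low-citation fields plateau at small h values regardless of influence.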

Altmetrics

• Shares and mentions in non-traditional places, on social media, etc. (Twitter, Facebook, Mendeley, blogs, Wikipedia, etc.)

• Often dependent on identifiers (DOIs, PubMed IDs, arXiv IDs, etc.), which can have lower penetration in arts & humanities fields

Article-level metrics

Recent expressions of concern about strictly quantitative approaches to research assessment

Advice to HEFCE (UK)

Framework for responsible metrics:
• Robustness: use the best possible data
• Humility: quantitative indicators should support expert assessment (e.g., peer review)
• Transparency: be able to show where data came from & let results be verified
• Diversity: account for variation by field
• Reflexivity: update indicators as the system & its effects change

20 specific recommendations to HEFCE around the use of metrics

What do we want to know when we talk about impact?

• How has this [researcher’s] work advanced knowledge?
• Has this research been evaluated, and by whom?
• What is field-shaping research?
• Who are the researchers shaping my field?
• What is going on in my field that’s important, or in a field that could benefit my work?
• What is the broader societal benefit of this work? (value of higher education, research investments)

Source: Weinberg, Bruce A., Jason Owen-Smith, Rebecca F. Rosen, Lou Schwarz, Barbara McFadden Allen, Roy E. Weiss, and Julia Lane. “Science Funding and Short-Term Economic Activity.” Science 344, no. 6179 (April 4, 2014): 41–43. doi:10.1126/science.1250055.

Where were CIC federal research funding dollars actually spent?

IRIS sample products, November 2015

Employment of grad students, postdocs, and other research staff

What are the other questions we will want to ask?

And what kinds of information will we need to answer these questions?


Who at UMN is doing research in or about countries other than the United States? Who are they collaborating with? What kind of effect has this work had?

‘Effect’ could include: articles, books and reports published, presentations offered, information about who benefited from these outputs, integration into policy development (conversations about and/or new legislation, regulation, etc.)

Why is this hard?

• Wide variety in what constitutes a valuable research output/indicator across disciplines
• Types of outputs are expanding
• Data about research outputs is messy, partly because it has the typical big data problems: volume, velocity, variety
• Highly distributed scholarly communication infrastructure (the data about outputs is everywhere)

Outputs/indicators vs metrics

“The observations here relate to the fact that while there is unease about the use of metrics as a mode of ‘measuring’ the excellence of research produced in the UK’s HEIs, the rich array of data presented as part of REF2014 demonstrates that the arts and humanities sector are comfortable with deploying numbers (albeit framed as data rather than metrics) to present a case about the excellence of their research cultures.”

Thelwall, Mike, and Maria M. Delgado. “Arts and Humanities Research Evaluation: No Metrics Please, Just Data.” Journal of Documentation 71, no. 4 (June 25, 2015): 817–33. doi:10.1108/JD-02-2015-0028.


Discussion

What conversations are taking place in your field about impact and metrics? What outputs are of interest? Do we know how to describe them? Are they captured in any consistent way?