
Standards in science indicators

Vincent Larivière

EBSI, Université de Montréal
OST, Université du Québec à Montréal

Standards in science workshop

SLIS-Indiana University
August 11th 2011

Current situation

Since the early 2000s, we have been witnessing:
1) An increase in the use of bibliometrics in research evaluation;
2) An increase in the size of the bibliometric community;
3) An increase in the variety of actors involved in bibliometrics (e.g. no longer limited to the LIS or STS communities);
4) An increase in the variety of metrics for measuring research impact: the h-index (with its dozens of variants), eigenvalue-based indicators, SNIP and SCImago impact indicators, etc.;
5) No longer an ISI monopoly: Scopus, Google Scholar and several other initiatives (SBD, etc.).

Why do we need standardized bibliometric indicators?

1) Symptomatic of the immaturity of the research field – no paradigm is yet dominant;
2) Bibliometric evaluations are spreading at the levels of countries, institutions, research groups and individuals;
3) Worldwide rankings are spreading and often yield diverging results;
4) Standards show the consensus in the community and allow the various measures to be:
   1) Comparable
   2) Reproducible

Impact indicators

Impact indicators have been used for quite a while in science policy and research evaluation.

Until quite recently, only a handful of metrics were available or compiled by the research groups involved in bibliometrics:
1) Raw citations
2) Citations per publication
3) Impact factors

Only one database was used: ISI.
Only one normalization was applied: by field (when it was applied at all!).
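A minimal sketch of these three classic metrics, using hypothetical paper data; the counts below are illustrative only, not taken from any real database.

# Raw citations, citations per publication, and a 2-year journal
# impact factor. All data below are hypothetical.

papers = [
    # (publication_year, citations_received_in_2011)
    (2009, 12), (2009, 3), (2010, 7), (2010, 0), (2011, 1),
]

raw_citations = sum(c for _, c in papers)
citations_per_publication = raw_citations / len(papers)

# 2-year impact factor for 2011: citations received in 2011 by items
# published in 2009-2010, divided by the number of items published then.
numerator = sum(c for y, c in papers if y in (2009, 2010))
denominator = sum(1 for y, _ in papers if y in (2009, 2010))
impact_factor_2011 = numerator / denominator

print(raw_citations, citations_per_publication, impact_factor_2011)
# 23 4.6 5.5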

Factors to take into account in the creation of a new standard

1) Field specificities: citation potential and aging characteristics (field normalization is sketched after this list)
2) Field definition: at the level of the journal or at the level of the paper? Interdisciplinary journals?
3) Differences in the coverage of databases
4) Distributions vs. aggregated measures
5) Skewness of citation distributions (use of logs?)
6) Paradox of ratios (0 → 1 → ∞)
7) Averages vs. medians vs. ranks
8) Citation windows
9) Unit vs. fractional counting
10) Equal or different weight for each citation?
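As a concrete illustration of item 1, the sketch below normalizes a paper's citation count by the average citations of its field and publication year. The reference sets and field names are hypothetical; a real implementation would draw them from a bibliometric database.

from statistics import mean

# Hypothetical reference sets: citation counts of every paper in a given
# field and publication year.
field_baselines = {
    ("mathematics", 2008): [0, 1, 1, 2, 4, 9],
    ("biomedicine", 2008): [2, 5, 8, 11, 25, 60],
}

def normalized_citation_score(citations, field, year):
    """Observed citations divided by the field/year average (the expected value)."""
    expected = mean(field_baselines[(field, year)])
    return citations / expected

# The same raw count of 10 citations means very different things by field.
print(normalized_citation_score(10, "mathematics", 2008))  # ~3.5, well above average
print(normalized_citation_score(10, "biomedicine", 2008))  # ~0.5, below average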

Ex. 1: Impact indicators

Example of how a very simple change in the calculation method of an impact indicator can change the results obtained – even when very large numbers of papers are involved.

All things are kept constant here: same papers, same database, same subfield classification, same citation window.

The only difference is the order of operations in the calculation: average of ratios (AoR) vs. ratio of averages (RoA). Both methods are considered standards in research evaluation.
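A minimal sketch of the two calculation orders, with hypothetical observed and expected (field/year average) citation counts, shows how they can diverge:

# Each paper: (observed citations, expected citations for its field and year).
# All numbers are hypothetical.
papers = [
    (30, 10.0),  # highly cited paper in a high-citation field
    (1, 2.0),    # modestly cited paper in a low-citation field
    (0, 5.0),    # uncited paper
]

# Average of ratios (AoR): normalize each paper first, then average.
aor = sum(obs / exp for obs, exp in papers) / len(papers)

# Ratio of averages (RoA): average observed and expected citations
# separately, then take the ratio.
roa = (sum(obs for obs, _ in papers) / len(papers)) / \
      (sum(exp for _, exp in papers) / len(papers))

print(aor)  # (3.0 + 0.5 + 0.0) / 3 ≈ 1.17
print(roa)  # (31/3) / (17/3) ≈ 1.82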

4 levels of aggregation are analyzed: individuals, departments, institutions and countries

Relation between RoA and AoR field-normalized citation indicators at the level of A) individual researchers (≥20 papers), B) departments (≥50 papers), C) institutions (≥500 papers) and D) countries (≥1000 papers).

Figure 2. Relationship between (AoR – RoA) / AoR and the number of papers at the level of A) individual researchers, B) departments, C) institutions (≥500 papers) and D) countries.

Ex. 2: Productivity measures

Typically, we measure the research productivity of a unit by summing the distinct number of papers it produced and dividing it by the unit's total number of researchers.

Another method is to assign papers to each researcher of the group and then average their individual outputs.

Both counting methods are correlated, but nonetheless yield different results:
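A minimal sketch of the two counting methods for a hypothetical department of three researchers (paper identifiers are illustrative):

dept = {
    # researcher -> set of paper identifiers; co-authored papers appear
    # in several researchers' sets. All data are hypothetical.
    "A": {"p1", "p2", "p3"},
    "B": {"p2", "p4"},
    "C": {"p4"},
}

# Method 1: distinct papers of the unit divided by its number of researchers.
distinct_papers = set().union(*dept.values())
papers_per_researcher = len(distinct_papers) / len(dept)

# Method 2: average of each researcher's individual output
# (a co-authored paper counts once for every co-author).
avg_individual_output = sum(len(p) for p in dept.values()) / len(dept)

print(papers_per_researcher)   # 4 distinct papers / 3 researchers ≈ 1.33
print(avg_individual_output)   # (3 + 2 + 1) / 3 = 2.0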

Difference in the results obtained for 1,223 departments (21,500 disambiguated researchers).