


Editorial: Subprime Lessons for Academic Librarians

by David F. Kohl

David F. Kohl <[email protected]>. The Journal of Academic Librarianship, Volume 34, Number 4 (July 2008), pages 285–287.

From the editor…

At the risk of burnishing my geezer credentials, I must report that the Subprime crisis has forcefully reminded me of a key lesson of the 1970s: never attribute to purposeful evil what you can explain by incompetence. To be sure, there are indications of greedy and unscrupulous behavior in the subprime debacle, but the main problem seems to be that all these bright, highly educated people got deeply involved with financial tools they really didn't understand, in an environment where they piled unexamined assumption upon unexamined (or at least insufficiently examined) assumption. In retrospect, it all seems a bit nuts, but it happened. So what gives?

All I can figure out is something I would call Kohl's Fast Food Principle. We know that eating at McDonald's (or Burger King or KFC) is not particularly nutritious, and, with all the disposable napkins, cups, spoons, and ketchup packages, probably not very good for the planet either. Yet fast food thrives. It thrives because it solves a problem. We need to eat, and fast food solves that problem for us quickly and conveniently. And let's face it, what parent has not at some point just caved and traded a Happy Meal for a little peace and quiet? Whether it's dealing with financial instruments or calories, quick and convenient are hard to beat, especially if we're in too big a rush getting on with the rest of our lives to really think it through.

In a kind of perverse analogy, it seems to me that higher education has its own particular version of fast food. I am referring to the journal Impact Factor, particularly when used to evaluate articles in a particular journal. As most everyone in higher education these days probably knows, it is simplicity itself. A single number assigned to a journal allows a harried Dean of Libraries or Provost or Tenure and Promotion Committee pressed for time to assign an academic value to a paper by proxy. The principle is the same one our mothers tried to impress upon us as children: you're known by the company you keep. An article placed in a high-Impact-Factor journal must be good, and vice versa. You don't even have to read the paper to know its academic value if you know the journal's Impact Factor. The fast food principle in action.

The core of the issue here, however, is not whether we should completely reject the journal Impact Factor, particularly in evaluating articles (though the idea of living a completely pure life is always tempting), or pragmatically embrace it (the idea of quickly cutting to the bottom line when time and energy are short is equally tempting), but that we should use it with caution and with insight into its strengths and limitations. It can, in short, be a useful, if not perfect, instrument for approaching the practical problem of evaluating scholarly contributions in higher education. But I think we have a ways to go in the library community in adequately understanding the Impact Factor.¹ And I particularly think it is important that we as a community discuss and investigate the issue more fully, because it has real significance for both personnel and collection decisions.

The best known and most universally used IF is the one generated by Thomson. Basically, they calculate the total number of citations in a given year to a journal's articles from the prior two-year period and divide by the total number of articles published by that journal in that same two-year period. For example, a 2006 IF for journal X would be the total citations in 2006 to the articles published in 2004/2005, divided by the total number of articles for 2004/2005.² It is possibly worth reflecting on the fact that half of all IFs are less than one. In any case, what follows are some very preliminary thoughts that I hope will encourage a fuller exploration and conversation of this issue by the community. I have organized my concerns into four areas:
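Before turning to those areas, the two-year calculation just described can be sketched in a few lines. This is a toy illustration with invented numbers, not Thomson's actual data or code:

```python
def impact_factor(citations_in_year, articles_per_year, year):
    """Toy two-year impact factor: citations in `year` to items
    published in the two prior years, divided by the number of
    articles published in those two years."""
    window = (year - 1, year - 2)
    cites = sum(citations_in_year[year].get(y, 0) for y in window)
    articles = sum(articles_per_year[y] for y in window)
    return cites / articles

# A 2006 IF for a hypothetical journal X: citations made in 2006
# to its 2004 and 2005 articles, over its 2004/2005 article count.
citations = {2006: {2004: 60, 2005: 90}}   # invented numbers
articles = {2004: 50, 2005: 50}
print(round(impact_factor(citations, articles, 2006), 2))  # 1.5
```

Note that a journal with 100 articles in the window needs 100 citations to them just to reach an IF of 1.0, which makes the observation that half of all IFs fall below one less surprising than it first sounds.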

LIMITATIONS—TECHNICAL

Although an obvious point, it is worth keeping in mind that evaluating articles through journal IF is a proxy for the real thing and, as such, has an inherent limitation: it is not the "thing itself." The most that can be said of an article in a journal with a high IF is that the probability of its being a significant paper is likely to be higher than if it appeared in a journal with a low IF. But a high IF is not necessarily a guarantee of an academically sound article, as the ongoing drumbeat of withdrawn articles and author conflict-of-interest charges in such journals as the New England Journal of Medicine and the Journal of the Medical Library Association, journals with outstanding IFs, shows. A proxy is an indication, not a guarantee, of value.³

A possibly even more troubling concern is the fact that Thomson does not use the full universe of citations. They have chosen to limit themselves in two dimensions. The first is that they count only citations appearing in a limited number of journals, in particular those in their Web of Knowledge database (9,000 science and social science journals). Although a substantial database, it is nevertheless a subset of the academic literature, even in the sciences. Depending on the authority, the number of scholarly journals has been estimated at anywhere from 20,000 to 50,000 titles. Thomson's argument is that their subset represents the elite journals in their fields and the most reliable guide to the published literature in terms of what these authors choose to cite. It is a reasonable argument but not entirely convincing. Further, their coverage of the journal literature is not only limited but also uneven in terms of academic discipline and geographic representation.⁴

The second dimension that limits the citation universe is temporal. Thomson gives an article only a two-year window after publication in which to generate citations. Einstein's iconic papers on special and general relativity, so limited, would have provided very little impact for the Annalen der Physik. To accept the IF as valid in light of such limitations, it is necessary to make a number of possibly suspect assumptions. Whether such assumptions are valid for scholarship in general, or for the library literature in particular, would be worth exploring.

A final concern in this category is a certain lack of consistency and transparency in how the article and citation numbers are counted. It is, for example, somewhat strange that the criteria for identifying an article to be used in determining the citation count differ from those used for determining the article count: editorials, for example, count for the former but not for the latter. It may also come as a surprise in this era of automation that the counts are done manually, a mechanism that seemingly further encourages inconsistency. Most problematic, however, Thomson does not provide enough information publicly to replicate their calculations in any reliable way. Automatic belief in authoritative announcements hasn't been a feature of the Academy since at least the Middle Ages.⁵
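The numerator/denominator asymmetry just described is not merely a bookkeeping quirk; with invented numbers, its inflationary effect is easy to see:

```python
# Invented figures: over the two-year window, a journal publishes
# 100 research articles and 20 editorials. The research articles
# draw 150 citations, the editorials another 30.
research_articles, editorials = 100, 20
cites_to_research, cites_to_editorials = 150, 30

# Symmetric count: every published item appears in both the
# numerator (its citations) and the denominator (item count).
symmetric_if = (cites_to_research + cites_to_editorials) / (research_articles + editorials)

# Asymmetric count (as described above): citations to editorials
# are included in the numerator, but editorials are excluded from
# the denominator of "citable" items.
asymmetric_if = (cites_to_research + cites_to_editorials) / research_articles

print(round(symmetric_if, 2), round(asymmetric_if, 2))  # 1.5 1.8
```

With these made-up figures, the asymmetric rule inflates the IF by 20 percent without a single additional citation.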

LIMITATIONS—STATISTICAL

One size does not fit all. In small populations, extreme outliers can have a disproportionate and misleading effect on the IF calculation. Most library journals publish well under 100 articles a year, and just one or two highly cited papers in such a small population can significantly affect the IF. There are statistical ways to deal with this problem,⁶ but Thomson does not use them. It would be useful to know to what degree citation outliers are a problem in determining a reliable IF for library journals.⁷
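The outlier effect can be made concrete with an invented distribution for a small journal. The IF behaves like a mean, and one approach the statistical literature suggests is to compare it against a robust measure such as the median:

```python
from statistics import mean, median

# Invented citation counts for a small journal's 60 articles:
# most draw a handful of citations, one review article draws 120.
citations = [0] * 20 + [1] * 20 + [2] * 15 + [3] * 4 + [120]

print(round(mean(citations), 2))        # 3.03 -- the IF-style mean
print(median(citations))                # 1.0  -- a robust alternative
print(round(mean(citations[:-1]), 2))   # 1.05 -- the mean without the outlier
```

In this toy population a single paper nearly triples the mean, while the median barely registers it; which figure better describes the journal's typical article is exactly the question at issue.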

LIMITATIONS—PHILOSOPHICAL

There is a question of what constitutes impact. For traditional academics, whose formal conversations take place through the published word, citations are clearly important evidence that their work has significance for their peers. For professionals such as academic librarians, who operate in a more complex interplay of theory and application, it may be that citations to published work are not as clear a marker of impact. Other evidence, such as the number of downloads, may also be a useful (or possibly better) indicator of professional impact and significance. It would be useful to know, for example, how and in what manner downloads might be a significant measure of professional influence. Are downloads primarily an indication of library students researching papers, or are they the result of working professionals seeking to make informed decisions? It would also be useful to know whether there is a relationship between the two, i.e., IF and downloads. Further, is it possible that a highly cited article might reflect an arcane academic discussion among library faculty that few practicing librarians choose to download, the proverbial tree falling in the forest but making a sound only for library school faculty and students, not for library practitioners?⁸
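The citation–download relationship raised above is empirically testable. A first-pass analysis might simply correlate per-article citation and download counts; the sketch below uses the standard Pearson coefficient on invented data (real usage statistics would come from a publisher or aggregator):

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient, computed directly."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented per-article data for one journal: citation counts and
# download counts for the same seven articles.
cites     = [0, 1, 1, 2, 4, 8, 12]
downloads = [40, 55, 80, 60, 200, 350, 500]
print(round(pearson(cites, downloads), 2))  # high positive correlation in this toy data
```

A strong correlation would suggest the two measures track the same underlying influence; a weak one would support the suspicion that citations and downloads capture different audiences, faculty versus practitioners.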

And finally, of course, there are reputational studies, which are based on the opinions of professionals rather than on either citations or downloads. There is a small but ongoing literature investigating journal reputation, particularly in library science. How reputation relates to citation or download activity would be interesting and worthwhile to know, as would the unique perspective each of these measures may offer on academic journals.⁹

LIMITATIONS—MORAL

And last, there is always, so to speak, the question of original sin. It would be important to know both how pressured or unscrupulous individuals (editors or publishers) can manipulate or game the system and what measures, therefore, would be useful in maintaining the integrity of the data. Like Claude Rains in Casablanca, librarians may be shocked, yes shocked, to think that anything unseemly might be going on. And frankly, the good news is that, as far as I know, the largely anecdotal evidence of IF manipulation comes from other disciplines. I am referring to the encouragement of unnecessary review articles, with their multitude of citations to a possibly favored journal, or to the editorial trading of article acceptance for added citations to the accepting journal, both practices that have been publicly reported in conjunction with the IF.¹⁰ Although my many years of experience in the profession incline me to considerable trust in my peers, it might nevertheless be useful for the community to have some research and reflection in this area as well.

Increasingly we live in a world of accountability, driven not only by a desire for improvement but also by the continuing lessons of Enron or the Subprime crisis, with their devastating impact on our lives when understanding and transparency are absent. While journal Impact Factors are hardly in the same league, the moral of the story is the same. To use our tools wisely and effectively, we need to clearly understand them, along with their advantages and their limitations. Particularly as academics, knowledge is what we are about. And in any case, it is never wise to walk through the dark woods as an innocent.

As always, I welcome your comments and thoughts.

David

NOTES AND REFERENCES

1. One clearly positive feature of the IF is that, if my experience is any indication, it may be replacing the rejection-rate number. The number of requests from authors for JAL's rejection rate seems to have declined in recent years; I assume this is due to the use of the IF to rate a journal's academic significance. This probably represents a step forward, because the only figure more problematic and less transparent than the IF is the journal rejection rate.

2. http://scientific.Thomson.com/free/essays/journalcitationreports/impactfactor/ (accessed 4/11/2008); also http://scientific.Thomson.com/free/essays/journalcitationreports/usingimpactfactor/ (accessed 4/11/2008).

3. For some of the most recent contretemps as of this writing, see David Armstrong, "Researcher Received Industry Funds," The Wall Street Journal (April 18, 2008), p. B11, and Ron Winslow and Avery Johnson, "Merck's Publishing Ethics Are Questioned by Studies," The Wall Street Journal (April 16, 2008), p. B4.

4. http://en.wikipedia.org/wiki/Impact_factor (accessed 4/11/2008).

5. Mike Rossner, Heather Van Epps, and Emma Hill, "Show Me the Data," The Journal of Cell Biology 179:6 (December 17, 2007), pp. 1091–1092.

6. For example, see Michael J. Stringer, Marta Sales-Pardo, and Luis A. Nunes Amaral, "Effectiveness of Journal Ranking Schemes as a Tool for Locating Information," PLoS ONE 3:2, e1683, doi:10.1371/journal.pone.0001683.

7. The best discussion of this issue (as well as a number of related concerns) is probably M. Amin and M. Mabe, "Impact Factors: Use and Abuse," Perspectives in Publishing (October 2000; reissued with minor revisions October 2007), pp. 1–6 (see particularly p. 4).

8. A preliminary start in such research, although not in the library field per se, is Henk F. Moed, "Statistical Relationships Between Downloads and Citations at the Level of Individual Documents Within a Single Journal," Journal of the American Society for Information Science and Technology 56:10 (2005), pp. 1088–1097.

9. For the initial reputational study among library journals, see David F. Kohl and Charles H. Davis, "Ratings of Journals by ARL Library Directors and Deans of Library and Information Science Schools," College and Research Libraries 46:1 (January 1985), pp. 40–47; the most recent follow-up is Thomas E. Nisonger and Charles H. Davis, "The Perception of Library and Information Science Journals by LIS Education Deans and ARL Library Directors: A Replication of the Kohl–Davis Study," College and Research Libraries 66 (July 2005), pp. 341–377.

10. Richard Monastersky, "The Number That's Devouring Science," http://chronicle.com/free/v52/i08/08a01201.htm (accessed 4/20/2008).
