Bulletin of the American Society for Information Science and Technology – December/January 2010 – Volume 36, Number 2



From Facts to Judgments: Theorizing History for Information Science
by Ryan Shaw

Ryan Shaw is a Ph.D. candidate at the School of Information at the University of California, Berkeley. He can be reached at ryanshaw<at>ischool.berkeley.edu.

There is increasing interest in representing the past as a database of historical facts. Drawing upon the increasing availability of digitized historical texts and advances in text mining and semi-structured databases, these projects seem set to fulfill Paul Otlet’s dream of extracting the factual content from texts and making it available to answer queries about the past.

For example, the academic project DBpedia [1] aims to extract facts from Wikipedia infoboxes – the sections found on certain categories of Wikipedia articles that present basic facts using a standardized template. The “Historical Event” infobox template, for instance, includes fields for the event’s preferred name, alternate names, date, location, participants and result, as well as a representative image. On a Wikipedia article such as “French Revolution,” these values are presented as an HTML table. After parsing and processing by the DBpedia project’s algorithms, they are transformed into a standardized data model for representing subject-predicate-object expressions (for example, “French_Revolution : location : France”). Freebase is a commercial service that similarly parses Wikipedia infoboxes into structured data, but it has the more ambitious goal of integrating this data with all other available public domain data and providing interfaces for editing and adding to it.
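
To make the triple data model concrete, here is a minimal sketch in Python of how infobox fields might become queryable subject-predicate-object triples. The dictionary contents and function names are illustrative assumptions, not DBpedia’s actual extraction code:

    # Field/value pairs as they might be parsed from an infobox's HTML table.
    # (Illustrative values, not DBpedia's real output.)
    infobox = {
        "name": "French Revolution",
        "date": "1789-1799",
        "location": "France",
        "result": "Abolition of the French monarchy",
    }

    def infobox_to_triples(subject, fields):
        """Turn each infobox field into a (subject, predicate, object) triple."""
        return [(subject, predicate, value) for predicate, value in fields.items()]

    triples = infobox_to_triples("French_Revolution", infobox)

    # Query the resulting "database of historical facts" for one fact.
    for s, p, o in triples:
        if p == "location":
            print(f"{s} : {p} : {o}")   # French_Revolution : location : France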

Projects like DBpedia and Freebase mine historical facts primarily from the riches of Wikipedia, hoping that the collaborative effort there will trickle into their own projects. Other projects aim to mine the web at large for historical knowledge. Bruce Robertson calls the web “a historian’s fantasy” and envisions sophisticated tools for organizing and querying not only encyclopedia articles but also digitized archives, scholarly editions, journal articles and blog posts [2]. Pursuing a similar vision, digital historian Dan Cohen and programmer Simon Kornblith have developed a system called H-Bot [3] that parses Google search results to answer historical questions. And Google itself has begun incorporating timeline results – historical facts mined from the web – into its search results, as a search for a historical personage or event will usually show. All of these systems might be considered descendants of IBM’s “Professor RAMAC” computer program, which at the 1958 World’s Fair impressed audiences with its ability to answer historical questions using its “stack of 50 fast spinning disks” on which were stored “the principal historical events of the world from the birth of Christ to the launching of Sputnik I” [4].

H-Bot claims to demonstrate the “automatic discovery of historical knowledge” by using Google search results to answer simple factual questions such as “When was George Washington born?” or “Who was Lao-Tse?” The program uses simple sentence parsing techniques to transform questions into keyword searches that are passed to Google. It then uses statistical techniques to extract frequently repeated information such as dates and names from the returned web pages. (In practice, however, H-Bot usually draws its answers from a handful of online reference sources such as Wikipedia or Wordnet.) Its creators admit that, with its focus on simple factoids, H-Bot “offers an impoverished view of the past.” But they divert attention from this critique by implicitly equating such fact-finding with historical knowledge and by promising greater things to come (a favorite technique of AI researchers for decades). Most tellingly, however, they argue that the profusion of historical evidence surviving from the recent past poses a problem for historians, who must adopt the same techniques for managing information overload that scientists use.
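
The general recipe described above – parse a question into keywords, then statistically extract the most frequently repeated candidate answer – can be sketched as follows. This is a hypothetical reconstruction in Python, not Cohen and Kornblith’s implementation, and the “search results” are stubbed out:

    import re
    from collections import Counter

    def question_to_keywords(question):
        """Strip question words, keeping content terms
        (a crude stand-in for H-Bot-style sentence parsing)."""
        stopwords = {"when", "was", "who", "what", "where", "is", "the"}
        tokens = re.findall(r"[A-Za-z]+", question.lower())
        return [t for t in tokens if t not in stopwords]

    def most_frequent_year(pages):
        """Statistically pick an answer: the year mentioned
        most often across the returned pages."""
        years = Counter()
        for text in pages:
            years.update(re.findall(r"\b1[0-9]{3}\b", text))
        return years.most_common(1)[0][0] if years else None

    # Stubbed pages standing in for web search results.
    pages = [
        "George Washington was born in 1732 in Virginia.",
        "Born 1732, Washington led the Continental Army.",
        "Some sources once claimed 1731 under the Old Style calendar.",
    ]
    print(question_to_keywords("When was George Washington born?"))
    # ['george', 'washington', 'born']
    print(most_frequent_year(pages))   # '1732'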


This rhetoric should be familiar to information scientists. From Paul Otlet to Vannevar Bush to the National Science Foundation (NSF)-funded projects of today, much research in information management and retrieval has focused on the needs of scientists and specifically on how to help scientists avoid reading. The problem as it has been framed by this research is that scientists must stay current with ever more scientific literature in a finite amount of time. This dilemma has led to a focus on text summarization, filtering of irrelevant information and extraction of key facts from explanatory narrative. With the advent of large-scale corpora of digitized texts, these techniques are now being proffered for the humanities as well. The call for proposals from the recent Digging into Data program funded by the National Endowment for the Humanities (NEH), NSF and others to develop data-mining tools and techniques for humanist scholars exemplifies how the problem of too much text is being framed (emphasis added by this author): “Now that scholars have access to huge repositories of digitized data – far more than they could read in a lifetime – what does that mean for research?” Databases of historical facts are in part a response to this perceived problem: when there are too many histories to be read, boil them down to bare facts that can be subjected to powerful selection and retrieval mechanisms.

Such approaches are better suited to the sciences, given that scientists are assumed to be engaged in a cumulative research effort in which later researchers build upon the work of earlier researchers. This ideal of cumulative research effort requires that the complexity of earlier work be distilled down to reusable conclusions or facts. Bruno Latour in his Science in Action [5, 1-17] famously describes this process of fact production as “black-boxing.” A black box is a metaphor for a mechanical or computational component that is used to fulfill a functional requirement without knowledge of its internal implementation. While it may be possible to know how the black box works, all that is relevant to its users is that it produces expected outputs from given inputs. While scientists can in principle go back and recreate the experiments of their predecessors, efficient cumulative research requires that most of the time they simply trust that the facts they inherit work as advertised.

Yet as Louis Mink [6] points out, work in history does not produce “detachable” conclusions of this kind. It is rare for historians to simply accept an earlier account of some historical subject. Re-examination of primary evidence is the rule. Mink argues that this practice is common not because historiography is less developed than the sciences, but because of a difference in the nature of the conclusions that historians produce. Rather than simply producing facts about the past, the historian aims to produce what Mink calls “synoptic judgments” of some complex of actions and events in the past.

A synoptic judgment is an interpretive act in which one moves from seeing that a series of things happened (the facts about the past) to seeing those happenings as a synthetic whole. Once she has reached such a judgment herself after immersion in the historical evidence, the historian’s task is to lead others through the interpretive process via the medium of a written text. Through the techniques of narrative representation, the author of a historical text aims to show past actions and events as a coherent whole when seen from a certain perspective. The exhibition of this whole as represented by the thick description of the historian’s narrative is the conclusion and as such cannot be detached from that narrative. In other words, the historian’s conclusions inhere in the structure and organization of her narrative. Even when the historian summarizes her narrative in separate statements, Mink argues, these statements are not detached conclusions but simply reminders to the reader of how the historian has ordered and organized her true conclusion, the narrative itself.

Databases of historical facts purport to help us answer questions about the past, and in a narrow sense they do that. But few of these systems take us further than the initial “Hey, neat!” reaction inspired by Professor RAMAC. The problem is not that the facts are wrong – Cohen and Rosenzweig [3] show that, on the contrary, they can be quite accurate by most standards – or that they are incomplete (though they certainly are). Nor is insufficiently advanced technology to blame – even if we were able to perfectly extract historical facts from texts, disambiguating every name and indexing each fact in the absolute grid of time and space, we would still face this problem. The problem is that systems like this are grounded in an impoverished conception of how we represent the historical past, a conception that focuses on atomic facts rather than synoptic judgments.


The problem is an old one. It can seem obvious that what we need to understand the past are facts about the past, and that a perfect history is thus one that identifies and enumerates “everything that happened” in terms of such facts. Yet upon careful consideration these notions are quite problematic. Philosophers have often hypothesized the idea of a complete historical database in order to demonstrate what the problems are. Arthur Danto imagines an “Ideal Chronicle” with descriptions of “absolutely everything that happened,” in the order it happened, thus providing the “whole map of the Past” [7, 148-181]. Danto argues that even such a complete database of the past would not obviate the need for historiography, since the role of historians is not simply to recount factual data about the past, but to represent the significance of episodes in the past from the perspectives of the present. These perspectives (and thus our criteria for significance) are constantly changing. It is this kind of change, not simply the discovery or refutation of historical evidence (addition or deletion of facts from the database), that results in new historiographical conclusions. Or, as Mink puts it, even if we could “sit before a screen and directly review the past in its minutest details,” we would still need some imaginative representation of the past to help us make sense of it all in light of our current historical situation, and it is the role of history to develop such imaginative representations and not simply gather the detailed facts.

Acts of synoptic judgment produce imaginative representations that are articulated as historical narratives. Once historians have developed narratives that relate sets of facts under some synthesizing ideas, they usually label these narratives with phrases like “The Renaissance” or “The French Revolution.” The philosopher of history W. H. Walsh calls this process colligation, appropriating the philosopher of science William Whewell’s term for “the binding together or connection of a number of isolated facts by a suitable general conception or hypothesis” [8, 59-64]. Walsh supports a hermeneutic conception of historical method in which the goal of historians is to imaginatively and empathetically reconstruct the experiences and thoughts of people in the past. Accordingly, Walsh’s notion of colligation initially depended upon happenings being intrinsically connected by virtue of being intentions or consequences of some past actor’s plans. A “suitable general conception” for binding together facts was one that illuminated those facts as being part of a (conscious or unconscious) policy guiding the behavior of people in the past. Later Walsh expanded his notion of colligation to include any case in which some set of events is interpreted as a connected process or development, whether or not such a policy could be discerned [9].

Even though concepts are of primary interest in library and information studies, colligatory concepts have been mostly overlooked. Even the most sophisticated theoretical discussions of concepts in the literature tend to equate concepts with classes or categories. For example, in his recent survey of concept theories Hjørland [10, p. 1522] asserts that “[c]oncepts are dynamically constructed and collectively negotiated meanings that classify the world according to interests and theories” (emphasis added). This preoccupation with classification is perhaps understandable in light of the aforementioned focus on scientific domains. The sciences seek to abstract away from unique individuals to generalized classes that can be related by laws. While historians do generalize, they also – arguably primarily – seek to assemble descriptions of unique past events into connected and coherent but no less unique representations. Concepts like “The Renaissance” colligate rather than classify.

The most fully developed theory of colligation to date has been developed by Frank Ankersmit, who in his Narrative Logic: A Semantic Analysis of the Historian’s Language [11] seeks to explain how colligatory concepts – which he calls “narrative substances” – are constructed from sets of statements expressing facts. A narrative substance is a point of view from which to regard the past, articulated by means of a specific historical narrative. Ankersmit contends that each individual historiographical narrative constructs a narrative substance so that, for example, there are as many “Renaissances” as there are narratives on the subject, since each narrative articulates a specific point of view. So when we speak generally about “The Renaissance,” we are really talking about a whole family or type of narrative substances that have been given the same name.

Furthermore, Ankersmit claims that when we define such types, we do so extensionally rather than intensionally. An intensional definition of a type is one that defines some necessary and sufficient conditions for belonging to the type. For example, one might define a mug intensionally as “a type of cup made of glass or ceramic and having a handle large enough to accommodate a whole hand.” An extensional definition of a type, on the other hand, enumerates the members of a set of individuals considered to be instances of that type. An extensional definition of mug would collect all the world’s individual coffee mugs and beer steins and so on and thereby declare “these are mugs.”
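
The distinction can be made concrete in code. A minimal sketch, with invented attribute names: an intensional definition is a predicate giving necessary and sufficient conditions, while an extensional definition simply enumerates the individuals:

    # Intensional definition: a predicate over attributes. (Attribute names
    # are invented for illustration.)
    def is_mug(vessel):
        return (vessel["material"] in {"glass", "ceramic"}
                and vessel["has_handle"]
                and vessel["handle_fits_whole_hand"])

    # Extensional definition: enumerate the individuals counted as mugs.
    MUGS = {"grandma's coffee mug", "the office beer stein", "the chipped diner mug"}

    def is_mug_extensional(name):
        return name in MUGS

    vessel = {"material": "ceramic", "has_handle": True, "handle_fits_whole_hand": True}
    print(is_mug(vessel))                              # True
    print(is_mug_extensional("grandma's coffee mug"))  # True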

Ankersmit argues that one can define types of narrative substances extensionally by clustering narrative substances that contain overlapping sets of statements. He proposes a thought experiment in which a giant matrix is constructed. Along one axis of the matrix are aligned all the declarative statements made about the past that have actually appeared in some text or another. Along the other axis are aligned all the narrative substances constructed by means of those statements. Each cell in the matrix is then filled with a “0” or a “1” indicating whether or not the corresponding statement was used to help construct the corresponding narrative substance. Given such a matrix, we could then try to identify types of narrative substances by grouping together narrative substances with similar patterns of 0s and 1s, in much the same way that we might identify types of drinking vessels by looking for similar shapes or handles or materials. Ankersmit posits that we will observe that “certain classificatory patterns automatically appear.” These clusters in “narrative space” reflect the fact that historians write in response to other historians and construct their narrative substances by distinguishing them from those that came before (which implies a significant degree of overlap).
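
A toy version of this thought experiment, with invented statements and narrative substances, might model each narrative substance as the set of statements marked “1” in its row and cluster substances by the overlap of those sets:

    from itertools import combinations

    # Each narrative substance is modeled as the set of statements (the "1"
    # cells of its row in Ankersmit's matrix) used to construct it. The
    # statements and narratives here are invented for illustration.
    narratives = {
        "Renaissance_A": {"s1", "s2", "s3", "s4"},
        "Renaissance_B": {"s2", "s3", "s4", "s5"},  # written in response to A
        "ColdWar_A":     {"s8", "s9"},
    }

    def jaccard(a, b):
        """Overlap between two narrative substances' statement sets."""
        return len(a & b) / len(a | b)

    # Group narratives whose statement sets overlap beyond a threshold --
    # the "classificatory patterns" Ankersmit expects to appear.
    for (n1, set1), (n2, set2) in combinations(narratives.items(), 2):
        sim = jaccard(set1, set2)
        if sim > 0.5:
            print(f"{n1} and {n2} cluster together (similarity {sim:.2f})")
    # Renaissance_A and Renaissance_B cluster together (similarity 0.60)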

As Ankersmit points out, such an extensional procedure for identifying types can never be precise. Depending on how we interpret similar patterns there will be many possible groupings into types. Moreover, for any given interpretation of similarity, there will always be boundary cases that could belong to more than one cluster. At best, extensional typification can identify regularities in how we have chosen to conceptualize reality, but it cannot tell us anything about reality itself. In other words, looking at written history this way tells us something not about the reality of the past, but about the contours of the concepts developed by historians over time. “The Renaissance,” “The Cold War,” “The French Revolution” and “9/11” are not objectively existing entities in the past, but are names of types of stories we tell to understand the past.

Finally, Ankersmit argues that an extensional procedure like this is the only way to identify types of narrative substances. The alternative would be to intensionally identify types in terms of logical definitions based on attributes of the things being classified, the way we define the type mammal as “warm-blooded,” “vertebrate” and “having hair or fur.” But this intensional identification is precisely what we cannot do for narrative substances. There is no logical definition, no core set of properties both necessary and sufficient for making a particular narrative a narrative about “The French Revolution.” While it’s easy to identify statements that would not appear in any narrative of the French Revolution – for example, that the storming of the Bastille occurred in 1967 in Tokyo, Japan – we cannot identify statements that must appear in such stories or by virtue of which we must consider a given narrative to be a narrative about the French Revolution. Given all the narratives that have ever been written about the French Revolution, we may not be able to identify a single statement that appears in every one. Thus we cannot and do not identify types of narrative substances intensionally. Decisions about what “The Cold War” is can only be justified pragmatically, not logically.

With the large-scale digitization of books it may become possible to investigate Ankersmit’s theory by analyzing the full texts of historical narratives. Before undertaking such a project we must address some methodological problems. First is the issue of how to identify statements about the past. Ankersmit’s matrix involves statements (also known as propositions) about the past, not the sentences that express these statements. (Ankersmit contends (p. 19) that “states of affairs in the past can be unambiguously described by means of constative statements.”) We cannot conflate statements with the sentences that appear in historical texts, as it is unlikely that any two texts will contain precisely the same sentence. So we must find a way to move from the sentences that appear in texts to the propositions those sentences express, a problem that has occupied linguists and philosophers of language since Bertrand Russell. Fortunately this is an area where information extraction technology might show its worth, as such technology is precisely concerned with transforming sentences into logical propositions. Despite well-known problems with propositional theories of language [12, 70-74] and the fact that information extraction technologies are plagued by errors, their output still might be usable for exploring Ankersmit’s theory.
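
A toy illustration of the required move from sentences to propositions might look like the following, where two differently worded sentences are normalized to one proposition. The pattern and proposition format are invented for illustration; real information extraction systems are of course far more elaborate (and more error-prone):

    import re

    def normalize(sentence):
        """Map superficially different sentences onto one canonical
        proposition (a toy stand-in for information extraction)."""
        patterns = [
            (r"the storming of the bastille (?:occurred|took place) in (\d{4})",
             lambda m: ("storming_of_the_Bastille", "year", m.group(1))),
        ]
        s = sentence.lower().strip(". ")
        for pattern, build in patterns:
            m = re.search(pattern, s)
            if m:
                return build(m)
        return None

    # Two different sentences, one underlying proposition.
    print(normalize("The storming of the Bastille occurred in 1789."))
    print(normalize("The storming of the Bastille took place in 1789."))
    # Both print: ('storming_of_the_Bastille', 'year', '1789')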


The second problem, as Ankersmit points out, is that a given historical text constructs multiple narrative substances, and it is difficult to determine exactly which statements are being used to construct which narrative substances. Indeed, Ankersmit argues (p. 104) that in order to identify narrative substances reliably we must “compare historiographical topics studied and discussed by generations of historians.” Fortunately the catalogers who maintain the Library of Congress Subject Headings (LCSH) [13] have done this for us, creating subject headings for historiographical topics and assigning them to historical texts. Using the LCSH is not ideal for investigating narrative substances, however. Catalogers usually do not identify more than a couple of the narrative substances constructed in a given text. Since they are concerned with characterizing whole books, they will not identify historical narratives in books that are not primarily works of historiography. And since the goal is to collocate texts, there is no effort to distinguish differences among the narrative substances constructed by different texts. In essence, what catalogers have done is group various narrative substances into types a priori.

Notwithstanding these problems, it is plausible that one could use a historiographical subject heading to obtain a set of texts within which authors have constructed comparable narrative substances. Ankersmit suggests that we can refer to narrative substances with terminology such as “Renaissance_1,” “Renaissance_2,” “Renaissance_3” and so on, where the subscript n indicates that we are referring to the specific representation of the Renaissance constructed in the historical text n. Likewise, we could posit that a set of n texts found under the LCSH for the “Renaissance” constitutes a set of narrative substances “Renaissance_1,” “Renaissance_2,” “Renaissance_3” ... “Renaissance_n.” We could then compare the propositions made in these texts to each other to build a model of the extension of the type “Renaissance.” This model could provide a basis for highlighting differences among the individual narratives constructed by the different texts. Though such a model would enable us to examine the structure of types already identified by the LCSH, it could not reveal new types not yet identified there. But we have to start somewhere.
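
A sketch of what such a model might look like, with hypothetical proposition sets standing in for the output of the extraction step described earlier:

    # Hypothetical proposition sets extracted from n texts cataloged under
    # the LCSH heading "Renaissance" -- i.e., Renaissance_1 ... Renaissance_n.
    renaissance = {
        "Renaissance_1": {"p1", "p2", "p3"},
        "Renaissance_2": {"p1", "p2", "p4"},
        "Renaissance_3": {"p2", "p5"},
    }

    # A simple extension model of the type: everything any narrative asserts,
    # plus the (possibly empty) common core shared by all of them.
    extension = set.union(*renaissance.values())
    common_core = set.intersection(*renaissance.values())

    def differences(a, b):
        """Propositions that distinguish narrative a from narrative b."""
        return renaissance[a] - renaissance[b], renaissance[b] - renaissance[a]

    print(sorted(extension))    # ['p1', 'p2', 'p3', 'p4', 'p5']
    print(common_core)          # {'p2'} -- may well be empty, as Ankersmit predicts
    print(differences("Renaissance_1", "Renaissance_2"))   # ({'p3'}, {'p4'})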

Whether or not initial attempts to automatically discern and model colligatory concepts prove successful, it is time information science paid closer attention to them. In the digital environment, traditional components of the scholarly apparatus such as term lists, classification and categorization schemes, and thesauri are evolving into generalized semantic tools for enumerating and disambiguating concepts and mapping the relations among them [14]. Though in the past such tools have mainly been used for indexing and retrieval, in an era of full-text search I believe we will see other applications move to the fore. Specifically, semantic tools that map a given conceptual domain can be integrated into reading and writing environments to help users contextualize some fragment of interest. Given such applications, “fuzzy” concepts reflecting particular interpretive stances are at least as important as traditional categorical concepts. Ankersmit and his predecessors’ theory of colligation provides grounds for investigation of such concepts. Done properly, such an investigation may lead to new tools (or new ways of using existing tools) for research that avoids reducing conceptions of the past to bare facts.

ACKNOWLEDGEMENTS – Support from the Institute of Museum and Library Services through award LG-06-06-0037-06 “Bringing Lives to Light: Biography in Context” and from the Advancing Knowledge grant PK-50027-07 “Context and Relationships: Ireland and Irish Studies,” jointly funded by the National Endowment for the Humanities and the Institute of Museum and Library Services, is gratefully acknowledged.



Resources Mentioned in the Article

Website links
DBpedia: http://dbpedia.org
Digging into Data: www.diggingintodata.org/
Freebase: http://www.freebase.com
Google: www.google.com
H-Bot: http://chnm.gmu.edu/tools/h-bot
Wikipedia: www.wikipedia.org
Wordnet: http://wordnet.princeton.edu

Other Resources
[1] Bizer, C., Lehmann, J., Kobilarov, G., Auer, S., Becker, C., Cyganiak, R., & Hellmann, S. (2009, September). DBpedia – A crystallization point for the Web of data. Journal of Web Semantics, 7(3), 154-165. Retrieved October 21, 2009, from http://dx.doi.org/10.1016/j.websem.2009.07.002. Preprint available at www.wiwiss.fu-berlin.de/en/institute/pwo/bizer/research/publications/Bizer-etal-DBpedia-CrystallizationPoint-JWS-Preprint.pdf.
[2] Robertson, B. (2009). Exploring historical RDF with Heml. Digital Humanities Quarterly, 3(1). Retrieved October 21, 2009, from www.digitalhumanities.org/dhq/vol/3/1/000026.html.
[3] Cohen, D. J., & Rosenzweig, R. (2005, December 5). Web of lies? Historical knowledge on the Internet. First Monday, 10(12). Retrieved October 21, 2009, from http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/1299.
[4] Morris, M. E. (1981). Professor RAMAC’s tenure. Datamation, 27(4), 195-198.
[5] Latour, B. (1987). Science in action. Cambridge, MA: Harvard University Press.
[6] Mink, L. O. (1966). The autonomy of historical understanding. History and Theory, 5(1), 24-47. Retrieved October 21, 2009, from www.jstor.org/pss/2504434.
[7] Danto, A. C. (1985). Narration and knowledge. New York: Columbia University Press.
[8] Walsh, W. H. (1951). An introduction to philosophy of history. London: Hutchinson’s University Library.
[9] Walsh, W. H. (1974). Colligatory concepts in history. In P. Gardiner (Ed.), The philosophy of history (pp. 127-144). Oxford: Oxford University Press.
[10] Hjørland, B. (2009). Concept theory. Journal of the American Society for Information Science and Technology, 60(8), 1519-1536. Retrieved October 21, 2009, from http://dx.doi.org/10.1002/asi.21082.
[11] Ankersmit, F. R. (1983). Narrative logic: A semantic analysis of the historian’s language. The Hague: Martinus Nijhoff.
[12] Lycan, W. G. (2008). Philosophy of language: A contemporary introduction (2nd ed.). New York: Routledge.
[13] Library of Congress. Library of Congress Subject Headings. Information about editions and formats available at www.loc.gov/cds/lcsh.html. Online search of LC subject headings available at http://authorities.loc.gov/.
[14] Hjørland, B. (2007). Semantics and knowledge organization. Annual Review of Information Science and Technology, 41, 367-405. Retrieved October 21, 2009, from http://dx.doi.org/10.1002/aris.2007.1440410115. Preprint available at http://dlist.sir.arizona.edu/2312/.
