University of Southern California Dental School Library. If no non-core title was available at any of these locations, it would be remotely stored instead of withdrawn. Other factors affecting the withdrawal of non-core titles were major researcher requirements, future needs of the health sciences collection, and other subjective factors. Rais explained that remote storage was currently unavailable due to a lack of funding, an absence of local collaborative efforts, and the unwillingness of other academic units to allocate space for library storage. Rais' solution was to box selected titles and put them on shelves in library storage rooms. The second list was based on usage statistics from 2000 to 2006 for current print and online journals. Three hundred titles accounted for 76 percent of the usage, and 825 print titles were targeted for conversion to electronic only. In 2008 and 2009, 300 and 119 titles, respectively, were converted to electronic only.
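The core of the second list is a simple cumulative-usage calculation: rank titles by use and see how many account for a target share of total usage. The Python sketch below illustrates that calculation with hypothetical data; the titles, counts, and threshold are placeholders for illustration, not Rais' actual figures or method.

    from typing import Dict, List

    def titles_covering_share(usage: Dict[str, int], target_share: float) -> List[str]:
        """Return the smallest set of top-used titles whose combined usage
        reaches target_share (e.g., 0.76) of total usage."""
        total = sum(usage.values())
        covered, running = [], 0
        for title, count in sorted(usage.items(), key=lambda kv: kv[1], reverse=True):
            covered.append(title)
            running += count
            if running / total >= target_share:
                break
        return covered

    # Hypothetical usage counts for illustration only.
    sample = {"Journal A": 500, "Journal B": 300, "Journal C": 150, "Journal D": 50}
    print(titles_covering_share(sample, 0.76))  # ['Journal A', 'Journal B']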

Rais deems the project a success. All statistics are located in a single place to facilitate ease of use. The storage or withdrawal of titles created more shelf space in the journal stacks for core print titles. Savings from converting to electronic only could amount to $50,000. Rais will continue to monitor usage of both electronic and print titles to gauge trends and has decided not to add new print titles unless absolutely necessary.

doi:10.1016/j.serrev.2009.08.011

2009 American Library Association Annual Conference: Reports of Selected Sessions

Shana McDanold

OCLC Enhance Sharing Session

Jay Weitz (consulting database specialist at OCLC) opened the session with a review of key points from the news section of an OCLC handout. Highlighted items included news that the draft of a new record use policy has been withdrawn; a new group is being formed to create a new policy with input from members. Regional service providers have now become service partners with OCLC, with OCLC billing conducted through service partners, while all help is made available through the main OCLC offices. Weitz reminded the audience that the recent research report, Online Catalogs: What Users and Librarians Want, is available online.1

Weitz then reviewed the upcoming revised Duplicate Detection and Resolution (DDR) software implementation and the testing that is currently underway in preparation for its release.2

Weitz began the general portion of the session with a few pre-submitted questions from Becky Culbertson, University of California, San Diego. Who is following LCRI 25.5B Appendix 1 regarding titles for television programs, with comprehensive titles and individual titles when episodes are not designed to be viewed consecutively?3 It was discovered that the Program for Cooperative Cataloging (PCC) participants and LC catalogers are following the LCRI, but not consistently. Catalogers may choose to follow the rule or not, although it is useful for applying order to records. It also makes sense to apply the LCRI to organize titles, especially when there is no formal title for an episode.

In answer to other questions from Culbertson, catalogers may remove generic URLs from records if a title-specific URL, accessible to everyone, is input in its place. While the Online Audiovisual Catalogers (OLAC) Best Practices for Streaming Media document gives no clear indication whether to use existing records or create a new record to accommodate an aggregator/vendor-neutral policy, which was not in existence when the document was created, catalogers can either edit existing records or create new records.4 Vendor-specific records will eventually be "neutralized" and combined. There is also no need to indicate a connection between different providers on different records.

Weitz then provided some additional information on the Provider-Neutral E-Monograph MARC Record policy.5 The policy will be implemented July 17 (update: now postponed until August 3). Currently there is no specific date for when OCLC will start combining existing records. The policy applies to all monographs, both textual and other, that have a 533 note. It will be a two-step process to combine the records. First, the 533 notes and provider-specific 710s will be removed from the records to "neutralize" them, and then the records themselves will be merged at a later date.
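As described, the first step strips the reproduction note (533) and provider-specific added entries (710) from each record. A minimal Python sketch of that step follows, with records modeled as simple lists of (tag, value) pairs; the provider names and the field representation are illustrative assumptions, not OCLC's internal implementation.

    # Hypothetical provider names used only for this illustration.
    PROVIDER_NAMES = {"NetLibrary", "ebrary", "EBSCO"}

    def neutralize(record):
        """Step one of the provider-neutral process as reported: drop 533
        reproduction notes and 710 added entries that name a provider."""
        kept = []
        for tag, value in record:
            if tag == "533":
                continue  # remove reproduction note
            if tag == "710" and any(p in value for p in PROVIDER_NAMES):
                continue  # remove provider-specific added entry
            kept.append((tag, value))
        return kept

    record = [
        ("245", "Example e-monograph"),
        ("533", "Electronic reproduction. Palo Alto, Calif. : ebrary, 2009."),
        ("710", "ebrary, Inc."),
    ]
    print(neutralize(record))  # [('245', 'Example e-monograph')]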

Next, Weitz discussed the status of the Expert Community Experiment.6 He reviewed the history and timeline for the experiment and reminded everyone that, if things go well, the process will continue indefinitely. He then updated the attendees that so far things are going well and that there has been a minimum of conflicts (i.e., "editing wars") occurring. Weitz clarified that BIBCO and CONSER records are excluded (records that contain a MARC 042 field), but that other LC records without the presence of an 042 field are available for editing. Other records that are currently partially excluded from the experiment include Cataloging in Publication (CIP) records (encoding level 8). These records can be edited, but the encoding level cannot be changed, as it is used as part of the LC replacement profile, unless the library changing the encoding level has National Level Enhance status. Weitz reported that OCLC is aware that encoding levels do not make much sense or have much logic to them anymore and that there is a discussion going on about possible revisions, including simplification and combining of current encoding levels.

Weitz briefly summarized common questions about the experiment and current participation numbers and record replace/activity statistics. Credits for the replace transactions that fall under the experiment will be decided once the 6-month period is over and will
be awarded in a bulk credit to the responsible library's account. Credits as a whole are slated to be re-evaluated within the next two to three years. OCLC recognizes that credits are a strong incentive to edit master records. Audience members voiced their concern that libraries are choosing to edit only locally if no credit is available, although multiple people raised the counter-argument that if a cataloger performs the edit, it may as well be done to the master record. An announcement regarding whether the experiment will continue can be expected in late August or early September.

Weitz then began a more detailed discussion of the Duplicate Detection and Resolution (DDR) software. From 1991 to 2005, the previous DDR software was run through WorldCat sixteen times. Once OCLC moved to their new platform (Oracle), the software had to be rewritten, and they have been working on it for the past four years. The new software is no longer limited to the books format; it covers all formats in OCLC. This past May, OCLC began running small batches of records through the software, gradually increasing the size of the batch, beginning with records in WorldCat that had no holdings. These records were compared to WorldCat and merged when possible. As part of the test phase, OCLC quality control has been checking every merge to make sure it is legitimate and adjusting the algorithms of the software as needed to fine-tune it. The duplication rate found has been as expected; around 4 percent to 7 percent of records are duplicates the software can detect. Weitz acknowledges that the process will not be perfect. They will continue to run small batches through the end of the year. They will begin a full run through the WorldCat database in late January 2010. The intention is that once a full run is completed, they will run the software continuously through changed/new records to check for duplicates against existing records. They will use roughly the same algorithms for batch loading and reclamation processes, hopefully reducing the number of records added in error.
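OCLC's actual DDR matching algorithms are proprietary and far more elaborate, but the general approach of grouping records on a normalized match key can be sketched in a few lines of Python; the key used here (normalized title plus date) and the sample records are illustrative assumptions only.

    from collections import defaultdict

    def match_key(record):
        """Build a crude match key: lowercased title with punctuation stripped,
        plus publication date. Real matching uses many more data elements."""
        title = "".join(c for c in record["title"].lower() if c.isalnum() or c.isspace())
        return (" ".join(title.split()), record.get("date", ""))

    def find_duplicate_groups(records):
        groups = defaultdict(list)
        for rec in records:
            groups[match_key(rec)].append(rec["id"])
        return [ids for ids in groups.values() if len(ids) > 1]

    sample = [
        {"id": "ocm001", "title": "Serials management.", "date": "2005"},
        {"id": "ocm002", "title": "Serials Management", "date": "2005"},
        {"id": "ocm003", "title": "Cataloging basics", "date": "1999"},
    ]
    print(find_duplicate_groups(sample))  # [['ocm001', 'ocm002']]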

ALCTS Cataloging and Classification Section Executive Committee "Hot Topics" Forum

Karen Coyle (digital libraries consultant) introduced and moderated this conference's ALCTS Cataloging and Classification Section Executive Committee's Hot Topics Forum titled "The Future of MARC." Coyle introduced the speakers and provided a brief timeline of the various discussions and articles covering the future of MARC from the year 2000 up to this point. The first speaker was Rebecca Guenther from the Library of Congress. She spoke on "Evolving MARC 21 for the Future." Guenther provided attendees with a brief review of MARC 21, describing it as a syntax defined by an international standard that includes both classic MARC (ISO 2709) and MARCXML (the communications piece). It is a data element set defined by content designation and semantics and ultimately is a communication format. Many elements of MARC are defined by external content rules and not by MARC. MARC 21 includes five different formats for different purposes: bibliographic, authority, holdings, classification, and community information.

Guenther then described the current environment as containing billions of descriptive records (some rich with data, some much more lean) that utilize a variety of cataloging rules. What makes MARC 21 so useful is that it allows the sharing of all those records no matter the rules used. National cataloging formats and rules have been harmonized with MARC 21, allowing the reuse, repackaging, and repurposing of records. That is one of the major successes of MARC: the ability to carry data formulated by a mix of rules and conventions. The records libraries have been sharing for over 30 years come in multiple languages and scripts, include different subject thesauri, and are cataloged using multiple different sets of rules. Yet MARC 21 offers widespread use and cost savings, as its richness supports multifaceted retrieval.

Despite the success and usefulness of MARC, there are problems. Guenther discussed a variety of problems, including ISO 2709 syntax issues, the limited availability of fields/subfields/indicators, the inclusion of redundant data, controlled values embedded in a standard, the limited ability to link data, the limits to extensibility, and the lack of explicit hierarchical levels. There has been progress. MARCXML is one example of progress in that it adds more options. Other improvements include linking subfields and source codes, the use of URIs (Uniform Resource Identifiers) for controlled values, and the repackaging of MARC into other data formats such as the Metadata Object Description Schema (MODS). Guenther believes that to streamline MARC for the future we must continue to take advantage of XML and what it offers, utilize compatible alternatives that simplify some of the problems, and develop tools to allow for interoperability with other schema. This will guarantee data continuity. Guenther then discussed MARCXML and its benefits and future uses, before moving into a discussion of MODS and its benefits.
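As a concrete illustration of the MARCXML repackaging Guenther described, the short Python sketch below builds a minimal MARCXML record (one control field and one title field) using only the standard library; the record content itself is made up for the example.

    import xml.etree.ElementTree as ET

    NS = "http://www.loc.gov/MARC21/slim"  # MARCXML namespace
    ET.register_namespace("", NS)

    def el(parent, name, text=None, **attrs):
        node = ET.SubElement(parent, f"{{{NS}}}{name}", attrs)
        if text is not None:
            node.text = text
        return node

    record = ET.Element(f"{{{NS}}}record")
    el(record, "leader", "00000nam a2200000 a 4500")
    el(record, "controlfield", "ocm00000001", tag="001")  # hypothetical control number
    title = el(record, "datafield", tag="245", ind1="1", ind2="0")
    el(title, "subfield", "An example title", code="a")

    print(ET.tostring(record, encoding="unicode"))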

Guenther discussed changes and experimentation with MARC data as it relates to RDA, FRBR, and the Resource Description Framework (RDF), as well as experimentation with linked data at LC that will expose vocabularies and data to a wider community. There are also issues to resolve, such as determining the purpose of the data, how rich and granular the data should be, controlled data versus transcribed data, the benefits of codes versus words (and vice versa), different models, presentation considerations, and determining the economic cost of change. Guenther ended by recognizing that, while it will take time, there are some ideas about how to move forward. The RDA/FRBR/RDF implementation changes to MARC 21 must progress. Libraries should consider transitioning to MARCXML for the exchange syntax; evolve MARC 21 as the data element set; find a use for MARC 21 beyond RDA; and find ways to assure data continuity so the rich information is not lost.

Ted Fons from OCLC spoke on "Beyond the Record: OCLC and the Future of MARC." He provided context for OCLC and the OCLC participation in and contributions to RDA, both internally and externally. Fons then discussed what OCLC was doing to move beyond MARC 21. His discussion centered on OCLC's Crosswalk Web Service, known as a common data format (CDF) hub model, used to move information in and out of WorldCat.7 The Crosswalk Web Service allows OCLC to translate metadata from one format into another and back again as needed in a lossless roundtrip. It is engineered to be reusable and abstract enough to handle any kind of metadata markup language. In the future, a user interface to the Crosswalk will be developed, but for now it is only an internal tool. Fons ended by discussing how OCLC is also looking beyond the record, pulling data from other formats. He used the example of WorldCat Identities to show how MARC data can be combined with data in other formats in a useful way.
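The hub-model idea, translating every format into a common internal representation and back out again, can be sketched generically; the two toy "formats" and field names below are assumptions for illustration, not OCLC's actual CDF or Crosswalk Web Service code.

    # Toy "formats": flat dicts with different field names for the same concepts.
    def marc_like_to_cdf(rec):
        return {"title": rec["245"], "creator": rec["100"]}

    def cdf_to_marc_like(cdf):
        return {"245": cdf["title"], "100": cdf["creator"]}

    def dc_like_to_cdf(rec):
        return {"title": rec["dc:title"], "creator": rec["dc:creator"]}

    def cdf_to_dc_like(cdf):
        return {"dc:title": cdf["title"], "dc:creator": cdf["creator"]}

    original = {"245": "Example title", "100": "Doe, Jane"}
    # Route through the hub: MARC-like -> CDF -> DC-like -> CDF -> MARC-like.
    roundtrip = cdf_to_marc_like(dc_like_to_cdf(cdf_to_dc_like(marc_like_to_cdf(original))))
    assert roundtrip == original  # lossless for the fields the hub models
    print(roundtrip)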

The third speaker was Amy Eklund from Georgia Perimeter College. She spoke on the MARC Content Designation Utilization (MCDU) project.8 MARC records are an artifact of the entire cataloging enterprise, and the MCDU project looks to amass empirical data about catalogers' use of MARC. Eklund described some of the findings of the project when it came to frequency counts for MARC fields. There are a significant number of unused fields and subfields, as well as a significant number of fields and subfields used only in a small subset of records. The project calculated commonly occurring elements, removing system-supplied fields, and found that only thirty-three fields and 141 subfields were commonly occurring in approximately fifty-six
million records. They also mapped commonly occurring elements to existing standards and FRBR. The project found some discrepancies between usage and the requirements of various standards.
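A frequency analysis of this kind reduces to counting, across a record set, how many records use each field and subfield. The Python sketch below shows the shape of that calculation on made-up records; it is not the MCDU project's code, and the "system-supplied" tags excluded here are an illustrative assumption.

    from collections import Counter

    SYSTEM_SUPPLIED = {"001", "005", "008"}  # example tags to exclude (assumption)

    def designation_counts(records):
        """Count, per field tag and per (tag, subfield) pair, the number of
        records in which that designation appears at least once."""
        field_counts, subfield_counts = Counter(), Counter()
        for rec in records:
            seen_fields, seen_subs = set(), set()
            for tag, subfields in rec:
                if tag in SYSTEM_SUPPLIED:
                    continue
                seen_fields.add(tag)
                seen_subs.update((tag, code) for code, _ in subfields)
            field_counts.update(seen_fields)
            subfield_counts.update(seen_subs)
        return field_counts, subfield_counts

    records = [
        [("245", [("a", "Title one")]), ("650", [("a", "Subject")])],
        [("245", [("a", "Title two"), ("b", "Subtitle")])],
    ]
    fields, subs = designation_counts(records)
    print(fields["245"], subs[("245", "b")])  # 2 1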

Eklund ended by stating the importance of empirically based information in the decision-making process for rules, standards, and policies. She also acknowledged that the results raise many unanswered questions, including how systems use these fields, what to do with fields used in less than 1 percent of records, and what the cost benefit of existing versus future practices is.

The final speaker was Diane Hillmann from the Information Institute of Syracuse University. She presented "Does MARC Have a Future?" and answered "yes," MARC has a future. Libraries need time to move legacy data from MARC into a new format. In addition, the community is still largely using MARC and needs to retain the ability to communicate data in that format. Lastly, librarians tend to think in MARC format.

Conversely, there are several reasons why MARC does not have a future. Hillmann argues that at some point there must be a "day one" or else costs will rise due to trying to maintain multiple systems indefinitely. MARC is best for flat files and not for the semantic relationships of today. Also, RDA cannot really be used with MARC if libraries are to move forward efficiently. Hillmann also describes a middle ground between the yes and the no. MARC has a future if a limited definition of the future is used, one that focuses solely on the next 5 to 8 years.

Based on those arguments, Hillmann offers a replacement. MARC data are in a silo and cannot be effectively communicated or shared with the wider world. RDA must be combined with XML encoding or expressed as RDF triples in order to share data with the wider community. RDA can be extended for specialized communities both inside and outside of the library world. Interoperability is the main reason data sharing will work. Libraries have a long history of sharing and exchanging data; the definition, again, needs to move beyond MARC.
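The idea of carrying bibliographic description as RDF triples rather than flat MARC fields can be shown with plain subject-predicate-object tuples; the vocabulary URIs below are invented placeholders for the sake of the sketch, not the actual RDA element set URIs.

    # A description expressed as (subject, predicate, object) triples.
    # The URIs are hypothetical placeholders standing in for RDA/RDF vocabularies.
    BOOK = "http://example.org/work/1"
    triples = [
        (BOOK, "http://example.org/vocab/title", "An example title"),
        (BOOK, "http://example.org/vocab/creator", "http://example.org/agent/doe-jane"),
        ("http://example.org/agent/doe-jane", "http://example.org/vocab/name", "Doe, Jane"),
    ]

    def objects_for(subject, predicate, graph):
        """Simple triple-pattern lookup: everything asserted about a subject."""
        return [o for s, p, o in graph if s == subject and p == predicate]

    print(objects_for(BOOK, "http://example.org/vocab/title", triples))  # ['An example title']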

Hillmann cites the Interoperability Levels for Dublin Core Metadata as support and a method for extending data beyond MARC.9 The four levels are: shared term definitions, formal semantic interoperability, description set syntactic interoperability, and description set profile interoperability. Utilizing those levels, MARC can be taken into the next step of a larger world of shared data. Hillmann concluded by explaining why RDA is a good first step in that its relationship vocabularies allow for machine manipulation of data, exposing the rich information of libraries to the world.

A question was asked about RDA vocabularies as a framework for description. The remainder of RDA is the text describing how to do things with those vocabularies, how to use them and describe the relationships. Fons was asked how stable the common data format is, and he described it as evolving and active as new data models are presented. Eklund was asked about how they can use the measurement of utilization as an indicator of usefulness and how she would generally characterize the fields and subfields never used. Eklund assured the audience that utilization is only a start; one must also look at format specificity and do an analysis of those narrower format definitions. When it comes to the fields and subfields never used, Eklund described them as diverse and of various ages within the history of MARC (many are obsolete, etc.). Coyle drew the session to a close with the statement by an attendee that librarians need to develop a clear vocabulary for present issues so that, moving forward, the same term is not used to mean different things, as has been done with the term "MARC."

ALCTS Catalog Form and Function Interest Group: Accentuating the "E-": E-resources in the Public Catalog

The ALCTS Catalog Form and Function Interest Group sponsored a panel to discuss how various libraries and consortia are drawing attention to the electronic resources available in OPACs. The first speaker was Michael Kreyche of Kent State University, speaking on "Spotlighting E-Resources in the Catalog." He started by giving a bit of background to put his presentation in context, explaining that Kent State's e-resources come from a variety of sources, and they are using the Millennium integrated library system both locally and within their consortia. To highlight the e-resources in their system, they make use of Millennium's scoping feature, which is a predefined set of limits that can be used at any point during a search.

When developing the scope for e-resources, there was a series of questions to answer in order to set the scope to retrieve everything they wanted, partially because there is overlap in material type. Questions Kent State addressed include what scopes to create and how to identify "online resources." The scopes were eventually decided on by public service librarians, who wanted a separate virtual location for all online resources. The main issue became question number two, which was much harder to solve. Kreyche explained that correctly identifying all e-resources in the system was problematic due to a number of factors, including changes in cataloging rules, so e-resources often are not identified consistently in the MARC coding.

Kent State decided to use the 856 field and its indicators to identify online resources. Kreyche explained the process of first fixing errors and analyzing the 856 fields currently in the system to make decisions about what to include in the scope, as URLs are present in both print and online records. The subset of records ultimately decided upon was given a new location code for the scope. Kreyche described the daily automated maintenance that is done to flag new e-resource records for the scope and add a holdings record with the scope location. The project has been very successful, and the scope has become one of the most heavily used scopes by both librarians and patrons.
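In spirit, the daily maintenance step is a filter over new records: inspect each record's 856 fields, decide from the indicators whether the link points to the resource itself, and assign the scope's location code. The Python sketch below illustrates that logic under stated assumptions (records as simple dicts, a second indicator of "0" or "1" meaning an online resource, and a made-up location code); Kent State's actual criteria and Millennium configuration are more involved.

    ONLINE_INDICATORS = {"0", "1"}   # 856 second indicator: the resource or a version of it
    SCOPE_LOCATION = "www"           # hypothetical location code for the e-resource scope

    def is_online(record):
        """True if any 856 field's second indicator marks the link as the resource."""
        return any(f["ind2"] in ONLINE_INDICATORS for f in record.get("856", []))

    def flag_for_scope(records):
        """Return holdings stubs (record id + scope location) for online records."""
        return [{"bib_id": r["id"], "location": SCOPE_LOCATION} for r in records if is_online(r)]

    new_records = [
        {"id": "b100", "856": [{"ind2": "0", "u": "http://example.org/ejournal"}]},
        {"id": "b101", "856": [{"ind2": "2", "u": "http://example.org/related-site"}]},
        {"id": "b102"},  # print record, no 856
    ]
    print(flag_for_scope(new_records))  # [{'bib_id': 'b100', 'location': 'www'}]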

Next was Steve Shadle of the University of Washington, speaking on "Surfacing Electronic Resources in WorldCat Local." He began with an overview of WorldCat Local and explained that the "localized" view is what incorporates your institution's local data into the display. But WorldCat Local also allows patrons to go beyond the library, allowing additional e-resource access services that would not typically be part of an OPAC.

Shadle noted that one way of allowing access to e-resources is through the facets provided on the search results screens. In terms of the format facets, "Internet resource" is one of the top five used by patrons. For local access information in the search results, OCLC pulls the local information either via a Z39.50 query or by using screen scraping to pull the HTML from the catalog. Interlibrary loan activity and e-resource clicks are through the roof, as the interface points users to e-resource access and, if there is no e-resource access, points them to ILL. The increased visibility of materials has also impacted ILL and e-resource use significantly.

Shadle explained that one of the benefits of WorldCat Local is the presence of additional links that normally would not be added to the catalog, such as CONTENTdm, URLs for open-access materials, Google Books, and Open Archives Initiative harvestable data such as image databases. He stresses that the focus is to get the user to the item or resource no matter what the method of access is; if it is not what the user wants or is insufficient, they can find another source via another search. There are still problems
with WorldCat Local. Shadle shared that there are many third-party record sets that are not part of WorldCat Local and, therefore, users do not have access to them. A list of these third-party resources is kept and is constantly updated. Shadle concluded by stating that the ultimate goal is to make WorldCat Local "one-stop shopping" for users.

Kristin E. Martin and Kavita Mundle from the University of Illinois at Chicago (UIC) were the last speakers. The title of their presentation was "That Didn't Go Quite As Planned: Obtaining and Improving E-book Records in a Consortia Environment." There were two parts to their presentation: the background/process/challenges and the display. The presentation focused on two consortial purchases, from Springer and Ingram/Coutts. Their process begins with an evaluation of the MARC records. This can be time-consuming and may involve multiple problems. Once the records are analyzed utilizing MarcEdit and Excel, they make a list of identified problems and let the vendor know. The vendor then corrects the problems and sends them a new set, and the cycle continues until things come through the analysis process cleanly. Some of the errors include MARC field errors or the presence of obsolete fields, missing or duplicate URLs, and quality control issues such as consistency in treatment of multi-part sets/titles, authority control, and diacritics. The majority of the local customization is done via the MarcEdit tool once the record set is through the analysis and correction process.
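The evaluation step Martin and Mundle describe is essentially an automated pass over a vendor's record file that flags problems for the vendor to fix. A small Python sketch of such a pass follows; the specific checks (a missing or duplicated 856 URL, the presence of the obsolete 440 field) are examples chosen for illustration, not UIC's actual checklist, which was worked through with MarcEdit and Excel rather than custom code.

    OBSOLETE_TAGS = {"440"}  # example: 440 was made obsolete in favor of 490/8XX

    def check_record(rec):
        """Return a list of human-readable problems found in one vendor record,
        modeled here as a dict mapping tags to lists of field values."""
        problems = []
        urls = rec.get("856", [])
        if not urls:
            problems.append("no 856 URL")
        if len(urls) != len(set(urls)):
            problems.append("duplicate 856 URLs")
        for tag in OBSOLETE_TAGS & rec.keys():
            problems.append(f"obsolete field {tag} present")
        return problems

    batch = [
        {"245": ["Title one"], "856": ["http://example.org/1", "http://example.org/1"]},
        {"245": ["Title two"], "440": ["Old series statement"]},
    ]
    for i, rec in enumerate(batch):
        print(i, check_record(rec))
    # 0 ['duplicate 856 URLs']
    # 1 ['no 856 URL', 'obsolete field 440 present']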

For public display there are actually three catalog interfaces for the UIC community: the OPAC (Voyager), VuFind, and WorldCat Local. They are also part of the I-Share consortial database. Because of the multiple interfaces, they place URLs in holdings records rather than in the bibliographic records for flexibility of display. They have the additional challenge for WorldCat Local of having to load all their electronic resource holdings for them to display, which can be an issue depending on the licensing agreement permissions.

The presentations were followed by a brief question-and-answer session. An attendee inquired about the lag time for loading e-resource records into a system. Several presenters use an e-serials MARC record service to guarantee regular updates, and there was general agreement to try loading records as quickly as possible. It was pointed out that records are visible when loaded into WorldCat, but access may not be, depending on the set-up of the particular library. There are actually multiple levels of display in WorldCat Local, determined by the local library: the library's own holdings, consortial holdings, and, finally, the world display.

PCC Participants' Meeting

David Banush, associate university librarian for access services at Brown University and PCC Policy Committee chair, welcomed attendees to the meeting and introduced the guest speaker, R. David Lankes from the Syracuse University iSchool. His presentation was entitled "The Death of the Document." Slides and a screencast of the entire session are available on his Web site.10

Lankes acknowledged that the "death" of the document is a bit of an exaggeration, and "morbid injury" may be more accurate. He then moved into how using functional equivalencies can be dangerous. Human beings look for patterns and similarities to perform tasks, but differences based on the nature of the tasks impart a function for an object. Thus a physical book is not functionally equivalent to a Kindle. We build frames and develop expectations of patterns, and functional equivalencies become dangerous if we do not understand the context of what is going on around the objects/tasks. One misleading functional equivalency is that the single OPAC search box (or metasearch box) is equivalent to the Google search box. Putting the two in context shows that the underlying purpose/task of each search box is fundamentally different. Google is a discovery tool; OPACs and catalogs, on the other hand, are inventory tools.

Next he discussed ALA's core competencies of librarianship, pointing out that they are a big list of "thou shalts" without ever answering why each one is important. As a profession, librarians need to move away from the library as a collection and towards the library as scholarship, from dissemination of information to action, and from sharing to social good. Knowledge is not a "thing" to be shared, but rather an action. To illustrate how important it is to be aware of how something is used, as opposed to just the object, Lankes presented four options for dealing with non-document data. First, librarians can ignore it, but then commercialization of that data becomes a problem. Second, librarians can exclude it, but that brings up the debate around selection, censorship, and intellectual freedom. Third, librarians can catalog it all, but that has already been attempted with minimal success. Fourth, and the only real option, librarians can embrace it. Lankes stated that we cannot record knowledge because knowledge is an inherent human activity. It is not part of the artifact/object, but rather resident in the individuals who use and interact with the artifact/object. We cannot control or record that interaction because it is unique to each individual; it is how sense is made of the world and the objects around us.

That knowledge comes from our establishment of relationships and the seeking of agreements and drawing of distinctions between artifacts/objects. The context for something, therefore, comes from outside, not from the object/artifact itself. But the creation of all this knowledge creates an environment Lankes refers to as the "tyranny" of the item. Each object has massive amounts of data for each small piece, not to mention the meta-metadata (the metadata about the object metadata).

Lankes transitioned into a description of different ways we try to conquer that tyranny. There is the system view, which is much like an ILS; the user-based design, which is like an operating system; and, finally, user systems, otherwise known as Web 2.0. It is necessary to now start capturing that in-between, looking at things as relationships rather than lists. An object/artifact/document may now live in multiple different places at once. It's important, however, to look beyond that object into the why and how of it rather than just the what. That mission is not the functional mission of librarianship's past. Lankes ended his presentation with the assertion that the mission of libraries "is to improve society through facilitating knowledge creation in their communities."

The presentation was followed by a brief question-and-answer session. Lankes again asserted that libraries can link things in a transparent way because they are non-competitive and have no need to create proprietary systems. What is needed is to utilize inventory systems and build discovery systems that access them without relying on them to establish relationships. He also asserted that the assumption when building discovery systems should be that people know what they want, rather than what is happening now, which is assuming that users are trying to discover what they want. The focus needs to move from the discovery process to the user's end result.

An attendee inquired as to what the fundamental differences between a Kindle and a book are. Lankes recognizes that, while things like digital rights management are limiting use, the Kindle is a nascent technology, with the linking and commenting options as built-in tools. The relationship between objects becomes embedded in the Kindle environment, whereas in the physical world that relationship is never connected to the object itself. The Kindle is
a seamless combination of multiple technologies, linking not only objects, but people as well. This moves the reading process from passive to an active scholarly communication process. The exposure that the tools of formats like the Kindle provide is essential in moving knowledge from internal to external. The final question posed to Lankes was to explain the title of his presentation in light of his statement that it is not entirely accurate. Lankes responded that ultimately we need to expand the definition of the document. The document is not dying, but rather document-like objects/artifacts are being built on relationships and social networks.

Notes

1. “Online Catalogs: What Users and Librarians Want,” 2009, http://www.oclc.org/reports/onlinecatalogs/fullreport.pdf (accessed August 13, 2009).

2. "Re-implementing Duplicate Detection and Resolution (DDR)," 2009, http://www.oclc.org/news/announcements/announcement369.htm (accessed August 13, 2009).

3. "25.5B Conflict Resolution," http://www.loc.gov/catdir/cpso/25_5b.pdf (accessed August 13, 2009).

4. OLAC Cataloging Policy Committee Streaming Media Best Practices Task Force, "Best Practices for Cataloging Streaming Media," 2008, http://www.olacinc.org/drupal/capc_files/streamingmedia.pdf (accessed August 13, 2009).

5. Becky Culbertson, Yael Mandelstam, and George Prager, "Provider-Neutral E-Monograph MARC Record Guide," Washington, DC: Program for Cooperative Cataloging, 2009, http://www.loc.gov/catdir/pcc/bibco/PN-Guide.pdf (accessed August 13, 2009).

6. “Expert Community Experiment,” http://www.oclc.org/worldcat/catalog/quality/expert/ (accessed August 13, 2009).

7. “OCLC Crosswalk Web Service Demo,” http://www.oclc.org/research/researchworks/xwalk/ (accessed August 13, 2009).

8. "MARC Content Designation Utilization," http://www.mcdu.unt.edu/ (accessed August 13, 2009).

9. Mikael Nilsson, et al., "Interoperability Levels for Dublin Core Metadata," 2009, http://dublincore.org/documents/interoperability-levels/ (accessed August 13, 2009).

10. R. David Lankes, “The Death of the Document,” 2009, http://quartz.syr.edu/rdlankes/blog/wp-trackback.php?p=765 and http://quartz.syr.edu/rdlankes/blog/wp-trackback.php?p=768 (accessed August 13, 2009).

doi:10.1016/j.serrev.2009.08.010
