


College & Undergraduate Libraries. Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/wcul20

Creating an Actionable Assessment Framework for Discovery Services in Academic Libraries. Kim Durante and Zheng Wang, Robert W. Woodruff Library, Emory University, Atlanta, Georgia, USA. Published online: 08 Aug 2012.

To cite this article: Kim Durante & Zheng Wang (2012) Creating an Actionable Assessment Framework for Discovery Services in Academic Libraries, College & Undergraduate Libraries, 19:2-4, 215-228, DOI: 10.1080/10691316.2012.693358

To link to this article: http://dx.doi.org/10.1080/10691316.2012.693358


College & Undergraduate Libraries, 19:215–228, 2012. Copyright © Taylor & Francis Group, LLC. ISSN: 1069-1316 print / 1545-2530 online. DOI: 10.1080/10691316.2012.693358

Creating an Actionable Assessment Framework for Discovery Services in Academic Libraries

KIM DURANTE and ZHENG WANG
Robert W. Woodruff Library, Emory University, Atlanta, Georgia, USA

This article suggests an assessment framework, currently being implemented at Emory University Libraries, for the evaluation of Web-scale search and retrieval tools. This framework seeks to prioritize limited library resources with the distinct goals of improving end-user satisfaction and streamlining core services for Web-based research. Critical components of this framework are Key Performance Indicators (KPIs) and standardized feedback-gathering solutions. The authors hope to provide peers with a set of customer-centered metrics as well as an agile and actionable user-testing methodology for decision-making in the deployment of discovery tools.

KEYWORDS Discovery services, evidence-based practice, key performance indicators, metrics, next-generation library catalogs

INTRODUCTION AND LITERATURE REVIEW

Many academic libraries have implemented discovery platforms (Ballard 2011), also referred to as Web-scale discovery services (Vaughan 2011) or next-generation catalogs (Yang and Wagner 2010). This trend can be directly attributed to faculty and students' increasing dependency on and trust in search engines to support learning and research (Olle and Borrego 2011; Rieger 2009; Schonfeld and Guthrie 2006; Mostafa 2005). The capability of discovery platforms to aggregate vast arrays of local and remotely hosted content into one search interface offers users instant gratification and convenience similar to search engines.

Received 17 January 2012; reviewed 21 March 2012; accepted 27 April 2012.

Address correspondence to Kim Durante, MSIS, Metadata Librarian, Robert W. Woodruff Library, Emory University, 540 Asbury Circle, Atlanta, GA 30322-2870. E-mail: [email protected]


Although these platforms perform much better than traditional search tools in terms of user population and engagement (Ballard 2011; Yang and Wagner 2010), academic libraries most often deploy these discovery layers as overlays to their classic, MARC-based integrated library system online public access catalogs (OPACs) (Yang and Wagner 2010). The familiarity and advanced search expertise mastered through years of using OPACs are perhaps a major reason for this practice. However, this add-on implementation model raises an interesting and very different kind of challenge for particular segments of users as well as for academic library administrations.

Presenting a multitude of seemingly heterogeneous search options defeats the central goal of discovery platforms, which is to simplify the search environment for users. Libraries generally provide access to subscription databases and electronic journals through a variety of channels, yet years of system-supplied search statistics indicate that a number of users expect to find full-text articles within the bibliographic catalog. Providing research tools that include database deep-search capabilities poses particular issues in regard to response times, clarity of the resources being searched, metadata, and a sometimes-mystifying set of search results. Compared with the centrality of article search sites such as Google Scholar, users appear far less content searching through a variety of unfamiliar and seemingly less intuitive tools (Olle and Borrego 2011, 225). As is often the case, the search scopes of certain tools overlap, and thus the learning curve to master the nuances of each system can be incredibly steep. Emory University Libraries currently administers two applications that search the library's main holdings: the online catalog and a discovery layer powered by the ExLibris Primo software. Emory also facilitates article discovery using three very different applications: Primo Central, Metalib, and eJournals@Emory (the local electronic journals finder). Primo Central can search some full-text articles, but not all. Metalib can simultaneously search for full-text content within multiple databases, but its overall scope and performance are suboptimal. eJournals@Emory can search for titles but not the full text of individual articles.

For libraries, the challenge is the cost of maintaining multiple tools with similar functionalities. Monetary, hardware, software, and staff resources are allocated to support those tools on an annual basis. Taking staff resources as an example: because the search environment is so complex, libraries often schedule classes to teach those tools, and Emory is no exception. According to Mort and Brown (2007), most researchers are confident about using a wide range of search and discovery tools despite the fact that they have been largely self-taught in the use of these tools. Perhaps by simplifying the search environment, libraries may reallocate instructional resources toward activities deemed valuable by users. Given shrinking library budgets, the administration of several related search tools is becoming unsustainable.


This article provides peers with a case study of the approach Emory University Libraries has implemented for the purpose of unifying the search experience and creating a sustainable and scalable discovery service model powered by ExLibris' Primo software. The study focuses on customer-centered and evidence-based methodologies to guide decision-making aimed at driving users from individual search tools to the unified environment. The unique contribution of this article is to present a metrics framework to evaluate the performance of any discovery platform along with any related services. The past literature on library statistics has often concentrated on collection use and development, and few studies have recommended Key Performance Indicators (KPIs) for monitoring and driving user adoption of search tools, especially next-generation platforms.

As these more sophisticated tools become an integral part of the research process, and thus part of a library's core business, effective optimization of these search platforms should be one of the organization's core competencies. We hope to perfect a model that creates for users a unified and intuitive environment in which to find library materials, while easing financial burdens and freeing up resources for implementing such search tools.

ASSESSMENT FRAMEWORK

Standard behaviors in Web research have created a series of disconnects between the librarian and the user. Often, a researcher's experience in the digital environment is a largely unchecked measure of success because many institutions are still concentrated on gathering statistics based upon the physical acquisition and use of library materials. The library saw a need to develop a relevant set of user assessment methodologies centered on Web research activity. Gauging user interaction in the digital research domain should rely on statistical tools and applications that replicate the environment where users are interacting most. Using Web-scale assessment tools provides a level of sophistication and data standardization that is often difficult to achieve in the physical environment and through traditional modes of usability testing.

Implementation of a Web-scale assessment framework should begin with a discussion of major strategic goals for the library's core search technologies. This will assist in envisioning the local search environment as a sum of its applications. In order to remain responsive to user-driven requirements, a cycle of data gathering, improvement, and deployment must reflect the needs of an incredibly diverse user base. The process of setting goals should inform all data collection methodologies and product improvement cycles.

After a series of organizational discussions, Emory made a strategic technological decision to focus all future developments and service implementations on the Primo discovery layer, as opposed to making any future customizations to the online catalog. For administrators and systems librarians, the ongoing maintenance and development of two catalog interfaces are not sustainable. For users, the option to select from one of two systems that search the library's main holdings is often a confusing choice. As a central strategic goal, this initiative intends to redefine the notion of the library's "catalog" and to migrate OPAC users into the Primo discovery environment. It is worth mentioning that transitioning the main search application from the OPAC to the next-generation discovery platform does not warrant the replication of the full set of integrated system functionalities inside the new catalog. Rather, understanding the inherent strengths and capabilities of Primo as an aggregate of data and services can lead to the optimization of a Web-scale discovery layer relevant to the research needs of the academic community.

Because the Primo platform is capable of providing access to content other than integrated system holdings, Emory has established a secondary and longer-term goal to provide greater efficiency in article discovery from within the next-generation catalog. Implementation of this goal will require a considerable amount of competitive intelligence gathering, as well as data analysis focused primarily on Metalib and Primo Central. This ongoing initiative will be informed by much of the user data collected during the execution of the first goal.

Driving patron traffic to a new search environment requires building patron loyalty to that product. Tools that measure activity across all systems can highlight the parts of each system that succeed or fail during the research process. By examining these research events systematically, Emory was able to create a set of key performance indicators through which it could measure various aspects of patron loyalty:

• Size of User Base
• User Satisfaction Rate
• User Engagement
• Task Completion Rate

Each performance indicator can be formulated using a combination of quantitative and qualitative statistics gathered from the framework of Web-based assessment tools. All metrics data can be compiled into a spreadsheet for enhanced analysis. Segmenting data into monthly reports can help synchronize all sources of Web activity and provide a baseline set of measurements, which can then be used in benchmarking. While the framework is meant to provide a performance picture of the entire suite of core services tools, it is also scalable enough to allow for its application to smaller, more specific initiatives.
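As a minimal sketch of how such a monthly KPI compilation might look before export to a spreadsheet (the system names follow the article, but all field names and counts are illustrative assumptions, not Emory's actual data):

```python
from dataclasses import dataclass

@dataclass
class MonthlySystemStats:
    """Raw monthly counts pulled from Web analytics and system logs."""
    total_visits: int
    unique_visitors: int
    return_visits: int
    searches: int

def kpi_row(name: str, stats: MonthlySystemStats) -> dict:
    """Derive the ratio-based KPIs from one system's raw counts."""
    return {
        "system": name,
        "unique_visitor_pct": stats.unique_visitors / stats.total_visits,
        "return_visit_pct": stats.return_visits / stats.total_visits,
        "searches_per_visit": stats.searches / stats.total_visits,
    }

# Hypothetical monthly numbers for the two catalogs.
primo = MonthlySystemStats(total_visits=42_000, unique_visitors=18_500,
                           return_visits=23_000, searches=96_000)
opac = MonthlySystemStats(total_visits=35_000, unique_visitors=16_000,
                          return_visits=17_500, searches=71_000)

for row in (kpi_row("Primo", primo), kpi_row("OPAC", opac)):
    print(row)
```

Keeping the raw counts separate from the derived ratios makes it straightforward to recompute baselines as additional months of data arrive.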

Dow

nloa

ded

by [

New

cast

le U

nive

rsity

] at

07:

46 0

6 M

ay 2

014

Assessment Framework for Discovery Services 219

SIZE OF USER BASE

Developing a virtual picture of a user population is an important preliminary data point that facilitates the calculation of subsequent metrics for product assessment. The indicator measuring the overall size of a product's user base can be represented in terms of average total users, average number of unique visitors, and average number of searches within each application.

Keeping standardized metrics for this indicator requires the use of Web analytics software and system-supplied metrics in order to track visitor activities. Google Analytics is a free service that collects a wealth of data about users and their behaviors on Websites. This software provides access to a dashboard of customer data that can be analyzed according to a variety of user dimensions, including demographic characteristics and technology preferences.

Web analytics are a standard component of most e-commerce sites and are commonly used to measure rates of conversion or purchase, as well as navigational and content efficacy. The main objectives of library catalogs and search applications are noticeably different from business or information Websites. Usually, these tools do not have a direct monetary component, nor are they intended to support intricately architected layers of HTML content-based Web pages. Despite these dissimilarities, search-driven sites do possess underlying statistical indicators of ongoing success and patron loyalty.

In order to collect and compare elements of user activity within both the Primo and OPAC environments, the library records analytics for the total number of visitors to each search application. In order to establish more distinct user profiles for each search tool, the library also measures the total number of unique visitors to each site. Measuring the percentage of unique visitors in addition to total overall visitors can give a better picture of how each system is performing.

Successful monthly metrics for this indicator would reveal the following:

• Percentage of Total Visitors in Primo > Percentage of Total Visitors in OPAC
• Percentage of Unique Visitors in Primo > Percentage of Unique Visitors in OPAC

The predominance of search as a primary function of each tool necessitates a comparison between the overall numbers of queries conducted within each product. System or server logs from each proprietary piece of software will usually supply the necessary statistics for this metric. Tracking the overall search activity on each site will supply a foundational metric for any future targeted analysis. A successful monthly metric would reveal the following:

• Total Number of Searches in Primo > Total Number of Searches in OPAC

Search statistics can be valuable when viewed in combination with visitor analytics.

In order to gather the preliminary data for its article-searching study, Emory also tracks the following:

• Unique Visitors to Metalib vs. Unique Visitors to Primo Central
• Total Number of Searches in Metalib vs. Total Number of Searches in Primo Central

After several months of data collection, raw numbers can be calculated into averages, establishing the basis for trend analysis using percentages or ratios (see the sketch following the list below). The extent to which the indicator metrics succeed or fail should provide the framework for developing actionable tasks aimed at the continuous improvement of Primo's usability. Resulting actions might be technical or may involve marketing and outreach activities. Other more distinct user preferences within each tool might be worth investigating once basic search statistics are in place:

• Percentage of Advanced Searches compared to Total Number of Searches Overall

• Popularity of Specific Search Indexes
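A hedged sketch of the trend analysis described above, turning raw monthly counts into a baseline average plus month-over-month growth ratios (all figures are invented for illustration):

```python
from statistics import mean

# Hypothetical monthly search totals for each system, oldest first.
primo_searches = [80_000, 85_000, 91_000, 96_000]
opac_searches = [78_000, 75_000, 73_000, 71_000]

def trend(series: list[int]) -> dict:
    """Baseline average and the mean month-over-month growth ratio."""
    ratios = [cur / prev for prev, cur in zip(series, series[1:])]
    return {"baseline_avg": mean(series),
            "avg_monthly_growth": mean(ratios)}

print("Primo:", trend(primo_searches))  # growth ratio above 1.0 is the target
print("OPAC:", trend(opac_searches))    # expected to trend below 1.0
```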

USER SATISFACTION RATE

While the concept of satisfaction usually applies to certain qualitative ideas of happiness and fulfillment, there are a handful of supplemental statistics that can provide general indicators of satisfaction or continued loyalty.

Google Analytics offers a number of advanced reporting and customer segmentation techniques that can measure users' growing trust in a search application. However, for the initial implementation of the framework, Emory chose to focus on two easily accessed Web analytics that indicate overall user satisfaction. The foundational metric in this indicator is the total number of return visitors for a given month (Connaway and Dickey 2010). This metric can be presented as a percentage when factored against the total number of visits per month. An increase in the percentage of return visitors to a site should indicate a growing loyalty to that product (Burby and Brown 2007). Conversely, percentages of return visitors to the OPAC and stand-alone search tools should decrease in concordance with increasing use of the discovery layer:

• Percentage of Return Visitors to Primo > Percentage of Return Visitors to OPAC

Drilling further into the analytics dashboard, the Google Analytics Visitor Loyalty report shows how often users return to the site. Ideally, a satisfactory discovery service should compel users to return to the service routinely throughout their academic careers. Therefore, the number of visitors who frequently return to the service should be a percentage targeted for growth. It may require several months of data collection to establish an average rate of return before a true metric indicating growth can be calculated for a given community (see the sketch after the following list):

• Calculate Average Frequency of Returning Users (for each system)
• Average Return Frequency to Primo > Average Return Frequency to OPAC
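One possible way to compute the average return frequency, assuming the loyalty report can be exported as a histogram of visits-per-visitor buckets (the bucket values and counts below are hypothetical):

```python
# Hypothetical export of a visitor-loyalty report: number of visitors
# bucketed by how many times they visited during the month.
primo_loyalty = {1: 9_000, 2: 4_200, 3: 2_100, 5: 1_800, 9: 1_400}
opac_loyalty = {1: 10_500, 2: 3_300, 3: 1_300, 5: 600, 9: 300}

def avg_return_frequency(loyalty: dict[int, int]) -> float:
    """Mean visits per visitor, weighted by the size of each bucket."""
    total_visits = sum(visits * visitors for visits, visitors in loyalty.items())
    total_visitors = sum(loyalty.values())
    return total_visits / total_visitors

print(f"Primo: {avg_return_frequency(primo_loyalty):.2f} visits/visitor")
print(f"OPAC:  {avg_return_frequency(opac_loyalty):.2f} visits/visitor")
```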

Examining digital statistics alone will probably not result in a true satisfaction metric. It is recommended to employ some form of qualitative data gathering in order to realistically address the major friction points in online service arenas. User testing on the functional attributes of a search system can help to identify confusing terminology or potentially perplexing navigational paths to discovering resources and library services. Short observational tests might evaluate a user's ability to find relevant material in aggregated search environments or to perform account-related tasks from within the discovery interface without the assistance of circulation staff. Aside from functional testing, the library has had to refactor its help and feedback-gathering methods to account for an increasingly independent online researcher base. Emory has implemented a third-party application, Uservoice, in order to manage end-user help and support requests in Primo. The decision to move to the Uservoice software was predicated on its ability to provide internal and external improvements in help routing. There is an administrative dashboard, similar to Google Analytics, which assists in tracking and assigning each ticket. Uservoice also provides a communication thread directly with the visitor in case more information is needed.

There are several attractive social mechanics built into the Uservoice front end, such as the universal Help and Support button, located in the same place on every Web page. Clicking the help button opens a "lightbox" application laid over the Primo user interface. This enables users to seek help without exiting the discovery application. The Uservoice add-on includes a keyword search-as-you-type knowledge base which, in the case of repeated problems, offers immediate help documentation on common issues and known solutions (see Figure 1).


FIGURE 1 Uservoice application opens on top of the current application. The searchable knowledge base displays help answers and solutions to common problems at the point of service.

This document database can be created from published support requests, and other content can be locally created to form a frequently asked questions section. Uservoice also allows patrons to submit enhancement requests that are publicly visible and can be voted on by other users. The combination of system analytics with a user feedback channel allows qualitative statements to be funneled into topical data categories, and it assists in the tracking of help or enhancement requests and in measuring internal response times.

USER ENGAGEMENT

A key performance indicator measuring user engagement may take several forms based upon the strategic goals for a particular library service. Measuring visitor engagement using the time-on-site metric for a search-based system is considered too complex for the initial scope of the framework. While the average time spent on a site is an important metric, site duration may represent a number of behind-the-scenes activities that are indicative of very different research behaviors. Instead, Emory chose to monitor user engagement through event activities associated with specific Web or catalog services within each of its core search tools.

The percentage of total visits that result in a sign-in can serve as a reference point for one central aspect of patron engagement. It is also worthwhile to monitor the common navigational paths through which users arrive at the login link. The Primo sign-in feature provides direct electronic access to subscription databases, electronic books, and journals, as well as to individual patron account information, all from inside the user interface. Using the Site Overlay feature in Google Analytics will indicate the popularity of the sign-in feature versus other interface elements based on percentages of click-throughs. Efforts toward increasing login activity are largely rooted in the long-term strategic objective of employing Primo as an article and database searching service. However, Primo offers some local patron account functionalities that are more dynamic than traditional OPAC services. A metric for gauging login activity can be represented as the following:

• Percentage of Sessions Resulting in Sign-In = Number of Sign-Ins/Number of Total Visitors

When compiling user engagement statistics, event-tracking metrics can be designed to measure the effectiveness of specific features. Creation of an event analytic should first take into account the overall number of clicks on a particular link. These event-based clicks can be factored against the total number of sessions where the event occurred and then finally against the total number of visitors. Successful implementation of a service should reveal a higher click ratio per session for a specific event, and these engagement percentages should increase as efforts are made to promote these features.
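The two-level ratio just described might be computed as follows; the function name and the sample figures for the sign-in event are illustrative placeholders rather than real Primo numbers:

```python
def event_engagement(event_clicks: int, sessions_with_event: int,
                     total_visitors: int) -> dict:
    """Two-level engagement ratio for one tracked interface event."""
    return {
        # Depth: how intensively the event is used within its sessions.
        "clicks_per_session": event_clicks / sessions_with_event,
        # Reach: how much of the visitor base touches the event at all.
        "reach": sessions_with_event / total_visitors,
    }

# Hypothetical counts for the sign-in event over one month.
print(event_engagement(event_clicks=5_400,
                       sessions_with_event=4_800,
                       total_visitors=42_000))
```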

Primo's electronic bookshelf feature, known as e-shelf, allows patrons to save or send item information using a selection of Web services. The electronic shelf has functional resemblances to the shopping cart or basket features in popular e-commerce sites. Sign-in is not required to use certain e-shelf features; however, data is not retained past a single session for guest users. Emory has chosen to focus on the user base of the e-shelf service as another important aspect of product engagement, including how often visitors return to the e-shelf pages and what they accomplish within the appliance. Records saved in an e-shelf can be e-mailed, exported to citation services, or saved to Zotero. All activity is tracked according to the e-shelf user base using the event-based clicks within that service. Event metrics should be formulated according to the objective of the service, for example:

• Percentage of e-shelf Use = Total Number of Sessions Employing e-shelf/Total Number of Visitors
• Items Saved to e-shelf > Unique Visitors Clicking Save to e-shelf

Methods of refining searches differ considerably between the two catalogs. Primo provides postfiltering ability for result sets using a variety of facets. The system statistics provide a wealth of data regarding the use of these facets, including the total number of searches that result in faceting, as well as the popularity of individual facet categories (Creator, Topic, Resource Type, Home Library, etc.), measured by the number of clicks (see Figure 2).


FIGURE 2 Primo system-supplied chart showing use of facet categories.

Another interesting data point is the popularity of specific facet values, such as the name of a particular library, a subject, full-text online only, or a resource type (book, journal, video, etc.). These individual facet values can be monitored in conjunction with library-wide marketing initiatives as a way to measure the effectiveness of promotional content and external references to library collections. As these values are relatively simple to track for numbers of clicks, types, and textual values, examining them can inform the creation or modification of search refinements (a sketch follows the list below):

• Percentage of Searches Employing Facets = Click-Throughs to Refine/Total Number of Searches
• Popularity of Specific Facet Categories
• Popularity of Specific Facet Values
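A small sketch of how a raw facet-click log could be rolled up into these three metrics; the log format and all values are assumptions for illustration:

```python
from collections import Counter

# Hypothetical refinement-click log: one (facet_category, facet_value)
# tuple per click on a facet link, taken from system statistics.
facet_clicks = [
    ("Resource Type", "book"), ("Resource Type", "video"),
    ("Topic", "chemistry"), ("Resource Type", "book"),
    ("Home Library", "Woodruff"), ("Topic", "chemistry"),
]
total_searches = 50  # searches recorded over the same period

# Clicks per search approximates the faceting rate; a session-level
# export would be needed to count each refined search only once.
print(f"Facet clicks per search: {len(facet_clicks) / total_searches:.0%}")
print("Facet category popularity:", Counter(cat for cat, _ in facet_clicks))
print("Facet value popularity:", Counter(facet_clicks).most_common(2))
```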

The Primo catalog provides several options for content enrichment of existing records through the use of Web services or Application Programming Interfaces (APIs). The Google Analytics Site Overlay provides a dynamic picture of a Web page, complete with click-through percentages displayed over the various hyperlinks. Links to full-text online versions, as well as supplemental information such as tables of contents, previews, or excerpts, are made available through providers such as Amazon, Google Books, or WorldCat from within Primo's detailed record view. These forms of content enrichment are popular with users desiring to access and evaluate material quickly (Connaway and Dickey 2010). With a growing number of APIs and content enrichment services available (HathiTrust, Syndetics Solutions, LibraryThing reviews), knowing how these current Web services are routinely engaged should influence future optimization efforts as well as inform higher-level strategic decisions such as in-house cataloging levels or selecting items for off-site storage.

This metric is applied to each specific Web service:


• Web Service Usage = Number of Click-Throughs to Web Service/Number of Record Views

Because there is a tendency for researchers to seek information from places other than the library catalog, Emory has developed a metric to track the amount of system traffic that originates from external Web sources. Google Analytics categorizes these Web leads in the Traffic Sources section of the dashboard. This metric was designed to monitor the effect of Primo's XML sitemap generator, which exposes the system's metadata to external search engines. The Search Engine Referrals report provides a breakdown of keywords that point users to local Primo content found through the Web.
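A rough sketch of classifying inbound traffic along the lines of the Traffic Sources report, assuming raw referrer URLs can be pulled from server logs; the engine list and URLs are illustrative assumptions:

```python
from collections import Counter
from urllib.parse import urlparse

# Hosts treated as search engines; extend as needed for local traffic.
SEARCH_ENGINES = {"www.google.com", "www.bing.com", "search.yahoo.com"}

# Hypothetical referrer URLs harvested from server logs.
referrers = [
    "https://www.google.com/search?q=primo+emory+chemistry",
    "https://guides.library.example.edu/databases",
    "https://www.bing.com/search?q=woodruff+library+catalog",
    "",  # direct traffic carries no referrer
]

def classify(referrer: str) -> str:
    """Bucket one referrer as direct, search engine, or referring site."""
    if not referrer:
        return "direct"
    host = urlparse(referrer).netloc
    return "search engine" if host in SEARCH_ENGINES else "referring site"

print(Counter(classify(r) for r in referrers))
```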

TASK COMPLETION RATE

The task completion indicator targets various indicators of success within the online discovery platform. While many academic libraries are accustomed to keeping statistics associated with physical circulation or in-house use activity as a measure of collection use, there exists a need for complementary indicators of success in evaluating and obtaining online materials. Performance indicator ratios for the task completion rate can be developed in relation to a number of virtual behaviors.

Since Primo does not inherently utilize circulation data in conjunction with search or request functionality, numbers generated in a task completion scenario for discovery tools may not contain a one-to-one mapping to physical usage statistics (Yang and Wagner 2010). Despite that, using a predesigned set of analytic events will assist in measuring the effectiveness of results relevancy, findability, and access.

Emory uses its event-tracking formula to determine how often users are connecting directly to online information. Primo provides a one-click online access feature from its brief results list. This is the fastest way for a user to retrieve electronic documents and view them from within Primo's user interface. Other, indirect navigational paths may bypass the Online Resource link and navigate more deeply into a record or into the OPAC in search of a URL. These represent important paths to analyze in order to understand the methods by which users evaluate and obtain materials. Studies of academic researchers show that ease of access to full-text online material factors heavily into a user's results evaluation process (Olle and Borrego 2011). Tracking the use of the direct online access link is a significant data point in measuring how intuitively users interact with Primo's delivery mechanism:

• Percentage of Clicks to Online Access Link


Other task completion metrics should focus on addressing common points of friction that arise in an online search environment. An effective way to facilitate task completion without conducting event-based usability tests is to examine the failed-search data provided in both the system statistics and the analytics reports. Examining these reports can bring to light certain types of information that users assume are discoverable through the catalog. When these strings or patterns appear frequently across multiple sessions and users, it is obvious that the system is not performing as users expect. While it is not possible to address every unique point of failure from this report, large patterns or data repetitions should appear after brief inspection. Additionally, examining a site's exit rates using Google Analytics can provide information regarding which search terms commonly compel users to exit the search process in favor of another tool (Boswell 2011).
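As an illustration of mining a failed-search report for recurring patterns, here is a sketch that groups normalized query strings by the number of distinct sessions in which they failed; the session IDs and queries are invented, and the call-number-like strings anticipate the finding discussed under Implementation:

```python
# Hypothetical failed-search log entries: (session_id, query_string).
failed_searches = [
    ("s1", "QD 31.2 .H68"), ("s2", "qd 31.2 .h68"), ("s3", "PS3545 .I345"),
    ("s1", "nature physics"), ("s4", "QD 31.2 .H68"),
]

def recurring_failures(log, min_sessions=2):
    """Failed queries that recur across distinct sessions, normalized."""
    sessions_by_query: dict[str, set[str]] = {}
    for session, query in log:
        sessions_by_query.setdefault(query.lower().strip(), set()).add(session)
    return {q: len(s) for q, s in sessions_by_query.items() if len(s) >= min_sessions}

print(recurring_failures(failed_searches))  # surfaces the call-number pattern
```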

As the notion of the library catalog evolves, Primo must be evaluated for its ability to provide patron account functionalities previously available only through the OPAC. Account features in the discovery layer are a rather new convention. Most patrons are not accustomed to accessing their account information or making requests through the Primo environment. Instead, they are familiar with using the traditional OPAC for these types of activities. As educational efforts in these new discovery layer mechanics increase, monitoring the system metrics associated with various requesting features will be an important measure of Primo's facility as an OPAC successor. The percentage of total requests initiated from each catalog service can be calculated through system logs, as can the breakdown for each individual type of request. Additionally, an understanding of how users commonly navigate to request links will assist in future product optimization within the discovery layer:

• Number of Requests in Primo > Number of Requests in OPAC

IMPLEMENTATION

While a robust set of metrics and ongoing analysis are essential to ensure that product developments correspond to growing user expectations, institutions that have no prior experience with collecting and analyzing digital data may begin by compiling baseline statistics, such as total visitors and total searches along with unique, new, and return visitors. Google Analytics might require a few months of continual data gathering and practical application training in order to give a sense of the primary ways in which a population is interacting with a system. Nevertheless, even in the absence of specific goals, statistical and analytical data can begin to show common patterns of user interaction almost immediately.


The library has maintained the Primo system longer than it has used Google Analytics on its search applications; thus, the system-supplied numbers tend to exhibit certain statistical patterns more keenly. One year's worth of Primo-supplied failed-search data repeatedly showed that a substantial portion of unsuccessful searches was the result of users looking for call numbers through the Primo keyword search bar. In OPACs, call numbers and classification schema are critical in physically obtaining material. Call numbers also make possible shelf-browsing activities that aid in further serendipitous resource discovery. However, a growing amount of electronic material is not classified in this manner. Yet, if Primo is to serve as a discovery tool for the library's aggregation of available resources, call number searches should be available. Complementary qualitative statements appearing in the Uservoice feedback asked for this capability, and several users voted for the enhancement. In this way, Emory was able to plan and execute work that was deemed meaningful by our users, with the result that this type of failed search has almost completely disappeared from the reports.

In compiling the sources of click-through data that are available through the Primo back end, Emory noticed some immediate issues with low click totals. It appeared that users rarely employed a link to other versions or formats of a particular resource. Primo applies a Functional Requirements for Bibliographic Records (FRBR) algorithm to records, which clusters items of the same intellectual content together inside one preferred record. This feature prompted a series of help requests from librarians and patrons who were unsuccessful in locating items of a certain format or edition. It appeared that users looking for a film were sometimes presented with a book or film score record for the same title because Primo clusters these together. Unless users knew to click on the "Other Versions or Formats" link, the underlying item records were inaccessible through the catalog. Attempts to tweak the algorithm to prohibit certain formats from clustering together were only partially successful. After further consideration and continued dissatisfaction from users regarding lack of discoverability, the library made the decision to undo all FRBR clustering in Primo. These two relatively minor data-driven changes have already improved discoverability, mainly in the realm of known-item searching. However, longer-term data analysis will be required for major service implementations.

Not surprisingly, Emory has noticed an increasing number of users visiting the Primo application using mobile devices. This assessment framework can be applied to measure the common activities of a mobile user base as a way to plan and develop relevant services using a continuous cycle of data gathering and implementation. As the key performance indicators begin to display satisfactory signs of success, they can be refined to focus on more complex metrics applicable to the needs of the research community.


CONCLUSION

Methods of academic research have been affected by the ease of use and access provided by the World Wide Web. Libraries looking to optimize their resource discovery services for users should begin by setting realistic and achievable goals that can be measured through Web-based data-gathering tools. These analytical tools are an essential part of an assessment framework that better reflects how users are interacting in an online environment. As such, they represent streamlined methods of data collection that will provide the basis for a user-centered cycle of product development to ensure that libraries remain relevant to the populations they serve.

REFERENCES

Ballard, T. 2011. "Advisor Report from the Field: Comparison of User Search Behaviors with Classic Online Catalogs and Discovery Platforms." Charleston Advisor, January, 65–66.

Boswell, P. 2011. "Google Analytics: Measuring Content Use and Engagement." In Society for Technical Communication Summit, edited by B. Bailey, 135–38. Sacramento: Society for Technical Communication.

Burby, J., and A. Brown. 2007. Web Analytics Definitions. Washington, DC: Web Analytics Association.

Connaway, L. S., and T. J. Dickey. 2010. The Digital Information Seeker: Report of the Findings from Selected OCLC, RIN, and JISC User Behaviour Projects. http://www.jisc.ac.uk/media/documents/publications/reports/2010/digitalinformationseekerreport.pdf.

Mort, D., and J. Brown. 2007. "Researchers Reject Need for Resource Discovery Training." Research Information 28:16–17.

Mostafa, J. 2005. "Seeking Better Web Searches." Scientific American 292:67–73.

Olle, C., and A. Borrego. 2011. "A Qualitative Study of the Impact of Electronic Journals on Scholarly Information Behavior." Library & Information Research 32:221–28.

Rieger, Y. O. 2009. "Search Engine Use Behavior of Students and Faculty: User Perceptions and Implications for Future Research." First Monday 14 (12).

Schonfeld, R., and K. Guthrie. 2006. Ithaka 2006 Survey of U.S. Higher Education Faculty Attitudes and Behaviors. http://dx.doi.org/10.3886/ICPSR22700.

Vaughan, J. 2011. "Chapter 1: Web Scale Discovery What and Why?" Library Technology Reports, January, 5–11.

Yang, S. Q., and K. Wagner. 2010. "Evaluating and Comparing Discovery Tools: How Close Are We Towards Next Generation Catalog?" Library Hi Tech 28:690–709.
