
SCAPE Presentation at the Elag2013 conference in Gent/Belgium


Presentation of the European project SCAPE (www.scape-project.eu) at the Elag2013 conference in Gent/Belgium. The presentation includes details about use cases and implementation at the Austrian National Library.


Page 1: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

Sven Schlarb, Austrian National Library

Elag 2013, Gent, Belgium, May 29, 2013

An open source infrastructure for preserving large collections of digital objects
The SCAPE project at the Austrian National Library

Page 2: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

Overview

• SCAPE project overview
• Application areas at the Austrian National Library
  • Web Archiving
  • Austrian Books Online
• SCAPE at the Austrian National Library
  • Hardware set-up
  • Open source software architecture
• Application scenarios
• Lessons learnt

This work was partially supported by the SCAPE Project. The SCAPE project is co-funded by the European Union under FP7 ICT-2009.4.1 (Grant Agreement number 270137).

Page 3: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

Motivation

• Ability to process large and complex data sets in preservation scenarios
• Increasing amount of data in data centers and memory institutions
• Volume, Velocity, and Variety of data

[Chart: growth of data volume from 1970 to 2030]

cf. Jisc (2012), Activity Data: Delivering benefits from the data deluge, available at http://www.jisc.ac.uk/publications/reports/2012/activity-data-delivering-benefits.aspx

Page 4: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

“Big Data” in a Library context?

• “Big data” is a buzzword, just a vague idea
• There is no definitive GB, TB, PB (…) threshold where data becomes “big data”; it depends on the institutional context
• Massive growth of data to be stored and processed
• Situations where the usually employed solutions no longer fulfil the new scalability requirements
  • Pushing the limit of conventional database solutions
  • Simple batch processing becomes tedious or even impossible

Page 5: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

SCAPE Consortium

• EU-funded FP7 project, led by the Austrian Institute of Technology
• Consortium: 16 partners
  • National libraries
  • Data centers and memory institutions
  • Research institutes and universities
  • Commercial partners
• Started 2011, runs until mid/end-2014

Page 6: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

SCAPE Project Overview

Takeup
• Stakeholders and Communities
• Dissemination
• Training Activities
• Sustainability

Platform
• Automation
• Workflows
• Parallelization
• Virtualization

Page 7: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

MapReduce/Hadoop in a nutshell


[Diagram: input data is split across parallel Map tasks (Task 1, Task 2, Task 3); their output data is shuffled to Reduce tasks, which combine it into aggregated results.]
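To make the figure concrete, here is the canonical word-count example written against the Hadoop MapReduce Java API (a minimal sketch, not code from the presentation): each Map task processes one input split independently and emits (word, 1) pairs; the Reduce tasks then aggregate the counts per word.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/** Canonical MapReduce example: count word occurrences across large inputs. */
public class WordCount {

  /** Map: tokenize one input split and emit a (word, 1) pair per token. */
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer tokens = new StringTokenizer(value.toString());
      while (tokens.hasMoreTokens()) {
        word.set(tokens.nextToken());
        context.write(word, ONE);
      }
    }
  }

  /** Reduce: sum all counts collected for one word (the aggregated result). */
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // pre-aggregate on the map side
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}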

Page 8: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

Experimental Cluster

• Controller node (Job Tracker, Name Node)
  CPU: 2 x 2.40 GHz quad-core CPU (16 HyperThreading cores)
  RAM: 24 GB
  DISK: 3 x 1 TB disks configured as RAID5 (redundancy), 2 TB effective
• Worker nodes (Task Trackers, Data Nodes)
  CPU: 1 x 2.53 GHz quad-core CPU (8 HyperThreading cores)
  RAM: 16 GB
  DISK: 2 x 1 TB disks configured as RAID0 (performance), 2 TB effective
• Of the 8 HyperThreading cores per worker node: 5 for Map, 2 for Reduce, 1 for the operating system
• Across the five worker nodes: 25 processing cores for Map tasks and 10 cores for Reduce tasks

Page 9: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

Platform Architecture

• Access via REST API
• Workflow engine (Taverna) for complex jobs
• Hive as the frontend for analytic queries
• MapReduce/Pig for Extract, Transform, and Load (ETL)
• “Small” objects in HDFS or HBase
• “Large” digital objects stored on a NetApp filer

[Diagram: the Taverna workflow engine accesses the platform via the REST API.]

Page 10: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

Application scenarios

• Web Archiving
  • Scenario 1: Web Archive Mime Type Identification
• Austrian Books Online
  • Scenario 2: Image File Format Migration
  • Scenario 3: Comparison of Book Derivatives
  • Scenario 4: MapReduce in Digitised Book Quality Assurance

Page 11: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

Key Data Web Archiving

• Physical storage: 19 TB
• Raw data: 32 TB
• Number of objects: 1,241,650,566
• Domain harvesting: the entire top-level domain .at every 2 years
• Selective harvesting: important websites that change regularly
• Event harvesting: special occasions and events (e.g. elections)

Page 12: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

Scenario 1: Web Archive Mime Type Identification

[Diagram: records (JPG, GIF, HTML, MIDI) inside a (W)ARC container, written by the HERITRIX web crawler, are read by a custom (W)ARC InputFormat/RecordReader; a MapReduce job detects each record’s MIME type with Apache Tika and aggregates the counts, e.g. image/jpg 1, image/gif 1, text/html 2, audio/midi 1.]
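A minimal sketch of the core of such a job in Java, assuming a (W)ARC input format as described above that hands each archive record to the mapper as a record name plus payload bytes (MimeMapper and MimeCountReducer are illustrative names, not SCAPE code): Apache Tika detects the MIME type of each record, and the reducer sums the occurrences per type.

import java.io.ByteArrayInputStream;
import java.io.IOException;

import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.tika.Tika;

/** Map: detect the MIME type of one (W)ARC record with Apache Tika. */
public class MimeMapper extends Mapper<Text, BytesWritable, Text, IntWritable> {
  private static final IntWritable ONE = new IntWritable(1);
  private final Tika tika = new Tika();

  @Override
  protected void map(Text recordName, BytesWritable payload, Context context)
      throws IOException, InterruptedException {
    // Tika sniffs the payload's magic bytes; the record name helps with ambiguous cases.
    String mimeType = tika.detect(
        new ByteArrayInputStream(payload.getBytes(), 0, payload.getLength()),
        recordName.toString());
    context.write(new Text(mimeType), ONE);
  }
}

/** Reduce: sum the occurrences per MIME type, e.g. (text/html, [1, 1]) -> text/html 2. */
class MimeCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
  @Override
  protected void reduce(Text mimeType, Iterable<IntWritable> counts, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable c : counts) {
      sum += c.get();
    }
    context.write(mimeType, new IntWritable(sum));
  }
}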


Page 13: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

[Chart: MIME type identification results with TIKA 1.0 vs. DROID 6.01.]

Scenario 1: Web Archive Mime Type Identification

Page 14: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

Key Data Austrian Books Online

• Public-private partnership with Google
• Only public domain works
• Objective: scan ~ 600,000 volumes (~ 200 million pages)
• ~ 70 project team members, 20+ in the core team
• ~ 130,000 physical volumes scanned so far (~ 40 million pages)

Page 15: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

ADOCO (Austrian Books Online Download & Control)

[Diagram: volumes scanned in the Google public-private partnership are downloaded and checked by ADOCO; storage uses the Pairtree layout, see https://confluence.ucop.edu/display/Curation/PairTree]

Page 16: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

Scenario 2: Image file format migration

• Task: image file format migration
  • TIFF to JPEG2000 migration; objective: reduce storage costs by reducing the size of the images
  • JPEG2000 to TIFF migration; objective: mitigate the risk of JPEG2000 file format obsolescence
• Challenges:
  • Integrating validation, migration, and quality assurance
  • Computing-intensive quality assurance

Page 17: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

Scenario 3: Comparison of book derivatives

• Task: compare different versions of the same book
  • Images have been manipulated (cropped, rotated) and stored in different locations
  • Images come from different scanning sources or were subject to different modification procedures
• Challenges:
  • Computing-intensive (average runtime per book on a single quad-core server: ~ 4.5 hours)
  • 130,000 books with ~ 320 pages each
• SCAPE tool: Matchbox

Page 18: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

Scenario 4: MapReduce in Quality Assurance

• ETL processing of 60,000 books, ~ 24 million pages
• Using Taverna’s “Tool service” (remote ssh execution)
• Orchestration of different types of Hadoop jobs:
  • Hadoop Streaming API
  • Hadoop MapReduce
  • Hive
• Workflow available on myExperiment: http://www.myexperiment.org/workflows/3105
• See blog post: http://www.openplanetsfoundation.org/blogs/2012-08-07-big-data-processing-chaining-hadoop-jobs-using-taverna

Page 19: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

Scenario 4: MapReduce in Quality Assurance

• Create input text files containing the file paths (JP2 & HTML)
• Read image metadata using Exiftool (Hadoop Streaming API)
• Create a SequenceFile containing all HTML files
• Calculate the average block width using MapReduce
• Load the data into Hive tables
• Execute the SQL test query

The following pages detail these steps.

Page 20: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

Reading image metadata

find → text file with one JP2 path per line on NAS (1.4 GB), e.g.:

/NAS/Z119585409/00000001.jp2
/NAS/Z119585409/00000002.jp2
/NAS/Z119585409/00000003.jp2
…

Jp2PathCreator → HadoopStreamingExiftoolRead → text file with one (page ID, image width) pair per line (1.2 GB), e.g.:

Z119585409/00000001 2345
Z119585409/00000002 2340
Z119585409/00000003 2543
…

Reading the files from NAS, 60,000 books (24 million pages): ~ 5 h + ~ 38 h = ~ 43 h

Page 21: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

SequenceFile creation

find → text file with one HTML path per line on NAS (1.4 GB), e.g.:

/NAS/Z119585409/00000707.html
/NAS/Z119585409/00000708.html
/NAS/Z119585409/00000709.html
…

HtmlPathCreator → SequenceFileCreator → one SequenceFile with the page ID (e.g. Z119585409/00000707) as key and the HTML content as value (997 GB uncompressed)

Reading the files from NAS, 60,000 books (24 million pages): ~ 5 h + ~ 24 h = ~ 29 h
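Packing millions of small HTML files into a single SequenceFile avoids Hadoop's inefficiency with many small files. Below is a minimal sketch of such a packing step (illustrative code and paths, not the SCAPE implementation): it reads the path list produced in the previous step and writes one record per file, with the page ID as key and the HTML content as value.

import java.io.BufferedReader;
import java.io.FileReader;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

/** Packs many small HTML files into one SequenceFile keyed by page ID. */
public class SequenceFileCreator {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path output = new Path(args[1]); // e.g. an HDFS path like /user/scape/html.seq

    SequenceFile.Writer writer =
        SequenceFile.createWriter(fs, conf, output, Text.class, Text.class);
    try (BufferedReader pathList = new BufferedReader(new FileReader(args[0]))) {
      String path; // one NAS path per line, e.g. /NAS/Z119585409/00000707.html
      while ((path = pathList.readLine()) != null) {
        // Derive the key "Z119585409/00000707" from the file path.
        String pageId = path.replaceAll("^/NAS/", "").replaceAll("\\.html$", "");
        String html = new String(Files.readAllBytes(Paths.get(path)), "UTF-8");
        writer.append(new Text(pageId), new Text(html));
      }
    } finally {
      writer.close();
    }
  }
}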

Page 22: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

Calculate average block width using MapReduce

HadoopAvBlockWidthMapReduce reads the SequenceFile: Map emits one (page ID, block width) pair per text block of a page, and Reduce averages the widths per page, writing a text file with one (page ID, average width) pair per line, e.g.:

Z119585409/00000001: 2100, 2200, 2300, 2400 → Z119585409/00000001 2250
Z119585409/00000002: 2100, 2200, 2300, 2400 → Z119585409/00000002 2250
…

60,000 books (24 million pages): ~ 6 h
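The Reduce side of this job is a plain averaging step. A minimal sketch (illustrative name, not the SCAPE implementation), assuming the Map phase emits one (page ID, block width) pair per text block:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

/**
 * Reduce: average the block widths collected for one page,
 * e.g. (2100, 2200, 2300, 2400) -> 2250.
 */
public class AvBlockWidthReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
  @Override
  protected void reduce(Text pageId, Iterable<IntWritable> widths, Context context)
      throws IOException, InterruptedException {
    long sum = 0;
    int count = 0;
    for (IntWritable width : widths) {
      sum += width.get();
      count++;
    }
    if (count > 0) {
      context.write(pageId, new IntWritable((int) (sum / count)));
    }
  }
}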

Page 23: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

Analytic Queries: HiveLoadExifData & HiveLoadHocrData

CREATE TABLE jp2width(jid STRING, jwidth INT)
CREATE TABLE htmlwidth(hid STRING, hwidth INT)

jp2width (image widths from the Exiftool step):

jid                    jwidth
Z119585409/00000001    2250
Z119585409/00000002    2150
Z119585409/00000003    2125
Z119585409/00000004    2125
Z119585409/00000005    2250

htmlwidth (average block widths from the MapReduce step):

hid                    hwidth
Z119585409/00000001    1870
Z119585409/00000002    2100
Z119585409/00000003    2015
Z119585409/00000004    1350
Z119585409/00000005    1700

Page 24: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

Analytic Queries: HiveSelect

select jid, jwidth, hwidth from jp2width inner join htmlwidth on jid = hid

jid                    jwidth    hwidth
Z119585409/00000001    2250      1870
Z119585409/00000002    2150      2100
Z119585409/00000003    2125      2015
Z119585409/00000004    2125      1350
Z119585409/00000005    2250      1700

Page 25: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

Lessons learnt

• Emergence of new options for creating large-scale storage and processing infrastructures
  • HDFS as storage master or staging area?
  • Create a local cluster or rent a cloud infrastructure?
• Apache Hadoop offers a stable core for building a large-scale processing platform that is ready to be used in production
• It is important to carefully select additional components from the Apache Hadoop ecosystem (HBase, Hive, Pig, Oozie, YARN, Ambari, etc.) that fit your needs

Page 26: SCAPE Presentation at the Elag2013 conference in Gent/Belgium

Further information
• Project website: www.scape-project.eu
• GitHub repository: www.github.com/openplanets
• Project wiki: www.wiki.opf-labs.org/display/SP/Home

SCAPE tools mentioned
• SCAPE Platform: http://www.scape-project.eu/publication/an-architectural-overview-of-the-scape-preservation-platform
• Jpylyzer (JPEG2000 validation): http://www.openplanetsfoundation.org/software/jpylyzer
• Matchbox (image comparison): https://github.com/openplanets/scape/tree/master/pc-qa-matchbox

Thank you! Questions?