RIAO 2004: video retrieval systems
The Físchlár-News-Stories System: Personalised Access to an Archive of TV News
Alan F. Smeaton, Cathal Gurrin, Hyowon Lee, Kieran McDonald, Noel Murphy, Noel E. O’Connor, David Wilson (Centre for Digital Video Processing, Dublin City University)
Derry O’Sullivan, Barry Smyth (Smart Media Institute, Department of Computer Science, University College Dublin)
Introduction
• Físchlár systems
– A family of tools for the capture, analysis, indexing, browsing, searching and summarisation of digital video information
– Físchlár-News-Stories
• Provides access to a growing archive of broadcast TV news
• Segments news into shots and stories; supports calendar lookup, text search, links between related stories, personalisation and story recommendation
System overview
Shot boundary detection
• Shot
– A single continuous camera take in time
– May contain camera movement as well as object motion
• Shot cut
– Hard cut
– Gradual transition (GT)
• Boundary detection (Browne, et al., 2000)
– Frame-to-frame similarity over a window of frames
– Evaluation: TRECVID 2001
• Over 90% precision and recall for hard cuts
• Somewhat lower for GTs
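The frame-to-frame similarity idea can be sketched as follows. This is a minimal illustration, not the Browne et al. implementation: the histogram representation and the fixed threshold are assumptions.

```python
def detect_hard_cuts(histograms, threshold=0.5):
    """Flag a hard cut wherever consecutive frame histograms are too
    dissimilar, using histogram intersection as the similarity measure."""
    cuts = []
    for i in range(1, len(histograms)):
        prev, curr = histograms[i - 1], histograms[i]
        # Histogram intersection: fraction of mass the two frames share.
        sim = sum(min(p, c) for p, c in zip(prev, curr)) / max(sum(prev), 1e-9)
        if sim < threshold:
            cuts.append(i)  # cut lies between frame i-1 and frame i
    return cuts

# Three near-identical frames, then an abrupt colour change -> one cut at frame 3.
frames = [[8, 1, 1], [8, 1, 1], [8, 1, 1], [1, 1, 8]]
print(detect_hard_cuts(frames))  # -> [3]
```

Gradual transitions need the windowed comparison mentioned on the slide; a single-frame difference like this one only catches hard cuts.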
Story segmentation
• Cluster all keyframes of shots
– Similarity: colour and edge histograms (O’Connor, et al., 2001)
• Anchorperson shots
– One of the clusters will have an average keyframe-keyframe similarity much higher than the others; this will most likely be a cluster of anchorperson shots
• Beginning of news, beginning/end of advertisements
– Apply a speech/music discrimination algorithm to the audio
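The anchorperson-cluster heuristic can be sketched like this; the similarity function and the toy keyframe histograms are invented for illustration.

```python
def anchorperson_cluster(clusters, similarity):
    """Return the cluster whose average pairwise keyframe-keyframe
    similarity is highest -- most likely the anchorperson shots."""
    def avg_sim(cluster):
        pairs = [(a, b) for i, a in enumerate(cluster) for b in cluster[i + 1:]]
        if not pairs:
            return 0.0
        return sum(similarity(a, b) for a, b in pairs) / len(pairs)
    return max(clusters, key=avg_sim)

# Toy similarity: 1 minus the mean absolute difference of normalised histograms.
def sim(a, b):
    return 1 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

studio = [[0.9, 0.1], [0.9, 0.1], [0.88, 0.12]]  # near-identical keyframes
field = [[0.2, 0.8], [0.7, 0.3], [0.5, 0.5]]     # varied footage
print(anchorperson_cluster([field, studio], sim) is studio)  # -> True
```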
Story segmentation
• Detect individual advertisements
– Sadlier, et al., 2002
• Shot length
– Outside broadcasts and footage video tend to have shorter shot lengths than in-studio broadcasts
• Use an SVM to determine story bounds
– Combine the output of these analyses
• Evaluation (TRECVID 2003)
– 31% recall and 45% precision
• For the present, the automatic segmentation is manually checked for accuracy every day
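A linear decision is a simple stand-in for the SVM combination step. The cue names, weights and bias below are all invented for illustration; the real system trains an SVM on features like these.

```python
# Hypothetical cue weights -- in the real system an SVM learns the combination.
WEIGHTS = {"anchor_shot": 2.0, "music_boundary": 1.5,
           "ad_boundary": 1.0, "short_shots_before": -0.5}
BIAS = -1.8

def is_story_bound(cues):
    """Combine the per-shot analysis outputs into one boundary decision."""
    score = sum(WEIGHTS[k] * v for k, v in cues.items()) + BIAS
    return score > 0

# An anchorperson shot right after a speech/music change is a likely boundary.
print(is_story_bound({"anchor_shot": 1, "music_boundary": 1,
                      "ad_boundary": 0, "short_shots_before": 0}))  # -> True
```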
Search based on text
• Closed captions
– Typing errors; omitted phrases or sentences
– Time lag behind the audio
• Retrieval
– Simple IR engine
– When a story’s detail is displayed, we use the closed-caption text from that story as a query against the closed-caption archive and display summaries of the 10 top-ranked stories
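The query-by-story mechanism can be sketched with plain tf·idf vectors and cosine ranking; this is a simplified stand-in for the system's IR engine, not its actual implementation.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build a tf*idf weight vector for each closed-caption document."""
    df = Counter()
    for doc in docs:
        df.update(set(doc.split()))
    n = len(docs)
    return [{t: tf[t] * math.log(n / df[t]) for t in tf}
            for tf in (Counter(doc.split()) for doc in docs)]

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def related_stories(query_idx, vecs, k=10):
    """Use one story's caption text as the query; return the top-k others."""
    q = vecs[query_idx]
    scored = [(cosine(q, v), i) for i, v in enumerate(vecs) if i != query_idx]
    return [i for _, i in sorted(scored, reverse=True)[:k]]

docs = ["election vote dublin", "election vote results", "weather rain wind"]
print(related_stories(0, tfidf_vectors(docs), k=1))  # -> [1]
```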
Personalisation
• User feedback
– Rating of a given news story on a 5-point scale
– These ratings are used as input to a collaborative filtering system which can recommend news stories to users based on ratings from other users
• Need to recommend on new content
• The user-story ratings matrix is very sparse
– Story-story similarity + user-story ratings
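The combination of story-story similarity with user-story ratings can be sketched as item-based collaborative filtering; the similarity table below is invented for illustration.

```python
def predict_rating(user_ratings, target, item_sim):
    """Predict a user's rating for `target` from the stories they rated,
    weighted by story-story similarity.  Because similarity can come from
    story text, brand-new stories can be recommended despite the sparse
    user x story ratings matrix."""
    num = den = 0.0
    for story, rating in user_ratings.items():
        s = item_sim(target, story)
        num += s * rating
        den += abs(s)
    return num / den if den else 0.0

# Hypothetical similarities between a new story and two rated ones.
SIM = {("new", "a"): 0.9, ("new", "b"): 0.1}
item_sim = lambda x, y: SIM.get((x, y), 0.0)
print(predict_rating({"a": 5, "b": 1}, "new", item_sim))  # -> 4.6
```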
CIMWOS: A Multimedia Retrieval System based on Combined Text, Speech and Image Processing
Harris Papageorgiou (1), Prokopis Prokopidis (1,2), Iason Demiros (1,2), Nikos Hatzigeorgiou (1), George Carayannis (1,2)
(1) Institute for Language and Speech Processing, (2) National Technical University of Athens
Introduction
• CIMWOS
– Multimedia, multimodal and multilingual
– Content-based indexing, archiving, retrieval and on-demand delivery of audiovisual content
– Video library
• Sports, broadcast news and documentaries in English, French and Greek
– Combines speech, language and image understanding technologies
– Produces XML metadata annotations following the MPEG-7 standard
Speech processing Subsystem
• Speaker Change Detection (SCD)
• Automatic Speech Recognition (ASR)
• Speaker Identification (SID)
• Speaker Clustering (SC)
Text processing Subsystem
• Named Entity Detection (NED)
• Term Extraction (TE)
• Story Segmentation (SD)
• Topic Detection (TD)
[Pipeline diagram: speech transcriptions → named entity detection → term extraction → story segmentation → topic detection → XML output]
Text processing Subsystem
• Applied to the textual data produced by the Speech Processing Subsystem
• Named entity detection
– Sentence boundary identification
– POS tagging
– NED
• Lookup modules that match lists of NEs and trigger-words against the text; hand-crafted and automatically generated pattern grammars; maximum entropy modelling; HMM models; decision-tree techniques; SVM classifiers, etc.
• Term extraction
– Identify single- or multi-word indicative keywords
– Linguistic processing is performed through an augmented term grammar, the results of which are statistically filtered using frequency-based scores
Text processing Subsystem
• Story detection and topic classification
– Employ the same set of models
– Generative, mixture-based HMM
• One state per topic, plus one state modelling general language, i.e. words not specific to any topic
• Each state models a distribution of words given the particular topic
• The resulting models are run over a sliding window, noting the change in topic-specific words as the window moves
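A toy version of the sliding-window idea is shown below. The unigram topic models and the smoothing constant are invented; the real system uses a mixture-based HMM rather than independent per-topic scoring.

```python
import math

def best_topic_per_window(words, topic_models, window=3):
    """Score each window position under each topic's unigram model;
    a change in the winning topic suggests a story boundary."""
    labels = []
    for i in range(len(words) - window + 1):
        span = words[i:i + window]
        # Log-likelihood of the window under each topic, with crude
        # smoothing (1e-6) for out-of-model words.
        scores = {t: sum(math.log(m.get(w, 1e-6)) for w in span)
                  for t, m in topic_models.items()}
        labels.append(max(scores, key=scores.get))
    return labels

models = {"sports": {"goal": 0.4, "match": 0.4},
          "politics": {"vote": 0.4, "election": 0.4}}
words = "goal match goal vote election vote".split()
print(best_topic_per_window(words, models))
# -> ['sports', 'sports', 'politics', 'politics']
```

The topic flip between the second and third window marks the story boundary.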
Image processing Subsystem
• Automatic Video Segmentation (AVS)
– Shot-cut detection and keyframe extraction
– Measurement of differences between consecutive frames
– Adaptive thresholding on motion and texture cues
• Face Detection (FD) and Identification (FI)
– Locate faces in video sequences, and associate these faces with names
– Based on SVMs
Image processing Subsystem
• Object Recognition (OR)
– The object’s surface is decomposed into a large number of regions
– The spatial and temporal relationships of these regions are acquired from several example views
• Video Text Detection and Recognition (TDR)
– An OCR pipeline:
• Text detection
• Text verification
• Text segmentation
• OCR
Integration
• All processing modules in the three modalities converge to a textual XML metadata annotation scheme following the MPEG-7 descriptors
• These XML metadata annotations are further processed, merged and loaded into the CIMWOS multimedia database
Indexing and retrieval
• Weighted Boolean model
– Weight of an index term: tf·idf
– Image-processing metadata are not weighted
• Two steps
– 1. Boolean exact match
• Objects, topics and faces
– 2. Best-match query
• Text, terms and named entities
• Basic retrieval unit
– Passage
• Binary Dice expression
– Dice(X, Y) = 2|X ∩ Y| / (|X| + |Y|)
• Weighted-term Dice expression
– Dice(x, y) = 2 Σᵢ xᵢyᵢ / (Σᵢ xᵢ² + Σᵢ yᵢ²)
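The two Dice expressions translate directly into code. Representing the weighted vectors as term-to-weight dictionaries is an assumption about storage, not something the slides specify.

```python
def dice_binary(X, Y):
    """Binary Dice: 2|X ∩ Y| / (|X| + |Y|) over two term sets."""
    X, Y = set(X), set(Y)
    if not X and not Y:
        return 0.0
    return 2 * len(X & Y) / (len(X) + len(Y))

def dice_weighted(x, y):
    """Weighted-term Dice: 2 * sum(x_i * y_i) / (sum(x_i^2) + sum(y_i^2)),
    where x and y map terms to (e.g. tf*idf) weights."""
    dot = sum(w * y.get(t, 0.0) for t, w in x.items())
    norm = sum(w * w for w in x.values()) + sum(w * w for w in y.values())
    return 2 * dot / norm if norm else 0.0

print(dice_binary({"a", "b", "c"}, {"b", "c", "d"}))          # -> 2/3
print(dice_weighted({"a": 1.0, "b": 2.0}, {"b": 2.0, "c": 1.0}))  # -> 0.8
```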
Indexing schema
[Schema diagram: Episode → Story(ies) → Passage(s) → Shot(s) → Subshot(s) → Key-frame, with Face(s), Object(s), Text(s), Named entity(ies), Term(s), Word(s), Speaker(s) and Topic(s) as associated annotations]
Evaluation
• Greek news broadcasts
– 35.5 hours
• Collection A: 15 news broadcasts, 18 hours
– Segmentation, named entity identification, term extraction, retrieval
• Collection B: 15 news broadcasts, 17 hours
– Retrieval
• 3 users
– Produced gold annotations on the videos
Evaluation
• Segmentation
– Precision: 89.94%
– Recall: 86.7%
– F-measure: 88.29%
• Term extraction
– Precision: 34.8%
– Recall: 60.28%
Evaluation
• Named entity identification
Evaluation
• Retrieval
– Users translate each topic into queries
– 5 queries per topic on average
– Collection B
• Segmentation is based on stories
– 60% filter
• Filter out results that scored less than 60% in the CIMWOS DB ranking system
Evaluation
                           Precision   Recall   F-measure
Collection A                   34.75    57.75       43.39
Collection A + 60% filter      45.78    53.52       49.35
Collection B                   44.78    50.24       47.36
Collection B + 60% filter      64.96    37.07       47.20
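The F-measure column is the standard harmonic mean of precision and recall; for instance, the Collection A row can be reproduced directly:

```python
def f_measure(p, r):
    """F1: harmonic mean of precision and recall, as reported above."""
    return 2 * p * r / (p + r) if p + r else 0.0

# Collection A: P = 34.75, R = 57.75
print(round(f_measure(34.75, 57.75), 2))  # -> 43.39
```

The filter trades recall for precision: on Collection B it lifts precision from 44.78% to 64.96% while recall drops from 50.24% to 37.07%, leaving the F-measure nearly unchanged.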