
Discovering digital identities through face recognition on mobile devices




Discovering digital identities through face recognition on mobile devices

Gerry Hendrickx

Thesis submitted to obtain the degree of Master of Engineering: Computer Science

Supervisor: Prof. dr. ir. Erik Duval

Assessors: Dr. ir. Kurt De Grave

Mr. Frans Van Assche

Mentor: Ir. Gonzalo Parra

Academic year 2011 – 2012


© Copyright K.U.Leuven

Without written permission of the thesis supervisor and the author it is forbidden to reproduce or adapt in any form or by any means any part of this publication. Requests for obtaining the right to reproduce or utilize parts of this publication should be addressed to the Departement Computerwetenschappen, Celestijnenlaan 200A bus 2402, B-3001 Heverlee, +32-16-327700 or by email [email protected].

A written permission of the thesis supervisor is also required to use the methods, products, schematics and programs described in this work for industrial or commercial use, and for submitting this publication in scientific contests.



Preface

I would like to take a moment to thank a number of people who have aided and supported me over the last year.

First of all, I am grateful to Prof. E. Duval for giving me the opportunity to explore this self-proposed subject. I thank him for agreeing to supervise me and for his advice regarding the master's thesis during the year.

A special thanks goes out to my supervisor Gonzalo Parra. He was always open for questions and provided helpful feedback and advice on a weekly basis.

I also thank all the participants in the prototype tests for their honesty and comments, Céline Gladiné for creating the graphical design of the application, Tom Vermeulen and Glenn Vingerhoets for the extensive testing and advice, and @Sergimtzlosa for his support regarding the iOS wrapper for Face.com.

And last but not least, I want to thank all my family and friends for being supportive throughout the year.

Gerry Hendrickx



Contents

Preface
Abstract
List of Figures and Tables
List of Abbreviations and Symbols
1 Introduction
    1.1 Situating the problem
    1.2 Goals
    1.3 Chapter overview
2 Research
    2.1 Face recognition
    2.2 Augmented reality
    2.3 Social network services (SNS)
3 Analysis
    3.1 Requirements
    3.2 Privacy concerns
4 Design
    4.1 Technology
    4.2 Storyboard
    4.3 Paper prototype iteration 1
    4.4 Paper prototype iteration 2
    4.5 Paper prototype iteration 3
5 Implementation
    5.1 Digital prototype 1 implementation
    5.2 Digital prototype 1 evaluation
    5.3 Digital prototype 2 implementation
    5.4 Overview of classes
6 Evaluation
    6.1 Methodology
    6.2 Results and discussion
    6.3 USE questionnaire
    6.4 Real-life testing
    6.5 A/B testing
7 Conclusion
    7.1 Thesis summary
    7.2 Goals
    7.3 Reflection
    7.4 Possible future work
A Paper prototype iteration 2 questionnaire results
    A.1 Usefulness questions
    A.2 Ease of use questions
    A.3 Ease of learning questions
    A.4 Satisfaction questions
B Paper prototype iteration 3 questionnaire results
    B.1 Usefulness questions
    B.2 Ease of use questions
    B.3 Ease of learning questions
    B.4 Satisfaction questions
C Digital prototype iteration 1 questionnaire results
    C.1 Usefulness questions
    C.2 Ease of use questions
    C.3 Ease of learning questions
    C.4 Satisfaction questions
D Digital prototype iteration 2 questionnaire results
    D.1 Usefulness questions
    D.2 Ease of use questions
    D.3 Ease of learning questions
    D.4 Satisfaction questions
E Class diagram
F Poster
G Paper
Bibliography


Abstract

The internet has revolutionized the way people interact with each other. Different tools have been created to support communication on several levels. Social networks are starting to play a big role in the lives of their members, and each has its own focus. Because of this focus, people join different social networks, each one providing different benefits to its users. Their information is spread across these networks, which together forms the digital identity of the user, his online persona. It is due to this spread of information that discovering this digital identity is not a trivial task. The goal of this thesis is to simplify access to a person's digital identity by means of face recognition on mobile devices.

The thesis discusses the research and related-work study performed to gather ideas and define the starting points. This research, combined with a brainstorm and a small survey, is analysed and a concept is designed: an application that is able to recognize faces and display the digital identity of those faces in real time on the screen of a mobile phone.

The design and development of the application follow an iterative design process. It starts with paper prototypes of three different user interfaces, of which one is selected and elaborated. This paper prototype of the interface is further designed and iteratively evaluated before the actual implementation starts.

The development is done in two iterations: the first focuses on the core functionality, the face recognition. This version is evaluated, and its evaluation produced a second digital prototype, a fully functioning application that allows the user to recognize people. The use of a database that stores the social network handles of the users allows the application to link a recognized person to his social networks.


List of Figures and Tables

List of Figures

2.1 An example of reconstructed eigenfaces of the same person [3].
2.2 An example of a 3-dimensional reconstruction of a face [6].
2.3 Related face recognition applications
2.4 An example of a location-based AR app: Layar [18].
2.5 An example of a view-based AR app: Augmented driving [21].
2.6 An overview of prof. Erik Duval's linked networks on FriendFeed.
2.7 The FriendStream app on an Android smartphone. [35]

4.1 Storyboard
4.2 User interface alternatives 2 and 3
4.3 The new home, history and overview views in the second paper prototype
4.4 The new setting views in the second paper prototype
4.5 A photo of the different parts used in the second paper prototype.
4.6 A boxplot representing statements A.2, A.3, A.4, A.5, A.7.
4.7 A boxplot representing statements A.9, A.12, A.14, A.16.
4.8 A boxplot representing statements A.22, A.24, A.27, A.29.
4.9 A boxplot representing statements B.2, B.3, B.4, B.5, B.7.
4.10 A boxplot representing statements B.9, B.12, B.14, B.16.
4.11 A boxplot representing statements B.22, B.24, B.27, B.29.

5.1 A sequence diagram representing the face tracking process.
5.2 Example screenshots of the face detection.
5.3 A sequence diagram representing the face recognition process.
5.4 The different views of the first prototype.
5.5 A boxplot representing statements C.2, C.3, C.4, C.5, C.7.
5.6 A boxplot representing statements C.9, C.12, C.14, C.16.
5.7 A boxplot representing statements C.22, C.24, C.27, C.29.
5.8 The different views of the second prototype.
5.9 A sequence diagram representing the information fetching process.
5.10 The different views of the second prototype.
5.11 The Model-View-Controller pattern. [50]
5.12 The screen transition diagram of the application


6.1 A boxplot representing statements D.2, D.3, D.4, D.5, D.7.
6.2 A boxplot representing statements D.9, D.12, D.14, D.16.
6.3 A boxplot representing statements D.22, D.24, D.27, D.29.
6.4 A graph representing the average time to discover a person's networks with (WA) and without (WOA) the application.

A.1 The application helps me be more effective.
A.2 The application helps me be more productive.
A.3 The application is useful.
A.4 The application gives me more control over the activities in my life.
A.5 The application makes the things I want to accomplish easier to get done.
A.6 The application saves me time when I use it.
A.7 The application meets my needs.
A.8 The application does everything I would expect it to do.
A.9 The application is easy to use.
A.10 The application is simple to use.
A.11 The application is user friendly.
A.12 The application requires the fewest steps possible to accomplish what I want to do with it.
A.13 The application is flexible.
A.14 Using the application is effortless.
A.15 I can use the application without written instructions.
A.16 I don't notice any inconsistencies as I use the application.
A.17 Both occasional and regular users would like the application.
A.18 I can recover from mistakes quickly and easily.
A.19 I can use the application successfully every time.
A.20 I learned to use the application quickly.
A.21 I easily remember how to use the application.
A.22 It is easy to learn to use the application.
A.23 I quickly became skillful with the application.
A.24 I am satisfied with the application.
A.25 I would recommend the application to a friend.
A.26 The application is fun to use.
A.27 The application works the way I want it to work.
A.28 The application is wonderful.
A.29 I feel I need to have it.
A.30 It is pleasant to use.

B.1 The application helps me be more effective.
B.2 The application helps me be more productive.
B.3 The application is useful.
B.4 The application gives me more control over the activities in my life.
B.5 The application makes the things I want to accomplish easier to get done.
B.6 The application saves me time when I use it.
B.7 The application meets my needs.


B.8 The application does everything I would expect it to do.
B.9 The application is easy to use.
B.10 The application is simple to use.
B.11 The application is user friendly.
B.12 The application requires the fewest steps possible to accomplish what I want to do with it.
B.13 The application is flexible.
B.14 Using the application is effortless.
B.15 I can use the application without written instructions.
B.16 I don't notice any inconsistencies as I use the application.
B.17 Both occasional and regular users would like the application.
B.18 I can recover from mistakes quickly and easily.
B.19 I can use the application successfully every time.
B.20 I learned to use the application quickly.
B.21 I easily remember how to use the application.
B.22 It is easy to learn to use the application.
B.23 I quickly became skillful with the application.
B.24 I am satisfied with the application.
B.25 I would recommend the application to a friend.
B.26 The application is fun to use.
B.27 The application works the way I want it to work.
B.28 The application is wonderful.
B.29 I feel I need to have it.
B.30 It is pleasant to use.

C.1 The application helps me be more effective.
C.2 The application helps me be more productive.
C.3 The application is useful.
C.4 The application gives me more control over the activities in my life.
C.5 The application makes the things I want to accomplish easier to get done.
C.6 The application saves me time when I use it.
C.7 The application meets my needs.
C.8 The application does everything I would expect it to do.
C.9 The application is easy to use.
C.10 The application is simple to use.
C.11 The application is user friendly.
C.12 The application requires the fewest steps possible to accomplish what I want to do with it.
C.13 The application is flexible.
C.14 Using the application is effortless.
C.15 I can use the application without written instructions.
C.16 I don't notice any inconsistencies as I use the application.
C.17 Both occasional and regular users would like the application.
C.18 I can recover from mistakes quickly and easily.
C.19 I can use the application successfully every time.


C.20 I learned to use the application quickly.
C.21 I easily remember how to use the application.
C.22 It is easy to learn to use the application.
C.23 I quickly became skillful with the application.
C.24 I am satisfied with the application.
C.25 I would recommend the application to a friend.
C.26 The application is fun to use.
C.27 The application works the way I want it to work.
C.28 The application is wonderful.
C.29 I feel I need to have it.
C.30 It is pleasant to use.

D.1 The application helps me be more effective.
D.2 The application helps me be more productive.
D.3 The application is useful.
D.4 The application gives me more control over the activities in my life.
D.5 The application makes the things I want to accomplish easier to get done.
D.6 The application saves me time when I use it.
D.7 The application meets my needs.
D.8 The application does everything I would expect it to do.
D.9 The application is easy to use.
D.10 The application is simple to use.
D.11 The application is user friendly.
D.12 The application requires the fewest steps possible to accomplish what I want to do with it.
D.13 The application is flexible.
D.14 Using the application is effortless.
D.15 I can use the application without written instructions.
D.16 I don't notice any inconsistencies as I use the application.
D.17 Both occasional and regular users would like the application.
D.18 I can recover from mistakes quickly and easily.
D.19 I can use the application successfully every time.
D.20 I learned to use the application quickly.
D.21 I easily remember how to use the application.
D.22 It is easy to learn to use the application.
D.23 I quickly became skillful with the application.
D.24 I am satisfied with the application.
D.25 I would recommend the application to a friend.
D.26 The application is fun to use.
D.27 The application works the way I want it to work.
D.28 The application is wonderful.
D.29 I feel I need to have it.
D.30 It is pleasant to use.

E.1 The class diagram of the application


List of Tables

2.1 Comparison of the different face recognition APIs
2.2 Social networks and their focus.

3.1 Survey results
3.2 Results of social network survey.
3.3 Results of survey on Facebook that polls the privacy concerns.

4.1 Comparison of the iOS and Android languages

5.1 List of status messages on different phases of the recognition process.

6.1 Table representing the times to discover social networks with the app (WA) and without the app (WOA).


List of Abbreviations and Symbols

Abbreviations

PCA  Principal Component Analysis
API  Application Programming Interface
AR   Augmented Reality
HWD  Head Worn Display
SNS  Social Network Service


Chapter 1

Introduction

1.1 Situating the problem

The internet has revolutionized the way people interact with each other. Different tools have been created to support different levels of communication. Millions of people are online and use social networks like Facebook, Twitter, Google+ and so on to connect with each other. However, each network has its own goal and functionalities, which results in people participating in multiple networks. Their presence on the internet, their digital identity, is divided among these services, and it is often hard to find all the available information about a certain person. This digital identity can be useful to other people. Upon meeting an interesting person, you might want to get to know more about him, and this could be achieved by inspecting his digital presence. People can use it to get in touch or even to find common ground to talk about when they meet face to face. The main focus of this thesis is to give a user efficient access to this information.

This thesis tries to meet this need by using smartphones. Such a device has a number of useful characteristics:

• It is always connected to the internet, either via Wi-Fi or mobile networks, and can thus serve as an access point to a person's digital identity.

• It has many features, like a camera, accelerometer, microphone, media playback possibilities, and so on, that allow developers to create an immense number of different applications.

• Modern smartphones are powerful enough to process heavy algorithms and large datasets.

These characteristics have made smartphones a gateway for new ideas and applications, and made them well suited for a digital identity discovery application. We can use the camera of the smartphone to scan the surroundings for faces and use the face of a person as the link between his divided online information. This brings us to the subject of this thesis. The general idea is to use a smartphone to


recognize a person and allow the user to get quick access to all of his information on the internet. We will create an application that uses face recognition to recognize the person and collect information about him. This information will be a collection of the social networks to which the recognized person is subscribed: his digital identity.
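To make the idea of "collecting a digital identity" concrete, the sketch below maps a recognized name to the social network handles stored for it. This is only an illustration: the `DigitalIdentity` class, the dictionary-based handle store, and all names in it are hypothetical, and the real application's data model may differ.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalIdentity:
    """A person's scattered online presence, keyed by network name."""
    name: str
    handles: dict = field(default_factory=dict)  # e.g. {"twitter": "some_handle"}

def identity_for(match_name: str, handle_store: dict) -> DigitalIdentity:
    """Assemble the digital identity for a face-recognition match.

    handle_store stands in for the database of social network handles;
    its layout here is invented for illustration.
    """
    return DigitalIdentity(match_name, dict(handle_store.get(match_name, {})))
```

An unknown match simply yields an identity with no handles, which the interface could present as "no networks found".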

1.2 Goals

The goals of this thesis are divided into several points.

1. A first step is to investigate the different possibilities of face recognition on a smartphone. A correctly functioning algorithm needs to be found that is both reliable and fast. False positives, i.e. matching a face to a wrong name, need to be kept to a minimum, because otherwise a person's face will be linked with the wrong digital identity.

2. A proper way to visualize the application needs to be found. A study of different options to display data is needed, along with an evaluation of the possibilities, to be able to make an informed decision. The information will have to be shown in a clear and useful user interface. One visualization to try is augmented reality (AR) [15]. AR is the augmentation of real life by adding a layer of computer-generated content to the camera view of the device. This modern technology can be used to display the digital identity in real time next to the person on the camera view of the smartphone. Ways to display this need to be researched and evaluated.

3. The application should be efficient. The objective is to create a frictionless and reliable way to discover a digital identity. It should be faster than that person telling you where to find him on social networks, or researching him yourself on the internet. The application is aimed at improving productivity in discovering information, eliminating the need to gather the digital information manually.

This thesis does not describe the creation of a specific face recognition application. It describes the examination of the possibilities to discover digital identities by using face recognition. It explores different approaches and tries to find the best way to visualize such an application. We design the application based on a rapid prototyping approach with frequent user feedback cycles, so that feedback is available throughout the design.

1.3 Chapter overview

Chapter 2 presents concepts and state-of-the-art applications related to the topic. It covers a literature study describing recent progress in the fields of face recognition, augmented reality and social network aggregation. It contains a related-work study, in which several applications from the three subjects are discussed and compared to


see whether some useful lessons can be drawn from them. The chapter also contains a comparison of multiple face recognition APIs that could have been used for the creation of this application.

Chapter 3 covers the general analysis of the application. It composes a list of required functionalities along with a detailed description of the desired application. It prioritizes the different aspects of the application to get a clear view of which features are important, and discusses the privacy concerns.

Chapter 4 contains the first part of the iterative design, paper prototyping, which is a technique to create and evaluate a user interface in a quick but thorough way. The chapter covers three iterations, each with its setup, goals and evaluation. The first iteration deals with the visualization of the face recognition part of the application and the different ways a user interface can be created for this functionality. The second iteration evaluates the interface of the entire application and looks for usability problems. The third iteration is an evaluation of the paper prototype with a number of experts from the department of human-machine interaction, in order to get an expert view on the user interface.

Chapter 5 provides a description of the implementation of the application. It discusses the design of the digital prototypes. The implementation was also done iteratively: the first iteration is merely the core of the application, a working face recognition application. The second iteration tweaks the application based on the results of the evaluation of the first prototype and finishes the implementation of the other features.

Chapter 6 discusses the evaluation of the second and final digital prototype. It focuses on both the usability and the usefulness of the application, and tests whether the final application indeed provides a more efficient way to discover digital identities.

Finally, chapter 7 concludes this thesis. It summarizes the entire project, discusses the final results and checks whether the application has achieved its goals. It also suggests work that can still be done, should this thesis be continued.


Chapter 2

Research

The development of this application covers several different domains and concepts. It is therefore useful to examine the work that has already been accomplished in these fields. This chapter is split into different categories. The first section is about face recognition, the core technology driving the application. The second section discusses augmented reality, to check whether it could be used to visualize the recognized faces. Finally, the third section presents different social networks and techniques to aggregate and display information in a single environment.

2.1 Face recognition

Face recognition is defined by Princeton1 as "biometric identification by scanning a person’s face and matching it against a library of known faces". This means that to be able to recognize a face, you need two things: a device able to scan faces and a library of known faces in which to find a possible match. In this thesis, the device that scans the faces will be a smartphone. Nearly every smartphone has a built-in camera that can capture photos or videos to be scanned by a face recognition algorithm. This algorithm will need an extensive set of known faces to match against.

2.1.1 Face recognition techniques

There are many different approaches to matching a face. In this study we discuss three leading techniques: two traditional techniques and one modern technique. We do not focus on how these can be applied on mobile devices, because it is currently not feasible to store the entire set of known faces locally on any device. We assume that the face will be processed by a different device than the smartphone.

Traditional face recognition techniques

In the early years of face recognition, the process was limited to two main approaches: feature recognition and template matching [1]. These approaches can vary depending on the exact matching methods, but the general idea behind each of them remains the same.

1http://wordnetweb.princeton.edu/perl/webwn?s=face+recognition

Feature recognition is based on the characteristics of a face [1]. It identifies several points on the image, like the position of the nose, the eyes, the mouth and the eyebrows. The algorithm sees the face as a geometric shape and tries to find the relations between the different aspects of that face. To be able to compare several images, the images need to be normalized: all used images must be independent of the scale, position and rotation of the face. One way to normalize is to compare the position of the eyes in a normalized image to the eyes in the image yet to be normalized, and then scale and rotate the latter to match these eye positions. Many different approaches exist to match the geometric characteristics of faces, the best known being Principal Component Analysis (PCA) [2], also referred to as the use of eigenfaces. In this technique, each image is decomposed, using singular value decomposition, into a weighted sum of uncorrelated eigenvectors ("eigenfaces") that capture the important features of the face. Even in this highly compressed state, one is still able to reconstruct the picture in a simplified form with all characteristics intact (see figure 2.1). The vector of weights represents the image, and all faces are then compared by comparing these weights.

Figure 2.1: An example of reconstructed eigenfaces of the same person [3].
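The eigenface pipeline described above can be sketched in a few lines of Python with NumPy. This is a minimal, hypothetical illustration, not the algorithm of any particular system: it assumes the gallery images are already normalized, grayscale and flattened to equal-length vectors, and it matches a probe by nearest neighbour in the eigenface subspace.

```python
import numpy as np

def train_eigenfaces(gallery, k=10):
    """Build an eigenface basis from a gallery of normalized faces.
    gallery: array of shape (n_images, n_pixels), one flattened
    grayscale face per row, all scaled and aligned the same way."""
    mean = gallery.mean(axis=0)
    centered = gallery - mean
    # Singular value decomposition of the centered data;
    # the rows of vt are the eigenfaces (uncorrelated eigenvectors).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:k]
    # Each gallery face is represented by its weights (projections)
    # onto the eigenface basis.
    weights = centered @ eigenfaces.T
    return mean, eigenfaces, weights

def match_face(probe, mean, eigenfaces, weights):
    """Project a probe face onto the basis and return the index of
    the closest gallery face in eigenface space."""
    w = (probe - mean) @ eigenfaces.T
    return int(np.argmin(np.linalg.norm(weights - w, axis=1)))
```

Comparing Euclidean distances between weight vectors corresponds to the comparison of weighted sums described above: two images of the same person should project to nearby points in the eigenface subspace.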

In template matching, the picture of the face is seen as a matrix of light intensities. For this technique, the images need to be normalized in the same way as in the previous technique: the comparison only works if the face in every image is oriented the same way and has the same size and rotation. The image containing the face to be recognized is compared to all others in the database, and a score is calculated per characteristic (nose, eyes, mouth, ...). The database image with the highest score is the matched image. This is a very basic technique, because the images need to be similar to the ones in the database to give a correct match, and it is very sensitive to illumination variations.
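The scoring step can be illustrated with normalized cross-correlation, one common way to compare intensity matrices. This is a simplified, whole-image sketch under the same normalization assumptions; a real system would score individual regions (nose, eyes, mouth) separately and combine the scores.

```python
import numpy as np

def ncc_score(a, b):
    """Normalized cross-correlation between two equally sized
    grayscale images viewed as matrices of light intensities.
    Close to 1.0 for near-identical images, near 0 for unrelated ones."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

def template_match(probe, gallery):
    """Return the index of the gallery image with the highest score.
    Assumes every image is normalized to the same size and pose."""
    scores = [ncc_score(probe, g) for g in gallery]
    return int(np.argmax(scores))
```

The normalization inside `ncc_score` removes the mean and variance of each image, which gives some robustness to uniform brightness changes, but not to the stronger illumination variations mentioned above.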

Three-dimensional face recognition

The techniques described above were among the first attempts at face recognition. They are based on the comparison of two-dimensional images, so their results depend on the normalization of the images, which is a cumbersome process. More recent techniques compare faces based on three-dimensional reconstruction [4]. Such a technique measures the distances between characteristics of the face (like the nose, mouth and eyes, but also the bone structure) and uses symmetry to construct a three-dimensional model of the face, as seen in figure 2.2. This modelling is, in contrast to the previous algorithms, less sensitive to lighting and rotation. Overexposure can still blur out the characteristics, but slight variations in lighting are no problem because the 3D model can still be constructed. The 3D representation can be rotated and then matched against the other 3D faces in a database. The model with the greatest similarity is the most probable match [5].

Figure 2.2: An example of a three-dimensional reconstruction of a face [6].
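One property that makes 3D matching robust to pose can be shown with a small sketch: pairwise distances between 3D landmarks do not change when the head is rotated. This hypothetical example only illustrates that invariance; real systems such as the one described in [4] build and compare full 3D models rather than bare landmark sets.

```python
import numpy as np
from itertools import combinations

def distance_signature(landmarks):
    """Pairwise distances between 3D facial landmarks (nose tip,
    eye corners, mouth corners, ...). A rigid rotation of the head
    leaves all of these distances unchanged."""
    pts = np.asarray(landmarks, dtype=float)
    return np.array([np.linalg.norm(p - q)
                     for p, q in combinations(pts, 2)])

def best_match(probe_landmarks, gallery_landmarks):
    """Index of the gallery face whose distance signature is closest
    to that of the probe."""
    s = distance_signature(probe_landmarks)
    dists = [np.linalg.norm(s - distance_signature(g))
             for g in gallery_landmarks]
    return int(np.argmin(dists))
```

Because the signature is rotation-invariant, a probe captured at a different head angle still matches its gallery entry exactly, which is precisely the advantage over the 2D techniques above.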

2.1.2 Face recognition APIs

Because of the numerous possible applications on smartphones, multiple companies have set up an application programming interface (API) that lets developers easily include face recognition in different applications. This makes it possible to bypass the implementation of a face recognition algorithm and instead focus on the application itself. It is therefore necessary to find a suitable API to take care of the recognition. This section compares some of these APIs.

Face.com: Face.com [7] is one of the most famous companies offering a face recognition API. They focus on a social experience and have built their database mostly by using tagged photos from Facebook [8] and Twitter [24]. Their API allows developers to create their own face recognition applications without the need for a private database of faces. All the applications that use Face.com help their database grow even further. Once a user of a Face.com-powered application logs in through Facebook, the Face.com engine can train the algorithm with his face and those of all his friends by using the tagged photos. At the moment, their database contains over 31 billion face images from over 100 million different persons. Users can recognize their Facebook friends, or the application can provide a private namespace with a collection of people to recognize. The algorithm behind Face.com is based on three-dimensional face recognition as described in the previous section [6]. It builds a 3D model of the faces, which is compared with the models from the database.

Betaface: Betaface offers a complete package to recognize persons in images and videos [9]. Their focus is mainly on indexing, allowing multimedia content to be scanned and indexed. For example, you can search the content of a video for the presence of certain celebrities. The software identifies known persons or can save unknown persons for later identification. Their face recognition algorithm is undisclosed, but they do detect facial points and indicate biometric characteristics.

Verilook: Verilook [10] is a product developed by Neurotechnology. It is a face recognition service built for use on personal computers and is able to extract faces from videos or images and match them against the database of the developer. Using this API thus requires us to construct our own database of faces. No information was found about the inner workings of the algorithm.

Other APIs: PittPatt [11] and Viewdle [12] are two other promising APIs, but they cannot be discussed due to several issues. PittPatt was recently acquired by Google and is currently not available; it promised a face recognition API that could tell the viewing direction and scan for geometric characteristics. Viewdle has an iOS API (for iPhone and iPad development) which allows developers to integrate real-time face recognition along with augmented reality features. Unfortunately, it is still in a beta stage and no invitation for testing was issued.

Comparison

A comparison of these APIs is presented in table 2.1. Since our focus is to develop a mobile application, the APIs should support Java for Android and/or Objective-C for iOS. Betaface does not satisfy this requirement; Face.com supports both Objective-C and Java, and Verilook supports Java. Face.com also provides a database based on Facebook’s shared photos. This is a big advantage because it provides a solid database to start from. It also allows a user to log into Facebook so that the algorithm can be trained with all his friends’ faces. Verilook does not provide a database, so we would need to create our own, which would be a small, controlled set of images of a limited number of test persons. This limits the opportunities to test the application with broad audiences. A drawback of Face.com is that it can only process images, not video. It is free, however, with a rate limit of 5000 photos per hour, which seems sufficient for research and testing purposes. Verilook and Betaface are paid APIs, so university funding would be required.

                   Face.com           Betaface           Verilook
Supported media    Images             Images and video   Images and video
Platform           Multiple (incl.    .NET               Multiple (incl.
                   Obj-C and Java)                       Java)
Returns face       Yes                Yes                Yes
features
Social network     Facebook and       None               None
connection         Twitter
Database           Provided           Provided           Not provided
Database size      +31 billion        Not given          Not applicable
Cost               Free (limited)     Paid               Paid

Table 2.1: Comparison of the different face recognition APIs

2.1.3 Current applications

As related applications, we discuss a couple of smartphone apps that use face recognition. Looking at these applications can provide insights into the possibilities of face recognition on mobile devices.

KLIK

KLIK is a photo application made by Face.com [13]. It demonstrates the power of their API: the user can point the camera at a person and the app will show his name in real time. The user can then capture a picture and edit it with multiple filters. It only recognizes the Facebook friends of the user, since Face.com is based on Facebook. It was released during the development of this thesis’ application and shows some functionalities that we envisioned as well, like the automatic recognition of multiple persons at the same time. The application is shown in figure 2.3(a).

TAT Augmented ID

TAT Augmented ID is a concept application that serves as a social aggregator based on face recognition [14]. The concept is closely related to the objective of this thesis and shows a possible user interface that displays social network links using augmented reality after recognizing a face. In their interface, the icons of the networks float around the head of the recognized person, as presented in figure 2.3(b).


(a) KLIK by Face.com [13]; (b) TAT Augmented ID [14]; (c) Viewdle Social Camera [12]

Figure 2.3: Related face recognition applications

Viewdle SocialCamera

Viewdle SocialCamera is a showcase application that lets the user snap pictures and automatically recognizes and tags the persons in the picture [12]. It does not display names in real time like the two apps mentioned above, but tags people after the picture is taken. This application presents an alternative to the suggested augmented reality approach. The application is shown in figure 2.3(c).

Social annotations in augmented reality

Social annotations in AR is a thesis from this department from the previous year, written by Niels Beukers [19]. The goal of the thesis was to create a face recognition application that annotates people in real time with information from social networks and uses this information as a conversation enhancer. For the face recognition functionality, Face.com was used; the program is able to recognize only the Facebook friends of the user. A negative point is that the application needs a back-end in order to send the snapped images to Face.com and to authenticate with Facebook through a different website. However, these challenges are no longer present, because Face.com now provides multiple SDKs for different programming languages that cover these issues.

2.2 Augmented reality

Augmented reality (AR) is defined by Ronald Azuma as having the following properties [15]:

1. combines real and virtual objects in a real environment,

2. runs interactively and in real time and


3. registers (aligns) real and virtual objects with each other.

These properties are technology independent. AR requires an input device to scan the real world, a processing unit to add virtual objects and an output channel to display the augmented world. The input device is usually a camera. When the user wants to walk around using AR, the output device can be either a head-worn display (HWD) or a handheld device like a smartphone. This thesis focuses on mobile devices, which are well suited for AR: smartphones are equipped with a camera and have enough processing power to interpret the feed and add virtual objects, and the screen can serve as both the display and the input device.

2.2.1 AR techniques on mobile devices

There are basically two trends in AR on mobile devices at the moment: location-based and view-based. The distinction lies in how the data is processed before the AR is displayed to the user.

Location-based AR is used to display information about fixed items. These items can be buildings, landmarks and any other objects that are fixed in one location. The coordinates of these locations are stored in a database along with the extra information, like a description, multimedia content, etc. A smartphone owner can use a location-based AR application to find his way to places or to examine his environment. This technique does not need to scan the input from the camera, but instead uses the built-in GPS and compass to determine the user’s location and orientation. An example application can be seen in figure 2.4.

Figure 2.4: An example of a location-based AR app: Layar [18].
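The core computation of location-based AR can be sketched as follows: from the GPS positions of the user and a point of interest, compute the bearing to the point, and compare it with the compass heading to decide whether the annotation falls inside the camera's view. This is a simplified sketch with an assumed 60-degree horizontal field of view; a real application would also use the accelerometer to place annotations vertically.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the user at (lat1, lon1) to a
    point of interest at (lat2, lon2), in degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def in_view(poi_bearing, compass_heading, fov_deg=60.0):
    """True if the point of interest falls inside the camera's
    horizontal field of view for the given compass heading."""
    # Wrap the difference into (-180, 180] before comparing.
    diff = (poi_bearing - compass_heading + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0
```

A point due east of the user has a bearing of 90 degrees; it is drawn on screen only while the compass heading stays within half the field of view of that bearing, which is exactly why this technique needs no image processing.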

View-based AR is a more complicated technique. It receives the input from the camera feed, processes it and displays extra objects or information. It needs the functionality to scan the feed for a certain type of object. An example of view-based AR can be seen in figure 2.5.


Figure 2.5: An example of a view-based AR app: Augmented driving [21].

2.2.2 Current applications

The use of AR on smartphones is a booming business [16]. Looking at how AR applications are used now and the technologies behind them can give us a clear view of what is available.

Layar

Layar [17] is an Android and iOS application (see figure 2.4 for a screenshot). The company is one of the pioneers of smartphone AR. Originally the application used location-based AR to show users information about places in their neighbourhood, but it has recently expanded to view-based AR too. The unique part about Layar is that it uses, as you could expect, layers of information. For instance, you can choose the Wikipedia layer and it will show you Wikipedia information about your neighbourhood. Layar allows users to create their own layers of information, which makes it a very open platform that people can experiment with.

Augmented driving

Augmented driving [21] is an iPhone application created by imaGinyze. It is a great example of view-based AR. The application scans the camera feed of the road ahead for lanes and cars and monitors your driving speed. It processes this data and displays an augmented view of the road on screen: cars and lanes are highlighted, the speed is indicated, the distance to the car in front of you is given, and so on. It is a well-built application with a lot of features, and it proves that modern smartphones are powerful enough to process a live camera feed in real time.


Social annotations in AR

This work has already been presented in section 2.1.3. The application uses an HWD approach to display information about the recognized person [19]. However, this technology is not yet at the stage where the display is just a small set of goggles: the setup consisted of a set of AR goggles combined with a laptop carried in a backpack, a webcam mounted on top of the goggles and a mouse to navigate through the application. Another obstacle during that thesis was the use of OpenGL [20] to combine the video feed from the camera of the HWD with the annotations. Our work focuses on mobile devices, so both problems can be avoided. Our setup will consist of a smartphone, which already contains a camera, an input device (a touchscreen) and a display. The use of OpenGL will not be necessary, as the mobile platforms offer their own solutions for layering a video feed.

2.3 Social network services (SNS)

As said in the introduction, the digital identity of a person is represented by his presence on the Internet. Due to the increasing popularity of SNSs, the industry has expanded, and this has resulted in the birth of many different tools. The difference between all these tools lies in their main focus. Table 2.2 displays a list of some of the leading networks along with their specific focus [22].

Social network    Focus           Members [22]
Facebook          Social          901,000,000+
Google+ [25]      Social          170,000,000
Twitter           Microblogging   300,000,000
LinkedIn [26]     Work/Jobs       120,000,000
Last.fm [27]      Music           30,000,000
Flickr [28]       Pictures        32,000,000
Slideshare [29]   Presentations   Not available
DeviantART [30]   Art             22,000,000

Table 2.2: Social networks and their focus.

Because the focus is so diverse, most people are subscribed to several networks, each covering another aspect. A person can also choose to subscribe to multiple SNSs with the same focus. For instance, if a Dutch person is already subscribed to Hyves [31], a network with a social focus in the Netherlands, he might still want to join Facebook because it is more international or has more interesting functionalities. This difference in focus, and the freedom of the user to subscribe to as many networks as he wants, causes most people to have more than one social network account. Membership of multiple social networks causes their information to be spread out. To help create a clear overview of all the information of a user, social network aggregators have been developed. They serve as a platform to collect and display data from social network sources.
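In its simplest form, aggregating a user's spread-out presence only needs a mapping from a network name and handle to a public profile URL. The following sketch uses hypothetical URL templates and a made-up handle; real aggregators additionally fetch and merge the activity feeds behind these profiles.

```python
# Hypothetical profile-URL templates; the URL patterns are
# illustrative, not taken from any aggregator's API.
PROFILE_TEMPLATES = {
    "facebook": "https://www.facebook.com/{handle}",
    "twitter": "https://twitter.com/{handle}",
    "linkedin": "https://www.linkedin.com/in/{handle}",
}

def aggregate_profiles(handles):
    """Map a user's social network handles to public profile links.
    handles: dict such as {"twitter": "someuser"}; networks without
    a known template are skipped rather than guessed at."""
    return {network: PROFILE_TEMPLATES[network].format(handle=h)
            for network, h in handles.items()
            if network in PROFILE_TEMPLATES}
```

This handle-to-URL mapping is the core of the handle-based aggregation style discussed in the next subsection: no authentication is involved, so the aggregator simply trusts the handles it is given.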

13

Page 26: Discovering digital identities through face recognition on mobile devices

2. Research

2.3.1 SNS aggregation

One way to aggregate social networks and the information streams they provide is to create an umbrella application that brings them all together. This one application gives an overview of all the activities of the accounts you link to it, while also providing the option to post updates to multiple networks in a single click [32]. In this section, a couple of examples of aggregation applications are discussed, divided into two subclasses: aggregation by handles and aggregation by authentication.

Aggregation by handles

Aggregation by handles means that the linking of the networks happens by filling in social network handles. A handle is the username of a person on an SNS (for instance gerry.hendrickx on Facebook). With aggregation by handles, you can simply fill in your Twitter handle and the application will link your Twitter account to your FriendFeed profile. No authentication with Twitter is required, so the system trusts that you will provide correct account information. Two examples of such aggregation are discussed:

Figure 2.6: An overview of prof. Erik Duval’s linked networks on FriendFeed.

FriendFeed: FriendFeed has been created by former Google employees and is a social network itself. It offers real-time updates and allows users to share information, chat, etc. But FriendFeed’s most important feature is the option to link it with more than 50 other social networks. Status updates of the linked networks are then displayed in the FriendFeed feed. You can also easily find the social networks that are linked to the feeds of other persons, as you can see in figure 2.6. This makes it a great tool for discovering digital identities.

More!: More! [36] is a mobile social discovery web tool that focuses on Web 2.0 and academic sources and was developed at the Katholieke Universiteit Leuven. Similar to this thesis, More! aims to be a mobile discovery tool, allowing efficient and frictionless discovery of a person’s digital identity. The big difference is the approach More! uses to reach this goal: More! is based on QR codes [37]. The user has to scan the QR code of another user to get the link to the overview web page of that person’s networks. However, the test users experienced difficulties with scanning these codes, which was the weak point of the project. This thesis offers a solution to the problem by using face recognition instead of QR codes. Also, More! has a functioning back-end, which allows its users to fetch person information through a RESTful API. The database saves the social network handles, the email address and academic information of More! users.

Aggregation by authentication

The second example of social network aggregation focuses on mobile devices. HTC, one of the leading manufacturers of Android devices, ships an application called FriendStream [34] with its Android phones. FriendStream is, in contrast to FriendFeed, not an SNS by itself. It is an aggregator in its purest form, simply combining the feeds and allowing users to post updates to multiple networks at a time. You can see the application in figure 2.7. Another difference with FriendFeed is the method of adding an SNS to the application: FriendStream requires the user to authenticate with the network he wants to add. It requires specific permissions (like posting to your wall) and needs the social network’s authorization to do this. Because of this authentication, the number of SNSs that can be linked is limited at the moment, since linking means interacting with the API of the network, generating tokens and authenticating. This would be a cumbersome task if FriendStream also wanted to link more than 50 networks. FriendStream is currently limited to Facebook, Twitter and LinkedIn.

Figure 2.7: The FriendStream app on an Android smartphone [35].


Conclusion

This chapter researched the different aspects of this thesis. First, we looked at face recognition and different approaches to recognizing a face. Feature recognition and template matching are outdated techniques and cannot match up to 3D face recognition. It would be best for this thesis to use an algorithm based on this newer technique, because it is less sensitive to light intensities and to the rotation and angle of the image. The study of available APIs showed that Face.com is the most promising, as it offers a large database, a direct link to the Facebook social network, and support for both Java and Obj-C, the main mobile development languages.

The augmented reality study showed two kinds of AR applications on mobile devices: location-based apps and view-based apps. Location-based apps use fixed GPS coordinates and do not process the incoming video feed, while view-based apps do. For face recognition, a view-based app should be created, so the incoming feed with the face can be analyzed and annotated.

The social network research showed that people use multiple social networks because of the difference in focus between the networks; the existence of social network aggregators further proves this point. Social network aggregators can work either with just the handles or with full authentication of the linked networks. More!’s back-end seems very useful, as it already serves as an aggregator of social network handles. Face recognition could work where QR codes failed.


Chapter 3

Analysis

After researching what is possible regarding face recognition, augmented reality and social aggregation on mobile devices, the idea behind the application can now be defined. In this chapter, use case scenarios and a survey are presented to examine the required functionalities. Finally, some privacy issues are discussed and a solution is proposed to deal with them.

3.1 Requirements

In this section we define the functionalities that the application will support. To obtain these, we held a brainstorming session and posted a poll on Facebook.

3.1.1 Scenario brainstorming

A brainstorming session took place about possible face recognition applications, in order to define the idea for our application. This resulted in the following scenarios:

Scenario 1: A face recognition application can be used to acquire a person’s vCard [41], a file format standard for electronic business cards. It contains the person’s name, email addresses, phone numbers, URLs and so on. The user of the application can scan a person’s face and obtain the vCard, which could then be saved to the address book of the phone.

Scenario 2: The vCard scenario can also be extended to an address book application. The application could have its own contact directory based on faces, and the recognized picture can be used as the person’s profile picture, providing an image that helps you remember him.

Scenario 3: More! (described in section 2.3.1) focuses on an academic context: you need to scan a QR code to access an overview of the papers and slides of a presenter at a conference. This allows the user to follow up on this presenter’s publications and work. Since the QR code scanning did not yield the required results, the same concept can be tried with face recognition.


Scenario 4: Also applicable at conferences: you could use the application to scan nearly all the faces in the room and get a real-time augmented reality overview of what people are tweeting about the conference, by showing their last tweet above their heads.

Scenario 5: As in TAT Augmented ID (described in section 2.1.3), face recognition could be used to scan a person and have icons of his social networks float around his head in augmented reality. This gives the user an efficient way to quickly access a person’s social networks.

Scenario 6: Face recognition can also be used for access control. A security system can check whether a recognized person has access to a restricted area. This can be applied for access control in buildings, instead of authentication badges, or on computers and mobile devices, instead of passwords or pin codes.

Scenario 7: You could also scan a celebrity, for instance on a billboard or in a commercial. The application could link this face to their IMDb page or personal website.

Scenario 8: Instead of using the application to recognize a face and get the name and info, you could also use it the other way around: you could give the application a name and let it search the room for you. When the application finds the person matching the name, it can highlight him using augmented reality.

Scenario 9: Prosopagnosia is a disorder that prevents people from recognizing faces. Studies show that nearly 2 percent of the world population suffers from this condition, though not all equally severely [42]: some people have occasional trouble, while others never succeed in recognizing a face. The face recognition application could be used by prosopagnosia patients to scan a person and read his name, instead of having to recognize him themselves.

From these scenarios we extract a final scenario that will be used for the application and that describes the functionalities that will be supported. Multiple scenarios are based on the same underlying functionalities: scenarios 1 to 5 and 7 all need to recognize a face and look up information about it. The required information varies, ranging from social network information to personal information. Scenario 6, the security system, and scenario 9 are valid uses of face recognition, but require only a lookup of the name. Scenario 8 starts from the assumption that you know the name and look for the person, which is the opposite of the other scenarios. Because our focus is on discovering digital identities, scenarios 1 to 5 and 7 are combined into one final scenario: the user of the application should be able to recognize the people around him and look up information about them. The information will be displayed in augmented reality and will show status updates (scenario 4) and links to social networks (scenario 5). The user can save this information to the address book of the phone (scenario 1) and look it up later in the history of the application (scenario 2). The information contains personal information, information from a number of social networks and academic information like publications or slides (scenario 3).


3.1.2 Information needs survey

A survey was posted on Facebook to get a view on what information people would want to receive if they could recognize a person by using a smartphone. The results can be seen in table 3.1.

Option                                                Number of votes
Nothing, I would allow people to have some privacy.   14
Contact information                                   9
Links to SNSs                                         6
Last status updates                                   3
Pictures from Facebook                                1
Name and where we last met                            1
Possibility to contact the person                     0

Table 3.1: Survey results

As you can see in table 3.1, 14 out of a total of 34 voters (41%) voted for the option nothing. This is alarmingly high: people seem to be very protective of their privacy. More on this subject in section 3.2.

If we leave the nothing option out of consideration, we see that 9 out of 20 votes go to contact information and 6 to links to social networks. 3 people want to see the recognized person’s last status update, and only 1 person wants to see pictures or the last location they met. Nobody wanted the possibility to quickly email or text the recognized person.

When we combine the final scenario from the previous section with the results of this survey, we see that the application will meet the needs of the survey participants if it indeed supports the options to look up and save contact information, provides links to different social networks and displays status updates.

3.1.3 SNS survey

The main functionalities of this application are the face recognition and the information fetching: the application should be able to recognize a face and gather information about that person from several social networks. To find out which networks are the most important, we asked each test person throughout the year which networks they use. The results can be seen in table 3.2.

The table shows us that all 25 test persons use Facebook. Twitter, LinkedIn and Google+ are also used a lot. More specific networks like Last.fm, or academic networks like Mendeley and SlideShare, are used less; however, the academic networks are used by every one of the academic participants. 3 people also used other social networks: Foursquare (2 votes) and Spotify (1 vote). Because the popularity of a social network varies from time to time, we should keep the set of supported networks as open as possible. The application should support the main social networks (Facebook, Twitter, LinkedIn and Google+), but should be easily adaptable to support other networks.

SNS          Number of votes
Facebook     25
Twitter      18
Google+      17
LinkedIn     12
Last.fm      7
Mendeley     7
SlideShare   6
Other        3

Table 3.2: Results of the social network survey.

3.2 Privacy concerns

The results of the poll in table 3.1 were intriguing, as privacy was not initially considered an issue. First of all, the general idea behind this thesis was to use information that is publicly available on the Internet. There was no attempt to access private information: the connection to the social networks always goes through their APIs, and authorization is required to access private information. To get an idea of people's acceptance of being recognized, a second poll was posted on Facebook. The results are shown in table 3.3.

Options                                            Number of votes
I would not like to be recognized at all.                       13
I wouldn't mind because the information about
me on the internet is controlled.                                5
I wouldn't mind being recognized.                                3
I wouldn't mind because there isn't a lot of
information about me on the internet.                            0

Table 3.3: Results of the Facebook survey on privacy concerns.

13 out of 21 participants (61%) refuse to be recognized by a smartphone application. 5 voters do not care whether they are recognized, because the information about them on the Internet is controlled: they know which information is publicly accessible and monitor it themselves. 3 voters simply don't mind being recognized; one of them said he had accepted the general tendency of the internet becoming less and less private. This 61% is a number that cannot be ignored, so a solution to the privacy concerns is necessary in order to gain more users.

After a brainstorm and a deeper look at what the face recognition APIs offer, we have come up with a solution that we think is feasible and will address the privacy concerns. As said in section 2.1.2, Face.com can recognize both Facebook friends and people in a private namespace. It is not possible to recognize just anybody: a Face.com request must specify the set of people to search for. For instance, you can send [email protected] along with an image to search that image for your Facebook friends. This limitation turns out to be the solution to our privacy concerns. We could let the application detect only our Facebook friends, but this limits the discovery possibilities. Therefore we also use the private namespace and register all users of the application in it, so that a user can be recognized by other users. This follows the philosophy that you cannot object to people recognizing you if you yourself recognize people and willingly accepted to use the application. The final set of people that are recognizable for a user is thus his Facebook friends plus all other users of the application.
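To make the mechanism concrete, the sketch below assembles a Face.com-style recognition request that restricts the candidate set to the caller's Facebook friends and the application's private namespace. The endpoint and parameter names (`api_key`, `urls`, `uids`) are indicative only, written from memory of the (since discontinued) Face.com REST API, and would need to be checked against its documentation.

```python
# Illustrative only: builds a Face.com-style recognition request URL.
# Endpoint and parameter names are assumptions, not verified API facts.
from urllib.parse import urlencode

def build_recognize_request(api_key, image_url, facebook_friends=True,
                            app_namespace=None):
    # The candidate set is the union of the caller's Facebook friends
    # and all registered users of the application: the privacy
    # compromise chosen in this section.
    uids = []
    if facebook_friends:
        uids.append("friends@facebook.com")
    if app_namespace:
        uids.append("all@" + app_namespace)
    params = {"api_key": api_key,
              "urls": image_url,
              "uids": ",".join(uids)}
    return "https://api.face.com/faces/recognize.json?" + urlencode(params)
```

The key point is simply that the server is never asked to recognize arbitrary people; every request carries an explicit, bounded `uids` list.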

Conclusion

This chapter described the brainstorm and Facebook surveys that led to the functionalities that should be supported by the application. The application should be able to save a history or sync the information found to your address book. It should also be extensible with regard to supported SNSs: it should include popular networks like Facebook and Twitter, but also academic networks like Mendeley, so that it can be used in different contexts. It should use augmented reality to display which networks are available for the recognized person, together with basic information like the last status update.

The privacy issues that were apparent from the poll led to restrictions on who can be recognized. A compromise was found that allows a user to recognize his Facebook friends and other users of the application.


Chapter 4

Design

After deciding on the functionalities of the application, we proceed to design the user interface. This chapter presents the methods applied to create the application. The approach followed is cyclic: first a prototype is designed, then it is evaluated; after the evaluation, the design is adapted and evaluated again. The cycle continues until we are satisfied with the results of the evaluation. The first design step is a storyboard: an overview of the different screens in the application, presenting a story of how the user might experience it. Next comes paper prototyping, an efficient way to test an interface: we draw the interface on paper and evaluate it with test persons in different iterations. This chapter discusses three iterations of the paper prototype. The first one decides which interface is the most usable, the second is an elaboration of the chosen interface, and the third is an adjusted version of the second prototype, evaluated by experts in the field of human-machine interaction.

4.1 Technology

There are different operating systems for smartphones, so it is important to find the platform that best suits the demands of the application. The technology needs to be decided before designing the application, because different platforms are subject to different user interface guidelines. Apple's iOS for iPhones and iPads and Google's Android for smartphones and tablets are the two biggest platforms in the current smartphone market. In March 2012, Android had a US market share of 51.0 percent and iOS a share of 30.7 percent [43]. RIM, with their operating system for BlackBerry, had a share of 12.3 percent, followed by Microsoft and Symbian with 3.9 and 1.4 percent respectively.

Both platforms, compared in table 4.1, use object-oriented programming languages, but iOS's Objective-C does not have garbage collection, so memory management needs to be handled by the programmer. This makes the programmer more aware of memory usage, which matters since smartphones have limited memory. Android can be developed on Windows, Mac or Linux, using the Eclipse IDE; iOS can only be developed on a Mac, using the Xcode IDE.

                            iOS                        Android
Language                    Objective-C                Java
IDE                         Xcode                      Eclipse
Operating System            Mac OSX                    Windows, Mac OSX, Linux
Documentation               Well documented            Well documented
Built-in face detection     Yes (iOS 5.0 and newer,    Yes (only images)
                            video and images)
Built-in face recognition   No                         No

Table 4.1: Comparison of the iOS and Android platforms.

More specific to our application are the face detection and recognition techniques provided by both platforms. Android supports face detection in images: you can import an image and it will tell you where the faces are. iOS does the same, but also for video. This is a huge plus, because the application needs to be able to detect faces in real time. Neither of the platforms supports face recognition, so an external face recognition API will have to be used. In conclusion, both platforms are good candidates, but the real-time video face detection of iOS is the deciding factor. The application will be developed for iPhone, using the iOS platform.

4.2 Storyboard

Figure 4.1 shows the initial storyboard, created in PowerPoint. Screen 1 shows the initial view of the application: it represents the camera view and automatically starts detecting the faces present in the camera feed. Once a face is detected, it is marked with a square box and the application automatically starts the recognition process. Screen 2 shows the state after a successful recognition: the name of the person is displayed above his face, a button below the face allows marking the recognition as incorrect, and an arrow icon is shown at the right side. If the user taps the arrow icon, information from social networks is fetched and displayed, as shown in screen 3. Next to the face, the icons of the networks of which the recognized person is a member are drawn. If the user chooses to tap one, for instance the contact icon, he arrives at screen 4 and the contact information is shown next to the face. If he taps the Twitter icon, he sees the last tweet of the recognized person. The contact view contains a button to sync the information with the contact book of the smartphone. The whole process uses augmented reality, with the user pointing the phone at the person he wants to recognize.


Figure 4.1: Storyboard

However, one of the concerns during the design of the storyboard was that it might be cumbersome for the user to keep his iPhone pointed at the recognized person in order to be able to read the information. It would also mean that the user has to navigate through the application while holding the phone in the air. This concern needs to be evaluated further. To be able to compare the evaluation of this interface with possible alternatives, two other user interfaces were created. The storyboard with the alternative interfaces is shown in figure 4.2.

Interface 1 is the user interface as presented in the storyboard in figure 4.1 and uses a fully augmented interface to display the information. Interface 2 uses a sidebar. This allows the user to keep using the camera and scan other people while he examines the fetched information; he no longer needs to keep the phone pointed at the recognized person, this is now optional. The use of augmented reality is reduced, which might improve the usability. Interface 3 uses full-screen information panels, which completely disables the augmented reality while looking at information. This provides the best overview of the fetched information, but removes the ability to continue using the camera while browsing through the information. These three user interfaces will be evaluated.


Figure 4.2: User interface alternatives 2 and 3


4.3 Paper prototype iteration 1

To evaluate the different user interfaces, paper prototypes have been developed. Paper prototyping [38] takes place after the idea behind the application has been decided and the required functionality has been determined. It is the process of creating a pen-and-paper mock-up of the application by drawing the user interface and cutting out the pieces. The designer uses these pieces to simulate the application, which gives him an initial idea of the user interface without spending much time. He can evaluate this paper prototype with test persons by letting them play out scenarios or execute tasks. The test persons need to find their way in the application while using a technique called think aloud [39]: they say what they think is going to happen or what they expect. This way the designer immediately gets feedback about which buttons are clear and which are not.

The first iteration starts from the three different user interfaces of figures 4.1 and 4.2. All the different buttons and views were printed and cut out. The paper prototypes of the interfaces were tested with 11 subjects, all between 18 and 23 years old and studying at the University of Leuven. Seven of them own a smartphone, 4 of those an iPhone; the other 4 had little or no smartphone experience. The tests used think aloud, and the subjects were given the following tasks to complete:

1. Find the last song that the person standing in front of you listened to.

2. Add that person to your iPhone's contacts.

3. Notify the system of a wrong recognition.

4. Visit the Facebook profile of the person standing in front of you.

5. Try accessing data from multiple persons at the same time.

These tasks needed to be completed for each of the user interfaces and are designed to test whether the test subjects can find and use all the functionalities in the application. If they complete the tasks in the given order, they will discover every aspect of the application. Steps 1 and 2 of figures 4.1 and 4.2 were simulated by the evaluator, because they should happen automatically in the final application; the user interaction for all interfaces starts at the end of step 2. Because paper prototyping uses static images, the camera feed was simulated with a photo. Another option was to use transparent material, so that testers could see through the prototype and use the real world as camera feed. This was not done because all the loose pieces would make the prototype hard to lift.


4.3.1 Results and discussion

When executing the tasks with user interface 1, the test subjects were required to use their imagination and experience to picture how they would use the application in the real world. All of them thought the application was cumbersome to use. The fact that you have to keep the phone pointed at a person at all times, even when just reading information, proved to be a big issue. 4 of the 11 persons commented that once they were reading the information, it would be frustrating if the recognized person kept moving: the text would move over the screen, or worse, the recognized person could walk away. They would lose track of him and would have to recognize him again once he came back. 7 persons said that the underlying camera feed was no longer a priority once they had the information. Upon receiving the information the focus shifts from recognizing to reading, and they would prefer to read without following the recognized person.

User interface 2 (as seen in figure 4.2) received more positive feedback. It eliminated the problem of having to track the person while reading the information: the person could move away, out of the reach of the camera, and the user would still be able to read his data. However, 8 of the 11 persons said that the camera feed on the screen would not be useful. They did not like interface 1 because they wanted to lower their phone and check the information; interface 2 allows them to do this, but the camera would just show the ground and that part of the screen would not be used properly. The 3 remaining persons thought this would not be a problem and commented that the camera allowed them to recognize another person more quickly. The iPhone users also mentioned that using a third of the screen to display information would not be enough to create a clear information overview. Since the iPhone screen measures 5 cm in width and 7 cm in height, using a third of the height to display information gives only a bit more than 2 cm. They reckoned that this was not enough.

User interface 3 was received positively. The 8 persons who did not see a use for the camera part of interface 2, and the persons who thought 2 cm was not enough to display the information, all saw interface 3 as the optimal one. It also did not have the problem of keeping track of the recognized person. When asked after the evaluation which interface they preferred, none of them picked interface 1. Although this might work with a head-worn display (you would just have to keep looking at the person), holding up the phone the entire time was a big issue for the users. 73% (8/11) of the test users preferred the third interface, mainly because of the readability of the full-screen information. The rest preferred the second interface, because of the ability to keep using the camera while viewing information, which would make the application more interactive. The difference is large enough to pick interface 3 as the final design. This interface will now be elaborated and evaluated in a second paper prototype.


4.4 Paper prototype iteration 2

The previous prototype only contained the core screens of the application: the camera view, and a view where the information about the recognized person is shown, hereafter referred to as the overview view. In this second iteration, the prototype has been elaborated further. The overview view is split up into different views for each social network and one general screen (figure 4.3(c)). As discussed in previous sections, in order to get an overview of someone's social networks, that person needs to link those profiles to his account. We therefore need a page that allows the user to log into his social network accounts and link them to his account in the application (see figure 4.4(b)). Prototype 2 also contains a history (figure 4.3(b)); this way we can test whether the users prefer the history, the adding to contacts, or both. To keep the camera view from being cluttered with buttons, a home view has been created (figure 4.3(a)), which links to the camera view, the history view and the settings view. The settings view (figure 4.4(a)) contains a link to the social network settings view and the following extra settings that might be useful:

• Prevent screen lock: This disables or enables the automatic locking of the iPhone.

• Launch in scan mode: If enabled, this opens the application directly in thecamera view, instead of the home screen.

• Delete all history: An efficient way to clear the recognition history of theapplication.

The general style of the application tries to follow the iOS Human Interface Guidelines [44]. These are guidelines created by Apple that explain how to use the standard iOS interface parts and give predefined locations to place a button or label. These predefined parts are used in the prototype, along with some custom design for the overview view. A picture of the paper prototype can be seen in figure 4.5.

The elaborated prototype was tested with 10 subjects, all between 20 and 23 years old. 4 of them study computer science, 3 study engineering and the remaining 3 are students from other faculties. The group consisted of 9 men and 1 woman. 8 persons either owned an iOS device or had experience with one. This is important, as people with experience on these devices will already be familiar with the UI. The 2 other persons had no smartphone experience, so they can offer insight into how difficult the application is for inexperienced users.

The experience with AR applications varied: 5 of the subjects had never used an AR application, 2 had used an AR app once and the other 3 had used multiple AR apps. This experience was not required, but people familiar with such apps could compare them with the prototype, providing valuable feedback.


(a) Home View (b) History View (c) Overview View

Figure 4.3: The new home, history and overview views in the second paper prototype

(a) Settings View (b) Social Networks SettingsView

Figure 4.4: The new setting views in the second paper prototype


Figure 4.5: A photo of the different parts used in the second paper prototype.

The tasks from section 4.3 examined all functionalities of the first paper prototype, but since the prototype was elaborated for this iteration, the task list needed to be extended:

1. Recognize the person in front of you.

2. Check his last tweet.

3. Look for information about someone who is not around, but you have recognizedhim before.

4. Delete that person from your history.

5. Link your account with your Twitter and Mendeley accounts.

6. Set-up the program so that when someone recognizes you, they can access infofrom Mendeley, but not from Twitter.

7. Delete all history.

8. Scan the person in front of you, but correct the program when the person isnot correctly identified.

9. Mail someone.

10. Add someone to your iPhone contacts.

These tasks were designed to make the user discover all functionalities of the application. During the test the subjects needed to use the think aloud technique, and they only received help if they really could not find how to do something.


4.4.1 Results and discussion

This section describes the results of the tasks. We list the problems that were discovered during the tests:

• To find more information about the recognized person, the user needed to tap the face in order to move on to the overview view. 7 users immediately pressed the face. However, 3 users pressed the name in order to get to the information. After they realised that this had no effect, they pressed the face.

• 6 out of 10 users mentioned that it required too many steps to get from the overview view to the settings view. We should look for extra navigation options.

• The intended gesture to remove someone from the history was to hold your finger on the face. After a second, the face would wiggle and a close button would appear in its upper left corner. This gesture imitates the removal or closing of applications on the iPhone. The 2 persons without iPhone experience and 3 others failed to find this gesture; they even searched for a remove button in the overview of a person. The other 5 found it and were very positive about it.

• The Not him/her?-button is placed in the overview view. All users were able to find it, but were not sure whether it belonged there. All 10 of them agreed that the button was not appropriate in the overview view, because at that stage the app has already fetched all the information of the wrong person. The placement of the button should be reviewed.

• This task required the users to go to the contacts overview view, which is reachable by tapping the contact icon in the overview view. The icon used was the same one Mac OSX uses for its contacts application. However, only 4 out of 10 users recognized this icon and understood its purpose. Once they all figured out that this was the contacts icon, they expected to find the email address of the person there.

After the tests, it became clear which parts needed to be changed. First of all, the icon for the contacts tab was not clear enough and should be replaced. The gesture to remove someone from your history will be evaluated further to see if an alternative is necessary. We should try to find better navigation options, and the button to indicate a wrong recognition should be relocated to the camera view, so that users can press it before the wrong information is fetched.

4.4.2 USE questionnaire

After the tasks, the users filled in a questionnaire in order to capture their perception of the application. The USE (Usefulness, Satisfaction, and Ease of use) questionnaire was used [45]. This questionnaire tests user interface satisfaction and is based on 4 categories: usefulness, ease of use, ease of learning and satisfaction. It consists of 30 statements that are rated on a 7-point Likert scale, from strongly disagree to strongly agree [46].

All statements and answers can be seen in appendix A.
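The medians and box-plot statistics discussed in the following subsections can be computed directly from the raw 7-point Likert scores. A minimal sketch is shown below; the response values are invented examples, not the actual survey data, and the function name is ours.

```python
# Minimal sketch: box-plot summary statistics for 7-point Likert scores.
# The scores below are invented examples, not the thesis survey data.
from statistics import quantiles

def likert_summary(scores):
    # quantiles(..., n=4) returns the three quartile cut points,
    # which are exactly the values a box plot is drawn from.
    q1, q2, q3 = quantiles(scores, n=4)
    return {"min": min(scores), "q1": q1, "median": q2,
            "q3": q3, "max": max(scores)}

responses = [5, 6, 5, 7, 4, 5, 6, 3, 5, 6]  # ten hypothetical ratings
summary = likert_summary(responses)
```

For these example responses the summary gives a median of 5, matching the kind of figures reported for the statements below.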

Usefulness

The application scores well on the general usefulness questions. For the discussion, we focus on the results of statements A.2, A.3, A.4, A.5 and A.7, which are also presented in figure 4.6. These are the main indicators of usefulness, along with some lower-scoring statements. The most important statement, A.3 The application is useful, had very good results with a median of 5.

Figure 4.6: A boxplot representing statements A.2, A.3, A.4, A.5, A.7.

A.2 The application helps me be more productive: The box plot shows spread results, although the median is 5. This spread could be caused by the goal of the application: it is not an application that aims to improve the productivity of the user.

A.4 The application gives me more control over the activities in my life: The results of this statement varied from 1 to 7 with a median of 4.5. This spread is also thought to be caused by the goal of the application, which is not meant to give you more control over the activities in your life.

A.5 The application makes the things I want to accomplish easier to get done: The most voted scores were 3 and 7, which points to a divided opinion. This might be connected to having to point the phone: people might find simply asking someone's name easier than recognizing that person.


A.7 The application meets my needs: This statement provided another very divided set of responses and was answered negatively by the non-smartphone users. The persons that do not feel the need for a smartphone appear to have no interest in what a smartphone application can offer.

Ease of use

The application was perceived as easy to use; the majority of the votes for these statements were between 5 and 7. The discussed statements can be seen in figure 4.7 and contain the main indicators of ease of use and consistency of the application. A.9 The application is easy to use and A.16 I don't notice any inconsistencies as I use the application were answered positively, largely thanks to the standard iOS style, which creates uniformity and consistency over the entire application.

Figure 4.7: A boxplot representing statements A.9, A.12, A.14, A.16.

A.12 The application requires the fewest steps possible to accomplish what I want to do with it: Although the results are positive, with a median of 5, there were many complaints about the number of taps required to return to the home view of the application.

A.14 Using the application is effortless: The results are spread: although the median is 5.5, the results range from 2 to 7. This spread is related to the number of taps required to get from one view to another in the application.

Ease of learning and satisfaction

In figure 4.8 you can see the answers to statement A.22, which is a good indicator of the ease of learning, and to A.24, A.27 and A.29, which are general satisfaction statements. The ease of learning statements were received very positively, each with a median of around 6 and with no answers below 5. A.22 It is easy to learn to use the application scored high; we can conclude that the application is easy to learn, thanks to the iOS standards used, which make the application familiar to iPhone users. The satisfaction perception of the application was good: A.24 I am satisfied with the application and A.27 The application works the way I want it to work were answered positively. The scores of A.29 I feel the need to have it varied from 1 to 6. The positive answers were generally given by the smartphone users, while the negative answers came from the users without a smartphone. This indicates that people with a smartphone see more value in the application.

Figure 4.8: A boxplot representing statements A.22, A.24, A.27, A.29.

Finally, looking at the questionnaire results and the completed tasks, we can conclude that the application was received well, although some changes are required to improve usability. Adaptations were therefore made, and a third iteration was created and evaluated.

4.5 Paper prototype iteration 3

In this iteration, the paper prototype is adapted and evaluated again. A number of adjustments have been made to the previous prototype, based on the comments given while executing the tasks:

1. The icon for the contact view has been replaced; it is now the same icon that Apple uses for its contacts application on iOS devices. Since the application is developed for the iPhone, users should be familiar with this icon.

2. Home buttons have been placed on nearly every screen of the application. Previously the home button was only available in the camera view, settings view and history view. Now every overview view has a home button, so users no longer need to return to the camera first.


3. The Not him/her?-button was generally perceived as misplaced. It was located on the overview view of a recognized person, but it should be accessible in the camera view, so that users can indicate a false match before all the information of the wrong person is displayed. The button is now placed beneath the recognized face in the camera view, in the form of a big X icon followed by an Incorrect?-label.

The test audience for this iteration consisted of experts in the field of human-computer interaction: 7 members of the computer science department of the Katholieke Universiteit Leuven evaluated the prototype. The occupations of the users range from PhD student to professor. The group consisted of 1 woman and 6 men. 4 out of 7 test subjects own an iPhone, while the other 3 have some smartphone experience, though not much with iOS devices. All test subjects have used augmented reality applications in the past, so they are familiar with the concept. The users needed to apply the think aloud technique and fulfil the same tasks as defined in the previous section.

4.5.1 Results and discussion

This section describes the results of the tasks; the same tasks were used as in the previous iteration. The problems and remarks of the users are the following:

• When the users needed to find more information about the recognized person, they had to tap the face. Again, some of the users (2 out of 7) pressed the name instead of the face. We conclude that this is not a big problem, since the majority of users did not experience any issues and the others adapted to it quite well.

• One user was surprised to get a Twitter overview page instead of the real Twitter application. The other users were asked about this, and all agreed that a direct link to the Twitter app might be better, because it provides the whole functionality of Twitter instead of just an overview. This idea will be used in the digital prototype.

• To get from the Twitter overview view to the history view, the users needed to go back to the overview view, where they could find a new home button. This takes 3 taps, instead of 4 in the previous iteration. 3 users still complained about the length of the path, but the other 4 did not have problems with it. One of the users suggested a tab bar at the bottom of the screen with direct links to the history, camera, home and settings views.

• To delete a person from the history, the users needed to press the face in the history for a second, after which an X icon would appear that deleted that person. Some of the experts noted that this gesture is not used inside iOS applications and that the user interface guidelines should be checked for the proper way to do this.


• One of the users suggested a preview page, on which the user of the application could check how his profile appears when he is recognized by others. All users saw this as a nice-to-have feature, but of lower priority.

• The Delete history-button was found quickly, although 2 users noted that they would prefer to see this button on the history page.

• The Not him/her?-button was removed from the overview view and placed as an incorrect-button on the camera view, below the face of the recognized person. However, 3 users saw this as clutter: they would prefer to have just the faces and the name labels on the camera view, and suggested a small X icon or thumbs-down icon next to the name. All agreed that the X icon followed by the Incorrect?-label was confusing: the X could refer to removing the label instead of indicating a false tag.

In general, the application was well received and some good suggestions were obtained. The problems of the previous iteration are mostly solved. Some users still felt that the path from one view to another was too long, even with the addition of extra home buttons. However, if the application links to other apps instead of giving an overview of each social network, it will have fewer screens and the path through the application will be shorter. This approach will be followed. The button to indicate a false recognition still caused problems and should be revised.

4.5.2 USE questionnaire

The users were asked to fill in the same questionnaire from the previous iteration, in order to be able to compare the results. The full results of the questionnaire are shown in appendix B. We will again discuss the general scores of the 4 categories and examine some noteworthy results. In order to compare the scores to the previous iteration, the same statements are compared.

Usefulness

In general, the usefulness statements scored well in the questionnaire (see figure 4.9). The main indicator of usefulness, B.3 The application is useful, had very good results. The median once again remained at 5, but none of the replies were lower.

B.2 The application helps me be more productive: The median of the box plot has remained 5, but more than 50 percent of the answers were between 4 and 6, with a few lower scores and one extreme low score of 1. Overall, this is a better result than in the previous iteration and we can conclude that the experts found the app to increase their productivity.

B.4 The application gives me more control about the activities in my life: The scores of this statement remain spread and on the lower side. This cannot be improved, because the app does not aim to give the user more control over his activities.


4. Design

Figure 4.9: A boxplot representing statements B.2, B.3, B.4, B.5, B.7.

B.5 The application makes the things I want to accomplish easier to get done: In the previous iteration the scores were divided, but they have improved drastically. The majority of the users gave a score of 5 out of 7, which means the application makes the discovery of digital identities easier to accomplish.

B.7 The application meets my needs: The scores have improved compared to the previous iteration. This can be due to the new views of the application, which offer more functionality.

Ease of use

The ease of use of the application again scored high. The discussed statements can be seen in figure 4.10. The answers to B.9 The application is easy to use confirm that the application is perceived as easy to use.

B.12 The application requires the fewest steps possible to accomplish what I want to do with it: The median of the results has risen from 5 to 6. This means that the extra home-buttons reduced the number of steps needed to accomplish the goals.

B.14 Using the application is effortless: The results are evenly spread between 4, 5 and 6. This is an improvement over the spread results of this statement in the previous iteration and is probably also caused by the extra home-buttons.

B.16 I don’t notice any inconsistencies as I use the application: This statement scored lower than in the previous iteration. One explanation is that the incorrect-button is confusing.


Figure 4.10: A boxplot representing statements B.9, B.12, B.14, B.16.

Ease of learning and satisfaction

The ease of learning statements were again answered very positively. In figure 4.11 you can see the box plot representation of statement B.22: It is easy to learn to use the application. Most of the users gave a score of 6 out of 7, which is a very good result. We still believe that this is due to the iOS style. The general satisfaction with the application was good. In figure 4.11, you can see the answers to statements B.24, B.27 and B.29. B.29 I feel the need to have it and B.27 The application works the way I want it to work were both answered positively. The users felt the need to have it and were satisfied with how it works. B.24 I am satisfied with the application is the main indicator of satisfaction and received a median of 6. The lower scores of 4 and 5 are believed to be due to the confusing incorrect-button.

In general, the experts were very positive about the application. Points of improvement were found, but there were no big problems apart from the incorrect-button. Useful suggestions were made by the participants. Instead of creating a view per social network, the application could refer to the social network application on the smartphone or to the website. Another extra functionality is the preview of the user's own overview, so that he can know how other users see his profile once they have recognized him.

Conclusion

In this chapter, a comparison has been made of the two biggest mobile platforms on the market: the iOS and the Android platform. They are both good platforms for mobile application development, with well-documented SDKs, but the built-in real-time face detection of iOS proved to be a decisive aspect.


Figure 4.11: A boxplot representing statements B.22, B.24, B.27, B.29.

A storyboard was made based on the functionalities of chapter 3, which focussed on the user interface of the face recognition. The face recognition would happen automatically and the information would appear next to the head in real time. However, there was a growing concern about pointing the phone at the face of the person the entire time. To analyse this, 3 user interfaces were built and evaluated using a paper prototype. The scenarios offered other approaches to displaying the information about the recognized person. After evaluation it became clear that the latter one was preferred by a majority of the test users.

A prototype with full-screen information views, including a settings, home and history view, was elaborated and evaluated. The test users needed to perform tasks in order to explore the functionalities of the application and fill in a USE questionnaire. After processing the results, some adjustments were made to the prototype and it was tested again with a new set of test users: experts in the field of human-computer interaction. They performed the tasks, filled in the USE questionnaire and came up with some good remarks and ideas during the testing.

In general the application was well received. The scores of the questionnaires were positive apart from some minor weaknesses. The application is now ready to be implemented and there is no need for another paper prototype evaluation.


Chapter 5

Implementation

In this chapter we will describe the implementation process of the final application. First, a general explanation will be given about the development environment. Secondly, the initial implementation will be explained, featuring the core functionalities of the application. This initial implementation has been evaluated and adapted into a second and more complete implementation.

In order to develop for iOS, the Xcode1 development environment needs to be used. It has a library of default objects (like buttons, text fields, types of views, ...) that can be used to build the views in a way that corresponds to the iOS human interface guidelines [44]. The use of this standard library allows the developer to create applications that look familiar to iPhone users, and are built with standard parts of which the users already know the function. The programming language of iOS is Objective-C. It is an object-oriented extension of C, designed to be simple and straightforward [47]. It supports a number of frameworks, collectively called Cocoa Touch.

5.1 Digital prototype 1 implementation

The first digital prototype focusses on the core functionality of the application: the face recognition. This consists of 3 parts: face tracking, face recognition, and the connection with Facebook. For this prototype, information from other social networks will not yet be implemented and the history and settings will consist of static data.

5.1.1 Face tracking

In order to track faces, the iOS face detection will be used. This functionality is part of the iOS Core Image framework, which is able to process real-time images and video. To implement the face detection, the application needed to have a camera view. Apple provides a basic class that registers the camera feed and allows the user to take pictures: the UIImagePickerController. This class is also able to display overlays on the camera feed. However, this feature only overlays the user interface of the camera. The class was not able to process each frame coming from the camera feed and could not be used to augment each frame separately. If the camera feed could not be processed, the face detection would not be able to scan faces.

1 https://developer.apple.com/xcode/

The class AVCaptureSession can be used to specifically coordinate the flow coming from the input source and going to the output source. It is able to intercept all frames from the camera feed, process them and output them on the screen. This class could be used along with the face detection.

Figure 5.1: A sequence diagram representing the face tracking process.

To detect the faces and highlight them on the camera, the following steps need to be executed:

1. Each separate frame needs to be intercepted before being put out on the screen.

2. The frame needs to be sent to the Core Image Detector (CIDetector), which is the face detection class.

3. The CIDetector scans the frame for face features and returns an array with all faces found.

4. For each CIFeature, the positions of the nose and eyes are given. Based on these coordinates, a box can be drawn around the face.


5. The frame along with a box for each face is displayed on the screen.

Figure 5.2: Example screenshots of the face detection: (a) low accuracy, (b) high accuracy.

This process is repeated for each frame of the camera feed and is shown in the sequence diagram of figure 5.1. The face detection is not limited to a number of faces at a time. It will scan each frame and return as many faces as are found in the frame. It has 2 accuracy options, CIDetectorAccuracyLow and CIDetectorAccuracyHigh. These two settings were tested and compared using the back camera of an iPad 2. The low accuracy setting is faster but will only detect faces up to 2 meters away. This can be seen in figure 5.2 (a). The detection happens smoothly, with only a small noticeable lag when moving the camera fast. The high accuracy setting can detect faces from more than 4 meters away, as can be seen in figure 5.2 (b). However, enabling the high accuracy slowed down the tracking process. Upon moving the device, the camera feed instantly moved with it, but the face detection boxes had a bigger delay, reaching up to 2 seconds. The extra distance does not compensate for the amount of delay, so the lower accuracy was chosen.
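The per-frame loop of steps 1 to 5 can be sketched in a platform-neutral way. The sketch below uses Python for illustration only: `detect_faces` and `draw_box` stand in for the CIDetector call and the overlay drawing, and the box heuristic derived from the eye coordinates is an assumption, not the actual Core Image geometry.

```python
def box_from_features(left_eye, right_eye):
    """Derive a rough bounding box (x, y, w, h) around a face from the
    eye coordinates a feature detector returns (illustrative heuristic:
    a square centred between the eyes, sized by the eye distance)."""
    cx = (left_eye[0] + right_eye[0]) / 2.0
    cy = (left_eye[1] + right_eye[1]) / 2.0
    eye_dist = abs(right_eye[0] - left_eye[0])
    half = 1.5 * eye_dist  # box extends ~1.5 eye-distances from the centre
    return (cx - half, cy - half, 2 * half, 2 * half)

def process_frame(frame, detect_faces, draw_box):
    """Steps 1-5: intercept a frame, detect face features, and draw a
    box for each face before the frame is put out on the screen."""
    for features in detect_faces(frame):          # steps 2-3
        draw_box(frame, box_from_features(*features))  # steps 4-5
```

The same loop would run once per camera frame, which is why a cheap box heuristic matters for the perceived smoothness.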

5.1.2 Face recognition

In section 2.1.2 a comparison has been made between different face recognition APIs. Face.com appeared to be the best suited, because of its link with Facebook, the size of its database, and the existence of an Objective-C wrapper. The Face.com API was thus linked to the project. Because Face.com only works with images and not with real-time video, snapshots of the camera feed need to be taken and sent to the Face.com servers. In the paper prototypes, the recognition was done automatically, but in order to provide control over the requests to the Face.com server, a recognize-button is added to the camera view. This button takes a screenshot in the form of a UIImage that can be sent to the Face.com server. This recognize-button is a temporary addition and will be removed once the implementation is finished. The API has a built-in Facebook API in order to get the correct permissions for Face.com to check images from Facebook friends of the logged-in user.

To request the recognition of faces in an image, the API requires an FWObject (FaceWrapperObject) to be created. This object requires a set of data in order to build the request in the background. The data consists of the following items:

1. A snapshot from the camera feed, from which the server will try to recognize the faces.

2. A boolean to select between recognition or detection. This is set to recognition.

3. A boolean to activate the possibility to recognize Facebook friends of the logged-in user. This is set to true.

4. A list of Facebook IDs to look for in the image. This is only required if the boolean above is set to true. To check for all Facebook friends, the ID [email protected] is required.

The list of IDs to check can be expanded by using a private namespace. As described in section 3.2, the proposed solution for the privacy concerns is to allow the application to recognize both Facebook friends and other users of the application. For the initial implementation, the list of people to recognize is limited to the Facebook friends of the logged-in user.

After filling in the data, the FWObject is sent to the Face.com server in a separate thread, so that the camera feed does not freeze while waiting for the reply. Face.com processes the image and sends a reply in JSON format with all found and recognized faces. The JSON reply contains a list of faces, and for each face some physical characteristics and a list of potential matches along with their confidence percentages. The match with the highest confidence value is considered to be the correct person. In order to prevent false matches, the list is filtered to only allow matches above 50 percent. If no match is found with a confidence level above 50 percent, the label above the face will say Try again! and the user will have to recognize again by sending a new image. This will be tested to check how many false positives the application produces. The recognition process is shown in figure 5.3.
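The filtering step described above can be sketched as follows (Python for illustration; the reply layout used here is a simplified assumption, not the exact Face.com schema):

```python
import json

def best_match(reply_json, threshold=50.0):
    """Pick the highest-confidence match for a face from a recognition
    reply. Returns None when no candidate clears the threshold, which
    corresponds to the 'Try again!' case in the camera view."""
    face = json.loads(reply_json)
    candidates = [m for m in face["matches"] if m["confidence"] > threshold]
    if not candidates:
        return None  # label shows "Try again!"; a new image must be sent
    return max(candidates, key=lambda m: m["confidence"])
```

Note that the 50 percent cut-off only rejects weak replies; as the evaluation later shows, two candidates can still sit within one percentage point of each other, so a correct best match is not guaranteed.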

When the reply has been analysed, the Facebook ID with the highest match is returned together with the name. The name is assigned to a label floating above the face, as shown in figure 5.4(b). This is a naive implementation, resulting in the same name appearing above each face detected on the camera. If you have recognized one person and point the camera to another one, the first person's name will still appear above the new face. To avoid this, the user should press the recognize-button again in order to discover the identity of the new person. This naive implementation will be used for now, but the algorithm will be improved.

Figure 5.3: A sequence diagram representing the face recognition process.

Furthermore, the application should be able to recognize multiple persons at a time. Face.com sends a reply containing all persons recognized in the image. The problem with assigning the correct label to multiple faces is that the face detection, as explained in the previous section, searches for faces on each frame. It is not able to tell that a face in one frame is the same face in the next frame. While the application is waiting for the Face.com reply, the people present in the scope of the camera could have traded places, or could be completely different persons. An extra algorithm should thus be implemented to keep track of each face, from the moment it appears in the scope of the camera until it disappears. The location of each face should be stored and compared to the location of the faces in the next frame in order to know which face belongs to which person.
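One way to implement the tracking algorithm called for above is to match faces between consecutive frames by the distance between their centres. The following is a minimal greedy nearest-neighbour sketch (Python for illustration; the names and the distance threshold are assumptions, not part of the actual implementation):

```python
def match_faces(prev, curr, max_dist=80.0):
    """Greedily match face centres in the current frame to tracked faces
    from the previous frame. `prev` maps face-id -> (x, y); `curr` is a
    list of new (x, y) centres. Returns a dict mapping each index in
    `curr` to a face-id, or to None for a face that just entered the
    scope of the camera."""
    assignment = {}
    free = dict(prev)  # tracked faces not yet claimed this frame
    for i, (x, y) in enumerate(curr):
        best_id, best_d = None, max_dist
        for fid, (px, py) in free.items():
            d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
            if d < best_d:
                best_id, best_d = fid, d
        assignment[i] = best_id
        if best_id is not None:
            del free[best_id]  # each tracked face matches at most once
    return assignment
```

Tracked faces left unmatched at the end of a frame would be the ones that disappeared from the scope; this is where their labels could be retired instead of being reused for a new face.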

5.1.3 Connection with Facebook

When the face is recognized, the user should be able to press the box surrounding the face to get more information. The box needs to be clickable from the moment the face is recognized. Because the face gets detected each frame, the box surrounding the face is also redrawn each frame. To make the box clickable, a button was created with the same size and location as the box and made invisible. The location of the box shifts each frame, so the button needs to be updated and redrawn as well. This redrawing process implies that the button is removed from the view and recreated at another location. As this happens each frame, the time frame in which the button exists is short. This causes a difficulty when pressing the button. This behaviour will be evaluated in the user tests. When the box is clicked, all information about the recognized person should be gathered. In this iteration, we focus only on Facebook information. Since Face.com returns Facebook IDs, Facebook is the ideal starting point to gather information. In order to get more than just the ID, we needed to link the Facebook Graph API2 to the application.

Figure 5.4: The different views of the first prototype: (a) Facebook login, (b) camera view, (c) overview view.

Facebook has released a Facebook iOS SDK, created specifically to link applications on iPhone and iPad to Facebook. It is written in Objective-C and grants the developer access to the Facebook Graph API [53]. The Graph API represents each person, event or page as an object with a unique ID. Information about the object can be accessed by sending a request to https://graph.facebook.com/ followed by the ID. For instance, to get information about ID 1283752014, the request should be: https://graph.facebook.com/1283752014, which returns the following JSON object:

{
    "id": "1283752014",
    "name": "Gerry Hendrickx",
    "link": "https://www.facebook.com/gerry.hendrickx",
    "username": "gerry.hendrickx",
    "gender": "male",
    "locale": "en_US"
}

2 https://developers.facebook.com/

Since Face.com returns the Facebook IDs of the recognized persons, such an ID can directly be used in combination with the above URL to get basic information about the person. The view with this information from Facebook can be seen in figure 5.4(c). When a request to Facebook is sent and there is no logged-in user, the API will display a window to log in to Facebook (figure 5.4(a)).
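The lookup described above amounts to one GET request and a JSON parse. A hedged sketch (Python for illustration; the injectable `opener` and the `overview_fields` helper are illustrative choices, not part of the Facebook SDK):

```python
import json
from urllib.request import urlopen

GRAPH_URL = "https://graph.facebook.com/"

def fetch_profile(facebook_id, opener=urlopen):
    """Request the public Graph API object for an ID returned by
    Face.com. `opener` is injectable so the parsing can be exercised
    without network access."""
    resp = opener(GRAPH_URL + facebook_id)
    return json.loads(resp.read())

def overview_fields(profile):
    """Keep only the fields shown on the overview view (an illustrative
    selection from the JSON object above)."""
    return {k: profile[k] for k in ("id", "name", "link") if k in profile}
```

In the real application the SDK handles authentication and the request itself; the sketch only shows the shape of the data flow from ID to displayable fields.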

5.2 Digital prototype 1 evaluation

The application was evaluated by 5 test subjects. The group of test users consisted of 3 men and 2 women, all between the ages of 20 and 23. 4 of them were experienced smartphone users, of whom 3 owned an iPhone and one had iOS experience. 1 person did not have any smartphone experience, which gave us some insight into whether the lack of experience had an influence on the usability of the application. All smartphone users had used an AR application before and were familiar with the concept. 2 of the smartphone users had already been part of the second paper prototype tests.

The tests focussed on the face recognition functionality. The goal was to check the user interface of the camera view and the correctness and robustness of the recognition process. The users were given a list of tasks to execute:

1. Start the application and recognize the evaluator of the user test.

2. Find out more information about the evaluator.

3. Go to his Facebook profile.

4. Try to recognize yourself.

For the tests, the evaluator was logged in to the application with his Facebook account and the recognition algorithm was trained with his Facebook friends. When you train Facebook friends on Face.com, the service will use your logged-in Facebook account to scan the tagged photos of your friends and add these scans to its database. This is necessary in order to be able to recognize the faces of friends, but is not yet built into the application. Also, all test users were Facebook friends of the evaluator and should thus be recognizable during the tests. The think-aloud method was used once again to get a clear view of the user's perception.

5.2.1 Results and discussion

The outcomes of the evaluation are discussed below, presenting the problems and remarks:


1. After the users pressed the recognize-button, the reply from Face.com was received after an average of 8 seconds, with extremes of 5 to 15 seconds. It is believed that the variations are due to the quality of the wireless internet connection and the load on the Face.com servers. 4 out of 5 test users immediately had a correct match for the evaluator, while the other user did not receive a match with a confidence above 50 percent. The Try again!-label appeared and the user was able to restart the process. Because the label stays visible until Face.com sends a new reply, there was some confusion about whether or not the reply had arrived, or whether Face.com had failed again to recognize the face, in which case the label would stay in its place. This resulted in the user repeatedly pressing the recognize-button. He suggested some sort of status information about what was going on in the background. Eventually, Face.com replied with a correct match for the evaluator.

2. In order to get more information about the evaluator, the users needed to tap the box surrounding his face on the camera view. The redrawing of the button each frame indeed caused the button to be unclickable most of the time. Every test user tried to tap the face, but nobody was able to do so on their first try. This made them hesitate about whether they were supposed to click the name or not. After the attempt to click the name also failed, the evaluator helped and told them to press inside the box. After multiple tries, all of them eventually succeeded in clicking the button. Every user commented that this needed to change in order to make the click action discoverable.

3. In order to recognize themselves, all users pressed the button to change the view from the back camera to the front camera, instead of turning the back of the phone towards them. After they pressed the recognize-button again, the application crashed. After restarting the application, the users tried to recognize themselves. 3 users immediately received a correct reply and were able to check the information found about themselves. During the tests, only one person received an incorrect name. This means that Face.com matched the face to the face of another person with a higher confidence value than the match with his own face. Checking the logs revealed that the wrong name had a confidence percentage of 54%, while the correct name had a confidence percentage of 53%. This may have been caused by overexposure of the camera feed while pressing the recognize-button.

5.2.2 USE questionnaire

The users were once again asked to fill in the USE questionnaire. The full results of the questionnaire are given in appendix C. We will discuss the general scores of the 4 categories and examine some noteworthy results. The statements were rated from 1 to 7, with 1 being totally disagree and 7 being totally agree. The discussed statements were selected because of their importance in their respective categories.


Usefulness

In general, the usefulness statements scored well in the questionnaire. It appears that the test users generally perceived the app as useful. Some answers to the statements are displayed in figure 5.5. C.2 The application helps me be more productive, C.3 The application is useful and C.5 The application makes the things I want to accomplish easier to get done were all answered very positively.

Figure 5.5: A boxplot representing statements C.2, C.3, C.4, C.5, C.7.

C.4 The application gives me more control about the activities in my life: Once again this question scored lower in comparison to the other usefulness questions. The answers ranged from 3 to 5, with a median of 5. This is still considered to be due to the nature of the question, which the app does not aim to fulfil.

C.7 The application meets my needs: A median of 5, but with one extreme of 1 given by the non-smartphone user, shows that the application meets the needs of the smartphone users, but fails to be felt as needed by non-smartphone users. Because they don't own a smartphone, they are not familiar with what applications can contribute to their life.

Ease of use

The ease of use of the application received high scores from the smartphone users but a low score from the non-smartphone user. This could be due to the iOS style of the application, to which the smartphone users are accustomed, but which is new to the inexperienced user. The discussed statements can be seen in figure 5.6.

C.9 The application is easy to use: The experienced smartphone users did not find any difficulties in using the application, but there was some kind of entry barrier for inexperienced users, which could be due to the knowledge required to work with iOS or due to the difficulty of clicking the button on a face.


Figure 5.6: A boxplot representing statements C.9, C.12, C.14, C.16.

C.12 The application requires the fewest steps possible to accomplish what I want to do with it: The general flow from starting the application to getting information about a recognized person requires only 3 clicks. However, the difficulty in clicking the face affected the results.

C.14 Using the application is effortless: The application seems effortless to use, although we are surprised that the cumbersome clicking on the face did not cause this statement to score lower.

C.16 I don’t notice any inconsistencies as I use the application: This statement obtained some lower scores due to the clicking on the face and the confusion upon trying to recognize again after a bad reply.

Ease of learning and satisfaction

The ease of learning statements were answered very positively. None of the ease of learning questions received a score below 6. In figure 5.7 you can see the box plot representation of statement C.22: It is easy to learn to use the application.

The general satisfaction perception had to deal with an extreme low, again due to the user with no smartphone experience. However, C.27 The application works the way I want it to work received positive results. In figure 5.7, you can see the answers to statements C.24, C.27 and C.29.

C.24 I am satisfied with the application: This statement is the main indicator of satisfaction and received a median of 5. This is lower than the score in the last paper prototype, which can be the result of the presence of some bugs, the cumbersome clicking on the button and the lack of information about the background processes.


Figure 5.7: A boxplot representing statements C.22, C.24, C.27, C.29.

C.29 I feel the need to have it: This statement is one of the lowest scoring statements. Most users think it's a nice-to-have application, but there is no strong need to have it.

In conclusion, the first digital prototype obtained some positive results. The questionnaire showed that the application was perceived as useful and easy to use by iPhone users. All users thought that the application was easy to learn, but they were not completely satisfied with it. The digital prototype will now be adapted and further elaborated to include the settings and history, and to support information from more social networks.

5.3 Digital prototype 2 implementation

For the second digital prototype, the first prototype is adapted based on the comments from its evaluation and new features are implemented. The adjustments resulting from the evaluation are:

• The button on the face to go to the overview of a recognized person should be improved to make it clickable on every try.

• Status messages should be provided to keep the user updated on the background processes of the application. This way, users know whether the application is still waiting for a reply or has already received and displayed the response.

The features that will be implemented are:

• Social network settings: a view where the user can link the application to his social networks.

• A history view: displays a list of recognized people.


• A settings view: the user can adjust some settings of the application.

• Training possibility: to allow the logged-in user to train the Face.com algorithm with his Facebook friends.

We will now describe the implementation of the adjustments to the previous prototype and the new functionalities.

5.3.1 The face button

Due to the implementation of the button that allows users to click on a recognized face, the button failed to register the touch as a click nearly 80 percent of the time. The cause for this is the redrawing of the button upon each frame. Since the button was removed and re-instantiated multiple times per second, the time frame in which it was clickable was far too short. This problem is solved by not removing the button each frame, but simply moving it along with the face on the new frame. This way, the button stays located at the same place until the new location of the face is established, and it remains clickable at all times.

5.3.2 Status messages

Upon receiving a reply from Face.com with no matches with a confidence level above 50 percent, the Try again!-label is shown. When the users tried again, they were unsure whether the request was still busy, or whether Face.com gave another bad reply, meaning the label would stay in place. In order to prevent such confusion, the camera view was adapted to support and display labels. These status messages are updated whenever a new phase in the recognition process is reached. Table 5.1 displays which message is shown at what time.

Phase                                               Message
Entering the camera view                            Tracking faces. Click recognize.
After pressing the recognize-button                 Recognizing faces... Please wait.
After receiving a bad reply (< 50%) from Face.com   Recognized 0 faces. Try again.
After receiving a good reply from Face.com          Recognized 1 face.

Table 5.1: List of status messages in the different phases of the recognition process.

The evaluation of the prototype will reveal whether the status messages are readable and well placed. A screenshot of the camera view with status messages can be seen in figure 5.8 (a).
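The mapping of table 5.1 can be kept in a small lookup so that the camera view updates its status label on every phase change. A sketch (Python for illustration; the phase names are assumptions, not identifiers from the actual code):

```python
# Table 5.1 as a lookup from recognition phase to status message.
STATUS_MESSAGES = {
    "entered_camera_view": "Tracking faces. Click recognize.",
    "recognize_pressed": "Recognizing faces... Please wait.",
    "bad_reply": "Recognized 0 faces. Try again.",
    "good_reply": "Recognized {n} face{s}.",
}

def status_for(phase, faces=0):
    """Return the status label text for a recognition phase, pluralising
    the good-reply message by the number of recognized faces."""
    msg = STATUS_MESSAGES[phase]
    return msg.format(n=faces, s="" if faces == 1 else "s")
```

Centralising the messages this way also keeps the label text consistent when the recognition flow is changed later.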

5.3.3 History view

The history view was introduced in the second paper prototype and positively received by the test users. However, the style used in the paper prototypes was not the standard iOS way of interacting with a list of people. The gesture used to remove users (long-pressing until the pictures wiggle and pressing the X-icon) is not used in iOS applications. Therefore the design of the history view switched to the standard iOS table view, a view designed specifically to show dynamic lists, allowing direct removal of an item of the list by using the standard iOS swipe gesture. An example can be seen in figure 5.8 (b).

Figure 5.8: The different views of the second prototype: (a) camera view, (b) history view, (c) settings view.

The HistoryTableViewController-class is created to display the list of previously recognized persons. This is a direct subclass of the TableViewController-class, and thus uses the same memory-efficient way of displaying long lists. Such a class requires a datasource, from which it fetches the data displayed in the list. This datasource is an NSArray which is saved in a persistent way in the application data storage, the NSUserDefaults. When a user accesses the overview view of a previously unrecognized user, this array gets updated with the new RecognizedPerson-object. When loading the history, the array gets pulled from the NSUserDefaults and shown in the table view. Each item in the table is made clickable and opens the overview view of that recognized person.
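The behaviour of the history datasource can be sketched independently of iOS as follows (Python for illustration; a JSON file stands in for the NSUserDefaults, and the class layout is an assumption, not the actual HistoryTableViewController):

```python
import json
import os

class History:
    """Sketch of the history datasource: an ordered list of recognized
    people persisted after every change, mimicking the role the
    NSUserDefaults plays on iOS."""

    def __init__(self, path):
        self.path = path
        self.people = []
        if os.path.exists(path):
            with open(path) as f:
                self.people = json.load(f)

    def _save(self):
        with open(self.path, "w") as f:
            json.dump(self.people, f)

    def add(self, person):
        # Only a previously unrecognized person creates a new entry.
        if all(p["id"] != person["id"] for p in self.people):
            self.people.append(person)
            self._save()

    def remove(self, person_id):
        # Swipe-to-delete removes the row and persists the change.
        self.people = [p for p in self.people if p["id"] != person_id]
        self._save()
```

Reloading the history then simply amounts to reading the persisted list back, which is what the table view's datasource does on each launch.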

5.3.4 Settings view

The settings of the application are limited in this iteration to some useful options. A SettingsViewController-class has been made, linking the buttons on the settings-view with the underlying functionality. The settings view can be seen in figure 5.8 (c). We've implemented the following settings:

Open in camera mode This switch allows the user to select whether or not he wants to start the application in camera mode, meaning it will skip the home-view and go straight to the camera-view. This allows the user to eliminate one click in order to use the face recognition. The setting of the switch is saved in the NSUserDefaults and checked upon starting the application.

Log out of Facebook This button calls the Facebook API and logs the user out of Facebook. The saved authorization token is removed from the NSUserDefaults. This setting is useful when another person wishes to try it with his Facebook friends. When the next request to Face.com or Facebook is sent, the application will ask the new user to log in.

Train Facebook friends This setting is important to new users of the application. In order to recognize their Facebook friends through Face.com, Face.com needs to be trained with their pictures. This setting constructs a request to the Face.com API with the command to train the ID [email protected]. A notification is displayed when the training process has completed.
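The training call is a single HTTP request to the Face.com API. As a rough sketch of its shape (the endpoint path and parameter names below are assumptions modeled on the public Face.com REST conventions of the time, not taken from the thesis code, which issued the call through the FaceWrapper library):

```python
from urllib.parse import urlencode

def build_train_request(api_key, api_secret, uids):
    """Build a (hypothetical) Face.com training request URL.

    `uids` is the identifier to train, e.g. a friends alias in the
    facebook.com namespace; all names here are illustrative assumptions.
    """
    base = "https://api.face.com/faces/train.json"
    query = urlencode({
        "api_key": api_key,
        "api_secret": api_secret,
        "uids": uids,
    })
    return f"{base}?{query}"
```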

5.3.5 Support for multiple social networks

In order to comply with the application's goal of offering efficient discovery of digital identities, the users should be able to discover information about the recognized person from multiple social networks. We have decided to use the back-end system that the More! project (discussed in section 2.3.1) is using. This is an open-source web app that uses a database to store social network handles. The database structure is based on the iTEC person data model [54], which is a data model constructed to describe persons in a database.

The More! back-end provides a RESTful API, so requests can be made via HTTP GET or POST messages. It supports requests that fetch the social network handles of a person, given the ID used in the system as a parameter. The application uses Facebook IDs as identifiers throughout, so these IDs are also used as key identifiers in the database. When a user recognizes a person, he arrives at the overview view. Its controller, the OverviewTableViewController, sends an HTTP request to the web service upon opening the view. When the reply is received, the available handles are parsed and the table is updated with links to the recognized person's profile on the found social networks. An example of the updated overview view can be seen in figure 5.10 (b).
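The lookup amounts to one GET request keyed on the Facebook ID, returning the stored handles. A minimal sketch, in which the endpoint URL and the JSON field names are hypothetical (the actual More! back-end routes are not given in the text); the HTTP layer is injected so the sketch stays transport-agnostic:

```python
import json

# Hypothetical base URL; the real More! back-end route is not given in the text.
BACKEND = "https://example.org/more/persons"

def fetch_handles(facebook_id, http_get):
    """Fetch the social network handles stored for a Facebook ID.

    `http_get` is any callable mapping a URL to a JSON response body,
    e.g. a thin wrapper around urllib.request.urlopen.
    """
    body = http_get(f"{BACKEND}/{facebook_id}")
    person = json.loads(body)
    # Assumed reply shape: {"handles": {"twitter": "...", "linkedin": "..."}}
    return person.get("handles", {})
```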

The back-end should be able to add new persons and their social network handles. The application needs to allow the users to link their handles to their user account in the application. A new view/controller pair was made to fulfil this requirement: the networks-view and the NetworkViewController-class. This view can be seen in figure 5.10 (c) and supports handles for Twitter, LinkedIn, SlideShare and Google+. These networks were chosen based on table 3.2 in section 3.1.3; the list of supported social networks can be extended. If users want to link their social networks to their profile in the application, they need to fill in the respective text fields. When leaving the networks-view, the database is updated with their handles and the overview view of the user shows the new networks (figure 5.10 (b)). The More! back-end did not support adding persons and handles to the database, so a new set of methods was written to enable making these additions directly from an HTTP request. The complete process of fetching data for a recognized person is shown in the sequence diagram in figure 5.9. The figure shows that the table is reloaded twice: once when Facebook replies with the Facebook information, and once when the database replies with the social network handles.

Figure 5.9: A sequence diagram representing the information fetching process.

(a) Home view (b) Overview view (c) Networks view

Figure 5.10: The different views of the second prototype.


5.4 Overview of classes

iOS development is focused on the model-view-controller pattern. In this programming pattern, the user interface (the views) is separated from the rest of the code (the controllers and the models). The view interacts with the controllers upon user interaction, and the controller uses data from the model classes and interacts with other controllers. A schematic overview of the pattern can be seen in figure 5.11.

Figure 5.11: The Model-View-Controller pattern. [50]
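The division of labour can be shown with a toy example, sketched in Python rather than Objective-C for brevity (the class names are invented): the model only holds data, the view only renders what it is handed, and the controller mediates between the two.

```python
class PersonModel:
    """Model: holds the data and knows nothing about presentation."""
    def __init__(self):
        self.people = []

    def add(self, name):
        self.people.append(name)

class PersonView:
    """View: renders whatever the controller hands it."""
    def render(self, names):
        return "\n".join(f"- {n}" for n in names)

class PersonController:
    """Controller: reacts to user actions, updates the model, refreshes the view."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def user_added_person(self, name):
        self.model.add(name)
        return self.view.render(self.model.people)
```

The payoff of the separation is that the view can be swapped (paper prototype, tableview, camera overlay) without touching the model.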

A brief overview of all created classes and their function is given below. Each controller class has its own respective view, which will not be discussed separately. Appendix E shows a class diagram displaying the relations between the classes. Figure 5.12 presents the screen transition diagram, which shows the links between the views.

Figure 5.12: The screen transition diagram of the application.

SocialRecognizerViewController This is the class corresponding to the home view and can be seen in figure 5.10 (a). The class serves as a pass-through class from where most other views are reachable. It has limited functionality, but provides the link with the Facebook API, in order to log the user in before he uses the application. The Facebook pop-up appears if the correct tokens are not found, so the user can log in. These tokens are stored in the NSUserDefaults. After logging in, an initial request is made to Facebook to fetch the friend list of the logged-in user. This list, with each friend's name and ID, is used with each recognition to look up the ID and get the name without making an additional request to Facebook.

CameraViewController This is the core class of the application. It implements the AVCaptureSession and has the camera view as its view. It contains the methods to detect faces as described in section 5.1.1 and implements the recognize-button from the camera view. Upon clicking this button, the recognize-method is called, which stores the current frame as a UIImage and constructs the FWObject as described in section 5.1.2. A callback method is implemented that receives the Face.com reply and filters the results based on the confidence percentage. The label is drawn above the face, and the face becomes clickable in order to navigate to the overview view of the recognized person. A button to switch between the front and back camera is also created in the UI, to allow the user to use both cameras of the iPhone. An algorithm to draw the labels in the correct position for each orientation of the phone is also implemented in this class.
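The filtering step in that callback — keep only candidates whose confidence exceeds 50 percent, label the face with the best one, otherwise show the Try again!-label — can be sketched as follows. The candidate structure is an assumption modeled on the description, not the literal Face.com reply format.

```python
def best_match(candidates, threshold=50):
    """Return the uid of the highest-confidence candidate above the
    threshold, or None when the 'Try again!' label should be shown."""
    accepted = [c for c in candidates if c["confidence"] > threshold]
    if not accepted:
        return None
    # Label the face with the match that has the highest confidence value.
    return max(accepted, key=lambda c: c["confidence"])["uid"]
```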

OverviewViewController This controller manages the overview view of a recognized person. It requests information about the recognized ID and stores this information in a RecognizedPerson-object. It is a subclass of a UITableViewController, which is the standard iOS way to display data. It presents general information about the recognized person, along with links to his social networks. Such a link opens the application of the network instead of giving an overview of his profile in the application itself, as suggested in section 4.5.1.

FBRequestWrapper This is a singleton class that handles all calls to the Facebook API. It has the functionality to log in and out of Facebook and to send requests to the Facebook Graph. The class stores the token provided by the API in the collection that holds all user-dependent settings, the NSUserDefaults. The choice of a singleton pattern for the link to the Facebook API is based on the requirement that multiple classes interact with Facebook.
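The singleton idea — one shared instance reachable from every class that needs Facebook, so the auth token is held exactly once — translates to most languages. An illustrative Python sketch (invented names, not the Objective-C implementation):

```python
class SharedFacebookClient:
    """Sketch of the singleton idea: every caller gets the same instance,
    so state like the auth token is stored exactly once."""
    _instance = None

    def __init__(self):
        self.token = None  # auth token cached for all callers

    @classmethod
    def shared(cls):
        # Lazily create the single shared instance on first access.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance
```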

UserModel This is a model class used to store data. The class stores information about the currently logged-in user of the application. For this iteration it stores the name and ID of the current user, along with a map of his friends containing their IDs and full names. This map is filled upon starting the application, when the request is made by the SocialRecognizerViewController.

RecognizedPerson This class is also a model class, containing information about the recognized person. It stores the information received from the request to the Facebook Graph and is used by the OverviewViewController to fill its table with data about the person.

NetworksViewController This class allows users to fill in their social network handles. After leaving the networks view, the web service will be called to update these handles in the database.

HistoryTableViewController This class fetches the list of recognized people from the NSUserDefaults and displays them in a standard iOS table view.

SettingsViewController This controller is linked to the settings view and implements all the settings of the application. It offers the functionality to open the application in the camera view, to log the current user out of Facebook and to train Face.com with the friends of the logged-in user.

Conclusion

This chapter described the implementation of the digital prototypes. A first prototype was created with the functioning core of the application: the face recognition. The prototype featured face tracking, face recognition and an overview of the information of the recognized person. The face tracking is implemented using frame-per-frame analysis of the camera feed, with each frame being sent to the built-in face detection of iOS 5. A recognize-button was added to allow the user to choose when to start the recognition process. When the button is clicked, a snapshot of the camera feed is taken and sent to Face.com, along with some other parameters that define the request. When the reply is received, it is analysed to see whether there are matches with a confidence above 50 percent. If so, the face is labelled with the match that has the highest confidence value; if not, a Try again!-label is shown. When the face is recognized, it becomes clickable and refers to the overview view, which shows basic Facebook information along with a link to the recognized person's Facebook profile. This prototype was evaluated by 5 test persons. In general it was received positively, but the same remarks were given by each test person. The prototype was adapted and featured new views: the history-view, the settings-view and the networks-view. To support the discovery of multiple networks, a web application and a database were set up to store the social network handles of the users. When a person is recognized by another user, the application requests the handles from the service and displays them in the overview view of the recognized person. The final application is tested and evaluated in the next chapter.


Chapter 6

Evaluation

This chapter presents the evaluation of the final application. The evaluation will follow the same structure as the previous prototype evaluations and will assess whether the final application has reached its goals.

6.1 Methodology

The tests are split up into 2 parts. First, the test users have to fulfil a number of tasks. Second, one of the test subjects is given the task to use the application in a real-life environment, and a comparison is made of the time it takes to discover the digital identity of a person with and without the application, in order to check whether the application has fulfilled its goal. After the tests, the users were asked to fill in a USE questionnaire.

The final application as described in section 5.3 was evaluated with 12 test subjects. The group of test users consisted of 10 men and 2 women, between the ages of 20 and 25. All men had considerable smartphone experience and had worked with an iOS device (iPhone or iPad); the 2 women had no smartphone experience. 4 of the smartphone users had already tested the previous digital prototype.

The test users need to complete the following tasks:

1. Recognize the evaluator and look for information about him.

2. Find a way to start the application in camera mode.

3. Find details about a certain person whom you have scanned before, but who is not around at the moment.

4. Delete this person from the history.

5. Alter the Twitter ID of the logged in user and check to see if the alterationshave worked.


These tasks are designed in a way that makes the user discover the full functionality of the application. During these tasks they have to apply the think-aloud technique.

6.2 Results and discussion

The results of the tasks are discussed below:

1. As in the first digital prototype, the home view was clear to all users. Once they arrived in the camera view, the notification Tracking faces. Click recognize. was displayed, but only 6 out of 12 users mentioned that they read it or followed its instructions. 3 of the users who did not read the notification were impatient and pressed the recognize-button again before Face.com had sent a reply. When asked why they had neglected the notifications, 4 users answered that the font size was too small and 2 said that the red colour was not visible enough. The 6 users who did pay attention to the notifications said that the notifications guided them, and received them very positively. The 4 users who tested the previous digital prototype also expressed that the notifications were a big improvement. We conclude that the notifications are a positive improvement, but that they should be displayed in a clearer way. The recognition succeeded on the first try for 11 of the 12 test users. The 12th user received the Try again!-label and, after trying again, received a correct recognition. Again, the recognition failed due to overexposure of the camera. After the recognition, all users pressed the face and arrived in the overview view. The 4 test users who had tested the previous iteration noticed the improvement of the face-button.

2. No problems were experienced with the start in camera mode-button. After restarting the application, the users saw the effect of this setting and they all thought it was useful. It eliminates one click to get from starting the application to recognizing a person. 10 out of 12 users preferred camera mode to be enabled by default.

3. Due to the use of a standard iOS gesture (swipe over the name and a delete-button appears), all experienced iOS users completed this task without problems. The 2 inexperienced users needed help to find this somewhat hidden feature. An edit-button that provides an alternative way to edit the list might make it clearer for inexperienced users.

4. In order to alter the Twitter ID of the logged-in user, the users needed to find the networks view. Once there, they all succeeded in altering the Twitter handle. To test the changes, they needed to return to the camera and recognize the evaluator (because he is the logged-in user). A bug was found regarding the time to update the database. It should also be noted that while loading the overview view, the request to the database timed out about 20% of the time, an issue that needs to be looked at.


6.3 USE questionnaire

The users were once again asked to fill in the USE questionnaire. The full results of the questionnaire are given in appendix D. We will discuss the general scores of the 4 categories and examine some noteworthy results.

6.3.1 Usefulness

The responses reflect a perceived usefulness of the tool. Some answers to the statements are displayed in figure 6.1. D.3 The application is useful, D.5 The application makes the things I want to accomplish easier to get done and D.7 The application meets my needs were received positively. It seems that the application is considered useful and meets the needs of the users.

Figure 6.1: A boxplot representing statements D.2, D.3, D.4, D.5, D.7.

D.2 The application helps me be more productive: We conclude that the app indeed helps the users be more productive. As shown in section 6.5, the application saves time when trying to find the social networks of a person. If we compare the scores to the previous prototype, we see that the median has remained at 5 out of 7.

D.4 The application gives me more control about the activities in my life: The scores for this statement were lower, but this is again considered to be due to the nature of the question.

6.3.2 Ease of use

The application scored excellently on most ease-of-use statements, but considerably lower on some of them. The discussed statements can be seen in figure 6.2. D.9 The application is easy to use and D.14 Using the application is effortless received positive scores. The application seems to be considered easy to use.

Figure 6.2: A boxplot representing statements D.9, D.12, D.14, D.16.

D.12 The application requires the fewest steps possible to accomplish what I want to do with it: With the face-button fixed in this iteration, the score for this statement has risen. The flow of the application seems to be experienced as good by the users.

D.16 I don't notice any inconsistencies as I use the application: The median of this score has dropped from 6 to 5 compared to the previous iteration. This may be due to the database server timing out occasionally, which caused the overview view to display only Facebook at some times and all networks at others.

6.3.3 Ease of learning and satisfaction

The ease of learning statements were again answered very positively. In figure 6.3 you can see the box plot representation of statement D.22: It is easy to learn to use the application. The scores range from 6 to 7 with a median of 7, which is an excellent score. It seems that the notifications caused the scores to rise in comparison to the previous iteration. The satisfaction statements did not score as well as the ease of learning questions, but still received some good ratings. In figure 6.3, you can see the answers to statements D.24, D.27 and D.29. D.27 The application works the way I want it to work was well received. The users think the application indeed works how they want it to work.

D.24 I am satisfied with the application: The median of this statement has risen since the previous prototype. It seems that the users feel more satisfied with the application at this stage. This is thought to be caused by the new functionality, like the settings, which allow the users to tweak the application to their liking, and the history, which enables them to look up information from previous recognitions.

Figure 6.3: A boxplot representing statements D.22, D.24, D.27, D.29.

D.29 I feel the need to have it: With a median of 5 and most scores between 4 and 5, this is one of the lower-rated statements. This could be due to the privacy and disturbance issues: using the application makes other people uncomfortable, which could have caused the lower score.

6.4 Real-life testing

In order to test the real-life usefulness of the application, it was given to a test user with the task to try it out in the real world, in an uncontrolled environment. This test provides insight into 2 major points: how the recognition process is received by people who aren't familiar with the application, and how the test subject uses and experiences the application over a more extensive period of time.

The test subject tested the application in a school environment, where he tried to recognize 4 persons. He logged in to the application with his own Facebook account and trained his friends via the button in the settings-view. This way he was able to recognize all his Facebook friends. When he found a friend to recognize, he told him he was testing a new application and asked if he could try it. When they agreed, he pointed the iPhone at their faces and recognized them. 3 out of 4 were immediately correctly recognized; the 4th is believed not to have succeeded because of bad lighting conditions. During the recognition process, the iPhone was kept pointed at their faces. This made them feel uncomfortable, because they thought he was filming them or taking pictures. When he afterwards showed them that they were recognized, they were stunned but not enthusiastic. When he told them that no pictures were saved, and that the application only provided information about them because the test user was their Facebook friend, they were a bit more relaxed. However, the general feel of the application among people who aren't familiar with it is uncomfortable. People do not like being kept in the frame of a camera, and are very attached to their privacy. The test subject expressed that due to the duration of the recognition process, the recognition cannot be done in a subtle way and will always cause an uncomfortable 10 seconds, especially when tried on strangers.

Subject   Started   Recognition   Twitter (WA)   LinkedIn (WA)   Twitter (WOA)   LinkedIn (WOA)
1         WA        11.3s         20.9s          32.5s           42.3s           89.2s
2*        WA        10.4s         28s            38s             62s             129s
3         WA        31s           71s            83.3s           47s             95.4s
4         WA        10.4s         17.8s          25.2s           30.7s           62.4s
5         WA        8.8s          15.4s          22.9s           55s             112.8s
6         WA        14s           29.5s          44s             37.9s           73s
7         WOA       12.3s         18s            28.6s           45.7s           82s
8         WOA       8.3s          13s            21.3s           30.2s           67.4s
9         WOA       7.8s          17.3s          27.8s           35s             79.2s
10*       WOA       15.6s         27.4s          39.3s           54.9s           128.1s
11        WOA       10.2s         15.9s          23.8s           24.1s           56.8s
12        WOA       10s           19.8s          30.5s           29.3s           49s
Average             12.508s       24.5s          34.766s         41.175s         85.358s

Table 6.1: Times to discover social networks with the application (WA) and without the application (WOA): the recognition time and the cumulative times to reach Twitter and LinkedIn are given per condition.

6.5 A/B testing

The main goal of this thesis is to enable efficient discovery of digital identities through face recognition by using smartphones. In order to test whether this goal is achieved, the test persons were asked to fulfil a simple task with the application, and without the application. The task is split up into several parts. The first 6 test users start with the application and need to recognize the evaluator. After the recognition, they need to go to his Twitter profile and try to follow him. When this is done, they need to go to his LinkedIn profile and try to connect with the evaluator. Afterwards, the evaluator verbally gives them the handles of his Twitter and LinkedIn accounts, and they need to use the standard applications to look him up. This is a simplification of the normal discovery process, in which the user needs to look up the handles themselves (e.g. by using Google). The second group of test users starts without the application and finishes with it. When asked for a preference, all users preferred the application to the manual lookups: the manual lookup was perceived as inefficient, and all users found the application faster and more fun to use. There is no bias based on which condition they started with. The duration of the task is recorded and compared; the time it takes is a measure of the efficiency of the application.

Figure 6.4: A graph representing the average time to discover a person's networks with (WA) and without (WOA) the application.

Table 6.1 shows the results of these tasks. There are no apparent differences between the users who started with the application and those who started without. Test subjects 2 and 10 (marked with * in the table) are the 2 users with no smartphone experience; they were expected to need more time to complete the tasks. Test subject 3 had trouble recognizing the evaluator, which was caused by Face.com: at the time of testing, their servers were overloaded and took much longer to respond. The table also contains the average time to complete all tasks, which is plotted in figure 6.4. The figure shows that there is an initial time to recognize the face (on average around 12 seconds), followed by a roughly constant time per network that is visited (on average around 10 seconds). Without the application, each network needed to be visited manually, by opening the application of the network, entering the handle of the person you are looking for, and going to his profile. This lookup takes around 40 seconds per network. Our application has proven to be a lot faster than a manual lookup.
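The averages in the bottom row of table 6.1 are plain per-column means; recomputing the recognition-time column as a check:

```python
# Recognition times per test subject, transcribed from table 6.1 (seconds).
recognition_times = [11.3, 10.4, 31, 10.4, 8.8, 14,
                     12.3, 8.3, 7.8, 15.6, 10.2, 10]

def mean(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

average = mean(recognition_times)  # matches the table's 12.508s
```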

Conclusion

This chapter described the final evaluation of the application. The application was tested with 12 test persons. They needed to complete a number of tasks to examine the usability of the application and discover its functionalities. The notifications in the camera view were received positively, but for some of the users they were not visible enough. They should be adapted in a way that makes them clearer but does not clutter the camera view. The settings view was found easily and the settings were clear to the users. The history was seen as a positive feature, and the gesture to remove items from the history was found by the users with iOS experience, since this is the standard gesture to delete a row from a tableview. Linking the application to the user's social networks was also obvious for most users, although they did not always know what their handle was on some social networks. This could be prevented by logging them in to the different networks instead of using handles, so that the application is authorized to look up the handle for them. This would also prevent users from providing false handles.

The application was tested in a real-life situation, with people who weren't aware of the goal of the application. The 4 persons who were recognized felt uneasy because the camera needed to be pointed at them for about 10 seconds. They also had privacy concerns, but once it was explained that the information was only visible because the application was being used by one of their Facebook friends, they felt more relaxed.

Furthermore, the time it takes to get to the social networks of a person with and without the application was tracked. The application requires a 12-second time frame in order to recognize the person, but afterwards, via the direct links in the overview view, the discovery of the recognized person's networks was faster than the regular method. We can conclude that the application enables the users to efficiently discover a person's digital identity.

The USE questionnaire was filled in by the 12 test users, and the results in general were good. The application was perceived as useful, easy to use and easy to learn, and the users were partially satisfied.


Chapter 7

Conclusion

7.1 Thesis summary

The goal of this thesis was to create an efficient way to discover the digital identity of a person by using face recognition on a mobile device. This covers 2 different topics: face recognition and social network aggregation. Chapter 2 studied the possibilities of and related work on face recognition in mobile applications, and how social networks can be aggregated. It also looked at augmented reality to determine different possibilities for how the application could display the information of a person.

After researching these aspects, a brainstorm and a survey were presented and discussed, which led to the functionalities that the application should implement. The application should be able to fetch information from the different social networks of a person in real time and use augmented reality to display this information on a camera view of the smartphone. Other functionalities, like a history of recognized people and settings, were also introduced by the analysis.

We designed the application based on a rapid prototyping approach with frequent user feedback cycles. The first 3 cycles consisted of paper prototypes. 3 user interface mock-ups were created, displaying different ways to visualize the information of the recognized person. The chosen user interface was elaborated and evaluated again, which uncovered some navigation and visualization problems. After adjusting the prototype to the remarks of the users, a third prototype was evaluated and, besides some minor issues, was received positively by the test users.

The implementation of the application was performed in 2 iterations. The first prototype focused on the core functionality, the face recognition, and was evaluated. This evaluation again uncovered some navigation problems, to which the application was adapted. The second and final prototype consisted of a fully functioning application. This was evaluated on its usability, its usefulness and its efficiency.

The result of this thesis is an application that is able to recognize people and offer efficient access to their digital identities. It uses the Face.com API as face recognition algorithm, which proved to be reliable and efficient. The application is linked to the Facebook API in order to provide some basic information about the recognized persons. Some aspects of the application found their origin in other projects. The face tracking process, which implements the iOS 5 built-in face detection, was based on the SquareCam project [51], an example project provided by Apple to demonstrate the use of the face detection. The link with Face.com was provided by the FaceWrapper project [52], an iOS wrapper for Face.com that provides the functionality to request face recognition for images and to train the algorithm with Facebook friends. The web service and database are the same as used for the More! project and were adapted to be used with our application.

7.2 Goals

Section 1.2 introduced a number of goals that this thesis wanted to accomplish. We now discuss whether the application satisfies these goals.

Use a stable and reliable face recognition algorithm After comparing several face recognition APIs in section 2.1.2, Face.com was chosen as the face recognition service. During the testing of the digital prototypes, it proved to be a very stable and reliable service. The confidence percentages allowed us to filter the reply, and the application succeeded most of the time in matching the correct name to a face. Its method of 3D reconstruction only fails when the image is overexposed; if lighting conditions are good, Face.com provides a correct match. The Face.com database is based on tagged Facebook images and thus provides a big and easily expandable set of trained faces.

Find a proper way to visualise the information Multiple prototypes, both paper and digital, have been evaluated to ensure that the user interface of the application was clear and usable. The design followed the iOS user interface guidelines, so that user interaction would feel familiar. The final USE questionnaire provided good results and leads us to the conclusion that the users are satisfied with the design of the user interface.

Discovery of digital identities should be efficient In the last evaluation, we compared the time it took to visit two social networks of a user by using the application versus manually looking for his profile on these networks. The goal stated that the application should grant easier access and be less time-consuming than the manual lookup. The comparison showed that, although the application spends around 12 seconds of this task on the recognition process, the discovery was still much more efficient.

It should be noted that although the application has met its goals, it is not as useful as we expected it to be. This is due to the cumbersome way of recognizing a person. Aiming the phone at the person and holding it up until the recognition process has finished caused a 12-second time frame that was uncomfortable for both the user and the recognized person. The application would be less inconvenient if it only took a quick snapshot of the person, so the phone would only need to be pointed at the person for a moment.

7.3 Reflection

If we take a critical look at the project, we see some positive and negative points. We are very satisfied with the end product. The final application has met all of its goals and is a fully functioning face recognition application. It can be used to easily find the networks of friends, but it is open enough to be used in an academic context as well. Although the application currently only supports a limited number of social networks, it can be extended to support many more without much effort. If social networks with an academic focus, like Mendeley, were supported, the application would have potential as a research 2.0 app.

We must state that the final application is not the fully augmented application that we envisioned at the start of this project. This is mainly because of its usefulness. Even now, with the application requiring the user to point the phone at the person for 12 seconds, it is perceived as uncomfortable. The envisioned idea would require the phone to be pointed at all times. Due to the use of short iterations, we noticed this early enough, which resulted in a more useful application. A fully augmented version could, however, work with a different hardware setup, like head-mounted displays.

Although the final result covers most of the required functionalities, the application can still be improved (see the following section). We did not manage to implement all the functionalities, due to the problems that occurred during the implementation. iOS development was new to us and we lost a serious amount of time figuring out how everything works and how certain objects are meant to be used.

7.4 Possible future work

The development of the application was prioritized to ensure all necessary functionalities were implemented. In order to improve the application, some lower-priority tasks could be implemented:

Bug fixes Several known bugs should be fixed, the biggest being the bug that crashes the application when trying to recognize a second person. Other known bugs include the occasional timeout of the database request, and the crash of the application when the user returns to the networks view before the database has processed the request.

Improving the face recognition process The face tracking algorithm described in section 5.1.2 should be implemented in order to recognize multiple persons at the same time. The face recognition should also happen fully automatically, as envisioned in the paper prototypes.


Photo-based recognition Instead of improving the face tracking algorithm, we could remove the augmented camera view of the application and work with pictures instead. If the application only requires the user to take a picture of the person he wants to recognize, the uncomfortable moment during recognition disappears.

Authentication with social networks At the moment, the application allows the user to fill in handles for his social networks, so he can fill in whichever handle he wants. Authorization with the different networks could prevent this. This requires the APIs of all supported networks to be integrated in the application, which proves the identity of the user with those networks, but makes it harder to support many different networks.

Recognition of application users Currently, the application is only able to recognize Facebook friends of the logged-in user. This should be expanded to allow users to recognize other users of the application, which is possible by using the private namespace offered by Face.com. Recognizing Facebook friends was enough to prove the use of the application, but for its general applicability, it is more useful to be able to discover the digital identities of strangers than of Facebook friends.
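
To illustrate, a recognition request restricted to such a private namespace could be built as follows. This is a hedged sketch only: the endpoint path and parameter names follow the REST style Face.com documented, but the namespace name `myapp` and the helper itself are hypothetical.

```python
# Hypothetical sketch: building a Face.com faces.recognize request whose
# candidate set is a private application namespace rather than the
# viewer's Facebook friends. All concrete values are placeholders.
import urllib.parse

API_ENDPOINT = "http://api.face.com/faces/recognize.json"

def build_recognize_url(api_key, api_secret, image_url, namespace):
    # "all@<namespace>" asks the service to match against every user
    # trained into that private namespace, i.e. every app user.
    params = {
        "api_key": api_key,
        "api_secret": api_secret,
        "urls": image_url,
        "uids": "all@" + namespace,
    }
    return API_ENDPOINT + "?" + urllib.parse.urlencode(params)

url = build_recognize_url("KEY", "SECRET",
                          "http://example.com/face.jpg", "myapp")
print(url)
```

Training app users into the namespace (instead of, or next to, their Facebook tags) is what would widen recognition beyond the friend list.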


Appendices


Appendix A

Paper prototype iteration 2 questionnaire results

On each graph, the x-axis represents the 7-point Likert rating scale, with 1 representing totally disagree and 7 representing totally agree; the y-axis represents the number of persons.

A.1 Usefulness questions


Figure A.1: The application helps me be more effective.

Figure A.2: The application helps me be more productive.

Figure A.3: The application is useful.

Figure A.4: The application gives me more control over the activities in my life.

Figure A.5: The application makes the things I want to accomplish easier to get done.

Figure A.6: The application saves me time when I use it.

Figure A.7: The application meets my needs.

Figure A.8: The application does everything I would expect it to do.


A.2 Ease of use questions

Figure A.9: The application is easy to use.

Figure A.10: The application is simple to use.

Figure A.11: The application is user friendly.

Figure A.12: The application requires the fewest steps possible to accomplish what I want to do with it.

Figure A.13: The application is flexible.

Figure A.14: Using the application is effortless.


Figure A.15: I can use the application without written instructions.

Figure A.16: I don’t notice any inconsistencies as I use the application.

Figure A.17: Both occasional and regular users would like the application.

Figure A.18: I can recover from mistakes quickly and easily.

Figure A.19: I can use the application successfully every time.


A.3 Ease of learning questions

Figure A.20: I learned to use the application quickly.

Figure A.21: I easily remember how to use the application.

Figure A.22: It is easy to learn to use the application.

Figure A.23: I quickly became skillful with the application.


A.4 Satisfaction questions


Figure A.24: I am satisfied with the application.

Figure A.25: I would recommend the application to a friend.

Figure A.26: The application is fun to use.

Figure A.27: The application works the way I want it to work.

Figure A.28: The application is wonderful.

Figure A.29: I feel I need to have it.

Figure A.30: It is pleasant to use.


Appendix B

Paper prototype iteration 3 questionnaire results

On each graph, the x-axis represents the 7-point Likert rating scale, with 1 representing totally disagree and 7 representing totally agree; the y-axis represents the number of persons.

B.1 Usefulness questions


Figure B.1: The application helps me be more effective.

Figure B.2: The application helps me be more productive.

Figure B.3: The application is useful.

Figure B.4: The application gives me more control over the activities in my life.

Figure B.5: The application makes the things I want to accomplish easier to get done.

Figure B.6: The application saves me time when I use it.

Figure B.7: The application meets my needs.

Figure B.8: The application does everything I would expect it to do.


B.2 Ease of use questions

Figure B.9: The application is easy to use.

Figure B.10: The application is simple to use.

Figure B.11: The application is user friendly.

Figure B.12: The application requires the fewest steps possible to accomplish what I want to do with it.

Figure B.13: The application is flexible.

Figure B.14: Using the application is effortless.


Figure B.15: I can use the application without written instructions.

Figure B.16: I don’t notice any inconsistencies as I use the application.

Figure B.17: Both occasional and regular users would like the application.

Figure B.18: I can recover from mistakes quickly and easily.

Figure B.19: I can use the application successfully every time.


B.3 Ease of learning questions

Figure B.20: I learned to use the application quickly.

Figure B.21: I easily remember how to use the application.

Figure B.22: It is easy to learn to use the application.

Figure B.23: I quickly became skillful with the application.


B.4 Satisfaction questions


Figure B.24: I am satisfied with the application.

Figure B.25: I would recommend the application to a friend.

Figure B.26: The application is fun to use.

Figure B.27: The application works the way I want it to work.

Figure B.28: The application is wonderful.

Figure B.29: I feel I need to have it.

Figure B.30: It is pleasant to use.


Appendix C

Digital prototype iteration 1 questionnaire results

On each graph, the x-axis represents the 7-point Likert rating scale, with 1 representing totally disagree and 7 representing totally agree; the y-axis represents the number of persons.

C.1 Usefulness questions


Figure C.1: The application helps me be more effective.

Figure C.2: The application helps me be more productive.

Figure C.3: The application is useful.

Figure C.4: The application gives me more control over the activities in my life.

Figure C.5: The application makes the things I want to accomplish easier to get done.

Figure C.6: The application saves me time when I use it.

Figure C.7: The application meets my needs.

Figure C.8: The application does everything I would expect it to do.


C.2 Ease of use questions

Figure C.9: The application is easy to use.

Figure C.10: The application is simple to use.

Figure C.11: The application is user friendly.

Figure C.12: The application requires the fewest steps possible to accomplish what I want to do with it.

Figure C.13: The application is flexible.

Figure C.14: Using the application is effortless.


Figure C.15: I can use the application without written instructions.

Figure C.16: I don’t notice any inconsistencies as I use the application.

Figure C.17: Both occasional and regular users would like the application.

Figure C.18: I can recover from mistakes quickly and easily.

Figure C.19: I can use the application successfully every time.


C.3 Ease of learning questions

Figure C.20: I learned to use the application quickly.

Figure C.21: I easily remember how to use the application.

Figure C.22: It is easy to learn to use the application.

Figure C.23: I quickly became skillful with the application.


C.4 Satisfaction questions


Figure C.24: I am satisfied with the application.

Figure C.25: I would recommend the application to a friend.

Figure C.26: The application is fun to use.

Figure C.27: The application works the way I want it to work.

Figure C.28: The application is wonderful.

Figure C.29: I feel I need to have it.

Figure C.30: It is pleasant to use.


Appendix D

Digital prototype iteration 2 questionnaire results

On each graph, the x-axis represents the 7-point Likert rating scale, with 1 representing totally disagree and 7 representing totally agree; the y-axis represents the number of persons.

D.1 Usefulness questions


Figure D.1: The application helps me be more effective.

Figure D.2: The application helps me be more productive.

Figure D.3: The application is useful.

Figure D.4: The application gives me more control over the activities in my life.

Figure D.5: The application makes the things I want to accomplish easier to get done.

Figure D.6: The application saves me time when I use it.

Figure D.7: The application meets my needs.

Figure D.8: The application does everything I would expect it to do.


D.2 Ease of use questions

Figure D.9: The application is easy to use.

Figure D.10: The application is simple to use.

Figure D.11: The application is user friendly.

Figure D.12: The application requires the fewest steps possible to accomplish what I want to do with it.

Figure D.13: The application is flexible.

Figure D.14: Using the application is effortless.


Figure D.15: I can use the application without written instructions.

Figure D.16: I don’t notice any inconsistencies as I use the application.

Figure D.17: Both occasional and regular users would like the application.

Figure D.18: I can recover from mistakes quickly and easily.

Figure D.19: I can use the application successfully every time.


D.3 Ease of learning questions

Figure D.20: I learned to use the application quickly.

Figure D.21: I easily remember how to use the application.

Figure D.22: It is easy to learn to use the application.

Figure D.23: I quickly became skillful with the application.


D.4 Satisfaction questions


Figure D.24: I am satisfied with the application.

Figure D.25: I would recommend the application to a friend.

Figure D.26: The application is fun to use.

Figure D.27: The application works the way I want it to work.

Figure D.28: The application is wonderful.

Figure D.29: I feel I need to have it.

Figure D.30: It is pleasant to use.


Appendix E

Class diagram

Figure E.1 represents the complete class diagram of the application. The arrows between the classes represent a use relation.


[The class-diagram figure does not survive text extraction. The recoverable class names are CameraViewController, SettingsViewController, OverviewViewController, OverviewTableViewController, HistoryTableViewController, NetworksViewController, SocialRecognizerViewController, RecognizedPerson, UserModel, FBRequestWrapper and SpinnerView, together with the external FacebookAPI, FaceWrapper and Face.com API components.]

Figure E.1: The class diagram of the application


Appendix F

Poster

This appendix contains the poster.


Master Computer Science

Thesis Gerry Hendrickx

Promotor Prof. Erik Duval

Supervisor Gonzalo Parra

Academic Year 2011-2012

The Goal Obtain the social network profiles of persons in an efficient way, by using face recognition.

How it works

Discovering digital identities through face recognition on mobile devices

The Application
• Augmented camera view
• History of tagged people
• Social network profiles aggregator

Evaluation
• Iterative design:
  • 3 interface designs
  • 2 paper prototypes
  • 2 digital prototypes
• Testing
  • Focus on smartphone users
  • Think aloud technique
  • USE questionnaire
• Results

Privacy Concerns
Although the information is public, there were concerns → limited to Facebook friends & app users.

Conclusions – Future Work
Conclusions:
• Augmented reality was not the way to go
• Slow (5-10 seconds) recognition process
• Discovering profiles is ~4x more efficient than without the app
• Holding the phone up is tedious
• Easily expandable for more social networks

Future work:
• Authentication of social networks
• Recognize multiple persons
• Stability improvements


Appendix G

Paper

This appendix contains both the English and the Dutch version of the scientific paper.


Discovering digital identities through face recognition on mobile devices

Gerry Hendrickx
Faculty of Engineering, Department of Computer Science
Katholieke Universiteit Leuven
[email protected]

Abstract

This paper covers the current status and past work of the creation of an iPhone application. The application uses face recognition provided by Face.com [1] to recognize faces captured with the camera of the iPhone. The recognized faces will be named and enable the user to get access to the information of the person from different social networks, his digital identity. This paper describes the research, the related work, the user interface prototyping and the current state of the implementation. It covers 3 iterations of paper prototyping in which a user interface gets established, and the initial implementation of the application.

1. INTRODUCTION

The internet has revolutionized the way people interact with each other. Different social networks have been created to support different levels of communication. The presence of a person on these networks is called his digital identity. The goal of this thesis is to find an efficient way to discover the digital identity of a person, by using face recognition and a smartphone. This paper describes the process of creating a face recognition application for the iPhone. It uses face recognition to offer the user extra information about the persons seen through the camera. This extra information will come from various social networks, the most important being Facebook, Twitter and Google+. It aims to offer users access to online data and information that is publicly available on the internet. This can be used in a private context, to enhance conversations and find common ground with your discussion partner, or in an academic context, to be able to easily find information, like slides or publications of the speaker at an event you’re attending. The app will be created for iOS because the SDK offers a built-in face detection mechanism since iOS 5 [2]. Android, the other option, does not have this built-in feature.

A brainstorm and a survey resulted in a list of requirements for the face recognition application. The survey asked which information users would like to get if they were able to recognize a person by using a smartphone. The survey was answered by 34 persons. 14 out of 34 voters would respect the privacy of the recognized person and thus would not want any information. 9 wanted contact information, 6 wanted links to the social network profiles of the recognized person and 3 of the voters wanted the last status update on Facebook or Twitter. The 2 remaining votes went to pictures of the recognized person and the location where they last met. There was a strong need for privacy, so a policy was decided upon: the app could be limited to the recognition of the user’s Facebook friends, but the need to recognize your Facebook friends is lower than the need to recognize strangers. To broaden the scope of recognizable people, the other users of the application will also be recognizable. The general policy will be: if you use the application, you can be recognized. An eye for an eye.

The brainstorm resulted in a list of functionalities and characteristics. First of all, the application should work fast. Holding your smartphone up and pointed at another person is quite cumbersome, so the faster the recognition works, the smaller this problem becomes. The information about the person should be displayed in augmented reality (AR) [3]. This is a technology that augments a user’s view of the world. AR can add extra information to the screen, based on GPS data or image processing. AR could place information about a person around his face in real time. The functionality requirements from the poll and brainstorm, and thus the goals to achieve efficient discovery of digital identities by using face recognition, are the following:


• Detection and recognition of all the faces in the field of view.

• Once recognized, the name and other options (like available social networks) will appear on screen with the face.

• Contact info will be fetched from the networks and can be saved in the contacts app.

• Quick access to all social networks will be available, along with some basic information such as the last status update/tweet.

The information available will differ from person to person. To add an extra layer of privacy settings, a user will be able to link his networks and choose to enable or disable them. When the user gets recognized, only his enabled networks will show up.
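
This per-network opt-in can be sketched as a simple filter over the recognized user's profile. The data layout and names below are illustrative assumptions, not the thesis implementation:

```python
# Hypothetical sketch of the described privacy layer: a user links several
# networks, but only the ones he enabled are shown when he is recognized.

def visible_networks(profile):
    """Return only the networks the recognized user chose to expose."""
    return {name: handle
            for name, (handle, enabled) in profile["networks"].items()
            if enabled}

profile = {
    "name": "Gerry",
    "networks": {
        "facebook": ("gerry.h", True),
        "twitter": ("@gerry", True),
        "linkedin": ("gerry-h", False),  # linked, but disabled by the user
    },
}
print(visible_networks(profile))  # {'facebook': 'gerry.h', 'twitter': '@gerry'}
```

The filter runs on the server side of such a design, so a disabled network never reaches the recognizing device at all.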

2. RELATED WORK

For this master’s thesis we searched for related mobile applications. No existing application was found that does exactly the same as this project. However, some similarities were found with the following applications:

• Viewdle [5]: Viewdle is a company focusing on face recognition. They have several ongoing projects, and have already created an application for Android, called Viewdle Social Camera. It recognizes faces in images based on Facebook. Viewdle has a face recognition iOS SDK available.

• Animetrics [6]: Animetrics created applications to help government and law enforcement agencies. It has multiple products, like FaceR MobileID, which can be used to get the name and match percentage of any random person. FaceR CredentialME can be used for authentication on your own smartphone: it recognizes your face and, if it matches, it unlocks your data. Animetrics also focuses on home security face recognition. However, they do not seem to have a public API, since their focus is not on the commercial market.

• TAT Augmented ID [7]: TAT Augmented ID is a concept app. It recognizes faces in real time and uses augmented reality to display icons around the face. The concept is the same, but the resulting user interface is different. Section 3 discusses why a fully augmented user interface is not preferred on mobile devices.

Another non-commercial related work is a master’s thesis of 2010 at the Katholieke Universiteit Leuven [10]. The author created a head-mounted-display-based application to recognize faces and get information. From his work we learned that HMDs are not the ideal practical setup (his app required a backpack with a laptop and heavy headgear), and that the technology used (OpenGL) is a cumbersome way to develop. Using iOS simplifies these aspects. The author used Face.com as face recognition API and was very satisfied with it.

A comparison of face recognition APIs was made in order to find the one most suited to our goals. A quick summary of the positive and negative points:

• Viewdle: As said above, we tried to contact Viewdle to get more information about the API. Sadly, they did not respond; therefore Viewdle is no option.

• Face.com: Face.com offers a well-documented REST API. It offers Facebook and Twitter integration and a private namespace that allows the application to apply the previously stated privacy policy. There is a rate limit on the free version of Face.com.

• Betaface [8]: The only API to work with both images and video. However, it is Windows-only, has not been used with iOS yet and is not free.

• PittPatt [9]: PittPatt was a very promising service, but sadly it was acquired by Google. The service cannot be used at this time.

Face.com thus seemed to be the only remaining option, but also the best one found. It has an iOS SDK and social media integration, both of which are very useful in the context of this application.
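For illustration, a recognize call against such a REST API might be assembled as below. This is only a sketch: the endpoint path and parameter names (`faces/recognize.json`, `api_key`, `uids`, the `all@namespace` form) are assumptions modelled on the historical Face.com documentation, not taken from this text, and the request URL is only built, never sent.

```python
from urllib.parse import urlencode

def build_recognize_request(api_key, api_secret, image_url, namespace):
    """Assemble a hypothetical faces.recognize-style REST request URL."""
    params = {
        "api_key": api_key,
        "api_secret": api_secret,
        "urls": image_url,            # photo to analyze, passed by URL
        "uids": "all@" + namespace,   # restrict matches to one private namespace
        "format": "json",
    }
    return "https://api.face.com/faces/recognize.json?" + urlencode(params)

url = build_recognize_request("KEY", "SECRET",
                              "http://example.com/photo.jpg", "myapp")
print(url)
```

Restricting `uids` to a single namespace is what would let the app enforce the "only friends and other app users are recognizable" privacy policy server-side.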

3. PAPER PROTOTYPING

Paper prototyping is the process of designing the user interface based on quick drawings of all the different parts of the user interface [11]. By doing this with paper parts, it becomes easy to quickly evaluate and adapt the interface. The prototyping phase consisted of three iterations: the interface decision, the interface evaluation and the expert opinion.

3.1. Phase one: interface decisions

The first phase of the paper prototyping was to decide which of the three possible interfaces would be used. The interfaces were:


• Interface 1: A box of information attached to the head of the recognized person. This makes the best use of augmented reality, but you have to keep your phone pointed at the person in order to be able to read his information. See figure 1a.

• Interface 2: A sidebar with information, which takes up about a quarter of the screen. This way, users can lower their phone when a person is selected, but they can still use the camera if they want. See figure 1b.

• Interface 3: A full-screen information window. This makes minimal use of augmented reality but offers a practical way to view the information. Users see the name of the recognized person in the camera view and, once it is tapped, are referred to the full-screen information window. See figure 1c.

These interfaces were evaluated with 11 test subjects, aged 18 to 23, with mixed smartphone experience. The tests were done using the think-aloud technique, which means subjects have to say what they think is going to happen when they tap a button, while the interviewer plays computer and changes the screens. The same simple scenario was given for all interfaces, in which the test subject needed to recognize a person and find information about him. After the test, a small custom-made questionnaire was given to poll the interface preference. None of the users picked interface 1 as their favourite: the fact that you have to keep your phone pointed at the person in order to read and browse through the information proved to be a big disadvantage. The choice between interfaces 2 and 3 was not unanimous: 27% chose interface 2 and 73% chose interface 3. Interface 3 was therefore chosen and elaborated for the second phase of paper prototyping. People liked the second idea, where you could still see the camera, but then realized that, if interface 1 taught us that you would not keep your camera pointed at the crowd, you would not do so in interface 2 either, so the camera would be showing your table or trousers. This reduces the usefulness of the camera feed. The smartphone owners also pointed out that using only a part of the screen would be too small to put readable information on.

3.2. Phase two: interface evaluation

For the second phase, 10 test subjects between the ages of 20 and 23 were used, again with mixed smartphone experience. An extended scenario was created which explored the full scope of functionality in the application. The testers needed to recognize people, adjust settings, link social networks to their profile, indicate false positives and manage the recognition history.

(a) Interface 1

(b) Interface 2

(c) Interface 3

Figure 1: The different interfaces


The think-aloud technique was applied once again. At the end of each prototype test, the test subject needed to fill in a USE questionnaire [4]. This is a questionnaire consisting of 30 questions, divided into four categories that poll different aspects of the evaluated application. The questions are answered on a 7-point Likert rating scale, 1 representing totally disagree and 7 representing totally agree. The categories are usefulness, ease of use, ease of learning and satisfaction.
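The scoring scheme above can be summarized mechanically. The sketch below, with invented sample answers, shows how a median score per USE category on the 7-point scale might be computed; the data is illustrative, not the thesis's results.

```python
from statistics import median

# Invented 7-point Likert answers for ten subjects, grouped into the
# four USE categories named in the text (not the real survey data).
answers = {
    "usefulness":       [5, 4, 6, 5, 7, 4, 5, 5, 6, 4],
    "ease of use":      [2, 5, 6, 5, 7, 4, 5, 6, 5, 3],
    "ease of learning": [6, 7, 5, 6, 7, 6, 5, 7, 6, 6],
    "satisfaction":     [6, 5, 6, 6, 5, 6, 6, 5, 6, 6],
}

for category, scores in answers.items():
    # median is robust to a single outlier answer, which suits
    # small samples like the 10 subjects used in this phase
    print(f"{category}: median {median(scores)} on a 7-point scale")
```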

3.2.1. Usefulness. The overall results were good. The question about whether the application is useful received a median score of 5 out of 7, with no result below 4. People seem to understand why the app is useful, and it does what they would expect. The scores on whether it meets the needs of the users were divided, ranging from 1 to 7, because some users (especially those with privacy concerns or without a smartphone) did not see the usefulness of the application. However, the target audience, the smartphone users, did see its usefulness, resulting in higher scores in this section.

3.2.2. Ease of use. From the ease of use questions, the need to add or rearrange buttons became clear. Users complained about the number of screen transitions it took to get from one screen to somewhere else in the application, and this could be seen in the results. The question whether using the application is effortless received scores ranging from 2 to 7 with a median of 5. The paths through the application should be revised to see whether missing links can be found and corrected to improve the navigation. For instance, a home button to go to the main screen will be added on several other screens, instead of having to navigate back through all the previous screens. Otherwise, using the application was straightforward for the iPhone users, because the user interface was built using the standard iOS interface parts.

3.2.3. Ease of learning. None of the ease of learning questions scored below 5 on the 7-point Likert rating scale. This is also due to the standard iOS interface, which was developed by Apple to be easy to work with and easy to learn.

3.2.4. Satisfaction. Most people were satisfied with the functionality offered by the application and how it was presented in the user interface. Especially the iPhone users were very enthusiastic, calling it an innovative, futuristic application. The question whether the user was satisfied with the application received scores ranging from 5 to 6, with a median of 6 out of 7. Non-smartphone users were more sceptical and did not see the need for such an application. That aside, the application was fun to use and the target audience was satisfied.

3.2.5. Positive and negative aspects. The users were asked to list the positive and negative aspects of the application. The positive aspects were the iOS style of working and the functionality and concept of the application. The negative aspects were more user-interface related, such as not enough home buttons and the suggested method to indicate a false positive. This button was placed on the full-screen information window of a person. Everybody agreed that this was too late, because all the data of the wrongly tagged person would then already be displayed, so the incorrect-tag button should be placed on the camera view. Some useful ideas were also suggested, such as enabling the user to follow a recognized person on Twitter.

3.3. Phase three: expert opinions

For this phase, the supervisor and six advisers of the thesis took the paper prototype test. The prototype was adjusted according to the results of the second iteration: more home buttons were placed and the incorrect-tag button was moved to the camera view. The test subjects all have extensive experience in the field of human-computer interaction and can thus be seen as experts. They took the tests, filled in the same questionnaire and gave their opinion on several aspects of the program. A small summary:

• There were concerns about the image quality of the different iPhones. Tests should be done to determine from what distance a person can be recognized.

• The application should be modular enough. In a rapidly evolving 2.0 world, social networks may need to be added to or removed from the application. If all the networks are implemented as modules, this will be a simpler task.

• The incorrect-tag button could be implemented in the same way the iPhoto application asks the user whether an image is tagged correctly.

• The social network information should not just be static info. The user should be able to interact directly from the application. If this is not possible, it would be better to refer the user directly to the Facebook or Twitter app.

• More info could be displayed in the full-screen information window. Instead of showing links to all networks, the general information about the person could already be displayed there.


When asked which social networks they would like to see in the application, nearly everybody said Facebook, Twitter and Google+. In an academic context, they would also like to see Mendeley and Slideshare.

4. IMPLEMENTATION

Apart from some suggestions, the third paper prototype was received positively. The next step in the process is the implementation. The application is currently in development, and a small base is working. The main focus so far is on the crucial functionality, the face recognition. It is important to get this part up and running as fast as possible, because the entire application depends on it. So far the application is able to track faces using the iOS5 face detection. A temporary box frames the faces and follows them as the camera or person moves. This functionality was used to test the quality of the face detection API. As can be seen in figure 2, the face detection algorithm of iOS5 can detect multiple faces at once, and at such depth that the smallest recognized face is barely bigger than a button. These are great results: the algorithm appeared to be fast, reliable and detailed enough for our purpose. Detection of even smaller faces is not necessary, because the boxes would become harder to tap if they were smaller than a button.
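The button-size argument above suggests a simple post-processing rule on the detected boxes. A minimal sketch, assuming the 44-point minimum tap-target size recommended by Apple's interface guidelines (the threshold and sample data are our illustration, not from the thesis):

```python
# Minimum comfortable tap-target size from Apple's Human Interface
# Guidelines, in points; using it as the cut-off is our own assumption.
MIN_TAP_SIZE = 44.0

def tappable_faces(boxes):
    """Keep only detected face boxes big enough to tap.

    boxes: list of (x, y, width, height) tuples, as a face detector
    might report them in screen points.
    """
    return [b for b in boxes if b[2] >= MIN_TAP_SIZE and b[3] >= MIN_TAP_SIZE]

detected = [(100, 80, 120, 120), (300, 60, 30, 30), (40, 200, 44, 52)]
print(tappable_faces(detected))  # the 30x30 box is filtered out
```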

The next step was the face recognition. This is handled by Face.com. An iOS SDK is available on the website [12]. This SDK contains the functionality to send images to the Face.com servers and receive a JSON response with the recognized faces. It also covers the necessary Facebook login, as Face.com requires the user to log in with his Facebook account. This login is only needed once. One problem was that Face.com only accepts images, not video. To be able to test the face recognition as fast as possible, a recognize button was added to the camera view. Once it is tapped, a snapshot is taken with the camera. This snapshot is sent to the servers of Face.com and analyzed for different faces. The JSON response is parsed, and the percentage of the match and the list of best matches can be fetched from it. At the moment, only one face can be recognized at a time, because no algorithm is provided to check which part of the response should be matched to which face on the camera. This is temporarily solved by limiting the program to one face at a time. Figure 3 shows the current status of the application: a face is recognized and its Facebook ID is printed above it.
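Extracting the match percentage and the candidate list from such a JSON reply could look roughly like this. The response structure shown is a hypothetical stand-in for the real Face.com reply, which is not reproduced in the text:

```python
import json

# Invented response shape modelled on the fields the text mentions:
# per detected face (tag), a list of candidate uids with a confidence.
reply = json.loads("""
{
  "photos": [{
    "tags": [{
      "uids": [
        {"uid": "123456@facebook.com", "confidence": 87},
        {"uid": "654321@facebook.com", "confidence": 9}
      ]
    }]
  }]
}
""")

def best_match(reply):
    """Return the candidate with the highest confidence for the first face."""
    tag = reply["photos"][0]["tags"][0]
    return max(tag["uids"], key=lambda u: u["confidence"])

match = best_match(reply)
print(match["uid"], match["confidence"])
```

Taking only the first tag mirrors the temporary one-face-at-a-time limitation described above.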

Figure 2: Face tracking results.

Figure 3: Face recognized and matched with the correct Facebook ID.

5. NEXT STEPS AND FUTURE WORK

The next step in development is the further development of the user interface. Now that we have a basic implementation of the main functionality, it is important to finish the mock-up of the application. This way, a dummy implementation of several screens can be used to test the interface in several iterations using the digital prototype. While these tests happen, the underlying functionality can be extended and implemented in parallel.

Several big problems need to be solved, the biggest being the matching of the faces detected by the iOS5 face detection with the faces recognized by Face.com. Because Face.com recognizes faces in still images, a way needs to be found to match its results to the faces on screen. If the user moves the camera to other people after pressing the recognize button, the results from Face.com will not match the faces on screen. The solution in mind is to use an algorithm that matches the Face.com results with the face detection based on proportions. If we succeed in finding a correlation between, for instance, the eyes and the nose of a person in both services, it should be possible to find which detected face matches which Face.com result. Another way to match the faces to the reply is to keep track of the faces based on their coordinates on the screen. A suitable algorithm for this problem still needs to be found.
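The coordinate-based idea mentioned last could be prototyped as a greedy nearest-neighbour pairing between detected boxes and recognition results. Everything below (function name, sample coordinates) is illustrative; the thesis explicitly leaves the choice of algorithm open:

```python
from math import hypot

def match_faces(detected_centers, recognized_centers):
    """Greedily pair each recognized face with the closest unused
    detected face, by Euclidean distance between box centers.

    Returns (detected_index, recognized_index) pairs.
    """
    pairs, used = [], set()
    for ri, (rx, ry) in enumerate(recognized_centers):
        best = min((d for d in range(len(detected_centers)) if d not in used),
                   key=lambda d: hypot(detected_centers[d][0] - rx,
                                       detected_centers[d][1] - ry),
                   default=None)
        if best is not None:
            used.add(best)
            pairs.append((best, ri))
    return pairs

detected = [(100, 120), (260, 110)]    # centers from live face detection
recognized = [(255, 118), (98, 125)]   # centers reported for the snapshot
print(match_faces(detected, recognized))  # → [(1, 0), (0, 1)]
```

A greedy pairing like this breaks down when the camera moves far between snapshot and reply, which is exactly the failure mode the text describes; a proportion-based correlation would be more robust.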

Another, smaller problem is the use of the Face.com SDK. It has a limited Facebook Graph API built into it. However, this API cannot be used to fetch the name belonging to an ID or to get status updates. Therefore the real Facebook iOS SDK should be used. To prevent the app from working with two separate APIs, the Face.com SDK needs to be adapted so that it uses the real Facebook iOS SDK instead of the limited Graph API.

6. CONCLUSION

This master's thesis is still a work in progress. We already have good results from paper prototyping, and the core of the application has been implemented. In the following months, some problems will have to be solved and user testing is still required to make the application match its goal: a fast, new way to discover people using the newest technologies and networks.

References

[1] http://www.face.com

[2] https://developer.apple.com/library/mac/documentation/CoreImage/Reference/CIDetector_Ref/Reference/Reference.html

[3] R. Azuma, Y. Baillot, R. Behringer, S. Feiner, S. Julier, B. MacIntyre, Recent Advances in Augmented Reality, IEEE Computer Graphics and Applications, 21(6):34-47, 2001

[4] Arnold M. Lund, Measuring Usability with the USE Questionnaire, in Usability and User Experience, vol. 8, no. 2, October 2001, http://www.stcsig.org/usability/newsletter/0110_measuring_with_use.html

[5] http://www.viewdle.com/

[6] http://www.animetrics.com/Products/FACER.php

[7] http://www.youtube.com/watch?v=tb0pMeg1UN0

[8] http://www.betaface.com/

[9] http://www.pittpatt.com/

[10] Niels Buekers, Social Annotations in Augmented Reality, master's thesis at K.U.Leuven, 2010-2011

[11] Erik Duval, Paper Prototyping, http://www.slideshare.net/erik.duval/paper-prototyping-12082416, last checked on April 29, 2012

[12] Sergiomtz Losa, FaceWrapper for iPhone, https://github.com/sergiomtzlosa/faceWrapper-iphone


Discovering digital identities through face recognition on mobile devices

Gerry Hendrickx
Engineering Science, Department of Computer Science
Katholieke Universiteit Leuven, [email protected]

Abstract

This thesis covers the current state of, and the work already done towards, the development of an iPhone application. The application uses face recognition, provided by Face.com [1], to recognize faces captured with the camera of the iPhone. The recognized faces are named and give the user access to the person's information gathered from several social networks: his digital identity. This document describes the research, the related work, the evaluation of prototypes of the user interface and the current state of the application. It covers three iterations of paper prototyping, in which a user interface is created, and the first implementation of the application.

1. INTRODUCTION

The internet has caused a revolution in the way people interact. Several social networks were created to support different levels of communication. The presence of a person on these networks is called his digital identity. The goal of this thesis is to find an efficient way to discover the digital identity of a person by using face recognition and a smartphone. This document describes the process of developing a face recognition application for the iPhone. It uses face recognition to provide the user with extra information about the person seen through the camera. This extra information comes from several social networks, the most important being Facebook, Twitter and Google+. The aim is to give users access to online data and information that is freely accessible on the internet. This can be used in a private context, to improve conversations and find a common topic with your conversation partner, or in an academic context, to be able to easily find information such as slides or publications of a speaker at an event you attend. The application will be created for iOS because the SDK provides a built-in face detection mechanism since iOS5 [2]. Android, the other option, does not have this built-in feature.

A survey and a brainstorm produced a list of requested requirements for the face recognition application. The survey asked what information users would want to receive if they had the possibility to recognize a person using a smartphone. The question was answered by 34 people. 14 of the 34 voters would respect the privacy of the recognized person and would thus not want any information at all. 9 would like to receive contact information, 6 wanted links to the social network profiles of the recognized person and 3 of the voters wanted the latest status update from Facebook or Twitter. The 2 remaining votes went to the photos of the recognized persons and to the place where they last met. The survey showed a clear need for privacy, so the following policy was established: the application could be limited to recognizing people who belong to the user's Facebook friends. To widen the space of recognizable faces, the other users of the application would also be recognizable. The general tendency will be: if you use the application, you can be recognized. An eye for an eye.
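The survey numbers above can be tallied to check that they account for all 34 respondents; the category labels below are paraphrases for illustration, not the questionnaire's exact wording:

```python
# Tally of the 34 survey responses described in the text;
# category names are illustrative paraphrases.
responses = {
    "no information (respect privacy)": 14,
    "contact information": 9,
    "links to social network profiles": 6,
    "latest status update": 3,
    "photos / place last met": 2,
}

total = sum(responses.values())

for category, count in responses.items():
    print(f"{category}: {count}/{total} = {count / total:.0%}")
```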

The brainstorm resulted in a list of functionalities and features. First of all, the application should work fast. Holding the smartphone up, pointed at the face of another person, is rather cumbersome; the faster the recognition works, the smaller this problem becomes. The information about the person should appear in augmented reality (AR) [3]. This is a technology that adds elements to the user's view of the world. AR can add extra information to the screen, based on GPS data or image processing. AR could place the information of a person around his head in real time. The functional requirements from the survey and brainstorm, and the goals to achieve an efficient discovery of digital identities through the use of recognition, are the following:

• Detection and recognition of all faces in the field of view.

• Once recognized, the name and other options (such as the available social networks) appear on the screen next to the face.

• Contact details are fetched from the networks and can be stored in the contacts application of the iPhone.

• Quick access to all social networks is available, together with basic information such as the latest status update or tweet.

The available information will differ from person to person. To add an extra privacy setting, the user will have the ability to link his networks and to choose to enable or disable them. When the user is recognized, only his enabled networks will appear.
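This enable/disable rule amounts to a simple filter over the user's linked networks. A minimal sketch, with invented field names and profile data:

```python
def visible_networks(profile):
    """Return only the networks the user explicitly enabled,
    i.e. what other users see when this user is recognized."""
    return [name for name, enabled in profile["networks"].items() if enabled]

# Invented example profile: Twitter is linked but disabled,
# so it is hidden from anyone who recognizes this user.
profile = {
    "name": "Alice",
    "networks": {"facebook": True, "twitter": False, "google+": True},
}

print(visible_networks(profile))
```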

2. RELATED WORK

For this master's thesis we looked for related mobile applications. No existing application was found that does exactly the same as this project. Some similarities were found in the following applications:

• Viewdle [5]: Viewdle is a company focused on face recognition. They have several ongoing projects and have already made an application for Android, called Viewdle Social Camera, which recognizes faces in photos on Facebook. Viewdle has a face recognition iOS SDK available.

• Animetrics [6]: Animetrics makes applications to help governments and law enforcement agencies. It has several products, such as FaceR MobileID, which can be used to get both the names and the match percentages of any person. FaceR CredentialME can be used for authentication on your own smartphone: it recognizes your face and, if the recognition is correct, unlocks your data. Animetrics also focuses on face recognition for home security. They do not seem to have a public API, because their focus is not on the commercial market.

• TAT Augmented ID [7]: TAT Augmented ID is a concept application. It recognizes faces in real time and uses augmented reality to show icons around the face. The concept is the same, but the resulting user interface is different. Section 3 discusses why a fully augmented reality user interface is not preferred on mobile devices.

Another non-commercial related work is a 2010 master's thesis at the Katholieke Universiteit Leuven [10]. The author built a head-mounted-display (HMD) application to recognize faces and retrieve information. From his work we learned that HMDs are not the ideal practical setup (his application required a backpack with a laptop and a headset) and that the technology used (OpenGL) is cumbersome to work with. Using iOS simplifies these aspects, since the hardware requirements are limited to the iPhone. The author used Face.com for face recognition and was very satisfied with it.

A comparison of face recognition APIs was made to find the one best suited to our goal. A small summary of the positive and negative points:

• Viewdle: As mentioned before, we tried to contact Viewdle to obtain more information about the API, but without answer. Viewdle is not an option.

• Face.com: Face.com offers a well-documented REST API. It has Facebook and Twitter integration and a private namespace, which allows the application to make use of the previously mentioned privacy policy. There is a usage limit on the free version of Face.com.

• Betaface [8]: The only service that can work with both photos and videos. However, development is only possible on Windows, and it has not been used with iOS yet. Moreover, it is not free.

• PittPatt [9]: PittPatt was a promising service, but unfortunately it was acquired by Google. The service cannot be used at this moment.

It appears that Face.com is the only, and also the most useful, face recognition API for our application. It has an iOS SDK and social media integration, both of which are very useful in the context of this application.

3. PAPER PROTOTYPING

Making a paper prototype is the process of developing the user interface based on quick drawings of all the different parts of the user interface [11]. By doing this with paper parts, it becomes easy to quickly evaluate and adapt the interface. The paper prototyping phase consists of three iterations: deciding on the interface, evaluating the elaborated interface and the evaluation with experts.

3.1. Phase one: interface decisions

The first phase of the paper prototyping was the decision which of the three possible interfaces would be used:

• Interface 1: a box with information next to the head of the recognized person. This makes the best use of augmented reality, but you have to keep the iPhone pointed at the person's head to be able to read the information. See figure 1a.

• Interface 2: a sidebar with information, which takes up about a quarter of the screen. This way, users can hold their iPhone lower once a person is selected, but they can still use the camera if they want. See figure 1b.

• Interface 3: a full information screen. This makes minimal use of augmented reality, but offers a practical way to view the information. Users see the name of the recognized person in the camera view and, once they have tapped the face, they are referred to the screen with the full information. See figure 1c.

(a) Interface 1

(b) Interface 2

(c) Interface 3

Figure 1: The different interfaces


These interfaces were evaluated by 11 test subjects, aged 18 to 23, with varying smartphone experience. The tests were done using the think-aloud technique, which means the subjects have to say what they think will happen when they press a button. The interviewer changes the paper screens to simulate the application. The same simple scenario was given for all interfaces, in which the test subject had to recognize a person and find information about that person. After the test, a small custom-made questionnaire was given to determine the interface preference. None of the users chose interface 1 as their favourite. The fact that you have to keep the iPhone pointed at the person to read and browse the information turned out to be a serious disadvantage. The choice between interfaces 2 and 3 was not unanimous: 27% chose interface 2 and 73% chose interface 3. Interface 3 was therefore chosen and elaborated for the second part of the paper prototyping. The smartphone users also emphasized that if you use only a quarter of the screen for information, it would be too small to display readable information.

3.2. Phase two: interface evaluation

For the second phase, 10 test subjects between 20 and 23 years old were used, again with varying smartphone experience. A more extensive interface was made, which explored the full scope of the functionality. The testers had to recognize people, adjust settings, link social networks to their profile, indicate false positives and manage the recognition history. The think-aloud technique was applied again. At the end of the test, the test subject had to fill in a USE questionnaire [4]. This is a questionnaire consisting of 30 questions, divided into four categories to poll the different aspects of the evaluated application. The questions are answered on a 7-point Likert scale, 1 representing totally disagree and 7 totally agree. The categories are usefulness, ease of use, ease of learning and satisfaction.

3.2.1. Usefulness. The overall results were good. The question whether the application is useful received a median score of 5 out of 7, with no score below 4. People seem to understand why the application is useful, and it does what is expected of it. The scores on the question whether it meets the needs of the users were divided, with scores ranging from 1 to 7, because some users (especially those with privacy concerns and those without a smartphone) saw no use in the application. The target audience, the smartphone users, did see its usefulness, resulting in higher scores in this group.

3.2.2. Ease of use. From these questions, the need to rearrange some buttons became clear. Users complained about the number of screen transitions they had to go through to get from one screen to somewhere else in the application, and this was visible in the results. The question whether using the application goes smoothly received scores from 2 to 7 with a median of 5. The paths through the application should be revised to see whether the missing links can be found and corrected to improve the navigation. For example, the home button to go to the main screen will be added on several other screens, instead of always having to go back through all the previous screens. Using the application went flawlessly for the iPhone users, because the user interface was made with the standard iOS interface parts.

3.2.3. Ease of learning. No question in this section scored lower than 5 on the 7-point Likert scale. This is also due to the standard iOS interface, which was developed by Apple with ease of use and ease of learning in mind.

3.2.4. Satisfaction. Most users were satisfied with the functionality offered by the application and with how it was presented in the user interface. Especially the iPhone users were very enthusiastic; they called it an innovative application. The question whether the user was satisfied with the application received scores from 5 to 6, with a median of 6 out of 7. Non-smartphone users were more sceptical and did not see the use of such an application. That aside, the application was fun and the target audience was satisfied.

3.2.5. Positive and negative aspects. The users were asked to write down the positive and negative aspects of the application. The positive aspects were the iOS style of working, the functionality and the idea of the application. The negative aspects were more interface-related, such as not enough home buttons and the proposed method for indicating an incorrect recognition. This button was placed on the information screen of a recognized person. Everyone agreed that this was too late, because all the data of the wrong person would then be shown. The incorrect-tag button should therefore be placed on the camera screen. Some useful ideas were suggested, such as making it possible for the user to follow a recognized person on Twitter.

3.3. Phase three: expert opinions

For this phase, the promoter and six advisers were used as test subjects. The prototype had been adjusted according to the results of the second iteration: more home buttons were placed and the incorrect-tag button was moved to the camera screen. The test users all have extensive experience in the field of human-computer interaction and can thus be regarded as experts. They did the tests, filled in the same questionnaires and gave their opinion on several aspects of the program. A small summary:

• There was concern about the image quality of the different iPhones. Tests should be done to see from what distance a person can be recognized.

• The application should be sufficiently modular. In a rapidly evolving 2.0 world, it should be possible to add social networks to, or remove them from, the application. If all networks are implemented as modules, this becomes easier.

• The incorrect-tag button could be implemented in the same way as the button in iPhoto that asks the user whether an image is tagged correctly.

• The social network information should not only give static info. The user should have the possibility to react directly from within the application. If this is not possible, it would be better to refer the user directly to Facebook or Twitter.

• More information could be shown on the full screen. Instead of showing more links to all the networks, the general information about a person could already be given there.

When asked which social networks they would like to see in the application, almost everyone named Facebook, Twitter and Google+. In an academic context they would also like to see Mendeley and Slideshare.
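The experts' modularity suggestion can be sketched as a small plugin registry: each network is one module, and networks can be added or removed without touching the rest of the application. This is only an illustrative Python sketch under assumed names (the thesis itself targets Objective-C on iOS); the module classes and URL patterns are hypothetical.

```python
from abc import ABC, abstractmethod

class SocialNetworkModule(ABC):
    """One pluggable social network, as suggested by the expert feedback."""
    name: str

    @abstractmethod
    def profile_url(self, username: str) -> str:
        """Return the public profile URL for a recognized person."""

class FacebookModule(SocialNetworkModule):
    name = "Facebook"
    def profile_url(self, username: str) -> str:
        return f"https://www.facebook.com/{username}"

class TwitterModule(SocialNetworkModule):
    name = "Twitter"
    def profile_url(self, username: str) -> str:
        return f"https://twitter.com/{username}"

# Registry of installed modules; adding or removing a network is one line here.
REGISTRY = {m.name: m for m in (FacebookModule(), TwitterModule())}

def links_for(person: dict) -> list:
    """person maps network name -> username; only installed modules yield links."""
    return [REGISTRY[n].profile_url(u) for n, u in person.items() if n in REGISTRY]
```

A person known on an uninstalled network (say, Hyves) is then simply skipped rather than breaking the interface.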

Figure 2: Face detection results.

4. IMPLEMENTATION

Apart from some suggestions, the third paper prototype was received positively. The next step in the process is the implementation. The application is currently under development and a small base is working. The main focus now lies on the core functionality: the face recognition. It is important to get this part working as soon as possible, because the entire application relies on it. So far, the application succeeds in detecting faces using the iOS 5 face detection. A square frames each face and follows it when the camera or the person moves. This functionality could be used to test the quality of the face recognition API. As can be seen in figure 2, the iOS 5 face detection algorithm can detect several faces at once, and at such depth that the smallest detectable face is barely larger than a button. These are good results: the algorithm proved fast, reliable and detailed enough for our purpose. Detection of even smaller faces is not needed, because the frames become difficult to tap once they are smaller than a button.
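The observation that frames smaller than a button are not worth detecting can be expressed as a simple size filter on the detected rectangles. A minimal Python sketch, assuming rectangles in screen points and taking the 44-point minimum tap target from Apple's interface guidelines as the button size:

```python
# Apple's Human Interface Guidelines recommend a minimum tap target of
# roughly 44x44 points; anything smaller is hard to hit with a finger.
MIN_TAP_SIZE = 44.0

def tappable_faces(face_rects):
    """Keep only detected face rectangles that are large enough to tap.

    face_rects: iterable of (x, y, width, height) tuples in screen points,
    in the style a CIDetector-based face detector might report them
    (the tuple format is an assumption of this sketch).
    """
    return [r for r in face_rects
            if r[2] >= MIN_TAP_SIZE and r[3] >= MIN_TAP_SIZE]
```

In the real application this filter would run on each camera frame, so that only tappable faces get an overlay square.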

The next step was the face recognition. This is handled by Face.com. An iOS SDK is available on the website [12]. This SDK contains the functionality to send images to the Face.com servers and to receive a JSON response with the recognized faces. It also contains the required Facebook login, since Face.com asks the user to log in with his Facebook account. This login is needed only once. One problem was that Face.com only accepts images, not video. To make it possible to test the face recognition as soon as possible, a recognize button was added to the camera view. When it is tapped, a photo is taken. This photo is sent to the Face.com servers and analyzed for faces. The JSON response is parsed and for every face a certainty percentage and a list of possible names are returned. Figure 3 shows the current state of the application: a face is recognized and its Facebook ID is printed at the top.

Figure 3: A face is recognized and the correct Facebook ID is shown.
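Parsing such a response amounts to extracting, per face, the certainty percentage and the candidate list, and keeping only sufficiently certain matches. The sketch below uses a simplified, hypothetical response shape, not the exact Face.com schema; field names and the threshold are assumptions.

```python
import json

# Hypothetical, simplified recognition response: one entry per detected face,
# with a certainty percentage and a ranked list of candidate identities.
SAMPLE = """
{"faces": [
  {"confidence": 87, "candidates": ["gerry.hendrickx", "john.doe"]},
  {"confidence": 23, "candidates": ["jane.roe"]}
]}
"""

def best_matches(payload: str, threshold: int = 50) -> list:
    """Return the top candidate for each face whose certainty meets the threshold."""
    faces = json.loads(payload)["faces"]
    return [f["candidates"][0] for f in faces
            if f["confidence"] >= threshold and f["candidates"]]
```

Low-certainty faces are dropped rather than labeled, which matches the application's goal of only showing an identity it is reasonably sure about.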

5. NEXT STEPS AND FUTURE WORK

The next step in the development is the further implementation of the user interface. Now that we have an implementation of the core functionality, it is important to complete the mock-up of the application. This way, a dummy implementation of the various screens can be used to test the interface over several iterations, using the digital prototype. Several major problems still have to be solved. The biggest one is matching the faces detected by the iOS 5 face detection to the faces recognized by Face.com. Because Face.com recognizes faces in still photos, a way must be found to compare those results with the faces currently on the screen. If the user points the camera at other people after pressing the recognize button, the Face.com result will no longer correspond to the face on the screen. The intended solution is an algorithm that matches the Face.com results to the face detection based on proportions. If we succeed in finding a correlation between the eyes and the nose of a person in both services, it would be possible to work out which face on the camera corresponds to the Face.com result. Another way to match the faces to the Face.com response is to keep track of the faces based on their coordinates on the screen. A suitable algorithm for this problem still has to be found.
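The coordinate-based variant of this matching can be sketched as nearest-center assignment: each face position reported for the still photo is paired with the closest face rectangle currently on screen. This is only an illustrative Python sketch, not the algorithm the thesis ultimately adopts; it assumes both services report rectangles in the same coordinate space (in practice the photo coordinates would first be scaled to the screen).

```python
import math

def center(rect):
    """Center point of an (x, y, width, height) rectangle."""
    x, y, w, h = rect
    return (x + w / 2, y + h / 2)

def match_faces(detected: dict, recognized: dict) -> dict:
    """Pair each recognition result with the nearest on-screen detection.

    detected:   {face_id: (x, y, w, h)} from the live face detection
    recognized: {name: (x, y, w, h)} positions reported for the still photo
    Returns {face_id: name}.
    """
    matches = {}
    for name, rrect in recognized.items():
        rc = center(rrect)
        nearest = min(detected,
                      key=lambda fid: math.dist(rc, center(detected[fid])))
        matches[nearest] = name
    return matches
```

A more robust version would also reject pairs whose distance exceeds some threshold, so that a face that has left the frame is not forced onto the wrong rectangle.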

6. CONCLUSION

This master's thesis is a work in progress. We have good results from the paper prototypes, and the core of the application has already been implemented. Some problems still have to be solved, and user testing is still needed for the application to reach its goal: an efficient, new way to discover people, using the latest technologies and networks.

References

[1] Face.com, http://www.face.com
[2] CIDetector Class Reference, https://developer.apple.com/library/mac/documentation/CoreImage/Reference/CIDetector_Ref/Reference/Reference.html
[3] R. Azuma, Y. Baillot, R. Behringer, S. Feiner, S. Julier, B. MacIntyre, Recent Advances in Augmented Reality, IEEE Computer Graphics And Applications, 21(6):34-47, 2001
[4] Arnold M. Lund, Measuring Usability with the USE Questionnaire, in Usability and User Experience, vol. 8, no. 2, October 2001, http://www.stcsig.org/usability/newsletter/0110_measuring_with_use.html
[5] Viewdle, http://www.viewdle.com/
[6] Animetrics FACER, http://www.animetrics.com/Products/FACER.php
[7] http://www.youtube.com/watch?v=tb0pMeg1UN0
[8] Betaface, http://www.betaface.com/
[9] PittPatt, http://www.pittpatt.com/
[10] Niels Buekers, Social Annotations in Augmented Reality, master's thesis, KU Leuven, 2010-2011
[11] Erik Duval, Paper prototyping, http://www.slideshare.net/erik.duval/paper-prototyping-12082416, last checked on April 29, 2012
[12] Sergiomtz Losa, FaceWrapper for iPhone, https://github.com/sergiomtzlosa/faceWrapper-iphone


KU Leuven Faculty of Engineering 2011 – 2012

Master's thesis summary sheet

Student: Gerry Hendrickx

Titel: Discovering digital identities through face recognition on mobile devices

UDC: 681.3

Abstract: The internet has revolutionized the way people communicate with each other. Various social networks have emerged to support this communication on different levels. They play an ever larger role in the lives of their members, and each has its own focus. Because of this focus, people sign up for multiple networks: one for every need. Their information is spread across these networks and, taken as a whole, forms the digital identity of the user, his online alter ego. Because of this spread of information, looking up a user's digital identity becomes a cumbersome task. The goal of this thesis is to simplify that task by using face recognition on mobile devices. The thesis presents the research and the study of related work performed to gather ideas and find starting points. This research, combined with a brainstorm and a survey, was analyzed and led to the following concept: a mobile application that can recognize people and display their digital identity in real time. The design and development of the application were approached iteratively. We discuss the first paper prototypes of 3 different user interfaces, of which one was selected and elaborated. This user interface prototype was evaluated and adapted twice before the actual implementation was started. Development took place in 2 iterations: the first focused on the core functionality of the application, the face recognition. This version was evaluated and led to a second digital prototype, which is a fully working face recognition application with all functionality built in. A database is used to store the users' social network usernames, which allows the application to link a recognized person to his social networks.

Thesis presented in fulfillment of the requirements for the degree of Master of Engineering: Computer Science
Promotor: Prof. dr. ir. Erik Duval
Assessors: Dr. ir. Kurt De Grave

Mr. Frans Van Assche
Supervisor: Ir. Gonzalo Parra