
TECHNICAL REPORT

Report No. CS2012-03
Date: November 2012

Computer Science Annual Workshop
CSAW'12

Computer Science Department
University of Malta

Department of Computer Science
University of Malta
Msida MSD2080
MALTA

Tel: +356-2340 2519
Fax: +356-2132 0539
http://www.cs.um.edu.mt


Programme and Table of Contents

Day 1 - Thursday 8th November

Time | Content | Speaker | Page

Coffee: 8:30 - 9:00
Session 1: 9:00 - 10:30
09:00 | Dealing with the Hypothetical in Contracts | Gordon Pace | 3
09:45 | Towards Proof-Theoretic Interpretations for LTL Monitoring | Clare Cini | 7
10:15 | Lightning Talk: Accountable Monitoring of Proof-Carrying Logs | Gordon Pace | n/a

Coffee: 10:30 - 11:00
Session 2: 11:00 - 12:30
11:00 | Novel attack resilience by fusing events related to attack objectives | Mark Vella | 8
11:45 | Separation-Based Reasoning for Deterministic Channel-Passing Concurrent Programs | Aimee Borda | 11

Lunch: 12:30 - 14:00
Session 3: 14:00 - 15:30
14:00 | Making mutation testing a more feasible proposition for the industry | Mark Micallef | 12
14:30 | Towards Addressing the Equivalent Mutant Problem in Mutation Testing | Mark Anthony Cachia | 15
15:00 | Lightning Talk: Verification Techniques for Distributed Algorithms | Aimee Borda | n/a
15:15 | Lightning Talk: Integrating Runtime Verification into the SDLC | Christian Colombo and Mark Micallef | n/a

Coffee: 15:30 - 16:00
Session 4: 16:00 - 17:00
16:00 | mu-Larva Script: Rethinking the Larva scripting language | Adrian Francalanza | 17
16:30 | Integrating Testing and Runtime Verification | Christian Colombo | 19

Day 2 - Friday 9th November

Time | Content | Speaker | Page

Coffee: 8:30 - 9:00
Session 1: 9:00 - 10:30
09:00 | Language extension proposals for Cloud-Based computing | Adrian Francalanza | 21
09:30 | Designing Correct Runtime-Monitors for Erlang | Aldrin Seychel | 23
10:15 | Lightning Talk: Preserving determinism in actor based concurrency | Aldrin Seychel | n/a

Coffee: 10:30 - 11:00
Session 2: 11:00 - 12:30
11:00 | Formal Fault-Tolerance Proofs for Distributed Algorithms | Mandy Zammit | 24
11:45 | Real-time Selective Rendering | Steven Galea | 26
12:15 | Lightning Talk Slot (free) | | n/a

Lunch: 12:30 - 14:00
Session 3: 14:00 - 15:30
14:00 | Distributed High-Fidelity Graphics using P2P | Daniel D'Agostino | 28
14:30 | High-fidelity rendering as a service using heterogeneous supercomputing systems | Keith Bugeja | 31
15:00 | Point Cloud Searching and Understanding | Sandro Spina | 33

Coffee: 15:30 - 16:00
Session 4: 16:00 - 17:00
16:00 | Epistemic and Deontic: The Two Tics | Gordon Pace | n/a
16:30 | Wrap-Up | Gordon Pace | n/a


Dealing with the Hypothetical in Contracts

Gordon J. Pace

Dept. of Computer Science, University of Malta, [email protected]

The notion of a contract as an agreement regulating the behaviour of two (or more) parties has long been studied, with most work focusing on the interaction between the contract and the parties. This view limits the analysis of contracts as first-class entities — which can be studied independently of the parties they regulate. Deontic logic [1] has long sought to take a contract-centric view, but has been marred with problems arising from paradoxes and practical oddities [2]. Within the field of computer science, the holy grail of contracts is that of a deontic logic sufficiently expressive to enable reasoning about real-life contracts but sufficiently restricted to avoid paradoxes and to be computationally tractable.

Contract automata [3–5] have been proposed as a way of expressing the expected behaviour of interacting systems, encompassing the deontic notions of obligation, prohibition and permission. For instance, the contract automaton shown in Fig. 1 expresses the contract which states that 'the client is permitted to initialise a service, after which, he or she is obliged to submit valid user credentials and the provider is prohibited from increasing the price of the service.' Note that the states are tagged with the deontic information, explicitly stating what actions are obliged (O), permitted (P) and forbidden (F) by which party (given in the subscript). The transitions are tagged with actions which, when taken by the parties, induce a change of state, with ∗ being used as shorthand to denote anything-else.

[Figure omitted: an automaton whose initial state is tagged Pc(initialise) and whose post-initialise state is tagged Oc(login) and Fp(increase); logout returns to the initial state, and ∗ self-loops on both states.]

Fig. 1. Service provision contract
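To make the reading of Fig. 1 concrete, the automaton can be viewed as an ordinary state machine whose states carry deontic tags. The following is a minimal sketch of such an encoding (ours, in Java 17 syntax, with invented names; this is not the formalism of [3–5]):

    import java.util.List;

    // Hypothetical encoding of the Fig. 1 contract automaton; illustrative only.
    public class ServiceContract {
        enum Party { CLIENT, PROVIDER }
        enum Modality { OBLIGED, PERMITTED, FORBIDDEN }   // O, P, F
        enum State { INITIAL, ACTIVE }

        record Clause(Modality m, Party p, String action) {}

        State current = State.INITIAL;

        // Deontic tags attached to each state, as in the figure.
        static List<Clause> tags(State s) {
            return switch (s) {
                case INITIAL -> List.of(
                    new Clause(Modality.PERMITTED, Party.CLIENT, "initialise"));
                case ACTIVE -> List.of(
                    new Clause(Modality.OBLIGED, Party.CLIENT, "login"),
                    new Clause(Modality.FORBIDDEN, Party.PROVIDER, "increase"));
            };
        }

        // Action-labelled transitions; any unlisted action is the '*' self-loop.
        void step(String action) {
            if (current == State.INITIAL && action.equals("initialise")) current = State.ACTIVE;
            else if (current == State.ACTIVE && action.equals("logout")) current = State.INITIAL;
        }
    }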

The use of automata to describe contracts allows for their composition in a straightforward manner using standard techniques from computer science. For instance, consider a bank client who has a savings account and a loan with the bank. The complete actual contract between the bank and its client is the composition of the two contracts regulating the different accounts. This approach thus not only enables the description, analysis and validation of contracts independently of the agents, but also allows for a compositional approach to contract construction.

A limitation inherent to the notion of contract automata is that they describe norms and ideal behaviour only through the knowledge of the agents' actions. No part of a contract may depend on whether or not another part of the contract was violated. For example, clauses such as 'if the bank does not give permission to the client to open a new bank account, it is obliged to pay a fine' cannot be expressed. Similarly, it is impossible to specify clauses which depend on whether another clause is also in force, such as 'if p is obliged to pay, he is forbidden from leaving the shop.' Note the difference between the two cases — whereas the first depends on whether or not a clause is satisfied, the second depends on whether a clause is enforced. We are currently looking into ways of extending the semantics of contract automata to allow for these two kinds of scenarios.

Reparation: Contracts identify ideal behaviour which is not necessarily adhered to — although the contract between the bank and the client states that the client is to ensure that the repayments are done on time, the client does not always do this, in which case another clause would ensure that he or she is now also obliged to pay a fine. These reparatory contract clauses are crucial to allow for the description of behaviour under non-ideal behaviour. This can be handled, for instance, by qualifying transitions also with deontic clauses (or their negation), ensuring that they can only be taken if the clause is satisfied (or violated). The initial state of the automaton shown in Fig. 2(a) shows how this construction is used to ensure that party 1 is permitted to download file A (downloadA), as long as party 2 has permission to download file B (downloadB)¹.

Although in the example, the tagged clause appears in the source state of the transition, this is not necessarily the case and the approach can be used to reason about clauses as though they were in force in that state — hypothetical clauses. For instance, leaving out the P2(downloadB) in the initial state of the automaton in Fig. 2(a) would result in a similar contract but in which no violation is tagged when the transition to the next state is taken.

Note that, for a complete semantics one would have to have two types of violations: (i) those violations which took place but have an associated reparation (violating P2(downloadB) in the initial state of Fig. 2(a)); and (ii) violations for which there is no associated reparation (violating P1(downloadA) in the same state). This allows us to make a distinction between satisfying the contract outright and satisfying it only through the taking of reparations.

Conditional clauses: Another extension to contract automata is that of conditional clauses, based on whether a clause is in force at a particular moment in time, e.g. 'if Peter is obliged to pay, then Peter is forbidden from leaving the shop', or 'if Peter is not permitted to leave the shop then Peter is prohibited from smoking.' Fig. 2(b) shows how the first of these examples can be encoded in a contract automaton. Such conditional clauses are particularly useful when composing contract automata, since the presence of particular clauses may not be known before the actual composition.

¹ For completeness we would also want to add which party is satisfying or violating the clause on the transition.

[Figure omitted: the two contract automata described in the caption, with state tags including P1(downloadA), P2(downloadB), F1(downloadA), Op(pay) . Fp(leave) and Pp(enter), and transitions labelled leave, enter and ∗.]

Fig. 2. (a) party 1 is permitted to download file A, as long as party 2 has permission to download file B; (b) if p is obliged to pay, then p is forbidden from leaving the shop.

The interaction of these two forms of deontic tagging can be used to express complex scenarios. For instance, the last example can be extended in such a manner that if Peter violates the prohibition from leaving (when in force), he would be forbidden from entering the shop again. Note that in such a case the transitions may also need to be tagged with conditional deontic clauses.

Furthermore, in these examples we have simply tested for the satisfaction/violation or presence/absence of a single contract clause. Extending this approach to deal with sets of clauses (in a conjunctive or disjunctive manner) or even to whole boolean expressions over these new operators can possibly lead to complex constructiveness issues similar to the ones which arise in Esterel [6]. Even without complex expressions, such cases may arise. For instance, how should the clause !Op(a) . Op(a) be interpreted? Does it mean that (i) the obligation to perform a should always be in force even when it was not going to be the case, or that (ii) it is a void statement which should be disallowed, since if the obligation is not in force, then we add it, but since it is now in force the clause enforcing it no longer triggers and hence cannot be considered to be in force? These and other similar issues make the definition of the formal semantics of such contracts challenging and the development of algorithms for the discovery of conflicts in such contracts non-trivial.


References

1. P. McNamara. Deontic logic. Volume 7 of Handbook of the History of Logic, pages 197–289. North-Holland Publishing, 2006.

2. J.-J. C. Meyer, F. Dignum, and R. Wieringa. The paradoxes of deontic logic revisited: A computer science perspective (or: Should computer scientists be bothered by the concerns of philosophers?). Technical Report UU-CS-1994-38, Department of Information and Computing Sciences, Utrecht University, 1994.

3. G. J. Pace and F. Schapachnik. Permissions in contracts, a logical insight. In JURIX, pages 140–144, 2011.

4. G. J. Pace and F. Schapachnik. Contracts for interacting two-party systems. In FLACOS 2012: Sixth Workshop on Formal Languages and Analysis of Contract-Oriented Software, September 2012.

5. G. J. Pace and F. Schapachnik. Types of rights in two-party systems: A formal analysis. In JURIX, 2012.

6. T. Shiple, G. Berry, and H. Touati. Constructive analysis of cyclic circuits. In European Design and Test Conference, 1996.


Towards Proof-Theoretic Interpretations for LTL Monitoring

Clare Cini and Adrian Francalanza

Department of Computer Science, University of Malta
{ccin0005 | adrian.francalanza}@um.edu.mt

In this talk we will present a study on how runtime verification (RV) algorithms can be given one common proof-theoretic interpretation. In RV, there is a lot of work that deals with verification algorithms, yet there is no standard notation for such algorithms, making it hard to understand and compare such work.

We aim to have a common underlying framework where such algorithms can be interpreted in a proof-theoretic way. Therefore we are going to investigate how verification algorithms, specifically the algorithms by Geilen [2], Sen et al. [3] and Bauer et al. [1], can be mapped to this kind of interpretation. In particular, we investigate whether these algorithms can be mapped to the coinductive interpretations for LTL. The coinductive interpretation appears to lend itself more naturally to the formalisation of runtime verification algorithms given the model over which LTL is defined, i.e. infinite strings. Coinduction is often used for analyses that have an extensional flavour, which seems in line with the notion of runtime monitoring (where you arrive at a conclusion by observing only part of the entity being analysed).

Preliminary results show that Geilen's algorithm can be mapped to such proof systems, hence allowing us to integrate his runtime verification algorithm as a proof search over LTL rules. Furthermore, this algorithm is defined in a coinductive manner, thus confirming our initial idea that coinduction is the ideal interpretation for verification. Yet, work is still ongoing and the results are not yet conclusive. If we are able to map every algorithm to our proof system, the latter can be seen as a framework for verification algorithms.

References

1. Andreas Bauer, Martin Leucker, and Christian Schallhart. Runtime verification for LTL and TLTL. ACM Transactions on Software Engineering and Methodology, 20(4):14, 2011.

2. M.C.W. Geilen. On the construction of monitors for temporal logic properties, 2001.

3. Koushik Sen, Grigore Rosu, and Gul Agha. Generating optimal linear temporal logic monitors by coinduction. In Proceedings of 8th Asian Computing Science Conference (ASIAN03), volume 2896 of Lecture Notes in Computer Science, pages 260–275. Springer-Verlag, 2004.


Novel attack resilience by fusing events related to attack objectives

Mark Vella

[email protected]

Research in intrusion detection systems (IDS) is mainly restricted to the misuse and anomaly detection dichotomy, and therefore to their limitations. Web attack detectors are a case in point, where ones that perform misuse detection are prone to miss novel attacks, whilst those performing anomaly detection produce impractical amounts of daily false alerts. Detectors inspired from the workings of the human immune system (HIS) have proposed new effective detection approaches, however without tackling the issue of novel attack resilience separately from anomaly detection.

Danger Theory-inspired detection. This paper attempts to leverage inspiration from the Danger Theory (DT), which is a recent model of the HIS [1, 2]. Specifically, the focus is on the process that enables the HIS to respond to previously unseen infections without attacking host cells or symbiotic organisms. At a high level of abstraction, the process consists of the sensing of generic signs of an ongoing infection, which triggers a process that identifies the make-up of the germ responsible, resulting in a response that is highly specific. Seen from a computational point of view, immune responses as suggested by DT follow a generic-to-specific information fusion process. Generic signals of infection that germs cannot avoid producing enable the immune system to respond to novel infections. Their fusion identifies the make-up of the responsible germ, thereby preventing the production of anti-bodies that do harm. The aim of this work is to explore how this process can be used to develop effective detectors, meaning that they are resilient to novel attacks and suppress false positives. This is achieved by translating the process from the HIS to the web attack domain, which is then subsequently realized as concrete detectors.

Approach. There are two types of generic signals of ongoing infection: Pathogenic Associated Molecular Patterns (PAMP) and danger signals, which originate externally and internally to the human body respectively. PAMPs constitute molecular patterns associated with evolutionary distant organisms. PAMPs are not specific to germs, but are rather associated with entire germ classes, and so any germ is expected to feature such patterns. Danger signals on the other hand are molecules that originally form part of the internals of the body's own cells. Successful infections cause cells to explode, causing cell internals to be released and picked up by the HIS as danger signals. Like PAMPs, danger signals do not identify the germs causing them, and are expected to be provoked by any successful infection.


[Figure omitted: original graphic garbled in extraction.]

Fig. 1. An attack objective-centric translation of the generic-to-specific information fusion process

Figure 1 summarizes the proposed translation, with suspicious HTTP requests modeled on PAMPs and attack symptoms modeled on danger signals, both defined as attack objective-related events that attacks cannot avoid producing. The fusion process however does not follow the workings of the HIS, since this was found to be counterproductive in earlier work [3]. Rather, it leverages feature-based correlation of events in order to identify attack HTTP requests. Since suspicious HTTP requests and attack symptoms are not exclusive to attack behavior, this component is required to distinguish those related to attacks. This can be achieved through feature similarity links that capture both the causal relation between suspects and symptoms as well as their association with an ongoing attack, in a similar manner to existing security event correlation systems.

Results and conclusions. The approach has been evaluated through three detectors that cover both high impact and popular web attacks within their scope. Effectiveness results are encouraging, with novel attack resilience demonstrated for attacks that aim for a specific objective but modify the exploited vulnerability or payload, or use obfuscation. False positive suppression is achieved for requests that do not contain the same attack content as coincident and successful attack requests. However, their implementation is rendered difficult by requiring extensive knowledge of the deployment platform and secure coding. A performance study showed that the efficiency challenges tied with the stateful detection of events can be mitigated. These results merit a follow-up through further detector development and a formalization of the method.


References

1. Aickelin, U., Cayzer, S.: The danger theory and its application to artificial immune systems. CoRR abs/0801.3549 (2008)

2. Matzinger, P.: Friendly and dangerous signals: Is the tissue in control? Nat Immunol 8(1), 11–13 (2007)

3. Vella, M., Roper, M., Terzis, S.: Danger theory and intrusion detection: Possibilities and limitations of the analogy. In: Artificial Immune Systems, pp. 276–289. Lecture Notes in Computer Science, Springer Berlin / Heidelberg (2010)


Separation-Based Reasoning for Deterministic Channel-Passing Concurrent Programs

Aimee Borda and Adrian Francalanza

University of Malta

Tractable proof systems for concurrent processes are hard to achieve because of the non-deterministic interleaving of statements. This is particularly true when the scheduling of statements has an effect on the final state of the program, referred to as race conditions. As a result, when we try to reason about concurrent programs, we have to consider all possible thread interleavings to accurately approximate the final result. Since scalability is important, we limit our reasoning to a subset of the concurrent programs that are race-free. By doing so we gain serialized reasoning, where examining one possible path will automatically encapsulate the other interleavings.

Fundamentally, races are caused when two concurrent processes try to access shared resources simultaneously (for example two concurrent processes trying to alter a location in memory at the same time), and hence the final result depends on the last statement that updates the location.

A popular approach for delimiting the class of race-free processes is through resource separation [3]. Through the resources assigned to each process we can approximate the interference that a process can create for its concurrent processes. Therefore, if the resources available are split disjointly among the concurrent processes, we guarantee that no interference between the processes is created. More concretely, if all concurrent processes work within different parts of the memory, each process will not create race conditions for other processes. Moreover, this guarantee allows us to reason about each process independently, as a sequential process.

Nonetheless, the formalism must also handle dynamic resource separation and resource sharing, where a subset of the resources' access rights are shared amongst the processes. A popular example is the reader-writer problem. Here, race conditions are created only if one of the concurrent processes modifies the content of a shared location, since trying to read that location will depend on whether the statement is scheduled before or after the write has been committed. Hence, resource sharing can be allowed if the actions performed by each process on that resource do not create races on values, which in this example implies that we have concurrent reads but exclusive writes.

Separation logic is a possible formalism which handles this quite elegantly [3, 2]. In [1], a proof system inspired from separation logic to reason about concurrent processes in the message-passing model is described. However, it can only reason in terms of the complete disjointness of resources. In this work, we are trying to push the boundaries of non-racy processes by examining some form of resource sharing and how these structures preserve determinism.
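As a concrete illustration of the kind of race described above, consider the following sketch (a generic shared-memory Java example of ours, not the message-passing calculus of [1]):

    // Two threads updating a shared location without synchronisation: the final
    // value depends on how the read-modify-write operations interleave.
    public class RaceExample {
        static int shared = 0;

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> { for (int i = 0; i < 1000; i++) shared++; };
            Thread a = new Thread(work);
            Thread b = new Thread(work);
            a.start(); b.start();
            a.join(); b.join();
            // Often prints less than 2000: increments from the two threads can
            // overwrite each other (lost updates).
            System.out.println(shared);
        }
    }

Reads alone would commute, which is why concurrent reads with exclusive writes, as in the reader-writer discipline above, preserve determinism.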

References

1. Adrian Francalanza, Julian Rathke, and Vladimiro Sassone. Permission-based separation logic for message-passing concurrency. Logical Methods in Computer Science, 7(3), 2011.

2. Peter W. O'Hearn. Resources, concurrency, and local reasoning (abstract). In David A. Schmidt, editor, ESOP, volume 2986 of Lecture Notes in Computer Science, pages 1–2. Springer, 2004.

3. John C. Reynolds. Separation logic: A logic for shared mutable data structures. In LICS, pages 55–74. IEEE Computer Society, 2002.


Making mutation testing a more feasible proposition for the industry

Mark Micallef, Mark Anthony Cachia

Department of Computer Science, University of Malta, [email protected], [email protected]

Software engineering firms find themselves developing systems for customers whose need to compete often leads to situations whereby requirements are vague and/or prone to change. One of the prevalent ways with which the industry deals with this situation is through the adoption of so-called Agile development processes. Such processes enable the evolutionary delivery of software systems in small increments, frequent customer feedback, and, ultimately, software which continuously adapts to changing requirements. In this fluid scenario, developers rely on automated unit tests to gain confidence that any regressions resulting from code changes will be detected. Consequently, trust in the software system can only follow from the quality of the tests. Unfortunately, the industry tends to rely on tools that calculate primitive measures such as statement coverage, a measure which has been shown to provide a false sense of security [2].

Mutation testing [3] is an analysis technique based on the following idea: given a program P and a test suite T, if one judges T to adequately cover P, then executing T against P′ (where P′ is an altered version of P) should result in at least one failing test. Therefore, from an original program P, a number of modified programs P1, ..., Pn, called mutants, are produced by applying a number of syntactic modifications to P. These modifications are carried out by mutation operators which are designed to change P in a way that corresponds to a fault which could be introduced by a developer. Mutants which are detected by T are said to be killed. Undetected (unkilled) mutants require manual investigation by developers, possibly resulting in improvements to T. In comparison to techniques such as statement coverage analysis, mutation testing provides a significantly more reliable measure of test suite thoroughness.
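For instance (a toy Java example of ours, not taken from the paper), a typical arithmetic-operator mutation and a test that kills it might look as follows:

    public class Pricing {
        // Original program P:  return qty * unit;
        // A mutant P' would replace '*' with '+':  return qty + unit;
        static int price(int qty, int unit) {
            return qty * unit;
        }

        public static void main(String[] args) {
            // This check kills the mutant: P yields 6 while P' would yield 5.
            if (price(2, 3) != 6) throw new AssertionError("mutant survived");
            System.out.println("test passed");
        }
    }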

Despite its effectiveness, mutation testing suffers from three recognised problems. These are (1) the computational expense of generating mutants¹, (2) the time required to execute test suites against all mutants², and (3) the equivalent mutant problem. The latter refers to situations where syntactically different mutants turn out to be semantically identical, thus wasting time and effort. Besides the three cited problems with mutation testing, we also argue that there is a fourth problem, one concerned with the time and effort required to investigate and address unkilled mutants. Each unkilled mutant requires a developer to understand the mutant's semantics, determine if a change to the test suite is required and finally modify the test suite to kill the mutant. We argue that this effort can be a deterrent to the wider uptake of mutation testing because the time and cognitive effort required to carry out the task may not be perceived as being worth the potential benefits gained.

¹ The computational complexity of Mutation Testing is O(n²) where n is the number of operations in the source code [1].

² Executing all tests against each mutant renders mutation tools unacceptably slow and only suitable for testing relatively small programs [4].

Fig. 1. Comparing traditional mutation testing with localised mutation

In this talk, we will provide an overview of mutation testing and subsequently discuss problems which prevent its wider uptake. We then discuss our research activities in this area and present a technique which we term localised mutation. The technique leverages the iterative nature of Agile development such that one only creates mutants from sections of the codebase which have changed since the last mutation run. The hypothesis is that if mutation testing is carried out in bite-sized chunks on code which has recently changed, then computational expense can be drastically reduced and developers should experience less cognitive load during analysis. Consequently, the main hurdles to mutation testing adoption in industry would be significantly reduced. Preliminary results from this research will also be discussed and related to our ongoing research activities.

References

1. M. E. Delamaro, J. Maldonado, A. Pasquini, and A. P. Mathur. Interface mutation test adequacy criterion: An empirical evaluation. Empirical Softw. Engg., 6(2):111–142, June 2001.

2. S. A. Irvine, T. Pavlinic, L. Trigg, J. G. Cleary, S. Inglis, and M. Utting. Jumble java byte code to measure the effectiveness of unit tests. In Proceedings of the Testing: Academic and Industrial Conference Practice and Research Techniques - MUTATION, TAICPART-MUTATION '07, pages 169–175, Washington, DC, USA, 2007. IEEE Computer Society.

3. Y. Jia and M. Harman. An analysis and survey of the development of mutation testing. ACM SIGSOFT Software Engineering Notes, 1993.

4. E. W. Krauser, A. P. Mathur, and V. J. Rego. High performance software testing on SIMD machines. IEEE Trans. Softw. Eng., 17(5):403–423, May 1991.


Integrating Mutation Testing into Agile Processes through Equivalent Mutant Reduction via Differential Symbolic Execution

Mark Anthony Cachia and Mark Micallef

Department of Computer Science
University of Malta
Msida, Malta
[email protected], [email protected]

I. AGILE PROGRAMMING

In agile programming, software development is performed in iterations. To ensure the changes are correct, considerable effort is spent writing comprehensive unit tests [2]. Unit tests are the most basic form of testing and are performed on the smallest sets of code [7]. These unit tests have multiple purposes, the main one being that of acting as a safety net between product releases. However, the value of testing can be called into question if there is no measure of the quality of unit tests [2]. Code coverage analysis is an automated technique which illustrates which statements are covered by tests [4]. However, high code coverage might still not be good enough, as whole branches or paths could still go completely untested, which in turn leads to a false sense of security [8]. Mutation Testing is a technique designed to successfully and realistically identify whether a test suite is satisfactory. In turn, such tests lead to finding bugs within the code. The technique behind mutation testing involves generating variants of a system (called mutants) by modifying operators, and executing tests against them. If the test suite is thorough enough, at least one test should fail against every mutant, thus rendering that mutant killed. Unkilled mutants would require investigation and potential modification of the test suite [3].

II. MUTATION ANALYSIS

Mutation analysis is a process which determines the effectiveness of a test suite. This is achieved by modifying the source of a program and ensuring that at least one test fails, thus ensuring the tests are sensitive to particular source changes. A mutant which is detected by tests is called a killed mutant. In mutation analysis, a large number of mutants are generated. Mutants are programs which have been syntactically altered. Sometimes, such alterations lead to the generation of Equivalent Mutants. Equivalent mutants are a problem in Mutation Testing and can be defined as the mutants "which produce the same output as the original program" [3]. Performing Mutation Analysis on equivalent mutants is a waste of computation time. As program equivalence is undecidable, automatically detecting equivalent mutants is impossible. The equivalent mutant problem is a barrier that prevents Mutation Testing from being widely adopted. The result of Mutation Analysis is a ratio or percentage of the killed mutants divided by the number of non-equivalent mutants [3]; Figure 1 illustrates.

MS = KilledMutants / (Mutants − EquivalentMutants)

Fig. 1. Equation representing the Mutation Score [3]
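As a worked instance (numbers ours, purely illustrative): with 100 generated mutants, of which 10 are equivalent, and 72 killed by the test suite, MS = 72 / (100 − 10) = 0.8, i.e. a mutation score of 80%.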

There are various reasons why mutants may be equivalent. Grun et al. [1] manually investigated eight equivalent mutants generated from the JAXEN XPATH query engine program. They noticed that the four main reasons which cause a mutant to be equivalent are:

• mutants generated from unneeded code,
• mutants which improve speed,
• mutants which just alter the internal states, and
• mutants which cannot be triggered.

III. SYMBOLIC EXECUTION

Symbolic execution is the process of analysing a program by executing it in terms of a symbolic parameter α instead of a concrete instance. The class of inputs represented by each symbolic execution is determined by the control flow of the program, based on branching and operations on the inputs. Each branch leads to a unique path condition (PC). A PC is a set of conditions which the concrete variables have to adhere to for the execution to be in the given particular path; hence, for any given set of concrete parameters, the given parameters can reside in at most one path. Initially, the path condition is true; however, at each branch operation, the branch condition is ANDed to the previous PC. In the else path, the NOT of the branch condition is added to the current PC [5].

Symbolic execution takes normal execution semantics for granted and is thus considered to be a natural extension of concrete execution. At the end of symbolic execution, an execution tree can be graphed illustrating all the possible paths the program can follow. At each leaf, there exists a unique PC which can be satisfied by a concrete input. The PCs at the leaves are pairwise distinct, i.e. ¬(PCi ∧ PCj) for distinct leaves i and j. As symbolic execution satisfies a commutative property (relative to concrete examples), symbolic execution has the "exact same effects as conventional executions" [5].

A. Differential Symbolic Execution

Differential symbolic execution is a technique introduced by Person et al. [6] which efficiently performs symbolic execution on two versions of the same class. The aim of differential symbolic execution is to determine, effectively and efficiently, whether the two versions of the code are equivalent; and if not, to characterise the behavioural differences by identifying the sets of inputs causing different behaviour [6].

Differential symbolic execution works by symbolically executing each method version to generate symbolic summaries. Symbolic summaries pair input values with the constraint (also known as the branch condition) together with the operation on a symbolic value (the effect or operation). The summaries from both versions are compared. If the summaries are equivalent, the methods are considered functionally equivalent. Otherwise, a behavioural delta (∆) is computed which characterises the input values where the versions differ.
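A sketch of the idea on an invented pair of method versions (ours, not from [6]); each summary pairs a path condition with its effect:

    public class AbsVersions {
        // Version 1.
        static int abs1(int x) { return x < 0 ? -x : x; }
        // Summary: { (x < 0) -> -x,  ¬(x < 0) -> x }

        // Version 2: syntactically different, behaviourally the same.
        static int abs2(int x) {
            if (x >= 0) return x;
            return -x;
        }
        // Summary: { (x >= 0) -> x,  ¬(x >= 0) -> -x }

        // The two summaries contain the same (condition, effect) pairs, so the
        // versions are reported functionally equivalent and the behavioural
        // delta ∆ is empty; a surviving mutant with this property is equivalent.
    }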

IV. PROPOSED WORK AND EVALUATION

The proposed work is to combine differential symbolic execution with the most effective optimisations of Mutation Testing. Direct bytecode manipulation, as well as Localised Mutation (a technique which only mutates code modified with respect to previous versions, as introduced in the FYP), will be employed to ensure the utmost performance in the generation of mutants. Differential symbolic execution will be used to perform an efficient functional-equivalence check between the mutated method and the original method, to ensure the method's semantics have been altered. This determines if the versions have the same black-box behaviour as well as the same partition effects [6].

The evaluation under consideration will assess the effectiveness of equivalent mutant reduction in Mutation analysis. The evaluation will be performed in two iterations with two different algorithms:

1) Mutation testing with the most efficient optimisations;
2) Mutation testing with the most efficient optimisations as well as equivalent mutant reduction using differential symbolic execution.

Various metrics are envisaged to be utilised. Such metrics currently include the execution time to perform the complete Mutation analysis and the time taken for the developer to implement enough tests until a satisfactory Mutation score is achieved. Another aim of this work is to find the most efficient point in the software life cycle at which it is ideal to start performing mutation analysis; particularly, whether it is worth performing in just one chunk at the very end of the cycle or whether it is more feasible to perform incremental Mutation testing during the whole development process. It is planned that the point at which Mutation analysis should ideally commence, and the interval at which the analysis should be performed (such as per commit or by some other heuristic), will also be determined during the empirical evaluation.

REFERENCES

[1] Bernhard J. M. Grun, David Schuler, and Andreas Zeller. The impact of equivalent mutants. In Proceedings of the IEEE International Conference on Software Testing, Verification, and Validation Workshops, ICSTW '09, pages 192–199, Washington, DC, USA, 2009. IEEE Computer Society.

[2] Sean A. Irvine, Tin Pavlinic, Leonard Trigg, John G. Cleary, Stuart Inglis, and Mark Utting. Jumble java byte code to measure the effectiveness of unit tests. In Proceedings of the Testing: Academic and Industrial Conference Practice and Research Techniques - MUTATION, TAICPART-MUTATION '07, pages 169–175, Washington, DC, USA, 2007. IEEE Computer Society.

[3] Y. Jia and M. Harman. An analysis and survey of the development of mutation testing. ACM SIGSOFT Software Engineering Notes, 1993.

[4] C. Kaner, J. Falk, and H. Nguyen. Testing computer software. 1999.

[5] James C. King. Symbolic execution and program testing. Commun. ACM, 19(7):385–394, July 1976.

[6] Suzette Person, Matthew B. Dwyer, Sebastian Elbaum, and Corina S. Pasareanu. Differential symbolic execution. In Proceedings of the 16th ACM SIGSOFT International Symposium on Foundations of Software Engineering, SIGSOFT '08/FSE-16, pages 226–237, New York, NY, USA, 2008. ACM.

[7] R. S. Pressman. Software engineering: a practitioner's approach (2nd ed.). McGraw-Hill, Inc., New York, NY, USA, 1986.

[8] Ben H. Smith and Laurie Williams. Should software testers use mutation analysis to augment a test set? Journal of Systems and Software, 2009.


µLarvaScript: Rethinking the Larva Scripting Language

Adrian [email protected]

University of Malta

1 Synopsis

polyLarva, the latest incarnation of the Larva runtime-verification (RV) tool suite, experienced a major redesign of its scripting language (used for specifying the monitors that carry out the RV). As opposed to previous versions, where the programmer specified the monitoring automata describing the RV procedure, in pLarvaScript (the scripting language for polyLarva) the programmer specifies monitoring as a sequence of (guarded) rules of the form

p, c −→ a

where p denotes an event pattern, c denotes a condition and a stands for a monitor action. A monitor synthesised from this sequence of rules would then listen to a stream of events: for each event e, it attempts to match an event pattern from the list of rules; when this happens, the monitor evaluates the corresponding condition of the rule and, if successful, the respective rule action is triggered.
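For illustration, a schematic rule in this shape (invented for this note; not actual pLarvaScript syntax) could be:

    badLogin(usr), count(usr) >= 3  −→  disableAccount(usr)

Here the pattern badLogin(usr) matches failed-login events, the condition count(usr) >= 3 guards the rule, and the action disableAccount(usr) is triggered when both match.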

By and large, the new version of LarvaScript has so far been developed organically, motivated more by the pragmatic concerns of the applications considered rather than by language design issues. Also, because of backward-compatibility concerns, a number of design decisions were inherited, carried over from its precursors. Although effective and pragmatic, this method of development has hampered a full understanding of the resulting language: at present, the only formal semantics available is the polyLarva compiler itself, which is not ideal for a number of reasons: (i) an understanding of the language constructs requires a detailed understanding of the compiler; (ii) we have no way to determine whether the compiler implementation is correct; and (iii) concurrency often introduces subtle semantic and language design issues that are best studied at a higher level of abstraction.

In this talk, I will discuss preliminary investigations regarding the analysis of the pLarvaScript language from a foundational perspective. For instance, we consider inherited constructs such as foreach for defining sub-monitors, together with associated design decisions, such as the hierarchic organisation of these submonitors, and reassess them from first principles. We do not have any conclusive results, and thus far our guiding principles have been simplicity, elegance (admittedly both subjective measures) and a potential concurrent implementation of the tool. I will therefore be seeking feedback from the audience during the talk, which should help us hone our present positions regarding the understanding of the language.


Combining Testing and Runtime Verification

Christian Colombo

Department of Computer Science, University of Malta

Testing and runtime verification are intimately related: runtime verification enables testing of systems beyond their deployment by monitoring them under normal use, while testing is not only concerned with monitoring the behaviour of systems but also with generating test cases which are able to sufficiently cover their behaviour. Given this link between testing and runtime verification, one is surprised to find that in the literature the two have not been well studied in each other's context. Below we outline three ways in which this can be done: one where testing can be used to support runtime verification, another where the two techniques can be used together in a single tool, and a third approach where runtime verification can be used to support testing.

1 A Testing Framework for Runtime Verification Tools

Like any piece of software, runtime verification tools which generate monitoring code from formal specifications have to be adequately tested, particularly so because of their use to assure other software. State-of-the-art runtime verification tools such as Java-MOP [12] and tracematches [3] have been tested on the DaCapo benchmark [2]. However, the kind of properties which have been monitored are rather low level, contrasting with our experience with industrial partners, who seem more interested in checking for higher level properties (such as the ones presented in [5, 4]). Whilst we had the chance to test our tool Larva [6] on industrial case studies, such case studies are usually available for small periods of time and in limited ways due to privacy concerns. Relying solely on such case studies can be detrimental for the development of new systems which need substantial testing and analysis before being of any use.

For this reason, we aim to develop a testing framework which would provide a highly configurable mock transaction system to enable thorough validation of systems which interact with it. Although not a replacement of industrial case studies, this would enable better scientific evaluation of runtime verification systems.

2 Combining Testing and Runtime Verification Tools

While testing is still the prevailing approach to ensure software correctness, the use of runtime verification [1] as a form of post-deployment testing is on the rise. Such continuous testing ensures that if bugs occur, they don't go unnoticed. Apart from being complementary, testing and runtime verification have a lot in common: runtime verification of programs requires a formal specification of requirements against which the runs of the program can be verified [11]. Similarly, in model-based testing, checks are written such that on each (model-triggered) execution step, the system state is checked for correctness. Due to this similarity, applying both testing and runtime verification techniques is frequently perceived as duplicating work. Attempts [9] have already been made to integrate the runtime verification tool Larva [6] with QuickCheck [10]. We plan to continue this work by integrating the Larva tool with a Java model-based testing tool, ModelJUnit¹.

3 Using Runtime Verification for Model-Based Testing Feedback

To automate test case generation and ensure that the tests cover all salient aspects of a system, model-based testing [7, 8] enables the use of a model specification from which test cases are automatically generated. Although successful and growing in popularity, model-based testing is only effective in as much as the model is complete. Sequences of untested user interaction may lead to huge losses for the service provider if any of these lead to a failure. Since coming up with a model for test case generation is largely a manual process [7, 8], it is virtually impossible to ensure the model is complete.

In this context, we propose to use runtime information to detect incompleteness in the test case generation model: by considering the execution paths the system takes at runtime, a monitor checks whether each path (or a sufficiently similar one) is in the test case generation model.
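A minimal sketch of this check (our Java illustration, with an invented API; a real monitor would use a similarity measure rather than exact matching):

    import java.util.List;
    import java.util.Set;

    // Flags runtime execution paths that are absent from the set of paths
    // covered by the test case generation model.
    public class ModelCoverageMonitor {
        private final Set<List<String>> modelPaths;

        ModelCoverageMonitor(Set<List<String>> modelPaths) {
            this.modelPaths = modelPaths;
        }

        // True if the observed path is (exactly) one the model can generate.
        boolean covered(List<String> observedPath) {
            return modelPaths.contains(observedPath);
        }

        public static void main(String[] args) {
            ModelCoverageMonitor monitor = new ModelCoverageMonitor(
                Set.of(List.of("login", "browse", "logout")));
            // A path seen at runtime but missing from the model is reported
            // as a candidate for extending the model.
            System.out.println(monitor.covered(List.of("login", "buy", "logout"))); // false
        }
    }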

References

1. Barringer, H., Falcone, Y., Finkbeiner, B., Havelund, K., Lee, I., Pace, G.J., Rosu, G., Sokolsky, O., Tillmann, N. (eds.): RV, LNCS, vol. 6418 (2010)

2. Blackburn, S.M., Garner, R., Hoffmann, C., Khang, A.M., McKinley, K.S., Bentzur, R., Diwan, A., Feinberg, D., Frampton, D., Guyer, S.Z., Hirzel, M., Hosking, A., Jump, M., Lee, H., Moss, J.E.B., Phansalkar, A., Stefanovic, D., VanDrunen, T., von Dincklage, D., Wiedermann, B.: The DaCapo benchmarks: Java benchmarking development and analysis. SIGPLAN Not. 41(10), 169–190 (2006)

3. Bodden, E., Hendren, L.J., Lam, P., Lhoták, O., Naeem, N.A.: Collaborative runtime verification with tracematches. J. Log. Comput. 20(3), 707–723 (2010)

4. Colombo, C., Pace, G.J., Abela, P.: Compensation-aware runtime monitoring. In: RV. LNCS, vol. 6418, pp. 214–228 (2010)

5. Colombo, C., Pace, G.J., Schneider, G.: Dynamic event-based runtime monitoring of real-time and contextual properties. In: FMICS. LNCS, vol. 5596, pp. 135–149 (2008)

6. Colombo, C., Pace, G.J., Schneider, G.: Larva — safer monitoring of real-time Java programs (tool paper). In: SEFM. pp. 33–37 (2009)

7. Dias Neto, A.C., Subramanyan, R., Vieira, M., Travassos, G.H.: A survey on model-based testing approaches: a systematic review. In: Empirical assessment of software engineering languages and technologies. pp. 31–36. WEASELTech '07 (2007)

8. El-Far, I.K., Whittaker, J.A.: Model-Based Software Testing. John Wiley & Sons, Inc. (2002)

9. Falzon, K.: Combining Runtime Verification and Testing Techniques. Master's thesis (2011)

10. Hughes, J.: QuickCheck testing for fun and profit. In: Practical Aspects of Declarative Languages, LNCS, vol. 4354, pp. 1–32 (2007)

11. Leucker, M., Schallhart, C.: A brief account of runtime verification. JLAP 78(5), 293–303 (2009)

12. Meredith, P.O., Jin, D., Griffith, D., Chen, F., Rosu, G.: An overview of the MOP runtime verification framework. JSTTT (2011), to appear.

¹ http://www.cs.waikato.ac.nz/~marku/mbt/modeljunit/


Language Extension Proposals for Cloud-Based Computing

Adrian [email protected], and Tyron Zerafa

[email protected]

University of Malta

1 Synopsis

Cloud computing can be described as the homogenisation of resources distributed across computing nodes, so as to facilitate their sharing by a number of programs. In some sense, the use of virtual machines in languages such as Java and Erlang is a first step towards this idea of cloud computing, by providing a common layer of abstraction over nodes with different characteristics. This has fostered various forms of distributed computing mechanisms such as web services based on remote procedure calls (RPCs) and code-on-demand (COD) applets executing in sandboxed environments.

Erlang is an actor-based programming language that lends itself well to the construction of distributed programs through a combination of language features such as message-passing and error-handling mechanisms. It also offers mechanisms for dynamically spawning processes (actors, to be more precise) on remote nodes, for reasons ranging from load balancing to the maximisation of computation proximity vis-a-vis the resources that a process uses. Although these mechanisms work well, they rely on a rather strong assumption when programming for the cloud, namely that the source code at every node is homogeneous.

The first aim of our study is to create a layer of abstraction that automates the necessary migration of source code so as to allow seamless spawning of processes across nodes "on the cloud". The challenge here is to migrate the least amount of code, at the least amount of cost/effort, so as to allow the remote computation to be carried out. The solutions we explore range from those that rely on code dependency static analyses to "lazy" dynamic methods that migrate source code only when needed by the host node. There are also issues relating to source-name clashes and versioning.

The second aim of the study is to enhance the security of resources advertised on the cloud, by delineating their expected use. Our approach will rely on the mechanism of having a policy file per node, describing the restrictions imposed on resource usage. We plan to explore the feasibility of code migration mechanisms that take into consideration these policy restrictions imposed by the receiving nodes. We shall also explore various ways of enforcing these policies.

The third aim of the study is to analyse mechanisms for tolerating network errors such as node disconnections; this is particularly relevant to distributed computations spanning over more than two nodes. Again, we plan to use the policy-file approach to automate decisions that need to be taken to seamlessly carry out distributed computation in the eventuality of such failures; the policy files may also be used to specify redundancy mechanisms that should, in turn, enable better tolerance to such faults.


Designing Correct Runtime-Monitors for Erlang

Aldrin Seychell and Adrian Francalanza

Faculty of ICT, University of Malta, Msida, Malta
[email protected], [email protected]

In runtime verification, a monitor continuously checks the execution of a program that is running concurrently with it. Normally, the runtime monitor checks that the system does not violate a correctness property. Any runtime monitor is expected to satisfy the following:

If a system does not obey a property ϕ, then the monitor for ϕ MUST flag a failure.

Showing that this statement holds is not trivial since we have to consider all possible execution paths of the monitor. In our work, we show how one can prove this indirectly by proving a number of correctness criteria on our synthesised monitors.

For instance, a desirable property for monitors is that the execution of a monitor does not result in divergence, since this would result in infinite computation by the monitor. Divergence in the monitor results in an unacceptable computational overhead that would hinder the performance of the monitored system. If the computational overhead incurred on the system is unacceptably high, the monitor cannot even be considered a valid monitor, since it will never be used.

Another desirable property that monitors should observe is that there exists at least one execution path along which, when the program does not satisfy a property, the monitor will give a fail notification. This is generally enough if the monitors are executed in a sequential manner.

The monitors might have to execute using concurrent processes, for example to check multiple branches of a property at the same time. Thus, in a concurrent environment it is not enough that there exists a single path that gives the expected result. Having a proof of determinism would extend the latter property to be valid for all execution paths. Determinism also means that if a specific execution trace is monitored multiple times, the monitor will always give the same result.

In this talk, a design of monitors in Erlang for Erlang programs is discussed. A proof that this design of monitors satisfies the theorem stated above is also given. The design uses concurrent processes in order to monitor multiple sub-properties combined under conjunction. These synthesised monitors are shown to be non-divergent, correct and determinate. In order to show that the monitors preserve determinism, the monitors are proved to be confluent, since confluence implies determinism.


Formal Fault-Tolerance Proofs for Distributed Algorithms

Mandy Zammit and Adrian Francalanza

Department of Computer Science, Faculty of ICT, University of Malta
[email protected], [email protected]

Distributed Algorithms express problems as concurrent failing processes which cooperate and interact towards a common goal [1, 2]. Such algorithms arise in a wide range of applications, including distributed information processing, banking systems and airline reservation systems amongst others. It is desirable that distributed algorithms are well behaved both in a failure-free environment and even in the presence of failure (i.e. fault tolerant). To ensure well-behavedness for all executions of distributed algorithms, formal correctness proofs are needed. This is due to the concurrent nature of such algorithms, where executions of the algorithms result in different interleavings amongst parallel processes (i.e. there is a large number of possible execution paths).

However, distributed algorithms are often described in semi-formal pseudo code and their correctness criteria¹ in natural language. The current status quo has a number of shortcomings. The first is the discrepancy between the semi-formal description of the algorithm and the actual implementation, which is expressed in a formal language. This discrepancy is often a principal source of errors in the deployment of these algorithms. Second, due to the exponential number of interleavings amongst parallel processes and the subtle forms of process interference at these interleavings, reasoning about correctness becomes rather subtle and delicate.

Unfortunately, formalisation tends to increase complexity, because formal analysis forces one to consider every possible interleaving. At the same time, the use of proof mechanisation still does not yield a clearly expressed proof that is easy to construct or understand; rather, it yields monolithic mechanical proofs that are very hard to digest.

In our work we address the complexity problem introduced by formalisation. We formalise and prove the correctness of three broadcast protocols, namely Best Effort Broadcast, Regular Reliable Broadcast and Uniform Reliable Broadcast. We use an asynchronous pi-calculus together with fail-stop processes and perfect failure detectors to encode the protocols and their correctness criteria, and bisimulation (≈, a notion of coinductive equivalence) for the correctness proofs. In order to alleviate complexity, we extend the work of Francalanza and Hennessy [3], making use of the concept of harnesses and of a novel approach to correctness proofs whereby the proofs for basic correctness (correctness in a failure-free setting) and for fault tolerance (correctness in a setting where failures can be induced) are disjoint. We extend their methodology to produce inductive proofs containing coinductive techniques at each inductive step, rather than one large coinductive proof. This allows us to move away from large monolithic proofs.

References

[1] Lynch, Nancy A., Distributed Algorithms, Morgan Kaufmann (1st edition), 2007.


[2] Tel, Gerard, Introduction to Distributed Algorithms, Cambridge University Press (2nd edition), 2001.

[3] Francalanza, Adrian and Hennessy, Matthew, A fault tolerance bisimulation proof for consensus, Proceedings of the 16th European Conference on Programming, Springer-Verlag, 2007.

[4] Kühnrich, Morten and Nestmann, Uwe, On Process-Algebraic Proof Methods for Fault Tolerant Distributed Systems, Proceedings of the Joint 11th IFIP WG 6.1 International Conference FMOODS '09 and 29th IFIP WG 6.1 International Conference FORTE '09 on Formal Techniques for Distributed Systems, Springer-Verlag, 2009.


Real-time Selective Rendering
CSAW-2012 Presentation Abstract

Steven Galea

Department of Computer Science
University of Malta
Msida, Malta

[email protected]

Abstract - Traditional physically-based renderers can produce highly realistic imagery; however, they suffer from lengthy execution times, which make them impractical for use in interactive applications. Selective rendering exploits limitations in the human visual system to render images that are perceptually similar to high-fidelity renderings in a fraction of the time. This paper outlines current research being carried out by the author to tackle this problem, using a combination of ray-tracing acceleration techniques, GPU-based processing, and selective rendering methods. The research will also seek to confirm results published in the literature, which indicate that users fail to notice any quality degradation between high-fidelity imagery and a corresponding selective rendering.

1 Introduction

At its essence, computer graphics is concerned with synthesizing a pictorial representation of objects and environments from their computer-based models [1], through a process known as rendering.

Traditional rendering methods make use of rasterization to project 3D models to a 2D image plane, together with an array of specialized methods to approximate real-world phenomena like shadows, reflections and indirect lighting [2]. These techniques have evolved into mature APIs (e.g. OpenGL, DirectX) and can achieve real-time performance at the cost of limited realism and increased application complexity.

In contrast, physically-based renderers seek to replicate the actual properties of lighting and materials. For instance, ray tracing simulates the path taken by light through a virtual environment, and can generate images that not only look realistic, but are also physically plausible. However, rendering in this manner is very computationally intensive, and until recently, physically-based rendering was largely limited to off-line rendering [3].

This paper outlines the author's research in selective rendering, with the aim of generating high-fidelity imagery at interactive rates and confirming experimental results published in the literature. Section 2 gives a short description of the problem to be tackled. Section 3 reviews the relevant literature. The proposed solution is discussed in Section 4. Section 5 provides an overview of the current project as well as possible directions for further research. Finally, Section 6 concludes the paper.

2 Problem Description

According to Chalmers and Ferko [4], rendering high-fidelity images in real time has long been the "holy grail" of the computer graphics community. To achieve this, research on physically-based renderers has tackled the problem from different points of view. On one front, researchers have focused on finding effective data structures that reduce the number of operations carried out for each ray of light [3]. Other works have focused on implementing massively-parallel ray tracers on GPUs [5], and on distributing the rendering load over a computing cluster [6] or supercomputer [7]. However, simulating real-world light interaction in complex scenes at interactive rates on a desktop PC still remains out of reach, even using modern GPUs [8].

More recent attempts have adopted principles from psychology and perception to target computation to the parts of the scene that matter most to a human observer [9]. Selective rendering allows for substantial reductions in computation while creating an image that is perceptually equivalent to a high-fidelity rendering of the whole scene.

3 Literature Review

Physically-based rendering algorithms, such as ray tracing, generate high-fidelity images by simulating the physics of light and its interaction with surface materials. In particular, global illumination effects, such as surface inter-reflections and caustics, play a major role in enhancing the realism of rendered images [10]. However, the huge amount of computation required to produce physically-correct images prevents these methods from being utilized in interactive applications on desktop computers [4]. Several techniques have been developed to improve ray-tracing performance:

Acceleration data structures, such as kd-trees and bounding volume hierarchies (BVHs), reduce geometric complexity by testing for ray-object intersections against groups of geometric primitives [11].

Instant global illumination (IGI) precomputes the distribution of lighting in the scene by following light paths from the light sources and depositing virtual point lights (VPLs) at each vertex [12]. Indirect light is then rendered by casting shadow rays from the shaded point to each VPL to determine visibility.

Interleaved sampling exploits the slowly changing nature of the indirect lighting function to include the contribution from a given number of VPLs in less time [13]. This is achieved by modifying the IGI preprocessing stage to generate k subsets of VPLs. The image is then partitioned into tiles of n by m pixels (with n*m = k, usually n = m = 3 or n = m = 5), with each pixel sampling indirect light from its relative VPL subset. Finally, a discontinuity buffer is used to calculate the contribution of all VPLs at each pixel by interpolating values from neighboring pixels, provided that they do not cross geometric discontinuities [12].
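
As an illustration of the tiling scheme just described, the sketch below (with hypothetical helper names, not code from [12, 13]) maps each pixel to one of the k VPL subsets according to its position within its n-by-m tile:

def subset_index(x, y, n=3, m=3):
    # Pixel (x, y) samples the VPL subset determined by its position inside
    # its n-by-m tile, so neighbouring pixels use different subsets.
    return (y % n) * m + (x % m)

def shade_indirect(width, height, vpl_subsets, indirect_from, n=3, m=3):
    assert len(vpl_subsets) == n * m           # k = n * m subsets
    image = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # each pixel gathers indirect light from only 1/k of the VPLs
            subset = vpl_subsets[subset_index(x, y, n, m)]
            image[y][x] = indirect_from(x, y, subset)
    # a discontinuity buffer would then combine the k interleaved estimates
    # within each tile, skipping neighbours across geometric discontinuities
    return image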

Selective rendering seeks to produce images that are perceived to be of an equivalent quality to a physically-correct image, but using a fraction of the computation. This is possible since the human visual system does not process an image in raster-like fashion, but rapidly directs attention to salient features in a scene, such as sudden movement or brightly coloured objects [14]. This knowledge can be used to construct saliency maps, i.e. images highlighting the most perceptually interesting parts of the scene, which are then used to direct computation appropriately.

4 Proposed Solution

Saliency maps generated using image-processing techniques are usually expensive to compute, and do not give the region of interest where the user is currently looking, but only where the user is likely to direct attention. This MSc will look into various methods of achieving real-time rendering performance using selective methods. This includes the possibility of using an eye tracker to get an accurate measurement of the user's current region of interest, with all the associated computation carried out by the eye-tracking device. This should significantly reduce the time spent on computing the saliency map, making more resources available for rendering. To our knowledge, the use of an eye-tracking device for real-time selective rendering has never been explored before.

Additionally, this project will also seek to confirm results published in the literature which indicate that users fail to notice any differences in quality between a high-fidelity rendering and a selective rendering of the same scene. Previous psychophysical experiments have used pre-rendered images and animations for the evaluation. If the renderer is successful at achieving real-time performance, the project can also look into carrying out a psychophysical experiment with interactive imagery, where participants are able to freely navigate the scene.


5 Project Status

A prototype to demonstrate proof of concept is being developed as an extension to the GPU-based ray-tracing framework NVIDIA OptiX [15]. The framework provides a high-performance ray-tracing core comprising a small set of building blocks that are used in most ray-tracing algorithms.

Several basic components, such as model loading and the rendering of direct lighting, have already been developed. Global illumination is rendered by a GPU-based implementation of the IGI algorithm, together with interleaved sampling and a discontinuity buffer to improve execution time.

Interleaved sampling provides good speedups at the cost of some degradation in image quality, especially near geometric discontinuities, where averaging across nearby samples cannot be performed. The loss in image quality arises mainly from the fixed 3x3 or 5x5 tiling in interleaved sampling. Current research is investigating whether a better-quality image can be obtained by smartly positioning the samples across the image according to saliency. This is done by considering the full set of VPLs at each sample, but positioning the samples densely in salient regions (e.g. at pixel intervals) and more sparsely (e.g. sampling once per 3x3 or 5x5 tile) in less salient regions. Missing illumination values can then be interpolated from nearby samples.
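
A minimal sketch of this placement strategy, assuming a precomputed saliency map and hypothetical names throughout (an illustration of the idea, not the author's implementation):

def sample_positions(saliency, width, height, tile=3, threshold=0.5):
    # Return the pixel positions at which full indirect lighting (all VPLs)
    # is computed: every pixel in salient regions, one pixel per tile
    # elsewhere. saliency(x, y) in [0, 1] comes from the precomputed map.
    positions = []
    for y in range(height):
        for x in range(width):
            salient = saliency(x, y) >= threshold
            on_tile_grid = (x % tile == 0) and (y % tile == 0)
            if salient or on_tile_grid:
                positions.append((x, y))
    # illumination at the remaining pixels is later interpolated from the
    # nearest computed samples
    return positions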

Preliminary results show that this technique gives a good-quality indirect lighting image. However, a more in-depth analysis still needs to be carried out to compare the output of the proposed method with that of traditional interleaved sampling.

The saliency map currently being used is a GPU implementation of the method by Hou et al. [16]. Although fast, the method is quite basic, and several enhancements have been proposed in the literature to improve accuracy. Future work will look into substituting the current saliency map with an improved version, thus improving the quality of the indirect lighting image.

Once the selective rendering component is ready, the project may also look into using an eye tracker to provide the saliency information instead of the saliency map.

6 Conclusion

This paper started with a review of the existing literature, outlining the performance issues of rendering high-fidelity imagery at interactive rates. An overview of the current research was given, proposing a combination of ray-tracing acceleration techniques, GPU-based processing, and selective rendering methods as a way to tackle this problem. The current status of the project was discussed, together with preliminary results and directions for future research.

References

[1] J. D. Foley, A. van Dam, S. K. Feiner and J. F. Hughes, Computer Graphics: Principles and Practice in C (2nd Edition), Addison-Wesley Professional, 1995.

[2] J. Schmittler, S. Woop, D. Wagner, W. J. Paul and P. Slusallek, "Realtime ray tracing of dynamic scenes on an FPGA chip," in ACM SIGGRAPH/EUROGRAPHICS Conference on Graphics Hardware, New York, USA, 2004.

[3] I. Wald, W. R. Mark, J. Günther, S. Boulos, T. Ize, W. Hunt, S. G. Parker and P. Shirley, "State of the Art in Ray Tracing Animated Scenes," Computer Graphics Forum, vol. 28, no. 6, pp. 1691-1722, 2009.

[4] A. Chalmers and A. Ferko, "Levels of Realism: From Virtual Reality to Real Virtuality," in 24th Spring Conference on Computer Graphics, New York, USA, 2008.

[5] J. Günther, S. Popov, H.-P. Seidel and P. Slusallek, "Realtime Ray Tracing on GPU with BVH-based Packet Traversal," in IEEE/Eurographics Symposium on Interactive Ray Tracing, 2007.

[6] I. Wald, P. Slusallek, C. Benthin and M. Wagner, "Interactive Distributed Ray Tracing of Highly Complex Models," in Proceedings of the 12th Eurographics Workshop on Rendering Techniques, London, UK, 2001.

[7] I. Wald and P. Slusallek, "State of the Art in Interactive Ray Tracing," 2001.

[8] A. Chalmers, K. Debattista, G. Mastoropoulou and L. P. dos Santos, "There-Reality: Selective Rendering in High Fidelity Virtual Environments," The International Journal of Virtual Reality, 2007.

[9] D. Bartz, D. Cunningham, J. Fischer and C. Wallraven, "The Role of Perception for Computer Graphics," in Eurographics State-of-the-Art Reports, 2008.

[10] K. Myszkowski, T. Tawara, H. Akamine and H.-P. Seidel, "Perception-Guided Global Illumination Solution for Animation Rendering," in Proceedings of SIGGRAPH 2001, Los Angeles, California, USA, 2001.

[11] E. Reinhard, A. Chalmers and F. W. Jansen, "Overview of Parallel Photo-realistic Graphics," in Eurographics '98 State of the Art Reports, 1998.

[12] I. Wald, T. Kollig, C. Benthin, A. Keller and P. Slusallek, "Interactive global illumination using fast ray tracing," in EGRW '02: Proceedings of the 13th Eurographics Workshop on Rendering, 2002.

[13] A. Keller and W. Heidrich, "Interleaved Sampling," in Proceedings of the 12th Eurographics Workshop on Rendering Techniques, London, UK, 2001.

[14] S. Lee, G. J. Kim and S. Choi, "Real-time tracking of visually attended objects in interactive virtual environments," in Proceedings of the 2007 ACM Symposium on Virtual Reality Software and Technology, New York, USA, 2007.

[15] NVIDIA, "NVIDIA OptiX ray tracing engine," NVIDIA, [Online]. Available: http://developer.nvidia.com/optix. [Accessed 1 May 2012].

[16] X. Hou and L. Zhang, "Saliency Detection: A Spectral Residual Approach," in IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, Minnesota, USA, 2007.


Distributed High-Fidelity Graphics using P2P

Daniel D’Agostino

In the field of three-dimensional computer graphics, rendering refers to the process of generating images of a scene from a particular viewpoint. There are many different ways to do this, from highly interactive real-time rendering methods [1] to the more photorealistic and computationally intensive methods [5].

This work is concerned with Physically Based Rendering (PBR), a class of rendering algorithms capable of achieving a very high level of realism. This is achievable thanks to physically accurate modelling of the way light interacts with objects in a scene [7], together with the use of accurately modelled materials and physical quantities.

Unfortunately, this realism comes at a cost. PBR computations are very expensive, and it may take several hours to render a single image. Hence, it is no surprise that most of the research in this field attempts to find ways to reduce that cost and produce images in less time.

In spite of their diversity, all physically-based rendering algorithms attempt to solve the same problem: imitating the visual appearance of an environment as it would look in real life. This problem is formulated as the rendering equation [6]. Based on the principle of conservation of energy, this equation states that the light leaving a surface comprises both the light emitted from the surface and the light reflected from other surfaces.
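
In Kajiya's formulation [6] (the notation here is the conventional one, not taken from this abstract), the outgoing radiance at a surface point x in direction ω_o is

    L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (n \cdot \omega_i)\, \mathrm{d}\omega_i

where L_e is the emitted radiance, f_r the surface's reflectance function (BRDF), and the integral gathers the incoming radiance L_i over the hemisphere Ω about the surface normal n.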

Due to factors relating to the geometric complexity of the scene, it is not possible to solve the rendering equation analytically [7]. This difficulty mandates the use of numerical techniques, the most popular of which are Monte Carlo techniques. These were introduced to PBR with distributed ray tracing [3], a technique based on the ray-tracing method [11] which collects light arriving at a surface from many different directions in order to include fuzzy phenomena (such as soft shadows).

Irradiance caching [10] is a particularly useful technique that can be used in conjunction with distributed ray tracing (as well as other algorithms). Based on the observation that indirect diffuse lighting varies slowly over a surface [10], it stores irradiance (view-independent lighting data) in a data structure (usually an octree) and uses fast interpolation in order to speed up computation.
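
The core of the technique can be sketched as follows: a simplified, illustrative version that uses a flat record list in place of the octree and a Ward-style interpolation weight [10], not the renderer's actual code.

import math

class IrradianceCache:
    """Flat-list stand-in for the octree-backed cache described above."""

    def __init__(self, alpha=0.3):
        self.records = []        # (position, normal, irradiance, radius)
        self.alpha = alpha       # accuracy parameter: smaller = stricter

    def _weight(self, p, n, rec):
        # Ward-style weight: decays with distance from the record and with
        # deviation between the surface normals.
        pos, nrm, _, radius = rec
        d = math.dist(p, pos)
        dev = max(0.0, 1.0 - sum(a * b for a, b in zip(n, nrm)))
        return 1.0 / max(d / radius + math.sqrt(dev), 1e-6)

    def lookup(self, p, n):
        # Interpolate irradiance from sufficiently close, similarly oriented
        # records; None signals a cache miss (full computation needed).
        num = den = 0.0
        for rec in self.records:
            w = self._weight(p, n, rec)
            if w > 1.0 / self.alpha:
                num += w * rec[2]
                den += w
        return num / den if den > 0.0 else None

    def insert(self, p, n, irradiance, radius):
        self.records.append((p, n, irradiance, radius))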

While one side of PBR research has been formulating more efficient algorithms to render physically-based images faster, another has been exploiting the embarrassingly parallel nature of ray tracing [4] in order to distribute the rendering load over many processors or interconnected machines [9].

Research into parallel PBR has so far almost exclusively used the client/server model. Although some rendering research based on the peer-to-peer (P2P) model has recently emerged [8, 2, 12], at present (to the best of our knowledge) none of it is designed to improve physically-based rendering systems.


The method we are developing is based on the irradiance caching algorithm. Given that the irradiance stored by a renderer is view-independent, it is easy to share with other machines, which can then use it directly without having to recompute it locally. This makes it particularly suitable for use over a P2P network.

Aside from the obvious benefit of free computation resulting in overall speedup, this novel way of sharing irradiance between peers is expected to enable new and interesting scenarios in which PBR can be used for collaboration between several peers in the same scene, as opposed to the traditional client/server model, where the clients would do the rendering work and the server would be the only one to see the final images.
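
Because the records are view-independent, the exchange itself is conceptually simple, as this hypothetical sketch suggests (tuple layout follows the cache sketch above; the network layer and any duplicate filtering are omitted):

def merge_remote_records(cache, remote_records):
    # Records use the same (position, normal, irradiance, radius) layout as
    # the irradiance cache sketched earlier; being view-independent, they
    # can be interpolated from directly, with no local recomputation.
    for position, normal, irradiance, radius in remote_records:
        cache.insert(position, normal, irradiance, radius)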

References

1. Tomas Akenine-Möller, Eric Haines, and Naty Hoffman. Real-Time Rendering, 3rd Edition. A. K. Peters, Ltd., Natick, MA, USA, 2008.

2. Azzedine Boukerche and Richard Werner Nelem Pazzi. A peer-to-peer approach for remote rendering and image streaming in walkthrough applications. In ICC, pages 1692–1697. IEEE, 2007.

3. Robert L. Cook, Thomas Porter, and Loren Carpenter. Distributed ray tracing. In Proceedings of the 11th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '84, pages 137–145, New York, NY, USA, 1984. ACM.

4. Thomas W. Crockett. Parallel rendering. Parallel Computing, 23:335–371, 1995.

5. P. Dutré, K. Bala, and P. Bekaert. Advanced Global Illumination. AK Peters Series. AK Peters, 2006.

6. James T. Kajiya. The rendering equation. In Proceedings of the 13th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '86, pages 143–150, New York, NY, USA, 1986. ACM.

7. M. Pharr and G. Humphreys. Physically Based Rendering: From Theory to Implementation. Morgan Kaufmann. Elsevier Science, 2010.

8. J. A. Mateos Ramos, C. González-Morcillo, D. Vallejo Fernández, and L. M. López-López. Yafrid-NG: A Peer-to-Peer Architecture for Physically Based Rendering. pages 227–230.

9. I. Wald and P. Slusallek. State of the art in interactive ray tracing. STAR, EUROGRAPHICS 2001, pages 21–42, 2001.

10. Gregory J. Ward, Francis M. Rubinstein, and Robert D. Clear. A ray tracing solution for diffuse interreflection. In Proceedings of the 15th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '88, pages 85–92, New York, NY, USA, 1988. ACM.

11. Turner Whitted. An improved illumination model for shaded display. Commun. ACM, 23(6):343–349, June 1980.

12. Minhui Zhu, Sebastien Mondet, Geraldine Morin, Wei Tsang Ooi, and Wei Cheng. Towards peer-assisted rendering in networked virtual environments. In Proceedings of the 19th ACM International Conference on Multimedia, MM '11, pages 183–192, New York, NY, USA, 2011. ACM.


Fig. 1. The Sibenik Cathedral, rendered using our Irradiance Cache

Fig. 2. The Kalabsha Temple, rendered using our Irradiance Cache


Rendering as a Service

Keith Bugeja

Department of Computer Science
University of Malta

High-fidelity rendering requires a substantial amount of computational resources to accurately simulate lighting in virtual environments [1]. While desktop computing, boosted by modern graphics hardware, has shown promise in delivering realistic rendering at interactive rates, rendering moderately complex scenes may still elude single-machine systems [2]. Moreover, with the increasing adoption of mobile devices, which are incapable of achieving the same computational performance, there is certainly a need for access to further computational resources that can guarantee a certain level of quality.

Fig. 1. Mnajdra site, generated using our high-fidelity renderer, Illumina PRT

Cloud computing is a distributed computing paradigm for service hosting and delivery. It is similar to grid computing, but leverages virtualisation at multiple levels to realise resource sharing and provision [3], providing resources on demand and charging customers on a per-use basis rather than at a flat rate. Cloud technologies have the potential of providing a large number of resources to dedicate to a single application at any given point in time [5]. This paradigm offers the possibility of interactive high-fidelity graphics for users who do not have access to expensive dedicated clusters, delivered on any system from mobile to tablet to desktop machine.


The aim of our research is to leverage the cloud infrastructure to provide high-fidelity rendering, and thus present a first attempt at rendering as a service, via a system that can provide parallel resources for rendering which could either be dedicated or adapt to the servers' workload. As such, there exist challenges across three broad aspects: efficient high-fidelity rendering, resource allocation and task distribution, and low-latency client-server communication.

The focus of this presentation is the task distribution framework for cluster computing [4] that is currently being researched and developed; whereas grid and cluster infrastructures favour job submissions for medium- to long-term executions, our task distribution framework is targeted at short-term execution and interactivity, in the vein of Many-task Computing [6]. We also consider future directions that our research might take, specifically vis-à-vis generalisation and heterogeneous computing.
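
As a rough illustration of the many-task style of execution being targeted (a generic worker-pool sketch under our own naming, not the framework itself):

import queue
import threading

def worker(tasks, results):
    # Pull short-lived tasks until the None sentinel arrives.
    for task_id, job in iter(tasks.get, None):
        results.put((task_id, job()))

def run_tasks(jobs, n_workers=4):
    # Many short tasks, many workers: throughput comes from keeping every
    # worker busy rather than from long-running job submissions.
    tasks, results = queue.Queue(), queue.Queue()
    workers = [threading.Thread(target=worker, args=(tasks, results))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    for task in enumerate(jobs):     # (task_id, job) pairs
        tasks.put(task)
    for _ in workers:                # one shutdown sentinel per worker
        tasks.put(None)
    for w in workers:
        w.join()
    return dict(results.get() for _ in jobs)

# e.g. run_tasks([lambda: "tile-0", lambda: "tile-1"])
# returns {0: 'tile-0', 1: 'tile-1'}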

References

1. Vibhor Aggarwal, Kurt Debattista, Thomas Bashford-Rogers, Piotr Dubla, and Alan Chalmers. High-fidelity interactive rendering on desktop grids. IEEE Computer Graphics and Applications, preprint, (99):1–1, 2010.

2. Vibhor Aggarwal, Kurt Debattista, Piotr Dubla, Thomas Bashford-Rogers, and Alan Chalmers. Time-constrained high-fidelity rendering on local desktop grids. In EGPGV, pages 103–110, 2009.

3. M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, et al. A view of cloud computing. Communications of the ACM, 53(4):50–58, 2010.

4. R. Buyya et al. High Performance Cluster Computing: Architectures and Systems (Volume 1). Prentice Hall, Upper Saddle River, NJ, USA, 1:999, 1999.

5. P. Mell and T. Grance. The NIST definition of cloud computing (draft). NIST Special Publication, 800:145, 2011.

6. I. Raicu, I. T. Foster, and Y. Zhao. Many-task computing for grids and supercomputers. In Many-Task Computing on Grids and Supercomputers, 2008. MTAGS 2008. Workshop on, pages 1–11. IEEE, 2008.


Iterative Partitioning and Labelling of Point Cloud Data

Sandro Spina

Department of Computer Science
University of Malta

Over the past few years, the acquisition of 3D point information representing the structure of real-world objects has become common practice in many areas [4]. This acquisition process [1] has traditionally been carried out using 3D scanning devices based on laser or structured-light techniques. Professional-grade 3D scanners are nowadays capable of producing highly accurate data at sampling rates of approximately a million points per second. Moreover, the popularisation of algorithms and tools capable of generating relatively accurate virtual representations of real-world scenes from photographs, without the need for expensive and specialised hardware, has led to an increase in the amount and availability of 3D point cloud data. The management and processing of these huge volumes of scanned data is quickly becoming a problem [2].

Fig. 1. Section of Mnajdra pre-historic temple scanned by Heritage Malta

Following the on-site acquisition process, substantial data processing is usually carried out. This scenario is making algorithms capable of analysing and processing point clouds efficiently increasingly important. In general, this post-processing takes a set of points as input (the point cloud) and returns a set partition of the input. For example, point clouds acquired from external sites would usually require a cleaning process in order to remove points which are not relevant to the site. The input would thus consist of all points acquired, and the output a set partition consisting of two sets: the first storing those points which are relevant and should be kept, and the second storing those which should be discarded. In many areas, another important post-processing task is that of grouping together points which logically fall under the same semantic definition, for example all points making up the floor of a site, or the windows, the benches, the doors, etc.

Fig. 2. Recognition and reconstruction of scanned 3D indoor scene [3]

In a similar fashion, techniques have been developed which address the problem of reconstructing scanned indoor scenes into their respective components; Figure 2 illustrates one such example. In these cases the challenges are mainly related to clutter, resulting in both object interference and occlusions. This presentation focuses on a point cloud segmentation/partitioning pipeline which is being developed in order to facilitate (and automate) post-processing tasks following point cloud acquisition; we address both scenarios described above. The pipeline consists of two main stages: a segmentation stage and a fitting stage. The initial segmentation stage [5] partitions the input point cloud into the various surfaces making up the site and produces a structure graph. The fitting stage then tries to maximise the usage of every surface (node) in the structure graph. The presentation outlines current research progress, objectives and future directions.
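
A skeletal view of the two stages, in which segment, adjacent and fit are hypothetical placeholders for the actual segmentation, adjacency and fitting procedures:

def build_structure_graph(points, segment, adjacent):
    # Segmentation stage: partition the cloud into surfaces, then connect
    # surfaces that touch into a structure graph (adjacency-list form).
    surfaces = segment(points)                     # list of point subsets
    graph = {i: [] for i in range(len(surfaces))}
    for i in range(len(surfaces)):
        for j in range(i + 1, len(surfaces)):
            if adjacent(surfaces[i], surfaces[j]):
                graph[i].append(j)
                graph[j].append(i)
    return surfaces, graph

def label_surfaces(surfaces, graph, fit):
    # Fitting stage: attempt to assign a semantic label to every surface,
    # using its own points and its neighbourhood in the structure graph.
    return {i: fit(surface, graph[i]) for i, surface in enumerate(surfaces)}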

References

1. Fausto Bernardini and Holly E. Rushmeier. The 3D model acquisition pipeline. Comput. Graph. Forum, 21(2):149–172, 2002.

2. Paolo Cignoni and Roberto Scopigno. Sampled 3D models for CH applications: A viable and enabling new medium or just a technological exercise? J. Comput. Cult. Herit., 1:2:1–2:23, June 2008.

3. Liangliang Nan, Ke Xie, and Andrei Sharf. A search-classify approach for cluttered indoor scene understanding. ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2012), 31(6), 2012.

4. Heinz Rüther. Documenting Africa's cultural heritage. In Proceedings of the 11th International Symposium VAST: Virtual Reality, Archaeology and Cultural Heritage, 2010.

5. Sandro Spina, Kurt Debattista, Keith Bugeja, and Alan Chalmers. Point cloud segmentation for cultural heritage sites. In Franco Niccolucci, Matteo Dellepiane, Sebastian Pena Serna, Holly Rushmeier, and Luc Van Gool, editors, VAST11: The 12th International Symposium on Virtual Reality, Archaeology and Intelligent Cultural Heritage, Full Papers, pages 41–48, Prato, Italy, 2011. Eurographics Association.
