
Machine Vision and Applications · DOI 10.1007/s00138-015-0737-3

SURVEY

Leaf segmentation in plant phenotyping: a collation study

Hanno Scharr¹ · Massimo Minervini² · Andrew P. French³ · Christian Klukas⁹ · David M. Kramer⁵ · Xiaoming Liu⁶ · Imanol Luengo³ · Jean-Michel Pape⁴ · Gerrit Polder⁷ · Danijela Vukadinovic⁷ · Xi Yin⁶ · Sotirios A. Tsaftaris²,⁸

Received: 19 April 2015 / Revised: 11 September 2015 / Accepted: 27 October 2015
© Springer-Verlag Berlin Heidelberg 2015

Abstract Image-based plant phenotyping is a growing application area of computer vision in agriculture. A key task is the segmentation of all individual leaves in images. Here we focus on the most common rosette model plants, Arabidopsis and young tobacco. Although leaves do share appearance and shape characteristics, the presence of occlusions and variability in leaf shape and pose, as well as imaging conditions, render this problem challenging. The aim of this paper is to compare several leaf segmentation solutions on a unique and first-of-its-kind dataset containing images from typical phenotyping experiments. In particular, we report and discuss methods and findings of a collection of submissions for the first Leaf Segmentation Challenge of the Computer Vision Problems in Plant Phenotyping workshop in 2014. Four methods are presented: three segment leaves by processing the distance transform in an unsupervised fashion, and the other via optimal template selection and Chamfer matching. Overall, we find that although separating plant from background can be accomplished with satisfactory accuracy (>90 % Dice score), individual leaf segmentation and counting remain challenging when leaves overlap. Additionally, accuracy is lower for younger leaves. We also find that variability in datasets does affect outcomes. Our findings motivate further investigation and development of specialized algorithms for this particular application, and show that challenges of this form are ideally suited for advancing the state of the art. Data are publicly available (online at http://www.plant-phenotyping.org/datasets) to support future challenges beyond segmentation within this application domain.

MM and SAT acknowledge a Marie Curie Action: "Reintegration Grant" (Grant #256534) of the EU's Seventh Framework Programme (FP7/2007-2013). HS acknowledges funding from EU-FP7 no. 247947 (GARNICS). HS, JMP, and CK acknowledge the support of the German-Plant-Phenotyping Network, which is funded by the German Federal Ministry of Education and Research (Project Identification Number: 031A053). XY, XL, and DK acknowledge the support of the US Department of Energy, Office of Science, Basic Energy Sciences Program (DE-FG02-91ER20021) and the MSU Center for Advanced Algal and Plant Phenotyping.

✉ Massimo Minervini
[email protected]

✉ Sotirios A. Tsaftaris
[email protected]

Hanno Scharr
[email protected]

Andrew P. French
[email protected]

Christian Klukas
[email protected]

David M. Kramer
[email protected]

Xiaoming Liu
[email protected]

Imanol Luengo
[email protected]

Jean-Michel Pape
[email protected]

Gerrit Polder
[email protected]


Danijela Vukadinovic
[email protected]

Xi Yin
[email protected]

1 Institute of Bio- and Geosciences: Plant Sciences (IBG-2), Forschungszentrum Jülich GmbH, Jülich, Germany

2 IMT Institute for Advanced Studies, Lucca, Italy

3 Schools of Biosciences and Computer Science, Centre for Plant Integrative Biology, University of Nottingham, Nottingham, UK

4 Department of Molecular Genetics, Leibniz Institute of Plant Genetics and Crop Plant Research (IPK), Gatersleben, Germany

5 Departments of Energy Plant Research Lab, and Biochemistry and Molecular Biology, Michigan State University, East Lansing, MI, USA

6 Department of Computer Science and Engineering, Michigan State University, East Lansing, MI, USA

7 Greenhouse Horticulture, Wageningen University and Research Centre, Wageningen, Netherlands

8 School of Engineering, University of Edinburgh, Edinburgh, UK

9 LemnaTec GmbH, Aachen, Germany



Keywords Plant phenotyping · Leaf · Multi-instance segmentation · Machine learning

1 Introduction

The study of a plant's phenotype, i.e. its performance and appearance, in relation to different environmental conditions is central to understanding plant function. Identifying and evaluating phenotypes of different cultivars (or mutants) of the same plant species is relevant to, e.g., seed production and plant breeding. One of the most sought-after traits is plant growth, i.e. a change in mass, which directly relates to yield. Biologists grow model plants, such as Arabidopsis (Arabidopsis thaliana) and tobacco (Nicotiana tabacum), in controlled environments and monitor and record their phenotype to investigate general plant performance. While previously such phenotypes were annotated manually by experts, image-based nondestructive approaches have recently been gaining attention among plant researchers as a means to measure and study visual phenotypes of plants [23,26,39,54].

In fact, most experts now agree that the lack of reliable and automated algorithms to extract fine-grained information from these vast datasets forms a new bottleneck in our understanding of plant biology and function [37]. We must accelerate the development and deployment of such computer vision algorithms, since according to the Food and Agriculture Organization of the United Nations (FAO), large-scale experiments in plant phenotyping are a key factor in meeting the agricultural needs of the future, one of which is increasing crop yield to feed 11 billion people by 2050.

Yield is related to plant mass, and the current gold standard for measuring mass is weighing the plant; however, this is invasive and destructive. Several specialized algorithms have been developed to measure whole-plant properties, particularly plant size [6,17,24,29,38,52,54,60]. Nondestructive measurement of a plant's projected leaf area (PLA), i.e. the count of plant pixels in top-view images, is considered a good approximation of plant size for rosette plants and is in current use. However, when studying growth, PLA reacts relatively weakly, as it aggregates growing and non-growing leaves, whereas per-leaf-derived growth (implying a per-leaf segmentation) shows a faster and clearer response. Thus, for example, growth regulation [5] and stress situations [26] can be evaluated in more detail. Additionally, since the growth stages of a plant are usually based on the number of leaves [15], an estimate of leaf count as provided by leaf segmentation is beneficial.

However, obtaining such refined information at the individual leaf level (as for example in [55]), which could help us identify even more important plant traits, is particularly challenging from a computer vision perspective. Plants are not static but changing organisms, with complexity in shape and appearance increasing over time. Over a period of hours, leaves move and grow; the whole plant changes over days or even months, during which the surrounding environmental (as well as measurement) conditions may also vary.

Considering also the presence of occlusions, it is not surprising that the segmentation of leaves from single-view images (a multi-instance image segmentation problem) remains challenging even in the controlled imaging of model plants. Motivated by this, we organized the Leaf Segmentation Challenge (LSC) of the Computer Vision Problems in Plant Phenotyping (CVPPP 2014) workshop,¹ held in conjunction with the 13th European Conference on Computer Vision (ECCV), to assess the current state of the art.

This paper offers a collation study and analysis of several methods from the LSC challenge, and also from the literature. We briefly describe the annotated dataset, the first of its kind, that was used to test and evaluate the methods for the segmentation of individual leaves in image-based plant phenotyping experiments (see Fig. 1 and also [46]). Colour images in the dataset show top-down views of rosette plants. Two datasets show different cultivars of Arabidopsis (A. thaliana), while a third shows tobacco (N. tabacum) under different treatments. We manually annotated leaves in these images to provide ground truth segmentations and defined appropriate evaluation criteria. Several methods are briefly presented, and in the results section we discuss and evaluate each method.

The remainder of this article is organized as follows: Sect. 2 offers a short literature review, while Sect. 3 defines the adopted evaluation criteria. Section 4 presents the datasets and annotations used to support the LSC challenge, which is described in Sect. 5. Section 6 describes the methods compared in this study, with their performance and results discussed in Sect. 7. Finally, Sect. 8 offers conclusions and outlook.

¹ http://www.plant-phenotyping.org/CVPPP2014


Fig. 1 Example images of Arabidopsis (A1, A2) and tobacco (A3) from the datasets used in this study [46]


2 Related work

At first glance, the problem of leaf segmentation appears similar to leaf identification and isolated leaf segmentation (see e.g. [12–14,28,48,58]), although as we will see later it is not. Research in these areas has been motivated by several datasets showing leaves in isolation, cut from plants and imaged individually, or showing leaves on the plant but with a single leaf encompassing a large field of view (e.g. imaged via a smartphone application). This problem has been addressed in an unsupervised [48,58], shape-based [13,14,28], and interactive [12–14] fashion.

However, the problem at hand is radically different. The goal, as the illustrative example of Fig. 1 shows, is not to identify the plant species (usually known in this context) but to accurately segment each leaf in an image showing a whole plant. This multi-instance segmentation problem is exceptionally complex in the context of this application. This is not only due to the variability in shape, pose, and appearance of leaves, but also due to the lack of clearly discernible boundaries among overlapping leaves under typical imaging conditions, where a top-view fixed camera is used. Several authors have dealt with the segmentation of a live plant from the background to measure growth, using unsupervised [17,24] and semi-supervised methods [34], but not with the segmentation of individual leaves. The use of colour in combination with depth images or multiple images for supervised or unsupervised plant segmentation is also common practice [4,10,27,44,47,49,50,56].

Several authors have considered leaf segmentation in a tracking context, where temporal information is available. For example, Yin et al. [60] segment and track the leaves of Arabidopsis in fluorescence images using a Chamfer-derived energy functional to match available segmented leaf templates to unseen data. Dellen et al. [18] use temporal information in a graph-based formulation to segment and track leaves in a high spatial and temporal resolution sequence of tobacco plants. Aksoy et al. [3] track leaves over time, merging segments derived by superparamagnetic clustering and exploiting the angular stability of leaves. De Vylder et al. [16] use an active contour formulation to segment and track Arabidopsis leaves in time-lapse fluorescence images.

Even in the general computer vision literature, this type of similar-appearance, multi-instance problem is not well explored. Although several interactive approaches exist [22,40], user interaction inherently limits throughput. Therefore, here we discuss automated learning-based object segmentation approaches, which might be adaptable to leaf segmentation. Wu and Nevatia [57] present an approach that detects and segments multiple, partially occluded objects in images, relying on a learned, boosted whole-object segmentor and several part detectors. Given a new image, pixels showing part responses are extracted, and a joint likelihood estimation including inter-object occlusion reasoning is maximized to obtain final segmentations. Notably, they test their approach on classical pedestrian datasets, where appearance and size variation does exist; in leaf segmentation, where neighbouring leaves are somewhat similar, this approach might yield less appealing results. Another interesting work [45] relies on Hough voting to jointly detect and segment objects. Interestingly, beyond pedestrian datasets they also use a dataset of house windows, where appearance and scale variation is high (as is common also for leaves), but the objects do not overlap. Finally, graphical methods have also been applied to resolve and segment overlapping objects [25], and were tested also on datasets showing multiple horses.

Evidently, the evaluation and development of leaf segmentation algorithms on a common reference dataset of individual images, without temporal information, has until now been lacking, and this is therefore the main focus of this paper.


3 Evaluation measures

Measuring multi-object segmentation accuracy is an active topic of research, with several metrics previously proposed [31–33,42]. For the challenge and this study, we adopted several evaluation criteria and devised Matlab implementations. Some of these metrics are based on the Dice score of binary segmentations:

$$\mathrm{Dice}\,(\%) = \frac{2\,|P_{\mathrm{gt}} \cap P_{\mathrm{ar}}|}{|P_{\mathrm{gt}}| + |P_{\mathrm{ar}}|}, \qquad (1)$$

measuring the degree of overlap between the ground truth $P_{\mathrm{gt}}$ and algorithmic result $P_{\mathrm{ar}}$ binary segmentation masks.

Overall, two groups of criteria were used. To evaluate segmentation accuracy we used:

– Symmetric Best Dice (SBD), the symmetric average Dice among all objects (leaves), where for each input label the ground truth label yielding maximum Dice is used for averaging, to estimate average leaf segmentation accuracy. Best Dice (BD) is defined as

$$\mathrm{BD}(L^{a}, L^{b}) = \frac{1}{M} \sum_{i=1}^{M} \max_{1 \le j \le N} \frac{2\,|L^{a}_{i} \cap L^{b}_{j}|}{|L^{a}_{i}| + |L^{b}_{j}|}, \qquad (2)$$

where $|\cdot|$ denotes leaf area (number of pixels), and $L^{a}_{i}$ for $1 \le i \le M$ and $L^{b}_{j}$ for $1 \le j \le N$ are sets of leaf object segments belonging to the leaf segmentations $L^{a}$ and $L^{b}$, respectively. SBD between $L^{\mathrm{gt}}$, the ground truth, and $L^{\mathrm{ar}}$, the algorithmic result, is defined as

$$\mathrm{SBD}(L^{\mathrm{ar}}, L^{\mathrm{gt}}) = \min\left\{\mathrm{BD}(L^{\mathrm{ar}}, L^{\mathrm{gt}}),\ \mathrm{BD}(L^{\mathrm{gt}}, L^{\mathrm{ar}})\right\}. \qquad (3)$$

– Foreground–Background Dice (FBD), the Dice score of the foreground mask (i.e. the whole plant, taken as the union of all leaf labels), to evaluate the delineation of the plant from the background obtained algorithmically with respect to the ground truth.

We should note that we also considered the Global Consistency Error (GCE) [32] and Object-level Consistency Error (OCE) [42] metrics, which are suited for evaluating the segmentation of multiple objects. However, we found that they are harder to interpret and that the SBD is capable of capturing the relevant leaf segmentation errors.

To evaluate how well an algorithm identifies the correct number of leaves present, we relied on:

– Difference in Count (DiC), the number of leaves in the algorithm's result minus the number in the ground truth:

$$\mathrm{DiC} = \#L^{\mathrm{ar}} - \#L^{\mathrm{gt}}. \qquad (4)$$

– |DiC|, the absolute value of DiC.
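To make these definitions concrete, the following is a minimal Python sketch of the four measures, assuming integer label images in which 0 denotes background (as in the released annotations). The official challenge evaluation used equivalent Matlab functions, so this is an illustration rather than the reference implementation.

```python
import numpy as np

def dice(a, b):
    """Dice score of two binary masks, Eq. (1)."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def best_dice(La, Lb):
    """BD(La, Lb), Eq. (2): for each leaf of La, the best Dice against
    any leaf of Lb, averaged over La's leaves."""
    return np.mean([max((dice(La == i, Lb == j)
                         for j in np.unique(Lb[Lb > 0])), default=0.0)
                    for i in np.unique(La[La > 0])])

def sbd(Lar, Lgt):
    """Symmetric Best Dice, Eq. (3)."""
    return min(best_dice(Lar, Lgt), best_dice(Lgt, Lar))

def fbd(Lar, Lgt):
    """Foreground-Background Dice: whole plant (union of all leaf labels)
    versus ground truth."""
    return dice(Lar > 0, Lgt > 0)

def dic(Lar, Lgt):
    """Difference in Count, Eq. (4); abs(dic(...)) gives |DiC|."""
    return len(np.unique(Lar[Lar > 0])) - len(np.unique(Lgt[Lgt > 0]))
```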

In addition to the metrics adopted for the challenge, in Sect. 7.5 we augment our evaluation with metrics that prioritize leaf shape and boundary accuracy.

4 Leaf data and manual annotations

4.1 Imaging setup and protocols

We devised imaging apparatus, setups, and experiments emulating typical phenotyping experiments to acquire three imaging datasets, summarized in Table 1, to support this study. They were acquired in two different labs with highly diverse equipment. Detailed information is available in [46], but brief descriptions are given below for completeness.

Arabidopsis data: Arabidopsis images were acquired in two data collections, in June 2012 and in September–October 2013, hereafter named Ara2012 and Ara2013, respectively, both consisting of top-view time-lapse images of several Arabidopsis thaliana rosettes arranged in trays. Arabidopsis images were acquired using a setup for investigating affordable hardware for plant phenotyping.² Images were captured with a 7 megapixel consumer-grade camera (Canon PowerShot SD1000) during day time only, every 6 h over a period of 3 weeks for Ara2012, and every 20 min over a period of 7 weeks for Ara2013.

Acquired raw images (3108×2324 pixels, pixel resolution of ∼0.167 mm) were first saved as uncompressed (TIFF) files, and subsequently encoded using the lossless compression standard available in the PNG file format [53].

Tobacco data: Tobacco images were acquired using a robotic imaging system for the investigation of automated plant treatment optimization by an artificial cognitive system.³ The robot head consisted of two stereo camera systems, black-and-white and colour, each comprising two Point Grey Grasshopper cameras with 5 megapixel resolution (2448×2048, pixel size 3.45 µm) and high-quality lenses (Schneider Kreuznach Xenoplan 1.4/17-0903). We added lightweight white and NIR LED light sources to the camera head.

Using this setup, each plant was imaged separately from different but fixed poses. In addition, for each pose, small-baseline stereo image pairs were captured with each single camera by a suitable robot movement, allowing for 3D reconstruction of the plant. For the top-view pose, the distance between the camera centre and the top edge of the pot varied between 15 and 20 cm across plants but was fixed per plant, resulting in lateral resolutions between 20 and 25 pixel/mm.

² http://www.phenotiki.com
³ http://www.garnics.eu/


Table 1 Summary of information of the Arabidopsis and tobacco experiments providing the three datasets

Experiment | Subjects | Wild-types | Mutants | Period (days) | Image resolution | Field of view | Per-plant resolution
Ara2012 | 19 | Col-0 | No | 21 | 7 MPixel | Whole tray | 0.25 MPixel
Ara2013 | 24 | Col-0 | Yes (4) | 49 | 7 MPixel | Whole tray | 0.25 MPixel
Tobacco (23.01.2012) | 20 | Samsun | No | 18 | 5 MPixel | Single plant | 5 MPixel
Tobacco (16.02.2012) | 20 | Samsun | No | 20 | 5 MPixel | Single plant | 5 MPixel
Tobacco (15.05.2012) | 20 | Samsun | No | 18 | 5 MPixel | Single plant | 5 MPixel
Tobacco (10.08.2012) | 20 | Samsun | No | 30 | 5 MPixel | Single plant | 5 MPixel


The data used for this study stemmed from experiments aimed at acquiring training data, and contained a single plant imaged directly from above (top view). Images were acquired every hour, 24/7, for up to 30 days.

4.2 Selection of data for training and testing

As part of the benchmark data for the LSC, we released three datasets, named 'A1', 'A2', and 'A3', respectively, consisting of single-plant images of Arabidopsis and tobacco, each accompanied by manual annotations (discussed in the next section) of plant and leaf pixels. Examples are shown in Fig. 1.

From the Ara2012 dataset, to form the 'A1' dataset, we extracted cropped regions of 161 individual plants (500×530 pixels) from tray images, spanning a period of 12 days. An additional 40 images (530×565 pixels) form 'A2', extracted from the Ara2013 dataset and spanning a period of 26 days. From the tobacco dataset, to form the 'A3' dataset, we extracted 83 images (2448×2048 pixels). Each dataset was split into training and testing sets for the challenge (cf. Sect. 5).

The data differ in resolution, fidelity, and scene complexity, with plants appearing in isolation or together with other plants (in trays), with plants belonging to different cultivars (or mutants), and subjected to different treatments.

Due to the complexity of the scene and of the plant objects, the datasets present a variety of challenges with respect to analysis. Since our goal was to produce a wide variety of images, images corresponding to several challenging situations were included.

Specifically, in 'A1' and 'A2', on occasion a layer of water from irrigation of the tray causes reflections. As the plants grow, leaves tend to overlap, resulting in severe leaf occlusions. Nastic movements (movements of a leaf, usually on the vertical axis) make leaves appear of different shapes and sizes from one time instant to another. In 'A3', besides shape changes due to nastic movements, different leaf shapes also appear due to different treatments. Under high illumination conditions (one of the treatment options in the tobacco experiments), plants stay more compact, with partly wrinkled, severely overlapping leaves. Under lower light conditions, leaves are more rounded and larger.

Furthermore, the 'A1' images of Arabidopsis present a complex and changing background, which could complicate plant segmentation. A portion of the scene is slightly out of focus (due to the large field of view) and appears blurred, and some images include external objects such as tape or other fiducial markers. In some images, certain pots have moss on the soil, or have dry soil and appear yellowish (due to increased ambient temperature for a few days). 'A2' presents a simpler scene (e.g. more uniform background, sharper focus, no moss); however, it includes mutants of Arabidopsis with different phenotypes related to rosette size (some genotypes produce very small plants) and leaf appearance, with major differences in both shape and size.

The images of tobacco in 'A3' have considerably higher resolution, making computational complexity more relevant. Additionally, in 'A3', plants undergo a wide range of treatments changing their appearance dramatically, while Arabidopsis is known to have different leaf shapes among mutants. Finally, other factors such as self-occlusion, shadows, leaf hairs, and leaf colour variation make the scene even more complex.

4.3 Annotation strategy

Figure 2 depicts the procedure we followed to annotate the image data. First, we obtained a binary segmentation of the plant objects in the scene in a computer-aided fashion. For Arabidopsis, we used the approach based on active contours described in [34], while for tobacco, a simple colour-based approach for plant segmentation was used. The result of this segmentation was manually refined using raster graphics editing software. Next, within the binary mask of each plant, we delineated individual leaves, following an approach completely based on manual annotation. A pixel with black colour denotes background, while all other colours are used to uniquely identify the leaves of the plants in the scene. Across the frames of a time-lapse sequence, we consistently used the same colour code to label occurrences of the same leaf. The labelling procedure always involved two annotators to reduce observer variability, one annotating the dataset and the other inspecting the result.

Fig. 2 Schematic of the workflow to annotate the images. Plants were first delineated in the original image and then individual leaves were labelled



Note that the LSC did not involve leaf tracking over time; therefore, all individual plant images were considered separately, ignoring any temporal correspondence.

4.4 File types and naming conventions

Plant images were encoded using the lossless PNG [53] format, and their dimensions varied. Plant objects appeared centred in the (cropped) images. Segmentation masks were image files encoded as indexed PNG, where each segmented leaf is identified by a unique (per image) integer value, starting from '1', whereas '0' denotes background. The union of all pixel labels greater than zero provides the ground truth plant segmentation mask. A colour index palette was included within the file for visualization purposes. The filenames have the form:

– plantXXX_rgb: the original RGB colour image;
– plantXXX_label: the labelled image;

where XXX is an integer number. Note that plants were not numbered sequentially.
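As an illustration of this convention, one image/annotation pair can be loaded in Python as follows (the file name plant007 is hypothetical):

```python
import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("plant007_rgb.png").convert("RGB"))
labels = np.asarray(Image.open("plant007_label.png"))  # indexed PNG: palette indices

leaf_ids = np.unique(labels)
leaf_ids = leaf_ids[leaf_ids > 0]     # '0' denotes background
plant_mask = labels > 0               # ground truth plant segmentation mask
print(f"{len(leaf_ids)} leaves, PLA = {plant_mask.sum()} pixels")
```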

5 CVPPP 2014 LSC challenge outline

The LSC challenge was organized by two of the authors (HS and SAT) as part of the CVPPP workshop, which was held in conjunction with the European Conference on Computer Vision (ECCV) in Zürich, Switzerland, in September 2014. Electronic invitations for participation were communicated to a large number of researchers working on computer vision solutions for plant phenotyping, via computer vision and pattern recognition mailing lists, and via several phenotyping consortia and networks such as DPPN,⁴ IPPN,⁵ EPPN,⁶ and iPlant.⁷ Interested parties were asked to visit the website and register for the challenge after agreeing to the rules of participation and providing contact information via an online form.

⁴ http://www.dppn.de/
⁵ http://www.plant-phenotyping.org/
⁶ http://www.plant-phenotyping-network.eu/


Overall, 25 teams registered for the study and downloaded training data, 7 downloaded testing data, and finally 4 submitted manuscripts and testing results for review at the workshop. For this study, we invited several of the participants (see Sect. 6).

5.1 Training phase

An example preview of the training set (i.e. one example image from each of the three datasets, as shown in Fig. 1) was released in March 2014 on the CVPPP 2014 website. The full training set, consisting of colour images of plants and annotations, was released in April 2014.

A total of 372 PNG images were available, in 186 pairs of raw RGB colour images and corresponding annotations in the form of indexed images, namely 128, 31, and 27 images for 'A1', 'A2', and 'A3', respectively. Images of many different plants were included at different time points (growth stages). Participants were unaware of any temporal relationships among images and were expected to treat each image individually. Participants were allowed to tailor pipelines to each dataset and could choose supervised or unsupervised methods. Matlab evaluation functions were also provided to help participants assess performance on the training set using the criteria discussed in Sect. 3. The data and evaluation script are in the public domain.⁸

5.2 Testing phase

We released 98 colour images for testing (i.e. 33, 9, and 56 images from 'A1', 'A2', and 'A3', respectively) and kept the respective label images hidden. Images here corresponded to plants at different growth stages (with respect to those included in the training set) or to completely new and unseen plants. Again, this was unknown to the participants. A short testing period was allowed: the testing set was released on June 9, 2014, and authors were asked to submit their results by June 17, 2014, and accompanying papers by June 22, 2014.

In order to assess the performance of their algorithms on the test set, participants were asked to email the organizers a ZIP archive following a predefined folder/file structure that enabled automated processing of the results. Within 24 h, all participants who submitted testing results received their evaluation, using the same evaluation criteria as for training, as well as summary tables in LaTeX and individual per-image results in CSV format. Algorithms and papers were subject to peer review, and the leading algorithm [41] was presented at the CVPPP workshop.

⁷ http://www.iplantcollaborative.org/
⁸ http://www.plant-phenotyping.org/CVPPP2014-dataset



6 Methods

We briefly present the leaf segmentation methods used in this collation study. We include methods not only from challenge participants but also others, for completeness and to offer a broader view of the state of the art. Overall, three methods rely on post-processing of distance maps to segment leaves, while one uses a database of templates which are matched using a distance metric. Each method's description aims to provide an understanding of the algorithm, and wherever appropriate, we offer relevant citations for readers seeking additional information.

Please note that participating methods were given access to the training set (including ground truth) and to the testing set without ground truth.

6.1 IPK Gatersleben: segmentation via 3D histograms

The IPK pipeline relies on unsupervised clustering and distance maps to segment leaves. Details can be found in [41]. The overall workflow is depicted in Fig. 3 and summarized in the following.

1. Supervised foreground/background segmentation utilizing 3D histogram cubes, which encode the probability of any observed pixel colour in the given training set belonging to the foreground or background; and

2. Unsupervised feature extraction of leaf centre points and leaf split point detection for individual leaf segmentation, using a distance map, a skeleton, and the corresponding graph representation (cf. Fig. 3).

To avoid partitioning the 3D histogram into rectangular regions [30], a direct look-up in the 3D histogram cubes is used here instead of (multiple) one-dimensional colour component thresholds. For this approach, two 3D histogram cubes, for foreground and background, are accumulated using the provided training data. To improve robustness against illumination variability, input images are converted into the Lab colour space [7]. Entries which are not represented in the training data are estimated by interpolating the surrounding values of a histogram cell. The segmentation results are further processed by morphological operations and cluster analysis to suppress noise and artefacts. The outcome of this operation serves as input to the feature extraction phase, which detects leaf centre points and optimal split points of the corresponding leaf segments.
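The following Python/NumPy sketch illustrates the histogram-cube idea under our own simplifying assumptions (32 bins per channel, no interpolation of empty cells, no morphological post-processing); the IPK implementation itself is in Java and is not reproduced here.

```python
import numpy as np
from skimage.color import rgb2lab

BINS = 32  # bins per Lab channel (illustrative choice)

def lab_bins(rgb):
    """Quantize each pixel's Lab colour into BINS^3 histogram-cube indices."""
    lab = rgb2lab(rgb)
    # L lies in [0, 100]; a and b roughly in [-128, 127].
    scaled = (lab - np.array([0.0, -128.0, -128.0])) / np.array([100.0, 255.0, 255.0])
    return np.clip((scaled * BINS).astype(int), 0, BINS - 1)

def build_cubes(train_pairs):
    """Accumulate foreground/background colour histograms from
    (rgb image, binary mask) training pairs."""
    fg = np.zeros((BINS,) * 3)
    bg = np.zeros((BINS,) * 3)
    for rgb, mask in train_pairs:
        idx = lab_bins(rgb).reshape(-1, 3)
        m = mask.reshape(-1).astype(bool)
        np.add.at(fg, tuple(idx[m].T), 1)
        np.add.at(bg, tuple(idx[~m].T), 1)
    return fg / fg.sum(), bg / bg.sum()

def segment(rgb, fg_cube, bg_cube):
    """Direct cube look-up: a pixel is foreground if its colour is more
    probable under the foreground histogram than the background one."""
    idx = lab_bins(rgb)
    p_fg = fg_cube[idx[..., 0], idx[..., 1], idx[..., 2]]
    p_bg = bg_cube[idx[..., 0], idx[..., 1], idx[..., 2]]
    return p_fg > p_bg
```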

For this approach, the leaves of Arabidopsis plants in 'A1' and 'A2' are considered compact objects which only partly overlap. In the corresponding Euclidean distance map (EDM), the leaf centre points appear as peaks, which are detected by a maximum search. In the next step, a skeleton image is calculated. To resolve leaf overlaps, split points at the thinnest connection points are detected.

Fig. 3 Workflow of the IPK approach, including the main processing components: segmentation, image feature extraction (including leaf detection), and individual leaf segmentation



Values of the EDM are mapped onto the skeleton image. The resulting image is used to create a skeleton graph, in which leaf centre points, skeleton end-points, and skeleton branch-points are represented as nodes. Edges are created if the corresponding image points are connected by a skeleton line. Additionally, a list of the positions and minimal distances of each particular edge segment is saved as an edge attribute. This list is used to detect the exact positions of the leaf split points. To find the split point(s) between two leaf centre points, all nodes and edges of the graph structure connecting the two points are traversed, and the position with the minimal EDM value is determined. This process is repeated if there are still connections between the two leaves which need to be separated. To calculate the split line belonging to a particular minimal-EDM point, two coordinates on the leaf border are determined: the nearest background pixel (first point), and the nearest background pixel on the opposite side of the split point (second point). The connecting line is used as a border during the segmentation of overlapping leaves. In a final step, the separated leaves are labelled by a region growing algorithm.

Our approach was implemented in Java and tested on a desktop PC with a 3.4 GHz processor and 32 GB memory; Java was configured to use a maximum of 4 GB RAM. On average, each image takes 1.6, 1.2, and 9 s for 'A1', 'A2', and 'A3', respectively.

6.2 Nottingham: segmentation with SLIC superpixels

A superpixel-based method that does not require any training is used; the training dataset was used for parameter tuning only. The processing steps, visualized in Fig. 5, can be summarized as follows:

1. Superpixel over-segmentation in Lab colour space using SLIC [1];
2. Foreground (plant) extraction using simple seeded region growing in superpixel space;
3. Distance map calculation on the extracted foreground;
4. Individual leaf seed matching, by identifying the superpixels whose centroids lie at local maxima of the distance map; and
5. Individual leaf segmentation, by applying the watershed transform with the extracted seeds.

Steps (1) and (2) extract the whole plant, while (3), (4), and (5) extract the individual leaves. We now present a detailed explanation of each step, with Figs. 4 and 5 summarizing the process.

Preparation. A given RGB image is first converted to the Lab colour space [7], to enhance discrimination between foreground leaves and background. Then, SLIC superpixels [1] are computed; a fixed number of superpixels is computed over the image, and empirically 2000 superpixels were found to provide good coverage of the leaves. The mean colour of the 'a' channel (which characterizes the green colour of the image well) is extracted for each superpixel. A Region Adjacency Map (superpixel neighbourhood graph) is created from the resulting superpixels.

Fig. 4 Example images of each of the steps in the Nottingham approach. First row: original image (left), SLIC superpixels (centre), thresholded superpixels (right). Second row: distance map with superpixel centroids (left), filtered superpixel centroids (middle), watershed segmentation (right)

Fig. 5 The Nottingham leaf segmentation process


Foreground extraction. Given the mean 'a'-channel colour of each superpixel, a simple region growing approach [2] in superpixel space allows the complete plant to be extracted. The superpixel with the lowest mean colour (the most strongly green superpixel) in Lab space is used as the initial seed. However, since 'A1' and 'A2' do not contain shadows, an even simpler thresholding of the mean colour of each superpixel allows faster yet still accurate segmentation of the plant. The thresholds for 'A1' and 'A2' are set to −25 and −15, respectively.

Leaf identification. Once the plant is extracted from the background, superpixels not belonging to the plant are removed. A distance map is computed (after first removing strong edges using the Canny detector [11]), and the centroids are calculated for all superpixels. A local-maxima filter is applied to extract the superpixels that lie at the centres of leaves. A superpixel is selected as a seed only if it is the most central in the leaf compared to its neighbours within a radius. This is implemented by considering the superpixel centroid's value in the distance map and filtering out superpixels that do not have the maximum value among their neighbours.



Leaf segmentation. Finally, watershed segmentation [51] is applied with the obtained initial seeds over the image space, yielding the individual leaf segmentation.
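A rough scikit-image rendition of this pipeline is sketched below. Parameter values (the colour threshold and the local-maximum radius) are illustrative stand-ins for the tuned per-dataset values, the seeded region-growing variant is replaced by the simpler thresholding described above, and the file name is hypothetical.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import io, color, measure, segmentation

rgb = io.imread("plant001_rgb.png")[..., :3]
lab = color.rgb2lab(rgb)

# (1) SLIC over-segmentation; ~2000 superpixels as in the paper.
sp = segmentation.slic(rgb, n_segments=2000, compactness=10, start_label=1)

# (2) Foreground by thresholding each superpixel's mean 'a' channel
#     (negative 'a' means green); -20 is an illustrative threshold.
a_mean = ndi.mean(lab[..., 1], labels=sp, index=np.arange(1, sp.max() + 1))
fg = (a_mean < -20)[sp - 1]

# (3) Euclidean distance map of the foreground mask.
dist = ndi.distance_transform_edt(fg)

# (4) Seeds: superpixels whose centroid is a local maximum of the
#     distance map within a fixed radius (15 px here).
seeds = np.zeros_like(sp)
for region in measure.regionprops(np.where(fg, sp, 0)):
    r, c = map(int, region.centroid)
    patch = dist[max(r - 15, 0):r + 16, max(c - 15, 0):c + 16]
    if dist[r, c] > 0 and dist[r, c] >= patch.max():
        seeds[r, c] = region.label

# (5) Watershed from the seeds, restricted to the foreground.
leaves = segmentation.watershed(-dist, seeds, mask=fg)
```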

Using a Python implementation running on a quad-core desktop with an i3-4130 (3.4 GHz) processor and 8 GB memory, each image takes on average under 1 s for datasets 'A1' and 'A2', and 1–5 s for 'A3'.

Overall, it is a fast method requiring no training, and it could be tuned to achieve much higher accuracy on a per-image basis. The tunable parameters are: (1) the number of superpixels, (2) the compactness of the superpixels, (3) the foreground extractor (threshold or region growing), (4) the parameters of the Canny edge detector, and (5) the colour space for SLIC, the foreground extractor, and the Canny edge detector. All these parameters were tuned on a per-dataset basis using the training set, in order to maximize the mean Symmetric Best Dice score for each dataset. However, they can easily be tuned manually on a per-image basis if required.

6.3 MSU: leaf segmentation with Chamfer matching

The MSU approach extends a multi-leaf alignment and tracking framework [59–61] to the LSC. As discussed in Sect. 2, this framework was originally designed for segmenting and tracking leaves in plant fluorescence videos, where plant segmentation is straightforward due to the clean background. For the LSC, a more advanced background segmentation process was adopted.

The framework is motivated by the well-known Chamfer Matching (CM) [8], which aligns one object instance in an image with a given template. However, since leaves in plant images vary widely, it is infeasible to match leaves with only one template. Therefore, we generate a set of templates with different shapes, scales, and orientations. Specifically, H leaves with representative shapes (e.g. different aspect ratios) are selected from H images of the training set. Each leaf shape is scaled to S different sizes, and each size is rotated to R different orientations. This leads to a set of H × S × R leaf templates (5×9×24 for 'A1' and 'A2', 8×10×24 for 'A3') with labelled tip locations, which are used in the segmentation process.
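The core CM computation can be sketched as follows. This is an illustration of the general technique, not the MSU system itself (which is a Matlab implementation with additional candidate pruning and optimization); function and variable names are ours.

```python
import numpy as np
from scipy import ndimage as ndi

def chamfer_search(edge_map, template, stride=4):
    """Slide a binary leaf-template edge image over a binary edge map and
    return (best CM distance, best top-left placement).

    The CM distance of a placement is the mean distance from each template
    edge pixel to the nearest image edge pixel.
    """
    # Distance-to-nearest-edge for every pixel of the image.
    dt = ndi.distance_transform_edt(~edge_map)
    rr, cc = np.nonzero(template)          # template edge pixel coordinates
    h, w = template.shape
    best_score, best_pos = np.inf, (0, 0)
    for r0 in range(0, edge_map.shape[0] - h + 1, stride):
        for c0 in range(0, edge_map.shape[1] - w + 1, stride):
            score = dt[rr + r0, cc + c0].mean()
            if score < best_score:
                best_score, best_pos = score, (r0, c0)
    return best_score, best_pos
```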

An accurate plant segmentation and edge map are critical for obtaining reliable CM results. To this end, all RGB images are converted into the Lab colour space, and a threshold τ is applied to channel 'a' to estimate a foreground mask (τ is chosen empirically for each dataset: 40 for 'A1', and 30 for 'A2' and 'A3'), which is refined by standard morphological operations. The Sobel edge operator is applied within the foreground segment to generate an edge map. Since 'A3' has more overlapping leaves and the boundaries between leaves are more visible due to shadows, an additional edge map is used, obtained by applying the edge operator to the difference of the 'a' and 'b' channels. Morphological operations are applied to remove small edges (noise) and lines (leaf veins). A mask (Fig. 6a) and edge map (Fig. 6b) are cropped from the RGB image.

Fig. 6 Overview of the MSU approach: training is done once for each plant type (i.e. twice for the three datasets), and pre-processing and segmentation are performed for each image

For each template, we search all possible locations on the edge map and find the location with the minimum CM distance. Doing so for all templates generates an overcomplete set of leaf candidates (Fig. 6c). For each leaf candidate, we compute the CM score, the overlap with the foreground mask, and the angle difference, which measures how well the leaf points towards the centre of the plant. Our goal is to select a subset of the leaf candidates as the segmentation result. First, we delete candidates with large CM scores, small overlap with the foreground mask, or a large angle difference. Second, we apply an optimization process [61] that selects a minimal set of candidates with small CM distances and leaf angle differences while covering the foreground mask as much as possible. Third, all leaf candidates are taken as an initialization, and gradient descent is applied to iteratively delete redundant leaf candidates, leading to a small set of final leaf candidates.

As shown in Fig. 6d, a finite number of templates cannot perfectly match all edges. We apply a multi-leaf tracking procedure [60] to transform each template (i.e. rotation, scaling, and translation) to obtain an optimal match with the edge map. This is done by minimizing the sum of three terms: the average CM score, the difference between the synthesized mask of all candidates and the test image mask, and the average angle difference. The leaf alignment result provides the initialization of the transformation parameters, and gradient descent is used to update these parameters. When a leaf candidate becomes smaller than a threshold, it is removed. After this optimization, the leaf candidates match the edge map much more accurately (Fig. 6e), which remedies the limitation of a finite set of leaf templates. Finally, we use the tracking result and the foreground mask to generate a label image in which all foreground pixels, and only foreground pixels, have labels.



Only one leaf from each of the H training images is used for template generation. The same pre-processing and segmentation procedures are applied independently to each image of the training and testing sets.

Using our Matlab implementation running on a quad-core desktop with a 3.40 GHz processor and 32 GB memory, each image takes on average 63, 49, and 472 s for 'A1', 'A2', and 'A3', respectively.

6.4 Wageningen: leaf segmentation with watersheds

The method consists of two steps, plant segmentation and separate leaf segmentation, illustrated in Fig. 7. Plant segmentation from the background uses supervised classification with a neural network. Since the nature of the three datasets ('A1', 'A2', and 'A3') differs, a separate classifier and post-processing steps are applied to each individual set. The ground truth images are used to mask plant and background pixels. For all images, 3000 pixels of each class are randomly selected for training; when the plant is smaller than 3000 pixels, all plant pixels are used. To separate the plants from the background, four colour and two texture features are used for each pixel. The colour features used in the classification are the red, green, and blue pixel values (R, G, B) and the excess green value (2G−R−B), which highlights green pixels. The texture features are the pixel values of the variance-filtered green channel [62] and of the gradient-magnitude-filtered green channel; these two highlight edges and rough parts in the image.

Fig. 7 Steps of the Wageningen approach, shown on an image from 'A3' (top left, with zoomed detail shown in the red box): test RGB image (top left), neural network-based foreground segmentation (top middle), inverse distance transform of the foreground (top right), watershed basins (bottom left), intersection of basins and the foreground mask (bottom middle), final leaf segmentation after rejecting small regions (bottom right) (colour figure online)


A large range of linear and nonlinear classifiers was tested on each dataset, with a feed-forward (MLP) neural network with one hidden layer of 10 units giving the best results. Morphological operations are applied to the binary image obtained after plant classification, resulting in the plant masks (i.e. a foreground–background segmentation). For 'A1' and 'A2', the morphological operations consist of an erosion followed by a propagation using the original result as mask; small blobs, mainly from moss, are removed this way. For 'A3', all blobs in the image are removed except the largest one.
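As a rough sketch of this feature extraction and classifier, under the assumption that scikit-learn's MLP is an acceptable stand-in for the original Matlab/perClass tooling (filter sizes and solver settings are illustrative):

```python
import numpy as np
from scipy import ndimage as ndi
from sklearn.neural_network import MLPClassifier

def pixel_features(rgb):
    """Stack the six per-pixel features: R, G, B, excess green,
    variance-filtered G, and gradient magnitude of G."""
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    exg = 2 * g - r - b                                     # highlights green
    var_g = ndi.generic_filter(g, np.var, size=5)           # local variance (texture)
    grad_g = ndi.gaussian_gradient_magnitude(g, sigma=1)    # edges / rough parts
    return np.stack([r, g, b, exg, var_g, grad_g], axis=-1)

def train_fg_classifier(rgb, gt_mask, n_per_class=3000, seed=0):
    """Train the foreground/background MLP on up to 3000 random pixels
    per class, as described in the text."""
    rng = np.random.default_rng(seed)
    feats = pixel_features(rgb).reshape(-1, 6)
    labels = gt_mask.reshape(-1).astype(bool)
    idx = np.concatenate([
        rng.choice(np.flatnonzero(labels),
                   min(n_per_class, int(labels.sum())), replace=False),
        rng.choice(np.flatnonzero(~labels), n_per_class, replace=False)])
    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500)  # one layer, 10 units
    clf.fit(feats[idx], labels[idx])
    return clf
```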

In order to remove the moss that occurs in 'A2' and 'A3', and to emphasize the spaces between stems and leaves (cf. Fig. 8), to which the watershed algorithm is highly sensitive, additional colour transformations, shape and spatial filtering, and morphological operations are applied. For 'A2', all components of the foreground segmentation that lie further from the centre of gravity of the foreground mask than 1.5 times its estimated radius are filtered out. The radius $r$ is estimated from the mask area $A$ as $r = (A/\pi)^{1/2}$. Next, the Y component of the YUV colour transformation, giving the luminance, is thresholded with a threshold optimized on the training set (th = 85). For 'A3', there are cases of large moss areas attached to the foreground segmentation mask. To remove them, the compactness $C$ of the foreground mask is first calculated as $C = L^2/(4\pi A)$, where $L$ is the length of the foreground mask contour; $C > 20$ indicates the presence of a large moss area segmented as foreground. In that case, the X component of the XYZ colour transformation, yielding chromatic information, is thresholded (th = 55), and pixels below the threshold are filtered out. In this way, the moss pixels, which have a slightly different colour than the plants, are removed from the foreground image. Next, in order to emphasize the spaces between the leaves and the stems, all foreground masks are corrected with the thresholded Y component of the YUV colour-transformed image, as described for 'A2'.

Fig. 8 Wageningen: accentuating holes



The second step, separate leaf segmentation, is performed using a watershed method [9] applied to the Euclidean distance map of the plant mask resulting from the first step. Initially, the watershed transformation is computed without applying a threshold between the basins. In a second step, basins are successively merged if they are separated by a watershed smaller than a given threshold. The threshold value is tuned on the training set to produce the best result; the thresholds are set to 30, 58, and 70 for datasets 'A1', 'A2', and 'A3', respectively.
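A loose Python approximation of this step is given below. Note one substitution: the explicit merging of basins separated by a shallow watershed (as done in the authors' Halcon routine) is approximated here with h-maxima suppression of the distance map, which likewise fuses seeds whose separating ridge is below the chosen depth; the depth value is illustrative.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.measure import label
from skimage.morphology import h_maxima
from skimage.segmentation import watershed

def leaf_watershed(plant_mask, depth=10):
    """Watershed on the EDM of the plant mask, with shallow basins
    suppressed (approximating the basin-merging threshold)."""
    dist = ndi.distance_transform_edt(plant_mask)
    # Keep only maxima rising at least `depth` above their surroundings,
    # so nearby shallow peaks collapse into a single seed.
    peaks = h_maxima(dist, depth)
    markers = label(peaks)
    return watershed(-dist, markers, mask=plant_mask)
```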

Plant segmentation is done in Matlab 2015a with the perClass classification toolbox (http://perclass.com), on a MacBook with a 2.53 GHz Intel Core 2 Duo. Learning the neural network classifier with a training set of 6000 pixels takes about 4 s per image. Plant segmentation using the trained classifier plus post-processing takes 0.76, 0.73, and 24 s for 'A1', 'A2', and 'A3', respectively. Moss removal and leaf segmentation are performed in Halcon, running on a laptop with a 2.70 GHz processor and 8 GB memory; on average, each image takes 160, 110, and 700 ms for 'A1', 'A2', and 'A3', respectively.

7 Results

In this section, we discuss the performance of each method as evaluated on the testing and training sets. Note that the ground truth was available to the participants (authors of this study) for the training set; however, the testing set ground truth was known only to the organizers of the LSC (i.e. S. A. Tsaftaris, H. Scharr, and M. Minervini) and was unavailable to all others. Training set numbers are provided by the participants (with the same evaluation function and metrics used also on the testing set).

Note that since Nottingham is an unsupervised method, its results directly reflect performance on the whole training set. With MSU, since some leaves in the training set are used to define the templates, some bias could exist, but it is minimal. IPK and Wageningen apply supervised methods to obtain the foreground segmentation, using the whole dataset (Wageningen uses a random selection of 3000 pixels per class per image).

7.1 Plant segmentation from background

Figure 9 shows selected examples of test images from the three datasets. From each dataset we choose two examples: one to show the effectiveness of the methods and one to show their limitations. We show the segmentation outcomes for each method together with the ground truth, and also overlay the values of the evaluation measures on the images.

Overall, we see that most methods perform well in separating plant from background, except when the background presents challenges (e.g. moss), as in the second image shown for 'A1'. Then FBD scores are lower for almost all methods, with IPK and Nottingham showing more robustness. These observations hold across the whole dataset (cf. FBD numbers in Tables 2, 3). Average testing numbers are lower than training numbers for most methods, with the exception of Nottingham, which does significantly better on 'A2' and 'A3' in testing. Given that this method is unsupervised, such behaviour is not unexpected.

7.2 Leaf segmentation and counting

Referring again to Fig. 9 and Tables 2 and 3, let us evaluate visually and quantitatively how well the algorithms segment leaves. When leaves are not overlapping, all methods perform well. Nevertheless, each method exhibits different behaviour. IPK, MSU, and Wageningen obtain higher SBD scores; however, IPK produces straight-line boundaries that are not natural: they should be more curved to better match leaf shape. There also seems to be an interesting relationship between segmentation error and leaf size (see also the next section for effects related to plant size).

In fact, plotting leaf size versus Dice per leaf,⁹ see Fig. 10, we observe that with all methods larger leaves are more accurately delineated, with the exception of the largest few leaves in MSU. Dice for smaller leaves shows more scatter, and smaller leaves are more frequently not detected, as evidenced by the high symbol density at Dice = 0 (dark blue symbols). For small leaves with $(\text{leaf area})^{1/2} \lesssim 20$, Wageningen performs best, detecting more leaves than the others and with higher accuracy. IPK shows better performance than the others in the mid range $40 \lesssim (\text{leaf area})^{1/2} \lesssim 80$, due to higher per-leaf accuracy (see the more dark/black symbols in the region above Dice = 0.95) and fewer non-detected leaves. In the mid range, only Wageningen performs similarly with respect to leaf detection (fewest symbols at Dice = 0), closely followed by MSU.

We should note that measuring SBD and FBD with Dice has some limitations. If a method reports a Dice score of 0.9, the loss of 0.1 can be attributed either to under-segmentation (e.g. loss of a stem in Arabidopsis, imprecise leaf boundary) or to over-segmentation (considering background as plant). Therefore, in Sect. 7.5 we apply two measures that are more sensitive to shape consistency, in order to investigate the solutions' performance with respect to leaf boundaries.

⁹ To measure Dice per leaf, we first find the matches between leaves in the ground truth and in an algorithm's result that maximally overlap, and then report the Dice (Eq. 1) of the matched leaves; for non-matched leaves, a zero is reported.


Fig. 9 Selected results on test images. From each dataset 'A1'–'A3', an easier and a more challenging image are shown, together with the ground truth and the results of IPK, Nottingham, MSU, and Wageningen (from top to bottom, respectively). Numbers in the image corners are the number of leaves (upper right), SBD (lower left), and FBD (lower right). For viewing ease, matching leaves are assigned the same colour as the ground truth. Figure best viewed in colour (colour figure online)

With regard to leaf counting, most methods show their limitations, and in fact this metric also highlights errors in leaf segmentation. For example, in Fig. 9 we see that when the images are more challenging, some methods merge leaves: this lowers SBD scores but affects count numbers even more critically. Other methods (e.g. Wageningen) tend to over-segment and consider other parts as leaves (see for example the second image of 'A1' in Fig. 9), which sometimes leads to overestimated counts.

These misestimations are evident throughout the training and testing sets (cf. Tables 2, 3). Stepping away from the summary statistics of the tables, over- and underestimation are readily apparent in Fig. 11. All algorithms produce counting outliers, with MSU yielding the least count variability, despite a clear underestimation. The mean DiC of Wageningen is the closest to zero, albeit featuring the highest variances. We also observe that DiC decreases as the number of leaves increases, particularly in the case of 'A3'.


Table 2 Segmentation and counting results on the training set

             SBD (%)      FBD (%)      |DiC|      DiC
IPK
  A1         74.2 (7.7)   97.4 (1.8)   2.6 (1.8)  −1.9 (2.5)
  A2         80.6 (8.7)   99.7 (0.3)   0.9 (1.0)  −0.3 (1.3)
  A3         61.8 (19.1)  98.2 (1.1)   2.1 (1.7)  −2.1 (1.7)
  ALL        73.5 (11.5)  98.0 (1.9)   2.2 (1.7)  −1.7 (2.3)
Nottingham
  A1         68.0 (7.4)   94.6 (1.6)   3.8 (2.0)  −3.6 (2.4)
  A2         60.9 (18.5)  87.5 (19.7)  2.5 (1.5)  −2.5 (1.5)
  A3         47.1 (25.0)  79.4 (34.5)  2.3 (1.8)  −2.3 (1.9)
  ALL        63.8 (15.3)  91.2 (16.2)  3.4 (2.0)  −3.2 (2.2)
MSU
  A1         78.0 (6.4)   95.8 (1.9)   2.3 (1.5)  −2.3 (1.6)
  A2         72.3 (9.5)   94.1 (4.1)   1.6 (1.4)  −1.3 (1.7)
  A3         69.6 (16.5)  95.0 (6.5)   1.4 (1.5)  −1.3 (1.5)
  ALL        75.8 (9.6)   95.4 (3.4)   2.1 (1.5)  −2.0 (1.7)
Wageningen
  A1         72.8 (7.8)   95.0 (2.4)   2.2 (2.0)  0.4 (3.0)
  A2         71.7 (8.0)   95.2 (2.4)   1.3 (1.1)  −0.6 (1.6)
  A3         69.6 (19.9)  96.1 (5.1)   1.7 (2.4)  0.6 (2.9)
  ALL        72.2 (10.5)  95.2 (3.0)   2.0 (2.0)  0.3 (2.8)

Average values are shown for metrics described in Sect. 3 and in paren-thesis standard deviation. ‘ALL’ denotes the average (and standarddeviation) among the three datasets for each method. Other shorthandsand abbreviations as defined in text (Sects. 3, 6)The best results for each metric are denoted in bold

7.3 Plant growth and complexity

Plants are complex and dynamic organisms that grow over time and move throughout the day and night. They grow differentially, with younger leaves growing faster than mature ones. Therefore, per-leaf growth is a better phenotyping trait when evaluating growth regulation and stress situations. As they grow, new leaves appear and plant complexity changes: in tobacco, more leaves overlap and exhibit stronger nastic movements; in Arabidopsis, younger leaves emerge, overlapping other more mature ones.

At an individual leaf level, the findings of Fig. 10 (Dice of smaller leaves showing higher scatter, and smaller leaves being missed) illustrate that we need to achieve homogeneous performance and robustness if we want to obtain accurate per-leaf growth estimates.

Using classical growth stages, which rely on leaf count as a marker of growth, the downward slope seen in Fig. 11 could be attributed to growth. This is clearer in Fig. 12, where we see that with more leaves, leaf segmentation accuracy (SBD) also decreases.


Table 3  Segmentation and counting results on the testing set

              SBD (%)       FBD (%)       |DiC|       DiC
IPK
  A1          74.4 (4.3)    97.0 (0.8)    2.2 (1.3)   −1.8 (1.8)
  A2          76.9 (7.6)    96.3 (1.7)    1.2 (1.3)   −1.0 (1.5)
  A3          53.3 (20.2)   94.1 (13.3)   2.8 (2.5)   −2.0 (3.2)
  ALL         62.6 (19.0)   95.3 (10.1)   2.4 (2.1)   −1.9 (2.7)
Nottingham
  A1          68.3 (6.3)    95.3 (1.1)    3.8 (1.9)   −3.5 (2.4)
  A2          71.3 (9.6)    93.0 (4.2)    1.9 (1.7)   −1.9 (1.7)
  A3          51.6 (16.2)   90.7 (20.4)   2.5 (2.4)   −1.9 (2.9)
  ALL         59.0 (15.6)   92.5 (15.6)   2.9 (2.3)   −2.4 (2.8)
MSU
  A1          66.7 (7.6)    94.0 (1.9)    2.5 (1.5)   −2.5 (1.5)
  A2          66.6 (7.9)    87.7 (3.6)    2.0 (1.5)   −2.0 (1.5)
  A3          59.2 (17.8)   95.0 (5.2)    2.3 (1.9)   −2.3 (1.9)
  ALL         62.4 (14.8)   94.0 (4.7)    2.4 (1.7)   −2.3 (1.8)
Wageningen
  A1          71.1 (6.2)    94.7 (1.5)    2.2 (1.6)    1.3 (2.4)
  A2          75.7 (8.4)    95.1 (2.0)    0.4 (0.5)   −0.2 (0.7)
  A3          57.6 (24.8)   89.5 (22.3)   3.0 (4.9)    1.8 (5.5)
  ALL         63.8 (20.5)   91.7 (17.0)   2.5 (3.9)    1.5 (4.4)

Shorthands and abbreviations as in Table 2. The best results for each metric are denoted in bold

Even if we consider plant size (measured as PLA, i.e. the size of the plant in ground truth, obtained as the union of all leaf masks), we observe a decreasing trend in SBD with plant size for each method; see Fig. 13. Observe the large variability in SBD when plants are smaller. Even isolating it to a single method, we see that when plants are small, depending on the plant's leaf arrangement, variability is extremely high: either good (close to 80%) or rather low SBD values are obtained.
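For concreteness, PLA as used here is simply the pixel count of the union of all leaf masks; a minimal sketch, assuming a ground-truth label image with 0 as background:

```python
# Minimal sketch: PLA = number of plant pixels in the ground-truth labelling.
import numpy as np

def projected_leaf_area(gt_labels: np.ndarray) -> int:
    """Union of all leaf masks, measured in pixels."""
    return int(np.count_nonzero(gt_labels))
```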

7.4 Effect of foreground segmentation accuracy

In Fig. 14, we plot FBD versus SBD for each method, pooling the testing data together. We see that high SBD can only be achieved when FBD is also high; but obtaining a high FBD is not at all a guarantee of good leaf segmentation (i.e. a high SBD), since we observe large variability in SBD even when FBD is high.

This prompted us to evaluate the performance of the leaf segmentation part in isolation from errors in the plant (foreground) segmentation step. Thus, we asked participants to submit results on the training set assuming that a foreground (plant) mask is also given (as obtained by the union of leaf masks), effectively removing the need for a plant segmentation step.

Naturally, all methods benefit when the ground truth plant segmentation is used: compare SBD, DiC, and |DiC| between Tables 2 and 4. SBD improves considerably in most cases; the improvement in counting is less pronounced, and sometimes results even get worse. The best performance measures are achieved by IPK (closely followed by MSU) for SBD and by Wageningen for counting. Note that for 'A3' IPK's SBD performance increases substantially with known plant segmentation. Overall, additional investment in obtaining better-performing foreground segmentation is therefore warranted.


Fig. 10 Dataset 'A1': Dice score per leaf versus (leaf area)^{1/2}, i.e. ground truth average leaf radius (in pixels). Larger symbols refer to larger leaves. Colour also indicates Dice score for better visibility. Figure best viewed in colour (colour figure online)

Fig. 11 For each method (a IPK, b Nottingham, c MSU, d Wageningen), scatter plots of number of leaves in ground truth versus Difference in Count (DiC) are shown for the testing set. Each dataset is colour coded differently. Lines of average and average ± one standard deviation of DiC are shown as solid and dashed blue lines, respectively (colour figure online)


Fig. 12 Effect of plant complexity (measured as number of leaves) on leaf segmentation accuracy, i.e. SBD, for 'A3'. Each method is marker coded separately


Comparing the count numbers (DiC and |DiC| in Tables 2, 4), the best performer is Wageningen, with slight over-estimation in Table 2 and slight under-estimation in Table 4, while all other methods again under-estimate the number of leaves present. Even when the foreground plant mask is given, these numbers do not improve significantly.


Fig. 13 Effect of plant size, measured as number of plant pixels in ground truth (projected leaf area), on leaf segmentation accuracy, i.e. SBD, for 'A3'. Each method is marker coded separately

Fig. 14 Effect of plant versus leaf segmentation accuracy (FBD vs. SBD). Each method is marker coded separately. Data from all datasets are pooled together; results with FBD < 60% are omitted for clarity

So it is not errors in the foreground segmentation component that cause such performance, but the inherent assumption of low overlap that each method relies on to find leaves. As a result, most approaches miss small leaves and sometimes mistake other plant parts for leaves. The Wageningen algorithm is more resilient to this problem, presumably due to the optimization of the basins threshold: when the threshold increases, the leaf count decreases. The thresholds were tuned with respect to the best SBD, but apparently this also affects DiC. A positive effect is also due to emphasizing spaces between leaves and stems, avoiding the problem of small spaces between leaves being wrongly segmented as foreground, which would result in a higher number of leaves.

7.5 Performance under blinded shape-based metrics

Most of the metrics we adopted for the challenge rely on segmentation- and area-based measurements (cf. Sect. 3). It is thus of interest to see how the methods perform on metrics that evaluate boundary accuracy and best preserve leaf shape. Notice that these metrics were not available to the participants (hence the term blinded), so methods have not been optimized for such metrics. For brevity, we present results on the testing set only.

Table 4  Segmentation and counting results on the training set, assuming foreground segmentation known

              SBD (%)       |DiC|       DiC
IPK
  A1          79.1 (5.5)    2.1 (1.4)   −1.9 (1.7)
  A2          80.7 (10.8)   1.2 (1.3)   −1.1 (1.4)
  A3          71.0 (20.6)   1.8 (1.8)   −1.8 (1.8)
  ALL         78.2 (10.4)   1.9 (1.5)   −1.8 (1.7)
Nottingham
  A1          71.0 (7.2)    4.4 (1.7)   −4.4 (1.7)
  A2          66.5 (21.6)   2.5 (1.5)   −2.7 (1.5)
  A3          59.5 (11.3)   2.4 (1.3)   −2.4 (1.3)
  ALL         68.6 (12.1)   3.9 (1.9)   −3.9 (1.9)
MSU
  A1          78.5 (5.5)    2.5 (1.4)   −2.5 (1.4)
  A2          77.4 (8.1)    1.6 (1.3)   −0.9 (1.9)
  A3          76.1 (14.1)   1.2 (1.2)   −1.1 (1.2)
  ALL         78.0 (7.8)    2.2 (1.4)   −2.0 (1.6)
Wageningen
  A1          77.3 (4.9)    1.5 (1.3)   −0.3 (2.0)
  A2          75.5 (8.0)    1.3 (1.3)   −0.9 (1.6)
  A3          76.5 (14.6)   1.4 (1.3)   −1.3 (1.4)
  ALL         76.9 (7.6)    1.5 (1.3)   −0.5 (1.9)

Shorthands and abbreviations as in Table 2. The best results for each metric are denoted in bold


We adopt two metrics based, respectively, on the Modified Hausdorff Distance (MHD) [19] and Pratt's Figure of Merit (FoM) [43], to compare point sets A and B denoting leaf object boundaries.

The Modified Hausdorff Distance (MHD) [19] measures the displacement of object boundaries as the average of all the distances from a point in A to the closest point in B. With

$$D(\mathcal{A}, \mathcal{B}) = \frac{1}{|\mathcal{A}|} \sum_{p \in \mathcal{A}} \min_{q \in \mathcal{B}} \lVert p - q \rVert, \tag{5}$$

where ‖ · ‖ is the Euclidean distance, MHD is defined as:

$$\mathrm{MHD}(\mathcal{A}, \mathcal{B}) = \max \{ D(\mathcal{A}, \mathcal{B}),\, D(\mathcal{B}, \mathcal{A}) \}. \tag{6}$$

This metric is known to be suitable for comparing template shapes with targets [19]. It prioritizes leaf boundary accuracy, which is relevant for shape-based leaf recognition purposes.
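For illustration, a minimal Python sketch of Eqs. (5) and (6) could look as follows, assuming each boundary is given as an (N, 2) array of pixel coordinates and using SciPy's cKDTree for the nearest-neighbour queries (helper names are ours, not from [19]):

```python
# A minimal sketch of Eqs. (5)-(6); boundaries are (N, 2) coordinate arrays.
import numpy as np
from scipy.spatial import cKDTree

def directed_avg_dist(A: np.ndarray, B: np.ndarray) -> float:
    """D(A, B): mean distance from each point of A to its closest point in B."""
    return float(cKDTree(B).query(A)[0].mean())

def mhd(A: np.ndarray, B: np.ndarray) -> float:
    """Modified Hausdorff Distance (Eq. 6)."""
    return max(directed_avg_dist(A, B), directed_avg_dist(B, A))
```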

Pratt's Figure of Merit (FoM) [43] was introduced in the context of edge detection and penalizes missing or displaced points between actual (A) and ideal (I) boundaries:


$$\mathrm{FoM}(\mathcal{A}, \mathcal{I}) = \frac{1}{\max\{|\mathcal{A}|, |\mathcal{I}|\}} \sum_{i=1}^{|\mathcal{A}|} \frac{1}{1 + \alpha d_i^2}, \tag{7}$$

where α = 1/9 is a scaling constant penalizing boundary offset, and d_i is the distance between an actual boundary point and the nearest ideal boundary point.
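A companion sketch of Eq. (7), under the same boundary-as-point-array assumption (the function name is ours):

```python
# A minimal sketch of Eq. (7); alpha = 1/9 as in the text.
import numpy as np
from scipy.spatial import cKDTree

def fom(actual: np.ndarray, ideal: np.ndarray, alpha: float = 1.0 / 9.0) -> float:
    """Pratt's Figure of Merit: 1 for a perfect match, lower is worse."""
    d = cKDTree(ideal).query(actual)[0]   # d_i for each actual boundary point
    return float(np.sum(1.0 / (1.0 + alpha * d**2)) / max(len(actual), len(ideal)))
```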

Let B^ar and B^gt be sets of leaf object boundaries extracted from leaf segmentation masks L^ar and L^gt, respectively, where B^ar is the algorithmic result and B^gt is the ground truth. To evaluate how well leaf object shape and boundaries are preserved, and to follow the spirit of SBD defined in Sect. 3, we use:

– Symmetric Best Hausdorff (SBH), the symmetric average MHD among all object (leaf) boundaries, where for each input label the ground truth label yielding minimum MHD is used for averaging. Best Hausdorff (BH) is defined as

$$\mathrm{BH}(B^a, B^b) = \begin{cases} \sqrt{w^2 + h^2} & \text{if either } B^a = \emptyset \text{ or } B^b = \emptyset \\ \frac{1}{M} \sum_{i=1}^{M} \min_{1 \le j \le N} \mathrm{MHD}(B^a_i, B^b_j) & \text{otherwise} \end{cases} \tag{8}$$

where B^a_i for 1 ≤ i ≤ M and B^b_j for 1 ≤ j ≤ N are point sets corresponding to the boundaries, respectively B^a and B^b, of leaf object segments belonging to leaf segmentations L^a and L^b; w and h denote, respectively, width and height of the image containing the leaf object. SBH is then:

$$\mathrm{SBH}(B^{ar}, B^{gt}) = \max \{ \mathrm{BH}(B^{ar}, B^{gt}),\, \mathrm{BH}(B^{gt}, B^{ar}) \}. \tag{9}$$

SBH is expressed in units of length (e.g. pixels or millimetres) and is 0 for perfectly matching boundaries. If B^ar is empty, SBH is equal to the image diagonal (i.e. the greatest possible distance between any two points).

– Best Figure of Merit (BFoM), the average FoM among all leaf objects, where for each input label the ground truth label yielding maximum FoM is used for averaging:

$$\mathrm{BFoM}(B^{ar}, B^{gt}) = \frac{1}{M} \sum_{i=1}^{M} \max_{1 \le j \le N} \mathrm{FoM}(B^{ar}_i, B^{gt}_j). \tag{10}$$

We express BFoM as a percentage, where 100% denotes a perfect match.

Table 5  Segmentation results on the testing set with respect to leaf shape

              SBH (pix)       SBH (mm)       BFoM (%)
IPK
  A1          9.2 (3.2)       1.54 (0.53)    62.6 (7.3)
  A2          6.9 (3.6)       1.15 (0.60)    66.9 (8.1)
  A3          174.9 (442.8)   7.00 (17.7)    41.9 (17.3)
  ALL         103.6 (343.5)   4.62 (13.4)    51.1 (17.6)
Nottingham
  A1          13.0 (5.9)      2.17 (0.99)    58.7 (9.0)
  A2          9.3 (5.7)       1.55 (0.95)    62.3 (7.4)
  A3          193.6 (589.7)   7.74 (23.6)    49.2 (21.8)
  ALL         115.8 (453.1)   5.30 (17.8)    53.6 (18.1)
MSU
  A1          13.3 (5.6)      2.22 (0.94)    50.9 (10.3)
  A2          10.0 (6.3)      1.67 (1.05)    52.0 (12.2)
  A3          81.1 (105.6)    3.24 (4.22)    46.5 (19.0)
  ALL         51.7 (86.6)     2.76 (3.25)    48.5 (16.1)
Wageningen
  A1          13.0 (5.6)      2.17 (0.94)    54.1 (9.1)
  A2          10.2 (7.8)      1.70 (1.30)    61.1 (9.7)
  A3          109.1 (227.0)   4.36 (9.08)    43.7 (26.7)
  ALL         67.7 (177.6)    3.38 (6.90)    48.8 (21.8)

Shorthands and abbreviations as in Table 2. Notice that for SBH lower is better, whereas for BFoM higher is better. The best results for each metric are denoted in bold

In Table 5, we see the results on the testing set using these metrics. SBH values vary widely between dataset 'A3' and the others ('A1' and 'A2'). This indicates an issue when using SBH with images of different size: being a distance, SBH given in pixels depends on resolution. We therefore also provide it in real-world units, i.e. mm, even though object resolution depends on the (non-constant) local object distance from the camera.

Overall, MSU reports the best average performance according to SBH (although this result is largely influenced by the 'A3' dataset), with IPK performing best on 'A1' and 'A2'. The good performance of MSU is not unexpected, as optimizing the Chamfer Matching distance boils down to minimizing D(A, B) from Eq. (5), which underlies SBH.

With respect to BFoM, IPK again performs best on 'A1' and 'A2', while Nottingham outperforms the other methods on 'A3'. Interestingly, the overall ranking of the methods according to the two metrics is opposite.

MSU exhibits lower variance compared to IPK, Nottingham, and Wageningen, since the latter methods include some empty segmentations (i.e. no leaf objects found) in the testing results, which in BFoM evaluates to 0 and in SBH to the image diagonal length. This situation occurs for some images of very small plants, which are probably missed in the plant segmentation steps of the methods.


7.6 Differences among datasets

Although the tobacco dataset 'A3' has higher resolution and leaf boundaries are more evident, rich shape variation and large overlap among leaves challenge all methods: almost all achieve lower performance compared to 'A1' and 'A2' (Tables 2 and 3). The variability in accuracy also increases for 'A3'. The MSU algorithm shows the least variability among datasets, probably because it uses templates (rotated and scaled). As such, it can adapt better to different shape variability and heavier occlusions, and is more robust to plant segmentation errors. This is also due to its reliance on an edge map to fit the templates: on 'A3' the edge map can be estimated more reliably than on 'A1' and 'A2', where some images can be blurry (due to the larger field of view) and resolution is lower. However, when the foreground is known (Table 4), the variability of the Wageningen solution between datasets also becomes lower.

While 'A2' does contain images of different mutants, it also shows a different image background with respect to 'A1' (black textured tray vs. red smoother tray). When a plant mask is known, SBD results on the training set (Table 4) show that Nottingham, MSU, and Wageningen still do better on 'A1' than on 'A2', and all methods show higher variances on 'A2' than on 'A1'. So it might appear that the different mutants play a role; however, this result is not conclusive, since 'A2' has fewer images than 'A1'. In fact, a simple unpaired t-test between SBD on 'A1' and 'A2' shows no statistical difference (for any of the methods).
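Such a test is easy to reproduce; a hypothetical illustration with SciPy is sketched below, where sbd_a1 and sbd_a2 stand in for the per-image SBD scores of one method on 'A1' and 'A2' (the values are synthetic placeholders, not our measurements):

```python
# Hypothetical illustration of the unpaired t-test mentioned above;
# the score arrays are synthetic placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sbd_a1 = rng.normal(71, 8, size=128)   # stand-in per-image SBD scores on 'A1'
sbd_a2 = rng.normal(75, 9, size=31)    # stand-in per-image SBD scores on 'A2'

t_stat, p_value = stats.ttest_ind(sbd_a1, sbd_a2)  # unpaired two-sample t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")      # p > 0.05 means no significant difference
```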

We should point out that both Nottingham and Wageningen use the same mechanism to segment leaves: a watershed on the distance (from the boundary) map. However, Nottingham relies on first finding the centres and then using those as seeds for leaf segmentation, while Wageningen obtains an over-segmentation and then merges parts using a threshold on the basins. The performance difference due to this algorithmic choice becomes apparent when comparing results with given foreground segmentation (Table 4): the Wageningen algorithm does better than the Nottingham solution. We conclude that finding suitable seeds for segmentation is hard and, further, comparing with Fig. 10, that this is true especially for small leaves. On the other hand, the Wageningen algorithm appears to find a suitable merging threshold for each dataset.
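A minimal sketch of this shared mechanism (not the participants' exact implementations) can be written with scikit-image: seeds from local maxima of the distance transform, loosely mirroring the Nottingham-style centre finding, followed by a watershed constrained to the plant mask; min_distance is a hypothetical tuning parameter:

```python
# A minimal sketch of "watershed on the distance map", assuming a binary
# plant mask as input; this is an illustration, not any team's exact code.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def leaves_from_mask(plant_mask: np.ndarray, min_distance: int = 10) -> np.ndarray:
    """Split a binary plant mask into leaf labels via seeded watershed."""
    dist = ndi.distance_transform_edt(plant_mask)
    # leaf-centre candidates = local maxima of the distance map
    coords = peak_local_max(dist, min_distance=min_distance,
                            labels=plant_mask.astype(int))
    seeds = np.zeros_like(dist, dtype=int)
    seeds[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-dist, seeds, mask=plant_mask)
```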

7.7 Discussions on experimental work

Through this study, we find that plant segmentation can be achieved with unsupervised approaches reaching average accuracy above 90%. As we suspected, complications in the background, whenever present, do lower plant segmentation accuracy (explaining also the large variation in performance). Possibly higher performance (and lower variability) can be obtained with methods relying on learned classifiers and active contour models [34]. Lower plant segmentation accuracy negatively affects leaf segmentation accuracy in almost all cases. Nevertheless, a first-level measurement of plant growth (as PLA) can be achieved with relatively good accuracy, although methods that obtain high average and low variance should be sought after.

On the other hand, measuring individual leaf growth on the basis of leaf segmentations currently shows low accuracy. The algorithms presented here show an average accuracy of 62.0% (best 63.8%, see Table 3) in segmenting a leaf and almost always miss some leaves, particularly under heavy occlusions when both small (young) and larger (mature) leaves are present within the same plant. SBD does not necessarily capture this, but it is evident when analysing leaf size vs. Dice accuracy and leaf counts. On several occasions, leaf count is not accurate (missing several leaves), and frequently the algorithms wrongly label disconnected leaf parts (particularly their stems) as leaves.10

Several approaches (IPK and Nottingham) assume that once the centre of a leaf is found, segmentation can be obtained by region-growing methods. Naturally, when leaves heavily overlap, these methods fail to identify the centres (finding fewer leaf centres than in the ground truth), which holds for both rosette plants considered here. Also, when image contrast is not ideal, the lack of discernible edges leads to a misestimation of leaf boundaries. This is particularly evident in the Arabidopsis data ('A1' and 'A2') and affects approaches that rely on edge information (MSU). The tobacco dataset ('A3'), being high resolution, does offer superior contrast, but the amount of overlap and shape variation is significant, leading to under-performance for most of the algorithms.

We also investigate performance on leaf boundaries using SBH and BFoM (cf. Sect. 7.5). SBH penalizes boundary regions that are far away from where they should be, while BFoM rewards boundaries that are in the right position. Thus, from a shape-sensitive viewpoint, low SBH is needed if boundary outliers lead to low performance, whereas high BFoM is advisable if an algorithm should be robust against boundary outliers. When choosing from the solutions presented here, a trade-off needs to be found, as high BFoM (good) comes with high SBH (bad) and vice versa.

Evident from the meta-analysis of all results is the effect of plant complexity (due to plant age, mutant, or treatment) on algorithmic accuracy. Leaf segmentation accuracy decreases with larger leaf count (Fig. 12), using leaf count as a proxy for maturity [15]. This is expected: as the plant grows and becomes more complex, more leaves and higher overlap between young and mature leaves are present. Overall, most methods face greater difficulties in detecting and segmenting smaller (younger) leaves (Fig. 10), most likely not due to their size but to overlap: they tend to grow on top of more mature leaves.

10 This indicates that additional (possibly tailored) evaluation metrics may be necessary, although our testing with some metrics common in the literature did not yield any improvement.



Moving forward, no approach here relies on learning a model from the training data to obtain leaf segmentations, and this might lead to promising algorithms in the future. Interestingly, some of our findings on learning to count leaves show that leaf count can be estimated without segmentation [21]. However, individual and accurate leaf segmentation remains important, for example for studying individual leaf growth, tracking leaf position and orientation, and classifying young versus old leaves.

One alternative, which changes the problem definition and may reduce complexity, is to provide additional data, such as temporal (time-lapse images) and/or depth (stereo and multi-view imagery) information. The former can be used for better leaf segmentation, e.g. via joint segmentation and tracking approaches [59,60]. Both types of information will help in resolving occlusions and obtaining better boundaries. Such data are publicly available in the form of a curated dataset [35], and a software tool was released to facilitate leaf annotation [36].

8 Conclusion and outlook

This paper presents a collation study of a variety of leaf segmentation algorithms, tested within the confines of a common dataset and a true scientific challenge: the Leaf Segmentation Challenge of CVPPP 2014. This is the first such challenge in the context of plant phenotyping, and we believe that such formats will help advance the state of the art in this societally important application of machine vision.

Having annotated data in the public domain is extremely beneficial, and this is one of the greatest outcomes of this work. The data can be used not only to motivate and enlist interest from other communities but also to support future challenges (similar to this one). We believe this is the future: it is via such challenges that the state of the art advances rapidly, and a new challenge for 2015 has already been held.11 However, these challenges should happen in a rolling fashion, year-round, with leader boards and automated evaluation systems. It is for this reason that we are considering a web-based system, e.g. similar in concept to Codalab,12 for people to submit results but also to deposit new annotated datasets. This has proven useful in other areas of computer vision (consider, for example, PASCAL VOC [20]) and will also benefit plant phenotyping.

11 See the new Leaf Counting Challenge of CVPPP 2015 at BMVC (http://www.plant-phenotyping.org/CVPPP2015).
12 https://www.codalab.org/.

In summary, the better we can "see" plant organs such as leaves via new computer vision algorithms, evaluated on common datasets and collectively presented, the better quality phenotyping we can do, and the higher the societal impact.

Acknowledgments We would like to thank participants of the 2014 CVPPP workshop for comments, and the annotators who have contributed to this work.

Author contributions SAT coordinated this collation study. SAT and HS organized the original LSC challenge. SAT, HS, and MM wrote the paper and performed the analysis. All other authors have contributed methods, text, and results. All authors have approved the manuscript.

References

1. Achanta, R., Shaji, A., Smith, K., Lucchi, A., Fua, P., Susstrunk, S.: SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 34(11), 2274–2282 (2012)

2. Adams, R., Bischof, L.: Seeded region growing. IEEE Trans. Pattern Anal. Mach. Intell. 16(6), 641–647 (1994)

3. Aksoy, E., Abramov, A., Wörgötter, F., Scharr, H., Fischbach, A., Dellen, B.: Modeling leaf growth of rosette plants using infrared stereo image sequences. Comput. Electron. Agric. 110, 78–90 (2015)

4. Alenyà, G., Dellen, B., Torras, C.: 3D modelling of leaves from color and ToF data for robotized plant measuring. In: IEEE International Conference on Robotics and Automation, pp. 3408–3414 (2011)

5. Arvidsson, S., Pérez-Rodríguez, P., Mueller-Roeber, B.: A growth phenotyping pipeline for Arabidopsis thaliana integrating image analysis and rosette area modeling for robust quantification of genotype effects. New Phytol. 191(3), 895–907 (2011)

6. Augustin, M., Haxhimusa, Y., Busch, W., Kropatsch, W.G.: Image-based phenotyping of the mature Arabidopsis shoot system. In: Computer Vision—ECCV 2014 Workshops, vol. 8928, pp. 231–246. Springer (2015)

7. Bansal, S., Aggarwal, D.: Color image segmentation using CIELab color space using ant colony optimization. Int. J. Comput. Appl. 29(9), 28–34 (2011)

8. Barrow, H., Tenenbaum, J., Bolles, R., Wolf, H.: Parametric correspondence and chamfer matching: two new techniques for image matching. Tech. rep., DTIC (1977)

9. Beucher, S.: The watershed transformation applied to image segmentation. Scanning Microsc. Int. 6, 299–314 (1992)

10. Biskup, B., Scharr, H., Schurr, U., Rascher, U.: A stereo imaging system for measuring structural parameters of plant canopies. Plant Cell Environ. 30, 1299–1308 (2007)

11. Canny, J.: A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986)

12. Casanova, D., Florindo, J.B., Gonçalves, W.N., Bruno, O.M.: IFSC/USP at ImageCLEF 2012: plant identification task. In: CLEF (Online Working Notes/Labs/Workshop) (2012)

13. Cerutti, G., Antoine, V., Tougne, L., Mille, J., Valet, L., Coquin, D., Vacavant, A.: ReVeS participation: tree species classification using random forests and botanical features. In: Conference and Labs of the Evaluation Forum (2012)

14. Cerutti, G., Tougne, L., Mille, J., Vacavant, A., Coquin, D.: Understanding leaves in natural images: a model-based approach for tree species identification. Comput. Vis. Image Underst. 117(10), 1482–1501 (2013)

15. CORESTA: A scale for coding growth stages in tobacco crops (2009). http://www.coresta.org/Guides/Guide-No07-Growth-Stages_Feb09.pdf

16. De Vylder, J., Ochoa, D., Philips, W., Chaerle, L., Van Der Straeten, D.: Leaf segmentation and tracking using probabilistic parametric active contours. In: International Conference on Computer Vision/Computer Graphics Collaboration Techniques, pp. 75–85 (2011)

17. De Vylder, J., Vandenbussche, F.J., Hu, Y., Philips, W., Van Der Straeten, D.: Rosette Tracker: an open source image analysis tool for automatic quantification of genotype effects. Plant Physiol. 160(3), 1149–1159 (2012)

18. Dellen, B., Scharr, H., Torras, C.: Growth signatures of rosette plants from time-lapse video. IEEE/ACM Trans. Comput. Biol. Bioinform. PP(99), 1–11 (2015)

19. Dubuisson, M.P., Jain, A.K.: A modified Hausdorff distance for object matching. In: Proceedings of the 12th IAPR International Conference on Pattern Recognition, vol. 1, pp. 566–568 (1994)

20. Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman, A.: The Pascal Visual Object Classes (VOC) challenge. Int. J. Comput. Vis. 88(2), 303–338 (2010)

21. Giuffrida, M.V., Minervini, M., Tsaftaris, S.A.: Learning to count leaves in rosette plants. In: Proceedings of the Computer Vision Problems in Plant Phenotyping (CVPPP) Workshop, pp. 1.1–1.13. BMVA Press (2015)

22. Grady, L.: Random walks for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 28(11), 1768–1783 (2006)

23. Granier, C., Aguirrezabal, L., Chenu, K., Cookson, S.J., Dauzat, M., Hamard, P., Thioux, J.J., Rolland, G., Bouchier-Combaud, S., Lebaudy, A., Muller, B., Simonneau, T., Tardieu, F.: PHENOPSIS, an automated platform for reproducible phenotyping of plant responses to soil water deficit in Arabidopsis thaliana permitted the identification of an accession with low sensitivity to soil water deficit. New Phytol. 169(3), 623–635 (2006)

24. Hartmann, A., Czauderna, T., Hoffmann, R., Stein, N., Schreiber, F.: HTPheno: an image analysis pipeline for high-throughput plant phenotyping. BMC Bioinform. 12(1), 148 (2011)

25. He, X., Gould, S.: An exemplar-based CRF for multi-instance object segmentation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 296–303 (2014)

26. Jansen, M., Gilmer, F., Biskup, B., Nagel, K., Rascher, U., Fischbach, A., Briem, S., Dreissen, G., Tittmann, S., Braun, S., Jaeger, I.D., Metzlaff, M., Schurr, U., Scharr, H., Walter, A.: Simultaneous phenotyping of leaf growth and chlorophyll fluorescence via GROWSCREEN FLUORO allows detection of stress tolerance in Arabidopsis thaliana and other rosette plants. Funct. Plant Biol. 36(10/11), 902–914 (2009)

27. Jin, J., Tang, L.: Corn plant sensing using real-time stereo vision. J. Field Robot. 26(6–7), 591–608 (2009)

28. Kalyoncu, C., Toygar, Ö.: Geometric leaf classification. Comput. Vis. Image Underst. 133, 102–109 (2015)

29. Klukas, C., Chen, D., Pape, J.M.: Integrated analysis platform: an open-source information system for high-throughput plant phenotyping. Plant Physiol. 165(2), 506–518 (2014)

30. Kurugollu, F., Sankur, B., Harmanci, A.E.: Color image segmentation using histogram multithresholding and fusion. Image Vis. Comput. 19(13), 915–928 (2001)

31. Martin, D., Fowlkes, C., Malik, J.: Learning to detect natural image boundaries using local brightness, color, and texture cues. IEEE Trans. Pattern Anal. Mach. Intell. 26(5), 530–549 (2004)

32. Martin, D., Fowlkes, C., Tal, D., Malik, J.: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. Int. Conf. Comput. Vis. (ICCV) 2, 416–423 (2001)

33. Mezaris, V., Kompatsiaris, I., Strintzis, M.: Still image objective segmentation evaluation using ground truth. In: 5th COST 276 Workshop, pp. 9–14 (2003)

34. Minervini, M., Abdelsamea, M.M., Tsaftaris, S.A.: Image-based plant phenotyping with incremental learning and active contours. Ecol. Inform. 23, 35–48 (2014). (Special Issue on Multimedia in Ecology and Environment)

35. Minervini, M., Fischbach, A., Scharr, H., Tsaftaris, S.: Finely-grained annotated datasets for image-based plant phenotyping. Pattern Recogn. Lett. (2015) (in press)

36. Minervini, M., Giuffrida, M.V., Tsaftaris, S.A.: An interactive tool for semi-automated leaf annotation. In: Proceedings of the Computer Vision Problems in Plant Phenotyping (CVPPP) Workshop, pp. 6.1–6.13. BMVA Press (2015)

37. Minervini, M., Scharr, H., Tsaftaris, S.A.: Image analysis: the new bottleneck in plant phenotyping. IEEE Signal Process. Mag. 32(4), 126–131 (2015)

38. Müller-Linow, M., Pinto-Espinosa, F., Scharr, H., Rascher, U.: The leaf angle distribution of natural plant populations: assessing the canopy with a novel software tool. Plant Methods 11(1), 11 (2015)

39. Nagel, K., Putz, A., Gilmer, F., Heinz, K., Fischbach, A., Pfeifer, J., Faget, M., Blossfeld, S., Ernst, M., Dimaki, C., Kastenholz, B., Kleinert, A.K., Galinski, A., Scharr, H., Fiorani, F., Schurr, U.: GROWSCREEN-Rhizo is a novel phenotyping robot enabling simultaneous measurements of root and shoot growth for plants grown in soil-filled rhizotrons. Funct. Plant Biol. 39, 891–904 (2012)

40. Nieuwenhuis, C., Cremers, D.: Spatially varying color distributions for interactive multilabel segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 35(5), 1234–1247 (2013)

41. Pape, J.M., Klukas, C.: 3-D histogram-based segmentation and leaf detection for rosette plants. In: Computer Vision—ECCV 2014 Workshops, vol. 8928, pp. 61–74. Springer (2015)

42. Polak, M., Zhang, H., Pi, M.: An evaluation metric for image segmentation of multiple objects. Image Vis. Comput. 27(8), 1223–1227 (2009)

43. Pratt, W.K.: Digital Image Processing. Wiley-Interscience, New York, NY (1978)

44. Quan, L., Tan, P., Zeng, G., Yuan, L., Wang, J., Kang, S.: Image-based plant modeling. ACM Trans. Graph. 25(3), 599–604 (2006)

45. Riemenschneider, H., Sternig, S., Donoser, M., Roth, P.M., Bischof, H.: Hough regions for joining instance localization and segmentation. In: Computer Vision—ECCV 2012, vol. 7574, pp. 258–271. Springer (2012)

46. Scharr, H., Minervini, M., Fischbach, A., Tsaftaris, S.A.: Annotated image datasets of rosette plants. Tech. Rep. FZJ-2014-03837, Forschungszentrum Jülich GmbH (2014). http://hdl.handle.net/2128/5848

47. Silva, L., Koga, M., Cugnasca, C., Costa, A.: Comparative assessment of feature selection and classification techniques for visual inspection of pot plant seedlings. Comput. Electron. Agric. 97, 47–55 (2013)

48. Soares, J.V.B., Jacobs, D.W.: Efficient segmentation of leaves in semi-controlled conditions. Mach. Vis. Appl. 24(8), 1623–1643 (2013)

49. Song, Y., Wilson, R., Edmondson, R., Parsons, N.: Surface modelling of plants from stereo images. In: Proceedings of the 6th International Conference on 3-D Digital Imaging and Modeling (3DIM '07), pp. 312–319 (2007)

50. Teng, C.H., Kuo, Y.T., Chen, Y.S.: Leaf segmentation, classification, and three-dimensional recovery from a few images with close viewpoints. Opt. Eng. 50(3), 1–13 (2011)

51. Vincent, L., Soille, P.: Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE Trans. Pattern Anal. Mach. Intell. 13(6), 583–598 (1991)

52. van der Heijden, G., Song, Y., Horgan, G., Polder, G., Dieleman, A., Bink, M., Palloix, A., van Eeuwijk, F., Glasbey, C.: SPICY: towards automated phenotyping of large pepper plants in the greenhouse. Funct. Plant Biol. 39(11), 870–877 (2012)

53. W3C: Portable network graphics (PNG) specification (2003)

54. Walter, A., Scharr, H., Gilmer, F., Zierer, R., Nagel, K.A., Ernst, M., Wiese, A., Virnich, O., Christ, M.M., Uhlig, B., Jünger, S., Schurr, U.: Dynamics of seedling growth acclimation towards altered light conditions can be quantified via GROWSCREEN: a setup and procedure designed for rapid optical phenotyping of different plant species. New Phytol. 174(2), 447–455 (2007)

55. Walter, A., Schurr, U.: The modular character of growth in Nicotiana tabacum plants under steady-state nutrition. J. Exp. Bot. 50(336), 1169–1177 (1999)

56. Wang, J., He, J., Han, Y., Ouyang, C., Li, D.: An adaptive thresholding algorithm of field leaf image. Comput. Electron. Agric. 96, 23–39 (2013)

57. Wu, B., Nevatia, R.: Detection and segmentation of multiple, partially occluded objects by grouping, merging, assigning part detection responses. Int. J. Comput. Vis. 82(2), 185–204 (2009)

58. Yanikoglu, B., Aptoula, E., Tirkaz, C.: Automatic plant identification from photographs. Mach. Vis. Appl. 25(6), 1369–1383 (2014)

59. Yin, X., Liu, X., Chen, J., Kramer, D.M.: Multi-leaf alignment from fluorescence plant images. In: IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 437–444 (2014)

60. Yin, X., Liu, X., Chen, J., Kramer, D.M.: Multi-leaf tracking from fluorescence plant videos. In: IEEE International Conference on Image Processing (ICIP), pp. 408–412 (2014)

61. Yin, X., Liu, X., Chen, J., Kramer, D.M.: Multi-Leaf Segmentation, Alignment and Tracking from Fluorescence Plant Videos. arXiv:1505.00353 (2015)

62. Ziou, D., Tabbone, S.: Edge detection techniques: an overview. Int. J. Pattern Recogn. Image Anal. 8(4), 537–559 (1998)

Hanno Scharr received his Doctoral degree in Physics from Ruprecht-Karls University, Heidelberg, Germany in 2000 and Habilitation in Computer Science from Goethe University, Frankfurt, Germany in 2014. From 2002 to 2003 he was Senior Scientist at Intel, Santa Clara, California. Currently he is working as a Senior Scientist at IBG-2: Plant Sciences, Forschungszentrum Jülich, Jülich, Germany. His areas of interest include computer vision and image processing, robotics and automation for applications in plant sciences.

Massimo Minervini received the B.Sc. and M.Sc. degrees in Computer Science from the University of Bari, Italy, in 2008 and 2010, respectively. In 2015 he received the Ph.D. degree in Computer Science and Engineering from IMT Institute for Advanced Studies, Lucca, Italy, investigating application-aware image compression for plant phenotyping. He is currently a post-doctoral researcher at IMT Lucca. His research interests include data compression, computer vision, machine learning, and applications to life science.

Andrew P. French received his Ph.D. in Computer Science in 2005 from the University of Nottingham (UK), specialising in computer vision tracking algorithms. He is a member of the schools of Computer Science and Biosciences at the University of Nottingham, where he is part of the Computer Vision Lab and the Centre for Plant Integrative Biology. His research interests include developing image analysis for plant phenotyping in the lab and field.

Christian Klukas received his diploma certificate in Business Computing in 2003 from Otto-von-Guericke University, Magdeburg, Germany, and his Ph.D. certificate in 2009 from Martin-Luther-University, Halle (Saale), Germany. From 2010 to 2015 he led the Image Analysis group at the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK) in Gatersleben, Germany. As lead of the Data Analytics group at LemnaTec GmbH, Aachen, Germany, he works on the topics of high-throughput image analysis, machine learning and data post-processing.

David M. Kramer received his Ph.D. in Biophysics in 1990 from U. Illinois at Urbana-Champaign, USA. He was then a post-doctoral research associate, first at U. of Illinois, then at the Institut de Biologie Physico-Chimique, Paris. In 1995, he became a faculty member at the Institute for Biological Chemistry at Washington State University. In 2010 he was recruited as the Hannah Distinguished Professor in Photosynthesis and Bioenergetics at the MSU-DOE Plant Research Lab at Michigan State University, USA. His work on photosynthetic phenotyping led to the establishment of MSU's Center for Advanced Algal and Plant Phenotyping (CAAPP), a multi-disciplinary and multi-user facility to phenotype photosynthetic processes. He is co-founder of Phenometrics, Inc., a spin-off company from MSU, which manufactures advanced photobioreactors for optimizing algal bioenergy production. He also co-founded the PhotosynQ.org project, a network of scientists, researchers, extension agents, sophisticated scientific instruments and analytical tools that facilitates plant phenotyping around the world.

Xiaoming Liu is an Assistant Professor in the Department of Computer Science and Engineering at Michigan State University (MSU). He received the M.E. degree in Computer Science from Zhejiang University, China in 2000, and the Ph.D. degree in Electrical and Computer Engineering from Carnegie Mellon University in 2004. Before joining MSU in Fall 2012, he was a research scientist at General Electric Global Research Center. His research areas are biometrics, image alignment, video surveillance, computer vision and pattern recognition. He has authored more than 90 scientific publications, and has filed 22 U.S. patents.

Imanol Luengo received his B.S. in Computer Science and his M.S. (by research) in Computational Engineering and Intelligent Systems, both from the University of the Basque Country, Spain, in 2012 and 2013 respectively. He is now pursuing his Ph.D. (since 2014) at the University of Nottingham in the area of image processing methods for biomedical volume analysis. His areas of interest include low-level vision tasks for data preprocessing and high-level vision tasks, such as segmentation and semantic labelling with superpixels.

Jean-Michel Pape received his B.S. in Computational Visualistics from the Otto-von-Guericke University in 2012. He is currently working in the research group Image Analysis at the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK). His research is focused on image processing, computer vision and machine learning in the context of high-throughput plant phenotyping.

Gerrit Polder is a researcher in Computer Vision and Plant Phenotyping at the department Greenhouse Horticulture of Wageningen University and Research Centre. In 1985 he obtained a B.Sc. in electronics at the HAN University of Applied Sciences in Arnhem, the Netherlands. After working several years in image processing and related topics, in 2004 he obtained a Ph.D. from Delft University of Technology on spectral imaging for measuring biochemicals in plant material. Since 2004 he has worked on machine-vision and phenotyping projects. His research interests are mainly hyperspectral and multispectral imaging for disease detection, 3D imaging using stereo vision, Time-of-Flight imaging and light-field technology, and sensor fusion (colour, fluorescence and infrared) for monitoring plant health.

Danijela Vukadinovic received her M.Sc. degree in Telecommunications and Image Processing from the University of Belgrade, Serbia in 2003. She was a researcher at TU Delft, the Netherlands, in the field of facial emotion recognition in the 2004–2006 period. She received her Ph.D. degree in biomedical image processing from the University of Rotterdam, the Netherlands, in 2012. Currently, she is focused on horticulture topics as a member of the Greenhouse Technology Group, Wageningen University and Research Centre, the Netherlands. Her research interests include computer vision, machine learning, robotics and phenotyping.

Xi Yin received the B.S. degree in Electronic and Information Science from Wuhan University, China, in 2013. Since August 2013, she has been working toward her Ph.D. degree in the Department of Computer Science and Engineering, Michigan State University, USA. Her research interests include image processing, pattern recognition, and computer vision.

Sotirios A. Tsaftaris received the M.Sc. and Ph.D. degrees in Electrical and Computer Engineering from Northwestern University (USA), in 2003 and 2006, respectively, and the Diploma degree in Electrical and Computer Engineering from the Aristotle University of Thessaloniki (Greece) in 2000. He is currently a Chancellor's Fellow (at a Senior Lecturer level) at the University of Edinburgh (UK). Previously he was an Assistant Professor with IMT Institute for Advanced Studies, Lucca (Italy). He remains the Director of the Pattern Recognition and Image Analysis Unit at IMT. Prior to that, he held a joint Research Assistant Professor appointment at Northwestern University with the Departments of Electrical Engineering and Computer Science (EECS) and Radiology, Feinberg School of Medicine. He maintains an adjunct appointment with EECS. He has published extensively, particularly in interdisciplinary fields, with more than 100 journal and conference papers in his active record. His research interests are image analysis, image processing, data mining and machine learning, and distributed computing. He is considered one of the pioneers in affordable image-based phenotyping, and recently with colleagues has developed the Phenotiki platform. He serves the academic community regularly, and in phenotyping he has co-organized the Computer Vision Problems in Plant Phenotyping (CVPPP) workshop twice, at ECCV 2014 and at BMVC 2015.
