
Automated Blastomere Segmentation for Visual Servo on Early-Stage Embryo

by

Simarjot Singh Sidhu

A thesis submitted in conformity with the requirements for the degree of Master of Applied Science

Department of Mechanical and Industrial Engineering University of Toronto

© Copyright by Simarjot Singh Sidhu 2019


Automated Blastomere Segmentation for Visual Servo on Early-Stage Embryo

Simarjot Singh Sidhu

Master of Applied Science

Department of Mechanical and Industrial Engineering

University of Toronto

2019

Abstract

Automation of single biological cell surgery requires the locations of organelles and cell structures to be determined, to permit the automated processing carried out during cell surgery. In this work, z-stack images of mouse embryos are used as a model to develop image processing algorithms that determine the centroid position, in (𝑥, 𝑦, 𝑧) coordinates, of embryo blastomeres. The transparency of embryos allows a series of images to be obtained along the vertical cell axis (𝑧). Individual z-stack images are processed using 2D image processing steps to first segment, then estimate, the centroid (𝑥, 𝑦) coordinates of the blastomeres in each 2D image. Successive processing of all z-stack images then permits the centroid of the blastomeres to be determined in (𝑥, 𝑦, 𝑧) coordinates. Image processing-based calibration allows a partial zona dissection (PZD) micropipette to move to the computed centroid position under position-based visual servo (PBVS) control. These algorithms are experimentally verified with mouse embryos at the 2-cell stage of development.
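The pipeline summarized above can be sketched in outline as follows. This is a minimal illustration, not the implementation used in this thesis: it assumes each z-stack image has already been segmented into a binary mask of the blastomere of interest (Chapter 3 details the actual segmentation), and the area-weighted combination of per-slice centroids is an assumption made here for illustration only.

```python
def slice_centroid(mask):
    """2D centroid (x, y) and pixel area of one z-stack slice.

    mask: a binary mask as a list of lists of 0/1 values.
    """
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    if not pts:
        return None, 0
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n), n

def blastomere_centroid(masks, z_spacing):
    """Combine per-slice 2D centroids into a 3D centroid (x, y, z).

    masks: one binary mask per z-stack image, ordered along the z-axis.
    z_spacing: distance between consecutive slices (d_I in the Nomenclature).
    Weighting each slice by its segmented area is an illustrative assumption.
    """
    sx = sy = sz = total = 0.0
    for i, mask in enumerate(masks):
        c, area = slice_centroid(mask)
        if area == 0:
            continue  # slice does not intersect the blastomere
        sx += area * c[0]
        sy += area * c[1]
        sz += area * (i * z_spacing)  # z-coordinate of slice i
        total += area
    return sx / total, sy / total, sz / total
```

Given a stack of segmented slices, a single call such as `blastomere_centroid(masks, 5.0)` yields the (𝑥, 𝑦, 𝑧) target used for the servoing step.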


Acknowledgments

The work described in this thesis would not have been possible without the help of several individuals.

Firstly, I would like to express my sincerest gratitude to my supervisor, Professor James K. Mills, for his continuous support of my research. His patience, support and immense knowledge were instrumental in the completion of my work. Thank you for motivating me to push through in times of failure.

I would like to thank Professor Goldie Nejat and Professor Pierre E. Sullivan for taking the time to serve on my committee. I appreciated hearing your thoughts on my work.

Furthermore, this thesis would not have been possible without the support of my labmates at the Laboratory for Nonlinear Systems Control (NSCL): Ihab Abu-Ajamieh, Andrew Michalak, Armin Eshaghi, William Yao and Maharshi Trivedi. It has been a privilege to work with such a talented group. A special mention goes to Ihab for his great mentorship, and his insightful career advice and guidance. Thank you to all my friends for providing me with encouragement and support throughout this journey.

Special thanks go to Dr. Christopher Yee Wong and Dr. Steven Kinio for their help in starting my academic journey, and for their guidance in career and academia.

Last but not least, I would like to express my deepest gratitude towards my parents for their unconditional love, time, support and patience (and food) while supporting me on this endeavour. I would not be the person I am without them.


Table of Contents

Acknowledgments
Table of Contents
List of Tables
List of Figures
List of Appendices
Nomenclature
Introduction
    1.1 Preimplantation Genetic Diagnosis
    1.2 Automation of Single Cell Surgery
    1.3 Problem Statement and Objectives
        1.3.1 Problem Statement
        1.3.2 Objectives
    1.4 Contributions
    1.5 Thesis Organization
Background and Literature Review
    2.1 Overview of Image Processing Techniques
        2.1.1 Types of Microscopy
        2.1.2 Depth of Field
        2.1.3 Image Processing and Cell Segmentation
    2.2 Overview of Visual Servoing Techniques
        2.2.1 Introduction to Visual Servoing
        2.2.2 Image-Based Visual Servo
        2.2.3 Position-Based Visual Servo
Methodology
    3.1 z-Stack Images
        3.1.1 Obtaining z-Stack Images
    3.2 Blastomere Segmentation
        3.2.1 Initialization (Step 1)
        3.2.2 Low-Cost Energy Path (Step 2)
        3.2.3 Blastomere Centroid Calculation (Step 3)
    3.3 Visual Servoing
        3.3.1 Micropipette Calibration
        3.3.2 Position Based Visual Servoing
Results and Discussion
    4.1 Experimental Procedure
    4.2 Experimental Results
    4.3 Discussion of Results
Conclusions
    5.1 Summary and Conclusions
    5.2 Contributions
    5.3 Recommendations and Future Works
References
Appendices


List of Tables

Table 1: Sample Blastomere Coordinate Data
Table 2: Sample Blastomere Coordinate Calculations
Table 3: Sample Blastomere Coordinate Calculation Errors
Table 4: Sample Given Micropipette Tip Coordinates
Table 5: Sample True Micropipette Tip Coordinates
Table 6: Sample Micropipette Tip Coordinate Errors


List of Figures

Figure 1.1: Development of Early-Stage Embryo [5].
Figure 1.2: Preexisting Experimental Setup: (a) Nikon Ti-U brightfield inverted microscope and (b) Scientifica Patchstar robotic micromanipulators and Prior Proscan III motorized stage.
Figure 2.1: Images captured by brightfield microscopy and fluorescence, respectively. Blastomeres are dyed red, and nuclei are dyed green [17].
Figure 2.2: Various brightfield microscopy techniques: (a) 12-cell stage embryo captured with DIC [28], (b) zygote stage embryo captured with HMC [32], (c) 4-cell stage embryo captured with HMC [33].
Figure 2.3: Images showing tetrahedral shape of 4-cell stage embryo: (a) Image focused at bottom two blastomeres of embryo [38]. (b) Image focused at top two blastomeres of embryo [38].
Figure 2.4: Diagram of 2-cell stage embryo with blastomeres, and centroid of the blastomere, 𝐶𝑇.
Figure 2.5: Diagram of embryo placed on motorized stage of a microscope. The movement axes of both the stage and objective lens are labelled.
Figure 2.6: Active contour segmentation of zona pellucida: (a) Original image [40]. (b) Active contour segmentation [40]. (c) Manual segmentation as reference [40].
Figure 2.7: Variational Level Sets for Cell Segmentation: (a) Manual segmentation [41]. (b) Blastomeres within the ZP, with bounding curves [41].
Figure 2.8: Z-stack images of embryo: Top row: original z-stack images obtained [33]. Middle row: blastomeres segmented with graph-based method, with segmented contours marked in yellow [33]. Bottom row: reconstructed 3D structure of blastomeres [33].
Figure 2.9: Demonstration of IBVS: (a) Initial coordinates of features, marked as yellow dots, and (b) desired coordinates of features, marked as red dots.
Figure 2.10: Controlling position coordinates to move to desired location with PBVS.
Figure 3.1: Diagram showing axes of motion, and the Cartesian reference frame. The motorized stage moves along the xy-plane. The objective lens moves along the z-axis.
Figure 3.2: Diagram of Image Stack and z-Stack Images. The z-stack image outlined in red is the z-stack image of interest (IOI). The two z-stack images outlined in blue are involved in the process to create the image array, 𝐽, which is further explained in Section 3.2.2.
Figure 3.3: Z-stack images of embryo. (a) Embryo with z-stack images taken successively at equally separated focal planes. (b) Individual z-stack images stacked to indicate what part of the blastomere is captured in each z-stack image. Also shows the format of TIF files.
Figure 3.4: The original image of the embryo, selected from the middle of the image stack [15].
Figure 3.5: Image with standard deviation filter applied.
Figure 3.6: The thresholded binarized image.
Figure 3.7: Image with area filter applied.
Figure 3.8: Image with area fill applied.
Figure 3.9: Image smoothed by a structuring element, resulting in a blob containing the two blastomeres.
Figure 3.10: Image processing algorithms to acquire approximate centroids. (a) Blob acquired from the previous step (Figure 3.9). (b) Calculated centroid of blob represented as a blue *. (c) Line from blob centroid to closest edge. (d) Segmented blastomeres from line cut. (e) Centroids of respective blastomeres, represented as blue *.
Figure 3.11: z-Stack image with ROI around BOI.
Figure 3.12: ROI displayed in polar coordinates at the z-stack IOI, 𝐽𝑖.
Figure 3.13: Format of image array, 𝐽.
Figure 3.14: Energy array at the z-stack image of interest, 𝐸𝑘.
Figure 3.15: Basic graph structure example.
Figure 3.16: Basic graph structure path example.
Figure 3.17: Sample of 2D graph structure of the energy z-stack image, 𝐸𝑘.
Figure 3.18: Sample of 3D graph structure, 𝐸(𝜃𝑛, 𝜌𝑛, 𝑚).
Figure 3.19: Components of 3D Graph Structure Complexity.
Figure 3.20: Graph showing number of permutations, 𝑃𝑚, vs. number of z-stack images within the graph for the energy matrix, 𝐸.
Figure 3.21: Sparse Matrix where 𝑛 = 1, or 𝑚 = 3.
Figure 3.22: Low-cost energy path. (a) Path 𝛤𝑖 at 𝐸𝑖−1. (b) Path 𝛤𝑖 at 𝐸𝑖. (c) Path 𝛤𝑖 at 𝐸𝑖+1. (d) Path 𝛤𝑖 projected onto the xy-plane, 𝛾𝑖.
Figure 3.23: Computed path, 𝛾𝑖, represented by the red line, and centroid, 𝐶𝑖, represented by a red *, of the z-stack IOI of 𝐼𝑖.
Figure 3.24: Diagram of the z-stack image centroids, 𝐶𝑖, areas 𝐴𝑖, and computed blastomere centroid, 𝐶̅.
Figure 3.25: Flowchart of Blastomere Segmentation Algorithm. The orange section represents the manual operations required to begin the automated task, whereas the blue sections represent the automated tasks. Statements in green represent the output of each respective step.
Figure 3.26: Schematic of the experimental setup.
Figure 3.27: Micropipette image segmentation. (a) Original image of micropipette. (b) Canny edge detection. (c) Image fill. (d) Micropipette outline split into side walls and tip. (e) Micropipette with orientation and tip position.
Figure 3.28: Micropipette Calibration Procedure. (a) Micropipette at first position. (b) Micropipette at second position. (c) Micropipette at calibration test position.
Figure 3.29: Micropipette Control Path. (a) Micropipette at second position, 𝑝𝑚𝑝,2. (b) Micropipette at third position, 𝑝𝑚𝑝,3. (c) Micropipette at fourth, and final, position, 𝑝𝑚𝑝,4.
Figure 4.1: Flowchart of overall BOI centroid computation and visual servo process. The boxes represent tasks, whereas the arrows represent the progression from one task to another. Orange boxes and arrows represent tasks performed manually, whereas the blue boxes and arrows represent automatically performed tasks.
Figure 4.2: Sample Experiment of Visual Servoing.
Figure 4.3: Various z-stack images of embryo. (a) z-stack image at 𝐼14. Note the white circularly shaped outline within the embryo; this is the boundary of the blastomere at this z-stack image. (b) z-stack image at 𝐼31, also used as the middle of the image stack because it is the z-stack image with the largest blastomere boundary. (c) z-stack image at 𝐼44. The blastomere boundary is not visible due to blastomere opacity.
Figure 4.4: Comparison of Micropipette Tips for Calibration.


List of Appendices

Appendix A. Experiment of Visual Servoing

Appendix B. Sample Blastomere Coordinate Calculations

Appendix C. Sample Micropipette Tip Coordinate Calculations

Appendix D. Sample of Blastomere Segmentation Across Image Stack


Nomenclature

Abbreviations

3D Three Dimensions/Dimensional

ART Assisted Reproductive Technologies

BOI Blastomere of Interest

CAD Canadian Dollar

DIC Differential Interference Contrast

DoF Depth of Field (Depth of Focus)

DOF Degrees of Freedom

GPS Global Positioning System

HMC Hoffman Modulation Contrast

IBVS Image-Based Visual Servo Control

ICSI Intracytoplasmic Sperm Injection

IOI z-Stack Image of Interest

IVF In Vitro Fertilization

NA Numerical Aperture

OQM Optical Quadrature Microscopy

PBVS Position-based Visual Servo Control

PGD Preimplantation Genetic Diagnosis

PZD Partial Zona Dissection

ROI Region of Interest

TIF Tagged Image File Format

USD United States Dollar

ZP Zona Pellucida

Microscopy

𝑎 Sample Node

𝐴𝑖 𝑖th Index of Area of 𝛾𝑖

𝑏 Sample Node


𝑐 Sample Node

𝐶𝑎 Centroid Approximation used for ROI Initialization

𝑐𝑎𝑥 x-Coordinate of Centroid Approximation used for ROI Initialization

𝑐𝑎𝑦 y-Coordinate of Centroid Approximation used for ROI Initialization

𝐶𝑏𝑙𝑜𝑏 Centroid of Blob

𝐶𝑖 Centroid of 𝛾𝑖

𝑐𝑖𝑥 x-Coordinate of Centroid of 𝛾𝑖

𝑐𝑖𝑦 y-Coordinate of Centroid of 𝛾𝑖

𝐶𝑚 Calculated Centroid by Manual Segmentation

𝑐𝑚𝑥 x-Coordinate of Calculated Centroid by Manual Segmentation

𝑐𝑚𝑦 y-Coordinate of Calculated Centroid by Manual Segmentation

𝑐𝑚𝑧 z-Coordinate of Calculated Centroid by Manual Segmentation

𝐶𝑇 True Centroid of BOI

𝐶̅ Calculated Centroid of BOI

𝑑 Sample Node

𝑑𝐼 Distance between Consecutive z-Stack images

𝑒 Sample Node

𝐸 Energy Array

𝐸𝑏 Energy Value at Node b

𝐸𝑐 Energy Value at Node c

𝐸𝑑 Energy Value at Node d

𝐸𝑖 𝑖th Index of Energy Array

𝐸𝑗 𝑗th Index of Energy Array

𝐸𝑘 𝑘th Index of Energy Array

𝑒𝑥 Error along x-axis

𝑒𝑦 Error along y-axis

𝑒𝑧 Error along z-axis

𝑓 Sample Node

𝐹𝑡ℎ𝑟𝑒𝑠ℎ Threshold of Binarized Filter

𝑔 Sample Node

𝐺𝜌 Gradient Operator along Radial Direction


𝑖 Indexing Variable

𝐼𝑖 𝑖th Index of z-Stack Image of an Image Stack

𝐼𝐼 Index of z-Stack Image of an Image Stack where BOI is Largest

𝑗 Indexing Variable

𝐽 Image Array

𝐽𝑖 𝑖th Index of Image Array

𝑘 Indexing Variable

𝐾 Scaling Factor for Low-Cost Energy Path Formula

𝑚 Number of z-Stack Images used for Graph Structure

𝑀𝑇 Total Visual Magnification of Microscope

𝑛 Refractive Index of Medium

𝑁 Number of z-Stack Images in an Image Stack

ℕ+ Positive Natural Numbers

𝑁𝐴 Numerical Aperture of Objective Lens

𝑃 Sigmoid Function

𝑃𝑚 Permutations of Graph Structure

𝑥𝑇 True x-coordinate of BOI

𝑦𝑇 True y-coordinate of BOI

𝑧𝑇 True z-coordinate of BOI

𝑥̅ x-Coordinate of Calculated Centroid of BOI

𝑦̅ y-Coordinate of Calculated Centroid of BOI

𝑧𝑖 z-Coordinate at z-stack Image 𝐼𝑖

𝑧̅ z-Coordinate of Calculated Centroid of BOI

𝛼 Direction of Lighting from HMC Imaging

∈ Belongs to (Mathematical Operator)

𝛤𝑖 𝑖th Index of 3D Path for Blastomere Segmentation

𝛾𝑖 𝑖th Index of 2D Projection on xy-plane of 𝛤𝑖

λ Wavelength of Light Used

𝜌 Radii of Ring for ROI

𝜌′ Radii of Inner Ring of ROI

𝜌′′ Radii of Outer Ring of ROI


𝜌𝑛 Number of 𝜌 Samples for 𝐽

𝜃 Angle for use in Polar Coordinates of 𝐽

𝜃𝑛 Number of 𝜃 Samples for 𝐽

Visual Servoing

𝒂 Set of Parameters representing additional knowledge about the System

𝐶𝐿 Centroid of Left Micropipette Side Wall

𝐶𝑅 Centroid of Right Micropipette Side Wall

𝐶̅ Calculated Centroid of BOI

𝒆 Error between Features

𝑖 Indexing Variable

𝑗 Indexing Variable

𝑘 Indexing Variable

𝒎 Set of Image Measurements

𝑁 Total Number of Points of 𝑠𝑜𝑟𝑑𝑒𝑟𝑒𝑑

𝑝 Micropipette Tip Position

𝑝𝑜𝑓𝑓𝑠𝑒𝑡 Offset of 𝑝 from Micromanipulator Frame to Camera Frame

𝑝𝐶 Micropipette Tip Position for Calibration in Camera Frame

𝑝𝑀 Micropipette Tip Position for Calibration in Micromanipulator Frame

𝑝1 First Micropipette Tip Position for Calibration

𝑝1𝐶 First Micropipette Tip Position in Camera Frame

𝑝1,𝑥𝐶 x-Component of 𝑝1𝐶

𝑝1,𝑦𝐶 y-Component of 𝑝1𝐶

𝑝1𝑀 First Micropipette Tip Position in Micromanipulator Frame

𝑝1,𝑥𝑀 x-Component of 𝑝1𝑀

𝑝1,𝑦𝑀 y-Component of 𝑝1𝑀

𝑝2 Second Micropipette Tip Position for Calibration

𝑝2𝐶 Second Micropipette Tip Position in Camera Frame

𝑝2,𝑥𝐶 x-Component of 𝑝2𝐶

𝑝2,𝑦𝐶 y-Component of 𝑝2𝐶

𝑝2𝑀 Second Micropipette Tip Position in Micromanipulator Frame

𝑝2,𝑥𝑀 x-Component of 𝑝2𝑀

𝑝2,𝑦𝑀 y-Component of 𝑝2𝑀

𝑝𝑚𝑝 Position of Micropipette Tip for Visual Servoing

𝑝𝑚𝑝,1 First Position of Micropipette Tip for Visual Servoing

𝑝𝑚𝑝,2 Second Position of Micropipette Tip for Visual Servoing

𝑝𝑚𝑝,3 Third Position of Micropipette Tip for Visual Servoing

𝑝𝑚𝑝,4 Fourth Position of Micropipette Tip for Visual Servoing

𝑅 Rotation Matrix

𝑠 Scaling Factor

𝒔 Current Set of Features

𝒔∗ Desired Set of Features

𝑠𝐿 Set of Points along Left Micropipette Side Wall

𝑠𝑅 Set of Points along Right Micropipette Side Wall

𝑠𝑜𝑟𝑑𝑒𝑟𝑒𝑑 Set of Points along Perimeter of Micropipette

𝑠𝑜𝑟𝑑𝑒𝑟𝑒𝑑,𝑥 x-Component of 𝑠𝑜𝑟𝑑𝑒𝑟𝑒𝑑

𝑠𝑜𝑟𝑑𝑒𝑟𝑒𝑑,𝑦 y-Component of 𝑠𝑜𝑟𝑑𝑒𝑟𝑒𝑑

𝒕 Time

T Transformation Matrix

𝑥 x-Coordinate of Micropipette Tip

𝑥1 x-Coordinate of First Micropipette Tip Position for Visual Servoing

𝑥2 x-Coordinate of Second Micropipette Tip Position for Visual Servoing

𝑥3 x-Coordinate of Third Micropipette Tip Position for Visual Servoing

𝑥4 x-Coordinate of Fourth Micropipette Tip Position for Visual Servoing

𝑥𝐸 Error of Tip along x-Direction

𝑥𝐺 Sample Given x-Coordinate

𝑥𝑇 x-Coordinate of True Micropipette Tip Position

𝑦𝐸 Error of Tip along y-Direction

𝑦𝐺 Sample Given y-Coordinate

𝑦𝑇 y-Coordinate of True Micropipette Tip Position

𝑧𝐸 Error of Tip along z-Direction


𝑧𝐺 Sample Given z-Coordinate

𝑧𝑇 z-Coordinate of True Micropipette Tip Position

𝑥̅ x-Coordinate of Calculated Centroid of BOI

𝑦 y-Coordinate of Micropipette Tip

𝑦1 y-Coordinate of First Micropipette Tip Position for Visual Servoing

𝑦2 y-Coordinate of Second Micropipette Tip Position for Visual Servoing

𝑦3 y-Coordinate of Third Micropipette Tip Position for Visual Servoing

𝑦4 y-Coordinate of Fourth Micropipette Tip Position for Visual Servoing

𝑦̅ y-Coordinate of Calculated Centroid of BOI

𝑧 z-Coordinate of Micropipette Tip

𝑧1 z-Coordinate of First Micropipette Tip Position for Visual Servoing

𝑧2 z-Coordinate of Second Micropipette Tip Position for Visual Servoing

𝑧3 z-Coordinate of Third Micropipette Tip Position for Visual Servoing

𝑧4 z-Coordinate of Fourth Micropipette Tip Position for Visual Servoing

𝑧𝑜𝑓𝑓𝑠𝑒𝑡 Offset of 𝑧 from Micromanipulator Frame to Camera Frame

𝑧𝐶 z-Coordinate of Micropipette Tip in Camera Frame

𝑧𝑀 z-Coordinate of Micropipette Tip in Micromanipulator Frame

𝑧̅ z-Coordinate of Calculated Centroid of BOI

𝛼 Angle of Micropipette

𝛼𝑖𝑛𝑖𝑡 Initial Estimate of Micropipette Angle

𝛼𝑖𝑛𝑖𝑡,𝐿 Micropipette Left Wall Angle for Initialization

𝛼𝑖𝑛𝑖𝑡,𝑅 Micropipette Right Wall Angle for Initialization

𝛽𝐶 Angle for Rotation Matrix in Camera Frame

𝛽𝑀 Angle for Rotation Matrix in Micromanipulator Frame

𝛿 Sampling Length

𝜀 Number of Points used for Angle Estimation

∈ Belongs to (Mathematical Operator)


Introduction

1.1 Preimplantation Genetic Diagnosis

In the healthcare industry, technology is progressing at a rapid rate, with advancements extending toward the micro and cellular scale. These developments enable the manipulation of individual cells and their intracellular components. In particular, the field of assisted reproductive technologies (ART) requires such tools to address fertility and reproduction related issues. One such ART procedure, in vitro fertilization (IVF), requires the manipulation of embryos at an early stage of development, within a few days of fertilization.

In vitro fertilization involves removing an unfertilized cell, known as an oocyte, from the organism for procedures such as intracytoplasmic sperm injection (ICSI) [1] and preimplantation genetic diagnosis (PGD) [2], as opposed to in vivo fertilization, in which the oocyte remains within the organism [3]. A typical IVF procedure is as follows. An oocyte is first removed from the organism. ICSI then uses a small sharp needle, called a micropipette, to inseminate the oocyte. Once inseminated, the fertilized cell, known as an embryo (or zygote), is stored in an incubator that mimics the temperature and CO2 levels inside the organism, and begins developing. Initially, the embryo is a single-celled zygote. On the first day it divides into the 2-cell stage, and on subsequent days into the 4-cell and then the 8-cell stage [4]. The individual cells produced by these divisions are known as blastomeres, and are vital to the PGD process [2]. The embryo then develops into the 16-32 cell stage (morula), and then a blastocyst. These stages can be seen in Figure 1.1 [5]. Only then is it transferred back into the organism for further natural development.


Figure 1.1: Development of Early-Stage Embryo [5].

Preimplantation genetic diagnosis is a method used by embryologists during IVF treatment for genetic testing purposes [2]. One or two blastomeres are extracted at the 2-cell, 4-cell, or 8-cell stage for genetic analysis. These analyses may be used to diagnose genetic diseases, including autosomal-dominant disorders such as Huntington disease and Marfan syndrome, and autosomal-recessive disorders such as cystic fibrosis and sickle cell disease [2], [6]. Manual PGD processes performed by embryologists have a low rate of success, hovering at around 30% [7]. The average cost of an IVF treatment ranges from approximately $10,000 to $20,000 (CAD) per IVF cycle in Canada [8], and the cost per success for cycle-based IVF treatment nears $50,000 (USD) in the United States [9]. There is a need both to lower the cost for IVF patients and to vastly improve the success rate of this process.

1.2 Automation of Single Cell Surgery

The automation of single biological cell surgery is an effective approach to this problem. Automation of cell surgery tasks has the potential to provide a robust and repeatable procedure, allowing for higher success rates of IVF treatments. Automated processes operate without the operator fatigue experienced by embryologists, reduce the risk of human contamination of the embryo, and reduce the time the embryo spends outside of the host. Automation in this field also offers increased throughput and speed in performing cell surgery tasks, and is designed with the primary goal of increased success rates. In recent years, several advancements have been made to automate IVF and PGD tasks, such as embryo rotation [10], [11], micropipette control [10], [12] and cell aspiration [13]. Hence, automation can be less expensive and foster greater use, providing an alternative to the standard procedures now used in IVF.

1.3 Problem Statement and Objectives

1.3.1 Problem Statement

An important step for automating single cell surgery is determining where individual biological

cells, and their intracellular components, are in 3D Cartesian space. In particular, blastomeres

within early-stage embryos are generally not located in the same focal plane as one another, which poses a problem for automated detection of such blastomeres during tasks such as

blastomere aspiration for PGD processes [14]. Microscopes also possess a shallow depth of focus,

which results in only a relatively thin layer of the blastomere being in focus at any instant in time.

In some cases of automated blastomere extraction, due to the limited depth of focus, the entire

blastomere would travel along the z-direction away from the focal plane, causing the given task to fail [15]. Knowledge of 3D coordinate location data is vital to successfully complete

automated single-cell surgery tasks. This coordinate data is important for automating and operating

image-based processes, such as visual servo control, particularly position-based visual servo

(PBVS) control, so that blastomere related tasks, such as aspiration, may successfully be carried

out.


1.3.2 Objectives

For the purpose of this thesis, the developed algorithms must be able to compute and determine

the centroid coordinates of a blastomere of interest (BOI) within an embryo in 3D Cartesian space,

and then move a micropipette to the computed position using PBVS control, for blastomere

aspiration and extraction purposes. Furthermore, the proposed algorithms must integrate with the

preexisting Nikon Ti-U brightfield inverted microscope setup [Figure 1.2(a)], equipped with two

robotic micromanipulators and a motorized stage [Figure 1.2(b)].

Figure 1.2: Preexisting Experimental Setup: (a) Nikon Ti-U brightfield inverted microscope and

(b) Scientifica Patchstar robotic micromanipulators and Prior Proscan III motorized stage.


1.4 Contributions

In this work, the following contributions are made:

1. The research proposes a method to obtain image data from across the z-direction of an

embryo, known as z-stack images.

2. From the obtained z-stack images, the research proposes an automated image processing

procedure to determine the centroid of a BOI, and a visual servo procedure to move a

micropipette to this computed centroid position.

3. The proposed research integrates with the existing Nikon Ti-U brightfield microscope

setup, equipped with two robotic micromanipulators and a motorized stage.

1.5 Thesis Organization

The remainder of the thesis is divided into four chapters. Chapter 2 presents a background and a

literature review of the proposed research. This includes background on image processing techniques and related topics, such as microscopy, depth of field, and cell segmentation, as

outlined in Section 2.1. The literature review then introduces visual servo techniques in Section 2.2, and compares the two types: image-based and position-based visual servo control.

The methodology proposed in this research is detailed in Chapter 3. This chapter includes the

image acquisition from a brightfield microscope and introduces the concept of z-stack images and

the image stack, as outlined in Section 3.1. With the acquired image stack, Section 3.2 details the

proposed 3D image processing algorithms for computing the centroid of the BOI. The image

processing procedures in the section involve algorithms to determine a region of interest from the

z-stack image for subsequent steps, and the construction of a graph structure to produce a low-cost

energy path for segmentation of the BOI in each z-stack image. With the blastomeres segmented in every z-stack image of the image stack, the 3D Cartesian coordinates of the BOI centroid are calculated. Section 3.3 details the visual servo procedure used to move a micropipette to the

target position, the computed centroid of the BOI. Starting with an image processing-based


micropipette calibration, a method is then described to move the micropipette to the computed BOI

centroid position.

Chapter 4 presents a guide on the acquisition of results from the proposed algorithms in this

research, from a user’s perspective. It details the experimental procedure to acquire results in Section 4.1. The data for sample experiments are displayed and the proposed algorithms are

validated for accuracy in Section 4.2. The results are then discussed in Section 4.3, along with

limitations of the proposed algorithms.

Lastly, Chapter 5 concludes by summarizing the thesis and the contributions of the research,

along with recommendations for future work.


Background and Literature Review

In this chapter, a literature review of the research is presented. The review is separated into two

main parts. Section 2.1 provides an overview of image processing techniques, from the various types and limitations of microscopy, such as the limited depth of field, to the methods used for cell segmentation. Section 2.2 provides an overview of visual servoing and its two types, image-based and position-based visual servo control, and motivates their use in the automation of single cell surgery.

2.1 Overview of Image Processing Techniques

2.1.1 Types of Microscopy

To automate the task of calculating the 3D coordinates of blastomeres within embryos, image

processing is necessary, especially when using a brightfield microscope to acquire images.

Brightfield microscopes are ideal due to their simplicity of setup, and for observing living cells.

Their limitation is low contrast when used to observe biological cells, particularly translucent cells, such as embryos [16]. The image contrast can be enhanced either physically,

using fluorescence techniques, or by software, using image processing techniques, or some

combination of the two. Tsichlaki and FitzHarris were able to dye the embryo to measure the

volume of the nuclei of the blastomeres, while simultaneously dyeing the blastomeres [17]. Figure

2.1 shows images of embryos through various stages of development, captured with both a brightfield microscope and a confocal microscope under fluorescence imaging. Note

the stark differences in contrast between the two techniques.

Confocal microscopes are capable of segmenting intracellular components, such as blastomeres,

in 3D space, as long as the subjects are dyed [16], [18]. An optical coherence tomography method

is also useful for rapidly acquiring 3D models of embryos [19]. However, segmentation with confocal microscopes requires the use of raster scanning, a type of sequential scanning, which may

take long periods of time [15], [20]. Using fluorescence imaging in confocal microscopy exposes


cells to risks from toxic dyes and photobleaching. Current research is ongoing to minimize the damage

to cells from fluorescent dyeing [21]–[25].

Figure 2.1: Images captured by brightfield microscopy and fluorescence, respectively.

Blastomeres are dyed red, and nuclei are dyed green [17].

There are methods to improve the contrast of unstained biological samples captured by brightfield

microscopes to better be able to detect translucent bodies, such as the blastomeres and zona

pellucida (ZP) of embryos. For example, differential interference contrast (DIC) is based on the

principle of wave interference, similar to the equipment used in detecting gravitational waves with

LIGO [26], [27]. DIC uses the wave interference property to accentuate the outlines of the

intracellular components, such as blastomeres, generating higher contrast images. Newmark et al. was able to count the number of blastomeres within an embryo using DIC in conjunction with optical

quadrature microscopy (OQM) [28]. Figure 2.2(a) shows an image captured with a brightfield

microscope with the DIC technique. Soll et al. was able to successfully track and analyze the

motility of organelles in 3D with DIC [29]. Similar to DIC, Hoffman modulation contrast (HMC)

also provides a method to accentuate the outer edges of the cells, and is often used for this purpose

[30], [31]. Giusti et al. captured images of a zygote-stage embryo using HMC, as shown

in Figure 2.2(b) [32]. Giusti et al. was able to accurately segment the zygote using a graph-based

method and recover the cell contour of the zygote boundary due to the high contrast of the captured

HMC image. Giusti et al. was also able to segment 4-cell stage embryos using the same graph-based method, with the addition of image stacks (also known as focus stacks, or z-stack

images) [33]. With this method, they were able to segment the 4 blastomeres with a success rate

of 71.3%.


Figure 2.2: Various brightfield microscopy techniques: (a) 12-cell stage embryo captured with DIC

[28], (b) zygote stage embryo captured with HMC [32], (c) 4-cell stage embryo captured with

HMC [33].

2.1.2 Depth of Field

There exists another problem when imaging cells, due to the optical properties of microscopes,

specifically their low depth of focus. Embryos have a diameter of approximately 100 μm, and

blastomeres have a diameter ranging from approximately 25 to 75 μm [34], depending on the embryo’s stage of development. At these microscopic scales, the objective lenses of the microscope possess

a very shallow depth of field (DoF) (also commonly known as depth of focus). The DoF is

dependent on the objective lens’s physical properties, namely its magnification, numerical

aperture (NA), and wavelength of light used [35]. Berek’s formula is an equation that estimates the DoF of objective lenses, as shown in (2.1) [35].

𝐷𝑜𝐹 = 𝑛 (λ / (2 ∙ 𝑁𝐴²) + 340 / (𝑀𝑇 ∙ 𝑁𝐴)) (2.1)

where 𝑀𝑇 is the total magnification, λ is the wavelength of light used, and 𝑛 is the refractive index of the medium in which the object is situated. Depending on the objective lens, the DoF

may vary from 5-20 μm at these scales. As a result, the entire blastomere, let alone the embryo, will not be in focus in a single image. Embryos at the 2-cell stage tend to have their blastomeres oriented such that they lie on a plane parallel to the stage the embryo is sitting upon [36]. Embryos at the 4-cell stage

tend to be oriented in a tetrahedral pattern the majority of the time (>80%) [36], [37]. When

blastomeres are arranged in this tetrahedral pattern, they do not lie in the same plane, thus making


it difficult to locate them in 3D Cartesian space, as illustrated in Figure 2.3 [38]. The very limited depth of focus compounds the challenge of locating the blastomeres in 3D Cartesian space.
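To make the formula concrete, Berek’s depth of field can be evaluated numerically. The sketch below is illustrative only: the objective parameters (a 0.45 NA, 20x objective with a 10x relay giving 𝑀𝑇 = 200, λ = 0.55 μm, and a medium treated as 𝑛 = 1) are assumed values, not taken from this thesis.

```python
def berek_dof(wavelength_um, numerical_aperture, total_magnification, refractive_index=1.0):
    """Depth of field in micrometres from Berek's formula (2.1):
    DoF = n * (lambda / (2 * NA^2) + 340 / (M_T * NA))."""
    na = numerical_aperture
    return refractive_index * (
        wavelength_um / (2.0 * na ** 2) + 340.0 / (total_magnification * na)
    )

# Assumed, illustrative parameters: 20x / 0.45 NA objective with a 10x relay
# (M_T = 200), green light (0.55 um), medium treated as n = 1.
dof = berek_dof(wavelength_um=0.55, numerical_aperture=0.45, total_magnification=200)
print(round(dof, 2))  # about 5 um, consistent with the 5-20 um range cited above
```

With these assumed parameters the result lands near the low end of the 5-20 μm range quoted above; a higher-index medium or lower NA increases the DoF.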

Figure 2.3: Images showing tetrahedral shape of 4-cell stage embryo: (a) Image focused at bottom

two blastomeres of embryo [38]. (b) Image focused at top two blastomeres of embryo [38].

The position coordinates of the blastomere of interest (BOI) are 3D Cartesian coordinates, i.e. 𝑥, 𝑦,

and 𝑧 coordinates. The centroid of the BOI is denoted 𝐶𝑇 = (𝑥𝑇 , 𝑦𝑇 , 𝑧𝑇), where 𝐶𝑇 is the true centroid position of the BOI, and 𝑥𝑇, 𝑦𝑇, and 𝑧𝑇 are the true centroid coordinate components along axes 𝑥,

𝑦, and 𝑧 respectively. A diagram of the embryo, with 𝐶𝑇 labelled, is shown in Figure 2.4. This

provides an accurate measurement of the centroid of the blastomere, necessary for blastomere

aspiration or extraction purposes. Locating this centroid determines where the blastomere lies in

3D Cartesian space. Most, if not all IVF experimental setups are such that the microscope camera

observes the embryo in the 𝑥𝑦-plane. Since brightfield microscopes take images in 2D, image

processing algorithms are required to obtain 𝑥𝑇 and 𝑦𝑇 coordinates. The camera must move along

the 𝑧-axis in order to obtain the 𝑧𝑇 coordinate. A diagram of this setup is shown in Figure 2.5.


Figure 2.4: Diagram of 2-cell stage embryo with blastomeres, and centroid of the blastomere, 𝐶𝑇.

Figure 2.5: Diagram of embryo placed on motorized stage of a microscope. The movement axes

of both the stage and objective lens are labelled.


2.1.3 Image Processing and Cell Segmentation

There are various methods to obtain the desired 𝑧-coordinate of the features of a blastomere

observed under a brightfield microscope, 𝑧𝑇. Ideally, 𝑧𝑇 is located where the blastomere is widest,

and the blastomere boundary is clearest when the focal plane of the microscope lies at this part of

the embryo. Assuming a spherical shape, this is also at the center of the blastomere. One method

to obtain 𝑧𝑇 is software-based auto-focusing. Bahadur and Mills were able to maintain focus

at a targeted position within embryos with an autofocusing technique using bare-bones particle

swarm optimization and Gaussian jumps [38]. The technique involved obtaining the sharpness

values, in terms of standard deviation, of several points along the 𝑧-axis, and finding the 𝑧-

coordinate that gave the largest sharpness value. A similar approach with auto-focusing was

performed by Wang et al. to focus on and detect polar bodies of oocytes [39].
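The sharpness-maximization idea behind these autofocusing approaches can be sketched as an exhaustive scan over a stack. This is a simplification: [38] uses particle swarm optimization rather than brute force, and the 5 μm spacing and toy images here are assumptions for illustration.

```python
from statistics import pstdev

def sharpness(image):
    """Sharpness score of one z-stack image: the standard deviation
    of its pixel intensities, as in the autofocusing approaches above."""
    return pstdev([px for row in image for px in row])

def best_focus_z(image_stack, z_start=0.0, dz=5.0):
    """z-coordinate (um) of the sharpest image, assuming images are
    spaced dz apart starting at z_start."""
    scores = [sharpness(img) for img in image_stack]
    i_best = max(range(len(scores)), key=scores.__getitem__)
    return z_start + i_best * dz

# Toy stack: the middle "image" has the most intensity variation (in focus).
stack = [
    [[10, 10], [10, 10]],  # z = 0 um: uniform, i.e. blurry
    [[0, 255], [255, 0]],  # z = 5 um: high contrast, i.e. sharp
    [[12, 14], [13, 12]],  # z = 10 um: nearly uniform
]
print(best_focus_z(stack))  # 5.0
```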

As mentioned in Section 2.1.2, to obtain the 𝑥𝑇 and 𝑦𝑇 coordinates of the BOI, image processing

techniques are required. Image segmentation creates a mask in which an object of interest can be

selected for further analysis. Segmentation may be performed through basic image processing

techniques, such as binary thresholds, standard deviation and Gaussian filters, area and perimeter

filters, and image smoothing. However, due to the complexity of the embryo and blastomere

structure, such as overlapping images of blastomeres and a large number of image artifacts, it is

better to use more advanced image processing techniques for segmentation. One such method is

active contours, used by Morales et al. to segment the zona pellucida of embryos, as shown in

Figure 2.6 [40]. However, this requires a manual initialization of parameters, such as defining the

foreground and backgrounds of a specimen, and defeats the purpose of automating the task of

determining the blastomere centroid coordinates. Level sets also provide a way to determine 𝑥𝑇

and 𝑦𝑇, as obtained from the manually segmented 2D contours by Pedersen et al., as seen in Figure 2.7 [41], [42]. Pedersen et al. was also able to approximate the model of the embryo with the variational level set approach [41], [42]. Giusti et al. employs a graph-based segmentation method

to segment zygotes [33]. After initializing a Region of Interest (ROI), a low-cost, gradient-based,

and graph-based algorithm is run to calculate the outer edge of the zygote. This may also be used

to find the BOI. Giusti et al. also acquired and analyzed 𝑧-stack images to obtain 3D

morphology measurements of early-stage embryos, as shown in Figure 2.8 [33]. Other advanced

segmentation techniques involve Canny and Sobel edge detection algorithms, watershed and Otsu

methods, and even machine learning and neural networks [43].


Figure 2.6: Active contour segmentation of zona pellucida: (a) Original image [40]. (b) Active

contour segmentation [40]. (c) Manual segmentation as reference [40].

Figure 2.7: Variational Level Sets for Cell Segmentation: (a) manual segmentation [41]. (b)

Blastomeres within the ZP, with bounding curves [41].


Figure 2.8: Z-stack images of embryo: Top row: original Z-stack images obtained [33]. Middle

row: blastomeres segmented with graph-based method [33]. Segmented contours marked in

yellow. Bottom row: reconstructed 3D structure of blastomeres [33].


2.2 Overview of Visual Servoing Techniques

2.2.1 Introduction to Visual Servoing

The physical cell surgery task is a necessary step to perform automated procedures on embryos.

Methods for manipulation and surgery of cells and intracellular components involve optical

tweezers [44], [45], electric fields [46], and friction-based rotation [11] for both translation and

rotation of the cell. Since the objective is to use the existing hardware from the microscope,

commonly found in IVF clinics, micropipettes are used as the tool for the automated cell surgery

task. For robotic micromanipulators to perform automated processes on embryos, the position of

the embryos with respect to the micromanipulator must be evaluated for controller development

[47]. The closed-loop control of a manipulator from visual images is also known as visual servoing

[47].

As defined by Chaumette and Hutchinson, “visual servo control refers to the use of computer

vision data to control the motion of a robot” [48]. It requires the use of a camera to acquire an

image, from which coordinates of objects are calculated, permitting the motion of a robot. The

camera may either be connected to the end-effector, or some other appendage, of the moving robot,

or it may remain stationary [48], [49]. In the case of automation of IVF tasks with brightfield

microscopes, the focal plane of the camera will only move along the 𝑧-axis, whereas the

micropipette will move in the 𝑥, 𝑦, and 𝑧 axes.

The aim of visual servoing is to minimize the error between the current set of features, and the

desired set of features, as shown in (2.2) [48].

𝒆(𝑡) = 𝒔(𝒎(𝑡), 𝒂) − 𝒔∗ (2.2)

where 𝒆(𝑡) is the error between features, 𝒔 is the current set of features, 𝒎(𝑡) is the set of image

measurements, 𝒂 is the set of parameters representing additional knowledge about the system, and

𝒔∗ is the desired set of features. There are two main visual servo control techniques: image-based

visual servo control (IBVS), and position-based visual servo control (PBVS). Both variations have

their pros and cons.


2.2.2 Image-Based Visual Servo

IBVS focuses on manipulating pixel coordinates to achieve the desired coordinate values [48]. The

goal of IBVS is to solve (2.2) where 𝒔 is the pixel coordinate of each feature of the object of interest

from the camera’s perspective, and 𝒔∗ is the desired pixel coordinate of each feature. In other words, IBVS moves the camera, or the object, so that the features at the initial position move towards the desired position.

IBVS will then move the camera, and/or object, such that the pixel coordinates of the features, 𝒔,

align with those in the desired state, 𝒔∗. IBVS is useful when using cameras to control robotic

manipulators, particularly at a larger scale. Liu and Sun perform IBVS on cells for tracking and

cellular rotation [50]. However, it is performed only along the 𝑥𝑦-plane and does not include the

𝑧-axis. At the cellular scale, due to the shallow DoF of the microscope, it becomes increasingly difficult to implement IBVS since there are far more parameters at play, and thus it is not ideal for

this research.

2.2.3 Position-Based Visual Servo

PBVS is another type of visual servoing. Instead of pixel coordinates, PBVS focuses on

manipulating the position coordinates of objects and manipulators to achieve the same goal as

IBVS; to solve (2.2), however with an exception. For PBVS, 𝒔 is the position coordinate of each

feature of the object of interest from the robot’s perspective, and 𝒔∗ is the desired position

coordinate of each feature [48]. The concept is illustrated with the use of diagrams. The diagrams

in Figure 2.9 show (a), the initial state of a square shaped object with reference to the robot frame,

and (b) the desired state. The goal of PBVS is to move the camera or object in a manner that the

position coordinates of the features at the initial position, shown as the yellow dots of Figure 2.9(a),

move towards the desired position, shown as the red dots in Figure 2.9(b).


Figure 2.9: Demonstration of PBVS: (a) Initial coordinates of features, marked as yellow dots, and

(b) Desired coordinates of features, marked as red dots.

Figure 2.10: Controlling position coordinates to move to desired location with PBVS.


PBVS will then move the camera, and/or object, in a way so that the position coordinates of the

features, 𝒔, align with those in the desired state, 𝒔∗, as demonstrated in Figure 2.10. However,

PBVS can work without the features necessarily being in the field of view, with trajectory planning

as performed by Thuilot et al. [51].

In order to move the micropipettes to a targeted position, i.e. the centroid of the blastomere of

interest, 𝐶𝑇, visual servoing is required. In this case, 𝒔∗ = 𝐶𝑇, as 𝐶𝑇 is the desired position of the

micropipette. Since the microscope provides only a single 2D image with a limited depth of focus, and since the 3D coordinates of the centroid must be determined, PBVS is the more suitable approach and is adopted in this research. The lack of PBVS used

as a vision-based control approach at the cellular scale also calls for an investigation into the

matter, to determine whether PBVS is a viable method to use with embryos to perform IVF tasks.
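A minimal PBVS iteration for this setting can be sketched as a proportional position controller: the error of (2.2) is the vector from the micropipette tip position to the target centroid 𝐶𝑇, and the tip is stepped against that error until it converges. The gain, tolerance, and coordinates below are illustrative assumptions, and the sketch ignores calibration and actuator dynamics.

```python
import math

def pbvs_servo(tip, target, gain=0.5, tol_um=0.1, max_iters=200):
    """Drive the tip position s toward the target s* with the proportional
    PBVS law e(t) = s - s*, command = -gain * e(t), until |e| < tol_um."""
    tip = list(tip)
    for _ in range(max_iters):
        if math.dist(tip, target) < tol_um:
            break
        error = [s - s_star for s, s_star in zip(tip, target)]
        tip = [s - gain * e for s, e in zip(tip, error)]
    return tuple(tip)

# Hypothetical coordinates (um): the tip starts some 50 um away from the
# computed centroid C_T; both positions are illustrative, not measured.
c_t = (120.0, 80.0, 45.0)
tip = pbvs_servo((100.0, 40.0, 20.0), c_t)
print(all(abs(a - b) < 0.1 for a, b in zip(tip, c_t)))  # True
```

With a gain of 0.5 the error halves each iteration, so convergence to within 0.1 μm of a target ~50 μm away takes about ten steps under these assumptions.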


Methodology

In this chapter, the methodology and design procedure for the software algorithms to obtain the

centroid of the blastomere of interest are presented. Section 3.1 introduces the

concept of z-stack images, and how they are obtained with the lab equipment (Section 3.1.1).

Section 3.2 details the image processing algorithms required for blastomere segmentation,

including initializing the region of interest (Section 3.2.1), creating a low-cost energy path (Section

3.2.2), and calculating the centroid of the blastomere (Section 3.2.3) [52]. The last section, Section

3.3 introduces the methodology used to calibrate the micropipette (Section 3.3.1), and visual servo

the pipette to the position of the target blastomere of interest (Section 3.3.2).

3.1 z-Stack Images

The methodology adopted will utilize the concept of z-stack images to calculate the centroid, 𝐶𝑇 =

(𝑥𝑇 , 𝑦𝑇 , 𝑧𝑇), in 3D Cartesian coordinates, where 𝐶𝑇 is the true centroid position of the blastomere

of interest (BOI), and 𝑥𝑇 , 𝑦𝑇, and 𝑧𝑇 are the true centroid coordinate components along the axes

x, y, and 𝑧 respectively, as shown in Figure 3.1. Z-stack images are a sequence of images captured

successively at equally spaced focal planes, as shown in Figure 3.2.

Figure 3.1: Diagram showing the axes of motion and the Cartesian reference frame. The motorized

stage moves along the xy-plane. The objective lens moves along the z-axis.


Z-stack images for experimental work in this research are acquired via an inverted brightfield

microscope. The z-stack image capturing experimental setup is shown in Figure 3.1. An embryo

is placed upon a motorized stage while submerged in embryo culture media. An objective lens of

a microscope is situated below the embryo. The motorized stage moves in the xy-plane, whereas

the objective lens moves along the z-axis. Utilizing the microscope system, Hoffman Modulation

Contrast (HMC) images, denoted as 𝐼1, 𝐼2, … , 𝐼𝑁, are captured successively at equally separated

focal planes with the objective lens. Individual images are called z-stack images, whereas the

collection of these z-stack images is called the image stack. Figure 3.2 illustrates the z-stack

images, and how they relate to the image stack, as well as spacing distance between each z-stack

image, 𝑑𝐼, and notation for the 𝑖th z-stack image of interest (IOI), 𝐼𝑖, which is outlined in red, later

used in Section 3.2.2. Figure 3.2 also includes the two z-stack images, 𝐼𝑖−1 and 𝐼𝑖+1, outlined in

blue, which are to be involved in the process of creating an image array, 𝐽, also used in Section

3.2.2.

Figure 3.2: Diagram of Image Stack and z-Stack Images. The z-stack image outlined in red is the

z-stack image of interest (IOI). The two z-stack images outlined in blue are involved in the process

to create the image array, 𝐽, which is further explained in Section 3.2.2 below.


The spacing between each focal plane, 𝑑𝐼, is found using Berek’s formula, shown as (2.1) [35]. Since every z-stack image, 𝐼, has a limited depth of field (DoF) range, the spacing between z-stack images must be such that no blastomere information is lost. Overlap between the DoFs of adjacent z-stack images is acceptable, but gaps between them remove necessary information. Using Berek’s formula to compute the DoF of the 20x objective lens used to acquire the z-stack images, 𝑑𝐼 is found to be approximately 5 μm, and thus the z-stack images are spaced 5 μm apart.

3.1.1 Obtaining z-Stack Images

Via a Nikon Ti-U inverted microscope, HMC z-stack images are taken of the embryo. The first

image is taken with the focal plane located slightly below the bottom of the embryo. Using the

microscope software that drives the x-y-z microscope stage (Prior Proscan III), the objective lens moves upwards along the z-axis, and with the Micromanager and ImageJ software, the camera successively captures an image every 5 μm, until it reaches 60 z-stack images, or 120 μm, which

is slightly larger than the diameter of a mouse embryo, typically 100 μm. This results in the last image being acquired slightly above the embryo, and ensures that data for the entire embryo is collected.

This z-stack image acquisition is shown in Figure 3.3(a). The images are collated into a TIF file, a file type that contains all of the images taken. Figure 3.3(b) shows a sample of the format of the TIF file. Once the TIF file of the image stack is acquired, it is exported into Matlab, which then initiates the blastomere image segmentation process, detailed in Section 3.2.

Figure 3.3: Z-stack images of embryo. (a) Embryo with z-stack images taken successively at

equally separated focal planes. (b) Individual z-stack images stacked to indicate which part of the blastomere is captured in each z-stack image; this also illustrates the format of the TIF file.
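The acquisition schedule above amounts to handing the stage controller a list of focal-plane z-positions. A minimal sketch follows, in Python rather than the Micromanager/ImageJ tooling actually used; the 10 μm starting offset below the embryo and the plane count are assumed, illustrative numbers.

```python
def zstack_positions(z_bottom_um, spacing_um, num_images):
    """Focal-plane z-coordinates for an image stack: num_images planes,
    spacing_um apart, starting slightly below the embryo at z_bottom_um."""
    return [z_bottom_um + i * spacing_um for i in range(num_images)]

# Illustrative numbers: 5 um spacing, an assumed 10 um starting offset below
# the embryo, and 25 planes spanning 120 um of travel.
planes = zstack_positions(z_bottom_um=-10.0, spacing_um=5.0, num_images=25)
print(len(planes), planes[0], planes[-1])  # 25 -10.0 110.0
```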


3.2 Blastomere Segmentation

A series of image processing steps are executed to calculate the centroid, 𝐶𝑇 = (𝑥𝑇 , 𝑦𝑇,𝑧𝑇), of the

BOI. The proposed blastomere segmentation algorithm is comprised of three main steps. First, for

the initialization step (Step 1), an approximation of the centroid of the BOI, 𝐶𝑎 = (𝑐𝑎𝑥, 𝑐𝑎𝑦), is

computed using a series of image processing algorithms. This is a prerequisite step for Step 2,

which requires an approximate centroid to create a region of interest (ROI). Step 2 utilizes this

ROI, converting the image to polar coordinates. Following a series of image processing steps, a

low-cost directed graph is generated to find the contour of the BOI. With the BOI segmented in the z-stack image 𝐼𝑖, the centroid coordinates are computed from this 2D image; the z-coordinate of the BOI is obtained from the z-coordinate at which the z-stack image was acquired.

These steps are then completed for all z-stack images of the image stack. Lastly, the 3D centroid

of the BOI is calculated from the centroids of the BOI in each 2D image, as C̄ = (x̄, ȳ, z̄). The

following sections, Section 3.2.1, Section 3.2.2, and Section 3.2.3, detail the Initialization, Low-

Cost Energy Path, and Blastomere Centroid Calculation image processing steps respectively.
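The final combination step, producing a single 3D centroid from the per-image centroids, can be sketched as follows. Since the averaging rule is not spelled out at this point in the text, the sketch assumes an area-weighted mean, which biases the result toward the slices where the blastomere is widest; this is an illustrative choice, not necessarily the exact rule used in the thesis.

```python
def combine_centroids(slices):
    """Combine per-slice results into one 3D centroid.
    Each slice is (x, y, z, area): the 2D centroid of the segmented BOI
    in that z-stack image, the z at which it was captured, and the
    segmented area used as the weight (an assumed weighting rule)."""
    total = sum(a for _, _, _, a in slices)
    x_bar = sum(x * a for x, _, _, a in slices) / total
    y_bar = sum(y * a for _, y, _, a in slices) / total
    z_bar = sum(z * a for _, _, z, a in slices) / total
    return x_bar, y_bar, z_bar

# Illustrative slices of a roughly spherical blastomere (um, px^2).
slices = [(50.0, 52.0, 10.0, 200.0),
          (51.0, 53.0, 15.0, 400.0),
          (50.0, 52.0, 20.0, 200.0)]
print(combine_centroids(slices))  # (50.5, 52.5, 15.0)
```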

3.2.1 Initialization (Step 1)

The first of the series of image processing steps is to initialize a region of interest (ROI) required

for the subsequent image processing steps. The low-cost energy path, as detailed in Section 3.2.2,

is an image processing procedure that segments the boundary of the BOI for a z-stack image. To

simplify the procedure, the area that contains this boundary is processed, rather than the entire

image. This area is also known as the ROI. Due to the approximately circular shape of the

blastomeres, the ROI bounds are also circular in shape, and should be centered such that they encompass the BOI boundary on both the inner and outer sides. This centered position is

approximated as 𝐶𝑎 = (𝑐𝑎𝑥, 𝑐𝑎𝑦), with the use of image processing procedures. The approximation

does not need to be exact. As long as the bounds of the ROI fully encompass the BOI boundary,

the low-cost energy path algorithm can perform the blastomere image segmentation.

To begin locating the centroid approximation, 𝐶𝑎 = (𝑐𝑎𝑥, 𝑐𝑎𝑦), initial image processing steps are

performed. To illustrate this process, z-stack images of a 2-cell stage embryo are used, selecting a z-stack image near the middle of the stack where the blastomere is at its largest and


clearest, labelled as 𝐼𝑖. First, a greyscale image of the embryo is used as an input to the image

processing software described in the following, as shown as Figure 3.4.

Figure 3.4: The original image of the embryo, selected from the middle of the image stack [15].

Next, a standard deviation filter is applied to the image, accentuating prominent lines of the image,

such as the blastomere boundaries. Other methods, such as the Canny and Sobel operators, were also tested. However, due to the complex nature of the inner sides of the blastomeres and their unclear contrast, they produce chaotic line segments within the blastomere, even after varying their respective thresholds, and these segments are difficult to remove with further image processing.

Hough transforms were also attempted to address this issue; however, this approach is found to work better for finding objects with either distinct straight lines or high degrees of circularity.

For these reasons, the standard deviation filter is adopted as an approach to accentuate the

necessary lines, such as the blastomere boundaries. The Matlab function, stdfilt, applies a local

3x3 standard deviation filter, which then accentuates the lines needed for further processing, as

shown as Figure 3.5.


Figure 3.5: Image with standard deviation filter applied.

The standard deviation filter outputs an array with intensities ranging from 0 to 255. To create a

segmented image, a binary mask is required. The image therefore is filtered with a binary threshold

filter, which converts each pixel intensity to either 0 or 1, depending on whether it is greater than a given

threshold value, 𝐹𝑡ℎ𝑟𝑒𝑠ℎ. In this case, 𝐹𝑡ℎ𝑟𝑒𝑠ℎ is chosen empirically such that the blastomeres are

not fragmented, and much of the image remains intact. Note that the value of 𝐹𝑡ℎ𝑟𝑒𝑠ℎ may vary

depending on parameters that change how the image stack is acquired, such as the exposure set by

the image capturing system. The Matlab function, imbinarize, then binarizes the standard

deviation filter image, with the threshold 𝐹𝑡ℎ𝑟𝑒𝑠ℎ, as shown as Figure 3.6.


Figure 3.6: The thresholded binarized image.
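The two steps above can be mimicked outside Matlab. The pure-Python sketch below approximates stdfilt’s 3x3 local standard deviation (using the population standard deviation and edge replication; Matlab’s stdfilt normalizes by N−1 and pads symmetrically, so border values differ slightly) followed by a fixed-threshold binarization. The toy image and the threshold of 50 are assumptions standing in for a real frame and the empirically chosen 𝐹𝑡ℎ𝑟𝑒𝑠ℎ.

```python
from statistics import pstdev

def stdfilt3(image):
    """Local 3x3 standard deviation filter over a 2D intensity list.
    Simplified stand-in for Matlab's stdfilt: borders are handled by
    replication and the population (N) standard deviation is used."""
    h, w = len(image), len(image[0])

    def clamp(v, hi):
        return max(0, min(v, hi))

    return [[pstdev([image[clamp(r + dr, h - 1)][clamp(c + dc, w - 1)]
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)])
             for c in range(w)]
            for r in range(h)]

def binarize(image, threshold):
    """Binary mask: 1 where intensity exceeds the threshold, else 0."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

# Toy frame with a bright vertical edge; the filter responds along the edge.
# The threshold of 50 is arbitrary, standing in for F_thresh.
img = [[0, 0, 200, 200]] * 4
mask = binarize(stdfilt3(img), threshold=50)
print(mask[1])  # [0, 1, 1, 0]
```

The filter fires only where the 3x3 neighbourhood spans the intensity step, which is exactly why it accentuates blastomere boundaries in the real images.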

To eliminate unwanted noise from the thresholded image, and to focus only on the blastomeres,

an area filter is then applied. Matlab’s area filter function, bwareafilt, is applied to keep the

largest area, the embryo, and to remove image noise, as shown in Figure 3.7.

Figure 3.7: Image with area filter applied.

The resulting area is then hole-filled with Matlab's imfill function. Hence a single blob containing the two blastomeres remains, as shown in Figure 3.8.

Figure 3.8: Image with area fill applied.

To further smooth the image, and to eliminate extraneous artifacts, a structuring element is created and applied. Using the Matlab strel function, a disk-shaped structuring element of size 3 is applied over the image, which smooths the blob, as shown in Figure 3.9. Erosion and dilation methods may also be used; however, they do not guarantee the removal of all unwanted anomalies.

Figure 3.9: Image smoothened by a structuring element, resulting in a blob containing the two

blastomeres.
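The threshold-then-clean sequence of Figures 3.6 through 3.9 (bwareafilt, imfill, strel) can be sketched with SciPy's ndimage module; the function name clean_mask and the toy mask are illustrative, not the thesis code.

```python
import numpy as np
from scipy import ndimage

def clean_mask(bw, disk_radius=3):
    """Keep the largest connected component (cf. bwareafilt), fill its
    holes (cf. imfill), then smooth with a disk-shaped closing (cf. strel)."""
    labels, n = ndimage.label(bw)
    if n == 0:
        return np.zeros_like(bw, dtype=bool)
    sizes = ndimage.sum(bw, labels, index=range(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)
    filled = ndimage.binary_fill_holes(largest)
    yy, xx = np.mgrid[-disk_radius:disk_radius + 1, -disk_radius:disk_radius + 1]
    disk = (xx ** 2 + yy ** 2) <= disk_radius ** 2
    return ndimage.binary_closing(filled, structure=disk)

# Toy mask: a hollow square blob plus a 1-pixel noise speck.
bw = np.zeros((20, 20), dtype=bool)
bw[4:14, 4:14] = True
bw[7:11, 7:11] = False        # hole to be filled
bw[17, 17] = True             # noise to be removed
out = clean_mask(bw)
print(bool(out[8, 8]), bool(out[17, 17]))  # True False
```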

The structuring element then leaves only the two blastomeres of the embryo. However, to separate the two blastomeres, further image processing is required. Note that 2-cell stage embryos suspended

in a stable orientation have blastomeres that lie on a plane parallel to the camera focal plane [36], [37].

4-cell stage embryos have blastomeres oriented in a tetrahedral pattern, with two blastomeres on a

plane on the top, and two blastomeres on the plane below, as shown in Figure 2.3 [38].

Occasionally, the embryo will be oriented such that three blastomeres lie on the bottom plane, and

one lies on the top, forming a pyramid shape, however this scenario is unlikely [36], [37].

Knowing this orientation of the blastomeres within the embryo, image processing algorithms can be applied to split the blob into the two blastomeres, and then calculate the resulting approximate centroids, 𝐶𝑎 = (𝑐𝑎𝑥, 𝑐𝑎𝑦).

There exist a few methods for acquiring approximate centroids for the ROI. One method, performed by Giusti et al., involves the use of distance transforms [53]. A distance transform calculates the distance from every pixel with intensity 1 (white pixels) to the closest pixel with intensity 0 (black pixels), and then defines a new image with pixel intensities based on that distance. This leads to images with local maxima, as these portray the positions furthest from the boundary. Giusti et al. were able to use the distance transform on images of human zygotes to approximate their centroids [53]. However, this was performed for single-celled zygotes. The method was investigated experimentally using 2-cell stage embryos, and it was found to produce inaccuracies due to the presence of several blastomeres in the image. The method may be ideal for approximating singular circular shapes, such as zygotes, but not for multiple blastomeres. Note that there are several approaches to obtain the centroid approximation, and the following method is only one of them. If a viable centroid approximation is obtained, using any method, the following steps, as detailed in Section 3.2.2 and Section 3.2.3, may proceed. The proposed method is to first separate the two blastomeres in the image, and then calculate the centroids of the two blastomeres.
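A minimal sketch of the distance-transform idea discussed above, assuming SciPy is available; dt_peak is a hypothetical helper, and the disk stands in for a single-celled zygote.

```python
import numpy as np
from scipy import ndimage

def dt_peak(mask):
    """Centroid approximation via the distance transform (cf. Giusti et
    al. [53]): the foreground pixel farthest from the background."""
    d = ndimage.distance_transform_edt(mask)
    return np.unravel_index(int(np.argmax(d)), d.shape)

# For a single disk the peak sits at the centre; a two-lobed mask would
# instead produce two competing local maxima, hence the inaccuracy noted
# for 2-cell embryos.
yy, xx = np.mgrid[0:41, 0:41]
disk = (yy - 20) ** 2 + (xx - 20) ** 2 <= 15 ** 2
print(dt_peak(disk))  # at (or adjacent to) the centre (20, 20)
```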

The first step is to separate the blob into the two blastomeres. This must be performed at the line at which the two blastomeres come in contact. The idea is to create a line that divides the two blastomeres along the shortest width of the blob. To do so, first the centroid of the blob, 𝐶𝑏𝑙𝑜𝑏, from Figure 3.9, is calculated using Matlab's regionprops function, as shown in Figure 3.10(b). Next, the shortest distance from 𝐶𝑏𝑙𝑜𝑏 to the edge is calculated, and a straight line is drawn to the closest edge, as shown as the pink line in Figure 3.10(c). A 1-pixel-width, 0-intensity line is then drawn from the edge, through 𝐶𝑏𝑙𝑜𝑏, until it reaches another edge. This marks the shortest width of

the blob, and should be the ideal place to separate the blastomeres in two. The line is then multiplied with the image to remove the part of the blob at the location of the line, as shown in Figure 3.10(d). The resulting image then shows two separate blobs, which are the separated blastomeres. As before, another regionprops operation is performed to obtain the centroids of the two blastomeres, 𝐶𝑎 = (𝑐𝑎𝑥, 𝑐𝑎𝑦), and label them as 𝐶𝑎1 = (𝑐𝑎𝑥1, 𝑐𝑎𝑦1) and 𝐶𝑎2 = (𝑐𝑎𝑥2, 𝑐𝑎𝑦2), respectively. The example followed in this methodology will observe the second blastomere, labeled with centroid approximation 𝐶𝑎2, as the BOI.

Figure 3.10: Image processing algorithms to acquire approximate centroids. (a) Blob acquired

from previous step Figure 3.9. (b) Calculated centroid of blob represented as a blue *. (c) Line

from blob centroid to closest edge. (d) Segmented blastomeres from line cut. (e) Centroids of

respective blastomeres, with the centroids represented as a blue *.
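A rough Python stand-in for the shortest-width cut (the helper split_blob is hypothetical; the thesis uses Matlab's regionprops and an image multiplication instead): a 1-pixel zero line is marched from the centroid toward the nearest boundary point and out the other side.

```python
import numpy as np
from scipy import ndimage

def split_blob(bw):
    """Cut a two-lobed binary blob with a 1-pixel, zero-intensity line
    through the blob centroid toward the nearest boundary point and out
    the other side, sketching the shortest-width cut."""
    cy, cx = ndimage.center_of_mass(bw)
    boundary = bw & ~ndimage.binary_erosion(bw)
    by, bx = np.nonzero(boundary)
    k = np.argmin((by - cy) ** 2 + (bx - cx) ** 2)   # closest edge point
    dy, dx = by[k] - cy, bx[k] - cx
    norm = np.hypot(dy, dx)
    dy, dx = dy / norm, dx / norm
    cut = bw.copy()
    for sign in (1, -1):                             # march both directions
        t = 0.0
        while True:
            r = int(round(cy + sign * dy * t))
            c = int(round(cx + sign * dx * t))
            if not (0 <= r < bw.shape[0] and 0 <= c < bw.shape[1]) or not bw[r, c]:
                break
            cut[r, c] = False
            t += 0.5
    return cut

# Two overlapping disks: the cut separates them into two components.
yy, xx = np.mgrid[0:31, 0:41]
blob = ((yy - 15)**2 + (xx - 12)**2 <= 100) | ((yy - 15)**2 + (xx - 28)**2 <= 100)
halves, n = ndimage.label(split_blob(blob))
print(n)  # 2
```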

3.2.2 Low-Cost Energy Path (Step 2)

The second of the series of image processing steps is to segment the blastomere z-stack IOI, 𝐼𝑖. To do so, given the centroid approximation, 𝐶𝑎 = (𝑐𝑎𝑥, 𝑐𝑎𝑦), from Section 3.2.1, an ROI is created that encompasses the boundaries of the blastomere at 𝐼𝑖. To create the ROI, a circular-shaped corona, centered at the centroid approximation, 𝐶𝑎, is formed. The size must be chosen such that the corona fully encompasses the blastomere boundary on both the inside and outside. The inner ring of the corona, 𝜌′, must be smaller than the boundary, and the outer ring, 𝜌′′, must be larger. The size of these rings must account for the blastomere's shape and the accuracy of the centroid approximation. For the purposes of this example, the radii of these rings are set to 𝜌′ = 50 pixels and 𝜌′′ = 100 pixels, respectively, as can be seen in Figure 3.11.

Figure 3.11: z-Stack image with ROI around BOI.

Instead of utilizing solely the z-stack IOI, 𝐼𝑖, three z-stack images from the image stack are used for the low-cost energy path algorithm: the z-stack IOI (𝐼𝑖), one z-stack image above (𝐼𝑖+1), and one z-stack image below (𝐼𝑖−1). This accounts for z-direction gradient changes in the dataset while determining the low-cost energy path. More z-stack images from the image stack may be used for the algorithm, but at the tradeoff of a linear increase in computation time, which is further explained below. The resulting ROI is then converted from Cartesian coordinates to polar coordinates through bilinear interpolation using (3.1), as shown in Figure 3.12.

𝐽(𝜃, 𝜌, 𝑖) = 𝐼(𝑐𝑎𝑥 + 𝜌 ∙ cos(𝜃), 𝑐𝑎𝑦 + 𝜌 ∙ sin(𝜃), 𝑖) (3.1)

0 ≤ 𝜃 ≤ 2𝜋, 𝜌′ ≤ 𝜌 ≤ 𝜌′′

Figure 3.12: ROI displayed in polar coordinates at the z-stack IOI, 𝐽𝑖.
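The polar resampling of (3.1) can be sketched in NumPy with hand-rolled bilinear interpolation; to_polar and its default sampling counts are illustrative choices, not the thesis implementation.

```python
import numpy as np

def to_polar(img, cax, cay, rho_in, rho_out, n_theta=180, n_rho=40):
    """Corona resampled into polar coordinates with bilinear interpolation,
    per (3.1): J(theta, rho) = I(cax + rho*cos(theta), cay + rho*sin(theta))."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rhos = np.linspace(rho_in, rho_out, n_rho)
    xs = cax + rhos[None, :] * np.cos(thetas[:, None])
    ys = cay + rhos[None, :] * np.sin(thetas[:, None])
    x0, y0 = np.floor(xs).astype(int), np.floor(ys).astype(int)
    fx, fy = xs - x0, ys - y0
    # Blend the four neighbouring pixels (assumes the corona stays inside).
    return (img[y0, x0] * (1 - fx) * (1 - fy) + img[y0, x0 + 1] * fx * (1 - fy)
            + img[y0 + 1, x0] * (1 - fx) * fy + img[y0 + 1, x0 + 1] * fx * fy)

# On a radial-ramp image centred at (50, 50), each polar column should be
# (nearly) the constant radius it samples.
yy, xx = np.mgrid[0:101, 0:101].astype(float)
img = np.hypot(xx - 50.0, yy - 50.0)
J = to_polar(img, 50, 50, rho_in=10, rho_out=30)
print(J.shape, np.allclose(J[:, 0], 10.0, atol=0.1))  # (180, 40) True
```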

𝐽 is the 𝜃𝑛 × 𝜌𝑛 × 3 image array in cylindrical polar coordinates. 𝜃 and 𝜌 are uniformly sampled in 𝜃𝑛 and 𝜌𝑛 intervals, respectively. The smaller the values chosen for 𝜃𝑛 and 𝜌𝑛, the less computationally expensive the image processing procedure will be; conversely, the larger the values, the more computationally expensive the procedure. The data format of the 𝐽 image array is shown in Figure 3.13. 𝐽𝑖 is the image array at index 𝑖, with 𝑖 being the index of the image array, analogous to how 𝑖 represents the index of the IOI. Since three images are used, 𝐼𝑖−1, 𝐼𝑖, and 𝐼𝑖+1 are converted, with bilinear interpolation, into 𝐽𝑖−1, 𝐽𝑖, and 𝐽𝑖+1, respectively.

Figure 3.13: Format of image array, 𝐽.

In order to calculate the graph-based, low-cost energy path, image array 𝐽 is processed to acquire

energy values for its respective pixels. The energy value at every pixel is defined by Giusti et al.

as (3.2) and (3.3) [53].

𝐸(𝜃, 𝜌) = 𝑃(cos(𝜃 − 𝛼) ∙ 𝐺𝜌(𝐽) + sin²(𝜃 − 𝛼) ∙ |𝐺𝜌(𝐽)|) (3.2)

𝑃(𝑥) = (1 + 𝑒^(𝑥/𝐾))^(−1) (3.3)

The variable 𝐺𝜌 represents the gradient operator along the radial direction, 𝜌. The sigmoid function, 𝑃(𝑥), is applied to map all values of the energy array 𝐸(𝜃, 𝜌) into the range [0, 1]. The scaling factor, 𝐾, is set to 1/5 of the image array dynamic range. 𝛼 is the direction of the lighting resulting from the use of HMC imaging. Processing image array 𝐽 with (3.2) results in the energy array 𝐸(𝜃, 𝜌). The 3D adaptation of (3.2) includes the z-stack images surrounding the IOI, incorporating the 𝐽 array with parameters 𝜃, 𝜌, and 𝑖, as shown in (3.4). The left component dominates where the contour is orthogonal to the

light direction, and the right component accounts for the unpredictability of the contour appearance when the contour is parallel to the light direction [53].

𝐸(𝜃, 𝜌, 𝑖) = 𝑃(cos(𝜃 − 𝛼) ∙ 𝐺𝜌(𝐽) + sin²(𝜃 − 𝛼) ∙ |𝐺𝜌(𝐽)|) (3.4)

An example of the energy array at the z-stack image of interest, 𝐸𝑘, is shown in Figure 3.14.

Figure 3.14: Energy array at the z-stack image of interest, 𝐸𝑘.
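A sketch of (3.2)-(3.4) in NumPy, assuming np.gradient as the radial gradient operator 𝐺𝜌 and the stated 𝐾 = dynamic range / 5; energy_array is a hypothetical name.

```python
import numpy as np

def energy_array(J, alpha):
    """Energy of (3.2)/(3.4): a sigmoid P(x) = 1/(1 + exp(x/K)) of the
    radial gradient, mixed by the angle to the HMC lighting direction."""
    thetas = np.linspace(0.0, 2.0 * np.pi, J.shape[0], endpoint=False)
    G = np.gradient(J.astype(float), axis=1)      # radial gradient G_rho
    K = (J.max() - J.min()) / 5.0                 # 1/5 of the dynamic range
    w = thetas.reshape((-1,) + (1,) * (J.ndim - 1)) - alpha
    x = np.cos(w) * G + np.sin(w) ** 2 * np.abs(G)
    return 1.0 / (1.0 + np.exp(x / K))

# A corona whose intensity rises along rho: energies stay within (0, 1),
# and at theta = alpha the positive radial gradient yields energy below 0.5.
Jdemo = np.tile(np.linspace(0.0, 100.0, 20), (36, 1))
E = energy_array(Jdemo, alpha=0.0)
print(0.0 < E.min(), E.max() < 1.0, E[0, 10] < 0.5)  # True True True
```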

After the energy array is created, a directed graph structure is constructed, with each pixel represented as a node, in order to generate a low-cost energy path. A directed graph is a set of vertices, or nodes, that are connected to each other, where these connections have a direction [54]. A directed edge, also known as an arc, is an ordered pair, labelled as (𝑏, 𝑎) for example, where 𝑎 is the node it starts at, and 𝑏 is the node it is directed towards. A value is associated with each arc, which in this case will be the pixel values of the energy array, 𝐸. Nodes may have as many arcs directed towards and away from them as needed. The locations of the nodes do not matter; what matters is which nodes connect to one another. Figure 3.15 shows a basic graph, where node 𝑎 has three directed paths to nodes 𝑏, 𝑐, and 𝑑, with their respective energy values being 𝐸𝑏, 𝐸𝑐, and 𝐸𝑑.

(𝑏, 𝑎) 𝐸𝑏    (𝑐, 𝑎) 𝐸𝑐    (𝑑, 𝑎) 𝐸𝑑

Figure 3.15: Basic graph structure example.

Paths are routes taken to travel from one node to another, even if they are not connected directly. The path depends entirely on the problem being solved. For example, the Travelling Salesman Problem is posed on a complete graph, where all nodes are connected to each other with different weights, and the goal is to start at one node and visit all other nodes over the shortest total distance possible, forming a Hamilton circuit. This problem is used to simulate path planning algorithms, such as those used by global positioning systems (GPS). There are many algorithms that can solve the Travelling Salesman Problem, but the main issue is the complexity of the graph. A brute force algorithm will always find the shortest path, but takes the longest time, since it needs to analyze the energy values between all nodes to determine the shortest path. A more directed algorithm, such as the nearest-neighbour algorithm, does not need to analyze the energy values for all the nodes, but rather only the arcs with the smallest energy values. Depending on the scenario, multiple algorithms may be used to solve the same problem.

Since the graph in this methodology is not complete, with all nodes connected to each other, other algorithms must be employed. The brute force approach may be used, but would require far too much computation, especially with an increase in graph complexity. The algorithm implemented in this methodology is the low-cost path algorithm. The low-cost path algorithm starts from the node with the lowest energy value in the rightmost column and observes all the nodes connecting to it. Of those connecting nodes, it selects the node that has the

smallest energy value. The journey continues until it encounters the leftmost node. The path taken for this journey is stored. An example of this graph and algorithm is illustrated in Figure 3.16. The nodes 𝑎 − 𝑔 are connected to each other and have energy values as shown in the figure and the arcs beside it. The node with the lowest energy value in the righthand column is 𝑒, which has an energy value of 0.1. Of the nodes connected to it, 𝑏 and 𝑐, the node with the smallest energy value is 𝑐, with a value of 0.3. Even though 𝑑 has a smaller value of 0.2 within the same column, it is not connected to node 𝑒. Lastly, the only node connected to 𝑐 is 𝑎, which has an energy value of 0.4. The low-cost path generated is then as follows: 𝑒 → 𝑐, and then 𝑐 → 𝑎, as outlined in orange in Figure 3.16.

(𝑏, 𝑎) 0.5    (𝒄, 𝒂) 0.3    (𝑑, 𝑎) 0.2    (𝑒, 𝑏) 0.1    (𝑓, 𝑏) 0.2    (𝒆, 𝒄) 0.1    (𝑓, 𝑐) 0.2    (𝑔, 𝑐) 0.4    (𝑓, 𝑑) 0.2    (𝑔, 𝑑) 0.4

Figure 3.16: Basic graph structure path example.
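The greedy walk of this example can be reproduced in a few lines of Python; the node energies and connectivity are read off Figure 3.16 as described in the text.

```python
# Node energies and backward (previous-column) connectivity of Figure 3.16.
node_E = {'a': 0.4, 'b': 0.5, 'c': 0.3, 'd': 0.2, 'e': 0.1, 'f': 0.2, 'g': 0.4}
back = {'e': ['b', 'c'], 'f': ['b', 'c', 'd'], 'g': ['c', 'd'],
        'b': ['a'], 'c': ['a'], 'd': ['a'], 'a': []}

def greedy_path(start):
    """Greedy low-cost walk: repeatedly step to the connected node in the
    previous column with the smallest energy, until none remain."""
    path = [start]
    while back[path[-1]]:
        path.append(min(back[path[-1]], key=node_E.get))
    return path

start = min(['e', 'f', 'g'], key=node_E.get)   # lowest energy, right column
print(greedy_path(start))  # ['e', 'c', 'a']
```

The printed route e → c → a matches the orange path described in the text.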

For the methodology, from the energy array 𝐸, a directed graph structure is constructed, with each pixel represented as a node, to generate a low-cost energy path. In this graph, each node connects to its forward-facing 26-neighbour pixels, up to a maximum of nine. The arcs are forward-facing due to the nature of the path: owing to the round shape of the blastomeres, segmentation occurs sequentially, left to right, in the polar coordinate frame of reference. This way, the path always travels in one direction and does not stagnate during its search. Also, the reason why

only the closest neighbouring nodes are connected to each other is twofold. First, connecting more nodes adds to the complexity of the graph. Second, the blastomere does not vary enough in shape to justify connecting nodes to more than their closest neighbouring nodes. Figure 3.17 shows how the nodes connect to one another on one z-stack image, 𝐸𝑘.

Figure 3.17: Sample of 2D graph structure of the energy z-stack image, 𝐸𝑘.

For this sample, the node represented by the bright red square at (𝐸𝑖, 𝐸𝑗, 𝐸𝑘) is taken as an example. On this plane, the node connects to its forward-facing 8-neighbouring nodes, represented by the green squares. Note that for the 2D example, this means that the bright red node only connects to three other nodes. This is true for all nodes placed within the inner sides of the columns; that is, nodes within 𝜌2 and 𝜌𝑛−1. The outer sides of the columns, i.e. 𝜌1 and 𝜌𝑛, each have two forward-facing nodes. To expand on the graph structure, a 3D graph structure is made with the introduction of the z-stack images surrounding the IOI, 𝐸𝑘−1 and 𝐸𝑘+1, as shown in Figure 3.18.

Figure 3.18: Sample of 3D graph structure, 𝐸(𝜃𝑛, 𝜌𝑛, 𝑚).

For this sample, the node represented by the bright red square at (𝐸𝑖, 𝐸𝑗, 𝐸𝑘) is taken as an example. For this graph structure, the node connects to its forward-facing 26-neighbouring nodes, represented by the green squares. For the 3D example, this means that the bright red node connects not only to the three nodes in front of it (𝐸𝑖+1, 𝐸𝑗−1 − 𝐸𝑗+1, 𝐸𝑘), but also to the three nodes above (𝐸𝑖+1, 𝐸𝑗−1 − 𝐸𝑗+1, 𝐸𝑘+1) and below (𝐸𝑖+1, 𝐸𝑗−1 − 𝐸𝑗+1, 𝐸𝑘−1) it as well, for a total of nine connections. This is true for all nodes placed within the inner sides of the columns on 𝐸𝑘; that is, nodes within 𝜌2 and 𝜌𝑛−1. The outer sides of the columns on 𝐸𝑘, i.e. 𝜌1 and 𝜌𝑛, each have six forward-facing nodes. The nodes placed within the inner sides of the columns of 𝐸𝑘−1 and 𝐸𝑘+1, nodes within 𝜌2 and 𝜌𝑛−1, also each have six forward-facing nodes. And the nodes placed on the outer sides of the columns of 𝐸𝑘−1 and 𝐸𝑘+1, nodes 𝜌1 and 𝜌𝑛, each have four forward-facing nodes. To illustrate this complex structure, Figure 3.19 highlights these four components in multiple colours.

Figure 3.19: Components of 3D Graph Structure Complexity.

The illustration provides a general case where 𝑚 z-stack images are used, rather than three. The reason three is selected is the complex nature of the 3D graph structure. The equation for calculating the number of permutations possible, 𝑃𝑚, for a graph structure of size 𝜃𝑛 × 𝜌𝑛 × 𝑚 is shown below as (3.5), as well as in Figure 3.19.

𝑃𝑚 = (𝜌𝑛 − 2)(9)(𝜃𝑛 − 1)(𝑚 − 2) + (2)(6)(𝜃𝑛 − 1)(𝑚 − 2) + (2)(𝜌𝑛 − 2)(6)(𝜃𝑛 − 1) + (2)(2)(4)(𝜃𝑛 − 1) (3.5)

The more z-stack images, 𝑚, that exist within the energy array, the higher the complexity; the complexity increases linearly with 𝑚. Using equation (3.5), 𝑃𝑚 is found to be 206,164 for 𝑚 = 3; for the 2D case of 𝑚 = 1, 𝑃𝑚 is 29,452. Figure 3.20 shows 𝑃𝑚 for various values of 𝑚. If more than three z-stack images are used, then they must be chosen symmetrically about 𝐸𝑘 as 𝐸𝑘±𝑛, where 𝑛 ∈ ℕ+, which is also why 𝑚 is always an odd number.

Figure 3.20: Graph showing number of permutations, 𝑃𝑚, vs. number of z-stack images within

the graph for the energy matrix, 𝐸.
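Equation (3.5) can be checked numerically; permutations is a hypothetical helper that reproduces the reported value for the 200 × 50 × 3 example.

```python
def permutations(theta_n, rho_n, m):
    """Arc count of the 3D graph per (3.5), term by term: 9 arcs from
    interior nodes of the m-2 interior planes, 6 from their edge rows,
    6 from interior nodes of the two outer planes, 4 from their edge rows."""
    return ((rho_n - 2) * 9 * (theta_n - 1) * (m - 2)
            + 2 * 6 * (theta_n - 1) * (m - 2)
            + 2 * (rho_n - 2) * 6 * (theta_n - 1)
            + 2 * 2 * 4 * (theta_n - 1))

print(permutations(200, 50, 3))  # 206164, the value reported for m = 3
```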

The most computationally expensive part of the overall methodology is creating the 3D graph structure of the energy array. In Matlab, to create a graph structure, a sparse matrix is used. Here, the sparse matrix is a square adjacency matrix that keeps track of all nodes and each of their connections. For example, an energy array of size 200 × 50 × 3 can yield a sparse matrix of 29253 × 29253, with over 855 × 10⁶ array indices. This sparse matrix is shown in Figure 3.21. Due to this large array, the majority of the computation time is spent creating the sparse matrix for every z-stack image 𝐼𝑖 of the image stack.

Figure 3.21: Sparse Matrix where 𝑛 = 1, or 𝑚 = 3.

After generating the sparse matrix, a path is computed using the graphshortestpath function from Matlab, which is taken as the 3D boundary of the BOI at 𝐼𝑖, denoted as 𝛤𝑖. A projection of the 3D path onto the xy-plane is illustrated in Figure 3.22. The path is then converted back from polar coordinates into Cartesian coordinates, in order to complete the desired boundary for the z-stack image, using a bilinear interpolation equation analogous to (3.1). The centroid of the resulting shape in the 𝑖th z-stack image is then calculated in Matlab as 𝐶𝑖 = (𝑐𝑖𝑥, 𝑐𝑖𝑦), as shown in Figure 3.23. This step is then computed for all 𝑁 z-stack images of the image stack. Since the blastomere is stationary during this automated step, the same centroid approximation, 𝐶𝑎 (from Section 3.2.1), can be used to initialize this step. The radii 𝜌′ and 𝜌′′ change depending on which z-stack image of the image stack is processed.
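Because the graph is forward-facing, the sparse-matrix shortest path can be emulated with simple dynamic programming; this 2D NumPy sketch (lowcost_path_2d, a stand-in for the sparse matrix plus graphshortestpath, with 3 forward neighbours per node) recovers a low-energy valley.

```python
import numpy as np

def lowcost_path_2d(E):
    """Forward-facing low-cost path by dynamic programming: each node links
    to the 3 nodes (rho-1, rho, rho+1) in the next theta column."""
    n_theta, n_rho = E.shape
    cost = np.full(E.shape, np.inf)
    prev = np.zeros(E.shape, dtype=int)
    cost[0] = E[0]
    for t in range(1, n_theta):
        for r in range(n_rho):
            lo, hi = max(r - 1, 0), min(r + 1, n_rho - 1)
            best = lo + int(np.argmin(cost[t - 1, lo:hi + 1]))
            cost[t, r] = cost[t - 1, best] + E[t, r]
            prev[t, r] = best
    r = int(np.argmin(cost[-1]))          # cheapest endpoint, then backtrack
    path = [r]
    for t in range(n_theta - 1, 0, -1):
        r = int(prev[t, r])
        path.append(r)
    return path[::-1]

# A zero-energy valley with a one-step jog: the recovered path follows it.
Egrid = np.ones((6, 5))
for t, r in enumerate([2, 2, 3, 3, 2, 2]):
    Egrid[t, r] = 0.0
print(lowcost_path_2d(Egrid))  # [2, 2, 3, 3, 2, 2]
```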

Figure 3.22: Low-cost energy path. (a) Path 𝛤𝑖 at 𝐸𝑖−1. (b) Path 𝛤𝑖 at 𝐸𝑖. (c) Path 𝛤𝑖 at 𝐸𝑖+1. (d)

Path 𝛤𝑖 projected onto the xy-plane, 𝛾𝑖.

Figure 3.23: Computed path, 𝛾𝑖, represented by the red line, and centroid, 𝐶𝑖, represented by a red *, of the z-stack IOI, 𝐼𝑖.

3.2.3 Blastomere Centroid Calculation (Step 3)

With the centroids of all resulting z-stack images calculated, the three-dimensional centroid of the entire blastomere, 𝐶̅ = (𝑥̅, 𝑦̅, 𝑧̅), can be calculated. This is completed with an averaging process, area-weighted along z, with each dimension shown in the following equations, (3.6), (3.7), and (3.8).

𝑥̅ = (1/𝑁) ∑ᵢ₌₁ᴺ 𝑐𝑖𝑥 (3.6)

𝑦̅ = (1/𝑁) ∑ᵢ₌₁ᴺ 𝑐𝑖𝑦 (3.7)

𝑧̅ = [∑ᵢ₌₁ᴺ (𝑧𝑖 ⋅ 𝐴𝑖)] / [∑ᵢ₌₁ᴺ 𝐴𝑖] (3.8)

where 𝑖 is the index, 𝑧𝑖 is the z-coordinate at the z-stack image 𝐼𝑖, and 𝐴𝑖 is the area contained within the boundary of path 𝛾𝑖 of the segmented blastomere at 𝐼𝑖.
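Equations (3.6)-(3.8) in a few lines of NumPy; blastomere_centroid is a hypothetical helper, and the slice values are made up for illustration.

```python
import numpy as np

def blastomere_centroid(cx, cy, z, A):
    """Combine per-slice centroids into the 3D blastomere centroid:
    x and y as plain means (3.6)-(3.7), z as the area-weighted mean (3.8)."""
    cx, cy, z, A = map(np.asarray, (cx, cy, z, A))
    return float(cx.mean()), float(cy.mean()), float((z * A).sum() / A.sum())

# Three slices; the large middle slice pulls z-bar toward its plane.
xbar, ybar, zbar = blastomere_centroid([10, 12, 14], [20, 20, 20],
                                       [0.0, 5.0, 10.0], [100, 400, 100])
print(xbar, ybar, zbar)  # 12.0 20.0 5.0
```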

The resulting centroid, 𝐶̅ = (𝑥̅, 𝑦̅, 𝑧̅), is the calculated centroid of the blastomere. Figure 3.24 shows a diagram of the z-stack images and their centroids, 𝐶𝑖, and areas, 𝐴𝑖, as well as the centroid of the blastomere, 𝐶̅. With this, successive blastomere-related tasks can be performed, such as blastomere aspiration with a micropipette [55], and optimization of the laser zona drilling ablation zone [56]. The entire image processing procedure of Section 3.2 is illustrated as a flowchart in Figure 3.25.

Figure 3.24: Diagram of the z-stack image centroids, 𝐶𝑖, area 𝐴𝑖, and computed blastomere

centroid, 𝐶̅.

Figure 3.25: Flowchart of Blastomere Segmentation Algorithm. The orange section represents the

manual operations required to begin the automated task, whereas the blue sections represent the

automated tasks. Statements in green represent the output for its respective step.

3.3 Visual Servoing

Once the centroid of the blastomere of interest (BOI), 𝐶̅ = (𝑥̅, 𝑦̅, 𝑧̅), is computed, the next step is to move a micropipette tip to this target position. This procedure is performed with visual servoing, more specifically with position-based visual servo (PBVS), since 3D coordinate data is utilized rather than image data. Depending on the goal, different micropipette shapes are used for different tasks. For example, for blastomere aspiration using a displacement method, a micropipette with a small outer diameter is used to push fluid from within the embryo and aspirate one or more blastomeres [14]. A holding pipette has an outer diameter large enough to immobilize an embryo during a procedure. A partial zona dissection (PZD) micropipette is used for dissecting the embryo, or for perforating the zona pellucida [57].

In order to move the micropipette to the calculated blastomere centroid, 𝐶̅, visual servoing, specifically PBVS, is performed. The visual servoing is discussed in the following two sections. Section 3.3.1 describes the image processing procedure for micropipette calibration. Section 3.3.2 describes the micropipette control used to move the micropipette to the target position, 𝐶̅, using PBVS.

The experimental setup is as follows, as shown in Figure 3.26. An embryo is submerged in a small layer of culture media inside a petri dish. The petri dish is placed on the brightfield microscope motorized stage, which has 2 degrees of freedom (DOF) along the xy Cartesian plane. The camera is positioned below the microscope objective lens. Note that when the objective lens moves vertically in the z-axis direction, the focal plane moves accordingly. Two micromanipulators are situated on either side of the embryo. On one micromanipulator, with the aid of a vacuum pump, a holding pipette is positioned such that it holds the embryo with negative pressure, to keep the embryo from moving during an experimental procedure. On the other micromanipulator, a solid sharp-tip PZD micropipette is held, angled towards the embryo. Both micromanipulators each have 3 DOF, allowing motion in the x, y, and z Cartesian axes. The stage, both micromanipulators, and the camera are connected to a computer and are software controlled.

Figure 3.26: Schematic of the experimental setup.

3.3.1 Micropipette Calibration

Once the experimental equipment is set up, the first step is to calibrate the PZD micropipette position in Cartesian space. The micropipette must be within the image workspace for proper calibration (Figure 3.27(a)). To calibrate, first the micropipette is segmented with image processing, similar to the work of Wong and Mills [55]. First, the Canny edge detection method is used with the Matlab edge function, as shown in Figure 3.27(b). Due to the simple, straight shape of the micropipette, the Canny edge detection algorithm readily produces well-defined lines. Since the micropipette extends outside the borders of the image, the image cannot be closed, and hence cannot be filled. To address this problem, the borders are considered to be edges, and then are filled with the Matlab imfill function [55], as shown in Figure 3.27(c). Then, since only the outline of the micropipette is required, the artificial edge on the border is removed and the Matlab bwperim function is used to find this outline alone. A set of points is then extracted, labeled 𝑠𝑜𝑟𝑑𝑒𝑟𝑒𝑑(𝑖), 𝑖 ∈ [1, 𝑁], where 𝑁 is the total number of points sequentially ordered from start to end, and 𝑖 is the index [55], as shown in Figure 3.27(d). The angle of the micropipette, 𝛼, is used to find the position of the tip. A line running through the median of the micropipette will encounter a point on the micropipette; the point at which the encounter occurs is the tip. An initial estimate of the angle, 𝛼𝑖𝑛𝑖𝑡, is made by averaging the slopes of the micropipette walls, 𝛼𝑖𝑛𝑖𝑡,𝐿 and 𝛼𝑖𝑛𝑖𝑡,𝑅, derived by (3.9 – 3.11). 𝜀 is the number of points used for estimating the angle, chosen such that it does

not include the tip, and varies depending on the micropipette type. The left and right sides of the micropipette are named relative to which side of the tip they sit on.

𝛼𝑖𝑛𝑖𝑡,𝐿 = tan⁻¹[(𝑠𝑜𝑟𝑑𝑒𝑟𝑒𝑑,𝑦(1) − 𝑠𝑜𝑟𝑑𝑒𝑟𝑒𝑑,𝑦(1 + 𝜀)) / (𝑠𝑜𝑟𝑑𝑒𝑟𝑒𝑑,𝑥(1) − 𝑠𝑜𝑟𝑑𝑒𝑟𝑒𝑑,𝑥(1 + 𝜀))] (3.9)

𝛼𝑖𝑛𝑖𝑡,𝑅 = tan⁻¹[(𝑠𝑜𝑟𝑑𝑒𝑟𝑒𝑑,𝑦(𝑁) − 𝑠𝑜𝑟𝑑𝑒𝑟𝑒𝑑,𝑦(𝑁 − 𝜀)) / (𝑠𝑜𝑟𝑑𝑒𝑟𝑒𝑑,𝑥(𝑁) − 𝑠𝑜𝑟𝑑𝑒𝑟𝑒𝑑,𝑥(𝑁 − 𝜀))] (3.10)

𝛼𝑖𝑛𝑖𝑡 = (𝛼𝑖𝑛𝑖𝑡,𝐿 + 𝛼𝑖𝑛𝑖𝑡,𝑅) / 2 (3.11)

To find the angle of the micropipette, the side walls of the micropipette are used. Local angles 𝛼𝑖

at every point 𝑠𝑜𝑟𝑑𝑒𝑟𝑒𝑑(𝑖) are then averaged to find 𝛼, using (3.12), as derived by Wong and Mills

[55].

𝛼𝑖 = tan⁻¹[(𝑠𝑜𝑟𝑑𝑒𝑟𝑒𝑑,𝑦(𝑖 + 𝛿) − 𝑠𝑜𝑟𝑑𝑒𝑟𝑒𝑑,𝑦(𝑖 − 𝛿)) / (𝑠𝑜𝑟𝑑𝑒𝑟𝑒𝑑,𝑥(𝑖 + 𝛿) − 𝑠𝑜𝑟𝑑𝑒𝑟𝑒𝑑,𝑥(𝑖 − 𝛿))]; 𝑖 ∈ [1 + 𝛿, 𝑁 − 𝛿] (3.12)

where 𝛿 is the sampling length. The points that do not closely align with the micropipette walls, such as the tip, are removed with a simple threshold criterion, whereby only the points within 15° of 𝛼𝑖𝑛𝑖𝑡 are kept. The two remaining, now separated, sets of points on the two sides are labelled as 𝑠𝐿 and 𝑠𝑅. The centroids of these sides are found with the Matlab regionprops function and are labelled 𝐶𝐿 and 𝐶𝑅, respectively. A line is then drawn through the midpoint of these two centroids, with angle 𝛼 [55]. The point at which this line encounters the micropipette is the position of the tip of the micropipette, 𝑝, as shown in Figure 3.27(e).
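A sketch of the initial-angle estimate (3.9)-(3.11) on a synthetic outline; the helper initial_angle and the test geometry are illustrative, not the thesis implementation.

```python
import numpy as np

def initial_angle(sx, sy, eps):
    """Initial pipette angle per (3.9)-(3.11): the two wall slopes are each
    estimated from eps points at one end of the ordered outline, then
    averaged; the tip sits mid-sequence and is never touched."""
    aL = np.arctan((sy[0] - sy[eps]) / (sx[0] - sx[eps]))
    aR = np.arctan((sy[-1] - sy[-1 - eps]) / (sx[-1] - sx[-1 - eps]))
    return 0.5 * (aL + aR)

# Synthetic outline: two straight walls of a pipette tilted 10 degrees,
# ordered wall end -> tip -> other wall end.
a = np.deg2rad(10.0)
t = np.arange(50.0, 0.0, -1.0)                       # wall parameter
left = np.c_[t * np.cos(a) - 2 * np.sin(a), t * np.sin(a) + 2 * np.cos(a)]
right = np.c_[t[::-1] * np.cos(a) + 2 * np.sin(a), t[::-1] * np.sin(a) - 2 * np.cos(a)]
s = np.vstack([left, right])
print(round(float(np.rad2deg(initial_angle(s[:, 0], s[:, 1], eps=10))), 6))  # 10.0
```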

Figure 3.27: Micropipette image segmentation. (a) Original image of micropipette. (b) Canny edge

detection. (c) Image Fill. (d) Micropipette outline split into side walls and tip. (e) Micropipette

with orientation and tip position.

Once the tip of the micropipette is found through these image processing techniques, the next step is to calibrate the micropipette position in Cartesian space. To do so, first the micropipette is placed at an arbitrary position within the camera workspace, such that the micropipette is within the focal plane of the camera. The tip position, 𝑝1, is then found from the earlier image processing procedure, as shown in Figure 3.28. With the micromanipulator, the micropipette moves along the xy-plane to a known position within the camera workspace, and the same image processing procedure is applied again, extracting the tip position, 𝑝2, as shown in Figure 3.28. Knowing the position of these points 𝑝1 and 𝑝2 in the camera frame of reference as 𝑝1^𝐶 and 𝑝2^𝐶, a coordinate transform is applied in order to obtain the position in the micromanipulator frame of reference, 𝑝1^𝑀 and 𝑝2^𝑀 [55]. To obtain the coordinate transform, first the angle between the two points is found for the rotation matrix, in both the camera and manipulator coordinate frames, as 𝛽𝐶 and 𝛽𝑀 in (3.13 – 3.14).

𝛽𝐶 = tan⁻¹[(𝑝2,𝑦^𝐶 − 𝑝1,𝑦^𝐶) / (𝑝2,𝑥^𝐶 − 𝑝1,𝑥^𝐶)] (3.13)

𝛽𝑀 = tan⁻¹[(𝑝2,𝑦^𝑀 − 𝑝1,𝑦^𝑀) / (𝑝2,𝑥^𝑀 − 𝑝1,𝑥^𝑀)] (3.14)

The rotation matrix, 𝑅, is then found as follows in (3.15).

𝑅 = [cos(𝛽𝑀 − 𝛽𝐶)  −sin(𝛽𝑀 − 𝛽𝐶); sin(𝛽𝑀 − 𝛽𝐶)  cos(𝛽𝑀 − 𝛽𝐶)] (3.15)

The transformation, 𝑇(𝑝^𝐶), is then found as follows in (3.16) [55].

𝑝^𝑀 = 𝑇(𝑝^𝐶) = 𝑠𝑅𝑝^𝐶 + 𝑝𝑜𝑓𝑓𝑠𝑒𝑡 (3.16)

where 𝑠 is the scaling factor in μm/px, based on the camera and magnification used, and 𝑝𝑜𝑓𝑓𝑠𝑒𝑡 is the offset from the micromanipulator frame origin to the camera frame origin. Once the camera position in Cartesian space is calibrated, the micropipette moves to a third arbitrary, but known, position to verify the micropipette calibration. This calibration process is illustrated in Figure 3.28(c).
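The calibration of (3.13)-(3.16) can be sketched as follows; calibrate is a hypothetical helper, and arctan2 is used in place of tan⁻¹ to avoid the quadrant ambiguity. The ground-truth values in the demo are made up.

```python
import numpy as np

def calibrate(p1C, p2C, p1M, p2M, s):
    """Camera-to-manipulator map of (3.13)-(3.16): rotation from the angle
    difference between the two frames, then the translational offset."""
    bC = np.arctan2(p2C[1] - p1C[1], p2C[0] - p1C[0])   # (3.13)
    bM = np.arctan2(p2M[1] - p1M[1], p2M[0] - p1M[0])   # (3.14)
    d = bM - bC
    R = np.array([[np.cos(d), -np.sin(d)],              # (3.15)
                  [np.sin(d), np.cos(d)]])
    offset = np.asarray(p1M) - s * R @ np.asarray(p1C)
    return lambda pC: s * R @ np.asarray(pC) + offset   # (3.16)

# Ground truth: 90-degree rotation, s = 0.5 um/px, offset (100, 50).
T = calibrate((10, 0), (20, 0), (100, 55), (100, 60), s=0.5)
print(np.allclose(T((30, 0)), [100.0, 65.0]))  # True
```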

Figure 3.28: Micropipette Calibration Procedure. (a) Micropipette at first position. (b)

Micropipette at second position. (c) Micropipette at calibration test position.

Calibration of the z-coordinate is performed in a similar, but simpler, manner. The scaling factor used for both the camera and micromanipulator frames of reference is the same, i.e. 𝑠 = 1, since both use units of μm, and the rotation matrix is 𝑅 = 1 for the z-axis. Therefore, the z-coordinate transformation is as follows in (3.17).

𝑧^𝑀 = 𝑧^𝐶 + 𝑧𝑜𝑓𝑓𝑠𝑒𝑡 (3.17)

where 𝑧^𝑀 is the z-coordinate in the micromanipulator frame of reference, and 𝑧^𝐶 is the z-coordinate in the camera frame of reference. 𝑧𝑜𝑓𝑓𝑠𝑒𝑡 is the distance between the two coordinates and can be found by subtracting 𝑧^𝐶 from 𝑧^𝑀. At this point, coordinates in the camera frame of reference can be provided as inputs, and the micropipette will approach the targeted coordinate.

3.3.2 Position Based Visual Servoing

Once calibrated, the final step is to move the micropipette to the calculated centroid of the blastomere, 𝐶̅ = (𝑥̅, 𝑦̅, 𝑧̅). With the holding micropipette, the embryo is moved into the camera's field of view. At this point, the blastomere image segmentation algorithm is performed and provides 𝐶̅. Since the micropipette can only move via translation, and not rotation, a specific set of instructions is required for the micropipette to reach the 3D coordinate, as the only way to pierce the zona pellucida is to move the micropipette towards the embryo, with the PZD

held orthogonal to the surface of the embryo. Since the micropipette cannot rotate, the zona

pellucida piercing must occur along the x-axis.

The micropipette travels through four positions. The first is the PZD initial position, stored as 𝑝𝑚𝑝,1 = (𝑥1, 𝑦1, 𝑧1). The target coordinate is 𝑝𝑚𝑝,4 = 𝐶̅. To move through these intermediate positions, position-based visual servo (PBVS) is used. Since PBVS allows motion outside of the focal plane of the camera, the micropipette can move in three dimensions (𝑥, 𝑦, 𝑧), while the focal plane of the camera remains stationary. The position coordinate 𝒔 is that of the micropipette tip, and the desired position coordinate, 𝒔∗, is that of the computed BOI centroid, 𝐶̅. The algorithm uses closed-loop control for the micromanipulators, although only one control system cycle is run when moving the pipette from point to point; as the speed of the visual feedback system increases in the future, higher frequency cycles can be implemented. The error is determined between the current position and the provided target position. The intermediate positions are provided to the software, given the blastomere centroid position, 𝐶̅. First, the micropipette moves along the z-axis to approach the same focal plane as the blastomere centroid; this coordinate is 𝑝𝑚𝑝,2 = (𝑥1, 𝑦1, 𝑧̅), the second position of the micropipette, as shown in Figure 3.29(a). The next step is for the micropipette to move along the y-axis to approach the third coordinate, 𝑝𝑚𝑝,3 = (𝑥1, 𝑦̅, 𝑧̅), as shown in Figure 3.29(b). The final step is for the micropipette to move towards the blastomere centroid, 𝑝𝑚𝑝,4 = (𝑥̅, 𝑦̅, 𝑧̅), along the x-axis, as shown in Figure 3.29(c). This sequentially pierces the zona pellucida and then moves the micropipette to the centroid of the blastomere. A simplified summary is given below, with the experiment performed in Appendix A.

First translation:

𝑝𝑚𝑝,1 = (𝑥1, 𝑦1, 𝑧1) → 𝑝𝑚𝑝,2 = (𝑥1, 𝑦1, 𝑧̅)

Second translation:

𝑝𝑚𝑝,2 = (𝑥1, 𝑦1, 𝑧̅) → 𝑝𝑚𝑝,3 = (𝑥1, �̅�, 𝑧̅)

Third translation:

𝑝𝑚𝑝,3 = (𝑥1, �̅�, 𝑧̅) → 𝑝𝑚𝑝,4 = (�̅�, �̅�, 𝑧̅) = 𝐶̅
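The three translations can be sketched as a small waypoint generator (illustrative Python, not the thesis's Matlab code), using the sample coordinates from the experiment of Section 4.2:

```python
def pbvs_waypoints(p_init, centroid):
    """Generate the sequential PBVS waypoints: align z first, then y,
    then approach along x so the zona pellucida is pierced head-on."""
    x1, y1, _ = p_init
    xc, yc, zc = centroid      # (x-bar, y-bar, z-bar) of the BOI
    p2 = (x1, y1, zc)          # first translation: along z
    p3 = (x1, yc, zc)          # second translation: along y
    p4 = (xc, yc, zc)          # third translation: along x, ending at the centroid
    return [p2, p3, p4]

# Sample experiment of Section 4.2: p1 = (1275, 550, 31), centroid = (950, 725, 28)
pbvs_waypoints((1275, 550, 31), (950, 725, 28))
# -> [(1275, 550, 28), (1275, 725, 28), (950, 725, 28)]
```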


Figure 3.29: Micropipette Control Path. (a) Micropipette at second position, 𝑝𝑚𝑝,2. (b) Micropipette at third position, 𝑝𝑚𝑝,3. (c) Micropipette at fourth, and final, position, 𝑝𝑚𝑝,4.


To summarize, Chapter 3 describes the methodology used for the blastomere segmentation and visual servoing process. The first section of this chapter introduces z-stack images (Section 3.1) and how they are obtained (Section 3.1.1). The next section describes blastomere image segmentation (Section 3.2) and how it is performed as a three-step process: initializing a region of interest (Section 3.2.1), creating a graph structure to find a low-cost energy path (Section 3.2.2), and calculating the centroid of the blastomere (Section 3.2.3). Finally, the chapter concludes with visual servoing (Section 3.3): first a micropipette is calibrated with image processing (Section 3.3.1), and then position-based visual servo is used to move the micropipette to a target position (Section 3.3.2), such as the blastomere centroid.


Results and Discussion

In this chapter, the experimental procedures and results from the algorithms developed in this work

are presented. In Section 4.1, the process to obtain the centroid of the blastomere and to move a

PZD micropipette to the centroid location is described from a user's standpoint. This serves both to describe the entire blastomere segmentation and visual servo procedure, and to discuss the automated procedures in detail. Section 4.2 provides results from sample experiments, how they are

obtained, and the effectiveness of the developed algorithms to accomplish the objectives. Section

4.3 discusses the results and clarifies causes of error within the algorithms.

4.1 Experimental Procedure

The entire BOI centroid computation and visual servo process involves several steps. In order for a user to correctly perform the required task, the following steps are carried out sequentially, as

outlined in Figure 4.1. The first step is to prepare the embryo, along with the microscope. This step

is performed manually by the user, and is illustrated in Figure 1.2. The standard inverted brightfield

microscope used for experiments is the Nikon Ti-U, equipped with two objective lenses, of 20x and 40x magnification respectively. For this setup, a 2-cell stage mouse embryo is submerged

within culture media on a petri dish and placed upon the stage of the microscope, a Prior Proscan

III, with translation actuation along the xy-axes. Two robotic micromanipulators, Scientific

Patchstar Micromanipulators, are placed on either side of the stage and are each equipped with a

micropipette. These micromanipulators have translation actuation along the xyz-axes, with the xy-

axes aligned with those of the motorized stage and camera frame. The left-hand micromanipulator

is equipped with the holding micropipette, with an outer diameter of 120 μm and an inner diameter

of 15-20 μm, capable of holding onto an early-stage mouse embryo of approximately 100 μm

diameter. The right-hand micromanipulator is equipped with a solid sharp tip PZD micropipette,

capable of moving to a targeted position and piercing a targeted blastomere. The range of the PZD

micropipette is limited to the range of the micromanipulator, at 5 cm along the x, y, and z

directions. However, since the micropipette cannot move through the stage without structural failure, care must be taken so that the micropipette moves only within approximately 2.5

cm above the stage. A standard QImaging optiMOS camera is attached to the microscope, via the

objective lens. All microscope components, including the stage, micromanipulators, and camera,


are connected to a desktop computer equipped with an Intel Core i7-4790 CPU at 3.6 GHz and 16

GB of RAM. Each component has corresponding software associated with it; the stage and

objective lens run on Prior Scientific Controller, the micromanipulators run on Scientific Patchstar

LinLab 2, and the camera runs on Micromanager 1.4 and ImageJ. The developed algorithms are

programmed and executed in Matlab R2018a. A Sutter Instrument XenoWorks Digital Microinjector serves as a vacuum source for the holding pipette. To prepare for determination of the

blastomere centroid, the user moves the holding micropipette towards the embryo and immobilizes it by applying a pressure of −5 hPa. The PZD micropipette is brought into camera view, after which the holding pipette, along with the embryo, moves out of the camera view in order to proceed to micropipette calibration.

The user then opens the developed micropipette calibration program on Matlab. The only

requirement from the user is to input the scaling factor, 𝑠, depending on the objective lens used for

the experiment. For the 20x and 40x lenses, the scaling factor is 0.32244 μm/px or 0.16043 μm/px respectively. The PZD micropipette calibration is then performed automatically,

allowing for the subsequent programs to move the PZD micropipette to desired positions with only

camera coordinates, rather than micromanipulator coordinates.
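A minimal sketch of this pixel-to-micrometre conversion (illustrative Python, using the scaling factors stated above):

```python
# Scaling factors, s, for the two objective lenses (um per pixel)
SCALE_UM_PER_PX = {20: 0.32244, 40: 0.16043}

def px_to_um(pixels, magnification):
    """Convert a camera-frame pixel distance into micrometres."""
    return pixels * SCALE_UM_PER_PX[magnification]

px_to_um(100, 20)   # 32.244 um under the 20x lens
px_to_um(100, 40)   # 16.043 um under the 40x lens
```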

To acquire the image stack of z-stack images, Micromanager is used to capture images and save

them as TIF files, and Prior Scientific Controller is used for moving the objective lens along the

z-axis. Starting from below the bottom of the embryo, a total of 60 z-stack images are successively

captured over 120 μm, centered at the middle of the embryo, to ensure the entire embryo is captured

within the image stack data. The total time taken to capture these images is 7.5 s, at 0.125 s per z-stack image with 3 ms of exposure. These parameters require user input and are set manually. However, once the user initiates the image acquisition program, the automated program begins, moving the focal plane, acquiring the images, and storing them as an image stack

in TIF files. Once the z-stack images are acquired, Micromanager converts the file into an easily

accessible TIF file, ready for Matlab to use. Currently, there does not exist a software bridge

between Matlab and Micromanager, and hence image stacks acquired from Micromanager cannot

automatically be read by Matlab. Therefore, once saved onto the computer, the image stacks are manually exported to Matlab, which then proceeds with the automated BOI centroid

calculation process.
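The acquisition sweep described above can be sketched as follows (illustrative Python; `move_focus` and `snap_image` are hypothetical placeholders for the lens-control and camera-capture calls, which in practice are driven through Prior Scientific Controller and Micromanager):

```python
N_IMAGES = 60        # z-stack images per image stack
SPAN_UM = 120.0      # total z travel, centred on the middle of the embryo
STEP_UM = SPAN_UM / N_IMAGES   # 2.0 um between consecutive focal planes

def acquire_stack(move_focus, snap_image, z_start):
    """Sweep the focal plane upward from below the embryo, capturing
    one image per focal plane; returns the image stack as a list."""
    stack = []
    for i in range(N_IMAGES):
        move_focus(z_start + i * STEP_UM)  # placeholder for lens control
        stack.append(snap_image())         # placeholder for camera capture
    return stack
```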


Once the TIF file of the z-stack image data is read by Matlab, the user sets parameters indicating

which blastomere is the blastomere of interest. Then, the program undergoes the automated

blastomere segmentation process, involving generating a region of interest, creating a low-cost

energy path, and computing the centroid coordinate, 𝐶̅, of the blastomere of interest as described

in Section 3.2. The time to compute the centroid coordinate of the BOI is approximately 45

minutes with current computing power. This computed centroid is with reference to the camera frame, and is then transformed into the micromanipulator frame by another Matlab program, so that the micromanipulator can translate to its targeted position when provided coordinates within the camera frame. There is a software connection between Matlab and the

Scientific Patchstar micromanipulators, and Matlab is capable of controlling the

micromanipulators by sending position coordinates within the micromanipulator frame. The

micropipette then translates to the calculated blastomere centroid in a sequential pattern via PBVS, as detailed in Section 3.3.2, and the automated task is successfully completed. These procedures, from Matlab importing the z-stack image data to moving the micropipette to a targeted position, are entirely automated and require no user input.

Figure 4.1: Flowchart of overall BOI centroid computation and visual servo process. The boxes represent tasks, and the arrows represent the progression from one task to another. Orange boxes and arrows represent manually performed tasks, whereas blue boxes and arrows represent automatically performed tasks.


4.2 Experimental Results

To validate the proposed automated procedures, automated blastomere segmentation and visual servoing were performed on a 2-cell mouse embryo as a proof of concept. The centroid calculations are verified against results acquired manually, whereas the visual servo is verified experimentally.

A sample experiment is performed to compute the BOI centroid, as described in Section 4.1. Once

the BOI segmentation and centroid computing algorithms of Section 3.2 are performed, they output

a calculated BOI centroid of 𝐶̅ = (�̅�, �̅�, 𝑧̅). The centroid resulting from the algorithms is verified with manual segmentation of the blastomeres, where the centroid determined by manual segmentation is labeled as 𝐶𝑚 = (𝑐𝑚𝑥, 𝑐𝑚𝑦, 𝑐𝑚𝑧). The manual segmentation is performed by user

observation. The z-stack images from the image stack are obtained sequentially and the xy-

centroid coordinates are found by observation. The sample data obtained can be seen in Table 1 in Appendix B. The errors along the x and y axes are 𝑒𝑥 = |𝑐𝑚𝑥 − 𝑐𝑖𝑥| and 𝑒𝑦 = |𝑐𝑚𝑦 − 𝑐𝑖𝑦|,

respectively. On average, 𝑒𝑥 is found to be approximately 5.751 pixels (1.854 μm or 9.27%), and

𝑒𝑦 is found to be approximately 1.939 pixels (0.625 μm or 3.125%), as calculated in Tables B.2

and B.3 in Appendix B. The error along the z-axis is 𝑒𝑧 = |𝑐𝑚𝑧 − 𝑧̅|, where 𝑐𝑚𝑧 is the z-coordinate

used for the z-stack image at the middle of the image stack, also used to find the centroid

approximation for the ROI in Section 3.2.1. 𝑒𝑧 is found to be approximately 11.774 μm

(approximately 6 z-stack images or 58.89%).
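These per-axis error calculations can be sketched as follows (illustrative Python; the sample centroid coordinates are hypothetical, chosen to reproduce the average errors reported above, while 0.32244 μm/px is the 20x scaling factor that converts the reported pixel errors to micrometres):

```python
def centroid_errors(c_manual, c_auto, scale_um_per_px=0.32244):
    """Absolute x/y error between the manually and automatically
    determined centroids, reported in both pixels and micrometres."""
    ex_px = abs(c_manual[0] - c_auto[0])
    ey_px = abs(c_manual[1] - c_auto[1])
    return {"ex_px": ex_px, "ey_px": ey_px,
            "ex_um": ex_px * scale_um_per_px,
            "ey_um": ey_px * scale_um_per_px}

# Hypothetical centroids differing by the average reported errors
centroid_errors((455.000, 512.000), (449.249, 510.061))
# ex = 5.751 px (about 1.854 um); ey = 1.939 px (about 0.625 um)
```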

Although this verification method is prone to human error, a better way to segment manually is to outline the boundary of the blastomere at every z-stack image of the image stack via drawing software, and then compute the resulting xy-coordinates of the centroid. The area can then be computed for every z-stack image, an accurate z-coordinate can be determined, and hence a more accurate model for comparison can be used to determine error and validate the algorithms.

However, the required accuracy for blastomere-related tasks, such as blastomere aspiration, depends on the dimensions of the aspirating micropipette. An aspirating micropipette must have an outer diameter less than that of the object it is aspirating, in order to correctly aspirate via suction. The normal inner diameter of an aspirating micropipette is between 30 and 40 μm, and the errors between the observed 𝐶𝑚 and the calculated 𝐶̅ are within these required tolerances.


A sample experiment is performed to move the PZD micropipette to a given targeted position.

The PBVS algorithms are used, as described in Section 3.3 to move the micropipette tip to the

targeted position. The PBVS algorithms are under closed-loop control, and allow the

micromanipulator to move the micropipette to a targeted 3D Cartesian coordinate, rather than

camera coordinates. The algorithm converts the targeted position from the camera frame of

reference to the robot frame of reference for the micromanipulator to move. Once the PBVS

algorithms of Section 3.3 are performed, the micropipette moves to the given position. The visual

servo algorithms are verified based on how close the micropipette tip is to this given coordinate.

After calibration and coordinate frame transformation from the camera frame to the

micromanipulator frame, coordinates in the camera frame can be given to the micromanipulator,

and the micropipette will move to the respective coordinate in micromanipulator frame. A sample

target coordinate is given as 𝑝 = (𝑥𝐺 , 𝑦𝐺 , 𝑧𝐺) = (950, 725, 28). The PZD micropipette then moves from its initial position, 𝑝1 = (1275, 550, 31), to the second intermediate position, 𝑝2 = (1275, 550, 28), to the third, 𝑝3 = (1275, 725, 28), and to the final targeted position of the BOI centroid, 𝐶̅ = 𝑝4 = (950, 725, 28), successively, following the steps described in Section 3.3.2. These positions are found in Table 4 in Appendix C. A figure for this example process is shown in Figure

4.2.

The error is found by obtaining the exact coordinates of the micropipette tip within the camera

frame, to determine the accuracy of the micropipette within a visual servoing environment. The

true micropipette tip positions for the experiment are found in Table 5, as 𝑝 = (𝑥𝑇 , 𝑦𝑇 , 𝑧𝑇). The

error for each intermediate position is found as 𝑝 = (𝑥𝐸 , 𝑦𝐸 , 𝑧𝐸), where 𝑥𝐸 = |𝑥𝑇 − 𝑥𝐺|, 𝑦𝐸 =

|𝑦𝑇 − 𝑦𝐺|, and 𝑧𝐸 = |𝑧𝑇 − 𝑧𝐺| and shown in Table 6. The errors along the x and y axes are found

to be approximately within 25 pixels. These errors are largely due to possible calibration error

along the x-axis. Overall, the errors are observed to be within 8 μm (40%); since the blastomeres are approximately 50 μm in diameter, the errors are within the required tolerances for tasks such as blastomere aspiration, given that the inner diameter of the aspirating micropipette is approximately 30-40 μm.


Figure 4.2: Sample Experiment of Visual Servoing.


4.3 Discussion of Results

Given the results, it is important to discuss possible cases of failure of both the blastomere

segmentation and visual servoing algorithms.

Image acquisition and processing parameters greatly influence the blastomere segmentation process, specifically in finding the low-cost energy path. Image acquisition parameters such as the objective lens used, camera exposure time and binning, the distance between consecutive z-stack images, 𝑑𝐼, the time taken to acquire the z-stack images of an image stack, and the size of the image stack, 𝑁, all play a role in the results. These parameters may influence the need to change other parameters in image processing, from the size of the standard deviation filter, the threshold for binarizing an image, and the size of the structuring element, to the sizes of the ROI radii and graph structures. Experiments and their parameters are to be kept consistent to achieve consistent results. Changes between experiments require changes to these parameters, until successful results are achieved.

An issue relevant to image processing arises due to the translucency of the embryo. Although the

zona pellucida and perivitelline space of the embryo are transparent, blastomeres are slightly

opaque. Hence, as can be seen in Figure 4.3, only one side of the blastomere is visible to the microscope. In the case of an inverted brightfield microscope, the boundaries of the bottom half of the blastomeres are visible, and are observed as white circular rings. The upper half of the blastomere is not visible to the microscope (Figure 4.3). Therefore, to compute the centroid of the BOI, only data that provide a clear view of the blastomere boundaries are processed. In the case of the sample experiment, only z-stack images between 𝐼3 and 𝐼36 are visible and processed.

Determining the size of the region of interest raises another issue. The ROI must encompass the

boundary of the blastomere at every z-stack image. The number of pixels in the ROI significantly impacts image processing time. The main components that affect the structure of the ROI are the radii, 𝜌′ and 𝜌′′, and the number of 𝜃 samples, 𝜃𝑛. The larger the number of points 𝜌𝑛 and 𝜃𝑛 used for the image array 𝐽, the greater the computation time. Note that only the range between 𝜌′ and 𝜌′′ changes the number of sampling points, 𝜌𝑛, and not the physical values of the radii. For the ROI to account for the changing radii of the spherically shaped blastomere at every z-stack image, the radii 𝜌′ and 𝜌′′ change according to the blastomere boundary radii; however, 𝜌𝑛 is kept


consistent. Uncertainty arises when the blastomere lies within an unknown region of the embryo in 3D space. The ROI must then be of greater size, to accommodate such uncertainty.
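As a rough sketch of how an annular ROI can be resampled into the polar image array 𝐽 (illustrative Python with NumPy; the thesis's actual sampling and energy assignment are described in Section 3.2.2, and nearest-neighbour lookup is an assumption here):

```python
import numpy as np

def polar_roi(image, center, rho_lo, rho_hi, rho_n, theta_n):
    """Sample an annular ROI around `center` into a (rho_n, theta_n)
    polar image array J using nearest-neighbour lookup."""
    rhos = np.linspace(rho_lo, rho_hi, rho_n)
    thetas = np.linspace(0.0, 2.0 * np.pi, theta_n, endpoint=False)
    cx, cy = center
    # Cartesian pixel coordinates for every (rho, theta) sample
    xs = np.rint(cx + rhos[:, None] * np.cos(thetas)).astype(int)
    ys = np.rint(cy + rhos[:, None] * np.sin(thetas)).astype(int)
    h, w = image.shape
    return image[ys.clip(0, h - 1), xs.clip(0, w - 1)]
```

The cost of the subsequent graph construction grows with the product 𝜌𝑛 × 𝜃𝑛, which is why these sampling counts dominate the processing time.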

Figure 4.3: Various z-stack images of the embryo. (a) z-stack image 𝐼14. Note the white circular outline within the embryo; this is the boundary of the blastomere at this z-stack image. (b) z-stack image 𝐼31, also used as the middle of the image stack, as it is the z-stack image with the largest blastomere boundary. (c) z-stack image 𝐼44. The blastomere boundary is not visible due to blastomere opacity.


One issue that arises with the automated micropipette calibration procedure is due to the solid

sharp tip of the PZD micropipette. Two long sides of the micropipette join to form a sharp tip,

instead of the flat tip of a holding micropipette (Figure 4.4). The image processing procedure, as detailed in Section 3.3.1, locates the tip based on where the drawn line encounters the micropipette side wall. The angle of the line, 𝛼, greatly affects where this encounter occurs. With a sharp tip, such as that of a PZD micropipette, the likelihood of initially acquiring an image of the tip is low. For this reason, the micropipette calibration will contain error, though insufficient to lead to visual servo failure for blastomere aspiration. This is one of the causes of error within the visual servo algorithms.

Figure 4.4: Comparison of Micropipette Tips for Calibration.

With these issues addressed during experiments, the blastomere segmentation and visual servo algorithms can successfully perform the required tasks.


Conclusions

5.1 Summary and Conclusions

The development of algorithms for automated single cell surgery is crucial for IVF and PGD tasks. In this work, a detailed literature review is first presented, focusing on the various types and limitations of microscopy, depth of field, and various image processing and cell segmentation techniques. The literature review also overviews two visual servo approaches: image-based and position-based visual servoing. Obtaining the blastomere of early-stage embryos is vital for processes such as PGD genetic testing. The position of the centroid of a blastomere of interest in 3D Cartesian space must be known in order to easily perform position-based visual servoing procedures. Algorithms are developed to acquire z-stack images via an inverted brightfield microscope, compute centroid coordinates of a BOI within an embryo in 3D Cartesian space, and move a micropipette to a computed position using position-based visual servo control, which are separated into three sections, as shown in the methodology.

First, a method for automating the collection of embryo image data is developed. The main goal

for this section is to acquire image data of the embryo, along with the BOI, to be used in subsequent

sections. Starting from below the bottom of the embryo and moving the focal plane along the z-

direction to above the embryo, a sequence of images is captured successively at equally spaced

focal planes, known as z-stack images. The collection of z-stack images is known as the image

stack. The image stack is saved and exported as a TIF file for further analysis and processing.

Acquisition of the image stack is obtained with an inverted brightfield microscope.

Given the series of z-stack images, the main goal for this section is to compute the centroid of the

BOI in 3D Cartesian space, for PGD genetic testing purposes. The image stack of z-stack images

is then exported to Matlab, where it undergoes a series of image processing steps to compute the

centroid of the BOI, with a three-step process. The first step is to obtain the z-stack image from

the middle of the image stack, and compute an approximate centroid for the region of interest used

in the next step. This follows a series of image processing steps, involving a standard deviation filter, binarization thresholds, area filters, and structuring elements, creating a blob of the two blastomeres. With image processing, the blastomeres in the blob are separated, leaving the two blastomeres, where then the approximate centroids are computed. Only the BOI is taken for further examination.

An ROI is then developed, centered at this approximate centroid for the subsequent steps. The

second step is to segment the blastomere at this z-stack image. The ROI is converted into polar

coordinates and assigned energy values. A graph structure is then constructed, incorporating the

z-stack images both below and above the current z-stack image, where then a low-cost energy path

is generated in order to segment the blastomere at the respective z-stack image, which then is

converted back into Cartesian coordinates. The third step begins with carrying through Steps 1 and

2 for all z-stack images, and subsequently, with the use of weighted averaging equations,

computes the centroid of the BOI in 3D Cartesian space.
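A minimal sketch of such a weighted average (illustrative Python; the thesis's exact weighting equations are given in Section 3.2.3, and weighting each slice by its segmented cross-sectional area is an assumption here, as are the sample values):

```python
def weighted_centroid(slices):
    """Area-weighted 3D centroid of the BOI. `slices` holds one tuple
    (area, x, y, z) per segmented z-stack image."""
    total = sum(a for a, _, _, _ in slices)
    x_bar = sum(a * x for a, x, _, _ in slices) / total
    y_bar = sum(a * y for a, _, y, _ in slices) / total
    z_bar = sum(a * z for a, _, _, z in slices) / total
    return (x_bar, y_bar, z_bar)

# Three hypothetical slices; the largest cross-section dominates the average
weighted_centroid([(100, 450, 510, 10), (400, 452, 512, 12), (100, 454, 514, 14)])
# -> (452.0, 512.0, 12.0)
```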

Given the computed BOI centroid coordinates in 3D Cartesian space, the main goal for this section

is to move a micropipette to the computed position with PBVS, as a proof of concept. First, the

micropipette is calibrated in order to accurately move to a target position. The second step is to move the micropipette to the computed BOI centroid coordinates in 3D Cartesian space. This is performed through visual servo, with the micropipette moving through a series of intermediate steps to reach the computed BOI coordinates. Specifically, PBVS is used because it is a technique

that controls a robot micromanipulator with feedback from visual images based on 3D positional

coordinates, instead of 2D camera coordinates. This is useful due to the shallow depth of field

found in microscope images at this scale.

The results chapter provides a guide from a user's standpoint, as well as sample experimental results from a 2-cell stage embryo. The algorithms are found to provide accurate results for computing the blastomere centroid and for micropipette visual servoing. The accuracy of the results is sufficient for cell surgery related tasks, such as blastomere aspiration for PGD.

In conclusion, all the objectives in this research are achieved. The developed algorithms compute

and determine the centroid coordinates of a BOI within an embryo in 3D Cartesian space, and

move a micropipette to a computed position using PBVS control. The algorithms also integrate

with the existing inverted brightfield microscope setup, equipped with two robotic

micromanipulators and a motorized stage.


5.2 Contributions

In this work, the following contributions are made:

1. The research proposes a method to obtain image data from across the z-direction of an

embryo, known as z-stack images.

2. From the obtained z-stack images, the research proposes an automated image processing procedure to determine the centroid of a BOI, and a visual servo procedure to move a micropipette to this computed centroid position.

3. The proposed research integrates with the existing Nikon Ti-U brightfield microscope

setup, equipped with two robotic micromanipulators and a motorized stage.

5.3 Recommendations and Future Works

Based on the research accomplished in this thesis, future works are suggested in the following

aspects:

1. Explore image processing algorithms with images from high contrast imaging, such as

fluorescent imaging.

2. Program the algorithms in a faster language than Matlab, such as C/C++ or Python, as

well as develop algorithms that incorporate parallel programming for faster computation

in generating the graph structures.

3. Develop higher frequency closed-loop visual servo control algorithms for micropipette

movement, rather than single cycle closed-loop motion.

4. Reconstruct blastomere shapes along with their positions using the blastomere

segmentations along the z-stack images.

5. Investigate machine learning segmentation techniques, such as semantic segmentation, for

blastomere segmentation, and improve on the accuracy of the proposed algorithms.



for modelling human embryos.”

[42] U. Damgaard Pedersen, “Modeling Human Embryos Using a Variational Level Set

Page 86: Automated Blastomere Segmentation for Visual Servo on

69

Approach,” 2004.

[43] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical

image segmentation,” in Lecture Notes in Computer Science (including subseries Lecture

Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2015, vol. 9351, pp.

234–241.

[44] I. V. Ilina, Y. V. Khramova, M. A. Filatov, M. L. Semenova, and D. S. Sitnikov,

“Application of femtosecond laser scalpel and optical tweezers for noncontact biopsy of late

preimplantation embryos,” High Temp., vol. 53, no. 6, pp. 804–809, Nov. 2015.

[45] I. V. Il’ina, D. S. Sitnikov, A. V. Ovchinnikov, M. B. Agranat, Y. V. Khramova, and M. L.

Semenova, “Noncontact microsurgery and micromanipulation of living cells with combined

system femtosecond laser scalpel-optical tweezers,” in Biophotonics: Photonic Solutions

for Better Health Care III, 2012, vol. 8427, p. 84270S.

[46] C. Jiang and J. K. Mills, “Planar Cell Orientation Control System Using a Rotating Electric

Field,” IEEE/ASME Trans. Mechatronics, vol. Volume 20, no. Number 5, Oct. 2014.

[47] H. K. H. Chu, “An Automated Micromanipulation System for 3D Parallel Microassembly

by An Automated Micromanipulation System for 3D Parallel Microassembly,” 2011.

[48] F. Chaumette and S. Hutchinson, “Visual servo control. I. Basic approaches,” IEEE Robot.

Autom. Mag., vol. 13, no. 4, pp. 82–90, Dec. 2006.

[49] F. Chaumette and S. Hutchinson, “Visual servo control. II. Advanced approaches

[Tutorial],” Robot. Autom. Mag. IEEE, vol. 14, pp. 109–118, 2007.

[50] X. Liu and Y. Sun, “Visually Servoed Orientation Control of Biological Cells in

Microrobotic Cell Manipulation,” in Springer Tracts in Advanced Robotics, 2009, vol. 54,

pp. 179–187.

[51] B. Thuilot, P. Martinet, L. Cordesses, and J. Gallice, “Position based visual servoing:

Keeping the object in the field of vision,” Proc. - IEEE Int. Conf. Robot. Autom., vol. 2, pp.

1624–1629, 2002.

Page 87: Automated Blastomere Segmentation for Visual Servo on

70

[52] S. Sidhu and J. K. Mills, “Automated Blastomere Segmentation for Early-Stage Embryo

Using 3D Imaging Techniques,” in Proceedings of 2019 IEEE International Conference on

Mechatronics and Automation, ICMA 2019, 2019, pp. 1588–1593.

[53] A. Giusti and others, “Segmentation of Human Zygotes in Hoffman Modulation Contrast

Images,” Proc. Med. Image Underst. Anal., no. c, pp. 189–193, 2009.

[54] D. Guichard, “Introduction to Combinatorics and Graph Theory.” p. 147, 2016.

[55] C. Y. Wong and J. K. Mills, “Cleavage-stage embryo rotation tracking and automated

micropipette control: Towards automated single cell manipulation,” in IEEE International

Conference on Intelligent Robots and Systems, 2016, vol. 2016-Novem, pp. 2351–2356.

[56] C. Y. Wong and J. K. Mills, “Automation and Optimization of Multipulse Laser Zona

Drilling of Mouse Embryos During Embryo Biopsy,” IEEE Trans. Biomed. Eng., vol. 64,

no. 3, pp. 629–636, Mar. 2017.

[57] I. M. Bahadur and J. K. Mills, “A mechanical perforation procedure for embryo biopsy,” in

2013 ICME International Conference on Complex Medical Engineering, CME 2013, 2013,

pp. 313–318.


Appendices

Appendix A. Experiment of Visual Servoing


Appendix B. Sample Blastomere Coordinate Calculations

I_i    c_i,x (px)    c_i,y (px)    A_i (px²)    c_m,x (px)    c_m,y (px)
3      407.0214      1180.426      2050         426           1166
4      419.1576      1177.493      1487         428           1166
5      418.5451      1172.339      2566         430           1168
6      425.0089      1173.815      1949         428           1168
7      426.2316      1176.548      2186         428           1168
8      426.1681      1179.861      3018         426           1168
9      423.563       1177.703      4334         426           1172
10     419.9018      1178.01       4334         424           1172
11     423.2451      1180.032      4996         422           1170
12     425.0008      1180.104      5141         426           1174
13     421.8036      1181.056      5870         424           1174
14     423.9012      1185.474      6242         428           1174
15     422.8728      1178.996      7930         426           1176
16     425.4549      1174.813      9187         432           1174
17     413.3434      1159.006      9741         428           1174
18     417.7459      1179.887      10241        430           1178
19     427.872       1163.592      12417        426           1176
20     395.5784      1166.374      24160        436           1182
21     384.4635      1162.628      21686        420           1178
22     397.111       1162.045      20582        418           1172
23     423.3193      1186.507      24915        422           1176
24     424.1436      1186.029      24029        420           1180
25     423.7309      1184.987      22637        422           1178
26     420.3445      1188.147      22637        426           1170
27     422.7234      1170.544      19932        424           1174
28     421.6773      1185.161      21961        426           1176
29     421.9259      1185.353      21827        426           1176
30     421.4112      1182.425      21882        420           1178
31     421.7194      1181.189      21909        422           1182
32     423.0068      1183.188      21788        422           1182
33     423.396       1183.624      21988        426           1182
34     427.884       1174.486      22076        426           1182
35     428.7528      1174.456      23348        426           1182
36     424.4434      1157.622      21009        428           1180

Table 1: Sample Blastomere Coordinate Data


x̄ (px)      ȳ (px)     Σ A_i (px²)    Σ (z_i · A_i) (px³)    z̄ (in i)    Σ(c_m,x) (px)    Σ(c_m,y) (px)
419.7785    1176.88    472055         11854800               25.1131     425.5294         1174.941

Table 2: Sample Blastomere Coordinate Calculations
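The z̄ value in Table 2 is the area-weighted mean of the slice indices (11854800 / 472055 ≈ 25.1131), which suggests the per-slice centroids are combined with area weights. A minimal sketch of that computation, assuming area weighting along all three axes (the function name and tuple layout are illustrative, not taken from the thesis code):

```python
def blastomere_centroid(slices):
    """Area-weighted 3D centroid from per-slice 2D centroids.

    slices: list of (z_index, c_x, c_y, area) tuples, one per
    z-stack image in which the blastomere was segmented.
    """
    total_area = sum(a for _, _, _, a in slices)
    x_bar = sum(cx * a for _, cx, _, a in slices) / total_area
    y_bar = sum(cy * a for _, _, cy, a in slices) / total_area
    z_bar = sum(z * a for z, _, _, a in slices) / total_area
    return x_bar, y_bar, z_bar
```

Feeding in the (I_i, c_i,x, c_i,y, A_i) rows of Table 1 reproduces the x̄, ȳ, and z̄ entries of Table 2.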

       (in px)     (in μm)
e_x    5.75093     1.85434 (9.27%)
e_y    1.938776    0.625141 (3.125%)
e_z    5.887       11.774 (58.89%)

Table 3: Sample Blastomere Coordinate Calculation Errors
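The unit conversion implied by Table 3 can be back-calculated from its own rows: the lateral scale is roughly 0.3224 μm per pixel (1.85434 / 5.75093) and the axial scale is 2.0 μm per z-stack slice (11.774 / 5.887). These scale factors are inferred from the table, not stated constants of the imaging setup:

```python
# Scale factors inferred from Table 3 (assumptions, not documented
# microscope parameters):
UM_PER_PX = 1.85434 / 5.75093   # lateral, ~0.3224 um per pixel
UM_PER_SLICE = 11.774 / 5.887   # axial, 2.0 um per z-stack slice

def error_in_um(e_x_px, e_y_px, e_z_slices):
    """Convert pixel/slice-index errors to micrometres."""
    return (e_x_px * UM_PER_PX,
            e_y_px * UM_PER_PX,
            e_z_slices * UM_PER_SLICE)
```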


Appendix C. Sample Micropipette Tip Coordinate Calculations

       x_G (px)    y_G (px)    z_G (in i)
p_1    1275        550         31
p_2    1275        550         28
p_3    1275        725         28
p_4    950         725         28

Table 4: Sample Given Micropipette Tip Coordinates

       x_T (px)    y_T (px)    z_T (in i)
p_1    1255        550         31
p_2    1255        550         28
p_3    1254        724         28
p_4    925         726         28

Table 5: Sample True Micropipette Tip Coordinates

       x_E (px)       y_E (px)    z_E (in i)
p_1    20 (32.24%)    0 (0%)      0
p_2    20 (32.24%)    0 (0%)      0
p_3    21 (33.86%)    1 (1.61%)   0
p_4    25 (40.31%)    1 (1.61%)   0

Table 6: Sample Micropipette Tip Coordinate Errors
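The x_E, y_E, z_E columns of Table 6 are the per-axis absolute differences between the given coordinates (Table 4) and the true coordinates (Table 5). A minimal sketch of that calculation (the function name is illustrative; the percentage normalization used in Table 6 is not reproduced here, since its reference quantity is not stated in the tables):

```python
def tip_error(given, true):
    """Per-axis absolute localization error between the coordinate
    commanded to the micropipette (given) and the manually verified
    tip position (true), each as an (x, y, z) tuple."""
    return tuple(abs(g - t) for g, t in zip(given, true))
```

For example, row p_1 of Tables 4 and 5 gives tip_error((1275, 550, 31), (1255, 550, 31)) = (20, 0, 0), matching Table 6.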


Appendix D. Sample of Blastomere Segmentation Across Image Stack

[Figure: blastomere segmentation results overlaid on z-stack images 3 through 36, shown as paired panels labeled "z-Stack Image - 3" through "z-Stack Image - 36".]