
Bachelor Final Project in Application Modality
Designing a Framework to Give Perception Capabilities to an Industrial Robot for Waste Separation Tasks

Nicolas Barrero Lizarazo a,d, Didier Esneider Galvis Rico a,d, Carol Viviana Martinez Luna, PhD b,d, and Stevenson Bolivar Atuesta, MSc c,d

a Industrial Engineering Student
b Assistant Professor, Research Project Director

c Professor, Research Project Co-Director
d Department of Industrial Engineering, Pontificia Universidad Javeriana. Bogota, Colombia

Abstract

With the development of modern robotics since 1960, industry as a whole has experienced exponential growth. In the last few years, advances in communications and information technologies have triggered a new industrial revolution called Industry 4.0. In this new model, human-robot interaction is fundamental to developing the new "smart factory", where the integration of cyber-physical systems will dominate the way industrial processes are done.

Currently, many industrial processes in Colombia are still done manually, for instance the recycling process, particularly the waste separation task. This task is usually performed by vulnerable populations or low-income families, who collect recyclable materials directly from mixed-waste bins and bring them to a public collection center, where they are classified manually.

Based on the previously mentioned situation, industrial engineering students from the Pontificia Universidad Javeriana, Bogota (PUJ) conducted research that consists of characterizing and simulating, under controlled conditions, the application of an industrial robot for waste separation tasks in an emulated human-robot co-existence workstation.

The main purpose of this research is to make a first approach to analyzing the waste pre-classification task of plastic bottles in Bogota, which occurs at the Centro de recoleccion La Alqueria, one of the few collection centers that are part of the Unidad Administrativa Especial de Servicios Publicos (UAESP). The process is simulated at the Centro Tecnologico de Automatizacion Industrial (CTAI) at PUJ, where a testbed was created. It consists of an emulated pre-classification workstation where three types of thermoplastic recyclable materials are classified into three different bins (red for colored-HDPE, white for HDPE/PS, and green for colored-PET) using the Microsoft Kinect sensor, and their 3D pose is estimated using the Kinect's IR (infrared) depth sensor.

The SDA10F dual-arm Motoman robot equipped with a Robotiq gripper is used to perform the pick and place task (picking up the bottles and carrying them to the appropriate bin). To distribute the designed system, two laptops are used; one is in charge of running the image processing algorithm, and the other is in charge of the environment simulation and the robot's control.

The proposed system is based on open source software, using the Robot Operating System (ROS) and the OpenCV libraries to develop the image processing algorithm. The system features four stages. The first stage, bottle detection, uses the OpenCV libraries to identify bottles on a work-table. The second stage, classification, locates the bottles on the work-table and characterizes them by their type and geometry. The third stage, pose estimation, uses computer vision techniques to estimate the pose of each bottle with respect to the base of the robot. The last stage, control, is in charge of planning the robot's trajectory to pick up the bottles and place them in the proper bin.

Two tests were executed to measure the system's performance. The first test evaluates the accuracy of the computer vision algorithm: a "Ground Truth" comparison was performed using an image database. The results of this evaluation show a mean classifier accuracy of 83.33%. The second test consists of the

evaluation of the global performance of the system. Using a representative sample of bottles, the system performs the whole classification task with a mean effectiveness rate of 57.38%. With these two tests, some errors were found; they are described in depth in sections VI-B and VI-C.

As a final remark, this research explored the potential of robotics in fighting the world's waste dilemma, justifying the use of robots in activities that involve repetitive tasks and inefficient manual labor. This technology could improve the way resources are disposed of along the whole supply chain, not only improving companies' profits but also serving as an eco-friendly application of technology.

Keywords: Industrial Robotics, Computer Vision, Waste separation, Industry 4.0, framework, perception for robots, dynamic environments, ROS-I, OpenCV, recycling process, automation.

I. PROBLEM STATEMENT

Nowadays, technological advances are a strategic tool for the development of a country. Their consequences are not limited to an economic impact; they also influence social and political aspects. The quality of products has increased due to technological advances and customer preferences. As a result, production processes have become more complex and specialized [1]. Robots have been used in industrial processes since 1961, when General Motors became the first company ever to use them [2].

In the last few years, technology has changed drastically, with exponential growth; for instance, Figure 1 compares the degree of complexity of each industrial revolution since 1760. Currently, we stand on the brink of a technological revolution that will fundamentally alter the way we live, work, and relate to one another. Industry 4.0 introduces what has been called the "smart factory". It is characterized by a fusion of technologies that "is blurring the lines between the physical, digital, and biological spheres" [3]. This model proposes an industry where automatic solutions will take over the execution of versatile operations, consisting of operational and analytical components such as "autonomous" manufacturing cells, which independently control and optimize production [4].

Figure 1: The four industrial revolutions with degree of complexity [5].

The arrival of this new manufacturing model is evident in different international indicators: in 2015, robot sales increased by 15%, by far the highest level ever recorded for one year [6]. This is a sign that not only specialized industries (automotive, for example) are using robots, but small and medium-sized enterprises (SMEs) are joining the market too, as seen in Figure 2(a).

(a) Annual Supply of Industrial Robots by Industries (b) Annual Supply of Industrial Robots by Year

Figure 2: Estimations of annual worldwide supply of robots [6].

Additionally, Figure 2(b) shows that since 2010, the demand for industrial robots has accelerated considerably. Between 2010 and 2015, the average increase in robot sales was 16% per year. This is an increase of about 59% and a clear sign of the significant rise in demand for industrial robots worldwide.

Due to this constant interest of industries in automating their processes in a way where human-robot interaction is more fluid, new enterprises have been born. For instance, since 2008 UR (Universal Robots)1 has developed what they call "Co-Robots". Collaborative robots or "Co-Robots" are designed to work alongside human workers, assisting them with a variety of tasks.

These "Co-Robots" will be part of the "smart factory", which is going to be "Modular, Mobile, and Flexible." In this new manufacturing model, robots and humans will interact with each other, creating a collaborative environment where decisions are decentralized.

Figure 3 shows the different levels of integration in these smart factories. Figure 3(a) shows the coexistence relationship, where the activities are carried out at the same time but in clearly separated spaces. In Figure 3(b), cooperation implies that human and robot share the same workstation but work at different times. Figure 3(c) describes the collaboration relationship, where both share the same workstation and work together at the same time.

What makes "Co-Robots" so attractive is the fact that they are affordable, easily trainable, and flexible. Their prices have also dropped, making them accessible to businesses of all sizes; for example, SMEs (small and medium-sized enterprises) are eager to adopt this technology. Current uses for "Co-Robots" include machine tending, material handling, assembly tasks, packaging, pick and place, counting, and inspection [7].

Far from replacing human workers, robots improve their productivity, as seen in Figure 3(d), freeing them from monotonous and repetitive tasks and allowing them to focus on more complex jobs or to complete the task in collaboration with the robot in a shared space [2].

This is why the purpose of this project is to create a framework, using open source software such as ROS-I [10] and OpenCV [11] and the hardware available at CTAI, to give a robot perception capabilities to perform a picking task using computer vision. With perception capabilities, the robot will be able to perform more specialized activities that require a higher level of interaction with its surroundings. The introduction of this technology represents a change in the way industrial processes are currently done, so it is important to identify potential applications. In this case, it is proposed to apply it to a waste separation task.

The waste application was chosen because the recycling rate is minimal worldwide, exceeding 10% only in Europe; this is because traditional disposal systems such as landfills or incineration are still widely used, especially in North America and Latin America. The situation is more critical in Asia and Africa, where disposal is done in open dumps.

1 Universal Robots was officially founded in 2005, with the goal of making robot technology accessible to small and medium-sized enterprises.

(a) Coexistence [8]. (b) Cooperation [8]. (c) Collaboration [8].

(d) Co-Robots executing cooperation and collaboration in a production line [9].

Figure 3: Different interactions human-robot and potential application.

On the other hand, Colombia has the biggest recovered-aluminum export industry, even larger than Brazil's scrap exports. Although Colombia is ahead in aluminum, the recovery of plastics is not as developed as in countries such as India or Brazil [9]. Therefore, Colombia has the potential to become the leading recovered-plastic exporter in the region.

It is also important to notice that in Colombia approximately 12 million tons of garbage are thrown away and the country recycles only 17% of it [12]. Specifically, in Bogota 7500 tons are disposed of in the "Dona Juana" landfill per day, and only 14% of the potentially recyclable materials are actually recycled. Figure 4 shows an estimation made by [13] of the money the city loses daily by throwing these materials away, with plastics, at COP$608.140.000 per day, being the most representative.

Figure 4: Estimated daily worth (in COP) of materials thrown away daily at Dona Juana landfill [13].

This is why the application of the proposed technology will be carried out in a case study focused on waste separation tasks as part of a recycling process. The application of this technology will extend the useful lifetime of landfills and reduce waste transportation costs. Besides, using robotics to carry out some tasks in a recycling process will increase the productivity of this industrial activity, creating new jobs and bringing better work conditions to operators, who will be able to perform more specialized tasks. Additionally, it is an eco-friendly application of technology. Finally, from an economic point of view, the proposed technology will create a value chain of secondary raw materials in Colombia.

To start developing the previously mentioned application in Colombia, this project aims to continue what PUJ has been developing at CTAI since 2016 with its PIR project (Perception for Industrial Robots) [14]. Therefore, the framework created in this project looks toward the incorporation of a sensor to allow industrial robots to adapt quickly to a variety of tasks, with avoidance capabilities, serving as a starting point to increase human-robot interaction in dynamic environments.

Based on the previously exposed situation, this project aims to answer the following research question: By using open source software (ROS-I, OpenCV) and the equipment available at CTAI, is it possible to create a framework (software and hardware) to provide a robot with perception capabilities using computer vision, allowing it to work in a picking application for waste separation tasks in a recycling process?

II. STATE OF THE ART

Research with the SDA10F robot has been previously carried out at CTAI. A project by Eng. W. Hernandez and L. Patino consisted of the implementation of a vision algorithm using a Kinect sensor that allows the aforementioned robot to pick elements with different shapes and place them in a certain spot. The shapes were predefined and loaded into the computer vision algorithm. This system was created using Matlab on a Windows platform [15].

Other picking solutions use ROS, which stands for Robot Operating System. It is a free framework that simplifies writing complex algorithms and reduces the time needed to develop an industrial application, thanks to predefined modules that perform typical activities. In [14], the configuration of the SDA10F dual-arm robot using ROS-Industrial at CTAI, Pontificia Universidad Javeriana, is described. The authors configured the system and used MoveIt! for uncoordinated trajectory planning. The robot was able to avoid obstacles and detect collisions.

Another industrial application using ROS was presented in [16], where a control system for a dynamic environment was developed using a vision system that allows the robot to identify the position and orientation of an object. It succeeded at a rate of 85%.

To recycle correctly, it is necessary to separate the waste. This process is currently done manually in Colombia according to [17], which goes against the world trend in this field. Usually, companies automate this process to efficiently separate the waste and mitigate the risk of contamination by other materials. Some researchers have applied machine learning techniques to recognize the type of waste from images, as in [18], where two popular learning algorithms were used: deep learning with convolutional neural networks (CNN) and support vector machines (SVM). Each algorithm creates a different classifier that separates waste into three main categories: plastic, paper and metal.

There are some applications that only sort plastic, as in [19], where plastic bottles are classified using a morphology-based approach. Morphological operations are used to describe the structure or form of an image. Another approach to waste bottle separation is [20], which shows probabilistic automated plastic bottle sorting by integrating size, colour, and distance modeling of the plastic waste.

Robots were introduced to industry to work in life-threatening and harsh conditions where humans are not able to work [21]. Nowadays, robots are also used in tasks that are not necessarily dangerous for humans in the short term, but can cause future injuries such as musculoskeletal disorders, limb disorders and others, as described in [22].

Robots have become widely used in industry to co-work with human workers and eventually create a collaborative environment. To create this relationship, mainly two types of industrial robots are used. The first type is the robotic arm, which is programmed with specific parameters and routines to carry out a defined activity. The second is the manipulator robot, which is more flexible and can work with objects that are randomly oriented. Some of these robots are able to identify object positions and perform tasks that are usually done manually [1]. Manipulator robots have become more popular in industrial automation due to the flexibility and productivity they represent for companies in industrial processes.

Other researchers working on the same robotics application have used manipulator robots specially made to perform the waste separation task. One of the most common approaches to this task is bin-picking, due to the advantages it offers, such as increasing the automation level of manual assembly stations, cost reduction and workspace optimization, as shown in [23], where the authors also note that the programming task is simplified by selecting a dual-arm robot that already has control functions for bi-manual actions.

Vision is one of the major challenges in bin-picking, because a workpiece can be obscured by another piece in the bin. Some studies have approached this problem. In [24], the authors use a Kinect RGB-D sensor and a virtual camera that generates a point cloud database for objects from their 3D CAD models, making it possible to estimate an object's position, along with a voxel grid filter that reduces the number of 3D points.

The reduction of perception uncertainty in bin-picking is currently being studied, as shown in [25], where a fine-positioning planner using a fine-motion suite has been developed to characterize and incorporate uncertainty into planning evaluation. In some cases, the use of recognition algorithms has made it possible to achieve this target, as in [26].

Usually, robots are placed separately from humans, but this can reduce the flexibility and productivity of production lines. In an investigation done by MIT researchers at a BMW factory, it was shown that teams made up of humans and robots in collaboration are more productive than humans or robots working separately on their own.

This project aims to use software that has been tested by other researchers, such as ROS-I and OpenCV, to design a framework that can be applied to the task of waste separation in Colombia. The authors also intend to show the opportunity to increase productivity and effectiveness in Colombian industries by designing a perception system using computer vision for a manipulator robot, providing it with the ability to perform a picking task in a waste separation activity as part of the recycling process. To the authors' knowledge, this approach to automating the process has not yet been analyzed in Colombia.

III. RESEARCH OBJECTIVES

To develop a framework where an industrial robot is provided with perception capabilities, using computer vision, aiming to augment its interaction skills in an industrial environment. A case study is selected (robots for waste separation tasks) and emulated with a testbed located at CTAI, under controlled conditions, to test the robot's flexibility in picking tasks by enabling it to recognize and locate objects in the scene.

a) Characterize and parametrize the application of an industrial robot for waste separation tasks, focused on the interaction of the robot with its surroundings.

b) Design, implement, and test an algorithm that provides perception capabilities to an industrial robot by processing the information acquired by a camera, which allows it to detect and locate objects.

c) Design, implement, and test an algorithm that translates the perceived information into the robot's commands, in order to pick up objects in the proposed case study.

d) Analyze the socioeconomic impact of applying the proposed technology in the selected case study.

IV. METHODOLOGY

The application of this project is emulated using elements available at CTAI (Centro Tecnologico de Automatizacion Industrial). This academic laboratory conducts research in different engineering fields such as robotics, automation and material science, among others. CTAI has the newest manipulator-robot technology available in Latin America, in addition to other elements such as sensors and cameras, which are used to achieve the objective of this research.

A. Characterization of the case study

For the characterization of the case study, different methods based on time and motion study are used, starting from time data collection, followed by data analysis and reporting, along with an ergonomics analysis; this methodology is described in depth in section VI-A.

To evaluate work conditions, ergonomic maps are used; they allow determining whether the environmental conditions are adequate, as well as the ergonomic conditions, in order to decide if workers are performing the task safely. Data were collected only in the selection area, since the project is focused there. The function of these maps is to evaluate environmental factors such as illumination, temperature and noise conditions. The measurement and control of ergonomic conditions is a must in any company, due to their impact on productivity and the illnesses that can affect employees when the work environment is not adequate.

Besides these maps, a time study was conducted at UAESP's La Alqueria in order to diagnose the way the process is currently performed, which stages are critical, and how long it takes to classify the trash in Bogota. With all the retrieved data, an emulation at CTAI was designed, taking into account the way recyclers perform the task and the different constraints present in the process.

B. Software

A disadvantage of developing robotics-based applications in industrial environments is the complexity of writing the software, since the proficiency required is beyond the skills of a single researcher. Robotics researchers have developed different frameworks that make it easier to manage the complexity of writing this software.

In this research we use ROS (Robot Operating System) [27], ROS-I (ROS-Industrial) [28] and the OpenCV library [29]. The proposed system uses the ROS framework to communicate between the different processes. ROS-I is used to handle communication with the industrial robot, and the OpenCV library is used to create the computer vision algorithm that detects, classifies and locates the plastic bottles.

When a bottle is detected and its 3D position is estimated, the robot's trajectory has to be planned. To deal with trajectory planning, MoveIt! [30] is used. The MoveIt! package allows integrating a virtual simulation of the system in RViz [31]. Figure 5 shows the MoveIt! package that comes with the Motoman driver [32], which was modified to recreate the environment of this project. Additionally, different capabilities of MoveIt! are shown in the figure.

To make information retrieval from the Kinect possible, it is necessary to use OpenNI's (Open Natural Interaction) middleware [33] along with the PrimeSense module [34] for OpenNI devices. Besides OpenNI and the PrimeSense module, it is necessary to use the openni_camera [35] and openni_launch [36] ROS packages. Since the Kinect sensor is launched from ROS, the information retrieved is in ROS Image format, but given that the algorithm is based on OpenCV, it is necessary to convert it into cv::Mat; this is achieved using the ROS package cv_bridge.
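A minimal sketch of that conversion is shown below, assuming the default openni_launch RGB topic and ROS Indigo's cv_bridge API; the node name and callback are illustrative, not the project's actual code:

#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/image_encodings.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/core/core.hpp>

// Callback: converts the incoming ROS image message into an OpenCV matrix.
void rgbCallback(const sensor_msgs::ImageConstPtr& msg)
{
  cv_bridge::CvImageConstPtr cv_ptr =
      cv_bridge::toCvShare(msg, sensor_msgs::image_encodings::BGR8);
  cv::Mat rgb = cv_ptr->image;  // now usable by the OpenCV pipeline
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "kinect_listener");
  ros::NodeHandle nh;
  // Default openni_launch RGB topic (an assumption, not taken from the project files).
  ros::Subscriber sub = nh.subscribe("/camera/rgb/image_color", 1, rgbCallback);
  ros::spin();
  return 0;
}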

Figure 6 (left) shows the real image captured by the Kinect and its projection in the RViz virtual environment, with the RGB image overlapped on the SDA10F robot CAD model. The image on the right shows the point cloud reconstruction, obtained with the Kinect's depth sensor, of the depth data and the FMS at the laboratory.

(a) Trajectory planning (b) Collision avoidance (c) Collision Detection

Figure 5: Move it! implemented with the SDA10F robot

Figure 6: RGB image and point cloud reconstruction of the scenery seen in RViz

C. Hardware

For the emulation of the system, an industrial robot available at CTAI was used: the dual-arm robot SDA10F (Figure 7(a)) from Yaskawa [37]. The robot at PUJ has sixteen degrees of freedom: seven for each arm, an axis of rotation in the torso, and an axis that translates the robot to the left or right, which is not used in this project.

The robot is controlled with Yaskawa's FS100 controller [38]. This controller (see Figure 7(b)) supports the MotoSync, MotoPlus, and ROS-Industrial software environments.

The PUJ robot has two different end-effectors from the company Robotiq [39], one on each arm. The two-finger gripper shown in Figure 7(c) is the one used in this project; it has a maximum recommended payload of 5 kg and a maximum opening of 85 mm.

For the computer vision task, a Microsoft Kinect sensor, shown in Figure 7(d), was used to capture depth and image data. The Kinect was chosen due to its advantage of recovering 2D and 3D data simultaneously.

Due to the high graphics requirements, along with the execution of many parallel processes, it was necessary to implement a distributed system.

A network based on the Ethernet TCP/IP protocol was created using a TP-LINK TL-WR1042ND router as a network hub [41].

(a) Motoman SDA10F robot [37]

(b) FS100 Yaskawa controller [38]

(c) Robotiq gripper [39] (d) Microsoft's Kinect sensor [40]

Figure 7: Robot description used for the emulation of the system

The implemented system, shown in Figure 9, is also composed of two laptops:

• A Dell Latitude with an Intel Core i5 at 2.6 GHz and 8 GB of RAM, running Ubuntu 14.04 with ROS Indigo and the following packages installed: motoman_driver (indigo-devel), ROS-Industrial (indigo-devel), MoveIt!, and robotiq.

• An HP ProBook 440 G3 with a 6th-generation Intel Core i7 at 2.2 GHz (x8), 8 GB of SDRAM and Intel Haswell Mobile graphics, running Ubuntu 14.04 with ROS Indigo and the following packages installed: ROS-Industrial (indigo-devel), MoveIt!, cv_bridge, openni_camera, and openni_launch.

V. ENGINEERING DESIGN

The proposed project aims to continue the research developed at CTAI as part of the Perception for Industrial Robots project. In this sense, this project develops a framework to bring vision capabilities to a manipulator robot. Figure 8(a) shows the designed framework, which features two modules to achieve the proposed task.

For the first module (Figure 8(b)), focused on image processing, software based on OpenCV [29] was created. The algorithm recognizes different patterns and interprets the data acquired by a Microsoft Kinect sensor, transforming it into useful information. The algorithm differentiates between three polymer classes that are currently recycled at UAESP and represent the majority of potentially recyclable materials: high density polyethylene/polystyrene, colored polyethylene terephthalate, and colored high density polyethylene. Its classification strategy is based on the way the process is currently done by operators, that is to say, it evaluates the color and geometry of the objects in the scene to perform the classification. This algorithm also performs 3D pose estimation using the Kinect's IR sensor.

The second module (Figure 8(c)) is focused on conducting picking tasks with the robot. To achieve this purpose, another ROS-based algorithm was created as part of the same framework. It uses the information from the computer vision stage and translates it into robot commands; thus the robot moves its end effector, with the proper pose, to the known position and performs the picking task. This trajectory planning algorithm includes collision avoidance capabilities, since MoveIt! is used.

The created framework is executed in a distributed system due to the highly demanding processes in execution. Figure 9 shows the distribution of the different tasks across the system. Finally, to evaluate the system, a testbed was created using the facilities provided at CTAI (cameras, computers and the SDA10F dual-arm manipulator robot); the evaluation considers both the computer vision algorithm and the pick and place one. For this purpose, a list of variables of interest was created and the retrieved data were analysed.

A. Design Process

The design process started with an investigation of how the waste classification task is currently done in Bogota, particularly the separation of plastics at UAESP La Alqueria, shown in Figure 10(a). It

(a) Designed framework based on ROS system

(b) Computer vision module (c) Pick and place module

Figure 8: The two modules of the designed framework

Figure 9: Distribution of the tasks through the distributed system

was noticed that the process is done manually and the classification is basically empirical; that is, the people in charge of the classification do not know the technical aspects of the task: they classify by color, geometry and type of content, not by type of plastic. As shown in Figure 10(b), thermoplastics classification should be performed by families of plastics; however, at UAESP it is "intuitive".

(a) Plastics separation workstation at UAESP’s La Alqueria

(b) Potential recyclable plastics classification [42]

(c) Color class (d) Green class (e) white class

Figure 10: Families of polymers and the three classes classified by the system proposed

Based on this, the algorithm uses the same characteristics (color and geometry) recognized by operators to classify three types of polymers:

• Colored-HDPE: high density polyethylene with a color additive; these plastics are opaque and colorful. See Figure 10(c).

• Green-colored-PET: polyethylene terephthalate with a color additive. See Figure 10(d).
• HDPE/PS: all white bottles. This class is composed of pure high density polyethylene and polystyrene with no colourant added. See Figure 10(e).

A fourth class is considered in the application but is not classified by the system: translucent PET bottles, which are eventually removed by the operator as the last class remaining on the worktable.

To emulate the UAESP's environment, a testbed was created at CTAI. Figure 11 shows the conditioned area at the laboratory where the system is tested. A table of 1 m x 1.20 m is used as a worktable, painted black to facilitate background elimination in the computer vision algorithm. Besides this, different illumination conditions were tested (Figure 11(b)).

Different bins were installed to separate the classified bottles; the robot picks up the bottles from the worktable and places them into the bins (Figure 11(c)). Notice that the bottles used throughout the project were selected the way recyclers do it, from a waste chute; that is to say, the bottles had their labels and caps, some were crushed, and others were completely clean and in ideal condition.

Computer Vision Task

To accomplish this task, an algorithm written in C++ using OpenCV functions was created. It consists of different classes specialized in different tasks. For more information about the code, the documentation of the created software can be found here and in section VI-B. The OpenCV version used is 2.4.13.4, compiled with OpenNI capabilities.
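As a rough illustration of that decomposition (the class and method names below are hypothetical, not taken from the project's repository), the stages could be organized along these lines:

#include <vector>
#include <opencv2/core/core.hpp>

// One detected bottle: 2D oriented box, class label and (later) 3D position.
struct Bottle {
  cv::RotatedRect box;   // centroid, size and orientation on the work-table
  int classId;           // 0: colored-HDPE, 1: HDPE/PS, 2: colored-PET
  cv::Point3f position;  // filled in by the pose estimation stage
};

// Detection and classification stage (color- and geometry-based).
class BottleDetector {
 public:
  std::vector<Bottle> detectAndClassify(const cv::Mat& rgb);
};

// Pose estimation stage (uses the Kinect depth image and camera intrinsics).
class PoseEstimator {
 public:
  void estimate(const cv::Mat& depth, std::vector<Bottle>& bottles);
};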

(a) Worktable for classification (b) Illumination tests (c) Kinect's support design

(d) Yaskawa robot (e) Emulated classification bins (f) Emulated classification area

Figure 11: Emulation of the UAESP workplace

The computer vision algorithm is divided into three stages. The first one, focused on detection, works in the HSV color space. First, it performs preprocessing steps such as Gaussian filtering and channel splitting, shown in Figure 12. The algorithm evaluates the H and V channels with thresholds defined experimentally by visual analysis of the thresholding results, and performs basic logical operations (AND, OR) over the masks.
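A compact sketch of that preprocessing, assuming OpenCV 2.4, is shown below; the concrete threshold values are placeholders, since the project tuned them experimentally:

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Returns a binary mask for one color class from a BGR frame.
cv::Mat segmentByColor(const cv::Mat& bgr)
{
  cv::Mat blurred, hsv;
  cv::GaussianBlur(bgr, blurred, cv::Size(5, 5), 0);   // noise reduction
  cv::cvtColor(blurred, hsv, cv::COLOR_BGR2HSV);       // HSV color space

  std::vector<cv::Mat> ch;
  cv::split(hsv, ch);                                  // ch[0]=H, ch[1]=S, ch[2]=V

  cv::Mat maskH, maskV, mask;
  cv::inRange(ch[0], cv::Scalar(35), cv::Scalar(85), maskH);   // example hue band
  cv::inRange(ch[2], cv::Scalar(60), cv::Scalar(255), maskV);  // reject dark pixels
  cv::bitwise_and(maskH, maskV, mask);                 // combine H and V masks
  return mask;
}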

(a) Gaussian Filtered Image (b) HSV image

(c) H Channel (d) S Channel (e) V Channel

Figure 12: Preprocessing features of the computer vision algorithm

Based on the evaluation of the color characteristics, the images are binarized. Figure 13 shows the segmentation of the objects of interest when the bottles are detected, and in Figure 14 morphological operations (erode, dilate) are performed. Last, to find edges, a gradient operation is executed; at this point a size rejection is applied to eliminate elements that are too big or too small to be considered bottles (see Figure 15).

Finally, OpenCV's rotated bounding box (RotatedRect) and image moments are used to extract the characteristics of each bottle, such as its centroid and orientation; then these characteristics are drawn on the original image to show the detections to the operator. Figure 16 shows the original image with the ellipses and centroids of the detected objects, which is the result of the detection algorithm.
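A sketch of that feature-extraction step follows, again assuming OpenCV 2.4; the area limits used for the size rejection are illustrative, and the real code additionally uses image moments and overlays the results on the original frame:

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// From a binary mask, return one oriented box (centroid + orientation) per bottle.
std::vector<cv::RotatedRect> extractBottles(const cv::Mat& mask)
{
  cv::Mat clean, edges;
  cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
  cv::erode(mask, clean, kernel);                              // remove small speckles
  cv::dilate(clean, clean, kernel);                            // restore blob size
  cv::morphologyEx(clean, edges, cv::MORPH_GRADIENT, kernel);  // edge map

  std::vector<std::vector<cv::Point> > contours;
  cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

  std::vector<cv::RotatedRect> bottles;
  for (size_t i = 0; i < contours.size(); ++i) {
    double area = cv::contourArea(contours[i]);
    if (area < 500.0 || area > 50000.0) continue;              // size rejection
    bottles.push_back(cv::minAreaRect(contours[i]));           // centroid and orientation
  }
  return bottles;
}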

(a) Colored HDPE (b) Colored PET (c) HDPE (d) Mixed

Figure 13: Binarized images based on thresholding for each bottle type

(a) Colored HDPE (b) Colored PET (c) HDPE (d) Mixed

Figure 14: Morphological operations performed over binarized images

(a) Colored HDPE (b) Colored PET (c) HDPE (d) Mixed

Figure 15: Gradient operation in order to find edges of bottles

The 3D pose estimation of each object is achieved, after the object's characteristics are extracted, using the pinhole camera model [43].

Since the intrinsic parameters are necessary to estimate a 3D point, a camera calibration based on the chessboard technique (see Figure 17(a)) was performed to estimate the camera's intrinsic parameters. Two methods were used, one provided by the openni_launch ROS package [36] and another based on OpenCV [43]. The ROS package was chosen due to its better performance.
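The back-projection implied by the pinhole model can be sketched as follows: given a pixel (u, v) and its depth Z, the metric coordinates are recovered from the intrinsics (fx, fy, cx, cy). The numeric values below are placeholders, not the project's calibration results:

#include <opencv2/core/core.hpp>

// Back-projects an image point with known depth into the camera frame.
cv::Point3f backProject(float u, float v, float depth_m)
{
  const float fx = 525.0f, fy = 525.0f;   // focal lengths in pixels (placeholder)
  const float cx = 319.5f, cy = 239.5f;   // principal point (placeholder)

  cv::Point3f p;
  p.z = depth_m;                  // depth read from the Kinect IR sensor
  p.x = (u - cx) * depth_m / fx;  // lateral offset
  p.y = (v - cy) * depth_m / fy;  // vertical offset
  return p;                       // expressed in the camera frame; a further transform
                                  // maps it to the robot base frame
}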

It is important to highlight that there is a zone of the RGB image where depth data is not available, due to the disparity typical of stereo cameras. In Figure 17(b), the red stripe around the image represents the area

(a) Colored HPDE (b) Colored PET

(c) HDPE (d) Mixed

Figure 16: Original images with targets defined. Bottles are enclosed in color rectangles

where depth data is not available; hence the system cannot estimate a 3D pose there. The 3D pose estimation technique is explained here, along with the results of the Kinect's calibration.

(a) Chessboard calibration (b) RGB image with no depth data available in red

Figure 17: Camera calibration and unavailability of depth data due to disparity

Pick and Place task

For the second pillar of the framework, the process of designing a solution to the pick and place task starts with the vendor-specific package (Motoman). It is used to allow communication with industrial

(a) Robot execution (b) Virtual environment execution (mirror)

Figure 18: Emulated versus simulated environment

robot controllers. With this package, communication with the robot can be achieved by setting up the robot's correct IP address.

In order to manage ROS data integration between both tasks, a package called "vision" was created. This package manages the perception module of the designed framework and enables communication with the pick and place module. The pick and place module uses this information, published to the system, so that the robot sets it as its goal position, executes the task, and later asks for a new pose.
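A rough sketch of that hand-off on the pick and place side is shown below, assuming ROS Indigo's MoveGroup interface; the topic name "/vision/bottle_pose" and the planning group "arm_left" are hypothetical placeholders, not the project's actual identifiers:

#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>
#include <moveit/move_group_interface/move_group.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "pick_and_place");
  ros::NodeHandle nh;
  ros::AsyncSpinner spinner(1);   // MoveIt! needs a running spinner
  spinner.start();

  moveit::planning_interface::MoveGroup arm("arm_left");  // planning group (assumed)

  while (ros::ok()) {
    // Block until the perception module publishes the next bottle pose.
    geometry_msgs::PoseStampedConstPtr target =
        ros::topic::waitForMessage<geometry_msgs::PoseStamped>("/vision/bottle_pose", nh);
    if (!target) continue;

    arm.setPoseTarget(*target);   // use the estimated pose as the goal
    if (arm.move())               // plan and execute; collision checking is handled by MoveIt!
      ROS_INFO("Goal reached; gripper actuation and the move to the bin follow.");
  }
  return 0;
}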

In parallel, the simulated environment runs in RViz2, where the CAD robot executes the same task that the real one is executing (Figure 18). The ROS tool rqt_graph allows easy visualization of nodes, topics and packages, and the communication established between them.

At this point it is necessary to mention these facts:
• MoveIt! comes with a simulator based on RViz that lets you plan the robot's movements without requiring the real hardware.
• All the motions that occur in the simulator are the ones the real hardware will execute.
• A previous configuration must first be done in the FS100 controller in order to enable the ROS functionalities; there is a special JOB created in the robot to be executed in remote mode. Detailed information is found in a previously written paper available at [14].

B. Performance Requirements

The framework fulfills these requirements:
• The proposed computer vision algorithm identifies and locates an object of interest present in the scene by color and/or geometry.
• The proposed computer vision algorithm for detection and location works in real time (>5 fps, frames per second).
• The robot uses the information recovered by the computer vision algorithm to decide whether to pick up the object being perceived.
• The camera's resolution is >640x480 pixels to perform the pattern recognition task.
• The robot picks up the detected object and places it inside a bin.
• The developed framework is modular, in order to be used in the future to add new functionalities or improve the existing ones.

2 It is a 3D visualization tool for ROS.

(a) Ground truth images with quadrants

(b) Major-axis rotation

Figure 19: Test protocol evaluations

• The system is based on open source software.
• An emergency stop is available at any time during execution.
• The environment can be simulated at any time using only software (no robot connection needed).

C. Performance Tests

To measure the framework's performance there are two approaches. The first is focused on the computer vision algorithm: it consists of creating a database of images of the objects that will appear in the scene; 50 images were used to adjust the algorithm and 150 images are used to evaluate the algorithm's classification ability. This is carried out with a "Ground Truth", which consists of using the created database to manually tag the images through a Google Forms form. In the form, the user was asked about the number of bottles in the image and their type, orientation and position; then the computer vision algorithm was executed and the data was later analysed. See Figure 19.

Then a comparison is made between these tags and the data retrieved by the computer vision algorithm. Later, confusion matrices are used to analyse its performance. Besides this, a pose estimation evaluation is carried out.
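As an illustration of how a mean accuracy is obtained from such a confusion matrix (correct classifications on the diagonal divided by all ground-truth samples), the following toy example uses arbitrary counts, not the project's data:

#include <iostream>

int main()
{
  const int k = 3;                     // colored-HDPE, HDPE/PS, colored-PET
  int cm[k][k] = { {40, 3, 2},         // rows: ground truth, columns: prediction
                   {4, 42, 4},
                   {1, 6, 48} };
  int correct = 0, total = 0;
  for (int i = 0; i < k; ++i)
    for (int j = 0; j < k; ++j) {
      total += cm[i][j];
      if (i == j) correct += cm[i][j];   // diagonal entries are correct classifications
    }
  std::cout << "Mean accuracy: " << 100.0 * correct / total << "%" << std::endl;
  return 0;
}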

For the pick and place task, an execution test was conducted. It consisted of evaluating the ability of the system to locate and separate twelve plastic bottles (four of each type), shown in Figure 20(a). It considered the time required to complete the task, the number of attempts and the software execution rates, taking into account which errors were present and their frequency. The evaluation was performed ten times to reduce uncertainty.

Figures 20(b) and (c) show the robot in execution mode during the test. Note that this test was focused on the robot's ability to successfully pick up the twelve different bottles and place them in the right bin.

D. Design Limitations

These are the most relevant limitations of the proposed framework:
• This project is designed and developed at CTAI; therefore, the project is restricted to the elements available there.
• The code is programmed using open source software.
• The availability of elements to develop the project is limited by their use by other students and teachers.
• The computer vision uses 2D vision technology, extracting the Kinect's depth data to perform the 3D pose estimation.
• The proposed technology takes into account all the restrictions derived from the parameterization of the case study.

(a) Picking up evaluation with sample bottles

(b) Approach in Z-axis (c) Picking up

Figure 20: Picking test with SDA10F robot

• The motion of the robot is restricted by the specifications of the SDA10F dual-arm robot, including maximum range, load volume, maximum load, velocity, torque, etc. [37].
• The movements of the robot are bounded by the workspace available at CTAI.
• In case the project requires licensed software, it will be used only if it is provided by PUJ.
• The computer vision capabilities are restricted to the specifications of the camera.
• The camera's emplacement is fixed; thus the visibility is limited to one plane.
• The precision of the robot's trajectory depends on the trajectory planning algorithm used.
• The picking capability is constrained by the Robotiq gripper's specifications [39].
• The waste separation task does not contemplate bio-hazardous, chemical or radioactive wastes.
• The robot is not able to perform a coordinated task (moving both arms at the same time) with MotoROS.
• The MotoROS application is not compatible with the Human Collaborative features [32].
• The maximum RGB resolution is 1280x1024 (at 15 Hz) and the depth image resolution is 640x480 (at 30 Hz).
• The robot's speed is defined by MoveIt!
• Translucent objects are not detected by the system.
• The algorithm has no learning capabilities, thus it cannot recognize unknown objects.
• At the moment, human supervision is required to execute the system.
• Some errors are beyond the limitations of the framework.
• The computer vision algorithm is limited to the CTAI illumination conditions.
• The computer vision algorithm executes at ≈ 5.073 FPS.
• The computer vision success rate is over 80%.
• The global system success rate is over 60%.

E. Good Practices (Regulations & Standards)

When a new technology, service or product is designed, it is important to take into account the existing regulations, which can be regional, national or worldwide. In this specific case, where a technological framework is developed using robots, the following standards apply:

• ISO 10218-1:2011: specifies requirements and guidelines for the inherently safe design, protective measures, and information for the use of industrial robots. It describes basic hazards associated with robots and provides requirements to eliminate, or adequately reduce, the risks associated with these hazards [44]. This standard distinguishes between four types of human-machine interaction:
† No collaboration: there is no human-robot contact.
† Operation with no barriers: there are no barriers in the workstations.
† Sporadic interaction: workers sporadically enter the robot's workspace during the production process.
† Collaboration: the worker and the robot work together in the same workstation, sharing the workspace.
This standard also discusses the safety functions of industrial robots in collaborative operation:
† Safety-rated monitored stop: to reduce risk by ensuring robot standstill whenever a worker is in the collaborative workspace.
† Hand guiding: to reduce risk by providing the worker with direct control over robot motion at all times in the collaborative workspace.
† Speed and separation monitoring: to reduce risk by maintaining sufficient distance between worker and robot in the collaborative workspace.
† Power and force limiting by control: to reduce risk by limiting the mechanical loading of human body parts by moving parts of the robot.
The biggest obstacle to achieving this standard is the space limitation at the laboratory: although the technology is available at CTAI, there is no space to install it, and it is not part of the scope of this project; however, the robot operates with no barriers as defined in the standard, and its speed can be limited. As defined in the standard, at the moment the framework does not support collaboration capabilities.

• ISO 13850: Emergency stop. It is available at any time during execution.
• ISO 10218-2:2011: specifies safety requirements for the integration of industrial robots and industrial robot systems as defined in ISO 10218-1, and industrial robot cell(s). It describes the basic hazards and hazardous situations identified with these systems, and provides requirements to eliminate or adequately reduce the risks associated with these hazards. It also specifies requirements for the industrial robot system as part of an integrated manufacturing system [45].

• ISO 13857: Safety distances. Not achieved due to laboratory space limitations.
• ISO 13849: Safety standard which deals with safety-related design principles of the employed control systems to establish different safety Performance Levels.
• ISO 23570-1:2005: Industrial automation systems and integration – Distributed installation in industrial applications – Part 1: Sensors and actuators.
It is necessary to mention that the technology needed to achieve all the safety standards is currently available at CTAI; hence, meeting them is proposed as future work.

VI. RESULTS

A. First Objective Results

“Characterize and parametrize the application of an industrial robot for waste separation tasks, focused on the interaction of the robot with its surroundings”

This project takes place in Bogota, where recycling is managed by the local government through the UAESP (Unidad Administrativa Especial de Servicios Publicos), which has six recycling centers around the city: Puente Aranda, Engativa, Teusaquillo, Usaquen, Usme and La Alqueria. The case study takes place at the La Alqueria center since it is the only recycling center owned by the UAESP.

The process of Recycling Nowadays in Bogota

The first stage of the process is the collection of the potentially recyclable materials; this stage is normally done by recyclers, who pick up this material from the waste chutes house by house. Even though people in Bogota are more aware of the importance of waste classification, the recyclers still have to pre-select it, which means they have to separate the recyclable material from the organic waste. To fulfill this task, the recyclers' associations have routes which they cover on a selected day of the week, according to the schedule of the garbage collection service.

Currently, there is an initiative that intends to pick up the potentially recyclable materials around the city using the trucks of the city's garbage collection company. For now, the process is done by recyclers in Bogota, who are usually part of the city's vulnerable population and in many cases live in extreme poverty. By 2012 there were 13,771 registered recyclers in Bogota [46].

The second stage of the recycling process starts when the potentially recyclable materials arrive at the recycling center, as shown in Figures 21(b) and (d); the truck with the collected material is weighed and the materials are set out to be classified into different types. Some of the materials arrive at the center already organized by type.

(a) UAESP’s recycling center La Alqueria

(b) Waste arrival (c) Selection task (d) Waste weighing

Figure 21: La Alqueria recycling center

The third step is classification. This process has two parts, the pre-selection and selection tasks; this part of the process is currently done manually by the same person who previously collected the waste in the first stage. At the La Alqueria recycling center, the following main categories are recycled:

• Archive: paper generally used in offices, photocopies and prints.
• Plastic: PET (polyethylene terephthalate), HDPE (high density polyethylene), PS (polystyrene), PP (polypropylene).
• Scrap: all metal and cables.
• Aluminium.
• Newspaper.
• Paperboard.
• Glass.

Once the materials are correctly classified and some are discarded, they are baled and stored. After this stage, the baled waste is sold to private industries, which are in charge of waste transformation to reintroduce the raw materials into the supply chain. Figure 22 shows the five main stages of the recycling process currently done at UAESP's La Alqueria center. Since this project is focused on only one particular task, the case study takes place in the selection and classification step, because it is time demanding, workers are exposed to contaminated materials for long periods of time, and repetitive tasks are performed, which can affect the workers' health. See how the process is currently done here. The diagram of the documented recycling process is found here.

Figure 22: Recycling process currently done in Bogota at La Alqueria Center

In the classification stage, materials are classified according to different parameters; for instance, scrap needs to be classified by type, such as copper, iron, etc. Plastics are sorted by type: HDPE, which is normally white containers like milk bottles and whitish bottles; colored-HDPE, containers of different colors such as shampoo bottles, detergent bottles, etc.; and PET, which is divided into four further types by color: green, amber, transparent, and transparent bottles for cooking oil. The last one cannot be mixed with the other classes because its contents would affect the reuse of the material.

Work Conditions (Environment & Ergonomics)

Once the process was identified and the classification task was selected as the core stage, we proceeded to evaluate the work conditions of the workers who perform this task; this was done in order to characterize and parameterize the environment so as to emulate it at CTAI.

• Illumination Conditions
To collect the information, 16 points were established around the selection area. A light meter (lux meter) was used to measure light intensity.

To analyse the data, the recommended lighting levels of the International Commission on Illumination (CIE) are used as reference [47].

Based on the retrieved data found in Table I, it was determined that the recommended illumination level for the selection area is 500 lux. Even though the selection area has a part where the illumination level is under 500 lux, this section is only used for the air compressors; therefore, the illumination levels in the selection area are adequate. It is important to highlight that the recycling center does not have artificial lighting, so the employees can only work during the daytime.

Illumination levels in luxes

Average 640.375
Deviation 338.145

Table I: Retrieved Illumination Data

• Temperature Conditions
To measure the temperature conditions, a thermal-stress meter was used; the measurements were taken at feet and head heights. The retrieved data is shown in Table II. A range of 16°C to 22°C is considered the ideal temperature for a work environment according to health and safety guidelines; analysing the information, the thermal comfort in the selection area is appropriate. It is important to notice that this condition depends on the local weather, because the recycling center is a hangar. The weather in Bogota is normally ideal for physical labour, but on some occasions it can reach 26°C, so the center should have an alternative way to control the temperature inside the hangar.

• Noise Conditions
To determine the noise conditions, the data retrieved in Table III is compared with the recommended levels [48]. The measurements were taken close to the sound source and at some other random points at different hours of the workday. The calculation of the noise dose considers the time of exposure and the sound level. Using the given information, the noise levels in the center are acceptable according to the recommended levels and the time of noise exposure.

Temperature levels in °C

Average 18.08
Deviation 0.51

Table II: Retrieved Temperature Data

Noise levels in dBA

Average 67.78
Deviation 4.64

Table III: Retrieved Noise Data

For a better understanding of this data, each item has its own ergonomic map. The map of illumination conditions can be found here, temperature conditions here, and noise conditions here.

• Time Measurement
The standard time (ST) is the time required for an operator to perform, at a normal rate, the activities related to his or her work. This calculation takes into account the observed time (OB), ergonomic supplements, contingencies (time contributed by inefficient methods in production) and the time contributed by the human resources, known as base time (BT).

ST = BT + BT × (% ergonomic supplement) + BT × (% contingencies)    (1)

• Observed Time
It is the minimum amount of time necessary to produce one unit; to provide a statistically significant value for each worker, the measurement was performed ten times.
• Appreciation
It is the percentage that measures the rhythm or cadence with which a person works; it is graded according to the perception of co-workers, the boss, or a consultant, as seen in Table IV.

Productivity rate perception Quantitative appreciation

Fast > 100%
Normal = 100%

Slow < 100%

Table IV: Appreciation according to speed of operator

Table V shows the mean appreciation, as perceived by the boss, co-workers and consultants, of each worker who classifies waste at UAESP's La Alqueria.

Operator ID Name Appreciation (%)

1 Carolina 120
2 Alberto 100
3 Yuranni 120
4 Tatiana 100
5 Jasbleidy 90

Table V: Consolidated appreciation of workers at UAESP Alqueria

With this appreciation and the observed time for each operator, the base time is calculated using the following formula:

BT = Observed time × appreciation%    (2)

• Supplements
The supplements affect the standard time of an activity or process and are the result of a combination of two factors. The first one is a constant, which involves the workers' constant tolerance (5%) and a basic allowance for fatigue (4%). The second factor is variable, and can change due to the environmental conditions at the workplace.
Using the International Labour Organization's tables found in [49], the selection process is analyzed, and the variable supplements are shown in Table VI.

Task: Selection
Variable characteristics: Standing up; Fine work; Moderate monotony; Weight lifting (7.5 kg)
Total: 7%

Table VI: Variable percentage of supplements due to the task characteristics

The contingencies or unforeseen events of a task are influenced by two factors. The first factor is the constant contingencies, standardized as 7%. The second factor is the variable contingencies, which can be observed in the red and green points analysis. With these two factors, it is possible to calculate the total contingencies of each activity or process.

As mentioned above, it is necessary to perform the red and green points analysis to find the variable contingencies. This analysis is done in Table VII and consists in discriminating the values that fall outside the control limits. Once these atypical data are found, they are eliminated in order to decrease the number of samples, leaving the most representative ones. Measures that are above the average are considered red dots, and those below the average are considered green dots. The process is done for all operators and can be consulted here.

(a) Times Measuring

(b) Delta analysis

Table VII: Red and Green analysis to estimate variable contingencies done for each worker.

With the data retrieved from the case of study, the standard time of this task can be inferred for each worker, as seen in Table VIII. The global standard time is defined as 1.91 minutes of processing time. With the defined standard time, the rate at which the robot should operate in order to maintain the production rate is identified.
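As an illustration of how formulas (1) and (2) combine, the following minimal Python sketch computes a standard time from an observed time, an appreciation factor, and the supplement and contingency percentages; the function names and numbers are placeholders for illustration, not the measured values of Table VIII.

def base_time(observed_time_min, appreciation):
    # Formula (2): base time = observed time * appreciation (e.g. 1.2 for 120%)
    return observed_time_min * appreciation

def standard_time(bt_min, ergonomic_supplement, contingencies):
    # Formula (1): ST = BT + BT * supplements + BT * contingencies
    return bt_min * (1.0 + ergonomic_supplement + contingencies)

# Placeholder example: observed time of 1.4 min at a normal rhythm (100%),
# with 16% ergonomic supplements and 10% contingencies.
bt = base_time(1.4, 1.00)
print(round(standard_time(bt, 0.16, 0.10), 3))  # 1.764 min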

With the characterization of the case of study, it is possible to define the principal characteristics that must be evaluated at CTAI in order to perform an accurate emulation of the task. First, it is defined that the designed application will focus on plastics sorting, particularly on three types of polymers (PET, HDPE, and PS). With the materials defined, the workstation was recreated using a Kinect sensor, a robot, and classification bins. The emulated environment is shown in Figure 23.

Operator     Observed Time (min)    Appreciation    Base Time (min)    Ergonomic Supplement    Contingencies    Standard Time (min)

Carolina     1.29    120%    1.548    16%    15%    1.6899
Alberto      1.51    100%    1.51     16%    10%    1.9026
Yuranni      1.41    120%    1.692    16%    15%    1.8471
Tatiana      1.3     100%    1.3      16%    15%    1.703
Jasbleidy    1.98     90%    1.782    16%     7%    2.4354

Average      1.9156

Table VIII: Average standard time of the recycling process

Figure 23: Emulation of the case of study at CTAI, focused on the waste separation task

B. Second Objective Results

“Design, implement, and test an algorithm that provides perception capabilities to an industrial robot by processing the information acquired by a camera, which allows it to detect and locate objects.”

To achieve this objective, a computer vision program was created taking into account two main tasks: the detection task, focused on finding and classifying bottles in an image, and the pose estimation task, focused on locating the bottles in a 3D coordinate system after they are classified in the first stage. Figure 24 shows the sequential diagram of the algorithm.

(a) Detection algorithm

(b) Pose estimation algorithm

Figure 24: Pseudo-algorithm of computer vision program
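The detection stage relies on OpenCV, HSV color segmentation, and morphological masks, as described above and in the results that follow. The code below is a minimal sketch of that kind of pipeline, not the project's exact implementation: the HSV ranges, kernel size, minimum area, and file name are illustrative assumptions that would have to be calibrated.

import cv2
import numpy as np

def detect_bottles(bgr_image, hsv_lower, hsv_upper, min_area=500):
    # Segment one color class in HSV, clean the mask with morphology,
    # and return the bounding box and centroid of each detected blob.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lower), np.array(hsv_upper))
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # [-2] keeps compatibility with the different return signatures of findContours
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    detections = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue  # discard small blobs such as caps or label fragments
        x, y, w, h = cv2.boundingRect(c)
        detections.append(((x, y, w, h), (x + w // 2, y + h // 2)))
    return detections

# Illustrative HSV range for the colored-PET (green) class; real ranges must be tuned.
frame = cv2.imread("frame.png")
green_bottles = detect_bottles(frame, (40, 60, 60), (85, 255, 255))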

• Detection and classification test

The computer vision algorithm was tested with a database of 150 images; however, it was noticed that nearly 20% of the images present variations of color and light intensity (under the same light conditions). In Figure 25(a) there is a variation in intensity and the image has more noise; in Figure 25(b) there is a color variation that affects the performance of the algorithm. Due to this situation, some images were removed from the database.

(a) Intensity Variation

(b) Colour Variation

Figure 25: Variation on retrieved images at the same light conditions

With the data collected from the form mentioned in section V-C, a “Ground Truth” evaluation was conducted in order to analyze the behavior of the algorithm on the reference images taken. Using confusion matrices, the data acquired by the algorithm is compared with the data acquired manually; the evaluation was executed in an Excel file where a dedicated macro was programmed.

To perform the ground truth, 372 bottles were selected as a representative sample; these bottles were collected the same way recyclers do, directly from a waste chute. The sample features only the three classes the algorithm is going to classify.

In the image database, the colored-HDPE class was tested with 214 color bottles out of 372. As shown in Table IX, 191 bottles were correctly classified as true positives and 119 as true negatives, while 39 bottles were false positives and 23 were false negatives.

             COLOR    NOT COLOR

COLOR          191         39
NOT COLOR       23        119

Accuracy: 83.33%

Table IX: Colored HDPE confusion matrix
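The per-class accuracies of Tables IX–XI follow directly from the confusion matrices: accuracy is the fraction of correctly classified samples. A minimal Python check, using the values of Table IX, is:

def binary_accuracy(tp, fp, fn, tn):
    # accuracy = (true positives + true negatives) / total samples
    return (tp + tn) / float(tp + fp + fn + tn)

# Colored-HDPE class (Table IX): 191 TP, 39 FP, 23 FN, 119 TN over 372 bottles
print(round(100 * binary_accuracy(191, 39, 23, 119), 2))  # 83.33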

For the colored-PET bottles (green class), the same analysis was performed with 36 green bottles out of 372, with 25 true positives and 336 true negatives. Table X also shows that 11 bottles were classified as false negatives.

             GREEN    NOT GREEN

GREEN           25          0
NOT GREEN       11        336

Accuracy: 97.04%

Table X: Colored-PET confusion matrix

Finally, for the HDPE/PS group, a test with 122 bottles out of 372 was performed, with 94 true positives, 227 true negatives, and 51 misclassified bottles: 23 false positives and 28 false negatives. See Table XI.

             WHITE    NOT WHITE

WHITE           94         23
NOT WHITE       28        227

Accuracy: 86.29%

Table XI: HDPE/PS confusion matrix

With this analysis, it is possible to say that the detection algorithm's global accuracy is 86.63%, with a colored-HDPE (Color) accuracy of 83.33%, a colored-PET (Green) accuracy of 97.04%, and an HDPE/PS (White) accuracy of 86.29%. However, to reduce overestimation of the system performance, another accuracy estimation was done considering images with mixed bottles; the global accuracy under these conditions is 83.33%. Table XII shows the result of the classification test using average accuracy as the evaluation rate.

          COLOR    GREEN    WHITE

COLOR       191       11       28
GREEN         0       25        0
WHITE        23        0       94

Accuracy: 83.33%

Table XII: Global confusion matrix of the results

It is noticed that labels are the first cause of confusion (Figure 26(b)): colourful labels hinder the algorithm's correct classification, and if the label is large enough it can be considered a bottle, triggering a bad pose estimation. Figure 26(a) shows another problem: caps. Colourful caps can be detected as color bottles when they actually belong to white bottles. Green bottles are translucent, a characteristic that hinders their detection; hence, green bottle detection is greatly affected by the illumination conditions. Besides, these bottles reflect light and alter the way other bottles are perceived by the sensor (see Figure 26(c)). To reduce the impact of these conditions, different morphological operations were performed, creating different masks.

Another thing noticed during the algorithm's execution is that the sensor varies its data perception and images become brighter even when the light conditions are the same, perhaps because the sensor is on standby for long periods. Figure 27(a) shows the different intensity levels that appear in some images. Another relation noticed involves translucent objects: as Figure 27(b) shows, the sensor becomes noisy and the image becomes brighter when translucent objects are in the scene.

Figure 28 shows a perfect match of the classification algorithm executed under the appropriate illumination conditions; note that even if there are translucent objects in the scene, the image is analysed correctly. Colored-HDPE bottles are enclosed by orange rectangles, colored-PET bottles are enclosed by green rectangles, and the HDPE/PS bottles are enclosed by white rectangles. Their centroids are shown with cyan, blue, and green points, respectively.

(a) Cap detection error (b) Label detection error, which causes an orientation error

(c) Mismatch class detection

Figure 26: Usual problems in Computer vision algorithm

(a) Sensor's brightness issue (b) Reflection and brightness issue

Figure 27: Sensor’s problems

Figure 28: Correct classification by the algorithm

Finally, it is necessary to mention that if many bottles, especially translucent ones, are in the image, the brightness increases and the classification becomes more difficult for the algorithm because the threshold is not adaptive; however, as the robot picks up bottles, the image returns to its normal levels of brightness.

• Pose Estimation Test
To evaluate the pose estimation, a single test was performed with a reference object, as seen in Figure 29(a). Predefined poses were compared with estimated ones. Figure 29(b) shows the comparison between the real data (in blue) and the estimated data (in orange). It is noticed that the system is more accurate when the object is closer to the image origin, and that it presents distortion in depth perception as the point approaches the rim of the depth image. Table XIII shows that the system has a resolution of 0.95 mm in pose estimation, with a maximum variability of 10.0 mm in regions near the rim of the depth data. The system has a mean estimation error of 3.7468 mm in the X-axis and 2.132 mm in the Y-axis. The Z-axis error varies according to the material present in the scene: with translucent objects the data varies by nearly 15 mm, and with opaque ones by around 5 mm, sometimes causing errors when the system performs the pick and place task. This is described in section VI-C.

             X-axis       Y-axis

Minimum      0.141 mm     0.947 mm
Maximum      10.013 mm    7.124 mm
Mean         3.747 mm     2.1325 mm
Deviation    4.180        1.940

RMSE: 5.2316

Table XIII: Deviation results between real pose and estimated pose

(a) Nine points considered to perform the pose estimation evaluation

(b) Error comparison of the distances from the origin to the nine points

Figure 29: Pose estimation test based on nine points located at the worktable

It is necessary to highlight that this pose estimation error is tolerable for the application: the computer vision algorithm's mean error is around 3 mm. Taking into account that the extrinsic parameters of the camera induce error in the measurement and that the support structure where the camera was mounted was not completely fixed, this error was expected, and the system was adjusted in software.
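The pose estimation stage is based on the Kinect depth image. As a hedged sketch of the underlying geometry (not the framework's exact code), a pixel (u, v) with metric depth d is back-projected with the camera intrinsics and then transformed into the robot base frame; the intrinsic values and the camera-to-base transform below are placeholders that must come from calibration.

import numpy as np

# Placeholder intrinsics (fx, fy, cx, cy); the real values come from camera calibration
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def pixel_to_camera(u, v, depth_m):
    # Back-project an image pixel with metric depth into the camera frame
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m, 1.0])

def camera_to_base(point_cam_h, T_base_cam):
    # Transform a homogeneous camera-frame point into the robot base frame
    return (T_base_cam @ point_cam_h)[:3]

# Placeholder extrinsic transform (4x4 homogeneous matrix from calibration)
T_base_cam = np.eye(4)
p_base = camera_to_base(pixel_to_camera(320, 240, 0.85), T_base_cam)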

C. Third Objective Results

“Design, implement, and test an algorithm that translates the perceived information into the robot's commands, in order to pick up objects in the proposed case of study.”

The designed solution consists in a framework created over a distributed system, as seen in section V-A. The proposed framework features three main modules. The first one is the perception module, oriented to the detection and location of objects in the scene, that is to say, the computer vision algorithm. The second one is the pick and place module, which is oriented to managing data between the ROS master and the robot's controller in order to execute the task according to the information retrieved by the first module. The pick and place module uses the motoman and MoveIt! packages together with the MotoROS driver to achieve the task. The last module is the command user interface. Figure 30 shows the proposed framework and the functionality of each module.

Figure 30: Designed framework, describing the two principal modules integrated
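The pick and place module is built on MoveIt!, the motoman packages, and the MotoROS driver. The following minimal Python sketch shows how a pose produced by the perception module could be sent to one of the SDA10F arms through the MoveIt! commander interface; the planning group name, reference frame, and approach offset are assumptions for illustration, not the framework's actual parameters.

import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

rospy.init_node("pick_and_place_sketch")
moveit_commander.roscpp_initialize([])
# "arm_left" is an assumed planning group name; the real groups are defined in the SRDF
arm = moveit_commander.MoveGroupCommander("arm_left")

def pick(target_xyz, approach_offset=0.10):
    # Move above the detected bottle, then descend along Z to grasp it
    pose = PoseStamped()
    pose.header.frame_id = "base_link"  # assumed reference frame
    pose.pose.position.x, pose.pose.position.y = target_xyz[0], target_xyz[1]
    pose.pose.position.z = target_xyz[2] + approach_offset
    pose.pose.orientation.w = 1.0
    arm.set_pose_target(pose)
    if not arm.go(wait=True):
        rospy.logwarn("Unreachable destination: trajectory planning failed")
        return False
    pose.pose.position.z = target_xyz[2]  # final Z-axis approach
    arm.set_pose_target(pose)
    return arm.go(wait=True)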

To evaluate the system, a simulation and an emulation environment were created in order to execute the task virtually and/or with real hardware. Figure 31 shows the comparison between both environments; a video of the comparison is also available here. Notice that the movements in the virtual environment are the same ones executed by the robot, so one can preview the system's behavior before executing the real task.

(a) Emulated environment with work-table, bottles, bins and robots

(b) Simulated environment with Kinect's 3D data reconstruction

Figure 31: Recreation of the waste separation task in the testbed at CTAI

During the test described in section V-C, some recurrent errors were found that affect the performance of the system; part of the test consists in analyzing their frequency and observing what happens to the system when one appears. At the end of the test, the errors and faults were grouped and defined in different tables according to the module of the framework they belong to. Tables XIV, XV, and XVI summarize them.

• Error: When the system produces an incorrect or unexpected result, or behaves in unintended ways.

• Fault: An interruption caused by exceptional conditions

Error: Fatal error
Trigger: The robot is on an imminent collision trajectory or has already collided.
Result: Manual system abort.
Solution: System calibration of the environment location.

Error: Unreachable destination error
Trigger: Objects in the scene are out of the robot's workspace range, so the trajectory planning is not executed.
Result: System abort is automatically triggered.
Solution: Relocate the object in the scene.

Error: Z-axis approximation error
Trigger: After a first movement, it is impossible for the robot to perform the final approach to the object along the Z-axis due to the joints' position.
Result: The system stops unexpectedly.
Solution: Execute another trajectory for the same position.

Table XIV: Errors caused by ROS or robot

Error: Pose estimation error
Trigger: The computer vision algorithm performs a mismatched pose estimation, usually of the orientation, so the detected object is not actually in that pose.
Result: Correct execution, bad performance.

Error: Classification error
Trigger: The computer vision algorithm performs an incorrect classification, for instance, an object A detected as B.
Result: Correct execution, wrong separation.

Error: Detection error
Trigger: An object is detected where there is actually no object, for instance, the system estimates the pose of an object A where there is actually nothing.
Result: Correct execution, bad performance.

Table XV: Errors caused by the computer vision algorithm or the Kinect's shutdowns

Fault: Pick-up fault
Trigger: Due to the geometry of the object, the gripper cannot handle it, so it is unable to pick it up.

Fault: Core fault
Trigger: The system is overloaded with processes in execution. All the processes stop and the system has to be relaunched.

Fault: Gripper malfunction fault
Trigger: The gripper's controller unexpectedly goes offline. It is necessary to relaunch the gripper's controller.

Fault: Idle robot fault
Trigger: For no apparent reason, the robot does not execute any task even when the servos are turned on. It is necessary to restart the whole system.

Fault: Z-axis estimation mismatch
Trigger: The Kinect retrieves incorrect depth data of the detected object.

Table XVI: Principal faults presented in the system during the test
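Several of the faults in Table XVI were handled operationally by relaunching nodes or retrying the motion. The sketch below, which reuses the illustrative pick() routine from the earlier sketch in this section, shows one hedged way such recovery could be wrapped around the task; the retry limit is an assumption.

def pick_with_retries(target_xyz, max_attempts=3):
    # Retry the pick when planning or execution fails, as observed during the tests
    for attempt in range(1, max_attempts + 1):
        try:
            if pick(target_xyz):
                return True
            rospy.logwarn("Pick attempt %d failed, retrying", attempt)
        except Exception as exc:  # e.g. the gripper controller going offline
            rospy.logerr("Fault during attempt %d: %s", attempt, exc)
    return False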

The results of the test protocol are shown in Table XVII; they are confusion matrices that evaluate the classification performed by the robot. For each test of twelve bottles, a matrix was generated and three metrics were estimated.

The first metric is sensitivity, defined as the true positive rate of each class, that is, the fraction of the items of a class that are correctly classified. The second metric is precision, defined as the fraction of the items assigned to a class that actually belong to it. The last metric is the accuracy of the classifier, defined as the global true positives over all elements. In this part of the test, the least successful classification reached 75% accuracy and the best 100%, with an average accuracy of 85%.

Regarding precision, it is evident that green bottles have a rate of 100%, which implies that no other class is classified as green; however, it does not imply that green bottles cannot be misclassified. White bottles have a precision rate of 98%, while color bottles have a rate of 72.76%, meaning that some bottles are wrongly classified as color.

Test 1         C    G    W     Precision
C              4    0    1     0.80
G              0    4    0     1.00
W              0    0    3     1.00
Sensitivity    1    1    0.75          Accuracy: 92%

Test 2         C    G    W     Precision
C              4    0    2     0.67
G              0    4    0     1.00
W              0    0    2     1.00
Sensitivity    1    1    0.50          Accuracy: 83%

Test 3         C    G    W     Precision
C              4    1    2     0.57
G              0    3    0     1.00
W              0    0    2     1.00
Sensitivity    1    0.75    0.50       Accuracy: 75%

Test 4         C    G    W     Precision
C              4    1    1     0.67
G              0    3    0     1.00
W              0    0    3     1.00
Sensitivity    1    0.75    0.75       Accuracy: 83%

Test 5         C    G    W     Precision
C              4    1    1     0.67
G              0    3    0     1.00
W              0    0    3     1.00
Sensitivity    1    0.75    0.75       Accuracy: 83%

Test 6         C    G    W     Precision
C              4    1    1     0.67
G              0    3    0     1.00
W              0    0    3     1.00
Sensitivity    1    0.75    0.75       Accuracy: 83%

Test 7         C    G    W     Precision
C              4    0    2     0.67
G              0    4    0     1.00
W              0    0    2     1.00
Sensitivity    1    1    0.50          Accuracy: 83%

Test 8         C    G    W     Precision
C              4    0    3     0.57
G              0    4    0     1.00
W              0    0    1     1.00
Sensitivity    1    1    0.25          Accuracy: 75%

Test 9         C    G    W     Precision
C              4    0    0     1.00
G              0    4    0     1.00
W              0    0    4     1.00
Sensitivity    1    1    1             Accuracy: 100%

Test 10        C    G    W     Precision
C              3    0    0     1.00
G              0    4    0     1.00
W              1    0    4     0.80
Sensitivity    0.75    1    1          Accuracy: 92%

Table XVII: Confusion Matrices for each test for the three defined classes

Considering the true positive rate of white bottles (67.5%), the most frequent misclassification is white bottles classified as color, due to the labels present on the bottles. For the green bottles, a true positive rate of 90% is acceptable, but again the labels reduce the sensitivity of this class, as seen in the results shown in VI-B. Finally, as expected, the color class has the highest true positive rate with 97.5%. This information is summarized in Table XVIII.

Type     Mean True Positive Rate    Mean Precision Rate

COLOR    97.50%     72.76%
GREEN    90.00%    100.00%
WHITE    67.50%     98.00%

Accuracy: 85.00%

Table XVIII: Metrics of reference estimated from confusion matrices
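The per-test metrics of Table XVII and the summary of Table XVIII follow from each 3x3 confusion matrix. A minimal sketch, assuming (as in Table XVII) that rows are the classes assigned by the robot and columns the true classes, is:

import numpy as np

def matrix_metrics(cm):
    # cm[i][j]: items of true class j placed in class i
    cm = np.asarray(cm, dtype=float)
    precision = np.diag(cm) / cm.sum(axis=1)    # per assigned class (rows)
    sensitivity = np.diag(cm) / cm.sum(axis=0)  # per true class (columns)
    accuracy = np.trace(cm) / cm.sum()
    return precision, sensitivity, accuracy

# Test 1 of Table XVII, classes ordered C, G, W
p, s, a = matrix_metrics([[4, 0, 1], [0, 4, 0], [0, 0, 3]])
# p -> [0.8, 1.0, 1.0]; s -> [1.0, 1.0, 0.75]; a -> 0.9167 (92%)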

Take into account that even when the classification was well performed, it does not imply that the effectiveness of the system is the same. In order to analyze this fact, Table XIX shows the frequency with which errors appear during the task execution. Applying Pareto's law, we notice that 80% of the system's failures are concentrated in four errors, with the Z-axis estimation mismatch being the most frequent (30.93%), followed by the pose estimation error (15.46%), the pick-up fault (13.40%), and the idle robot fault (10.31%). The main problem with depth estimation is the nature of the materials: many bottles are translucent and the IR beam is eventually refracted, creating a bad depth estimation.

Pose estimation failures are caused by labels and caps that, in many cases, change the perceived orientation (not the position) of the item, so the gripper is not able to pick up the bottles even when the (X, Y, Z) position is right. Regarding the idle robot fault, we believe it is caused by the limited processing capacity of the computer. Detailed results of the test can be found here.

Error/Fault                       Mean    Minimum    Maximum    Frequency

Z-axis estimation mismatch          3        1          5        30.93%
Pose estimation error               2        1          3        15.46%
Pick-up fault                       3        1          6        13.40%
Idle robot fault                    2        1          5        10.31%
Z-axis approximation error          4        2          6         8.25%
Detection error                     3        1          4         5.15%
Fatal error                         1        1          1         5.15%
Gripper malfunction fault           1        1          2         4.12%
Core fault                          1        1          1         3.09%
Unreachable destination error       2        1          2         3.09%
Classification error                1        1          1         1.03%

Attempts required to accomplish the task    22    18    27    56.34%

System effectiveness                        57.38%    44.44%    66.67%

Table XIX: General results of the executed test

The effectiveness was estimated from the number of attempts it takes the robot to accomplish the classification of the twelve bottles in each test. Under the current conditions, the system's effectiveness is an average of 57.38%. Besides this, the calculated mean execution time is 34.45 s per bottle, with a best performance of four bottles classified in 2.13 minutes, which means a production rate of 1.878 bottles per minute. Regarding the time required to execute the pick and place, it is necessary to highlight that the robot's velocity is configured by MoveIt!; based on the complexity of the movements, the robot could eventually reduce its velocity, even though the test was performed at 0.8 of the robot's nominal velocity.

Finally, the computer vision algorithm's processing capacity was also evaluated during the test. It was determined that in the worst scenario (with many borders detected) the algorithm spends 261 milliseconds processing a frame, that is, 3.819 FPS. On the other hand, in the best case it takes only 159 milliseconds, which means 6.281 FPS. The usual processing time is around 197.11 milliseconds, or 5.07 FPS. With this processing rate, the system satisfies the requirements to consider the application to be working in real time.
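The frames-per-second figures quoted above are simply the inverse of the per-frame processing time; a minimal sketch of how such a measurement can be taken around the vision routine (the process_frame function is illustrative) is:

import time

def timed_fps(process_frame, frame):
    # Measure the processing time of one frame and the equivalent frame rate
    start = time.time()
    process_frame(frame)
    elapsed = time.time() - start
    return elapsed * 1000.0, 1.0 / elapsed  # milliseconds, frames per second

# For example, 197.11 ms per frame corresponds to 1 / 0.19711 = 5.07 FPS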

This video shows the test performed with the robot to validate the designed framework.

D. Fourth Objective Results

“Analyze the socioeconomic impact of applying the proposed technology in the selected case of study.”

Since this project was developed as a first approach in the research of applied robotics for waste separation tasks, it is not possible to fully analyze the socioeconomic impact of the proposal. In fact, this project was mainly focused on the technical aspects of the application. This is why this section consists in a brief commentary on the characteristics of the case of study and the eventual impact of applying this kind of technology in these scenarios. We consider that further analysis, including quantitative data studies, is beyond the scope of this project and is proposed as complementary work.

Economic Impact:

The first stage in analyzing the economic impact of this proposal is to consider the cost of the eventual installation of the system. As seen in section VI-C, the Yaskawa robot is not the adequate one to achieve the task in a real application. Based on the characteristics of the application, a Universal Robots “UR5” was chosen, since it is ROS-compatible and could eventually be used in a collaborative environment. The end effector chosen is a suction pad, and a different vision system is quoted. Table XX shows the cost estimation of the system, including other peripherals, taxes, and transportation3.

Element                          Investment (USD)

Robot                            $23,900
Suction Pad                      $350
Vision System                    $2,000
Monitoring System                $1,500
Other                            $700
Physical Barriers                $400
Integration                      $1,000
Taxes, Transportation, fees      $4,277

Starting Investment              $34,127

Table XX: Cost estimation of the system

With an investment of around USD$35,000, and assuming an employee's yearly salary of just USD$3,000, the savings are estimated. Table XXI shows the calculated financial indicators: a payback period of 7.58 years and a cost-benefit rate of just 0.012, which means that, under the current model, the project is not economically viable. Nevertheless, for recyclers the project could reduce the exposure times to contaminated materials, allowing them to perform more specialized tasks, for instance dedicating their time to the waste collection routes, which is what actually represents profit for them, with a payment of COP$90 per kilogram of plastic waste collected, classified, and shipped.
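For clarity, the payback period reported in Table XXI is the ratio between the initial investment and the estimated net annual savings; with the stated investment of about USD$34,127 and the reported payback of 7.58 years, the implied annual saving is roughly USD$4,500, a value derived here only for illustration and not a figure reported in the study:

Payback = Initial investment / Net annual savings ≈ 34,127 / 4,500 ≈ 7.58 years.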

Implementing this technological proposal offers other benefits for the city's population, including: the reduction of the amount of material thrown away in the “Dona Juana” landfill (which currently deals with a capacity problem), the generation of formal employment for waste pickers in Bogota with a more technical center where they can process the collected material, and the contribution to the integral development of the recyclers of Bogota living in conditions of poverty and vulnerability.

Time of Reimbursement (Years) 7.58

Cost benefit rate 0.012

Table XXI: Financial indicators of the proposal

At this point it is necessary to mention that the current model does not integrate all the different actors along the “recycling chain”; in fact, the recyclers' work is isolated from the chain, and only in 2003, with Constitutional Court judgement T-724, was their work officially recognized as formal employment. In this model, traditional waste management operators are paid by the tons of waste disposed into a landfill, and recyclers have to compete with them to collect the usable waste. Besides this, while in Colombia the cost of disposing into a landfill is USD$6 per ton, in other countries it is around USD$10 per ton; this situation discourages recycling in Colombia because it is cheaper to throw waste away than to recycle it.

With this situation in mind, we consider that the current model must be redesigned in order to include the different actors of the “recycling chain”, integrating them and making them work together to increase the number of tons recycled. Figure 32 shows the different stages of recycling, where the waste co-managers are the recyclers themselves, who are the core of the proposed chain. Besides this, other changes to the proposed model could include changing the way operators are paid (paying them per recycled ton), creating policies to encourage recycling among citizens, and creating an environmental tax.

Social Impact:

To evaluate the social impact, a brief interview with recyclers was conducted in order to retrieve data about their work conditions. They state that they receive training and guidance throughout the process; however,

3 Estimation based on 2017 Universal Robots prices.

Figure 32: Different actors involved in recycling

they mention that they have to create the collection route on their own. Besides this, the fact that the government is not fully committed to recyclers creates economic issues for these people. Currently, there is a parallel market where they offer the collected materials, because trading in this market represents greater income for them. Also in this interview, they point out the unhealthy conditions at the recycling center. Figure 33 shows the different stages where the workers interact with trash and contaminant substances. According to the recyclers, they are exposed to strong smells and unknown fluid substances.

Part of this interview was used together with the Delphi method in order to find out the most representative difficulty for recycling in Bogota. The test consists in carrying out interviews with experts, which in our case are the recyclers and the administrative staff of UAESP, asking them: “Which one of these causes do you consider hinders recycling in Bogota?” Five recyclers and five administrative staff members were interviewed twice with the same question.

After this, using the Pareto rule, the most frequent causes that waste pickers consider hinder the recycling task in Bogota were determined. Table XXII shows the identified causes. The highlighted ones represent 80% of the main causes that waste pickers consider affect the recycling process in the city.

Cause                                      Frequency

Remuneration                               20.51%
Contact with contaminated materials        17.95%
Weight lifting                             12.82%
Citizens' waste misclassification          12.82%
Inefficient collection routes               7.69%
Citizens' recycling awareness               7.69%
Work conditions                             7.69%
Repetitive tasks                            5.13%
Being part of a recyclers' association      5.13%
Working hours                               2.56%

Table XXII: Principal causes that hinder the recycling task in Bogota

Remuneration as the main issue is coherent with the economic diagnosis. After it come contact with contaminated materials, weight lifting, and citizens' waste misclassification.

The principal issue that this proposed system addresses is the unhealthy exposure time of recyclers, by using robots in the pre-classification task to create a cooperative human-robot environment in which humans can perform more specialized tasks, such as bottle uncapping and washing, leaving to the robot the exhausting classification work. With these robots at the UAESP center, more people will be able

(a) Unclassified waste storage (b) Classified PET (c) Compacted waste

Figure 33: Different stages where workers are exposed to wastes

(a) Waste storage (b) Classification workstation

(c) Waste conditioning (d) Work area saturated with trash (e) Distribution of the classification areaat UAESP

Figure 34: Poor ergonomic conditions and an unhealthy environment

to perform the collection tasks, thus expanding the coverage of social inclusion for waste pickers and generating a new operating model that makes the center self-sustainable and profitable.

Finally, the La Alqueria recycling center is the way the district fulfills judgement T-724 of 2003 of the Constitutional Court, in which actions are taken in favour of the recycling population of Bogota, particularly the population in poverty and vulnerability. Thus, La Alqueria serves as a social center between waste pickers and UAESP, since this community is trained there through agreements with SENA4.

The last consideration of this section concerns the ergonomic issues at the center. Figure 34 shows the difficult conditions for the workers. Taking into account that repetitive tasks are ergonomically tiring and could eventually trigger muscular illness, placing the robot in this repetitive task is convenient.

4Servicio Nacional de Aprendizaje SENA

VII. CONCLUSIONS & RECOMMENDATIONS

A. Conclusions

• The computer vision algorithm based on OpenCV executes the classification task, performing the 3D pose estimation based on the Kinect depth data. However, the current algorithm is sensitive to light changes despite being designed using the HSV color space. Besides this, the system is not able to differentiate labels and caps, which increases the misclassification rate; hence, the different polymers could eventually be misclassified by the robot.

• The 3D pose estimation based on Kinect data and computer vision is integrated with the pick and place task using a ROS-based distributed system; the communication, based on the TCP/IP protocol over Ethernet, was established successfully. With this setup, the SDA10F robot was able to interpret the ROS data and achieve the task proposed in the case of study.

• The designed framework fulfils the expected design requirements; the system works in the virtual environment and in the real emulation as well. However, considering that this is the first version of the system, some errors appear when it is executed. Most of these errors are caused by processes dying unexpectedly due to the low processing capacity of the master computer; nevertheless, the pick and place task is successfully executed based on the data retrieved by the computer vision algorithm.

• Recyclers are exposed for long periods of time to contaminated materials, which can eventually cause them illness. On the other hand, the ergonomic issues need to be considered a priority, particularly load lifting and repetitive task problems. They have to be studied in order to redesign the workstation and make changes as soon as possible.

• The process of recycling is not just a way for vulnerable people to earn income, but also a way to reduce contamination and the amount of trash that is thrown into a landfill. In Colombia, even though there are regulations to recognize recyclers' work and to pay properly for their task, they are not implemented due to different government policies.

B. Recommendations

• A gripper is not the best solution for this task, so it is recommended to fit different end effectors to the robot to achieve specialized tasks. In this case, we recommend the use of a suction pad.

• Since the project was a first approach, we used a Yaskawa SDA10F robot; however, for this task we recommend cheaper robots that are lighter and can lift the required weight. In addition, since the interest is in creating collaborative environments, we recommend robots from Universal Robots [50].

• For this particular task, we highly recommend the work done in [51] and [52], in which the classification system is based on laser scanning and spectroscopy with excellent classification performance. The eventual integration of these sensors with the system would improve its effectiveness.

• We recommend that the laboratory install all the available sensors in order to comply with the standard safety regulations, since the technology is available at CTAI but is not yet implemented.

C. Publications

• As part of this research, a paper was submitted and presented at LACAR 2017 in Panama, with the title “Setup of the Yaskawa SDA10F robot using ROS-Industrial” [14].

• An abstract has been accepted in order to submit a paper to be presented at the International Conference on Production Research ICPR Americas 2018, “Improving Supply Chain Management through Sustainability”, in June 2018 [53].

D. Future work

As future work, we propose implementing the different sensors available at CTAI and integrating them into this framework in order to achieve the safety standards and regulations of the task, and also continuing the research of solutions for this case of study, especially the ergonomic issues, which are currently critical.

We also propose considering the work done in [51] as a starting point towards the improvement of the developed detection system; we recommend the use of more specialized depth sensors in order to bring better perception capabilities to the industrial robot and eventually create a collaborative environment.

Finally, we also propose the development of software in which both of the robot's arms are used to execute coordinated motions, in order to perform more specialized tasks.

VIII. GLOSSARY

• 3D: Three-dimensional.
• CAD: refers to the use of computer software in the design of things such as cars, buildings, and machines. CAD is an abbreviation for 'computer aided design' [54].
• Camera Link: a serial communication protocol designed for computer vision applications based on the National Semiconductor interface Channel-Link. It was designed for the purpose of standardizing scientific and industrial video products, including cameras, cables, and frame grabbers [55].
• CMOS: stands for complementary metal-oxide semiconductor, a major class of integrated circuits. CMOS imaging sensors for machine vision are cheaper than CCD sensors but noisier [55].
• Color: the perception of the frequency (or wavelength) of light [55].
• Convolutional Neural Networks (ConvNets or CNNs): a category of neural networks that have proven very effective in areas such as image recognition and classification. ConvNets have been successful in identifying faces, objects, and traffic signs, apart from powering vision in robots and self-driving cars [56].
• Cobot or co-robot (from collaborative robot): a robot intended to physically interact with humans in a shared workspace. This is in contrast with other robots, designed to operate autonomously or with limited guidance, which is what most industrial robots were up until the decade of the 2010s [57].
• CV: computer vision.
• CTAI: Centro Tecnologico de Automatizacion Industrial, Pontificia Universidad Javeriana, Bogota, Colombia.
• HSV: the HSV (Hue, Saturation, Value) model, also called HSB (Hue, Saturation, Brightness), defines a color space in terms of three constituent components: Hue, the color type (such as red, blue, or yellow); Saturation, the "vibrancy" of the color and colorimetric purity; and Value, the brightness of the color [55].
• KPI: key performance indicator, a type of performance measurement [58].
• MoveIt!: state-of-the-art software for mobile manipulation, incorporating the latest advances in motion planning, manipulation, 3D perception, kinematics, control, and navigation. It provides an easy-to-use platform for developing advanced robotics applications, evaluating new robot designs, and building integrated robotics products for industrial, commercial, R&D, and other domains [30].
• OpenCV (Open Source Computer Vision Library): an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products [11].
• Open source: refers to something people can modify and share because its design is publicly accessible. Open source software is software with source code that anyone can inspect, modify, and enhance [59].
• PIR: perception for industrial robots.
• PUJ: Pontificia Universidad Javeriana.
• RGB-D: an RGB color space is any additive color space based on the RGB color model [60].
• ROS: the Robot Operating System (ROS) is a set of software libraries and tools that help you build robot applications. From drivers to state-of-the-art algorithms, and with powerful developer tools, ROS has what you need for your next robotics project. And it is all open source [61].
• ROS-I: ROS-Industrial is an open-source project that extends the advanced capabilities of ROS to manufacturing automation and robotics. The ROS-Industrial repository includes interfaces for common industrial manipulators, grippers, sensors, and device networks [10].
• TIFF: Tagged Image File Format, a file format mainly for storing images, including photographs and line art [55].
• USB: Universal Serial Bus, a serial bus standard for connecting devices, usually to computers such as PCs, but also becoming commonplace on cameras [55].

IX. APPENDIX

APPENDIX    NAME                            DEVELOPER     FILE     LINK                             RELEVANCE

1           Algorithm documentation         Author        pdf      https://goo.gl/mKw2PN            1
2           Execution video                 Author        video    https://youtu.be/iROQcr7Bo5Y     1
3           Current separation task         Author        video    https://youtu.be/NyXRJ23TDfY     1
4           Robot's execution               Author        video    https://youtu.be/unnmYN4LatE     1
5           Framework's Features            Author        pdf      https://goo.gl/xRseY2            1
6           Test protocol                   Author        Excel    https://goo.gl/EXfQ3o            1
7           Time Measurement                Author        Excel    https://goo.gl/mZpbbx            1
8           Framework's process             Author        png      https://goo.gl/XQaieS            1
9           Distributed System              Author        png      https://goo.gl/mHXtBn            2
10          Illumination Map                Author        pdf      https://goo.gl/a8ePFN            2
11          Noise Map                       Author        pdf      https://goo.gl/PQDu4Y            2
12          Temperature Map                 Author        pdf      https://goo.gl/Yp85xT            2
13          Pseudo-algorithm of the task    Author        pdf      https://goo.gl/MVttcT            2
14          Confusion Matrices data         Author        Excel    https://goo.gl/kYkwr8            2
15          Interview                       Author        Audio    https://goo.gl/utJUpa            2
16          Pose estimation evaluation      Author        Excel    https://goo.gl/iXmb54            3
17          Internal ROS communication      Author-ROS    png      https://goo.gl/xo5VUi            3
18          Geometry for pose estimation    Author        pdf      https://goo.gl/Bw8PzH            3
19          Image Database                  Author        TIFF     https://goo.gl/wzzoFH            3
20          Diagram process                 Author        pdf      https://goo.gl/uPBFoz            4

Table XXIII: Summary of appendices

ACKNOWLEDGMENTS

We would like to thank PhD Carol Viviana Martinez Luna, who guided us through this project and has supported us from the beginning. We also want to thank everyone at the Centro Tecnologico de Automatizacion Industrial (CTAI), particularly Eng. Wilson Hernandez; the Pontificia Universidad Javeriana, especially the Industrial Engineering Department, for motivating us to develop this research; and MSc Stevenson Bolivar Atuesta for his counselling.

Text Powered by LATEX. February 26, 2018

REFERENCES

[1] F. R. Cortes, Robotica control de Robots manipuladores. Alfaomega, 2011.
[2] IFR, “International federation of robotics,” date retrieved 22-03-2017. [Online]. Available: https://ifr.org/robot-history
[3] “The fourth industrial revolution: what it means and how to respond,” 2017, date retrieved 22-03-2017. [Online]. Available: https://www.weforum.org/agenda/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond/
[4] B. Lydon, “Industry 4.0: Intelligent and flexible production,” INTECH, vol. 63, no. 3, pp. 12–17, May 2016, date retrieved 27-03-2017. [Online]. Available: https://search.proquest.com/docview/1799786990?accountid=13250
[5] “Realizing industry 4.0: Essential system considerations,” date retrieved 12-03-2017. [Online]. Available: https://www.maximintegrated.com/en/app-notes/index.mvp/id/5991
[6] IFR, “World robotics report 2016,” date retrieved 07-03-2017. [Online]. Available: https://ifr.org/img/uploads/ExecutiveSummary WR Industrial Robots 20161.pdf
[7] F. Tobe, “Why co-bots will be a huge innovation and growth driver for robotics industry,” Dec 2015, date retrieved 20-02-2017. [Online]. Available: http://spectrum.ieee.org/automaton/robotics/industrial-robots/collaborative-robots-innovation-growth-driver
[8] “La seguridad en la colaboracion inteligente hombre-robot,” date retrieved 12-03-2017. [Online]. Available: http://www.infoplc.net/plus-plus/tecnologia/tendencias/item/104052-seguridad-colaboracion-inteligente-hombre-robot
[9] SIRRIS, “Votre prochain collaborateur sera-t-il un cobot ?” Jan 2016, date retrieved 01-04-2017. [Online]. Available: http://blog.sirris.be/fr/blog/votre-prochain-collaborateur-sera-t-il-un-cobot
[10] “Description,” date retrieved 02-04-2017. [Online]. Available: http://rosindustrial.org/about/description/
[11] “About,” date retrieved 02-04-2017. [Online]. Available: http://opencv.org/about.html
[12] Dinero, “Cuanta basura genera colombia y cuanta recicla,” Aug 2017, date retrieved 20-01-2018. [Online]. Available: http://www.dinero.com/edicion-impresa/pais/articulo/cuanta-basura-genera-colombia-y-cuanta-recicla/249270
[13] A. M. C. and Y. R. M., “Con reciclaje, en dos anos bogota podria comprar a neymar,” Sep 2017, date retrieved 20-01-2018. [Online]. Available: https://www.elespectador.com/noticias/bogota/con-reciclaje-en-dos-anos-bogota-podria-comprar-neymar-articulo-715554
[14] C. Martinez, N. Barrero, W. Hernandez, C. Montano, and I. Mondragon, Setup of the Yaskawa SDA10F Robot for Industrial Applications, Using ROS-Industrial. Springer International Publishing, 2017, pp. 186–203. [Online]. Available: http://dx.doi.org/10.1007/978-3-319-54377-2 16
[15] W. A. Hernandez Martinez and L. J. Patino Arevalo, “Seleccion y clasificacion de piezas mediante vision de maquina utilizando un robot industrial,” Apr 2016, date retrieved 06-05-2017. [Online]. Available: http://hdl.handle.net/10185/18728
[16] H. Wu, T. T. Andersen, N. A. Andersen, and O. Ravn, “Visual servoing for object manipulation: A case study in slaughterhouse,” pp. 1–6, Nov 2016.
[17] A. de recicladores de Bogota, “Resumen ejecutivo estudio nacional de reciclaje,” Jul 2011, date retrieved 27-03-2017. [Online]. Available: http://asociacionrecicladoresbogota.org/wp-content/uploads/2012/04/RESUMEN-EJECUTIVO-DEL-ESTUDIO-NACIONAL-DE-RECICLAJE.pdf
[18] G. E. Sakr, M. Mokbel, A. Darwich, M. N. Khneisser, and A. Hadi, “Comparing deep learning and support vector machines for autonomous waste sorting,” in 2016 IEEE International Multidisciplinary Conference on Engineering Technology (IMCET), Nov 2016, pp. 207–212, date retrieved 30-03-2017.
[19] “Support vector machines for automated classification of plastic bottles,” in 2010 6th International Colloquium on Signal Processing & its Applications, May 2010, pp. 1–5, date retrieved 30-03-2017.
[20] M. A. Zulkifley, M. M. Mustafa, and A. Hussain, “Probabilistic white strip approach to plastic bottle sorting system,” in 2013 IEEE International Conference on Image Processing, Sept 2013, pp. 3162–3166, date retrieved 30-03-2017.
[21] S. K. Saha, A. A. J. Eduardo, S. P. F. J., and T. W. Bartenbach, Introduccion a la Robotica. McGraw-Hill Interamericana, 2010.
[22] J. M. Muggleton, R. Allen, and P. H. Chappell, “Hand and arm injuries associated with repetitive manual work in industry: a review of disorders, risk factors and preventive measures,” Ergonomics, vol. 42, no. 5, pp. 714–739, 1999. [Online]. Available: http://dx.doi.org/10.1080/001401399185405
[23] P. Tsarouchi, S. Makris, G. Michalos, M. Stefos, K. Fourtakas, K. Kaltsoukalas, D. Kontrovrakis, and G. Chryssolouris, “Robotized assembly process using dual arm robot,” Procedia CIRP, vol. 23, pp. 47–52, 2014.
[24] K.-T. Song, C.-H. Wu, and S.-Y. Jiang, “Cad-based pose estimation design for random bin picking using a rgb-d camera,” Journal of Intelligent & Robotic Systems, pp. 1–16, 2017. [Online]. Available: http://dx.doi.org/10.1007/s10846-017-0501-1
[25] K. N. Kaipa, A. S. Kankanhalli-Nagendra, N. B. Kumbla, S. Shriyam, S. S. Thevendria-Karthic, J. A. Marvel, and S. K. Gupta, “Addressing perception uncertainty induced failure modes in robotic bin-picking,” Robotics and Computer-Integrated Manufacturing, vol. 42, pp. 17–38, 2016.
[26] K. Ikeuchi, “Generating an interpretation tree from a cad model for 3d-object recognition in bin-picking tasks,” International Journal of Computer Vision, vol. 1, no. 2, pp. 145–165, 1987. [Online]. Available: http://dx.doi.org/10.1007/BF00123163
[27] ROS, “Robot operating system,” http://www.ros.org, 2016, date retrieved 15-09-2016.
[28] ROS-I, “Robot operating system-industrial,” http://rosindustrial.org, 2016, date retrieved 20-09-2016.
[29] “About,” date retrieved 06-01-2018. [Online]. Available: https://opencv.org/about.html
[30] “Moveit! motion planning framework,” date retrieved 02-04-2017. [Online]. Available: http://moveit.ros.org/
[31] “Rviz 3d robot visualizer,” date retrieved 04-02-2018. [Online]. Available: http://wiki.ros.org/rviz
[32] “Motoman driver,” date retrieved 07-01-2018. [Online]. Available: http://wiki.ros.org/motoman driver
[33] “Openni,” Sep 2017, date retrieved 06-01-2018. [Online]. Available: https://en.wikipedia.org/wiki/OpenNI
[34] avin2, “avin2/sensorkinect,” May 2012, date retrieved 02-08-2017. [Online]. Available: https://github.com/avin2/SensorKinect
[35] “Openni camera,” date retrieved 06-01-2018. [Online]. Available: http://wiki.ros.org/openni camera
[36] “Openni launch,” date retrieved 06-01-2018. [Online]. Available: http://wiki.ros.org/openni launch
[37] YASKAWA, “Motoman SDA10F robot,” https://www.motoman.com/industrial-robots/sda10f, 2016, date retrieved 20-10-2016.
[38] ——, “Fs100 controller,” https://www.motoman.com/products/controllers/fs100 controller, 2016, date retrieved 20-10-2017.
[39] Robotiq, “Industrial grippers,” http://robotiq.com, 2016, date retrieved 24-12-2017.
[40] Microsoft, “Kinect for windows sensor components and specifications,” date retrieved 06-01-2018. [Online]. Available: https://msdn.microsoft.com/es-co/library/jj131033.aspx
[41] “300mbps wireless n gigabit router tl-wr1042nd,” date retrieved 06-01-2018. [Online]. Available: http://www.tp-link.com/us/products/details/TL-WR1042ND.html#specifications
[42] “How to avoid cancer causing plastics,” Aug 2016, date retrieved 06-01-2018. [Online]. Available: https://thetruthaboutcancer.com/cancer-causing-plastics/
[43] “Camera calibration with opencv,” date retrieved 07-01-2018. [Online]. Available: https://docs.opencv.org/2.4/doc/tutorials/calib3d/camera calibration/camera calibration.html
[44] ISO, “Iso - international organization for standardization,” Dec 2016. [Online]. Available: https://www.iso.org/standard/51330.html
[45] ——, “Iso - international organization for standardization,” Dec 2016, date retrieved 27-03-2017. [Online]. Available: https://www.iso.org/standard/41571.html
[46] F. C. B, “Informe “caracterizacion de la poblacion recicladora de oficio en bogota”,” Mar 2014, date retrieved 14-10-2017. [Online]. Available: http://www.uaesp.gov.co/images/InformeCaracterizacionpoblacinrecicladoradeoficio 2014.pdf
[47] “Recommended light levels for outdoor and indoor venues,” date retrieved 14-12-2017. [Online]. Available: https://www.noao.edu/education/QLTkit/ACTIVITY Documents/Safety/LightLevels outdoor+indoor.pdf
[48] “United states department of labor,” date retrieved 05-01-2018. [Online]. Available: https://www.osha.gov/pls/oshaweb/owadisp.show document?p table=STANDARDS&p id=9735
[49] G. Kanawaty, Introduccion al estudio del trabajo. Oficina Internacional del Trabajo, 2005, date retrieved 14-12-2017.
[50] “Universal robots,” date retrieved 10-01-2018. [Online]. Available: https://www.universal-robots.com/es/
[51] A. Nordbryhn, “The optics of recycling,” date retrieved 10-01-2018. [Online]. Available: https://www.osa-opn.org/home/articles/volume 23/june 2012/features/the optics of recycling/
[52] “Laser classification,” date retrieved 10-01-2018. [Online]. Available: https://forum.arduino.cc/index.php?topic=405887.0
[53] “International conference on production research icpr americas 2018,” date retrieved 10-01-2018. [Online]. Available: https://icpramericas2018.wixsite.com/icpr
[54] “Cad definition and meaning, collins english dictionary,” date retrieved 02-04-2017. [Online]. Available: https://www.collinsdictionary.com/dictionary/english/cad
[55] P. Waszkewitz, “Machine vision in manufacturing,” Handbook of Machine Vision, pp. 693–776, date retrieved 14-03-2017.
[56] Ujjwalkarn, “An intuitive explanation of convolutional neural networks,” Apr 2017, date retrieved 02-04-2017. [Online]. Available: https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/
[57] “Cobots,” date retrieved 02-04-2017. [Online]. Available: http://peshkin.mech.northwestern.edu/cobot/
[58] C. T. Fitz-Gibbon, Performance indicators. Multilingual Matters, 1990.
[59] “What is open source?” date retrieved 02-04-2017. [Online]. Available: https://opensource.com/resources/what-open-source
[60] C. A. Poynton, Digital video and HDTV: algorithms and interfaces. Morgan Kaufmann, 2007.
[61] “About ros,” date retrieved 02-04-2017. [Online]. Available: http://www.ros.org/about-ros/