
Page 1: BME 4900 Final Report

3D Ultrasound Reconstruction

BME 4900 Final Report

Michael Golden

Khayriyyah Munir

Omid Nasser Bigdeli

Team 3

Client Contact Information:

Dr. Joseph McIsaac Hartford Hospital

80 Seymour St. PO Box 5037

Hartford, CT 06102 (860) 545 2117


Table of Contents

Abstract Page 5

1 Introduction 6

1.1 Background 6

1.2 Purpose of the Project 6

1.3 Previous Work Done by Others 6

1.3.1 Summer Interns 6

1.3.2 BrainLAB 6

1.4 Map for the rest of the project 7

2 Project Design 8

2.1 Alternative Designs 8

2.1.1 Camera Mounting (1): Fixed Sawhorse Support 8

2.1.2 Camera Mounting (2): Sliding Angled Support 9

2.1.3 Camera Mounting (3): Sliding Perpendicular Support 9

2.1.4 Camera Configuration (1): Perpendicular 10

2.1.5 Camera Configuration (2): Parallel 11

2.1.6 Tracking System Configuration (1): In-line Triangle Configuration 12

2.1.7 Tracking System Configuration (2): Perpendicular and Off Center 13

2.1.8 Tracking System Configuration (3): Perpendicular and Centered 13

2.2 Optimal Design 14

2.2.1 Objective 14

2.2.2 Subunits 15

2.2.2.1 Ultrasound System and Probe 15


2.2.2.2 Cameras Page 16

2.2.2.3 Camera Support 16

2.2.2.4 Camera Configuration 18

2.2.2.5 Camera Mount 18

2.2.2.6 Tracking Pyramid 19

2.2.2.7 Stereo Triangulation 20

2.2.2.8 Image Acquisition 25

2.2.2.9 Image Reconstruction 25

3 Realistic Constraints 26

3.1 Engineering 26

3.2 Economical 26

3.3 Manufacturability 26

3.4 Ethical 27

3.5 Health and Safety 27

3.6 Social and Political 28

4 Safety Issues 29

5 Impact of Engineering Solutions 30

6 Life-Long Learning 31

7 Budget and Timeline 32

7.1 Time Line

8 Team Members Contributions to the Project

9 Conclusion

10 References

11 Acknowledgements


Abstract

The purpose of this project is to design and construct a low-cost device to produce a 3D image from existing ultrasound images in a clinical setting. Our client is Joseph McIsaac, an anesthesiologist at Hartford Hospital. When determining where the brachial plexus is using the ultrasound probe, it is difficult to reconstruct in one’s mind the 3D configuration of this nerve, so the proposal for this project is to ease this aspect of a clinician’s job. With knowledge of the location of the probe when the two dimensional ultrasound image is produced and mathematical calculations, it was expected that an algorithm could be made to take these images and reconstruct them into a three dimensional image.

To determine the spatial configuration of the probe, an attachment consisting of three spheres oriented in an equilateral triangle is placed directly on the probe. Two typical web cameras are used to take pictures of the ultrasound probe and this “tracking pyramid” attachment. The cameras are interfaced with an image recognition program to detect changes in the position of the probe throughout the procedure. Knowing these changes along with the distance between the probe and the cameras, and the differences between the two images produced from the cameras, we are able to use stereo triangulation to calculate the exact spatial configuration of the probe. The images produced from the cameras are synced with the ultrasound images which are compiled in a 3D reconstruction program to produce a 3D image which corresponds to what was imaged in the clinical procedure.

It is intended to make this design freely available to hospitals and clinics nationwide. All software is written in LabVIEW Developer Suite and will be made available as a stand-alone program. Included with this will be a manual with instructions to set up and implement this device, making it truly useful to physicians in any aspect of healthcare.


1 Introduction

1.1 Background

This project is designed to assist anesthesiologists in imaging the brachial plexus region in the neck when administering anesthesia. The client is Dr. Joseph McIsaac, an anesthesiologist at Hartford Hospital. He currently uses an ultrasound probe to image in two dimensions the region of the brachial plexus. It would be more informative if this image was a three dimensional image—eliminating the need for the doctor to mentally recreate the patient’s nerve. This design would be freely available to the public, allowing physicians nationwide to make use of this tool.

1.2 Purpose of the Project

The main purpose of this project is to provide a low-cost, unpatented tool for reconstructing 2D ultrasound images into 3D images which can be seen during the administration of anesthesia and throughout the procedure. Anesthesiologists currently have to imagine what the patient looks like subcutaneously (a 3D image), using the 2D ultrasound as a stepping stone. There is currently a product marketed by BrainLAB which provides 3D images in real time, but its cost greatly exceeds the budget of many hospitals. This project will achieve the same goal as the existing product using two inexpensive web cameras and LabVIEW to acquire and process the image. This LabVIEW program and the design of our tool will be made available to the public, allowing physicians nationwide to make use of this tool in a very cost-effective manner.

1.3 Previous Work Done by Others

1.3.1 Summer Interns

In the summer of 2010, two high school juniors from the Avon robotics team, Taylor and Justin, worked as interns in Dr. McIsaac’s lab and started this project. They did a great deal of preliminary research, established industry contacts, and wrote a LabVIEW program which accomplishes many of the necessary tasks individually, but they did not integrate the various components together. We will use what they accomplished as a foundation for our project and tie together what they have already produced. We have purchased the same cameras they used to ensure proper image acquisition with the existing LabVIEW program. The web cameras will image three balls placed on top of the ultrasound probe to determine its spatial location. The model Taylor and Justin produced will be very similar to what we use, as it works well with what already exists in industry.

1.3.2 BrainLAB

BrainLAB, a company in Munich, Germany, which specializes in technology for neurosurgery, currently markets VectorVision. In April 1997, BrainLAB received 510(k) clearance from the Food and Drug Administration to market this product. VectorVision uses three reflective balls arranged in an equilateral triangle, imaged by two cameras. This is the platform for image-guided surgery, especially neurosurgical and orthopedic procedures. The system does not have wires and can be integrated with any instruments currently used in the operating procedure. Using VectorVision, surgeons can follow the movements of their instruments on the computer screen in real time during surgical procedures. The basis of BrainLAB’s product is the starting point for our project, but we hope to achieve the same goal at a small fraction of the cost.

1.4 Map for the rest of the project

Forward movement on this project consists mostly of software development. Each of us has installed National Instruments’ LabVIEW 2010 with Developer Suite on our personal computers. We will use Source Code Control to develop and work on various program components at the same time without worry of having information overwritten or the complications of merging programs. The components of the program that need to be developed include the image recognition to differentiate between the three different colored spheres in the tracking pyramid located on top of the probe. We also need to calculate in our program the change in location of the pyramid to have an accurate understanding of the spatial change made by the probe during imaging. The ultrasound images will be taken as input to our program and reconstructed into a 3D image using a third-party reconstruction program that is compatible with LabVIEW. The program will be developed using the Developer Suite component of LabVIEW to produce a stand-alone program, useful for anyone who has a computer and two web cameras.


2 Project Design

2.1 Alternative Designs

There are three main components to the hardware implementation, and for each there are different choices available to optimize this project. The first is the mounting of the two webcams; the second is the cameras’ configuration relative to each other and to the patient; the third is the design of the tracking pyramid.

2.1.1 Camera Mounting (1): Fixed Sawhorse Support

In this design, there are a total of five bars, all composed of 80/20 Inc’s T-slotted aluminum framing. Bar A is four feet long and arranged parallel to the floor with both cameras mounted parallel to each other. The cameras are mobile in the x-direction in order to find an optimal distance between them for more accurate determinations of the distance between the cameras and the patient. Bars B and Bars C are also four feet long and are connected at 30 degrees. All bars are connected at points P and Q.

The advantages of this design include the mobility of the cameras in the x direction as well as the convenience of the support structure. The major disadvantage of this design is that the height of the cameras is fixed. Depending on the patient, the cameras’ height cannot be optimized to individual needs. However, with everything connected, the structure will be able to be moved from room to room. An example of this support structure is seen in Fig 2.1.

Figure 2.1 Fixed Sawhorse Support


2.1.2 Camera Mounting (2): Sliding Angled Support

In this design, there are a total of five bars, all composed of 80/20 Inc’s T-slotted aluminum framing. Bar A is four feet long and arranged parallel to the floor, with two cameras mounted parallel to each other. The cameras are mobile in the x-direction in order to find the optimal distance between them for more accurate determinations of the distance between the cameras and the patient. Bar A is attached at either end to Bars B, and is mobile in the plane of Bars B. Bars B are each fixed to a two foot Bar C at 60 degrees. The advantages of this design include the ability for the cameras to move in the y-direction. This allows the physician to adjust the height of the cameras to be specific for any bed or patient in the hospital. The disadvantages of this design include the fact that as the height of Bar A decreases, the angle between the cameras and what needs to be imaged on the patient changes. This significantly increases the mathematical calculations necessary to determine the position of the ultrasound probe because a new set of calculations is necessary for every location of Bar A. This camera mount is shown below in Fig 2.2.

Figure 2.2 Sliding Angled Support

2.1.3 Camera Mounting (3): Sliding Perpendicular Support

In this design, there are a total of five bars, all composed of 80/20 Inc’s T-slotted aluminum. Bar A is four feet long and arranged parallel to the floor, with two cameras mounted parallel to each other. The cameras are mobile in the x-direction in order to find an optimal distance between them for more accurate determinations of the distance between the cameras and the patient. Bars B are perpendicular to the floor and each attached to either end of Bar A. Bar A is mobile in the y-direction because it is composed of T-slotted aluminum. The advantages of this design include the ability for the cameras to move in both the x-direction and the y-direction, making it possible to find the optimal position for both cameras for various bed heights throughout the hospital and patient sizes. An image of this mount is shown below in Fig 2.3.

Figure 2.3 Sliding Perpendicular Support

2.1.4 Camera Configuration (1): Perpendicular

In this design, each camera will be mounted to one end of an L-shaped support with each leg measuring 4.5 feet in length. Both cameras image the probe and tracking system, but in a perpendicular fashion. The disadvantages of this design include the fact that it is very bulky and requires a great deal of material. It takes up considerable space and would inhibit the movement of the physician, and therefore contact with the patient, from two directions. Also, with the cameras arranged 90 degrees to each other, the overlap of the imaged areas is much smaller, so the location of the patient needs to be very specific and cannot easily accommodate patient variance. The perpendicular camera configuration is shown below in Fig 2.4.


Figure 2.4 Perpendicular Orientation

2.1.5 Camera Configuration (2): Parallel

In this design, both cameras are arranged on one bar, four feet in length, and parallel to each other. Because they both image from the same direction, the stereo triangulation calculations are simpler. The cameras are close to each other, which makes this design extremely space efficient. The optimal distance from the patient is four feet; provided this distance is maintained and the patient is in the view of both cameras, the spatial configuration can be anything. There is no need for the cameras to be in any one specific location, which is desirable for the physician. The parallel camera orientation is illustrated in Fig 2.5.


Figure 2.5 Parallel Orientation

2.1.6 Tracking System Configuration (1): In-line Triangle Configuration

In this design, spheres A, B, and D are configured in an equilateral triangle, all in the same plane. The ultrasound probe is attached to sphere A, along the line made by connecting A and B. The disadvantage of this design is that if the ultrasound probe is in line with the focal line of one of the cameras, that camera will image only two spheres, with the third being lost. This would give rise to complications in the calculations, as information from all three spheres is necessary for stereo triangulation. This tracking design is shown below in Fig 2.6.

Figure 2.6 In-Line Triangle Configuration


2.1.7 Tracking System Configuration (2): Perpendicular and Off Center

In this design, spheres A, B, and D are configured in an equilateral triangle, all in the same plane. The ultrasound probe is attached to sphere D, perpendicular to the plane formed by spheres A, B, and D. The advantages to this design include that at any point all three spheres are imaged, and there is no way for them to block one another from either one of the cameras. Disadvantages of this design include a non-optimal range of motion, particularly with respect to spheres A and B. As the angle between the probe and the neck region decreases, spheres A and B get significantly closer to the patient and can eventually make contact with the patient, limiting the range of motion in that direction. The perpendicular and off centered tracking system is shown in Fig 2.7.

Figure 2.7 Off Center Triangle Configuration

2.1.8 Tracking System Configuration (3): Perpendicular and Centered

In this design, spheres A, B, and D are configured in an equilateral triangle, all in the same plane. The ultrasound probe is perpendicular to the plane, but at the centroid of the plane. Advantages to this design include that at any point, all three spheres are imaged and there is no way for them to block one another from either of the cameras. Also, with the ultrasound probe being located at the centroid, the physician has the greatest range of motion in any direction. This tracking system is illustrated by Fig 2.8.


Figure 2.8 Centered Triangle Configuration

2.2 Optimal Design

After a review of the proposed alternative designs, the decision was to use a structure similar to the sliding perpendicular mount, with the cameras parallel to each other. The tracking system is designed to be similar to the perpendicular and centered configuration.

2.2.1 Objective

This project is designed to assist anesthesiologists in imaging the brachial plexus region in the neck when administering anesthesia. Currently, anesthesiologists use an ultrasound probe to image the region of the brachial plexus in two dimensions, but it would be more informative if this image were 3-D. Anesthesiologists currently have to imagine what the region they are imaging looks like in 3-D, using the 2-D movie as a platform.

We intend to use inexpensive web cameras in conjunction with LabVIEW to write a program to calculate the spatial location of the ultrasound probe and sync that with the produced ultrasound image. This image will be disassembled and reassembled in 3-D. Figure 2.9 displays the sequence of steps to be taken to obtain a 3-D image of the brachial plexus region.


Figure 2.9 Sequence of Tasks to Reconstruct a 3D Image

We intend to design and manufacture an unpatented tool for reconstructing 2-D ultrasound images into 3-D images which can be seen during the administration of anesthesia and throughout the procedure. The design of our project will be made freely available to the public, particularly physicians who wish to have access to this technology and would like to do so inexpensively.

Our alternative designs were separated into the three components that will be part of our design--the camera support, the camera configuration, and the "Tracking Pyramid". From the various designs for each of these components we chose the most efficient and most practical components and have put those together for our optimal design.

For the camera support, we chose the sliding perpendicular support because it had the simplest calculations, most camera mobility, and consumed the least amount of space in the operating room. We chose the parallel option for the camera configuration because the stereo triangulation calculations are simpler than those necessary with the perpendicular configuration. Also with this design, both cameras can rest on the same bar and not require another entire support, conserving space in the operating room. For the ultrasound tracking system we chose the option in which the spheres for detecting the location of the ultrasound are arranged in an equilateral triangle, with the probe located at the center and perpendicular to the plane formed by the triangle. This design provides the greatest range of motion for the probe with the least possible patient interference. Also with this configuration, all of the spheres will be imaged at all times, as they cannot physically block each other.

2.2.2 Subunits

2.2.2.1 Ultrasound System and Probe

Sonosite’s MicroMaxx ultrasound system will be used in the design and implementation of this project. This system operates by means of a single ultrasound probe, of which there are several to choose from. The specific probe used by clinicians to examine the brachial plexus is known as the SLA probe. Dr. McIsaac has offered us some time to work with the SLA probe to become familiar with it. Because of the probe’s cost, it must not be damaged in any way during the manufacturing of this project.

[Figure 2.9 flowchart boxes: “Image ‘Tracking Pyramid’ with two cameras”; “Stereo triangulation to calculate specific configuration of ‘Tracking Pyramid’”; “Calculate coordinates of anatomy of what is produced in MP4 ultrasound image”; “Disassemble MP4 into separate JPEG images”; “Reconstruct series of 2D JPEG images into 3D image”]


This ultrasound produces an MPEG-4 movie. It operates at a frequency of 13–6 MHz, with a 25 mm broadband linear array. The scan depth is approximately 6 cm. According to the manufacturer, the applications are vascular, musculoskeletal, superficial, and nerve imaging. [1]

2.2.2.2 Cameras

For this project, we will use two Logitech Webcam Pro 9000 web cameras. This camera is shown below in Fig 2.10. These cameras are capable of taking 8-megapixel photos and high-definition video at up to 30 frames per second. This model is Hi-Speed USB 2.0 certified for connection with the computer and is compatible with Windows XP (SP2) or higher. The stock support of each camera has been removed, exposing several options for properly securing the camera to the support bar. Only after the cameras are fixed to this bar can several other key steps toward completion be taken.

Figure 2.10 Logitech Webcam Pro9000

2.2.2.3 Camera Support

The design chosen for the camera support is one in which it is easy to determine the ideal position for the cameras by adjusting their spatial location. All of the material in this support will be 80/20 Inc’s T-slotted aluminum. Figure 2.11 illustrates the camera support in detail. The side beams (A) will measure 6 feet (1.83 m) in length and are arranged perpendicular to the floor and parallel to each other. The top and bottom supports (B) as well as the camera support (C) will each measure 4 feet (1.22 m) in length, and are arranged parallel to the floor and perpendicular to the side beams (A). The top and bottom supports will be fixed in place with respect to the vertical supports, whereas the camera support (C) will be able to move vertically up and down the side beams by means of a linear brake assembly. The base stands (E) will measure 2 feet (0.61 m) each and the angled supports (D) will each measure 1 foot (0.30 m) in length. Figure 2.11 displays the design for this support system.

The advantages of this design include the ability for the cameras to move in both the x- and y-directions, making it possible to find the optimal position for both cameras for various bed heights and patient sizes. A current disadvantage of this setup is its size. Standing 6 feet (1.83 m), the support structure is very tall—taller than it needs to be. The reason it is so tall is for optimization: if it is determined that the cameras will never have to be more than 4 feet (1.22 m) high, the support structure will be disassembled, cut, and reassembled to fit the specifications.


Figure 2.11 Camera Support Structure

The assembly of this support system will be simple because the 80/20 Inc. parts are designed to interact intimately with each other. The company has already developed many parts that allow different angles to be made, as well as many other attachments for its T-slotted aluminum framing. Using a linear brake will allow Bar C to move up and down with ease.[2] This gives the cameras a greater range of motion.

2.2.2.4 Camera Configuration

The design chosen for the camera configuration is the one in which the cameras’ optical axes are parallel to each other and parallel to the ground. They will be placed such that their fields of view provide the greatest overlap. This ensures that the tracking system will be imaged by both cameras. In this design, the cameras are both Logitech Webcam Pro 9000s and are arranged on one bar which measures 4 feet (1.22 m) in length. Because they both image in the same direction, the stereo triangulation calculations are simplified (stereo triangulation is discussed later in section 2.2.2.7). Another advantage of orienting the cameras in this manner is that the two webcams will be physically close to each other, making this design space efficient.

2.2.2.5 Camera Mount

The bracket the Logitech Webcam Pro 9000 comes with is not sufficient for this project. There are two joints where the webcam has practically free rotation: one at the neck of the camera; the other joint between the two plastic pieces holding the camera in a fixed position. In order to keep the cameras in one specific place during operation, a specialized bracket has been designed.

This newly designed bracket is more rigid and will only allow the camera to move along one direction. It is designed to mount onto the camera and act as a hinge with a plate (C). The original mount of the camera is removed and two new metal supports (A) are designed to hold the camera, as shown in Fig 2.12. These metal supports operate as the hinge with the three prongs of the bracket (B). The three prongs and the plate are the same piece of metal and are therefore rigid. The height of the hinge is determined such that when the camera rests on the plate, the camera’s optical axis is level with the ground.

Figure 2.12 Camera Bracket without Webcam

This bracket will be machined in the University of Connecticut machine shop. The mill will be used to create both a smooth surface and the intimate details. The two metal supports (A) will be machined independently of the rest of the bracket. The mill will make the larger cylinder protruding from the block by removing the surrounding material from a larger piece of aluminum. The smaller cylinder is a pin that drops into a hole drilled using the mill. The advantage of this is that the mill can accurately cut in a circular pattern—making a boss protrusion with an accurate diameter.


The bracket itself will be milled from an angled piece of aluminum. Since the angle is already there, the only important parts of the bracket are the depression for the larger side of the camera as well as the three prongs of the bracket. The hole that will act as a hinge for the two parts will be made using a 1/8 inch diameter bit because it is the smallest bit that is long enough to drill through the three prongs accurately.

2.2.2.6 Tracking Pyramid

The purpose of the tracking pyramid is to give the cameras an object to watch so that the software can compare the two images and give the 3D coordinates of the ultrasound probe. We must also make sure that LabVIEW will be able to locate the position of these three points regardless of their orientation in 3D space. An object that has the same general outline regardless of the angle from which it is imaged is a sphere. Therefore, we have decided to use three spheres to determine the position and plane of the ultrasound probe.

The key to the design is how the three spheres are oriented relative to each other and to the ultrasound probe in space. We must take a few major factors into account when determining this. First, the spheres should be connected to the probe in such a way that when the doctor moves the probe along the neck of the patient and at different angles, the three spheres do not hinder the doctor’s desired movement of the probe. For example, if the spheres are placed in front of the probe instead of behind it, the spheres will hit the patient’s neck and limit the angular movement of the probe. The spheres must also extend far enough behind the probe that the neck of the patient will never hinder the cameras’ view of them. Another factor to consider is that the spheres must be oriented with respect to the probe in such a manner that the 3D position and plane of the ultrasound probe can be calculated using vector analysis.

The design chosen for the configuration of the three spheres located on top of the ultrasound probe is such that the three spheres will be arranged in an equilateral triangle. The centroid of this triangle will be oriented directly above the center of the ultrasound probe. This design allows all the spheres to be imaged by the two cameras at the same time; there will be few, if any, instances when one sphere blocks another. Also, this design keeps the spheres away from the patient, allowing the clinician greater freedom of movement. A CAD drawing of this proposal is found in Fig 2.13.
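Detecting each colored sphere in a camera frame can be sketched as a simple color-threshold-and-centroid step. This is a minimal NumPy illustration, not the report's LabVIEW vision code, and the RGB bounds used below are hypothetical.

```python
import numpy as np

def find_sphere_center(img, lo, hi):
    """Return the (x, y) pixel centroid of pixels whose RGB values
    fall within [lo, hi], or None if no pixel matches.

    img: H x W x 3 uint8 array; lo, hi: length-3 RGB bounds chosen
    for one sphere's color (hypothetical values in practice).
    """
    # Boolean mask: True where all three channels are inside bounds
    mask = np.all((img >= lo) & (img <= hi), axis=2)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    # Centroid of the matching pixel blob approximates the sphere center
    return xs.mean(), ys.mean()
```

Running this once per sphere color in each camera's frame yields the image coordinates (X1, Y1) and (X2, Y2) that feed the stereo triangulation step.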


Figure 2.13 Tracking Pyramid

With the spheres arranged in the equilateral triangle, as they are in this design, and the entire system being imaged by two cameras, we are able to use stereo triangulation to calculate the exact spatial location of the ultrasound probe (discussed in section 2.2.2.7). The images taken by both cameras will be imported into LabVIEW, after which LabVIEW assigns coordinates to the image.

2.2.2.7 Stereo Triangulation

The concept behind stereo triangulation is that if one knows the distance and orientation of two cameras relative to one another, then one can calculate the position of an object in 3D space by analyzing the difference in the object’s position between the pictures taken by the two cameras. The simplest form of stereo triangulation occurs when the cameras’ optical axes are parallel and the lenses are a certain distance, b, apart, as can be seen in Fig 2.14. When looking at this image, one should think of the zx plane as parallel to the ground, with the y-axis corresponding to altitude or height. The optical axes point in the direction that the lenses are looking; in this case both point in the positive z-direction. The variables X1 and X2 correspond to the x-distances of the object from the center vertical axis of the two images.

Figure 2.14 Stereo Triangulation[3]


Because we intend to determine the actual spatial location relative to the cameras, we will use a constant K, which represents the conversion between LabVIEW's assigned image coordinates and the actual location. If we assume that the reference origin in this 3D space is the center of the left camera lens, noted as L in Fig 2.14, then we can calculate the Z, X, and Y positions of point P with the following equations; by imaging an object at a known position, all other values are known and the equations can be solved for K:

Z = Kb / (X1 - X2)

X = X1 * Z / K

Y = Y1 * Z / K

To ensure that these calculations are accurate, we have made the design with the two cameras having parallel optical axes. This is done by having the cameras parallel to each other and resting on the same bar. After determining the value of K, our next step will be to create a method to determine the 3D position and plane of the ultrasound probe versus time. Because we want to be able to determine a plane in 3D space, we need to determine the position of at least three non-collinear points in space. The tracking pyramid will be firmly attached to the ultrasound probe and will therefore serve as these three non-collinear points.
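Expressed in code, these relations are short. The following is a minimal Python sketch (Python only for illustration; the project implements this in LabVIEW), with variable names matching the equations above; the example offsets and the value of K would come from calibration:

```python
def triangulate(x1, y1, x2, y2, b, K):
    """Estimate the 3D position of a point from its offsets in two
    parallel-axis camera images.

    x1, y1 / x2, y2 -- offsets of the point from the center vertical
    axis of the left/right image (image units)
    b -- baseline distance between the two camera lenses
    K -- calibration constant converting image units to real-world
    distances, found by imaging an object at a known distance
    """
    disparity = x1 - x2
    if disparity == 0:
        raise ValueError("zero disparity: point at infinity or mismatched")
    z = K * b / disparity  # depth in front of the left lens
    x = x1 * z / K         # lateral position
    y = y1 * z / K         # height
    return (x, y, z)
```

Rearranging the depth equation gives K = Z(X1 - X2)/b, which is how a single image of an object at a known distance Z yields the calibration constant.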

Once we have two cameras in the proper alignment and determine the value for K in our stereo triangulation equations, we will be able to determine the x, y, and z coordinates of points A, B, and D, which are the centers of the three spheres. Point C is the centroid of triangle ABD. Placing the center of the probe scanner, P, along the line perpendicular to the plane defined by ABD at the centroid allows for maximum angular revolution of the probe about the neck and maximum rotation about the line CP without the spheres hitting the neck. The combination of the placement of P along the line perpendicular to ABD at the centroid and the fact that the three spheres are aligned such that ABD is an equilateral triangle allows for unbiased movement of the probe about the neck. These points can be seen in Fig 2.15.


Figure 2.15 2D Space of the Ultrasound Image

Once we calculate the coordinates of points A, B, and D using LabVIEW and our stereo triangulation equations, our next step is to calculate the centroid of the triangle ABD using the following equations:

Cx = (Ax + Bx + Dx) / 3

Cy = (Ay + By + Dy) / 3

Cz = (Az + Bz + Dz) / 3

Next, the vector CP is determined. Vector CP is the vector from the centroid C to the tip of the probe. Points A, B, and D are the centers of the three spheres, and the vector between two of these points is simply the difference of their coordinates. By definition, a vector has two components: direction and magnitude. Because CP is perpendicular to the plane defined by AB and AD, the direction of CP is defined by either (AB x AD) or (AD x AB). By further inspection of Fig 2.15, one can see that the direction of CP is in fact (AB x AD). In order to calculate this cross product, one must first calculate the vectors AB and AD. The following equations can be used to calculate AB and AD:

AB = (Bx - Ax, By - Ay, Bz - Az)

AD = (Dx - Ax, Dy - Ay, Dz - Az)

Therefore, (AB x AD) can be defined by the following equation:

AB x AD = ( [(ABy)(ADz) - (ABz)(ADy)], [(ABz)(ADx) - (ABx)(ADz)], [(ABx)(ADy) - (ABy)(ADx)] )

For simplicity of future equations, let us denote the x, y, and z components of (AB x AD) as Nx, Ny, and Nz, respectively, such that

Nx = (ABy)(ADz) - (ABz)(ADy)

Ny = (ABz)(ADx) - (ABx)(ADz)

Nz = (ABx)(ADy) - (ABy)(ADx)

and

AB x AD = (Nx, Ny, Nz)

Now that we have calculated the direction of CP, we are ready to define CP itself, because we already know that its magnitude is d, as can be seen from Fig 2.15. CP can be defined as the unit vector in the direction of (AB x AD) multiplied by the magnitude of CP, which is d. The unit vector of a vector is calculated by dividing each component of the vector by the square root of the sum of the squares of the components. Therefore, we can define CP with the following equation:

CP = d * (Nx, Ny, Nz) / sqrt(Nx^2 + Ny^2 + Nz^2)

From Fig 2.15 we can see that P = C + CP. Therefore, we can define P with the following equation, where all the values on the right side are originally known or have been previously calculated:

P = (Cx + CPx, Cy + CPy, Cz + CPz)
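The chain of calculations above (centroid, cross product, unit vector, then P = C + CP) can be collected into a short sketch; again Python stands in for the LabVIEW implementation, and any sample coordinates are invented for illustration:

```python
import math

def probe_tip(A, B, D, d):
    """Locate the probe scanner center P from the sphere centers
    A, B, D (each an (x, y, z) tuple) and the known distance d from
    the centroid of triangle ABD to the probe tip."""
    # Centroid C of triangle ABD
    C = tuple((A[i] + B[i] + D[i]) / 3.0 for i in range(3))
    # Edge vectors AB and AD
    AB = tuple(B[i] - A[i] for i in range(3))
    AD = tuple(D[i] - A[i] for i in range(3))
    # N = AB x AD, the direction of CP
    N = (AB[1] * AD[2] - AB[2] * AD[1],
         AB[2] * AD[0] - AB[0] * AD[2],
         AB[0] * AD[1] - AB[1] * AD[0])
    norm = math.sqrt(N[0] ** 2 + N[1] ** 2 + N[2] ** 2)
    # P = C + d * (unit vector of N)
    return tuple(C[i] + d * N[i] / norm for i in range(3))
```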

So far we have completed the calculations for the 3D position of the center of the ultrasound probe scanner. Next we have to calculate the plane of the 2D image associated with each 3D position. The four vertices of this plane are labeled m, n, o, and q. A plane can be defined by two non-collinear vectors. The two vectors that will define the plane of the 2D ultrasound image, according to Fig 2.15, are mn and mq. We will not know mq, but we will know CP. According to our design, as one can see in Fig 2.15, mq is parallel to CP. Therefore, we can define the plane of the 2D ultrasound image using two known vectors, mn and CP.

More specifically, one can define a plane geometrically using two non-collinear vectors. However, we must know the algebraic equation of the 2D ultrasound plane so that we can input it into a program for 3D reconstruction. To define a plane algebraically, one must know a normal vector to the plane and the coordinates of at least one point on the plane. If n = [a, b, c] is the normal vector to a plane, and (x0, y0, z0) is a point on the plane, then the plane can be defined by the equation ax + by + cz = k such that k = a*x0 + b*y0 + c*z0.

We can define our normal vector by the cross product of the two in-plane vectors: n = mn x CP. We can assign point P to (x0, y0, z0). Therefore, we can define the 2D ultrasound plane using the above equation for a plane such that a, b, and c are the x, y, and z components of n, respectively, and k = a*Px + b*Py + c*Pz.

So far we have determined how to locate the center of the 2D ultrasound scanner in space and how to determine the plane of the image. However, in reality the 2D image is not an infinite plane. Each image will have a certain length and width determined by the specific probe that we will use, which is undetermined at this point. Once we know the dimensions of each image, we can easily calculate the four corner points of the 2D rectangle or square.
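The algebraic plane construction can be sketched as follows, with the two spanning vectors passed in as arguments (in the report's terms these would be the in-plane vectors of the image and the point would be P; the names and any test numbers are ours):

```python
def plane_through(point, u, v):
    """Return (a, b, c, k) for the plane a*x + b*y + c*z = k that
    contains `point` and is spanned by the non-collinear vectors
    u and v. The normal (a, b, c) is the cross product u x v."""
    a = u[1] * v[2] - u[2] * v[1]
    b = u[2] * v[0] - u[0] * v[2]
    c = u[0] * v[1] - u[1] * v[0]
    k = a * point[0] + b * point[1] + c * point[2]
    return (a, b, c, k)
```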


2.2.2.8 Image Acquisition

Two low cost web cameras will be set up at a fixed distance from each other, with the ultrasound probe at their focal point. On top of the ultrasound probe will be three differently colored balls, all the same size and an equal distance apart, in a triangular formation. The size, color, and spacing of the balls will depend on the ability of the cameras to differentiate them from one another, as well as the capability of LabVIEW to recognize the balls as separate objects. It is most desirable to have the balls as small and as close together as possible, to reduce crowding of the tools the physician is using. This apparatus will be fixed to the ultrasound probe, so once the locations of the balls are calibrated, recognizing where the balls are reveals the location of the ultrasound probe itself.

A LabVIEW program will have each camera take images of the ultrasound probe tracking system at the same time and at the same frame rate as the ultrasound. The ultrasound produces an MP4 movie, so we need a camera image corresponding to each frame recorded by the ultrasound for proper reconstruction. Vision Acquisition software will provide us with the x and y positions of the spheres in each image. Using these, we can apply the stereo triangulation calculations to determine the z coordinate, providing us with the exact 3-D position of the ultrasound probe.
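Because the two streams must line up frame for frame, each ultrasound frame needs a matching camera frame. One way this pairing could be checked in software (a hypothetical sketch; the design itself drives both captures at the same frame rate in LabVIEW) is to match each ultrasound timestamp to the nearest camera timestamp:

```python
import bisect

def match_frames(ultrasound_ts, camera_ts):
    """For each ultrasound frame timestamp, return (timestamp, index)
    of the nearest camera frame. Both lists must be sorted ascending."""
    pairs = []
    for t in ultrasound_ts:
        i = bisect.bisect_left(camera_ts, t)
        # The nearest neighbor is camera_ts[i-1] or camera_ts[i]
        candidates = [j for j in (i - 1, i) if 0 <= j < len(camera_ts)]
        best = min(candidates, key=lambda j: abs(camera_ts[j] - t))
        pairs.append((t, best))
    return pairs
```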

2.2.2.9 Image Reconstruction

The ultrasound technology produces its images in the form of a DICOM (Digital Imaging and Communications in Medicine) file or an MP4 movie. This movie will be disassembled into multiple 2-D images using an MP4-to-JPG converter program. Once we have one JPG image for each frame of the MP4 movie, we will have an image of the tracking system corresponding to each ultrasound image. These 2-D images will then be reassembled into a 3D image using Biomedical Source Code 3.0. Once a 3-D image exists, it is possible to take slices at any angle to better understand the specifics of the imaged region for each patient.
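As a concrete illustration of the conversion step, the standard FFmpeg invocation for splitting a movie into numbered JPEGs can be sketched as below (the file names and the frame rate of 30 are placeholders; the argument list is built in Python so it can be launched with `subprocess.run`):

```python
def ffmpeg_split_command(movie_path, out_pattern="frame_%04d.jpg", fps=30):
    """Build the FFmpeg argument list that dissects an MP4 movie into
    one JPEG per frame at the given frame rate. Run it with, e.g.,
    subprocess.run(ffmpeg_split_command("scan.mp4"), check=True)."""
    return ["ffmpeg", "-i", movie_path, "-vf", "fps=%d" % fps, out_pattern]
```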


3 Realistic Constraints

3.1 Engineering

There are multiple components to this design, spread across a wide range of engineering disciplines. The largest component of our project involves computer programming, or more specifically bioinformatics. The main program we will use, which is available in our laboratories, is National Instruments' LabVIEW 2010. This program will be used for image recognition, image comparison, stereo triangulation calculations, and importing the separate JPEG images corresponding to the ultrasound movie.

We will need to use National Instruments' Vision Acquisition software, which is compatible with LabVIEW, but we do not currently have access to it and will need to obtain a license. If we are able to obtain a student license, our expectation is that it will be valid for a limited time, so we will need to do as much of our testing as possible in a short period. If the project succeeds, we hope the hospital will purchase the software and have full access to it for use with our design.

To disassemble the MP4 movie obtained from the ultrasound software already existing in the hospital, we will need to use FFMPEG software. The movie will be dissected into individual JPEG images based on the frame rate of the ultrasound movie. We also do not have this program at our disposal, so we will need to obtain a license for it. If we receive a student license, our expectation is that it will be for a limited amount of time, so again we will need to do as much of our testing as possible in a short period. When the design is finished, the hospital will need to purchase this software to disassemble the MP4 movies obtained during each procedure.

3.2 Economical

Our intention is to design a prototype which will be made freely available to hospitals across the country. There is currently a product which shows 3-D images of an area imaged with ultrasound, but it costs approximately $200,000. We are designing a product for under $1,000 which will do essentially the same thing. One of the main motivations for our project is to make it as economical as possible, so the majority of the components are separate pieces that we must assemble ourselves. This creates additional work for us, as well as the potential for a greater amount of time devoted to actually constructing the project.

Also, our budget does not include enough money to purchase licenses for the additional necessary programs, so we have to rely on free student versions which have a time limit. This increases the need to work quickly during the time we have these programs available.

3.3 Manufacturability

The mechanical components of this design were all purchased separately, and the majority of them are from 80/20 Inc. This ensures that the components will be compatible with each other and easily assembled. Because of economic constraints we were unable to send our design to a company and have them build exactly what we need, so much of the construction will take place in the design lab. The components that were not purchased from 80/20 Inc were designed and built in the engineering machine shop. For someone who does not have access to such a shop, it would be very difficult to construct these components, such as the casing for the camera.

The ultrasound probe which we are studying and designing the tracking system for costs $5,000, which is far out of our budget. There is one currently used clinically at Hartford Hospital, to which we have limited access. We only have access to the probe during non-business hours, and we cannot take it to school with us to secure the tracking system to it. All of the design will be based on measurements taken at the hospital, so we will need to leave room for expected errors in measurement and design. Once we finally assemble the tracking system on the probe, hospitals that use our design in the future will know for certain which measurements to use.

3.4 Ethical

When working in any environment with patients, it is necessary to abide by all HIPAA confidentiality laws. With our design, many new images associated with patients will be produced and processed. We will need to ensure all information is stored on a secure network. If the computer used to process the images and reconstruct them into a 3-D image is on the same network currently used for all other hospital documents, our understanding and expectation is that it will be safe. For all preliminary testing, we will use images of our own brachial plexuses, so we will not have to be concerned with confidentiality on our personal computers or the laboratory computers. Anyone using our design in its initial stages would need to obtain an MP4 movie from an outside source, unless they had access to an ultrasound probe to image themselves, because the hospital, by law, is not allowed to distribute images of patients if there is an identifying label on the image.

3.5 Health and Safety

The ultrasound probe is currently located at Hartford Hospital and is used clinically on a daily basis. Because patients are involved in these procedures and require this tool, we are unable to take the probe to the lab. Because our research and experiments require human subjects, we need to be extra cautious about safety concerns.

The optimal distance for the cameras is four feet away from the patient, which leaves very little room for clinicians to move around in the remaining space. Nothing can be in between the cameras and the patient, to ensure that nothing obstructs the cameras' view. Everything done in the operating room will now have to be restricted to an area 24 square feet smaller than was available prior to using this product.

3.6 Social and Political


BrainLAB, a company in Munich, Germany, which specializes in technology for neurosurgery, currently markets VectorVision. In April 1997, BrainLAB received 510(k) clearance from the Food and Drug Administration to market this product. VectorVision uses three reflective balls arranged in an equilateral triangle, imaged by two cameras. It is a platform for image-guided surgery, especially neurosurgical and orthopedic procedures. The system has no wires and can be integrated with any instruments currently used in the operating procedure. Using VectorVision, surgeons can follow the movements of their instruments on the computer screen in real time during surgical procedures. BrainLAB's product is the starting point for our project, but we hope to achieve the same goal at a small fraction of the cost.

Our design uses the same idea of three spheres arranged in an equilateral triangle, imaged by two cameras, with stereo triangulation calculations. For this reason we are unable to patent our design. If we added a new, innovative component we would be able to apply for a patent, but this is not our intention. We intend to use BrainLAB's design as a platform and provide our design to the public for free.


4 Safety Issues

The mechanical components of this project will be assembled by the members of our group, which naturally gives rise to safety concerns. Two of the members have been certified in Machine Shop Safety and are able to use the engineering machine shop. By using protective devices and procedures, we hope to avoid any hazards while constructing the fixture from these metal pieces. We ordered three 12' (3.66 m) beams instead of 20' (6.10 m) beams to eliminate potential hazards arising from delivering or working with such large pieces of metal.

Our constructed project will be placed inside an operating room, consuming space which is normally free for physicians and other personnel to move around in. There is a real likelihood that someone could bump into or knock over this fixture, giving rise to serious safety concerns. We may surround the metal with protective padding so that anyone who comes in contact with it will not get hurt. To limit the possibility of the fixture being knocked over, we have ensured that the feet of the structure are large enough that considerable force would be needed to tip it over.

The ultrasound probe is currently used clinically with patients on a daily basis in a sterile operating room. When we view and use the probe, we will need to properly disinfect our hands and the ultrasound probe before and after use. If we do not change into sterile clothing, we will need to be extra cautious about contact made with the probe, as well as with other instruments in the operating room. All of the components that we construct in the laboratory at school will need to be properly sterilized before they enter the operating room, as is the case with any surgical material.

The program which disassembles the MP4 movie produced by the ultrasound software into individual JPEG images, as well as the program which we will write to reconstruct these images into a 3-D image, both use a lot of processing memory. If the computer used to run these programs is not powerful enough, the programs will run slowly and the computer will heat up. This is a concern because internal components of the computer could be damaged if they get hot enough. We will try to alleviate this issue by using a computer with sufficient processing speed for these programs, as well as a good cooling system.


5 Impact of Engineering Solutions

Our design is intended to provide a cost-effective method for clinicians to view in 3-D what they currently have access to only in 2-D, specifically the brachial plexus. With our design, they will be able to use the MP4 movies already produced by the technology they have and view the anatomy in 3-D.

This tool will be especially useful for small hospitals that cannot afford to purchase the expensive products that reconstruct 3-D images. Any hospital that administers anesthesia via the brachial plexus will be specifically interested in our design, as it is designed for that purpose. These are not the only hospitals, though. Because our design is very general, it can be used for any ultrasound procedure, as long as the ultrasound setup with the tracking device is used and is imaged by both cameras. Virtually every hospital in this country uses ultrasound technology in some respect, ranging from echocardiograms to ultrasounds of fetuses during pregnancy. With our design, each of these procedures can now be viewed in 3-D instead of 2-D. Especially for pregnancy ultrasounds, where people take home copies of the images, parents will be able to see their child in three dimensions. If this becomes common, the view and expectation of ultrasounds in general will change, initially for the hospitals that use it. As more and more hospitals adopt the design, which will be made available to them for free, this change will spread globally.


6 Life-Long Learning

Throughout the course of this semester, we have had to learn many new techniques to accomplish various tasks. The majority of these pertain to the design and planning stages of project development. The first and most essential task was recognizing each team member's qualities and learning styles to ensure that we could work together effectively. Based on these qualities, tasks have been divided to ensure that each person contributes in the areas most suited to their interests and strengths. Each of us contributes a significant amount to writing components of this project, as it is necessary for engineers to be able to communicate their ideas effectively.

We have all had to learn how to develop a timeline for a project and have realized that tasks are not always completed at the desired speed. This is one issue in particular that can drastically affect a team's ability to complete projects.

This project required various oral presentations using PowerPoint as a resource. Through these presentations our public speaking skills have been tested and have grown as a result. The ability to express thought processes orally, especially in terms of problem solving in the realm of engineering, is an extremely useful quality in any area of interest or career.

The future of this project involves much more structure in terms of outlining tasks and completing them in a timely manner. It is expected that not everything will go exactly as planned, which is why in our timeline the majority of the most time-consuming tasks associated with research and programming have been clustered early in the next semester. This leaves ample time before the due date for changes and complications that may arise. It is expected that through all of these processes we will become more efficient and conscientious engineers, able to take these skills into the work force.


7 Budget and Timeline

7.1 Budget

We have been budgeted $1,000 by the UConn School of Engineering to complete this project. All planning and purchasing have been made and projected with this budget in mind. It is expected that we will complete this project well under budget. Purchase requisitions, located in the appendix, indicate how much we have spent and on which products.

7.2 Timeline

Task Name Duration Start Finish

Cameras 100 days Mon 9/20/10 Fri 2/4/11
Determine which web camera is most effective and cost efficient for project 1 day Mon 9/20/10 Mon 9/20/10
Fill order requisition form for cameras 4 days Tue 9/21/10 Fri 9/24/10
Verify that ordered parts arrive 5 days Mon 10/4/10 Fri 10/8/10
Determine necessary alignment of cameras for stereo triangulation 5 days Mon 9/20/10 Fri 9/24/10
Machine the camera bracket 27 days Mon 10/4/10 Tue 11/9/10
Attach cameras to camera beam via camera connection 2 days Mon 11/8/10 Wed 11/10/10
Determine the optimal distance between the two cameras 4 days Tue 1/18/11 Fri 1/21/11
Determine the optimal height of the cameras 5 days Mon 1/31/11 Fri 2/4/11
Determine the optimal distance between the cameras and the patient 5 days Mon 1/31/11 Fri 2/4/11

Camera Bracket 32 days Mon 9/27/10 Tue 11/9/10
Determine mechanical design for support of cameras based on necessary alignment 5 days Mon 9/27/10 Fri 10/1/10
Accurately measure camera dimensions 1 day Mon 10/4/10 Mon 10/4/10
Determine most efficient and cost effective material for camera mounting design 2 days Mon 10/11/10 Tue 10/12/10
Acquire material for camera mount 1 day Tue 10/12/10 Tue 10/12/10
Machine camera mount 7 days Mon 10/18/10 Tue 10/26/10
Machine camera connection to mount 10 days Mon 10/4/10 Fri 10/15/10
Connect camera to camera connection 2 days Mon 11/8/10 Tue 11/9/10
Connect mount to camera connection (using a pin as a hinge) 2 days Mon 11/8/10 Tue 11/9/10

Tracking Pyramid 85 days Mon 9/27/10 Sun 1/23/11
Determine optimal orientation of three spheres to each other 5 days Mon 9/27/10 Fri 10/1/10
Determine optimal orientation of three spheres to ultrasound probe 5 days Mon 9/27/10 Fri 10/1/10
Determine optimal size of spheres 10 days Mon 12/6/10 Fri 12/17/10
Determine the optimal colors of the three spheres 10 days Mon 12/6/10 Fri 12/17/10
Determine optimal distance of spheres from each other 10 days Mon 12/6/10 Fri 12/17/10
Determine optimal distance of spheres from probe 10 days Mon 12/6/10 Fri 12/17/10
Determine optimal design for attachment of tracking pyramid to ultrasound probe 5 days Mon 11/15/10 Fri 11/19/10
Determine the optimal material for the spheres 5 days Mon 12/6/10 Fri 12/10/10
Determine the optimal material to connect the spheres to the ultrasound probe 5 days Mon 12/6/10 Fri 12/10/10
Fill order requisition form for parts for tracking pyramid 5 days Tue 12/7/10 Sat 12/11/10
Verify that ordered parts arrive 5 days Mon 12/13/10 Fri 12/17/10
Build tracking pyramid from many optimizations 5 days Tue 1/18/11 Sun 1/23/11
Test various background colors 5 days Mon 12/6/10 Fri 12/10/10

Support Structure 10 days Sat 10/9/10 Fri 10/22/10
Determine mechanical design for support of cameras based on necessary alignment 4 days Wed 10/13/10 Mon 10/18/10
Determine most effective and cost efficient material for support 3 days Tue 10/12/10 Thu 10/14/10
Determine amount of beam material needed (T-slotted aluminum framing) 4 days Sat 10/9/10 Wed 10/13/10
Determine various mounting hardware for support 4 days Wed 10/13/10 Sat 10/16/10
Fill order requisition form for material from 80/20 2 days Sun 10/17/10 Mon 10/18/10
Verify that ordered parts arrive 3 days Mon 10/18/10 Wed 10/20/10
Determine mechanical design for camera mounts to fit necessary alignment restrictions 3 days Wed 10/20/10 Fri 10/22/10
Label heights in one inch increments on support beams 2 days
Determine the optimal height of the cameras 2 days

CAD 9 days Tue 10/19/10 Fri 10/29/10
CAD of Camera Mount 3 days Wed 10/27/10 Fri 10/29/10
CAD of Camera 3 days
CAD of Support Structure 4 days Tue 10/19/10 Fri 10/22/10
CAD of Tracking Pyramid 4 days Wed 10/20/10 Sun 10/24/10
CAD of Ultrasound Probe 4 days
CAD of all together 7 days

Ultrasound Probe 1 day Thu 10/28/10 Thu 10/28/10
Accurately measure ultrasound probe for accurate attachment of tracking pyramid 1 day Thu 10/28/10 Thu 10/28/10
Practice using ultrasound in Hartford Hospital 1 day Thu 10/28/10 Thu 10/28/10
Understand how the Sonosite M-Turbo Ultrasound machine works 1 day Thu 10/28/10 Thu 10/28/10
Understand how clinicians acquire ultrasound images 1 day
Learn how to use the various applications of the Sonosite system 1 day

LabVIEW 15 days Tue 11/9/10 Mon 11/29/10
Obtain license for LabVIEW Vision Assistant 2010 (LVA) 1 day Tue 11/9/10 Tue 11/9/10
Determine how to acquire two images in LVA simultaneously 1 day Wed 11/10/10 Wed 11/10/10
Acquire two images simultaneously 2 days Thu 11/11/10 Fri 11/12/10
Ensure the two images have the same frame rate 2 days Mon 11/15/10 Tue 11/16/10
Determine how to detect the three spheres in the images 1 day Wed 11/10/10 Wed 11/10/10
Use known distances of object and acquire images of object to determine the value of K in the stereo triangulation equations 2 days Wed 11/17/10 Thu 11/18/10
Test various background colors 2 days Thu 11/11/10 Fri 11/12/10
Implement the mathematical equations determined earlier into the main LabVIEW program 1 day
Practice following the tracking pyramid with the two webcams 3 days Mon 11/15/10 Wed 11/17/10
Have the LabVIEW program output four endpoints of the images in 3D space corresponding to each 2D ultrasound image 1 day
Practice acquiring the tracking pyramid images and ultrasound images simultaneously 3 days Thu 11/18/10 Mon 11/22/10
Troubleshoot main LabVIEW program whenever necessary 5 days Tue 11/23/10 Mon 11/29/10
After main LabVIEW program, use the development suite to create a stand-alone version 3 days Tue 11/16/10 Thu 11/18/10

MATLAB (Alternative to LabVIEW) 15 days Tue 11/9/10 Mon 11/29/10
Determine how to use image acquisition toolbox in MATLAB
Determine how to use the image processing toolbox
Learn how to acquire two images
Record from two cameras simultaneously
Record from two cameras at the same frame rate
Learn how to begin and end recording simultaneously
Write an M-File that can record and stop recording from the two cameras simultaneously at the same frame rate
Troubleshoot MATLAB program whenever necessary

Stereo Triangulation 10 days Tue 11/9/10 Mon 11/22/10
Determine best method to detect 3D location of object in space from two images 1 day Tue 11/9/10 Tue 11/9/10
Determine the mathematical calculations necessary in order to find the 3D position of the spheres 2 days Wed 11/10/10 Thu 11/11/10
Modify the general stereo triangulation equations to fit our specific model using a constant to convert from LabVIEW's image to real-world distances 1 day Fri 11/12/10 Fri 11/12/10
Determine the value of K by imaging objects at known distances 4 days Wed 11/17/10 Mon 11/22/10

Client and Relevant Meetings 1 day Tue 11/9/10 Tue 11/9/10
Initial meeting with Dr. McIsaac 1 day
Follow-up meeting with Dr. McIsaac 1 day
Meeting with Taylor and Justin for knowledge on previous work 1 day
Meeting with Dr. McIsaac to practice with the Ultrasound technology 1 day
Meeting with Dr. Defaria (NI Representative) for LabVIEW licenses 1 day
Final meeting with Dr. McIsaac to approve project 1 day

Project Assignments 1 day? Tue 11/9/10 Tue 11/9/10
Project statement
Project specifications
Project proposal
Alternative designs
Optimal designs
Fall semester final report
Fall semester final presentation
Weekly reports
Spring semester final report
Spring semester final presentation

8 Team Members' Contributions to the Project

8.1 Michael Golden

Michael brainstormed the majority of the ideas for the various structural and mechanical components of this project. There have been many proposed ideas for the camera casing as well as the camera support. These designs were initially hand drawn but were then transferred to CAD drawings in Visio. All CAD drawings in the presentations and papers were constructed by Michael.

Michael conducted the research into which materials are most effective for this aspect of the project and completed the necessary financial paperwork, including product requisition forms. When the group decided on a final optimal design, Michael ordered the necessary parts and worked closely with the machine shop staff to construct all mechanical parts. The camera casing was constructed from material available in the machine shop while waiting for the 80/20 Inc. T-slotted aluminum to be shipped. Once the aluminum arrived, Michael cut the pieces to the desired lengths and constructed the support seen in Figure 2.11.

8.2 Khayriyyah Munir

Khayriyyah has taken on as her main responsibility the majority of the administrative tasks associated with this project. There have been multiple reports due and presentations to give, to which Khayriyyah has devoted most of her efforts. This project also requires a lot of contact with various company representatives, aside from the client. She has been in contact with representatives at Sonosite to try to obtain specifications and dimensions of the probe. She has also set up meetings with various professors and graduates of UConn to gather more ideas about the best method for reconstructing two-dimensional images into a three-dimensional image. Khayriyyah plans to devote the majority of her future effort to 3D reconstruction research and programming. This component of the project is quickly getting underway, especially now that the team members have LabVIEW on their personal computers and can test various third-party programs for compatibility and efficiency.

8.3 Omid Nasser Bigdeli

Omid has taken on as his main contribution the mathematical components of this project. This included determining various designs for the configuration of the spheres and their orientation to the probe. The members of the group decided together which of these designs was optimal for the project. Omid also researched which cameras were suitable for our project and decided on the best camera. He completed all necessary paperwork, including the purchase requisition form, and obtained the cameras. Omid has also explored various programming options, including image acquisition and recognition in MATLAB. It is expected that Omid will contribute significantly to the LabVIEW program for image acquisition and recognition.


9 Conclusion

This project is intended to ease the process of administering anesthesia to the brachial plexus of patients prior to surgery. Currently, anesthesiologists at Hartford Hospital use ultrasound technology with a Sonosite SLA probe. Our client, Dr. Joseph McIsaac, is the chief trauma anesthesiologist at Hartford Hospital and is sponsoring this project.

The design of the project is to produce a 3D image from the two-dimensional images generated by the ultrasound machine. A tracking pyramid consisting of three differently colored spheres will be attached to the ultrasound probe. Using two web cameras in conjunction with LabVIEW's image acquisition technology, we will take pictures of this tracking pyramid and calculate the position and movement of the probe using stereo triangulation equations. The images taken by the cameras will be synced with the images produced by the ultrasound machine, so that only the ultrasound images corresponding to known probe positions are used. The 2D ultrasound images will then be reconstructed into a 3D image using reconstruction software compatible with LabVIEW.
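The stereo triangulation step described above can be sketched as follows for the simplest parallel-camera geometry (as in Iocchi's reference on stereo triangulation). This is an illustrative outline only, not our implementation: the actual system is written in LabVIEW, and the function name and the example focal length, baseline, and pixel coordinates below are assumed values for demonstration.

```python
def triangulate(xl, yl, xr, f, b):
    """Parallel-axis stereo triangulation sketch (illustrative, not the LabVIEW code).

    xl, yl : coordinates of a tracked sphere in the left image,
             measured from the optical center (pixels)
    xr     : x coordinate of the same sphere in the right image (pixels)
    f      : camera focal length (pixels)
    b      : baseline distance between the two camera centers (same unit as output)
    """
    disparity = xl - xr
    if disparity == 0:
        # Zero disparity means the point is effectively at infinity.
        raise ValueError("zero disparity: depth cannot be recovered")
    z = f * b / disparity  # depth of the sphere from the camera pair
    x = xl * z / f         # lateral position
    y = yl * z / f         # vertical position
    return x, y, z

# Example with assumed values: f = 800 px, baseline = 0.1 m,
# sphere seen at (100, 50) px in the left image and x = 80 px in the right.
x, y, z = triangulate(100.0, 50.0, 80.0, 800.0, 0.1)
```

Applying this to each of the three colored spheres gives three 3D points, from which the full position and orientation of the probe can be recovered.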

Using LabVIEW’s Developer Suite, a stand-alone program will be made so that any user with a computer can make use of this device. We will provide the design and information pertaining to this project entirely free to the public so that others can make use of this technology as well. It is hoped that with such a low cost, even small clinics and hospitals that cannot afford the products which currently exist (priced above $100k) will be able to image in three dimensions what they now image in two dimensions.

10 References

[1] SonoSite. MicroMaxx Transducers: SLA Probe. <http://www.sonosite.com/products/micromaxx/transducers/>

[2] 80/20 Inc. Product Catalog. <http://www.8020.net/>

[3] Iocchi, Luca. Stereo Vision: Triangulation. 6 April 1998. <http://www.dis.uniroma1.it/~iocchi/stereo/triang.html>

11 Acknowledgements

Sponsor – We would first like to acknowledge Dr. Jay McIsaac for proposing this project. He has provided much guidance and plenty of ideas for the most successful route to completing this project in a timely manner.

Professor – We would like to acknowledge Dr. John Enderle for advising us throughout this semester. His contribution has enabled us to plan for and follow our project guideline effectively, while leaving room and expectations for adjustments as necessary.

Summer Interns – We would also like to acknowledge two Avon High School robotics team members, Taylor Amarel and Justin Yost, who contributed a significant amount of LabVIEW code for this project. They volunteered in the anesthesiology department at Hartford Hospital in the summer of 2010, working specifically on coding for this project.

Machine Shop Engineers – We would like to acknowledge Peter Glaude and Serge Doyon for their assistance and guidance in designing and constructing all mechanical supports and devices necessary for this project.

Teaching Assistants – We would like to acknowledge Marek Wartenberg and Emily Jacobs for the lengthy hours they have contributed to assisting us with the various components of this project, particularly the EKG project, which taught us various circuitry and LabVIEW skills.

BME Administrative Staff – We would like to thank Kerry Wenzler for ordering everything necessary for the project and notifying us when things arrive.


12 Appendix

12.1 Purchase Requisitions

PURCHASE ORDER REQUISITION - UCONN BME SENIOR DESIGN LAB
Instructions: Students are to fill out boxed areas with white background. Each vendor will require a different purchase requisition.

Date: 10/22/2010
Team #: Team 3
Student Name: Michael Golden
Total Expenses: $288.25
Ship to: University of Connecticut, Biomedical Engineering, U-2247, 260 Glenbrook Road, Storrs, CT 06269-2247, Attn: Project Sponsor
Project Name: 3D Ultrasound Reconstruction
(Lab Admin only: FRS #, Student Initial Budget, Student Current Budget)

ONLY ONE COMPANY PER REQUISITION

Catalog #   Description                                  Unit   QTY   Unit Price   Amount
1010-145    1X1 BI-SLOT OPPOSITE T-SLOT 145" PROFILE      1       3     $33.35    $100.05
3321        1/4-20X.5 FBHSCS & TNUT                       1      75      $0.50     $37.50
3393        1/4-20X.5 BHSCS & TNUT                        1      20      $0.40      $8.00
4013        10S 6 HOLE INSIDE CORNER BRACKET              1       2      $5.20     $10.40
4112        10S 7 HOLE TEE JOINING PLATE FOR 1010         1       2      $8.15     $16.30
4141        10S 4 HOLE TEE JOINING PLATE                  1       2      $5.80     $11.60
4145        10S 4 HOLE 45 DEG. ANGLE JOINING PLATE        1       8      $5.10     $40.80
6415        10S SINGLE FLANGE LINEAR BEARING BRAKE KIT    1       2     $38.60     $77.20
6850        10S L-HANDLE LINEAR BRAKE                     1       2      $9.55     $19.10

Shipping: $0.00
Total: $320.95

Vendor: 80/20 Inc.
Address: 1701 South 400 East, Columbia City, IN 46725
Phone: (260) 248-8030
Vendor Accepts Purchase Orders? (Yes or No)
Authorization: ______________________________
Contact Name: ______________________________


PURCHASE ORDER REQUISITION - UCONN BME SENIOR DESIGN LAB
Instructions: Students are to fill out boxed areas with white background. Each vendor will require a different purchase requisition.

Date: September 14, 2010
Team #: 3
Student Name: Omid Nasser Bigdeli
Total Expenses: $0.00
Ship to: University of Connecticut, Biomedical Engineering, U-2247, 260 Glenbrook Road, Storrs, CT 06269-2247, Attn: Project Sponsor
Project Name: 3D Ultrasound Project
(Lab Admin only: FRS #, Student Initial Budget, Student Current Budget)

ONLY ONE COMPANY PER REQUISITION

Catalog #   Description                Unit   QTY   Unit Price   Amount
            Logitech Webcam Pro 9000            2     $59.99    $119.98

Comments: http://www.amazon.com/Logitech-Webcam-9000-Built-Microphone/dp/B002M78ECK

Shipping: $0.00
Total: $119.98

Vendor:
Address:
Phone:
Vendor Accepts Purchase Orders? (Yes or No)
Authorization: ______________________________
Contact Name: ______________________________