

Gradient Flow Acoustic Localization with the Khepera II Robot

Mack Ward 1,2, Shuo Li 3, Milutin Stanacevic 3

1. Department of Computer Science and Engineering, University at Buffalo, Buffalo, NY 14260
2. Department of Electrical Engineering, University at Buffalo, Buffalo, NY 14260
3. Department of Electrical & Computer Engineering, Stony Brook University, Stony Brook, NY 11794

Background

• Acoustic localization is the process of determining the direction of the source of a sound.

• It is commonly performed by comparing temporal differences between sound waves impinging on microphone arrays.

• As microphone arrays become smaller, this process fails because the time delays between signals become too small to resolve (see the numerical sketch at the end of this list).

• Gradient flow localization solves this issue by converting the temporal differences between sound waves from phase shifts into altitudinal (amplitude) differences, which can be used to localize a sound source more accurately.

• This method of acoustic localization was used to design an acoustic navigation system for the Khepera II robot, in order to demonstrate the robustness and applicability of such an algorithm.
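As a rough sense of scale, the following minimal MATLAB sketch computes the worst-case delay across a small array; the 1 cm microphone spacing and 48 kHz sampling rate are illustrative assumptions, not parameters of the actual system:

%Worst-case inter-microphone delay for a small array
%(d0 and fs below are illustrative assumptions)
d0 = 0.01;                %microphone-to-center distance [m]
c = 343;                  %speed of sound in air [m/s]
tau_max = 2*d0/c;         %delay between opposite microphones [s]
fs = 48e3;                %a typical audio sampling rate [Hz]
fprintf('max delay: %.1f us = %.2f samples\n', tau_max*1e6, tau_max*fs);
%prints roughly 58.3 us = 2.80 samples, too coarse for
%sample-level phase comparison to resolve finely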


References

Stanacevic, Milutin, and Gert Cauwenberghs. “Micropower Gradient Flow Acoustic Localization.” IEEE Transactions on Circuits and Systems - I: Regular Papers 52.10 (2005): 2148-57. Print.

Stanacevic, Milutin, Gert Cauwenberghs, and George Zweig. “Gradient Flow Adaptive Beamforming and Signal Separation in a Miniature Microphone Array.” Acoustics, Speech, and Signal Processing 4 (2002): 4016-19. Print.

Stanacevic, Milutin, Gert Cauwenberghs, and Larry Riddle. “Gradient Flow Bearing Estimation with Blind Identification of Non-Stationary Signal and Interference.” Circuits and Systems 5 (2004): V-5. Print.

Future Work

• After multiple tests, it was concluded that the Khepera II acoustic navigation system fails due to communication issues between the Khepera II CPU and the FPGA board.

• There are several possible explanations for this issue, including:

• The clock signal output of the robot does not function properly.

• The reset pin output voltage of the robot is too low.

• The digital FPGA configuration prevents the robot from reading or writing to it.

• The circuitry on the FPGA board is incorrect, disallowing stable connectivity.

• In order to address this issue, further testing would need to be done to determine the precise location of the problem, after which a targeted solution could be proposed and implemented.

Acknowledgements

I want to thank Kathryne Piazzola for her guidance, helpfulness, and positivity throughout the summer. I also want to thank Dr. Gary Halada for his dedication to the REU program. I am grateful for having the opportunity to participate in such a rewarding experience so early in my collegiate career. In addition, I want to thank Chris Young, Kate Dorst, and Katherine Foret for their valuable instruction and insight into the graduate student experience. I also want to thank the CIE for their contributions to the program. Lastly, I want to thank my fellow REU students. This summer wouldn’t have been the same without them.

This research was made possible by the NSF REU Nanotechnology site administered by the CIE under NSF grant # 1062806.

Methodology

The project consisted of integrating several separate components into one fully functioning mobile robotic acoustic localization device.

Khepera II Robot (Figure 4)

• The Khepera II is a small programmable robot designed to test real-world applications of obstacle avoidance, sensory navigation, and behavioral algorithms.

• It features programmable flash memory, a serial connection port, and pins for generic or user-designed external extension turrets.

Microphone Board (Figure 5)

• A preexisting custom analog microphone array board was utilized to capture sound waves and send corresponding electrical signals to the digital FPGA board.

FPGA Board (Figure 6)

• A custom digital analysis board was implemented to interpret varying voltages from the analog microphone board and convert the values into digital signals.

• This digital information was processed by a gradient flow chip, stored in a field programmable gate array, and made accessible to the Khepera II robot.

MATLAB (Figure 7)

• The MATLAB software was used to communicate with and control the Khepera II robot. Script files implemented the code necessary for acoustic localization.

• Connection between the robot and the PC was established via direct serial link from the Khepera II to a USB port on the computer.
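As a sketch of how such a serial session might be opened and tested, assuming kopen/kclose helpers modeled on the K-Team/KiKS MATLAB toolboxes (the function names, port, baud rate, and timeout are all assumptions, not taken from the original poster code):

%Hypothetical session setup; kopen/kclose and all parameter values
%are assumptions modeled on the K-Team/KiKS MATLAB toolboxes
ref = kopen([0, 9600, 100]);   %serial port 0, 9600 baud, 100 ms timeout
kcmd(ref, 'B', 1);             %'B' = read firmware version, as a link test
kclose(ref);                   %release the serial port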

Figure 4 Khepera II robot with serial cable attachment and visible expansion pins

Figure 5 Analog microphone board showing four coplanar microphones

Figure 6 Digital FPGA board demonstrating pin connections and gradient flow chip

Figure 7 Communication between the Khepera II CPU and MATLAB

Hardware Testing

• Proper function of the acoustic navigation system required successful integration of three major hardware components: the Khepera II robot, the microphone board, and the FPGA board (Figure 8, Figure 9).

• The complete functionality of the Khepera II motors, sensors, and CPU was confirmed by testing the autonomous and serial command modes.

• The microphone board was tested with a power supply and oscilloscope to ensure correct functioning of the microphones and output pins (Figure 10).

• Multiple challenges with the FPGA board and the Khepera II/FPGA interface restricted the implementation of the system.

• These difficulties were addressed by individually testing the pin connections on the FPGA board using a breadboard (Figure 11).

• Unfortunately, when supplied with power, a clock signal, and other appropriate connections (Figure 12), the FPGA board entered an unstable state, disallowing proper communication between the FPGA and the robot (Figure 13).

Software Development

Step 1: MATLAB retrieves microphone signal data from the FPGA and stores it in an array.

Step 2: The received data is converted from binary 2's complement encoding to decimal (a worked example follows this list).

Step 3: The information is added together to give the total delay between microphones.

Step 4: The delays are used to calculate the angle of the source relative to the robot.

Step 5: The robot is commanded to turn towards the source of the sound and move forward.
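As a worked instance of Step 2, the sketch below decodes one hypothetical 8-bit reading; the bit pattern is made up for illustration, and the full script appears in the Software Development listing:

%Worked example of Step 2: decoding the (illustrative) 8-bit
%two's complement pattern 11000000, most significant bit first
bits = [1 1 0 0 0 0 0 0];
val = -128*bits(1) + sum(bits(2:8) .* 2.^(6:-1:0));
fprintf('decoded value: %d\n', val);   %prints -64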

Gradient Flow Localization

• Consider a sound wave impinging on an array of four coplanar microphones (Figure 1).

• Assume the sound source lies in the array's far field, so the wavefronts arriving at the microphones are approximately straight lines.

• From the geometry in Figure 1 we can directly observe (1)–(4).

• The signal observed at each microphone is represented by the function (5).

• However, trying to determine $\tau_1$ and $\tau_2$ by comparing phase shifts is ineffective (Figure 2).

• In order to accurately determine $\tau_1$ and $\tau_2$, we first consider the general formula for a Taylor series expansion (6).

• Rewrite (5) as a series expansion in (7).

• We discard the higher-order terms of the expansion because, for such small delays, they are negligible.

• Let a new function, $X_m(t)$, represent the signals observed at each microphone (8)–(12).

• Now we are able to manipulate these functions to accurately determine $\tau_1$ and $\tau_2$.

• From (9) - (12) we get (13) and (14).

• From (9) and (11) we get (15) and (16).

• From (10) and (12) we get (17) and (18).

• Now we are able to accurately calculate $\tau_1$ and $\tau_2$ by comparing altitudinal differences instead of phase shifts (Figure 3).

• Finally, from (2) and (4) we get (19).

$d_1 = d_0 \cos\theta$ (1),  $\tau_1 = \dfrac{d_0 \cos\theta}{c}$ (2)

$d_2 = d_0 \sin\theta$ (3),  $\tau_2 = \dfrac{d_0 \sin\theta}{c}$ (4)

$S(t + \tau_m)$ (5)

$f(x) = f(a) + \dfrac{f'(a)}{1!}(x-a) + \dfrac{f''(a)}{2!}(x-a)^2 + \dfrac{f'''(a)}{3!}(x-a)^3 + \cdots$ (6)

$S(t + \tau_m) \approx S(t) + \tau_m \dot{S}(t) + n_m(t)$ (7)

$X_{m_0}(t) = S(t) + n_{m_0}(t)$ (8)

$X_{m_1}(t) = S(t) + \tau_1 \dot{S}(t) + n_{m_1}(t)$ (9)

$X_{m_2}(t) = S(t) + \tau_2 \dot{S}(t) + n_{m_2}(t)$ (10)

$X_{m_3}(t) = S(t) - \tau_1 \dot{S}(t) + n_{m_3}(t)$ (11)

$X_{m_4}(t) = S(t) - \tau_2 \dot{S}(t) + n_{m_4}(t)$ (12)

$S(t) \approx \dfrac{X_{m_1} + X_{m_2} + X_{m_3} + X_{m_4}}{4}$ (13)

$\dot{S}(t) \approx \dfrac{\frac{d}{dt}X_{m_1} + \frac{d}{dt}X_{m_2} + \frac{d}{dt}X_{m_3} + \frac{d}{dt}X_{m_4}}{4}$ (14)

$X_{m_1}(t) - X_{m_3}(t) = 2\tau_1 \dot{S}(t)$ (15)

$\tau_1 = \dfrac{X_{m_1}(t) - X_{m_3}(t)}{2\dot{S}(t)}$ (16)

$X_{m_2}(t) - X_{m_4}(t) = 2\tau_2 \dot{S}(t)$ (17)

$\tau_2 = \dfrac{X_{m_2}(t) - X_{m_4}(t)}{2\dot{S}(t)}$ (18)

$\theta = \tan^{-1}\!\left(\dfrac{\tau_2}{\tau_1}\right)$ (19)
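To see the derivation in action, the following MATLAB sketch simulates equations (13)–(19) for an assumed 1 kHz tone arriving at 30° on a 1 cm array; the least-squares averaging of (16) and (18) over all samples is a numerical convenience added here, not part of the original derivation:

%Simulated check of the gradient flow bearing estimate
%(signal, spacing, and bearing are illustrative assumptions)
c = 343; d0 = 0.01;                    %speed of sound [m/s], spacing [m]
theta_true = 30*pi/180;                %true bearing [rad]
tau1 = d0*cos(theta_true)/c;           %eq. (2)
tau2 = d0*sin(theta_true)/c;           %eq. (4)
fs = 1e6; t = (0:1/fs:0.01)';          %finely sampled time base
S = @(tt) sin(2*pi*1000*tt);           %1 kHz source signal
Xm1 = S(t + tau1); Xm3 = S(t - tau1);  %opposite pair on the x axis
Xm2 = S(t + tau2); Xm4 = S(t - tau2);  %opposite pair on the y axis
Sdot = gradient((Xm1+Xm2+Xm3+Xm4)/4, 1/fs);  %estimate of S'(t), eq. (14)
%Least-squares forms of eqs. (16) and (18)
tau1_hat = sum((Xm1-Xm3).*Sdot) / (2*sum(Sdot.^2));
tau2_hat = sum((Xm2-Xm4).*Sdot) / (2*sum(Sdot.^2));
theta_hat = atan2(tau2_hat, tau1_hat)*180/pi  %eq. (19), approximately 30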

Figure 1 A plane sound wave impinging on the four coplanar microphones $m_1$–$m_4$, each at distance $d_0$ from the reference point $m_0$, with angle of incidence $\theta$

Figure 2 The delayed signals $S(t + \tau_m)$, whose phase shifts $\tau_1$ and $\tau_2$ are too small to compare directly

Figure 3 The signals $X_{m_0}(t)$, $X_{m_1}(t)$, and $X_{m_2}(t)$, from whose altitudinal differences $\tau_1$ and $\tau_2$ are recovered

Symbol legend:
c = speed of sound
$\theta$ = angle of incidence of the sound wave
$m_0$ = center of the array, used as the reference point
$m_1$–$m_4$ = microphones
$\tau_m$ = wavefront delay from one microphone to $m_0$
$\tau_1$ = time delay along the horizontal axis
$\tau_2$ = time delay along the vertical axis
$d_0$ = distance between each microphone and $m_0$
$d_1$ = distance of wavefront delay along the horizontal axis
$d_2$ = distance of wavefront delay along the vertical axis

Figure 8, Figure 9 Integration of the three major hardware components: the Khepera II robot, microphone board, and FPGA board
Figure 10 Power supply and oscilloscope testing of the analog microphone board

%This script implements gradient flow localization with the
%Khepera II robot: it reads the two delay values from the FPGA,
%decodes them, and steers the robot toward the sound source.

%Gets 8 bits of data from the FPGA chip, one byte per read;
%bit 0 of each byte carries the y-axis (tau01) bit and bit 1
%carries the x-axis (tau10) bit
bits = zeros(2,8);   %row 1: x-axis bits, row 2: y-axis bits
for c = 1:8
    b1 = kReadByte(ref,2);
    %Y axis
    bits(2,c) = rem(b1,2);
    %X axis
    bits(1,c) = rem(floor(b1/2),2);
end

%2's complement decoding: bit 1 is the sign bit, bits 2-8 carry
%weights 64 down to 1
vals = zeros(2,8);
for y = 1:2
    if bits(y,1) == 0
        %Positive value: weight each magnitude bit directly
        for x = 2:8
            vals(y,x) = 64/(2^(x-2)) * bits(y,x);
        end
    else
        %Negative value: -1 minus the weights of the zero bits
        vals(y,1) = -1;
        for x = 2:8
            if bits(y,x) == 0
                vals(y,x) = -64/(2^(x-2));
            end
        end
    end
end

%Each row sums to one signed delay value
delays = sum(vals,2);

%Eliminates the possibility of dividing by 0
if delays(1) == 0
    delays(1) = 1e-6;
end

%Returns an angle theta (in degrees) of the direction of the
%sound source
theta = 180/pi * atan(delays(2)/delays(1));

%Establishes 0 degrees perpendicular to the robot
turn = theta - 90;

%Sets the right and left wheel targets to face the source of
%sound (2040 pulses = one revolution)
left  = turn/360*(-2040);
right = turn/360*(2040);

%Positions the robot to face the sound
kMoveTo(ref,left,right);

%Moves towards the sound
kcmd(ref,'D,2,2',1);

Figure 11 Individual testing of the FPGA board pin connections on a breadboard
Figure 12 The FPGA board supplied with power, a clock signal, and its other connections
Figure 13 The unstable FPGA state that prevented communication with the robot
