Smart Sensors and MEMS | Smart acoustic sensor array (SASA) system for real-time sound processing applications

  • Published on
    01-Apr-2017


© Woodhead Publishing Limited, 2014

17 Smart acoustic sensor array (SASA) system for real-time sound processing applications

M. TURQUETI, E. ORUKLU and J. SANIIE, Illinois Institute of Technology, USA

DOI: 10.1533/9780857099297.2.492

Abstract: This chapter describes the design and implementation of a smart acoustic sensor array (SASA) system that consists of a 52-microphone micro-electromechanical systems array embedded in a field-programmable gate array platform with real-time data acquisition, signal processing and network communication capabilities. The SASA system is evaluated using several case studies in order to demonstrate the versatility and scalability of the sensor array platform for real-time sound analysis applications. These case studies include sound mapping and source localization.

Key words: acoustic MEMS array, sound localization, field-programmable gate arrays, smart sensors.

17.1 Introduction

Designing a smart acoustic sensor array (SASA) system presents significant challenges due to the number of sensors, the processing speed and the complexity of the targeted applications. This chapter describes the design and implementation of a 52-microphone micro-electromechanical systems (MEMS) array, embedded in a field-programmable gate array (FPGA) platform with real-time processing capabilities. In this type of system, characteristics such as speed, scalability and real-time signal processing are paramount and, furthermore, highly advanced data acquisition and processing modules are necessary. Sound processing applications such as sound source mapping, source separation and localization impose several requirements that the system must meet, such as array spatial resolution, low reverberation and real-time data acquisition and processing.

The acoustic MEMS array (AMA) system presented in this study is designed to meet these challenges.
The design takes into account mechanical factors such as array geometry, sensor disposition and microphone reverberation. In addition, the design of the electronics system addresses electronic noise, power decoupling, cross-talk and connectivity. The system integrates the acoustic array with real-time data acquisition, signal processing and network communication capabilities. FPGA-based smart sensor nodes are examined in several case studies, demonstrating the versatility and scalability of the sensor array platform for real-time applications. These case studies include sound mapping and source localization.

Microphone arrays are capable of providing spatial information for incoming acoustic waves. Arrays can capture key information that would be impossible to acquire with single microphones; however, this additional information comes at a price in terms of the increased complexity of the system. The design of microphone arrays faces several challenges, such as the number of detectors, array geometry, reverberation, interference issues and signal processing (Buck et al., 2006). These factors are crucial to the construction of a reliable and effective microphone array system.

Microphone arrays present additional challenges for signal processing methods, since large numbers of detecting elements and sensors generate large amounts of data to be processed. Furthermore, applications such as sound tracking and sound source localization require complex algorithms in order to process the raw data appropriately (Benesty et al., 2008). Challenging environments, such as multiple sound sources moving against background noise, are especially difficult to deal with when real-time processing is required. It is also important for such a system to be easily scalable, as demonstrated by Weinstein et al. (2004) and Benesty et al. (2008).
The performance of a microphone array typically increases linearly with the size of the array.

The system presented in this project is designed to provide an acoustic data acquisition system that is flexible and expandable, with powerful real-time signal processing capability, in order to be interfaced with a wide variety of sensor arrays. The data acquisition and processing architecture presented in this work is called the compact and programmable daTa acquisition node (CAPTAN) (Turqueti et al., 2008), and the acoustic array embedded in the system is known as an AMA. The combination of AMA and CAPTAN is known as a SASA system.

The CAPTAN architecture is a distributed data acquisition and processing system that can be employed in a number of different applications, ranging from a single sensor interface to multi-sensor array data acquisition and high-performance parallel computing (Rivera et al., 2008). This architecture has the unique features of being highly expandable, interchangeable and adaptable, with high computational power inherent in its design. The AMA array was designed to conform to, and to take advantage of, the CAPTAN architecture. It is an acoustic array that employs sound or ultrasound sensors distributed in two dimensions.

17.2 MEMS microphones

MEMS microphones have been introduced onto the market in large numbers by the advent of mobile communication devices. Mobile phones require inexpensive, robust and reliable microphones, and MEMS microphones provide a perfect match. Since the introduction of the MEMS microphone, its performance has improved vastly. It now has comparable or better performance than conventional magnetic or piezoelectric microphones, especially with regard to frequency response and signal-to-noise ratio.
There are many possible implementations of MEMS microphones; the implementation utilized in this work is based on capacitive transduction. Capacitive MEMS microphone operation is based on the change in the capacitance of the sensor element within the microphone when acoustic waves interact with it. This is usually achieved by placing a metalized membrane a few microns from a metalized back plate, with the air gap acting as the dielectric. The distance between the membrane and the back plate fluctuates with the air pressure changes caused by incoming acoustic waves, so the device operates as a parallel-plate capacitor. The MEMS microphone utilized in this project also employs an acoustic chamber to increase the acoustic pressure on the membrane and to improve its gain. In order to translate the small capacitance oscillations into electrical signals, a pre-amplifier is embedded in the same package. Usually, the pre-amplifier is built using standard complementary metal-oxide-semiconductor (CMOS) processes, while the membrane is deposited in additional steps. Frequently, these additional steps involve the sputtering of several metal layers, photoresist processes and polyimide. For the Knowles SPM208 speaker microphone (Knowles, 2006), the sensor element and the pre-amplifier are built on two different wafers, comprising two chips that are electrically connected by wire bonds. The microphone membrane is built on top of a silicon substrate (see Fig. 17.1).

17.1 MEMS microphone utilized in this work. (a) Pre-amplifier, (b) acoustic transducer and (c) aluminum cover forming resonant cavity.
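The parallel-plate model just described can be sketched numerically. The membrane area, air gap and deflection below are illustrative assumptions, not the SPM208's actual geometry:

```python
# Sketch of the parallel-plate model behind capacitive MEMS microphones.
# Membrane area, gap and deflection are assumed values for illustration.

EPS0 = 8.854e-12  # vacuum permittivity, F/m (air gap is close to vacuum)

def plate_capacitance(area_m2, gap_m):
    """Ideal parallel-plate capacitance C = eps0 * A / d."""
    return EPS0 * area_m2 / gap_m

area = (0.5e-3) ** 2   # assumed 0.5 mm x 0.5 mm membrane
gap = 3e-6             # assumed 3 micron air gap
c_rest = plate_capacitance(area, gap)

# An incoming pressure wave deflects the membrane by, say, 10 nm:
c_deflected = plate_capacitance(area, gap - 10e-9)

print(f"rest capacitance: {c_rest * 1e12:.3f} pF")
print(f"relative change for 10 nm deflection: {(c_deflected / c_rest - 1) * 100:.2f}%")
```

The sub-picofarad scale and fraction-of-a-percent swing make clear why the on-package pre-amplifier mentioned above is needed before the signal can leave the chip.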
17.3 Fundamentals of acoustic sensor arrays and applications

There is currently a significant amount of research, and there are numerous applications, using sound or ultrasound for communications, detection and analysis. Some of the research topics are multi-party telecommunications; hands-free acoustic human-machine interfaces; computer games; dictation systems; hearing aids; medical diagnostics; the structural failure analysis of buildings or bridges, and the mechanical failure analysis of machines such as vehicles or aircraft; and robotic vision, navigation and automation (Llata et al., 2002; Nishitani et al., 2005; Alghassi, 2008; Kunin et al., 2010, 2011; Eckert et al., 2011; Kim and Kim, 2011). In practice, there are numerous issues encountered in the real-world environment which make the realistic application of this theory significantly more difficult (Chen, 2002; Rabinkin et al., 1996; Brandstein and Ward, 2001). These issues include ambient sound and electrical noise; the presence of wideband non-stationary source signals; the presence of reverberation echoes; high-frequency sound sources, which require higher-speed systems; and fluctuations in the ambient temperature and humidity, which affect the speed at which sound waves propagate.

The geometry of the array can play an important part in the formulation of the processing algorithms (Benesty et al., 2008). Different applications require different geometries in order to achieve optimum performance. In applications such as source tracking, the array geometry is very important in determining the performance of the system (Nakadai et al., 2002). Regular geometries, where sensors are evenly spaced, are preferred in order to simplify development of the algorithm.
Linear arrays are usually applied in medical ultrasonography, planar arrays are often used in sound source localization, and three-dimensional spherical arrays are most frequently used in sophisticated sonar applications. In other applications, such as source separation, the geometry of the array is not as important as transducer characteristics such as the dynamic range and the transducer aperture. The size of an array is usually determined by the frequency at which the array will operate and the kind of spatial resolution required by the application using the acoustic array (Weinstein et al., 2004).

A data acquisition system is necessary to condition, process, store and display the signals received by the array. Data acquisition for an acoustic array can be very challenging, but most of the issues are correlated with the types of sensor, the number of sensors and the array geometry. The application also plays an important role in defining the needs of the array for signal processing or real-time operation requirements.

Beamforming is a technique used in transducer arrays for establishing the signal beam field and directivity for transmission or reception. The spatial directivity is achieved by the use of interference patterns to change the angular directionality of the array. When used for the transmission of signals, the transmitted signal is steered: the amplitude and phase of the individual elements are controlled to act in unison through patterns of constructive and destructive interference, so that the resulting wavefront concentrates the energy of the array in the desired direction. There are two categories of beamforming: static and adaptive (Hodgkiss, 1980; Campbell, 1999). Static beamforming involves the use of a fixed set of parameters for the transducer array.
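A static beamformer of the delay-and-sum kind can be sketched as follows. The 10 mm spacing matches the AMA described later in this chapter, but the sample rate, tone frequency and angles are illustrative assumptions, and integer-sample delays stand in for the fractional-delay filters a real system would use:

```python
import numpy as np

# Minimal delay-and-sum (static) beamformer sketch for a linear array.
# Geometry, sample rate and steering angles are illustrative assumptions.

C = 340.0        # speed of sound, m/s
FS = 48_000      # sample rate, Hz
SPACING = 0.01   # 10 mm inter-microphone spacing
N_MICS = 8

def delay_and_sum(signals, angle_deg):
    """Steer a linear array toward angle_deg (0 = broadside).

    signals: (n_mics, n_samples) array of simultaneous recordings.
    Each channel is advanced by the extra propagation delay its
    microphone sees for a wave from angle_deg, then all are averaged.
    """
    angle = np.deg2rad(angle_deg)
    out = np.zeros(signals.shape[1])
    for m in range(signals.shape[0]):
        delay_s = m * SPACING * np.sin(angle) / C  # path delay vs mic 0
        delay_n = int(round(delay_s * FS))
        out += np.roll(signals[m], -delay_n)       # undo the delay
    return out / signals.shape[0]

# Usage: simulate an 8 kHz tone arriving from 30 degrees off broadside.
t = np.arange(FS) / FS
src = np.sin(2 * np.pi * 8000 * t)
sigs = np.stack([
    np.roll(src, int(round(m * SPACING * np.sin(np.deg2rad(30)) / C * FS)))
    for m in range(N_MICS)
])
steered = delay_and_sum(sigs, 30)      # delays realigned: coherent sum
missteered = delay_and_sum(sigs, -60)  # misaligned: partial cancellation
print(np.max(np.abs(steered)), np.max(np.abs(missteered)))
```

Steering toward the true 30-degree direction reproduces the tone at nearly full amplitude, while steering elsewhere largely cancels it, which is exactly the directivity the text describes.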
Adaptive beamforming, on the other hand, can adapt the parameters of the array in accordance with changes in the application environment. Adaptive beamforming, although computationally demanding, can offer better noise rejection than static beamforming.

17.4 Design and implementation of a smart acoustic MEMS array (AMA)

The design process for the AMA involves selecting the optimum topological distribution of sensors, the number of microphones, the inter-microphone spacing and the microphone type. The presented acoustic array was designed to operate at sonic frequencies ranging from 100 Hz to 17 kHz, while maintaining a flat response within 10 dB between 100 Hz and 10 kHz. Also, for beamforming, the spatial resolution should allow the sampling of frequencies up to 17 kHz. In order to avoid spatial aliasing and to obtain a good acoustic aperture, the inter-microphone distance was determined to be 10.0 mm from center to center. This spacing makes it possible to obtain relevant phase information from incoming acoustic waves, increasing the array sensitivity and allowing spatial sampling of frequencies up to 17 kHz without aliasing. The inter-microphone spacing of 10.0 mm was calculated by finding the wavelength of the desired 17 kHz upper limit, utilizing Equation [17.1]:

λ = c / f_u [17.1]

where c is the speed of sound (340 m/s), f_u is the upper limit frequency and λ is the upper limit wavelength. Solving Equation [17.1], λ was calculated to be 2.00 cm. The inter-microphone spacing is calculated to satisfy the Nyquist-Shannon sampling theorem, and is therefore equal to the upper-limit wavelength divided by two, which results in an array inter-microphone spacing of 1.00 cm (i.e., 10.0 mm).

The number of microphones is a compromise between the size of the board, the electronics needed to deal with the massive amount of data and the need to achieve a signal-to-noise ratio of at least 20 dB. The AMA board (see Fig.
17.2) consists of 52 MEMS microphones distributed in an octagonal layout of 8 columns and 8 rows, with 2 central transmitter elements. The central elements can be used for calibration purposes or for sonar applications. When used as a calibration element, the central element emits a set of mono-component frequencies. These signals are then captured by the microphones for calibrating the response of the array, taking into account the spatial distribution of the microphones. When the central element is used for active sonar applications, a series of pre-programmed pulses is emitted; when the emitted waves encounter obstacles, they bounce back to the array, giving information about the distance and position of the obstacles.

The MEMS microphones are the fundamental components of the array, and their design and fabrication must offer high sensitivity and low reverberation noise. The overall sensitivity of the array increases monotonically with the number of sensors. The MEMS microphone chosen for this array (the SPM208) has a sensitivity of 1 V/Pa at 1 kHz (Knowles, 2006). Figure 17.1 shows the selected MEMS microphone. These microphones are omnidirectional and, when combined into an array, they provide a highly versatile acoustic aperture. The selected MEMS microphone's frequency response is essentially flat from 1 to 8 kHz, with a low-frequency limit of 100 Hz and a high-frequency limit of 25 kHz. The microphones are glued with silver epoxy to the copper pads on the board...

17.2 Top view of the CAPTAN readout system, with the AMA board hosting the 52 MEMS microphones. This package is named the smart acoustic sensor array (SASA).
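The spacing derivation of Equation [17.1] in Section 17.4 can be checked numerically, using the chapter's values of 340 m/s and 17 kHz:

```python
# Reproducing the inter-microphone spacing calculation of Equation [17.1].
c = 340.0       # speed of sound, m/s
f_u = 17_000.0  # upper limit frequency, Hz

wavelength = c / f_u      # Equation [17.1]: lambda = c / f_u
spacing = wavelength / 2  # Nyquist-Shannon spatial sampling criterion

print(f"upper-limit wavelength: {wavelength * 100:.2f} cm")   # 2.00 cm
print(f"inter-microphone spacing: {spacing * 1000:.1f} mm")   # 10.0 mm
```

This recovers the 2.00 cm wavelength and the 10.0 mm center-to-center spacing quoted in the text.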
