International Review on Computers and Software (I.RE.CO.S.), Vol. 8, n. 2
ISSN 1828-6003 February 2013
Manuscript received and revised December 2012, accepted February 2013
Copyright © 2013 Praise Worthy Prize S.r.l. - All rights reserved
Smart Camera Based on FPGA Oriented to Embedded Image Processing

Yahia Said1, Taoufik Saidani1, Fethi Smach2, Mohamed Atri1, and Hichem Snoussi3
Abstract – This paper presents an image processing system based on a smart camera platform, whose two principal elements are a wide-VGA CMOS sensor and a Field Programmable Gate Array (FPGA). The latter is used to control the various sensor parameter configurations and, where desired, to receive and process the images captured by the CMOS sensor. With today's highly integrated FPGAs, it is possible to have a software-programmable processor and hardware computing resources on the same chip. Apart from providing sufficient logic blocks on which the hardware is implemented, these chips also embed a processor with system software around which the application software is built. In this paper, the Spartan-3A DSP based Xilinx VSK platform is used for developing the proposed extensible hardware-software video streaming and processing modules. To develop the required hardware and software in an integrated fashion, the Xilinx Embedded Development Kit (EDK) design tool has been used. A number of Xilinx-provided IPs are customized to realize the hardware modules in the FPGA fabric. Copyright © 2013 Praise Worthy Prize S.r.l. - All rights reserved.
Keywords: FPGA, CMOS Sensor, Smart Camera, HW/SW Co-design
I. Introduction
FPGA-based embedded systems are of increasing
importance especially in the signal and image processing
domain. For instance, intelligent embedded systems for
image processing, such as smart cameras, rely on FPGA-
based architectures [1]. One of the most important
advantages of the FPGA is the ability to exploit the
inherently parallel nature of many vision algorithms. The
role of FPGA in embedded systems is gaining importance
due to its increasing capabilities and availability of
powerful Electronic Design Automation (EDA) tools.
The amount of resources in today's FPGAs is quite high
and can practically handle many processing operations.
Data coming from the sensor or any acquisition device is
directly processed by the FPGA; no other external
resources are necessary.
A smart camera captures images or streaming video, converts them into digital form, processes and interprets the data it acquires in real time, and makes decisions intelligently. It captures a high-level representation of a scene and performs real-time analysis of what it perceives [2], [3]. Thus, a smart camera is a camera that has the ability not only to take pictures but also, more importantly, to make intelligent decisions about what is happening in the image and, in some cases, take appropriate actions on behalf of the camera user. The
block diagram of the smart camera system is shown in
“Fig. 1”.
Fig. 1 Block diagram of smart camera system
The smart camera system consists of a data acquisition block and an application-specific data processing block. It processes the captured real-time video within its region-of-interest (ROI) and displays the processed results on a monitor. It may contain a wide range of algorithms to extract meaning from the streaming video [17]. This device can support a wide variety of applications, including remote surveillance, motion analysis, traffic monitoring, etc. [2]. The architecture of smart camera systems has been presented in [3], [4], based on which an application developer can construct a parallel image processing application with minimal effort.
If the image capture block can be likened to being the
eyes of the camera, the ASIP (application-specific
information processing) block can be thought of as its
brain, which is like a computer that makes the camera
smart. On the hardware side, the ASIP block consists of
one or more microprocessors with associated memory,
communication buses, and other circuits or components.
In this paper, we present a solution aimed at
integrating hardware and software processing in the same
device, specifically, a smart camera based on Spartan 3A
DSP. The aim was to implement image processing
applications mainly on reconfigurable hardware, that is,
not only to carry out the classical hardware image pre-
processing (gain correction and noise pattern correction,
decimation/binning, compression, etc.), but also to
implement processing algorithms capable of extracting
more abstract information (features extraction,
segmentation, classification, etc.). The system was
completed with a Microblaze processor for
implementing/managing mixed processing, which
combined the execution of hardware processing blocks
with software processing blocks.
As regards image capture, we chose the Aptina
MT9V032 [5], which captures a wide-VGA-sized color
image at 60 frames per second (fps). It was not the aim of
this study to implement novel image processing, and
accordingly, the camera implemented image processing
based on color space conversion, which enabled us to
validate the present proposal. With this system, color space conversion is implemented in the hardware block, thereby validating the intelligent camera architecture.
The remainder of the paper is divided into the
following sections: Section 2, which reviews some of the
most important studies regarding FPGA image
processing; Section 3, which details the structure of the
design presented here; Section 4, which gives the results
obtained in this study, and lastly; Section 5, which
summarizes the most important conclusions.
II. Related Studies
Smart cameras as embedded systems have to meet the
insatiable demand of video processing on one hand, and
to meet the challenging demands of embedded systems,
such as real-time, robustness, reliability under real-world
conditions, on the other hand. This has made smart
cameras a leading-edge application for embedded
systems research [2]. Recently there has been a
significant increase in research in building smart cameras
as embedded systems.
The Embedded Systems Group in Princeton
University's Department of Electrical Engineering [2] has
developed an embedded smart camera system that can
detect people and analyze their movement in real time.
The SmartCam project at University of Technology
Eindhoven [6] is an interesting research activity
involving the design of stand-alone smart cameras. This
project investigates multi-processor based smart camera
system architectures and addresses the critical issue of
determining correct camera architectural parameters
for a given application domain.
The SeeMOS architecture presented by LASMEA [7]
consists of a Cypress LUPA4000 monochrome image
sensor, an Altera Stratix FPGA complemented by a Texas
Instruments TMS320C6455 DSP and a FireWire
communication interface. The SeeMOS camera is
designed to provide ample image processing flexibility,
and algorithms can be implemented on either software or
hardware.
The high-speed intelligent camera developed in the Le2i laboratory [8] uses the Aptina MT9M413C36STM image sensor. A Xilinx Virtex-II FPGA is used to pre-process images at the pixel level (Sobel filter, erosion/dilation, center-of-mass calculation, etc.) before transmission via a USB 2.0 interface.
The Elphel NC353 camera [9] is based on a 5 Mpixel Aptina MT9P001 color image sensor, a Xilinx Spartan III FPGA, and a 32-bit ETRAX processor running GNU/Linux. It is distributed with HDL modules for
image pre-processing (Fixed Pattern Noise (FPN)
elimination, RS compensation, etc.) and image
compression (JP4, JP6, etc.) via implementation of an
efficient pipeline which provides a continuous flow of
images.
Architecture for image acquisition and processing
using a CMOS sensor, which has been interfaced with
FPGA platform for the smart camera system, is presented
in [10]. A Xilinx Virtex-4 FX (XC4VFX12) FPGA-based platform was used, which contains an embedded PowerPC405 microprocessor. The authors noted that implementing complex image processing algorithms would require an FPGA device with a larger amount of resources, because the available 5,472 FPGA slices were not sufficient.
The Virtex-5 based Xilinx ML-507 platform is used
in [11], for developing an extensible hardware-software
video streaming module in a smart camera system. It contains an embedded PowerPC440 microprocessor for real-time video processing applications. The module, as
designed in [11] is easily configurable for various video
resolutions. Without any modification to the hardware, it
can be extended and used in various image and video
processing applications.
III. Embedded Architecture for Video
Acquisition/Processing
We have proposed an embedded architecture for
image and video acquisition and processing modules for
the smart camera system. The design facilitates the streaming of video from the camera to the monitor through the DDR2 memory and the FPGA logic in real time. The
system architecture as proposed is shown in “Fig. 2”. It
consists of the Aptina MT9V032 CMOS image sensor
[5], the Xilinx Spartan-3ADSP FPGA board, and a DVI
monitor for displaying the output video.
The MT9V032 active imaging pixel array is 752H x
480V. It incorporates sophisticated camera functions on-
chip, such as averaging 2 x 2 and 4 x 4, to improve
sensitivity when operating in smaller resolutions, as well
as windowing and column and row mirroring.
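The 2 x 2 averaging (binning) mode sums each 2 x 2 neighborhood and divides by four. The sensor performs this on-chip, so the C sketch below only illustrates the arithmetic, not the actual implementation:

```c
#include <stdint.h>

/* Average each 2x2 neighborhood of a w x h 10-bit image (stored in
 * uint16_t) into a (w/2) x (h/2) output, mirroring what the MT9V032
 * binning mode does on-chip. w and h are assumed even. */
void bin2x2(const uint16_t *in, uint16_t *out, int w, int h)
{
    for (int y = 0; y < h / 2; y++) {
        for (int x = 0; x < w / 2; x++) {
            uint32_t sum = in[(2 * y) * w + 2 * x]
                         + in[(2 * y) * w + 2 * x + 1]
                         + in[(2 * y + 1) * w + 2 * x]
                         + in[(2 * y + 1) * w + 2 * x + 1];
            out[y * (w / 2) + x] = (uint16_t)(sum / 4);
        }
    }
}
```

Because four photosite values contribute to each output pixel, binning trades spatial resolution for improved sensitivity, as noted above.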
Fig. 2 depicts the MicroBlaze processor (with block RAM on the ILMB/DLMB buses and IXCL/DXCL cache links) connected over the PLB to the UART, GPIO (DIP switches, push buttons, LEDs), XPS IIC, and the Camera_In, HW Processing, Video to VFBC, Display Controller, and DVI_OUT cores; the MPMC connects the DDR2 SDRAM through VFBC ports, with the CMOS sensor feeding the input side and a VGA/DVI display at the output.
Fig. 2 Architecture of Intelligent Camera based on FPGA
The MT9V032 can be operated in its default mode or be
programmed for frame size, exposure, gain setting, and
other parameters. The default mode outputs a wide-VGA-
sized image at 60 frames per second (fps). An on-chip
analog-to-digital converter (ADC) provides 10 bits per
pixel. In addition to a traditional, parallel logic output,
the MT9V032 also features a serial low voltage
differential signaling (LVDS) output [5]. The block
diagram of this sensor is presented in “Fig. 3”.
Fig. 3 The MT9V032 sensor Block Diagram
The Spartan-3A DSP (XC3SD3400A-4FGG676C) FPGA mainly provides the following internal resources: 53,712 logic cells, 126 DSP48A slices, 2,268 Kb of block RAM (BRAM), Input-Output Blocks (IOBs), Digital Clock Managers (DCMs), hardware multipliers, and the MicroBlaze soft processor core.
In this design, the MicroBlaze embedded processor is used to interface the FPGA-based custom modules and IPs and to configure the platform peripherals.
The software environment of the system consists of
application software and device drivers. The hardware
part of the system includes the configurable logic blocks
in FPGA. This integration of software and hardware
provides the complete system functionality.
The smart camera system requires interfacing the CMOS camera with the FPGA board. The high-speed
LVDS data stream from the camera is received and
deserialized using the DS92LV1212A deserializer chip.
This is capable of carrying LVDS data from a camera
which has a pixel rate of 26.6 MHz.
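As a sanity check on this figure, the sensor's active pixel throughput in its default wide-VGA mode stays below the 26.6 MHz pixel clock; the remainder covers horizontal and vertical blanking:

```c
#include <stdint.h>

/* Active pixel throughput of the MT9V032 in its default wide-VGA
 * mode: 752 x 480 pixels at 60 frames per second. */
enum { WIDTH = 752, HEIGHT = 480, FPS = 60 };

static long active_pixel_rate(void)
{
    return (long)WIDTH * HEIGHT * FPS;  /* 21,657,600 pixels/s */
}
```

Roughly 21.7 Mpixels/s of active video against a 26.6 MHz pixel clock leaves about 18% of each frame period for blanking intervals.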
The Camera_In peripheral core brings in the input signals
from the input chip, registers the signals, and groups the
video signals into a unified bus that can be connected to
other IPs for processing. The MT9V032 image sensor
can be controlled via IIC programming. The IIC
programming is not handled by the Camera_In peripheral
core. The MicroBlaze processor performs the IIC
processing by way of the XPS_IIC peripheral core.
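The shape of one such register write can be sketched in C: an 8-bit register index followed by a big-endian 16-bit value, as we understand the MT9V032's two-wire protocol from its datasheet [5]. The register index 0x0B used in the usage note is an illustrative assumption, and on the real platform the bytes would be handed to the XPS_IIC core by the MicroBlaze driver rather than built by hand:

```c
#include <stdint.h>

/* Build the 3-byte two-wire (IIC) payload for one MT9V032 register
 * write: an 8-bit register index followed by the 16-bit value,
 * most significant byte first. */
static void mt9v032_write_payload(uint8_t reg, uint16_t value,
                                  uint8_t buf[3])
{
    buf[0] = reg;
    buf[1] = (uint8_t)(value >> 8);   /* high byte first */
    buf[2] = (uint8_t)(value & 0xFF); /* then low byte */
}
```

For example, mt9v032_write_payload(0x0B, 0x01E0, buf) would produce the bytes {0x0B, 0x01, 0xE0} for the IIC controller to transmit after the sensor's slave address.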
To interface a DVI monitor, the DVI_OUT port of the Spartan-3A DSP board has been used, after configuring the on-board video display controller chip registers through the IIC bus. The application software is written in C and runs on the Xilinx-provided standalone software platform [12]. Further, it uses the developed APIs as needed and also employs the required ones provided by the software platform [12].
In this embedded architecture, on-board peripherals such as the Display Controller (DC) are used together with Xilinx-provided IPs such as the Multi-Port Memory Controller (MPMC) [13], the Digital Clock Manager (DCM) [14], and the Xilinx Platform Studio (XPS) IIC controller.
Apart from this variety of IPs, the architecture uses the 128-bit Processor Local Bus (PLB) protocol [15], which provides the infrastructure for connecting a number of PLB masters and slaves into an overall PLB system. Based on these bus interfaces, the detailed descriptions of the peripheral cores are as follows:
III.1. Camera_In Core
The Camera_In peripheral core provides a connection
to the Aptina MT9V032 Digital Image Sensor and
transports the video stream to the FPGA in a serial
format. The serial video is de-serialized by the
DS92LV1212A deserializer chip.
The MT9V032 Digital Image Sensor that is
associated with the Camera_In peripheral core must be
properly initialized to function correctly. It also has a
number of image processing capabilities that can be
controlled by the user.
III.2. HW Processing Core
The main tasks performed by the HW Processing
peripheral core are to provide stuck pixel correction,
color interpolation, brightness control, contrast control, and color space conversion from RGB to gray.
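The RGB-to-gray conversion amounts to per-pixel fixed-point arithmetic. The ITU-R BT.601 luma weights below are an assumption on our part (the paper does not state which coefficients the core uses), but the structure, a multiply-accumulate followed by a shift, is what maps naturally onto the FPGA's DSP slices:

```c
#include <stdint.h>

/* RGB -> gray with 8-bit fixed-point BT.601 weights:
 * Y ~= 0.299 R + 0.587 G + 0.114 B.
 * The weights 77 + 150 + 29 sum to 256, so the result only
 * needs a shift right by 8 -- no division in hardware. */
static uint8_t rgb_to_gray(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint8_t)((77u * r + 150u * g + 29u * b) >> 8);
}
```

A hardware version of this expression needs three multipliers and two adders per pixel, which is why such conversions are cheap to stream at the sensor's full pixel rate.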
III.3. Video to Frame Core
The Video to Frame peripheral core controls the
storing of video frames into frame buffers. It writes the
video data to the VFBC interface on the MPMC memory
controller. Using the vsync signal, the Video to VFBC core generates properly timed command words for the VFBC. When a vsync signal is received, a new video frame is about to arrive. The peripheral core responds by incrementing to the next frame buffer to be written and issues a command to the VFBC to start writing to the new buffer. It then waits until it receives the DE (Data Enable)
signal, indicating active video, at which time it writes the
data to a VFBC data FIFO. The peripheral core continues
the process of writing data to the VFBC until it receives
the next vsync. The peripheral core also sends a gray-coded frame index to the display controller to facilitate frame synchronization.
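Gray-coding the frame index means the multi-bit count crosses the clock-domain boundary between the write and display sides with at most one bit changing per increment, so a value sampled mid-transition is off by at most one frame. A minimal sketch of the encode, plus the decode the reading side would need (the helper names are ours, not the core's):

```c
#include <stdint.h>

/* Binary -> Gray: adjacent counts differ in exactly one bit. */
static uint8_t bin_to_gray(uint8_t n)
{
    return n ^ (n >> 1);
}

/* Gray -> binary for an 8-bit value, folding the XOR prefix down. */
static uint8_t gray_to_bin(uint8_t g)
{
    uint8_t n = g;
    for (uint8_t shift = 4; shift > 0; shift >>= 1)
        n ^= n >> shift;
    return n;
}
```

In the hardware the encode is a single row of XOR gates on the counter output, which is why gray-coded indices are the standard idiom for passing counts between asynchronous clock domains.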
III.4. Multi-Port Memory Controller (MPMC)
The MPMC is a parameterizable memory controller that supports DDR2 SDRAM [13]. The MPMC provides access to memory for one to eight ports and has been used here for interfacing to the DDR2 SDRAM. The Video Frame Buffer Controller (VFBC) is a special interface for video frame data and a vital part of the MPMC. It is used in video applications where hardware control of two-dimensional (2D) data is needed to achieve real-time operation. The VFBC allows a user-defined Intellectual Property (IP) block to read and write data in 2D sets regardless of the size or the organization of external memory transactions. It has separate asynchronous First In, First Out (FIFO) interfaces for write data input, command input, and read data output.
III.5. Display Controller Core
The Display Controller peripheral core reads video
frames out of memory and displays them to a DVI/VGA
display at any resolution requiring up to a 74.25 MHz
pixel clock. The peripheral core generates properly timed video synchronization signals for any given resolution, based on software configuration registers and
the frequency of the display clock input. The display
clock is generated by an external clock generation chip. It
also reads video data from frame buffers via the VFBC.
When the peripheral core generates a vsync signal, the
block looks at the gc_frame_count signal to determine
which frame buffer it needs to read from next. It then
sends a command to the VFBC to initiate a read of the
frame. When the block asserts the DE signal, it reads
video data from the VFBC read port FIFO. This process
is continued until the next vsync is issued indicating the
current frame is finished.
Software configurable registers are used for the
locations of the frame buffers, for holding VFBC
commands, and for video sync timing information. This
gives the Display Controller the flexibility to support a
wide range of resolutions. This is limited by the
frequency of the display clock (currently limited to 74.25
MHz) and the width of the timing registers. The highest
resolution supported is 1920x1080P.
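As a worked example of this limit: assuming the standard CEA-861 blanking totals of 2200 pixels per line and 1125 lines per frame for 1920x1080 (figures we supply here; the paper gives only the 74.25 MHz ceiling), a 1080p frame at 30 fps needs exactly the maximum supported pixel clock, while 1080p at 60 fps would need 148.5 MHz and is therefore out of reach:

```c
/* Required pixel clock = total pixels per line (active + blanking)
 * times total lines per frame times the frame rate. */
static long pixel_clock_hz(long htotal, long vtotal, long fps)
{
    return htotal * vtotal * fps;
}
```

For instance, pixel_clock_hz(2200, 1125, 30) yields 74,250,000 Hz, matching the Display Controller's 74.25 MHz ceiling.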
III.6. DVI_OUT Core
The DVI_OUT peripheral core provides a connection
to the CH7301C DVI Transmitter Device. This
peripheral core brings in the DVI_VIDEO_IN bus and
formats the video data to the format required by the
CH7301C. The CH7301 is capable of driving either
digital DVI displays or analog VGA displays.
The peripheral core supports multiple modes and
video resolutions. IIC programming is used to set the CH7301C into the desired mode and resolution. The IIC
programming is not handled by the DVI_OUT peripheral
core. The MicroBlaze processor performs the IIC
processing by way of the XPS IIC peripheral core.
IV. Results
The real-time video in RGB format is captured from the Aptina MT9V032 CMOS image sensor. The captured video is color-space converted by the HW Processing block. The gray-level video is converted into frames and buffered in the DDR2 SDRAM memory using the Multi-Port Memory Controller (MPMC). The stored frames are converted to the VGA resolution of 640x480 and displayed on the DVI monitor. The architecture uses the Xilinx Spartan-3A DSP FPGA board.
The embedded Microblaze processor is used to configure
the on-board peripherals. A captured video frame with
the complete set-up of the design is shown in “Fig. 4”.
We have used the ISE, EDK and SDK Xilinx tools,
version 10.1. Table 1 gives a summary of the resources
consumed in the FPGA. The device utilization summary
shows that, with the proposed embedded architecture
based video acquisition and processing modules, the
remaining FPGA resources are sufficient for
implementing many practical real-time video processing
applications.
Fig. 4 Complete set-up of the design
TABLE I
SUMMARY OF INTERNAL FPGA RESOURCES USED

Resource Type       Used     Available   Utilization
Slices              9094     23872       38%
Slice Flip Flops    11451    47744       24%
4-input LUTs        12883    47744       27%
Bonded IOBs         78       469         17%
BRAMs               69       126         55%
DSP48s              7        126         6%
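The utilization column follows from the Used and Available counts by nearest-integer rounding; a small C check of that relationship (the rounding rule is our assumption, not stated in the paper):

```c
/* Percent utilization rounded to the nearest integer, using pure
 * integer arithmetic: (used * 100 + available / 2) / available. */
static int utilization_pct(long used, long available)
{
    return (int)((used * 100 + available / 2) / available);
}
```

Every row of Table I is consistent with this rule, e.g. 9094 of 23,872 slices gives 38%.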
An analysis of the number of slices occupied in the
different components of the system is given in “Fig. 5”,
showing a comparative diagram of distribution of
occupied slices in the system. It can be seen that the
component which requires the most slices is the Multi-Port Memory Controller (MPMC), with a 13% occupation
rate, followed by the Microblaze core with an occupation
rate of around 6% and the hardware processing block,
occupying around 4% of slices.
As regards timing and frequency constraints, Table 2
shows the results obtained from the implementation tool
following synthesis.
Fig. 5 Internal FPGA resources occupied by each module of Fig. 2
V. Conclusions
The growing popularity of smart cameras can be
attributed to the progress made in semiconductor process
technology, embedded computer vision techniques, along
with socio-economic factors such as safety and security,
increase in productivity and cost savings. Today, smart
camera products are used in many real-world
applications, especially in video surveillance, industrial
machine vision, robotics, games and toys, and human–
computer interfaces [16].
This paper presents the architecture for image capture
and processing using a CMOS Digital Image Sensor
implemented on an FPGA. In the present proposal, a
versatile hardware/software system was implemented on
the above platform, which enabled us to develop an
intelligent camera based on an FPGA, capable of internal
image processing without the need for an external PC.
The existence of an embedded microprocessor in the FPGA gives the system the flexibility to choose which parts of an image processing algorithm to implement in software (a program running on the microprocessor) or in hardware (using the HDL modules, which are encapsulated as microprocessor peripherals). Thus, by parallelizing tasks to attain concurrent execution, high-speed image processing becomes possible.
The main purpose of the software part on the
embedded processor is to control and manage the overall
system. It contains high-performance peripherals,
interfaces, and other IP cores. These are, e.g., a memory
controller to provide access to an external RAM, a serial
port interface for user commands, and a module for
accessing the integrated reconfiguration interface of the
FPGA. All components of the embedded processor sub-
system are connected by the main on-chip system bus, the
processor local bus (PLB).
The use of Aptina CMOS sensors, and consequently of HDL modules to control them, means that these modules can be reused with other sensor models without requiring more than minor modifications to the control and data buses connecting the FPGA and the CMOS sensor.
TABLE II
USER TIMING CONSTRAINTS FOR THE DEVELOPED DESIGN

Module              Clock                   Maximum Frequency (MHz)
Microblaze          DCACHE_FSL_OUT_CLK      88.547
                    DBG_CLK                 88.547
                    DBG_UPDATE              88.547
Display Controller  SPLB_Clk                99.681
                    CMD0_CLK                99.681
Debug Module        debug_module/update     101.133
                    SPLB_Clk                101.133
                    debug_module/drck_i     101.133
Video to VFBC       SPLB_Clk                122.654
                    CMD0_CLK                122.654
XPS IIC             SPLB_Clk                128.031
RS232               SPLB_Clk                146.864
MB_PLB              PLB_Clk                 160.823
SYS_RESET           Slowest_sync_clk        207.555
LEDs                SPLB_Clk                207.684
PUSH_BUTTONS        SPLB_Clk                207.684
SWITCHES            SPLB_Clk                207.684
FLASH               SPLB_Clk                213.083
                    SysACE_CLK              213.083
LMB                 LMB_Clk                 264.480
Clock Generator     CLKIN                   264.480
References
[1] A. Oetken, S. Wildermann, J. Teich, D. Koch, A Bus-based SoC Architecture for Flexible Module Placement on Reconfigurable FPGAs, International Conference on Field Programmable Logic and Applications, 2010.
[2] W. Wolf, B.Ozer and T. Lv, Smart Cameras as Embedded
Systems, IEEE Computer Society, Vol. 35, Issue 9, pp. 48-53,
September 2002.
[3] H. Broers, W. Caarls, P. Jonker and R. Kleihorst, Architecture
study for smart cameras, Proceedings of EOS Conference on
Industrial Imaging and Machine Vision, European Optical
Society, Munich, Germany, pp. 39-49, June 2005.
[4] M. Leeser, S. Miller, and H. Yu, Smart Camera based on
Reconfigurable Hardware Enables Diverse Real-time
Applications, in Proc. of the 12th annual IEEE Symposium on
Field Programmable Custom Computing Machines (FCCM'04), 2004, pp. 147-155.
[5] Aptina MT9V032 CMOS image sensor Data sheet,
http://www.aptina.com
[6] SmartCam: Devices for Embedded Intelligent Cameras.
http://www.stw.nl/projecten/E/ees5411.html.
[7] Dias Real, F.; Berry, F.; Marmoiton, F.; Serot, J. Hardware,
Design and Implementation Issues on a FPGA Based Smart
Camera, In Proceedings of 1st International Conference on
Distributed Smart Cameras (ICDSC), Vienna, Austria, 25–28
September 2007; pp. 20-27.
[8] Mosqueron, R.; Dubois, J.; Paindavoine, M., High-Speed Smart
Camera with High Resolution, Journal of Embedded Systems,
2007.
[9] A. Filippov, High Resolution Network Camera, In Proceedings of
the 11th Annual IEEE Symposium on Field-Programmable
Custom Computing Machines (FCCM'03), Napa, CA, USA, 8–
11 April 2003; pp. 276-277.
[10] I. Bravo, J. Balinas, A. Gardel, J. L. Lazaro, F. Espinosa, and J.
Garcia, “Efficient smart CMOS camera based on FPGAs oriented
to embedded image processing,” Sensors, 11(3), 2282-2303.
[11] Jai Gopal Pandey, Shashwat Purushottam, Abhijit Karmakar, and
Chandra Shekhar “Platform-Based Extensible Hardware-Software
Video Streaming Module for a Smart Camera System,”
International Journal of Modeling and Optimization, Vol. 2, No.
4, August 2012.
[12] http://www.xilinx.com/tools/sdk.htm
[13] Xilinx Multi-Port-Memory Controller (MPMC) IP,
http://www.xilinx.com/products/ipcenter/mpmc.htm
[14] Xilinx Digital Clock Manager (DCM) IP,
http://www.xilinx.com/support/documentation/ip_documentation/
dcm_module.pdf
[15] Xilinx Processor Local Bus (PLBv46),
http://www.xilinx.com/support/documentation/ip_documentation/
plbv46.pdf
[16] Yu Shi, Fabio Dias Real, “Smart Cameras: Fundamentals and
Classification,” Smart Cameras, Springer, 2010, pp 19-34
[17] Dong Zhang, Ping Li, “A Framework for Computer Vision in
Dynamical Scenes,” International Review on Computers and
Software (IRECOS), Vol. 7, n. 6, November 2012.
Authors’ information
1 Laboratory of Electronics and Microelectronics (EμE), Faculty of Sciences of Monastir 5000, TUNISIA.
2 Active Networks, 1 rue de Terre Neuve, BP 127 - 91944 Courtaboeuf Cedex - France.
3 Université de technologie de Troyes, Institut Charles Delaunay, UMR STMR 6279, BP 2060 - 10010 TROYES, France.
Yahia SAID received the Master's Degree in
Micro-electronics from Faculty of Science of
Monastir, Tunisia in 2010. Since 2011, he has
been working as a Research Scientist at the
Laboratory of Electronics & Micro-electronics,
Faculty of Science of Monastir where he
prepares his thesis. His areas of interest include
Embedded Processor, Embedded System, Image
and Video Processing, and HW/SW Co-design.
Taoufik SAIDANI received the Master's
Degree in Micro-electronics from Faculty of
Science of Monastir, Tunisia in 2007. Since
2008, he has been working as a Research
Scientist at the Laboratory of Electronics &
Micro-electronics, Faculty of Science of
Monastir where he prepares his thesis.
His major research interests include VLSI and
embedded system in video compression.
Mohamed ATRI received his Ph.D. Degree in
Micro-electronics from the University of
Monastir, Tunisia in 2001. He has obtained the
HDR degree from the University of Monastir in
2011.
He is currently a member of the Laboratory of
Electronics & Micro-electronics, Faculty of
Science of Monastir.
His research includes Circuit and System Design, Image processing,
Network Communication, IPs and SoCs.
Fethi SMACH received his Ph.D. Degree in
signal and image processing from the University
of Burgundy, France in 2009.
His areas of interest include pattern recognition
and real-time implementation of classification
algorithms.
Hichem SNOUSSI received his Ph.D degree in
signal processing from the University of Paris-
Sud, Orsay, France in 2003. He has obtained the
HDR degree from the University of Technology
of Compiègne in 2009. Since 2010, he is Full
Professor at the University of Technology of
Troyes.
His research interests include Bayesian techniques for source separation, information geometry, machine learning, and robust statistics, with applications to brain signal processing, astrophysics, and advanced collaborative signal/image processing techniques in wireless sensor/camera networks.