Dependable Systems of Systems (DSoS)
IST-1999-11585

Final demonstration of smart sensor interface
Dependable System Composition 6 (DSC6)

Report Version: Deliverable DSC6
Report Preparation Date: April 2003
Classification: Public Circulation
Contract Start Date: 1 April 2000
Duration: 36 months

Project Co-ordinator: Newcastle University
Partners: DERA, Malvern – UK; INRIA – France; CNRS-LAAS – France; TU Wien – Austria; Universität Ulm – Germany; LRI Paris-Sud – France

Project funded by the European Community under the “Information Society Technology” Programme (1998-2002)



Contents

1 Introduction
  1.1 Organization of this Document
2 Smart Transducer Interface
  2.1 Time-Triggered Transport Service
  2.2 Principles of Operation
    2.2.1 Data Transmission
    2.2.2 Properties of the Fireworks Byte
    2.2.3 Types of Rounds
  2.3 The Interface File System
    2.3.1 File Structure
    2.3.2 The Header Record
    2.3.3 Special files
    2.3.4 Round Description List (RODL) files
  2.4 Master/Slave Rounds
  2.5 The Baptizing Algorithm
  2.6 CORBA
3 Autonomous Mobile Robot
  3.1 Time-Triggered Sensor Fusion Model
    3.1.1 Transducer Level
    3.1.2 Fusion/Dissemination Level
    3.1.3 Control Level
    3.1.4 Operator
    3.1.5 Smart Transducer Interface
    3.1.6 Environment Image and Control Interface
    3.1.7 Man-Machine Interface
  3.2 Smart Car System Architecture
  3.3 Demonstrator Hardware
    3.3.1 Infrared Sensors
    3.3.2 Ultrasonic Sensors
    3.3.3 Servo Actuator
    3.3.4 Steering Actuator
    3.3.5 Speed Actuator
  3.4 TTP/A Nodes
  3.5 Demonstrator Software
    3.5.1 Infrared Sensor Filter
    3.5.2 Servo Control
    3.5.3 Grid Generation
    3.5.4 Navigation and Path Planning
    3.5.5 Fusion of Ultrasonic Observations
    3.5.6 Intelligent Motion Control
  3.6 Evaluation
    3.6.1 Free Space Detection
    3.6.2 Dead End Detection
    3.6.3 Typical Situation with Three Obstacles
4 Fieldbus Gateway
  4.1 Design Objectives
    4.1.1 Considerations for Implementation
    4.1.2 Design Evaluation
  4.2 Implementation
    4.2.1 Card Components
    4.2.2 Memory Layout
    4.2.3 Client Drivers and Kernel Modules
    4.2.4 Accessing the IFS from the Gateway
    4.2.5 Mapping of Gateway Requests onto the Interface File System
    4.2.6 CAN Access from a TTP/C host via PCMCIA
  4.3 Throughput Evaluation
5 Summary and Conclusion
  5.1 Contributions to DSoS
  5.2 Future Work
References

Final Demonstration of Smart Sensor Interface

W. Elmenreich, W. Haidinger, A. Dias, C. Trödhandl, M. Borovicka

Institut für Technische Informatik
Technische Universität Wien, Vienna, Austria

1 Introduction

Throughout the DSoS (Dependable Systems of Systems) project, the design of component-based dependable systems has been studied. The objective of the DSoS project is to investigate issues related to the integration of components that are themselves existing complete systems, in order to generate a new set of dependable services from the resulting system of systems.

Another aspect handled in the DSoS project is the notion of time. Although the real-time aspect is often neglected in theories of computation, as outlined by Lee in [16], a mobile embedded system also imposes hard real-time requirements on its control system.

Besides the theoretical work in the project, a main contribution has been the evaluation of the elaborated concepts in the form of case study implementations. Two reasons justify this labor-intensive practical work: first, the case studies demonstrate the applicability of the suggested theoretical ideas, and second, dealing with practical problems yields an immense amount of feedback and inspiration.

One major case study of the DSoS project focuses on the integration of linking interfaces (LIFs) that export/import restricted views of component systems by hiding resource and management issues that pertain only to individual component systems. In order to deal with issues of management, more than one kind of LIF is needed. For example, a LIF that is used to request or accept service from a component system plays a very different role in a composition from a LIF that is used to reconfigure the internal resources used to deliver that service. Another kind of LIF might be used to signal to a component system that it is no longer delivering the (quality of) service that is expected of it, or to adjust some internal parameter.

This document presents a case study of a smart sensor (or, in more general terms, a smart transducer) LIF that was developed at the Institut für Technische Informatik during the DSoS project. The application of the developed smart transducer LIF is shown in the implementation of an autonomous mobile robot, the “smart car”. It is shown how a robust sensor component system can be configured by combining a set of sensors with standard LIFs.


Since a mobile robot interacts with its environment by means of sensors and actuators, and such sensors and actuators come with a wide variety of interfaces and data types, ranging from a simple binary signal over pulse-width-modulated signals up to CCD camera output, a mobile robot is a good demonstration of the LIF's capabilities. In the smart car application, we integrated several commercial sensors and actuators, each with its own specific interface. This was done by assigning each sensor or actuator a microcontroller unit (MCU), which transforms the measurement of a sensor into a unified form, or transforms the unified control value into the format required by the actuator. In short, such a combination of a sensor or actuator with a microcontroller unit is called a smart transducer.

This case study was chosen since integrating legacy systems (cf. [11]) into a real-time system is a challenging task, and an autonomous mobile robot requires such a real-time system. Furthermore, the connection of a smart transducer network to a larger real-time system has been examined and demonstrated by implementing a PCMCIA gateway card that provides an appropriate interface to the services of the network.

1.1 Organization of this Document

This report contains three main parts. The first part describes the main principles of operation of the Smart Transducer Interface (STI). The STI has been developed at the Institut für Technische Informatik during the DSoS project and was initially described in the document Specification of the Smart Sensor Interface (DMS2) [8]. According to the DSoS Dissemination and Use Plan [2], the specification of the STI was submitted to the Object Management Group (OMG) in response to the Request for Proposal for a Smart Transducer Interface. Since the Finalization Task Force (FTF) of the OMG for the Smart Transducer Interface has made slight modifications to the proposed standard, the first part of this report describes the current version of the STI, as standardized by the OMG [19].

The second part of this report presents an autonomous mobile robot (smart car). This robot was built during the DSoS project to act as a case study for the STI and is equipped with several nodes communicating via the STI. The intention of the second part is to present the setup and evaluation of this case study.

The last part presents a multi-functional fieldbus gateway card. The card is implemented with a PCMCIA interface and provides simple-to-use, universal monitoring access to the smart transducer network. The flexible design supports interfaces for several networks, including TTP/A and the Controller Area Network (CAN). In this part we present the architecture, the hardware and software, and an evaluation of the gateway card.


2 Smart Transducer Interface

This section gives an overview of the smart transducer interface (STI), which has been developed within workpackage 4 of the DSoS project. The key feature of the STI is the concept of an Interface File System (IFS) that contains all relevant transducer data. The IFS allows different views of a system, namely a real-time service view, a diagnostic and management view, and a configuration and planning view. The interface concept encompasses a communication model for transparent time-triggered communication.

2.1 Time-Triggered Transport Service

The time-triggered transport service provides the communication among the smart transducer nodes. It is based upon the TTP/A protocol [15]. TTP/A is a time-triggered fieldbus protocol used for the connection of low-cost smart transducer nodes. There is one active master controlling the network. A cluster can have one or more shadow masters, which take over if the active master fails. The master can also serve as a gateway connecting TTP/A to an outer network running a different communication protocol such as TTP/C, TCP/IP, or CORBA.

The TTP/A protocol was designed with the following objectives in mind [13]:

Time-Triggered: The protocol uses a TDMA (Time Division Multiple Access) scheme to achieve predictable timing behavior. The nodes communicate with each other in predefined slots, so that the origin of a message is determined solely by the time at which it is sent, thus minimizing protocol overhead. The protocol also synchronizes the slaves' local clocks to the clock of the master. To achieve minimal jitter, it is recommended that the TTP/A protocol task run with highest priority on a node.

Scalable hardware requirements: TTP/A can be implemented on a variety of nodes, beginning on the low end with 8-bit microcontrollers with very limited resources, up to 32-bit embedded single-board systems that can serve as gateways to higher-level protocols such as TTP/C and CORBA.

Small universal transducer interface: The interface of a smart transducer node must be understandable, data-efficient, and predictable. The interfaces for smart transducers can be categorized into the following three types:


• The real-time service (RS) interface, which provides the timely real-time services to the component during the operation of the system.

• The diagnostic and management (DM) interface, which is used to set parameters and to retrieve information about the internals of a component, e. g., for the purpose of fault diagnosis. The DM interface is available during system operation without disturbing the real-time service. Normally, the DM interface is not time-critical.

• The configuration and planning (CP) interface, which is necessary to access configuration properties of a node. During the integration phase this interface is used to generate the “glue” between the nearly autonomous components. The CP interface is not time-critical.

In TTP/A, transducer data is represented as standardized byte-oriented state messages via the RS interface. TTP/A delivers real-time sensor data in a predictable fashion and makes it possible to configure the attached nodes via the CP interface, e. g., to set parameters for range selection, alarm limits, signal conditioning, calibration, ... without disturbing the real-time communication. The Interface File System (IFS) makes it possible to share data between different TTP/A clusters in a transparent fashion, so that no distinction between data from local and remote nodes is made.

Sensor parameterization/plug-and-play capability: Nodes which are newly connected to a TTP/A cluster have to be configured before they can be used by the system. TTP/A serves these needs by means of the so-called master/slave rounds, which are periodically scheduled by the master concurrently with the real-time communication service. The bandwidth for each node in a TTP/A cluster is allocated at system design time.

Master/slave protocol: In TTP/A there are two types of nodes: low-cost slave nodes, which handle the transmission of sensor/actuator data, and more powerful master nodes, which act as an interface between TTP/A and higher-level protocols such as TTP/C and CORBA or serve as a monitoring node for a locally attached PC.

Multi-master protocol: A TTP/A cluster has one active master that controls the network by sending so-called fireworks bytes (FB) to start a TTP/A round. One or more shadow masters may be present. A shadow master takes over control of the cluster in case the active master fails.

Flexible physical layers: TTP/A can be used on many different physical layers, from simple single-wire interfaces (e. g., the ISO 9141 standard) up to high-speed links based on fibre optics.


Figure 1: A TTP/A Round (figure: a sequence of multi-partner rounds, MP-Round Nr. 1, 2, 3, separated by inter round gaps (IRG) of at least 13 bit cells; an MP round of n+1 slots spans (n+1)*13 bit cells, and each slot of 13 bit cells embeds an 11-bit UART frame with start bit, data bits LSB to MSB, parity, and stop bit, followed by a 2 bit cell inter byte gap (IBG))

2.2 Principles of Operation

TTP/A is a time-triggered protocol used for the communication of one active master with, or among, the smart transducer nodes within a cluster. The cluster is controlled by the master, which establishes a common time base among the nodes. In case of a master failure, a shadow master can take over control. Every node in this cluster has a unique alias, an 8-bit (1 byte) integer, which can be assigned to the node a priori or set at any time via the configuration interface.

The TTP/A communication is organized into rounds. Figure 1 shows a sequence of four subsequent multi-partner (MP) rounds separated by inter round gaps (IRG). IRGs are slots in which the TTP/A bus is inactive for at least 13 bit cells. A TTP/A round consists of one or more frames. A frame is a sequence of bytes transmitted by one node. A byte is transmitted in a slot consisting of 13 bit cells (one start bit, eight data bits, one parity bit, one stop bit, and a two bit cell wide inter byte gap (IBG)).
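The slot arithmetic above can be sketched as follows; the helper names and the baud-rate parameter are illustrative assumptions, not protocol constants:

```python
# Slot timing of TTP/A as described in the text: an 11-bit UART frame plus
# a 2 bit cell inter byte gap gives 13 bit cells per slot.

UART_BITS = 1 + 8 + 1 + 1         # start + 8 data + parity + stop = 11 bit cells
IBG_BITS = 2                      # inter byte gap in the current implementation
SLOT_BITS = UART_BITS + IBG_BITS  # 13 bit cells per TTP/A slot

def slot_duration(baud_rate: float) -> float:
    """Duration of one TTP/A slot in seconds at the given baud rate."""
    return SLOT_BITS / baud_rate

def round_duration(n_data_frames: int, baud_rate: float) -> float:
    """Duration of a round with one fireworks frame and n single-byte data
    frames (the inter round gap is not included)."""
    return (n_data_frames + 1) * slot_duration(baud_rate)
```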

The rounds are independent of each other. Every round starts with a fireworks frame

(FF) sent by the master. The arrival of the fireworks frame is a synchronization event for

every node in the cluster and identifies the round. According to the specification of the

selected round, the fireworks frame is followed by data frames (DF) of specified length

from the specified nodes. Each such frame is described by an entry in the round descriptor list (RODL) in the file system of the sender and the receiver(s).

Figure 2: Structure of a TTP/A Round (figure: slot 0 carries the fireworks frame FF consisting of the fireworks byte FB, slots 1 to n carry data frames DF, each slot holding one UART frame, with the IRG preceding the round)

Figure 3: Data Byte (figure: an 11-bit UART frame with start bit, data bits 0 to 7 from LSB to MSB, even parity, and stop bit, embedded in a 13 bit cell TTP/A slot that ends with the IBG)

Because the slot position at which each communication action takes place is defined a priori, no further communication for bus arbitration is necessary. Figure 2 shows the layout of a TTP/A round.

2.2.1 Data Transmission

For the transmission of bytes on the TTP/A bus, a standard UART format has been chosen (see figure 3): one start bit, 8 data bits, one parity bit, and one stop bit. The parity of data bytes has to be even, whereas for the fireworks byte the parity must be odd. The passive state of the bus is logical 1 (high). The start of a new byte is marked by the falling edge of the start bit. The stop bit (logical 1) is followed by the inter byte gap which, in the current implementation of TTP/A, is 2 bit cells long; during this gap the bus is also in the passive state (logical 1). Thus the 11 bit cell long UART frame is embedded in a TTP/A timeslot of 13 bit cells.

The length of the inter byte gap (IBG) depends on the chosen baud rate of the network and on its physical size. In faster and physically larger TTP/A networks, the IBG may be up to 5 bit cells long.

A new round is always started by a fireworks byte (FB). The FB is transmitted with odd parity, in contrast to other data bytes, which are sent with even parity. There are only 8 valid fireworks bytes (6 for multi-partner rounds and 2 for master/slave rounds; see table 1). The fireworks bytes (protected by a parity bit) have a Hamming distance of at least 4.

Firework  Meaning  Description
0x78      RODL=0   Multi-Partner Round 0
0x49      MSD      Master/Slave Data Round
0xBA      RODL=2   Multi-Partner Round 2
0x8B      RODL=3   Multi-Partner Round 3
0x64      RODL=4   Multi-Partner Round 4
0x55      MSA      Master/Slave Address Round (startup sync.)
0xA6      RODL=6   Multi-Partner Round 6
0x97      RODL=7   Multi-Partner Round 7

Table 1: Firework codes
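The Hamming-distance claim can be checked with a short script; building the 9-bit codeword with the odd parity bit as the ninth bit is an assumed layout (bit ordering does not change pairwise distances):

```python
# Verify that the fireworks bytes plus their odd parity bit form 9-bit
# codewords with a pairwise Hamming distance of at least 4.

FIREWORKS = [0x78, 0x49, 0xBA, 0x8B, 0x64, 0x55, 0xA6, 0x97]

def codeword(byte: int) -> int:
    """Data byte plus odd parity bit (the 9-bit total 1-count is odd)."""
    parity = 1 - bin(byte).count("1") % 2
    return (parity << 8) | byte

def min_distance(codes) -> int:
    """Smallest pairwise Hamming distance among the 9-bit codewords."""
    words = [codeword(c) for c in codes]
    return min(bin(a ^ b).count("1")
               for i, a in enumerate(words) for b in words[i + 1:])

# min_distance(FIREWORKS) evaluates to 4
```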

The Master/Slave Address (MSA) fireworks byte has been designed to generate a regular bit

pattern, which can be used by slave nodes with an imprecise on-chip oscillator for startup

synchronization (see figure 4).

Figure 4: Synchronization Pattern (figure: the MSA fireworks byte 0x55 produces a regular alternating bit pattern within the 11-bit UART frame, transmitted with odd parity in a 13 bit cell TTP/A slot)

The three least significant bits (bits 0...2) of the FB denote the round name. The remaining bits (5 data bits and one parity bit) are used for error detection.

2.2.2 Properties of the Fireworks Byte

The generation of the firework codes was driven by several requirements (see [9]): First, the byte 0x55 must be part of the code, because this regular bit pattern is also used for the initial synchronization of the slaves' UARTs. The Hamming distance should be maximized and the resistance against burst errors should be optimal. The firework bytes are all sent with odd parity, and the lower three bits of each code must equal the round number.

Code Generation: Because 0x55 must be a member of the code, it was impossible to use a cyclic redundancy code (CRC) with the requested properties. The code was therefore created using an exhaustive search.

Hamming Distance: The occurring Hamming distances are 4, 6, and 8. So the code will

detect all errors of weight less than 4.

Parity: The code includes an odd parity bit. So it will also detect all errors with an odd

weight above 4.

Burst Errors: Every possible burst error will be detected by the code.

Bulk Errors (forcing the bus to low or high): It is impossible to produce a valid FB by setting (or clearing) one or more adjacent bits. Therefore, it is impossible to corrupt an FB by applying a direct voltage impulse to the bus and obtain another valid FB.

Error Exposure: Since the byte is protected by a parity bit during transmission, the error exposure is >= (2^9 - 8)/2^9 ≈ 98.4 %.

Error Patterns: There exist only 7 error patterns that will not be exposed under certain circumstances. Four of them have weight 4 (0x02D, 0x11C, 0x131, and 0x1C2), two have weight 6 (0x0DE and 0x0F3), and one has weight 8 (0x1EF).
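These patterns can be reproduced by XORing all pairs of 9-bit codewords, since an error pattern goes undetected exactly when it maps one valid codeword onto another; treating the parity bit as the most significant pattern bit is an assumption that matches the values listed above:

```python
# Enumerate the error patterns that map one valid 9-bit fireworks codeword
# onto another: these are the XOR differences of the codewords.

FIREWORKS = [0x78, 0x49, 0xBA, 0x8B, 0x64, 0x55, 0xA6, 0x97]

def codeword(byte: int) -> int:
    """Data byte with its odd parity bit placed as the ninth bit."""
    parity = 1 - bin(byte).count("1") % 2
    return (parity << 8) | byte

words = [codeword(b) for b in FIREWORKS]
patterns = {a ^ b for a in words for b in words if a != b}
# patterns == {0x02D, 0x11C, 0x131, 0x1C2, 0x0DE, 0x0F3, 0x1EF}:
# seven distinct patterns of weight 4, 6, and 8, as listed above.
```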

2.2.3 Types of Rounds

Multi-partner Rounds: A multi-partner round is used to transmit messages over the bus from several nodes in predefined slots. Multi-partner rounds are scheduled periodically by the master. They are used to update real-time images, supporting the real-time view of the cluster, and also to periodically resynchronize the slaves' clocks. It is possible to define 6 different multi-partner rounds per cluster.

Master-slave Rounds: The master of a TTP/A cluster can schedule master-slave (MS)

rounds to read data from an IFS file record, to write data to an IFS file record, or

to execute a selected IFS file record within the cluster.

The master-slave address (MSA) round specifies the node, the operation and the local

address of the desired data within the addressed node. The master-slave data (MSD)

round is used to transmit the data between master and slave.
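A hypothetical data model of such a master/slave operation, using only the fields named above (node, operation, file, and record address); the class and field names are invented for illustration and do not reflect the wire format:

```python
# Model of a master/slave operation: the MSA round names the node, the
# operation, and the local address (file and record); the MSD round then
# carries the data itself.

from dataclasses import dataclass
from enum import Enum

class MsOp(Enum):
    READ = "read"        # read an IFS file record
    WRITE = "write"      # write an IFS file record
    EXECUTE = "execute"  # execute a selected IFS file record

@dataclass
class MsaRequest:
    node_alias: int  # 8-bit logical name of the addressed node
    op: MsOp
    file_nr: int     # 0 .. 0x3F, since a node has up to 64 files
    record_nr: int   # 0 .. 0xFF, since a file has up to 256 records

    def __post_init__(self):
        if not 0 <= self.node_alias <= 0xFF:
            raise ValueError("node alias must fit into one byte")
        if not 0 <= self.file_nr <= 0x3F:
            raise ValueError("file number must be below 64")
        if not 0 <= self.record_nr <= 0xFF:
            raise ValueError("record number must fit into one byte")
```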


Broadcast Rounds: A broadcast round is a special form of MS round where the name of the addressed node in the MSA round is set to 0x00. All nodes except those with node alias 0xFF are addressed by such a broadcast. Because more than one node is addressed in such a round, only write and execute operations are permitted for broadcast rounds. An example of such a broadcast is the sleep command, which puts all nodes in a TTP/A cluster into sleep mode.

2.3 The Interface File System

Every TTP/A node has its own local interface file system (IFS), which is the source and destination of the communication data. The IFS of a cluster consists of the local interface file systems of all its nodes. The IFS also serves as an interface to the application. All relevant data of a node should be mapped into the IFS of this node.

The IFS of a single node can contain up to 64 files with a maximum of 256 four-byte records in each file. The layout of the IFS is statically defined, and each file of the IFS can have a different length. The only file which has to be implemented on all nodes is the documentation file (file nr. 0x3D), which is used to identify the node.

2.3.1 File Structure

A file in the IFS can be viewed as an array of records; individual files can have different lengths. Each record can store 4 bytes (32 bits) of data. For each node, up to 64 files can be defined. Every file must have a header record.
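A minimal sketch of these structural limits, with invented class names, assuming the bounds stated above (64 files per node, up to 256 four-byte records per file):

```python
# Structural model of the IFS: a node holds up to 64 files, each file holds
# up to 256 records of exactly 4 bytes, and record 0 is the header record.

class IfsFile:
    def __init__(self, n_records: int):
        if not 1 <= n_records <= 256:
            raise ValueError("a file holds 1 to 256 records")
        # each record stores exactly 4 bytes; record 0 is the header record
        self.records = [bytearray(4) for _ in range(n_records)]

class IfsNode:
    MAX_FILES = 64

    def __init__(self):
        self.files = {}  # file number -> IfsFile

    def add_file(self, file_nr: int, n_records: int) -> None:
        if not 0 <= file_nr < self.MAX_FILES:
            raise ValueError("file number must be below 64")
        self.files[file_nr] = IfsFile(n_records)
```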

2.3.2 The Header Record

The first record of each file, which is the record with index 0x00, is the header record (see figure 5). This record contains the information about the file.

Bit 7 of byte 0 is set if the file is read-only. Bits 5 and 6 hold the status of the file. The status value is 01b if the file is ok; otherwise the file is damaged. The remaining bits of byte 0 are reserved. Byte 1 gives the length of the file (in records) without the header record. The next two bytes are used internally by the file system.
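The bit layout just described can be sketched as a pack/unpack pair; the function names are illustrative, and bytes 2 and 3 are left zero here since their use is internal to the file system:

```python
# Pack and unpack the 4-byte IFS header record. Byte 0: read-only flag in
# bit 7, status in bits 5..6; byte 1: number of records minus one, i.e.
# the index of the last record.

STATUS_OK = 0b01

def pack_header(read_only: bool, status: int, n_records: int) -> bytes:
    byte0 = (int(read_only) << 7) | ((status & 0b11) << 5)
    byte1 = n_records - 1  # length field: total records minus the header
    return bytes([byte0, byte1, 0, 0])

def unpack_header(rec: bytes):
    read_only = bool(rec[0] >> 7)
    status = (rec[0] >> 5) & 0b11
    n_records = rec[1] + 1
    return read_only, status, n_records
```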


Bit Nr.   7    6  5      4  3  2  1  0
Byte 0    RO   Status   reserved
Byte 1    Length-1
Byte 2    for file OS
Byte 3    for file OS

RO           Read only: if set, the file is read-only
Status       The status of the file (ok if set to 01b, damaged otherwise)
Length-1     Number of records in the file decreased by one; index of the last record; indices start with 0 (the header record)
reserved     These bits are reserved for further enhancements
for file OS  Two bytes for use by the file operating routines

Figure 5: IFS Header Record

Byte Nr.   0  1  2  3
Record 0   Header
Record 1   Physical Name Hi
Record 2   Physical Name Low

Header             Header record of the documentation file
Physical Name Hi   Upper 32 bit of the physical name
Physical Name Low  Lower 32 bit of the physical name

Figure 6: The Documentation File

2.3.3 Special files

The Documentation File: Every TTP/A node must contain a documentation file (file nr. 0x3D). This file is used to identify the node. The first and the second record after the header record contain the unique physical name of the node, which is an 8 byte (64 bit) integer (see figure 6). This ID is stored in network byte order (big endian: the MSB is stored in the lowest byte).

The documentation file is always set to read only. Additional information about this node

can be added after the ID.

The Configuration File: The configuration file (file nr. 0x08; see figure 7) holds the current logical name of the node and is also used by the baptizing algorithm (see section 2.5) and by the sleep command.


Byte Nr.   0           1  2       3
Record 0   Header
Record 1   Cluster N.     New LN  Curr. LN
Record 2   Cmp. Phys. Name Hi
Record 3   Cmp. Phys. Name Low
Record 4   reserved
Record 5   Sleep

Header               Header record of the configuration file
Cluster N.           Name of the cluster to which the node belongs
New LN               New logical name used by the baptize algorithm
Curr. LN             The current logical name of the node
Cmp. Phys. Name Hi   Used for comparison by the baptize algorithm
Cmp. Phys. Name Low  Used for comparison by the baptize algorithm
Sleep                Record used by the sleep command

Figure 7: The mandatory config File

Record 0x01 holds the current logical name in byte 3 and the new logical name used by the baptize algorithm in byte 2. Records 0x02 and 0x03 contain the ID compare value for the baptize algorithm. Record 0x04 is reserved, and record 0x05 is used by the sleep command.

The Membership File: It makes sense to implement this file on gateway or master nodes, while ordinary slave nodes will not benefit from the information gathered in it. The membership file (file nr. 0x09; see figure 8) contains two membership vectors of 256 bits (32 byte) each. The logical name of each node is interpreted as an index into the 256 bits of a membership vector. The first vector contains all slaves which have sent a live sign during the last multi-partner round. The second membership vector contains all slaves which have responded to the most recent MS operation. To update the second membership vector, the master may fill empty MS slots by issuing read operations on the slaves' documentation files, which by definition must be present in every TTP/A node. The lowest bit of each membership vector refers to the highest logical name (0xFF).
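The index mapping described above (the lowest bit stands for logical name 0xFF, so a node's bit index is 0xFF minus its logical name) can be sketched as follows, with invented helper names:

```python
# Membership-vector indexing: the vector is 256 bits wide and its lowest
# bit refers to the highest logical name (0xFF).

def set_member(vector: int, logical_name: int) -> int:
    """Mark the node with the given logical name in the membership vector."""
    return vector | (1 << (0xFF - logical_name))

def is_member(vector: int, logical_name: int) -> bool:
    """Test whether the node with the given logical name is marked."""
    return bool((vector >> (0xFF - logical_name)) & 1)
```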

The Round Sequence (ROSE) File: Only a master will benefit from the implementation of the ROSE file. The ROSE file (file nr. 0x0A) specifies the sequence in which rounds are scheduled by the master. In addition, the start time and duration of this sequence can be specified.


Byte Nr.      0  1  2  3
Record 0      Header
Records 1-8   First Membership Vector
Records 9-16  Second Membership Vector

Header                    Header record of the membership file
First Membership Vector   Membership vector of the last sequence period
Second Membership Vector  Membership vector of master-slave responses

Figure 8: The Membership File

Byte Nr.   0            1             2             3
Record 0   Header
Record 1   Active Sec.  Start Sec. 2  Start Sec. 3

Header        Header record of the ROSE file
Active Sec.   0: section two is active; 1: section three is active
Start Sec. 2  Start record of section two
Start Sec. 3  Start record of section three

Figure 9: Status Record of the ROSE File

The ROSE file is divided into three sections. The first section is the status record (record nr. 0x01; see figure 9). The second and the third section each contain a sequence of round names. At any given time, only one of the two sequences is active; the other is inactive. The first byte of the status record defines which section is active. The second and the third byte contain the start records of sections two and three.


Records 2 and 3 hold the Start Time, records 4 and 5 the Period Time; each further record holds a Round Name byte and an IRG length byte.

Start Time: start time of the round sequence (in the 64-bit GPS format described in [19]). Period Time: length of the round sequence. Round Name: bits 0 to 2 specify the name of the round; bit 7 is set if this is the last round in the sequence. IRG length: sets the length of the IRG following the specified round; must be in the range from 1 to 15.

Figure 10: Section two of the ROSE File

Sections two and three have the following format (see figure 10): The first two records hold the start time of the sequence as a 64-bit (8-byte) integer value with a granularity of 2^-24 s. The timer starts at the beginning of GPS time (January 6, 1980) plus an offset of 2^38 seconds. The next two records hold the period time of the whole sequence. All further records in the section define a scheduled round. In the first byte of such a record, the three LSB define the round. If the MSB is set, this entry is the last in this sequence. The lower 4 bits of the second byte contain the number of slots in the inter-round gap; valid entries are 0x01, 0x02, ..., 0x0F. The first entry of each sequence must be an MSA entry, and every MSA entry must have a complementary MSD entry.
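The bit fields of a scheduled-round record and the start-time encoding described above can be sketched in Python. This is an illustrative sketch only; the helper names are invented, and anything beyond the stated bit fields (round name in bits 0..2, last flag in bit 7, IRG length in the lower 4 bits, 2^-24 s granularity, 2^38 s offset) is an assumption:

```python
GRANULARITY = 2.0 ** -24   # seconds per tick of the start-time value
OFFSET = 2 ** 38           # seconds added to the GPS epoch (Jan 6, 1980)

def start_time_seconds(raw64: int) -> float:
    """Seconds since the GPS epoch encoded by the 64-bit start time."""
    return OFFSET + raw64 * GRANULARITY

def decode_round_entry(name_byte: int, irg_byte: int):
    """Decode one scheduled-round record of a ROSE sequence section."""
    round_name = name_byte & 0x07        # bits 0..2: name of the round
    is_last = bool(name_byte & 0x80)     # bit 7: last entry of the sequence
    irg_length = irg_byte & 0x0F         # lower 4 bits: inter-round gap slots
    if not 1 <= irg_length <= 15:
        raise ValueError("IRG length must be between 0x01 and 0x0F")
    return round_name, is_last, irg_length
```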

2.3.4 Round Description List (RODL) files

The RODL file contains the information about the actions performed by a node for a particular round. The file names of the RODL files correspond to the round names.

The format of the RODL file is specific to the implementation of a given TTP/A node.

2.4 Master/Slave Rounds

The MS round is used by the master of a cluster to read data from an IFS file record, to write

data to an IFS file record, or to execute a selected IFS file record within the cluster.


Between two multipartner rounds, separated by inter-round gaps (IRG), the master first transmits the MSA round: the fireworks byte (FB) 0x55, the parameter bytes PB 0 to PB 3, and a check byte. After a further IRG the MSD round follows: the fireworks byte 0x49, the data bytes DB 0 to DB 3, and a check byte.

Figure 11: Master-slave round

Bit Nr.  7 6 5 4 3 2 1 0
PB 0     Epoch
PB 1     Logical Name
PB 2     OP-code (bits 7 and 6), File Name (bits 5 to 0)
PB 3     Record Name
Check    Checkbyte

Epoch: identifies the current epoch. Logical Name: logical name of the addressed node. OP-code: specifies the operation which is performed (read, write, or execute). File Name: name of the addressed file. Record Name: name of the addressed record. Checkbyte: checksum of the frame.

Figure 12: Parameters in a MSA Round

An MS round is divided into two separate phases (see figure 11). The first is the MSA

round. During this phase, the master specifies (in a message to the slave node) which type

of operation is intended (read, write, or execute) and the address of the selected file record.

The format of the MSA round is shown in figure 12. The state of an issued MSA round

remains active until the arrival of a new MSA round.

The MSA round consists of six bytes (see figure 11). PB0 contains an epoch counter which


provides the current epoch of the cluster-internal time base. PB1 is the name of the addressed node. PB2 specifies the operation type in bits 6 and 7 and the addressed file in the 6 low-order bits. PB3 names the record which is read from or written to. The parameter bytes are followed by a check byte. This check byte is calculated as an exclusive-or over the parameter bytes and the FB.

In a subsequent MSD round, the master sends the fireworks byte, which indicates that it is either transmitting the record data or waiting for the slave to transmit the requested record data, depending on the operation specified in the previous MSA round. The message in the data phase also consists of six bytes: the FB, four data bytes, and one check byte, calculated the same way as in the MSA round.
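The check byte computation described above (exclusive-or over the fireworks byte and the four parameter or data bytes) can be sketched as follows; this is an illustrative Python sketch, and the example byte values are made up:

```python
def checkbyte(fb: int, payload: bytes) -> int:
    """Check byte of an MSA/MSD frame: exclusive-or over the
    fireworks byte and the four parameter (or data) bytes."""
    c = fb
    for b in payload:
        c ^= b
    return c

# MSA frame example with hypothetical parameter bytes
pbs = bytes([0x01, 0x42, 0x88, 0x03])
frame = bytes([0x55]) + pbs + bytes([checkbyte(0x55, pbs)])
# XOR-ing the whole frame, check byte included, yields zero
assert checkbyte(frame[0], frame[1:]) == 0
```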

2.5 The Baptizing Algorithm

Until a logical name has been assigned to a node, it does not take part in the multi-partner rounds. The baptize algorithm [5] is executed by the master to determine which nodes are connected to the TTP/A bus and to assign each of them a logical name that is unique in this TTP/A cluster.

This mechanism performs a binary search on all physical node names. A physical name is unique for every TTP/A node within the entire universe of TTP/A nodes. The identification of a new node takes 64 iterations. The master has to keep three 64-bit integer values: lo, hi, and the comparison identifier ci. The values of lo and hi are only used internally to calculate the new ci values.

The variables lo and hi are initialized to the minimum and maximum value of the expected identifiers and move towards each other until lo equals hi. In this case, the identifier of a node is found and the master assigns a new alias to this node. This is done by the baptize operation.

The identifier comparison is done as follows:

• First, a lower limit for the node identifier is set by the master. This is done by performing a Master/Slave write operation to records 0x02 and 0x03 of the mandatory config file (0x08).

• Then the master requests an execute command on these records for all unbaptized nodes (all with logical name 0xFF). The action associated with this execute operation is the comparison of the slave's own unique identifier to the comparison identifier.


• In the corresponding MSD round of this Master/Slave round, all unbaptized nodes whose own identifier is greater than or equal to the comparison identifier write a data frame with content 0x00 in the first slot.

If no node responds to this command, then the value of hi is set to the value of ci; otherwise lo is set to this value. If ci and hi were equal and a node has responded in this round, then the identifier of the new node has been found.

After finding the unique identifier of a node, a new logical name must be assigned to this node. First, the new alias is written into byte 0x02 of record 0x01 of the mandatory config file by a Master/Slave write operation. Because this operation addresses all unbaptized nodes, in a second step an execute operation is performed on this record. In this execute operation, only the node whose unique identifier equals the comparison identifier copies the new logical name to its own logical name, which is located in byte 0x03 of this record. After this operation, the node takes part in multi-partner rounds and responds to Master/Slave rounds with this new logical name.
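The binary search at the heart of the baptizing algorithm can be sketched in Python. This is only an illustrative model, not the protocol implementation: respond(ci) stands in for a whole Master/Slave write-and-execute exchange, and the update rule is paraphrased slightly (hi is moved to ci - 1 rather than ci) so the loop provably terminates:

```python
def find_unbaptized_id(respond, lo=0, hi=2 ** 64 - 1):
    """Binary search for the physical name of one unbaptized node.

    respond(ci) models the MS execute round: it returns True if any
    unbaptized slave whose unique identifier is >= ci answered in
    the MSD round. A response moves lo up to ci; no response moves
    hi down below ci. After at most 64 iterations lo == hi is the
    largest unbaptized identifier on the bus.
    """
    if not respond(lo):
        return None                  # no unbaptized node responded at all
    while lo < hi:
        ci = (lo + hi + 1) // 2      # bias upward so the search terminates
        if respond(ci):
            lo = ci
        else:
            hi = ci - 1
    return lo

# Simulated bus with two unbaptized nodes (hypothetical identifiers)
ids = {5, 123456}
assert find_unbaptized_id(lambda ci: any(i >= ci for i in ids)) == 123456
```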

2.6 CORBA

A CORBA gateway enables a cluster of smart transducers to share data (observations, control values, . . . ) with external networks, e.g., another smart transducer cluster. Both types of data can be provided: real-time data (RS) and non-time-critical data (DM and CP). A CORBA object's interface is defined in OMG IDL (Interface Definition Language). Such an interface definition specifies the operations the object is prepared to perform. A detailed description of the smart transducer's interface in IDL can be found in [19].

To respond instantly, the gateway must hold a copy of the real-time data within its own IFS, whereas non-time-critical (remote) data are requested via MS rounds. To allow a global interpretation of an observation, the atomic triple

< name, instant, value >

is transmitted for each value. The instant is a 64-bit GPS-related universal time, whereas a name is composed of ClusterId, NodeId, FileNr and RecordNr.
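The observation triple can be modelled as a small data structure. This is a hedged Python sketch: the grouping of the name components follows the text, but the field widths and the one-byte-per-component packing in name are assumptions of the sketch, not the specified encoding:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Observation:
    """Atomic <name, instant, value> triple exchanged via the gateway.

    instant is a 64-bit GPS-related universal time; the name is
    composed of ClusterId, NodeId, FileNr and RecordNr.
    """
    cluster_id: int
    node_id: int
    file_nr: int
    record_nr: int
    instant: int
    value: bytes

    @property
    def name(self) -> int:
        # Pack the four name components into one identifier,
        # one byte each (an assumed encoding).
        return (self.cluster_id << 24 | self.node_id << 16
                | self.file_nr << 8 | self.record_nr)
```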


Figure 13: Cluster of smart transducer networks. A CORBA ORB gateway connects via GIOP to the active masters of clusters A, B, and C, each cluster consisting of transducer nodes.

3 Autonomous Mobile Robot

This section describes the design and implementation of an autonomous mobile robot that serves as a case study for the presented concepts on sensor fusion and time-triggered architectures. The robot is named smart car since it consists of a four-wheeled model car that is instrumented by a smart transducer network. The demonstrator's components have been developed over the last two years through the effort of several people.

The robot's task is to navigate through a static obstacle course by perceiving its immediate environment with the help of a combination of infrared and ultrasonic distance sensors. Sensor fusion algorithms are used to integrate the sensor measurements into a certainty grid, i.e., a description of the environment of the robot. Based on this information, a path planning algorithm determines the most promising direction for detouring obstacles in front of the robot.

The robot consists of an off-the-shelf four-wheeled model car equipped with a fieldbus network containing sensors, actuators, and control intelligence that enable autonomous operation of the car. Figure 14 depicts the smart car.


Figure 14: Smart Car

All employed sensors and actuators are commercially available components. In order to resolve interface mismatches, each sensor and actuator is implemented as a smart transducer, thus being equipped with a small 8-bit microcontroller and a standardized network interface. Furthermore, the network contains a control node and a master gateway node. The control node hosts sensor fusion and control tasks. The gateway node enables the connection of monitoring hardware that supports the configuration of the network communication and exports diagnostic data during operation. All transducer nodes are mounted on an area of approximately 30 cm by 40 cm.

The software of the smart transducers is independent of the application. Each smart transducer contains the necessary software to instrument the local sensor or actuator as well as the protocol interface software. The sensor fusion software is aware of the connected sensors and the input requirements of the control application. The control application is decoupled from the sensors, since it only uses data provided by the sensor fusion software and does not depend on a particular sensor configuration. These properties are implemented in a time-triggered sensor fusion model.


3.1 Time-Triggered Sensor Fusion Model

Figure 15: Data flow in the time-triggered sensor fusion model. The transducer level (sensors and actuators at the border to the environment and the controlled object) connects through the smart transducers interface to the fusion/dissemination level (sensor abstraction layer and fault tolerance layers, producing a fault-tolerant image of the environment and a fault-tolerant actuator interface), which in turn serves the control level (control application), reachable by the operator via the man-machine interface.

The time-triggered sensor fusion model incorporates properties such as cyclic processing and composable design, and introduces well-defined interfaces between its subsystems. Figure 15 depicts a control loop modelled by the time-triggered sensor fusion model. Interfaces are illustrated by a disc with arrows indicating the possible data flow directions across the interface. Physical sensors and actuators are located on the borderline to the process environment and are represented by circles. All other components of the system are outlined as boxes.

The model distinguishes three levels of data processing with well-defined interfaces between them. The transducer level contains the sensors and actuators that interact directly


Level                      Task                                      Implementer               Knowledge
Transducer level           Deliver sensor measurements,              Transducer manufacturer   Internals of sensor/actuator
                           instrument actuators
Fusion/dissemination       Gather, process, and represent sensor     System integrator         Sensor fusion algorithms,
level                      information; disseminate control                                    fault tolerance concepts
                           decisions to actuators
Control level              Find a control decision,                  Application programmer    Mission goals, control theory,
                           navigation and planning                                             decision finding
Operator                   Definition of goals                       -                         Conceptual model of system

Table 2: Properties of transducer, fusion/dissemination, and control level

with the controlled object. A smart transducer interface provides a consistent borderline to the above fusion/dissemination level. This level contains fault tolerance and sensor fusion tasks. The control level is the highest level of data processing within the control loop. The control level is fed by a dedicated view of the environment (established by the transducer and fusion/dissemination levels) and outputs control decisions to a fault-tolerant actuator interface. User commands from an operator interact with the control application via the man-machine interface.

The breakdown into these three levels is justified by the different tasks the three levels have to fulfill and the different knowledge necessary for designing the corresponding hardware and software. Table 2 describes the tasks and attributes of the different levels. The following sections describe the three levels in detail.

3.1.1 Transducer Level

The sensors and actuators that physically interact with the environment establish the transducer level of the system. To support maximum modularity, the nodes are built as smart transducers. The smart transducer technology offers a number of advantages from the point of view of technology, cost, and complexity management. The task of a transducer is either to provide observations about properties of the controlled object or, in the case of an actuator, to execute a control value. An observation subsumes the information assigned to a measurement, such as measurement value, measurement instant, and meta-information about the measurement.

Final Demonstration of Smart Sensor Interface 20 Deliverable DSC6

Page 27: Final demonstration of smart sensor interface

Dependable Systems of Systems Autonomous Mobile Robot

We have identified the following possible features of a smart transducer:

Signal amplifying and conditioning: Electrically weak non-linear sensor signals can be

amplified, conditioned, and transformed into digital form locally at the transducer

node without any noise pickup from long external signal transmission lines [3].

Local fusion of measurements: Either a local transducer node has access to two or more sensor signals, or the transducer is capable of using multiple measurements taken by a single sensor at different instants.

Self-validation: The transducer can also provide information about the correctness and/or

quality of its measurements by running self-diagnostics. If an assumption of the max-

imum gradient of the measured signal can be made, the sensor can monitor changes

of measurements and detect abnormal behavior.

Conversion to standard representation: The measurement can be locally converted to

standard measurement units and data types that allow a uniform representation of

observations throughout the system regardless of the transducer type.

Timestamping: Each measurement can be augmented by information about the instant

when it has been performed. If the local clock of the microcontroller is synchronized

to a global time, the measurement can be assigned a timestamp of global validity. A

timestamp can provide information about the ordering and the interval time between

subsequent measurements.

Meta-information: The transducer can also assign meta-information, such as data type or

confidence of measurement, to its observations.
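The self-validation feature described above, monitoring changes against an assumed maximum signal gradient, can be sketched in a few lines. This is an illustrative Python sketch; the function name and the example numbers are invented:

```python
def plausible(prev: float, curr: float, dt: float,
              max_gradient: float) -> bool:
    """Gradient-based self-validation: a measurement is suspect if
    the signal changed faster than the assumed maximum gradient.

    prev/curr are consecutive measurements taken dt seconds apart;
    max_gradient is the assumed maximum rate of change (units/s).
    """
    return abs(curr - prev) / dt <= max_gradient

# A change of 5 units in 0.1 s is implausible for a signal that
# cannot change faster than 10 units/s
assert plausible(10.0, 10.5, 0.1, 10.0)
assert not plausible(10.0, 15.0, 0.1, 10.0)
```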

Typically, smart transducers will become mass products, sold in quantities of tens of millions. Therefore, a smart transducer must be flexible enough to be used in different applications. The implementation of a smart transducer usually supports only generic or sensor/actuator-specific tasks and comes with a host of configuration parameters. An important feature of a smart transducer is its capability of providing a machine-readable self-description. Tools operating on these descriptions support system architects in the consistent configuration of smart transducers within a network. For example, a tool may access the description of a newly connected node for an easy plug-and-play-like integration of the node into the system.


3.1.2 Fusion/Dissemination Level

While each single sensor provides only a small view of the environment, it is the task of the fusion/dissemination level to integrate the measurements into a description of the environment that enables the control program to assess the situation more comprehensively.

The fusion/dissemination level contains the hardware and software that act as glue between the transducers and the control program. By using sensor data, it creates a unified view, the environment image. If required, this image is made robust to incomplete, erroneous, or missing measurements by implementing a fault tolerance layer. Typical methods for the fault tolerance layer are voting or averaging. While on the sensor side of the control loop data from multiple sensors are fused into reliable data, the task of the fault tolerance layer on the actuator side of the control loop is to create multiple instances of a control message for the redundant actuators.
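Both sides of the fault tolerance layer can be sketched briefly. This is an illustrative Python sketch, not the demonstrator's implementation; the trimmed-mean variant shown here is only one simple flavour of the voting/averaging schemes the text mentions:

```python
def fault_tolerant_value(readings):
    """Sensor side: fuse redundant readings into one reliable value
    by discarding the extreme readings and averaging the rest (a
    simple trimmed-mean form of voting/averaging)."""
    if len(readings) < 3:
        return sum(readings) / len(readings)
    trimmed = sorted(readings)[1:-1]
    return sum(trimmed) / len(trimmed)

def replicate(command, n):
    """Actuator side: create n instances of one control message
    for the redundant actuators."""
    return [command] * n

# One faulty sensor (99) is outvoted by the two agreeing ones
assert fault_tolerant_value([10, 11, 99]) == 11
assert replicate("open valve", 3) == ["open valve"] * 3
```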

Furthermore, the fusion/dissemination level can make use of all types of sensor fusion algorithms. Competitive fusion methods increase the robustness of the perception, while cooperative and complementary fusion provide extended and more complete views. Which algorithms are used in the fusion/dissemination level depends on the available resources, such as processing power and working memory, and on application-specific needs at the control level.

3.1.3 Control Level

The control level contains the control intelligence to make decisions based on the world image. The control program initiates actions in order to achieve a given goal.

If the fusion/dissemination level additionally provides information about the condition of

the system, the control program also has to adapt its goal according to the circumstances.

For example, consider an onboard automatic flight control system of an aircraft that is programmed with the target "airport" as its goal. However, if the system detects that N − 1 out of a set of N redundant mission-critical sensors have already failed, it might select the nearest airport for maintenance. On the other hand, if the fusion/dissemination level is implemented in a way that hides all sensor or actuator problems from the control program, the complexity at the control level can be reduced, however at the cost of system features. In our model this problem is resolved by introducing different interfaces for real-time service and maintenance (see Section 2.1).


3.1.4 Operator

The operator defines the goals which the control application tries to reach. For this purpose the operator does not need a detailed understanding of the technical operation of the system. Instead, the operator follows a conceptual model of the system that allows him or her to understand the important aspects of the application and hence give the respective correct inputs to the system.

For example, when driving a car, we usually do not imagine the operation of the carburetor, sparks, and cylinders in the motor when we press the accelerator pedal. However, we have a different conceptual model of accelerating and braking, since even with the fastest cars there is a big difference between the time (or distance) a car takes to accelerate to a particular speed and the time (or distance) the car needs to stop. Perhaps in the future, the speed of a car will be controlled by a single joystick that can be pushed forward or backward, and we will switch to a conceptual model where accelerating and braking make less of a difference.

3.1.5 Smart Transducer Interface

The smart transducer interface connects the transducer level with the fusion/dissemination

level. We have identified the following requirements for a smart transducer interface to be

used in the time-triggered sensor fusion model:

Real-time support: For the purpose of command-and-control-like architectures the real-

time service data of a smart transducer node must be accessible and delivered in an

efficient manner with bounded delay and minimal jitter.

Support for start-up and dynamic configuration: Maintenance support was a primary

design goal for our system model. Hence, the smart transducer interface must pro-

vide information about the transducer nodes that can be exploited by configuration

tools to provide computer-aided setup or re-configuration.

Online diagnostic capability: A maintenance interface must provide access to internal er-

ror logs and maintenance data of a transducer node while the node is performing its

real-time service.

Naming: A uniform naming and addressing scheme for all relevant data is necessary to

access the smart transducer data.


Implementation flexibility: In order to support low-cost implementations of smart trans-

ducers, the interface must support maximum implementation flexibility.

3.1.6 Environment Image and Control Interface

The environment image is an abstract description of the properties of the controlled object that are of interest to the control application. The design of this image is governed by the requirements of the control level. The environment image can consist of symbols, images, or scalar quantities. The interface constituted by the environment image has to meet requirements on accuracy, resolution, update timing, and behavior in the presence of faulty data. In general, the image will be more complete, more exact, and less sensitive to sensor inaccuracies than data from single sensors. Due to the fault tolerance layer, the environment image can also act as a firewall against sensor malfunctions. On the other hand, the environment image can also provide useful information on sensor problems or current system performance.

The control interface supports the control program in executing its control decisions. A control decision on the control interface can range from a simple instruction like "open the pressure control valve" to an abstract command like "move vehicle to 3 Treitlstraße". In the case of the latter, the control system will recursively host a subsystem containing sensors and control intelligence, as depicted in figure 16.

3.1.7 Man-Machine Interface

The man-machine interface represents an optional interface to a user or operator. If a sys-

tem has a man-machine interface, it must be specifically designed for the stated purpose

and must be easy to operate. Ideally, the interface should be self-explanatory and not re-

quire training or an operating manual [12]. The interface must not mix up information for

different purposes, for example providing the user with an unwanted debugging view. A

good interface hides all unnecessary details from the user and provides a customized view

for different user groups.

A further requirement for the man-machine interface is robustness. In the context of interface design, robustness means the ability to tolerate or prohibit improper inputs [18].


Figure 16: Nested configuration with intelligent control interface. The outer fusion application (operator, control application, sensor abstraction layer, fault tolerance layer, sensors, and actuators) recursively contains a nested fusion application: a sub-control application with its own fusion/dissemination stage and sensors, interacting with the environment.

3.2 Smart Car System Architecture

The robot is built from commercial components, all provided with an appropriate interface that allows a composition of separately developed and tested components. The demonstrator's architecture implements a three-level design approach according to the time-triggered sensor fusion model. The first level is the transducer level containing the sensors and actuators equipped with a TTP/A smart transducer interface. The sensors at the transducer level provide redundant information that is used to generate a robust perception at the fusion/dissemination level. This approach is also known as a competitive sensor configuration.

Figure 17 depicts the system architecture of the main parts of the smart car. All transducers are illustrated as circles, boxes represent control units, and rounded rectangles depict elements like fusion algorithms or filters that process data in order to enhance the data quality.


Figure 17: System architecture of smart car. At the transducer level, the infrared sensors IR1-IR3 (each with a local filter) and their servos Serv1-Serv3, the ultrasonic sensors US1 and US2, the position encoder Pos, and the Steer and Speed actuators connect via gates to the fusion/dissemination level (servo control, grid generation, sensor fusion, and intelligent motion control), which serves the navigation and path planning unit at the control level.

IR1: middle infrared sensor; IR2: right forward infrared sensor; IR3: left forward infrared sensor; Serv1-Serv3: servos for IR1-IR3; US1: right ultrasonic sensor; US2: left ultrasonic sensor; Pos: position encoder sensor; Speed: speed control actuator; Steer: steering control actuator.

The transducer level contains six sensors and five actuators. Each of the three infrared sensors is equipped with a filtering mechanism that removes faulty sensor measurements and smooths the result. This filtering mechanism is locally implemented on the smart transducer and therefore belongs to the transducer level. The transducer level further contains a position encoder node, a speed control node, a steering control node, and three servo nodes, which are used to swivel the infrared sensor nodes. At the fusion/dissemination level, a servo control unit drives the servo nodes. The information on the current servo positions and the measurements from the infrared sensors are fused by the robust certainty grid algorithm in order to create a description of the environment. Note that the robust certainty grid algorithm provides a feedback value for each sensor that describes the current dependability of that sensor. The result from the robust certainty grid algorithm is used by a navigation and path planning unit at the control level. The measurements from the ultrasonic sensors are fused into a single observation using the confidence-weighted averaging algorithm. Based on the available information, the navigation and path planning unit decides on a navigation path. This path is defined by turn angle and travel distance. A motion control unit hosted at the fusion/dissemination level takes over the navigation path and generates the appropriate values for the steering and speed control nodes while paying attention to the covered distance provided by the position sensor.
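The confidence-weighted averaging step used for the ultrasonic sensors can be sketched as follows. This is an illustrative Python sketch of the general idea only; the function name, the interpretation of the confidence values as weights, and the example numbers are assumptions:

```python
def confidence_weighted_average(values, confidences):
    """Fuse redundant observations into one, weighting each value
    by its confidence (a sketch of confidence-weighted averaging)."""
    total = sum(confidences)
    if total == 0:
        raise ValueError("at least one confidence must be non-zero")
    return sum(v * c for v, c in zip(values, confidences)) / total

# A high-confidence reading (120 cm) dominates a low-confidence one
assert confidence_weighted_average([100, 120], [1, 3]) == 115
```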

3.3 Demonstrator Hardware

This section describes the relevant hardware components that form the demonstrator. Figure 18 depicts a categorization into three fields: mechanical hardware, electrical and electromechanical hardware, and TTP/A transducer hardware. The mechanical layer consists of the main chassis of the smart car, which is an off-the-shelf four-wheeled model car fitted with a wooden mounting board. The electrical and electromechanical hardware refers to the physical sensors, power supplies, servos, LED indicators, and other components such as additional power supply busses. The smart transducer hardware layer consists of the fieldbus network with TTP/A nodes and the TTP/A communication bus.

3.3.1 Infrared Sensors

The Sharp GP2D02 infrared distance sensor is a low-cost sensor for measuring distances to objects within the range of 10-80 cm. It is designed for use in combination with small microcontrollers and is capable of taking measurements in varying light conditions and against a wide variety of surfaces. The distance measuring technique employed by the GP2D02 is triangulation. For this purpose, the GP2D02 emits light and detects rays reflected by an object. From the angle of the incoming rays and the known distance between the light source and the detector, the distance from the sensor to a reflecting object can be calculated. The output signal of the GP2D02 is proportional to the angle and not to the distance, thus the actual distance must be calculated by the host microcontroller. Figure 19 depicts the conversion function from the sensor signal to the distance of the reflective object. Objects closer than 10 cm are not recognized correctly. Due to the static environment of the smart car, we can ensure that objects are recognized


Figure 18: Hardware parts of smart car

and avoided at distances greater than 10 cm. The conversion function can be approximated by a hyperbolic function,

    dist = KG / (xSENSOR − K0)                                        (1)

where KG and K0 are sensor-dependent constants, xSENSOR is the sensor signal, and dist is the actual distance to the detected object in centimeters.
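Equation (1) translates directly into a small conversion routine. This is an illustrative Python sketch; the calibration constants in the example are made up, since the report does not give concrete KG and K0 values:

```python
def gp2d02_distance(x_sensor: float, kg: float, k0: float) -> float:
    """Approximate distance (cm) from the GP2D02 output value via
    the hyperbolic model dist = KG / (x_sensor - K0).

    kg and k0 are sensor-dependent calibration constants; the
    example values used below are hypothetical.
    """
    if x_sensor <= k0:
        raise ValueError("sensor value below calibration offset K0")
    return kg / (x_sensor - k0)

# With the (made-up) calibration KG = 2800, K0 = 60, an output value
# of 200 maps to 2800 / 140 = 20 cm
assert gp2d02_distance(200, 2800, 60) == 20.0
```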

The GP2D02 needs a supply voltage within the limits of 4.4 V to 7 V and a suitable clock

signal for proper operation. A 4-pin connector is used to connect the four wires required by

the GP2D02: power, ground, clock in (VIn), and signal output (VOut). Figure 20 shows a timing diagram of the process of initiating a measurement cycle and reading the data using serial communication. To place the sensor in idle mode, VIn must be set to high. If VIn is high for more than 1.5 ms, the sensor resets and goes into idle mode. As shown in the timing diagram, setting VIn to low initiates the measurement cycle. After the delay needed by the sensor to take the reading, the sensor raises VOut, signaling that it is ready to provide the data. VIn is then toggled between high and low. The output data is transmitted


using a serial communication scheme; the most significant bit is transmitted first. Each bit is valid shortly after the falling clock edge. After this cycle is finished, VIn should be held high for at least the duration of the minimal inter-measurement delay.

The case of the sensor is made of a conductive plastic material. To ensure reliable, noise-free measurements, the case is connected to ground; otherwise the sensor sometimes reports an object even if none is present. Since the VIn signal must remain within the range −0.3 V to +3.0 V while the output level of the microcontroller is around +5 V, two resistors are used to achieve an appropriate signal level. Figure 21 depicts the circuit that interconnects sensor and microcontroller.
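As a worked example of the level shifting: two equal resistors form a voltage divider that halves the controller's +5 V output to +2.5 V, which lies safely within the sensor's permitted input range. A minimal sketch, with the 3 kΩ values taken from Figure 21:

```c
#include <assert.h>
#include <math.h>

/* Output of a resistive voltage divider: v_in applied across r1 + r2 in
 * series, output tapped over r2. */
double divider_out(double v_in, double r1_ohm, double r2_ohm)
{
    return v_in * r2_ohm / (r1_ohm + r2_ohm);
}
```

With two 3 kΩ resistors, a +5 V level is reduced to +2.5 V, below the +3.0 V limit of the VIn pin.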

3.3.2 Ultrasonic Sensors

For detection of objects at distances greater than 80 cm we use two Polaroid 6500 series

ultrasonic sensors. The sensing is performed by a sonic ping at a specific frequency that

travels from the transducer to the object and back. In the case of the Polaroid ultrasonic

module, 16 pings generated by transitions between +200 V and −200 V at a frequency of around 50 kHz are used.

Figure 19: Infrared sensor signal vs. distance to reflective object (output value of the GP2D02 plotted against the distance L in cm, for gray and white paper targets)

The chirp moves radially from the source through the air at the

speed of sound, approximately 340 m/s. When the chirp reaches an object, it is reflected in

varying degrees depending on the shape, orientation, and surface properties of the reflecting

surface. This reflected chirp then travels back towards the transducer. As the reflected

signal hits the transducer, a voltage is created, which is fed to a stepped-gain amplifier. To

avoid wrong measurements, the module only reports objects that have provided an echo to

subsequent pings.

Figure 22 describes the timing diagram for the ultrasonic sensor. A transition from low to

high at the INIT pin causes the generation of a chirp. When the sensor receives the reflected signal, it raises the ECHO pin. Thus, the transducer node is able to measure the time required by the sound to travel twice the distance from chirp source to obstacle. Consequently, the

distance can be calculated by the following equation:

dist = ((t_ECHO [s] − t_INIT [s]) / 2) · 333 [m/s]    (2)

The measured duration t_ECHO has to be corrected by the initialization time t_INIT. For the employed modules, the value t_INIT is 0.55 ms. The sonic speed of 333 m/s is valid for sonic wave propagation in air of average humidity at 20 degrees Celsius at sea-level pressure.
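Equation (2) translates into a small conversion routine; the constants mirror the values given in the text:

```c
#include <assert.h>
#include <math.h>

#define SPEED_OF_SOUND_M_PER_S 333.0  /* air, ~20 degrees C, sea-level pressure */
#define T_INIT_S 0.00055              /* initialization time of the module */

/* Distance in meters derived from the measured echo time (equation 2):
 * the corrected round-trip time is halved, then scaled by the speed of
 * sound. */
double ultrasonic_distance_m(double t_echo_s)
{
    return (t_echo_s - T_INIT_S) / 2.0 * SPEED_OF_SOUND_M_PER_S;
}
```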

3.3.3 Servo Actuator

To move the infrared sensors through their sensor field some mechanism was needed. We

first tried to use stepper motors, which have the advantage that the angle can be determined

very exactly. However, driving the motor coils required extra hardware, so we chose standard model car servos instead. They are cheap, lightweight, hard-wearing, come in many sizes, and have a standard control interface that can be accessed by microcontrollers without extra driver hardware.

Figure 20: Timing diagram for the GP2D02 sensor (VIn clock and VOut data lines, MSB transmitted first; timing marks of 70 ms, 1.6 ms, 2 ms, and 0.1 ms)


Figure 21: Connection diagram for the GP2D02 infrared sensor (the sensor's VIn line is driven by the TTP/A controller through two 3 kΩ resistors)

Figure 22: Timing diagram for the ultrasonic sensor (INIT and ECHO lines, marking start of measurement, echo received, and end of measurement)

The Servo Interface Servos have a three-wire interface: ground, power, and control. The

input to the control line is a pulse width modulated signal from which all servo timings and

positions are derived. All servos have their own limitations concerning the ability to perform

a full turn. Convention states that applying a 1.5 ms signal holds the servo in neutral, a

1 ms signal turns it counter-clockwise to its maximum angle, and a 2 ms signal results in

a clockwise turn (see Figure 23). The signal must be repeated at least every 30–50 ms; otherwise the servo may start jittering and finally stop driving the output, losing its active hold on the desired position.
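Assuming a linear relation between pulse width and deflection between the 1 ms and 2 ms endpoints (the exact angular limits are servo-specific), the control pulse for a desired angle can be sketched as:

```c
#include <assert.h>
#include <math.h>

/* Map an angle in [-max_angle_deg, +max_angle_deg] to a pulse width in ms,
 * assuming a linear characteristic: 0 degrees gives the 1.5 ms neutral
 * pulse, -max turns fully counter-clockwise (1 ms), +max fully clockwise
 * (2 ms). The linearity and the angular range are assumptions, not taken
 * from the servo's specification. */
double servo_pulse_ms(double angle_deg, double max_angle_deg)
{
    return 1.5 + 0.5 * (angle_deg / max_angle_deg);
}
```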

Connecting the servo The servo's control line is connected directly to the microcontroller's output port. The power and ground wires are connected to the main power line.


1ms 1.5ms 2ms

Figure 23: The servo control signal.

3.3.4 Steering Actuator

The steering actuator is used to set the steering angle of the front wheels. For steering the vehicle, the exact same type of servo as for moving the infrared sensors is used. From the interface point of view, the steering actuator is thus identical to the three servo actuators.

3.3.5 Speed Actuator

The digital speed control unit is instrumented very similarly to the servo. Applying a 1.5 ms signal puts the motor in neutral, while a pulse duration of 1 ms accelerates the car at full speed forward. Accordingly, a pulse length of 2 ms results in full reverse speed.

While the forward speed can be regulated by applying pulse lengths between 1 ms and 1.5 ms, pulse lengths between 1.5 ms and 1.75 ms result in an active brake, i. e., the vehicle stops immediately and holds its current position. Pulse lengths between 1.75 ms and 2 ms regulate the reverse speed.
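The pulse-length bands above can be summarized in a small decoding helper. The band boundaries follow the text; the function itself is purely illustrative and not part of the demonstrator software:

```c
#include <assert.h>

typedef enum { FORWARD, NEUTRAL, BRAKE, REVERSE } drive_state_t;

/* Classify a speed-control pulse length (in ms) into the bands described
 * above: below 1.5 ms drives forward (1 ms = full speed), exactly 1.5 ms
 * is neutral, up to 1.75 ms actively brakes, above that drives in reverse
 * (2 ms = full reverse). */
drive_state_t speed_state(double pulse_ms)
{
    if (pulse_ms < 1.5)   return FORWARD;
    if (pulse_ms == 1.5)  return NEUTRAL;
    if (pulse_ms <= 1.75) return BRAKE;
    return REVERSE;
}
```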

3.4 TTP/A Nodes

The network comprises 13 TTP/A fieldbus nodes. Some of them were designed especially

for this project while others have been adopted from existing applications (see [21]). Each

TTP/A node is equipped with a Motorola MC33290 ISO K Line Serial Link interface in


Figure 24: Employed TTP/A node types (from left to right: master node with monitoring interface, slave node based on AT90S4433 MCU, slave node based on ATMega128 MCU with external RAM; scale in centimeters)

order to establish communication at a bus speed of 9600 baud.1 All TTP/A nodes are implemented in accordance with the standardized smart transducers interface specification [19].

Figure 24 shows the three node types that have been used with the smart car. Except for

transducer-specific circuitry, all nodes are implemented on commercial-off-the-shelf micro-

controllers on a printed circuit board of about 4 cm × 4 cm. The node hardware has been

designed at the Institut für Technische Informatik at the University of Technology in Vienna.

Figure 25 depicts the network schematic and placement of the 13 nodes on the smart car.

The following paragraphs describe the hardware of the nodes employed in the smart car:

Master node: The master node consists of an Atmel AT90S8515 microcontroller and is

clocked by a 7.3728 MHz quartz that allows standard baud rates for the hardware

UART. The Atmel AT90S8515 microcontroller is a low-power CMOS 8-bit micro-

controller based on the AVR RISC architecture. It features 8 KB of in-system pro-

grammable flash, 512 bytes of SRAM and 512 bytes of in-system programmable

EEPROM. It also has one 8-bit and one 16-bit timer/counter with separate prescaler

and a programmable serial UART.

The master has full access to the file system of all nodes in the network, and is also

used as a gateway to other networks. The gateway provides a monitoring interface for

accessing the IFS contents of any node in the network. Thus, a monitoring tool on a

1 The performance of the particular nodes also allows higher transmission rates of up to 38400 baud; however, we have chosen 9600 baud in order to show the capability of TTP/A to provide a high net bandwidth due to its high data efficiency.


Nav . . . . . Navigation node
Pos . . . . . Position encoder node
Speed . . . . Speed control node
Steer . . . . Steering control node
US1 . . . . . Right ultrasonic sensor node
US2 . . . . . Left ultrasonic sensor node
IR1 . . . . . Middle infrared sensor node
IR2 . . . . . Right forward infrared sensor node
IR3 . . . . . Left forward infrared sensor node
Serv1 . . . . Servo control node for IR1
Serv2 . . . . Servo control node for IR2
Serv3 . . . . Servo control node for IR3

Figure 25: Network topology of smart car

PC can access the cluster via an RS232 serial link. Besides monitoring, the master’s

task is to provide a periodical synchronization event to all nodes via the TTP/A bus,

thus enabling conflict-free TDMA communication.

Infrared nodes: The three infrared nodes use an Atmel AT90S4433 microcontroller

clocked by a 7.3728 MHz quartz. This microcontroller features 4 KB of in-system

programmable flash, 128 bytes of SRAM and 128 bytes of in-system programmable

EEPROM. Each infrared node contains an interface to the TTP/A bus and the circuitry

for interfacing one Sharp GP2D02 distance sensor.


Servo control nodes: Each of the three infrared sensors is mounted on a servo in order to

detect objects at different angles in front of the car. Each servo is instrumented by an

Atmel AT90S4433 microcontroller clocked by a 7.3728 MHz quartz.

Steering control node: The steering of the car is performed by an extra servo that allows

turning of the two front wheels. Likewise, this servo is also instrumented by an Atmel

AT90S4433 microcontroller clocked by a 7.3728 MHz quartz.

Ultrasonic nodes: Each of the two ultrasonic sensors is controlled by an Atmel AT90S4433

microcontroller clocked by a 7.3728 MHz quartz and an integrated circuit featuring a

stepped-gain amplifier that creates the chirp signal levels for the ultrasonic sensors.

Position encoder node: The position encoder node measures the travelled distance of the smart car by means of a shaft encoding sensor that counts the number of revolutions of

a measuring wheel. The position node utilizes an Atmel AT90S4433 microcontroller

clocked by a 7.3728 MHz quartz.

Speed control node: The speed control node instruments a digital speed control unit that sets the motor speed and direction. The speed node utilizes an Atmel AT90S4433 microcontroller clocked by a 7.3728 MHz quartz.

Navigation node: The navigation node is a pure computation node without associated

transducer. The navigation node is implemented with an Atmel ATMega128 micro-

controller, which is a high performance, low-power 8-bit RISC microcontroller with

128 KB of in-system reprogrammable flash memory, 4 KB of EEPROM and 4 KB of

RAM. The microcontroller has been equipped with a 14.7456 MHz quartz and 32 KB

of external memory. A key advantage of the ATMega128 is that, while providing

greater performance, it is fully backward compatible with the ATMega103 microcon-

troller (in ATMega103 compatibility mode supported on fuse M103C), which is well

supported by the AVR-GCC compiler and current TTP/A implementations.

3.5 Demonstrator Software

This section describes the tasks of the smart car software parts. As depicted in the system

architecture, the system can be split up into local infrared sensor filtering, servo control, grid

generation, navigation/path planning, and intelligent motion control.


3.5.1 Infrared Sensor Filter

The GP2D02 shows problematic behavior when there are no objects within the sensor's

detection range (about 110 cm). In this case, the sensor returns jittering measurements that,

depending on the sensor constantsKG andK0, can also correspond to measurements within

the 80 cm range specified in the sensor’s data sheet. Since it was not feasible to operate

the sensor only when objects are present within its defined range, we developed a simple

filter algorithm that compares subsequent measurements and detects the so-called infinity case. The main difference between a sensor reading corresponding to a detected object and

a sensor reading corresponding to infinity is the variation of the sensor readings. The filter

uses a linear first-in-first-out queue that caches the last four infrared sensor readings in order

to estimate the variance of the sensor signal. If the variance is above a particular threshold,

the observation is considered to be an infinity measurement, i. e., no objects are present within a distance of about 100 cm. Otherwise, the median of the history values is determined and

used in equation 1 for calculating a distance measurement.
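A minimal sketch of such a filter, assuming a window of four raw readings and an illustrative variance threshold (the real threshold depends on the sensor constants and was tuned experimentally):

```c
#include <assert.h>
#include <math.h>

#define HISTORY 4
#define VAR_THRESHOLD 400.0  /* illustrative threshold, tuned per sensor */

/* Population variance of the last HISTORY raw readings. */
double window_variance(const double v[HISTORY])
{
    double mean = 0.0, var = 0.0;
    for (int i = 0; i < HISTORY; i++) mean += v[i];
    mean /= HISTORY;
    for (int i = 0; i < HISTORY; i++) var += (v[i] - mean) * (v[i] - mean);
    return var / HISTORY;
}

/* Median of four values: mean of the two middle elements after sorting. */
double window_median(const double v[HISTORY])
{
    double s[HISTORY];
    for (int i = 0; i < HISTORY; i++) s[i] = v[i];
    for (int i = 0; i < HISTORY - 1; i++)          /* tiny insertion sort */
        for (int j = i + 1; j < HISTORY; j++)
            if (s[j] < s[i]) { double t = s[i]; s[i] = s[j]; s[j] = t; }
    return (s[1] + s[2]) / 2.0;
}

/* Returns 1 for the "infinity case" (high variance, no object in range),
 * otherwise 0 and stores the median raw value in *filtered. */
int infinity_filter(const double v[HISTORY], double *filtered)
{
    if (window_variance(v) > VAR_THRESHOLD) return 1;
    *filtered = window_median(v);
    return 0;
}
```

The median value returned for the non-infinity case would then be fed into equation 1 to obtain a distance.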

3.5.2 Servo Control

The servo control unit instruments the three servo motors for the infrared sensors IR1, IR2, and IR3 in order to perform a sweep over a sector of 72 degrees for each sensor. The sectors

overlap each other partially so that all three sensors cover an overall view of 120 degrees.

Each servo is instrumented to take up 13 distinct positions that create the same number of

viewing angles for the infrared sensors. The necessary time for switching from one position

to the next is a critical parameter and depends on the employed servo, the angle difference

to go, and the supply voltage level. For the smart car we have determined a minimum value

of 200 ms that is necessary to move from one servo position to the next. However, since

the filter algorithm for the infrared sensors needs multiple measurements at every position,

we increased this time to 500 ms to be on the safe side. The servos are instrumented like a

windshield wiper [23]. This approach visits every second position in its forward wipe and

the other positions in its backward wipe. Besides setting the servos, the servo control unit

reports the current position of the servos to the grid generation unit.
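The windshield-wiper pattern can be sketched as follows. The exact visiting order on the smart car may differ; this only illustrates the scheme of visiting every second position on the forward wipe and the remaining positions on the way back:

```c
#include <assert.h>

/* Fill 'order' with a windshield-wiper visiting sequence over n positions:
 * positions 0, 2, 4, ... on the forward wipe, then the remaining odd
 * positions in descending order on the backward wipe. Returns the number
 * of entries written (always n). */
int wiper_order(int n, int order[])
{
    int k = 0;
    for (int i = 0; i < n; i += 2) order[k++] = i;        /* forward wipe  */
    int start = (n % 2 == 0) ? n - 1 : n - 2;             /* last odd index */
    for (int i = start; i >= 1; i -= 2) order[k++] = i;   /* backward wipe */
    return k;
}
```

For the 13 servo positions this yields 0, 2, ..., 12 on the forward wipe and 11, 9, ..., 1 on the backward wipe, so every position is measured once per full wipe cycle.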

3.5.3 Grid Generation

The task of the grid generation unit is to integrate the sensor measurements from the three

infrared distance sensors into a grid of 17 × 11 cells. Each cell corresponds to an area of


10 cm × 10 cm in the car's environment.

For each of the 13 viewing angles created by the servo control unit, the sensor’s line of sight

has been generated in the grid for each of the three sensors. Figure 26 depicts the lines in

the grid. The numbers refer to the respective servo positions. The dark box on the lower

center of each grid represents the smart car. Sensor 3 is located on the left corner, sensor

1 in the middle, and sensor 2 on the right corner. In order to save memory, the lines are

stored only by their endpoints in the grid. Each new distance measurement is added to the

grid by proceeding along the line using the Bresenham line algorithm [10]. All line points

that are within the measured distance are fused as free with the respective grid cell value. At the cell that corresponds to the measured distance, the value for occupied is fused with the respective grid cell value. The fusion of a free or occupied measurement with the

cell occupancy value is performed by the robust certainty grid algorithm, which has been

introduced in [7]. Alternatively, we implemented Bayesian fusion, according to [4].
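The line traversal can be sketched with a standard integer Bresenham walk. For brevity, this sketch overwrites cell values directly instead of applying the robust certainty grid or Bayesian fusion step; the grid dimensions follow the text:

```c
#include <assert.h>
#include <stdlib.h>

#define GRID_W 17
#define GRID_H 11

/* Occupancy grid: 0.0 = free, 0.5 = unknown, 1.0 = occupied. */
double grid[GRID_H][GRID_W];

void grid_init(void)
{
    for (int y = 0; y < GRID_H; y++)
        for (int x = 0; x < GRID_W; x++)
            grid[y][x] = 0.5;          /* everything initially unknown */
}

/* Walk the Bresenham line from the sensor cell (x0,y0) to the cell of the
 * measured distance (x1,y1): cells before the endpoint are marked free,
 * the endpoint itself occupied. The real demonstrator fuses these values
 * with the existing cell contents instead of overwriting them. */
void add_measurement(int x0, int y0, int x1, int y1)
{
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    while (x0 != x1 || y0 != y1) {
        grid[y0][x0] = 0.0;            /* free space along the beam */
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
    grid[y1][x1] = 1.0;                /* obstacle at the measured distance */
}
```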

3.5.4 Navigation and Path Planning

In order to achieve fast advancement, two different modes of operation have been defined.

As long as no obstacles are detected, the car operates in “rabbit mode”. In this mode the

car drives straight forward at full speed using only the ultrasonic sensors to perceive the

environment. The ultrasonic sensors report obstacles straight ahead of the car within a range

of about 150 cm.

In case an obstacle is detected, the car switches to “turtle mode”. In this mode the car

stops, performs a sensor sweep, and builds a map of the obstacles ahead by using the robust

certainty grid algorithm. Then the navigation and path planning algorithm is executed, performing the following steps:

Path planning: The first step in navigational decision-making is to plan the paths that the

car can take, given a fixed set of steering constants. The steering slave node can

currently handle up to 85 distinct constants. However, since most of them do not make

a significant directional difference for the short distances that apply in our application,

we reduced the number of possible directions to 13 evenly-spaced directional paths.

Each path contains the grid cells the car is crossing when choosing the respective

direction. Figure 27 depicts the 13 paths in relation to the certainty grid.

Path assessment: A key factor in sensible navigation decision-making lies in being able

to understand the relative risk of different obstacles in one’s path. Naturally, closer


Figure 26: Line of sight for each position and sensor

obstacles pose greater risks than obstacles further away. The smart car uses a simple

risk distribution scheme wherein the visible region around the smart car is divided into

concentric rings of 20 cm width. Each of these rings is then given a unique risk weight


Figure 27: Path planning

for all the grid cells in that ring, starting with the least risk for the ring containing the

farthest visible point. Figure 28 depicts the risk distribution scheme in relation to the

certainty grid.


Figure 28: Risk distribution scheme

Figure 29: Example scenario for navigation decision-making

Using the occupancy value cell.occ and the risk value cell.risk of a cell, a risk assessment for each path can be derived using the following equation:

risk_path = Σ_{cell ∈ path} cell.risk · cell.occ    (3)
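Equation (3) translates directly into a summation over the cells a path crosses; the cell structure below is illustrative, not the demonstrator's actual data layout:

```c
#include <assert.h>
#include <math.h>

/* Illustrative cell record: occupancy value and risk weight of a grid cell. */
typedef struct { double occ; double risk; } cell_t;

/* Risk of a path: sum of risk * occupancy over all cells it crosses
 * (equation 3). */
double path_risk(const cell_t *cells, int n)
{
    double risk = 0.0;
    for (int i = 0; i < n; i++)
        risk += cells[i].risk * cells[i].occ;
    return risk;
}
```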

Decision making: Once the expected risk for each of the 13 paths is evaluated, all paths

with assessed risks above a particular threshold value are discarded. From the re-

maining set of paths the one with the highest preference is chosen. Figure 29 depicts

an example scenario for navigation decision-making. Five feasible paths remain: the directions 00, 01, 02, 06, and 07. Higher numerical values


Measurement range (cm)    Confidence
[0, 12)                   2
[12, 18)                  3
[18, 24)                  4
[24, 30)                  5
[30, 38)                  6
[38, 150]                 11

Table 3: Relation between measurement range and confidence for the ultrasonic sensors

for PREFERENCE indicate higher priority for that direction. If more than one feasible path with the highest PREFERENCE is available, the path with the lowest risk among them is chosen. If this criterion is still ambiguous, the path with the lower direction index is chosen. In the current implementation, the preference values have been chosen such that the path closest to driving straight ahead is given preference.

If the set of feasible paths is empty, i. e., no path with a risk below the threshold value

is available, the car aborts further action and stops.

In the next stage of extension, the car will retrace its path in case of a dead end until

a decision to go forward can be made, which does not repeat the path it just retraced.

Furthermore, the car will be equipped with self-localization and the preference values

will be modified with respect to the location of a goal in relation to the position of the

smart car.

3.5.5 Fusion of Ultrasonic Observations

Due to the wide detection field of the ultrasonic sensors, it is not feasible to integrate the ob-

servations from the ultrasonic sensors into the sensor grid. Therefore, the information from

the sensors is processed separately using the confidence-weighted averaging algorithm [6].

Each ultrasonic sensor transmits a measurement and confidence value at predetermined

points in time. The confidence value is chosen by the ultrasonic transducer based on the

measurement range. The conversion function of the ultrasonic smart transducers has been

optimized for ranges farther than 100 cm, therefore, the accuracy is worse for short dis-

tances. Table 3 depicts the relation between measurement range and confidence for the

ultrasonic smart transducers.

The combination of the two measurement values x1 and x2 is straightforward. First, the


Confidence value    Interval for uniformly distributed error    Statistical variance
0                   [−100.0, 100.0]                             3333.33
1                   [−70.2, 70.2]                               1644.65
2                   [−49.3, 49.3]                               811.47
3                   [−34.7, 34.7]                               400.37
4                   [−24.3, 24.3]                               197.54
5                   [−17.1, 17.1]                               97.47
6                   [−12.0, 12.0]                               48.09
7                   [−8.4, 8.4]                                 23.73
8                   [−5.9, 5.9]                                 11.71
9                   [−4.2, 4.2]                                 5.78
10                  [−2.9, 2.9]                                 2.85
11                  [−2.1, 2.1]                                 1.41
12                  [−1.4, 1.4]                                 0.69
13                  [−1.0, 1.0]                                 0.34
14                  [−0.7, 0.7]                                 0.17
15                  [−0.5, 0.5]                                 0.08

Table 4: Conversion table for 16 different levels of confidence

respective variance values V[S1] and V[S2] for the transmitted confidence levels are determined by using a lookup table. Since there are only two measurements to be fused, the

fusion algorithm can be simplified to the following two equations:

x = (x1 · V[S2] + x2 · V[S1]) / (V[S1] + V[S2])    (4)

V[S_O] = (V[S1] · V[S2]) / (V[S1] + V[S2])    (5)

The resulting value is x with an expected variance V[S_O]. By using table 4 as reference, the

confidence of the fused value can be generated. The resulting confidence is always at least

as high as the highest confidence among the input observations.
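A minimal sketch of this fusion step, using the variances of Table 4 as lookup values:

```c
#include <assert.h>
#include <math.h>

/* Variance lookup for confidence levels 0..15 (Table 4). */
static const double conf_var[16] = {
    3333.33, 1644.65, 811.47, 400.37, 197.54, 97.47, 48.09, 23.73,
    11.71, 5.78, 2.85, 1.41, 0.69, 0.34, 0.17, 0.08
};

/* Fuse two measurements x1, x2 with confidence levels c1, c2 according to
 * equations (4) and (5); the fused variance is returned through *v_out.
 * Each measurement is weighted with the variance of the *other* one, so
 * the more confident value dominates. */
double fuse(double x1, int c1, double x2, int c2, double *v_out)
{
    double v1 = conf_var[c1], v2 = conf_var[c2];
    *v_out = v1 * v2 / (v1 + v2);
    return (x1 * v2 + x2 * v1) / (v1 + v2);
}
```

For two inputs with equal confidence the result is the plain average, and the fused variance is half the input variance, i.e. the output confidence is higher than either input.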

3.5.6 Intelligent Motion Control

The purpose of the intelligent motion control unit is to liberate the navigation and path

planning unit from actuator control issues. Navigation and path planning issues a path,

described by direction and length, to the intelligent motion control unit. In turtle mode,


length is constant in the current implementation, i. e., the car advances in steps of 20 cm in

turtle mode. In rabbit mode, the car advances straight forward until the ultrasonic sensors

detect an object. Thus, the intelligent motion control unit sets an appropriate forward speed

until the given distance is covered in turtle mode or the ultrasonic sensors report an obstacle

in rabbit mode.

3.6 Evaluation

In order to evaluate the performance of the robot’s perception, we test the grid generation

for three situations using the real hardware of the smart car. Each of the three infrared

distance sensors is used as an input to the robust certainty grid algorithm. The evaluation of

the sensor’s behavior has shown that the sensors show a markable amount of inaccuracies

and failures, which must be compensated by the robust certainty grid algorithm. Since the

main problem of the infrared sensors is the detection of free space, the first situation to be

tested does not contain any obstacles within a distance of 120 cm to the car. The second

parcour evaluates the opposite situation with all directions in front of the car being blocked.

The third parcour contains a typical situation with three obstacles. During the experiment,

neither the car nor the obstacles are moving.

As a benchmark, we also generate the grid with Bayesian fusion while using the same input

data.

3.6.1 Free Space Detection

Figure 30(a) depicts the parcour setup that has been used for the grid generation. This setup

does not contain any single obstacle within the sensors’ range. The smart car is located at

the bottom center of the picture. After performing a full sensor sweep, the system is stopped

and the grid is read out via the monitoring interfaces of the smart car. A sweep consists of setting each sensor successively to the 13 prespecified viewing angles. Figure 30(b) depicts

the grid generated by Bayesian fusion while figure 30(c) shows the grid generated by the

robust certainty grid algorithm. Each of the values depicted in the grid corresponds to the

occupancy value of a grid cell. The gray tone also illustrates the occupancy value: a dark

gray means high probabilities and a light gray indicates low probabilities. Since the sensor

values are all filtered, both grids are generated without errors except for some cells that had

not been updated by a corresponding sensor beam. The occupancy values for these cells are

0.5, reflecting an uncertain state.


(a) Parcour setup (no obstacles)

0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0

0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0

0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5

0.0 0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5 0.0

0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0

0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5

0.5 0.5 0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.0 0.0 0.0 0.5 0.5 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5

(b) Certainty grid generated with Bayesian fusion

0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0

0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0

0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5

0.0 0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5 0.0

0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0

0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5

0.5 0.5 0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.0 0.0 0.0 0.5 0.5 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5

(c) Result of robust certainty grid method

Figure 30: Free space detection


3.6.2 Dead End Detection

Figure 31(a) depicts the parcour setup with a dead-end situation. All directions in front of

the car, located at the bottom center of the picture, are blocked. After performing a full

sensor sweep, the system is stopped and the grid is read out via the monitoring interfaces of

the smart car. Figure 31(b) depicts the grid generated by Bayesian fusion while figure 31(c)

shows the grid generated by the robust certainty grid algorithm. In some cases, sensor 1

did not detect an object correctly and reported an erroneous value. This happens due to the mechanical movement of the servo motors on which the sensors are mounted. When the

servos are moving, the vibrations cause subsequent sensor measurements to deviate from

each other, which upsets the filtering algorithm. Using the given hardware, this problem

cannot be overcome since the sensor performance is worse without filtering. As depicted in

figures 31(b,c), both grids are affected by these failures; however, the robust certainty grid

algorithm compensated some of the wrong measurements using multiple sensor sources for

validation. Therefore the resulting grid in figure 31(c) better reflects the given situation than

the grid in figure 31(b) that was generated by Bayesian fusion.

3.6.3 Typical Situation with Three Obstacles

Figure 32(a) depicts a parcour setup with three obstacles. This is considered a typical sit-

uation for the operation of the smart car because some directions are blocked by obstacles

within the sensors’ range, while other directions contain just free space. After performing

a full sensor sweep the system is stopped and the grid is read out via the monitoring inter-

faces of the smart car. Figure 32(b) depicts the grid generated by Bayesian fusion while

figure 32(c) shows the grid generated by the robust certainty grid algorithm. As was the case in the dead-end situation, some sensor measurements erroneously reported the

value for infinity. Therefore, the obstacle in front of the car is not correctly mapped in the grid generated by Bayesian fusion, which is a critical error since the car could hit that

obstacle when navigating according to the grid contents. Similar to the case in figure 31,

the robust certainty grid algorithm is also affected by the sensor failure, but the algorithm

compensated most of the wrong measurements using multiple sensor sources for validation.

Therefore the grid given in figure 32(c) can be safely used for navigation.


(a) Parcour setup (dead end)

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 1.0 0.5 1.0 0.5 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.0 0.5 0.0 0.5 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.0 0.0 0.5 0.5 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.0 0.0 0.5 0.5 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 1.0 1.0 1.0 0.0 0.0 1.0 1.0 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 1.0 0.0 0.0 0.0 0.0 0.0 1.0 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 1.0 0.0 0.0 0.0 0.0 0.0 1.0 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 1.0 0.0 0.0 0.0 0.0 0.0 1.0 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.0 0.0 0.0 0.5 0.5 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5

(b) Certainty grid generated with Bayesian fusion

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 1.0 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.0 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 1.0 1.0 1.0 1.0 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 1.0 1.0 1.0 0.0 0.0 0.0 1.0 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 1.0 0.0 0.0 0.0 0.0 0.0 1.0 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 1.0 0.0 0.0 0.0 0.0 0.0 1.0 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.0 0.0 0.0 0.5 0.5 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5

(c) Result of robust certainty grid method

Figure 31: Dead end situation


(a) Parcour setup with three obstacles

0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.5 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5 0.5

0.5 0.5 0.0 0.0 0.0 0.0 0.0 0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.5 0.5 0.5

0.5 0.5 0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5 0.5 0.5

0.5 0.5 0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5 0.5 0.5 0.5

0.0 0.5 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 0.5 0.0

0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5

0.5 0.5 0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.0 0.0 0.0 0.5 0.5 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5

(b) Certainty grid generated with Bayesian fusion

0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.5 1.0 0.5 0.0 0.0 0.0 0.0 0.0 0.5 0.5

0.5 0.5 0.0 0.0 0.0 0.0 0.0 0.5 0.0 0.5 0.0 0.0 0.0 0.0 0.5 0.5 0.5

0.5 0.5 0.5 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.0 0.0 0.0 0.0 0.5 0.0 0.0 0.0 0.0 0.0 0.5 0.5 0.5 0.5

0.0 0.5 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 0.5 0.0

0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5

0.5 0.5 0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.0 0.0 0.0 0.5 0.5 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5

0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5

(c) Result of robust certainty grid method

Figure 32: Parcour setup with three obstacles


4 Fieldbus Gateway

To provide easy-to-use and universal monitoring access to smart transducer networks, we developed a PCMCIA gateway card (figure 33). The following section describes the design objectives and the implementation of the gateway card and provides an evaluation.

Figure 33: Printed circuit board of the PCMCIA gateway card, compared in size to a 2-Euro coin

4.1 Design Objectives

Since this gateway is intended to operate in a real-time environment, the deterministic structure given by this environment should be preserved when the gateway is used. We have identified the following requirements:

Connect different real-time hierarchies: For example, it should be possible to connect a TTP/A node to a TTP/C host.

Possibility for real-time communication: Interconnected clusters should be able to communicate with each other in a hard real-time manner, i.e., with a bounded message delay and low jitter.

Standardized interface: Implementing a widely used and adopted interface provides reliability, because such an interface has been extensively tested and approved. Besides, it eases integration and use in future systems.


Flexibility of connections: The design should be applicable to many different applications and a variety of fieldbuses.

Integration into the fieldbus: The gateway should provide full access to the fieldbus network.

No property mismatch on connecting interfaces: This directly relates to the choice of the interface, which has to accommodate the different property dimensions [14].

Clock synchronization: In order to achieve a globally synchronized time, synchronization across the network hierarchies must be possible.

Special care has to be taken when selecting a physical interface for the gateway. A collection

of possible interfaces is examined in [24]. Four possibilities have been considered regarding

their suitability:

RS232: A gateway using RS232 has already been implemented and is in use for monitoring low-speed TTP/A networks. Although it is a cheap and easily applicable solution, the serial connection is inherently message-oriented. This leads to a change of real-time semantics when shared-memory information has to be mapped onto messages, which is a good example of a property mismatch.

USB: USB offers an isochronous mode, which allows a device to request an amount of bandwidth for exclusive use and is therefore suited for real-time applications. Another benefit is the ability to connect more than one fieldbus master device to the USB bus, making the gateway a multi-point gateway. In general, however, the USB alternative suffers from the same property mismatch problem as the RS232 solution.

Dual-ported RAM: Dual-ported RAM provides a good architecture for real-time access, as it supports direct access to the fieldbus master's memory and variables. Its disadvantage is limited scalability when multiple fieldbus devices have to be integrated.

Ethernet: Ethernet offers a standard interface with high bandwidth. The disadvantage lies in the Carrier Sense Multiple Access / Collision Detection (CSMA/CD) access control, since the possibility of collisions makes real-time communication impossible unless all nodes on one Ethernet segment follow a TDMA scheme for bus arbitration.


4.1.1 Considerations for Implementation

Taking into account the facts listed in Section 4.1, the decision was made in favor of the dual-ported RAM, considering it the best approach to preserve real-time behavior. Since access is performed via the PCMCIA interface from a TTP/C host or laptop computer, the integration can be done very easily. The considerations are as follows:

Minimal hardware resources: If possible, single-chip solutions should be used to reduce the overall system cost.

RAM size of about 1 KB: This reflects a typical TTP/A node specification for reasonable applications.

Flexibility: The gateway should support the interconnection of different fieldbus types (e.g., TTP/A, CAN) to various host networks.

Commercial hardware: As far as possible, available and standardized hardware that has proven its reliability should be used.

Device driver availability: When connecting the gateway to the higher-level hierarchy, access to the card must be given via the three interface types: the real-time service (RS) interface, the diagnostic and management (DM) interface, and the configuration and planning (CP) interface.

4.1.2 Design Evaluation

To our knowledge, no commercial PCMCIA card exists that provides an interface to an MCU via dual-ported RAM and would be suitable for our purposes, so a dedicated design had to be conceived. A survey of the market for available PCMCIA cards showed that the market has shrunk mainly to the network and modem card sector. As a result, single PCMCIA interface chips are no longer available, because this function is nowadays integrated into application-specific chips. This means that the PCMCIA interface logic has to be implemented from scratch. The whole design has to fit on a 54 mm × 85.6 mm area, the dimensions of a PCMCIA card. Only chips below a height of 2.8 mm (on top) respectively 1.1 mm (on bottom) can be used, unless a Chip-in-PCB2 design is invoked. Moreover, several other logic elements (reset, etc.) have to be added, so the chip count is expected to increase rapidly. To cope with this increase, and because the design changes during ongoing development, we decided to use a Field-Programmable Gate Array (FPGA), which allows the integration of many functions into one component and therefore saves precious space. Furthermore, an FPGA provides an easy way to apply design changes in the future. Since the card also holds an MCU for the implementation of a fieldbus node, a serial PROM for FPGA configuration can be omitted. A simple bootloader program running on the MCU can take over this configuration task.

2This refers to a special type of board design where an area of the printed circuit board (PCB) is cut out in order to solder the corresponding chip from beneath, reducing the overall height.

4.2 Implementation

This gateway card was designed to connect various types of fieldbuses to higher-level hierarchies. Due to its flexible architecture, it can be reconfigured and extended functions can be implemented. Since it uses a dual-ported RAM for the exchange of data, it is well suited for applications depending on real-time operation.

The card is intended for the following purposes:

• to provide a connection from TTP/C via the PCMCIA interface to the TTP/A fieldbus, and

• to communicate from TTP/C with the CAN fieldbus.

[Figure omitted: diagram of the three gateway ports TTP/C, TTP/A, and CAN]

Figure 34: Gateway functionality

Thus, the PCMCIA card is a three-point gateway (see figure 34). Communication from TTP/A to CAN would also be possible, but is not covered in this implementation. The TTP/A


interface is implemented in the MCU. By reprogramming the MCU, other fieldbuses like LIN [1] can be interfaced as well.

To fulfill the first objective, a design with a dual-ported RAM has proven to be a good approach towards a solution that fully supports real-time behavior. Since the card already hosts an FPGA to provide the PCMCIA interface, it is advantageous to use an FPGA device that includes dual-ported RAM. Since the configuration of the FPGA is held in volatile memory, it always needs reprogramming after power-up to become operative. This is done by a bootloader via the onboard MCU. As a further benefit, this approach enables easy and unlimited reprogramming of the FPGA.

For the second purpose a special component is required: the CAN-Controller. This is a

ready-made commercial chip that takes care of the whole CAN-related communication.

4.2.1 Card Components

As the functional diagram in Figure 35 shows, the card consists of four main parts:

• PCMCIA Interface Logic

• Dual-ported RAM

• MCU

• CAN-Controller

The PCMCIA interface is intended to support the connection to a high-level network. This means that this side of the gateway can be connected to a TTP/C host, a laptop computer, or any other device that offers PCMCIA insertion. The card is designed to be PCMCIA 2.1 compliant, which is the latest derived standard and is backwards-compatible with 8-bit hosts, so the card can be used in every PCMCIA slot.

The design uses the following chips:

Xilinx Spartan-II FPGA: The product of choice is from the Xilinx Spartan-II family, which also offers the possibility to implement dual-ported RAM inside the chip.

Atmel ATmega128 MCU: This is an 8-bit microcontroller, which offers many I/O pins and 128 KB of flash memory.


[Figure omitted: block diagram of the PCMCIA gateway card. The FPGA contains the PCMCIA interface logic and the dual-ported RAM; the MCU implements the TTP/A master, which connects via a bus driver to the TTP/A network and its nodes; the CAN controller connects via a bus driver to the CAN network and its nodes; the PCMCIA interface connects to the PCMCIA host bus adapter (HBA).]

Figure 35: Gateway Architecture

Philips SJA1000 CAN Controller: This common chip offers two different modes of operation: BasicCAN mode and PeliCAN mode. For more information, please see [22].

Philips TJA1050 CAN Transceiver: This chip is used as the interface between the CAN controller and the physical CAN bus.

Motorola ISO K-line serial link interface: This is a serial link bus interface for bi-directional half-duplex communication. This single-wire bus interface is widely used in the low-cost automotive domain.

Maxim RS485/422 Transceiver: This chip is a differential bus driver supporting the RS485 standard.

The connectors offered by the card are listed in table 5.

4.2.2 Memory Layout

Figure 36 gives an overview on the memory layout of the dual-ported RAM within the

FPGA. The RAM can be accessed from two sides. The left side in the figure corresponds to


Connector               Description
PCMCIA connector        Connection to host computer
JTAG (for FPGA)         For debugging of the FPGA from any PC via parallel cable and JTAG interface
JTAG (for Atmel)        For programming and debugging of the MCU
RS485 connector         Connection to twisted-pair RS485 networks
ISO K-line connector    Connection to a single-wire network (e.g., TTP/A or LIN)
CAN connector           Connection to the CAN fieldbus

Table 5: Connectors implemented on the PCMCIA card

the PCMCIA host, while the right one is dedicated to the MCU. Since the Atmel ATmega128 maps external RAM at 0x8000, the address ranges differ.

Address for access        Address for access
from PCMCIA host          from ATmega128        Contents
0x0000 - 0x00FF           0x8000 - 0x80FF       IFS headers (0x0100 bytes)
0x0100 - 0x07FF           0x8100 - 0x87FF       Dual-ported RAM (0x0700 bytes)
0x0800 - 0x08FF           0x8800 - 0x88FF       CAN-Controller (0x0100 bytes)
0x0900 - 0xFFFF           0x8900 - 0xFFFF       not implemented

Figure 36: Memory layout of the dual-ported RAM

The PCMCIA host addresses 0x0000 to 0x00FF contain information about the organization of the IFS for the TTP/A network (see Section 4.2.4). The addresses 0x0100 up to 0x07FF host the IFS contents. The area from 0x0800 up to 0x08FF is used to interface the CAN-Controller.

4.2.3 Client Drivers and Kernel Modules

For each PCMCIA card a client driver is required. This driver reads the card information

structure (CIS) to determine the card’s resource requirements, initializes the host bus adapter


(HBA), and configures the card. According to the PCMCIA specification [20], this driver is called the card enabler. It uses functions provided by card services.

Accessing the card via PCMCIA requires several components (client driver, card services, socket services, CIS). A registration process permits access to card services and allows the driver to subscribe to the events it wants to be informed of. The following paragraph gives a short overview of the configuration process:

The registration process starts with a validation of the installed card services. This task is done by the function GetCardServicesInfo. Afterwards, card services can be registered by using the function RegisterClient for memory clients. This function also enables the reception of insertion, removal, and call-back events. Furthermore, a handle is returned to the client to allow identification on future calls. The client must be aware of call-backs from card services when the card is inserted or removed. On insertion, the configuration table within the card information structure (CIS) must be read. The client driver can use the functions GetFirstTuple, GetNextTuple, and GetTupleData to process the CIS. Important values are the device type, which is set to DPRAM in the case of the gateway card, and the memory size. If the device type differs from DPRAM, the driver has to return an error code indicating that the card has not been found. Using the size parameter, the address space of the card's memory is determined. Note that the function names given here are taken from the PCMCIA specification [20] and can differ in some implementations.

[Figure omitted: the gateway accesses the shared IFS, which is held by the TTP/A master; the TTP/A master and each of the TTP/A slaves hold a local IFS and are connected via the TTP/A bus.]

Figure 37: Network structure of the distributed interface file system


Bit No.    7     6  5      4  3  2  1  0
Byte 0     RO    Status    reserved
Byte 1     Length−1
Byte 2     Pointer (LSB)
Byte 3     Pointer (MSB)

RO         Read only: if set, the file is read-only.
Status     The status of the file (ok if set to 01b, damaged otherwise).
Length−1   Number of records in the file decreased by one, i.e., the index of the last record; indices start with 0 (header record).
reserved   These bits are reserved for further enhancements.
Pointer    Locates the file relative to the start of the dual-ported RAM (record address).

Figure 38: IFS header record in the dual-ported RAM

4.2.4 Accessing the IFS from the Gateway

A part of the IFS (Interface File System) of a TTP/A master is mapped into a shared memory realized by a dual-ported RAM. This memory is accessed by a gateway that either directly accesses the visible IFS or performs a master-slave request to poll the required data. Figure 37 depicts the network structure of the distributed interface file system.

The first 0x0100 bytes of the dual-ported RAM contain the header records (see figure 38) of all IFS files of the gateway. These header records contain pointers, which are used to locate files that are directly mapped into the shared IFS.

The pointer value holds the start address of each IFS file (without header) relative to the start of the dual-ported RAM, counted in records (4 bytes). For IFS files that are not mapped into the shared memory, this value is set to 0x0000. These files, and files located on other nodes, can be accessed via a special gateway request file.

4.2.5 Mapping of Gateway Request onto the Interface File System

All master-slave requests are mapped onto the gateway request file (file 0x0C). This file must be located in the shared IFS. Byte 0 of record 1 of this file specifies the number of available gateway requests (see figure 39). This number is followed by the respective number of request buffers.

A gateway request is stored according to the format depicted in table 6.


[Figure omitted: file layout — count byte n, followed by request buffers 1 … n]

Figure 39: Gateway request file

Byte index   Element name       Size in bytes
0            Status             1
1            Cluster name       1
2            Node alias         1
3            File:Op-Code       1
4            Record number      1
5            Checksum (addr.)   1
6            Data bytes         4
10           Checksum (data)    1

Table 6: Format of a gateway request

Each request takes 11 bytes. The status byte at the beginning indicates whether the request is passive (0x00), active (0x01), or done (0x02). If an error has occurred during execution, the status byte contains an error code (≥ 0xF0).

A master-slave request from the gateway side is thus performed as follows:

1. Select an entry with a status byte of 0x00. Be aware that a status ≥ 0x02 may indicate that the request has been performed but has not yet been processed by the gateway.

2. Write the addressing parameters (i.e., cluster name, alias, file name, alignment, and record number).

3. In the case of a write operation, the contents of the target record are also written into the four-byte data buffer (starting at index 6) of the request. The checksum is generated by the master.

4. Change the status byte to 0x01. This sets the entry to active and causes the master to resolve the request.

5. Wait for a change of the status byte. It is possible to initiate another request during that time.

6. A status byte of 0x02 indicates that the request has been performed successfully; otherwise the status byte contains an error code. If the status byte shows 0x02, the results can now be read from the data field, in the case of a read or execute operation.


7. When the results of the request have been consumed, set the status byte to 0x00 in order to allow new requests.

Note that there can be more than one active request at a time. The master may answer these requests in arbitrary order. The data bytes can be written either by the TTP/A master or by the gateway, according to the specified operation.

Establish connection: When the IFS is accessed from the TTP/C host, an IFS address is sent to the driver at the higher-level network, which translates the IFS address into a memory address and then retrieves the information from the dual-ported RAM via a simple RAM read. As a result, the driver requires knowledge of the virtual address given by card services whenever a card is inserted.

4.2.6 CAN Access from a TTP/C host via PCMCIA

Via the onboard CAN controller it is possible to send and receive messages over the CAN fieldbus network. The registers of the controller are mapped into the address range of the PCMCIA bus according to Figure 36. The chip chosen for the CAN interface is the Philips SJA1000, because it offers a circular buffer and is commonly used.

4.3 Throughput Evaluation

Option Register   Data Width   Calculated Throughput

Read Access
16-bit            16-bit       1.60 µs per access ≈ 1.25 MByte/s
8-bit             16-bit       3.23 µs per access ≈ 619 kByte/s
8-bit             8-bit        1.61 µs per access ≈ 623 kByte/s

Write Access
16-bit            16-bit       1.67 µs per access ≈ 1.18 MByte/s
8-bit             16-bit       3.26 µs per access ≈ 613 kByte/s
8-bit             8-bit        1.66 µs per access ≈ 602 kByte/s

Table 7: PCMCIA throughput

A measurement determines the PCMCIA data throughput in both 8-bit and 16-bit mode. The PCMCIA Host Bus Adapter (HBA) offers several registers [17] for configuration and signaling. To switch the HBA between 8-bit and 16-bit mode, the PCMCIA Port Size (PPS) field of the


option register is used. If the register field is set to 0, the port size is 8 bit; if it is set to 1, the port size is 16 bit.

For the measurement, the HBA is put in either 8-bit or 16-bit mode, and a test program executes a sufficient number of writes or reads to produce a meaningful result. The measured data and the computed access times are shown in table 7.

5 Summary and Conclusion

This report describes the principles of operation of the smart transducer interface and the

design and implementation of a case study.

The smart transducer interface specifies two generic digital communication interfaces (LIFs) to a sensor bus: a time-triggered LIF that periodically transmits the current measured values, and a non-time-critical event-triggered LIF that is used for sensor configuration and diagnostics. By hiding the internal structure of the sensor, smart sensor technology can contribute to a reduction in complexity at the system level. The smart transducer interface specification supports applications in several fields, namely time-triggered real-time systems, smart transducer interfaces, monitoring and configuration, sensor fusion, and robot navigation.

The autonomous mobile robot (“smart car”) acts as a case study for the smart transducer interface. The software is structured according to the three levels of the Time-Triggered Sensor Fusion Model: The node level contains the local self-contained software of the transducer nodes; communication among the nodes is strictly time-triggered. The cluster level contains the software that forms the glue between the sensors and actuators and the control application; it hosts the sensor fusion algorithms that enable a consistent view of the environment regardless of sensor deficiencies. The control application level hosts the navigation code and handles the user interface. The system architecture allows monitoring of all relevant node properties without affecting the real-time behavior of the application. Configuration tools can access the cluster via an RS232 interface.

The PCMCIA gateway card provides an alternative implementation of a gateway. While the RS232 solution had to map the shared-memory-based Interface File System (IFS) concept onto a message-based interface, this property mismatch is overcome by the PCMCIA interface, which is itself based on a shared memory. Using this shared memory, the gateway card provides access to the Real-Time Service (RS), the Configuration and Planning (CP), and the Diagnostics and Management (DM) interfaces.


5.1 Contributions to DSoS

From a number of different ideas for designing a smart transducer interface, the notion of the Interface File System emerged as the central concept for an understandable and flexible generic smart transducer interface. With the start of the DSoS project, this concept was implemented and expanded. We have made very positive experiences with a number of prototype implementations of this interface on different low-cost microcontrollers. This work has already generated a high level of industrial interest.

In order to achieve interoperability between subsystems that have been designed in different organizations at different times, legal or de-facto standards for the interaction of subsystems are necessary. The standardization of the smart transducer interface by the Object Management Group provides such a standard and is thus an important contribution to the planned dissemination actions within the DSoS project.

The implementation of the smart transducer interface in the smart car and the PCMCIA gateway card has provided a proof of concept of the interfacing strategies regarding composability, dependability, and event-triggered versus time-triggered linking interfaces.

5.2 Future Work

The DSoS project has provided a well-defined foundation for the design and implementation of further concepts. In a more focused view, future work will comprise the integration of time-triggered and event-triggered services into a dependable system, the development of computer-aided support tools for remote monitoring and configuration of a smart transducer network, and the integration of simulation and test methods for smart transducer networks.

References

[1] Audi AG, BMW AG, DaimlerChrysler AG, Motorola Inc., Volcano Communication Technologies AB, Volkswagen AG, and Volvo Car Corporation. LIN specification and LIN press announcement. SAE World Congress Detroit, http://www.lin-subbus.org, 1999.

[2] Dependable Systems of Systems (DSoS), IST-1999-11585. Dissemination and Use Plan, 2002. Available at http://www.newcastle.research.ec.org/dsos/.

[3] P. Dierauer and B. Woolever. Understanding smart devices. Industrial Computing, pages 47–50, 1998.

[4] A. Elfes. Using occupancy grids for mobile robot perception and navigation. IEEE Computer, 22(6):46–57, 1989.

[5] W. Elmenreich, W. Haidinger, P. Peti, and L. Schneider. New node integration for master-slave fieldbus networks. In Proceedings of the 20th IASTED International Conference on Applied Informatics (AI 2002), pages 173–178, February 2002.

[6] W. Elmenreich and P. Peti. Achieving dependability in a time-triggered network by sensor fusion. In Proceedings of the 6th IEEE International Conference on Intelligent Engineering Systems (INES), pages 167–172, Opatija, Croatia, May 2002.

[7] W. Elmenreich, L. Schneider, and R. Kirner. A robust certainty grid algorithm for robotic vision. In Proceedings of the 6th IEEE International Conference on Intelligent Engineering Systems (INES), pages 25–30, Opatija, Croatia, May 2002.

[8] W. Elmenreich, H. Kopetz, and W. Haidinger. Specification of the smart sensor interface. Research Report 7/2001, IST-1999-11585 Dependable Systems of Systems (DSoS), 2001. Available at http://www.vmars.tuwien.ac.at.

[9] W. Haidinger and R. Huber. Generation and analysis of the codes for TTP/A fireworks bytes. Research Report 5/2000, Technische Universität Wien, Institut für Technische Informatik, Vienna, Austria, 2000.

[10] D. Hearn and M. P. Baker. Computer Graphics. Prentice Hall, 1986.

[11] C. Jones, M.-O. Killijian, H. Kopetz, E. Marsden, N. Moffat, D. Powell, B. Randell, A. Romanovsky, R. Stroud, and V. Issarny. Final version of the DSoS conceptual model. DSoS Project (IST-1999-11585) Deliverable CSDA1, October 2002. Available as Research Report 54/2002 at http://www.vmars.tuwien.ac.at.

[12] H. Kopetz. Real-Time Systems: Design Principles for Distributed Embedded Applications. Kluwer Academic Publishers, Boston, Dordrecht, London, 1997.

[13] H. Kopetz, M. Holzmann, and W. Elmenreich. A universal smart transducer interface: TTP/A. International Journal of Computer System Science & Engineering, 16(2):71–77, March 2001.

[14] H. Kopetz, M. Paulitsch, C. Jones, M.-O. Killijian, E. Marsden, N. Moffat, D. Powell, B. Randell, A. Romanovsky, and R. Stroud. Revised version of DSoS conceptual model. DSoS Project (IST-1999-11585) Deliverable IC1, October 2001.

[15] H. Kopetz et al. Specification of the TTP/A protocol. Technical report, Technische Universität Wien, Institut für Technische Informatik, Vienna, Austria, March 2000. Available at http://www.ttpforum.org.

[16] K. B. Lee and R. D. Schneeman. Internet-based distributed measurement and control applications. IEEE Instrumentation and Measurement Magazine, 2(2):23–27, June 1999.

[17] Motorola. MPC855T Users Manual: Integrated Communications Microprocessor, April 2002. Available at http://www.motorola.com/semiconductors.

[18] N. Murphy. Principles of user interface design. Embedded Systems Programming, December 2000.

[19] Object Management Group (OMG). Smart Transducers Interface V1.0, January 2003. Specification available at http://doc.omg.org/formal/2003-01-01 as document ptc/2002-10-02.

[20] PCMCIA. PC Card Standard 8.0. PCMCIA, 2001.

[21] P. Peti and L. Schneider. Implementation of the TTP/A slave protocol on the Atmel ATmega103 MCU. Technical Report 28/2000, Technische Universität Wien, Institut für Technische Informatik, Vienna, Austria, August 2000.

[22] Philips. SJA1000 stand-alone CAN controller, August 1997. Product specification. Available at http://www.philips.semiconductors.com.

[23] L. Schneider. Real-time robot navigation with a smart transducer network. Master's thesis, Technische Universität Wien, Institut für Technische Informatik, Vienna, Austria, 2001.

[24] C. Trödhandl and L. Schneider. Interfaces for TTP/A intercluster communication and monitoring. Research Report 5/2002, Technische Universität Wien, Institut für Technische Informatik, Vienna, Austria, 2002.
