

An Enhanced Data Reduction Algorithm for Event-Triggered Networks

Radovan Miucic, Student Member, IEEE, Syed Masud Mahmud, Senior Member, IEEE, and Zeljko Popovic

Manuscript received May 5, 2008; revised September 4, 2008. Copyright (c) 2008 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to [email protected].

R. Miucic is with the Department of Electrical and Computer Engineering, Wayne State University, Detroit, MI 48202 USA (e-mail: [email protected]).

S. M. Mahmud is with the Department of Electrical and Computer Engineering, Wayne State University, Detroit, MI 48202 USA (e-mail: [email protected]).

Z. Popovic is with the Department of Electrical and Computer Engineering, University of Michigan, Dearborn, MI 48128 USA (e-mail: [email protected]).

IEEE TVT – Manuscript VT-2008-00390

Abstract— As functionality of vehicles increases in complexity,

the demands on the in-vehicle networks increase as well.

Maximum bus utilization often becomes the communication

bottleneck. One way to satisfy the high bandwidth requirement

for future vehicles is to use a higher bandwidth bus or multiple

buses. However, the use of a higher bandwidth bus increases the

cost of the network. Similarly, the use of multiple buses increases

the cost as well as the complexity of wiring and network handling.

Both options address the high bandwidth demand, but at increased cost. An alternative option is the development of a higher

layer protocol that uses data reduction techniques to reduce the

amount of data to be transferred. Its goal is to communicate the

same amount of information using less bus bandwidth. Such a protocol would be acceptable provided that it does not increase message latencies significantly, so that the safety of the vehicle is not

compromised. The cost of the protocol is expected to be marginal

because it consists of one-time changes to software. Various data

reduction algorithms are available in the literature, but data

reduction technology has not been introduced in in-vehicle

protocol standards. This paper presents a unique data reduction

methodology, along with its comparison with other proposed

methodologies. The performance of this new data reduction

algorithm is found to be better than that of the existing data

reduction algorithms for a wide range of signal dynamics. The

cost of this protocol, as well as its impact on the end-to-end message latency, has been found to be marginal.

Index Terms— Controller Area Network, Data Reduction,

Event-Triggered Bus, In-Vehicle Network, Multiplexing.

I. INTRODUCTION

OVER the years, as the number of vehicular electronic

components increased significantly, vehicle multiplexing

evolved due to the need for better wiring, diagnosis, reliability

and lower cost. The need for vehicle multiplexing was


predicted as early as 1976 [1]. Lupini presented the need for and advantages of vehicle multiplexing [2]. Lupini also presented

issues related to designing vehicle multiplexing systems and

future trends in the area [3]. Appropriate development tools

are also necessary to design and maintain a network system.

Wolfhard [4] presented a detailed description of network

development tools, their characteristics and handling

procedures. Computer simulation is also a vital tool for studying the behavior of a proposed network system and the performance of alternative network designs [5]. As the

number of electronic components grows in a system, the

probability of system malfunction due to a faulty component

increases as well. Masrur [6] proposed a fault-tolerant

multiplexing network architecture and compared the reliability

and cost of the fault-tolerant system versus those of a non-

fault-tolerant system.

During the last three decades, various networking protocols

were proposed for vehicle multiplexing. However, at present

CAN (Controller Area Network) is the most popular protocol

and it is widely used. Wolfhard [7] presented valuable information about the CAN protocol, its application layer design, CAN chip implementation and CAN testing techniques. The LIN protocol is also used for low-cost and low-speed networks.

Various suppliers are manufacturing Electronic Control Units

(ECUs) with built-in CAN and LIN controllers. Different suppliers may interpret protocol specifications slightly differently; therefore, interoperability of distributed ECUs is at risk. Conformance tests are necessary to reduce the risk of poor interoperability among cooperating

ECUs. Wolfhard [8] described the process for related

conformance tests and presented the implementation

architecture.

As different features such as telematics, multimedia, X-by-

wire, etc. are being added to vehicles, future vehicles will need

various types of networks with various types of protocols.

Lupini [9] predicts that at least eighty in-vehicle networks may

be necessary mainly on high-end vehicles in the next ten years.

Interconnecting those various types of networks will also be a

challenge.

Kassakian and Perreault noted in 2001 [10] that there are up

to 70 ECUs scattered throughout the vehicle. Numerous ECUs

in today’s vehicles are exchanging an ever-increasing amount

of information. These demands are reaching the bandwidth

limitations of the existing in-vehicle networks. Potential


solutions are increasing the bandwidth of existing buses or

using multiple buses. Although these solutions are effective,

they necessarily increase the network cost and complexity. An

alternative solution that comes with a negligible cost increase

is the use of a data reduction technique. Data reduction uses

algorithms to represent information more efficiently, thus

using less bus bandwidth to exchange the given amount of

information. The cost increase is negligible because it is

limited to one-time software development.

To find an effective data reduction method for in-vehicle

networks, it is necessary to understand the nature of the

information flow. Numerous modules connected to the bus

handle distributed functionalities of the vehicle. Modules send

measurements of their internally connected sensors, status of

their actuators, states of operation, and so forth. Generally,

messages are periodic or event based. Periodic messages are transmitted at fixed time intervals. Event-based messages are

transmitted on event occurrences. Frequent periodic and event-

based messages contribute the most to the bus utilization.

Therefore, we will focus on reducing the data of the most

frequent messages.

The amount of data can be reduced in several ways. In the

case of a vehicle traveling on the highway, the vehicle speed

varies very little. Thus, the electronic module responsible for

sending the vehicle speed can send only one bit instead of the

actual speed to inform the recipients when the speed does not

change. Similarly, when the change in speed is very small, the

electronic module can send only the amount of change using a

few bits, rather than the actual speed using many bits. The

protocol can also be made adaptive by allowing it to select

different message formats for different conditions of the

parameter. For example, when the vehicle speed is changing

slowly with respect to time, the message format can be

different from the format when the vehicle speed is changing

rapidly.

Data reduction is not only useful in production vehicles, but

in development environment as well. For example, data

reduction can be applied in data acquisition tools that use

CAN Calibration Protocol (CCP) [11] to reduce the number of

messages and bus utilization. The advantages for the resulting tools are an increased data rate and the ability to monitor more

parameters. Examples of CCP instrumentation tools are

Vector’s CANape [12] and ATI’s Vision [13] software.

This paper explains in detail a new algorithm that utilizes

these ideas and compares its performance with earlier attempts

at in-vehicle data reduction. The performance of this algorithm

has been found to be significantly better than that of the

previous algorithms. The rest of the paper is organized as

follows. Section II describes background information. Section

III describes the proposed Enhanced Data Reduction algorithm

along with the analysis of cost and impact on message latency

for the algorithm. Section IV shows the performance analysis

of the proposed algorithm, and Section V presents the

conclusions.

II. BACKGROUND INFORMATION

The transmission and reception of messages in all data

reduction techniques work as follows: Consider a node sending

a periodic message with the ID M every T milliseconds. The

sending node stores all signals at the designated transmitting

buffer TX[M] for message M. The first message M going on

the bus at t = tn contains all signals in their entirety. At the next

time interval, t = tn + T, the sending node compares the current

values of the signals to the ones stored in the TX[M],

transmitting buffer. The sending node then assembles the

second message M for transmitting based on the signals’

differences. Similarly, at the reception of the first message M,

at t=tn+tmtt (tmtt is message travel time), the receiving node

stores all signals to the designated receiving buffer RX[M] for

message M. At the reception of the second message M, at t =

tn+ T + tmtt, the receiving node first decodes the message and

then using the receiving buffer RX[M] and the information

from the message, reconstructs the signals. New signals

replace their previous versions in the receiving buffer for

message M.
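As an illustration of this shared bookkeeping (a minimal C sketch, not taken from any of the cited works; the structure and field names are our assumptions), each node keeps one buffer per message ID holding the last transmitted or last reconstructed signal values:

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define MAX_SIGNALS 8   /* illustrative limit per message */

/* Last-transmitted (TX[M]) or last-reconstructed (RX[M]) signal values
 * for one message ID. */
typedef struct {
    uint16_t signal[MAX_SIGNALS];
    uint8_t  num_signals;
    bool     valid;              /* false until the first full message is seen */
} msg_buffer_t;

/* Returns true if any signal differs from the copy stored in the buffer,
 * i.e. a (possibly reduced) transmission is required. */
static bool signals_changed(const msg_buffer_t *buf, const uint16_t *current)
{
    if (!buf->valid)
        return true;             /* first transmission: send everything in entirety */
    return memcmp(buf->signal, current,
                  buf->num_signals * sizeof(uint16_t)) != 0;
}

/* After a message is encoded (TX side) or decoded (RX side), the buffer
 * is refreshed with the new signal values. */
static void update_buffer(msg_buffer_t *buf, const uint16_t *current)
{
    memcpy(buf->signal, current, buf->num_signals * sizeof(uint16_t));
    buf->valid = true;
}
```

The same structure serves as the transmitting buffer TX[M] on the sender and the receiving buffer RX[M] on the receiver.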

Misbahuddin et al. [14] provide a comprehensive overview

of previous work applicable to in-vehicle networks. The

examined data reduction techniques are several variants of

Huffman coding [15], several variants of arithmetic coding

[16], textual substitution coding [17], and command data

stream reference coding. However, the applicability of all

these methods is limited to textual data. As a remedy,

Misbahuddin et al [14] proposed a general data reduction

technique based on repeatability of data bytes in a message.

Ramteke et al [18] suggested a data reduction technique based

on signal stability. Both algorithms are designed to work with

the Controller Area Network (CAN) [19] but can be applied to

any serial protocol. Brief descriptions of various existing data

reduction techniques are given below.

A. Data Reduction (DR)

A CAN message has up to 8 data bytes [19]. Data bytes often do not change from one message transmission to the next.

Misbahuddin [14] exploited byte repeatability to come up with a general form of a data reduction algorithm. In Misbahuddin's

data reduction algorithm (DR), if all bytes change from the

previously transmitted message to the current message, the

message to be transmitted is sent as is, without data reduction.

If more than one byte remains unchanged, a reduced message

is sent.

DR uses the reserve bit (R) in the control field of a CAN

message to indicate that data reduction is taking place. In DR,

this bit is called the Data Compression Bit (DCB). The DCB is

set to “1” when a message is compressed, and it is cleared to

“0” when the message is not compressed. In a compressed

CAN message, the first byte is the Data Compression Code

(DCC). The position of each bit in the compression code

corresponds to the data byte position of the originally intended

uncompressed message. A bit with a value of “1” indicates a

byte that has not changed since the previous transmission, and

a bit with a value of “0” indicates that the byte has changed.


The changed bytes are placed after the compression code in

the data field of the actual data frame sent over the

multiplexing bus [14].

As an example, Fig. 1 shows the data field of the first,

original, message with 8 data bytes and the second,

Misbahuddin’s reduced message, with only 4 bytes. In the

second message, bytes 0, 2, 4, 5, and 7 have not changed from

the first message. Thus the reduced second message contains

only the changed bytes 1, 3, and 6. Notice that the compression code has three zeros corresponding to data bytes 1, 3, and 6, indicating that these three bytes changed in value from the first message to the second.
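A minimal C sketch of this DR encoding step is given below (illustrative only; the buffer layout and the fall-back return value are assumptions based on the description above). It builds the Data Compression Code byte, where a 1 marks an unchanged byte, and appends only the changed bytes:

```c
#include <stdint.h>

/* DR-style reduction of one 8-byte CAN data field (illustrative sketch).
 * prev: previously transmitted data bytes, cur: current data bytes.
 * out:  encoded data field (DCC byte followed by the changed bytes).
 * Returns the encoded length in bytes, or 0 when reduction does not pay
 * off; the DCB in the control field (not modeled here) would be set to 1
 * only when the reduced form is sent. */
static uint8_t dr_encode(const uint8_t prev[8], const uint8_t cur[8], uint8_t out[9])
{
    uint8_t dcc = 0;
    uint8_t len = 1;                      /* first byte is the DCC */

    for (int i = 0; i < 8; i++)
        if (cur[i] == prev[i])
            dcc |= (uint8_t)(1u << i);    /* bit = 1: byte unchanged since last frame */

    out[0] = dcc;
    for (int i = 0; i < 8; i++)
        if (!(dcc & (1u << i)))
            out[len++] = cur[i];          /* append only the changed bytes */

    /* If fewer than two bytes are unchanged, the DCC byte itself cancels the
     * savings, so the original frame is sent instead. */
    return (len < 8) ? len : 0;           /* 0 means "send uncompressed, DCB = 0" */
}
```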

Fig. 1: The First Message and the Consequent Reduced Message of the DR

Algorithm.

Fig. 2: The First Message and the Consequent Reduced Message of the ADR

Algorithm [18].

B. Adaptive Data Reduction (ADR)

Instead of concentrating on byte value preservation as

Misbahuddin did, Ramteke looked at the value of signals in the

message to come up with the Adaptive Data Reduction (ADR)

algorithm (Fig. 2) [18]. In the ADR algorithm, if the entire

message remains unchanged from one intended transmission to

the next, then no message is transmitted. In this case, there is a

provision to still send such a message if it is deemed critical to

do so for synchronization, and if it has been suppressed longer

than a predetermined amount of time. Further, if some of the

signals in the message do not change in value from the

previous to the current transmission, those signal values are

not sent. If any signals in the message change beyond the

scope of its assigned difference field (delta), the next message

is sent in entirety. Otherwise, if signals change such that they

can be represented with their deltas, then only the differences

are sent over the multiplexing bus. A Data Compression Code

(DCC) byte is used to designate the compression type (delta

representation or no change) for each delta signal within a

message. Instead of using a reserved bit to differentiate

between reduced and unchanged messages, ADR uses two

different message IDs for the uncompressed and compressed

messages, where the ID of the compressed message is created

by subtracting one from the original message ID.

Fig. 2 shows an example of the ADR algorithm. In the reduced message of this example, signals 0, 2 and 4 are represented in delta form. Thus, the corresponding DCC bits of these signals are

1. Signals 1 and 3 did not change. As a result, the DCC bits of

signals 1 and 3 are 0.
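The ADR identification convention can be summarized with the short sketch below (illustrative; the ID value is hypothetical): the compressed variant of a message is transmitted with the original identifier minus one, so a receiver classifies an incoming frame purely by its ID.

```c
#include <stdint.h>

/* ADR ID convention (illustrative): the compressed form of a message is
 * sent with (original ID - 1), so the receiver accepts both IDs. */
#define SPEED_MSG_ID            0x120u               /* hypothetical original ID */
#define SPEED_MSG_ID_REDUCED    (SPEED_MSG_ID - 1u)  /* reduced-form ID per ADR  */

typedef enum { FRAME_ORIGINAL, FRAME_REDUCED, FRAME_UNKNOWN } frame_kind_t;

static frame_kind_t adr_classify(uint32_t rx_id)
{
    if (rx_id == SPEED_MSG_ID)         return FRAME_ORIGINAL;
    if (rx_id == SPEED_MSG_ID_REDUCED) return FRAME_REDUCED;
    return FRAME_UNKNOWN;
}
```

As discussed later in Section III, reserving a second ID per message in this way consumes half of the available identifier space.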

Fig. 3: The First Message and the Consequent Reduced Message of the IADR

Algorithm [20].

C. Improved Adaptive Data Reduction (IADR)

Miucic et al. [20] addressed shortcomings of, and proposed improvements over, the Adaptive Data Reduction (ADR) algorithm suggested by Ramteke [18]. The Improved Adaptive Data Reduction

algorithm, IADR, uses different management of the reduced

and original message frames by using a convention as depicted

in Fig. 3. In this convention, the first bit of the data field serves

as the Data Compression Bit (DCB) and all signals in the

unchanged message are shifted to the right by one bit. In the

ADR algorithm [18], if the value of any delta of the signals in

the consequent message exceeds the length of the assigned

delta field, the absolute values of all the signals (the original

CAN message) are transmitted rather than the delta-

compressed version of the message. In the IADR algorithm,

the consequent message allows mixtures of different signal

representations. For example, in a message any signal may be


represented as a no-change (signal did not change from the

previous to the current transmission), a delta-change (the

signal difference from the previous to current transmission

which does not exceed the length of the assigned delta field),

or a signal-in-entirety (the entire signal, because the difference

from the previous to the current transmission exceeds the

length of the assigned delta field). This is accomplished using

three levels of codes as shown in Fig. 3. The first level is the

aforementioned DCB, the first bit of the data field, indicating

with 1 that there is at least one compressed signal in the

message. The second level is the Data Compression Code

(DCC), which immediately follows the DCB and consists of

one bit for each signal, where 1 indicates that some form of

compression occurs for the corresponding signal, and 0

indicates that the original signal is included instead. The third

level is the bit preceding each compressed signal. It describes

how the signal is compressed: 0 represents the delta

representation (only the difference is included) while 1

represents no change (no signal information included).

In addition, to preemptively prevent potential issues of not

having synchronized signals in transmitting and receiving

nodes, IADR uses “Cyclic Refresh”. In this scheme, the first

message carries all signals in their entirety. The second

message forcibly sends the first signal in entirety regardless of

the amount of change. The third message forcibly sends the

second signal in entirety, and so on. The signal that is sent in

entirety rotates.
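The Cyclic Refresh rotation amounts to a single counter per message; a minimal sketch (ours, with assumed field names) is:

```c
#include <stdint.h>

/* Cyclic Refresh bookkeeping for one message (illustrative sketch). */
typedef struct {
    uint8_t num_signals;    /* number of signals carried by the message          */
    uint8_t refresh_index;  /* signal forced to be sent in entirety this cycle   */
} cyclic_refresh_t;

/* Called once per transmission of the message: returns the index of the
 * signal that must be sent uncompressed this time, then rotates on. */
static uint8_t cyclic_refresh_next(cyclic_refresh_t *cr)
{
    uint8_t forced = cr->refresh_index;
    cr->refresh_index = (uint8_t)((cr->refresh_index + 1u) % cr->num_signals);
    return forced;
}
```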

Fig. 3 shows an example of the IADR algorithm. In this

example, the signals 1 and 3 were not compressed in the

second message. Thus, the DCC bits of signals 1 and 3 are 0,

and the signals are sent in entirety. The DCC bits of signals 0,

2 and 4 are 1. This means that some kind of compression, whether delta or full, was applied to signals 0, 2 and 4. Signals

0 and 4 are delta compressed. As a result, the bit preceding

each one of these delta values is a 0. Signal 2 is fully

compressed. Thus, only a high bit (1) is sent instead of the

signal. This high bit indicates that the signal is fully

compressed.

III. THE PROPOSED ENHANCED DATA REDUCTION ALGORITHM

(EDR)

Unlike other existing data reduction algorithms, this

algorithm does not use a Data Compression Bit (DCB) to

indicate whether or not a message has been compressed. Since

all the receivers of a particular message know the length of the

uncompressed data field of the message, the receiver, by

looking at the length of the data field, will know whether or

not the message has been compressed. This eliminates the difficulties associated with earlier solutions for identifying compressed messages, such as the use of the reserved bit [14], the use of dedicated message IDs [18], or the additional bit in the data field [20] that makes decoding more involved.

Since every data reduction algorithm has some overhead

such as using a Data Compression Bit (DCB) and/or using a

Data Compression Code (DCC), the signal values could be

such that after applying the data reduction algorithm the

resulting message may not be shorter than the original

uncompressed message. In the Enhanced Data Reduction

(EDR) algorithm, a message will be compressed provided the

length of the data field of the compressed message is less than

that of the original uncompressed message. This means that,

after applying the EDR algorithm on a particular message if it

is seen that the length of the data field of the resulting message

is less than that of the original uncompressed message, then a

compressed message will be sent; otherwise, the original

uncompressed message will be sent. Therefore, the EDR

method either reduces the amount of data sent, or in the worst

case, neither results in any savings nor produces any overhead.
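This decision can be expressed compactly, as in the following sketch (illustrative; edr_encode_bits and can_transmit are hypothetical placeholders for the encoder of Section III-C and the node's CAN driver):

```c
#include <stdint.h>

/* Hypothetical placeholders (not defined here): the EDR encoder of
 * Section III-C and the low-level CAN transmit routine of the node. */
uint16_t edr_encode_bits(const void *msg_state, uint8_t out[8]);
void can_transmit(const uint8_t *data, uint8_t len_bytes);

/* Transmit the reduced frame only when it is strictly shorter than the
 * original data field; otherwise send the original frame unchanged. */
static void edr_send(const void *msg_state, const uint8_t *orig, uint8_t orig_len_bytes)
{
    uint8_t reduced[8];
    uint16_t bits = edr_encode_bits(msg_state, reduced);
    uint8_t reduced_len = (uint8_t)((bits + 7u) / 8u);  /* round up to whole bytes */

    if (reduced_len < orig_len_bytes)
        can_transmit(reduced, reduced_len);  /* shorter DLC signals a reduced frame */
    else
        can_transmit(orig, orig_len_bytes);  /* no gain: neither savings nor overhead */
}
```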

A. Signal Types and Signal Reduction Mechanisms

In the EDR algorithm, the coding of compressed signals is

done in a similar way as it was done for the IADR algorithm.

However, we define stricter rules and some modifications.

Signals can be of any bit length (BL). Typical signals

represent either continuous functions or discrete states. A

signal is continuous if its value does not change significantly

from one message transmission to the next. Examples of some

continuous signals are vehicle speed, engine temperature,

engine RPM, etc. Examples of some discrete signals are

engine state, gear state, headlight state, wiper state, etc. In this

paper, the discrete signals are also called state signals. In order

to reduce the compression overhead and make the EDR

algorithm an effective algorithm, we propose to combine

several state signals into one signal called the group signal.

Since state signals do not change very frequently, the group

signal that is made using a number of state signals will not

change very frequently. As a result, significant reduction can

be made in a message.

In the EDR algorithm, the encoded message can have two

types of signals. The first type of signal is SDN (Signal, Delta,

No-change). SDN allows a signal to be represented in entirety,

as a delta-change, or as no-change. The second type of signal

is SN (Signal, No-change). SN allows a signal to be

represented in entirety or as no-change. Continuous signals

such as vehicle speed, engine temperature, engine RPM, etc.

can be encoded as SDN signals. Since there is some overhead

involved in encoding an uncompressed message into a

compressed message, a continuous signal must be greater than

or equal to a minimum length for it to be encoded into a

compressed signal. Otherwise, the data reduction algorithm

will not provide any benefits. For each signal to be

compressed, there is a coding overhead of two bits: one bit of

the Data Compression Code (DCC) which indicates whether

the signal has been compressed or not, and another bit,

reduction type (RT) bit, that precedes the compressed signal

which indicates whether the signal has been fully compressed

or delta compressed. Since the delta value of a signal needs at

least two bits, one bit for the sign and at least another bit for

the value, the total cost of delta compression is 4 bits including

the two bits required as coding overhead. Thus, a continuous


signal has to be at least 5-bit long in order to get some benefits

out of the compression technique. State signals with data

length from 1 to 4 bits can be grouped together and be

represented as an SN signal. Fig. 4 shows an example of the

vehicle signal and its corresponding delta representation.

Using a delta size that is half the size of the signal will be

more effective on the smaller size signals than on the larger

size signals. For example, let us examine 8 and 16-bit signals.

Let us say that the engine ECU keeps track of the vehicle

speed. The ECU publishes the vehicle speed information in

one of its periodic messages with the period T=10 ms. The

vehicle speed signal has the range of -100 to 400 km/h, and in

the first case it is represented with a 16-bit variable. Each count of the 16-bit variable has a weight of (500 km/h)/(65536 counts) = 0.00763 km/h per count. The delta size is 8 bits and it spans ±0.96895 km/h = ±0.26915 m/s (due to the sign bit plus the 7-bit magnitude). If the vehicle speed is sent once every 10 ms, then in order to be able to represent the signal in its delta form, the acceleration of the vehicle should not exceed ±26.9152 m/s². In the real world, the forward acceleration of a very fast vehicle may go up to 6.4 m/s² [20].

Of course, here we have assumed that the signal is stable and

noise free.

Fig. 4: Two examples of Vehicle Speed Signal and Its Delta Representation.

TABLE I: DELTA SPAN VERSUS SIGNAL SIZE

Signal Size (bits) | Delta Size (bits) | Signal Range (abs. value) | Delta Range (abs. value) | Delta Span (%)
5  | 2 | 0-31    | 0-1  | 6.25
6  | 3 | 0-63    | 0-3  | 6.25
7  | 3 | 0-127   | 0-3  | 3.125
8  | 4 | 0-255   | 0-7  | 3.125
9  | 4 | 0-511   | 0-7  | 1.5625
10 | 5 | 0-1023  | 0-15 | 1.5625
11 | 5 | 0-2047  | 0-15 | 0.78125
12 | 6 | 0-4095  | 0-31 | 0.78125
13 | 6 | 0-8191  | 0-31 | 0.390625
14 | 7 | 0-16383 | 0-63 | 0.390625
15 | 7 | 0-32767 | 0-63 | 0.195313

In the second case, the vehicle speed signal is represented with an 8-bit variable, where each count is (500 km/h)/(255 counts) = 1.96078 km/h per count. Here, the delta size is 4 bits and it spans ±13.72549 km/h = ±3.81264 m/s (sign plus the 3-bit magnitude). In order to represent the signal with the delta, the acceleration of the vehicle now should not exceed ±381.2636 m/s². In this second case, all realistic consequent vehicle speed signals can be represented with delta.

Following the proposed logic in choosing the delta size,

Table I presents the delta span for various signals. For

example, the size of the delta for a 5-bit signal is 2. The 2-bit

delta spans 6.25 % of the signal range. However, the 8-bit delta for a 16-bit signal spans only 0.1953 % of the signal range. If all signals have the same probability of changing, it is obvious from Table I that larger signals have less chance of being represented in delta form.
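Table I can be reproduced directly from the delta-size rule (the delta field is half the signal length, with its first bit used as the sign); the short C program below is our own check, not part of the protocol, and prints the same ranges and spans:

```c
#include <stdio.h>

/* Reproduces Table I: for a BL-bit signal the delta field is Integer(BL/2)
 * bits wide, one of which is the sign bit, so the delta magnitude ranges
 * from 0 to 2^(dl-1) - 1. The span is the fraction of the signal's code
 * space reachable by one delta step, in percent. */
int main(void)
{
    for (int bl = 5; bl <= 15; bl++) {
        int dl = bl / 2;                                   /* delta size in bits     */
        unsigned long sig_max   = (1ul << bl) - 1ul;       /* signal range 0..sig_max */
        unsigned long delta_max = (1ul << (dl - 1)) - 1ul; /* magnitude 0..delta_max  */
        double span = 100.0 * (double)(delta_max + 1ul) / (double)(sig_max + 1ul);
        printf("%2d-bit signal: %d-bit delta, range 0-%lu, delta 0-%lu, span %.6g %%\n",
               bl, dl, sig_max, delta_max, span);
    }
    return 0;
}
```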

Fig. 5: An example of the original message and its mask

with SDN and SN type groupings.

Fig. 6: An example of the original message and its mask with only SN type

groupings.

Fig. 5 shows an example of an original message and its

message mask. All signals of size greater than or equal to 5 bits

(vehicle speed, coolant temp, and engine RPM) are

represented with SDN type masks (SDN_0, SDN_1, and

SDN_2). On the other hand, signals smaller than 5 bits

(fan RPM, oil sensor, engine state, gear state, and park status)

are grouped into SN type masks (SN_0 and SN_1). Similarly,

Fig. 6 shows a message with all signals being of size smaller

than 5 bits and its message mask representation.

Our proposed rules for creating a reduced message in the

EDR algorithm are as follows:

1. any signal with bit length BL ≥ 5 is masked with the SDN type;
2. only signals with BL ≥ 5 are represented with delta (∆);
3. size of ∆ = Integer(BL/2);
4. the first bit of ∆ is the sign bit, and the rest of the bits indicate the magnitude;
5. signals with BL < 5 are bundled into SN groups;
6. the maximum length of an SN group is 8 bits.

B. Summary of Signal Representations in the EDR Algorithm

In the EDR algorithm, the data field of a compressed

message starts with a Data Compression Code (DCC). The

DCC is followed by signal representation fields, where the

values of different signals are expressed in compressed or

uncompressed forms depending upon their current values

compared to the values during the previous transmission of the

message. Like the IADR algorithm, the DCC carries

information on whether the signals are encoded in reduced

form or in entirety. The DCC contains compression code bits

for both SDN and SN type signals. A DCC bit of a signal

indicates if the signal is presented in reduced form (bit =1) or

in its original form (bit =0).

If an SDN type signal is not reduced, then its signal

representation field contains its value in entirety. If an SDN

type signal is reduced, then the first bit of the corresponding

signal representation field is the reduction type bit (RT bit).

The RT bit indicates the type of compression: delta

compressed (RT=0) or fully compressed (RT=1). If the change

in a signal value can be expressed in delta (∆) form, then the

RT bit of the signal is 0, i.e. for this particular signal, the

signal representation field contains a 0 followed by the value

of ∆. If the current value of a signal is the same as the previous

value of this signal, then the signal is fully compressed

(RT=1), i.e. no signal value is sent. For this signal, the signal

representation field contains only the RT bit which is a 1.

If an SN type signal is compressed, the compression type is

always fully compressed. That is, no signal value is sent. Thus,

no RT bit is necessary in the signal representation field of an

SN signal.

C. EDR Message Encoding and Decoding Algorithms

Now we are going to explain the encoding technique of a

message and show how a message is converted into a reduced

encoded message using the EDR algorithm. Assume that our

message format is the one shown in Figure 5. Note that this

message has three SDN type signals and two SN type signals.

Assume that from the previous transmission to the current

transmission of the message, its signals changed as follows:

1. Vehicle Speed changed significantly. Thus, it can’t be

represented in delta (∆) form. So no data reduction is

possible for Vehicle Speed. As a result, the DCC bit for

Vehicle Speed is 0.

2. Coolant Temperature changed slightly, and the change can

be represented in delta form. Thus, the signal is delta

compressed. As a result, the DCC bit for Coolant

Temperature is 1. Since the signal is delta compressed, the

RT (Reduction Type) bit which precedes the actual delta

signal is 0 for Coolant Temperature.

3. Engine RPM did not change. Thus, no data needs to be

sent for Engine RPM which means the signal is fully

compressed. Hence, the DCC bit for this signal is 1. Since

the signal is fully compressed, the RT bit is 1 and it is not

followed by any data for Engine RPM.

4. Fan RPM and/or Oil Sensor State changed. Thus SN_0

signal which is a combination of Fan RPM and Oil Sensor

State can’t be compressed. As a result, the DCC bit for

this signal is 0.

5. Engine State, Gear State and Park Status did not change.

Thus SN_1 signal is fully compressed and no data needs

to be sent for this signal. Hence, the DCC bit for this

signal is 1. Since SN type signals do not have any RT bits,

this signal does not have any RT bit.

Figure 7 shows the reduced encoded message for the above

example. Figure 8 shows the flowchart of the EDR message

encoding algorithm. In this figure, the notations Si and Si-1 are

used to indicate the current and previous values of a signal,

respectively. The notation ∆max indicates the maximum change

that a signal can have from its previous value in order for the

signal to be represented in delta form. For each signal of a

message, the transmitting node will execute the algorithm

shown in Figure 8 in order to create the final message that will

be sent through the bus. The receiving node will have to

execute a decoding algorithm in order to extract signals from a

message. Figure 9 shows the EDR message decoding

algorithm in detail. For every signal to be extracted from a

message, the receiving node has to execute the decoding

algorithm shown in Figure 9.
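For concreteness, a possible C rendering of the per-signal encoding path is sketched below (illustrative only; the bit writer, types and field names are our assumptions, and the DCC bits produced here are assumed to be assembled into the leading DCC field by the caller):

```c
#include <stdint.h>
#include <stdlib.h>

/* --- minimal MSB-first bit writer (illustrative) ------------------------- */
typedef struct { uint8_t *buf; uint16_t bitpos; } bitwriter_t;

static void put_bits(bitwriter_t *bw, uint32_t value, uint8_t nbits)
{
    for (int i = nbits - 1; i >= 0; i--) {
        uint16_t byte = bw->bitpos >> 3, off = 7u - (bw->bitpos & 7u);
        if (value & (1u << i)) bw->buf[byte] |= (uint8_t)(1u << off);
        else                   bw->buf[byte] &= (uint8_t)~(1u << off);
        bw->bitpos++;
    }
}

/* --- EDR per-signal encoding (sketch; names are assumptions) -------------- */
typedef enum { SIG_SDN, SIG_SN } sig_type_t;

typedef struct {
    sig_type_t type;
    uint8_t    bit_len;     /* BL: full signal length in bits     */
    uint32_t   cur, prev;   /* current and previously sent values */
} signal_t;

/* dcc: the DCC bit chosen for this signal (collected by the caller into the
 * DCC field at the start of the data field). Payload bits go through bw. */
static void edr_encode_signal(const signal_t *s, bitwriter_t *bw, uint8_t *dcc)
{
    if (s->type == SIG_SN) {
        if (s->cur == s->prev) { *dcc = 1; return; }        /* fully compressed */
        *dcc = 0; put_bits(bw, s->cur, s->bit_len); return; /* sent in entirety */
    }
    /* SDN signal */
    int32_t delta = (int32_t)s->cur - (int32_t)s->prev;
    uint8_t dl = s->bit_len / 2;                    /* delta field size, incl. sign */
    int32_t delta_max = (1 << (dl - 1)) - 1;
    if (delta == 0) {                               /* no change: RT = 1 only       */
        *dcc = 1; put_bits(bw, 1u, 1);
    } else if (delta >= -delta_max && delta <= delta_max) {
        *dcc = 1; put_bits(bw, 0u, 1);              /* RT = 0: delta follows        */
        put_bits(bw, (delta < 0) ? 1u : 0u, 1);     /* sign bit                     */
        put_bits(bw, (uint32_t)abs(delta), (uint8_t)(dl - 1)); /* magnitude         */
    } else {
        *dcc = 0; put_bits(bw, s->cur, s->bit_len); /* signal in entirety           */
    }
}
```

Decoding mirrors this path: the receiver reads the DCC bit, then (for SDN signals) the RT bit, and either copies the previous value, adds the delta, or accepts the full value.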

Fig. 7: An example of the Data Field of a Reduced Message.

Fig. 8: Flowchart of EDR message encoding algorithm.

D. Cost of the EDR Algorithm

Here we determine the cost of the EDR algorithm in terms


of memory required to store the code and data. The encoding

and decoding algorithms can be converted into two

subroutines. A node will repeatedly call these routines every

time it wants to encode and decode signals of a message.

There are 12 decision and assignment blocks (high-level

operations) in the encoding algorithm as shown in Figure 8.

Similarly, Figure 9 shows that there are 9 decision and

assignment blocks in the decoding algorithm. Three

assembly/machine level instructions, LOAD, COMPARE and

BRANCH, are necessary to implement the operation of a

decision block. In order to implement the operation of an

assignment block, we need one to three machine level

instructions, such as LOAD-ADD-STORE, BIT-TEST-BRANCH, or STORE, and so on. Thus, the maximum number

of machine level instructions required to implement a decision

or assignment block is three. Since altogether there are 21

decision and assignment blocks in the encoding and decoding

algorithms, at most 63 machine level instructions are necessary

to implement the EDR algorithm. For a typical microcontroller

such as a PIC18 family microcontroller, the average number of

bytes required to implement a machine level instruction is two.

Hence, 126 bytes of ROM will be necessary to keep the code

of the EDR algorithm. Each node will also need some memory

for receive and transmit buffers. A typical CAN node for

vehicular applications does not transmit too many different

types of messages and also does not take actions based upon

too many different messages from remote nodes. If we assume

that a node transmits 10 different messages and receives

another 10 different messages from remote nodes, then

altogether the node needs 20 buffers. Since the maximum

length of the data field of a CAN message is 8 bytes, a node

needs 160 bytes of RAM for transmit and receive buffers.

Hence, the total cost of the algorithm in terms of required

memory bytes for code and data is 126 + 160 = 286 bytes.

This cost is insignificant compared to the amount of ROM and

RAM available in today’s microcontrollers.

Fig. 9: Flowchart of EDR message decoding algorithm.

E. Impact of the EDR Algorithm on Message Latency

In this subsection we investigate the impact of the EDR

algorithm on end-to-end message latency. Both the encoding

and decoding algorithms will incur additional message latency.

The total impact on the message latency will be equal to the

summation of the impacts due to encoding and decoding

algorithms. The impact of encoding algorithm on the latency

of a particular message depends on the CPU time necessary to

execute the encoding algorithm for all the signals of that

particular message. In order to determine the worst case

impact of the encoding algorithm on the latency, we determine

the computation time necessary for the longest path of the

flowchart shown in Figure 8. The first column of Table II

shows the high-level statements present in the longest path of

the encoding algorithm. The second column shows the

assembly/machine-level instructions necessary to implement

each high-level statement. The third column shows the total

number of processor clock cycles necessary to execute each

high-level statement assuming that it takes four processor

clock cycles to execute each machine-level instruction. Table

II shows that in the worst-case 48 clock cycles are necessary to

encode a signal using the EDR encoding algorithm.

TABLE II: WORST-CASE COMPUTATION TIME TO ENCODE A SIGNAL USING THE EDR ALGORITHM

High-Level Statement | Assembly/Machine-Level Statements | Clock Cycles
1. Signal Type? (SDN or SN) | LOAD, COMPARE and BRANCH | 12
2. ∆ = Si − Si-1 = 0? | LOAD, COMPARE and BRANCH | 12
3. |∆| ≤ ∆max? | LOAD, COMPARE and BRANCH | 12
4. Set DCC bit and clear RT bit of Si | BIT-SET and BIT-CLEAR | 8
5. Send ∆ | STORE | 4
Total clock cycles | | 48

Similarly, we can determine the impact of the decoding

algorithm on the message latency. Table III shows the high-

level statements present in the longest path of the decoding

algorithm, the corresponding assembly/machine-level

instructions and clock cycles necessary to execute each high-

level statement. Table III shows that for the worst-case, the

total number of clock cycles necessary to execute the EDR

decoding algorithm for a signal is 44.

TABLE III: WORST-CASE COMPUTATION TIME TO DECODE A SIGNAL USING THE EDR ALGORITHM

High-Level Statement | Assembly/Machine-Level Statements | Clock Cycles
1. Signal Type? (SDN or SN) | LOAD, COMPARE and BRANCH | 12
2. DCC bit of Si = 1? | BIT-TEST and BRANCH | 8
3. RT bit of Si = 1? | BIT-TEST and BRANCH | 8
4. Accept ∆ from the receiving buffer | LOAD | 4
5. Si = Si-1 + ∆ | LOAD, ADD and STORE | 12
Total clock cycles | | 44

From Tables II and III we see that the worst-case

computation time for encoding and decoding a signal using the

EDR algorithm is 48+44=92 clock cycles. It is mentioned

earlier that SDN type signals are at least 5-bit long, and SN


type signals are generated by combining a number of signals

whose lengths are less than 5 bits. Since the maximum number

of data bits in a CAN message is 64 and we need DCC bits for

encoding and decoding signals, the total number of SDN and

SN type signals in a CAN message can’t be more than 11. If

we assume that in the worst-case there are 11 signals in a CAN

message, then the total impact of the EDR algorithm on the

message latency is 92*11 = 1012 clock cycles. Note that a

receiving node checks the data length code of a message to

determine whether or not the message has been reduced.

Checking the data length code is necessary only once for each

message, and the implementation of this operation needs three

assembly/machine level instructions: LOAD, COMPARE and

BRANCH. Thus, another 12 clock cycles are necessary for this check. Hence, the overall impact of

the EDR algorithm on the message latency is 1012+12 = 1024

clock cycles. Various PIC18 family microcontrollers have

built-in CAN modules, and these microcontrollers can use

clocks up to 40MHz frequency. So if a 40MHz oscillator is

used with a PIC18 family microcontroller, then the worst-case

impact on the end-to-end message latency is 1024 cycles × 0.025 microseconds/cycle = 25.6 microseconds. Even if a 10 MHz oscillator is used, the worst-case impact on the latency is 102.4 microseconds, which is acceptable for safety applications.

Note that if a vehicle moves at 100 miles/hour, it can move

only 0.18 inch in 102.4 microseconds. Thus, if the EDR

algorithm is used with a microcontroller with a 10MHz

oscillator, the vehicle will move an additional 0.18 inch before

the receiving node can take actions based on the message

contents. As a result, we believe that vehicle safety will not be

compromised due to the use of our EDR algorithm. Moreover,

a higher frequency oscillator can be used with the

microcontroller, and for most cases the vehicle speed will be

less than 100 miles/hour which will even lessen the impact of

the EDR algorithm on message latency and consequently on

the safety of the vehicle.

F. Synchronization

The presented data reduction methodology relies heavily on

uninterrupted synchronous communication. A synchronization

problem arises when the receiving node goes through a reset

and loses all information on the previous signal values. All

signals in the receiving buffers of the resetting node become

invalid. Another loss of synchronization occurs when one or

more messages are incorrectly received; the receiver discards their content and loses the ability to reproduce the signals associated with those incorrectly received

messages. Encoded messages that the node receives afterward

are not sufficient to reconstruct the signals any more.

It is mentioned earlier that a Cyclic Refresh technique could

be used to keep all receiving nodes synchronized with the

transmitting nodes. In this technique, each signal is sent in

entirety in a round-robin fashion. Thus, if a message has S

signals, then the receiving node which went through a reset has

to wait for S transmissions of the message to get the values of

all signals of the message. As a result, the receiving node, after going through a reset, will have to wait for a long time before it

can use all the signals that came from remote nodes. If this

long delay for a particular receiving node is not going to cause

any safety problems for the vehicle or any major functionality

problems for the receiving node, then the receiving node can

take this reactive approach to update its messages.

The resetting node can also take a proactive approach to

update its messages. One proactive approach could be

considered as a Demand Refresh, where the resetting node

could request all transmitting nodes to send their messages

with signals in entirety. The Demand Refresh could

significantly increase the bus utilization for a short period of

time depending upon how many resetting nodes are requesting

how many messages at a time. In order to update all messages

at the resetting node, the Demand Refresh will incur

significantly less waiting time compared to that for Cyclic

Refresh. A second proactive approach could be as follows: if

the functionality of the receiving node requires that

immediately after reset, the node must control some actuators

based on some signal values from a remote node, then the

receiving node must request appropriate remote nodes to send

only those required signals in entirety. The remaining signals

could be updated based on the Cyclic Refresh technique. This

second proactive approach will incur less bus utilization

compared to that for Demand Refresh. If the receiving node is

not required to take any actions immediately after reset, then

another proactive action can be taken as follows: the receiving

node can wait until it gets messages from transmitting nodes

according to the schedule of the transmitting nodes, and if the

signals arrive in entirety, then the receiving node can accept

the signals and take actions. However, if the signals arrive in

reduced form, then at that time the receiving node can demand

the transmitting node to send the signals in entirety.
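A Demand Refresh request could be realized, for example, as a small dedicated message; the sketch below is purely illustrative (the request format and the send_refresh_request primitive are assumptions, not part of the EDR specification):

```c
#include <stdint.h>

/* Hypothetical Demand Refresh request (sketch): after a reset, a node asks
 * the sender of message 'msg_id' to transmit the listed signals in entirety.
 * A mask of all ones corresponds to the full Demand Refresh; a mask with
 * only the immediately needed signals set corresponds to the second,
 * lighter-weight proactive approach described above. */
typedef struct {
    uint32_t msg_id;        /* message whose signals must be refreshed        */
    uint16_t signal_mask;   /* bit j set: send signal j of msg_id in entirety */
} refresh_request_t;

/* Assumed transmit primitive (not part of the EDR specification). */
void send_refresh_request(const refresh_request_t *req);

static void request_after_reset(uint32_t msg_id, uint16_t needed_signals)
{
    refresh_request_t req = { msg_id, needed_signals ? needed_signals : 0xFFFFu };
    send_refresh_request(&req);
}
```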

In summary, we have suggested a number of techniques by which a resetting node can be synchronized with the transmitting nodes. Since a reset event

is not a common phenomenon for a node, and not too many

nodes will go through a reset at the same time, except during

the initial startup of the vehicle, we believe that such an event

will not have any significant impact on the performance of our

data reduction protocol. Since reset events are not periodic, they can be considered rare, occasional glitches in the operation of in-vehicle networks. Moreover, since under

normal operations, the loading of an asynchronous bus like a

CAN bus is kept around 30%, occasional rare glitches could

be tolerated by the bus. Thus, we believe that node resetting

events will not be major issues as far as the performance of the

EDR algorithm is concerned.

G. Handling of Initial Transients

Data reduction techniques promise to reduce the amount of

bus traffic while transmitting the same required information

content. The practical attractiveness of this lies in the

opportunity it provides to add more messages. In other words,

it is the desire to send more messages through the bus, without pushing the bus load beyond the acceptable maximum limit, that drives the need for data reduction. However, while

assessing the benefits of a data reduction technique, with

respect to the room it provides for additional messages, it is important, and sometimes neglected, to consider the initial

conditions.

At vehicle boot-up, the transition from ignition-key off to on, the bus becomes active. Many ECUs start sending

periodic messages. At this initial stage, the receiving (RX) and

transmitting (TX) buffers for the signals are not initialized.

Therefore, all initial messages need to be sent in entirety. This

condition creates peak bus load.

To eliminate this peak bus load we propose that all

messages be prioritized into vehicle functionality-critical and

non-critical groups. For example, the vehicle functionality-critical group would consist of messages carrying information critical to vehicle functionality, such as engine, torque, and brake information. Messages carrying leisure and convenience

information such as AC, radio, and display would comprise the

non-critical group. For some initial period, for example 200

ms, only vehicle functionality-critical messages could appear

on the bus. After the initial period, when the RX and TX

buffers of vehicle functionality-critical messages have been

filled, the rest of the non-critical messages would go on the

bus. At that time, the vehicle functionality-critical messages

are sent in reduced form while the initial messages of the non-

critical group are sent in entirety. This method eliminates the

peak load of having all messages being sent in entirety at the

same time.
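A simple way to realize this prioritization is a startup gate in each transmitting node; the following sketch is illustrative (the 200 ms window is the example value used above, and the criticality flag is assumed to come from the network design):

```c
#include <stdint.h>
#include <stdbool.h>

#define STARTUP_WINDOW_MS 200u   /* example value used in the text */

/* Illustrative startup gate (ours): during the first STARTUP_WINDOW_MS after
 * ignition-on, only messages flagged as functionality-critical are released
 * onto the bus; non-critical messages are held back and then sent in entirety
 * once the critical RX and TX buffers have been filled. */
static bool may_transmit(bool is_critical, uint32_t ms_since_ignition_on)
{
    if (ms_since_ignition_on < STARTUP_WINDOW_MS)
        return is_critical;      /* only the critical group during the window */
    return true;                 /* afterwards, everything may go on the bus  */
}
```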

H. Adaptation of EDR Algorithm to other In-Vehicle Buses

The EDR algorithm can be used for any asynchronous bus

where time slots are not reserved for messages to use the bus.

For a CAN-bus system, the EDR algorithm requires the

receiving node to check the data length code to determine

whether or not the message has been reduced. If the CAN

protocol did not have a data length code in the message, then

the first bit of the data field could have been used as a

compression bit to indicate whether or not the message has

been reduced. Similar coding techniques could be designed for

those in-vehicle networking protocols where a data length code

is not available within the message. Thus, our EDR algorithm

can also be used for LIN protocol. If the EDR algorithm is

going to be used for the LIN protocol, then the first bit of the

response field of a LIN message should be used as a

compression bit. This compression bit will indicate whether or

not the LIN message has been compressed. The compression

bit will be followed by the Data Compression Code (DCC).

The DCC will then be followed by the Signal Representation

Fields as shown in Figure 7.

The Flexray protocol can support a bit rate of up to 10 Mbps, which is high enough for any real-time automotive application. Thus, in the near future, data reduction may not be necessary for a Flexray bus. However, the EDR algorithm can

also be used for the Flexray protocol. A Flexray message

frame has two types of segments: static segment and dynamic

segment. The static segment contains a number of static slots,

and each static slot is reserved for a particular time-triggered

message that needs deterministic latency. Thus, data reduction

techniques should not be applied for the static segment. The

time slots in the dynamic segment are not reserved for any

particular messages. Since a Flexray frame also has a field

called the Payload Length (analogous to the data length code

of CAN), our EDR algorithm can be used in the dynamic

segment of Flexray in a similar way as it can be used for CAN.

The EDR algorithm can be used with other in-vehicle

networking protocols as long as some time slots are available

which are not reserved for any particular messages.

I. Advantages of EDR Algorithm compared to other

existing Data Reduction Algorithms for Vehicular

Applications

In this subsection we compare the EDR algorithm with other

existing data reduction algorithms in terms of bit savings and

complexity of detecting a reduced message.

Fig. 10: Example of a Message with an 8-Bit Signal.

Fig. 11: Example of Signal Behavior and Signal Representation in EDR and

DR.

1. Advantages of EDR compared to DR

Bit Savings Comparison: Let us consider a periodic message

with an 8-bit numerical signal and period T. An example

signal from Fig. 10 occupies the third data byte of the

message. We will examine how the signal is represented in DR


and EDR methodology.

Fig. 11 shows a hypothetical signal behavior and the savings

that can be achieved with EDR and DR methodologies.

According to the rules we presented earlier, the delta span of

the signal is ± 7. The very first message at t = 0 has the signal

represented in entirety. The next message at t = T contains the

delta change, followed by several messages where the signal

does not change. All other messages after t = 5T, besides the

message at t = 10T, can have the signal represented as a delta.

On the other hand, in DR methodology, only signals for

messages at t = 2T, 3T and 4T can be reduced. For this

example, the total number of bit savings using DR and EDR

algorithms are 21 and 36, respectively.

Reduced Message Detection: The major drawback of the DR method is the use of the reserved bit of the CAN message frame as the DCB flag. Existing transceivers and microcontrollers [21], [22] often assume a fixed value of the reserved bit. Hence, for the DR methodology to work, existing hardware would have to change. The EDR improvement is to examine the length of the received message and, based on the length, determine whether the message frame is in original or reduced form. For example,

consider the original message Y being 8 bytes long. If a node

receives message Y with 5 bytes in length, then the received

message is in some reduced form and has to be decoded to

extract the signals.
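On the receiving side this check is a single comparison, as the sketch below illustrates (assuming the receiver stores the uncompressed data-field length of every message it subscribes to):

```c
#include <stdint.h>
#include <stdbool.h>

/* EDR reduced-frame detection (sketch): a shorter received data length code
 * (DLC) than the known uncompressed length means the frame is in some
 * reduced form and must be decoded. */
static bool edr_frame_is_reduced(uint8_t received_dlc, uint8_t uncompressed_dlc)
{
    return received_dlc < uncompressed_dlc;
}

/* Example from the text: message Y is originally 8 bytes long, so receiving
 * it with a DLC of 5 implies a reduced frame. */
```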

2. Advantages of EDR compared to ADR

Bit Savings Comparison: Reduction fields of the ADR are

fixed in length. Let us consider an example of an original

message with four signals. After the original message has been

sent, the data reduction algorithm is in place. A transmitter

prepares the next message. Assume that one signal fails to be

represented with a delta value, and the other three can be

represented with delta values. In ADR, the original message is

transmitted. However, in EDR, if one signal fails to be

represented with a delta value and the other three signals can be

represented with delta values, the reduced message is still

transmitted. In addition, in ADR, no-change is represented

with one bit plus the size of the delta in bits, while in EDR, no-change

is represented with only two bits.

Reduced Message Detection: In ADR, having a subset of message IDs whose sole purpose is to denote reduced messages complicates the network. If all messages are covered by the ADR algorithm, the network designer will have only half of the available IDs to work with. In addition, the network designer can assign only every other message ID, since the consequent message ID will denote a reduced message. In

EDR, the network designer can use all message IDs.

3. Advantages of EDR compared to IADR

Reduced Message Detection: In the IADR algorithm, data

content is shifted to the right by one bit to make room for the

Data Compression Bit (DCB) bit. A reduced message is

detected if the DCB, the first bit of the data field, is set. In

EDR, offsetting is not needed because a reduced message is

detected by the length comparison of the received message to

the uncompressed message.

Though the first bit of the data field can be used as the DCB

flag, it is inconvenient to shorten a message that is already using the entire data field. For example, in the J1939 protocol [23], SAE defines signals that span the 8-byte data field of the CAN frame. In this case, IADR is not applicable, since IADR reserves one bit of the data field for the DCB.

IV. PERFORMANCE ANALYSIS

In this section, we present a theoretical analysis of the EDR

algorithm to show how EDR algorithm performs. We also

present simulation results which were collected by running a

software simulation using both real and synthetic data.

A. CAN Protocol

Fig. 12 shows all fields of a standard CAN message. Fig. 13

shows the fields in detail. The CAN protocol exclusively uses

1/0 (recessive/dominant) edge transitions for synchronization.

To maintain synchronization between all nodes on the bus,

sufficient edge transitions are needed. The stream of all

recessive or all dominant bits is insufficient for

synchronization. Therefore, bit stuffing has been introduced in

the protocol. After every 5 consecutive identical bits, a

complement bit, the stuff bit, is inserted [24]. The stuff bit

gives a forced transition. As a result, the receiving nodes can

synchronize with the transmitting node.
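The stuffing rule can be made concrete with a small helper that counts the stuff bits a given bit sequence would require (an illustrative sketch; in CAN, only the frame portion from the start-of-frame bit through the CRC field is subject to stuffing, as Fig. 13 indicates):

```c
#include <stdint.h>
#include <stddef.h>

/* Counts the stuff bits that the CAN bit-stuffing rule would insert into the
 * given bit sequence (sketch): after five consecutive identical bits a
 * complementary bit is inserted, and that inserted bit itself participates
 * in the following comparisons. */
static unsigned count_stuff_bits(const uint8_t *bits, size_t n)
{
    unsigned stuffed = 0, run = 0;
    int last = -1;                       /* value of the previous (possibly stuffed) bit */

    for (size_t i = 0; i < n; i++) {
        int b = bits[i] ? 1 : 0;
        if (b == last) {
            if (++run == 5) {            /* five identical bits: insert complement */
                stuffed++;
                last = !b;               /* the stuff bit becomes the new previous bit */
                run = 1;
            }
        } else {
            last = b;
            run = 1;
        }
    }
    return stuffed;
}
```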

Fig. 12: Standard CAN Message Format (11 Bit Identifier).

Fig. 13: The Data Frame of a CAN Message.

B. Theoretical Analysis of the EDR Algorithm

All the parameters used in this performance analysis are

shown in Table IV.

The bus utilization, BU, of a CAN bus in general, is the sum

of the lengths, li(t), of all N messages in the network divided

by the period, ∆T, for which the BU is calculated. The sum of

the lengths is expressed in seconds.

BU = \frac{\sum_{i=1}^{N} \sum_{t=0}^{\Delta T} l_i(t)}{\Delta T}    (1)

For the original messages, the length of messages, const_li,

does not change over time (ignoring changes due to bit


stuffing). Thus, the expression for BU becomes

BU = \frac{\sum_{i=1}^{N} n_i \cdot const\_l_i}{\Delta T}    (2)

Fig. 13 shows the portion of the data frame susceptible to bit

stuffing. The overall data frame length, li(t), is the sum of the

unstuffed message length, Li(t), and stuff bits sbi(t), i.e.

l_i(t) = L_i(t) + sb_i(t)    (3)

The maximum number of bits associated with a standard

message including the stuff bits is 130, as shown in Table V.
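Summing the fixed, non-data fields of Table V gives the overhead length used below as OL, and adding the 64 data bits and the worst-case stuff bits gives the 130-bit maximum:

OL = 1 + 11 + 1 + 6 + 15 + 1 + 1 + 1 + 7 + 3 = 47 bits
maximum frame length = OL + 64 data bits + 19 stuff bits = 130 bits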

TABLE IV: LIST OF NOTATIONS USED IN THE ANALYSIS

Notation    Description                                                                         Unit
∆T          period                                                                              [sec]
BU          bus utilization for original message traffic                                        [-]
BU*         bus utilization for EDR message traffic                                             [-]
const_li    constant length of the original message i                                           [bit]
SL(i,j)     length of signal j of message i                                                     [bit]
DL(i,j)     length of delta signal j of message i                                               [bit]
li(t)       length of message i at time t                                                       [bit]
Li(t)       length of unstuffed content of message i at time t                                  [bit]
i           message id                                                                          [-]
j           signal index                                                                        [-]
N           number of messages in the network                                                   [-]
ni          number of occurrences of message i within ∆T                                        [-]
Ni(t)       number of data bytes in message i                                                   [-]
sbi(t)      length due to stuff bits in message i at time t for the original message traffic,
            0 ≤ sbi(t) ≤ 19                                                                     [bit]
sbi*(t)     length due to stuff bits in message i at time t for EDR message traffic,
            0 ≤ sbi*(t) ≤ 19                                                                    [bit]
SL(t,i,j)   length of signal j of message i at time t                                           [bit]
Si          number of signals in message i                                                      [-]
OL          overhead length                                                                     [bit]
PNC(i,j)    probability of the signal being represented with no change during 0 ≤ t ≤ ∆T        [-]
PES(i,j)    probability of the signal being represented in its entirety during 0 ≤ t ≤ ∆T       [-]
PDS(i,j)    probability of the signal being represented with a delta change during 0 ≤ t ≤ ∆T   [-]
PI          performance improvement                                                             [-]
wb{}        whole-byte operator; returns the number of bits of the whole-byte datum:
            if mod(a,8)=0 then wb{a}=a, else wb{a}=a+(8-mod(a,8))                               [bit]

Unstuffed message length, Li(t), is the sum of all signal

lengths, SL(t,i,j), of the data field and overhead bits OL. OL is

the sum of all bits in the message frame except the data field

and stuff bits, and that is 47. The unstuffed message length can

be expressed as

L_i(t) = OL + \sum_{j=1}^{S_i} SL(t,i,j)    (4)

Substituting (3) in (1) we get

BU = \frac{\sum_{i=1}^{N} \sum_{t=0}^{\Delta T} \left[ L_i(t) + sb_i(t) \right]}{\Delta T}    (5)

Further substituting Li(t) from (4) in (5) we get

BU = \frac{\sum_{i=1}^{N} \sum_{t=0}^{\Delta T} \left[ OL + sb_i(t) + \sum_{j=1}^{S_i} SL(t,i,j) \right]}{\Delta T}    (6)

For the original message traffic, the signal length does not

change from one sending to the next. This means that

SL(t,i,j) = SL(i,j)    (7)

Therefore, BU for the original message traffic is

BU = \frac{\sum_{i=1}^{N} n_i \left[ OL + sb_i(t) + \sum_{j=1}^{S_i} SL(i,j) \right]}{\Delta T}    (8)

For reduced messages, the signal length of a particular

message can be of different value from one message to another

message.

TABLE V: MAXIMUM NUMBER OF BITS IN A STANDARD ID (11 BIT) CAN MESSAGE

Number of Bits   Description
1                Start of frame
11               Identifier
1                RTR
6                Control bits
64               Data (8 bytes)
15               CRC
1                CRC delimiter
1                ACK
1                ACK delimiter
7                End of frame
3                Interframe space
19               Stuff bits (maximum)
130              Total (maximum)

TABLE VI: POSSIBLE SL(t,i,j) VALUES

Signal Compression Type            SL Value (bits)
signal presented with no change    2
signal presented in entirety       1 + SL(i,j)
signal presented as delta change   2 + DL(i,j)

Probabilities of different signal behaviors such as no change

(PNC), delta of signal (PDS) and entire signal (PES) of the jth

signal of the ith message sum up to 1. Hence,

P_{NC}(i,j) + P_{DS}(i,j) + P_{ES}(i,j) = 1    (9)

Table VI shows the length (in bits) of signal j of message i

for various conditions of the signals. Using probabilities, the

estimated length of the signal can be represented as


SL(t,i,j) = 2\,P_{NC}(i,j) + \left[ 1 + SL(i,j) \right] P_{ES}(i,j) + \left[ 2 + DL(i,j) \right] P_{DS}(i,j)    (10)

The number of bits in the sum of the signal lengths must be a multiple of 8, i.e., whole bytes. We define the operator wb{}, which rounds a bit count up to the nearest whole byte: if mod(a,8)=0, then wb{a}=a; else wb{a}=a+(8-mod(a,8)). Bus utilization

(BU*) for the EDR message traffic becomes

BU^{*} = \frac{\sum_{i=1}^{N} \sum_{t=0}^{\Delta T} \left[ OL + sb_i^{*}(t) + wb\left\{ \sum_{j=1}^{S_i} SL(t,i,j) \right\} \right]}{\Delta T}    (11)

where sbi*(t) is the length of the stuff bits of the reduced message. Substituting the value of SL(t,i,j) from (10) in (11), we get

BU^{*} = \frac{\sum_{i=1}^{N} \sum_{t=0}^{\Delta T} \left[ OL + sb_i^{*}(t) + wb\left\{ \sum_{j=1}^{S_i} \left( 2\,P_{NC}(i,j) + \left[ 1 + SL(i,j) \right] P_{ES}(i,j) + \left[ 2 + DL(i,j) \right] P_{DS}(i,j) \right) \right\} \right]}{\Delta T}    (12)

Fig. 14: Performance Improvement vs. Signal Type Probability for a Message Containing a 64-Bit Long Signal.


As long as the EDR bus utilization, BU*, is lower than the bus utilization, BU, of the original message traffic, it makes sense to employ the EDR algorithm. In other words, for the EDR algorithm to be useful, BU* < BU. The performance improvement, PI, is the ratio of BU to BU*. Therefore, using (6) and (12) we can write PI as

PI = \frac{\sum_{i=1}^{N} n_i \left[ OL + sb_i(t) + \sum_{j=1}^{S_i} SL(i,j) \right]}{\sum_{i=1}^{N} \sum_{t=0}^{\Delta T} \left[ OL + sb_i^{*}(t) + wb\left\{ \sum_{j=1}^{S_i} \left( 2\,P_{NC}(i,j) + \left[ 1 + SL(i,j) \right] P_{ES}(i,j) + \left[ 2 + DL(i,j) \right] P_{DS}(i,j) \right) \right\} \right]}    (13)

Fig. 15: Performance Improvement vs. Signal Type Probability for a Message Containing Twelve 5-Bit Signals and One 4-Bit Signal.

Let us calculate the performance improvement of EDR vs. uncompressed bus utilization for one message containing a single 64-bit signal with a 32-bit delta field. From (13), the performance improvement is

PI = \frac{47 + sb_i(t) + 64}{47 + sb_i^{*}(t) + wb\left\{ 2\,P_{NC}(i,j) + [1+64]\,P_{ES}(i,j) + [2+32]\,P_{DS}(i,j) \right\}}
   = \frac{111 + sb_i(t)}{47 + sb_i^{*}(t) + wb\left\{ 2\,P_{NC}(i,j) + 65\,P_{ES}(i,j) + 34\,P_{DS}(i,j) \right\}}    (14)

Let us consider three boundary examples.

1. For P_NC(i,j) = 1, P_DS(i,j) = 0 and P_ES(i,j) = 0, the performance improvement is PI = (111 + sb_i(t)) / (55 + sb_i*(t)). Since sb_i(t) has a maximum value of 19 and a minimum value of 0, and sb_i*(t) has a maximum value of 8 and a minimum value of 0, the performance improvement is 1.76 ≤ PI ≤ 2.36. Thus, for this particular case, we can send 1.76 to 2.36 times more traffic if we use EDR.

2. For P_NC(i,j) = 0, P_DS(i,j) = 1 and P_ES(i,j) = 0, the performance improvement is PI = (111 + sb_i(t)) / (87 + sb_i*(t)), where, in this case, sb_i*(t) has a maximum value of 14 and a minimum value of 0. For this case, the performance improvement is 1.1 ≤ PI ≤ 1.49. Hence, for this case, we can send 1.1 to 1.49 times more traffic if we use EDR.

3. For P_NC(i,j) = 0, P_DS(i,j) = 0 and P_ES(i,j) = 1, the EDR algorithm does not send a reduced message that would be equal to or larger than the original. Therefore, PI = 1.

The above examples show that the performance

improvement ranges from 1.00 to 2.36, for this case of a

message consisting of one 64-bit signal, which is an ideal case

for compression performance. Fig. 14 and Fig. 15 give a more

comprehensive look at the range of performance improvement

for this algorithm. Fig. 14 takes the message example from

above and varies the probabilities to show the entire range of

compression performance for this type of message. Fig. 15, on

the other hand, considers a message which has a composition

poorly predisposed for compression. This message consists of

12 signals of length 5 bits and 1 signal of length 4 bits. Fig. 15

also covers the entire domain of probability combinations. The

performance envelope achieved in Fig. 15 is noticeably shifted toward lower compression. These figures illustrate the

strong dependence of the algorithm on the signal dynamics and

message structure, but also demonstrate the large compression

potential for the more prevalent steady-state conditions.
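To make the boundary-case arithmetic above easy to reproduce, the following C sketch evaluates the expected signal length of (10), the wb{} operator, and the single-message expression (14). It is an illustration only; the stuff-bit counts sb and sb_star are passed in as assumptions rather than computed from actual frame contents.

    #include <stdio.h>

    /* wb{}: round a bit count up to a whole number of bytes (Table IV). */
    static unsigned wb(unsigned bits) {
        return (bits % 8u == 0u) ? bits : bits + (8u - bits % 8u);
    }

    /* Expected reduced length of one signal in bits, per (10) and Table VI. */
    static double expected_signal_bits(double p_nc, double p_es, double p_ds,
                                       unsigned sl, unsigned dl) {
        return 2.0 * p_nc + (1.0 + sl) * p_es + (2.0 + dl) * p_ds;
    }

    /* Single-message performance improvement per (14): one 64-bit signal, 32-bit delta field.
     * sb and sb_star are the assumed stuff-bit counts of the original and reduced frames. */
    static double pi_one_signal(double p_nc, double p_es, double p_ds,
                                unsigned sb, unsigned sb_star) {
        const double OL = 47.0;   /* frame overhead bits, excluding data and stuff bits */
        double data_bits = expected_signal_bits(p_nc, p_es, p_ds, 64u, 32u);
        return (OL + sb + 64.0) / (OL + sb_star + wb((unsigned)(data_bits + 0.5)));
    }

    int main(void) {
        /* Boundary case 1 (all no-change): 1.76 at sb=0, sb*=8 up to 2.36 at sb=19, sb*=0. */
        printf("case 1: %.2f to %.2f\n",
               pi_one_signal(1.0, 0.0, 0.0, 0u, 8u), pi_one_signal(1.0, 0.0, 0.0, 19u, 0u));
        /* Boundary case 2 (all delta): about 1.10 at sb=0, sb*=14 up to 1.49 at sb=19, sb*=0. */
        printf("case 2: %.2f to %.2f\n",
               pi_one_signal(0.0, 0.0, 1.0, 0u, 14u), pi_one_signal(0.0, 0.0, 1.0, 19u, 0u));
        /* Boundary case 3 (all entire-signal) is not evaluated: EDR simply sends the
         * original frame in that case, so PI = 1 by definition. */
        return 0;
    }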

C. Simulation, Test Results and Discussion

1. Test Methodology

We used a real-life vehicle CAN message log collected

during vehicle testing, as well as several logs consisting of

synthetic message traffic created by manipulating signal

contents of the real-life message log. The performance of EDR for the tests based on the real-life message log is measured by comparing the bus utilization of the original message log with that of the log produced by applying EDR to the original messages. The results using the real

vehicle data will be discussed first, followed by the results

gathered using synthetic data.
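As a minimal sketch of this comparison (not the authors' simulator), bus utilization over a window can be computed from a frame log as in (1): the on-the-wire time of every logged frame, at the 250 kb/s baud rate used in these tests, divided by the window length. The frame lengths below are hypothetical example values.

    #include <stdio.h>
    #include <stddef.h>

    /* Bus utilization per (1): total frame transmission time divided by the window length. */
    static double bus_utilization(const unsigned *frame_bits, size_t n_frames,
                                  double window_sec, double baud) {
        double busy_sec = 0.0;
        for (size_t i = 0; i < n_frames; i++)
            busy_sec += (double)frame_bits[i] / baud;   /* time each frame occupies the bus */
        return busy_sec / window_sec;
    }

    int main(void) {
        /* Hypothetical one-second excerpt: a few original frames and their EDR counterparts. */
        unsigned original_log[] = {130u, 111u, 130u, 111u};   /* bits per frame, stuffing included */
        unsigned edr_log[]      = {63u,  55u,  87u,  55u};
        double bu_orig = bus_utilization(original_log, 4, 1.0, 250000.0);
        double bu_edr  = bus_utilization(edr_log,      4, 1.0, 250000.0);
        printf("BU original: %.6f, BU with EDR: %.6f\n", bu_orig, bu_edr);
        return 0;
    }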

2. Test of Real-Life Message Logs

Table VII shows the message distribution of a typical

passenger vehicle. The exact details of the messages are not given because of proprietary concerns. The bus baud rate is

250 kb/sec. The messages that have smaller periods and

greater lengths contribute the most to bus utilization. We have

run the simulations considering all messages (Test 1) and

considering only the messages that contribute the most to the

bus utilization (Test 2). As shown in Fig. 16, the bus utilization savings of EDR using all messages do not differ much from those of EDR using only the most frequent

messages. The average difference in bus utilization is 1.113 %.

Thus, targeting only the most frequent messages on the bus network will achieve a good amount of BU savings while reducing complexity and computing overhead in the nodes.

Fig. 16: Performance Comparison of EDR Considering All Messages (Test 1) and the Most Frequent Messages (Test 2).

TABLE VII: CONFIGURATION OF THE MESSAGES USED IN THE ANALYSIS.
(Messages marked with "*" are the biggest contributors to bus utilization.)

Msg.  ID (hex)  ID (STD or EXT)  Data Length (byte)  Period (ms)
1*    128       STD              3                   10
2*    180       STD              4                   10
3*    110       STD              8                   12
4*    120       STD              7                   12
5*    124       STD              5                   12
6*    150       STD              8                   12
7*    151       STD              8                   12
8     140       STD              8                   20
9     144       STD              5                   20
10    380       STD              5                   20
11    388       STD              5                   20
12    280       STD              8                   32
13    300       STD              8                   100
14    308       STD              7                   100
15    320       STD              8                   100
16    330       STD              3                   100
17    410       STD              5                   100
18    520       STD              3                   100
19    420       STD              2                   106
20    2F0       STD              7                   106
21    130       STD              6                   200
22    348       STD              4                   250
23    170       STD              2                   500
24    2D0       STD              1                   500
25    510       STD              8                   1000
26    670       STD              8                   1000
27    674       STD              5                   1000
28    38A       STD              1                   1500

Performance Comparison of Different Data Reduction

Algorithms: Fig. 17 shows performance comparisons, in terms

of bus utilization, of the different reduction algorithms and uncompressed traffic for the real-life test. The EDR bus utilization is the lowest among them. The average bus utilization difference between the EDR algorithm and the uncompressed traffic is 13.44 %. The average performance improvement, based on (13), is 1.462. The EDR algorithm performs the best for the signals of this log. The ADR algorithm performs poorly because of its lack of adaptability. The DR algorithm performs relatively well because of its low overhead and the stable signal behavior in this

example. We must note that this log represents only a snapshot

of possible signal behaviors and message compositions. In the

next section of this paper we examine DR, ADR, IADR and

EDR performance based on synthetic signals.

Fig. 17: Comparison of EDR, ADR, IADR, DR and Uncompressed Bus

Utilization for the Real-Life Message Log.

3. Test of Artificially Created Message Logs

In this subsection we compare the performance of the EDR algorithm for different signal behaviors. We also examine the performance of EDR when the bus approaches saturation as the number of simulated messages is increased.

Diverse Signal Behavior Test: Here we used the message

configuration from Table VIII. The first column of Table VIII

shows the ID of a message, the second column shows the

period of the message in ms, and the third column shows the

lengths of different signals in the message. For example, the

ID of the first message in Table VIII is 1; its period is 10 ms

meaning that the message repeats every 10 ms; and there are

11 signals (8,8,8,8,6,6,6,6,6,1,1) in the message. Out of the 11 signals, four are 8 bits long, five are 6 bits long and two are 1 bit long. The baud rate of the bus is assumed to be 250 kb/sec. We chose a mixture of signal sizes for a realistic network

representation.

Fig. 18: Number of Bytes Saved vs. Signal Type Probability for EDR, DR,

IADR, and ADR Algorithms.

TABLE VIII: MESSAGE CONFIGURATION

ID Period (ms) Message content, signal sizes (bit)

1 10 8,8,8,8,6,6,6,6,6,1,1

2 10 8,8,8,6,6,6,6,6,6,1,1,1,1

3 10 8,8,8,8,8,6,6,6,1,1,1,1,1,1

4 10 16,8,8,6,6,6,6,1,1,1,1,1,1,1,1

5 20 16,8,8,8,8,8,6,1,1

6 20 16,8,8,8,6,6,6,2,1,1,1,1

7 20 16,8,8,8,8,6,6,1,1,1,1

8 20 8,8,8,8,8,8,6,2,1,1,1,1,1,1,1,1

9 20 8,8,8,8,8,8,6,6,1,1,1,1

10 20 16,8,8,6,6,6,6,6,2

11 50 16,8,8,6,6,6,6,6,1,1

12 50 16,8,8,8,6,6,6,1,1,1,1,1,1

13 50 8,8,8,8,8,6,6,6,6

14 50 8,8,8,8,8,6,6,6,6

15 50 8,8,8,8,8,6,6,1,1,1,1,1,1,1,1,1,1,1,1

16 50 16,8,8,8,6,6,6,6

17 100 8,8,8,8,6,6,6,6,2,1,1,1,1,1,1

18 100 16,8,8,8,6,6,6,1,1,1,1,1,1

19 100 16,8,8,8,6,6,6,6

20 100 8,8,8,8,8,6,6,6,2,1,1,1,1

The left five columns of Table IX show different probabilities,

PES, PDS, and PNC, for the SDN signal type, and PES and PNC for

the SN signal type used for the test. The right four columns of

the table show the resulting number of reduced messages per

second for all algorithms. It is desirable to have a greater number of reduced messages and a lower overhead per message. For all combinations of the probabilities except one, EDR produces at least as many reduced messages as the other algorithms. For PES(SDN)=0.8, PDS(SDN)=0, PNC(SDN)=0.2, PES(SN)=0.8 and PNC(SN)=0.2, DR has a higher number of reduced messages than EDR. In this particular case, EDR requires more overhead bits than DR because of its adaptive nature, so DR performs better.

TABLE IX: NUMBER OF REDUCED MESSAGES PER SECOND FOR DIFFERENT SIGNAL TYPE PROBABILITIES.

SDN Type Signal     SN Type Signal    Number of reduced messages
PES  PDS  PNC       PES  PNC          DR   ADR   IADR   EDR

1 0 0 1 0 0 0 0 0

0.8 0.2 0 0.8 0 0 0 3 3

0.6 0.40 0 0.6 0 0 0 37 43

0.4 0.60 0 0.4 0 1 20 199 232

0.2 0.80 0 0.2 0 14 120 520 601

0 1 0 0 0 93 585 809 809

0.8 0 0.2 0.8 0.2 224 0 148 154

0.6 0.2 0.2 0.6 0.2 298 0 321 331

0.4 0.4 0.2 0.4 0.2 343 11 485 503

0.2 0.6 0.2 0.2 0.2 397 122 695 725

0 0.8 0.2 0 0.2 512 581 808 808

0.6 0 0.40 0.6 0.4 564 0 511 524

0.4 0.2 0.4 0.4 0.4 614 19 671 679

0.2 0.4 0.4 0.2 0.4 682 132 761 778

0 0.6 0.4 0 0.4 739 581 806 806

0.4 0 0.6 0.4 0.6 754 13 750 758

0.2 0.2 0.6 0.2 0.6 783 94 793 796

0 0.4 0.6 0 0.6 806 581 806 806

0.2 0 0.8 0.2 0.8 803 102 803 804

0 0.2 0.8 0 0.8 802 577 802 802

0 0 1 0 1 809 583 809 809

TABLE X: CONFIGURATION OF MESSAGES USED IN BUS SATURATION TEST.

Message No. 1 2 3 4 5 6 7 8 9 10 11

No. of Bytes 7 5 5 8 8 8 5 7 8 3 8

Period (ms) 10 10 12 12 12 12 12 20 20 20 20

Message No. 12 13 14 15 16 17 18 19 20 21 22

No. of Bytes 4 5 5 7 8 5 8 3 5 8 4

Period (ms) 32 32 32 32 10 10 10 10 10 10 10

Message No. 23 24 25 26 27 28 29 30 31 32 33

No. of Bytes 5 5 7 8 5 8 3 5 8 8 5

Period (ms) 10 10 10 10 10 10 10 10 10 10 10

Message No. 34 35 36 37 38 39 40 41 42

No. of Bytes 5 5 8 4 5 8 3 7 8

Period (ms) 10 10 10 10 10 10 10 10 10

Fig. 18 shows byte savings in percent vs. different signal type probabilities for the DR, ADR, IADR, and EDR algorithms. For PNC = 1, the highest no-change probability, the DR

algorithm has the lowest overhead because all the reduced

messages are one data byte long. As the probabilities of the

delta change, PDS, and entire-signal, PES, increase, the benefits

of EDR become more evident. Thus, EDR saves more data

bytes in dynamic environments because of its adaptability. In

real life, signal behaviors are more dynamic than static. Hence,

EDR will be the most appropriate algorithm for practical

applications.
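The paper does not specify how the synthetic behaviors were generated; one simple way to drive such a test, sketched here under that assumption, is to draw each signal's behavior at every transmission from the probabilities of Table IX:

    #include <stdlib.h>
    #include <stdio.h>

    typedef enum { SIG_NO_CHANGE, SIG_DELTA, SIG_ENTIRE } sig_behavior_t;

    /* Draw one behavior from (P_ES, P_DS, P_NC), which sum to 1 as in (9). */
    static sig_behavior_t draw_behavior(double p_es, double p_ds) {
        double u = rand() / (double)RAND_MAX;
        if (u < p_es)
            return SIG_ENTIRE;        /* signal must be sent in its entirety */
        if (u < p_es + p_ds)
            return SIG_DELTA;         /* signal can be sent as a delta */
        return SIG_NO_CHANGE;         /* signal did not change */
    }

    int main(void) {
        srand(1u);                    /* fixed seed for a repeatable synthetic log */
        int counts[3] = {0, 0, 0};
        for (int k = 0; k < 1000; k++)
            counts[draw_behavior(0.2, 0.6)]++;   /* e.g. PES=0.2, PDS=0.6, PNC=0.2 */
        printf("no-change %d, delta %d, entire %d\n",
               counts[SIG_NO_CHANGE], counts[SIG_DELTA], counts[SIG_ENTIRE]);
        return 0;
    }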

Saturation Test: Even though no vehicle manufacturer

would ever have bus utilization near saturation, we wanted to

see how EDR behaves when the bus is being loaded and

becoming saturated. We measured average BU starting with 12

messages and gradually incremented the number of messages in the simulation by 5 until a total of 42 messages was reached. Table X shows the configuration of the initial and added simulated messages. Uncompressed message traffic reaches saturation with 32 messages, while the message traffic compressed with the EDR algorithm becomes saturated with 37 messages, as

shown in Fig. 19. In other words, with EDR, 5 more messages

can be added to the bus before the bus becomes saturated

compared to uncompressed traffic.
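A minimal sketch of the saturation check, assuming the data lengths and periods of Table X: the static load follows (2), and the bus is considered saturated once the load reaches 1. Stuff bits are ignored here, so the figures are slightly optimistic.

    #include <stdio.h>
    #include <stddef.h>

    /* Static bus load per (2): each message contributes (frame time / period).
     * Frame length is approximated by the unstuffed length 47 + 8*bytes. */
    static double static_bus_load(const unsigned *bytes, const double *period_ms,
                                  size_t n, double baud) {
        double bu = 0.0;
        for (size_t i = 0; i < n; i++) {
            double bits = 47.0 + 8.0 * bytes[i];
            bu += (bits / baud) / (period_ms[i] / 1000.0);
        }
        return bu;
    }

    int main(void) {
        /* First 12 entries of Table X; extend the arrays to add more messages. */
        unsigned bytes[]     = {7u, 5u, 5u, 8u, 8u, 8u, 5u, 7u, 8u, 3u, 8u, 4u};
        double   period_ms[] = {10, 10, 12, 12, 12, 12, 12, 20, 20, 20, 20, 32};
        double bu = static_bus_load(bytes, period_ms, 12, 250000.0);
        printf("static load with 12 messages: %.3f (saturated when >= 1)\n", bu);
        return 0;
    }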

Fig. 19: BU Comparison Between Uncompressed and EDR Traffic for

Varying Number of Messages.

V. CONCLUSION

This paper presented an Enhanced Data Reduction (EDR)

algorithm for reducing bus utilization in future vehicles. The

proposed scheme does not require any changes to the existing

wired infrastructure. We showed, using simulation results, that the EDR algorithm can be used beneficially. In the simulation, we considered realistic examples of message traffic. We have proposed an improved way to detect data reduction usage by examining the message length. Also, we proposed a method to

manage signals of shorter length (<5 bits) by combining them

into groups that are handled as single signals. A solution is

provided for handling the worst case conditions for

compression, which arise during network initialization. The

cost of the algorithm in terms of memory usage is under 300

bytes, which is insignificant compared to the amount of

ROM and RAM available in today’s microcontrollers. The

impact of the algorithm on the end-to-end message latency is

also insignificant. The algorithm can also be used for

other networking protocols as long as some time slots are

available which are not reserved for any particular messages.

REFERENCES

[1] R. Bell, “Multiplexing-Past, Present and Future,” SAE Paper Number

760178, 1976.


[2] Christopher Albert Lupini, “Advantages of Integrating a Serial Data

Link Controller with CPU [vehicles],” IEEE 42nd Vehicular

Technology Conference, pp. 1051-1055, May 10-13, 1992.

[3] Christopher Albert Lupini, “Vehicle Multiplex Communication: Serial

Data Networking Applied to Vehicular Engineering,” SAE International

2004, Warrendale, PA, USA, ISBN 0-7680-1218-X.

[4] Wolfhard Lawrenz, “Network Development Techniques,” IEE

Colloquium on Vehicle Networks for Multiplexing and Data

Communication, pp. 5/1-5/8, Dec 19, 1988.

[5] Abul Masrur, "Digital Simulation of an Automotive Multiplexing

Wiring System," IEEE Transactions on Vehicular Technology, Vol. 38,

No. 3, pp. 140-147, August 1989.

[6] Abul Masrur, "Studies on Some Alternative Architectures for Fault-

Tolerant Automotive Multiplexing Networking Systems," IEEE

Transactions on Vehicular Technology, Vol. 40, No. 2, pp. 501-510,

May 1991.

[7] Wolfhard Lawrenz, “CAN System Engineering: From Theory to

Practical Applications,” 1997 Springer-Verlag, New York, Inc., USA,

ISBN 0-387-94939-9.

[8] Wolfhard Lawrenz, “Communication Protocol Conformance Testing -

Example LIN -,” IEEE International Conference on Vehicular

Electronics and Safety, pp. 155-162, December 13-15, 2006.

[9] Christopher A. Lupini, "Multiplex Bus Progression 2003," Proc. of the

SAE 2003 World Congress, March 3-6, 2003, Detroit, Michigan, USA,

Paper Number: 2003-01-0111.

[10] Kassakian, J.G. and Perreault, D.J. "The future of electronics in

automobiles", Proceedings of the 13th International Symposium on

Power Semiconductor Devices and ICs, 2001. pp. 15-19, Osaka, Japan

[11] H. Kleinknecht, “CAN Calibration Protocol specification”, Version 2.1,

Germany, 1999.

[12] Vector CANtech URL: http://www.vector-cantech.com/ [15 August 2005]

[13] Accurate Technologies Inc URL:

http://www.accuratetechnologies.com/corp/ [15 August 2005]

[14] Syed Misbahuddin, Syed Masud Mahmud and Nizar Al-Holou,

"Development and Performance Analysis of a Data-Reduction

Algorithm for Automotive Multiplexing," IEEE Transactions on

Vehicular Technology, Vol. 50, No. 1, pp. 162-169, January 2001.

[15] D. Huffman, “A method for the construction of minimum redundancy

codes,” Proc. of IRE, vol. 40, pp. 1098–1101, Sept. 1952.

[16] G. G. Kempf, M. J. Eckrich, and O. J. Rumpf, “Data reduction in

automotive multiplexing systems,” SAE paper 940 135, pp. 45–50.

[17] Storer, J.A. and Szymanski, T.G., “Data compression via textual

substitution”, Journal of the ACM, Vol. 29, No. 4, October 1982, pp.

928-951.

[18] Praveen R. Ramteke and Syed Masud Mahmud “An Adaptive Data-

Reduction Protocol for the Future In-Vehicle Networks,” SAE

Transactions on Passenger Cars: Electrical and Electronic Systems,

pp. 519-530, 2005.

[19] Bosch, “CAN specification Ver 2.0,” Robert Bosch GmbH, Stuttgart,

Germany, 1991.

[20] Radovan Miucic and Syed Masud Mahmud, “An Improved Adaptive

Data Reduction Protocol for In-Vehicle Networks,” SAE Transactions

on Passenger Cars: Electrical and Electronic Systems, pp. 650-658,

2006.

[21] Microchip, “dsPIC30F4011/4012 Data Sheet,” Microchip Technology

Inc., Chandler, Arizona, 2005.

[22] STMicroelectronics, "ST92F124/ST92F150/ST92F250 Data Sheet Ver 1.4," Geneva, Switzerland, 2003.

[23] “J1939” SAE Standard, SAE International.

[24] “CAN Protocol” Vector Training Material, Vector CANtech Inc., 39500

Orchard Hill Place, Novi, MI 48375.

Radovan Miucic received a Bachelor of Science in

electrical engineering, and a Master of Science in

computer engineering both from Wayne State

University, Detroit, USA in 2001 and 2002,

respectively. Currently he is working on his PhD

degree in computer engineering.

From 2001 to 2007, he worked for various automotive companies as an embedded software engineer. In 2007, he joined Honda R&D in Southfield, Michigan,

where he works as a Communication Research Engineer researching wireless

protocols for automotive use. Mr. Miucic is the author of a journal paper and several conference papers on topics related to vehicular networks.

Syed Masud Mahmud (S’82–M’84–SM’07)

received the Ph.D. degree in electrical engineering

from the University of Washington, Seattle, in 1984.

Since 1988, he has been with Wayne State

University, Detroit, MI. Currently, he is an Associate Professor in the Electrical and Computer Engineering Department. During the last 20 years, he has been

working in the areas of hierarchical multiprocessors,

hierarchical networks, performance analysis of

computer systems, digital signal processing, embedded systems, in-vehicle

networking, performance analysis of networking protocols, secure wireless

communications, and privacy protected vehicle-to-vehicle communications

and simulation techniques. He has supervised a number of projects from Ford

Motor Company and other local companies. He has published over 100 peer-

reviewed journal and conference proceeding papers. He has supervised four

Ph.D. dissertations and eight M.S. theses.

Dr. Mahmud received the President’s Teaching Excellence Award of

Wayne State University in 2002. He also received several other teaching

excellence awards within the college of engineering. He has served as a

Technical Reviewer for many conferences, journals, and funding agencies.

Currently, he is the Editor of the SAE Transactions on Passenger Cars:

Electrical and Electronic Systems. Since 2008 he has also been serving as an

ABET program evaluator. He is also a member of SAE, ASEE, Sigma Xi and

Tau Beta Pi. He has been listed in the Who’s Who in Science and

Engineering, Empowering Executives and Professionals, and many others.


Zeljko Popovic received a Bachelor of Applied

Science in mechanical engineering, with a

multidisciplinary specialization in mechatronics,

from the University of Waterloo, Waterloo, Canada

in 2000. In 2008, he completed a Master's degree in computer engineering with a specialization in

intelligent systems at the University of Michigan

Dearborn.

He developed experimental research platforms for evaluating control systems

for hybrid and continuously variable valve timing powertrains at Visteon

from 2000 to 2005. Since 2005, he has improved processes for developing

embedded software for commercial truck applications at Siemens VDO.

Zeljko was jointly granted two patents on variable valve timing systems for

engines and published a paper on variable valve timing control.