


IEEE Intelligent Transportation Systems Magazine • 42 • Spring 2015

1939-1390/15©2015IEEE

Abstract—The BMW Group Research and Technology has been testing automated vehicles on Germany’s highways since Spring 2011. Since then, thousands of kilometers have been driven on the highways around Munich, Germany. Throughout this project, fundamental technologies, such as environment perception, localization, driving strategy and vehicle control, were developed in order to safely operate prototype automated vehicles in real traffic with speeds up to 130 km/h. The goal of this project was to learn what technologies are necessary for automated driving. This paper presents the architecture and algorithms developed during this project, results from real driving scenarios, the lessons learned throughout the project and a quick introduction into the latest developments for improving the system.

I. Introduction

Driver assistance systems, such as active cruise control (ACC) or lane departure warning (LDW), have become a common option in premium vehicles in recent years. With each new model, new driver assistance systems are being introduced. It has been proven that driver assistance systems offer greater safety to the driver and the surrounding traffic [1]. Recently, BMW has introduced the traffic jam assistant in the latest 5 Series and X5 as well as the new BMW i3, which will introduce lateral control of the vehicle under 40 km/h. The pinnacle of driver assistance systems is fully autonomous driving.

In recent years, many projects from automobile manufacturers, automotive suppliers, universities and research institutes across the world have been developing prototype vehicles for automated driving. The DARPA Urban Challenge in 2007, won by the team from Carnegie Mellon University [2], was a turning point in demonstrating the potential of automated vehicles [3]. Universities, research institutes and cooperative projects have since continued researching automated vehicles, such as Stanford University [4], Carnegie Mellon

Date of publication: 20 January 2015

Digital Object Identifier 10.1109/MITS.2014.2360306

Michael Aeberhard, Sebastian Rauch, Mohammad Bahram, Georg Tanzmeister, Julian Thomas, Yves Pilat, Florian Homm, Werner Huber, and Nico Kaempchen

Group ConnectedDrive at BMW Group Research and Technology, Munich, Germany
E-mails: [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], and [email protected]

Experience, Results and Lessons Learned from Automated Driving on Germany’s Highways

Image published by Ingram Publishing.

This work was supported by the BMW Group Research and Technology in Munich, Germany.


FIG 1 Highly automated driving project from the BMW Group Research and Technology.

University [5], the Stadtpilot project at the Technical University Braunschweig [6], [7], Team AnnieWAY of the Karlsruhe Institute of Technology [8], VisLab of the University of Parma [9], the V-Charge project [10] and many others. The automotive industry has also been quite active in recent years with automated driving, such as in the HAVEit [11] and SARTRE [12] European Union projects, but also in independent projects, such as the Bertha Benz drive [13].

For several years, the BMW Group Research and Technology has been testing automated vehicles on the highways around Munich, Germany [14], [15]. The goal of this project was to learn about the technologies and algorithms necessary in developing a highly automated driving (HAD) system, as per the BASt definitions of automation levels [16], or conditional automated driving, as per the NHTSA [17] and SAE [18] definitions. This project is a successor of the BMW Track Trainer [19] and BMW Emergency Stop Assistant [20] projects, whose core technologies were integrated and further developed. Most of the technology required for automated driving, such as the sensors, already exists in today’s premium segment production vehicles. Other technologies still need to be developed and are a current topic of research. The first successful automated trip between Munich and Ingolstadt, without driver intervention, occurred on June 16th, 2011. Since then, thousands of kilometers of automated driving experience on highways have been achieved. In this paper, an overview of the BMW automated driving prototype vehicles is presented along with a discussion of the lessons learned during the development process. Additionally, a short overview of current research is presented, which will improve the reliability and robustness of the system in the future.

II. System Architecture

A flexible, prototype hardware and software architecture was chosen for developing the automated driving system. This allowed for easy development and testing of new components without the need to make any changes to the overall architecture. A simple overview of the system architecture is shown in Fig. 2.

The system runs mainly on two computers: a standard PC and a real-time embedded prototyping computer (RTEPC). The PC serves as a sensor data processing and algorithm platform using a proprietary software framework. Additionally, the database for the high-precision maps is stored on the PC. The sensors and vehicle bus signals are connected to the PC via Ethernet and CAN buses. The RTEPC runs the localization, driving strategy and control algorithms. The PC and the RTEPC communicate over a direct Ethernet connection. The RTEPC is connected to the vehicle’s actuators for steering, braking and throttle control using a CAN bus. A secondary actuator bus controls other elements of the vehicle, such as the blinker. Additionally, some sensors, such as GPS, and vehicle bus signals are also directly connected to the RTEPC via CAN.

III. Development Process

A three-stage development process was used during the development of the automated driving system. In this section, each stage of development and its goals are described.

A. Virtual Development and Evaluation

The development of the HAD functionality was initially conducted in a virtual environment using driving simulation software and hardware of the BMW Group Research and Technology [21], shown in Fig. 3.

The driving simulator software provides a perfect representation of the world, including intelligent dynamic objects and the road’s infrastructure, and allows for the development of the core functionality of the HAD system. Additionally, the simulation software can be coupled to a static cut-out of a vehicle cockpit or a complete dynamic simulator with fully functional steering wheel, pedals and displays in order to assist in development or to evaluate human-interaction concepts, such as driver overtake scenarios [22]. The driving simulator was primarily used for the development of the driving strategy, as well as the longitudinal and lateral controllers. Besides testing single automated maneuvers and scenarios, long-term tests were carried out in order to evaluate the overall functionality. For this purpose, the traffic simulation program PELOPS from fka was used [23]. The long-term tests were initially carried out for a driven distance of about 5000 kilometers at a time. An overview of the quantitative evaluation is given in [24].

B. Validation on Test Track

Once the system had achieved a thoroughly tested and stable version in the virtual environment, tests were carried out on a closed test track in a prototype BMW 5 Series outfitted with environment perception sensors, actuators and prototyping hardware and software, as described in Sec. II and Sec. IV-A. Validation on the test track began modestly, first making sure all of the components, such as the sensors, perception, localization, etc., were working independently. Once all of the components were verified and integrated, basic driving maneuvers,


such as lane keeping at different speeds, were independently tested in order to ensure safe vehicle control; this included the testing of all safety mechanisms and worst-case scenarios for the automated driving system. After safe operation of the vehicle control was verified, more complicated driving maneuvers were added to the testing catalog, which must rely on the vehicle’s perception systems in order to detect other dynamic and stationary objects. Up to three additional vehicles were used in testing in order to stage certain scenarios and agitate the automated driving vehicle. Additionally, virtual objects were played into the system in order to test the reaction of the automated driving vehicle in critical situations.

C. Testing on Public Roads

Once the HAD system had been validated on the test track, the first tests on public roads took place. Several highway routes around Munich were used for tests on public roads. The challenge of testing on public roads versus on a test

track is that the HAD system must operate in real traffic, where normal persons are driving, all of whom are completely unaware that an autonomous vehicle is in their surroundings. This led to the discovery of many new challenges that needed to be solved. An example of such a new challenge was cooperative driving, where other vehicles attempting to merge onto the highway at an on-ramp expect a certain amount of cooperation from the vehicles already on the highway. Dealing with different types of vehicles, such as trucks, was also a challenge, since they tend to act differently from a normal passenger car. With the identification of new challenges, new approaches were developed in order to deal with the unique problems faced when driving automated on public roads with real traffic. This is a continuous and still ongoing process, where new challenges are first worked on in the virtual environment, then optimized on the test track and finally verified on public roads.

IV. Environment Perception

Reliable and robust environment perception is critical in automated driving. Not only is the sensor configuration important, but the perception algorithms used to extract the environment models are a key aspect for successful environment perception. In this section, the concept for environment perception used in the automated driving project is presented, from the sensor configuration to the extraction and fusion of the environment models.

A. Sensors for Environment Perception

The prototype vehicles for HAD are fitted with 12 sensors for environment perception: four laser scanners (two four-layer, two single-layer), three radars, four ultrasonic sensors and a mono camera. The laser scanner sensors provide a complete surround view of the vehicle’s environment without any gaps. The radar sensors in the front and the rear enable long-range detection of vehicles and obstacles. The ultrasonic sensors on the side provide a redundant source for detecting close vehicles directly to the side. The mono camera in the front is able to reliably classify obstacles, such as vehicles, and detect lane markings for localization. The sensor configuration is shown in Fig. 4.

With the exception of the laser scanner sensors, all of the sensors are series production sensors currently integrated into the BMW 5 Series for various driver assistance applications. Most areas of the vehicle are observed by at least two sensors with different measuring principles, particularly

FIG 3 Virtual validation using a static driving simulator (a) or a dynamic simulator (b).

FIG 2 Modular functional design of the global system architecture.


the front and the rear, resulting in a configuration with a redundant sensor concept, improving system robustness.

B. Object Detection and Fusion

An important aspect of automated driving is detecting dynamic objects in the vehicle’s environment. A dynamic object O_i is described by

    O_i = {x̂, P, p(∃x), c}    (1)

where x̂ is the state vector, P the state covariance matrix, p(∃x) the object’s existence probability and c the classification vector. The state vector is represented by a 2D rectangular model with velocity, acceleration, orientation and yaw rate in the host vehicle coordinate system. Eight features are defined on the rectangular model for a dynamic object: four at the corners of the rectangle and four on the middle edge of the sides. These features are used in the tracking and fusion algorithms in order to guarantee consistent and reliable state estimations [25]–[27].

A high-level sensor data fusion architecture with a sensor-, fusion- and application-level is used to fuse the data from the various sensors around the vehicle [27]. At the sensor-level, traditional Kalman filter tracking algorithms with a feature-based association method are used. The existence probability is estimated independently of the state estimation using a Bayes filter, similar to the algorithm presented in [28]. The classification vector is estimated using machine learning methods, such as support vector machines, depending on the sensor. At the fusion-level, a sensor-to-global fusion method with the information matrix fusion algorithm is used [29]. The Dempster-Shafer evidence theory is used to fuse the existence probability [30] and classification vector. The result of the object detection at the sensor-level and fusion-level with ground truth is shown in Fig. 5. At the application-level, the fused object data is further processed by the environment model or directly used by an ADAS application, such as automated driving.
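Fusing two existence estimates with Dempster’s rule of combination, as referenced for [30], can be sketched on a binary frame of discernment {exists, does not exist}. The mass values below and the two-sensor setting are assumptions for illustration, not the project’s parameters.

```python
def ds_combine(m1, m2):
    """Dempster's rule on the frame {exists, not exists}.
    Each mass is a tuple (m_exists, m_not_exists, m_unknown) summing to 1."""
    e1, n1, u1 = m1
    e2, n2, u2 = m2
    conflict = e1 * n2 + n1 * e2      # mass assigned to contradictory evidence
    k = 1.0 - conflict                # normalization constant
    return ((e1 * e2 + e1 * u2 + u1 * e2) / k,
            (n1 * n2 + n1 * u2 + u1 * n2) / k,
            (u1 * u2) / k)

# Two sensors each report evidence for the object's existence (assumed masses).
fused = ds_combine((0.7, 0.1, 0.2), (0.6, 0.2, 0.2))
```

The appeal of the evidential form over a plain probability is the explicit "unknown" mass, which lets a sensor report weak evidence without committing it to either hypothesis.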

C. Grid-Based Environment Representation

The previous section presented the typical box-model object detection used to track dynamic objects in the vehicle’s environment. However, such an object representation has its limits when detecting static objects, as static objects have a much more complex and larger variety of forms. In order to detect the static environment, a grid-based environment representation is used. Grid-based representations, such as an occupancy grid, are very common in robotics, where the static environment can be used for motion planning or localization [31] or for the extraction of static obstacles.

One form of a grid-based environment representation is an occupancy grid, where each grid cell represents the occupancy of that cell due to an obstacle. In the HAD system, a laser scanner sensor was mainly used for generating the grid-based environment, though experiments were also made with a radar sensor [32]. The occupancy grid algorithms were efficiently implemented on a GPU platform [32]. An example occupancy grid using a laser scanner is shown in Fig. 6. This occupancy grid map forms the basis for extraction algorithms for detecting features in the environment, such as road boundaries (see Sec. V-B). In standard occupancy grid mapping, the environment is assumed to be static. However, sensor data also contains information about dynamic objects, which can cause unwanted artifacts and violates the static world assumption.
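The standard log-odds update behind such an occupancy grid can be sketched for a single cell as follows; the inverse sensor model probabilities (0.7 for a hit, 0.3 for a pass-through) are assumed values for illustration, not those used in the project.

```python
import math

# Assumed inverse sensor model: p(occupied | hit) = 0.7, p(occupied | miss) = 0.3.
L_OCC = math.log(0.7 / 0.3)
L_FREE = math.log(0.3 / 0.7)

def update_cell(l, occupied):
    """Standard log-odds occupancy update for one grid cell."""
    return l + (L_OCC if occupied else L_FREE)

def probability(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

l = 0.0  # prior: p = 0.5 (unknown)
for hit in [True, True, True, False]:   # three laser hits, one pass-through
    l = update_cell(l, hit)
print(round(probability(l), 3))  # → 0.845
```

Working in log-odds makes each scan a cheap addition per cell, which is also why the update parallelizes so well onto a GPU, as done in [32].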

Another form of grid-based environment representation is a ground map, where each cell represents the probability of a ground location with high reflectivity. Lane markings, reflectors and various types of road surfaces have different reflection characteristics regarding laser pulses emitted by a laser scanner. Such a grid-based environment representation of the ground is calculated using the same occupancy grid methods used for a static obstacle occupancy grid map [32]. An example of such a grid map of the ground using a laser scanner is shown in Fig. 7. These types of ground maps can be used to extract features from the ground, such as lane markings (see Sec. V-B).

V. Road Model and Vehicle Localization

Automated driving applications require cm-accurate localization (in the area of 10–20 cm) and a description of the road in order to realize robust and reliable path and

FIG 4 Sensor configuration for environment perception used in the prototype vehicles for automated driving (1: four-layer laser scanner, 2: single-layer laser scanner, 3: radar, 4: mono camera, 5: ultrasonic).

FIG 5 Object detection results at the sensor-level, where the colored boxes represent objects from various sensors (a), and at the fusion-level, where the blue box is the fused result (b). The ground truth is shown with a wire-frame white box.


trajectory planning, as well as vehicle control. Additionally, many complex scenarios can only be solved using lane-accurate localization in conjunction with a digital map in order to obtain a higher level understanding of the vehicle’s environment, such that the automated vehicle is able to make the correct decision about its next driving maneuver. In this section, the concept for vehicle localization and mapping for highway scenarios used in the automated driving project is presented.

A. Map and Sensor-Based Concept for Vehicle Localization

A map-based localization concept was developed for the automated driving project [33]. The localization concept is illustrated in Fig. 8. Together with vehicle odometry, GPS, and sensor-based road model recognition, a cm-accurate position is obtained. Once localized in the map, the a-priori information contained within it enables the driving strategy to make decisions about lane change maneuvers and navigation, some of which are not possible solely with sensor-based road model recognition methods. Additionally, a map-based approach increases robustness and foresight, both of which are necessary considering the requirements of HAD, where the driver may be out-of-the-loop.

The map consists of mainly two layers: a semantic, geometric layer and a localization layer. The semantic, or geometric, layer contains the lane model geometry and the high-level semantic information, such as lane connectivity, required for path planning and decision making. The localization layer contains the data required for localizing the vehicle within the map. In this project, the localization layer consists mainly of lane marking and road boundary data, which can be detected by the sensors and then, together with GPS and vehicle odometry, used to match the vehicle onto the map.

Using the a-priori map (see Sec. V-C for details on how such a map is generated), a Kalman filter is used to estimate the vehicle’s position within the map. The filter is updated with GPS data and detected landmarks, such as lane markings, in order to continuously update the localization hypothesis. The vehicle’s position is also predicted using odometry data at a higher rate, which is required for vehicle control.
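This predict/update cycle can be reduced to a minimal scalar sketch: odometry prediction at a high rate, then corrections from a coarse GPS fix and a precise landmark match. The noise variances and values below are hypothetical; the actual filter estimates the full vehicle pose within the map.

```python
def kf_predict(x, p, u, q):
    """Predict the along-track position with an odometry increment u (noise q)."""
    return x + u, p + q

def kf_update(x, p, z, r):
    """Correct with an absolute position measurement z of variance r."""
    k = p / (p + r)                      # Kalman gain
    return x + k * (z - x), (1 - k) * p

# Assumed toy values: start uncertain, drive 1 m, then fuse GPS and a landmark.
x, p = 0.0, 4.0
x, p = kf_predict(x, p, u=1.0, q=0.09)   # odometry (runs at a high rate)
x, p = kf_update(x, p, z=1.2, r=1.0)     # coarse GPS fix
x, p = kf_update(x, p, z=1.05, r=0.04)   # precise lane-marking landmark match
```

Note how the precise landmark measurement dominates the final estimate and shrinks the variance, which is exactly why map-matched lane markings, rather than GPS alone, carry the cm-level accuracy requirement.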

B. Road Model Estimation Using Environment Sensors

Using vehicle sensors to detect the road and lanes is an approach extensively used in driver assistance systems. The most traditional approach is to use a vision-based sensor for detecting the lane markings of the road [34], where image-based extraction algorithms are used to estimate a 2nd or 3rd degree polynomial to model the lane markings. A monocular camera was also used in the automated driving project for lane marking detection.

However, for automated driving, a redundant source of information for detecting the road infrastructure is very useful for increasing the robustness and availability of the system. Lane markings can also be detected using a laser scanner. An algorithm was developed based on a ground map occupancy grid, as described in Sec. IV-C. From the ground map, lane markings in the form of a 2nd degree polynomial are extracted and classified [35]. An advantage of using a laser scanner is its larger field-of-view, allowing the detection of not just the host-vehicle’s lane, but also neighboring lanes. The approach, however, is strongly dependent on the reflective quality of the lane

FIG 6 Grid-based environment perception using a laser scanner sensor.

FIG 7 Grid-based ground mapping using a laser scanner.

FIG 8 Accurate vehicle localization using a high-precision digital map, combined with GPS, vehicle odometry and environment perception.


markings. Results of such a lane marking algorithm using a laser scanner are shown in Fig. 9.

In addition to lane markings, other elements of the road infrastructure can be detected in order to improve vehicle localization. Typically, road boundaries are used to detect hard boundaries of the road, such as guardrails or jersey barriers. In the automated driving system, an occupancy grid based approach was used to extract road boundaries using a 2nd degree polynomial model [32]. Such road boundaries can be detected using a laser scanner or a radar sensor. An example result of such a road boundary extraction algorithm is shown in Fig. 10.
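Fitting the 2nd degree polynomial model to candidate cells extracted from a grid map can be sketched as an ordinary least-squares fit. The sample points below are synthetic, and the preceding cell extraction and classification steps are omitted; this only illustrates the polynomial model shared by the lane marking and road boundary estimators.

```python
import numpy as np

# Hypothetical candidate cells (x, y) extracted from the ground map, in meters:
# longitudinal distance x ahead of the vehicle, lateral offset y of the marking.
pts = np.array([(x, 0.5 + 0.01 * x + 0.002 * x**2)
                for x in np.arange(0.0, 40.0, 5.0)])

# Fit the 2nd degree polynomial lane model y = c2*x^2 + c1*x + c0.
c2, c1, c0 = np.polyfit(pts[:, 0], pts[:, 1], deg=2)

def lane_y(x):
    """Lateral offset of the fitted marking at longitudinal distance x."""
    return c2 * x**2 + c1 * x + c0
```

In a curvature-based reading of the model, c0 corresponds to the lateral offset, c1 to the heading of the marking and 2*c2 to its curvature, which is what makes the low-degree polynomial a convenient interface between perception and planning.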

The road model estimation algorithms presented above are used for generating high-precision maps (see Sec. V-C) and then again to localize the vehicle in the map (see Sec. V-A). In case the localization algorithms fail, due to an invalid map, for example, the road model estimation algorithms using environment sensors can serve as a fallback until the driver has retaken control of the vehicle.

C. Generation of High-Precision Maps

Today’s navigation maps do not meet the requirements needed for HAD, where lane-accurate information is required. Therefore, new high-precision maps were generated for highways around the Munich area for developing and testing automated driving in this project. Maps for automated driving require cm-accurate lane-level information about the road, with detailed attributes, where the accuracy corresponds to a relative accuracy of the map and not necessarily absolute accuracy. The maps are then used for vehicle localization and situation interpretation in the driving strategy, in order to carry out complex driving maneuvers.

In this project, the base data for generating a map was collected using a small fleet of vehicles with similar sensor configurations as the HAD prototypes, described in Sec. IV-A. The base data consisted mainly of GPS trajectories, vehicle odometry and road model perception data from the vehicle’s sensors, such as lane marking and road boundary detection. However, the initial map data contains uncertainties and the individual datasets are not yet consistent with one another, as shown in Fig. 11a. In order to homogenize the data, the map data is modeled and processed using the GraphSLAM algorithm [33], [36]–[38], results of which are shown in Fig. 11b. Fig. 12 shows an example of how the graph is modeled. The nodes of the graph correspond to vehicle poses, and the relationships between the nodes represent motion models or sensor measurements, such as a GPS position or a landmark feature. After the data has been homogenized, the geometrical lane model is extracted and saved into a database.
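Reduced to one dimension, GraphSLAM-style homogenization is a linear least-squares problem over vehicle poses with relative (odometry) and absolute (GPS) constraints. This toy sketch with assumed weights only illustrates the structure of the graph in Fig. 12, not the actual algorithm of [33], [36]–[38].

```python
import numpy as np

def graph_slam_1d(odom, gps, w_odom=100.0, w_gps=1.0):
    """Least-squares pose graph: odometry edges x[i+1]-x[i]=u, GPS priors x[i]=z.
    Weights are assumed inverse variances (odometry trusted more than GPS here)."""
    n = len(odom) + 1
    H = np.zeros((n, n))          # information matrix of the normal equations
    b = np.zeros(n)
    for i, u in enumerate(odom):  # relative constraint between poses i and i+1
        H[i, i] += w_odom;   H[i + 1, i + 1] += w_odom
        H[i, i + 1] -= w_odom; H[i + 1, i] -= w_odom
        b[i] -= w_odom * u;   b[i + 1] += w_odom * u
    for i, z in enumerate(gps):   # absolute constraint (GPS fix), where available
        if z is not None:
            H[i, i] += w_gps
            b[i] += w_gps * z
    return np.linalg.solve(H, b)  # minimizer of the quadratic graph cost

# Odometry says three 1.0 m steps; GPS anchors the first and last pose.
poses = graph_slam_1d(odom=[1.0, 1.0, 1.0], gps=[0.0, None, None, 3.3])
```

Because odometry is weighted heavily, the solver keeps the step lengths nearly uniform and distributes the GPS disagreement smoothly over the trajectory, which is exactly the homogenization effect visible between Fig. 11a and Fig. 11b.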

A challenge of mapping is the necessity of creating maps on a large scale. In this project, only a few maps were generated: two longer routes (one between Munich and Ingolstadt on the A9 and another between Munich and the airport with a highway interchange) and several smaller maps for testing specific scenarios, such as tunnels. In the future, maps would need to be created for all of the highways in a country and continent. There are two main strategies for obtaining such maps: create the maps using a fleet of specially equipped vehicles, which continuously keep the maps up-to-date, or use crowd-sourced fleet data, where maps are generated and kept up to date in a backend server. Both are feasible solutions, but much more work still needs to be done to realize either approach and make it commercially viable.

VI. Driving Strategy and Vehicle Control

Safe and comfortable driving are the most important requirements for the acceptance of HAD. The driving strategy is the central instance of artificial intelligence and is responsible for plausible decision-making. Based on these decisions, trajectories are generated and sent to a robust and real-time vehicle controller. This requires a very high controller performance and comfortable behavior.

In this section, an overview of the driving strategy is presented. It also addresses the concepts of lateral and longitudinal vehicle control.

A. Driving Strategy

The driving strategy determines a unique maneuver based on the current traffic situation, traffic rules, the driver’s intention and the global destination. To reduce the complexity of the model, the continuous driving task is divided into a finite set of lateral (Q_lat) and longitudinal (Q_long) guidance

FIG 9 Laser scanner based lane marking detection using a ground map. Dashed lane markings are colored in green whereas solid lane markings are shown in red.

FIG 10 Road boundary detection using a laser scanner static obstacle occupancy grid map.


states. The system states q_lat^i ∈ Q_lat and q_long^i ∈ Q_long are summarized in Table 1.

The lateral guidance consists of eight possible states: System Off, Lane Keeping, Lane Change Gap Approach (LCGA), Lane Change (left and right) and Lane Change Abort (left and right). In the Lane Keeping state, the vehicle’s goal is to remain in the same lane. If a lane change is desired, but due to the traffic situation is not possible, the lateral and longitudinal guidance is changed to the Lane Change Gap Approach state. In this state, an appropriate gap is selected and tracked over time. If possible, this gap is then approached. Once a lane change is desired and the situation allows for its execution, the Lane Change state is selected and a continuous lane change trajectory is calculated. If a critical situation occurs during a lane change, the system will abort the lane change immediately and the vehicle automatically returns to the originating lane.

The longitudinal guidance consists of five states: System Off, Dynamic Cruise Control (DCC), Active Cruise Control (ACC), Lane Change Gap Approach (LCGA) and Critical Control. If there is no vehicle in front within the sensor range, the host vehicle drives in the system state DCC with the desired speed, where restrictions, such as speed limits, are taken into account. Within the ACC state, the speed of the host vehicle is adjusted to a target object (such as the vehicle in front). The target object, e.g., during a lane change, can be the front vehicle in the adjacent lane. The state Critical Control is activated in critical situations, such as when another vehicle cuts in closely. It allows a greater dynamic response in the longitudinal guidance in order to resolve critical situations.

In order to select the appropriate system states q_lat and q_long for the current situation, decision-making processes are required. These processes are modeled here via a network of hybrid deterministic automata and decision trees. The system structure for the driving strategy and vehicle control of the HAD system is illustrated in Fig. 13.

The decision-making process runs through four hierarchical levels. From the evaluation of the situation, driving requests are derived. These requests are investigated concerning their feasibility in the next level and, depending on the feasibility, an appropriate driving maneuver is executed.

An example visualization of the vehicle’s environment, including the driving strategy (bottom-left and bottom-right of the image) is shown in Fig. 14. Further information about the driving strategy is given in [39].

B. Vehicle Control

Within the control modules, the required trajectories for the respective driving maneuvers are generated and the control performance is adapted to the specific driving tasks. In the following, the control modules are presented for both lateral and longitudinal vehicle guidance.

1) Longitudinal Dynamics Control: In order to facilitate velocity and distance control, a cascade control structure was developed. The structure consists of three nested control loops for acceleration, velocity and distance

Table 1. System states for HAD on freeways.

 i | q_lat               | q_long
 0 | Off                 | Off
 1 | Lane Keeping        | DCC
 2 | LCGA (left)         | ACC
 3 | LCGA (right)        | LCGA
 4 | Lane Change (left)  | Critical Control
 5 | Lane Change (right) |
 6 | LC-Abortion (left)  |
 7 | LC-Abortion (right) |

FIG 11 The steps in map generation: collecting raw sensor data from a fleet of vehicles (a), homogenization of the data using GraphSLAM (b) and extraction of the geometrical lane model used in the HAD system (c).

FIG 12 An example graph in modeling the road, where the nodes represent vehicle poses and the edges represent models between the nodes (gray: odometry, green: data association, red: GPS).


control. A detailed description of the introduced state-adaptive control concept is presented in [40].
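A minimal sketch of such a cascade: the outer loop turns the distance error into a velocity set-point, the middle loop turns the velocity error into an acceleration set-point and the inner loop commands the actuators. The proportional-only structure and all gains are illustrative assumptions, not the controller from [40].

```python
def cascade_longitudinal_step(d_des, d, v_target, v, a,
                              kp_d=0.5, kp_v=1.5, kp_a=4.0):
    """One update of a cascaded distance -> velocity -> acceleration
    control structure (proportional loops only; gains are illustrative)."""
    # Outer loop: distance error yields a velocity set-point relative to the target.
    v_des = v_target + kp_d * (d - d_des)
    # Middle loop: velocity error yields an acceleration set-point.
    a_des = kp_v * (v_des - v)
    # Inner loop: acceleration error yields the actuator command (throttle/brake).
    u = kp_a * (a_des - a)
    return v_des, a_des, u
```

With the gap at its set-point and the host at the target speed, all commands are zero; a too-large gap raises the velocity set-point so the host closes in.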

2) Lateral Dynamics Control: The lateral controller is based on the control concept discussed in [41]. Since the lateral controller should be used up to a moderate dynamic range, a simplified vehicle model (e.g., a single-track model) can be used for modeling the controlled system. For this purpose, a decoupling controller is used, which has high control accuracy, stability and comfort. The controller consists of an inner and an outer loop. The inner loop corresponds to the decoupling

[Fig. 13 block labels: Situation Evaluation, Driving Request, Target Object Selection, Trajectory Planning, Decision Tree (Long. Guidance), Decision Tree (Lat. Guidance), Parallel Composition, Network of Hybrid Deterministic Automata, Discrete Events, Longitudinal Control, Lateral Control.]

FIG 13 System structure for the driving strategy and control of highly automated driving vehicles.


FIG 14 Visualization of the HAD system and the driving strategy during a lane change maneuver.


controller with the yaw rate as output. It ensures that the yaw motion of the vehicle follows its desired value from the generated trajectory. The outer control loop provides the required nominal variables for the yaw rate controller through state feedback.
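One way to picture the two-loop structure is sketched below: the outer loop turns tracking errors into a desired yaw rate via state feedback, the inner loop tracks that yaw rate. The error terms and all gains are hypothetical simplifications; the actual decoupling controller from [41] is more involved.

```python
def lateral_control_step(e_y, e_psi, yaw_rate, v,
                         k_y=0.2, k_psi=0.8, k_r=1.2):
    """One step of a two-loop lateral controller sketch.

    Outer loop: lateral offset e_y and heading error e_psi (both relative to
    the planned trajectory) give a nominal yaw rate through state feedback.
    Inner loop: the yaw-rate error is mapped to a steering command.
    All gains are illustrative, not identified parameters."""
    # Outer loop: state feedback on the tracking errors (speed-scaled offset term).
    yaw_rate_des = k_y * e_y / max(v, 1.0) + k_psi * e_psi
    # Inner (decoupling) loop: track the desired yaw rate.
    steering_cmd = k_r * (yaw_rate_des - yaw_rate)
    return yaw_rate_des, steering_cmd
```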

VII. Discussion and Lessons Learned

Since BMW began its research in automated driving, much has been learned. In this section, important topics from the results of this research are discussed, which will require attention in the future in order to bring HAD functions into production vehicles.

A. Dynamic Limits of the Prototype HAD System

The HAD system was intentionally developed with limits at the actuators in order to meet controllability requirements for operating the system on public roads. Braking and steering moments are limited in the current system, which means that full ABS braking or abrupt evasive maneuvers in critical situations are not possible in the automated driving mode. This limits the system's capabilities in handling critical situations. In the current prototypes, the test driver must therefore be aware of this limit and take over manual control when such potentially critical situations arise. In future HAD systems, where the driver can be out of the loop, such critical situations must be handled in the automated driving mode, which requires the integration of automatic braking and evasive maneuver systems into the overall HAD system.

B. Artificial Intelligence

Driving is a very complicated task. What has been learned in this project is that the automation of this complicated task by a computer is quite a challenge. Despite testing "only" on highways, it was learned that even in this relatively simple environment, in terms of possible scenarios and situations, many challenges remain in order to reach a level of intelligence in the automated driving system that parallels the human ability to drive and react to situations. Humans tend to compare an automated driving system's performance with their own performance and usually expect that the HAD system decides and reacts to situations in a similar manner as a human driver would, or even better. We as humans have an unprecedented ability to learn, remember and recall events. Additionally, we have the instant ability to perceive and understand what we are looking at and can, most of the time, easily make the correct decision and react appropriately in various situations. This seemingly easy task for humans is in actuality incredibly complex and difficult to model and reproduce with a machine and algorithms. Simple situations can be easily modeled, but there are always exceptions and unexpected situations that can happen at any time; developing the algorithms to react correctly in these unique situations is still quite challenging. The challenges in artificial intelligence for automated driving systems will always have their limits, but the systems will also continuously improve until a level of intelligence is reached with which HAD will be possible and where safety, within certain conditions, can be guaranteed.

C. Driver Acceptance

An important aspect of getting automated driving functions into production vehicles is the acceptance with which drivers, and the public in general, greet such new technologies. It is essential that the first time someone experiences such a technology, it works well, feels safe and comfortable and can be trusted. The question will be, which system criteria will be most important in influencing driver acceptance? Is it the HAD system's ability to drive like a human? Its ability to drive smoothly without disturbing the driver during a secondary task? Or will it be its ability to increase safety and prevent accidents? Experience with the BMW HAD system has shown that the system's control of the steering influences driver acceptance: lateral control cannot be abrupt and must make smooth steering inputs. Several persons from the press have had the chance to have an exclusive test drive with the BMW HAD system and most find the control inputs smooth and comforting, leading to a fairly quick acceptance of the HAD system. However, it is still unknown if the public in general is ready to accept automated driving technology. The public may still have certain fears about this technology which need to be addressed. It will be necessary to continue positive communication about automated driving in order to prepare the public for its introduction.

D. Challenges of Mapping and Localization

The HAD prototypes in this project all rely on a high-precision digital map for localization and for road foresight. The maps were created with a partially automated process involving some manual work, where the data came from vehicles with a sensor configuration similar to the HAD prototypes. Only a few maps were created during the project: a map covering a large distance, a map of a highway interchange and several smaller maps to test specific scenarios.

A practical problem that often arose during the project is that the mapped highways had road work at various locations throughout the year, where the lane markings and road boundaries slightly changed, making the original map inaccurate. In some cases the inaccuracy was negligible, but in other cases that part of the highway needed to be remapped, which in itself is a challenging process. The HAD system will need to be updated in the future in order to detect that an old map is no longer valid and that the driver must then drive manually. This is a current focus of research. Furthermore, the updating of an old map


needs to become a seamless and automated process, as road environments are surprisingly dynamic and can change quite often. An automated map-updating process could, for example, be realized in the future through cloud-computing applications using a backend server.

Achieving a robust and highly available localization is also quite a challenge. The HAD prototypes in this project used a differential GPS (DGPS) to aid in the localization process. DGPS allows for a lane-accurate initial localization hypothesis for finding the correct position in the map. Once localized in the map, DGPS was only necessary when landmark features were sparse or difficult to detect. Due to the high cost of DGPS and the need for robust localization despite many uncertainties, the algorithms for localization still need to be improved.

Another challenge of mapping is dealing with maps on a large scale. Maps will eventually need to be generated for all of the highways in a given country or continent in order to have a wide offering of HAD. This requires the collection of more detailed data than what is currently used in today's navigation maps. These maps will also need to be kept up-to-date on a much larger scale, where the subscription-based model of today's navigation maps will not suffice. New business models need to be developed and new technical challenges will need to be solved in order to reach a large-scale availability of high-precision digital maps for automated driving. The advantage is that even if not all vehicles have an automated driving system, older systems, such as in-car navigation, can be vastly improved using these new maps.

The above problem leads to another issue with large-scale mapping: standardization. Currently, all of the automobile manufacturers, institutes and universities researching automated driving are generating their own maps in their own format, specific to their version of an automated driving system. There are quite a few similarities between the different approaches, but also a few differences. Maps on such a large scale, such that they can be used in production vehicles, will need to go through a standardization process, with the automobile manufacturers working together in a consortium to specify the requirements and format of the maps. If a standard for the high-precision maps needed in automated driving can be developed, it will reduce complexity and vastly reduce costs.

E. Driver Takeover Scenario

With HAD, it is assumed that the driver can be out of the loop and is not continuously monitoring the system [16], [17]. However, the HAD system may still reach a limit which requires the driver to take over manual control of the vehicle. How these takeover scenarios should happen is still an open research topic. One problem is that it is still undetermined how long a driver needs to safely take over manual control [22]. Another issue is how the system should notify the driver about the necessity to take over control. The takeover process needs to be transparent and completely understandable by the driver, such that no mode confusion occurs. It is critical that the driver is able to react correctly to the presented situation at the time of taking over manual control. These problems are current research topics in the human factors field of automated driving. Currently, studies are being conducted with test persons in dynamic driving simulators in order to understand a person's reaction to, and understanding of, takeover scenarios [22].

F. Limits of Sensor Range and Capability

It is assumed that automated driving can be made available with a maximum speed of 130 km/h, the highest allowed speed limit on European highways. At this speed, and considering the time required for a driver to manually take over control at a system limit, a theoretical requirement on the sensor range can be derived for which a HAD vehicle must at all times be capable of avoiding a collision using solely the on-board sensors. Depending on the derivation, today's sensors either do not meet this requirement or only barely do so. Therefore, some scenarios can become critical, where the complete driver takeover time cannot be guaranteed. An increase in comfort, especially in driver takeover scenarios, could be achieved by extending the range of the sensors through other means, for example car-to-X communication.

Additionally, sensors must improve in reliability and redundancy, such that even in the event of sensor failures, the HAD vehicle is able to continue long enough until the driver has taken over control or the vehicle comes to a safe and complete stop. Sensors must also improve in their ability to estimate their own capability, such that a HAD system, which must guarantee collision-free driving for at least as long as it takes to come to a stop, can adapt itself based on the current overall capability of the sensors.

G. Challenges in Perception Algorithms

Perception algorithms have come a long way in recent years. However, challenges still remain in achieving the robustness and reliability required for HAD.

Detection of vehicles has become quite reliable with driver assistance systems such as ACC. However, for 360° object detection, some work still needs to be done. Achieving object detection reliable enough for a distinct lane association is still challenging, especially at distances greater than 100 m on a curvy road. A false object association can lead to an incorrect reaction by the driving strategy. Additionally, the problem of false positives still remains, especially when a lane closely borders a road boundary, for example, in tight spaces at a construction site.


As mentioned in Sec. IV-C, a challenge in grid representations is the static-world assumption of occupancy grids, which cannot be guaranteed with traditional algorithms. In order to make full use of a static obstacle occupancy grid, no dynamic object artifacts can occur. Ensuring that a grid only contains static obstacles allows it to be used for more advanced motion planning algorithms, where an automated driving system would be able to react to unclassified objects that are difficult to model using traditional object tracking algorithms. It is therefore important to further develop algorithms that successfully separate the model-based representation of the dynamic world from the static grid-based representation.

Perception of the road can also be quite challenging, especially in scenarios such as road construction sites. Currently, only limited landmarks, such as lane markings and road boundaries, are detected, which may not be reliable in all situations. Detecting discontinuous road boundaries, such as those outlined with traffic cones, is also quite difficult. The simple polynomial models typically used also cannot represent more difficult road scenarios, such as S-curves. New detection possibilities for the road infrastructure are required, not only to increase robustness, but also as a means of interpretation and plausibility checks against other detected aspects of the road. Different sources of information about the road need to be detected, but then also combined and interpreted in order to obtain a uniform representation and model of the road environment, as required for automated driving, especially in cases where the high-precision map is temporarily inaccurate.

H. Validation and Certification

Beyond the technical challenges for HAD, the validation and certification processes required in order to bring such technology into production vehicles are arguably an even bigger challenge. An automotive electronic system, including driver assistance systems, requires a thorough validation and testing process. The standard ISO 26262 is a functional safety standard for road vehicles, which describes the requirements that automobile manufacturers and automotive suppliers need to follow in order to safely certify their components and systems. HAD will need to meet the toughest requirements of such standards, as a software or hardware error could be critical and result in a fatal injury if the system fails at the wrong time and in the wrong situation. This requires extensive testing at the component (hardware and software) and system level, where, theoretically, thousands and thousands of kilometers need to be driven to validate the system. With traditional validation and testing processes, it will be quite difficult to meet the necessary requirements in a realistic development time frame. Therefore, the automotive industry must develop new processes that can validate HAD.

Such new processes need to be developed by the automobile manufacturers, but also together with the automotive suppliers, such that HAD can one day become a reality in production vehicles.

VIII. Latest Developments

The above sections described the current state of the HAD system. Many areas of the system are continually being worked on and improved in order to increase the safety, comfort, robustness and availability of the system. In this section, the latest developments in several areas of the system are described.

A. Dynamic Grids

In [42], a novel environment perception approach was presented that estimates the static as well as the dynamic environment at the same time in a grid-based model. Similar to occupancy grid mapping [31], the environment is divided into a number of cells. Differently, though, a Dempster-Shafer model is used with a 3-class frame of discernment

Θ = {F, S, D}    (2)

which represents the hypotheses for free space, static occupancy and dynamic occupancy.

First, the dynamic environment is estimated using a particle filter framework that builds upon ideas from [43], and evidences for the dynamic and the static environment are deduced. Then, in a single frame, these evidences are combined with free-space information that comes from a scan grid built from the sensor data. The combination is done using the non-normalized Dempster's rule of combination, where conflicts are explicitly resolved. The temporal filtering is done using a weighted version of Jøsang's cumulative operator [44].
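A toy version of the non-normalized combination over the frame {F, S, D} could look as follows. The focal elements are restricted to the singletons and the full frame, and the conflict mass is simply kept on the empty set; the actual conflict resolution in [42] is more elaborate.

```python
def combine_nonnormalized(m1, m2):
    """Non-normalized Dempster's rule of combination over the frame
    Theta = {F, S, D} (free, static, dynamic). Masses are dicts mapping
    frozenset focal elements to belief mass; the conflict mass stays on
    the empty set instead of being renormalized away."""
    out = {}
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b  # intersection of the two focal elements
            out[inter] = out.get(inter, 0.0) + ma * mb
    return out

# Focal elements used in this sketch.
F, S, D = frozenset('F'), frozenset('S'), frozenset('D')
THETA = F | S | D  # full ignorance
```

Combining strong evidence for "static" with evidence for "free" puts the conflicting mass explicitly on the empty set, where it can be inspected and resolved.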

Fig. 15 shows some of the results. The benefit of the approach is that the uncertainty of whether sensor measurements belong to static or to dynamic objects can be integrated and therefore does not need to be binary. The dynamic and the static environment are estimated simultaneously in a uniform representation, rather than using, as is often done, a model-based approach for tracking the dynamic world and a grid-based approach for mapping the static world, with the latter relying on the static/dynamic classification of the former.

B. Real-Time Road Model Estimation

Usually, the road model is stored in a high-precision digital map, as described in Sec. V, and is used for localization, situation evaluation, prediction, decision-making and motion planning. However, the map may be invalid or inaccurate, for example due to a new construction site, where the stored road model does not match the current


road environment. Therefore, it is necessary to estimate a real-time road model using only the vehicle's on-board sensors. Typically, landmarks such as lane markings and road boundaries, as described in Sec. V-B, are detected. In this section, two novel approaches for detecting aspects of the road model are presented, where the approaches do not directly detect landmarks, but extract and interpret them from other types of data.

1) Grid-based Road Course and Road Boundary Estimation: As mentioned, typical approaches work with feature-based lane marking as well as road boundary detection using cameras [34]. In complex, unstructured environments, however, lane marking and road boundary detection systems still have difficulties. For example, in construction sites, temporary lane markings may exist, making it difficult to detect the correct markings. Road boundaries are also not always consistent and may be continuous structures or sparse structures, such as cones, making the detection of road boundaries with previous grid-based approaches difficult as well [45]–[47]. The new approach uses path planning on a static obstacle occupancy grid to yield boundary separators that are then used to extract the road boundaries [48]. Since the goal pose is not known, as this would imply knowing a position on the road which is to be estimated, a path planner was developed that efficiently finds feasible paths [49]. The most compute-intensive part in many motion planning algorithms, especially in grid-based environment representations, is checking for collisions and evaluating workspace costs. Therefore, two novel

algorithms for the real-time calculation of the configuration-space costs were developed, allowing the cost and collision evaluation of a particular configuration, incorporating the entire vehicle geometry, to be performed with a single look-up [50]. Fig. 16 shows some of the results in a simulated road construction site with several different types of road boundaries, such as traffic cones, parked vehicles and bushes. Shown are the paths from the path planner (blue), the extracted boundary cells (colored pyramids), the continuous semantic road boundary and the smoothed center-line trajectory (green). The approach can handle curves with strong curvatures, S-shaped curves and branches.
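The single-look-up idea can be illustrated as a precomputed table: for every cell and discretized heading, the maximum workspace cost under the vehicle footprint is evaluated once offline, so an online cost or collision query is one table access. The footprint, grid size and cost values below are made-up toy data, not the algorithms from [50].

```python
def footprint_cells(x, y, heading_idx):
    """Hypothetical 1x2-cell vehicle footprint for 2 discrete headings."""
    return [(x, y), (x + 1, y)] if heading_idx == 0 else [(x, y), (x, y + 1)]

def build_cspace(cost_grid, width, height, headings=2):
    """Precompute the configuration-space cost table: for each (x, y, heading),
    store the maximum workspace cost covered by the vehicle footprint."""
    table = {}
    for h in range(headings):
        for x in range(width):
            for y in range(height):
                cells = footprint_cells(x, y, h)
                table[(x, y, h)] = max(cost_grid.get(c, 0) for c in cells)
    return table
```

During planning, `table[(x, y, h)]` then replaces a per-query sweep over the footprint.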

2) Lane Boundary Estimation Using the Movement of Dynamic Objects: The history of detected dynamic objects can also be used to estimate aspects of the road model, particularly the boundaries of the lanes on which objects are moving. In contrast to the grid-based methods, where the hard boundaries of the road are estimated, this approach is capable of estimating the borders of all of the lanes on which dynamic objects are currently observed. The underlying assumption behind this approach is that all vehicles drive within valid lanes. The advantage of this approach is that a redundant source of information about the lanes is available, which can also be used in situations where lane markings may not be available or where the traffic is ignoring the lane markings due to a unique road situation. Results of such an algorithm are shown in Fig. 17.
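The idea can be illustrated with a deliberately simple sketch: fit a center-line through the recorded positions of vehicles in a lane and offset it by half a lane width to obtain the boundaries. The straight-line fit and the fixed 3.5 m lane width are simplifying assumptions for illustration only.

```python
# Illustrative sketch: estimate a lane's center-line and boundaries from the
# recorded positions of vehicles driving in it, assuming vehicles stay near
# the lane center.
LANE_WIDTH = 3.5  # assumed lane width in meters

def fit_line(points):
    """Least-squares line y = a*x + b through (x, y) samples."""
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def lane_boundaries(history):
    """Center-line fit plus left/right boundaries offset by half a lane width."""
    a, b = fit_line(history)
    return (a, b + LANE_WIDTH / 2), (a, b - LANE_WIDTH / 2)
```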


FIG 15 Results from the grid-based mapping and tracking [42]. The dynamic and the static environment are estimated simultaneously in a uniform, grid-based representation. It allows the integration of uncertainties of whether sensor data is static or dynamic. (a) Uniform static and dynamic environment representation. (b) Estimated velocities, color coded. (c) Standard Bayesian occupancy grid. (d) Reference image.


FIG 16 Road course estimation [48] in an unknown environment showing a simulated road construction site. The approach is capable of representing strong curves (a) as well as branches (b), and can deal with arbitrary obstacle-based road boundaries such as parked vehicles, bushes or traffic cones.


C. Grid-Based Mapping and Localization

The current mapping approach uses a feature map, where feature vectors are stored in a database and used to update a Kalman filter for position estimation. This approach, however, has its limits once the vehicle's environment becomes more complex. A key problem is data association between the map and the detected features. Therefore, current research is focusing on a generic localization approach based on probabilistic maps [51] and scan-matching algorithms [52].

With the scan-matching algorithm, localization can be achieved using two types of data: environment grid data or environment feature data. As described in Sec. IV-C, a generic grid-based representation of the ground can be generated using sensor data. This complete grid can be accumulated over a larger area and saved into a database, where the grid data can later be used by the HAD system for localization. Such a grid-based data sample from a map is shown in Fig. 18a. Another option is to still save feature data into the map, but, as an intermediate step, convert the feature data into a grid-based representation, where the same matching algorithms can be used for localization. This results in similar accuracy, but a vastly reduced map size. Examples of such grid maps are shown in Fig. 18.
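The principle behind grid-based scan matching can be sketched as a brute-force correlation: slide the current sensor grid over the stored map grid and keep the offset with the best overlap. A real implementation also searches over heading and uses far more efficient optimization than this exhaustive 2-D translation search; the toy version below only shows the matching idea.

```python
def match_score(map_grid, scan, dx, dy):
    """Correlation of the scan's occupied cells with the map at offset (dx, dy)."""
    score = 0
    for (x, y), occ in scan.items():  # scan holds occupied cells only
        score += occ * map_grid.get((x + dx, y + dy), 0)
    return score

def localize(map_grid, scan, search=2):
    """Exhaustive search over integer translations for the best-matching offset."""
    offsets = ((dx, dy)
               for dx in range(-search, search + 1)
               for dy in range(-search, search + 1))
    return max(offsets, key=lambda o: match_score(map_grid, scan, *o))
```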

Initial results have been promising for this approach. The next generation of maps and localization algorithms will use this approach, which should also be viable for more complex driving scenarios, such as country roads or city driving.

D. Hierarchical GraphSLAM for Large-Scale Map Generation

Once maps become larger, the number of nodes required in the graph for generating the map increases drastically. Considering that maps need to be made for complete countries and continents, a hierarchical approach is required for solving the mapping problem. The idea is to break down a large graph into regional sub-graphs. Each sub-graph is solved and condensed into a smaller graph with just a few variables. This process can be hierarchically repeated until the highest hierarchical level has few enough variables that it is computationally solvable. The solution from the highest hierarchical level can then be back-propagated into the lower levels to solve the complete mapping problem at a large scale. The concept of this hierarchical approach is shown in Fig. 19. It can be shown that the hierarchical approach is equivalent to solving a non-hierarchical graph with the same nodes. Using such an approach, it may be possible to generate and store maps for very large areas.

E. Vehicle Maneuver Prediction

For a forward-looking decision-making process in the driving strategy, a robust and computationally efficient maneuver prediction of other road users is required. This makes an early response to critical events (for example, cut-in vehicles) possible. For highway scenarios, the velocities in the longitudinal direction can be assumed to be approximately constant for a small prediction horizon. However, this assumption is not valid in the lateral direction, due to the fact that lane change maneuvers have a non-linear behavior. Therefore, a separate prediction model is needed which determines the probability of a lane change, P(C_i | f), where C_i is the set of possible lateral maneuvers and f is the measured feature vector for estimating these maneuvers. For lateral maneuver prediction, a feature vector consisting of the lateral distance to the center of the lane and the lateral velocity in the lane-relative coordinate system is used. As a classifier, a Bayes classifier is used. This approach provides a suitable framework for further extensions. Results of the lateral maneuver prediction and its application in the driving strategy can be found in [53].
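Under a naive-Bayes assumption with Gaussian class-conditional densities, P(C_i | f) for f = (lateral offset, lateral velocity) can be sketched as below. All means, standard deviations and priors are invented for illustration; they are not the trained parameters from [53].

```python
import math

def gauss(x, mu, sigma):
    """Univariate Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Per-class models for f = (lateral offset d_y [m], lateral velocity v_y [m/s]),
# assuming conditional independence of the two features (naive Bayes).
# Format: ((mu_d, sd_d), (mu_v, sd_v), prior). All values are made-up.
MODELS = {
    "keep":         ((0.0, 0.3),  (0.0, 0.15),  0.90),
    "change_left":  ((0.8, 0.4),  (0.5, 0.25),  0.05),
    "change_right": ((-0.8, 0.4), (-0.5, 0.25), 0.05),
}

def predict_maneuver(d_y, v_y):
    """Return the posterior P(C_i | f) for each maneuver class via Bayes' rule."""
    post = {}
    for c, ((mu_d, sd_d), (mu_v, sd_v), prior) in MODELS.items():
        post[c] = gauss(d_y, mu_d, sd_d) * gauss(v_y, mu_v, sd_v) * prior
    z = sum(post.values())  # normalize over the classes
    return {c: p / z for c, p in post.items()}
```

A vehicle centered in its lane with no lateral motion is classified as lane keeping, while a vehicle drifting left with positive lateral velocity shifts the posterior toward a left lane change.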

FIG 17 Lane boundary estimation (orange lines) using solely the movement history of dynamic objects, overlaid with the high-precision digital map described in Sec. V-C. Note that the two bottom lanes are not part of the digital map.


FIG 18 Grid-based mapping approach with sensor ground grids (a) and feature-grids (b).


FIG 19 A hierarchical approach to GraphSLAM in order to reduce the computation and complexity needed for larger-scale map generation.


F. Impact Evaluation of Highly Automated Vehicles (HAV) in Traffic Flow on Highways

To find out how traffic performance parameters change when HAVs drive in traffic, the PELOPS simulation software was extended to handle several HAVs simultaneously. With the new controller, it is possible to create scenarios with different penetration rates (up to 100%) to observe interactions between HAVs and vehicles with driver models in PELOPS. This gives the opportunity to compare traffic efficiency and safety measures in different scenarios by means of microscopic traffic simulation.

IX. Conclusion

Over the past few years, a lot of progress has been made in automated driving applications. BMW has been testing automated driving on highways in Germany since 2011. This paper presented the architecture and algorithms developed in the HAD prototypes, along with discussions of the results and lessons learned. The HAD prototypes are still undergoing testing and are being continuously improved. Although there have been major improvements in the last decade, all aspects of the automated driving system, including perception, localization, decision-making and path planning algorithms, still need to be further developed in order to bridge the gap between robotics research and a customer-ready system. The next big step will be to focus on the industrialization of HAD technology: what needs to be done to get such technology into production vehicles? In this area, there is still a lot of work to be done, especially in the areas of validation/certification and the generation of large-scale digital maps. In the meantime, partially automated driving systems, such as the traffic jam assistant, are already going into production today, and more such partially automated driver assistance systems will make it to market in the coming years.

About the Authors

Michael Aeberhard was born in Lausanne, Switzerland on December 5, 1983. He received a Bachelor of Science in Computer Engineering from the Georgia Institute of Technology in Atlanta (GA), USA in 2007, a Master of Science in Electrical and Computer Engineering from the Georgia Institute of Technology in Metz, France in 2009 and the Master Professionnel degree from the École supérieure d'électricité (Supélec) in Metz, France in 2009. Since 2010, he has been working towards the Doctoral degree in Engineering (Dr.-Ing.) at the Technische Universität Dortmund in Dortmund, Germany in conjunction with the BMW Group Research and Technology in Munich, Germany. Since 2012, he has been a sub-project leader for highly automated driving at the BMW Group Research and Technology. His

research interests include sensor data fusion, multi-object tracking, environment perception and functional development for advanced driver assistance systems and highly automated driving.

Sebastian Rauch received the Diploma degree in Mechanical Engineering from the Technical University of Munich, Munich, Germany, in 2009. Since 2010, he has been working towards the Doctoral degree in Engineering (Dr.-Ing.) at the Technical University of Braunschweig, Braunschweig, Germany, in conjunction with the BMW Group Research and Technology in Munich, Germany. Since 2012, he has been the sub-project leader for the environment model for highly automated driving at the BMW Group Research and Technology. His research interests include environment perception, highly accurate digital maps and precise localization for advanced driver assistance systems and automated driving.

Mohammad Bahram was born in Karaj, Iran on August 2, 1985. He received the Diplom-Ingenieur degree in electrical engineering and information technology from the RWTH Aachen University in 2012. Since 2012, he has been working towards the Doctoral degree at the Technische Universität München in Munich, Germany in conjunction with the BMW Group Research and Technology in Munich, Germany. His research interests include driving strategy and maneuver prediction for highly automated driving.

Georg Tanzmeister received the Bachelor of Science degree in computer science in 2009 and the Diploma Engineering degree in computer graphics and computer vision in 2011 from the Technische Universität Wien, Vienna, Austria. In 2008 he studied computer science at the Universitat Autònoma de Barcelona, Barcelona, Spain and in 2009 at the City College of New York, New York City, USA. He is currently working towards the Doctor of Engineering degree in electrical engineering at the Technische Universität München in conjunction with the BMW Group Research and Technology, both in Munich, Germany. His research interests include autonomous vehicles, path and motion planning, environment perception and mapping.

Julian Thomas was born in Frankfurt, Germany on February 2, 1987. He received a Master of Science in Physics from the Universität Frankfurt in 2013. Since 2013, he has been working towards the Doctoral degree at the Freie Universität Berlin in Berlin, Germany in conjunction with the BMW Group Research and Technology in Munich, Germany. His research interests include sensor data fusion, real-time road model estimation and environment perception for highly automated driving.

Yves Pilat was born in Niederuzwil, Switzerland on September 15, 1985. He received a Master of Science in Mechanical Engineering from the ETH Zurich in 2011. He is currently working in the team for highly automated driving at the BMW Group Research and Technology. His research interests include autonomous vehicles, motion control and functional development for advanced driver assistance systems and highly automated driving.

Florian Homm received the Diploma degree in computer science from the Technical University of Munich, Munich, Germany, in 2007, and the Dr.rer.nat. degree from the Technical University of Munich, Munich, Germany, in 2012. From 2007 until 2011 he was with the BMW Group Research and Technology, Munich, Germany, doing research on grid-based environment perception, laser scanner based lane detection, sensor data fusion and vehicle localization.

Werner Huber studied civil engineering at the Technical University Munich with a focus on traffic engineering between 1984 and 1991. Dr. Huber was a research assistant in the field of traffic engineering at TU Munich until 1996, where he evaluated driver assistance and telematics systems within the framework of national and European R&D projects and wrote his doctoral thesis on the capture and utilization of floating car data. Since 1996 Dr. Huber has been working at the BMW Group. He first worked on traffic and vehicle research before changing to innovation management for interior, driving experience and E/E-infrastructure in 2002. In 2004 he switched to the department for vehicle electronics, where he was responsible for the integration of driver assistance functions. In 2006 Dr. Huber became the team leader for integral safety, focused on complete vehicle architecture and integration, at the department of vehicle safety. In 2009 he came to the central unit for driving assistance at the BMW Group. Since 2012 he has been leading the research group for driving assistance and perception at BMW Group Research and Technology, where he is responsible for the project Highly Automated Driving.

Nico Kaempchen received the M.Sc. in artificial intelligence from the University of Edinburgh, Scotland, in 2000, the Diploma degree (Dipl.-Ing.) in electrical engineering from the University of Stuttgart, Germany, in 2001 and the Doctoral degree (Dr.-Ing.) from the University of Ulm, Germany, in 2007. Between 2006 and 2012, he was with the BMW Group Research and Technology, Munich, Germany, working on driver assistance systems. Since 2012 he has been working at BMW AG in the department of environment model and software development. His research interests include highly automated driving, environment perception, sensor data fusion, estimation and multi-object tracking in the context of intelligent transportation systems. Dr. Kaempchen received an award from the Heidelberg Academy of Sciences, Germany, in 2007 for his doctoral thesis on feature-level fusion of laser scanner and video data for advanced driver assistance systems.

References

[1] S. Karush, "They're working: Insurance claims data show which new technologies are preventing crashes," Status Report of the Insurance Institute for Highway Safety—Highway Loss Data Institute, vol. 52, no. 5, July 2012.

[2] C. Urmson, J. Anhalt, D. Bagnell, C. Baker, R. Bittner, M. N. Clark, J. Dolan, D. Duggins, T. Galatali, C. Geyer, M. Gittleman, S. Harbaugh, M. Hebert, T. M. Howard, S. Kolski, A. Kelly, M. Likhachev, M. McNaughton, N. Miller, K. Peterson, B. Pilnick, R. Rajkumar, P. Rybski, B. Salesky, Y.-W. Seo, S. Singh, J. Snider, A. Stentz, W. R. Whittaker, Z. Wolkowicki, J. Ziglar, H. Bae, T. Brown, D. Demitrish, B. Litkouhi, J. Nickolaou, V. Sadekar, W. Zhang, J. Struble, M. Taylor, M. Darms, and D. Ferguson, "Autonomous driving in urban environments: Boss and the Urban Challenge," J. Field Robot., vol. 25, no. 8, pp. 425–466, Aug. 2008.

[3] M. Buehler, K. Iagnemma, and S. Singh, Eds., The DARPA Urban Challenge: Autonomous Vehicles in City Traffic (Series Springer Tracts in Advanced Robotics), vol. 56. Berlin Heidelberg, Germany: Springer-Verlag, 2009.

[4] J. Levinson, J. Askeland, J. Becker, J. Dolson, D. Held, S. Kammel, J. Z. Kolter, D. Langer, O. Pink, V. Pratt, M. Sokolsky, G. Stanek, D. Stavens, A. Teichman, M. Werling, and S. Thrun, "Towards fully autonomous driving: System and algorithms," in Proc. IEEE Intelligent Vehicles Symp., Baden-Baden, Germany, June 2011, pp. 163–168.

[5] J. Wei, J. Snider, J. Kim, J. Dolan, R. Rajkumar, and B. Litkouhi, "Towards a viable autonomous driving research platform," in Proc. IEEE Intelligent Vehicles Symp., Gold Coast, Australia, June 2013, pp. 763–770.

[6] J. Wille, F. Saust, and M. Maurer, "Stadtpilot: Driving autonomously on Braunschweig's inner ring road," in Proc. IEEE Intelligent Vehicles Symp., San Diego, CA, June 2010, pp. 506–511.

[7] F. Saust, J. Wille, B. Lichte, and M. Maurer, "Autonomous vehicle guidance on Braunschweig's inner ring road within the Stadtpilot project," in Proc. IEEE Intelligent Vehicles Symp., Baden-Baden, Germany, June 2011, pp. 169–174.

[8] A. Geiger, M. Lauer, F. Moosmann, B. Ranft, H. Rapp, C. Stiller, and J. Ziegler, “Team AnnieWAY’s entry to the grand cooperative driving challenge 2011,” IEEE Trans. Intell. Transport. Syst., vol. 13, no. 3, pp. 1008–1017, Sept. 2012.

[9] M. Bertozzi, L. Bombini, A. Broggi, M. Buzzoni, E. Cardarelli, S. Cattani, P. Cerri, A. Coati, S. Debattisti, A. Falzoni, R. I. Fedriga, M. Felisa, L. Gatti, A. Giacomazzo, P. Grisleri, M. C. Laghi, L. Mazzei, P. Medici, M. Panciroli, P. P. Porta, P. Zani, and P. Versari, “VIAC: An out of ordinary experiment,” in Proc. IEEE Intelligent Vehicles Symp., Baden-Baden, Germany, June 2011, pp. 175–180.

[10] P. Furgale, U. Schwesinger, M. Rufli, W. Derendarz, H. Grimmett, P. Muhlfellner, S. Wonneberger, J. Timpner, S. Rottmann, B. Li, B. Schmidt, T. N. Nguyen, E. Cardarelli, S. Cattani, S. Bruning, S. Horstmann, M. Stellmacher, H. Mielenz, K. Koser, M. Beermann, C. Hane, L. Heng, G. H. Lee, F. Fraundorfer, R. Iser, R. Triebel, I. Posner, P. Newman, L. Wolf, M. Pollefeys, S. Brosig, J. Effertz, C. Pradalier, and R. Siegwart, "Toward automated driving in cities using close-to-market sensors," in Proc. IEEE Intelligent Vehicles Symp., Gold Coast, Australia, June 2013, pp. 809–816.

[11] HAVEit. [Online]. Available: http://www.haveit-eu.org/

[12] SARTRE Project. [Online]. Available: http://www.sartre-project.eu/

[13] J. Ziegler, P. Bender, M. Schreiber, H. Lategahn, T. Strauss, C. Stiller, T. Dang, U. Franke, N. Appenrodt, C. G. Keller, E. Kaus, R. G. Herrtwich, C. Rabe, D. Pfeiffer, F. Lindner, F. Stein, F. Erbs, M. Enzweiler, C. Knöppel, J. Hipp, M. Haueis, M. Trepte, C. Brenk, A. Tamke, M. Ghanaat, M. Braun, A. Joos, H. Fritz, H. Mock, M. Hein, and E. Zeeb, "Making Bertha drive—An autonomous journey on a historic route," IEEE Intell. Transp. Syst. Mag., vol. 6, no. 2, pp. 8–20, 2014.

[14] N. Kaempchen, M. Aeberhard, M. Ardelt, and S. Rauch, "Technologies for highly automated driving on highways," Automobiltechnische Zeitschrift, pp. 48–52, June 2012.

[15] BMW Group PressClub Global. (2011, Aug.). Ready for takeover! [On-line]. Available: http://www.press.bmwgroup.com/

[16] T. M. Gasser, C. Arzt, M. Ayoubi, A. Bartels, J. Eier, F. Flemisch, D. Häcker, T. Hesse, W. Huber, C. Lotz, M. Maurer, S. Ruth-Schumacher, J. Schwarz, and W. Vogt, "Rechtsfolgen zunehmender fahrzeugautomatisierung," Bundesanstalt für Straßenwesen, Tech. Rep. Fahrzeugtechnik Heft F 83, Jan. 2012.

[17] S. P. Wood, J. Chang, T. Healy, and J. Wood, "The potential regulatory challenges of increasingly autonomous motor vehicles," Santa Clara Law Rev., vol. 52, no. 4, pp. 1423–1502, Dec. 2012.

[18] SAE International, “Taxonomy and definitions for terms related to on-road motor vehicle automated driving systems,” Tech. Rep. SAE J 3016, Jan. 2014.

[19] P. Waldmann and D. Niehues, "Der BMW track trainer—Automatisiertes fahren im grenzbereich auf der nürburgring nordschleife," in 4. Tagung Sicherheit durch Fahrerassistenz, Garching, Germany, Apr. 2010.

[20] N. Kaempchen, M. Aeberhard, P. Waldmann, M. Ardelt, and S. Rauch, “Der BMW nothalteassistent: Hochautomatisiertes fahren für mehr sicherheit im straßenverkehr,” Elektr. Autom., vols. 8–9, pp. 26–29, Sept. 2011.

[21] M. Strobl and A. Huesmann, “High flexibility—An important issue for user studies in driving simulation,” in Proc. Driving Simulation Conf., Paris, France, Sept. 2004.

[22] C. Gold, D. Damböck, L. Lorenz, and K. Bengler, “‘Take over!’ How long does it take to get the driver back into the loop?” in Proc. 57th Human Factors Ergonomics Society Int. Annu. Meeting, San Diego, CA, Sept. 2013.

[23] A. Hochstädter, P. Zahn, and K. Breuer, “A universal driver model with the applications traffic simulation and driving simulator,” in Proc. 1st Human-Centered Transport Simulation Conf., 2000.

[24] M. Ardelt, "Hybrid control strategies for advanced safety- and driver assistance systems," Ph.D. dissertation, Tech. Univ. Munich, Munich, Germany, 2012.

[25] N. Kaempchen, “Feature-level fusion of laser scanner and video data for advanced driver assistance systems,” Ph.D. dissertation, Univ. Ulm, Ulm, Germany, June 2007.

[26] K. Schueler, T. Weiherer, E. Bouzouraa, and U. Hofmann, "360 degree multi sensor fusion for static and dynamic obstacles," in Proc. IEEE Intelligent Vehicles Symp., Alcalá de Henares, Spain, June 2012, pp. 692–697.

[27] M. Aeberhard and N. Kaempchen, "High-level sensor data fusion architecture for vehicle surround environment perception," in Proc. 8th Int. Workshop Intelligent Transportation, Hamburg, Germany, Mar. 2011.

[28] R. Altendorfer and S. Matzka, "A confidence measure for vehicle tracking based on a generalization of Bayes estimation," in Proc. IEEE Intelligent Vehicles Symp., San Diego, CA, June 2010, pp. 766–772.

[29] M. Aeberhard, S. Schlichthärle, N. Kaempchen, and T. Bertram, "Track-to-track fusion with asynchronous sensors using information matrix fusion for surround environment perception," IEEE Trans. Intell. Transport. Syst., vol. 13, no. 4, pp. 1717–1726, Dec. 2012.

[30] M. Aeberhard, S. Paul, N. Kaempchen, and T. Bertram, "Object existence probability fusion using Dempster-Shafer theory in a high-level sensor data fusion architecture," in Proc. IEEE Intelligent Vehicles Symp., Baden-Baden, Germany, June 2011, pp. 770–775.

[31] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. Cambridge, MA: MIT Press, 2005.

[32] F. Homm, N. Kaempchen, J. Ota, and D. Burschka, "Efficient occupancy grid computation on the GPU with lidar and radar for road boundary detection," in Proc. IEEE Intelligent Vehicles Symp., San Diego, CA, June 2010, pp. 1006–1013.

[33] S. Rauch, T. Schaller, A. Savkin, and P. Hecker, "Hochgenaue fahrzeugeigenlokalisierung und kollektives erlernen hochgenauer digitaler karten," in Proc. 12. Symp. Automatisierungssysteme, Assistenzsysteme und eingebettete Systeme für Transportmittel, Braunschweig, Germany, Feb. 2011.

[34] J. McCall and M. Trivedi, "Video-based lane estimation and tracking for driver assistance: Survey, system, and evaluation," IEEE Trans. Intell. Transport. Syst., vol. 7, no. 1, pp. 20–37, Mar. 2006.

[35] F. Homm, N. Kaempchen, and D. Burschka, "Fusion of laserscanner and video based lanemarking detection for robust lateral vehicle control and lane change maneuvers," in Proc. IEEE Intelligent Vehicles Symp., Baden-Baden, Germany, June 2011, pp. 969–974.

[36] S. Rauch, M. Aeberhard, and P. Hecker, "Kollektives erlernen hochgenauer straßenmodelle als grundlage zur einführung automatisierter fahrfunktionen," in 6. Tagung Fahrerassistenz, Munich, Germany, Nov. 2013.

[37] S. Thrun and M. Montemerlo, "The GraphSLAM algorithm with applications to large-scale mapping of urban structures," Int. J. Robot. Res., vol. 25, nos. 5–6, pp. 403–429, May–June 2006.

[38] G. Grisetti, R. Kümmerle, C. Stachniss, and W. Burgard, "A tutorial on graph-based SLAM," IEEE Intell. Transp. Syst. Mag., vol. 2, no. 4, pp. 31–43, 2010.

[39] M. Ardelt, P. Waldmann, F. Homm, and N. Kaempchen, "Strategic decision-making process in advanced driver assistance systems," in Proc. 6th IFAC Symp. Advances Automotive Control, Munich, Germany, July 2010, pp. 566–571.

[40] B. Adiprasito, "Fahrzeuglängsführung im niedergeschwindigkeitsbereich," Ph.D. dissertation, Technische Universität Braunschweig, Braunschweig, Germany, 2004.

[41] M. Werling, "Ein neues konzept für die trajektoriengenerierung und -stabilisierung in zeitkritischen verkehrsszenarien," Ph.D. dissertation, KIT Scientific Publishing, Karlsruhe, Germany, 2010.

[42] G. Tanzmeister, J. Thomas, D. Wollherr, and M. Buss, "Grid-based mapping and tracking in dynamic environments using a uniform evidential environment representation," in Proc. IEEE Int. Conf. Robotics Automation, 2014, pp. 6090–6095.

[43] R. Danescu, F. Oniga, and S. Nedevschi, “Modeling and tracking the driving environment with a particle-based occupancy grid,” IEEE Trans. Intell. Transport. Syst., vol. 12, no. 4, pp. 1331–1342, Dec. 2011.

[44] A. Jøsang and S. Pope, "Dempster's rule as seen by little colored balls," Comput. Intell., vol. 28, no. 4, pp. 453–474, Nov. 2012.

[45] T. Weiss, B. Schiele, and K. Dietmayer, "Robust driving path detection in urban and highway scenarios using a laser scanner and online occupancy grids," in Proc. IEEE Intelligent Vehicles Symp., 2007, pp. 184–189.

[46] M. Darms, M. Komar, and S. Lueke, "Map based road boundary estimation," in Proc. IEEE Intelligent Vehicles Symp., 2010, pp. 609–614.

[47] M. Konrad, M. Szczot, and K. Dietmayer, "Road course estimation in occupancy grids," in Proc. IEEE Intelligent Vehicles Symp., 2010, pp. 412–417.

[48] G. Tanzmeister, M. Friedl, A. Lawitzky, D. Wollherr, and M. Buss, "Road course estimation in unknown, structured environments," in Proc. IEEE Intelligent Vehicles Symp., Gold Coast, Australia, June 2013, pp. 630–635.

[49] G. Tanzmeister, M. Friedl, D. Wollherr, and M. Buss, "Path planning on grid maps with unknown goal poses," in Proc. IEEE Intelligent Transportation Systems Conf., Oct. 2013, pp. 430–435.

[50] G. Tanzmeister, M. Friedl, D. Wollherr, and M. Buss, "Efficient evaluation of collisions and costs using grid maps for autonomous vehicle motion planning," IEEE Trans. Intell. Transport. Syst., vol. 15, no. 5, pp. 2249–2260, 2014.

[51] J. Levinson and S. Thrun, "Robust vehicle localization in urban environments using probabilistic maps," in Proc. IEEE Int. Conf. Robotics Automation, Anchorage, AK, May 2010, pp. 4372–4378.

[52] E. Olson, “Real-time correlative scan matching,” in Proc. IEEE Int. Conf. Robotics Automation, Kobe, Japan, May 2009, pp. 4387–4393.

[53] M. Bahram, A. Wolf, M. Aeberhard, and D. Wollherr, “A prediction-based reactive driving strategy for highly automated driving function on freeways,” in Proc. IEEE Intelligent Vehicles Symp., Dearborn, MI, June 2014, pp. 400–406.