
Proposal

Title: Measuring Situational Awareness for Regulating the Transfer from Highly-Automated Driving to Human Control

Description

The transition towards fully autonomous driving is in full swing. While driver-assistance systems such as adaptive cruise control have primarily scanned the external surroundings of the car and could only make assumptions about the state of the driver, it is becoming more and more important to recognize the state of the driver and to take it into account as an internal factor as well.

In the near future, there will be parts of the road network where automated driving is not available. Typical examples are the loss of lane markings or the end of a suitably equipped road. In such situations, the vehicle needs to perform a take-over request in order to hand over control and, consequently, responsibility to the driver.

This mode of operation, with automated phases in which the driver does not have to monitor the vehicle and other road segments that have to be driven manually, is generally known as highly-automated driving.

During the automated phases, the driver can be engaged in side activities, such as

● interacting with other devices (talking on the phone, writing/reading emails),

● drinking coffee or smoking,

● resting (sleeping).

Within that transition period, the car has to evaluate whether the driver is ready to take over manual control. The crucial factor for determining take-over readiness is the driver's level of inattention, which is influenced by the secondary task being performed. More specifically, before handing over control, the car needs to check whether the driver knows, at a minimum:

● the location of the car,

● the heading direction,

● the speed, as well as

● relevant moving and non-moving objects in the direct environment of the car.

If these conditions are not met, e.g. because the driver is asleep, the car has to react accordingly, for example by driving into a parking position at the next rest area. Attention centered on the road cannot be presumed anymore.


If it is possible to automatically detect secondary tasks while they are being performed, the

driver’s inattention level could be inferred. Consequently, the driver could be supported in

the best possible manner during take-over situations.

The goal of the master thesis is to investigate and implement methods for

driver-activity recognition based on eye and head movement in the context of conditionally

autonomous driving. Additionally, the time needed by the driver for taking over control should

be calculated. Based on the developed methods, a corresponding user study and evaluation should be conducted.

Roadmap

Step 1: Set up the driving simulator and instrument the driving seat with sensors

● Driving simulator software: OpenDS

● Attach sensors:

○ Pupil Eyetracker

○ Intel RealSense for facial expression and body posture
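To give an impression of the sensor integration, the following is a minimal sketch of reading live gaze data from the Pupil eye tracker, assuming Pupil Capture's ZMQ/msgpack network API on its default port 50020 (requires the pyzmq and msgpack packages); it is an illustration, not the final recording pipeline.

    import zmq
    import msgpack

    ctx = zmq.Context()

    # Ask Pupil Remote for the port of the data publisher.
    req = ctx.socket(zmq.REQ)
    req.connect("tcp://127.0.0.1:50020")
    req.send_string("SUB_PORT")
    sub_port = req.recv_string()

    # Subscribe to all gaze messages.
    sub = ctx.socket(zmq.SUB)
    sub.connect(f"tcp://127.0.0.1:{sub_port}")
    sub.subscribe("gaze")

    while True:
        topic, payload = sub.recv_multipart()
        gaze = msgpack.unpackb(payload, raw=False)
        # 'norm_pos' is the gaze position normalized to [0, 1] in both axes.
        x, y = gaze["norm_pos"]
        print(topic.decode(), x, y, gaze["confidence"])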

Step 2: Activity Recognition while driving

Recognize and classify side activities (e.g. reading emails) and driving-related activities (e.g. observing other vehicles) using machine learning methods such as deep neural networks; a minimal classifier sketch follows the list below.

With image processing and computer vision:

○ identify the driver's body posture

■ For example: Can the driver reach the brake pedal?

■ For example: Is the driver in a relaxed body posture?
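As a first illustration of the classification step, the following minimal sketch maps a fixed-length feature vector per time window to an activity label with a small neural network in scikit-learn. The feature dimensionality (124 = 92 static + 32 driving-specific features, see below), the activity labels, and the random placeholder data are assumptions made for the sketch.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    ACTIVITIES = ["reading_email", "phone_call", "drinking", "sleeping", "observing_traffic"]

    # One row per time window; placeholder data instead of real eye/head features.
    X_train = np.random.rand(200, 124)
    y_train = np.random.choice(ACTIVITIES, size=200)

    clf = make_pipeline(
        StandardScaler(),  # eye and head features live on very different scales
        MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0),
    )
    clf.fit(X_train, y_train)

    window = np.random.rand(1, 124)  # features of the current window
    print(clf.predict(window))       # predicted side activity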

Step 3: Create a measure for situational awareness (sensor fusion and signal processing)

● identify sensor data that have a positive or a negative influence on situational awareness

● fuse the data from all available sensors in order to retrieve a single measure for situational awareness (a fusion sketch follows below)
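A minimal sketch of such a fusion is a weighted combination of normalized per-sensor indicators, clamped to a single situational-awareness score in [0, 1]. The indicator names and weights below are assumptions for illustration; in the thesis they would have to be calibrated, e.g. in the user study.

    def situational_awareness(indicators: dict) -> float:
        """Fuse normalized sensor indicators (each in [0, 1]) into one SA score."""
        weights = {
            "gaze_on_road_ratio":  +0.4,  # share of gaze samples on the road scene
            "head_facing_forward": +0.3,  # share of time the head points ahead
            "hands_near_wheel":    +0.2,  # from body-posture estimation
            "eyes_closed_ratio":   -0.5,  # long eye closures indicate drowsiness
            "device_interaction":  -0.4,  # detected secondary task on a device
        }
        score = 0.5 + sum(w * indicators.get(k, 0.0) for k, w in weights.items())
        return min(1.0, max(0.0, score))  # clamp to [0, 1]

    # A drowsy driver interacting with a device yields a very low score.
    print(situational_awareness({"eyes_closed_ratio": 0.8, "device_interaction": 1.0}))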

Step 4: Evaluation


Proposed approach for eye tracking

In order to classify eye-gaze activities, we can record the eye movements by means of a mobile eye tracker. Based on these data we can detect basic eye patterns: saccades, fixations, and blinks.

In order to distinguish between saccades and fixations, we can use a Bayesian online mixture model or an algorithm based on Haar wavelets. Eye blinks are not explicitly detected but modelled from the data: the set of (x, y) gaze positions, which forms a discrete signal. A simplified detection sketch follows.
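As a simplified stand-in for the methods named above, the following sketch separates saccades from fixations with a plain velocity threshold (the I-VT idea): samples whose angular velocity exceeds a threshold are labelled saccades, the rest fixations. The sampling rate and threshold are assumed values.

    import numpy as np

    def label_samples(x, y, sampling_rate_hz=120.0, velocity_threshold=100.0):
        """x, y: gaze angles in degrees per sample; returns a label per transition."""
        dt = 1.0 / sampling_rate_hz
        vx = np.diff(x) / dt
        vy = np.diff(y) / dt
        speed = np.hypot(vx, vy)  # angular velocity in deg/s
        return np.where(speed > velocity_threshold, "saccade", "fixation")

    x = np.array([0.0, 0.1, 0.2, 5.0, 9.8, 10.0, 10.1])
    y = np.zeros_like(x)
    print(label_samples(x, y))  # the large jumps are labelled as saccades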

Based on these three detected patterns, we can extract multiple features for a specific task, e.g. reading or sleeping.

Encoding

The combined eye-movement encoding and wordbook analysis map every saccade to a character depending on the amplitude and direction of the saccade. A moving window of a specified size l is then shifted over the resulting character sequence, and all occurring combinations of characters, called words, are detected and saved in the wordbook.
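A minimal sketch of this encoding and wordbook construction follows; the concrete alphabet (four direction characters, uppercase for large saccades) and the 5° amplitude threshold are assumptions, since the text above only fixes the principle.

    import math
    from collections import Counter

    def encode_saccade(dx: float, dy: float) -> str:
        """One character per saccade: l/r/u/d by dominant direction, uppercase if large."""
        char = ("r" if dx > 0 else "l") if abs(dx) >= abs(dy) else ("u" if dy > 0 else "d")
        return char.upper() if math.hypot(dx, dy) > 5.0 else char

    def build_wordbook(saccades, l=3):
        """Shift a window of size l over the character sequence and count each word."""
        chars = "".join(encode_saccade(dx, dy) for dx, dy in saccades)
        return Counter(chars[i:i + l] for i in range(len(chars) - l + 1))

    # A reading-like pattern: small rightward saccades plus one large return sweep.
    saccades = [(1.2, 0.0), (1.0, 0.1), (1.1, 0.0), (-8.0, -1.0), (1.2, 0.0)]
    print(build_wordbook(saccades))  # Counter({'rrr': 1, 'rrL': 1, 'rLr': 1})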


Feature extraction

A conditionally autonomous driving scenario is far more dynamic and distracting than a static lab environment. For example, the driver may gaze away from the secondary task towards the road because traffic participants attract his attention.

Therefore, this work examines novel eye and head features introduced to address the

behavior of the test subjects in the vehicle.

Features for head tracking

All of these newly introduced features are shown below. Picture a) outlines 20 features derived from the head-tracking signal.

Here, every leaf node corresponds to an actual feature, while the parent nodes show the dependencies on the different head and eye patterns. We calculate mean and variance features for every position and rotation axis in 3D space, and divide the field of view into 8 quadrants to capture where, and for how long, the driver's head was directed.


The four inner quadrants result from the circumstance that the straight-ahead gaze and head direction cannot be treated as an exact point but only as a narrow field of view. The size of the inner quadrants was set to 10° in the x- and 5° in the y-direction based on a previous analysis of the head direction; a quadrant-assignment sketch follows below.
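The quadrant assignment can be sketched as follows; the assumptions here are that the inner field spans ±10° in x and ±5° in y around straight ahead, and that a 30 Hz head-tracking rate is used to accumulate dwell times.

    from collections import defaultdict

    def quadrant(x_deg: float, y_deg: float) -> str:
        """Quadrant label for a head direction relative to straight ahead (0, 0)."""
        prefix = "inner_" if abs(x_deg) <= 10.0 and abs(y_deg) <= 5.0 else ""
        horizontal = "right" if x_deg >= 0 else "left"
        vertical = "upper" if y_deg >= 0 else "lower"
        return prefix + vertical + "_" + horizontal

    # Accumulate where, and for how long, the driver's head was directed.
    dwell = defaultdict(float)
    samples = [(2.0, 1.0), (25.0, -3.0), (1.5, 0.5)]  # (x, y) direction per frame
    for x, y in samples:
        dwell[quadrant(x, y)] += 1.0 / 30.0  # assumed 30 Hz tracking rate
    print(dict(dwell))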

Features for eye tracking

For eye tracking in a static environment, we can derive 92 features that contain mean, variance, rate, and maximum values and can be separated into the following groups:

● 62 features related to saccades

● 5 features derived from fixations

● 3 features related to blinks

● 20 wordbook features

● 2 features describing the x- and y-coordinate of the centroid of a blink-frequency histogram

Additionally, for the driving environment we can add 32 novel eye-based features, as listed below. Here, 20 of these features are based on the distribution of the driver's saccades in the four outer quadrants Q1 to Q4, and the remaining 12 features can be seen as an addition to the above-mentioned 92 features.
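Put together, the per-window feature vector for the driving environment is simply the concatenation of both groups, sketched here with placeholder zeros instead of real values:

    import numpy as np

    static_features = np.zeros(92)   # saccade, fixation, blink, wordbook, centroid features
    driving_features = np.zeros(32)  # quadrant-based and additional driving features
    feature_vector = np.concatenate([static_features, driving_features])  # shape (124,)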


Classification

We can classify using an SVM or neural networks. However, the SVM is originally designed for binary classification; to extend it to the multi-class scenario, we can use a One-Against-All multi-class SVM coupled with leave-one-out cross-validation, as sketched below.
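A minimal sketch of this classification scheme, using scikit-learn as an assumed implementation, follows; the data shapes and labels are placeholders for the extracted feature vectors.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X = np.random.rand(60, 124)  # placeholder 124-dimensional feature vectors
    y = np.random.choice(["reading", "phone_call", "observing_traffic"], size=60)

    # One-Against-All multi-class SVM with leave-one-out cross-validation.
    clf = make_pipeline(StandardScaler(), OneVsRestClassifier(SVC(kernel="rbf", C=1.0)))
    scores = cross_val_score(clf, X, y, cv=LeaveOneOut())  # one fold per sample
    print("LOO accuracy:", scores.mean())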

With respect to the features, we have to choose as many training samples as possible in order to cover as many different driver behaviors as possible.
