Journal of Transportation Technologies
Vol. 08, No. 03 (2018), Article ID: 85780, 15 pages
DOI: 10.4236/jtts.2018.83011

Effect of Difference in Form of Driving Support Agent to Driver’s Acceptability

―Driver Agent for Encouraging Safe Driving Behavior (2)

Takahiro Tanaka, Kazuhiro Fujikake, Takashi Yonekawa, Makoto Inagami, Fumiya Kinoshita, Hirofumi Aoki, Hitoshi Kanamori

Institutes of Innovation for Future Society, Nagoya University, Nagoya, Japan

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: April 28, 2018; Accepted: June 30, 2018; Published: July 3, 2018

ABSTRACT

In recent years, the number of traffic accidents caused by elderly drivers has increased in Japan. However, a car is an important mode of transportation for the elderly. Therefore, to ensure safe driving, a system that can assist elderly drivers is required. In this study, we propose a driver-agent system that provides support to elderly drivers during and after driving and encourages them to improve their driving. This paper describes a prototype system based on the analysis of the teaching records of human instructors, and a subjective evaluation, by elderly and non-elderly drivers, of driving support provided by three different agent forms: voice, visual, and robot. The results revealed that the robot form is more noticeable, familiar, and acceptable to both elderly and non-elderly drivers than the other forms.

Keywords:

Driving support, Agent, Elderly

1. Introduction

Recently, the number of traffic accidents caused by elderly drivers has increased in Japan. Although the annual number of traffic fatalities has gradually decreased, the highest number of accidents is caused by drivers in the age group of 65 to 74 years. Previous studies have reported that this age group is more likely to cause accidents than other age groups [1] [2]. Extant studies have indicated that one of the reasons behind the increase in accidents caused by elderly drivers is the impact of aging on cognitive, visual, and physical functions. Previous studies have reported that elderly individuals are unable to focus on appropriate targets or retrieve necessary information owing to changes in these functions [3]. Conversely, an automobile is an important mode of transportation for the elderly, and a lack of this transportation mode decreases their quality of life. Moreover, there is large individual variation in how biological functions change with age. Thus, determining driving capability based on age alone is insufficient, and an appropriate method for evaluating driving capability and providing support in line with individual characteristics must be investigated.

Previous attempts have focused on the use of information display devices (such as small displays, car navigation systems, and head-up displays) and presentation methods based on sound, voice, and vibration [4]. Extant studies have also attempted to improve driver behavior [1] [5] to deal with negative adaptation [6] or the false recognition of sensors. In addition, a few studies have developed communication robots for cars [7] [8], in which agents and robots accompany drivers [9] [10]. However, previous research has indicated that behavioral changes toward safe driving induced by instructions while driving are only temporary.

The goal of our research is to reduce traffic accidents caused by elderly drivers. Our previous research suggests that encouraging self-awareness has the potential to reduce accidents and that driving with a fellow passenger decreases the accident rate. This study proposes a driver agent with the aim of encouraging safe driving behavior by helping drivers recognize their own behavior. Furthermore, the agent incorporates a support model based on the knowledge of driving instructors and has the potential to become an acceptable and ideal passenger. The proposed agent is based on analyses of biological functions and of the instructions given to elderly drivers by driving instructors. In this paper, we describe the agent system and discuss the results of an experiment that evaluated differences among three agent forms: voice, visual display of the driver agent, and robot.

2. Related Work

2.1. Relation Analysis between the Intersection Collision Rate and Biological Functions

To analyze the relationship between the collision rate and the biological functions of drivers, including the elderly, an experiment was conducted to collect driving-behavior data using a driving simulator (DS) while passing through an intersection with a stop sign [11] [12]. Thirty-three male and female drivers aged between 50 and 76 years (average 66 years) who were registered in Dahlia participated in the experiment. It was thus possible to observe the effect of visual information-processing abilities, such as awareness functions, awareness-allocation functions, effective field of view, and horizontal field of view, on the collision rate at the intersection. Hence, this experiment indicated the possibility that the decline of these functions owing to aging is connected to accidents at intersections.

Conversely, an analysis of a survey on driving features (Driving Style Questionnaire [13]) and the collision rate indicated a correlation with the self-recognition of “unstable driving.” Therefore, a correlation analysis was performed after classifying the participants into groups with strong (strong group) and weak (weak group) awareness of instability. The results showed that biological function influenced the collision rate of the weak group significantly more (p < 0.01) than that of the strong group. This suggests that driving behavior may change depending on the strength or weakness of self-recognition. Therefore, we compared the number of safety confirmations prior to crossing an intersection, and the results revealed that the strong group performed more safety confirmations than the weak group. Moreover, the findings indicated minimal bias toward confirming the left or right direction. Thus, it was surmised that the correlation between biological functions and the collision rate is weaker in the strong group because this group compensates for decreased functionality by adopting safer driving behavior. It is not easy to improve biological functions that weaken with aging; however, encouraging self-awareness of an individual’s own driving behavior and driving ability holds the potential for creating changes in driving behavior such that it is less susceptible to the influence of declining biological functions.

2.2. Analysis of Elderly Driver Instruction Records

Recently in Japan, evaluations of and instructions on driving have been provided to the elderly by driving instructors through lectures when they renew their driving licenses. Thus, we collected the instruction records provided to elderly drivers by driving-school instructors, investigated these instructions, and analyzed the impression they made on elderly drivers [14]. We conducted an experiment that involved collecting the instruction records provided to elderly drivers by driving instructors who were seated in the passenger seat (and were responsible for navigation and auxiliary braking) while the drivers drove around residential roads at the periphery of Nagoya University. The subjects were 16 elderly drivers (average age of 77 years), comprising 9 males and 7 females. Four instructors participated in this experiment, and each instructor was responsible for providing instructions to four subjects.

The analysis of drivers who received low driving evaluations from the instructors indicated that instructions on deceleration timing, degree of deceleration, and appropriate safety confirmation are required to decrease the accident rate of elderly drivers. However, the analysis of the impressions that the driving instructions made on the subjects suggests that the instruction frequency and timing could make driving support feel annoying. Furthermore, the analysis suggests that changes in driving behavior caused by instruction/support during driving are temporary. The questionnaire results regarding whether subjects “can accept instructions from driving instructors, spouse, children, grandchildren, friends, car, robot, or none of them” are shown in Figure 1.

Figure 1. Selected targets for receiving support.

The subjects received no explanation regarding the purpose of this study or about agents/robots. The results of the questionnaire indicated that the highest proportion was attributed to the instructor at 63%, followed by robots at 38% and cars at 25%. A chi-square test confirmed a significant difference in the acceptability of the options (χ2 = 19.3, p < 0.01). In particular, 56% of all subjects selected a car or a robot, which was comparable to the level of the instructor. Introspective reports suggested that the subjects who selected robots or cars imagined a more developed form of car navigation system. This result suggests that support from an artifact such as a car or a robot may be more acceptable to the elderly than support from a familiar person without trained driving skills.

3. Proposal of a Driver Agent

3.1. Encouraging Safe Driving Behavior through Self-Recognition

A previous study investigated the instruction records provided to elderly drivers, conducted interviews with instructors, and extracted an instruction model of the instructors [15]. Furthermore, as described above, the effect of instruction is temporary, even though driving behavior improves through instruction. Parker and Stradling [16] suggested that driving behavior is determined by the driving situation and the driving model acquired from the drivers’ own experience; thus, drivers will revert to the same driving behavior if the driver model does not change. Training methods based on coaching theory to change the driver model to a safer one [17] involve repeating a process in which drivers are made aware of their own driving behavior (self-recognition), analyze their driving behavior (self-analysis), and improve their driving behavior (self-improvement). Furthermore, Japanese driving schools provide lectures for beginner drivers and professional driver training in which videos of the drivers’ own driving are shown and safe driving behavior is discussed based on the video. A study reported that the use of an individual’s own driving record is effective in improving the acceptance of instruction from others [1]. However, special equipment is required for recording and reviewing driving behavior; therefore, these types of courses are limited. In this study, with the aim of reducing the accident rate of elderly drivers, we propose a driver-agent system that provides driving support and a reflection method to encourage changes in driving behavior through self-recognition.

3.2. System Overview

The driver agent has two main functions. One is a driving support function that provides attention attraction and suggestions for revising driving operations during driving in a highly acceptable manner. The other is a reflection support function that evaluates the driver’s behavior and, based on this evaluation, provides feedback on good and bad driving scenes in the form of advice comments and short movies extracted from the recorded data. Through these support functions, the driver agent makes the elderly aware of their own driving behavior and encourages them to improve it.

The system configuration of the prototype driver agent is shown in Figure 2. The system acquires the driving operational data from the Controller Area Network (CAN) and the facial direction by applying a facial recognition program (face API, Seeing Machines). Furthermore, the agent acquires the distance between the car and objects such as a stop line and a pedestrian through onboard sensors and GPS/map information. The control module determines the support content based on the instruction model of instructors (Figure 3) developed from the aforementioned data [15] .
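To make the inputs to this control module concrete, the following minimal sketch shows the kind of per-frame record it might consume. The field names and types are our own illustrative assumptions, not the prototype's actual interface.

```python
# Hypothetical per-frame input to the control module (names are ours, not the
# prototype's). Sources follow the description above: the CAN bus, the facial
# recognition program, and onboard sensors with GPS/map information.
from dataclasses import dataclass

@dataclass
class DriverAgentInput:
    speed_kmh: float              # driving operation data from the CAN bus
    accel_pedal: float            # accelerator position (0 to 1), also from CAN
    brake_pedal: float            # brake position (0 to 1), also from CAN
    face_yaw_deg: float           # driver's facial direction from face recognition
    dist_to_stop_line_m: float    # from onboard sensors and GPS/map information
    dist_to_pedestrian_m: float   # large value when no pedestrian is detected
```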

The instruction content comprises the following five main categories: 1) Route navigation, 2) Review, 3) Attention awakening, 4) Driving instruction, and 5) Driving intervention.

Figure 2. Structure of driver-agent system.

Figure 3. Model of driver instructor.

The first category is handled by a car navigation system; the second is performed when the traffic situation is judged acceptable by an instructor. Categories 3 - 5 are based on the grace period remaining until a situation deemed dangerous by the instructor is reached, and they are performed in order while observing the driver’s reaction. The instruction timing is based on the time to collision (TTC), which is calculated from the speed of the vehicle and the distance between the vehicle and the object. Based on an interview survey with six driving instructors, the timing of attention awakening was set to 5 s in advance. The analysis of the impressions of instructions showed that attention attraction gives the impression of being “gentle and kind.” However, the results also indicated that this impression worsened with a high instruction frequency and with direct instructions to change driving operations. Hence, we selected the traffic scenes in which the agent provides support based on reports of accidents caused by elderly drivers [2]. Moreover, the instructors tried different methods of expression, such as simply performing a movement without voicing any instruction content (demonstrating the act of confirmation in advance). From the above discussion, and considering the support frequency, the following target traffic scenarios were selected: intersections with a stop sign, parked-car/pedestrian avoidance, and traffic confluence. Furthermore, the agent provides two types of driving support: attention attraction and suggestions for revising driving operations.
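As a concrete illustration of this timing rule, the short sketch below computes the TTC from the current speed and the distance to the object and derives the distance at which the 5 s attention awakening would be issued. The 30 km/h example speed is an arbitrary placeholder, not a value from the study.

```python
# Worked illustration of the TTC-based timing described above. The 5 s lead
# time comes from the interview survey; the 30 km/h example speed is arbitrary.
def time_to_collision(distance_m: float, speed_mps: float) -> float:
    """TTC = distance to the object / current vehicle speed."""
    return distance_m / speed_mps if speed_mps > 0 else float("inf")

speed_mps = 30.0 / 3.6                  # 30 km/h is roughly 8.3 m/s
trigger_distance_m = 5.0 * speed_mps    # attention awakening issued about 5 s ahead
print(f"awakening issued about {trigger_distance_m:.0f} m before the stop line")
print(time_to_collision(trigger_distance_m, speed_mps))  # prints 5.0
```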

The support content determined by the control module is presented to the driver via the presentation module. The method of presentation in this study uses a small robot located on the dashboard or in the vicinity of the driver’s seat. The use of an anthropomorphic robot promotes an intuitive understanding of the support content and enables natural control of the presentation strength through the concurrent use of voice and movement. In particular, this study uses a small personal conversation robot of the type that has been actively developed by several companies in recent years. The use of such robots in daily life and in cars is expected to improve the acceptability of support and to encourage feelings of affection and trust toward the robot. For attention awakening, a clear expression combining voice and gesture was selected. Conversely, the revising suggestion of driving operation corresponds to an ambient expression based solely on a gesture (suggesting a driving correction). We chose the ambient expression for the revising suggestion because an explicit instruction is a strong expression and may decrease the acceptability of support from the agent, which has a lower social status than a human instructor. Examples of driving support provided by the agent at an intersection with a stop sign are shown in Table 1. The timing of the support was based on the interview survey with driving instructors; fine-tuning this timing is left for future work. The influence of age depends highly on the individual; thus, individually tailored support is required. We aim to realize such individual application in future work by adding support required by each individual (such as confirming the target of attention) based on several kinds of models.

Reflecting on driving behavior using an individual’s own driving records is expected to improve driving behavior. Furthermore, previous studies have reported that acceptability is higher after driving than during driving because the record can be viewed more objectively [18]. The time at which the driver can safely review the record is after driving (for example, at home). To realize low-cost reflection, a way to record driving easily and to view the recorded data at any location is necessary. We are developing a smartphone application to support a replay function after driving. This application includes a drive-recorder function that records images while driving and a function that extracts and presents scenes based on the driving evaluation.
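As an illustration of how such scene extraction might work, the sketch below picks the best- and worst-evaluated scenes from a list of recorded scenes. The record format, scores, and clip names are hypothetical, since the paper does not specify the application's internals.

```python
# Hypothetical sketch of scene extraction for the reflection application:
# pick good and bad driving scenes from recorded, per-scene evaluations.
scenes = [
    {"time_s": 35, "scene": "stop intersection", "score": 2, "clip": "clip_035.mp4"},
    {"time_s": 120, "scene": "pedestrian avoidance", "score": 5, "clip": "clip_120.mp4"},
    {"time_s": 210, "scene": "traffic confluence", "score": 3, "clip": "clip_210.mp4"},
]

ranked = sorted(scenes, key=lambda s: s["score"])   # low score = scene to reflect on
bad_scene, good_scene = ranked[0], ranked[-1]
print("scene to review:", bad_scene["scene"], bad_scene["clip"])
print("good example:", good_scene["scene"], good_scene["clip"])
```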

4. Experiment

Three forms can be assumed for driving support agents: a voice agent such as a car navigation system, a visual agent displayed on an LCD monitor near the dashboard or on a smartphone, and a robot placed near the dashboard. However, no study has analyzed the effect of driving support from these different agent forms on drivers, in particular on the elderly.

Table 1. Example of driving supports by the agent at an intersection with a stop sign.

In this study, we conducted an experiment using a driving simulator, wherein three different agent forms provided the same driving support to elderly and non-elderly drivers, and analyzed the differences in their impact on the driver via subjective evaluation.

4.1. Method

In this experiment, the subjects drove for approximately five minutes on an experimental course simulating a residential road, using a driving simulator with five LCD monitors, and received driving support from our agent. The course involved traffic situations corresponding to a high accident rate among elderly drivers, including the following: an intersection with a stop sign (going straight and turning right), a situation requiring avoidance of a pedestrian or a parked car, and a traffic confluence. Thirty-three elderly individuals participated in this experiment (three subjects withdrew because of DS sickness), and their average age was 72.5 years. Thirty non-elderly individuals also participated, with an average age of 50.2 years.

4.2. Conditions

In this experiment, we defined three experimental conditions with respect to agent form: a voice-only agent, a visual agent displayed on an LCD monitor set in front of the driver, and a robot agent (Sota, Vstone Co., Ltd.) located in front of the driver. We configured the visual agent to be of almost the same size and location as the robot agent. Figure 4 shows the actual experimental environment for each condition.

As mentioned in Section 3.2, we defined two kinds of driving support from the agents: attention awakening and a revision suggestion for the driving operation. The attention awakening support involved approach notifications regarding the intersection with a stop sign, the pedestrian or parked car, and the traffic confluence. In contrast, the revision suggestion involved suggestions of speed reduction and of the direction of safety confirmation at intersections with a stop sign or at a traffic confluence, and suggestions of speed reduction and of the avoidance direction in situations in which the driver was required to avoid a pedestrian or a parked car. Table 2 shows the conditions of agent support in this experiment.


Figure 4. Experimental condition. (a) Voice condition; (b) Visual condition; (c) Robot condition.

Table 2. Conditions of driving support in the experiment.

The attention awakening is an approach notification to drivers, and its condition is controlled based on the TTC. There are several kinds of revising suggestions depending on the situation. At an intersection with a stop sign, the agent suggests a speed reduction based on the TTC and the car speed, as shown in Table 2. Moreover, the agent suggests the direction of safety confirmation based on a face recognition program using a driver camera. In a pedestrian or parked-car avoidance situation, the agent provides two kinds of suggestions: a speed reduction based on the TTC and the car speed, and an avoidance direction based on the TTC and the lateral clearance between the driver’s car and the object.
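The following sketch illustrates one way the support conditions summarized in Table 2 could be combined. It is not the authors' implementation; the numeric thresholds (TTC bands, speed, and clearance) are hypothetical placeholders, since the paper only states that the conditions are based on the TTC, the car speed, the facial direction, and the clearance to the object.

```python
# Illustrative decision logic for the experiment's driving support (not the
# authors' implementation; thresholds are hypothetical placeholders).
def select_support(scene: str, ttc_s: float, speed_kmh: float,
                   face_toward_hazard: bool, clearance_m: float = float("inf")) -> str:
    # Attention awakening: approach notification when the object is about 5 s ahead.
    if 4.0 < ttc_s <= 5.0:
        return "attention_awakening"
    # Revising suggestions: issued closer to the object if the driver has not responded.
    if ttc_s <= 4.0:
        if speed_kmh > 20.0:                                  # hypothetical speed criterion
            return "suggest_speed_reduction"
        if scene in ("stop_intersection", "confluence") and not face_toward_hazard:
            return "suggest_confirmation_direction"           # from driver-camera face recognition
        if scene == "avoidance" and clearance_m < 1.0:        # hypothetical clearance criterion
            return "suggest_avoidance_direction"
    return "no_support"

# Example: approaching a stop-sign intersection too fast, 3 s from the stop line.
print(select_support("stop_intersection", ttc_s=3.0, speed_kmh=35.0, face_toward_hazard=False))
```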

The voice agent provided all support by voice only. In contrast, the visual and robot agents provided the attention awakening by both voice and gesture and the revision suggestion by gesture only. The timing of the support and the voice used were the same for all conditions. The gesture for attention awakening involved pointing ahead with the left arm while turning slightly toward the driver. The gesture for the revising suggestion regarding speed reduction was moving the right arm up and down twice. The gesture for the revising suggestion regarding the avoidance direction was moving both arms from left to right twice. Moreover, the gesture for the revising suggestion regarding the direction of safety confirmation was turning toward the specified direction.

4.3. Hypotheses

Although the clarity and usefulness of the support are important factors for driving support, it is also important that the support does not disrupt driving and is provided in a style acceptable to the driver.

We expected that the voice agent would be easier to notice than the other two agents because its information is always provided by voice. However, being notified persistently by a voice could annoy the driver, which could make the acceptability of this condition lower than that of the other two conditions. The support from the visual and robot agents employs both voice and gesture and is therefore expected to be more intuitive and understandable than that of the voice agent. Furthermore, if the agent is located in the driver’s peripheral visual field, the distraction caused by the agent should be small enough for its advantages to remain. The robot has a stronger presence than the visual agent [19]. Moreover, the sound of the robot’s motion makes the driver intuitively aware that support is being offered. For these reasons, we further expected that the acceptability of the robot would be higher than that of the visual agent. Comparing the elderly and non-elderly, the acceptability of the robot agent was expected to be higher for the non-elderly than for the elderly because of differences in attitude toward new technology.

4.4. Procedure

In this experiment, ten elderly subjects and ten non-elderly subjects participated in each condition. The procedure was as follows: the subject first drove on a practice course to become proficient with the driving simulator. The subject then drove on the experimental course once for practice without the agent, three times under the specified condition, and finally once more without the agent.

The data collected in the experiment were the log data from the driving simulator (car speed, accelerator and brake pedal operation, car position, and distance between the car and the intersection, pedestrian, or parked car) and the driver’s gaze behavior data obtained from a gaze recognition device (Tobii X2-30, Tobii AB). Moreover, the subjects answered a subjective questionnaire, specifically the “driving evaluation,” after each trial.

5. Results

5.1. Agent Form Evaluation by the Elderly

Figure 5 shows the average driving evaluation for the third trial. The subjects assigned subjective scores on a 7-point scale to the nine items shown in Figure 5. Figure 5(a) shows the results for elderly drivers, and Figure 5(b) shows the results for non-elderly drivers. We conducted a one-way ANOVA with agent form as the factor for each of the nine items. If the main effect was significant, we also conducted a multiple comparison. The results showed significant differences for several items, as shown in the figure. For nearly all items in both age groups, the evaluation of the robot agent was better than that of the visual or voice-only conditions. In particular, for the elderly, the voice-only condition was evaluated the worst, whereas for the non-elderly, the visual condition was evaluated the worst. Over the three trials, the evaluations of the robot and visual agents improved, whereas the evaluation of the voice-only agent remained nearly constant.
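For reference, a small sketch of this per-item analysis is shown below (one-way ANOVA over the three agent forms, followed by a multiple comparison when the main effect is significant). The scores are made-up placeholders, not the study data, and Tukey's HSD is used here as one common multiple-comparison choice; the paper does not state which method was applied.

```python
# Sketch of the per-item analysis: one-way ANOVA over the three agent forms,
# followed by a multiple comparison if the main effect is significant.
# All scores below are hypothetical placeholders, not the experimental data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

voice  = np.array([4, 3, 5, 4, 3, 4, 5, 3, 4, 4])   # 7-point scores for one item
visual = np.array([5, 4, 5, 5, 4, 5, 6, 4, 5, 5])
robot  = np.array([6, 5, 6, 6, 5, 6, 7, 5, 6, 6])

f_stat, p_value = f_oneway(voice, visual, robot)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:  # main effect significant, so run a multiple comparison
    scores = np.concatenate([voice, visual, robot])
    groups = ["voice"] * 10 + ["visual"] * 10 + ["robot"] * 10
    print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```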

5.2. Agent Form and Gaze Behavior

Initially, we were concerned that the agent being located in front of the driver might attract the driver’s attention and cause annoyance.


Figure 5. Results of average driving evaluation on third trial. (a) Results of average driving evaluation by elderly drivers; (b) Results of average driving evaluation by non-elderly drivers.

However, the results contradicted this concern. Therefore, we analyzed the relationship between agent form and gaze behavior during driving to investigate the correlation between noticeability and annoyance.

We analyzed the gaze behavior of 24 elderly subjects (7 from the voice condition, 9 from the visual condition, and 7 from the robot condition) and 26 non-elderly subjects (8 from the voice condition, 9 from the visual condition, and 9 from the robot condition) whose gazes were recognized by the device during driving. Figure 6 shows the distribution maps of fixation points for the three conditions; Figure 6(a) shows the results for elderly drivers and Figure 6(b) shows those for non-elderly drivers. The results for the elderly revealed that fixation points while driving were most dispersed in the voice condition, which has no visible agent, and most converged in the robot condition. We believe that one of the reasons for the dispersion of fixation points in the voice condition is that the drivers could not understand the support content smoothly and exhibited gaze behavior corresponding to searching for the objects mentioned in the support. In contrast, for the non-elderly, the fixation points were converged in all three conditions, but more fixation points were distributed on the agent in the visual condition than in the robot condition.

To compare the frequency of gazing at the agent between the visual and robot conditions, we defined the area in which the agent was displayed (x < −80, y < −80 in the coordinates of the camera image) as the agent area and counted the number of fixation points within it. Figure 7 shows the time ratio of gazing within the agent area during driving for elderly and non-elderly drivers.


Figure 6. Distribution map of fixation points during driving in three conditions. (a) Distribution map of fixation points of elderly drivers. (a-1) Voice condition, (a-2) Visual condition, (a-3) Robot condition; (b) Distribution map of fixation points of non-elderly drivers. (b-1) Voice condition, (b-2) Visual condition, (b-3) Robot condition.

Figure 7. Result of time ratio of gazing into the agent area.

For the elderly, the gaze time ratio in the agent area was 0.026 in the visual condition but only 0.006 in the robot condition. For the non-elderly, the gaze time ratio in the agent area was 0.021 in the visual condition and 0.013 in the robot condition. Moreover, in both age groups, the gaze time ratio to the robot agent decreased with each drive. In contrast, the ratio for the visual condition among the non-elderly did not change during the experiment.
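A minimal sketch of this gaze-ratio computation is shown below. The fixation data format and the example values are hypothetical assumptions, while the agent-area definition (x < −80, y < −80) follows the text.

```python
# Minimal sketch of the agent-area gaze time ratio. The fixation data format
# is hypothetical; the agent area (x < -80, y < -80) follows the definition above.
import numpy as np

def agent_area_time_ratio(fix_x, fix_y, fix_dur_s, drive_dur_s):
    """Fraction of driving time spent fixating inside the agent area."""
    fix_x, fix_y = np.asarray(fix_x), np.asarray(fix_y)
    fix_dur_s = np.asarray(fix_dur_s)
    in_area = (fix_x < -80) & (fix_y < -80)   # camera-image coordinates
    return fix_dur_s[in_area].sum() / drive_dur_s

# Hypothetical example: three fixations, one inside the agent area, in a 300 s drive.
print(agent_area_time_ratio([-90, 10, 40], [-95, 0, 20], [0.3, 0.5, 0.4], 300.0))
```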

6. Conclusions

In this paper, we proposed the use of a driver agent that encourages safe driving behavior for elderly and non-elderly drivers. To validate the proposal, we conducted an experiment wherein three different forms of agents (voice-only, visual, and robot) provided the same driving support to the driver, and we analyzed the differences in their impact via subjective evaluation and gaze behavior.

The results of the experiment revealed that, compared to the other conditions, the robot was more noticeable, familiar, and acceptable to both the elderly and the non-elderly. Moreover, the reliability and usability, which are important to the agent function, were evaluated as good. In particular, attention awakening using only voice was not adequate for providing information to the elderly; the result implies that elderly drivers may not be able to sufficiently understand the content of the support when a voice occurs suddenly. Motions that point in a direction, such as pointing ahead, would be more effective than voice in directing attention. Furthermore, research on interruptions during work [20] reports that advance notice of an interruption, specifically the “interruption lag” occurring a few seconds before the actual interruption, reduces the worker’s reaction time to it. The driving support provided in the visual and robot conditions first exhibits a motion and then provides the voice. In particular, the robot produces a motion sound and then a gestural motion when offering support; this feature might serve as advance notice through a modality other than vision and help the elderly turn their attention to the support smoothly. Moreover, this feature may also reduce the possibility of the driver being annoyed by the robot’s support. For the non-elderly, the evaluation of the voice-only condition was similar to that of the robot condition, whereas the evaluation of the visual condition was lower than that of the voice condition. Understanding support provided by voice is not difficult for the non-elderly. Furthermore, the robot’s motion and sound provide a stronger advance notice than the visual condition. As a reason for their evaluation, these subjects reported that they had to gaze at the visual agent to recognize whether driving support was occurring. The gaze time ratio to the visual agent for the non-elderly was higher than that to the robot agent, which implies that this ratio affected the evaluation of annoyance.

The frequency of gazing at the robot agent while driving was lower than that of gazing at the visual agent. In their introspective reports, the subjects noticed the support offered by the robot’s motion sound and motion. This result suggests one of the reasons why the noticeability of the robot condition was better than that of the visual condition. Moreover, the gaze time in both conditions was very short; therefore, this result implies that the presence of an agent does not necessarily lead to a large driving disturbance. For the elderly, fixation points during driving were most dispersed in the voice condition and most converged in the robot condition. It has been reported that the accident rate can be greatly reduced if an elderly driver drives with a fellow passenger [21], a phenomenon known as the fellow passenger effect. Existing research suggests that the presence of a fellow passenger has a positive effect on the driver’s attention, leading to consciously safe driving [22]. The results of our experiment revealed that the dispersion of fixation points while driving was suppressed when the form of the agent was presented more clearly. Therefore, this result implies that the robot agent may generate a fellow passenger effect because elderly drivers tend to regard the robot as a fellow passenger.

In this paper, we focused first on subjective evaluation because we consider acceptability to be the first barrier to continuous use of the agent for encouraging behavioral change. This experiment was conducted with only one kind of robot, and it is possible that the noticeability of the robot is affected by its size. Deliberation on gestures appropriate to the support content and on the effect of using smaller robots is needed. We also collected objective data (speed, car position, etc.); analyzing these data is future work. Furthermore, the biological functions of the elderly drivers, such as cognitive functions, may affect the given evaluations. We also collected the biological features of the subjects in this work; therefore, the analysis of the relationship between these functions and the evaluations is also future work.

Acknowledgements

This research is in part supported by the Center of Innovation Program (Nagoya University COI; Mobility Innovation Center) from Japan Science and Technology Agency. We also thank T. Sato and Y. Kobayashi for developing the smartphone application.

Cite this paper

Tanaka, T., Fujikake, K., Yonekawa, T., Inagami, M., Kinoshita, F., Aoki, H. and Kanamori, H. (2018) Effect of Difference in Form of Driving Support Agent to Driver’s Acceptability. Journal of Transportation Technologies, 8, 194-208. https://doi.org/10.4236/jtts.2018.83011

References

1. Kumeta, K. (2015) Study on Analytical Method of Older Driver’s Occur Frequently Accident Types. Journal of Society of Automotive Engineers of Japan, 69, 90-95.

2. Institute for Traffic Accident Research and Data Analysis (2007) ITARDA Information, 68.

3. Kitajima, M., et al. (2008) Usability of Guide Signs at Railway Stations for Elderly Passengers—Focusing on Planning, Attention, and Working Memory. Japan Ergonomics Society, 44, 131-143.

4. Kanemaru, T. and Kuwamoto, H. (2015) The Transition of HMI Technology and the Approach for In-Vehicle Infotainment System. Journal of Society of Automotive Engineers of Japan, 69, 39-42.

5. Hosokawa, T., et al. (2016) Evaluation of an Assistance System for Elderly Drivers When Approaching Stop Intersections. JARI Research Journal, JRJ20161102.

6. Haga, S. (2012) Risk Management and Accident Avoidance. Traffic Safety Education, 551, 6-16.

7. DENSO (2013) Communication Robot Hana. http://www.globaldenso.com/en/newsreleases/events/tokyomotorshow/2013/booth/

8. NISSAN (2005) Concept Car PIVO2. http://www.nissan-global.com/EN/PIVO2/index.html

9. Katagami, D., Hongou, M. and Tanaka, T. (2014) Agent Design for the Construction of New Relationship between a Car and a Driver. 30th Fuzzy System Symposium, Kouchi, 2 September 2014, TA2-3.

10. Nakagawa, Y., et al. (2014) Driving Assistance with Conversation Robot for Elderly Drivers. Universal Access in Human-Computer Interaction, LNCS, 8515, 750-761.

11. Aoki, H., et al. (2015) Study on Driver Characteristics for Delaying Driving Cessation (1)-Database Construction of Older Drivers’ Human, Aging, Driving Characteristics. JSAE Annual Congress (Spring), 45-15S, 1091-1094.

12. Tanaka, T., et al. (2017) Analysis of Relationship between Driving Behavior and Bio-Function of Drivers Including Elderly at Intersection with a Stop Sign: Study on Driver Characteristics for Delaying Driving Cessation. Journal of Society of Automotive Engineers of Japan, 48, 147-153.

13. Driving Style Questionnaire (2004) Research Institute of Human Engineering for Quality Life.

14. Tanaka, T., et al. (2018) Study on Driver Agent Based on Analysis of Driving Instruction Data-Driver Agent for Encouraging Safe Driving Behavior (1). IEICE Transactions on Information and Systems, E101, 1401-1409. https://doi.org/10.1587/transinf.2017EDP7203

15. Tanaka, T., et al. (2015) Elderly Driver Supporting Agent: Analysis of Instructor’s Teaching Model. 31st Fuzzy System Symposium, Tokyo, 3 September 2015, TA4-4.

16. Parker, D. and Stradling, S. (2001) Influencing Driver Attitudes and Behavior. Road Safety Research Report, 17.

17. Edwards, I. (2014) Can Drivers Really Teach Themselves? SMA Support Inc.

18. Takemoto, M., Higuchi, K. and Tanaka, Y. (2012) Multiple Effects of Real-Time Interactive Driver Assistance and Post-Driving Evaluation and Feedback Systems for Safety Checks at Intersection with Stop-Sign. Journal of Society of Automotive Engineers of Japan, 43, 605-610.

19. Kanda, T. (2011) Research Trends towards Social Robots in HRI. Journal of the Robotics Society of Japan, 29, 25.

20. Salvucci, D. and Taatgen, N.A. (2008) Threaded Cognition: An Integrated Theory of Concurrent Multitasking. Psychological Review, 115, 101-130. https://doi.org/10.1037/0033-295X.115.1.101

21. Institute for Traffic Accident Research and Data Analysis (2008) ITARDA Information, 77.

22. Nishida, Y. (2008) Effect of a Following Passenger. Gekkan Koutu, 39, 56-61.