The opinions expressed in the studies are those of the consultant and do not necessarily represent the position of the Commission.

eSafety - evaluating measures

Systematic evaluations

There have been various attempts to record and classify eSafety measures by their impacts, e.g. the studies included in the eSafety effects database [25][1][45]. However, various problems need to be addressed in the assessment of both existing and new systems. No systematic methods currently exist for evaluating new systems, and systems still under development are not yet mature. For most new systems, the eventual casualty reduction cannot be predicted on the basis of experimental studies, field trials or simulators [47].

HMI issues

Although the European Statement of Principles [12] was updated in 2006, a test regime is still needed to provide objective assessment and guidance. Such a test regime needs to be defined that:

  • Is technology-independent, i.e. does not depend on a particular technology being employed in a system design
  • Uses safety-related criteria
  • Is cost effective and easy to use
  • Is appropriate for a wide range of HMI
  • Is validated through real-world testing

At the same time, many driver assistance technologies are vehicle specific. That is, they operate only in the vehicle in which they are fitted, with no knowledge of the level of assistance available to the surrounding vehicles.

In a market-driven implementation of new vehicle technologies, nomadic devices are likely to be freely available for purchase without having been tried and tested in every vehicle in the fleet. Retrofitting such devices could be problematic, since the response of the vehicle to the technology in question cannot be predicted. A clear policy is needed for handling nomadic devices, so that it is not simply assumed that any single device will offer the same benefit across all vehicle types, makes and models, will not interfere with vehicle systems, and will not add to the load on the driver.

A clear framework is urgently needed to identify, evaluate, deliver and monitor technologies which improve safety, and to identify and discontinue work on those which cost lives. Before measures are described as eSafety measures, they need to be demonstrably effective in their safety performance, and this needs to be established before they are introduced widely.

The proposals shown below have been made to identify the key needs of an assessment framework and evaluation tools (VSRC, 2008, unpublished) [49].

Assessing the effectiveness of existing eSafety systems (VSRC, 2008 unpublished)

  • Key questions:
    • Has the system introduced any new problems into the driving task?
    • How many crashes and deaths are expected to be avoided using the system?
  1. A prerequisite for monitoring is being able to identify easily which systems are standard and which are optional on each vehicle model. There is currently no central source of this information, and it needs to be collated into a comprehensive list of the systems available in the current vehicle fleet, together with their functionality. A central database listing the details of active safety systems by vehicle make and model, by year of manufacture or, if necessary, by Vehicle Identification Number (VIN), would be a valuable tool. The method must take account of systems that have been requested as 'optional extras' as well as those that are standard to a make, model and variant.
  2. There is a need to examine the available evidence relating to the effectiveness of currently available technology. This would involve consultations with suppliers and reviews of statistically robust studies.
  3. The evaluation of existing systems in the fleet is conducted by considering the crash involvement rates of cars with and without the system under evaluation. Since this requires sufficient fleet penetration before a significant evaluation can be made, multi-centre approaches may be necessary to bring data together from a wide international, geographic area to provide sufficient data for analysis.
  4. Exposure data on the prevalence of the comparison vehicles on the road is also required for robust accident involvement rates to be calculated, and a methodology for collecting it would need to be established.
  5. Using crash data and risk of crash involvement for post-evaluation of a new technology presents an immediate problem, since more than one vehicle safety measure may be contributing to the outcome.
  6. Experimental work in the form of Field Operational Trials could go some way towards predicting the likely HMI effects of new technologies. Such trials allow driver adaptation to be monitored over an extended period of driving, during which the driver normally comes to ignore the presence of the monitoring equipment. Simulator studies could be used to generate hypotheses about changes in driver behaviour that could then be validated and quantified in a larger Field Operational Trial.
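
The with/without comparison described in item 3 can be sketched as a simple odds-ratio calculation on crash and exposure counts. This is only a minimal illustration of the idea: the function, its name, and the example counts are assumptions for this sketch, not figures from any cited study, and a real evaluation would need confounder control and confidence intervals.

```python
# Minimal sketch: comparing crash involvement of vehicles with and
# without a safety system, given crash counts and exposure counts
# (e.g. fleet prevalence) for each group. Illustrative numbers only.

def odds_ratio(crashes_with, exposure_with, crashes_without, exposure_without):
    """Crash involvement rate of equipped vehicles relative to unequipped ones.

    A value below 1.0 suggests lower crash involvement for equipped vehicles.
    """
    rate_with = crashes_with / exposure_with
    rate_without = crashes_without / exposure_without
    return rate_with / rate_without

# Hypothetical counts, e.g. pooled from several national crash databases:
estimate = odds_ratio(crashes_with=120, exposure_with=10_000,
                      crashes_without=200, exposure_without=9_500)
print(f"crash involvement ratio: {estimate:.2f}")
```

Pooling such counts across countries, as the multi-centre approach in item 3 suggests, simply means summing the crash and exposure counts from each centre before computing the ratio, provided the definitions are harmonised.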

Predicting effects of new and proposed eSafety systems (VSRC, 2008 unpublished)

  • Key questions:
    • Will the system introduce any new problems into the driving task?
    • How many crashes and deaths are expected to be avoided using the system?
    • Can the system be expected to operate as specified under all driving conditions?
  • A structured approach is needed to predict the benefits or disbenefits offered by new systems.
    1. System operation: An assessment should be made of the functioning of the technology; for example, a collision avoidance system should demonstrate the capability to avoid collisions. Sometimes these systems will be simple and only one test or field trial may be necessary, but more complex systems may need to have their performance confirmed under several test conditions. In general, this assessment is likely to have been conducted by the system developer as part of the engineering process, and sufficient information will probably be available.
    2. Introduction of new crash risks: The use of the system in the car by the driver must not cause additional risks e.g. through distraction or conflicting information being presented to the driver (HMI issues).
    3. Driver adaptation: The issue of driver adaptation needs to be explored in the context of the system specification and functionality. For example, will the introduction of the technology promote an element of 'risk taking' or induce complacency within the driving task? A further issue for consideration is 'information overload'.
    4. Predicting crash and fatality reductions: Prediction of casualty reduction will incorporate the following steps.
      • Accident analysis to estimate the total number of crashes that take place in conditions relevant to the technology. A system that prevents crashes in situations that only occur rarely will not have a big impact on casualty numbers, for example an icy road detector will have only a small value in many Mediterranean countries. More detailed accident data will support more accurate assessments;
      • Development work and field trials to evaluate the likely system effectiveness in each of these situations. A system may have limited functionality, perhaps preventing a high proportion of crashes only under ideal conditions that are relatively rare;
      • Estimation of the consequences of any driver adaptation. Drivers may adopt riskier driving behaviour in vehicles equipped with safety technologies, so the overall casualty reduction may be less than anticipated.
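
The three prediction steps above can be combined into one back-of-the-envelope calculation: relevant crashes from accident analysis, system effectiveness from trials, and a discount for behavioural adaptation. All numbers and the function itself are illustrative assumptions, not a validated prediction method.

```python
# Minimal sketch of the casualty-reduction prediction steps above.
# Every figure below is an illustrative assumption.

def predicted_crashes_avoided(target_crashes, effectiveness, adaptation_loss):
    """Crashes avoided = crashes in relevant conditions
                         x proportion prevented in those conditions
                         x discount for riskier driver behaviour."""
    return target_crashes * effectiveness * (1.0 - adaptation_loss)

avoided = predicted_crashes_avoided(
    target_crashes=5_000,   # step 1: crashes in conditions the system addresses
    effectiveness=0.4,      # step 2: proportion prevented, from field trials
    adaptation_loss=0.1,    # step 3: benefit lost to driver adaptation
)
print(f"predicted crashes avoided per year: {avoided:.0f}")
```

The structure makes the point in the text explicit: a system addressing rare conditions (small `target_crashes`, as with an icy-road detector in Mediterranean countries) yields a small absolute benefit however effective it is, and adaptation only ever reduces the estimate.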

A simple checklist has been proposed to check the safety performance of systems.

  • Checklist for System Validity[47]
    1. Does the system address frequent or infrequent accident causation factors?
    2. Does the system reduce these by a large or small amount?
    3. Does it address all crashes/injury crashes/fatal crashes?
    4. How do drivers change their behaviour?
      • Beneficially?
      • Adversely?
    5. Are there any introduced risks?
    6. What are the benefits compared to the costs?
    7. Where's the evidence?
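
To show how such a checklist could be applied consistently across candidate systems, it can be expressed as structured data. The field names below are my own paraphrases of the checklist items, and the all-items-must-hold screening rule is an assumption for this sketch, not part of the published checklist.

```python
# Hypothetical sketch: the system-validity checklist above as a record,
# with a crude screen that requires every item to hold.

from dataclasses import dataclass

@dataclass
class SystemValidityCheck:
    addresses_frequent_factors: bool  # 1. frequent vs. infrequent causation factors
    large_reduction: bool             # 2. large vs. small reduction
    covers_fatal_crashes: bool        # 3. all / injury / fatal crashes
    beneficial_adaptation: bool       # 4. behaviour change is beneficial
    no_introduced_risks: bool         # 5. no new risks introduced
    benefits_exceed_costs: bool       # 6. benefits outweigh costs
    evidence_available: bool          # 7. evidence exists

    def passes(self) -> bool:
        """Screen: every checklist item must be satisfied."""
        return all(vars(self).values())

candidate = SystemValidityCheck(True, True, True, True, True, True, False)
print(candidate.passes())  # fails item 7: no supporting evidence
```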

Evaluation tools

  • Multi-centre studies are a powerful tool
  • On Board "black boxes" should be used
  • Powerful statistical techniques should be applied
  • Screening studies using mass data should be used more extensively
  • Getting rid of strange statements such as "you cannot evaluate crashes that have not occurred" and the like

Tingvall, SafetyNet Conference, Prague 2006