Virtual Testing (Part 1)

Cansin Demir & Rama Nanjundaiah

In this article we discuss the importance of virtual testing for autonomous driving and highlight the process steps towards a transparent validation concept.

AV Simulation

In virtual environments, the autonomous vehicle can be given the "drive of its life": packing more action into a virtual mile than you could recreate in thousands of physical real-world test miles. In this article we discuss the coverage-driven verification process in virtual environments and highlight the need for diverse data sets and accurate behavior models.

“Self-driving Uber car that hit and killed woman did not recognize that pedestrians jaywalk”

The automated car lacked "the capability to classify an object as a pedestrian - unless that object was near a crosswalk," an NTSB report said (Source).


Developing autonomous vehicles (AVs) leads to a paradigm shift from traditionally deterministic development processes to more complex, less predictable ones (more sensors, sensor fusion, several ML applications). The safety requirement, however, remains carved in stone: any error could have fatal consequences and erode consumer trust.

Demonstrating the safe function of AVs requires training, validating, and testing algorithms and sensors with an enormous volume of data. Given the number of diverse situations AVs can encounter in the real world, the number of scenarios needed for validation is enormous, and it is widely agreed in the industry that a large part of these scenarios cannot be collected from real-world driving or created manually.

Simulations (and scenarios) are at the heart of a development process that enables testing of autonomous driving functions for millions of scenarios at scale, beyond what we could experience on the test track or the road. Advanced physical and mathematical models, combined with falling computational costs, are enabling developers to stress-test automated driving functions at practically acceptable cost and accuracy.

In the following sections, we will dive deeper into the process from tools to running the simulations and extracting insights. 

The simulation process (generic insights)

Before simulation tools can be leveraged to test algorithms for millions of scenarios, it is essential to establish a robust simulation tool chain. This is a rather complex process requiring seamless interplay between simulation tools and solution providers specialized in their own offerings as shown in the simplified representation below. 

Initiatives such as ASAM and research projects like Pegasus promote standards for data exchange and interfacing, creating a common understanding of quality and enabling cross-collaboration, reuse, and access between the tools.

The coverage-driven verification process

Once the simulation tool chain is set up and the interfaces between its tools are well established, it can be leveraged for coverage-driven verification.

The figure below illustrates the sequential flow of a coverage driven verification process:

Figure: coverage-driven verification


Major steps include:   

(1) First, acquire scenarios from various sources. Typically scenarios come from real-world driving, NCAP requirements, manual specification, and synthetic environments.

(2) Next, parameterize the scenarios and generate variations of them as test scenarios. Several techniques are available for sampling and covering the design space of the parameterized variables. Massive simulations are then run to test the driving functions against the generated scenarios; other modules (such as sensor models) are coupled in depending on the intended test.

(3) The final step is the evaluation of the results to assess coverage and identify gaps, i.e., critical scenarios, on which a second run is conducted.
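Step (2) can be sketched in a few lines of Python. The parameter names and value ranges below are purely illustrative assumptions for a hypothetical cut-in scenario, not any tool's actual scenario schema:

```python
import itertools
import random

# Hypothetical parameter ranges for a cut-in scenario (illustrative values only).
PARAMETER_SPACE = {
    "ego_speed_kmh": [30, 50, 80, 120],
    "cutin_gap_m": [5, 10, 20, 40],
    "cutin_speed_delta_kmh": [-20, -10, 0, 10],
    "road_friction": [0.4, 0.7, 1.0],
}

def grid_variations(space):
    """Exhaustive grid over the parameter space: every combination becomes
    one concrete test scenario."""
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

def random_variations(space, n, seed=0):
    """Random sampling of the design space, useful when the full grid is
    too large to simulate exhaustively."""
    rng = random.Random(seed)
    keys = list(space)
    for _ in range(n):
        yield {k: rng.choice(space[k]) for k in keys}

tests = list(grid_variations(PARAMETER_SPACE))
print(len(tests))  # 4 * 4 * 4 * 3 = 192 concrete test scenarios
```

Each generated dictionary is one concrete scenario to hand to the simulator; more sophisticated sampling schemes (e.g., Latin hypercube or importance sampling) slot into the same interface.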

It is clear that the success of coverage driven verification depends heavily on the base scenarios - a large set of diverse scenarios is necessary for this purpose.  

As previously mentioned, one established process for accessing scenarios is to record real-world scenarios, which later serve as the base for virtual scenarios.

Hypothesis (claim): “Real test miles are not a scalable approach to verifying autonomous cars!” Let’s set aside the economic perspective of collecting scenarios by driving millions of miles with a dedicated fleet and focus on the technical one. During test drives, the fleets collect terabytes to petabytes of data. Leveraging the collected data requires massive amounts of data labeling and scenario identification, which creates new challenges of its own. Why? There is no scientific approach that guarantees high label quality relative to ground truth; labeling takes time and represents a bottleneck for available data. On top of that, data quality, speed, and cost are key.

By using a mix of synthetic and real data, as we do at Phantasma, we help our customers answer the following questions:

(1) How can I generate relevant scenarios to increase coverage?

(2) How can I automatically identify critical variables, metrics, and KPIs?

(3) How can I increase the parameter variation space?

As experts in simulation sciences, we at Phantasma acknowledge that conventional simulation and modeling approaches lack realism. To solve this, we have developed a unique technology that enables us to generate realistic scenarios at scale from simulations.

First, the parameterized scenarios are defined; afterwards, the relevant KPIs are extracted, which can then be used for the coverage-driven verification of autonomous driving functions.
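To illustrate what KPI extraction can look like, the sketch below computes a minimum time-to-collision (TTC) over one simulated run, a common safety KPI. The function names and trajectory samples are our own illustrative assumptions, not a specific tool's API:

```python
def time_to_collision(gap_m, ego_speed_ms, lead_speed_ms):
    """Simple longitudinal time-to-collision in seconds.
    Returns infinity when the ego is not closing the gap."""
    closing_speed = ego_speed_ms - lead_speed_ms
    if closing_speed <= 0:
        return float("inf")
    return gap_m / closing_speed

def min_ttc_over_run(samples):
    """Minimum TTC over one simulation run: the KPI recorded per scenario."""
    return min(time_to_collision(g, e, l) for g, e, l in samples)

# One simulated run: (gap_m, ego_speed_ms, lead_speed_ms) at each time step.
run = [(40.0, 20.0, 15.0), (30.0, 20.0, 15.0), (25.0, 18.0, 15.0)]
print(min_ttc_over_run(run))  # 30 / 5 = 6.0 seconds
```

In practice such KPIs are evaluated per scenario variation and compared against pass/fail thresholds to flag critical scenarios.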

An effective verification plan to ensure safety of such functions should enable: 

(1) Testing millions of scenarios at scale

(2) Ensuring qualitatively and quantitatively that these scenarios cover the wide range of possible cases in the ODD (Operational Design Domain), i.e., measuring the completeness of testing

(3) Exposing critical gaps (which can be addressed during the R&D stage)
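Points (2) and (3) can be made concrete by binning the parameter grid and checking which combinations were actually exercised. This is a deliberately simplified sketch with hypothetical parameters; real ODD coverage metrics are far richer:

```python
import itertools

# Hypothetical discrete parameter grid (illustrative, not a real ODD model).
SPACE = {
    "ego_speed_kmh": (30, 50, 80),
    "weather": ("dry", "rain"),
}

def coverage(executed_tests, space):
    """Coverage ratio = exercised grid points / total grid points,
    plus the combinations never hit (the coverage gaps)."""
    keys = list(space)
    all_points = set(itertools.product(*(space[k] for k in keys)))
    hit = {tuple(t[k] for k in keys) for t in executed_tests}
    gaps = all_points - hit
    return len(hit) / len(all_points), sorted(gaps)

executed = [
    {"ego_speed_kmh": 30, "weather": "dry"},
    {"ego_speed_kmh": 50, "weather": "dry"},
    {"ego_speed_kmh": 30, "weather": "rain"},
]
ratio, gaps = coverage(executed, SPACE)
print(ratio)  # 3 of 6 grid points covered -> 0.5
print(gaps)   # uncovered combinations to target in the second run
```

The returned gaps are exactly the critical, untested combinations on which a follow-up simulation run would be scheduled.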

The need for dynamic behavior models in the coverage-driven verification process

In the coverage-driven verification process, it is important to ensure that the prescribed test cases are simulated and covered. While there are several ways to prescribe test cases prior to execution, ensuring that the test conditions are actually met during the simulation is challenging due to the dynamic nature of the simulations: for example, the response of the EGO/DUT (Device Under Test) is not completely known while the test cases are being defined. To overcome this uncertainty, behavior models (smart agent representations) respond dynamically to the EGO/DUT. Therefore, behavior models need to work in a co-simulation environment with the environment simulator.
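A minimal sketch of such a co-simulation loop is shown below: a hypothetical cut-in agent observes the simulated EGO state at every step and triggers its maneuver only when the test condition (a target gap) is actually met, whatever speed the EGO chose. All names, dynamics, and values are illustrative assumptions:

```python
class CutInAgent:
    """Hypothetical behavior model: waits until the EGO closes to a target
    gap, then triggers a cut-in, regardless of how the EGO actually drove."""

    def __init__(self, trigger_gap_m):
        self.trigger_gap_m = trigger_gap_m
        self.cut_in_done = False

    def step(self, ego_position_m, own_position_m):
        gap = own_position_m - ego_position_m
        if not self.cut_in_done and gap <= self.trigger_gap_m:
            self.cut_in_done = True
            return "cut_in"      # lane change into the EGO lane
        return "keep_lane"

def cosimulate(ego_speed_ms, agent, dt=0.1, steps=100):
    """Minimal co-simulation loop: the environment advances EGO and agent
    together, and the agent reacts to the simulated EGO state every step."""
    ego_pos, agent_pos, agent_speed = 0.0, 50.0, 10.0
    events = []
    for i in range(steps):
        action = agent.step(ego_pos, agent_pos)
        if action == "cut_in":
            events.append((round(i * dt, 1), action))
        ego_pos += ego_speed_ms * dt
        agent_pos += agent_speed * dt
    return events

agent = CutInAgent(trigger_gap_m=20.0)
print(cosimulate(ego_speed_ms=20.0, agent=agent))  # [(3.0, 'cut_in')]
```

Because the agent's trigger depends on the EGO's actual behavior, the cut-in happens at whatever simulation time the gap condition is satisfied, which is exactly why such models must run in closed loop with the environment simulator rather than on a fixed timetable.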


Stay tuned for part 2 
