Management of scenario-based validation processes: Efficiency through the use of standards
Due to the large number of possible traffic scenarios and the diversity of validation methods, assessing the safety of autonomous driving functions and driver assistance systems is extremely time-consuming and complex. The Peak Test Management Suite (TMS), combined with a number of industry standards, helps to transparently control these complex validation processes and to manage the resulting data in a traceable manner.
The platform allows the user to efficiently create so-called validation campaigns. Such a campaign comprises a set of predefined test case types to be conducted by one or more teams in the course of validating a defined System under Test. The system to be tested may itself be composed of several components (e.g. software version, control unit, vehicle type).
Besides the test cases and the System under Test, each validation campaign includes a schedule proposed by TMS as well as the allocation of suitable test resources to each test case (e.g. a specific XiL platform, proving ground, test vehicle, or driver). At any time the user can add or delete individual test cases and reschedule validation tasks via drag and drop in a calendar view. If TMS detects a possible planning conflict (e.g. a resource allocated twice at the same time), it notifies the user accordingly.
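The double-allocation check mentioned above boils down to detecting overlapping time intervals per resource. The following sketch illustrates the idea; the data model and names are illustrative assumptions, not the actual TMS implementation.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical sketch of a resource-conflict check as a planning tool might
# perform it; the Allocation structure is an assumption for illustration.

@dataclass
class Allocation:
    resource: str   # e.g. "HiL rig 3", "test vehicle A", "proving ground"
    start: datetime
    end: datetime

def find_conflicts(allocations):
    """Return pairs of allocations booking the same resource at overlapping times."""
    conflicts = []
    by_resource = {}
    for alloc in allocations:
        by_resource.setdefault(alloc.resource, []).append(alloc)
    for allocs in by_resource.values():
        allocs.sort(key=lambda a: a.start)
        for earlier, later in zip(allocs, allocs[1:]):
            if later.start < earlier.end:  # adjacent intervals overlap
                conflicts.append((earlier, later))
    return conflicts

plan = [
    Allocation("test vehicle A", datetime(2024, 5, 6, 8), datetime(2024, 5, 6, 12)),
    Allocation("test vehicle A", datetime(2024, 5, 6, 11), datetime(2024, 5, 6, 15)),
    Allocation("HiL rig 3", datetime(2024, 5, 6, 8), datetime(2024, 5, 6, 17)),
]
for a, b in find_conflicts(plan):
    print(f"Conflict on {a.resource}: booked twice between "
          f"{b.start:%H:%M} and {a.end:%H:%M}")
```

Sorting per resource first keeps the check at one comparison per adjacent pair instead of comparing every allocation against every other.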
Based on “intelligent” templates, the user can specify the test goal for each test case, as well as individual test parameters and their values. Furthermore, appropriate metrics for determining test completion and quality can be defined.
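A test case defined this way can be thought of as a structured record. The sketch below is a minimal assumed data model, not the actual TMS template schema; field names and the example metrics are illustrative.

```python
from dataclasses import dataclass, field

# Illustrative assumption of what a template-based test case might capture:
# a goal, a set of parameters, and simple completion/quality metrics.

@dataclass
class TestCase:
    test_goal: str
    parameters: dict = field(default_factory=dict)  # parameter name -> value

def completion(executed_runs: int, planned_runs: int) -> float:
    """Example completion metric: fraction of planned runs already executed."""
    return executed_runs / planned_runs

def quality(passed_runs: int, executed_runs: int) -> float:
    """Example quality metric: fraction of executed runs that passed."""
    return passed_runs / executed_runs

case = TestCase(
    test_goal="AEB reacts to a crossing pedestrian within the required distance",
    parameters={"ego_speed_kph": 50, "pedestrian_speed_kph": 5, "overlap_pct": 50},
)
print(f"completion: {completion(18, 24):.0%}, quality: {quality(17, 18):.0%}")
```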
An important part of the test case description is the detailed specification of the scenario to be tested. This can be done either with individually modeled description structures or with standardized methods such as those provided by ASAM openLABEL. In a connected scenario repository, e.g. based on ASAM ODS, the user checks whether a scenario suitable for the test case is already available. Depending on the intended test method and platform, the user can, for example, search for recorded scenarios based on real road drives, or for generic scenarios based on scenario description languages (SDLs) such as ASAM openSCENARIO. If the desired scenario is available, it can be linked to the test case. Otherwise, the user can create a provision request, which triggers a further workflow (e.g. commissioning a recording with a test vehicle or generating a scenario variant by parametrization).
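Generating scenario variants by parametrization, as mentioned in the provision-request workflow, typically means expanding a logical scenario with parameter ranges into concrete variants. The parameter names below are assumptions for illustration; a full factorial expansion is only one possible sampling strategy.

```python
from itertools import product

# Hypothetical logical scenario: each parameter lists the concrete values
# to be combined into scenario variants (names are illustrative).
logical_scenario = {
    "ego_speed_kph": [30, 50, 70],
    "cut_in_distance_m": [10.0, 20.0],
    "road_friction": [0.4, 1.0],
}

def expand_variants(parameter_space):
    """Yield one concrete parameter set per combination (full factorial)."""
    names = sorted(parameter_space)
    for values in product(*(parameter_space[n] for n in names)):
        yield dict(zip(names, values))

variants = list(expand_variants(logical_scenario))
print(f"{len(variants)} concrete scenario variants")  # 3 * 2 * 2 = 12
```

In practice, the variant count grows multiplicatively with each parameter, which is why repositories of reusable, already-validated scenarios pay off.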
After the test case definition has been completed and approved, all necessary information can be exported in a suitable data format to initialize the test execution. The consolidated test results are then imported again and assigned to the test case. More extensive raw data from the test execution (e.g. traces, logs, bus messages) is stored in a central, test-system-independent data management system, e.g. based on Peak Test Data Manager, and linked as well. This ensures long-term traceability and reproducibility of validation results for different actors.
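The export/import round trip can be sketched as follows. The JSON layout, field names, and URI scheme are assumptions for illustration, not an actual TMS or Peak Test Data Manager exchange format.

```python
import json
import tempfile
from pathlib import Path

# Assumed exchange format: the test case is serialized for the test bench,
# and results come back together with links into a central raw-data store.

def export_test_case(case: dict, path: Path) -> None:
    """Serialize a test case definition so a test bench can initialize execution."""
    path.write_text(json.dumps(case, indent=2))

def import_results(case: dict, results: dict, raw_data_refs: list) -> dict:
    """Attach consolidated results and links to raw data stored elsewhere."""
    enriched = dict(case)
    enriched["results"] = results
    enriched["raw_data"] = raw_data_refs  # e.g. URIs into the data manager
    return enriched

path = Path(tempfile.gettempdir()) / "tc_0042.json"
case = {"id": "TC-0042", "scenario": "cut-in_highway", "platform": "HiL"}
export_test_case(case, path)

enriched = import_results(
    json.loads(path.read_text()),
    results={"verdict": "passed", "runs": 24},
    raw_data_refs=["ods://measurements/tc_0042/run_001"],  # hypothetical URI
)
print(enriched["results"]["verdict"])
```

Storing only references to the raw data, rather than the data itself, is what keeps the result record small while still allowing any actor to trace back to the original traces and logs.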
This application scenario illustrates how combining a well-chosen set of standards and product platforms leads to an integrated tool chain that makes the validation of autonomous driving functions more efficient and traceable.