Generation of safety-critical scenarios
(Partially) automated driving functions have undergone continuous development in recent years. One challenge that delays the widespread deployment of novel driving functions is the safety evaluation of these systems. Existing driving functions are usually evaluated on recorded scenarios, but safety-critical scenarios are rare in real-world data, so methods for artificially generating such scenarios are crucial for measuring and minimizing potential safety risks. For this reason, FZI/ITIV researches different methods for generating safety-critical scenarios and for advancing the safety evaluation of novel driving functions.
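How critical a generated scenario is can be quantified with surrogate metrics such as time-to-collision (TTC). The sketch below is purely illustrative (the parameter ranges, the 2 s threshold, and all function names are assumptions, not a specific FZI/ITIV method): it samples car-following scenarios at random and keeps those whose TTC falls below a threshold.

```python
import random

def time_to_collision(gap_m, ego_speed_mps, lead_speed_mps):
    """TTC for a car-following scenario; infinite if the gap is not closing."""
    closing_speed = ego_speed_mps - lead_speed_mps
    if closing_speed <= 0.0:
        return float("inf")
    return gap_m / closing_speed

def sample_critical_scenarios(n, ttc_threshold_s=2.0, seed=0):
    """Randomly sample scenario parameters and keep the safety-critical ones."""
    rng = random.Random(seed)
    critical = []
    for _ in range(n):
        scenario = {
            "gap_m": rng.uniform(5.0, 60.0),          # initial distance to lead vehicle
            "ego_speed_mps": rng.uniform(10.0, 35.0),  # ego vehicle speed
            "lead_speed_mps": rng.uniform(0.0, 30.0),  # lead vehicle speed
        }
        if time_to_collision(**scenario) < ttc_threshold_s:
            critical.append(scenario)
    return critical
```

In practice, random sampling like this is only a baseline; the research directions below replace it with learned search strategies that find critical scenarios far more efficiently.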
Software testing with reinforcement learning
Software testing is an essential part of the modern development process. Testing is particularly important where defects can lead to security problems or potentially dangerous situations. The ever-increasing functional scope of software requires efficient, automated testing methods. Reinforcement learning, by its very structure, allows for iterative testing of software and the detection of bugs that require specific decision sequences. At FZI/ITIV, we research new reinforcement learning approaches for efficient software testing and anomaly detection.
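As a toy illustration of the idea (the system under test, its states, and the bug are all invented for this sketch), a tabular Q-learning agent can learn an action sequence that triggers a defect reachable only through a specific order of API calls, here a write issued after a resource has been opened and closed again:

```python
import random

ACTIONS = ["open", "close", "write"]

def sut_step(state, action):
    """Toy system under test: return (next_state, reward); reward 1 marks a bug."""
    if state == "init":
        return ("open", 0.0) if action == "open" else ("init", 0.0)
    if state == "open":
        return ("closed_after_open", 0.0) if action == "close" else ("open", 0.0)
    if state == "closed_after_open":
        if action == "write":
            return "crashed", 1.0  # bug found: write after close
        if action == "open":
            return "open", 0.0
        return "closed_after_open", 0.0
    return state, 0.0  # "crashed" is terminal

def train(episodes=2000, max_steps=6, alpha=0.5, gamma=0.9, eps=0.3, seed=1):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {s: {a: 0.0 for a in ACTIONS}
         for s in ("init", "open", "closed_after_open")}
    for _ in range(episodes):
        state = "init"
        for _ in range(max_steps):
            if rng.random() < eps:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[state][a])
            nxt, reward = sut_step(state, action)
            future = 0.0 if nxt == "crashed" else max(q[nxt].values())
            q[state][action] += alpha * (reward + gamma * future - q[state][action])
            state = nxt
            if state == "crashed":
                break
    return q

def greedy_rollout(q, max_steps=6):
    """Replay the learned policy; returns the triggering action sequence."""
    state, trace = "init", []
    for _ in range(max_steps):
        action = max(ACTIONS, key=lambda a: q[state][a])
        trace.append(action)
        state, _ = sut_step(state, action)
        if state == "crashed":
            break
    return trace, state
```

Because the reward is only given when the bug is reached, the agent must discover and then exploit the exact open–close–write sequence; a purely random fuzzer would keep rediscovering it by chance instead of replaying it reliably.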
Evaluation of perceptual functions
Assistance and automated driving functions rely on precise knowledge of the vehicle's environment. Various algorithms use environmental information to plan and execute routes, maneuvers, and trajectories. Loss of environmental information or anomalies in environmental signals can lead to unpredictable vehicle behavior. For this reason, FZI/ITIV is developing evaluation methods that allow statements to be made about the accuracy and robustness of real-world perception functions.
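For object-level perception outputs, accuracy is commonly scored by comparing predictions against ground-truth annotations, for example with intersection-over-union (IoU) for bounding boxes. The following is a minimal sketch (the box format, the 0.5 threshold, and the recall-style score are illustrative choices, not a specific FZI/ITIV evaluation method):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0.0 else 0.0

def detection_recall(predictions, ground_truth, iou_threshold=0.5):
    """Fraction of ground-truth boxes matched by at least one prediction."""
    if not ground_truth:
        return 1.0
    matched = sum(
        1 for gt in ground_truth
        if any(iou(gt, pred) >= iou_threshold for pred in predictions)
    )
    return matched / len(ground_truth)
```

Robustness can then be probed by re-running such metrics on perturbed inputs (e.g. sensor noise or adverse weather) and comparing the scores against the unperturbed baseline.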
| Topic | Type | Available from |
| --- | --- | --- |
| Evaluation of state-of-the-art lane detection algorithms for highly automated vehicles | Master's thesis | 07/2023 |
| Generation of safety-critical scenarios using multi-agent reinforcement learning | Master's thesis | 07/2023 |
| Adversarial reinforcement learning for safety-critical scenarios | Master's thesis | 07/2023 |