Adversarial reinforcement learning for safety-critical scenarios
- Subject: Reinforcement Learning, Adversarial Generation, Motion Planning
- Type: Master's thesis
- Date: from 07/2023
- Tutor:
Location: FZI Karlsruhe.
Context
Hardware acceleration has driven a renaissance in machine learning research. One use case in scenario-based testing is the identification and parameterization of relevant scenarios. Technical implementations of scenario-based testing often rely on sets of such parameters, which are tuned individually in a time-consuming, labor-intensive process. Reinforcement learning methods make it possible to learn such parameterizations through interaction, even in complex environments. Building on state-of-the-art reinforcement learning algorithms, this thesis will investigate and develop a method in which an adversarial agent generates safety-critical scenarios, while a second agent learns strategies to solve them.
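The interplay described above can be illustrated with a minimal toy sketch (all names and the environment are hypothetical, not part of the thesis): an adversary learns to pick scenario parameters that are hard for the protagonist, while the protagonist learns responses that solve each scenario. Bandit-style value tables stand in for the RL policies here; in the thesis itself, these would be replaced by full RL algorithms.

```python
import random

def run_episode(scenario_param, action):
    """Toy dynamics (hypothetical): the protagonist succeeds iff its action
    matches the scenario parameter. Returns the protagonist's reward."""
    return 1.0 if action == scenario_param else -1.0

def train(n_iters=2000, n_params=4, eps=0.1, lr=0.1, seed=0):
    rng = random.Random(seed)
    # Value tables stand in for policies: the adversary scores each scenario
    # parameter, the protagonist scores each (scenario, action) pair.
    adversary_q = [0.0] * n_params
    protagonist_q = [[0.0] * n_params for _ in range(n_params)]

    for _ in range(n_iters):
        # Adversary picks the scenario it currently finds most critical (eps-greedy).
        if rng.random() < eps:
            s = rng.randrange(n_params)
        else:
            s = max(range(n_params), key=lambda i: adversary_q[i])
        # Protagonist observes the scenario and picks its best-known response.
        if rng.random() < eps:
            a = rng.randrange(n_params)
        else:
            a = max(range(n_params), key=lambda i: protagonist_q[s][i])
        r = run_episode(s, a)
        protagonist_q[s][a] += lr * (r - protagonist_q[s][a])
        # Zero-sum: the adversary is rewarded when the protagonist fails.
        adversary_q[s] += lr * (-r - adversary_q[s])
    return adversary_q, protagonist_q

adv_q, pro_q = train()
# Check whether the protagonist's greedy response now solves every scenario.
solved = all(max(range(4), key=lambda a: pro_q[s][a]) == s for s in range(4))
```

The adversary concentrates on scenarios the protagonist still fails, which steers training effort toward the safety-critical cases; once the protagonist masters a scenario, the adversary moves on.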
Tasks
- Familiarization with the theory of Adversarial Reinforcement Learning
- Literature review on the topic of Adversarial Generation
- Implementation of selected approaches in Python, using PyTorch for RL algorithms
- Evaluation, comparison and analysis of the results, as well as documentation
Prerequisites
- Enthusiasm for the field of machine learning
- Basic knowledge of Python or a comparable programming language
- Independent thinking and working
- Very good knowledge of German or English
- Motivation, dedication and commitment