AI voluTion: Towards explainable, sustainable and safe AI.

[Figure: Saliency map for vehicle detection]
[Figure: Model complexity reduction scheme]

The use of artificial intelligence is advancing steadily. Especially in data processing for semantic environment perception, for example in autonomous vehicles, it is essential to provide explanatory approaches so that correct system behavior can be verified before the model is deployed. In addition, it is important to ensure sufficient privacy of sensitive data when the model is used, e.g., for AI-supported diagnostics: the diagnosis must remain understandable on the one hand, while inferences about sensitive patient data must be prevented on the other.
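One common explanatory approach in perception models is a gradient-based saliency map: the absolute gradient of the detection score with respect to each input feature shows which parts of the input drive the decision. As a minimal, hedged sketch (the two-layer "detector", its weights, and the input patch below are purely illustrative, not the project's actual model), the gradient is approximated here by finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical tiny two-layer "detector" producing a scalar score
# for one flattened input patch (stand-in for a real detection network).
W1 = rng.normal(scale=0.5, size=(8, 16))
W2 = rng.normal(scale=0.5, size=8)

def score(x):
    # Scalar detection score for one 16-dimensional input patch.
    return W2 @ np.tanh(W1 @ x)

def saliency(x, eps=1e-5):
    # Numerical input gradient |d score / d x|, one feature at a time
    # (central differences; a real framework would use autodiff instead).
    g = np.zeros_like(x)
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        g[i] = (score(xp) - score(xm)) / (2 * eps)
    return np.abs(g)

x = rng.normal(size=16)          # one illustrative input sample
s = saliency(x)
# Features with the largest saliency contribute most to the score.
print("most salient input features:", np.argsort(s)[-3:][::-1])
```

In practice the same idea is applied per pixel of a camera image, and the resulting heat map is what a figure such as "Saliency Map Vehicle Detection" visualizes.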

Furthermore, training and inference require a considerable amount of energy and time, which makes mobile deployment on edge devices in vehicles difficult. Possible approaches will therefore be discussed, such as analyzing and probing the model's parameter space, in order to further increase performance in future-oriented driving functions and to reduce energy consumption through parameter reduction on the model side, both during operation and during iterative development.
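One standard instance of such parameter reduction is magnitude pruning: weights with the smallest absolute values are set to zero, shrinking the effective parameter count with little loss in accuracy. The sketch below is a hedged illustration (the weight matrix and the 50% sparsity target are assumptions for demonstration, not values from the project):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 64))    # hypothetical trained weight matrix

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights.

    Returns the pruned weights and the boolean mask of kept entries.
    """
    k = int(weights.size * sparsity)          # number of weights to remove
    thresh = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > thresh           # keep only the larger weights
    return weights * mask, mask

Wp, mask = magnitude_prune(W, sparsity=0.5)
print(f"kept {mask.sum() / mask.size:.0%} of the parameters")
```

The zeroed entries can then be stored and executed sparsely, which is where the energy and latency savings on edge devices would come from; in an iterative development loop, pruning is typically alternated with fine-tuning.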


  • Research the current state of the art of explainability and reconstruction approaches
  • Develop a system for validating reconstructed data
  • Develop "AI whitebox" algorithms
  • Develop methods to increase computational efficiency


  • Programming skills (Python/C++/R; knowledge of common ML frameworks is an advantage)
  • Enthusiasm for deep learning and autonomous systems
  • Interest in statistics and mathematical optimization advantageous