Safety and Security for AI Accelerators - Tracing Layer Activations
The classical approach for the highest safety requirements (e.g., ASIL-D in the automotive domain) aims to find all systematic system and software faults during the development phase. For certain classes of defects, this goal can largely be achieved through careful work (processes, tools, etc.). However, systems that contain AI accelerators always exhibit sporadic effects that lead to malfunctions or short-term performance degradation. These can be caused, for example, by colliding accesses to shared resources such as memory or input/output components, by an undetected need for synchronization between processes, or by attacks and input distortions.

Explaining the black-box behavior of an AI accelerator is an ever-evolving area of research, and observed effects cannot always be explained by conventional or known methods. One way to explain the classification results of an AI accelerator is to examine the activation patterns produced during a classification. Using a tool developed at ITIV, the activations of individual layers can be traced during the inference of input images. Your task is to evaluate these traces, find metrics that describe them, and use those metrics to make statements about the operational state of the accelerator.
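To make the idea of trace metrics concrete, the following is a minimal sketch, assuming (hypothetically) that a trace is available as a mapping from layer names to activation arrays; the ITIV tool's actual output format is not specified here, and the metric choices (mean, standard deviation, sparsity, peak magnitude) are illustrative starting points, not a prescribed set:

```python
import numpy as np

def trace_metrics(trace):
    """Compute simple descriptive metrics for one activation trace.

    `trace` maps layer names to activation arrays (any shape); this is a
    hypothetical stand-in for the output of the ITIV tracing tool.
    """
    metrics = {}
    for layer, act in trace.items():
        act = np.asarray(act, dtype=float)
        metrics[layer] = {
            "mean": float(act.mean()),
            "std": float(act.std()),
            # fraction of (near-)zero activations, e.g. after ReLU
            "sparsity": float(np.mean(np.abs(act) < 1e-6)),
            "peak": float(np.abs(act).max()),
        }
    return metrics

# Synthetic two-layer trace for illustration
trace = {
    "conv1": np.array([[0.0, 1.5], [0.0, 3.0]]),
    "fc1": np.array([0.2, 0.0, 0.8]),
}
m = trace_metrics(trace)
print(m["conv1"]["sparsity"])  # 0.5 (two of four values are zero)
```

Scalar summaries like these are deliberately cheap to compute, which matters for the resource-consumption trade-off mentioned below; richer metrics (per-channel statistics, distributional distances) trade higher monitoring cost for finer diagnostics.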
Tasks:
- Literature review on AI explainability and accelerators.
- Evaluation and implementation of strategies to monitor activations and their impact on outcome validity.
- Trade-off evaluation between resource consumption and health prediction accuracy.
The following additional task is required for a "very good" grade:
- Propose an intrusion detection strategy based on layer traces.
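One conceivable (purely illustrative) intrusion detection strategy is to fit a baseline distribution of a trace metric over known-good runs and flag traces whose metric deviates by more than a chosen number of standard deviations; the metric values and threshold below are hypothetical:

```python
import numpy as np

def fit_baseline(metric_samples):
    """Fit mean and sample standard deviation of a scalar trace metric
    collected from known-good inference runs."""
    a = np.asarray(metric_samples, dtype=float)
    return a.mean(), a.std(ddof=1)

def is_anomalous(value, mean, std, k=3.0):
    """Flag a metric value deviating more than k sigma from the baseline."""
    return abs(value - mean) > k * std

# Hypothetical sparsity values of one layer across known-good runs
baseline = fit_baseline([0.48, 0.50, 0.52, 0.49, 0.51])
print(is_anomalous(0.50, *baseline))  # False: within the baseline band
print(is_anomalous(0.90, *baseline))  # True: far outside the baseline band
```

A threshold detector of this kind is only a starting point; evaluating it against realistic attacks and input distortions, and against its false-positive rate, would be part of the thesis work itself.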
Requirements:
- Interest in embedded systems, AI research, and new design methods
- Very good knowledge of Python (preferably including experience with deep-learning libraries such as PyTorch or TensorFlow)
- Ability to work independently
A synopsis must be written and approved by the supervisor before starting the concrete work.