High-Level Synthesis for AI Accelerators
To reduce the design time of AI accelerators, High-Level Synthesis (HLS) has gained traction. It moves the design from traditional HDLs to a more abstract level, such as SystemC or C++, and enables the designer to rapidly evaluate different architectures. Since AI accelerators are heavily datapath-oriented, they are a good fit for this modelling style: their behaviour can easily be described algorithmically. Designers can thus focus on the architecture, while low-level concerns such as pipelining and interfaces are handled by the tools.
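To illustrate the abstraction level involved, the following is a minimal sketch of what a datapath kernel might look like in C++-based HLS. The pragmas follow a Vitis-HLS-style dialect and are illustrative assumptions; a standard C++ compiler simply ignores them, so the same source doubles as a functional model.

```cpp
#include <cstdint>

// Dot product of two fixed-length vectors, written in synthesizable C++.
// The pragmas (Vitis-HLS-style, assumed here for illustration) request a
// fully partitioned memory layout and a pipelined loop with one
// multiply-accumulate per cycle.
constexpr int N = 8;

int32_t dot_product(const int16_t a[N], const int16_t b[N]) {
#pragma HLS ARRAY_PARTITION variable=a complete
#pragma HLS ARRAY_PARTITION variable=b complete
    int32_t acc = 0;
    for (int i = 0; i < N; ++i) {
#pragma HLS PIPELINE II=1
        acc += static_cast<int32_t>(a[i]) * b[i];  // one MAC per iteration
    }
    return acc;
}
```

Because the same function can be compiled and run as ordinary software, different architectural variants (loop unrolling, partitioning, pipelining) can be explored quickly without touching an HDL.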
AI Accelerator Verification
New design methods, such as HLS or HGLs, enable the rapid development of new accelerator architectures. However, the verification process has not kept pace. While designs were traditionally verified through extensive simulation, formal methods are becoming more common. They aim to mathematically prove the correctness of a design through techniques such as symbolic evaluation. Traditional bottlenecks, such as a large control path, are not an issue in AI accelerators because they minimize control logic. These design differences can be exploited to speed up the verification process.
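The key difference from simulation is that a formal proof covers the complete input space rather than sampled stimuli. Production tools achieve this symbolically (e.g. via SAT/SMT solvers); as a toy stand-in, the sketch below obtains the same all-inputs guarantee for a small datapath by exhaustive enumeration, checking an optimized averaging circuit against its reference. The function names are hypothetical.

```cpp
#include <cstdint>

// Two datapath implementations of the unsigned 8-bit floor average.
uint8_t avg_ref(uint8_t a, uint8_t b) {
    return static_cast<uint8_t>((a + b) / 2);          // reference model
}
uint8_t avg_opt(uint8_t a, uint8_t b) {
    return static_cast<uint8_t>((a & b) + ((a ^ b) >> 1));  // overflow-free variant
}

// Equivalence check over the complete 8-bit x 8-bit input space
// (65,536 cases). For a datapath this small, enumeration is itself a
// proof; formal tools reach the same coverage symbolically instead.
bool equivalent() {
    for (int a = 0; a < 256; ++a)
        for (int b = 0; b < 256; ++b)
            if (avg_ref(static_cast<uint8_t>(a), static_cast<uint8_t>(b)) !=
                avg_opt(static_cast<uint8_t>(a), static_cast<uint8_t>(b)))
                return false;
    return true;
}
```

Because accelerator datapaths have little control logic, such equivalence checks decompose into many small, independent proofs, which is exactly the property the paragraph above refers to.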
Security Verification of AI Accelerators
With recent advances, dedicated AI accelerators have found their way into a multitude of security-critical use cases. Ensuring that these architectures provide secure inference of NNs therefore becomes an important task. Hardware trojans are one possible attack vector: they attempt to leak internal state of the accelerator to the outside in order to, for example, expose the trained parameters of the network. Certain formal methods can be employed during the design phase to detect such trojans at the IP level.
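One family of such methods is information-flow tracking: secret data (e.g. the trained weights) is labelled, labels propagate through every operation, and any externally visible port that ends up carrying a secret label indicates a potential leak. The sketch below is a minimal software model of this idea under assumed names; real IP-level tools perform the same propagation on the RTL itself.

```cpp
#include <cstdint>

// Toy information-flow tracking: each value carries a taint flag marking
// whether it depends on secret data (e.g. trained network parameters).
struct Tainted {
    int32_t value;
    bool secret;  // true if derived from secret data
};

// Any operation reading a tainted operand yields a tainted result.
Tainted mul(Tainted a, Tainted b) { return {a.value * b.value, a.secret || b.secret}; }
Tainted add(Tainted a, Tainted b) { return {a.value + b.value, a.secret || b.secret}; }

// Policy check: an externally observable debug port must never carry
// secret-dependent data. A trojan routing weights to such a port would
// trip this check.
bool leaks_secret(Tainted port_value) { return port_value.secret; }
```

In a formal setting this check is not run on concrete values but proven over all reachable states, so a trojan cannot hide behind a rare trigger condition.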
- Compiler-Based Integration of Neural Network Accelerators (from 05/2023)
- Implementation of a hardware accelerator for neural networks for processing radar data (from 02/2023)
- Concept and development of high-performance hardware accelerators for neural networks (from 03/2023)
- Parallel result validation of AI accelerators using most neuron activation monitor (from 03/2023)
- Empowering Tomorrow's Engineers: MLIR-Based Toolchain for Transforming Python Neural Networks into Verilog Hardware (from 11/2023)
- HW/SW Co-design for Embedded AI (from 10/2023)