The upcoming Belle II particle collider experiment is one of the most modern particle detector experiments in the world. It targets a world-record luminosity, a measure of the number of particle collisions per unit time. The resulting amount of data that would have to be stored for later analysis far exceeds the capacity of the available data transmission lines. Fortunately, a large fraction of this data is produced by effects that are irrelevant to the experiment, as they yield no new knowledge. Identifying such data early on allows it to be discarded, so that only relevant data is saved. This approach solves the data transmission problem; however, mechanisms that implement this identification have to be employed.
At the ITIV, so-called trigger mechanisms based on machine learning are being developed for the Belle II experiment. While they can be implemented within the requirements of the experiment in some cases, they suffer from the variability of the input weighting that is generated during the training phase. In particular, the latency of an implementation depends strongly on the specific weight set, since the synthesis tools optimize multiplications by constants heavily. This is especially critical for the overall system, since real-time processing has to be ensured.
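To illustrate why a trained weight set can change the hardware cost, the sketch below estimates the adder count of a constant-coefficient multiplier from the number of nonzero digits in the weight's canonical signed digit (CSD) representation — a common rule of thumb for shift-and-add multipliers on FPGAs. The weight values and the cost model are illustrative assumptions, not taken from the Belle II trigger design.

```python
def csd_digits(w: int) -> list[int]:
    """Canonical signed digit representation of w (least significant first).

    Each digit is -1, 0, or +1; CSD minimizes the number of nonzero digits.
    """
    digits = []
    while w != 0:
        if w & 1:
            # +1 if the low two bits are 01, -1 if they are 11
            d = 2 - (w & 3)
            w -= d
        else:
            d = 0
        digits.append(d)
        w //= 2
    return digits


def adder_cost(w: int) -> int:
    """Rough adder/subtractor count for multiplying by the constant w."""
    nonzero = sum(1 for d in csd_digits(abs(w)) if d != 0)
    return max(nonzero - 1, 0)


# Two weights of similar magnitude can have very different hardware cost:
for w in (96, 93):
    print(f"weight {w:3d}: ~{adder_cost(w)} adders")
```

Under this simple model, a weight of 96 (= 128 − 32) needs about one adder, while 93 needs about three, even though both fit in the same bit width — which is why latency and resource usage can only be judged per weight set.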
Tasks of the Thesis
The main tasks of this thesis are the investigation of the impact of different machine learning weight sets on the resulting FPGA implementation and an estimation of this impact at design time.
- Familiarization phase
  - Familiarization with machine learning algorithms
  - Orientation in design for FPGAs and the platforms used in Belle II
  - Orientation in FPGA resource estimation
- Concept and design phase
  - Development of weighting classes for estimation
  - Evaluation of design-time resource estimation
- Implementation phase
  - Investigation of the impact of the weighting classes
  - Implementation of the design-time estimation
- Creation of documentation covering the topics described above