Spiking Neural Networks for Motion Planning in Pedestrian-rich Environments using Reinforcement Learning

  • Subject: Reinforcement Learning, Neuromorphic Computing, Motion Planning, Spiking Neural Networks
  • Type: Master's thesis
  • Date: from 04/2024
  • Tutors:

    M. Sc. Alexandru Vasilache

    M. Sc. Daniel Flögel

  • Additional information:

    Thesis at the FZI.




Intelligent, highly autonomous robots and mobile platforms have the potential to shape a future in which humans and machines interact and move freely in shared environments. Current research focuses on machine learning methods such as Deep Reinforcement Learning (DRL) with Artificial Neural Networks (ANNs) to train a policy that plans the robot's movements in crowds. Spiking Neural Networks (SNNs) are a distinct class of artificial neural networks inspired by the biological behavior of neurons. Unlike conventional neural networks, which use continuous activation values, SNNs operate on discrete events called spikes. These spikes carry temporal information and are generated when the accumulated input to a neuron reaches a threshold. This event-driven dynamic makes SNNs well suited for tasks that require temporal processing, such as motion planning in robotics.
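To make the spiking mechanism described above concrete, the following minimal sketch simulates leaky integrate-and-fire (LIF) neurons in discrete time. All parameter names and values (threshold, decay, input currents) are illustrative assumptions, not part of the thesis specification:

```python
import numpy as np

def lif_step(v, input_current, v_thresh=1.0, v_reset=0.0, decay=0.9):
    """One discrete-time step of a leaky integrate-and-fire (LIF) neuron.

    The membrane potential v leaks toward zero, accumulates input, and
    emits a spike (1.0) wherever it reaches the threshold, after which
    the spiking neurons are reset.
    """
    v = decay * v + input_current            # leaky integration of input
    spikes = (v >= v_thresh).astype(float)   # spike where threshold is reached
    v = np.where(spikes > 0, v_reset, v)     # reset spiking neurons
    return v, spikes

# Drive three neurons with constant currents of different strengths.
v = np.zeros(3)
currents = np.array([0.05, 0.15, 0.4])
spike_counts = np.zeros(3)
for _ in range(100):
    v, spikes = lif_step(v, currents)
    spike_counts += spikes
print(spike_counts)  # stronger input -> more spikes
```

Note that the weakest input never reaches the threshold (the leak balances it out), so that neuron stays silent; the spike *rate* of the others encodes input strength over time, which is exactly the temporal coding property the paragraph above refers to.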


The aim of this master thesis is to investigate how SNNs and DRL can be used together for motion planning and how this novel architecture compares to conventional approaches. An in-house approach is used as the basis for the master thesis:
[1] Flögel et al., "Socially Integrated Navigation: A Social Acting Robot with Deep Reinforcement Learning", 2024, https://arxiv.org/abs/2403.09793

  • You familiarize yourself with existing DRL motion planning methods and deepen your knowledge of SNNs and DRL.
  • You conceptualize an architecture for the Spiking Neural Network and integrate it into DRL algorithms.
  • You implement your new approach in an existing framework.
  • You compare your approach with the architecture and results from [1].
  • Optional: You create an embedded variant of the two approaches and compare the energy consumption.
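One possible shape for the SNN-plus-DRL architecture in the tasks above is sketched below: an observation is rate-encoded into spike trains, passed through one LIF hidden layer over several simulation steps, and the hidden spike counts are decoded into action logits. All dimensions, weights, and names here are hypothetical placeholders; in the thesis the weights would be trained by a DRL algorithm rather than drawn at random:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a small crowd-navigation observation and a
# discrete action set (e.g. velocity commands); purely illustrative.
OBS_DIM, HIDDEN, N_ACTIONS, T_SIM = 8, 16, 5, 50

# Random weights stand in for parameters that DRL training would learn.
w_in = rng.normal(0, 0.5, (HIDDEN, OBS_DIM))
w_out = rng.normal(0, 0.5, (N_ACTIONS, HIDDEN))

def spiking_policy(obs, v_thresh=1.0, decay=0.9):
    """Map an observation to action logits via a spiking hidden layer.

    The observation is rate-encoded into stochastic spike trains,
    integrated by one LIF layer for T_SIM steps, and the hidden
    spike counts are linearly decoded into logits.
    """
    rates = 1.0 / (1.0 + np.exp(-obs))   # squash features into [0, 1] firing rates
    v = np.zeros(HIDDEN)
    counts = np.zeros(HIDDEN)
    for _ in range(T_SIM):
        in_spikes = (rng.random(OBS_DIM) < rates).astype(float)  # rate encoding
        v = decay * v + w_in @ in_spikes                         # leaky integration
        out_spikes = (v >= v_thresh).astype(float)               # threshold crossing
        v = np.where(out_spikes > 0, 0.0, v)                     # reset after spiking
        counts += out_spikes
    return w_out @ (counts / T_SIM)      # decode firing rates into action logits

obs = rng.normal(size=OBS_DIM)
logits = spiking_policy(obs)
action = int(np.argmax(logits))          # greedy action selection
```

Training such a network with DRL raises the question of how gradients flow through the non-differentiable spike function (e.g. via surrogate gradients or ANN-to-SNN conversion), which is one of the design decisions the thesis would need to investigate.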


  • You have a basic understanding of machine learning and reinforcement learning.
  • You have very good knowledge of Python.
  • You are motivated and work independently.