Niklas Bunzel

I am a research scientist at Fraunhofer SIT, currently pursuing my PhD at TU Darmstadt. I specialize in artificial intelligence and IT security, focusing on adversarial machine learning and the robustness of AI systems. My expertise includes developing defenses against adversarial attacks, such as evasion attacks, and advancing deepfake detection methods, supported by a portfolio of over 20 publications. As a core member of the OWASP AI Exchange, I bridge technical safeguards with regulatory requirements. I regularly present at academic and industry conferences, sharing insights on adversarial ML and AI security. I am skilled in frameworks and programming languages such as PyTorch, Keras, and Python, and I build proofs of concept that bring my research to life. My work is driven by the goal of advancing secure, trustworthy, and impactful AI systems.

Research Scientist

Research

AI Security/Trustworthy AI

The RoMa project aims to improve the robustness of image classifiers against both benign environmental influences and adversarial attacks. The team researches defenses against adversarial examples, which perturb image inputs only minimally, as well as against adversarial patches. The project also works to raise public awareness of the security risks posed by adversarial attacks through publications, lectures, educational events, and demonstrations.
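To make the attack model concrete, the sketch below shows the Fast Gradient Sign Method (FGSM), a canonical minimal-perturbation evasion attack. It is a generic PyTorch illustration rather than RoMa project code; model, x, y, and epsilon are assumed placeholders for any image classifier, an input batch in [0, 1], its labels, and an L-infinity budget.

    # Minimal FGSM sketch: take one signed gradient step of size epsilon,
    # then clamp back to the valid pixel range.
    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, epsilon=8 / 255):
        """Return adversarially perturbed copies of the image batch x."""
        x_adv = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

With a budget as small as 8/255 per pixel, the perturbation is imperceptible to humans yet often flips the classifier's prediction.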
We investigate detectors for adversarial attacks, with particular emphasis on adversarial patch detection in both the digital and physical domains, applied to tasks such as face recognition, object detection, and deepfake detection. Our research also explores the transferability of evasion attacks, as well as the use of evasion attacks and adversarial training in continual learning settings. In addition, we advance AI safety by identifying and generating rare edge-case data and by simulating adverse weather conditions to improve system robustness and reliability.
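As a rough illustration of one defense named above, adversarial training attacks each batch on the fly and updates the model on the perturbed inputs. The loop below is a minimal single-step (FGSM-based) sketch, not our project code; model, loader, optimizer, and epsilon are assumed placeholders.

    # Minimal adversarial training epoch: craft adversarial examples with a
    # single FGSM step, then run a standard update on the perturbed batch.
    import torch
    import torch.nn.functional as F

    def adversarial_training_epoch(model, loader, optimizer, epsilon=8 / 255):
        model.train()
        for x, y in loader:
            # Inner maximization: perturb the batch against the current model.
            x_adv = x.clone().detach().requires_grad_(True)
            F.cross_entropy(model(x_adv), y).backward()
            x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

            # Outer minimization: train on the adversarial batch.
            optimizer.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()
            optimizer.step()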

Deepfakes

SecMedID addresses the escalating risks posed by deepfake technologies, which enable highly realistic manipulation of faces and voices in digital media. These technologies threaten individuals, organizations, and society: they facilitate fraud, extortion, and disinformation, erode trust in media, compromise democratic processes, enable defamation, allow tampering with legal evidence, and jeopardize public safety through falsified communications. To counter these threats, the project surveys the state of the art in deepfake methods for video and audio manipulation and advances techniques such as face swapping, facial reenactment, voice conversion, and text-to-speech in order to assess future risks. It also explores forensic detection mechanisms that identify deepfakes even when adversarial attacks are employed to evade detection. By advancing knowledge and tools in both deepfake generation and detection, SecMedID seeks to ensure the integrity and trustworthiness of digital media.
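On the detection side, forensic deepfake detectors are commonly framed as binary real-versus-fake classifiers over face crops. The skeleton below is purely illustrative, assuming a generic ResNet-18 backbone with a two-class head; it is not the detector architecture studied in SecMedID.

    # Illustrative deepfake-detector skeleton: binary real-vs-fake
    # classification of a face crop with a ResNet-18 backbone.
    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    detector = models.resnet18(weights=None)             # backbone (assumption)
    detector.fc = nn.Linear(detector.fc.in_features, 2)  # {real, fake} head

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    @torch.no_grad()
    def fake_probability(face_crop):
        """Return P(fake) for a single PIL face crop."""
        detector.eval()
        x = preprocess(face_crop).unsqueeze(0)  # add batch dimension
        return torch.softmax(detector(x), dim=1)[0, 1].item()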

Media Security & Steganography

We conduct research in media security, steganography, and steganalysis. Media security concerns the integrity and authenticity of digital media; our work here includes developing robust image hashing techniques, particularly for law enforcement applications. Steganography conceals information within another medium, such as an image or video, while steganalysis aims to detect such hidden data. We have also explored the forensic applications of steganalysis: for instance, we investigated scenarios in which law enforcement agencies possess auxiliary knowledge, as well as the potential use of platforms such as Telegram as a steganographic channel.
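To illustrate the embedding idea, the sketch below shows textbook least-significant-bit (LSB) steganography, in which each payload bit replaces the lowest bit of one pixel value; the change is visually invisible but leaves exactly the kind of statistical trace steganalysis looks for. This is the classic scheme for illustration only, not one of the methods from our publications.

    # Textbook LSB steganography: hide each payload bit in the least
    # significant bit of one pixel value.
    import numpy as np

    def embed(pixels: np.ndarray, payload: bytes) -> np.ndarray:
        bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
        flat = pixels.flatten()  # flatten() returns a copy of the cover image
        assert bits.size <= flat.size, "payload too large for the cover image"
        flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
        return flat.reshape(pixels.shape)

    def extract(pixels: np.ndarray, n_bytes: int) -> bytes:
        bits = pixels.flatten()[: n_bytes * 8] & 1
        return np.packbits(bits).tobytes()

    cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
    stego = embed(cover, b"hidden")
    assert extract(stego, 6) == b"hidden"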

Talks

Details of my past and upcoming talks.

Services

Technical Program Committee Member

Journal Reviewer

Curriculum Vitae

Experience

  • Core Member, OWASP AI Exchange (Since 11.2023)
  • Research Scientist, Fraunhofer SIT (Since 04.2020)
  • Software Engineer, Independent (12.2019–04.2020)
  • Software Engineering & Project Design, SÖRF GbR (03.2018–12.2019)
  • Software Engineer, Independent (06.2016–02.2018)
  • Student Assistant, TU Darmstadt (04.2013–09.2013)

Education

  • PhD in Computer Science, TU Darmstadt (2020–2025)
  • Master of Science in IT Security, TU Darmstadt (2015–2019)
  • Master of Science in Computer Science, TU Darmstadt (2015–2019)
  • Bachelor of Science in Computer Science, TU Darmstadt (2010–2015)
  • Abitur, Martin-Niemöller-Schule (2010–2015)

Skills

  • Industry Knowledge: Machine Learning, Adversarial ML, Digital Signatures, IT Forensics, Cryptography, PKI
  • Programming Languages: Python, C#/.NET, PHP
  • Frameworks: PyTorch, TensorFlow, Adversarial Robustness Toolbox
  • Languages: German (native), English (fluent)

Certificates

Data Scientist Specialized in Trustworthy AI

Interests

Volunteer fire brigade (leading firefighter), Ving Chun (martial art), board games.

Publications