SynAdult: Multimodal Synthetic Adult Dataset Generation via Diffusion Models and Neuromorphic Event Simulation for Critical Biometric Applications


1C3I Group, School of Engineering, University of Galway


Rendered Results Using SynAdult Model.

Introduction

SynAdult is a scalable, multimodal data generation framework designed to create synthetic adult face datasets for biometric and computer vision research. This project integrates cutting-edge diffusion models, neuromorphic event simulators, 2D–3D face morphing, and a video retargeting pipeline to produce realistic facial expressions and head pose animations. The dataset spans multiple modalities (RGB, event streams, and 3D data) and includes diverse demographic coverage (Asian, African, and White ethnicities). We provide rigorous validation using KID, BRISQUE, identity similarity, and CLIP scores. Our open-source release supports research in face recognition, expression analysis, and fairness-aware AI by offering high-fidelity, ethically curated synthetic data.

The work is accepted and published in IEEE Access Journal.

Proposed Framework


Block Diagram

Results

A video retargeting pipeline is integrated to synthesize high-fidelity facial expression and head pose animations.


Video retargeting integration with AdultDiffusion for precise facial motion synthesis

Events Simulation

The V2E (Video-to-Event) simulator, with optimized parameter settings, was employed to generate realistic event modality data for both male and female classes. This allowed the creation of temporally aligned, high-quality event representations that complement the RGB-based synthetic data.
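The paper's exact V2E parameter settings are not reproduced here, but the core event-generation model that V2E implements is a per-pixel log-intensity threshold: a pixel emits a positive or negative event whenever its log brightness drifts past a contrast threshold since its last event. A minimal NumPy sketch of that principle (the `pos_thres`/`neg_thres` names mirror V2E's contrast-threshold parameters; this omits V2E's noise, bandwidth, and timestamp-interpolation modeling):

```python
import numpy as np

def simulate_events(frames, timestamps, pos_thres=0.2, neg_thres=0.2, eps=1e-6):
    """Generate DVS-style events from a sequence of grayscale frames.

    Each pixel fires an event when its log-intensity changes by more than
    the contrast threshold since the last event at that pixel -- the basic
    model underlying simulators such as V2E. Returns (t, x, y, polarity).
    """
    log_ref = np.log(frames[0].astype(np.float64) + eps)  # per-pixel reference
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_cur = np.log(frame.astype(np.float64) + eps)
        diff = log_cur - log_ref
        # Brightening pixels emit positive events
        ys, xs = np.where(diff >= pos_thres)
        events += [(t, x, y, +1) for x, y in zip(xs, ys)]
        # Darkening pixels emit negative events
        ys, xs = np.where(diff <= -neg_thres)
        events += [(t, x, y, -1) for x, y in zip(xs, ys)]
        # Reset the reference only where events actually fired
        fired = (diff >= pos_thres) | (diff <= -neg_thres)
        log_ref[fired] = log_cur[fired]
    return events
```

On a real portrait video, the per-frame loop is what V2E refines with frame interpolation so event timestamps fall between frames rather than exactly on them.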


V2E simulation results

Facial Landmark Detection

RGB and Event Facial Landmark Comparison Results. To qualitatively assess the synthetic events, their usability in the downstream task of facial landmark detection was tested. An event-based facial landmark detector that localizes 98 distinct points with high accuracy was used. This network was adapted from Pixel-in-Pixel Net (PIPNet); the original PIPNet, trained on RGB face images, was used to obtain reference facial landmarks on the RGB portrait videos.
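Comparing the event-based detector's 98 points against the RGB reference landmarks is typically quantified with the Normalized Mean Error (NME). A small sketch, assuming WFLW-style 98-point indexing where 60 and 72 are the outer eye corners (those indices are an assumption, not taken from the paper):

```python
import numpy as np

def normalized_mean_error(pred, ref, norm_pair=(60, 72)):
    """NME between predicted and reference landmark sets.

    Averages the per-landmark Euclidean distance, then normalizes by the
    inter-ocular distance measured between two assumed eye-corner indices
    of a WFLW-style 98-point layout.
    pred, ref: arrays of shape (98, 2) in pixel coordinates.
    """
    mean_dist = np.linalg.norm(pred - ref, axis=1).mean()
    inter_ocular = np.linalg.norm(ref[norm_pair[0]] - ref[norm_pair[1]])
    return mean_dist / inter_ocular
```

Normalizing by inter-ocular distance makes the score comparable across face sizes, which matters when RGB and event frames are rendered at different resolutions.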


Event and RGB Facial Landmark Results

3D Rendering Results

We generated 3D face texture from single 2D frames using a monocular reconstruction approach. This additional data modality enhances the dataset by providing realistic 3D geometry aligned with RGB samples, enabling downstream tasks like pose estimation, expression transfer, and identity-preserving animation.
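Monocular face reconstruction methods commonly relate the recovered 3D geometry back to the 2D frame through a weak-perspective camera model. The sketch below shows that projection step only; it is a generic illustration of the alignment between 3D geometry and RGB samples, not the paper's reconstruction pipeline:

```python
import numpy as np

def weak_perspective_project(vertices, scale, rotation, translation):
    """Project 3D face vertices onto the 2D image plane.

    Weak-perspective model typical of 3DMM-based monocular reconstruction:
    rotate by the head pose, drop the depth axis, then scale and translate.
    vertices: (N, 3), rotation: (3, 3), translation: (2,). Returns (N, 2).
    """
    rotated = vertices @ rotation.T           # apply head pose
    return scale * rotated[:, :2] + translation  # orthographic drop of z
```

Fitting `scale`, `rotation`, and `translation` so the projected vertices land on detected 2D landmarks is what keeps the 3D modality aligned with the RGB samples for downstream pose estimation and expression transfer.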


Validation Results


3D t-SNE Visualization
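Among the validation metrics listed above, KID (Kernel Inception Distance) is the unbiased squared MMD between real and synthetic image embeddings under a cubic polynomial kernel, averaged over random subsets. A single-subset sketch of that core statistic, assuming the Inception feature embeddings have already been extracted:

```python
import numpy as np

def polynomial_mmd2(X, Y):
    """Unbiased squared MMD with the cubic polynomial kernel used by KID.

    X, Y: feature embeddings of real and synthetic images, shapes
    (n, d) and (m, d). Full KID averages this statistic over random
    subsets of the two embedding sets.
    """
    d = X.shape[1]
    kern = lambda A, B: (A @ B.T / d + 1.0) ** 3   # cubic polynomial kernel
    Kxx, Kyy, Kxy = kern(X, X), kern(Y, Y), kern(X, Y)
    n, m = len(X), len(Y)
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))  # exclude diagonal
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    return term_xx + term_yy - 2.0 * Kxy.mean()
```

Unlike FID, the unbiased estimator behaves well at small sample sizes, which is why KID is often preferred when validating modest-sized synthetic datasets.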

Paper PDF

Related Links

Link to dataset samples.

Link to our models.

BibTeX

@ARTICLE{farooq2025synadult,
  title={SynAdult: Multimodal Synthetic Adult Dataset Generation via Diffusion Models and Neuromorphic Event Simulation for Critical Biometric Applications},
  author={Farooq, Muhammad Ali and Kielty, Paul and Yao, Wang and Corcoran, Peter},
  journal={IEEE Access}, 
  year={2025},
  volume={13},
  number={},
  pages={137327-137347},
  keywords={Face recognition;Three-dimensional displays;Synthetic data;Diffusion models;Older adults;Aging;Neuromorphics;Pipelines;Monitoring;Ethics;Multimodality;synthetic data;diffusion models;neuromorphic event imaging;privacy;V2E;ML},
  doi={10.1109/ACCESS.2025.3594875}}