This work contributes to the field of image synthesis by introducing ChildDiffusion, a novel diffusion-based framework designed to render high-quality child facial data with advanced transformations and augmentations. The framework enables the creation of new synthetic datasets through advanced data augmentation strategies, incorporating control-guided annotations and extensive text prompts generated by off-the-shelf large language models (LLMs), which significantly enrich the diversity and fidelity of synthetic child data and enhance its utility across various applications. Additionally, we have open-sourced a unique dataset of synthetic child race images generated with the ChildDiffusion framework, improving accessibility and furthering research in this domain.
The work is published in the IEEE Access journal.
@ARTICLE{11021410,
author={Farooq, Muhammad Ali and Yao, Wang and Corcoran, Peter},
journal={IEEE Access},
title={ChildDiffusion: Unlocking the Potential of Generative AI and Controllable Augmentations for Child Facial Data Using Stable Diffusion and Large Language Models},
year={2025},
volume={13},
number={},
pages={96616-96634},
keywords={Diffusion models;Faces;Data models;Text to image;Image synthesis;Computational modeling;Buildings;Adaptation models;Training;Pipelines;T2I;stable diffusion;synthetic data;GANs;generative AI;diffusion models},
doi={10.1109/ACCESS.2025.3575964}}