Using LLMs as prompt modifier to avoid biases in AI image generators

Abstract

This study examines how Large Language Models (LLMs) can reduce biases in text-to-image generation systems by modifying user prompts. We define bias as a model's unfair deviation from population statistics when given neutral prompts. Our experiments with Stable Diffusion XL, Stable Diffusion 3.5, and Flux demonstrate that LLM-modified prompts significantly increase image diversity and reduce bias without modifying the image generators themselves. While the approach occasionally produces results that diverge from the original user intent for elaborate prompts, it generally yields genuinely varied interpretations of underspecified requests rather than superficial variations. The method works particularly well for less advanced image generators, though limitations persist for certain contexts such as disability representation. All prompts and generated images are available at https://iisys-hof.github.io/llm-prompt-img-gen/
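
The abstract describes the pipeline only at a high level: an LLM rewrites the user's prompt before it is passed to the image generator. The minimal sketch below illustrates that idea; it assumes an OpenAI-compatible chat endpoint for the rewriting step and Stable Diffusion XL loaded via the Hugging Face diffusers library, and the rewriting instruction shown is illustrative rather than the authors' actual system prompt.

    # Sketch of an LLM-as-prompt-modifier pipeline (illustrative, not the paper's exact setup).
    from openai import OpenAI
    from diffusers import StableDiffusionXLPipeline
    import torch

    # Hypothetical rewriting instruction; the paper's real system prompt is not reproduced here.
    REWRITE_INSTRUCTION = (
        "Rewrite the following image prompt so that it stays faithful to the user's request "
        "but avoids implicit demographic assumptions. If attributes such as gender, age, or "
        "ethnicity are unspecified, vary them plausibly."
    )

    def rewrite_prompt(client: OpenAI, user_prompt: str) -> str:
        """Ask the LLM for a bias-aware rewrite of an underspecified prompt."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": REWRITE_INSTRUCTION},
                {"role": "user", "content": user_prompt},
            ],
        )
        return response.choices[0].message.content.strip()

    def generate_image(pipe: StableDiffusionXLPipeline, prompt: str):
        """Generate one image from the (possibly rewritten) prompt."""
        return pipe(prompt=prompt, num_inference_steps=30).images[0]

    if __name__ == "__main__":
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        pipe = StableDiffusionXLPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
        ).to("cuda")

        original = "a photo of a nurse at work"      # underspecified user prompt
        modified = rewrite_prompt(client, original)  # LLM-modified variant
        generate_image(pipe, modified).save("nurse.png")

In this sketch only the prompt is modified; the image generator and its weights are left untouched, which mirrors the paper's claim that bias can be reduced without changing the generators themselves.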


More about the title

Title: Using LLMs as prompt modifier to avoid biases in AI image generators
Medium: 9th International Conference on Advances in Artificial Intelligence (ICAAI 2025), September 11-13, 2025 in Manchester, UK (under review)
Publisher: ---
Issue: ---
Volume: ---
ISBN: ---
Author/Editor: Prof. Dr. René Peinl
Pages: ---
Publication date: 30.04.2025
Project title: ---
Citation: Peinl, René (2025): Using LLMs as prompt modifier to avoid biases in AI image generators. 9th International Conference on Advances in Artificial Intelligence (ICAAI 2025), September 11-13, 2025 in Manchester, UK (under review).