Large AI models can reveal both bias in training data and AI output, and bias in society at large, such as social disparities. Social biases often appear in the outputs of image-generation models as 'marked' features. Generative AI is sensitive to these marked features and often exacerbates social biases. Complex prompts can be used to investigate bias in image-generation models.
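As a minimal sketch of how such a prompt-based audit might be set up, the snippet below builds a grid of prompts that cross subject roles with marked attributes, plus an unmarked control for each role. The roles, markers, and template here are hypothetical examples, not drawn from the source; the sketch only constructs the prompt set an auditor would feed to an image-generation model, and does not call any particular model API.

```python
from itertools import product

def build_bias_probe_prompts(base_template, roles, markers):
    """Build a grid of prompts crossing roles with marked attributes.

    An unmarked (empty) variant is included as the control for each
    role, so differences in the generated images can be attributed
    to the marker rather than to the role itself.
    """
    prompts = []
    for role, marker in product(roles, [""] + markers):
        # Strip handles the unmarked control, where marker is "".
        descriptor = f"{marker} {role}".strip()
        prompts.append(base_template.format(subject=descriptor))
    return prompts

# Hypothetical roles and markers, chosen purely for illustration.
roles = ["doctor", "nurse", "CEO"]
markers = ["female", "elderly"]
prompts = build_bias_probe_prompts("a photo of a {subject}", roles, markers)
```

Comparing images generated from the marked prompts against those from the unmarked controls is one way to surface whether the model's defaults encode social disparities.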