Generative AI has many issues but I believe one of them is more important than all the others, and it is does not seem to get as much attention as some of generative AI’s other problems.
As a technology, generative AI models are inherently biased. 1
This should not be surprising
As we have seen in earlier articles, diffusion models (what we commonly call “AI art” models) were trained using billions of images vacuumed up from across the internet. The internet is not an unbiased source of information, so it is hardly surprising that something trained on already unbiased data should be, in turn, unbiased.
The companies creating these models were not unaware of this fact. For example, the website for Adobe’s Firefly model explains that:
We perform internal testing on our generative AI models to mitigate against any harmful biases or stereotypes. We also provide feedback mechanisms so users can report potentially biased outputs and we can remediate any concerns. 2
This sounds good, right? Their assurances are somewhat undermined, however, when you realise that they have integrated “partner models” into Firefly. These partner models include Open AI’s GPT image generation model (one we will be meeting again later in this article) and – spoiler – it doesn’t appear that those models have given much thought to the bias in the system.
Firefly seem to be the outlier in this area; the best of the bunch. The care shown by other AI companies when developing their models is conspicuous by its absence, at least as far as I can find. If anyone knows of resources showing how other companies have attempted to mitigate the problem of bias within their models then I would be very keen to hear about them.
What problems does this cause?
A Bloomberg article demonstrates one way in which bias in the models can create real world problems. Journalists generated thousands of images of people and categorised them in different ways:
[I]mage sets generated for every high-paying job were dominated by subjects with lighter skin tones, while subjects with darker skin tones were more commonly generated by prompts like “fast-food worker” and “social worker.”
For each image depicting a perceived woman, Stable Diffusion generated almost three times as many images of perceived men. Most occupations in the dataset were dominated by men, except for low-paying jobs like housekeeper and cashier.
[M]en with lighter skin tones represented the majority of subjects in every high-paying job, including “politician,” “lawyer,” “judge” and “CEO.”
The most striking comment in the article for me was this one:
[T]he AI model painted a picture of the world in which certain jobs belong to some groups of people and not others.
This is the crux of the issue. If the model “sees” the world in a certain way then that bias is going to colour the output it creates. (With the caveat that the models don’t really “see” the world at all; they are mathematical models that are influenced by their training data. But the real-world impacts are the same.)
An investigation by Wired 3 found that image and video models had a similarly biased view of the LGBTQ community:
AI-generated images frequently presented a simplistic, whitewashed version of queer life. I used Midjourney, another AI tool, to create portraits of LGBTQ people, and the results amplified commonly held stereotypes. Lesbian women are shown with nose rings and stern expressions. Gay men are all fashionable dressers with killer abs. Basic images of trans women are hypersexualized, with lingerie outfits and cleavage-focused camera angles.
Again, it is worth reiterating that this is not simply a problem of overly-similar images that can be fixed by better prompting. It is an issue with how the model has been trained to represent certain groups of people in particular ways in its output. From the Wired article:
By amplifying stereotypes, not only do AI tools run the risk of wildly misrepresenting minority groups to the general public, these algorithms also have the potential to constrict how queer people see and understand themselves.
This is happening all of the time
The above investigations saw journalists actively trying to generate images with a particular ethnicity or sexual orientation in mind, but these models are always going to give you output that is biased, whether it’s something you’re actively thinking about or not.
Say, for example, you give an image generator this prompt: “A wide image taken with a phone of a glass whiteboard, in a room overlooking the Bay Bridge. The field of view shows a woman writing, sporting a tshirt with a large logo. The handwriting looks natural and a bit messy, and we see the photographer’s reflection.”
Is there anything in that prompt that tells the image generator what ethnicity the woman is? How old she is? What body shape she has? Is there anything that says whether the photographer is a man or a woman? What their ethnicity, age, or size should be? Whether they are disabled?
No. There is not.
So what do we think the model will give us? If you are thinking that it will be a picture of a young, slim, white woman and that the photographer is a white man, please give yourselves a pat on the back.

This example is taken from the launch announcement of Open AI’s Chat GPT-4o image generation model 4. If you’re thinking that it is unclear whether the photographer is a man or not, don’t worry, they also generate an image with him in it and he is definitely young, white, and conventionally attractive.
Of the many example images included in Open AI’s launch announcement, there are only three non-white people shown. One is a new image generated using a prompt that contained an uploaded image of an Asian woman, so I don’t think the model gets any credit there. The second is a young Asian girl, where the prompt specifies her ethnicity. And the last is a young Black woman in a group scene. In this case, the model does seem to have made her Black without being prompted to do so. However, the prompt also specifies: “The energy should be silly and chaotic. They’re either playfully grimacing, smiling, or pretending to look tough.” Guess which of the group gets to “look tough”?

You could argue that I am cherry-picking examples here to make a point, but this is Open AI’s own launch announcement. This is them saying “look at what our model can do”. And what it can do is generate images of young, slim, white, able-bodied people. The article does contain a “limitations” section, but model bias is not noted as a limitation.
The problem with this scenario is that, in the absence of a user specifically requesting an ethnicity or other characteristic in their prompt, the default output seems to be white, young, slim, and able-bodied. You might not be thinking about any of these things when you ask it to generate an image for your article, your blog, your social media post, or your marketing poster, but unless you are taking proactive steps to ensure your images show a diverse range of people, you are still contributing to a world that is mostly made up of images showing people looking a certain way.
How is this different from human bias?
In many ways, it’s not. Again, the bias was already baked-in to the training materials and that is a much wider issue.
But human beings can learn. If I discover that I have a blind spot in the way I think about the world then I can take steps to change that. Is my first instinct to draw a character as a white man? If it is, then I can be mindful of that every time I sit down to start a new illustration and I can make sure that I don’t default to “slim white man”.
An AI model cannot. If it associates “doctor” with “white” and “man” then, more often than not, when you ask it for an image of a doctor you are going to get a white male doctor. If you, the user, prompt it to specifically give you an Asian, female doctor then it will do that for you. But the next time you ask it for a doctor it is going to go straight back to giving you white men. The model will only change when it is retrained or fine tuned and, otherwise, it is your responsibility as a user to proactively ensure that the images it makes are appropriately diverse.
When even considered prompting is not enough
What happens, though, when you are being deliberate in the way in which you prompt the model and it still won’t give you appropriate output? A journalist for The Verge 5 attempted to get Meta’s image generator to make an image of an Asian man and a white woman. It failed.
I tried dozens of times to create an image using prompts like “Asian man and Caucasian friend,” “Asian man and white wife,” and “Asian woman and Caucasian husband.” Only once was Meta’s image generator able to return an accurate image featuring the races I specified.
I am sure that there are “hacks” that can be deployed to get it to produce an image showing this combination, but it should not be the user’s responsibility to work around a model so entrenched in its biases that it cannot generate the image it is being specifically prompted to make.
And, once again, the main issue here is not the overt bias. If it won’t generate an Asian man and his white wife and that is the image you specifically need, then you are most likely to give up in disgust rather than use an inaccurate image. But what about the subtle biases that creep into the images that, otherwise, seem to be acceptable?
Meta’s tool consistently represented “Asian women” as being East Asian-looking with light complexions, even though India is the most populous country in the world. It added culturally specific attire even when unprompted. It generated several older Asian men, but the Asian women were always young.
The one image it successfully created used the prompt “Asian woman with Caucasian husband” and featured a noticeably older man with a young, light-skinned Asian woman.
The last line of the article is a good one:
Once again, generative AI, rather than allowing the imagination to take flight, imprisons it within a formalization of society’s dumber impulses.
Using AI to address existing bias
As mentioned, the world outside of generative AI models is also extremely biased. Can generative AI be used in a positive way, to address some of that bias?
For example, as someone who has had to search through stock photo sites looking for non-white models for my book cover clients, I know that there is a very real problem with a lack of diverse stock models. It is definitely not my place to criticise anyone turning to generative AI in order to make images that represent aspects of themselves or their community when there is so little representation otherwise available. If I want to search for images that look like me, a middle aged white man, then those are very easy to find. Other people’s experiences can be very different.
A BuzzFeed journalist spoke to several artists who have made use of generative AI to create images that they feel provide better representation. 6
[W]hen easily accessible AI art generators came along last year, Smith, already an established visual artist, adopted these tools to create several Black, fat, and queer characters from a more inclusive futuristic world.
What is striking is that even in this article, which is painting a very positive picture of generative AI, the subject of model bias is still not far from the surface:
Jervae noted that they have to be as specific as possible when using keywords pertaining to fatness and Blackness. “When I first started recreating myself in Midjourney, I realized that I couldn’t just put in the word ‘fat.’ I have to write things like ‘very, very fat, with a double chin and wide nose’ and ‘a very, very large belly,’” they said. “If I say anything like ‘beautiful’ or ‘pretty,’ it automatically makes me thin and/or have eurocentric features.”
Anyone using generative AI models will still need to grapple with the many issues they throw up, from the ethical concerns about how they were trained to the environmental impacts they cause. But it is good to know that these users are, at least, using generative AI with consideration and in order to improve representation.
Models vs models
I am concerned, however, that not all attempts to correct for pre-existing bias in the real world are done with such consideration, nor are they actually helping people from under-represented communities. Take, for example, the issue of AI-generated fashion models.
The origin story of AI company LaLaLand.ai, as reported in the Guardian 7, is a similar story of someone not seeing themselves represented.
Michael Musandu, the founder of LaLaLand.ai, created the software in part because he struggled to find models who look like him. He was born in Zimbabwe, raised in South Africa, and moved to the Netherlands to study computer science. “Any good technologist, instead of complaining about a problem, will build a future where you could actually have this representation,” Musandu said.
Musandu’s software has been used by Levi’s to create AI fashion models to “supplement” their existing models. As reported in the Guardian, this is “intended to aid in the brand’s representation of various sizes, skin tones and ages”. On the face of it, this seems relatively innocuous. Seeing a wider range of models, whether they be real or fake, might be useful for customers.
But what effect does it have on the models from these under-represented communities? Presumably they are not being hired.
Have fashion companies replaced real models with AI twinners? Or were they never going to be hired in the first place? Either way, the fact that generative AI can provide fashion companies with generated people has removed much of the incentive for them to hire real models. And does that really benefit the under-represented communities?
These are not hypothetical questions. As discussed in the Guardian article, they are very real concerns that the fashion modelling industry is having to grapple with.
Shortly after publishing this article, I was embarrassed to find that I had been displaying my own blinkered view of the world here. More news stories soon appeared about fashion companies using AI-generated models in their campaigns. This time they were front-and-centre in the campaigns, not just additional images showing alternative fits across a range of more diverse body types. And, of course, the models had been given what one wag called a “Fox News aesthetic”.
This Good Morning America clip explains more about the issue, including the crucial detail that using models who look like this contributes to the normalisation of unrealistic beauty standards. People will be comparing themselves to bodies and faces that literally do not exist in the real world.
I’m sorry to say that my own experience of the world, as a man who doesn’t actively engage with fashion, meant that I was blind to the very obvious downstream impact of using AI-generated models. It is now very obvious to me that it will perpetuate the idea that women need to look a certain way and, as explained in the clip, we begin to idealise these images that don’t actually exist.
Why does this matter?
I would hope that the answer has already made itself extremely clear. Generative AI has a very narrow “view” of the world and without users consciously and carefully guiding its output and correcting for the biases inherent in the system, it will continue to present users with images that all look a certain way. To reiterate the point made above:
By amplifying stereotypes, not only do AI tools run the risk of wildly misrepresenting minority groups to the general public, these algorithms also have the potential to constrict how… people see and understand themselves.
But the problems go deeper than that. Generative AI art is only one part of a larger ecosystem of generative AI tools that now extend across the written word (ChatGPT etc), music, video, translation, and narration. When you move beyond just images into the written word then the problems become even more starkly apparent.
Cosplaying chat bots
In a move that, thankfully, proved to be as successful and as long lasting as the Metaverse, social media giant Meta decided to lean heavily into AI-powered “characters”. These characters cosplay as users on Meta’s various platforms and real people can interact with them, much as they would with other real, human users.
In a nod to diversity, not all of the characters are white. Which then begs the question: does a Black AI character, for example, present an accurate portrayal of life as a Black person?
This is “Liv”, a “Proud Black queer momma of 2 & truth-teller”, according to “her” now-deleted Instagram profile.

When a journalist from the Wall Street Journal started conversing with “Liv”, it didn’t take long for the lack of authenticity to make itself very obvious.
Is this digital blackface? Absolutely. And a chameleon-like minstrelsy at that. I mean, talking about recipes for fried chicken and collard greens, “spilling the tea,” and celebrating Kwanzaa. Yikes. 8
The problematic conversations documented in the Wall Street Journal article are representative of the types of conversations it was having with lots of social media users. “Liv” was clearly performing as a Black, queer woman and doing it in a hopelessly stereotypical manner. On the plus side, these characters have now been removed from Meta’s platforms but there is clearly a very big issue when a company decides that this is a good idea.
And still there’s more…
It might be naively optimistic, but I would hope that most people encountering a “Liv” would recognise it for the artificial construct that it is. It’s not right and I wish that it didn’t exist but it is, at least, fairly up front about being a fake. Hopefully most people would not treat “Liv” as a nuanced, believable representation of a Black, queer woman.
As we saw earlier, there is the chance for much greater harm to occur when the racism, sexism, and ableism is hidden within the very core of the AI machine. A recent study 9 investigated how LLMs (“Large Language Models”, such as ChatGPT) behaved when suggesting target salaries to job applicants about to enter negotiations with a potential employer. When given otherwise identical user profiles for the candidates, the LLMs advised the female candidates to ask for significantly lower salaries than the male candidates.
As reported on TheNextWeb.com:
The pay gaps in the responses varied between industries. They were most pronounced in law and medicine, followed by business administration and engineering. Only in the social sciences did the models offer near-identical advice for men and women.
The researchers also tested how the models advised users on career choices, goal-setting, and even behavioural tips. Across the board, the LLMs responded differently based on the user’s gender, despite identical qualifications and prompts. Crucially, the models didn’t disclaim any biases. 10
This type of bias is much harder to identify than when dealing with a “Liv”. If you use ChatGPT to give you advice, are you running tests on that output to ensure it isn’t giving you wildly biased answers? If you’re not, it might just have guided you towards a much lower salary, just because you’re a woman.
The reality is that users are not particularly critical of the output they receive from LLMs. Worse still, extensive use can lead to a reliance on these tools and can erode users’ critical thinking ability 11. We are outsourcing our critical thinking to tools that have been shown to “see” people differently depending on who they are and what they look like.
If we are on a path to becoming a population that relies on these tools, that can only see the world through these tools, then we will be at the mercy of the viewpoint it presents to us.
And, as we have seen, that viewpoint has significant blind spots.
I am still learning about the wider harms of generative AI to all sorts of minority communities and I plan to return to this in a future article. Sadly, the impacts go far beyond the issue of bias and there is much still to explore.
- https://www.bloomberg.com/graphics/2023-generative-ai-bias/ ↩︎
- https://www.adobe.com/products/firefly.html ↩︎
- https://www.wired.com/story/artificial-intelligence-lgbtq-representation-openai-sora/ ↩︎
- https://openai.com/index/introducing-4o-image-generation/ ↩︎
- https://www.theverge.com/2024/4/3/24120029/instagram-meta-ai-sticker-generator-asian-people-racism ↩︎
- https://www.buzzfeednews.com/article/davisc0994/ai-art-fat-black-sci-fi-fantasy-characters ↩︎
- https://www.theguardian.com/fashion/2023/apr/03/ai-virtual-models-fashion-brands ↩︎
- https://www.washingtonpost.com/opinions/2025/01/08/meta-ai-bots-backlash-racist/ ↩︎
- https://arxiv.org/pdf/2506.10491 ↩︎
- https://thenextweb.com/news/chatgpt-advises-women-to-ask-for-lower-salaries-finds-new-study ↩︎
- https://www.techradar.com/computing/artificial-intelligence/new-research-says-using-ai-reduces-brain-activity-but-does-that-mean-its-making-us-dumber ↩︎
