GENERATIVE “AI” AND THE ARTS
This page documents all the reasons why I do not support generative “AI” art. Each section contains a brief overview with links to further reading, both on this site and elsewhere.
If you have found yourself wondering “Why is AI bad?” then I hope you find this information useful.
These are my conclusions based on the research I have carried out. You may reach different conclusions. Either way, if you found this page useful then please share it with others. The more we know, the better able we are to have productive conversations about the future of generative “AI” in the arts.
Generative "AI" art arrived in a big way in the middle of 2022. Other models had existed before then, but the explosion of diffusion models in 2022 led to a sudden and massive interest in generative "AI" art.
As a working artist, it was important for me to assess this new arrival, just as I had with other technological developments. Sometimes a new technology turns out to be a great tool you can add to your workflow. Sometimes it's NFTs.
What I'm saying is: each new technological advancement deserves to be looked at critically to see if it can be useful. Or might burn the world. This is not to say I approached it in an entirely unbiased way. Far from it. My gut reaction was to be suspicious of "AI" art machines, as they seemed to be trying to automate the best parts of the artistic process, but I was open to being convinced otherwise.
However, it was not long before I came to the conclusion that generative "AI" art was not something I wanted to include in my workflow. More than that, I had serious concerns about how it had been developed and what impacts it could have on the illustration industry and the wider world. I was not alone in those concerns. Many, many artists and supporters of the arts were just as horrified by what they uncovered. At the same time, many people felt the opposite and were keen to adopt generative "AI" art as part of their process.
Since then, these positions have only become more entrenched, with polite discussions all too quickly dissolving into shouting matches, insults, and nonsensical arguments (and I level that accusation against all sides in this debate). As much as I try to remain polite and constructive online, I have found it far too easy to be drawn into arguments about generative "AI" art.
Hence the creation of this page: a repository of all the reasons why I do not support the use of generative "AI", presented without the word-count limitations of a social media post or the heated arguments that all too often accompany them. You may disagree with my stance or reach different conclusions about how much we should worry about the issues documented below. But these are my views and I think it is important to share them.
Every generative "AI" image model - whether that's Midjourney, Stable Diffusion, or one of the many others - requires training data in order to work. A diffusion model without access to training data is just a complicated bit of code that cannot do anything.
Once the model has been trained, all of the training data is jettisoned. The model does not store the training data for ongoing use. But, to reiterate, a diffusion model cannot work unless it has first been trained.
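To make the idea of "training" concrete, here is a deliberately toy sketch of a single training step for a diffusion model, written in Python with PyTorch. Treat it as a minimal illustration under my own simplifying assumptions: the tiny network, the 32x32 image size, and the noise schedule are placeholders I have chosen for this example, not how any real product is configured. Real systems use vastly larger networks. But the core mechanic is the same: take an existing image, add noise to it, and teach the model to predict that noise. Without a batch of existing images, the step has nothing to operate on.

import torch
import torch.nn as nn

# Toy settings; real models use large U-Nets, not a tiny MLP.
T = 1000                                   # number of noising steps
betas = torch.linspace(1e-4, 0.02, T)      # noise schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

# Stand-in "denoiser" over flattened 32x32 RGB images, plus a timestep input.
model = nn.Sequential(
    nn.Linear(3 * 32 * 32 + 1, 256),
    nn.ReLU(),
    nn.Linear(256, 3 * 32 * 32),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(images):
    # images: a batch of real training images, shape (B, 3*32*32),
    # scaled to [-1, 1]. This is the data the controversy is about.
    b = images.shape[0]
    t = torch.randint(0, T, (b,))                        # random timestep per image
    a = alpha_bars[t].unsqueeze(1)                       # (B, 1)
    noise = torch.randn_like(images)
    noisy = a.sqrt() * images + (1 - a).sqrt() * noise   # corrupt the images
    inp = torch.cat([noisy, t.float().unsqueeze(1) / T], dim=1)
    pred = model(inp)                                    # predict the added noise
    loss = nn.functional.mse_loss(pred, noise)           # learn to undo the corruption
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# One step on random tensors standing in for a real image batch.
print(training_step(torch.randn(8, 3 * 32 * 32)))

Run across billions of images, over and over, this loop is what turns that "complicated bit of code" into a working image generator, which is why the provenance of those images matters so much.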
This training step requires a vast amount of data. Billions of images. And this is where the controversy comes in. People object to the methods the various companies used to obtain the billions of images needed to train their models.
At the better end of the spectrum, Adobe used images from Adobe Stock in order to train Firefly. While there are significant concerns about the fact that they didn't ask permission from their stock contributors first, these are images that they were legitimately allowed to use.
The same cannot be said for the other "AI" companies. The makers of Stable Diffusion and Midjourney, for example, trained their models on billions of images taken from across the internet. Images that, in most cases, were protected by copyright.
None of the above information is contested by either side of the debate. It is well established that this is what happened. The debate is around whether the indiscriminate vacuuming up of billions of copyrighted images from all across the internet is (a) morally acceptable and (b) legal.
My artwork was part of this process and, if you are interested in fully understanding how the training process works, I have written a detailed breakdown explaining how my illustrations ended up being used to train these "AI" models: Inside the Art Automaton
My view, having carried out that investigation, is that it was wrong of these companies to have done this.
If a company is creating a product, and they choose to use the work of other people in order to create that product, then I believe those people should consent to that happening and be appropriately compensated for it.
Whether it will turn out to be legally wrong is another question entirely.
Court cases are ongoing in various countries and it will be interesting to see how those pan out. But we are relying on legislation that was written well before generative "AI" was invented. The technology has evolved far beyond the legislation, and that is a problem.
For me, whether it ends up being given the green light from a legal perspective is irrelevant. The moral question has already been answered, and I am not happy to use or support an industry that has conducted this large-scale appropriation of other people's labour for its own profit-making use.
"AI" models have proven to be inherently biased.
They "see" the world through a particular lens and that leads them to produce output that tends towards a white, able-bodied, heteronormative, conventionally attractive, western "view" of the world.
When asked to produce output that deviates from this, they struggle. Either they cannot provide the output at all (interracial couples, beautiful obese people) or they render it without any nuance (LGBTQ+ people presented as stereotypical caricatures).
I have collected news articles and other reports on a range of different ways in which "AI" models demonstrate this bias, including the examples mentioned above: The World According to AI
Of course, bias isn't a problem that is restricted to "AI" models. But the widespread adoption of these biased models, coupled with the proven inability of users to critically assess the output they are receiving, is making the situation worse.
The risk is that we end up with a world in which art and images all start looking a certain way and only feature people who, themselves, look and act a certain way.
That is a world in which minority communities do not see themselves represented in art and images. That is a world in which people's understanding of what they could be is stifled because they are only exposed to images that show a very limited set of options.
Art is supposed to be expansive. It's supposed to open up new possibilities and new ways of thinking. It's supposed to be inspiring. Giving up control of the art we make to an "AI" model that has an extremely narrow "view" of the world is a dangerous, backwards step.
Scams are nothing new. Misinformation and abuse are also not new.
What is new is the scale and sophistication of the scams, the abuse, and the misinformation once generative "AI" is added into the equation. From scam offers to attend book events (for a fee, of course!) to fake copies of real books appearing on Amazon, generative "AI" has given bad actors new ways to abuse and scam people.
Worse than that is the extent to which generative "AI" is driving the increase in abuse against the most vulnerable.
"AI is both creating entirely new forms of abuse and dramatically amplifying existing ones. The scale and undetectability of AI create more widespread and significant harm than traditional forms of technology-facilitated violence." (UN Women)
I have written about the many different ways in which generative "AI" is being used and misused in my article Bad Actors and Bad Acts.
Let's be honest. I had just been subjected to an onslaught of NFT propaganda in the early 2020s (a technology that was positively gleeful about how bad it was for the environment), so when generative "AI" came along shortly afterwards it seemed instinctively true that it, too, would be bad for the environment.
So obvious did it appear that I didn't focus on this issue at first, choosing instead to find out more about the training process and how generative "AI" was being misused.
But we shouldn't lose sight of the fact that the environmental effects are really bad.
I plan to dig into the issues in more depth in an upcoming article (and I will update this page once that has been done), but for now I hope we can all agree that this is possibly the most serious issue with generative "AI", and one that none of these companies seems to be doing anything to address.
In the first half of 2025 alone, I have had two projects cancelled partway through when the client pivoted to "AI" instead. On top of this, I would not be surprised to find that fewer clients are approaching me in the first place because they have opted to use "AI" instead of hiring an artist. It is very hard to prove this, however.
What is not in doubt is that there is significant anecdotal evidence that this is happening across the sector.
My larger concern is not for my own ongoing career (although I am concerned) but for up-and-coming artists just starting out. This was one of the obvious impacts of generative "AI", right from the beginning. As I wrote in Automating the Creative Process:
We care because there are young artists, just starting out, who need opportunities to find a place in this industry. As generative AI gobbles up a greater and greater share of the jobs in the marketplace, the routes into the industry for young artists become fewer and fewer. And the fewer new viewpoints, experiences, and world views enter the industry, the more staid and pedestrian the art world will become.
This issue extends beyond the art sector to the whole workforce. An Axios article recently reported on a Stanford University study that found a 16% drop in employment for younger workers in sectors affected by "AI".
Even if demand is steady or rising for more experienced workers, industries will struggle to find the next generation of experienced workers if people can't get a first job in the field.
I have focused on generative "AI" models that create images because those are the models that I have spent the most time researching. They are also the ones that affect me directly.
But this issue is much bigger than just images. Similar models have sprung up for writing (ChatGPT and others), video production, translation, music, and narration.
Each model will work in a slightly different way, but they all have a similar underlying structure: a clever bit of code that requires enormous amounts of existing material in order to be trained. And, as in the case of images and art, that material has largely been vacuumed up from across the internet without consent or compensation.
Just as we have seen above in the case of visual art, there are similar issues across all of these different models when it comes to bias, environmental impact, loss of jobs and opportunities, and the misuse of these new tools.
It is important that any stand against generative "AI" encompasses all of these fields. I believe it would be hypocritical to reject the use of generative "AI" in my own field (art) while happily using it in another.
The articles below are just a small sample of stories about these issues in other creative fields:
Audible unveils plans to use AI voices to narrate audiobooks, Guardian
The Unbelievable Scale of AI’s Pirated-Books Problem, The Atlantic
Survey finds generative AI proving major threat to the work of translators, Guardian
AI claims and a hoax spokesman: Viral band confuses the world of music, BBC
AI was enemy No. 1 during Hollywood strikes. Now it's in Oscar-winning films, BBC
The above sections detail the reasons why I believe generative “AI” is harmful and should not be used in the arts.
There are still some popular talking points and common questions that have not been covered in the information above, so I have given my take on those below.
The term "AI" has been slapped onto so many different types of technology now that it has become functionally useless. It is an umbrella term for a whole raft of different technologies that, in many cases, have nothing to do with one another.
I have used it throughout this page because it is easier than typing out "diffusion models" every time.
At the same time, it is important to keep reminding ourselves that these diffusion models are not the same as Large Language Models. They are not the same as machine learning models deployed in medical research. They are not the same as the computers that can beat you at chess. They are not the same as the model that predicts the weather. Yet, we have decided to call all of these things "AI".
The problems I have outlined on this page are mostly issues that I have specifically with diffusion models. It is important to remember that an objection to diffusion models does not mean I am against cancer-diagnosing machine learning models. I'm not.
I have used the term "AI" because it is a useful shorthand, but I don't want you to forget that it's not really the correct term. Hence the quotation marks.
New "AI" models keep popping up. Some appear to be front end applications over the top of existing models. Others seem to be genuinely new models. Many of these market themselves as "ethical", clearly recognising that generative AI has obtained a reputation as being unethical.
Is it possible for one of these new models to actually be "ethical"? I don't see why not. They would need to overcome all of the issues identified above (the environmental concerns, the training issues, the bias, guarding against misuse, and so on), but it should be possible for a company to achieve that.
Do I think any of them have overcome all of these issues? No. I don't.
What incentive is there for them to do that? Is it not easier to make a few improvements and then slap an "ethical" label across all of their marketing?
I will keep one eye on this and update this page if and when I see something that convinces me that it is different. That it is truly "ethical". But I'm not holding my breath.
This is a thorny issue. More to come on this soon.
Yes. That's probably true.
It doesn't mean I'm wrong, though.
I am not anti-technology. I paint almost all of my illustrations using Photoshop on a PC or a tablet. I post my art to social media using my phone. I have a website. If a new technology is either useful (e.g. my iPad) or has become so ubiquitous that it's difficult to operate without it (my phone), then I will make use of it.
Generative "AI" is neither of these things.
As someone affected by the loss of work to "AI", I am of course not entirely neutral. But I have approached all of my investigations into generative "AI" aware that I had some preconceived negativity towards it. I was open to being convinced that this new technology had been implemented with care and consideration, and that it could be useful to me.
I did not find anything that convinced me.
Instead, I found that my preconceptions were, if anything, underplaying the problems. The issues with "AI", as outlined above, go much further than I had initially imagined.
Yes, I was biased, but I did the work to investigate these new tools. What I found only strengthened my beliefs.
