Bad Actors and Bad Acts

When we consider the harms of generative AI, “misuse” of the technology is clearly one of the big issues.

I say “misuse”, in quotation marks, because it’s not clear to me that the companies behind these tools have offered much guidance on what counts as acceptable use and what counts as “misuse” of their products.

One final note before we jump into some examples: defenders of generative AI will point out that it isn’t the technology that’s the problem, it’s the people. I don’t disagree. But the rise of generative AI has given people new ways to create harm, faster and in greater quantity than ever before. Yes, it’s the people. But generative AI is the means by which they are doing these harmful things.

Bad people doing bad things

Scams are nothing new. Versions of the Nigerian prince scam have been running since the 18th century. But the internet opened up new opportunities for scammers, making it easier to send scam letters in bulk and far more cheaply. Generative AI has provided a way for scammers to generate ever more convincing scam letters, even faster. 1

I mostly work within the author and publisher community, so let’s look at an example from that sector. An attempt to cover the full range of scam behaviour being carried out using generative AI would take far too long.

Author Jason Sanford has written an excellent overview of one of the most common scams currently targeting authors:

[The scammer] was offering what’s being called the book club scam, something a large number of authors have received in recent weeks. [They] claimed to be the “curator of a private community of over 2,000 readers who devour books like caffeine addicts in a library” and offered to share my book with those readers.

For a price, of course.

This is a form of the Nigerian prince scam and, in that sense, is nothing new. What is new is the use of generative AI to make the approaches specific to the author being targeted. Generative AI allows the scammers to personalise their emails with details of an author’s books, in grammatically correct English, while still operating at scale. Of course, they don’t always get it right (that’s a feature of LLMs, after all), and some very confused authors have received emails talking about books that don’t exist. But when they do get it right, and authors receive an email full of praise for their book, the allure of a book club appearance is easy to see.

But there’s a catch, as noted in this report from Writers Beware:

The catch, as you’ll doubtless have guessed, is that the author has to pay a fee for their appearance, variously described as a “spot fee” or a “spotlight fee” or a “spot-securing fee” or a “participation fee”. (Needless to say, real book clubs don’t charge fees to their guests). 2

The lesson: if you receive an unsolicited approach from an organisation, club, or any other outfit wanting to talk up your book, it pays to be cautious. If there’s a fee involved, walk away immediately.

Another potentially profitable avenue for scammers is to generate “AI slop” versions of existing books. Using generative AI, they can pump out similar-sounding and similar-looking “copies” of real books and upload them to Amazon. 3 The hope is that unsuspecting customers will buy the book, thinking it is the real thing.

Comedian Rhys James is one of many authors targeted this year, with at least five different versions of his book appearing on Amazon ahead of its official release. Sam Blake is another author similarly targeted, with new books “by her” appearing on her Amazon profile page. 4

Scams are an annoyance but, even with the increase in scale and sophistication that comes from generative AI, they can still be guarded against by staying vigilant and sceptical. Other misuses of this technology are far, far worse and much harder to do anything about.

In an article about AI-Powered Online Abuse, UN Women shared the stark fact that 85% of women online have witnessed digital violence against others, with 38% having personally experienced it. And generative AI is making it worse.

AI tools target women, enabling access, blackmail, stalking, threats and harassment with significant real-world consequences – physically, psychologically, professionally, and financially. 5

The article is unequivocal about the impact of generative AI:

AI is both creating entirely new forms of abuse and dramatically amplifying existing ones. The scale and undetectability of AI create more widespread and significant harm than traditional forms of technology-facilitated violence.

I would really recommend reading the entire article as it documents, with examples and worrying statistics, the scale of the problem and the role that generative AI platforms are playing.

AI technology has made the tools user-friendly and one doesn’t need much technical expertise to create and publish a deepfake image or video.

You can find out how to support the UN Women campaign against gender-based violence at https://www.unwomen.org/en/get-involved/16-days-of-activism

Good people doing bad things

All of the above examples could be countered with the statement made at the top of this article: it’s not the technology, it’s the people. Yes, generative AI made it easier to carry out these scams, but it comes down to people misusing the technology rather than it being an issue with the technology itself.

I would argue that if you are a company releasing a product into the world then you have a responsibility to put guardrails in place to ensure that it can’t be used for nefarious purposes (or, at least, to make that very difficult). I would also argue that the normalisation of generative AI has encouraged people to use it in all sorts of illegal and unpleasant ways. It is my opinion that if we didn’t have generative AI then there would be significantly fewer non-consensual deepfakes out there in the world.

But even if you believe that the technology is blameless and the problem lies with bad actors, what about situations where there is no ill intent from the users but bad outcomes happen anyway? In those situations, we can’t put all of the blame on the people.

My earlier article, The World According to AI, explores how generative AI models have biases built into them. The lens through which they “view” the world (to anthropomorphise the models) means that they tend to produce output (images, for example) that is white, straight, able-bodied, and western-centric. This means that even people with good intentions will still cause harm unless they actively fight against the biases built into the AI models. That is not an example of a bad actor but of a bad product, and I would argue it still produces a bad outcome.

You could argue that this isn’t a true example of “misuse” of generative AI, and you would have a point: the technology is doing what it is intended to do. That doesn’t mean no harm is being done, however.

Dumb people doing dumb things

We have talked about the outright scams and criminal uses of generative AI. We have talked about the unintended harms that can arise from its “well-intentioned” use. Then there is a third category: people using generative AI in lazy, dumb, or cost-cutting ways.

Earlier this year, the Chicago Sun-Times published a summer reading list that featured several books that don’t exist. 6 The list had been at least partially written with generative AI, which invented many of the books on it, complete with detailed descriptions of their (non-existent) plots.

The legal profession has also fallen victim to generative AI’s tendency to convincingly invent things, with multiple reports of lawyers citing made-up case law in their court filings. 7

A Christmas mural in London was hastily removed after it drew waves of criticism and ridicule for being clearly AI-generated. 8 Described as “Lovecraftian” and “an abomination”, it featured distorted faces, a dog/seagull hybrid, and a deformed snowman, among many other horrors.

Delivery company DPD deployed a chatbot that, with a little encouragement, was soon swearing and composing poetry about how bad a company DPD is. 9 A “recent update” to the chatbot was blamed, with no one, it seems, having tested what guardrails should be in place before letting bored, probably pissed off, customers chat with it.

An “AI teddy bear” went rogue and started bringing up extremely inappropriate topics when conversing with researchers. 10 The company had tapped into OpenAI’s LLM service to generate its conversational responses, but appeared not to have put much effort into safeguarding which topics it should (and shouldn’t!) talk about.
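The frustrating thing is that a baseline check is not hard to build. Here is a minimal sketch of what one might look like, assuming the OpenAI Python SDK; the model names, blocked-topic list, and refusal message are purely illustrative, and this is not how DPD or the toy maker actually built their products:

```python
# A minimal sketch of input/output guardrails for a consumer-facing chatbot,
# using the OpenAI Python SDK. Model names and the blocked-topic list are
# illustrative; this is not how any of the companies above built their bots.
from openai import OpenAI

client = OpenAI()

BLOCKED_TOPICS = ["weapons", "sexual content", "self-harm"]  # hypothetical list
REFUSAL = "Sorry, I can only help with questions about your delivery."


def is_flagged(text: str) -> bool:
    """Ask the moderation endpoint whether the text is unsafe."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    return result.results[0].flagged


def reply(user_message: str) -> str:
    # 1. Screen the incoming message before it ever reaches the LLM.
    if is_flagged(user_message):
        return REFUSAL

    # 2. Constrain the model with a narrow, task-specific system prompt.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a customer-service assistant for a parcel "
                    "delivery company. Politely refuse to discuss anything "
                    "unrelated to deliveries, especially: "
                    + ", ".join(BLOCKED_TOPICS)
                ),
            },
            {"role": "user", "content": user_message},
        ],
    )
    answer = completion.choices[0].message.content or REFUSAL

    # 3. Screen the model's answer on the way out as well.
    return REFUSAL if is_flagged(answer) else answer
```

It is far from foolproof (determined users will still find ways around it), but it is the kind of baseline check that these incidents suggest was never put in place.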

Elsewhere, website “developers” jumped on the AI bandwagon. Excitable AI enthusiasts power.ai shared a video in late 2023 showing how they had set up a website in seconds using a service called 10web. Leaving aside the potential trademark issues of setting up a website that fuses “the world of Marvel with fitness”, the video boasts that all of the images and content were created by the AI service, including the client testimonials.

Asking an AI service to write your marketing copy is one thing. Getting it to write fake testimonials from fake people is another thing entirely.

Companies doing bad things

Finally, we must consider the platforms that are enabling this AI-generated content to be created and shared.

Take Facebook. The mountain of “AI slop” that has overrun that site was the subject of a Last Week Tonight segment this year. It highlighted just how credulous people are when it comes to these (to me) extremely obvious AI images. I recommend watching the entire segment as it gives a good overview of the type of content being generated, how and where it is being distributed, and why people might be doing it.

They reported on how, in the wake of a recent flood in America, people began sharing AI-generated images of “survivors”. This was, understandably, confusing for the emergency services, who use social media as one way to find people in need of rescue, and it resulted in resources being allocated to search for people who weren’t really there.

Clearly, tech companies are not doing much in the war against misinformation and the misuse of this technology. The UN Women campaign discussed above has, as one of its four campaign aims:

Make tech companies step up by hiring more women to create safer online spaces, removing harmful content quickly, and responding to reports of abuse.

It feels as though we are a long way from achieving that aim, particularly as companies accelerate their efforts to create ever-more-convincing AI output.

Which brings us to Sora, currently the number one app on the Google Play Store. It is OpenAI’s new video-generation and sharing app: a closed ecosystem of 100% AI-generated videos presented to you in an endless slop parade.

The “Cameo” feature within the app lets you use other people’s likenesses in your videos, which feels like a development that can only end in disaster. Thankfully, Sora has been met with a healthy amount of concern:

[E]xperts warn that this innovation comes with potentially significant child-safety risks, from misinformation to the misuse of kids’ likenesses.

In an October safety report, Common Sense Media, a nonprofit that monitors children’s digital well-being, gave Sora an “Unacceptable Risk” rating for use by kids and teens, citing its “relative lack of safety features” and the potential for misuse of AI-generated video. 11

From the same ABC News article:

[Chief Parent Officer at Bark Technologies, Titania] Jordan cautions that these protections may not be enough. “Once your likeness is out there, you lose control over how it’s used,” she said. “Someone could take your child’s face or voice to create a fake video about them. That can lead to bullying, humiliation, or worse. And when kids see so many hyper-realistic videos online, it becomes harder for them to tell what’s true, which can really affect self-esteem and trust.”

Dobuski, [a technology reporter for ABC News Audio], said that while OpenAI has included several safeguards, enforcement has been spotty. “When Sora first launched, people were making videos of copyrighted characters like SpongeBob and Pikachu, even fake clips of OpenAI’s CEO doing illegal things,” he said. “So, it appears a lot of those restrictions are easy to get around.”

Nothing about this new app fills me with hope. Generative AI’s rollout over the past few years has been rife with exploitation and misuse, often aimed at the most vulnerable in society. Releasing a new app with a devil-may-care attitude to content safety feels like an escalation of a bad situation.

Yes, it will be individuals who make the deepfakes, who misuse people’s likenesses, who create false and misleading videos. But it is the AI companies that are enabling this to happen and who are, it seems, not too concerned with preventing it from happening.


  1. https://www.threatmark.com/the-dark-side-of-artificial-intelligence/
  2. https://writerbeware.blog/2025/09/19/return-of-the-nigerian-prince-redux-beware-book-club-and-book-review-scams/
  3. https://www.thebookseller.com/news/ai-slop-versions-of-books-on-retailers-like-amazon-risk-harming-consumer-confidence
  4. https://www.thebookseller.com/news/author-sam-blake-urges-better-protections-from-amazon-after-ai-rip-off-books-appear-under-her-name
  5. https://www.unwomen.org/en/articles/faqs/ai-powered-online-abuse-how-ai-is-amplifying-violence-against-women-and-what-can-stop-it
  6. https://theguardian.com/us-news/2025/may/20/chicago-sun-times-ai-summer-reading-list
  7. https://www.theguardian.com/technology/2025/jun/06/high-court-tells-uk-lawyers-to-urgently-stop-misuse-of-ai-in-legal-work
  8. https://futurism.com/artificial-intelligence/christmas-mural-ai
  9. https://www.theguardian.com/technology/2024/jan/20/dpd-ai-chatbot-swears-calls-itself-useless-and-criticises-firm
  10. https://www.malwarebytes.com/blog/news/2025/11/ai-teddy-bear-for-kids-responds-with-sexual-content-and-advice-about-weapons
  11. https://abcnews.go.com/GMA/Family/what-is-sora/story?id=127188940