and Got Roasted by the House of Lords
It sounds like a subplot from Black Mirror, but the UK government actually tried to sneak through legislation that would’ve let AI developers treat your copyrighted content like free pizza at a startup party. That legislation? The Data (Use and Access) Bill — a delightful piece of proposed law that would have let AI firms train their models on your books, your music, your photos, and your blog posts without asking you, paying you, or even telling you.
Because apparently, intellectual property is just a suggestion now.
But not so fast. The House of Lords said absolutely not. For the fifth time. Yes, fifth! The bill has been kicked down the road more times than a cheap rental scooter. Most recently, on July 17, 2024, the Lords voted 242 to 116 in favour of an amendment that demands AI firms disclose which data sources they’re using and how. [Source: The Guardian, July 2024].
This isn’t just about disclosure for vibes. It’s about power, transparency, and money. Erm, mostly money.
Lord Clement-Jones, who led the amendment charge, called the bill “a stealth attack on copyright” and accused the government of pushing an “AI Wild West” agenda that would undermine creators’ rights in favour of Silicon Valley speed. He was backed by a coalition of artists, journalists, software engineers and academics who have spent years warning about exactly this kind of digital land grab.
Meanwhile, the UK government, propped up by corporate lobbying (hi, Sir Nick Clegg), warned that over-regulating AI would kill innovation. The classic argument went something like: “If you don’t let AI eat your life’s work, China will.” But creators weren’t buying it. As the Authors’ Licensing and Collecting Society (ALCS) put it in its response, “You don’t get to take our work for free just because your investors are impatient.”
⚖️ So What’s Actually at Stake Here?
Let’s talk data rights — because this affects everyone. If the government can grant AI companies a free pass to scrape content, that means your tweets, Medium posts, Spotify tracks, or even your voice notes from 2013 could be scooped up and turned into chatbot fuel. And if you’re not being asked, not being told, and definitely not being paid… well, that’s not progress. That’s digital serfdom.
The UK is one of the few countries attempting to codify a legal framework for AI development — but doing so while ignoring copyright law is like building a mansion and skipping the plumbing. You’re just going to end up with a mess everywhere.
This isn’t just a British problem. Countries like Canada, the U.S., Nigeria and Australia are watching closely. If the UK sets a precedent where governments can override copyright protections in favour of AI innovation, you can bet others will follow. And suddenly, every creator on Earth is working unpaid overtime for robots.
🧠 What Does the Law Actually Say?
The UK Copyright, Designs and Patents Act 1988 currently allows a narrow exemption for text and data mining (TDM) for non-commercial research purposes — emphasis on non-commercial. What the Data Bill tried to do was expand this exemption to commercial uses, essentially letting AI developers mine data from books, articles, or artwork as long as it was “publicly available.” No licensing. No consent. Just vibes.
The Lords’ amendment doesn’t stop TDM, but it demands transparency. If you’re going to use someone’s work to train an AI model that might later put them out of business, the least you can do is tell them.
The Society of Authors, the NUJ, and the Writers’ Guild of Great Britain all supported the amendment. In a joint statement, they said, “This isn’t about being anti-tech — it’s about being pro-human.”
🛡️ How Can You Protect Yourself?
If you’re a creator, step one is simple: read the fine print. Many platforms — from YouTube to Substack — have quietly added clauses letting them use your content to train AI. You usually have to opt out, and they’ll never send you a polite email letting you know. They just slide it into the Terms and Conditions and pray you’re too tired to read past paragraph three.
If you’re in the EU, GDPR gives you some rights around how your data is used. Canada’s PIPEDA and Nigeria’s NDPR offer similar (if patchy) protections. But outside those jurisdictions? You’re in no man’s land. And if you’re in the U.S., you’ll need to thank California and Colorado for at least trying — because there’s still no federal law covering this.
🎤 Final Word? Creators Deserve a Seat at the Table
The AI boom is being built on the backs of real people — journalists, poets, meme makers, and yes, even YouTubers. If governments want to support innovation, great. But don’t let that innovation turn into a heist.
Tech companies already have the cash, the lawyers and the server farms. What they don’t have is your permission.
And until they ask? They can keep their hands out of the data cookie jar.
This content, also shared on Law Bants, is for information and entertainment purposes only. It reflects personal opinions and does not constitute legal advice or create a lawyer-client relationship. If you need professional legal guidance, please consult a qualified attorney in your jurisdiction.