Let’s say you’re in your room one humid evening, drinking garri with groundnuts because you can’t come and kill yourself in this hot weather, and your crush texts: “Send me something romantic.”
You panic.
Because what is love, if not a crisis of imagination?
So, you open ChatGPT. Because a written “something romantic” is better than a picture that can later cast on social media and make your family disown you.
You borrow yourself sense, type in some love prompts and boom — 30 seconds later, it has written a masterpiece: poetic, heartfelt, and filled with metaphors that would make Maya Angelou weep.
Your crush replies: “Wow, thanks babe. I didn’t know you could write like this. Please come and teach me.”
And now you’re sweating more than the garri. Because, warris all this? Can’t this boo just let you prompt AI in peace!?
Did you write it? Or did ChatGPT?
But you sent it. So is it yours?
Now swap the love letter for a song, an ad campaign, a children’s book, or a viral Twitter thread. When AI helps us to create, who owns the final product? The human with the prompt? The company that owns the AI? Or the AI itself, silently judging our grammar?
This question is not just philosophical. It is legal. And right now, the law is blinking like an old Windows computer trying to load.
Let’s get one thing clear: Most copyright laws around the world (the US, Canada, even parts of Asia and Africa; the UK has a quirky carve-out for computer-generated works, but that’s a story for another day) say that copyright protects works created by humans. The moment your artwork, story, or software is birthed from your brilliant brain (and hands), the law puts a gold crown on it: your intellectual property.
But AI? AI has no brain. No hands. No heartbeat. It doesn’t dream. It doesn’t feel heartbreak or drink garri. And so, most courts and lawmakers are reluctant to let it wear the copyright crown.
Still, things are messy. Because when you give the AI the prompt, edit its output, and publish it, it feels like ownership. And ownership equals intellectual property rights.
Right?
Well… who knows with AI.
If this feels vague, it’s because it is. The law hasn’t caught up yet. We’re all running on vibes and Terms & Conditions that we didn’t read.
Some countries are trying to write sense into the chaos.
In the EU, the AI Act now says AI-generated content should be labelled clearly, especially when it could mislead people. (So no more deepfake videos of the Pope in a puffer jacket without a disclaimer, please.)
In Canada, policymakers are drawing up an action plan to tackle copyright, privacy, and accountability in AI creation. Even the US is exploring ways to require AI companies to disclose what copyrighted content they used to train their models.
It’s like the world is cooking a pot of jollof AI law. Everyone’s recipe is different, but at least we’re finally in the kitchen.
This content, also shared on Law Bants, is for information and entertainment purposes only. It reflects personal opinions and does not constitute legal advice or create a lawyer-client relationship. If you need professional legal guidance, please consult a qualified attorney in your jurisdiction.