AI Tools Have A Major Achilles’ Heel
The mass hysteria surrounding everything AI since the release of ChatGPT and DALL-E has gotten to the point that everyone, except my grandmother, knows about it.
But she’s dead. So, that makes sense.
Everyone else either gets overly excited about it or shifts into panic mode, fearing for their job.
Yet another example of standard 21st-century human behaviour, lacking both common sense and an appetite for pragmatism.
It’s both ridiculous and infuriating. Fear not, though. I think both camps will very soon feel the proverbial brakes applied to all things AI, and even as a technologist I say, it will be for the better.
Meta and Shutterstock recently announced that they are forming an AI partnership: Shutterstock will offer up its content for Meta to use in training its AI. A predictable move in a world where everyone wants a slice of the AI pie, only to generate what I like to call digital rubbish.
Clearly, we don’t have enough organic content out there for companies to get rich off of, we need generated garbage too. The fact that the partnership involves Meta, yet another company notorious for either relying on junk or encouraging its creation, is anything but surprising.
While Shutterstock clearly wants to play the good-guy role in this game, its December 2022 announcement that it will pay contributors for AI rights makes one wonder just how much. Likely peanuts.
One thing is clear, though. At least one company has realised that mooching off others’ hard work is both illegal and unethical, and that the truly public-domain-licensed image data currently available on the web is about as scarce as a live gazelle in a lion colony.
Nom-nom… 😋 Sorry, not a vegetarian-friendly article, I know. 🤷‍♂️
Meanwhile, OpenAI, the maker of DALL-E, claims it used an implementation of GPT-3 and trained it on sets of text-and-image pairs. An inoffensive claim, until you ask yourself where the text and images came from.
They didn’t pull them out of their butts, that’s for sure. The source data has not been revealed, and, to the best of my knowledge, OpenAI does not intend to reveal it anytime soon.
One has to wonder why. Judging by the images generated, one can make an educated enough guess: whatever it could get its “woboty hands” 🤖 on from the internet, it did, except for famous people and nudes — at least we hope so.
Stable Diffusion, on the other hand, takes it further. Much further. To the extent that one of my friends sent me a fake photo of Jenna Coleman — one of my favourite actresses. She looked quite convincingly real. Printed a duvet cover out of it too! Just kidding… 🤣
You may be thinking, well, what’s the harm in that? Erm, plenty. Firstly, because Jenna Coleman never agreed to a photo of her being generated; secondly, because porn sites are now being inundated with Stable Diffusion-generated nudes. And no, you ain’t getting a link! 😈
What used to take a Photoshop wizard hours, now takes less than a minute for a bored and dirty mind.
What all these examples have in common, and what is an undeniable Achilles’ heel, is the aspect of permission and copyright. The lack of it, to be more exact.
You see, there is no artificial intelligence without organic intelligence.
So let’s look at the organic aspect. Most commercial AI currently relies on data created by humans in some shape or form. In that, I also include the text prompt you type into the likes of DALL-E to get your funky image.
The same goes for the text you use to generate an article or a piece of software via ChatGPT. Even that gets reused for later searches. But let’s focus not on that, but rather on the content and software millions of people create organically and publish on the web every day.
I know many still find this shocking, but simply because something is publicly viewable on the web does not mean it’s in the public domain. For instance, I retain all the rights to the text in my articles.
You may read it, feel inspired by it, but God help you if you decide to copy or use any of it, as I’ll be going after your sorry ass in court. I’m a nice guy, but piss me off, and the devil itself will seem like a fluffy teddy-bear.
A distinction has to be made between consuming content and using content. The latter has almost guaranteed legal implications.
The reckoning will come for AI companies when everyone realises their content is being exploited for free and sold back to them. We can already see certain platforms banning AI-generated content, and good on them, but again, that’s only half the story.
The other half is barring content from being used for AI training. Not my images, not my articles, not my software code. Microsoft’s Copilot can go f*** right off, and if I have to move off GitHub to protect my software, I will do that too.
In the name of AI, we cannot just p*** all over copyright and content licences. Some argue we need new laws to cater specifically for AI. I beg to differ. The existing licence types, as confusing as they can sometimes be, are already clear enough to conclude that “all rights reserved” applies to anyone and everything, including software bots and AI.
As exciting and genuinely useful as AI can be, it has to follow the exact same rules. It wants something? It needs to pay for it, and handsomely so. More and more creators, writers, artists, and software developers will realise they’re being taken for a ride with some of these “jaw-dropping” tools.
They’re only that “good” because they’ve already used far more data than they had the right to.
We will see class-action lawsuits against AI companies soon enough, and rightly so. Using data without consent is breaking the law; using someone else’s premium work for free to create another product or service — be that free or paid-for — is illegal.
Copyright lawyers will have so much work on their hands that they’ll be working 24/7, winning case after case, and getting richer in one year than Wall Street suits do in ten.
Header image: DeepMind
PRINCIPIA SCIENTIFIC INTERNATIONAL, legally registered in the UK as a company incorporated for charitable purposes. Head Office: 27 Old Gloucester Street, London WC1N 3AX.