Current Generative AIs Have Critical Quality Issues
- Business, Quality Assurance, Security
Latest revision:
The hype for generative AI is real. It is now possible for anybody to dynamically generate various types of media that are good enough to be mistaken as real, at least at first glance, either for free or at a low cost. In addition, the seemingly-creative solutions they come up with, and the unexpected application of large language model AIs as personal assistants, have fueled the imagination of many. I personally believe that deep learning AI has a bright future, and that these breakthroughs are great examples of that.
But do not be fooled: we have not yet reached the level of an artificial general intelligence. There's a reason why Stable Diffusion have trouble generating pictures of human hands, or why ChatGPT often gives the wrong answer when attempting to multiply large integers.
Normally, this is not something I would talk about, as overhyped technologies are nothing new. Ever heard about the dot-com bubble? Or 3D televisions? What about NFTs? Or the metaverse? All of these things have something in common: hustlers trying to get rich by attempting to convince a bunch of other people that they have everything to gain by joining in on the hype. And the current generative AI trend is no different on that regard.
What is different this time around is that many powerful organizations, and many influential people I previously trusted and respected, are completely blinded by the hype, using and integrating the current generation of generative AI for stuff that should never be used for, and in outright dangerous ways. And I'm not talking about the "robots taking over the world" kind of dangerous, but the "you could lose your job, go to jail or kill yourself for doing this" kind.
What can go wrong?
Misinformation
The biggest issue by far is when people use generative AIs as if they were their personal assistant. The problem is that these AIs are simply not designed to perform this task. However, because they are designed to imitate the way humans use language, they sound very convincing. As such, when they respond in a biased manner from their training data, omit important details and nuances, or completely make stuff up, the generated text will tend to fool the user, and it can only be verified by cross-checking all of its output.
For example, when I asked ChatGPT about chess champion Garry Kasparov, it incorrectly told me that he was also an AI expert, most likely because of his legendary matches against Deep Blue. Debunking that is not trivial however, especially when considering that one of his recent books is called Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins.
Think you can work around this problem by challenging generative AIs whenever they claim something as fact? This may work, but get equally ready to be gaslighted instead.
What about asking for references? Think again, because the generated references may end up not being real, or worse, actually be real but not actually relevant. For example, I got the AI Summarizer of Brave Search to correctly tell me Garry Kasparov's full birth name and cite a valid reference for it, except it was the wrong one. Oops.
And then of course, people may share this misinformation, creating false rumors and attracting a lot of engagement on social media from people that don't know better and don't take the time to cross-check the information, making the situation even worse. Don't do that. Especially not in federal court.
Copyright infringement
Because current generative AIs are designed to imitate its training data, it also has the capability of outputting said data back. Scientists originally assumed that this was unlikely to happen because their models' sizes are several orders of magnitude smaller than their training data. Unfortunately, further studies proved that this assumption was incorrect.
As such, unless the generative AI was trained exclusively on public domain data, there is a reasonable chance that it may generate an existing work or something very close to it, which would be illegal to use.
Data leakage
Consider the previous problem. Now consider what would happen when the training data contains sensitive information. Now consider what would happen if your input, containing sensitive information, would also be used as future training data. Oh wait, that's how ChatGPT works. Oh wait, Samsung shared trade secrets with it the other day. Oops!
Unoriginality
Even when the AI doesn't generate someone else's work, it will tend to generate unoriginal output. For example, fiction magazines are recently getting flooded with fraudulent story submissions of bad quality with obvious AI-generated tells, and it tends to generate the same jokes over and over.
In other words, assuming there is no misinformation nor any legal issue with a specific use of a work generated by the current generation of generative AIs, it will probably look and feel too similar to other works generated by them. That's probably not a desirable characteristic outside of very specific use cases.
Copyrightability
If you were to use AI to generate some piece of work, and assuming it would be original enough, who would own its copyright? You? The AI? The creator of the AI? The creators of the AI's training data? Nobody?
This is currently an open legal question in many jurisdictions. In the USA, the answer appears to be that AI output is public domain unless sufficiently transformed afterwards. In Canada, as far as I'm aware, that question remains open.
In any case, if you were to publish content that uses AI output, you may be unexpectedly stepping on a legal mining field anyway.
Manipulability
Let's say that despite it all, you think generative AIs are so cool that you want to integrate it into your product or service. Think again, because you may open said product or service to injection attacks, a classic security weakness. More specifically, if arbitrary user input is fed to generative AIs, they may output content that goes against your expectations, and cause unexpected problems. This type of attack in this context is known as prompt injection, and there are already many documented examples of it.
For a more technical discussion about the potential risks and (lack of) mitigations, check out this article about prompt injection by the NCC Group.
Why does this happen?
It's important to understand how generative AIs work. Specifically, they see things like images getting progressively noisier or text being progressively shortened, and learn to predict how to reverse these operations purely on intuition alone. (For those interested how that is mathematically possible, you may refer to my original article on deep learning.)
The important part is that there is no logic or reasoning involved in this process. It's all intuition-based. You can't instantly solve a large multiplication problem in your head without having it memorized in advance, so why should we expect better from artificial intuition?
As for sources of creativity, they are limited to emergent generalizations derived from training data, and randomly picking possible output based on the AI's predicted probabilities of the correct answer. These allow generative AIs to return some surprising solutions and some variance, but ultimately ones that will either feel too samey after a few attempts, or be outright bad.
Honestly, it's remarkable that despite this, generative AIs perform well at many standardized exams. However, in my opinion, this may be more of a failure in the way we currently assess people's skills through said exams, rather than something truly amazing emerging from these AIs.
It's not ready... yet
I personally believe that we are very close to achieving general intelligence in AI. However, that time is not now, despite appearances.
That said, the future is bright on that front. We have figured out how to simulate intuition, and we have harnessed the computing power to do so. I believe what we are mainly missing now is how to properly create and structure artificial versions of the other pieces of the human brain that attach to it. Some amateurs have already tried creating such systems and integrating them with ChatGPT with questionable successes, but that are impressive enough to support my belief on this regard...
...and to fool those that don't know any better.
Related content I wrote
I Designed the Perfect Gambling Game, But...
- Mathematics, Business, Game Design
Back in 2006-07-08, during the 13th Canadian Undergraduate Mathematics Conference at McGill University, I presented a gambling game I designed with the novel property of being both advantageous to players and the house, and that despite this proprety, that pretty much nobody in their right mind…
Current Data Serialization Formats May Be a Waste of Money
- Programming, Business
Storing data. Transmitting data. Processing data. These fundamental topics of computer science are often overlooked nowadays thanks to the historical exponential growth of processing power, storage availability and bandwidth capabilities, along with a myriad of existing solutions to tackle them. So…
After 8 Years, Double Fine's Hack 'n' Slash Secret Room Has Finally Been Cracked
- Video Games, Security
In the history of obscure video game secrets, not many has been quite infamous as the SecretRoom.lua puzzle in 2014's computer hacking game Hack 'n' Slash by Double Fine. Since the game's release, a mysterious encrypted file was found in the game files, yet despite the very nature of the game being…
Upgrading Your Cybersecurity from Cowboys to Sheriffs
- Security, Business, Anecdotes
Roaming throughout the countryside, dangerous desperados are awaiting in their hideout for the perfect opportunity to rob their victims in silence. Powerless, the authorities have posted wanted posters on public boards with cash bounties for any information that could lead to their arrest or death…
Scrum Is Not Agile
- Programming, Business, Psychology
While there is no denying that Scrum revolutionized the software industry for the better, it may seem a little strange to read about someone that dislikes it despite strongly agreeing with the Agile Manifesto, considering the creator of Scrum was one of its signers. However, after having experienced…