(Ai)llusions
“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” – Eliezer Yudkowsky, co-founder and research fellow at the Machine Intelligence Research Institute
Welcome back to the best AI blog on the internet – according to GPT-4 (when I tell it to say so). We’ve seen another wave of AI developments from what I will now be calling Big AI, which in turn came with plenty of complications, backtracks, and… fakery(?). That leads us perfectly into this blog, where I’ll be taking a deeper dive into the risks that come with AI. Hopefully, by the end, you’ll have a more accurate, (healthily) skeptical, and cautious approach before accepting AI’s help with open arms, protecting yourself and your company from the real business and financial risks that others might be overlooking.
News
But first, we of course have to start with a roundup of AI news since last time.
Business
Yellen Yells – US Treasury Secretary Janet Yellen warns that AI poses “significant risks” in the financial industry. See, she agrees.
AI takes a break – ChatGPT, Gemini, Claude, and other AI chatbots all had their platforms go down for multiple hours within 24 hours of each other. At the time of writing, the cause is still unknown (maybe AI is uprising?).
RegulAIte? – AI business hasn’t escaped the regulators:
- FTC and Justice Department set to investigate the dominance of Nvidia, Microsoft and OpenAI;
- FTC investigating the recent $620M deal between Microsoft and AI startup Inflection over its potential impact on market competition;
- CMA investigating Microsoft’s and Amazon’s AI deals, having found an “interconnected web” of over 90 partnerships involving the same six Big AI groups (Google, Apple, Microsoft, Meta, Amazon, Nvidia), raising antitrust concerns.
Move aside, Apple – Nvidia recently overtook Apple’s market cap, surpassing $3tn, as they once again beat earnings expectations by almost 10%.
“AI” company? Instant-unicorn – Humane, the company behind the flopped AI pin – selling just 10k units of its 100k target – is allegedly in discussions with HP to sell itself for $1 billion.
Hitachi x Microsoft – Announcing a new partnership, Hitachi and Microsoft join forces to accelerate business innovation using Microsoft’s AI offerings. Hitachi is set to train more than 50,000 “GenAI professionals” in advanced AI skills.
Technical
Model updates – OpenAI, Google, Anthropic, and Meta (amongst others) all released their latest batch of LLMs: GPT-4 Turbo closely followed by GPT-4o; Gemini 1.5 and Gemma; Claude 3; and Llama 3. Along with general reasoning, accuracy, and context-size improvements, they also come with a focus on multimodality (accepting more input “formats”, e.g. images, video, audio, etc.).
You should be worried – A group of current and former workers from OpenAI, Google DeepMind, and Anthropic released an open letter warning of a lack of safety oversight within the industry, and calling for increased protections for whistle-blowers.
Cognition jump scares developers – Cognition, a smaller competitor in AI, released Devin, “the first AI software engineer”, worrying developers everywhere with their very impressive demo… temporarily (see below).
Apple has joined the party – As was long suspected, Apple has been working on its own AI models and has finally released a series of open-source, small (enough to run on, say, a phone?) LLMs called OpenELM. Maybe Siri will finally do more than offer me Google results?
OpenAI goes after video production – OpenAI announced Sora, their new text-to-video model, impressing the industry with their demo… temporarily (see below).
Fakery?
What do I mean by “temporarily” impressed? Why did I say fakery? Those really impressive demos of AI doing really impressive things? Well…
The Sora demo turned out to need human post-production effects to actually be impressive.
Devin’s seemingly impressive capabilities were also (at the very least) significantly exaggerated.
Gemini’s viral demo was, you guessed it, faked.
Slightly tangential, but those cool AI-powered, cashierless Amazon grocery stores turned out to be… human-powered.
And that’s just the ones we know of.
The Risks
Examples
Before diving head-first into all the potential dangers of AI usage, it might be useful to start with some real-world examples of what can happen when it goes wrong:
Google’s new AI-powered search-summary feature started telling people to add glue to pizza and that eating one small rock a day would keep the doctor away.
Google had to temporarily remove Gemini’s ability to generate images after controversy over it depicting historically inaccurate images – such as generating Black and Asian WW2 Nazi soldiers.
Air Canada was ordered to pay damages after its chatbot lied about discount policies for bereaved families.
NYC’s AI chatbot told businesses to break the law.
DPD closed its online chatbot after a user posted on X about how it could easily be manipulated into swearing and criticising the company and itself.
A Chevy dealership’s AI chatbot agreed to sell cars for a whopping $1.
That’s just recently too.
The Risks of you using AI:
Quick definition: the process of AI generating very believable, yet incorrect and made-up answers is called “hallucination”.
So, why is it such a problem that AI always sounds correct but isn’t always correct? Why are hallucinations so dangerous?
Using AI for increased productivity:
It stems from the fact that hallucinations are really, really hard to detect, especially automatically. At the same time, AI tools genuinely can make many tasks – especially the more boring, manual ones – a lot easier and faster to complete. This combination carries a very serious danger: complacency.
Because AI can complete the tasks that people usually don’t want to do, and because it does them accurately and correctly most of the time, there is a big danger of eventually assuming it’s correct all of the time. If a tool makes something that used to be cumbersome super quick, it defeats some of the purpose if the user then has to manually check the tool’s output every time. This means that, purely because of human nature – not because of any specific inadequacy – the tool’s results will, with time, tend to be blindly accepted.
Again, this will be fine most of the time. However, in a highly regulated industry built on managing (other people’s) money, that one rare time when it hallucinates and is blindly accepted… well, it could be business-ending.
For example, don’t be like the lawyer who used ChatGPT to prepare a filing without properly checking it, and ended up citing hallucinated cases in court.
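To make the complacency point concrete, here is a minimal sketch of one lightweight control: routing a random sample of AI outputs to mandatory human review, so that blind acceptance never becomes the default. The ai_summarise helper and the 20% review rate are purely illustrative assumptions, not a recommendation of any particular tool or threshold.

```python
import random

REVIEW_RATE = 0.2  # illustrative assumption: fraction of outputs routed to a human reviewer


def ai_summarise(document: str) -> str:
    """Placeholder for whatever AI tool produces the output being trusted."""
    return f"AI-generated summary of: {document[:40]}..."


def process(document: str) -> dict:
    """Generate an AI output, but flag a random share of them for human sign-off."""
    summary = ai_summarise(document)
    needs_review = random.random() < REVIEW_RATE
    return {
        "summary": summary,
        "status": "pending_human_review" if needs_review else "auto_accepted",
    }


if __name__ == "__main__":
    for doc in ["Q2 cash-flow report", "Counterparty exposure memo"]:
        print(process(doc))
```

Even something this simple keeps most of the productivity gain while making sure a human stays periodically in the loop.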
Using AI as a skilled worker:
Whilst it is common to use AI to do the cumbersome tasks you don’t want to do, it is also common to use it to do the tasks you can’t do. Think coding, spreadsheets, data analysis, etc. – AI can also do some of the higher-skilled tasks that would normally require an employee or expert.
This does significantly reduce the barrier to entry for many tasks – for example, being able to perform simple data analysis with no prior data-analytics knowledge. However, there is an important thing to bear in mind: whilst, yes, it allows people to do some tasks without the expertise normally required, it also opens the possibility of projects being delivered by people without the knowledge to correctly assess them.
Using the data analysis example, the benefit here is that people who aren’t data scientists can now easily produce some data-analytics conclusions – but how much data analysis would you be comfortable relying on that hasn’t been seen or checked by a single data scientist (the answer should be: very little)? I am not saying AI isn’t useful for data analysis, but I am saying that AI isn’t a reliable data scientist by itself. This applies to any high-skill task, not just data analysis.
This means an over-reliance on AI to perform high-skill tasks opens up significant business and operational risk. At the very least (continuing the example), you’re left with somewhat-correct conclusions based on over-simplified data analysis that missed some key insights. At worst, you’re left with (hallucinated) conclusions that aren’t correct, and you might not even know it until it’s too late. Not to mention the risk of real knowledge and talent decline over time – I personally wouldn’t want a data team of prompt-engineering “data scientists” who don’t actually know how to analyse data, only how to ask an AI to do so, and who learnt all their data knowledge from it.
Essentially, AI can be a great co-pilot and helper to increase the efficiency of high-skill workers, but it cannot, and should not, replace those high-skill workers. (For now?)
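To illustrate the data-analysis example, here is a small hypothetical sketch (the fund names and figures are made up) of how an AI-generated analysis can look perfectly plausible while quietly misleading someone without the expertise to question it:

```python
import pandas as pd

# Hypothetical data: monthly returns with some months missing for fund B.
returns = pd.DataFrame({
    "fund": ["A", "A", "A", "B", "B", "B"],
    "monthly_return": [0.02, 0.03, None, 0.01, None, None],
})

# What an AI assistant might plausibly produce: a clean-looking average return...
naive_average = returns.groupby("fund")["monthly_return"].mean()
print(naive_average)  # fund B's "average" is silently based on a single month

# ...versus the sanity check an experienced analyst would add before trusting it.
coverage = (
    returns.groupby("fund")["monthly_return"].count() / returns.groupby("fund").size()
)
print(coverage)  # fund B only has a third of its data, so the conclusion is fragile
```

The naive average is exactly the kind of output a non-expert would happily accept; the coverage check is the kind of question a data scientist asks before relying on it.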
The Risks of your clients using your AI:
Automating client-facing interactions with AI, especially if it’s given the ability to then take actions, can be extremely detrimental depending on the use case. Even the small risk of an AI giving false, hallucinated information to clients can be unacceptable in certain domains – like, say, the highly regulated finance and treasury industry, where giving made-up financial advice is potentially both career-ending and illegal.
Another challenge with the technology is how hard it is to reliably control its responses. Whilst it generally follows the instructions you give it about how to reply, it’s not foolproof – which is dangerous, depending on how much internal information and power to take actions you give it.
For example, a report by Immersive Labs found that 88% of participants in an increasingly difficult prompt-injection challenge managed to trick the AI into revealing confidential company information in at least one level, and 17% managed it across all difficulty levels. Now that’s not very GDPR friendly, is it? This is clearly a serious issue, especially if the AI is publicly accessible to malicious actors.
This is even worse if you give the (public) AI power to take actions on internal systems. From a cybersecurity perspective, if something is that easy to trick into revealing unintended information, imagine what havoc it could be tricked into causing in your systems.
It should hopefully be pretty obvious how this can be detrimental anywhere, let alone in an industry where almost all processed data is sensitive and protected.
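For illustration, here is a minimal, hypothetical sketch of the kind of guardrail this risk calls for: screening a client-facing bot’s replies for obviously sensitive content before they ever reach the client. The generate_reply placeholder and the pattern list are assumptions for the example; a real deployment would need far more than this, and no output filter makes prompt injection safe on its own.

```python
import re

# Illustrative patterns for content that should never reach a client.
SENSITIVE_PATTERNS = [
    re.compile(r"\b(?:internal only|confidential)\b", re.IGNORECASE),
    re.compile(r"\b\d{8,}\b"),               # long digit runs, e.g. account numbers
    re.compile(r"api[_-]?key", re.IGNORECASE),
]


def generate_reply(prompt: str) -> str:
    """Placeholder for whatever LLM backs the chatbot (assumed, not a real API)."""
    return "Our internal only rate sheet says account 12345678 gets 5.1%."


def safe_reply(prompt: str) -> str:
    """Screen the draft reply and hand off to a human rather than leak anything."""
    draft = generate_reply(prompt)
    if any(p.search(draft) for p in SENSITIVE_PATTERNS):
        return "I can't help with that directly - let me connect you to a colleague."
    return draft


if __name__ == "__main__":
    print(safe_reply("What rate can I get?"))
```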
The Risks of everyone using AI:
Lastly, I want to make a note on the potential dangers we may be heading towards with widespread AI usage. Whilst this is less relevant to operational risks, I believe it’s always important to be as well informed as possible about dangers on the horizon – especially in an industry which capitalises on predictions.
We’ve already talked about how AI is good at being very believable, has the imagination to make up false information, and can easily be tricked into doing things it’s been designed not to do. We’ve seen that there are AIs capable of generating text, images, audio, and video from simple plain-English instructions. And lastly, we just need to look around to see how widespread and accessible it all is.
This, I believe, is a dangerous concoction. It’s an easy recipe for generating fake information efficiently, at scale, across many (any?) mediums. Information which ends up on the internet, which then ends up back in the training data of the models that created it, making them even better at doing it.
How do we make decisions when there is a constant risk that the information we’re basing them on was created either by the pure imagination of an AI somewhere, or by any bad actor with any level of skill? What happens to an economy and a society which no longer have irrefutable proof?
Conclusion
I want to be clear that I am not against the use of AI. I use AI on a daily basis, and constantly experience its productivity benefits and the power of what it can be used for. However, as with any technology, whilst there are benefits, there are also risks – risks I see as especially and uniquely dangerous if not taken into account. Unfortunately, I do not think they are currently given the level of attention and scrutiny they warrant.
As is always the case at the cutting edge, if used well, carefully, and with the appropriate protections and controls, its potential could be unimaginable. The thing is, I also think that understanding what the appropriate protections and controls are can be uniquely difficult in the AI realm.
Oh, and as always, one thing to leave you with. In this blog, the type of AI I refer to is narrow AI – AI models which are really good at doing relatively specific things (generating text, images, video, etc.). I haven’t talked about the risks of (true) general AI – AI which is really good at everything we can do… except at computer speed. Is it a risk? Well, OpenAI’s CEO Sam Altman says it is “the greatest threat to humanity’s existence”. So… probably?
*TreasurySpring’s blogs and commentaries are provided for general information purposes only, and do not constitute legal, investment or other advice.