LLMoney
“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” – Stephen Hawking, Theoretical Physicist
Sure, I’d be happy to write you a blog about AI! (just joking, AI didn’t write this).
Welcome back to another instalment of what is definitely your favourite AI blog series (because there aren’t many out there). Whilst the AI scene has quietened down slightly, there’s still plenty to talk about. In this blog, following some news and updates, I will be talking about why AI has gone from the confines of coders in dark rooms to everywhere you look, what enabled that shift, and why it’s important for you to understand and assess the associated risks.
Updates and news
First, let’s start with some quick updates and recent news:
- OpenAI drama – OpenAI fired their CEO Sam Altman, cycled through two further interim CEOs, then hired Altman back after massive internal and external backlash. Microsoft gained a non-voting observer seat on the board whilst rumours swirled that a leap in Artificial General Intelligence (AGI) triggered the chaos; the official reason was the board “no longer having confidence in his ability to continue leading OpenAI”.
- OpenAI keeps going – Shortly before the drama, OpenAI announced several updates. They released GPT-4 Turbo, a more efficient and up-to-date version of GPT-4 with a higher token limit (how big the prompt can be) and lower costs. They also announced new tools for developers (and anyone) to create AI-powered assistants and products, launching the GPT Store for custom AI chatbots two weeks ago.
- OpenAI needs more news – More recently, OpenAI made waves again when they boldly told a UK parliamentary committee that it would be “impossible” to develop today’s AI systems without using massive amounts of copyrighted data.
- AWS and NVIDIA powerhouse – AWS and NVIDIA announced a significant expansion of their collaboration, aiming to provide customers with cutting edge infrastructure, software, and services for AI development.
- AI cure for cancer? – Absci, a leader in AI for antibody discovery, partnered with AstraZeneca, leveraging Absci’s Integrated Drug Creation platform. Combined with AstraZeneca’s expertise in oncology, the collaboration aims to further the quest for a novel cancer treatment.
- AI is on the phone for you – Amdocs partnered with NVIDIA and Microsoft Azure to build custom Large Language Models (LLMs) for the $1.7 trillion telecoms industry. Leveraging NVIDIA’s AI foundry service on Azure, Amdocs hopes to meet escalating demands for data processing and analysis in telecoms.
- Model updates – Google announced their next-gen multi-modal model, Gemini, claiming it outperforms GPT-4 on 30 out of 32 benchmarks. Anthropic announced Claude 2.1, featuring a 200k-token limit versus GPT-4 Turbo’s 128k. Meanwhile Inflection unveiled Inflection-2, a 175 billion parameter language model beating PaLM 2 (Google’s slightly older model) and only just trailing GPT-4 (which, for comparison, is rumoured to have around 1.75 trillion parameters).
- Oh. And this blog won’t matter in 3 years – A recent survey of AI researchers and experts found that they believe there’s a 10% chance of “unaided machines outperforming humans in every possible task” by 2027, and up to 50% chance by 2047. Ha. [insert “this is fine” meme]. AGI, climate change, or nuclear war? Pick your horse.
Where did all this AI come from?
Ok, you’re caught up, phew; now the good stuff. I want to spend a bit of time explaining why there has been a sudden boom in AI and why it’s gone from research labs and niche use cases a couple of years ago to everywhere you look now. I think there is a distinct lack of mainstream understanding of and discussion about this and, as we all totally agree, more understanding means more accurate risk management and forecasting.
Last time, I gave you a brief explanation of how AI models work at their core – go give it a read if you haven’t already, which obviously you have. This time I want to zoom out and explain the shift in mentality when it comes to how we progress these models, and how this led to where we are today.
Remember those not-so-long-ago headlines about AI models beating humans at Go or chess, which made the rounds and then disappeared behind news about something less important (I think there was a pandemic at some point?)? Well, that was the old way of developing and improving models. I know I made it sound simple last time (I know, I’m just that good), but there are a lot of intricate details inside an AI model. I previously spoke about how, during training, the effect of each weighting on the model’s output is measured in order to reduce the error – but the exact function used to do this is one of those intricate details. Exactly how one node affects the next is another, along with how many nodes sit within each layer, and more. Not to mention that all of the above applies to every layer, so the number of possible layer designs, and ways to combine them, is vast. You can start to see how many decisions go into designing a model, and how hard those decisions are. So, the AI research community spent years trying to fine-tune (pun intended) each and every one of those decisions, for each use case, to make models that are really good at specific things.
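To make that concrete, here’s a rough sketch (in Python, using PyTorch) of the kinds of decisions hiding inside even a toy model. To be clear, this is a made-up illustration – the layer sizes, activation, loss function, learning rate and training length are all placeholder choices, and under the old approach each of them would be agonised over for the specific use case:

```python
import torch
from torch import nn

# Every line below is one of those "intricate details":
model = nn.Sequential(
    nn.Linear(64, 128),   # how many nodes in this layer?
    nn.ReLU(),            # how exactly does one node affect the next?
    nn.Linear(128, 128),  # how many layers in total?
    nn.ReLU(),
    nn.Linear(128, 1),
)

loss_fn = nn.MSELoss()                                    # which error function are we reducing?
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # how fast do we adjust the weightings?

# Dummy data, purely for illustration (random inputs and targets).
inputs = torch.randn(256, 64)
targets = torch.randn(256, 1)

for epoch in range(10):                    # how long do we train for?
    loss = loss_fn(model(inputs), targets)
    optimizer.zero_grad()
    loss.backward()                        # measure each weighting's effect on the error
    optimizer.step()                       # nudge the weightings to reduce it
```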
Then it changed. A certain AI company came along and said ‘no, that’s too hard, I’m just going to make the models WAY bigger (Large-r), train them with WAY more (general) data, on WAY more computing power’. And… it worked. This started the era of foundational models; instead of carefully designing and tweaking models per use case and training them on lots of specific data, the mentality shifted to creating massive, more general models trained on massive amounts of generalised data. In other words, sometimes it is quantity over quality.
Let’s use an example: say we want a model that can write exactly like Mickey Mouse (he’s in the public domain now, after all). The old way would be to dig our elbows into each fine intricacy of the model, perfectly designing each component to best match how we believe he thinks, and then using as many examples of Mickey Mouse text – transcriptions of movies, books, scripts etc – as we can find to train it and create a Mickey Mouse Fabricator (MMF, if you will). Nowadays we would be less bothered about the fine details and would instead make the model 1000x bigger, train it on 1000x more computing power, with as many examples of general text/speech as it can handle. This creates a Large model that has a good understanding of speech and language generally. Then we take that model – let’s call it, oh I don’t know, a Large Language Model – and train it further on some, not as many, examples of how Mickey Mouse specifically speaks (aka fine-tuning), to get our MMF.
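In code, that two-phase approach looks roughly like the sketch below. It’s only a sketch: it assumes the Hugging Face transformers library, uses GPT-2 purely as a stand-in for “some pre-trained LLM”, and the Mickey-ish lines are made up – a real fine-tune would use far more data and a far more careful training setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Phase 1 (already done for us): a model pre-trained on mountains of general
# text. GPT-2 is just a small stand-in for "a Large Language Model" here.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Phase 2: fine-tuning on a (tiny, made-up) pile of use-case-specific text.
mickey_lines = [
    "Oh boy, oh boy, what a swell day for an adventure!",
    "Gosh, Pluto, you really did it this time!",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):
    for line in mickey_lines:
        batch = tokenizer(line, return_tensors="pt")
        # For a causal LM, passing the input ids as labels trains the model
        # to predict each next token of the example text.
        outputs = model(**batch, labels=batch["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
# The result: the same general model, nudged towards our MMF.
```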
Hopefully you can see where I’m going here – what happens when companies start offering those LLMs as a service? The process to create an MMF just got a LOT simpler, no elbow grease required. Just a couple of examples of Mickey Mouse text, send some money to a certain AI company, and ta-da, you’ve got yourself an MMF. Aaaaannnnndddd boom, AI is everywhere, for any niche. It’s not even just text. There are foundational visual/image models, and models for other modalities. We’re even beginning to see these getting combined to create multi-modal models (although how truly “combined” they are is questionable).
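And “send some money to a certain AI company” really can be about this short. Here’s an indicative sketch using OpenAI’s Python client – the model name is a placeholder, you’d need your own API key, and stuffing a couple of examples into the prompt (rather than a proper fine-tune) is the lazy version of the same idea:

```python
from openai import OpenAI

client = OpenAI()  # expects an API key in the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder – whichever hosted model you're paying for
    messages=[
        {"role": "system", "content": "You write exactly like Mickey Mouse."},
        # A couple of made-up examples in place of real fine-tuning data:
        {"role": "user", "content": "Example of his voice: 'Oh boy, oh boy, what a swell day!'"},
        {"role": "user", "content": "Now write me a cheery note about quarterly cash flow."},
    ],
)
print(response.choices[0].message.content)  # ta-da: a rented MMF
```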
(Note: when I refer to a model’s size, I’m referring to the number of parameters – i.e. the number of weightings that get trained.)
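If you want a feel for how those numbers stack up, here’s a toy bit of arithmetic (with made-up layer sizes, nothing to do with any real model):

```python
# One fully-connected layer: every input node has a weighting to every output
# node, plus one bias value per output node.
inputs, outputs = 1_024, 4_096
params = inputs * outputs + outputs
print(f"{params:,} parameters in this one layer")  # 4,198,400

# Stack enough layers like this and you're quickly into the billions – which
# is all that "175 billion" or "1.75 trillion parameters" really means.
```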
Why do I care?
I was being jokey before (comedy nights on Mondays), but I do genuinely believe that better understanding the origins of something that is likely to be a big part of the future and have widespread effects on society, and thus the economic landscape, is important for anyone wanting to better foresee what might be coming; maybe people whose job is to decide the safest places to store money and mitigate any risks… there’s a word for that, right?
So, what are some of the risks that might come of this? Let’s talk firstly about using AI yourself. I predict that 2024 will bring a wave of AI embedded into common products and software – something that has definitely already started, but that I believe will soon become as common and expected as a search bar. This could be anything: copilots to help you use or create anything (spreadsheet functions, code etc), making your data or information chat-able (basically a search engine on steroids), and smart automation/decision-making tools, to barely scratch the surface. The problem is they are good. Really good. “So what’s the problem?” I hear you beginning to ask. It’s not about how good they are, it’s about what they’re so good at.
What did we say this was? What’s the name of the model? Come on, it’s literally in the title. Large Language Models – not large spreadsheet models, not large where-do-I-put-my-money models. They understand language, not spreadsheets/data/your software/your company. Yes, they might be fine-tuned on a bunch of text examples of “asking for this functionality = you need to give this spreadsheet function”, but they’re learning how to sound like they understand spreadsheets, not how to use them. This is fine most of the time, because usually that’s the same thing. However, when it’s not, you get an incorrect answer specifically designed to sound convincing. Some use cases might not care, but when you’re managing your or your clients’ money, and your role is specifically risk-averse… well, that’s an issue. Especially if you’ve been trained (seriously, Monday nights) over time to believe and accept the answers.
Another layer (ok, I can’t stop) is that this assumes good quality, accurate fine-tuning data. As I said, the model doesn’t understand the contents of the information itself per se, just the language. So it won’t be able to tell you “hey, this is rubbish”. And what’s the saying? Rubbish in, rubbish out. Wait, let me fix that – rubbish in, very very convincing rubbish out. This is only made worse by the fact that we still have very limited capability to understand what is going on inside the model, so we can’t see its “working out”. Note that I’m not saying we don’t understand how AI models work (clickbait gone); just because we can understand each component and how they interact does not mean we understand what’s actually going on when you scale that up to billions or trillions of parameters. Humans understand the mechanisms that cause the weather (i.e. physics), but when those mechanisms are applied to billions or trillions of particles, it becomes impossible to predict the larger system they make up (i.e. the weather).
Finally, some bigger-picture risks. For one, if all this means that anyone can now create a specific model by leveraging foundational ones, it’s important to think about who controls those models. When the “core” processing of most technological infrastructure belongs to a select few companies, that raises some alarms (or maybe it’s just the crypto bro in me). How well did that work out for the internet? Not to mention data privacy considerations, especially in a highly regulated industry where you have access to very sensitive information. That’s a big one. Even if those companies triple pinky promise not to sell or use the data you feed their models (which has historically always ended well), there are still risks of data leakage – a model which has access to your data and is, by nature, unpredictable can be very hard to implement into complex systems whilst ensuring it doesn’t send data where it shouldn’t go.
Conclusion
Hopefully you now have a better understanding of where this explosion in AI has come from – a shift in development and research mentality – and why it can mean some very real risks, both in your day-to-day role (which, at its core, is risk management) and in the bigger picture of the world. I genuinely believe AI will dramatically shift many aspects of society, the broader economy, and even culture – and has already started to. As someone in your position trying to forecast what might happen generally, this should not be underestimated – I hope I’ve given you even a bit of wariness and a sense of gravity.
It’s really not all bad. It will make your job easier and open up possibilities and opportunities we may have never imagined otherwise. But this is not the newest iOS, latest decentralised project, or next-gen console. This is the next internet, the next computer, heck the next industrial revolution. This will change things, and you better be prepared.
Lastly, something to leave you with: going back to the risks associated with AI generating very convincing but false information – be it text, imagery, video, audio, or whatever is next – what happens when that begins to outweigh the “good” information out there? At best, progressing AI by training it on even more data becomes useless when that data is more bad than good. At worst, what does a world where no information or media can be trusted even look like?
PS: The header image in this blog was generated by DALL-E (OpenAI’s image model)!