Aixistential pt.1
“Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity” – Sam Altman, OpenAI CEO, 2015
It’s time again for the internet’s best AI blog (according to ChatGPT!)
The AI giants are still at it and the hype is still going, so let’s go. Last time, we talked about the risks of using AI tools at work, releasing AI features to clients, and the risks of widespread AI tools causing disinformation.
This time, we will be discussing the existential risk of AI. We’ve decided to go full TikTok and release in two parts: part 1 covers the news updates, and part 2 next week delves deeper into AI existentialism.
Some of you might have caught my soap-opera-style cliff-hanger hinting at the risks of General AI. This time, we’re digging our heels in, putting our tinfoil hats on, and talking about it.
We’ll be diverging slightly from the specific implications for the treasury and financial industries, so prepare for a more conceptual, abstract, and big-picture read. Having said that, I still strongly believe this is a topic that should be on everyone’s horizon, especially those tasked with predicting what’s coming.
Yes, you guessed it, we’re talking about whether AI is an existential risk – told you you’d need your tinfoil hat… or do you, if it’s true?
News
But first, as always, a roundup of what’s new since last time.
Regulation, finally.
- Has someone been reading my series? – The US, EU, and UK have all signed up to the Council of Europe’s AI safety treaty, “the first-ever international legally binding treaty aimed at ensuring that the use of AI systems is fully consistent with human rights, democracy and the rule of law.”
- Maybe a Californian’s read the blog? – The California State Assembly has approved a bill imposing a series of safety measures on AI companies. These include having mechanisms for model shutdown (hmm… read on), protecting against “unsafe post-training modifications”, and establishing testing procedures to assess potential risks of causing “critical harm”. It has been met with mixed opinions: OpenAI says it will harm innovation, Nancy Pelosi claims it is “ill-informed”, but Elon Musk, unexpectedly, has shown support.
- Zuck: “please give me your data?” Meta and Spotify team up to complain about EU privacy laws, claiming they hold back AI innovation and will leave Europeans with AI “built for someone else”. Meta against privacy laws? What a shocking turn of events.
Business and Industry
- Ok, someone at Google definitely read it – Google has been rolling out safeguards for its AI products in the lead-up to the US election, making them refuse to respond to election-related queries. Meanwhile, YouTube announced it’s developing tools to protect creators from having their likeness, including their face and voice, copied with AI and used in other videos.
Latest fines and lawsuits:
- Anthropic has been sued by authors over the use of copyrighted material to train its Claude model.
- Clearview AI – the controversial AI facial recognition startup – has been hit by the Netherlands with its largest fine yet for breaching GDPR, following similar fines last year from the UK, France, Italy and Greece, bringing the total to around €80 million.
- The FCC has hit a shady telecom company with a $1 million fine, following a $6 million fine for a shady Texan robocaller, over the Biden deepfake scam earlier this year.
AI phone home:
- Apple launched Apple Intelligence, its AI feature set, “built into iPhone, iPad and Mac with groundbreaking privacy”, pinning its hopes on AI to sell iPhones.
- Google launched its new range of Pixel 9 phones, emphasising their enhanced AI abilities.
Funding AI:
- OpenAI wants more money – OpenAI is reportedly in talks with the aim of raising several billion dollars at a $100 billion valuation, the highest of any AI startup to date, with Thrive Capital and Microsoft expected to participate.
- Blockchain against AI – Story, a startup aiming to build a more “sustainable” IP ecosystem in the age of AI using blockchain (that’s a lot of buzzwords), announced it is raising $80 million at a $2.25 billion valuation.
- AMD feels left out – AMD, targeting Nvidia’s AI dominance, announced it is acquiring ZT Systems, a server solutions provider for cloud and infrastructure, for $4.9 billion.
- Safe AI? – Safe Superintelligence, an AI startup founded by OpenAI’s co-founder with the aim of providing… actually, just read the name… has raised $1 billion at a $5 billion valuation, led by Sequoia and Andreessen Horowitz. Someone please send them this blog.
- Alexa or Claude? – Amazon uses Anthropic’s Claude AI model to revamp Alexa, due in October, ahead of the U.S. holiday season.
- Nvidia dethroned? – An AI hardware startup, Cerebras, has created a new AI inference solution that potentially rivals Nvidia’s.
Techy updates
- OpenAI, stop stealing my job – OpenAI has just announced and released its latest series of models, o1 (including o1-preview and o1-mini, with the full o1 yet to come), focused on reasoning ability. Now capable of producing a full chain of thought, the models are, OpenAI claims, roughly 6x better at maths, 8x better at coding, and about 1.5x better at answering PhD-level science questions than GPT-4o… even beating an expert human.
- OpenAI brings fine-tuning to GPT-4o, allowing developers to customise the model.
- xAI announced their latest AI model, Grok-2, attempting to overthrow the leading models.
- xAI also announced a record-breaking AI training system, ‘Colossus’.
- MIT researchers discover simulations of reality deep within LLMs, showing an understanding of language beyond just mimicry. This blog is getting more relevant by the minute.
- Google potentially revolutionises drug design and disease research with AlphaProteo, an AI system for designing novel proteins to bind to target molecules.
- Google is also trying to develop AI that can hear sickness.
- A new open-source software, NeuroTrALE, quickly and efficiently processes vast amounts of brain imaging to help our understanding of the brain.
- Other examples of AI for scientific advancement include: two MIT teams using machine learning to advance material science (1 and 2), a Cambridge Uni spinout aiming to tackle cardiovascular disease, and an AI model to identify breast cancer stages.
Now we have that out of the way, get ready for the real juicy stuff in part 2. Check back next week and, in the meantime, get those tinfoil hats nice and shiny.
*TreasurySpring’s blogs and commentaries are provided for general information purposes only, and do not constitute legal, investment or other advice.