This is not the AI we were promised | The Royal Society

Below is a short summary and detailed review of this video written by FutureFactual:

The 2025 Faraday Lecture: Modern AI Emergence, Boundaries, and the Reality Behind Large Language Models

The Royal Society hosted the 2025 Michael Faraday Prize lecture delivered by Professor Michael Wooldridge of Oxford on artificial intelligence. Wooldridge argues that contemporary AI, including large language models and transformers, is remarkably capable yet bizarre, not a rational mind or sentient being. He explains how scale and the transformer architecture enable emergent reasoning and even solve advanced math problems, while also highlighting fundamental limits such as hallucinations, inconsistent outputs, and the lack of true understanding. He emphasizes that AI should augment human intelligence as a cognitive tool rather than replace it, and discusses future frontiers such as robotics and safe deployment. The event concluded with a Q&A session featuring in-person and online audience questions.

Overview

The 2025 Michael Faraday Prize lecture at the Royal Society featured Professor Michael Wooldridge, Oxford, discussing the current state of artificial intelligence, its remarkable progress, and the surprising ways it differs from human thinking. He argues that while AI has achieved unprecedented capabilities, it remains weird, not a rational mind or sentient being, and requires careful, responsible use.

AI Today: Remarkable Yet Weird

Wooldridge traces AI’s ascent from 2014 to 2024, highlighting key moments such as GPT-3 in 2020 and the 2024 mathematical breakthroughs that showcased emergent reasoning in large language models. He emphasizes that contemporary AI can perform tasks that once seemed out of reach, including writing code, solving complex problems, and understanding natural language at a high level. Yet these systems frequently produce plausible but incorrect outputs, lack true understanding, and can hallucinate facts or misinterpret prompts. The talk makes clear that these tools are powerful, but not infallible.

How Large Language Models Learn

The core technical story centers on large language models trained with transformer architectures. Wooldridge explains that to learn to predict the next word in a sequence, models must be trained on vast amounts of text using enormous compute. He illustrates the scale with GPT-3's 175 billion parameters and hundreds of billions of training words, processed on AI supercomputers rather than a desktop. The result is a system that imitates the patterns of reasoning and problem solving seen in its training data, rather than solving problems from first principles.
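The next-word objective Wooldridge describes can be sketched in miniature. The toy corpus and function names below are illustrative assumptions, not anything from the lecture; where GPT-3 fits 175 billion parameters to hundreds of billions of words, this sketch merely counts which word follows which:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus: str):
    """Count, for each word, which words follow it in the corpus.
    This is the crudest possible form of the next-word objective
    that large language models optimize at vastly greater scale."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(model, word: str) -> str:
    """Return the continuation seen most often in training."""
    candidates = model.get(word.lower())
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

model = train_bigram_model(
    "the cat sat on the mat . the cat drank milk . the dog sat on the rug"
)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

The sketch also shows why such a system imitates its training data rather than reasons from first principles: it can only ever reproduce continuations it has already seen.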

Chain of Thought and Emergence

A key feature discussed is chain-of-thought reasoning, where models are prompted to “think through” steps to improve accuracy, at the cost of greater compute during inference. Wooldridge demonstrates this with live coding and problem solving, describing how such reasoning emerges from data scale and training rather than explicit programming of every rule. He notes the computational cost of this approach and its implications for real time use.
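The prompting pattern behind chain-of-thought can be illustrated without calling a model. The question, prompts, and hand-written response below are invented examples, and whitespace token counts are only a rough stand-in for real inference cost, but they show where the extra compute goes: into the intermediate tokens the model must generate before its answer.

```python
QUESTION = "A train travels 60 km in 40 minutes. What is its speed in km/h?"

# Direct prompting: ask for the answer alone.
direct_prompt = f"{QUESTION}\nAnswer:"
direct_response = "90 km/h"

# Chain-of-thought prompting: invite the model to spell out steps first.
cot_prompt = f"{QUESTION}\nLet's think step by step:"

# An illustrative, hand-written chain-of-thought response:
cot_response = (
    "40 minutes is 40/60 = 2/3 of an hour. "
    "Speed = 60 km / (2/3 h) = 90 km/h. "
    "Answer: 90 km/h"
)

# Rough proxy for inference cost: tokens generated per response.
print(len(direct_response.split()), "vs", len(cot_response.split()), "tokens")
```

The step-by-step response is an order of magnitude longer than the bare answer, which is the compute/accuracy trade-off Wooldridge notes for real-time use.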

Rational Minds, Turing Tests and Sentience

The talk revisits classic AI expectations, including the Turing test, arguing that while the test in its original form is not technically passed, millions interact with these models as if they were rational minds. Wooldridge stresses that these systems are not conscious, do not have beliefs, and do not truly understand the world, even though they can convincingly simulate human discourse. He also discusses hallucinations as a fundamental challenge that is deeply embedded in the objective of maximizing plausible next-word predictions.

Future Frontiers and Safety

Looking ahead, Wooldridge identifies robotics and AI in the physical world as a major frontier, highlighting the difficulty of endowing robots with human-like dexterity and everyday competence. He argues for viewing AI as a cognitive prosthesis that augments human intelligence, rather than a replacement for human capabilities. The talk closes with reflections on responsible use, governance, and ongoing research into safer, more trustworthy AI.

Audience Q&A Highlights

Questions range from how data quality affects AI performance to the comparability of toddler learning with machine learning, and from the potential for general purpose robots to the ethics and governance of AI systems. The session underscores the complexity of achieving true generality and reliability in AI, while reaffirming the transformative potential of these technologies when guided by careful engineering and policy choices.
