Invited Speakers' Talks
Steve Awodey
Carnegie Mellon University &
Royal Society Wolfson Visiting Fellow

Category Theory as a Tool for Mathematical Discovery

Category theory can be seen as the mathematics of transformations of various kinds. As such it has been used as a tool for discovering and reasoning about structural invariants. We consider several examples of such discoveries in algebra, logic, computer science, and higher mathematics. An application currently under development is the computer formalization of infinite-dimensional categories, which capture the invariant logic of continuous deformations of spaces.
Tony Cohn
Alan Turing Institute &
University of Leeds

Can Large Language Models Solve Spatial Puzzles?

Spatial reasoning is a core component of an agent’s ability to operate in or reason about the physical world. LLMs are widely promoted as being able to reason about a wide variety of domains, but much of our human ability to reason about spatial and temporal information is grounded in the knowledge of the physical world we acquire as embodied agents. Can a disembodied LLM reason about problems involving spatial and temporal information? In this talk I will discuss the ability of state-of-the-art LLMs to reason about qualitative spatial and temporal information. Across a wide range of LLMs, we find that although they usually perform somewhat better than chance, they still struggle with many questions and tasks, for example when reasoning about directions, topological relations, or temporal points and intervals.
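To make the kind of qualitative temporal reasoning being probed concrete, here is a minimal sketch (my own illustration, not the speaker's benchmark) of classifying the relation between two time intervals in Allen's interval algebra, the standard calculus of qualitative temporal relations; questions of exactly this form are among those LLMs are tested on:

```python
def allen_relation(a, b):
    """Return the Allen interval relation of interval a with respect to b.

    Intervals are (start, end) pairs with start < end; all 13 basic
    relations of Allen's interval algebra are covered.
    """
    (as_, ae), (bs, be) = a, b
    if ae < bs:   return "before"          # a ends strictly before b starts
    if be < as_:  return "after"
    if ae == bs:  return "meets"           # a's end touches b's start
    if be == as_: return "met-by"
    if as_ == bs and ae == be: return "equal"
    if as_ == bs: return "starts" if ae < be else "started-by"
    if ae == be:  return "finishes" if as_ > bs else "finished-by"
    if bs < as_ and ae < be: return "during"      # a strictly inside b
    if as_ < bs and be < ae: return "contains"
    return "overlaps" if as_ < bs else "overlapped-by"
```

A question like "event A runs from 1 to 3 and event B from 2 to 4; how are they related?" reduces to `allen_relation((1, 3), (2, 4))`, which returns `"overlaps"`; composing such relations over chains of events is where models tend to break down.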
Cristina Cornelio
Samsung AI, Cambridge

Derivable Scientific Discovery

Scientific progress has long relied on discovering new laws through domain expertise and experimental validation. Modern AI can now generate candidate hypotheses at unprecedented scale and speed. Yet this creates a critical bottleneck: without rigorous, scalable verification, the volume of AI-generated hypotheses risks overwhelming discovery rather than accelerating it. Verification is not an afterthought but the foundation of meaningful AI-assisted science. This talk presents a vision for scientific discovery in which models are derivable from explicit axioms while using minimal experimental data. This allows understanding why a law holds, when to trust it, and what must change when it fails. I will present three complementary systems we developed to achieve this: 1) AI-Descartes: uses symbolic regression to propose candidate models from data, then applies logical reasoning to select those most consistent with established axioms; 2) AI-Hilbert: integrates polynomial optimization with logical constraints, enforcing theoretical consistency and empirical validity simultaneously; and 3) AI-Noether: when current theory cannot derive a hypothesis, it proposes a minimal set of new axioms that make the hypothesis derivable. Together, these methods establish a new paradigm of "derivable scientific discovery", where integrating data and logic transforms AI from a mere hypothesis generator into a system that produces meaningful, verifiable laws.
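The selection step sketched for AI-Descartes can be illustrated with a toy example (my own construction under stated assumptions, not the actual system): several candidate laws may fit a noisy dataset, but only candidates that remain exact on data generated to satisfy a background axiom are retained.

```python
# Toy illustration of axiom-consistent model selection: candidates from a
# (hypothetical) symbolic-regression step are kept only if they hold exactly
# on points constructed to satisfy the background axiom d = v * t.
import random

def axiom_points(n=200, seed=0):
    """Generate (v, t, d) triples exactly satisfying the axiom d = v * t."""
    rng = random.Random(seed)
    for _ in range(n):
        v, t = rng.uniform(0.1, 10), rng.uniform(0.1, 10)
        yield v, t, v * t

# Candidate laws for v, as functions of the observables (v is the target).
candidates = {
    "v = d/t":     lambda v, t, d: d / t,       # a rearrangement of the axiom
    "v = d/t + t": lambda v, t, d: d / t + t,   # may fit noisy data, breaks the axiom
}

def consistent(f, tol=1e-9):
    """A candidate is kept iff it reproduces v on every axiom-satisfying point."""
    return all(abs(f(v, t, d) - v) < tol for v, t, d in axiom_points())

kept = [name for name, f in candidates.items() if consistent(f)]
```

The real systems do this with logical reasoning and polynomial optimization over symbolic axioms rather than numerical spot-checks, but the filtering principle, data fit alone is not enough, is the same.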
Anders C. Hansen
University of Cambridge

Necessary mechanisms for super AI and stopping hallucinations: the consistent reasoning paradox and the indeterminacy function

Creating Artificial Super Intelligence (ASI), AI that surpasses human intelligence, is the ultimate challenge in AI research. As we will discuss, it is fundamentally linked to the problem of avoiding hallucinations (wrong, yet plausible answers) in AI. We will describe a key mechanism that must be present in any ASI. This mechanism is absent from every modern chatbot, and we will discuss why, without it, ASI will never be achievable. Moreover, we show that any AI missing this mechanism will always hallucinate. Specifically, the mechanism is the computation of what we call an indeterminacy function, which determines when an AI is correct and when it cannot answer with 100% confidence. The root of these findings is the Consistent Reasoning Paradox (CRP), a new paradox in logical reasoning that we will describe in the talk. The CRP shows that the above mechanism must be present because, surprisingly, an ASI that is ‘pretty sure’ (more than 50% confident) can rewrite itself to become 100% certain: it will compute an indeterminacy function and either be correct with 100% confidence, or be no more than 50% sure. The CRP addresses a long-standing issue stemming from Turing’s famous remark that infallible AI cannot be intelligent, where he questions how much intelligence may be displayed if an AI makes no pretence at infallibility. The CRP answers this, consistent reasoning requires fallibility, and thus marks a necessary fundamental shift in AI design if ASI is ever to be achieved and hallucinations stopped.
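The flavour of an indeterminacy function can be conveyed with a deliberately simple sketch (my own illustration, not the paper's construction): a fallible solver is wrapped with a check that certifies its answer exactly; when the certificate fails, the system abstains rather than emitting a plausible-but-wrong answer.

```python
# A fallible base solver paired with an indeterminacy check: answer with
# certainty when the answer can be certified, abstain otherwise.

def heuristic_isqrt(n):
    """Fallible base solver: round the floating-point square root of n."""
    return round(n ** 0.5)

def indeterminate(n, answer):
    """Indeterminacy check: True when the answer cannot be certified exactly."""
    return answer * answer != n

def careful_solver(n):
    """Either correct with certainty, or an explicit 'I don't know'."""
    answer = heuristic_isqrt(n)
    if indeterminate(n, answer):
        return "I don't know"      # abstain instead of hallucinating
    return answer
```

For a perfect square such as 49 the wrapper certifies and returns 7; for 50 the certificate fails and the system abstains. The CRP argues that some mechanism of this verify-or-abstain shape is necessary in any ASI, in settings far beyond this toy one.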

FEAT: Free energy Estimators with Adaptive Transport

We present Free energy Estimators with Adaptive Transport (FEAT), a novel framework for free energy estimation, a critical challenge across scientific domains. FEAT leverages learned transports implemented via stochastic interpolants and provides consistent, minimum-variance estimators based on the escorted Jarzynski equality and the controlled Crooks theorem, alongside variational upper and lower bounds on free energy differences. By unifying equilibrium and non-equilibrium methods under a single theoretical framework, FEAT establishes a principled foundation for neural free energy calculations. Experimental validation on toy examples, molecular simulations, and quantum field theory demonstrates improvements over existing learning-based methods.
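For readers unfamiliar with the estimator family FEAT builds on, here is the plain (unescorted) Jarzynski estimator, ΔF = -kT ln⟨exp(-W/kT)⟩, on a toy Gaussian work distribution, for which the exact answer is ΔF = mean(W) - var(W)/(2kT). This is my own stdlib-only illustration of the classical identity, not FEAT itself, which replaces the raw average with learned transports.

```python
# Jarzynski free energy estimate from nonequilibrium work samples:
#   Delta_F = -kT * ln < exp(-W / kT) >
import math, random

kT = 1.0
mu, sigma = 2.0, 0.5               # mean and std of the toy work distribution
rng = random.Random(0)
work = [rng.gauss(mu, sigma) for _ in range(200_000)]

# Exponential (Boltzmann) average of the work samples.
boltz_avg = sum(math.exp(-w / kT) for w in work) / len(work)
dF_jarzynski = -kT * math.log(boltz_avg)

# Closed-form answer for Gaussian work: mu - sigma^2 / (2 kT) = 1.875 here.
dF_exact = mu - sigma**2 / (2 * kT)
```

The gap between `dF_jarzynski` and `dF_exact` shrinks with sample size, but the estimator's variance blows up when the work distribution is broad; that variance problem is what escorted/controlled variants and learned transports are designed to tame.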
Alessandro Sperduti
Università di Padova

Learning neuro-symbolic convergent term rewriting systems

Building neural systems that can learn to execute symbolic algorithms is a challenging open problem in artificial intelligence, especially when aiming for strong generalization and out-of-distribution performance. In this talk, I introduce a general framework for learning convergent term rewriting systems using a neuro-symbolic architecture inspired by the rewriting algorithm itself. I present two modular implementations of this architecture: the Neural Rewriting System (NRS) and the Fast Neural Rewriting System (FastNRS). As a result of the algorithm-inspired design and key architectural elements, both models can generalize to out-of-distribution instances, with FastNRS offering significant improvements in memory efficiency, training speed, and inference time. We evaluate both architectures on four tasks involving the simplification of mathematical formulas and further demonstrate their versatility in a multi-domain learning scenario, where a single model is trained to solve multiple types of problems simultaneously.
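The symbolic algorithm the neural architecture mirrors is simple to state: repeatedly locate a reducible subterm, rewrite it, and stop at the normal form, which a convergent system guarantees is unique. A minimal interpreter sketch (my own, not the NRS/FastNRS code) for formula simplification:

```python
# Terms are ints (leaves) or ("+"/"*", left, right) tuples. Each step rewrites
# one innermost reducible subterm; normalize iterates steps to convergence.

def step(term):
    """Rewrite one innermost reducible subterm; return (new_term, changed)."""
    if isinstance(term, int):
        return term, False
    op, a, b = term
    a2, changed = step(a)               # try to reduce the left child first
    if changed:
        return (op, a2, b), True
    b2, changed = step(b)               # then the right child
    if changed:
        return (op, a, b2), True
    if isinstance(a, int) and isinstance(b, int):
        return {"+": a + b, "*": a * b}[op], True   # apply a rewrite rule
    return term, False

def normalize(term):
    """Apply steps until no rule fires; terminates since each step shrinks the term."""
    changed = True
    while changed:
        term, changed = step(term)
    return term
```

For instance, `normalize(("+", ("*", 2, 3), ("+", 1, 1)))` reduces via `("+", 6, ("+", 1, 1))` and `("+", 6, 2)` to `8`. In NRS/FastNRS, neural modules learn to play the roles of `step`'s subterm selection and rule application from examples.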
Barbara Tversky
Stanford University &
Columbia Teachers College

Mind in Motion: How Action Shapes Thought

I will argue that spatial thinking is the foundation of thought: not the entire edifice, but the foundation. I will bring support from neuroscience, language, gesture, and visualizations, and tie them together with the notion of ‘spraction’: actions in space create abstractions. I will also put forth that these findings about human thought and creativity present challenges to current GenAI.
Petar Veličković
Google DeepMind

Please maximise signal... Do you copy? ...

We are heading toward a world where large language model (LLM)-based systems drive general-purpose computation. It is hence important to assess the extent to which such models robustly perform this computation. We will focus on basic tasks that often form part of a larger-scale system invocation—such as predicting maxima and input copying—to demonstrate that contemporary decoder-only Transformers cannot robustly perform these tasks, or even always know when they're wrong.
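One stdlib-only way to see why maximum-selection is hard for fixed-temperature softmax attention (my own toy demonstration of the dispersion effect, under the assumption of a single item whose logit beats the rest by a fixed gap) is to track how much attention weight the true maximum can receive as the input grows:

```python
# Softmax weight on the unique maximum when it beats the other n-1 items
# by a fixed logit gap: e^gap / (e^gap + (n - 1)), which decays toward 0
# as n grows, so a head that must pick out (or copy) one token disperses
# on longer inputs than it was trained on.
import math

def max_attention_weight(n, gap=1.0):
    """Attention weight on the single max among n items at logit gap `gap`."""
    return math.exp(gap) / (math.exp(gap) + (n - 1))

weights = {n: max_attention_weight(n) for n in (16, 256, 4096)}
```

At length 16 the maximum still draws roughly 15% of the attention mass; at length 4096 it draws well under 0.1%, illustrating why sharp selection cannot be maintained robustly at arbitrary lengths with a fixed softmax temperature.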
Pei Wang
Temple University

A Model of Reasoning that is Both Normative and Realistic

This talk presents NARS (Non-Axiomatic Reasoning System), an AGI framework built upon the Assumption of Insufficient Knowledge and Resources (AIKR). Within this paradigm, intelligence is defined not as the pursuit of absolute truths or optima, but as the capacity to utilize available knowledge and resources to achieve goals in a changing environment. NARS employs an experience-grounded semantics in which concepts are abstractions of experience and truth-values quantify evidential support. Knowledge is organized as a continually evolving concept graph, where a unified term logic drives reasoning, learning, and other cognitive processes across the structure. The system integrates multimodal experience, including spatiotemporal perception, linguistic input, and introspective signals. Inference processes are constructed under real-time pressure, guided by a dynamic priority distribution over tasks, beliefs, and concepts. The result is a rigorous and flexible reasoning engine that stands in contrast to the statistical nature of Large Language Models (LLMs), while still allowing LLMs to serve as complementary tools when appropriate.
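The experience-grounded truth-values mentioned above can be sketched in a few lines (a simplified rendering of the NAL definitions, with the evidential horizon k taken as 1; not the NARS implementation): frequency is the fraction of positive evidence, confidence grows with the total amount of evidence, and revision pools independent evidence for the same statement.

```python
# NAL-style truth values: frequency = w+ / w, confidence = w / (w + K),
# where w+ is positive evidence, w is total evidence, K the evidential horizon.
K = 1.0

def truth(pos, total):
    """(frequency, confidence) from positive and total evidence counts."""
    return pos / total, total / (total + K)

def revise(e1, e2):
    """Pool two independent bodies of evidence (w+, w) for one statement."""
    (p1, t1), (p2, t2) = e1, e2
    return truth(p1 + p2, t1 + t2)

# 4-of-5 positive observations revised with 1-of-5 from another source:
f, c = revise((4, 5), (1, 5))
```

The pooled belief has frequency 0.5 but higher confidence (10/11) than either source alone (5/6), capturing the AIKR idea that conclusions are always provisional yet strengthen as evidence accumulates.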
Yu-Guang Wang
Shanghai Jiao Tong University

AI Antibody Design Superintelligent Agent: An All-Atom Modeling and Automated Laboratory Closed-Loop Framework for Multi-Objective Developability Optimization

Antibody drug discovery is constrained by the vastness of the design space, the long cycles of wet-lab experimentation, and the difficulty of jointly optimizing multiple competing objectives. We developed an AI antibody design superintelligent agent that systematically integrates all-atom protein modeling, diffusion- and flow-matching–based sequence editing, cross-modal text-to-sequence retrieval, and zero-code training and deployment capabilities into an automated laboratory workflow, forming a closed-loop dry–wet data flywheel. In each round, the AI designs 96 candidate sequences, which undergo automated gene synthesis, expression, and multidimensional developability assessment before being fed back for model fine-tuning. Within at most three iterative cycles, the system achieves or surpasses state-of-the-art benchmarks across affinity, stability, expression yield, specificity, and immunogenicity. The platform demonstrates high hit rates and strong generalization across three tumor and immune targets, and further enhances clinical translatability through organoid-based toxicity evaluation. These results show that the deep coupling of AI and automated laboratories can elevate antibody design from single-point optimization to multi-objective coordinated optimization, providing a systematic and auditable framework for iterative biologics R&D.
Kelin Xia
NTU Singapore

Mathematical AI: from topological data analysis to topological deep learning

A central challenge in artificial intelligence (AI)-driven molecular science lies in efficiently representing molecular data and developing learning architectures that capture intrinsic structure-function relationships. In this talk, we introduce advanced mathematics-based molecular representations and learning frameworks. Molecular structures and interactions are encoded using high-order topological and algebraic representations, including Rips complexes, Alpha complexes, Neighborhood complexes, Dowker complexes, Hom-complexes, Tor-algebras, Rhomboid tilings, Sheaves, and Categories. Building on these foundations, we design physics-informed geometric and topological deep learning models that systematically integrate high-order, multiscale, and periodic information of molecular systems. These models have been successfully applied to diverse molecular datasets across chemistry, biology, and materials science, demonstrating their versatility and effectiveness in uncovering complex structure-function relationships.
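The simplest object in the list above, the Vietoris-Rips complex, is easy to build directly: at scale ε, connect every pair of points within distance ε, with higher simplices filling in cliques. A stdlib-only sketch of the 1-skeleton (my own illustration; real pipelines use persistent-homology libraries and vary ε to obtain persistence diagrams):

```python
# 1-skeleton of a Vietoris-Rips complex: include edge (i, j) whenever the
# two points lie within distance eps of each other.
import itertools, math

def rips_edges(points, eps):
    """Edges of the Rips complex on `points` at scale `eps`."""
    return [(i, j)
            for i, j in itertools.combinations(range(len(points)), 2)
            if math.dist(points[i], points[j]) <= eps]

# Unit square: at eps = 1.0 the four sides appear but not the diagonals,
# so the complex carries a 1-dimensional hole (a cycle).
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
edges = rips_edges(square, 1.0)
```

Tracking how such holes appear and disappear as ε sweeps from 0 upward is the persistence information that the topological deep learning models above consume as input features.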