Workshop Program of NeurMAD@AAAI’25
March 4, 2025
Pennsylvania Convention Center, Room 115A | Philadelphia, Pennsylvania, USA
Tuesday, March 4, 2025
09:00–09:05 | Opening remarks: Challenger Mishra, Mateja Jamnik, Pietro Liò and Tiansi Dong
Keynote #1 "Machine Learning Geometry?" Speaker: Elli Heyes, Imperial College London Chair: Mateja Jamnik
09:05–09:45 | AI techniques are often called black boxes, meaning that they are difficult to interpret and understand because they have a large number of parameters. One might think, therefore, that these techniques are ill-suited for application in pure mathematics, including geometry, which prioritises rigour and understanding. On the contrary, AI has been applied with great success to geometry in the past few years: raising conjectures by finding new patterns, generating new geometries to be studied by hand, and approximating metrics. In this talk we will review some of the work in this area.
Keynote #2 "Curiosity Styles in the (Natural & Artificial) Wild" Speaker: Dani Bassett, University of Pennsylvania Chair: Tiansi Dong
09:45–10:30 | What is curiosity? Across disciplines, some scholars offer a range of definitions while others eschew definitions altogether. Is the field of curiosity studies simply too young? Should we, as has been argued in neuroscience, press forward in definition-less characterization? At this juncture in the field's history, we turn to an examination of curiosity styles, and ask: How has curiosity been practiced over the last two millennia and how is it practiced today? We exercise a recent historico-philosophical account to catalogue common styles of curiosity and test for their presence as humans browse Wikipedia. Next, we consider leading theories from psychology and mechanics that could explain curiosity styles, and formalize those theories in the mathematics of network science. Such a formalization allows theories of curiosity to be explicitly tested in human behavioral data and for their relative mental affordances to be investigated. Moreover, the formalization allows us to train artificial agents to build in human-like curiosity styles through reinforcement learning. Finally, with styles and theories in hand, we expand out to a study of several million users of Wikipedia to understand how curiosity styles might or might not differ around the world and track factors of social inequality. Collectively, our findings support the notion that curiosity is practiced, differently across people, as unique styles of network building, thereby providing a connective counterpoint to the common acquisitional account of curiosity in humans.
10:30–11:00 | Coffee break
Keynote #3 "Automating the discovery of mathematical conjectures and identities" Speaker: Thomas Fink, London Institute for Mathematical Sciences Chair: Tiansi Dong
11:00–11:45 | Given a mathematical object, identities help us understand the behavior of that object. For example, we understand the special functions exp(x) and cos(x) by deducing identities about them, such as exp(2x) = exp(x) · exp(x) and cos(2x) = 2 cos(x) · cos(x) − 1. A lot of research involves searching for identities about new mathematical objects. To what extent can this be automated? To find out, we searched over all possible definitions of special functions with a quadratic form: f(mx) = a f(x) · f(x) + b f(x) + c. Using intelligent automation, we were able to discover conjectures about a variety of new and known special functions.
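(As a brief illustration, added here for orientation rather than taken from the abstract: the two identities quoted above are themselves instances of this quadratic form. exp(2x) = exp(x) · exp(x) corresponds to m = 2, a = 1, b = 0, c = 0, and cos(2x) = 2 cos(x) · cos(x) − 1 corresponds to m = 2, a = 2, b = 0, c = −1.)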
Keynote #4 "Why Machine Learning Cannot Reach the Rigour of Logical Reasoning?" Speaker: Tiansi Dong, University of Cambridge Chair: Mateja Jamnik
11:45–12:30 | In this talk, I will argue that supervised deep learning cannot achieve the rigour of syllogistic reasoning and thus will not reach the rigour of logical reasoning. I will spatialise syllogistic statements into part-whole relations between regions and define a neural criterion that is equivalent to the rigour of the symbolic level of syllogistic reasoning. By dissecting Euler Net (EN), a well-designed supervised deep-learning system for syllogistic reasoning (reaching 99.8% accuracy on the benchmark dataset), I will show three methodological limitations that prevent EN from reaching the rigour of syllogistic reasoning: (1) reasoning through a combination table, which cannot cover all valid types of syllogistic reasoning; (2) the end-to-end mapping from premises to conclusions, which introduces the contradictory features of object recognition (good at recognising the whole from parts) and logical reasoning (not good at injecting new parts); (3) the use of latent feature vectors to represent geometric structures that may not be present. As the Transformer’s Key-Query-Value structure is a combination table learned automatically through end-to-end mapping, Transformers and the neural networks built upon them will not reach the rigour of syllogistic reasoning.
12:30–14:00 | Lunch break
Keynote #5 "NeuroSymbolic AI and combinatorial competitive programming" Speaker: Alex Vitvitskyi, Google DeepMind Chair: Tiansi Dong
14:00–14:45 | From AlphaZero and Agent57 to AlphaProof; from Reinforcement-Learning-fuelled Chain of Thought to FunSearch. While these are all very different systems, they have one key element in common: they are neurosymbolic. Neural and symbolic paradigms have different inherent strengths and weaknesses, and hybrid neurosymbolic architectures, such as the ones above, combine the two approaches in multiple creative ways to maximize system performance. In this talk, we will look at a few such architectures, their structure, and the commonalities and differences between them. A particular focus will be placed on FunSearch, a neuro-symbolic system powered by genetic algorithms, and its application to the challenging domain of combinatorial competitive programming.
14:45–15:30 | Poster session Chair: Mateja Jamnik
Active Symbolic Discovery of Ordinary Differential Equations via Phase Portrait Sketching | Nan Jiang, Md Nasim, Yexiang Xue
LLM-based SQL Generation with Reinforcement Learning | Mariia Berdnyk, Marine Collery
From Black Box to Algorithmic Insight: Explainable AI in Graph Neural Networks for Graph Coloring | Elad Shoham, Havana Rika, Dan Vilenchik
Beyond Interpolation: Extrapolative Reasoning with Reinforcement Learning and Graph Neural Networks | Niccolò Grillo, Andrea Toccaceli, Benjamin Estermann, Joël Mathys, Stefania Fresca, Roger Wattenhofer
Can Better Solvers Find Better Matches? Assessing Math-LLM Models in Similar Problem Identification | Savitha Sam Abraham, Pietro Totis, Marjan Alirezaie, Luc De Raedt
Towards Learning to Reason: Comparing LLMs with Neuro-Symbolic on Arithmetic Relations in Abstract Reasoning | Michael Hersche, Giacomo Camposampiero, Roger Wattenhofer, Abu Sebastian, Abbas Rahimi
Enhancing Reasoning through Process Supervision with Monte Carlo Tree Search | Shuangtao Li, Shuaihao Dong, Kexin Luan, Xinhan Di, Chaofan Ding
An Evaluation of Approaches to Train Embeddings for Logical Inference | Yasir White, Jevon Lipsey, Jeff Heflin
15:30–16:00 | Coffee break
16:00–16:55 | Panel discussion "Interdisciplinary Discussions on Neural Reasoning and Mathematical Discovery" Panellists: Thomas Fink, Alex Vitvitskyi, Tiansi Dong Chair: Mateja Jamnik
16:55–17:00 | Closing remarks: Challenger Mishra, Mateja Jamnik, Pietro Liò and Tiansi Dong