Keynote Speakers

Deep Ethics

Speaker: Maximilian Kiener, Hamburg University of Technology, Germany

Abstract: How can we integrate ethics into the core of AI without distorting either? This question becomes urgent as AI systems increasingly shape decisions in healthcare, mobility, education, and law enforcement. Yet AI and ethics sometimes speak in seemingly incompatible terms: precision and optimisation on one side, nuance and moral judgment on the other. This talk addresses that gap in four steps: first, by unravelling the ethical dimensions hidden in technical structures such as reward functions and Markov Decision Processes in reinforcement learning; second, by outlining the conceptual and practical challenges in aligning machine learning with ethical principles; third, by introducing the idea of normative power as a new way to bridge these domains; and fourth, by motivating the need for a new field, deep ethics: ‘deep’ because it integrates ethical reasoning into the architecture of AI itself, and because it explores the foundational link between ethics and intelligence.


Will Embodied AI Become Sentient?

Speaker: Edward A. Lee, UC Berkeley, US

Abstract: Today’s large language models have relatively limited interaction with the physical world. They interact with humans through the Internet, but even this interaction is limited for safety reasons. According to psychological theories of embodied cognition, they therefore lack essential capabilities that lead to a cognitive mind. But this is changing. The nascent field of embodied robotics looks at properties that emerge when deep neural networks can sense and act in their physical environment. In this talk, I will examine fundamental changes that occur with the introduction of feedback through the physical world, when robots can not only sense to act, but also act to sense. Processes that require subjective involvement, not just objective observation, become possible. Using theories developed by Turing Award winner Judea Pearl, I will show that subjective involvement enables reasoning about causation, and therefore elevates robots to the point that it may become reasonable to hold them accountable for their actions. Using theories developed by Turing Award winners Shafi Goldwasser and Silvio Micali, I will show that knowledge can be purely subjective, not externally observable. Using theories developed by Turing Award winner Robin Milner, I will show that first-person interaction can gain knowledge that no objective observation can gain. Putting all these together, I conclude that embodied AI agents may in fact become sentient, but also that we can never know for sure whether this has happened.


Gaps in Generalization: A Case for Neurosymbolic AI

Speaker: Alvaro Velasquez, University of Colorado, US

Abstract: The ChatGPT moment demonstrated that ubiquitously useful generalization is possible for AI foundation models. However, this capability is limited by classical assumptions on machine learning models, such as the assumption that the test distribution matches the training distribution, and the manifold hypothesis that an otherwise complex dataset is governed by shared simple features. These assumptions raise a critical question: how can AI generalize outside of such domains? In this talk, we discuss how the foregoing assumptions are violated for important problems in autonomy, synthetic biology, logistics, and creative scientific discovery. We posit that, at some level of abstraction, shared symbolic structures across domains will enable greater generalization, and we present research directions for neurosymbolic AI to achieve this vision of symbolic generalization that is robust to the gaps between AI and reality.