Keynote Speakers

Will Embodied AI Become Sentient?

Speaker: Edward A. Lee, UC Berkeley, US

Abstract: Today’s large language models have relatively limited interaction with the physical world. They interact with humans through the Internet, but even this interaction is limited for safety reasons. According to psychological theories of embodied cognition, they therefore lack essential capabilities that lead to a cognitive mind. But this is changing. The nascent field of embodied robotics looks at properties that emerge when deep neural networks can sense and act in their physical environment. In this talk, I will examine fundamental changes that occur with the introduction of feedback through the physical world, when robots can not only sense in order to act, but also act in order to sense. Processes that require subjective involvement, not just objective observation, become possible. Using theories developed by Turing Award winner Judea Pearl, I will show that subjective involvement enables reasoning about causation, and therefore elevates robots to the point that it may become reasonable to hold them accountable for their actions. Using theories developed by Turing Award winners Shafi Goldwasser and Silvio Micali, I will show that knowledge can be purely subjective and not externally observable. Using theories developed by Turing Award winner Robin Milner, I will show that first-person interaction can yield knowledge that no objective observation can provide. Putting all of these together, I conclude that embodied AI agents may in fact become sentient, but also that we can never know for sure whether this has happened.