Prof. Kees van Deemter
Title: Explanation and Rationality in Models of Language
When theories of human behaviour aim to offer explanations, they often use rationality as their linchpin: to the extent that a theory helps us to see behaviour as optimising some form of rationality/utility, we feel that our theory explains this behaviour. This approach is not uncontroversial, however. For example, four decades of research in Behavioural Economics have shown that people behave in ways that are not easily explained by rationality alone.
Rationality has long had its adherents in the explanation of language use as well, for example via the Gricean Maxims. Recently, a Bayesian approach known as Rational Speech Act (RSA) theory has made inroads into the computational modelling of language use. In a nutshell, the idea is to build tightly coupled models of language comprehension and production in which speakers and hearers assume each other to behave rationally.
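The coupled speaker–hearer recursion behind RSA can be sketched in a few lines. This is a minimal illustrative sketch, not the model discussed in the talk: the toy lexicon, object names, and the rationality parameter `alpha` are my own assumptions, used only to show how a literal listener, a pragmatic speaker, and a pragmatic listener are stacked.

```python
import math

# Toy lexicon (hypothetical): which utterances are literally true of which object.
lexicon = {
    "glasses": {"face_with_glasses": 1, "face_with_glasses_and_hat": 1},
    "hat":     {"face_with_glasses_and_hat": 1},
}
objects = ["face_with_glasses", "face_with_glasses_and_hat"]

def normalise(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()} if z else d

def literal_listener(utterance):
    # L0: uniform belief over the objects the utterance is literally true of.
    return normalise({o: lexicon[utterance].get(o, 0) for o in objects})

def pragmatic_speaker(obj, alpha=1.0):
    # S1: picks utterances in proportion to exp(alpha * log L0(obj | u)),
    # i.e. (soft-)rationally maximising the literal listener's success.
    scores = {}
    for u in lexicon:
        p = literal_listener(u).get(obj, 0)
        scores[u] = math.exp(alpha * math.log(p)) if p > 0 else 0.0
    return normalise(scores)

def pragmatic_listener(utterance, alpha=1.0):
    # L1: Bayesian inversion of S1 under a uniform prior over objects.
    return normalise({o: pragmatic_speaker(o, alpha).get(utterance, 0)
                      for o in objects})

if __name__ == "__main__":
    # "glasses" is literally true of both objects, but a rational speaker
    # would have said "hat" for the hat-wearer, so L1 shifts probability
    # towards the plain glasses-wearer.
    print(pragmatic_listener("glasses"))
```

The point of the sketch is the mutual-rationality assumption: each layer of listener reasons about a speaker who in turn reasons about a simpler listener, which is exactly the coupling the abstract describes.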
In this talk I will sketch a series of experiments focussing on the way in which speakers refer to objects. These experiments paint a less "rational" picture of human language use, and they offer confirmation of a model, known as Probabilistic Referential Overspecification (PRO), that balances rationality against other considerations. I hope to engage in a discussion of the dilemma of having to choose between these two very different models, one of which is elegant and explanatory yet empirically inadequate, while the other is messy yet empirically much more adequate.
R.P.G. van Gompel, K. van Deemter, A. Gatt, R. Snoeren & E.J. Krahmer (2019). Conceptualization in Reference Production: Probabilistic Modeling and Experimental Testing. Psychological Review, 126(3), 345–373.
Biography: "Me in brief"
Prof. Ute Schmid
Title: Hybrid, Explanatory, Interactive Machine Learning: Towards Trustworthy Human-AI Partnerships
For many practical applications of machine learning, it is appropriate or even necessary to make use of human expertise to compensate for data that are too scarce or of too low quality. Taking into account knowledge that is available in explicit form reduces the amount of data needed for learning. Furthermore, even if domain experts cannot formulate their knowledge explicitly, they can typically recognise and correct erroneous decisions or actions. This type of implicit knowledge can be injected into the learning process to guide model adaptation. These insights have contributed to the so-called third wave of AI, with its focus on explainability (XAI). In the talk, I will introduce research on explanatory and interactive machine learning. I will present inductive programming as a powerful approach to learning interpretable models in relational domains. Arguing for the need for specific explanations tailored to different stakeholders and goals, I will introduce different types of explanations based on theories and findings from cognitive science. Furthermore, I will show how intelligent tutoring systems and XAI can be combined to support constructive learning. Algorithmic realisations of explanation generation will be complemented by results from psychological experiments investigating the effect on joint human-AI task performance and trust. Finally, current research projects are introduced to illustrate applications of the presented work in medical diagnostics, quality control in industrial production, file management, and accountability.
Ute Schmid has been a professor of Cognitive Systems at the University of Bamberg since 2004. She holds university diplomas in both psychology and computer science from Technical University Berlin (TUB). She received her doctoral degree and her habilitation in computer science, also at TU Berlin, where she was an assistant professor in the Methods of AI group. Ute was a visiting researcher at Carnegie Mellon University (funded by the DFG), and she worked as a lecturer for Intelligent Systems in the Department of Mathematics and Computer Science at the University of Osnabrück, where she was also a member of the Cognitive Science Institute. Ute Schmid is a member of the board of directors of the Bavarian Institute of Digital Transformation (bidt) and a member of the Bavarian AI Council (Bayerischer KI-Rat). Since 2020 she has been head of the Fraunhofer IIS project group Comprehensible AI (CAI). Ute Schmid won the Minerva Gender Equality Award of Informatics Europe 2018 for her university. For many years she has been engaged in educating the public about artificial intelligence in general and machine learning in particular, and she gives workshops for teachers as well as high-school students about AI and machine learning. For her outreach activities, she was awarded the Rainer-Markgraf-Preis 2020.