
Uppsala Vienna AI Colloquium

Monday, 21 October 2024

The Uppsala Vienna AI Colloquium is a series of online colloquium-style talks focused on philosophical issues surrounding AI technology. Each talk addresses a specific issue of relevance to AI systems (e.g., intelligence, agency, responsibility, etc.) and is delivered by an expert with a research background in the topic. The intended audience is philosophically informed individuals with an interest in the philosophy of artificial intelligence. More details about the Uppsala Vienna AI Colloquium are available at: https://uv-colloquium.com/

This month's talk will be delivered on 25 October by Charles Rathkopf, tenured research associate in Philosophy and Neuroscience, Jülich Research Center, Jülich, Germany.

https://www.charlesrathkopf.net/

The title and abstract of the talk:

Title (working): Hallucination, justification, and the role of generative AI in science

Abstract: Generative AI models are now being used to create synthetic climate data to improve the accuracy of climate models, and to construct virtual molecules that can then be synthesized for medical applications. But generative AI models are also notorious for their disposition to "hallucinate." A recent Nature editorial defines hallucination as a process in which a generative model "makes up incorrect answers" (Jones, 2024). This raises an obvious puzzle: if generative models are prone to fabricating incorrect answers, how can they be used responsibly? In this talk I provide an analysis of the phenomenon of hallucination, giving special attention to diffusion models trained on scientific data (rather than transformers trained on natural language). The goal of the paper is to work out how generative AI can be made compatible with reliabilist epistemology. I draw a distinction between parameter-space and feature-space deviations from the training data, and argue that hallucination is a subset of the latter. This allows us to recognize a class of cases in which the threat of hallucination simply does not arise. Among the remaining cases, I draw a further distinction between deviations that are discoverable by algorithmic means and those that are not. I then argue that if a deviation is discoverable by algorithmic means, reliability is not threatened, and that if it is not so discoverable, the generative model that produced it will be relevantly similar to other discovery procedures and can therefore be accommodated within the reliabilist framework.

To register and receive the Zoom link to join the talk, please fill in the following form: https://forms.office.com/e/1MxRxDRfW5

(The Zoom link will be emailed a few hours before the talk to all those who register using the form above.)

On behalf of all the organizers,
Nikhil Mahant
Marie Curie Postdoctoral Fellow,
Department of Philosophy, Uppsala University

nikhil.mahant@filosofi.uu.se

https://www.nikhilmahant.com/


Upcoming talks:

15 November: Mihaela Constantinescu

13 December: Fabio Tollon and Ann-Katrien Oimann