Utrecht University is hiring a PhD candidate in Philosophy of Science and Ethics of AI. It is a fully funded, 4-year research position on the ERC project Machine Learning in Science and Society: A Dangerous Toy? (TOY). The application deadline is 10 Dec. Applicants must have a Master's degree in philosophy (or a related field) by the start of the position.
PhD project description:
The same machine learning methods that are used in unprecedented ways across large areas of science also have a wide-ranging impact across society. ML models determine what news we see, compute risk scores for fraud, and more. LLMs are structuring our knowledge, with ChatGPT integrated into Bing search and Quora answers, despite well-documented cases of ChatGPT 'hallucinations.'
Current approaches to evaluating ML models in society have clustered around issues of fairness and bias, problems of justice raised by introducing ML models at scale, the right to explanation, and more. While all these issues remain important, there is a deeper worry that ML models might not be providing us with genuine information or knowledge in the first place. Before we can make informed decisions about when and where ML models should be used across society, we need to understand their epistemic value.
The aim of this PhD project is to bring methods and resources from philosophy of science (e.g. work on idealization and representation) to bear on important questions in AI ethics regarding the appropriate use of ML models in society. How do ML models idealize social phenomena? When do the idealizations of ML models get in the way of acceptable use?
This PhD project is part of the ERC Starting Grant project Machine Learning in Science and Society: A Dangerous Toy? (TOY). The project team consists of the PI (Emily Sullivan), this PhD position, and two forthcoming postdoc positions. The PhD candidate will be embedded within the theoretical philosophy group at Utrecht University and the Normative Philosophy of Science research lab.