Logic List Mailing Archive
PhD student position in Argumentative Explainable AI, Utrecht (The Netherlands), Deadline: 31 May 2022
PhD position in Argumentative Explainable Artificial Intelligence at
Utrecht University, the Netherlands, for five years.
Deadline: 31 May 2022.
If we want to apply artificial intelligence (AI) in high-risk domains,
such as health care and the legal system, the applications need to be
transparent and trustworthy. Additionally, they need to comply with an
increasing range of ethical and legal guidelines. The fast-growing
research area of eXplainable Artificial Intelligence (XAI) is aimed at
increasing transparency and trust by providing explanations for AI systems
and the decisions they make. Research in XAI has mainly been focused on
making learning-based approaches to AI more transparent. However, these
approaches do not account for the use of prior knowledge and reasoning in
human cognition. Furthermore, they do not take the important aspect of
contestability of explanations into account. In this project, you will
study knowledge-based approaches to XAI, specifically approaches to XAI
based on computational argumentation (Dung, 1995).
Knowledge-based approaches in AI are applied in many real-life settings, for example at the Netherlands Police and the Dutch Tax and Customs Administration. Additionally, it has been suggested that learning-based approaches to AI could be made more transparent by combining them with knowledge-based approaches: hybrid or neuro-symbolic AI. Developing good explanations for knowledge-based approaches to AI is therefore essential.
Explanations for knowledge-based AI have a long history, and argumentation-based XAI has received renewed impetus of late (Cyras et al., 2021; Borg and Bex, 2021): human cognition is inherently argumentative, and interactive argumentative dialogues are important for increasing trust in explanations. However, many questions remain. In particular, implementing the argumentative and interactive aspects of human explanation with computational argumentation has not yet been sufficiently explored. Therefore, in this PhD project you will:
- investigate how the benefits of explicit knowledge and argumentation can be implemented for XAI;
- determine what makes good argumentation-based explanations; and
- apply argumentation-based approaches to provide explanations for outcomes of other AI approaches, such as (deep) machine learning.
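For readers unfamiliar with the formalism cited above (Dung, 1995), a framework is just a set of arguments with an attack relation, and its grounded extension is the least fixed point of the characteristic function. The following minimal sketch illustrates this; the arguments a, b, c and their attacks are an invented toy example, not drawn from the project description.

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of a Dung abstract argumentation framework.

    arguments: set of argument labels; attacks: set of (attacker, target) pairs.
    A set S defends an argument a if every attacker of a is itself
    attacked by some member of S. The grounded extension is reached by
    iterating "collect all defended arguments" until a fixed point.
    """
    extension = set()
    while True:
        # F(S): all arguments defended by the current set S
        defended = {
            a for a in arguments
            if all(any((d, attacker) in attacks for d in extension)
                   for (attacker, target) in attacks if target == a)
        }
        if defended == extension:
            return extension
        extension = defended

# Toy example: a attacks b, b attacks c.
# a is unattacked, so it is in; a defends c against b, so c is in too.
print(sorted(grounded_extension({"a", "b", "c"},
                                {("a", "b"), ("b", "c")})))  # ['a', 'c']
```

Argumentation-based explanations of the kind studied in the project build on semantics like this one: whether (and why) an argument is accepted can be traced through the attack and defence structure.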
Besides conducting research, you will spend 30% of your time on teaching activities.
You will join the Intelligent Systems Group at the Department of Information and Computing Sciences at Utrecht University. This group has a strong tradition in logic and AI, including computational argumentation. Together with the Hybrid Intelligence Centre and the National Police-lab AI, they constitute a vibrant community of researchers on the subjects of XAI and hybrid knowledge-based/machine learning AI.
(Borg and Bex, 2021) A. Borg, F. Bex (2021). A Basic Framework for Explanations in Argumentation. IEEE Intelligent Systems.
(Cyras et al., 2021) K. Cyras, A. Rago, E. Albini, P. Baroni and F. Toni (2021). Argumentative XAI: A Survey. In: Proceedings of IJCAI’21.
(Dung, 1995) P.M. Dung (1995). On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence.
We are looking for a candidate with:
- a Master's degree in Artificial Intelligence or a related field (e.g., computer science, logic, mathematics or philosophy);
- a strong background in logic and/or knowledge-based approaches to (X)AI;
- excellent English communication skills (written and spoken);
- a background and/or an interest in machine learning or ethical/legal aspects of XAI (a plus).
We offer:
- a position for five years;
- a full-time gross salary that starts at €2,443 and increases to €3,122 per month (scale P of the Collective Labour Agreement Dutch Universities (cao));
- 8% holiday bonus and 8.3% end-of-year bonus;
- a pension scheme, partially paid parental leave, and flexible employment conditions based on the Collective Labour Agreement Dutch Universities.
In addition to the employment conditions laid down in the cao for Dutch Universities, Utrecht University has a number of its own arrangements. For example, there are agreements on professional development, leave arrangements and sports. We also give you the opportunity to expand your terms of employment via the Employment Conditions Selection Model. This is how we like to encourage you to continue to grow.
For more information and to apply, please visit: https://www.uu.nl/en/organisation/working-at-utrecht-university/jobs/phd-position-in-argumentative-explainable-artificial-intelligence-10-fte
If you have any questions regarding the position, please contact AnneMarie Borg (email@example.com).
[LOGIC] mailing list
provided by a collaboration of the DVMLG, the Maths Departments in Bonn and Hamburg, and the ILLC at the Universiteit van Amsterdam