Logic List Mailing Archive
CfA: Fully-Funded PhD in Explainable Reasoning & Argumentative AI, CRIL – Université d'Artois (France), start: October 2025, deadline: asap
Fully-funded 3-year PhD in computer science - artificial intelligence - explainable reasoning
Keywords: Argumentative AI, Explainable Reasoning, Human-Centered AI, Argument Influence
Starting date: October 2025
Location: Centre de Recherche en Informatique de Lens (CRIL, UMR 8188, CNRS & Université d'Artois), France
Supervisors: Dr Srdjan Vesic (CNRS, CRIL, Université d'Artois) and Dr Mathieu Hainselin (CRP-CPO, Université de Picardie Jules Verne)
Description:
Computational argumentation theory provides essential tools for analyzing structured debates, with applications in AI-assisted decision-making systems, online discussion platforms, and human-AI interaction. In this context, explainability is critical: systems must not only determine which arguments are accepted based on abstract semantics, but also make this reasoning transparent and cognitively accessible to human users. Yet existing semantics, typically grounded in logic-based frameworks and Dung's abstract argumentation, may fail to align with human intuitions, limiting both their usability and their trustworthiness in practice.
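For readers less familiar with the area, here is a minimal Python sketch of a Dung-style abstract argumentation framework and its grounded extension (the most skeptical of the standard semantics, obtained as the least fixed point of the characteristic function). The function names and the three-argument example are illustrative, not taken from the project description.

# An argumentation framework is a set of arguments plus an attack relation.
# The grounded extension is the least fixed point of F(S) = {a : every
# attacker of a is itself attacked by some member of S}.
def grounded_extension(arguments, attacks):
    attackers = {a: {b for (b, t) in attacks if t == a} for a in arguments}

    def defended(s):
        # Arguments all of whose attackers are counter-attacked by s.
        return {a for a in arguments
                if all(any((d, b) in attacks for d in s) for b in attackers[a])}

    extension = set()
    while True:
        nxt = defended(extension)
        if nxt == extension:
            return extension
        extension = nxt

# Example: a attacks b, b attacks c. The grounded extension is {a, c}:
# a is unattacked, and a defends c against b.
print(sorted(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")})))

Explaining to a lay user why c ends up accepted here (it is defended by a) is exactly the kind of question the thesis addresses.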
This fully funded PhD thesis will focus on improving the alignment between formal acceptability semantics and human reasoning. Research objectives include:
• Evaluating whether current principled constraints are perceived as intuitive by users
• Assessing their explanatory power, particularly in helping users grasp why certain arguments are accepted or rejected
• Formalizing new principles or designing alternative semantics to better capture observed reasoning patterns
• Investigating quantitative impact measures, which capture how much individual arguments influence the acceptability of others, and evaluating how such influence is perceived and interpreted by users (see the sketch after this list)
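To make the last objective concrete, the sketch below quantifies influence using the well-known h-categorizer gradual semantics (an argument's score is 1 / (1 + the sum of its attackers' scores), computed by fixed-point iteration) combined with a simple removal-based impact measure. Both concrete choices are illustrative assumptions, not necessarily the measures the thesis will study.

# h-categorizer gradual semantics, computed by fixed-point iteration:
# score(a) = 1 / (1 + sum of the scores of a's attackers).
def h_categorizer(arguments, attacks, iterations=100):
    score = {a: 1.0 for a in arguments}
    for _ in range(iterations):
        score = {a: 1.0 / (1.0 + sum(score[b] for (b, t) in attacks if t == a))
                 for a in arguments}
    return score

# Illustrative impact measure: the change in y's score when x is removed.
def removal_impact(x, y, arguments, attacks):
    with_x = h_categorizer(arguments, attacks)
    without_x = h_categorizer(arguments - {x},
                              {(b, t) for (b, t) in attacks if x not in (b, t)})
    return with_x[y] - without_x[y]

args, atts = {"a", "b", "c"}, {("a", "b"), ("b", "c")}
print(round(removal_impact("a", "c", args, atts), 3))  # 0.167: a indirectly supports c

Whether users find such numerical influence scores intuitive is precisely the kind of empirical question raised above.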
The project is highly interdisciplinary, involving close collaboration with psychologists and cognitive scientists, and combining formal modeling, empirical user studies, and, potentially, software prototyping. The overarching goal is to contribute to the development of more explainable, intuitive, and responsible AI systems, grounded in both logical foundations and empirical validation.
A good level of English is required. Applicants should have a strong background in logic, AI, computer science, or a related field. An interest in cognitive science is welcome.
The PhD includes full funding, collaboration opportunities, and publication support.
For more details about the thesis and to apply, send an email to vesic@cril.fr
--
[LOGIC] mailing list, provided by DLMPST
More information (including information about subscription management) can
be found here: http://dlmpst.org/pages/logic-list.php