Logic List Mailing Archive

PhD student position in Logics for Ethical Reasoning in Social Robots, Toulouse (France), Deadline: 21 Feb 2021

PhD position in Logics for Ethical Reasoning in Social Robots

Institut de Recherche en Informatique de Toulouse (IRIT), Toulouse 
University, France


Description of the position

The International Center for Mathematics and Computer Science in Toulouse 
(CIMI, https://cimi.univ-toulouse.fr/en) offers a 3-year support grant for 
students starting a PhD in October 2021. Recruited doctoral students will 
be paid €1900 gross per month and will have the opportunity to take on a 
teaching assignment for the duration of their doctoral studies. Umberto 
Grandi (https://www.irit.fr/~Umberto.Grandi/) and Emiliano Lorini 
(https://www.irit.fr/~Emiliano.Lorini/) are seeking a candidate for a PhD 
position at CIMI to work on the research project "Logics for Ethical 
Reasoning in Social Robots" in close cooperation with Rachid Alami 
(https://homepages.laas.fr/rachid/) and Aurélie Clodic 
(https://homepages.laas.fr/aclodic/).

Description of the research project

An autonomous agent is, by definition, endowed with endogenous 
motivations, commonly called goals, which determine her preferences and 
thereby indirectly influence her decision-making process. The connection 
between an agent's goals and her preferences is highly relevant for 
machine ethics, one of the central areas of AI today (Allen et al. 2000; 
Etzioni & Etzioni 2017; Wallach & Allen 2008). Indeed, for an autonomous 
agent to be ethical and to behave responsibly, some of her goals must 
reflect the values and norms with which she is expected to comply and 
which take other agents and their welfare into consideration. These 
include both abstract values such as justice, fairness, reciprocity, 
equity and honesty, and more concrete ones such as "greenhouse gas 
emissions are reduced". A typical example of an ethical autonomous agent 
is a robot whose set of values includes respect for human integrity 
(Winfield et al. 2014). In order to provide her expected functionality, an 
ethical agent should be capable of computing her preference ordering over 
the alternatives directly from her values and then using it, together 
with her knowledge and beliefs, as input to her decision-making process.
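
To fix intuitions on this last step, here is a minimal Python sketch in 
which an agent derives her preference ordering by counting how many of 
her values each alternative satisfies. The encoding (the "values" 
predicates and the "alternatives" records) is purely illustrative and not 
part of the project; in the thesis this derivation would be carried out 
in a logical language rather than in code.

    # Illustrative sketch only: values as predicates over alternatives.
    # All names and numbers below are hypothetical.
    values = {
        "human_integrity": lambda a: a["harm_to_humans"] == 0,
        "low_emissions":   lambda a: a["emissions"] <= 10,
        "fairness":        lambda a: a["max_welfare_gap"] <= 1,
    }

    alternatives = [
        {"name": "plan_a", "harm_to_humans": 0, "emissions": 5,  "max_welfare_gap": 1},
        {"name": "plan_b", "harm_to_humans": 0, "emissions": 20, "max_welfare_gap": 0},
        {"name": "plan_c", "harm_to_humans": 1, "emissions": 3,  "max_welfare_gap": 0},
    ]

    def score(alternative):
        """Number of the agent's values that the alternative satisfies."""
        return sum(1 for v in values.values() if v(alternative))

    # Preference ordering: from most preferred to least preferred.
    ordering = sorted(alternatives, key=score, reverse=True)
    print([a["name"] for a in ordering])  # ['plan_a', 'plan_b', 'plan_c']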

There have been several attempts to formalize ethical reasoning with the 
aid of logical tools. There are approaches based on preference logic 
(Hansson 2001), the event calculus implemented in answer set programming 
(ASP) (Berreby et al. 2017), temporal-epistemic logic (Lorini 2015), BDI 
(belief, desire, intention) agent languages (Dennis et al. 2016) and 
classical higher-order logic (HOL) (Benzmüller et al. 2020). The focus of 
this PhD thesis is the formalization of the relationship between ethical 
values and preferences, as well as the influence of ethical values on 
decision-making. The methodology used in the project is a combination of 
epistemic logic (Fagin et al. 1995), dynamic epistemic logic (van 
Ditmarsch et al. 2007) and preference logic (van Benthem & Liu 2007), 
interpreted on a variety of formal semantics including relational 
semantics (Blackburn et al. 2001), neighbourhood semantics (Chellas 1980) 
and belief base semantics (Lorini 2020). The output of the PhD thesis 
will be a family of logics for ethical reasoning aimed at modelling 
interactive situations in which: (i) an agent's values may concern other 
agents' well-being, safety and integrity, and (ii) agents' decisions are 
interdependent, so that the possibility for an agent to achieve her 
values may depend on what the other agents decide to do. The latter are 
the typical situations studied in game theory. The logics developed in 
the context of the PhD thesis will allow us to express solution concepts 
from game theory and to elucidate the strategic aspects of ethical 
reasoning. Their semantics will borrow from well-studied concepts in 
social choice (most notably fairness criteria) and from compact languages 
for the representation of preferences, goals and values (Loreggia et al. 
2018) and their aggregation (Novaro et al. 2018; Haret et al. 2018). 
Decision procedures for the satisfiability checking and model checking 
problems of these logics will be devised.
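
To make the model checking problem concrete, here is a minimal Python 
sketch of a model checker for a basic modal (epistemic) language 
interpreted on a relational (Kripke) model, in the spirit of Blackburn et 
al. (2001) and Fagin et al. (1995). The tuple encoding of formulas and 
the toy model are illustrative assumptions; the logics developed in the 
thesis will of course be much richer.

    # Minimal model checker for a basic modal language on a Kripke model.
    # Formulas are encoded as tuples (an illustrative choice):
    #   ("p",)          atomic proposition p
    #   ("not", f)      negation
    #   ("and", f, g)   conjunction
    #   ("know", f)     box modality over the accessibility relation

    access = {"w1": {"w1", "w2"}, "w2": {"w2"}, "w3": {"w3"}}  # accessibility
    valuation = {"p": {"w1", "w2"}, "q": {"w2", "w3"}}         # atom -> worlds

    def holds(world, formula):
        """True iff the formula is satisfied at the given world."""
        op = formula[0]
        if op == "not":
            return not holds(world, formula[1])
        if op == "and":
            return holds(world, formula[1]) and holds(world, formula[2])
        if op == "know":  # true iff the argument holds at all accessible worlds
            return all(holds(v, formula[1]) for v in access[world])
        return world in valuation[op]  # atomic proposition

    # At w1 the agent knows p (p holds at both accessible worlds w1 and w2) ...
    print(holds("w1", ("know", ("p",))))  # True
    # ... but she does not know q, since q fails at the accessible world w1.
    print(holds("w1", ("know", ("q",))))  # False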

We will focus on social robotics as a pertinent context in which to 
investigate a potential algorithmic implementation of the framework. 
Indeed, human-robot joint action raises very challenging decision 
problems: the robot must elaborate strategies that are not only 
pertinent, but also acceptable to and legible by its human partner. 
Architectures, models and algorithms (Clodic et al. 2017; Lemaignan et 
al. 2017; Kruse et al. 2013) have been proposed to reason about human 
mental states and to generate human-aware plans that support 
collaborative human-robot task achievement. One objective would be to 
combine and enrich such systems with ethical reasoning.
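
One simple way to picture such a combination, offered here only as a 
hypothetical sketch and not as the method the thesis will develop, is an 
ethical filter on top of an existing human-aware planner: candidate plans 
violating a hard value are discarded, and the remaining ones are ranked 
by soft values. All names and value checks below are assumptions.

    # Hypothetical sketch: an ethical filter over a human-aware planner.
    # "candidate_plans" would come from an existing planner; the value
    # checks are illustrative only.

    def violates_hard_value(plan):
        """Hard constraint, e.g. respect for human integrity."""
        return plan["harm_to_humans"] > 0

    def ethical_rank(plan):
        """Soft values: prefer low emissions first, then low cost."""
        return (plan["emissions"], plan["cost"])

    def select_plan(candidate_plans):
        admissible = [p for p in candidate_plans if not violates_hard_value(p)]
        if not admissible:
            return None  # no ethically acceptable plan: defer to the human
        return min(admissible, key=ethical_rank)

    plans = [
        {"name": "fast",  "harm_to_humans": 1, "emissions": 2, "cost": 1},
        {"name": "clean", "harm_to_humans": 0, "emissions": 1, "cost": 3},
        {"name": "cheap", "harm_to_humans": 0, "emissions": 4, "cost": 1},
    ]
    print(select_plan(plans)["name"])  # "clean": "fast" is ruled out by the hard value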

References

C. Allen, G. Varner, and J. Zinser (2000). Prolegomena to any future 
artificial moral agent. Journal of Experimental and Theoretical Artificial 
Intelligence, 12, 3, 251-261.

J. van Benthem and F. Liu (2007). Dynamic logic of preference upgrade. 
Journal of Applied Non-Classical Logics, 17, 2, 157-182.

C. Benzmüller, X. Parent, and L. W. N. van der Torre (2020). Designing 
normative theories for ethical and legal reasoning: LogiKEy framework, 
methodology, and tool support. Artificial Intelligence, 287.

F. Berreby, G. Bourgne, and J.-G. Ganascia (2017). A Declarative Modular 
Framework for Representing and Applying Ethical Principles. In Proceedings 
of the 16th Conference on Autonomous Agents and MultiAgent Systems (AAMAS 
2017), ACM, 96-104.

P. Blackburn, M. de Rijke, and Y. Venema (2001). Modal Logic. Cambridge 
University Press, Cambridge.

G. Buisan, G. Sarthou, and R. Alami (2020). Human Aware Task Planning 
Using Verbal Communication Feasibility and Costs. In Proceedings of the 
International Conference on Social Robotics (ICSR 2020), Golden, United 
States, 554-565.

B. Chellas (1980). Modal logic: an introduction. Cambridge University 
Press, Cambridge.

A. Clodic, J. Vázquez-Salceda, F. Dignum, S. Mascarenhas, V. Dignum, et 
al. (2018). On the Pertinence of Social Practices for Social Robotics. In 
Envisioning Robots in Society: Power, Politics, and Public Space, IOS 
Press, 36-74.

A. Clodic, E. Pacherie, R. Alami, and R. Chatila (2017). Key Elements for 
Human-Robot Joint Action. In Sociality and Normativity for Robots: 
Philosophical Inquiries into Human-Robot Interactions, Studies in the 
Philosophy of Sociality, Springer, 159-177.

L. A. Dennis, M. Fisher, M. Slavkovik, and M. Webster (2016). Formal 
verification of ethical choices in autonomous systems. Robotics and 
Autonomous Systems, 77, 1-14.

H. P. van Ditmarsch, W. van der Hoek, and B. Kooi (2007). Dynamic 
Epistemic Logic. Kluwer Academic Publishers.

A. Etzioni and O. Etzioni (2017). Incorporating Ethics into Artificial 
Intelligence. The Journal of Ethics, 21, 403-418.

R. Fagin, J. Halpern, Y. Moses, and M. Vardi (1995). Reasoning about 
Knowledge. MIT Press, Cambridge.

S. O. Hansson (2001). The Structure of Values and Norms. Cambridge 
University Press.

A. Haret, A. Novaro, and U. Grandi (2018). Preference Aggregation with 
Incomplete CP-nets. In Proceedings of the 16th International Conference 
on Principles of Knowledge Representation and Reasoning (KR 2018).

T. Kruse, A. Pandey, R. Alami, and A. Kirsch (2013). Human-Aware Robot 
Navigation: A Survey. Robotics and Autonomous Systems, 61, 12, 1726-1743.

S. Lemaignan, M. Warnier, E. A. Sisbot, A. Clodic, and R. Alami (2017). 
Artificial Cognition for Social Human-Robot Interaction: An 
Implementation. Artificial Intelligence, 247, 45-69.

A. Loreggia, N. Mattei, F. Rossi, and K. B. Venable (2018). Preferences 
and Ethical Principles in Decision Making. In AAAI Spring Symposium 
Series 2018.

E. Lorini (2015). A logic for reasoning about moral agents. Logique & 
Analyse, 58, 230, 177-218.

E. Lorini (2019). Reasoning about cognitive attitudes in a qualitative 
setting. In Proceedings of the 16th European Conference on Logics in 
Artificial Intelligence (JELIA 2019), LNCS, vol. 11468, Springer, 726-743.

E. Lorini (2020). Rethinking epistemic logic with belief bases. Artificial 
Intelligence, 282.

A. Novaro, U. Grandi, D. Longin, and E. Lorini (2018). Goal-Based 
Collective Decisions: Axiomatics and Computational Complexity. In 
Proceedings of the 27th International Joint Conference on Artificial 
Intelligence (IJCAI 2018).

W. Wallach and C. Allen (2008). Moral Machines: Teaching Robots Right from 
Wrong. Oxford University Press.

A. F. T. Winfield, C. Blum, and W. Liu (2014). Towards an Ethical Robot: 
Internal Models, Consequences and Ethical Action Selection. In 
Proceedings of the 15th Annual Conference on Advances in Autonomous 
Robotics Systems (TAROS 2014), LNCS, vol. 8717, Springer, 85-96.

Candidate profile

The PhD is at the intersection of logic, knowledge and preference 
representation, game theory, social choice theory and social robotics. 
The ideal candidate should have a strong mathematical background and a 
master's degree in Computer Science, Logic or Mathematics, as well as 
previous experience in programming. Ideally, she/he should be familiar 
with propositional logic and modal logic, as well as with the theory of 
static and sequential games.

Further information and how to apply

For further information about the application and the CIMI competition, 
please email Emiliano.Lorini@irit.fr and Umberto.Grandi@irit.fr. To 
apply, please email your detailed CV, a motivation letter, and 
transcripts of your bachelor's and master's degrees to the same 
addresses. Samples of published research and reference letters are a 
plus.

APPLICATION DEADLINE FOR FULL CONSIDERATION: 21 February 2021
--
[LOGIC] mailing list
http://www.dvmlg.de/mailingliste.html
Archive: http://www.illc.uva.nl/LogicList/

provided by a collaboration of the DVMLG, the Maths Departments in Bonn and Hamburg, and the ILLC at the Universiteit van Amsterdam