Autonomous agents operating in a dynamic environment must be able to reason about their actions in pursuit of their goals. An additional consideration for such agents is that their actions may be constrained by norms that aim to define acceptable behaviour for the agents. The inclusion of normative reasoning in practical reasoning stems from the need for effective mechanisms that regulate an agent's behaviour in an open environment without compromising its autonomy. However, the conflict between agents' individual goals and societal goals (i.e. norms) makes deciding what to do a complicated activity. Argumentation enables reasoning and decision-making in the presence of conflict and supports explaining the reasoning mechanism in terms of a dialogue. The need to explain a complex task undertaken by an autonomous entity lies in the importance of facilitating human understanding of such entities and consequently increasing trust in these systems.

Existing argumentation-based approaches to practical reasoning often ignore the role of norms in practical reasoning and commonly neglect the dialogical aspect of argumentation as a means of explaining the reasoning process. To address these shortcomings, the research presented in this thesis allows an agent to use argumentation to decide what to do while being able to explain why such a decision was made. To this end, we present a model for normative practical reasoning that permits an agent to plan for conflicting goals and norms. We use argumentation frameworks to reason about these conflicts by weighing the importance of goal achievement and norm compliance against the cost of goals being ignored and norms being violated in each plan. Such reasoning serves as the basis for identifying the best plan for the agent to execute, while the reasoning process is explained using a natural-language translation of a proof dialogue game.
Date of Award: 3 May 2016
Supervisors: Marina De Vos (Supervisor) & Julian Padget (Supervisor)