My research is motivated by the problem of building explainable AI systems that can reason. By merging computational neural networks with argumentation semantics, I have developed the NAN architecture, into which algorithms can be integrated that learn in a logically coherent manner according to argument acceptability. Intuitively, arguments can be understood as structured statements that are judged on their truth, which we might call their acceptability. One argument may be more acceptable to one person than to another, and different argumentation semantics seek to capture how the acceptability of arguments is to be determined.
One way of determining acceptability is to judge arguments solely on their relationships with other arguments. Thus far, NANs have been developed that learn according to this method of determining acceptability: they learn the attack relation between arguments from acceptability data, and the learned relation can then be used to generalise to unseen data sets. This is the inverse of the problem studied in the majority of argumentation research, which assumes that the relationships between arguments (i.e. the attack relation) are already known and seeks to calculate the valid sets of argument acceptability statuses (known as argument extensions). My research instead takes a data set of valid sets of argument acceptability statuses (which, like most data sets, is not assumed to be exhaustive) and seeks to calculate an attack relation that is consistent with that data.
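To make the forward and inverse problems concrete, here is a minimal brute-force sketch in Python. It is purely illustrative and not the NAN learning algorithm itself: it assumes stable semantics (an extension is a conflict-free set that attacks every outside argument), and the function names `stable_extensions` and `consistent_attack_relations` are my own.

```python
from itertools import combinations, product

def stable_extensions(args, attacks):
    """Forward problem: enumerate stable extensions of an argumentation
    framework, i.e. conflict-free sets that attack every outside argument."""
    exts = set()
    for r in range(len(args) + 1):
        for s in map(set, combinations(args, r)):
            # Conflict-free: no member of s attacks another member of s.
            conflict_free = not any((a, b) in attacks for a in s for b in s)
            # Stability: every argument outside s is attacked by some member.
            attacks_rest = all(any((a, b) in attacks for a in s)
                               for b in set(args) - s)
            if conflict_free and attacks_rest:
                exts.add(frozenset(s))
    return exts

def consistent_attack_relations(args, observed_exts):
    """Inverse problem (exhaustive search): find every attack relation whose
    stable extensions include all of the observed acceptability sets."""
    pairs = [(a, b) for a in args for b in args]
    found = []
    for bits in product([0, 1], repeat=len(pairs)):
        attacks = {p for p, bit in zip(pairs, bits) if bit}
        if observed_exts <= stable_extensions(args, attacks):
            found.append(attacks)
    return found
```

For example, with arguments a and b where a attacks b, the only stable extension is {a}; running the inverse search on the observed data {{a}} recovers that attack relation among the candidates. The exhaustive search is exponential in the number of argument pairs, which is precisely why a learning approach such as the NAN is attractive for non-trivial frameworks.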
My ambition for the future is to expand the research to incorporate argument relationships beyond simple attacks, taking account of the support, weighting and timing of arguments. I also intend to apply the theoretical NAN architecture and its associated algorithms to real data in order to assess their ability to address the overarching aim of building explainable AI systems that can reason.
The OHAAI project is anticipated as an annual curation of selected papers describing PhD work on argumentation in AI. Argumentation, as a field within artificial intelligence, is highly relevant for researchers interested in symbolic representations of knowledge and defeasible reasoning. Several fellow PhD researchers and I were motivated by the observation that cutting-edge research conducted by PhD students often suffers from a lack of representation and struggles to reach a wide audience. OHAAI is designed to serve as a research hub that keeps track of the latest and upcoming PhD-driven research on the theory and application of argumentation in all areas related to AI. The handbook's goals are to:
1. Encourage collaboration and knowledge discovery between members of the argumentation community.
2. Provide a platform for PhD students to have their work published in a citable, peer-reviewed venue.
3. Present an indication of the scope and quality of PhD research involving argumentation for AI.