Engineering
by Ruth Ntumba
The Department of Computing will be represented at AAMAS 2026 with several co-authored papers. The conference, held in Paphos, Cyprus from 25–29 May 2026, is a leading venue for research on autonomous agents and multiagent systems.
We are pleased to announce that multiple papers co-authored by members of the Department of Computing have been accepted to the 25th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2026), which will be held 25–29 May 2026 in Paphos, Cyprus.
AAMAS is the largest and most influential conference in the area of agents and multiagent systems, bringing together researchers and practitioners in all areas of agent technology and providing an internationally renowned high-profile forum for publishing and discovering the latest developments in the field. AAMAS is the flagship conference of the non-profit International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS).
Main Track Papers
Strength Change Explanations in Quantitative Argumentation
Authors: Timotheus Kampik, Xiang Yin, Nico Potyka, Professor Francesca Toni
This paper introduces strength change explanations for quantitative argumentation frameworks. Rather than only explaining how an AI system reached a conclusion, the authors investigate what would need to change for a different outcome to occur.
They model reasoning as a network of supporting and attacking arguments, each with a strength value. Their method identifies how adjusting certain initial strengths can alter final rankings among arguments. The paper formally shows that earlier inverse and counterfactual approaches can be understood as special cases of this framework, proves soundness and completeness under specific conditions, and analyses when such explanations exist. Experiments suggest that in structured, layered argument settings—common in real-world applications—such explanations can often be efficiently computed.
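To give a flavour of the kind of model involved, here is a minimal sketch of a generic gradual semantics (in the style of DF-QuAD) on a small acyclic bipolar argumentation graph. The aggregation formula, argument names, and toy graph are illustrative assumptions, not the specific semantics or examples used in the paper; the sketch simply shows how changing one initial (base) strength alters a final strength, which is the kind of change such explanations identify.

```python
# Illustrative sketch of a gradual semantics (DF-QuAD-style), not the
# paper's own method. Each argument has a base score in [0, 1]; attackers
# lower and supporters raise its final strength.

def aggregate(values):
    """Combine attacker or supporter strengths: 1 - prod(1 - v)."""
    prod = 1.0
    for v in values:
        prod *= (1.0 - v)
    return 1.0 - prod

def final_strength(arg, base, attackers, supporters, memo):
    """Recursively evaluate final strength on an acyclic graph."""
    if arg in memo:
        return memo[arg]
    va = aggregate([final_strength(a, base, attackers, supporters, memo)
                    for a in attackers.get(arg, [])])
    vs = aggregate([final_strength(s, base, attackers, supporters, memo)
                    for s in supporters.get(arg, [])])
    b = base[arg]
    if va >= vs:
        strength = b - b * (va - vs)        # attacks dominate
    else:
        strength = b + (1.0 - b) * (vs - va)  # supports dominate
    memo[arg] = strength
    return strength

# Toy framework: "claim" is attacked by "objection", supported by "evidence".
base = {"claim": 0.5, "objection": 0.6, "evidence": 0.4}
attackers = {"claim": ["objection"]}
supporters = {"claim": ["evidence"]}

s1 = final_strength("claim", base, attackers, supporters, {})
# Raising the objection's base score lowers the claim's final strength --
# the kind of initial-strength change a strength change explanation finds.
base["objection"] = 0.9
s2 = final_strength("claim", base, attackers, supporters, {})
```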
From User Preferences to Base Score Extraction Functions in Gradual Argumentation
Authors: Aniol Civit, Antonio Rago, Antonio Andriella, Guillem Alenyà, Professor Francesca Toni
Gradual argumentation assigns numerical strengths to arguments to support transparent decision-making. A major challenge is choosing appropriate starting scores.
This paper proposes a method to derive these scores automatically from simple user preferences (e.g., “I prefer A over B”), avoiding the need for users to assign numbers directly. The authors formalise desirable properties of such extraction functions, present an algorithm, and validate it experimentally, including in a robotics application. Results indicate that the method better reflects genuine human preferences and supports more practical deployment of argumentation systems.
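As a rough illustration of the idea (not the extraction function the authors propose), pairwise preferences can be turned into base scores by something as simple as normalised win counts: each argument's score is the fraction of its comparisons it won. All names and data below are invented for the sketch.

```python
# Illustrative stand-in for a base score extraction function: derive
# scores in [0, 1] from pairwise preferences via normalised win counts.
# This is NOT the algorithm from the paper, just a minimal example of
# mapping "I prefer A over B" statements to numbers.
from collections import Counter

def base_scores_from_preferences(preferences, arguments):
    """preferences: list of (winner, loser) pairs, e.g. ("A", "B")."""
    wins = Counter(w for w, _ in preferences)
    comparisons = Counter()
    for w, l in preferences:
        comparisons[w] += 1
        comparisons[l] += 1
    # Arguments never compared default to a neutral 0.5.
    return {a: wins[a] / comparisons[a] if comparisons[a] else 0.5
            for a in arguments}

# "A preferred over B", "A over C", "B over C":
prefs = [("A", "B"), ("A", "C"), ("B", "C")]
scores = base_scores_from_preferences(prefs, ["A", "B", "C"])
```

The user states only ordinal preferences; the numeric scores fall out automatically, which is the practical gain the paper targets.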
Retrieval and Argumentation Enhanced Multi-Agent LLMs for Judgmental Forecasting
Authors: Deniz Gorur, Antonio Rago, Professor Francesca Toni
This work explores judgmental forecasting using multiple AI agents powered by large language models (LLMs).
The system structures disagreements between agents as explicit arguments for and against a claim. Some agents directly generate and evaluate arguments, while others retrieve external evidence before forming their reasoning. Experiments show that combining multiple agents—especially three working together—improves predictive accuracy while maintaining transparency. The research demonstrates how structured argumentation can enhance both the performance and explainability of LLM-based systems.
Constrained Assumption-Based Argumentation Frameworks
Authors: Emanuele De Angelis, Fabio Fioravanti, Maria Chiara Meo, Alberto Pettorossi, Maurizio Proietti, Professor Francesca Toni
Traditional Assumption-Based Argumentation (ABA) frameworks are limited to variable-free representations, restricting their expressive power.
This paper extends ABA to support arguments containing variables with constraints, enabling reasoning over groups or even infinitely many entities. The authors define how attacks operate in this richer setting and prove that the extended framework preserves the core properties of standard ABA while substantially increasing its representational flexibility.
Safe and Scalable Multi-Agent Coordination with Reconstructed Level-k Monte Carlo Tree Search
Authors: Zhihao Lin, Lin Wu, Zhen Tian, Alessio Lomuscio, Jianglin Lan
Researchers from the University of Glasgow and 51³Ô¹ÏÍø have developed a novel framework to address the critical challenge of coordinating autonomous agents in decentralised, safety-critical environments. The study introduces a reconstructed Level-k cognitive hierarchy that transforms descriptive behavioural models into a constructive planning algorithm by replacing standard random baseline assumptions with conservative, safety-oriented trajectories. By integrating this recursive structure with Monte Carlo Tree Search (MCTS), the framework utilises a Dynamic Interaction Graph and safety-aware pruning to reduce computational complexity from exponential to linear, enabling real-time planning cycles under 100 ms. Experimental results on symmetric multi-agent intersections demonstrate that this "cascading conservatism" achieves a 0% collision rate and superior scalability, consistently outperforming vanilla MCTS and traditional game-theoretic methods.
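The core level-k idea can be sketched in a few lines: a level-0 agent follows a fixed conservative policy rather than acting randomly, and a level-k agent best-responds to an opponent assumed to reason at level k-1. The toy intersection game and payoffs below are invented for illustration and are far simpler than the paper's MCTS-based planner.

```python
# Illustrative level-k recursion with a conservative (not random) level-0
# baseline, in the spirit of the framework described above. The two-agent
# "intersection" game and its payoffs are made up for this sketch.

ACTIONS = ["go", "yield"]

def payoff(my_action, other_action):
    """Toy payoffs: both going risks collision; yielding is safe but slow."""
    if my_action == "go" and other_action == "go":
        return -100  # collision
    if my_action == "go":
        return 10    # crossed first
    return 1         # yielded safely

def level_k_action(k):
    """Level-0 acts conservatively; level-k best-responds to level k-1."""
    if k == 0:
        return "yield"  # conservative baseline instead of random play
    other = level_k_action(k - 1)
    return max(ACTIONS, key=lambda a: payoff(a, other))

a1 = level_k_action(1)  # best response to a yielding (level-0) opponent
a2 = level_k_action(2)  # best response to a level-1 opponent
```

Note how conservatism cascades: a level-2 agent, expecting a level-1 agent to go, yields, so no reasoning depth produces a collision.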
Blue Sky Ideas Track
Argumentative Human-AI Decision-Making: Toward AI Agents That Reason With Us, Not For Us
Authors: Stylianos Loukas Vasileiou, Antonio Rago, Professor Francesca Toni, William Yeoh
This visionary paper argues for integrating computational argumentation with large language models to create AI systems that reason with people rather than for them.
While argumentation provides structured and transparent reasoning, it often requires expert configuration. LLMs can process unstructured information but lack transparency. The authors outline how combining argument mining, structured reasoning, and LLM capabilities could enable interactive AI systems whose conclusions can be questioned, challenged, and revised. Such systems could play a transformative role in high-stakes domains such as healthcare, law, and public policy.
Demonstration Track
ArgLLM-App: An Interactive System for Argumentative Reasoning with Large Language Models
Authors: Adam Dejl, Deniz Gorur, Professor Francesca Toni
Argumentative LLMs (ArgLLMs) combine large language models (LLMs) with structured argumentation techniques to support decision-making. The goal is to ensure that the AI's decisions are not only explained clearly, but can also be questioned and challenged by people.
The team introduce ArgLLM-App, a web-based tool that uses these ArgLLM-powered agents to handle simple yes-or-no decisions. The tool shows users how the system reached its conclusion in a clear, visual way. It also lets users interact with the system, spot possible mistakes in its reasoning, and challenge them. The system is built in a flexible, modular way, which means it can easily connect to reliable external information sources.
Congratulations to all authors and collaborators on these significant contributions to multi-agent systems and trustworthy AI research.
Article text (excluding photos or graphics) © 51³Ô¹ÏÍø.
Photos and graphics subject to third party copyright used with permission or © 51³Ô¹ÏÍø.