In conjunction with the Luxembourg Logic & AI Summit

University of Luxembourg, Luxembourg, 1-5 December 2025.

Description

Causality, Agents, and Large Models (CALM) brings together three rapidly growing fields of artificial intelligence research. This workshop aims to foster methodological integration across these disciplines by investigating how causal reasoning and explainable AI (XAI) techniques can enhance transparency, decision-making, and adaptation in multi-agent systems (MAS). The workshop will provide a forum for researchers to discuss theoretical foundations, practical applications, and future directions at the intersection of causal AI, XAI, large language models (LLMs), and MAS.

Workshop goals

  • Explore the role of causal reasoning in enhancing decision-making, coordination, and adaptation in multi-agent systems.
  • Discuss technical challenges and opportunities for integrating causal AI techniques and LLMs into MAS frameworks.
  • Develop methodologies and metrics to evaluate the explainability of causal reasoning in agents and large models.
  • Foster interdisciplinary collaboration between researchers in causal reasoning, XAI, LLMs, and MAS.
  • Examine how explanations support user trust, cognitive ergonomics, and effective human-agent collaboration.
  • Identify promising directions for future research and development in this rapidly advancing research domain.

Topics

The main topics of the CALM-25 workshop include (but are not restricted to):

  • Theoretical foundations of causal reasoning in Multi-Agent Systems (MAS)
  • Causal reasoning capabilities in Large Models (e.g., Large Language Models, LLMs)
  • Theoretical and practical Agentic-AI
  • Explainable AI (XAI) with active inference techniques
  • Applications of XAI techniques in agent-based modeling and simulation
  • Human-centered evaluation of causal explanations in MAS and LLMs
  • Causal inference in complex and dynamic multi-agent environments
  • Mechanistic Interpretability of Causality in Large Language Models
  • LLMs for coordination, cooperation, and communication among agents
  • Challenges and opportunities for incorporating XAI into MAS frameworks
  • Case studies and empirical evaluations of XAI approaches in agents
  • Generative AI as preprocessing for MAS
  • Exploring causality with deep generative models
  • Digital twins and simulators for interpretable synthetic data generation
  • Graph neural network causal learning
  • Interpretable and ergonomically-grounded root cause analysis methods for agent decision-making
  • Logic and argumentation-based approaches to causal reasoning
  • Ethical and Responsible XAI in LLMs
  • Human Factors in XAI and Agentic AI
  • Adaptive and Personalized Explanations (Context-aware, and Human-centric)
  • Multi-modal explanations and Cross-cultural ergonomics
  • Ergonomic evaluation of explanation modalities (visual, textual, interactive)
  • Explainability in human-agent teaming: Ergonomic principles for effective collaboration in mixed teams (humans + AI agents)

Important Dates

  • Submission deadline: October 8th, 2025 (extended from October 1st)
  • Notification: October 24th, 2025
  • Final date for camera-ready copy: TBD
  • Workshop: December 3rd, 2025

Submission

Submissions will be reviewed by two to three members of the program committee, who are experts in the field. Acceptance will be based on scientific rigor, methodological soundness, originality, and contribution to causality and explainability in agents and large models. All accepted papers will be published in Springer's Communications in Computer and Information Science (CCIS) series, which is devoted to the proceedings of computer science conferences.

Authors are invited to submit long papers of up to 16 pages (excluding references) or short papers of up to 6 pages (excluding references). All papers will undergo double-blind review. Papers must be prepared in the LNCS/CCIS format (using the LNCS/CCIS proceedings template available from Springer Computer Science Proceedings) and submitted electronically as PDF files via the EasyChair submission page.

Workshop Chairs

  • Dr. Yazan Mualla (Belfort-Montbeliard University of Technology, France). yazan.mualla[at]utbm.fr
  • Dr. Liuwen Yu (University of Luxembourg, Luxembourg). liuwen.yu[at]uni.lu
  • Dr. Hui Zhao (Tongji University, China). huizhao[at]tongji.edu.cn
  • Dr. Amro Najjar (Luxembourg Institute of Science and Technology, Luxembourg). amro.najjar[at]list.lu
  • Dr. Davide Liga (University of Luxembourg, Luxembourg). davide.liga[at]uni.lu

Advisory Board

  • Prof. Dr. Stéphane Galland (Belfort-Montbeliard University of Technology, France)
  • Prof. Dr. Abdeljalil Abbas-Turki (Belfort-Montbeliard University of Technology, France)

Program Committee

  • Stefano Tedeschi, Università della Valle d’Aosta – Université de la Vallée d’Aoste, Italy
  • Adeel Ahmad, Université du Littoral Côte d’Opale, France
  • Stéphane Galland, Université de Technologie Belfort-Montbéliard, France
  • Utsav Patel, Microsoft, United States
  • Eskandar Kouicem, Snowflake, France
  • Shiva Kumar Bhuram, Dorman Products, United States
  • Ilaria Amantea, Università di Torino, Italy
  • Badreddine Chah, Université de Technologie Belfort-Montbéliard, France
  • Alaa Daoud, Université Polytechnique Hauts-de-France, France
  • Syrine Haddad, Université de Technologie Belfort-Montbéliard, France
  • Hedi Tebourbi, University of Luxembourg, Luxembourg
  • Sukriti Bhattacharya, Luxembourg Institute of Science and Technology (LIST), Luxembourg
  • Alexandre Lombard, Université de Technologie Belfort-Montbéliard, France
  • Arianna Rossi, LIDER Lab, DIRPOLIS, Sant’Anna School of Advanced Studies, Italy
  • Jeet Mehta, Netflix, United States
  • Yanming Liu, Zhejiang University, China

Registration

Please visit the Registration page for more information.

Program

Please visit the Summit Program page for more information.

Previous Iterations

CALM-2024, Proceedings (Springer): Advances in Explainability, Agents, and Large Language Models.