In conjunction with the 25th International Conference on Principles and Practice of Multi-Agent Systems (PRIMA 2024)

Kyoto, Japan | November 18-24, 2024.

Description

Causality, Agents, and Large Models (CALM) represent three rapidly growing fields within artificial intelligence research. This workshop aims to bridge the gap between these disciplines by investigating how causal reasoning can enhance the capabilities of multi-agent systems (MAS). The workshop will provide a forum for researchers to discuss theoretical foundations, practical applications, and future directions at the intersection of causal AI, large language models (LLMs), and MAS.

Workshop goals

  • Explore the role of causal reasoning in enhancing decision-making, coordination, and adaptation in multi-agent systems.
  • Discuss technical challenges and opportunities for integrating causal AI techniques and LLMs into MAS frameworks.
  • Foster interdisciplinary collaboration between researchers in causal reasoning, LLMs, and MAS.
  • Identify promising directions for future research and development in this emerging area.

Topics

The main topics of the CALM-24 workshop include (but are not restricted to):

  • Theoretical foundations of causal reasoning in Multi-Agent Systems (MAS) and Large Language Models (LLMs)
  • Applications of causal AI techniques in agent-based modeling and simulation
  • Causal reasoning capabilities in LLMs
  • Causal inference in complex and dynamic multi-agent environments
  • Mechanistic interpretability of causality in LLMs
  • Integration of causal reasoning with decision-making and planning in MAS
  • Learning causal models from observational and interventional data in MAS
  • Causal reasoning for coordination, cooperation, and communication among agents
  • Challenges and opportunities for incorporating causal AI into MAS frameworks
  • Case studies and empirical evaluations of causal AI approaches in agents
  • Generative AI as preprocessing for MAS
  • Exploring causality with deep generative models
  • Digital twins and simulators for interpretable synthetic data generation
  • Graph neural network causal learning
  • Causal reinforcement learning
  • Interpretable/explainable root-cause analysis methods
  • Explainable AI with active inference techniques
  • Logic and argumentation-based approaches to causal reasoning

Important Dates

  • Submission deadline: September 19th, 2024
  • Notification: October 5th, 2024
  • Final date for camera-ready copy: TBD
  • Workshop: November 18-19, 2024

Submission

Each submission will be reviewed by at least three members of the program committee, who are experts in the field. Acceptance will be based on the quality, relevance, and originality of the submitted papers. Accepted papers may be published in the Springer Lecture Notes in Artificial Intelligence (LNAI) post-proceedings; this is to be confirmed.

Participants are invited to submit papers of up to 16 pages (excluding references); demo papers are limited to 5 pages (including references). Papers must be prepared in the LNCS format (using the LNCS post-proceedings template) and submitted electronically as PDF files via the EasyChair submission page.

Workshop Chairs

  • Dr. Yazan Mualla (Belfort-Montbeliard University of Technology, France). yazan.mualla[at]utbm.fr
  • Dr. Liuwen Yu (University of Luxembourg, Luxembourg). liuwen.yu[at]uni.lu
  • Dr. Davide Liga (University of Luxembourg, Luxembourg). davide.liga[at]uni.lu
  • Dr. Igor Tchappi (University of Luxembourg, Luxembourg). igor.tchappi[at]uni.lu

Advisory Board

  • Prof. Dr. Stéphane Galland (Belfort-Montbeliard University of Technology, France)
  • Prof. Dr. Abdeljalil Abbas-Turki (Belfort-Montbeliard University of Technology, France)

Program Committee

  • TBD

Accepted Papers

  • TBD

Registration

Please visit the PRIMA-24 Registration Page for more information.