The notion of “we-rationality”, introduced by the philosopher Martin Hollis, holds that, within any complex ecosystem, an agent’s mindset should not be “this action has good consequences for me” but rather “this action is my part of a global action that has good consequences for us”. When dealing with AI systems, this shift calls for a different approach, where the focus extends beyond a single AI model to encompass complex systems involving many, potentially heterogeneous, agents. In such a context, which we refer to as Cooperative AI, the “we-rationality” concept is fully realized only if all the agents cooperate to fulfill a common objective.
Cooperation mechanisms exhibit a wide range of complexities, reflecting various context-dependent dynamics and including AI-human, AI-AI, and AI-environment interactions. In human-AI interactions, on the one hand, the feedback of a domain expert guides the model’s training (Active Learning); on the other, the goal is to enhance the AI system’s information delivery, supporting practitioners and improving end users’ awareness (eXplainable AI). Another cooperation schema involves different AI entities (sub-symbolic and/or symbolic) operating in a shared environment to accomplish either a single common goal or individual subgoals without interfering with each other (Neuro-symbolic AI). Alternatively, these entities can be orchestrated by a central agent responsible for coordinating the agents’ evolution (Federated Learning). In other contexts, AI agents interact with the surrounding environment, which guides them toward their objectives through a system of rewards and penalties (Reinforcement Learning).

As a broad and overarching topic, a fundamental issue in Cooperative AI is sustainability, which aligns with the 17 Sustainable Development Goals (SDGs) defined by the United Nations. This encompasses both environmental aspects, such as designing novel, low-impact, energy-efficient AI models to support climate action, and social aspects, including efforts to reduce bias and discrimination in models’ responses, thereby promoting equality and inclusivity.
The CAIMA Workshop aims to foster discussion on interaction-driven AI systems, allowing participants to introduce and discuss new methods, theoretical approaches, algorithms, software tools, and applications.