Invited Speakers

Professor Ralph Bergmann, University of Trier & DFKI, Germany

EXAR: A Unified Experience-Grounded Agentic Reasoning Architecture

Current AI reasoning often relies on static pipelines (like the 4R cycle from Case-Based Reasoning (CBR) or standard Retrieval-Augmented Generation (RAG)) that limit adaptability. We argue it is time for a shift towards dynamic, experience-grounded agentic reasoning. This talk proposes EXAR, a new unified, experience-grounded architecture that conceptualizes reasoning not as a fixed sequence but as a collaborative process orchestrated among specialized agents. EXAR integrates data and knowledge sources into a persistent Long-Term Memory used by diverse reasoning agents, which coordinate through a shared Short-Term Memory. Governed by an Orchestrator and a Meta Learner, EXAR enables flexible, context-aware reasoning strategies that adapt and improve over time, offering a blueprint for next-generation AI.


Professor Claire Gardent, LORIA & CNRS, France

Case-Based Reasoning and Retrieval-Augmented Generation

In Natural Language Processing (NLP), Retrieval-Augmented Generation (RAG) has been gaining traction as a way to complement the parametric knowledge encoded in Large Language Models (LLMs) with explicit information retrieved on the fly to enrich the input query with relevant knowledge. This talk will start by briefly summarizing the impact of neural approaches on NLP, explaining how these methods have revolutionized the field by addressing some of the key challenges raised by natural language. The presentation will go on to discuss the parallel between RAG approaches, which retrieve and generate, and Case-Based Reasoning, which retrieves, reuses, revises, and retains. This latter part of the talk will draw on NLP examples Dr. Gardent has been working on with her students: retrieval-based question answering and summarization, generating Wikipedia biographies, and verbalizing knowledge graphs.


Professor David Leake, Indiana University, USA

CBR Tomorrow: Cases in the Age of Generative AI

Case-based reasoning research has a long history, starting more than four decades ago, inspired by insights on human memory, reasoning, and learning. Since that time, CBR research has elucidated the principles, methods, and practice of CBR and illuminated rich opportunities for synergies with other AI approaches. In 2022, artificial intelligence had a watershed moment: with the release of ChatGPT and subsequent models, generative AI took the world by storm, capturing public attention, transforming AI practice, challenging beliefs about the nature of intelligence, and recasting expectations for future society. What does generative AI mean for the future of case-based reasoning? To answer, I first touch on CBR history, revisiting the meaning of CBR through foundational motivations, tenets, and past insights. I then highlight why cases remain vital: what CBR can do that large language models cannot, the roles of CBR separately and in concert with generative AI, and why CBR-inspired memory matters. From this analysis I propose opportunities for tomorrow's CBR.