1st NORA Workshop at ICLR'26, April 23-27, 2026

Agents have experienced significant growth in recent years, largely driven by the rapid technological advancement of Large Language Models (LLMs). Although these agents benefit from LLMs' advanced generation capabilities, they still suffer from catastrophic forgetting and from context windows that are small relative to the contextual information agents need. Knowledge Graphs (KGs) are a powerful paradigm for structuring and managing connected pieces of information while unlocking deeper insights than traditional methods. Their value is immense for tasks that require context, integration, inter-linking, and reasoning. However, this power comes at the cost of significant upfront and ongoing investment in construction, curation, and specialised expertise. The NORA workshop aims to analyse and discuss emerging and novel practices, ongoing research efforts, and validated or deployed innovative solutions that showcase the growing synergy between LLM agents and KGs.
The recent proliferation of large language models (LLMs) has opened the door to new paradigms that benefit many applications, such as intelligent assistants, content creation & summarisation, code generation & debugging, and knowledge discovery, to name a few. Such applications are built using prompt engineering & in-context learning, retrieval-augmented generation, fine-tuning & alignment, and function calling & tool usage. These families of techniques can be used on their own or combined for better results.
Thanks to the constantly improving reasoning and function-calling capabilities of LLMs, LLM-based agents have attracted growing attention. While performing their allocated tasks, these agents usually need to accumulate memory and feedback from tool calls and to maintain this state over long-running tasks. Consequently, as their token usage grows, they can easily exceed the context window, inflate costs, and degrade both latency and performance.
Depending on their tasks, agents usually need access to minimal portions of semantic memory (i.e., facts), episodic memory (i.e., events), and procedural memory (i.e., instructions). However, it remains challenging for agents to select the relevant items from these different memories, especially in large-scale applications (e.g., personal memories in personal-assistant scenarios).
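As a purely illustrative sketch, the Python snippet below separates the three memory types and retrieves the items most relevant to a query using a naive keyword overlap; the names (AgentMemory, recall) and the toy entries are hypothetical and not drawn from any existing agent framework.

from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    # Hypothetical split of the three memory types discussed above.
    semantic: dict[str, str] = field(default_factory=dict)   # facts: entity -> statement
    episodic: list[str] = field(default_factory=list)        # events: past interactions
    procedural: list[str] = field(default_factory=list)      # instructions: how-to steps

    def recall(self, query: str, k: int = 3) -> list[str]:
        """Naive keyword-overlap retrieval across all memories,
        standing in for embedding- or KG-based selection."""
        candidates = list(self.semantic.values()) + self.episodic + self.procedural
        scored = sorted(
            candidates,
            key=lambda item: len(set(query.lower().split()) & set(item.lower().split())),
            reverse=True,
        )
        return scored[:k]

memory = AgentMemory()
memory.semantic["ICLR 2026"] = "ICLR 2026 takes place in Rio de Janeiro, Brazil."
memory.episodic.append("User asked yesterday about workshop submission deadlines.")
memory.procedural.append("To submit a paper, upload an anonymised PDF to OpenReview.")
print(memory.recall("Where is ICLR 2026 held?"))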
Knowledge Graphs (KGs) model data and knowledge as graphs, a structured and explicit format. Thanks to this native structure, they have demonstrated great capabilities in capturing rich semantics and connections between entities and concepts in both closed and open domains. This enables both 1) complex logical reasoning, which is needed for multi-hop queries and for deriving new implicit knowledge from explicit facts; and 2) graph-based learning through the richer features of structured data. However, curating knowledge can be challenging, especially from heterogeneous data sources and formats (e.g., personal assistants). Large-scale and industrial scenarios are even more affected by this bottleneck, which thereby lowers the adoption of purely KG-based solutions in some industrial use cases.
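To illustrate the multi-hop reasoning mentioned above, the minimal sketch below uses the rdflib Python library to answer a two-hop question over a toy KG; the entities and relations (ex:worksFor, ex:headquarteredIn, ex:locatedIn) are invented for the example and carry no connection to the workshop's topics.

from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Alice, EX.worksFor, EX.AcmeLabs))
g.add((EX.AcmeLabs, EX.headquarteredIn, EX.Rio))
g.add((EX.Rio, EX.locatedIn, EX.Brazil))

# Multi-hop query: in which country is Alice's employer headquartered?
q = """
PREFIX ex: <http://example.org/>
SELECT ?country WHERE {
  ex:Alice ex:worksFor ?org .
  ?org ex:headquarteredIn ?city .
  ?city ex:locatedIn ?country .
}
"""
for row in g.query(q):
    print(row.country)   # -> http://example.org/Brazil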
Therefore, this first edition of the workshop aims to unveil the emerging yet growing interplay between two key paradigms of recent AI systems: Agents and Knowledge Graphs. On the one hand, the efficiency and performance of agentic systems can benefit greatly from KGs as a structured data model and reasoning foundation, especially in designing and implementing their various memories. On the other hand, KGs can leverage the advanced linguistic capabilities of LLM agents in extracting, computing and engineering knowledge from unstructured, multi-modal & multi-lingual data sources.
Topics of interest include, but are not limited to:
We envision four types of submissions covering the entire spectrum of workshop topics (page limits do not include references and appendices):
In order to ease the reviewing process, authors may add the track they are submitting to directly in their titles, for instance: "Article Title [Industry]".
Workshop submissions must be self-contained and written in English. Note: the review process is double-blind; authors should take care to anonymise their submissions.
All papers should be submitted to https://openreview.net/TO-BE-DEFINED.
To be announced
NORA 2026 is co-located with ICLR 2026.
Rio de Janeiro, Brazil