Trust, Autonomy and Accountability in PKG-Based Agentic AI

TAAPAAI Workshop

2nd Edition - Workshop at ESWC 2026

May 10-14, 2026 | Dubrovnik, Croatia

About the Workshop

Agentic AI systems, i.e., autonomous agents capable of planning, reasoning, and acting through tools, are rapidly shifting from labs into core enterprise and consumer products, leading to an era of operational AI. This transition significantly magnifies deep-seated concerns regarding Trust, Autonomy, and Accountability (TAA), as demonstrated by incidents ranging from internal data leaks to legal liability for misinformation. These failures underscore the urgent need for transparent, explainable, and governable agentic pipelines. Personal Knowledge Graphs (PKGs) offer a principled and verifiable solution. By modelling a user's personal context, including policies, constraints, and data provenance, using explicit, machine-readable semantics, PKGs can operationalize TAA. They enable agents to achieve stronger alignment with user intent, provide clear, grounded explanations, and enforce accountability through traceable, auditable links between actions and inferences.

The second edition of the TAAPAAI workshop at ESWC 2026 explores this crucial intersection of Agentic AI, TAA, and Knowledge Graphs. We will convene Semantic Web/KG researchers, AI safety scholars, industry practitioners, and policy experts. The day will feature an engaging, outcome-oriented format: keynote and lightning-talk provocations from academia and industry, an interactive panel with live polling to facilitate real-time engagement, and a roadmapping session in which participants will co-create a vision document and manifesto for TAA issues in PKG-based agentic AI, providing a clear path forward for the community.

Key Topics

  • Trust
  • Autonomy
  • Sovereignty
  • Accountability
  • Ethics
  • Diversity
  • PKGs
  • Agentic AI

Motivation & Objectives

Agentic AI systems—autonomous or semi-autonomous agents that can plan, reason, and act through tools and user interfaces—are rapidly moving from labs into everyday products. In the past year, major players have converged on agent-first roadmaps: OpenAI introduced Operator and followed with ChatGPT Atlas, a native browser with an embedded agent mode; Salesforce launched Agentforce 360 for enterprise-grade agent orchestration; and Google released SIMA 2, a Gemini-powered embodied agent for interactive environments, alongside Gemini 3 and Gemini Agent that coordinate multi-step tasks across Gmail, Calendar, live browsing, and more. Together, these advances signal not just improved assistants but operational AI that executes goals on our behalf across the web and enterprise systems.

Yet this shift magnifies long-standing concerns around trust, autonomy, and accountability. Recent incidents illustrate how fragile trust can be when agents act with imperfect oversight. For example, Samsung's internal data leak via ChatGPT sparked an enterprise ban and policy reset; an OpenAI bug exposed some users' chat titles and limited billing details; a Canadian tribunal held Air Canada liable for a chatbot's misinformation; and prompt-injection exploits have shown how chatbots can be manipulated to make absurd, brand-damaging commitments. These failures are not curiosities; they underscore the need for transparent knowledge grounding, verifiable provenance, and explicit user-governed policies in agentic pipelines.

Personal Knowledge Graphs (PKGs) offer a principled answer. PKGs represent a user's personal context, including preferences, constraints, roles, data entitlements, and task histories, as explicit, machine-readable semantics, enabling agents to: (i) align decisions with user intent; (ii) provide explanations grounded in named entities, relations, policies, and provenance; and (iii) enforce accountability via traceable, signed, and auditable links between inputs, inferences, and actions. For a community centred on Knowledge Graphs and the Semantic Web, PKGs are the obvious substrate for operationalizing Trust, Autonomy, and Accountability in agentic systems: they turn opaque behaviours into inspectable, governable workflows, and allow technologies associated with linked data, reasoning, and provenance to support human agency in the agent era.
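To make the accountability mechanism concrete, the sketch below is an illustrative toy only: it models a PKG fragment as plain (subject, predicate, object) triples with hypothetical terms such as `ex:policy1` and `ex:governedBy` (a deployed PKG would use RDF with standard vocabularies such as PROV-O for provenance and a policy language for usage constraints). It shows how explicit triples can link an agent action to the data it used and the user policy that governed it, making the action auditable:

```python
# Illustrative sketch only: a toy PKG as a set of (subject, predicate, object)
# triples. All term names (ex:policy1, ex:governedBy, ...) are hypothetical.

pkg = {
    # A user-governed policy over calendar data.
    ("ex:policy1", "rdf:type", "ex:UsagePolicy"),
    ("ex:policy1", "ex:prohibits", "ex:ExternalSharing"),
    ("ex:policy1", "ex:appliesTo", "ex:calendarData"),
    # An agent action, traceably linked to the data it used and the
    # policy that governed it (PROV-style provenance links).
    ("ex:action42", "rdf:type", "prov:Activity"),
    ("ex:action42", "prov:used", "ex:calendarData"),
    ("ex:action42", "ex:governedBy", "ex:policy1"),
}

def audit(action):
    """Return the policies that governed a given agent action."""
    return sorted(o for s, p, o in pkg if s == action and p == "ex:governedBy")

print(audit("ex:action42"))  # ['ex:policy1']
```

Here the audit is a simple lookup over in-memory triples; in a real PKG-based pipeline the same question would be a SPARQL query over signed, timestamped provenance records.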

Building on the success of the first edition at ISWC 2025 (run in a Dagstuhl-style format and attracting a focused group of ~20 researchers and practitioners), this second edition at ESWC 2026 transitions to a standard interactive workshop to broaden participation and crystallize a research agenda. We wish to examine these aspects further through the following topics:

Trust

What are the key requirements for PKG-based agentic AI to enhance human and institutional trust?

Autonomy

How can individuals retain meaningful autonomy when interacting with or delegating to PKG-based agentic AI systems?

Accountability

How can PKG-based agentic AI systems be made accountable for the decisions they inform or make?

Ethical Diversity

How can such systems ensure the surfacing of diverse, ethically informed perspectives rather than reinforcing biased views?

Expected Outcomes

This workshop will convene Semantic Web/KG researchers, AI safety scholars, industry practitioners, and policy experts. The day will feature an engaging, outcome-oriented format: keynote and lightning-talk provocations from academia and industry, an interactive panel with live polling to facilitate real-time engagement, and a roadmapping session in which participants will co-create a vision document and manifesto for TAA issues in PKG-based agentic AI, providing a clear path forward for the community.

All workshop materials will be openly shared on this website to support transparency and community engagement.

Call for Papers

Topics of Interest

We invite submissions addressing one or more of the following themes (non-exhaustive list):

  • Trust and transparency in PKG-based agentic systems
  • Accountability frameworks for autonomous agents
  • Ethical, compliance and legal considerations in personalized AI
  • PKG-driven reasoning and decision-making for personal agents
  • Explainability and interpretability in agentic AI
  • Human-agent collaboration and oversight
  • Security, privacy, and data governance in PKGs
  • Evaluation metrics for trust and autonomy in AI agents

Submission Types

  • Full papers: Up to 12 pages (excluding references)
  • Position papers: Up to 6 pages (excluding references)

Submissions must be either in PDF or in HTML, formatted in the style of the Springer Publications format for Lecture Notes in Computer Science (LNCS). For details on the LNCS style, see Springer's Author Instructions; for HTML submission guidance, see the HTML submission guide.

Papers must be submitted via EasyChair: https://easychair.org/conferences?conf=taapaai26

Important Dates

All deadlines are 23:59 Anywhere on Earth (UTC-12).

  • Submission Deadline: March 3, 2026
  • Notification of Acceptance: March 31, 2026
  • Camera-ready Deadline: April 15, 2026

Proceedings

Accepted papers will be published in the CEUR Workshop Proceedings (indexed in DBLP).

Workshop Format

The workshop will feature:

  • Paper presentations
  • Interactive discussions
  • A panel on trust and accountability in PKG-based AI

For questions about submissions, please contact us at taapaai26@easychair.org.

Organizing Committee

John Domingue

The Open University, UK

John Domingue is Professor of Computer Science at the Knowledge Media Institute (KMi), the Open University's technology research and innovation centre. He also serves as chair of the ESWC conference series Steering Committee and is a member of SWSA. Having served as KMi Director from 2015 to 2022, Prof. Domingue has contributed 250 refereed articles in fields such as semantics, AI, the Web, distributed ledgers, and eLearning. From 2017 to 2021, he led the first of five themes, on University Learners, for the £40M Institute of Coding, an initiative aimed at increasing the number and diversity of computing graduates in the UK while strengthening the connection between university teaching and corporate training. Since the beginning of 2023, he has been at the forefront of examining the impact of Generative AI on higher education, including a UKRI-funded project (SAGE-RAI) with the Open Data Institute. John has delivered numerous talks on his work, including appearances at the Royal Institution in 2018 and TEDx, and was featured in THE Campus on interdisciplinary research teams. In 2019, he was inducted as a Fellow of the British Blockchain Association, and in 2020, he became an Honorary Professor at Amity University.

Aidan Hogan

University of Chile

Aidan Hogan is an Associate Professor and Director of the Department of Computer Science, University of Chile, and an Associate Researcher and Subdirector of the Millennium Institute for Foundational Research on Data (IMFD). Aidan’s research interests relate primarily to the Semantic Web, Graph Databases, Knowledge Graphs, Information Extraction and Reasoning; he has published over one hundred peer-reviewed works on these topics.

Sabrina Kirrane

Vienna University of Economics and Business, Austria

Sabrina Kirrane is an Associate Professor at the Vienna University of Economics and Business Institute for Complex Networks and a member of the Competence Center for Applied AI and Scientific Computing. Sabrina’s research interests include Security, Privacy, and Policy aspects of the Next Generation Internet (NGI), Distributed and Decentralised Systems, Big Data and Data Science, with a particular focus on policy representation and reasoning (e.g., access constraints, usage policies, regulatory obligations, societal norms, business processes), and the development of transparency and trust techniques.

Oshani Seneviratne

Rensselaer Polytechnic Institute, USA

Oshani Seneviratne is an Assistant Professor of Computer Science at Rensselaer Polytechnic Institute, where she leads the BRAINS Lab (Bridging Resilient, Accountable, Intelligent Networked Systems). Her research focuses on decentralized systems, including web technologies, blockchain, and decentralized learning, with applications in health informatics and decentralized finance.