POA Summit – Program Structure

The planned program structure is shown below. It is organized around four topic areas: Data Products & Agentic AI, Data Product Ecosystems, Data Product Governance, and Trends and Value in Industry.

Keynotes by Google, IBM, and Bosch as well as academia are planned. Further participants and presenters are expected from ABN AMRO, ASML, Dataminded, DEME Group, DPG Media, Entropy Data, ING, KPN, Mercedes-Benz, NS, NXP, Quantyca, Zeiss, and leading academic institutions such as JADS, Politecnico di Milano, Universität Stuttgart, University of Lausanne, and TU Berlin.

Day 1 – Thursday, September 18

13:00 – 13:15 Welcome by local chairs
13:15 – 15:30 Data Products & Agentic AI
Session chair: Bernhard Mitschang

Keynote: Building and executing a data strategy that enables data product creation for Agentic AI – Philipp Tenor, IBM Deutschland GmbH

Data Product MCP: Enable AI to answer any business question – Simon Harrer, Entropy Data

Data Products in federated data architectures with TEADAL – Pierluigi Plebani, Politecnico di Milano

Data Mesh Reference Architectures as a Foundation for Agentic AI – Geert Monsieur, JADS

15:30 – 16:00 Coffee break
16:00 – 18:15 Data Product Ecosystems
Session chair: Ulrich Teufel

Keynote: The Data Product Advantage: Driving Value Through Effective Governance and Management – Remzi Kurshumliu, Google EMEA

Data Products in Practice – Jonny Daenen, Dataminded

Business Data Cloud – an architecture deep dive – Björn Friedmann, SAP, CTO for Business Data Cloud, Head of BDC Technology Office

Next-Generation Data Management: From Silos to Borderless Data Product Ecosystems – Tobias Guggenberger, Fraunhofer ISST, TU Dortmund

18:15 – Get-together & networking

Day 2 – Friday, September 19

 9:30 – 10:45 Data Product Governance
Session chair: Pierluigi Plebani

Keynote: Data Product Journeys: Unpacking Pathways and Key Areas of Organizational Implementation – Christine Legner, Professor of Information Systems & Director Competence Center Corporate Data Quality (CC CDQ), University of Lausanne

Empowering the Data Teams at NS with Entrepreneurship through our Data Product Architecture – Yorick Fredrix & Thomas Hakkaart, NS

10:45 – 11:15 Coffee break
11:15 – 12:15 Data Product Governance (continued)
Session chair: Christoph Gröger

Data Domains and a Federated Data Governance Model – Jan Mark Pleijsant, ABN AMRO

LLM-powered accessibility and discoverability of Data Products – Matteo Falconi and Valeria Fortina, Politecnico di Milano

12:15 – 13:15 Lunch break
13:15 – 15:30 Trends and Value in Industry
Session chair: Stefan Driessen

Keynote: Data Products in Industry Practice: Strategy and Implementation at Bosch – Christoph Gröger, Rainer Metje, Bosch

Standardizing the process of ideating, designing and building data products at DPG Media – Sven van Egmond, DPG Media

Generative AI for Generating Data Products from Data – Patrick Vaudrevange, TWT

Community-based Federated Governance for Data Products at Mercedes-Benz – Ulrich Teufel, Mercedes-Benz

15:30 – 15:45 Coffee break
15:45 – 17:15 Workshops and Panel Discussion

Workshop 1: Value of Data – Alexander Röck, Bosch

Workshop 2: DPROD – data products standard using ontologies – Marcel Fröhlich, Eccenca, OMG

Workshop 3: Data Product Architecture – Jonny Daenen, Dataminded

Workshop 4: Data Product Platform Standardisation – Arif Wider, HTW Berlin & Simon Harrer, Entropy Data

 17:15 Farewell

Speakers of Day 1

Philipp Tenor – IBM Deutschland GmbH

Building and executing a data strategy that enables data product creation for Agentic AI

We are entering an era where AI systems are increasingly agentic: able to perceive, reason, and act autonomously. In this context, data, and in consequence data products, become more than just input; they become the fabric through which intelligent agents learn and evolve.
We will highlight the need for a forward-thinking data strategy that enables the creation of scalable, reusable data products tailored for Agentic AI.
We’ll explore key pillars such as data-driven use cases, data governance, platform & architecture, and the people and culture aspect. The session will also cover how to move from assessment to roadmap to execution of a data strategy for data products that support AI.
Drawing on real-world examples, attendees will gain practical insights and a strategic blueprint to turn their data ecosystems into a foundation for innovation in the era of Agentic AI.

Philipp is a data enthusiast and leads the Data Strategy and Data Governance area at IBM Consulting in the DACH region.
Together with his international teams, he runs complex data projects across various industries, ranging from drawing up and executing data strategies and defining and implementing federated data governance approaches to building modern data platforms that support innovative AI-driven use cases.

Simon Harrer – Co-Founder and CEO Entropy Data

Data Product MCP: Enable AI to answer any business question

The Model Context Protocol (MCP) empowers AI agents to autonomously discover, understand, and query data products in decentralized data architectures. By leveraging data contracts that define schema, semantics, and usage policies, MCP ensures responsible and governed data access.

This talk introduces how MCP enables AI agents to navigate complex data landscapes, automate access requests, and retrieve answers to business questions—securely and at scale—across platforms like Snowflake and Databricks.
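For illustration only (this sketch is not from the talk): one way to picture contract-governed access is an agent that checks a data contract before issuing a query. The contract structure, field names, and the check itself are hypothetical, loosely modeled on data-contract specs that describe schema, semantics, and usage policies.

```python
# Illustrative sketch: an MCP-style agent consults a data contract before
# querying a data product. All names here are hypothetical placeholders.

contract = {
    "dataProduct": "orders",
    "schema": {"order_id": "string", "amount": "number", "country": "string"},
    "usagePolicy": {"allowedPurposes": ["analytics"], "pii": False},
}

def agent_may_query(contract: dict, purpose: str, columns: list[str]) -> bool:
    """Return True if the requested purpose and columns are covered by the contract."""
    policy_ok = purpose in contract["usagePolicy"]["allowedPurposes"]
    schema_ok = all(col in contract["schema"] for col in columns)
    return policy_ok and schema_ok

print(agent_may_query(contract, "analytics", ["order_id", "amount"]))  # True
print(agent_may_query(contract, "marketing", ["order_id"]))            # False
```

In a real setup, the schema and policy would come from a published data contract rather than an inline dict, and access would be enforced by the platform, not the agent.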

Dr. Simon Harrer is Co-Founder and CEO of Entropy Data. A software engineer at heart, he’s passionate about connecting people, data, and AI. At the core of his work is the Data Mesh Manager, a data marketplace powered by data products and data contracts. He champions open standards as part of the Linux Foundation’s Bitol Technical Steering Committee, advancing the Open Data Contract Standard and the Open Data Product Standard. Simon also co-maintains the widely used open source tool Data Contract CLI, enabling automated enforcement of data contracts.

Pierluigi Plebani – Politecnico di Milano

Data Products in federated data architectures with TEADAL

Data is widely regarded as a valuable asset within an organisation. However, the value of an asset is closely tied to its usefulness to someone. Therefore, sharing data is essential, but it must be done in a way that complies with governance rules and legal norms. At the same time, controlled sharing of data is a challenging task as it requires that data be properly described, stored, and processed at both the provider and consumer side.
The EU Project TEADAL addresses these challenges by providing a toolset that combines principles of service orientation with the concept of data products. This approach defines a Federated Data Product, a component that minimises the effort in inter-organisational data sharing. A standardised service contract describes both the data offered and the data governance policies that define methods of access, use, and storage. This contract is used by cloud-native technologies to enable data sharing while enforcing the defined policies.

Pierluigi Plebani is associate professor at Politecnico di Milano where he belongs to the RAISE group (Research on Advanced Information Systems Engineering). His primary research interests span the broad field of Information Systems Engineering, including Service-Oriented Computing (SOC), Business Process Management (BPM), and Blockchain. He explores these topics across various application domains, with a particular focus on healthcare and Industry 4.0.

Geert Monsieur – Jheronimus Academy of Data Science (JADS) / Eindhoven University of Technology (TU/e)

Data Mesh Reference Architectures as a Foundation for Agentic AI

This presentation examines three reference architectures for data mesh that highlight its core dimensions: the organization of capabilities and roles, the development perspective, and the runtime environment. Rather than offering competing alternatives, these architectures provide complementary views on how to operationalize data mesh principles and prepare foundational data infrastructure for agentic AI.

Geert Monsieur is an assistant professor in data engineering with a strong focus on bridging research and industry. He teaches databases, software engineering, and data science, and has led and contributed to many research projects in areas such as smart cities and data-driven innovation. His current work centers on data mesh and AI-driven data management, helping organizations explore how architectural approaches can make data a reliable and actionable asset.

Remzi Kurshumliu – Google EMEA

The Data Product Advantage: Driving Value Through Effective Governance and Management

The Data Product Advantage is a powerful framework for transforming your data function from a cost center into a strategic value engine. We will move past the hype and dive into the two foundational pillars that separate successful data-driven companies from the rest: disciplined management and enabling governance.

Remzi Kurshumliu is a strategic Data, Analytics, and AI leader with over two decades of experience architecting the future of enterprise intelligence. As a Data & Analytics Solutions Lead at Google, he partners with major European companies to move beyond traditional analytics and embed intelligent systems into their core operations. His focus is on operationalizing AI/ML, from predictive modeling to Generative AI, to solve high-value business problems and unlock transformative growth.

Jonny Daenen – Dataminded

Talk: Data products in practice
Workshop: Data Product Architecture

Talk: Data products in practice
Data products promise a lot, but what does it actually take to make them work inside an organization? In this talk, we share a practical approach to bridging business and technical roles around data. For business teams, we show how data products can be managed and scaled without getting lost in complexity. For technical teams, we translate the idea of data products into concrete tools and workflows. Finally, we connect these perspectives to the bigger picture: how a well-designed data platform can help your organization create and maintain data products that stay useful, reliable, and “healthy” over time.

Workshop: Data Product Architecture
In this demonstration, we show how to build a real data product from scratch, using a production-like environment based on an actual client setup. This unique opportunity enables participants to witness how data products are brought to life in a real-world context.
We will walk through the lifecycle of a data product, including:

  • Creating and structuring a new data product
  • Writing and organizing the code
  • Deploying and running the product in an operational environment
  • Managing access between data products

Throughout the session, we will make each component and process tangible using appropriate technologies. By the end of the workshop, attendees will have a clear, practical understanding of what it takes to build, run, and manage high-quality data products at scale.
Note: This workshop has a primarily technical angle and is not hands-on.
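The lifecycle steps above can be made concrete with a small, purely hypothetical manifest (not the workshop's actual setup): a declarative description of a data product covering its structure, code entry point, deployment, and access grants.

```python
# Hypothetical minimal data product manifest, sketching the lifecycle steps
# listed above. All field names are illustrative, not from the workshop.

manifest = {
    "name": "customer-orders",
    "owner": "sales-analytics-team",
    "entrypoint": "pipeline/transform.py",              # where the product's code lives
    "deployment": {"schedule": "daily", "target": "warehouse"},
    "access": {"granted_to": ["marketing-insights"]},   # downstream data products
}

def grant_access(manifest: dict, consumer: str) -> None:
    """Record that another data product may read this product's output."""
    granted = manifest["access"]["granted_to"]
    if consumer not in granted:
        granted.append(consumer)

grant_access(manifest, "finance-reporting")
print(manifest["access"]["granted_to"])
```

Real platforms typically express such manifests in YAML and enforce the access grants through the platform's own authorization layer.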

Bio
Jonny Daenen is Knowledge Lead at Dataminded and a Senior Data Engineer supporting organizations across industries. He helps them deliver value from data at scale by creating environments where data products are easy to build, run, and maintain. By creating mental models, he bridges the gap between business and technology, making abstract concepts tangible for all roles. His mission is simple: to raise the bar for what great data products look like.

Björn Friedmann – SAP, CTO for Business Data Cloud, Head of BDC Technology Office

Business Data Cloud – an architecture deep dive

SAP Business Data Cloud is a fully managed SaaS solution that unifies and governs all SAP data and seamlessly connects with third-party data—giving line-of-business leaders context to make even more impactful decisions.
In this session, Bjoern Friedmann (CTO Business Data Cloud) will give an architecture deep dive, share insights into how Data Products are produced, and explain why Data Products help in creating next-gen applications.

Björn Friedmann joined SAP in 2008, starting as a kernel developer in the SAP NetWeaver stack. In 2010 he joined the SAP HANA Core team and became one of the founding members of HANA XS, which introduced application server capabilities into HANA.
After holding several architect positions in the HANA area he joined the Technology Office as the Chief Architect for SAP Datasphere.
In 2024 he helped to shape the Business Data Cloud architecture end-to-end as a Chief Architect and took over the leadership of the BDC Technology Office. Today he is the CTO for Business Data Cloud and Head of BDC Technology Office.

Tobias Guggenberger – Fraunhofer ISST, TU Dortmund

Next-Generation Data Management: From Silos to Borderless Data Product Ecosystems

Data management is shifting from an internal support function to a borderless value-creation system. Regulation pushes more data outward with reusability and standards, while AI (LLMs/agents) demands stable, semantically rich interfaces—making the old “inside vs. outside” split increasingly untenable. Organizations may face emerging challenges in aligning how data is produced for internal efficiency with how it’s consumed externally for monetization, compliance, and contracting; pipeline-era practices, uneven interfaces, and non-executable policies can introduce cost and friction.

Rather than a grand blueprint, the way forward rests on a few guiding principles: think in products, design for reuse across boundaries, make governance intrinsic to how data moves, and assume AI is a first-class consumer. Crucially, this implies a shift from internal governance to external ecosystem orchestration—less about tighter control, more about coordinating value: shared semantics, portable trust, and lightweight contracts that travel between organizations. With these as north stars—clear accountability where it matters, connections that travel well, and quiet guardrails—data becomes dependable value. The result is faster AI delivery, lower integration drag, confident compliance, and room for new revenue—grown step by step as the organization learns.

Dr.-Ing. Tobias Guggenberger is a Postdoctoral researcher and Group Lead for Data Space Concepts at Fraunhofer ISST, serving as Deputy Coordinator of the Data Spaces Support Centre. His work focuses on European data and AI infrastructure, sovereign data sharing, AI value chains, and trusted data transactions. He bridges research, policy, and industry, leading cross-border consortia and complex workstreams. Tobias has authored peer-reviewed publications and whitepapers and contributed to major conferences. Recent work centers on AI-ready organizations, federated governance for GenAI, data valuation and value-based management, and orchestration across AI factories and data spaces. He designs governance frameworks that align incentives, standards (CEN/CENELEC, ISO), and technology, translating regulation into implementable practice through guidelines and testbeds. Tobias delivers measurable impact in supply chains, product development, and public-private ecosystems. Known for a pragmatic, market-led approach, he emphasizes interoperability, standards, and outcomes.

Speakers of Day 2

Christine Legner, Professor of Information Systems & Director Competence Center Corporate Data Quality (CC CDQ), University of Lausanne

Data Product Journeys: Unpacking Pathways and Key Areas of Organizational Implementation

Data products represent a paradigm shift in how data is managed and used within organizations. Although the concept has gained considerable traction in both research and practice, implementation varies across organizations, and many struggle to move beyond the foundational stages. To advance understanding of this phenomenon, we conceptualize data product journeys as the organizational processes through which data products are initiated, developed, and embedded into practice.
Drawing on multiple case studies, we examine the triggers, objectives, and work packages that shape data product journeys. Our analysis shows that these journeys unfold along multiple pathways, which differ in their starting conditions, scope, and sequence. Across these pathways, we identify five key areas that require systematic attention: (1) foundations, (2) platforms, (3) data product life-cycle, (4) data product and portfolio management, and (5) organization and culture.
Conceptually, our work adds a process perspective to debates on data products in organizations. Practically, it offers guidance for data leaders and managers seeking to navigate the complexities of embedding data products into their organizations.

Yorick Fredrix & Thomas Hakkaart – NS

Empowering the Data Teams at NS with Entrepreneurship through our Data Product Architecture

We are implementing a Data Product Architecture at NS to promote entrepreneurship within our data teams, enabling them to develop innovative solutions that contribute to better train operations and an improved passenger experience.

Jan Mark Pleijsant – ABN AMRO

Data Domains and a Federated Data Governance Model

Discover how to design Data Domains within a Federated Data Governance Model (FDGM) to cut complexity, boost compliance, and turn data into real business value.

At the heart of FDGM are Data Domains: business-aligned organisational units that manage data about a logical business topic. You’ll learn practical approaches to designing these domains using subject area modelling, stakeholder engagement, and AI-driven insights, and see how they are organised to manage data and build data products that bring value to the enterprise.

Jan Mark Pleijsant is Senior Data Strategy & Governance Advisor at the Central Data Office of ABN AMRO Bank in the Netherlands, which he joined over 20 years ago. He has played an important role in shaping the bank’s data management capabilities, with a special focus on data governance and business data modelling. During the Federated Data Governance Model implementation, he designed the Data Domains for the bank.

Matteo Falconi & Valeria Maria Fortina – Politecnico di Milano

LLM-powered accessibility and discoverability of Data Products

Accessibility and discoverability are essential dimensions in the management of data products, as they enable control and access for authorized consumers and facilitate the identification and effective use of available resources. In practice, however, these capabilities are often constrained by technological and organizational limitations, which reduce their overall effectiveness. At the same time, the recent evolution of Large Language Models (LLMs) has shown increasing potential in supporting tasks related to data discovery and code generation. This work aims to investigate the use of LLMs as a means to mitigate these limitations, with the goal of enhancing the accessibility and discoverability of data products.

Christoph Gröger & Rainer Metje – Bosch

Data Products in Industry Practice: Strategy and Implementation at Bosch

This talk gives an overview of concepts and implementations of data products at Bosch. It details both data strategy aspects and IT delivery aspects of data products in a very large global technology and service company.

Dr. Christoph Gröger is Chief Expert for Data & AI in Bosch’s global data strategy team. Rainer Metje is Vice President for Data & Integration Platforms at Bosch Digital.

Sven van Egmond, DPG Media

Standardizing the process of ideating, designing and building data products at DPG Media

In his talk, Sven will focus on standardising the process of ideation, creation, and management of data products.

Sven van Egmond serves as Head of Data at DPG Media and is responsible for managing the Data platform, Tracking platform, Data architecture, and Data governance. DPG is a first mover in the adoption of data mesh.

Patrick Vaudrevange – TWT GmbH Science & Innovation

Generative AI for generating data products from data

This talk explores how generative AI – especially large language models – can be leveraged to automatically generate data products by understanding raw data, metadata, and user interactions with data. We’ll cover the underlying technologies, practical applications, and future directions.

Dr. habil. Patrick Vaudrevange holds a doctorate and habilitation in physics and is an expert in the field of data science and AI. After research stays at LMU Munich and the DESY Research Center in Hamburg, he headed a junior research group at the Technical University of Munich from 2016 and wrote his habilitation in the field of string theory and machine learning. Since 2021, he has been working at TWT Science & Innovation GmbH and now holds the position of “Senior Principal Data Analytics & AI”. He is involved in the development of innovative data science and AI methods for the automotive industry.

Ulrich Teufel – Mercedes-Benz Group AG

Community-based Federated Governance for Data Products at Mercedes-Benz

Making plans to create a data mesh within a big enterprise and to share valuable information via data products makes sense. Absolutely! But how does it feel when you actually implement it? What does it look like in reality, and how do you govern it?

I want to give insights into our approach and share my experience with this topic.

Ulrich Teufel is a seasoned professional with over two decades of experience in the IT and business sectors. From 1999 to 2019, he made significant contributions in the Aftersales domain, excelling in software development, data warehousing, data modeling, and project management. His entrepreneurial spirit led him to spearhead startup projects and data platform initiatives, culminating in the creation of advanced data analytics platforms.

Since 2019, Ulrich has been at the forefront of IT Enterprise Architecture, focusing on data architecture, data products, and semantic modeling. His innovative approach and deep understanding of data have been instrumental in driving forward-thinking solutions and strategies. He has also played a key role in building a robust Enterprise Architecture community and fostering a collaborative work model. His leadership in planning and executing an annual multi-day internal conference on digital transformation has been pivotal in driving the company’s innovation agenda.

Alexander Röck, Bosch

Value of Data

Have you ever asked yourself whether data is worth something or is just a cost? And if you do assign it value, how much is it worth? In this entertaining workshop, you’ll not only get some food for thought but also grapple with these questions yourself.

Dr. Alexander Röck has a long history at Robert Bosch GmbH. After spending many years developing components for combustion engines, he has been working in the field of digitalization since 2016. He and his team are currently responsible for Bosch’s data strategy and fundamental data management architectures.

Marcel Fröhlich – eccenca GmbH & OMG Enterprise Knowledge Graph Platform Taskforce Chair (DPROD standard)

DPROD – data products standard using ontologies

Machine-readable ontologies and descriptions of data products provide great opportunities for data product-oriented architectures to leverage task-agnostic knowledge bases for automation. Ontology-based knowledge graphs are the ideal tool for creating an authoritative, integrated representation of data meaning, data connections, and data logic. These in turn provide the guidance required for both human and AI data usage.
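As a purely illustrative sketch (not the workshop's material, and not the standard's exact vocabulary): a machine-readable data product description, loosely inspired by DPROD's DCAT-based approach, lets tooling discover and act on data products automatically. The property names below are simplified placeholders.

```python
import json

# Illustrative machine-readable data product description. Property names are
# simplified placeholders, loosely inspired by DPROD's DCAT-based modeling;
# a real description would use the standard's RDF vocabulary.

description = {
    "@type": "DataProduct",
    "title": "Customer 360",
    "owner": "crm-domain-team",
    "outputPorts": [
        {"title": "customer-profiles", "format": "parquet",
         "semantics": "schema:Person"},  # link into a shared ontology
    ],
}

def find_ports_by_format(desc: dict, fmt: str) -> list[str]:
    """Automation example: discover output ports by serialization format."""
    return [p["title"] for p in desc["outputPorts"] if p["format"] == fmt]

print(json.dumps(find_ports_by_format(description, "parquet")))  # ["customer-profiles"]
```

The point of an ontology-backed description is exactly this kind of task-agnostic lookup: the same metadata serves catalogs, humans, and AI agents alike.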

Marcel Fröhlich is chairing the OMG Enterprise Knowledge Graph Platform Task Force and is an editor of the OMG DPROD data product ontology standard. He is part of the eccenca management team, leading the customer engagement practice. Eccenca is a software vendor focused on enterprise knowledge graph technology.

Arif Wider – HTW Berlin

Data Product Platform Standardisation

Data product ecosystems (intra- or inter-organizational) depend on all kinds of services to work, e.g. a data product catalog or continuous delivery infrastructure for data products. For the intra-organizational case, these services are usually provided by an organization-wide data product platform. By now, many companies have either built their own platform for this or use one of the few vendor-provided offerings (e.g. Nextdata OS, Data Mesh Manager). In order to align these independent efforts and to help with interoperability, especially for the inter-organizational case, we want to standardise the most important capabilities of such a data product platform. This workshop is only the first step towards such a standardisation and will mostly aim for (1) a collection of common platform capabilities to be standardised, (2) a clarification of the relation to other standards, e.g. on data products and data contracts, and (3) a selection of the right forum and/or committee to further develop such a standard.

Arif Wider is professor of software engineering at HTW Berlin and a fellow technology consultant with Thoughtworks Germany, where he served as Head of Data & AI before moving back to academia. Across research, teaching, and consulting, he is passionate about distilling and distributing great ideas and concepts that emerge in the software engineering community. He frequently speaks and writes about large-scale enterprise data architectures such as Data Mesh, on which he co-authored dozens of articles as well as an O’Reilly-published booklet.