ACCELERATING DATA PRODUCT DEVELOPMENT -- SPEED UP THE TIME TO VALUE
When we talk about time to value in data-powered organizations, we often jump too quickly to dashboards, AI models, or the latest agentic AI solutions. But time to value does not start with the use case. It starts much earlier - with the creation of the data product that enables the use case in the first place. Time to value, in this context, means the time it takes to define, build, deploy, and operate a data product together with its consuming use case - whether that use case is a dashboard, an analytical application, or an agentic AI solution. If we want to accelerate value creation, we must minimize the time spent on data product development so teams can focus on what really matters: solving the business problem.
SECURE AI - AVOID GETTING FAMOUS ON DEFCON
At this year’s DefCon, the world's largest security conference, one theme dominated the conversation: AI systems are already under attack. From Agentic AI agents exposing sensitive data to misconfigured Model Context Protocol (MCP) servers granting unauthorized access, to LLM-powered apps tricked by prompt injections - DefCon made it brutally clear that vulnerabilities in AI aren't theoretical. They're here, and they're being actively exploited. Does that mean you should halt your Agentic AI innovation? Absolutely not. But it is a powerful call to action on the awareness of the multiple threats in Agentic AI ecosystems. To truly unlock the power of data-powered innovation and agentic AI, organizations must weave DevSecOps principles into the Golden Path - turning security from a burdensome afterthought into a seamless, automated foundation for safe, rapid innovation.
CHASING THE HOLY GRAIL (AND FORGETTING EVERYTHING FROM THE PAST)
Sometimes I have the feeling that Iām not only watching Monty Python and the Holy Grail again, but living it through. Oh, I should have written it Monty Python and the Holy GrAIl … When companies (both on the vendor and on the consumer side) are desperately chasing the AI game, I wonder if they have forgot the learnings and best practices from decades of system and application development. We knew how to create secure (not opening security holes as big as a barn door) and scalable applications as part of a bigger system.
SECURE THE AI STACK THROUGH PLATFORM ENGINEERING
As data and AI-driven organizations push the boundaries of innovation, platform engineering has emerged as a key enabler of speed, scale, and reliability. Whether you’re deploying microservices, data products, or advanced AI agents, the promise of self-service developer platforms is to make innovation repeatable and secure (see the last post for reference). But speed without control is risky. The recent Docker security bulletin exposed a significant threat: thousands of unprotected MCP (Model Context Protocol) servers running in production across the internet. These insecure endpoints provide attackers with direct access to AI model internals,posing risks from model theft to poisoning attacks. But it’s not only the MCP servers that pose a threat. When OpenAI announced its ChatGPT Agents, Sam Altman said:
UNLOCKING INNOVATION WITH PLATFORM ENGINEERING
Data, AI, and innovation are essential to staying competitive. Organizations need to accelerate development while maintaining quality to remain ahead. Platform engineering, traditionally tied to cloud-native environments and microservices, is now crucial for enabling data mesh architectures, transforming how data products and AI applications are built. Data mesh allows teams to efficiently develop AI models, AI agents, and analytical tools, such as dashboards. Platform engineering provides the infrastructure and standardized processes ā known as the Golden Path ā that streamline development, reduce time to market, and improve product quality.
SHAKING THE KINDER EGG - OR: METADATA OF DATA PRODUCTS?
Shaking the Kinder egg - or: metadata of Data Products Who hasn’t done it? shaking the Kinder eggs to “guess” what’s inside and raise the chance of getting one of the figures. We even put them on the vegetable scale to increase the chance (many many years back the figures had higher weight than the assemble stuff). But in the end, it was all guessing. When we deal with data, we don’t want to guess. We want to get the clearest view on “what’s inside” as possible. And with the Data Mesh approach, in with we create the Data Products to reuse existing data, this is more important than ever. But how do we do that?
DATA MESH - NOT SUCH A NEW CONCEPT AFTER ALL?
Not such a new concept after all? I don’t think that I have to tell much about the recent developments on Data Mesh (see this and that), the world doesn’t need another “Data Mesh introduction” article just to tell these things again. But when we dig deeper into that topic and look on the Data Product there might be some similarities ringing a bell. We’ll come to this later. But how is a Data Product defined (by Zhamak)?
FIRST POST
Hello World package main import "fmt" func main() { fmt.Println("hello world") }