Chasing the Holy Grail (And Forgetting Everything From the Past)

Sometimes I have the feeling that I’m not only watching Monty Python and the Holy Grail again, but living through it. Oh, I should have written that as Monty Python and the Holy GrAIl …

When companies (on both the vendor and the consumer side) desperately chase the AI game, I wonder whether they have forgotten the lessons and best practices from decades of system and application development. We knew how to create secure applications (ones that don’t open security holes as big as a barn door) that scale as part of a bigger system.

But when I read HashiCorp’s announcement introducing MCP servers for Vault and Terraform, my first reaction was: “What could possibly go wrong???”

[Image: HashiCorp MCP server announcement]

If you look at the MCP specification and, more importantly, at how many MCP servers are implemented without proper security, this can only lead to a big disaster. I’m not saying there’s no value in using AI (including GenAI and agentic AI), but when, as the MIT report found, 95% of companies struggle to get a positive ROI, you should think about the following:

  1. What’s really the business process I’d like to improve? What does it currently look like, and what should it look like? (And stating “a magic AI wand does a lot of magic to figure out everything” is not a proper statement.) Start by thinking about how to improve the business flow, then about which technologies you already have to leverage, and only then about how to use AI to support it in a SECURE WAY.
  2. To what extent do you really need to apply (Agentic/Gen)AI? Quite often a simple API call can do the magic (oh wait, what again were all the MCP and “Agent Tools”? Right, API calls … see the sketch right after this list).
  3. Security is not optional. All your thinking should follow a zero-trust and security-by-design mindset.
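
To make the second point concrete: strip away the protocol framing, and a typical “agent tool” boils down to a described, schema-validated API call. The following manifest is purely illustrative; the fields and names are hypothetical and not taken from the MCP specification:

# Illustrative only: a hypothetical tool manifest, NOT the MCP wire format.
# The point: behind the “tool” sits a plain HTTP call you already know how to secure.
tools:
  - name: get_customer                  # hypothetical tool name
    description: Look up a customer record by ID
    input_schema:
      type: object
      properties:
        customer_id: { type: string }
      required: [customer_id]
    backend:                            # what actually runs when an agent calls the tool
      method: GET
      url: https://api.example.internal/customers/{customer_id}
      auth: oauth2-client-credentials   # the same controls as for any other API client

Which is exactly why the decades-old best practices for securing APIs apply here, too.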

Another important aspect of the MIT report lies in how AI is adopted. Building every AI tool yourself is probably fine, but only if you’re Google, Netflix, AWS, or one of the other digital- and AI-native companies. For the majority of companies, it’s better to purchase AI tools, partner with a system integrator, and together build a system for the repeatable creation of secure, AI-augmented applications. And the best way to do that? Follow the Golden Path.

With that, IT departments and experts in AI, your enterprise landscape, and security can lay out a simple path to follow when creating AI-augmented applications. They make sure the latest SDKs and frameworks are used in the application templates, and that, through properly set up CI/CD pipelines, all linting and code-design rules are applied. On top of that, scanning for security vulnerabilities happens automatically in that step, and the results are reported back to the development team. This ensures that applications are secure from the start and that any vulnerabilities are identified and addressed early in the development process.
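
To make this tangible, here is a minimal sketch of a CI pipeline a platform team could bake into every application template. It assumes GitHub Actions and Trivy purely for illustration (and the make lint target is hypothetical); substitute whatever CI system and scanners your organization has standardized on:

# Sketch of a template-provided pipeline: lint rules plus automatic vulnerability scanning.
name: secure-ci
on: [push, pull_request]

jobs:
  quality-and-security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Linting and code-design rules defined once by the platform team
      - name: Lint
        run: make lint

      # Scan for known vulnerabilities on every commit and report back
      - name: Security scan
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: fs
          severity: CRITICAL,HIGH
          exit-code: '1'    # fail the build so findings are addressed early

Because the pipeline ships with the template, no team has to remember to wire this up themselves.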

Another aspect of this adoption is that certain AI tools (MCP servers, language models, agent tools, etc.) should be accessible only to authorized applications and systems. One cornerstone of secure application development is the need-to-know principle: only those who need a tool should have access to it. This can be achieved through proper access-control mechanisms such as role-based access control (RBAC) or attribute-based access control (ABAC). With that in place, your IT might not give you access to everything, but they will help you get what you need. That, however, means you need to articulate what you need and why. Remember my first point about the business process? You need to define the business process and the requirements for the AI application. This will help you identify the tools and resources that are needed and ensure that they are used securely.
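
As a sketch of what need-to-know could look like in front of AI tooling, imagine a policy on an internal gateway that brokers all access to MCP servers and models. The format and every field name below are hypothetical, not taken from any product:

# Hypothetical gateway policy -- illustrative schema, not a real product’s format.
# Need-to-know: a tool is reachable only by the callers listed for it.
policies:
  - tool: vault-mcp-server
    allowed_callers:
      - service: infra-automation      # RBAC: grant by role/service identity
    conditions:
      environment: production          # ABAC: grant constrained by attributes
      mfa_required: true

  - tool: llm-api
    allowed_callers:
      - group: data-scientists
    conditions:
      data_classification: internal    # keep confidential data away from the model

Enforcing this at runtime is one half of the story; the other half is not handing developers the wrong building blocks in the first place.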

A practical way to implement this is through Backstage scaffolder templates. For example, when a developer initiates the creation of a new AI-augmented service, the template could present options like “Select Cloud Environment” (e.g., AWS, Azure, GCP, On-Prem) and “Select APIs” (e.g., internal Customer API, Finance API, LLM API). Based on the user’s role, the scaffolder would automatically filter available choices:

  • A Data Scientist might only see LLM APIs and staging cloud environments.
  • A Backend Engineer could access transactional APIs and multiple deployment environments.
  • A Contractor may only see sandbox APIs with limited scope.

Behind the scenes, the scaffolder injects the correct SDKs, CI/CD configuration, and role-aligned access controls (RBAC/ABAC policies, API keys, vault integration). This way, developers can only build what they are entitled to, while IT ensures compliance and security-by-design without slowing innovation.

Here’s a simplified YAML snippet to illustrate:

apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: secure-ai-service
  title: Secure AI Service
  description: Scaffold a secure AI-augmented service with role-based access controls
spec:
  owner: platform-team
  type: service

  parameters:
    - title: Cloud Environment
      name: cloud
      description: Select the target cloud environment
      type: string
      ui:select:
        options:
          - label: AWS
            value: aws
            if: ${{ user.groups.includes('engineers') }}
          - label: Azure
            value: azure
            if: ${{ user.groups.includes('engineers') || user.groups.includes('data-scientists') }}
          - label: GCP
            value: gcp
            if: ${{ user.groups.includes('engineers') }}
          - label: Azure ML
            value: azure-ml
            if: ${{ user.groups.includes('data-scientists') }}
          - label: GCP Vertex AI
            value: gcp-vertex
            if: ${{ user.groups.includes('data-scientists') }}
          - label: Sandbox
            value: sandbox
            if: ${{ user.groups.includes('contractors') }}

    - title: APIs
      name: apis
      description: Choose APIs this service should consume
      type: array
      items:
        type: string
      ui:select:
        options:
          - label: Customer API
            value: customer-api
            if: ${{ user.groups.includes('engineers') }}
          - label: Finance API
            value: finance-api
            if: ${{ user.groups.includes('engineers') }}
          - label: LLM API
            value: llm-api
            if: ${{ user.groups.includes('data-scientists') }}
          - label: Mock API
            value: mock-api
            if: ${{ user.groups.includes('contractors') }}

  steps:
    # fetch:template fetches the skeleton and renders it with the given values
    - id: fetch
      name: Fetch and Configure Base
      action: fetch:template
      input:
        url: ./skeleton
        values:
          cloud: ${{ parameters.cloud }}
          apis: ${{ parameters.apis }}

This way, developers see only what matches their role, while the resulting service comes pre-wired with the right SDKs, APIs, and secure-by-design pipeline configurations.
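
For completeness: inside the ./skeleton directory, the injected values then drive the generated files. Backstage renders skeleton files with Nunjucks, so a templated config file could look like this (the file name and keys are just examples):

# skeleton/app-config.yaml -- example file, rendered by fetch:template
app:
  cloud: ${{ values.cloud }}        # replaced with the developer’s (role-filtered) choice
  apis: ${{ values.apis | dump }}   # the selected API list, serialized via Nunjucks’ dump filter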

Sounds like an unachievable target? Well, you can also let nice GenAI tools create your applications just by chatting with them, and deploy the resulting applications in your infrastructure or connected cloud. But this might open another barn door (remember that story?). So it might be a better idea to team up with your IT department and your system integrator of choice and explore the right way to adopt AI. That will more probably lead to a positive ROI, and your CISO might not be so angry anymore. :-)