Working in an enterprise as vast as Maersk, much of my time is spent gathering information from a myriad of sources. Recently, I have been making a determined effort to document organisational processes and tribal knowledge in my agent instructions file. How do I get access to a new Kubernetes cluster? How do I consume an API via our common gateway? Where do I find the credentials for our git bot? I figured that even if the agent couldn’t put this knowledge to good use, it would at least serve as a useful reference for me. That worry turned out to be unfounded; modern agents are great with this kind of information.

A large part of the reason this works so well is that our code and documentation are centralised in GitHub. Like many other platforms, GitHub exposes a Model Context Protocol (MCP) server, which the agent uses to search for relevant information. For features the MCP server does not support, the agent can fall back to GitHub’s CLI.
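For readers who have not wired this up themselves: the setup is small. A minimal sketch, assuming VS Code’s `.vscode/mcp.json` format and GitHub’s hosted MCP endpoint (check your own editor’s MCP documentation for the exact shape):

```json
{
  "servers": {
    "github": {
      "type": "http",
      "url": "https://api.githubcopilot.com/mcp/"
    }
  }
}
```

For the fallback, the agent only needs an authenticated `gh` installation; it can then run commands such as `gh search code`, or hit any REST endpoint directly with `gh api`.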

A couple of weeks ago, I made a request to an internal API and got back a 500 Internal Server Error. Before the advent of agentic AI, I would have been reluctant to dig into the problem myself. At best, I would have spent some time figuring out which team owned the API and opened a bug report. This time, I gave the agent the request I was making and the response I was getting back, and it:

  • Looked up which cluster the API was running in.
  • Granted itself access to the cluster.
  • Inspected the logs of the API to find the root cause.
  • Found the repository with the API’s source code.
  • Located the bug in the code and suggested a fix.

In this case, the bug was a nil pointer dereference that would not have been difficult for a human to find with the log output. The main benefit of the agent was not the code change itself, but the glue work it performed to gather all the necessary data.
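For readers unfamiliar with the failure mode: a nil pointer dereference in Go panics at runtime, and the resulting log output contains a stack trace pointing at the offending line, which is why the fix is easy once the logs are in hand. The snippet below is purely illustrative — the types, names, and fix are mine, not the actual code involved:

```go
package main

import "fmt"

// Customer and Address are stand-ins for the kind of structs involved;
// they are not the real types from the incident.
type Customer struct {
	Address *Address
}

type Address struct {
	City string
}

// cityOfBuggy shows the class of bug: dereferencing a pointer field
// without a guard. It panics when a customer has no address.
func cityOfBuggy(c *Customer) string {
	return c.Address.City // nil pointer dereference if c.Address is nil
}

// cityOf is the kind of fix the agent suggested: guard the dereference
// and return a sensible zero value instead of panicking.
func cityOf(c *Customer) string {
	if c == nil || c.Address == nil {
		return ""
	}
	return c.Address.City
}

func main() {
	c := &Customer{} // Address left nil, as in the failing request
	fmt.Println(cityOf(c))
}
```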

This year, we have been working to combine documentation from our different platforms into a single repository. This allowed us to set up a chatbot through which engineers can interact with the documentation. I expected the chatbot to be the main benefit of consolidation, but in practice, telling the local agent to “look up any information you need in this repository” turns out to be more effective.

Initially, I would copy parts of our documentation into the instructions file. That helps, but it quickly goes out of date. It works better to tell the agent how to gather information autonomously. This is the single most useful paragraph I have added to the agent instructions:

We store documentation in <repository>. Specifically, documentation for each platform is stored in <directory>. Problems and features often involve other services. Use the documentation repository to search for relevant information about other services.
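To make the shape concrete, here is a minimal sketch of how that paragraph can sit alongside the tribal knowledge mentioned earlier — the headings are mine, not our real file, and the placeholders are deliberately left unfilled:

```markdown
# Agent instructions

## How to get access to a Kubernetes cluster
...

## How to consume an API via the common gateway
...

## Where to find the credentials for the git bot
...

## Where to find documentation
We store documentation in <repository>; documentation for each platform
is stored in <directory>. Search there for information about other services.
```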

This way of using agentic AI is also a good test of system self-serviceability. If access is gated behind a ticketing system or requires reaching out to a human, then the agent will constantly get stuck.