Agents in the Pipeline
From a skill that runs well locally to a service you can trust in production
Access to sensitive data — credentials, internal docs, personal data, confidential content
Exposure to untrusted content — external URLs, user input, web pages, third-party data sources
Ability to act externally — outbound network access, file writes, API calls to external endpoints
The first step. You decide exactly which libraries, frameworks, and tools are available to the agent — nothing else. Built with a Dockerfile, cached in a container registry (e.g. GitLab Registry) for reuse across every pipeline run. No dependency drift, no surprises.
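A minimal sketch of such a pinned image, assuming a Python-based agent — the base image, package list, and `agent` entrypoint are illustrative, not prescriptive:

```dockerfile
# Only what the agent needs gets in — nothing else.
FROM python:3.12-slim

# Pin every dependency so the same image behaves identically
# across every pipeline run. No dependency drift.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Run as a non-root user; the agent gets no more than it needs.
RUN useradd --create-home agent
USER agent
WORKDIR /home/agent
ENTRYPOINT ["agent", "run"]
```

Build once, push to the registry, and reference the tagged image from every pipeline.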
PreToolUse validation intercepts every tool call before execution. PostToolUse captures metrics. Tip: configure your hooks to log every rejected call — review these to understand what the agent tried to do, then add or remove tools from the container accordingly.
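A PreToolUse hook is just a script that reads the tool-call payload on stdin and signals allow or block via its exit code (in Claude Code, exit code 2 blocks the call). A bash sketch of the validate-and-log pattern described above — the log path and blocklist are illustrative:

```shell
#!/usr/bin/env bash
# PreToolUse hook sketch: reject shell commands that reach for the
# network, and log every rejection for later review.

LOG="${HOOK_LOG:-/tmp/agent-rejections.log}"

check_tool_call() {            # reads the hook's JSON payload on stdin
  local payload command
  payload=$(cat)
  # Crude field extraction for the sketch; use jq in real hooks.
  command=$(printf '%s' "$payload" | sed -n 's/.*"command"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')
  case "$command" in
    *curl*|*wget*)
      printf '%s REJECTED: %s\n' "$(date -u +%FT%TZ)" "$command" >> "$LOG"
      echo "Outbound network access is not permitted in this pipeline" >&2
      return 2                 # exit code 2 blocks the tool call
      ;;
  esac
  return 0                     # anything else is allowed
}
```

Reviewing the rejection log tells you what the agent actually tried to do — which is exactly the evidence you need when deciding what to add to, or remove from, the container.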
GitLab CI orchestration with manual triggers, timeouts, artifact retention, and an audit trail for compliance.
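In `.gitlab-ci.yml` terms, that orchestration is a short job definition — the job name, image tag, and durations below are illustrative:

```yaml
# Sketch of a pipeline job wrapping the agent.
generate-docs:
  image: registry.example.com/agents/docs-agent:1.4.2  # pinned, prebuilt image
  rules:
    - when: manual            # a human pulls the trigger
  timeout: 30m                # hard stop if the agent runs away
  script:
    - agent run --task docs
  artifacts:
    paths: [output/]
    expire_in: 30 days        # retained for audit review
```

Manual `rules`, a hard `timeout`, and `expire_in` on artifacts map one-to-one onto the trigger, timeout, and retention guarantees above.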
Security · Infra · DevOps · Domain — each team owns their layer. No single team is a bottleneck.
Scans git history, groups changes by theme, generates ADRs and Mermaid architecture diagrams. Runs incrementally.
Creates full developer onboarding docs — app overview, architecture guide, getting started, troubleshooting.
Webhook-triggered when errors exceed a threshold. Uses Azure MCP to analyse the issue and prepare evidence — deep links to KQL queries and charts — so the engineer assigned has a running start before deciding the best course of action.
Hooks are plain bash or PowerShell scripts; the container runs anywhere. Platform-agnostic by design.
AI Lead · Natural History Museum · South Kensington