Claude Code Doesn't Know Your Deployment Pipeline
on AI, DevOps, Python, OpenTelemetry
Claude Code is an excellent pair programmer. It reads your codebase, suggests refactors, writes tests, and generates code with impressive fluency. But there’s a critical gap you need to understand: Claude Code has zero context for how your code actually ships to production.
I learned this the hard way last week while instrumenting a Python application with OpenTelemetry.
The Setup
I asked Claude Code to help me add OTEL instrumentation to a Python service. The recommendation was straightforward: wrap the entry point with opentelemetry-instrument. Something like:
```shell
opentelemetry-instrument python my_app.py
```
This worked perfectly in local development. Traces flowed, spans populated, dashboards lit up. I was ready to deploy.
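If you want to reproduce that local check, pointing the traces exporter at the console is a quick way to confirm spans are being produced at all. This assumes the opentelemetry-distro package is installed; `OTEL_TRACES_EXPORTER` is a standard OpenTelemetry SDK environment variable.

```shell
# Print spans to stdout instead of shipping them,
# to confirm the auto-instrumentation actually fires
OTEL_TRACES_EXPORTER=console opentelemetry-instrument python my_app.py
```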
The Gap
In production, the application runs under ProcMgr — a process manager that controls startup, lifecycle, and restarts. ProcMgr invokes the Python interpreter directly, bypassing the opentelemetry-instrument wrapper entirely.
Result: zero telemetry in production.
The code was correct. The instrumentation was correct. But the deployment context — how the process actually starts in production — was invisible to Claude Code. It only saw files, not pipelines.
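It helps to know why the wrapper is so easy to bypass. `opentelemetry-instrument` works by re-executing Python with a modified `PYTHONPATH`, so that a bundled `sitecustomize` module runs at interpreter startup and installs the instrumentation. A simplified model of that trick (the directory path here is illustrative, not the real install location):

```python
import os

# Hypothetical install path, for illustration only
AUTO_DIR = "/usr/lib/python3/site-packages/opentelemetry/instrumentation/auto_instrumentation"

def wrapper_env(base_env):
    """Rough model of what opentelemetry-instrument does before re-executing
    Python: prepend the auto-instrumentation package's directory to PYTHONPATH
    so its sitecustomize.py runs when the new interpreter starts."""
    env = dict(base_env)
    env["PYTHONPATH"] = os.pathsep.join(
        p for p in (AUTO_DIR, base_env.get("PYTHONPATH", "")) if p
    )
    return env

# A process manager that execs `python my_app.py` with its own environment
# never passes through wrapper_env(), so the sitecustomize hook never loads.
```

That last comment is the whole failure mode: ProcMgr builds its own environment and invokes the interpreter directly, so the startup hook simply never runs.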
Why This Happens
Claude Code (like similar AI coding assistants) operates on the codebase you give it. It doesn't see:
- Process managers (ProcMgr, systemd, supervisord)
- CI/CD pipelines (GitHub Actions, Jenkins, CircleCI)
- Container orchestration (Kubernetes, ECS, Nomad)
- Environment-specific configs (staging vs. production env vars, secrets)
- Deployment order (migrations before app startup, feature flags, canary rollouts)
- Compliance gates (security scans, approval workflows)
These are all external to the repository. They’re infrastructure, not code. And AI assistants don’t have access to your infrastructure unless you explicitly provide that context.
The Fix
Here’s what I did to resolve it:
- **Document the deployment path** — I added a `DEPLOYMENT.md` noting that ProcMgr controls startup and explaining how instrumentation should be applied.
- **Modify the entry point** — Instead of wrapping externally, I added instrumentation initialization directly in the application code:

  ```python
  from opentelemetry.instrumentation.auto_instrumentation import initialize

  def main():
      initialize()
      # ... rest of application
  ```

- **Validate in staging** — Before deploying, I verified the instrumentation worked under ProcMgr in a staging environment that mirrors production.
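A slightly more defensive version of that entry-point change guards the import, so the service still starts in environments where the telemetry packages aren't installed. This is a sketch, assuming opentelemetry-distro is present in production:

```python
def init_telemetry() -> bool:
    """Initialize OTEL auto-instrumentation from inside the app,
    independent of how the process was launched."""
    try:
        # Same hook the opentelemetry-instrument wrapper triggers at startup
        from opentelemetry.instrumentation.auto_instrumentation import initialize
    except ImportError:
        return False  # telemetry packages absent: run uninstrumented
    initialize()
    return True

def main() -> None:
    init_telemetry()
    # ... rest of application
```

Because the call happens inside `main()`, it works identically whether ProcMgr, systemd, or a developer's shell starts the process.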
The Takeaway
AI coding assistants are powerful tools, but they’re pair programmers, not deployment owners. They excel at:
- Code generation and refactoring
- Test writing
- Debugging within the codebase
- Explaining patterns and best practices
They’re not reliable for:
- Deployment-specific configuration
- Infrastructure-aware decisions
- Production readiness validation
Bridge the gap yourself. Document your deployment context. Keep humans in the loop for deployment decisions. And when using AI, explicitly include pipeline constraints in your prompts:
“This app runs under ProcMgr in production. How should I instrument it?”
That one sentence would have saved me hours of debugging.
AI is transforming how we write code. But shipping code is still a human responsibility — and that’s not going to change anytime soon.