GitOps: From Theory to Production in 90 Days
GitOps sounds simple in theory: your entire infrastructure lives in Git. Every change goes through code review. Deployments are automated. Rollbacks are one commit away.
In practice, many organizations spend six months struggling with this approach before realizing they've built something fragile. Developers accidentally delete production databases. Secrets end up in repositories. Tools like ArgoCD and Flux are powerful, but without discipline they become a liability rather than an asset.
We've helped dozens of Swiss and European enterprises navigate this journey. Here's the practical path to GitOps that actually works.
What GitOps Really Means
GitOps isn't just "push code and infrastructure appears." It's an operational model with specific principles:
- Git is the single source of truth: Your entire desired infrastructure state lives in a Git repository
- Declarative infrastructure: You describe what you want, not how to build it
- Automated reconciliation: A controller continuously compares desired state (Git) with actual state (running systems) and fixes mismatches
- Auditable changes: Every change has a commit, author, timestamp, and reviewer approval
The power comes from the third principle, automated reconciliation. If someone manually deletes a pod, the controller recreates it within seconds. If a rogue deploy overwrites a service, the reconciler restores the state declared in Git.
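In ArgoCD, for example, reconciliation is enabled per application through its sync policy. A minimal Application manifest might look like this (the repository URL, path, and application name are placeholders, not a prescribed layout):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app                 # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/gitops.git  # placeholder repo
    targetRevision: main
    path: apps/demo-app
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    automated:
      prune: true      # delete live resources that were removed from Git
      selfHeal: true   # revert manual edits to tracked resources
```

With `selfHeal: true`, any manual change to a tracked resource is reverted on the next sync; with `prune: true`, resources deleted from Git disappear from the cluster as well.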
The Three Stages of GitOps Adoption
Stage 1: Foundation (Weeks 1-4)
Goal: Get Git connected to your infrastructure with basic reconciliation working.
What you need:
- A Git repository (GitLab, GitHub, Gitea) with branch protection rules
- A Kubernetes cluster or virtual machine fleet (Hidora can provide this)
- A reconciliation tool (ArgoCD or Flux), typically paired with Kustomize or Helm for templating
- A small team trained on the approach
Day one checklist:
- Set up a Git repository structure
- Install ArgoCD or Flux on your cluster
- Connect it to your repository
- Deploy a simple application (not production-critical yet)
- Test both directions: change a manifest in Git, then delete a pod manually, and watch how reconciliation handles each case
Common mistake: Trying to GitOps everything on day one. Don't. Start with non-critical services.
Stage 2: Pattern Building (Weeks 5-12)
Goal: Establish repeatable patterns and governance.
Organizational patterns you'll develop:
- How teams structure their Kustomize bases and overlays
- Secret management (external secret operator, Sealed Secrets, or Vault)
- Namespace isolation and RBAC policies
- Promotion pipeline (dev → staging → production)
- Emergency procedures (manual deploys when Git-based automation fails)
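The base/overlay pattern above is easiest to see in a concrete kustomization. As a sketch (namespace, names, and paths are illustrative), a production overlay layers replica counts and a pinned image tag on top of a shared base:

```yaml
# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: shop-production          # hypothetical production namespace
resources:
  - ../../base                      # shared manifests (Deployment, Service, ...)
patches:
  - patch: |-
      - op: replace
        path: /spec/replicas
        value: 3
    target:
      kind: Deployment
      name: shop-api
images:
  - name: registry.example.com/shop-api
    newTag: v1.4.2                  # pinned explicitly for production
```

Dev and staging get their own overlay directories with different replica counts and tags, while the base stays identical across environments.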
Technical setup:
- Multiple Git repositories: one for infrastructure, one or more for applications
- Environment-specific overlays (dev, staging, production)
- Policy enforcement (using tools like Kyverno or OPA)
- Notifications (Slack alerts when deployments succeed/fail)
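For the notifications piece, ArgoCD ships a notifications engine configured through a ConfigMap. A hedged sketch (the Slack token lives in `argocd-notifications-secret`; trigger and template names are conventional, not mandatory):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
  namespace: argocd
data:
  service.slack: |
    token: $slack-token              # resolved from argocd-notifications-secret
  template.app-sync-failed: |
    message: "Sync failed for {{.app.metadata.name}}: {{.app.status.operationState.message}}"
  trigger.on-sync-failed: |
    - when: app.status.operationState.phase in ['Error', 'Failed']
      send: [app-sync-failed]
```

Individual Applications then subscribe via an annotation such as `notifications.argoproj.io/subscribe.on-sync-failed.slack: <channel>`.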
The promotion flow looks like:
Developer writes code
↓
PR merged to main branch
↓
Container image built and pushed
↓
Bot updates deployment manifest in Git (image tag)
↓
ArgoCD detects change, deploys to staging
↓
Tests pass
↓
Explicit promotion to production (via Git pull request or tag)
↓
ArgoCD deploys to production
This discipline prevents accidental production deployments.
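The "bot updates deployment manifest" step usually amounts to a one-line change: a CI job or image updater rewriting the tag in a Kustomize file, which the reconciler then picks up. A sketch with illustrative names:

```yaml
# apps/shop-api/kustomization.yaml — the only line CI ever touches is newTag
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
images:
  - name: registry.example.com/shop-api
    newTag: "2025.01.15-a1b2c3d"   # bumped by the CI bot after each build
```

Because the bump is an ordinary commit, every deployed version is visible in `git log`, and promotion to production is just the same change landing in the production overlay via a pull request.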
Stage 3: Maturity (Weeks 13-90)
Goal: GitOps becomes operational reality, integrated with your entire pipeline.
What maturity looks like:
- Developers can't remember how deploys worked before GitOps (muscle memory shifted)
- Incident response is "revert Git commit," not manual pod surgery
- Compliance auditors are happy (full audit trail)
- Secrets are never exposed (external secret operator handles them)
- New services onboard in hours, not weeks
Advanced patterns:
- Cross-cluster promotions (edge deployments, multi-region)
- GitOps-driven infrastructure provisioning (Terraform + ArgoCD)
- Progressive delivery (Flagger for canary deployments)
- Cost optimization automation (scale down dev environments at night)
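As one example of progressive delivery, a Flagger Canary resource describes the rollout analysis declaratively, so it lives in Git like everything else. A sketch under the assumption that Flagger and a metrics provider are already installed (names mirror the earlier hypothetical service):

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: shop-api
  namespace: shop-production
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: shop-api
  service:
    port: 80
  analysis:
    interval: 1m          # evaluate metrics every minute
    threshold: 5          # roll back after 5 failed checks
    maxWeight: 50         # cap canary traffic at 50%
    stepWeight: 10        # shift traffic in 10% increments
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99         # require 99% success rate to keep promoting
        interval: 1m
```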
The Practical 90-Day Timeline
Month 1: Foundation
Week 1-2:
- Set up Git repository with proper branching strategy
- Install ArgoCD on your Kubernetes cluster
- Deploy three non-critical applications to test the flow
Week 3-4:
- Implement secret management (don't commit secrets to Git)
- Set up Slack notifications for deployment events
- Train teams on the workflow
- Handle first emergency: a pod crashes and auto-recovers
Month 2: Pattern Stabilization
Week 5-6:
- Design namespace and RBAC structure
- Build Kustomize overlays for dev/staging/prod
- Migrate 5-10 more applications to GitOps
- Implement pre-deployment checks (static analysis, security scanning)
Week 7-8:
- Test disaster recovery: delete a pod, watch it come back
- Implement policy enforcement (prevent privileged containers, enforce resource limits)
- Set up cross-team communication (who approves production changes?)
- Document runbooks
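Policy enforcement at this stage can be as small as one Kyverno ClusterPolicy. A sketch of the classic "no privileged containers" rule (set `validationFailureAction: Audit` first if you want to observe before blocking):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged
spec:
  validationFailureAction: Enforce   # reject violating workloads, not just warn
  rules:
    - name: no-privileged-containers
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privileged containers are not allowed."
        pattern:
          spec:
            containers:
              - =(securityContext):     # if securityContext is set...
                  =(privileged): "false" # ...privileged must be false
```

The policy itself lives in the infrastructure repository, so loosening it requires a reviewed commit, not a kubectl command.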
Month 3: Production Hardening
Week 9-12:
- Move critical workloads to GitOps-managed deployment
- Test rollback procedures
- Implement health checks and automatic remediation
- Fine-tune promotion policies
- Run game days (simulate failures, practice recovery)
Secret Management: The Trickiest Part
Never commit plaintext secrets to Git: anything that lands there, even "temporarily," lives in history forever. Secrets management is where GitOps implementations most often fail.
Three approaches (in order of simplicity to sophistication):
- Sealed Secrets: Encrypts secrets so only the ciphertext lives in Git; simple, works on day one, but requires key management
- External Secrets Operator: Fetches secrets from HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault at deploy time
- Vault + Kubernetes integration: Most flexible, requires Vault expertise
For Swiss organizations handling sensitive data, ESO (External Secrets Operator) connecting to a managed Vault is the sweet spot.
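With ESO, what goes into Git is only a reference to the secret, never its value. A sketch of an ExternalSecret backed by Vault (store name, namespace, and Vault path are illustrative and depend on how your SecretStore is configured):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: shop-db-credentials          # hypothetical name
  namespace: shop-production
spec:
  refreshInterval: 1h                # re-fetch from Vault hourly
  secretStoreRef:
    name: vault-backend              # a (Cluster)SecretStore pointing at Vault
    kind: ClusterSecretStore
  target:
    name: shop-db-credentials        # Kubernetes Secret the operator creates
  data:
    - secretKey: password
      remoteRef:
        key: shop/db                 # illustrative path in Vault
        property: password
```

The operator materializes the Kubernetes Secret in the cluster at deploy time, so rotating the value in Vault requires no Git change at all.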
The Governance Question
GitOps enforces discipline, but which kind?
Centralized approval: All production changes require a specific team's Git approval. Slower but auditable.
Team-based approval: Each team approves their own changes. Faster but requires trust.
Hybrid: Some services (core infrastructure) require central approval, others (application teams) approve themselves.
Most organizations start centralized, then shift toward team-based as maturity increases.
Emergency Procedures
GitOps makes normal operations excellent. But what about 3 AM when your primary database goes down and you need to act now?
Your emergency protocol:
- All teams have access to manual deployment tools (kubectl, helm)
- Documented trigger: when you can bypass GitOps
- After-action review: why did Git state drift? How do we prevent it?
- Reconciliation awareness: the controller will revert your manual change on its next sync (typically within minutes), so commit the real fix to Git promptly
This preserves the safety of GitOps while acknowledging operational reality.
Tools and Ecosystem
ArgoCD vs Flux:
- ArgoCD: UI-driven, easier learning curve, good for teams new to GitOps
- Flux: GitOps-first, more minimal, powerful for experienced teams
Secret storage:
- Sealed Secrets: Simple, embedded
- Vault: Industry standard, complex setup
- External Secrets Operator: Bridges between Git and external vaults
Policy enforcement:
- Kyverno: Kubernetes-native, easy to learn
- OPA/Gatekeeper: More powerful, steeper learning curve
Common Failures and Recovery
Failure 1: Git as a dumping ground
What went wrong: Teams push everything to Git without structure, and repositories become unreadable.
Recovery: Enforce a directory structure, require Kustomize overlays, and code review all submissions.
Failure 2: Secrets in Git
What went wrong: A developer commits a database password "temporarily," and it lives in Git history forever.
Recovery: Implement pre-commit hooks, use GitLab/GitHub secret scanning, and rotate all exposed secrets.
Failure 3: GitOps conflicts with CI/CD
What went wrong: Your GitOps tool and your CI/CD tool fight over who owns deployments.
Recovery: Clear separation: CI/CD builds and tests, GitOps handles deployment orchestration.
Failure 4: Rollback paralysis
What went wrong: Git state and actual state diverge so much that reverting a commit doesn't fix problems.
Recovery: Implement health checks, canary deployments, and automated rollbacks on failure.
Quick Reality Check
GitOps is powerful, but it's not a magic fix. Expect:
- Week 1-2: "This seems simple"
- Week 3-4: "Why is this so complicated?"
- Week 8-12: "Oh, I see how this works now"
- Month 4+: "I don't remember how we deployed before GitOps"
The 90-day timeline is realistic for a medium organization. Yours might take 120 days. That's fine. Rush it and you'll spend months fixing security holes and operational chaos.
Moving Forward
GitOps isn't about tools. It's about changing how your organization thinks about infrastructure: as code, versioned, audited, and automated. The tools (ArgoCD, Flux, Kustomize) are just implementations.
Start small. One team. One non-critical service. Get comfortable with the workflow. Then expand.
Hidora helps organizations navigate this transition, providing both the infrastructure (Kubernetes, GitLab) and the operational expertise to make GitOps reliable and sustainable.
Related reading:
- Platform Engineering: Why Your Dev Teams Need It Now
- Infrastructure as Code: Terraform in Production, No Regrets
Found this helpful? See how Hidora can help: Professional Services · Managed Services · SLA Expert