The CIS Docker Benchmark runs to roughly 200 pages; the CIS Kubernetes Benchmark is longer still. Most organizations that commit to implementing them discover the same problem: many of the controls are clear in specification but expensive in execution, and the tools that automate compliance checking do not automate compliance remediation.
Understanding which controls automation handles well — and which require something different — is the difference between a benchmark compliance project that succeeds and one that stalls.
What the CIS Benchmarks Actually Cover
The CIS Docker Benchmark organizes controls across host configuration, Docker daemon configuration, container images, container runtime, Docker security operations, and Docker Swarm configuration. For Kubernetes, the benchmark covers API server, etcd, scheduler, controller manager, node configuration, and policies.
The image-specific controls are where most security teams have the largest compliance gap, and where the automation picture is most nuanced.
Key image controls from CIS Docker Benchmark:
- 4.1: Create a user for the container
- 4.2: Use trusted base images
- 4.3: Do not install unnecessary packages
- 4.4: Scan and rebuild images to include security patches
- 4.6: Add HEALTHCHECK instructions
- 4.7: Do not use update instructions alone
- 4.9: Use COPY instead of ADD
- 4.10: Do not store secrets in Dockerfiles
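Several of these controls are visible directly in the Dockerfile. A minimal sketch of what a compliant file looks like (the base image, user name, and application paths are illustrative, not prescriptive):

```dockerfile
FROM debian:12-slim

# 4.7: update and install in the same RUN layer so a cached, stale
# package index is never used on its own
# (--no-install-recommends also supports 4.3's minimal-package goal)
RUN apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# 4.9: COPY instead of ADD — no implicit URL fetch or archive extraction
COPY app /opt/app

# 4.1: create and switch to a non-root user
RUN useradd --system --no-create-home appuser
USER appuser

# 4.6: let the runtime detect an unhealthy container
HEALTHCHECK --interval=30s CMD ["/opt/app/healthcheck"]

# 4.10: no secrets baked in — credentials are injected at runtime
CMD ["/opt/app/server"]
```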
Control 4.3 — “Do not install unnecessary packages” — is among the most impactful for security and among the hardest to satisfy manually. What counts as unnecessary? Which packages can safely be removed without breaking the application? Manual analysis is error-prone and does not scale across a large container image portfolio.
The Two Categories of CIS Compliance Automation
Configuration Compliance Scanning
Tools like kube-bench (for Kubernetes) and docker-bench-security (for Docker) scan your configuration against benchmark controls and report pass/fail for each. These tools are valuable for:
- Identifying configuration gaps against the benchmark
- Generating compliance evidence for audit
- Continuous monitoring of configuration drift
What they do not do: fix the gaps they find. A failed control on daemon configuration requires a daemon configuration change. A failed control on image composition requires an image change.
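Both tools take a single invocation to produce a first report. A typical starting point looks something like this (exact flags and targets vary by tool version and cluster role):

```shell
# Audit a Kubernetes worker node against the CIS Kubernetes Benchmark
kube-bench run --targets node

# Audit the Docker host, daemon, and images against the CIS Docker Benchmark
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh
```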
Image Component Hardening
Container hardening that addresses CIS 4.3 (“do not install unnecessary packages”) requires knowing which packages are actually used at runtime. This is not something a static analysis tool can determine with confidence — it requires runtime execution profiling.
The workflow that works:
- Run the container with runtime profiling enabled in a test environment representative of production load
- Capture which packages, binaries, and libraries are actually executed
- Generate a minimal image that contains only what was observed to be needed
- Validate that the hardened image passes functional testing
The result is a set of hardened container images that satisfies CIS 4.3 with evidence: the runtime profile documents that the removed packages were not used during observed execution.
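The core of steps 2 and 3 is mapping observed file accesses back to the packages that own them: any package with no observed files is a removal candidate. A simplified sketch of that mapping (in practice the package-to-files map would be extracted from the image's package database, e.g. via dpkg -L; the package names and paths below are illustrative):

```python
def removal_candidates(package_files, observed_paths):
    """Return packages none of whose files were touched at runtime.

    package_files: dict mapping package name -> set of file paths it owns
                   (extracted from the image's package database).
    observed_paths: set of file paths recorded by the runtime profiler.
    """
    observed = set(observed_paths)
    return sorted(
        pkg for pkg, files in package_files.items()
        if not files & observed  # no overlap: never used under test load
    )

# Illustrative data: only curl's files were exercised during profiling
packages = {
    "curl": {"/usr/bin/curl", "/usr/lib/libcurl.so.4"},
    "vim": {"/usr/bin/vim"},
    "perl": {"/usr/bin/perl"},
}
observed = {"/usr/bin/curl", "/usr/lib/libcurl.so.4", "/opt/app/server"}

print(removal_candidates(packages, observed))  # → ['perl', 'vim']
```

The sorted output is what feeds the minimal-image rebuild, and the observed/package mapping itself is the audit evidence for CIS 4.3.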
Mapping Specific CIS Controls to Hardening Activities
CIS 4.1 (Non-Root User)
Automation coverage: high. The Dockerfile USER instruction sets a non-root user, and admission controllers can enforce that pods run as non-root. This control is fully automatable in both implementation and enforcement.
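On the enforcement side, the pod spec can require non-root execution regardless of what the image declares. A minimal sketch (pod and image names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  securityContext:
    runAsNonRoot: true        # kubelet refuses containers that would run as UID 0
  containers:
    - name: app
      image: registry.example.com/app:1.0
      securityContext:
        allowPrivilegeEscalation: false
```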
CIS 4.3 (Minimal Packages)
Automation coverage: requires runtime profiling. Static analysis can flag obviously unnecessary packages, but the definitive list of what to remove requires runtime observation. Automated removal based on runtime profiles satisfies this control with evidence.
CIS 4.4 (Patching)
Automation coverage: high for detection, medium for remediation. Container scanning tools identify CVEs automatically, and rebuilding images with updated base layers is automatable in CI/CD. The gap is that automated rebuilds only happen if something triggers the pipeline when an upstream base image updates; most CI systems need a scheduled job or registry webhook to provide that trigger.
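One common way to close that gap is a scheduled rebuild that forces a fresh pull of the base image. A hedged sketch as a GitHub Actions workflow (workflow structure is real; the image name is a placeholder, and the registry login step is omitted):

```yaml
# Hypothetical scheduled-rebuild workflow: --pull forces the newest
# base layer, so upstream security patches land without manual action.
name: rebuild-on-base-update
on:
  schedule:
    - cron: "0 3 * * *"   # nightly; a registry webhook is tighter if available
jobs:
  rebuild:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build --pull -t registry.example.com/app:${{ github.sha }} .
      - run: docker push registry.example.com/app:${{ github.sha }}
```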
CIS 4.7 (Avoid Stale Update Instructions)
Automation coverage: high. Dockerfile linting catches RUN apt-get update without a subsequent install in the same layer. Tools like Hadolint flag this at build time.
CIS 5.1 (Privileged Container Restriction)
Automation coverage: high. Admission controllers with Pod Security Standards at Restricted profile deny privileged containers. This is enforceable in policy without requiring image changes.
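With the built-in Pod Security Admission controller, enforcement is a single namespace label:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    # Reject privileged containers and other restricted-profile
    # violations at admission time, before pods are scheduled
    pod-security.kubernetes.io/enforce: restricted
```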
CIS 5.7 (No Sensitive Host Paths)
Automation coverage: high. Admission policy can deny mounts of sensitive host paths. This is a Kubernetes-layer control, not an image control.
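As one concrete form of such a policy, a Kyverno rule can deny hostPath volumes outright. This is adapted from Kyverno's well-known disallow-host-path sample policy; note that Pod Security Standards at the baseline or restricted profile already block hostPath, so a custom rule like this is mainly useful for finer-grained exceptions:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-host-path
spec:
  validationFailureAction: Enforce
  rules:
    - name: no-host-path
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "hostPath volumes are not allowed"
        pattern:
          spec:
            # if any volumes are declared, none may set hostPath
            =(volumes):
              - X(hostPath): "null"
```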
Continuous Compliance vs. Point-in-Time Assessment
The more significant limitation of CIS benchmark automation is that compliance checking is typically a point-in-time exercise: run the benchmark tool, get a report, remediate, run again. In between runs, configuration and images change.
Container images update. New packages are added. Base image updates introduce new packages. A container that passed benchmark assessment last month may since have accumulated image changes that introduce CIS 4.3 violations.
Continuous compliance requires integrating CIS benchmark checks into the CI/CD pipeline: every image build triggers hardening validation, every Kubernetes configuration change triggers policy re-evaluation. Benchmark compliance becomes a property that is maintained continuously rather than demonstrated periodically.
Frequently Asked Questions
What is the CIS hardening process?
The CIS hardening process involves implementing the controls defined in CIS Benchmarks to reduce the attack surface of a system. For containers, this means applying the CIS Docker Benchmark and CIS Kubernetes Benchmark controls — including running containers as non-root, minimizing installed packages, scanning images for CVEs, and enforcing policy through admission controllers. Container hardening against CIS benchmarks requires both configuration changes and runtime-profiling-based image minimization to satisfy controls like CIS 4.3 (do not install unnecessary packages).
What is CIS Level 1 server hardening?
CIS Level 1 is the baseline profile within CIS Benchmarks, designed to be implementable without significant operational impact. These controls represent the minimum recommended security configuration — disabling unnecessary services, enforcing secure defaults, and removing unneeded software. For container images, CIS Level 1 controls map directly to practices like running as non-root (CIS 4.1), using minimal package footprints (CIS 4.3), and scanning images for vulnerabilities before deployment (CIS 4.4).
Which tool is used to deploy containerized applications?
Containerized applications are typically deployed using Kubernetes as the orchestration platform, with tools like kubectl, Helm, and CI/CD platforms (such as GitHub Actions, GitLab CI, or Jenkins) driving the deployment pipeline. For CIS compliance specifically, kube-bench audits Kubernetes configurations against the CIS Kubernetes Benchmark, while docker-bench-security audits Docker daemon and image configurations against the CIS Docker Benchmark.
What security technology best assists with the automation of security workflows?
Admission controllers — such as Kubernetes Pod Security Standards, OPA/Gatekeeper, or Kyverno — are among the most effective technologies for automating container security workflows. They enforce policy at the deployment layer, preventing non-compliant configurations from ever reaching production. Combined with CI/CD pipeline integration that runs CIS benchmark checks on every image build, these tools enable continuous compliance rather than point-in-time assessment.
Starting the CIS Hardening Project
For organizations beginning CIS benchmark implementation for containers:
- Run kube-bench and docker-bench-security to establish your current state. Know where you stand before you start.
- Prioritize controls by risk. The privileged container, non-root execution, and minimal packages controls have the highest security impact. Start there.
- Treat image minimization as a separate workstream from configuration hardening. It requires runtime profiling and image rebuild, not daemon configuration changes.
- Automate compliance checking in CI/CD so that future changes cannot silently break benchmark compliance.
- Keep evidence. Audit demonstrations of CIS benchmark compliance require more than a passing scan — they require evidence of continuous compliance and documented remediation when gaps are found.
The CIS benchmarks are a floor, not a ceiling. Satisfying them does not mean your containers are secure in every dimension. But they represent the consensus baseline for a reason: the controls that appear in the benchmark are the ones that consistently make a material difference.