Container security in Kubernetes requires a defense-in-depth approach that covers the entire lifecycle, from image builds to runtime protection. This guide provides practical implementations for securing Kubernetes applications using DevSecOps principles.
## Container image security

### Image scanning in CI/CD

Integrate vulnerability scanning into the build pipeline:
```yaml
# .github/workflows/container-security.yml
name: Container Security Pipeline

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  build-and-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build Container Image
        run: |
          docker build -t myapp:${{ github.sha }} .

      - name: Run Trivy Vulnerability Scanner
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'myapp:${{ github.sha }}'
          format: 'sarif'
          output: 'trivy-results.sarif'
          severity: 'CRITICAL,HIGH'
          exit-code: '1'

      - name: Upload Trivy Results
        uses: github/codeql-action/upload-sarif@v2
        if: always()
        with:
          sarif_file: 'trivy-results.sarif'

      - name: Run Grype Scanner
        uses: anchore/scan-action@v3
        with:
          image: 'myapp:${{ github.sha }}'
          fail-build: true
          severity-cutoff: high

      - name: Scan Dockerfile with Hadolint
        uses: hadolint/hadolint-action@v3.1.0
        with:
          dockerfile: Dockerfile
          failure-threshold: warning
```

### Secure Dockerfile patterns
Build minimal, hardened container images:
```dockerfile
# Multi-stage build for a minimal attack surface
FROM node:20-alpine AS builder

# Create a non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001

WORKDIR /app

# Copy package files first for better layer caching
COPY package*.json ./

# Install production dependencies and run a security audit
RUN npm ci --omit=dev && \
    npm audit --audit-level=high

COPY --chown=nextjs:nodejs . .
RUN npm run build

# Production stage - minimal image
FROM gcr.io/distroless/nodejs20-debian12

WORKDIR /app

# Copy only the files that are needed
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
COPY --from=builder /app/healthcheck.js ./

# Run as a non-root user
USER 1001

# Health check - exec form, since distroless images have no shell
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD ["/nodejs/bin/node", "healthcheck.js"]

EXPOSE 3000
CMD ["dist/server.js"]
```
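The conventions this Dockerfile follows are exactly what a linter such as Hadolint enforces in CI. As a rough, hypothetical sketch of two such checks (a non-root `USER` in the final stage, no floating base tags), not a substitute for Hadolint:

```python
def lint_dockerfile(text: str) -> list[str]:
    """Tiny, illustrative subset of Dockerfile hygiene checks."""
    problems = []
    lines = [l.strip() for l in text.splitlines() if l.strip()]

    # The final stage should switch to a non-root user.
    users = [l.split()[1] for l in lines if l.upper().startswith("USER ")]
    if not users or users[-1] in ("root", "0"):
        problems.append("final stage runs as root (missing or root USER)")

    # Base images should be pinned, not floating on :latest or no tag at all.
    for line in lines:
        if line.upper().startswith("FROM "):
            image = line.split()[1]
            if image.endswith(":latest") or (":" not in image and "@" not in image):
                problems.append(f"unpinned base image: {image}")
    return problems


good = 'FROM node:20-alpine\nUSER 1001\nCMD ["app.js"]'
bad = 'FROM node\nCMD ["app.js"]'
print(lint_dockerfile(good))  # []
print(lint_dockerfile(bad))   # flags both rules
```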
### Image signing and verification

Implement supply-chain security with cosign:
```bash
#!/bin/bash
# sign-and-verify.sh

IMAGE="registry.example.com/myapp:latest"

# Generate a key pair (one-time setup)
cosign generate-key-pair

# Sign the image
cosign sign --key cosign.key "$IMAGE"

# Verify the signature
cosign verify --key cosign.pub "$IMAGE"

# Keyless signing via Sigstore (the default mode in cosign v2, no flag needed there)
COSIGN_EXPERIMENTAL=1 cosign sign "$IMAGE"

# Verify the keyless signature
COSIGN_EXPERIMENTAL=1 cosign verify \
  --certificate-identity=user@example.com \
  --certificate-oidc-issuer=https://accounts.google.com \
  "$IMAGE"
```
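Both signing and admission policies work best against immutable, digest-pinned references. A small illustrative helper (my own, not part of cosign) that splits an image reference and reports whether it is pinned by digest rather than a mutable tag:

```python
def parse_image_ref(ref: str) -> dict:
    """Split an image reference into name, tag, and digest parts."""
    digest = None
    if "@sha256:" in ref:
        ref, digest = ref.split("@", 1)
    name, tag = ref, None
    # A ':' after the last '/' separates the tag from the repository
    # (this distinguishes a tag from a registry port like registry:5000).
    if ref.rfind(":") > ref.rfind("/"):
        name, tag = ref[:ref.rfind(":")], ref[ref.rfind(":") + 1:]
    return {"name": name, "tag": tag, "digest": digest,
            "pinned": digest is not None}


print(parse_image_ref("registry.example.com/myapp:v1.0.0@sha256:abc123"))
print(parse_image_ref("registry.example.com:5000/myapp:latest")["pinned"])  # False
```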
## Kubernetes security configurations

### Pod Security Standards

Enforce Pod Security Standards using namespace labels:
```yaml
# namespace-security.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: latest
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest
```
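Pod Security Admission reads its configuration entirely from these labels. As an illustration of that shape, a hypothetical helper that extracts the per-mode levels from a namespace object (assuming the label layout shown above; unlabeled modes fall back to the cluster default, typically `privileged`):

```python
PSA_PREFIX = "pod-security.kubernetes.io/"
MODES = ("enforce", "audit", "warn")


def psa_levels(namespace: dict) -> dict:
    """Read Pod Security Admission levels off a namespace's labels.

    Modes without a label default to 'privileged' (no restriction),
    which is the usual cluster-wide default.
    """
    labels = namespace.get("metadata", {}).get("labels", {})
    return {mode: labels.get(PSA_PREFIX + mode, "privileged") for mode in MODES}


ns = {"metadata": {"name": "production",
                   "labels": {PSA_PREFIX + "enforce": "restricted",
                              PSA_PREFIX + "warn": "baseline"}}}
print(psa_levels(ns))  # {'enforce': 'restricted', 'audit': 'privileged', 'warn': 'baseline'}
```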
### Secure pod configuration

Create pods that follow security best practices:
```yaml
# secure-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-app
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: secure-app
  template:
    metadata:
      labels:
        app: secure-app
    spec:
      # Non-root pod-level security context
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
        runAsGroup: 1001
        fsGroup: 1001
        seccompProfile:
          type: RuntimeDefault
      # Service account with minimal permissions
      serviceAccountName: secure-app-sa
      automountServiceAccountToken: false
      containers:
        - name: app
          image: registry.example.com/myapp:v1.0.0@sha256:abc123...
          # Container-level security context
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL
            privileged: false
          # Resource limits (prevent resource-exhaustion DoS)
          resources:
            limits:
              cpu: "500m"
              memory: "256Mi"
              ephemeral-storage: "100Mi"
            requests:
              cpu: "100m"
              memory: "128Mi"
          # Health probes
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 15
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          # Volume mounts for writable directories
          volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: cache
              mountPath: /app/.cache
          # Environment variables from secrets
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: database-url
      volumes:
        - name: tmp
          emptyDir:
            sizeLimit: 50Mi
        - name: cache
          emptyDir:
            sizeLimit: 100Mi
      # Topology spread constraints for high availability
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: secure-app
```
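Most of this hardening is per-container and mechanical to check. A sketch of a validator mirroring a few of the settings this deployment applies (illustrative only, not a real admission controller):

```python
def container_violations(container: dict) -> list[str]:
    """Check one container spec against a few hardening rules."""
    sc = container.get("securityContext", {})
    out = []
    if sc.get("allowPrivilegeEscalation") is not False:
        out.append("allowPrivilegeEscalation must be false")
    if "ALL" not in sc.get("capabilities", {}).get("drop", []):
        out.append("capabilities must drop ALL")
    if not sc.get("readOnlyRootFilesystem"):
        out.append("root filesystem should be read-only")
    if sc.get("privileged"):
        out.append("privileged containers are forbidden")
    # Resource limits prevent a single pod from starving the node.
    limits = container.get("resources", {}).get("limits", {})
    for res in ("cpu", "memory"):
        if res not in limits:
            out.append(f"missing {res} limit")
    return out


secure = {
    "securityContext": {"allowPrivilegeEscalation": False,
                        "readOnlyRootFilesystem": True,
                        "capabilities": {"drop": ["ALL"]}},
    "resources": {"limits": {"cpu": "500m", "memory": "256Mi"}},
}
print(container_violations(secure))  # []
print(container_violations({}))      # five findings
```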
## Network security

### Network policies

Implement zero-trust networking with network policies:
```yaml
# network-policies.yaml
# Deny all ingress and egress traffic by default
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
# Allow specific app-to-app communication
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app-traffic
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-server
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Allow from frontend pods
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
    # Allow from the ingress controller
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
  egress:
    # Allow to the database
    - to:
        - podSelector:
            matchLabels:
              app: postgres
      ports:
        - protocol: TCP
          port: 5432
    # Allow DNS
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
---
# Network policy for the database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: postgres
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api-server
      ports:
        - protocol: TCP
          port: 5432
```
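NetworkPolicy peers are resolved through label selection. A sketch of the `matchLabels` semantics (ignoring `namespaceSelector` and match expressions for brevity): every selector key/value must be present on the pod, and an empty selector matches everything.

```python
def selector_matches(match_labels: dict, pod_labels: dict) -> bool:
    """matchLabels semantics: every selector key/value must appear on the pod."""
    return all(pod_labels.get(k) == v for k, v in match_labels.items())


def allowed_ingress(policy_from: list[dict], pod_labels: dict) -> bool:
    """True if any 'from' peer's podSelector admits a pod with these labels."""
    return any(
        selector_matches(peer.get("podSelector", {}).get("matchLabels", {}), pod_labels)
        for peer in policy_from
    )


peers = [{"podSelector": {"matchLabels": {"app": "frontend"}}}]
print(allowed_ingress(peers, {"app": "frontend", "tier": "web"}))  # True
print(allowed_ingress(peers, {"app": "batch"}))                    # False
```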
## Admission Controllers

### OPA Gatekeeper policies

Enforce security policies with OPA Gatekeeper:
```yaml
# gatekeeper-constraints.yaml
# Constraint template: require non-root
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequirenonroot
spec:
  crd:
    spec:
      names:
        kind: K8sRequireNonRoot
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequirenonroot

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          not container.securityContext.runAsNonRoot
          msg := sprintf("Container %v must set runAsNonRoot to true", [container.name])
        }

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          container.securityContext.runAsUser == 0
          msg := sprintf("Container %v must not run as root (UID 0)", [container.name])
        }
---
# Apply the constraint
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequireNonRoot
metadata:
  name: require-non-root
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces:
      - production
      - staging
---
# Constraint template: require resource limits
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequireresourcelimits
spec:
  crd:
    spec:
      names:
        kind: K8sRequireResourceLimits
      validation:
        openAPIV3Schema:
          type: object
          properties:
            maxCpu:
              type: string
            maxMemory:
              type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequireresourcelimits

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          not container.resources.limits.cpu
          msg := sprintf("Container %v must specify CPU limits", [container.name])
        }

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          not container.resources.limits.memory
          msg := sprintf("Container %v must specify memory limits", [container.name])
        }
---
# Constraint template: require image digests
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequireimagedigest
spec:
  crd:
    spec:
      names:
        kind: K8sRequireImageDigest
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequireimagedigest

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          not contains(container.image, "@sha256:")
          msg := sprintf("Container %v must use image digest, not tag", [container.name])
        }
```

### Kyverno policies
Kyverno is an alternative policy engine:
```yaml
# kyverno-policies.yaml
# Require labels
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-team-label
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "The 'team' label is required"
        pattern:
          metadata:
            labels:
              team: "?*"
---
# Add a default security context
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-securitycontext
spec:
  rules:
    - name: add-security-context
      match:
        resources:
          kinds:
            - Pod
      mutate:
        patchStrategicMerge:
          spec:
            securityContext:
              runAsNonRoot: true
              seccompProfile:
                type: RuntimeDefault
---
# Restrict image registries
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries
spec:
  validationFailureAction: Enforce
  rules:
    - name: validate-registries
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Images must come from approved registries"
        pattern:
          spec:
            containers:
              - image: "registry.example.com/* | gcr.io/myproject/*"
```
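The `|` in the registry pattern is an alternation of wildcard patterns. The matching semantics can be sketched with Python's `fnmatch` (an illustration of the idea, not Kyverno's actual matcher):

```python
from fnmatch import fnmatch


def image_allowed(image: str, pattern: str) -> bool:
    """Match an image against a '|'-separated list of wildcard patterns."""
    return any(fnmatch(image, alt.strip()) for alt in pattern.split("|"))


ALLOWED = "registry.example.com/* | gcr.io/myproject/*"
print(image_allowed("registry.example.com/myapp:v1", ALLOWED))   # True
print(image_allowed("docker.io/library/nginx:latest", ALLOWED))  # False
```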
## Runtime security

### Runtime monitoring with Falco

Deploy Falco for runtime threat detection:
```yaml
# falco-rules.yaml
customRules:
  rules-custom.yaml: |-
    # Detect a shell spawned inside a container
    - rule: Shell Spawned in Container
      desc: Detect a shell being spawned inside a container
      condition: >
        spawned_process and
        container and
        shell_procs and
        not shell_allowed_parent_processes
      output: >
        Shell spawned in a container
        (user=%user.name container=%container.name shell=%proc.name
        parent=%proc.pname cmdline=%proc.cmdline container_id=%container.id
        image=%container.image.repository)
      priority: WARNING
      tags: [container, shell, mitre_execution]

    # Detect access to sensitive files
    - rule: Sensitive File Access
      desc: Detect access to sensitive files
      condition: >
        open_read and
        container and
        (fd.name startswith /etc/shadow or
         fd.name startswith /etc/passwd or
         fd.name startswith /proc/self/environ)
      output: >
        Sensitive file accessed
        (user=%user.name file=%fd.name container=%container.name
        image=%container.image.repository)
      priority: CRITICAL
      tags: [container, filesystem, mitre_credential_access]

    # Detect crypto mining
    - rule: Crypto Mining Activity
      desc: Detect potential crypto mining
      condition: >
        spawned_process and
        container and
        (proc.name in (xmrig, minerd, cpuminer) or
         proc.cmdline contains "stratum+tcp" or
         proc.cmdline contains "pool.minexmr")
      output: >
        Crypto mining detected
        (user=%user.name process=%proc.name container=%container.name
        cmdline=%proc.cmdline image=%container.image.repository)
      priority: CRITICAL
      tags: [container, cryptomining, mitre_resource_hijacking]
```
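Falco can emit these alerts as JSON (`json_output: true`). A sketch of a consumer that escalates only events at or above a given priority (the priority names follow Falco's levels; the escalation rule itself is hypothetical):

```python
import json

# Falco priorities, most to least severe.
PRIORITY_ORDER = ["EMERGENCY", "ALERT", "CRITICAL", "ERROR",
                  "WARNING", "NOTICE", "INFORMATIONAL", "DEBUG"]


def needs_escalation(event_json: str, threshold: str = "CRITICAL") -> bool:
    """True if the event's priority is at or above the threshold."""
    event = json.loads(event_json)
    prio = event.get("priority", "DEBUG").upper()
    return PRIORITY_ORDER.index(prio) <= PRIORITY_ORDER.index(threshold.upper())


alert = '{"rule": "Sensitive File Access", "priority": "Critical"}'
info = '{"rule": "Shell Spawned in Container", "priority": "Warning"}'
print(needs_escalation(alert))  # True
print(needs_escalation(info))   # False
```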
### Security monitoring with eBPF

Deploy Tetragon for kernel-level security monitoring:
```yaml
# tetragon-policy.yaml
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: container-security-events
spec:
  kprobes:
    - call: "security_file_open"
      syscall: false
      args:
        - index: 0
          type: "file"
      selectors:
        - matchArgs:
            - index: 0
              operator: "Prefix"
              values:
                - "/etc/shadow"
                - "/etc/passwd"
                - "/root/.ssh"
          action: Audit
---
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: network-connections
spec:
  kprobes:
    - call: "tcp_connect"
      syscall: false
      args:
        - index: 0
          type: "sock"
      selectors:
        - matchNamespaces:
            - namespace: production
              operator: In
          action: Audit
```

## Secret management
### External Secrets Operator

Sync secrets from external vaults:
```yaml
# external-secrets.yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-backend
  namespace: production
spec:
  provider:
    vault:
      server: "https://vault.example.com"
      path: "secret"
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"
          role: "production-app"
          serviceAccountRef:
            name: "vault-auth"
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secrets
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: app-secrets
    creationPolicy: Owner
  data:
    - secretKey: database-url
      remoteRef:
        key: production/database
        property: connection_string
    - secretKey: api-key
      remoteRef:
        key: production/api
        property: key
```

## Security scanning pipeline
### Complete security pipeline
```python
#!/usr/bin/env python3
# security_pipeline.py
import json
import subprocess
import sys
from dataclasses import dataclass
from typing import List


@dataclass
class ScanResult:
    scanner: str
    passed: bool
    critical: int
    high: int
    medium: int
    findings: List[dict]


def run_trivy_scan(image: str) -> ScanResult:
    """Run a Trivy vulnerability scan."""
    result = subprocess.run(
        ["trivy", "image", "--format", "json", "--quiet", image],
        capture_output=True,
        text=True,
    )
    data = json.loads(result.stdout)

    critical = high = medium = 0
    findings = []
    for result_item in data.get("Results", []):
        for vuln in result_item.get("Vulnerabilities", []):
            severity = vuln.get("Severity", "UNKNOWN")
            if severity == "CRITICAL":
                critical += 1
            elif severity == "HIGH":
                high += 1
            elif severity == "MEDIUM":
                medium += 1
            findings.append({
                "id": vuln.get("VulnerabilityID"),
                "severity": severity,
                "package": vuln.get("PkgName"),
                "title": vuln.get("Title"),
            })

    passed = critical == 0 and high == 0
    return ScanResult(
        scanner="trivy",
        passed=passed,
        critical=critical,
        high=high,
        medium=medium,
        findings=findings,
    )


def run_kubesec_scan(manifest_path: str) -> ScanResult:
    """Run a kubesec scan of Kubernetes manifests."""
    result = subprocess.run(
        ["kubesec", "scan", manifest_path],
        capture_output=True,
        text=True,
    )
    data = json.loads(result.stdout)

    score = data[0].get("score", 0) if data else 0
    critical_findings = data[0].get("scoring", {}).get("critical", []) if data else []
    findings = [{"type": "critical", "rule": f} for f in critical_findings]

    return ScanResult(
        scanner="kubesec",
        passed=score >= 0 and len(critical_findings) == 0,
        critical=len(critical_findings),
        high=0,
        medium=0,
        findings=findings,
    )


def main():
    image = sys.argv[1] if len(sys.argv) > 1 else "myapp:latest"
    manifest = sys.argv[2] if len(sys.argv) > 2 else "deployment.yaml"

    print(f"Running security scans for {image}")

    # Run the scans
    trivy_result = run_trivy_scan(image)
    kubesec_result = run_kubesec_scan(manifest)

    # Report the results
    all_passed = trivy_result.passed and kubesec_result.passed

    print("\nTrivy results:")
    print(f"  Critical: {trivy_result.critical}")
    print(f"  High: {trivy_result.high}")
    print(f"  Status: {'PASSED' if trivy_result.passed else 'FAILED'}")

    print("\nKubesec results:")
    print(f"  Critical: {kubesec_result.critical}")
    print(f"  Status: {'PASSED' if kubesec_result.passed else 'FAILED'}")

    print(f"\n{'All security checks passed!' if all_passed else 'Security checks failed!'}")
    sys.exit(0 if all_passed else 1)


if __name__ == "__main__":
    main()
```

## Summary
Container security in Kubernetes requires multiple layers of protection:

- At build time: scan images, use minimal base images, sign artifacts
- At deployment: admission controllers enforce policies
- At runtime: monitor for anomalies and threats
- Network: zero trust via network policies
- Secrets: external management with rotation

Implementing these controls as part of the DevSecOps pipeline ensures that security is automated and consistent across all environments.