Posts Tagged ‘Docker’

How to Backup and Restore All Docker Images with Gzip Compression

TL;DR:
To back up all your Docker images safely, use docker save to export them and gzip to compress them.
Then, when you need to restore, use docker load to re-import everything.
Below you’ll find production-ready Bash scripts for automated backup and restore — complete with compression and error handling.

📦 Why You Need This

Whether you’re upgrading your system, cleaning your Docker environment, or migrating to another host, exporting your local images is crucial. Docker’s built-in commands make this possible, but using them manually for dozens of images can be tedious and space-inefficient.
This article provides automated scripts that will:

  • Backup every Docker image individually,
  • Compress each file with gzip for storage efficiency,
  • Restore all images automatically with a single command.

🧱 Backup Script (backup_docker_images.sh)

The script below exports all Docker images, one by one, into compressed .tar.gz files.
Each image gets its own archive, named after its repository and tag.

#!/bin/bash
# --------------------------------------------
# Backup all Docker images into compressed .tar.gz files
# --------------------------------------------

# Fail the save | gzip pipeline if docker save fails, not only gzip
set -o pipefail

BACKUP_DIR=~/docker-backup
mkdir -p "$BACKUP_DIR"
cd "$BACKUP_DIR" || exit 1

echo "📦 Starting Docker image backup..."
echo "Backup directory: $BACKUP_DIR"
echo

# Skip untagged <none> images, which docker save cannot reference by name
for image in $(docker image ls --format "{{.Repository}}:{{.Tag}}" | grep -v '<none>'); do
  # sanitize file name
  safe_name=$(echo "$image" | tr '/:' '__')
  outfile="${safe_name}.tar"
  gzfile="${outfile}.gz"

  echo "🟢 Saving $image → $gzfile"

  # Save and compress directly (no uncompressed tar left behind)
  docker save "$image" | gzip -c > "$gzfile"

  if [ $? -eq 0 ]; then
    echo "✅ Successfully saved $image"
  else
    echo "❌ Error saving $image"
  fi
  echo
done

echo "🎉 Backup complete!"
ls -lh "$BACKUP_DIR"/*.gz

💡 What This Script Does

  • Creates a ~/docker-backup directory automatically.
  • Iterates over every local Docker image.
  • Uses docker save piped to gzip for direct compression.
  • Prints friendly success and error messages.

Result: You’ll get a set of compressed files like:

jonathan-tomcat__latest.tar.gz
jonathan-mysql__latest.tar.gz
jonathan-grafana__latest.tar.gz
...

🔁 Restore Script (restore_docker_images.sh)

This companion script automatically restores every compressed image. It detects both .tar.gz and .tar files in the backup directory, decompresses them, and loads them back into Docker.

#!/bin/bash
# --------------------------------------------
# Restore all Docker images from .tar.gz or .tar files
# --------------------------------------------

BACKUP_DIR=~/docker-backup
cd "$BACKUP_DIR" || { echo "❌ Backup directory not found: $BACKUP_DIR"; exit 1; }

# Expand unmatched globs to nothing instead of leaving a literal "*.tar" pattern
shopt -s nullglob
files=(*.tar.gz *.tar)
if [ ${#files[@]} -eq 0 ]; then
  echo "No backup files found."
  exit 0
fi

echo "🚀 Starting Docker image restore from $BACKUP_DIR"
echo

for file in "${files[@]}"; do
  echo "🟡 Loading $file..."
  if [[ "$file" == *.gz ]]; then
    gunzip -c "$file" | docker load
  else
    docker load -i "$file"
  fi

  if [ $? -eq 0 ]; then
    echo "✅ Successfully loaded $file"
  else
    echo "❌ Error loading $file"
  fi
  echo
done

echo "🎉 Restore complete!"
docker image ls

💡 How It Works

  • Automatically detects .tar.gz or .tar backups.
  • Decompresses each one and loads it into Docker.
  • Prints progress updates as it restores each image.

After running it, your local Docker environment will look exactly like before — same repositories, tags, and image IDs.
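
If you want to verify this, list repositories, tags, and image IDs after the restore and compare them with a listing taken before the backup:

docker image ls --format 'table {{.Repository}}\t{{.Tag}}\t{{.ID}}'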

⚙️ How to Use

1️⃣ Backup All Docker Images

chmod +x backup_docker_images.sh
./backup_docker_images.sh

You’ll see a live summary of each image as it’s saved and compressed.

2️⃣ Restore Later (After a Prune or Reinstall)

chmod +x restore_docker_images.sh
./restore_docker_images.sh

Docker will reload each image automatically, maintaining all original metadata.

💾 Bonus: Cleaning and Rebuilding Your Docker Environment

If you want to clear all Docker data before restoring your images, run:

docker system prune -a --volumes

⚠️ Warning: This deletes all containers, images, networks, and volumes.
Afterward, simply run the restore script to bring your images back.

🧠 Why Use Gzip?

Docker image archives are often large — several gigabytes each. Compressing them with gzip:

  • Saves 30–70% of disk space,
  • Speeds up transfers (especially over SSH),
  • Keeps the backups cleaner and easier to manage.

You can still restore them directly with gunzip -c file.tar.gz | docker load — no decompression step required.
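
If gzip itself becomes the bottleneck on multi-gigabyte images, pigz (a parallel gzip implementation) can be swapped in, assuming it is installed (for example via sudo apt install pigz); the resulting archives are still ordinary .gz files:

# Compress on all CPU cores with pigz (image name is a placeholder)
docker save my-image:latest | pigz -c > my-image__latest.tar.gz

# Restore works the same way
pigz -dc my-image__latest.tar.gz | docker load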

✅ Summary Table

Task                             Command                             Description
Backup all images (compressed)   ./backup_docker_images.sh           Creates one .tar.gz per image
Restore all images               ./restore_docker_images.sh          Loads back each saved archive
Prune all Docker data            docker system prune -a --volumes    Clears everything before restore

🚀 Conclusion

Backing up your Docker images is a crucial part of any development or CI/CD workflow. With these two scripts, you can protect your local Docker environment from accidental loss, disk cleanup, or reinstallation.
By combining docker save and gzip, you ensure both efficiency and recoverability — making your Docker workstation fully portable and disaster-proof.

Keep calm and backup your containers 🐳💾

Fixing the “Failed to Setup IP tables” Error in Docker on WSL2

TL;DR:
If you see this error when running Docker on Windows Subsystem for Linux (WSL2):

ERROR: Failed to Setup IP tables: Unable to enable SKIP DNAT rule:
(iptables failed: iptables --wait -t nat -I DOCKER -i br-xxxx -j RETURN:
iptables: No chain/target/match by that name. (exit status 1))

👉 The cause is usually that your system is using the nftables backend for iptables, but Docker expects the legacy backend.
Switching iptables to legacy mode and restarting Docker fixes it:

sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy

Then restart Docker and verify:

sudo iptables -t nat -L

You should now see the DOCKER chain listed. ✅


🔍 Understanding the Problem

When Docker starts, it configures internal network bridges using iptables.
If it cannot find or manipulate its DOCKER chain, you’ll see this “Failed to Setup IP tables” error.
This problem often occurs in WSL2 environments, where the Linux kernel uses the newer nftables system by default, while Docker still relies on the legacy iptables interface.

In short:

  • iptables-nft (default in modern WSL2) ≠ iptables-legacy (expected by Docker)
  • The mismatch causes Docker to fail to configure NAT and bridge rules

⚙️ Step-by-Step Fix

1️⃣ Check which iptables backend you’re using

sudo iptables --version
sudo update-alternatives --display iptables

If you see something like iptables v1.8.x (nf_tables), you’re using nftables.

2️⃣ Switch to legacy mode

sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy

Confirm the change:

sudo iptables --version

Now it should say (legacy).

3️⃣ Restart Docker

If you’re using Docker Desktop for Windows:

wsl --shutdown
net stop com.docker.service
net start com.docker.service

or simply quit and reopen Docker Desktop.

If you’re running Docker Engine inside WSL:

sudo service docker restart

4️⃣ Verify the fix

sudo iptables -t nat -L

You should now see the DOCKER chain among the NAT rules:

Chain DOCKER (2 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere

If it appears — congratulations 🎉 — your Docker networking is fixed!


🧠 Extra Troubleshooting Tips

  • If the error persists, flush and rebuild the NAT table:
    sudo service docker stop
    sudo iptables -t nat -F
    sudo iptables -t nat -X
    sudo service docker start
    
  • Check kernel modules (for completeness):
    lsmod | grep iptable
    sudo modprobe iptable_nat
    
  • Keep Docker Desktop and WSL2 kernel up to date — many network issues are fixed in newer builds.

✅ Summary

Step             Command                                Goal
Check backend    sudo iptables --version                Identify nft vs legacy
Switch mode      update-alternatives --set ... legacy   Use legacy backend
Restart Docker   sudo service docker restart            Reload NAT rules
Verify           sudo iptables -t nat -L                Confirm DOCKER chain exists

🚀 Conclusion

This “Failed to Setup IP tables” issue is one of the most frequent Docker-on-WSL2 networking errors.
The root cause lies in the nftables vs legacy backend mismatch — a subtle but critical difference in Linux networking subsystems.
Once you switch to the legacy backend and restart Docker, everything should work smoothly again.

By keeping your WSL2 kernel, Docker Engine, and iptables configuration aligned, you can prevent these issues and maintain a stable developer environment on Windows.

Happy containerizing! 🐋

How to Install an Old Version of Docker on a Recent Debian: A Case Study with Docker 20.10.9 on Debian 13 (Trixie)

In the rapidly evolving landscape of containerization technology, Docker remains a cornerstone for developers and system administrators. However, specific use cases—such as legacy application compatibility, testing, or reproducing historical environments—may necessitate installing an older version of Docker on a modern operating system. This guide provides a detailed walkthrough for installing Docker Engine 20.10.9, a release from September 2021, on Debian 13 (codename “Trixie”), the testing branch of Debian as of September 2025. While the steps can be adapted for other versions or Debian releases, this case study addresses the unique challenges of downgrading Docker on a contemporary distribution.

Introduction: The Challenges and Rationale for Installing an Older Docker Version

Installing an outdated Docker version like 20.10.9 on a recent Debian release such as Trixie is fraught with challenges due to Docker’s evolution and Debian’s forward-looking package management. Docker transitioned from Calendar Versioning (CalVer, e.g., 20.10) to Semantic Versioning (SemVer, starting with 23.0 in May 2023), introducing significant updates in security, features, and dependencies. This creates several obstacles:

  • Package Availability and Compatibility: Docker’s official APT repository prioritizes current versions (e.g., 28.x in 2025) for supported Debian releases. Older versions like 20.10.9 are often archived and unavailable via apt for newer codenames like Trixie, requiring manual downloads of .deb packages from a compatible release (e.g., bullseye for Debian 11). This can lead to dependency mismatches or installation failures.
  • Security and Support Risks: Version 20.10.9 is end-of-life (EOL) since mid-2023, lacking official security patches for known vulnerabilities (e.g., CVEs in networking or containerd). This poses risks for production environments. Additionally, compatibility issues may arise with modern WSL2 networking in Windows Subsystem for Linux (WSL) environments.
  • Dependency Conflicts: Older Docker versions rely on specific versions of components like containerd.io, which may conflict with newer libraries on Debian 13, potentially causing installation or runtime errors.
  • Docker Compose Compatibility: Modern Docker integrates Compose as a plugin (docker compose), but older setups require the standalone docker-compose (v1, with hyphen), necessitating a separate binary installation.

Why pursue this downgrade? Legacy applications, specific toolchains, or compatibility with older Dockerfiles may require it—such as maintaining a telemetry stack with Elasticsearch, Kibana, and APM Server in a controlled environment. However, for production or security-sensitive deployments, upgrading to the latest Docker version (e.g., 28.3.3) is strongly recommended. This guide assumes a WSL/Debian Trixie setup but is applicable to native Debian installations, with precautions for data loss and system stability.

Prerequisites

Before proceeding, ensure the following:

  • A running Debian 13 (Trixie) system (verify with lsb_release -cs).
  • Administrative access (sudo privileges).
  • Backup of critical Docker data (e.g., list volumes with docker volume ls and archive their contents; see the sketch after this list).
  • Internet access for downloading packages.
  • Awareness of risks: Manual package installation bypasses APT’s dependency resolution, and EOL versions lack security updates.
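
For the backup point above, a common pattern is to archive a volume’s contents through a throwaway container before pruning anything. A minimal sketch, where my_volume is a placeholder name:

# List volumes, then archive one of them into the current directory
docker volume ls
docker run --rm -v my_volume:/data -v "$PWD":/backup alpine \
  tar czf /backup/my_volume.tar.gz -C /data .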

Step 1: Prune All Local Docker Resources

Why This Step is Needed

Before uninstalling the current Docker version (e.g., 28.3.3), pruning all local resources—images, containers, volumes, and networks—ensures a clean slate. This prevents conflicts from residual data, reclaims disk space, and prepares the system for the downgrade. Since pruning is irreversible, backing up critical data (e.g., telemetry stack volumes) is essential.

What It Does

The docker system prune command removes all unused Docker artifacts, including stopped containers, unused images, volumes, and networks, ensuring no remnants interfere with the new installation.

Commands


# Stop all running containers (if any)
docker stop $(docker ps -q) 2>/dev/null || true

# Prune everything: images, containers, volumes, networks, and build cache
docker system prune -a --volumes -f
    

Verification

Run these commands to confirm cleanup:


docker images -a  # Should list no images
docker volume ls  # Should be empty
docker network ls # Should show only defaults (bridge, host, none)
docker ps -a      # Should show no containers
    

If permission errors occur, verify your user is in the docker group (sudo usermod -aG docker $USER and log out/in) or use sudo.

Step 2: Remove the Current Docker Installation

Why This Step is Needed

Removing the existing Docker version (e.g., 28.3.3) eliminates potential conflicts in packages, configurations, or runtime components. Residual files or newer dependencies could cause the older 20.10.9 installation to fail or behave unpredictably.

What It Does

This step stops Docker services, purges installed packages, deletes data directories and configurations, and removes the Docker APT repository to prevent accidental upgrades to newer versions.

Commands


# Stop Docker services
sudo systemctl stop docker
sudo systemctl stop docker.socket

# Uninstall Docker packages
sudo apt-get purge -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-ce-rootless-extras
sudo apt-get autoremove -y --purge

# Remove Docker data and configs
sudo rm -rf /var/lib/docker
sudo rm -rf /var/lib/containerd
sudo rm -rf /etc/docker
sudo rm -f /etc/apparmor.d/docker
sudo rm -f /var/run/docker.sock
sudo groupdel docker 2>/dev/null || true

# Remove Docker repository
sudo rm -f /etc/apt/sources.list.d/docker.list
sudo rm -f /etc/apt/keyrings/docker.gpg
sudo apt-get update

# Verify removal
docker --version  # Should show "command not found"
    

Reboot WSL if needed (in Windows PowerShell: wsl --shutdown).

Step 3: Install Docker Engine 20.10.9 via Manual .deb Packages

Why This Step is Needed

Debian Trixie’s Docker repository does not include 20.10.9, as it is an EOL version from 2021, unsupported since mid-2023. The standard apt installation fails due to version mismatches, so we manually download and install .deb packages from Docker’s archive for Debian 11 (bullseye), which is compatible with Trixie. This approach bypasses repository limitations but requires careful dependency management.

What It Does

The commands download specific .deb files for Docker CE, CLI, and containerd.io, install them using dpkg, resolve dependencies with apt, and lock the version to prevent upgrades. The process also ensures Docker starts correctly and is accessible without root privileges.

Commands


# Install prerequisites
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release

# Create directory for downloads
mkdir -p ~/docker-install
cd ~/docker-install

# Download .deb packages for 20.10.9 (bullseye, amd64)
curl -O https://download.docker.com/linux/debian/dists/bullseye/pool/stable/amd64/containerd.io_1.4.13-1_amd64.deb
curl -O https://download.docker.com/linux/debian/dists/bullseye/pool/stable/amd64/docker-ce-cli_20.10.9~3-0~debian-bullseye_amd64.deb
curl -O https://download.docker.com/linux/debian/dists/bullseye/pool/stable/amd64/docker-ce_20.10.9~3-0~debian-bullseye_amd64.deb

# Verify file sizes (should be MBs, not bytes)
ls -lh *.deb

# Install .deb packages
sudo dpkg -i containerd.io_1.4.13-1_amd64.deb
sudo dpkg -i docker-ce-cli_20.10.9~3-0~debian-bullseye_amd64.deb
sudo dpkg -i docker-ce_20.10.9~3-0~debian-bullseye_amd64.deb

# Fix any dependency issues
sudo apt-get install -f

# Hold versions to prevent upgrades
sudo apt-mark hold docker-ce docker-ce-cli containerd.io

# Start and enable Docker
sudo systemctl start docker
sudo systemctl enable docker

# Add user to docker group (log out/in after)
sudo usermod -aG docker $USER

# Verify installation
docker --version  # Should show "Docker version 20.10.9, build ..."
docker run --rm hello-world  # Test pull and run

# Clean up downloaded files (move out of the directory before deleting it)
cd ~
rm -rf ~/docker-install
    

Note: If the curl URLs return 404 errors, browse Docker’s bullseye pool to find the exact filenames (e.g., try docker-ce_20.10.10~3-0~debian-bullseye_amd64.deb if 20.10.9 is unavailable). Use containerd.io_1.4.11-1_amd64.deb if 1.4.13 fails.

Step 4: Install Standalone docker-compose (v1.29.2)

Why This Step is Needed

Modern Docker includes Compose as a plugin (docker compose, v2), but legacy setups like 20.10.9 often require the standalone docker-compose (v1, with hyphen) for compatibility with older workflows or scripts. This binary ensures the hyphenated command is available.

What It Does

Downloads the v1.29.2 binary (the last v1 release, compatible with 20.10.9) from GitHub, installs it to /usr/local/bin, and makes it executable.

Commands


sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Verify
docker-compose --version  # Should show "docker-compose version 1.29.2, build ..."
    

Step 5: Post-Installation and Verification

Why This Step is Needed

Post-installation steps ensure Docker and docker-compose function correctly, especially in a WSL environment where networking can be temperamental. Verifying the setup confirms compatibility with your telemetry stack (e.g., Elasticsearch, Kibana, APM Server).

What It Does

Restarts WSL to apply changes, tests Docker and docker-compose, and rebuilds your telemetry stack. It also checks network accessibility from Windows.

Commands


# Restart WSL (in Windows PowerShell)
wsl --shutdown

# Reopen WSL terminal and verify
docker --version  # Should show "Docker version 20.10.9, build ..."
docker-compose --version  # Should show "docker-compose version 1.29.2, build ..."
docker run --rm hello-world

Troubleshooting

If issues arise, consider these steps:

  • Download Failures: Check Docker’s bullseye pool for correct filenames. Use curl -I <URL> to verify HTTP status (200 OK).
  • Dependency Errors: Run sudo apt-get install -f to resolve.
  • Docker Not Starting: Check sudo systemctl status docker or journalctl -u docker.
  • WSL Networking: Update WSL (wsl --update) and restart Docker (sudo service docker restart).
  • Share Outputs: Provide ls -lh ~/docker-install/*.deb, dpkg -l | grep docker, or error messages for debugging.

Conclusion

Installing an older version of Docker, such as 20.10.9, on a recent Debian release like Trixie (Debian 13) is a complex but achievable task, requiring careful management of package dependencies and manual installation of archived .deb files. By pruning existing Docker resources, removing the current installation, and installing bullseye-compatible packages, you can successfully downgrade to meet legacy requirements. The addition of the standalone docker-compose ensures compatibility with older workflows.

However, this approach comes with caveats: version 20.10.9 is end-of-life, lacking security updates, and may face compatibility issues with modern tools or WSL2 networking. For production environments, consider using the latest Docker version (e.g., 28.3.3 as of September 2025) to benefit from ongoing support and enhanced features. Always test thoroughly after installation, and maintain backups to mitigate data loss risks. If you encounter issues or need to adapt this process for other versions, consult Docker’s official repository or community forums for additional resources.

Understanding Kubernetes for Docker and Docker Compose Users

TL;DR

Kubernetes may look like an overly complicated version of Docker Compose, but it operates on a different level entirely. Where Compose excels at quick, local orchestration of containers, Kubernetes is a robust, distributed platform designed for automated scaling, fault-tolerance, and production-grade deployments across multi-node clusters. This article provides a comprehensive comparison and shows how ArgoCD enhances GitOps-based Kubernetes workflows.


Docker Compose vs Kubernetes – Similarities and First Impressions

At a high level, Docker Compose and Kubernetes share similar concepts: containers, services, configuration, and volumes. This often leads to the assumption that Kubernetes is just a verbose, harder-to-write Compose replacement. However, Kubernetes is more than a runtime. It’s a control plane, a state manager, and a policy enforcer.

Concept                 Docker Compose                                      Kubernetes
Service definition      docker-compose.yml                                  Deployment, Service, etc. YAML manifests
Networking              Shared bridge network, service discovery by name    DNS, internal IPs, ClusterIP, NodePort, Ingress
Volume management       volumes:                                            PersistentVolume, PersistentVolumeClaim, StorageClass
Secrets and configs     .env, environment:                                  ConfigMap, Secret, ServiceAccount
Dependency management   depends_on                                          initContainers, readinessProbe, livenessProbe
Scaling                 Manual (scale flag or duplicate services)           Declarative (replicas), automatic via HPA

Real-Life Use Cases – Docker Compose vs Kubernetes Examples

Tomcat + Oracle + MongoDB + NGINX Stack

Docker Compose


version: '3'
services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    depends_on:
      - tomcat

  tomcat:
    image: tomcat:9
    ports:
      - "8080:8080"
    environment:
      DB_URL: jdbc:oracle:thin:@oracle:1521:orcl

  oracle:
    image: oracle/database:19.3.0-ee
    environment:
      ORACLE_PWD: secretpass
    volumes:
      - oracle-data:/opt/oracle/oradata

  mongo:
    image: mongo:5
    volumes:
      - mongo-data:/data/db

volumes:
  oracle-data:
  mongo-data:

Kubernetes Equivalent

  • Each service becomes a Deployment and a Service.
  • Environment variables and passwords are stored in Secrets.
  • Volumes are defined with PVC and StorageClass.

apiVersion: v1
kind: Secret
metadata:
  name: oracle-secret
type: Opaque
data:
  ORACLE_PWD: c2VjcmV0cGFzcw==

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat:9
        ports:
        - containerPort: 8080
        env:
        - name: DB_URL
          value: jdbc:oracle:thin:@oracle:1521:orcl
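
The manifest above covers the Secret and the Deployment; the Service objects (and a PersistentVolumeClaim for the Oracle data) are declared separately. As a quick sketch, the same objects can also be created imperatively with kubectl generators (names and values are taken from the Compose example; everything else is assumed):

kubectl create secret generic oracle-secret --from-literal=ORACLE_PWD=secretpass
kubectl create deployment tomcat --image=tomcat:9 --replicas=2
kubectl expose deployment tomcat --port=8080 --target-port=8080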

NodeJS + Express + MySQL + NGINX

Docker Compose


services:
  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: rootpass
    volumes:
      - mysql-data:/var/lib/mysql

  api:
    build: ./api
    environment:
      DB_USER: root
      DB_PASS: rootpass
      DB_HOST: mysql

  nginx:
    image: nginx:latest
    ports:
      - "80:80"

Kubernetes Equivalent


apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  MYSQL_ROOT_PASSWORD: cm9vdHBhc3M=
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: node-app:latest
        env:
        - name: DB_PASS
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: MYSQL_ROOT_PASSWORD

⚙️ Docker Compose vs kubectl – Command Mapping

Task                   Docker Compose                      Kubernetes
Start services         docker-compose up -d                kubectl apply -f .
Stop/cleanup           docker-compose down                 kubectl delete -f .
View logs              docker-compose logs -f              kubectl logs -f pod-name
Scale a service        docker-compose up --scale web=3     kubectl scale deployment web --replicas=3
Shell into container   docker-compose exec app sh          kubectl exec -it pod-name -- /bin/sh

ArgoCD – GitOps Made Practical

ArgoCD is a Kubernetes-native continuous deployment tool. It uses Git as the single source of truth, enabling declarative infrastructure and GitOps workflows.

✨ Key Features

  • Declarative sync of Git and cluster state
  • Drift detection and automatic repair
  • Multi-environment and multi-namespace support
  • CLI and Web UI available

Example ArgoCD Commands


argocd login argocd.myorg.com
argocd app create my-app \
  --repo https://github.com/org/app.git \
  --path k8s \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace production

argocd app sync my-app
argocd app get my-app
argocd app diff my-app

Sample ArgoCD Application Manifest


apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-api
spec:
  destination:
    namespace: default
    server: https://kubernetes.default.svc
  project: default
  source:
    path: k8s/app
    repoURL: https://github.com/org/api.git
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

✅ Conclusion

Docker Compose is perfect for prototyping and local dev. Kubernetes is built for cloud-native workloads, distributed systems, and high availability. ArgoCD makes declarative, Git-based continuous deployment simple, scalable, and observable.

Essential Security Considerations for Docker Networking

Having recently absorbed my esteemed colleague Danish Javed’s insightful piece on Docker Networking (https://www.linkedin.com/pulse/docker-networking-danish-javed-rzgyf) – a truly worthwhile read for anyone navigating the container landscape – I felt compelled to further explore a critical facet: the intricate security considerations surrounding Docker networking. While Danish laid a solid foundation, let’s delve deeper into how we can fortify our containerized environments at the network level.

Beyond the Walls: Understanding Default Docker Network Isolation

As Danish aptly described, Docker’s inherent isolation, primarily achieved through Linux network namespaces, provides a foundational layer of security. Each container operates within its own isolated network stack, preventing direct port conflicts and limiting immediate interference. Think of it as each container having its own virtual network interface card and routing table within the host’s kernel.

However, it’s crucial to recognize that this isolation is a boundary, not an impenetrable fortress. Containers residing on the *same* Docker network (especially the default bridge network) can often communicate freely. This unrestricted lateral movement poses a significant risk. If one container is compromised, an attacker could potentially pivot and gain access to other services within the same network segment.
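
A quick way to see this exposure in practice is to list which containers are attached to the default bridge network; any two names in the output can generally reach each other directly:

docker network inspect bridge --format '{{range .Containers}}{{.Name}} {{end}}'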

Architecting for Security: Leveraging Custom Networks for Granular Control

The first crucial step towards enhanced security is strategically utilizing **custom bridge networks**. Instead of relying solely on the default bridge, design your deployments with network segmentation in mind. Group logically related containers that *need* to communicate on dedicated networks.

Scenario: Microservices Deployment

Consider a microservices architecture with a front-end service, an authentication service, a user data service, and a payment processing service. We can create distinct networks:


docker network create frontend-network
docker network create backend-network
docker network create payment-network
        

Then, we connect the relevant containers:


docker run --name frontend --network frontend-network -p 80:80 frontend-image
docker run --name auth --network backend-network -p 8081:8080 auth-image
docker run --name users --network backend-network -p 8082:8080 users-image
docker run --name payment --network payment-network -p 8083:8080 payment-image
# auth and users already share backend-network; attach them only to the
# additional networks they must reach
docker network connect frontend-network auth
docker network connect frontend-network users
docker network connect payment-network auth
        

In this simplified example, the frontend can communicate with auth and users, which can also communicate internally on the backend-network. The highly sensitive payment service is isolated on its own network, only allowing necessary communication (e.g., with the auth service for verification).

The Fine-Grained Firewall: Implementing Network Policies with CNI Plugins

For truly granular control over inter-container traffic, **Docker Network Policies**, facilitated by CNI (Container Network Interface) plugins like Calico, Weave Net, Cilium, and others, are essential. These policies act as a micro-firewall at the container level, allowing you to define precise rules for ingress (incoming) and egress (outgoing) traffic based on labels, network segments, and port protocols.

Important: Network Policies are not a built-in feature of the default Docker networking stack. You need to install and configure a compatible CNI plugin to leverage them.

Conceptual Network Policy Example (Calico):

Let’s say we have our web-app (label: app=web) and database (label: app=db) on a backend-network. We want to allow only the web-app to access the database on its PostgreSQL port (5432).


apiVersion: networking.k8s.io/v1 # (Calico often aligns with Kubernetes NetworkPolicy API)
kind: NetworkPolicy
metadata:
  name: allow-web-to-db
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web
    ports:
    - protocol: TCP
      port: 5432
  policyTypes:
  - Ingress
        

This (simplified) Calico NetworkPolicy targets pods (in a Kubernetes context, but the concept applies to labeled Docker containers with Calico) labeled app=db and allows ingress traffic only from pods labeled app=web on TCP port 5432. All other ingress traffic to the database would be denied.

Essential Best Practices for a Secure Docker Network

Beyond network segmentation and policies, a holistic approach to Docker network security involves several key best practices:

  • Apply the Principle of Least Privilege Network Access: Just as you would with user permissions, grant containers only the necessary network connections required for their specific function. Avoid broad, unrestricted access.
  • Isolate Sensitive Workloads on Dedicated, Strictly Controlled Networks: Databases, secret management tools, and other critical components should reside on isolated networks with rigorously defined and enforced network policies.
  • Internal Port Obfuscation: While exposing standard ports externally might be necessary, consider using non-default ports for internal communication between services on the same network. This adds a minor layer of defense against casual scanning.
  • Exercise Extreme Caution with --network host: This mode bypasses all container network isolation, directly exposing the container’s network interfaces on the host. It should only be used in very specific, well-understood scenarios with significant security implications considered. Often, there are better alternatives.
  • Implement Regular Network Configuration Audits: Periodically review your Docker network configurations, custom networks, and network policies (if implemented) to ensure they still align with your security posture and haven’t been inadvertently misconfigured.
  • Harden Host Firewalls: Regardless of your internal Docker network configurations, ensure your host machine’s firewall (e.g., iptables, ufw) is properly configured to control all inbound and outbound traffic to the host and any exposed container ports (see the sketch after this list).
  • Consider Network Segmentation Beyond Docker: For larger and more complex environments, explore network segmentation at the infrastructure level (e.g., using VLANs or security groups in cloud environments) to further isolate groups of Docker hosts or nodes.
  • Maintain Up-to-Date Docker Engine and CNI Plugins: Regularly update your Docker engine and any installed CNI plugins to benefit from the latest security patches and feature enhancements. Vulnerabilities in these core components can have significant security implications.
  • Implement Robust Network Monitoring and Logging: Monitor network traffic within your Docker environment for suspicious patterns or unauthorized connection attempts. Centralized logging of network events can be invaluable for security analysis and incident response.
  • Secure Service Discovery Mechanisms: If you’re using service discovery tools within your Docker environment, ensure they are properly secured to prevent unauthorized registration or discovery of sensitive services.
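
As a hedged illustration of the host-firewall point above (assuming ufw is installed), a default-deny inbound policy with explicit allowances looks like this. Note that ports published with -p are programmed by Docker directly into iptables and may bypass ufw, so verify the effective rules with sudo iptables -L -n:

# Default-deny inbound, allow outbound, open only the ports you intend to expose
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp   # keep SSH reachable before enabling the firewall
sudo ufw allow 80/tcp
sudo ufw enable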

Conclusion: A Multi-Layered Approach to Docker Network Security

Securing Docker networking is not a one-time configuration but an ongoing process that requires a layered approach. By understanding the nuances of Docker’s default isolation, strategically leveraging custom networks, implementing granular network policies with CNI plugins, and adhering to comprehensive best practices, you can significantly strengthen the security posture of your containerized applications. Don’t underestimate the network as a critical control plane in your container security strategy. Proactive and thoughtful network design is paramount to building resilient and secure container environments.


Running Docker Natively on WSL2 (Ubuntu 24.04) in Windows 11

For many developers, Docker Desktop has long been the default solution to run Docker on Windows. However, licensing changes and the desire for a leaner setup have pushed teams to look for alternatives. Fortunately, with the maturity of Windows Subsystem for Linux 2 (WSL2), it is now possible to run the full Docker Engine directly inside a Linux distribution such as Ubuntu 24.04, while still accessing containers seamlessly from both Linux and Windows.

In this guide, I’ll walk you through a clean, step-by-step setup for running Docker Engine inside WSL2 without Docker Desktop, explain how Windows and WSL2 communicate, and share best practices for maintaining a healthy development environment.


Why Run Docker Inside WSL2?

Running Docker natively inside WSL2 has several benefits:

  • No licensing issues – you avoid Docker Desktop’s commercial license requirements.
  • Lightweight – no heavy virtualization layer; containers run directly inside your WSL Linux distro.
  • Integrated networking – on Windows 11 with modern WSL versions,
    containers bound to localhost inside WSL are automatically reachable from Windows.
  • Familiar Linux workflow – you install and use Docker exactly as you would on a regular Ubuntu server.

Step 1 – Update Ubuntu

Open your Ubuntu 24.04 terminal and ensure your system is up to date:

sudo apt update && sudo apt upgrade -y

Step 2 – Install Docker Engine

Install Docker using the official Docker repository:

# Install prerequisites
sudo apt install -y ca-certificates curl gnupg lsb-release

# Add Docker’s GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

# Configure Docker repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Step 3 – Run Docker Without sudo

To avoid prefixing every command with sudo, add your user to the docker group:

sudo usermod -aG docker $USER

Restart your WSL terminal for the change to take effect, then verify:

docker --version
docker ps

Step 4 – Test Networking

One of the most common questions is:
“Will my containers be accessible from both Ubuntu and Windows?”
The answer is yes on modern Windows 11 with WSL2.
Let’s test it by running an Nginx container:

docker run -d -p 8080:80 --name webtest nginx

  • Inside Ubuntu (WSL): curl http://localhost:8080
  • From Windows (browser or PowerShell): http://localhost:8080

Thanks to WSL2’s localhost forwarding, Windows traffic to localhost is routed
into the WSL network, making containers instantly accessible without extra configuration.


Step 5 – Run Multi-Container Applications with Docker Compose

The Docker Compose plugin is already installed as part of the package above. Check the version:

docker compose version

Create a docker-compose.yml for a WordPress + MySQL stack:

version: "3.9"
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: rootpass
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wpuser
      MYSQL_PASSWORD: wppass
    volumes:
      - db_data:/var/lib/mysql

  wordpress:
    image: wordpress:latest
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wpuser
      WORDPRESS_DB_PASSWORD: wppass
      WORDPRESS_DB_NAME: wordpress

volumes:
  db_data:

Start the services:

docker compose up -d

Once the containers are running, open http://localhost:8080 in your Windows browser
to access WordPress. The containers are managed entirely inside WSL2,
but networking feels seamless.


Maintenance: Cleaning Up Docker Data

Over time, Docker accumulates images, stopped containers, volumes, and networks.
This can take up significant disk space inside your WSL distribution.
Here are safe maintenance commands to keep your environment clean:

Remove Unused Objects

docker system prune -a --volumes

  • -a: removes all unused images, not just dangling ones
  • --volumes: also removes unused volumes

Reset Everything (Dangerous)

If you need to wipe your Docker environment completely (images, containers, volumes, networks):

docker stop $(docker ps -aq) 2>/dev/null
docker rm -f $(docker ps -aq) 2>/dev/null
docker volume rm $(docker volume ls -q) 2>/dev/null
docker network rm $(docker network ls -q) 2>/dev/null
docker image rm -f $(docker image ls -q) 2>/dev/null

⚠️ Use this only if you want to start fresh. All data will be removed.


Conclusion

By running Docker Engine directly inside WSL2, you gain a powerful, lightweight, and license-free Docker environment that integrates seamlessly with Windows 11. Your containers are accessible from both Linux and Windows, Docker Compose works out of the box, and maintenance is straightforward with prune commands.

This approach is particularly well-suited for developers who want the flexibility of Docker without the overhead of Docker Desktop. With WSL2 and Ubuntu 24.04, you get the best of both worlds: Linux-native Docker with Windows accessibility.

Secure Development with Docker: DockerCon 2023 Workshop

The DockerCon 2023 workshop, “Secure Development with Docker,” delivered by Yves Brissaud, James Carnegie, David Dooling, and Christian Dupuis from Docker, offered a comprehensive exploration of securing the software supply chain. Spanning over three hours, this session addressed the tension between developers’ need for speed and security teams’ focus on risk mitigation. Participants engaged in hands-on labs to identify and remediate common vulnerabilities, leverage Docker Scout for actionable insights, and implement provenance, software bills of materials (SBOMs), and policies. The workshop emphasized Docker’s developer-centric approach to security, empowering attendees to enhance their workflows without compromising safety. By integrating Docker Scout, attendees learned to secure every stage of the software development lifecycle, from code to deployment.

Tackling Common Vulnerabilities and Exposures (CVEs)

The workshop began with a focus on Common Vulnerabilities and Exposures (CVEs), a critical starting point for securing software. David Dooling introduced CVEs as publicly disclosed cybersecurity vulnerabilities in operating systems, dependencies like OpenSSL, or container images. Participants used Docker Desktop 4.24 and the Docker Scout CLI to scan images based on Alpine 3.14, identifying vulnerabilities in base images and added layers, such as npm packages (e.g., Express and its transitive dependency Qs). Hands-on exercises guided attendees to update base images to Alpine 3.18, using Docker Scout’s recommendations to select versions with fewer vulnerabilities. The CLI’s cve command and Desktop’s vulnerability view provided detailed insights, including severity filters and package details, enabling developers to remediate issues efficiently. This segment underscored that while scanning is essential, it’s only one part of a broader security strategy, setting the stage for a holistic approach.

Understanding Software Supply Chain Security

The second segment, led by Dooling, introduced the software supply chain as a framework encompassing source code, dependencies, build processes, and deployment. Drawing an analogy to brewing coffee—where beans, water, and equipment have their own supply chains—the workshop highlighted risks like supply chain attacks, as outlined by CISA’s open-source security roadmap. These attacks, such as poisoning repositories, differ from CVEs by involving intentional tampering. Participants explored Docker Scout’s role as a supply chain management tool, not just a CVE scanner. Using the workshop’s GitHub repository (dc23-secure-workshop), attendees set up environment variables and Docker Compose to build images, learning how Scout tracks components across the lifecycle. This segment emphasized the need to secure every stage, from code creation to deployment, to prevent vulnerabilities and malicious injections.

Leveraging Docker Scout for Actionable Insights

Docker Scout was the cornerstone of the workshop, offering a developer-friendly interface to manage security. Yves Brissaud guided participants through hands-on labs using Docker Desktop and the Scout CLI to analyze images. Attendees explored vulnerabilities in a front-end image (using Express) and a Go-based back-end image, applying filters to focus on critical CVEs or specific package types (e.g., npm). Scout’s compare command allowed participants to assess changes between image versions, such as updating from Alpine 3.14 to 3.18, revealing added or removed packages and their impact on vulnerabilities. Desktop’s visual interface displayed recommended fixes, like updating base images or dependencies, while the CLI provided detailed outputs, including quick views for rapid assessments. This segment demonstrated Scout’s ability to integrate into CI/CD pipelines, providing early feedback to developers without disrupting workflows.
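
For readers who want to reproduce that workflow locally, the Scout CLI invocations discussed in the session look roughly like the following (image names are placeholders, and flags may vary between Scout releases):

docker scout quickview myapp:latest
docker scout cves --only-severity critical,high myapp:latest
docker scout compare myapp:latest --to myapp:previous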

Implementing Provenance and Software Bill of Materials (SBOM)

The third segment focused on provenance and SBOMs, critical for supply chain transparency. Provenance, aligned with the SALSA framework’s Build Level 1, documents how an image is built, including base image tags, digests, and build metadata. SBOMs list all packages and their versions, ensuring consistency across environments. Participants rebuilt images with the --provenance and --sbom flags using BuildKit, generating attestations stored in Docker Hub. Brissaud demonstrated using the imagetools command to inspect provenance and SBOMs, revealing details like build timestamps and package licenses. The workshop highlighted the importance of embedding this metadata at build time to enable reproducible builds and accurate recommendations. By integrating Scout’s custom SBOM indexer, attendees ensured consistent vulnerability reporting across Desktop, CLI, and scout.docker.com, enhancing trust in the software’s integrity.
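
A rough sketch of the build flags covered in this segment (the repository name is a placeholder; pushing to a registry is assumed so the attestations are stored alongside the image):

docker buildx build --provenance=true --sbom=true -t myorg/myapp:1.0 --push .
docker buildx imagetools inspect myorg/myapp:1.0 --format '{{ json .Provenance }}'
docker buildx imagetools inspect myorg/myapp:1.0 --format '{{ json .SBOM }}'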

Enforcing Developer-Centric Policies

The final segment introduced Docker Scout’s policy enforcement, designed with a developer mindset to avoid unnecessary build failures. Dooling explained Scout’s “first do no harm” philosophy, rooted in Kaizen’s continuous improvement principles. Unlike traditional policies that block builds for existing CVEs, Scout compares new builds to production images, allowing progress if vulnerabilities remain unchanged. Participants explored four out-of-the-box policies in Early Access: fixing critical/high CVEs, updating base images, and avoiding deprecated tags. Using the scout policy command, attendees evaluated images against these policies, viewing compliance status on Desktop and scout.docker.com. The workshop also previewed upcoming GitHub Action integrations for pull request policy checks, enabling developers to assess changes before merging. This approach ensures security without hindering development, aligning with Docker’s mission to empower developers.

Hashtags: #DockerCon2023 #SoftwareSupplyChain #DockerScout #SecureDevelopment #CVEs #Provenance #SBOM #Policy #YvesBrissaud #JamesCarnegie #DavidDooling #ChristianDupuis

Navigating the Application Lifecycle in Kubernetes

At Devoxx France 2019, Charles Sabourdin and Jean-Christophe Sirot, seasoned professionals in cloud-native technologies, delivered an extensive exploration of managing application lifecycles within Kubernetes. Charles, an architect with over 15 years in Linux and Java, and Jean-Christophe, a Docker expert since 2002, combined their expertise to demystify Docker’s underpinnings, Kubernetes’ orchestration, and the practicalities of continuous integration and delivery (CI/CD). Through demos and real-world insights, they addressed security challenges across development and business-as-usual (BAU) phases, proposing organizational strategies to streamline containerized workflows. This post captures their comprehensive session, offering a roadmap for developers and operations teams navigating Kubernetes ecosystems.

Docker’s Foundations: Isolation and Layered Efficiency

Charles opened the session by revisiting Docker’s core principles, emphasizing its reliance on Linux kernel features like namespaces and control groups (cgroups). Unlike virtual machines (VMs), which bundle entire operating systems, Docker containers share the host kernel, isolating processes within lightweight environments. This design achieves hyper-density, allowing more containers to run on a single machine compared to VMs. Charles demonstrated launching a container, highlighting its process isolation using commands like ps within a containerized bash session, contrasting it with the host’s process list. He introduced Docker’s layer system, where images are built as immutable, stacked deltas, optimizing storage through shared base layers. Tools like Dive, he noted, help inspect these layers, revealing command histories and suggesting size optimizations. This foundation sets the stage for Kubernetes, enabling efficient, portable application delivery across environments.

Kubernetes: Orchestrating Scalable Deployments

Jean-Christophe transitioned to Kubernetes, describing it as a resource orchestrator that manages containerized applications across node pools. Kubernetes abstracts infrastructure complexities, using declarative configurations to maintain desired application states. Key components include pods—the smallest deployable units housing containers—replica sets for scaling, and deployments for managing updates. Charles demonstrated creating a namespace and deploying a sample application using kubectl run, which scaffolds deployments, replica sets, and pods. He showcased rolling updates, where Kubernetes progressively replaces pods to ensure zero downtime, configurable via parameters like maxSurge and maxUnavailable. The duo emphasized Kubernetes’ auto-scaling capabilities, which adjust pod counts based on load, and the importance of defining resource limits to prevent performance bottlenecks. Their demo underscored Kubernetes’ role in achieving resilient, scalable deployments, aligning with hyper-density goals.
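
As a hypothetical illustration of the rolling-update parameters mentioned in the talk (the deployment name and images are placeholders):

kubectl create deployment demo --image=nginx:1.25 --replicas=3
kubectl patch deployment demo \
  -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
kubectl set image deployment/demo nginx=nginx:1.26
kubectl rollout status deployment/demo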

CI/CD Pipelines: Propagating Versions Seamlessly

The session delved into CI/CD pipelines, illustrating how Docker tags facilitate version propagation across development, pre-production, and production environments. Charles outlined a standard process: developers build Docker images tagged with version numbers (e.g., 1.11.2) or environment labels (e.g., prodstaging). These images, stored in registries like Docker Hub or private repositories, are pulled by Kubernetes clusters for deployment. Jean-Christophe highlighted debates around tagging strategies, noting that version-based tags ensure traceability, while environment tags simplify environment-specific deployments. Their demo integrated tools like Jenkins and JFrog Artifactory, automating builds, tests, and deployments. They stressed the need for robust pipeline configurations to avoid resource overuse, citing Jenkins’ default manual build triggers for tagged releases as a safeguard. This pipeline approach ensures consistent, automated delivery, bridging development and production.
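
The tag-and-promote flow described above boils down to a handful of commands; a hedged sketch, with the registry host as a placeholder:

docker build -t registry.example.com/myapp:1.11.2 .
docker push registry.example.com/myapp:1.11.2
docker tag registry.example.com/myapp:1.11.2 registry.example.com/myapp:prodstaging
docker push registry.example.com/myapp:prodstaging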

Security Across the Lifecycle: Development vs. BAU

Security emerged as a central theme, with Charles contrasting development and BAU phases. During development, teams rapidly address Common Vulnerabilities and Exposures (CVEs) with frequent releases, leveraging tools like JFrog Xray and Clair to scan images for vulnerabilities. Xray integrates with Artifactory, while Clair, an open-source solution, scans registry images for known CVEs. However, in BAU, where releases are less frequent, unpatched vulnerabilities pose greater risks. Charles shared an anecdote about a PHP project where a dependency switch broke builds after two years, underscoring the need for ongoing maintenance. They advocated for practices like running containers in read-only mode and using non-root users to minimize attack surfaces. Tools like OWASP Dependency-Track, they suggested, could enhance visibility into library vulnerabilities, though current scanners often miss non-package dependencies. This dichotomy highlights the need for automated, proactive security measures throughout the lifecycle.
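
The read-only and non-root practices mentioned above translate into a few runtime flags; a minimal sketch (the alpine image and the user ID are arbitrary choices):

docker run --rm --read-only --tmpfs /tmp \
  --user 1000:1000 --cap-drop ALL alpine:3.19 id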

Organizational Strategies: Balancing Complexity and Responsibility

Drawing from their experiences, Charles and Jean-Christophe proposed organizational solutions to manage Kubernetes complexity. They introduced a “1-2-3 model” for image management: Level 1 uses vendor-provided images (e.g., official MySQL images) managed by operations; Level 2 involves base images built by dedicated teams, incorporating standardized tooling; and Level 3 allows project-specific images, with teams assuming maintenance responsibilities. This model clarifies ownership, reducing risks like disappearing maintainers when projects transition to BAU. They emphasized cross-team collaboration, encouraging developers and operations to share knowledge and align on practices like Dockerfile authorship and resource allocation in YAML configurations. Charles reflected on historical DevOps silos, advocating for shared vocabularies and traceable decisions to navigate evolving best practices. Their return-of-experience underscored the importance of balancing automation with human oversight to maintain robust, secure Kubernetes environments.

Hashtags: #Kubernetes #Docker #DevOps #CICD #Security #DevoxxFR #CharlesSabourdin #JeanChristopheSirot #JFrog #Clair

[DevoxxFR 2018] Java in Docker: Best Practices for Production

The practice of running Java applications within Docker containers has become widely adopted in modern software deployment, yet it is not devoid of potential challenges, particularly when transitioning to production environments. Charles Sabourdin, a freelance architect, and Jean-Christophe Sirot, an engineer at Docker, collaborated at DevoxxFR2018 to share their valuable experiences and disseminate best practices for optimizing Java applications inside Docker containers. Their insightful talk directly addressed common and often frustrating issues, such as containers crashing unexpectedly, applications consuming excessive RAM leading to node instability, and encountering CPU throttling. They offered practical solutions and configurations aimed at ensuring smoother and more reliable production deployments for Java workloads.

The presenters initiated their session with a touch of humor, explaining why operations teams might exhibit a degree of apprehension when tasked with deploying a containerized Java application into a production setting. It’s a common scenario: containers that perform flawlessly on a developer’s local machine can begin to behave erratically or fail outright in production. This discrepancy often stems from a fundamental misunderstanding of how the Java Virtual Machine (JVM) interacts with the resource limits imposed by the container’s control groups (cgroups). Several key problems frequently surface in this context. Perhaps the most common is memory mismanagement; the JVM, particularly older versions, might not be inherently aware of the memory limits defined for its container by the cgroup. This lack of awareness can lead the JVM to attempt to allocate and use more memory than has been allocated to the container by the orchestrator or runtime. Such overconsumption inevitably results in the container being abruptly terminated by the operating system’s Out-Of-Memory (OOM) killer, a situation that can be difficult to diagnose without understanding this interaction.

Similarly, CPU resource allocation can present challenges. The JVM might not accurately perceive the CPU resources available to it within the container, such as CPU shares or quotas defined by cgroups. This can lead to suboptimal decisions in sizing internal thread pools (like the common ForkJoinPool or garbage collection threads) or can cause the application to experience unexpected CPU throttling, impacting performance. Another frequent issue is Docker image bloat. Overly large Docker images not only increase deployment times across the infrastructure but also expand the potential attack surface by including unnecessary libraries or tools, thereby posing security vulnerabilities. The talk aimed to equip developers and operations personnel with the knowledge to anticipate and mitigate these common pitfalls. During the presentation, a demonstration application, humorously named “ressources-munger,” was used to simulate these problems, clearly showing how an application could consume excessive memory leading to an OOM kill by Docker, or how it might trigger excessive swapping if not configured correctly, severely degrading performance.

JVM Memory Management and CPU Considerations within Containers

A significant portion of the discussion was dedicated to the intricacies of JVM memory management within the containerized environment. Charles and Jean-Christophe elaborated that older JVM versions, specifically those prior to Java 8 update 131 and Java 9, were not inherently “cgroup-aware”. This lack of awareness meant that the JVM’s default heap sizing heuristics—for example, typically allocating up to one-quarter of the physical host’s memory for the heap—would be based on the total resources of the host machine rather than the specific limits imposed on the container by its cgroup. This behavior is a primary contributor to unexpected OOM kills when the container’s actual memory limit is much lower than what the JVM assumes based on the host.

Several best practices were shared to address these memory-related issues effectively. The foremost recommendation is to use cgroup-aware JVM versions. Modern Java releases, particularly Java 8 update 191 and later, and Java 10 and newer, incorporate significantly improved cgroup awareness. For older Java 8 updates (specifically 8u131 to 8u190), experimental flags such as -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap can be employed to enable the JVM to better respect container memory limits. In Java 10 and subsequent versions, this behavior became standard and often requires no special flags. However, even with cgroup-aware JVMs, explicitly setting the heap size using parameters like -Xms for the initial heap size and -Xmx for the maximum heap size is frequently a recommended practice for predictability and control. Newer JVMs also offer options like -XX:MaxRAMPercentage, allowing for more dynamic heap sizing relative to the container’s allocated memory. It’s crucial to understand that the JVM’s total memory footprint extends beyond just the heap; it also requires memory for metaspace (which replaced PermGen in Java 8+), thread stacks, native libraries, and direct memory buffers. Therefore, when allocating memory to a container, it is essential to account for this total footprint, not merely the -Xmx value. A common guideline suggests that the Java heap might constitute around 50-75% of the total memory allocated to the container, with the remainder reserved for these other essential JVM components and any other processes running within the container. Tuning metaspace parameters, such as -XX:MetaspaceSize and -XX:MaxMetaspaceSize, can also prevent excessive native memory consumption, particularly in applications that dynamically load many classes.

Regarding CPU resources, the presenters noted that the JVM’s perception of available processors is also influenced by its cgroup awareness. In environments where CPU resources are constrained, using flags like -XX:ActiveProcessorCount can be beneficial to explicitly inform the JVM about the number of CPUs it should consider for sizing its internal thread pools, such as the common ForkJoinPool or the threads used for garbage collection. Optimizing the Docker image itself is another critical aspect of preparing Java applications for production. This involves choosing a minimal base image, such as alpine-jre, distroless, or official “slim” JRE images, instead of a full operating system distribution, to reduce the image size and potential attack surface. Utilizing multi-stage builds in the Dockerfile is a highly recommended technique; this allows developers to use a larger image containing build tools like Maven or Gradle and a full JDK in an initial stage, and then copy only the necessary application artifacts (like the JAR file) and a minimal JRE into a final, much smaller runtime image. Furthermore, being mindful of Docker image layering by combining related commands in the Dockerfile where possible can help reduce the number of layers and optimize image size. For applications on Java 9 and later, tools like jlink can be used to create custom, minimal JVM runtimes that include only the Java modules specifically required by the application, further reducing the image footprint. The session strongly emphasized that a collaborative approach between development and operations teams, combined with a thorough understanding of both JVM internals and Docker containerization principles, is paramount for successfully and reliably running Java applications in production environments.
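
A minimal sketch of the interplay described above: give the container an explicit memory and CPU budget, and let a cgroup-aware JVM size itself relative to that budget (the eclipse-temurin image and the percentage are assumptions, not the speakers’ exact values):

docker run --rm -m 512m --cpus 1 eclipse-temurin:17-jre \
  java -XX:MaxRAMPercentage=75.0 -XX:ActiveProcessorCount=1 -version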


Hashtags: #Java #Docker #JVM #Containerization #DevOps #Performance #MemoryManagement #DevoxxFR2018 #CharlesSabourdin #JeanChristopheSirot #BestPractices #ProductionReady #CloudNative

[DevoxxUS2017] Mobycraft: Manage Docker Containers Using Minecraft by Arun Gupta

At DevoxxUS2017, Arun Gupta, Vice President of Developer Advocacy at Couchbase, presented an innovative project called Mobycraft, a Minecraft client-side mod designed to manage Docker containers. Collaborating with his son, Aditya Gupta, Arun showcased how this mod transforms container management into an engaging, game-based experience, particularly for younger audiences learning Java and Docker fundamentals. This post delves into the key aspects of Arun’s session, highlighting how Mobycraft bridges gaming and technology education.

Engaging Young Minds with Docker

Arun Gupta introduced Mobycraft as a creative fusion of Minecraft’s interactive environment and Docker’s container management capabilities. Developed as a father-son project, the mod allows users to execute Docker commands like /docker ps and /docker run within Minecraft. Arun explained how containers are visualized as color-coded boxes, with interactive elements like start/stop buttons and status indicators. This approach, rooted in Aditya’s passion for Minecraft modding, makes complex Docker concepts accessible and fun, fostering early exposure to DevOps principles.

Technical Implementation and Community Contribution

Arun detailed Mobycraft’s technical foundation, built on Minecraft Forge for Minecraft 1.8, using the docker-java library to interface with Docker hosts. The mod supports multiple providers, including local Docker hosts, Docker for Mac, and Netflix’s Titus, leveraging Guice for dependency injection to ensure flexibility. Arun encouraged community contributions through code reviews and pull requests on the GitHub repository, emphasizing its educational potential and inviting developers to enhance features like Swarm visualization.
