Posts Tagged ‘Docker-Compose’
How to Install an Old Version of Docker on a Recent Debian: A Case Study with Docker 20.10.9 on Debian 13 (Trixie)
In the rapidly evolving landscape of containerization technology, Docker remains a cornerstone for developers and system administrators. However, specific use cases—such as legacy application compatibility, testing, or reproducing historical environments—may necessitate installing an older version of Docker on a modern operating system. This guide provides a detailed walkthrough for installing Docker Engine 20.10.9, a release from September 2021, on Debian 13 (codename “Trixie”), the current stable Debian release as of September 2025. While the steps can be adapted for other versions or Debian releases, this case study addresses the unique challenges of downgrading Docker on a contemporary distribution.
Introduction: The Challenges and Rationale for Installing an Older Docker Version
Installing an outdated Docker version like 20.10.9 on a recent Debian release such as Trixie is fraught with challenges due to Docker’s evolution and Debian’s forward-looking package management. Docker transitioned from Calendar Versioning (CalVer, e.g., 20.10) to Semantic Versioning (SemVer, starting with 23.0 in early 2023), introducing significant updates in security, features, and dependencies. This creates several obstacles:
- Package Availability and Compatibility: Docker’s official APT repository prioritizes current versions (e.g., 28.x in 2025) for supported Debian releases. Older versions like 20.10.9 are often archived and unavailable via apt for newer codenames like Trixie, requiring manual downloads of .deb packages from a compatible release (e.g., bullseye for Debian 11). This can lead to dependency mismatches or installation failures.
- Security and Support Risks: Version 20.10.9 has been end-of-life (EOL) since mid-2023 and lacks official security patches for known vulnerabilities (e.g., CVEs in networking or containerd), which poses risks for production environments. Compatibility issues may also arise with modern WSL2 networking in Windows Subsystem for Linux (WSL) environments.
- Dependency Conflicts: Older Docker versions rely on specific versions of components like containerd.io, which may conflict with newer libraries on Debian 13, potentially causing installation or runtime errors.
- Docker Compose Compatibility: Modern Docker integrates Compose as a plugin (docker compose), but older setups require the standalone docker-compose (v1, with hyphen), necessitating a separate binary installation.
Why pursue this downgrade? Legacy applications, specific toolchains, or compatibility with older Dockerfiles may require it—such as maintaining a telemetry stack with Elasticsearch, Kibana, and APM Server in a controlled environment. However, for production or security-sensitive deployments, upgrading to the latest Docker version (e.g., 28.3.3) is strongly recommended. This guide assumes a WSL/Debian Trixie setup but is applicable to native Debian installations, with precautions for data loss and system stability.
Prerequisites
Before proceeding, ensure the following:
- A running Debian 13 (Trixie) system (verify with lsb_release -cs).
- Administrative access (sudo privileges).
- A backup of critical Docker data (e.g., list volumes with docker volume ls and export the ones you need; see the sketch after this list).
- Internet access for downloading packages.
- Awareness of the risks: manual package installation bypasses APT’s dependency resolution, and EOL versions lack security updates.
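For the volume backup, one possible approach is to archive each volume’s contents to a tarball through a throwaway container (a minimal sketch; the volume name my_volume and the target directory ~/docker-backups are placeholders you would adapt to your own stack):
# List volumes, then archive one of them to a tarball on the host
docker volume ls
mkdir -p ~/docker-backups
docker run --rm -v my_volume:/source:ro -v ~/docker-backups:/backup busybox tar czf /backup/my_volume.tar.gz -C /source .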
Step 1: Prune All Local Docker Resources
Why This Step is Needed
Before uninstalling the current Docker version (e.g., 28.3.3), pruning all local resources—images, containers, volumes, and networks—ensures a clean slate. This prevents conflicts from residual data, reclaims disk space, and prepares the system for the downgrade. Since pruning is irreversible, backing up critical data (e.g., telemetry stack volumes) is essential.
What It Does
The docker system prune command, with the -a and --volumes flags, removes all stopped containers, unused networks, unused volumes, images not referenced by any container, and the build cache, ensuring no remnants interfere with the new installation.
Commands
# Stop all running containers (if any)
docker stop $(docker ps -q) 2>/dev/null || true
# Prune everything: images, containers, volumes, networks, and build cache
docker system prune -a --volumes -f
Verification
Run these commands to confirm cleanup:
docker images -a # Should list no images
docker volume ls # Should be empty
docker network ls # Should show only defaults (bridge, host, none)
docker ps -a # Should show no containers
If permission errors occur, verify your user is in the docker group (sudo usermod -aG docker $USER and log out/in) or use sudo.
Step 2: Remove the Current Docker Installation
Why This Step is Needed
Removing the existing Docker version (e.g., 28.3.3) eliminates potential conflicts in packages, configurations, or runtime components. Residual files or newer dependencies could cause the older 20.10.9 installation to fail or behave unpredictably.
What It Does
This step stops Docker services, purges installed packages, deletes data directories and configurations, and removes the Docker APT repository to prevent accidental upgrades to newer versions.
Commands
# Stop Docker services
sudo systemctl stop docker
sudo systemctl stop docker.socket
# Uninstall Docker packages
sudo apt-get purge -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-ce-rootless-extras
sudo apt-get autoremove -y --purge
# Remove Docker data and configs
sudo rm -rf /var/lib/docker
sudo rm -rf /var/lib/containerd
sudo rm -rf /etc/docker
sudo rm -f /etc/apparmor.d/docker
sudo rm -f /var/run/docker.sock
sudo groupdel docker 2>/dev/null || true
# Remove Docker repository
sudo rm -f /etc/apt/sources.list.d/docker.list
sudo rm -f /etc/apt/keyrings/docker.gpg
sudo apt-get update
# Verify removal
docker --version # Should show "command not found"
Reboot WSL if needed (in Windows PowerShell: wsl --shutdown).
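As an optional extra check (a small sketch; the package names match those purged above), confirm that no Docker-related packages are still installed:
# Should print only the final message if the purge succeeded
dpkg -l | grep -E 'docker|containerd' || echo "No Docker packages remain"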
Step 3: Install Docker Engine 20.10.9 via Manual .deb Packages
Why This Step is Needed
Debian Trixie’s Docker repository does not include 20.10.9, as it is an EOL version from 2021, unsupported since mid-2023. The standard apt installation fails due to version mismatches, so we manually download and install .deb packages from Docker’s archive for Debian 11 (bullseye), which is compatible with Trixie. This approach bypasses repository limitations but requires careful dependency management.
What It Does
The commands download specific .deb files for Docker CE, CLI, and containerd.io, install them using dpkg, resolve dependencies with apt, and lock the version to prevent upgrades. The process also ensures Docker starts correctly and is accessible without root privileges.
Commands
# Install prerequisites
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release
# Create directory for downloads
mkdir -p ~/docker-install
cd ~/docker-install
# Download .deb packages for 20.10.9 (bullseye, amd64)
curl -O https://download.docker.com/linux/debian/dists/bullseye/pool/stable/amd64/containerd.io_1.4.13-1_amd64.deb
curl -O https://download.docker.com/linux/debian/dists/bullseye/pool/stable/amd64/docker-ce-cli_20.10.9~3-0~debian-bullseye_amd64.deb
curl -O https://download.docker.com/linux/debian/dists/bullseye/pool/stable/amd64/docker-ce_20.10.9~3-0~debian-bullseye_amd64.deb
# Verify file sizes (should be MBs, not bytes)
ls -lh *.deb
# Install .deb packages
sudo dpkg -i containerd.io_1.4.13-1_amd64.deb
sudo dpkg -i docker-ce-cli_20.10.9~3-0~debian-bullseye_amd64.deb
sudo dpkg -i docker-ce_20.10.9~3-0~debian-bullseye_amd64.deb
# Fix any dependency issues
sudo apt-get install -f
# Hold versions to prevent upgrades
sudo apt-mark hold docker-ce docker-ce-cli containerd.io
# Start and enable Docker
sudo systemctl start docker
sudo systemctl enable docker
# Add user to docker group (log out/in after)
sudo usermod -aG docker $USER
# Verify installation
docker --version # Should show "Docker version 20.10.9, build ..."
docker run --rm hello-world # Test pull and run
# Clean up downloaded files
cd ~
rm -rf ~/docker-install
Note: If the curl URLs return 404 errors, browse Docker’s bullseye pool to find the exact filenames (e.g., try docker-ce_20.10.10~3-0~debian-bullseye_amd64.deb if 20.10.9 is unavailable). Use containerd.io_1.4.11-1_amd64.deb if 1.4.13 fails.
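If you need to confirm which filenames actually exist, one quick way (a sketch that assumes the pool directory serves a browsable HTML index, which it normally does; the grep pattern is my own) is to filter the listing from the command line:
# List the docker-ce, docker-ce-cli, and containerd.io packages published for bullseye/amd64
curl -s https://download.docker.com/linux/debian/dists/bullseye/pool/stable/amd64/ | grep -oE '(docker-ce|docker-ce-cli|containerd\.io)_[^"]*_amd64\.deb' | sort -u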
Step 4: Install Standalone docker-compose (v1.29.2)
Why This Step is Needed
Modern Docker includes Compose as a plugin (docker compose, v2), but legacy setups like 20.10.9 often require the standalone docker-compose (v1, with hyphen) for compatibility with older workflows or scripts. This binary ensures the hyphenated command is available.
What It Does
Downloads the v1.29.2 binary (the last v1 release, compatible with 20.10.9) from GitHub, installs it to /usr/local/bin, and makes it executable.
Commands
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
# Verify
docker-compose --version # Should show "docker-compose version 1.29.2, build ..."
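To sanity-check the standalone binary end to end, you can run a throwaway Compose project (a minimal sketch; the directory, service name, and image are arbitrary and unrelated to the telemetry stack):
# Create a minimal Compose file and bring it up with the v1 binary
mkdir -p /tmp/compose-check && cd /tmp/compose-check
cat > docker-compose.yml <<'EOF'
version: "3.3"
services:
  hello:
    image: hello-world
EOF
docker-compose up
docker-compose down
cd ~ && rm -rf /tmp/compose-check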
Step 5: Post-Installation and Verification
Why This Step is Needed
Post-installation steps ensure Docker and docker-compose function correctly, especially in a WSL environment where networking can be temperamental. Verifying the setup confirms compatibility with your telemetry stack (e.g., Elasticsearch, Kibana, APM Server).
What It Does
Restarts WSL to apply changes, tests Docker and docker-compose, and rebuilds your telemetry stack. It also checks network accessibility from Windows.
Commands
# Restart WSL (in Windows PowerShell)
wsl --shutdown
# Reopen WSL terminal and verify
docker --version # Should show "Docker version 20.10.9, build ..."
docker-compose --version # Should show "docker-compose version 1.29.2, build ..."
docker run --rm hello-world
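If your telemetry stack publishes its usual default ports (an assumption—9200 for Elasticsearch, 5601 for Kibana, 8200 for APM Server; adjust to whatever your docker-compose.yml actually exposes), a quick reachability check from the WSL side looks like this:
# Each request should return an HTTP status line once the corresponding service is up
curl -sI http://localhost:9200 | head -n 1   # Elasticsearch
curl -sI http://localhost:5601 | head -n 1   # Kibana
curl -sI http://localhost:8200 | head -n 1   # APM Server
With WSL2’s default localhost forwarding, the same URLs should normally also be reachable from a browser on the Windows side.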
Troubleshooting
If issues arise, consider these steps:
- Download Failures: Check Docker’s bullseye pool for the correct filenames. Use curl -I <URL> to verify the HTTP status (200 OK).
- Dependency Errors: Run sudo apt-get install -f to resolve them.
- Docker Not Starting: Check sudo systemctl status docker or journalctl -u docker.
- WSL Networking: Update WSL (wsl --update) and restart Docker (sudo service docker restart).
- Share Outputs: Provide ls -lh ~/docker-install/*.deb, dpkg -l | grep docker, or error messages for debugging.
Conclusion
Installing an older version of Docker, such as 20.10.9, on a recent Debian release like Trixie (Debian 13) is a complex but achievable task, requiring careful management of package dependencies and manual installation of archived .deb files. By pruning existing Docker resources, removing the current installation, and installing bullseye-compatible packages, you can successfully downgrade to meet legacy requirements. The addition of the standalone docker-compose ensures compatibility with older workflows.
However, this approach comes with caveats: version 20.10.9 is end-of-life, lacking security updates, and may face compatibility issues with modern tools or WSL2 networking. For production environments, consider using the latest Docker version (e.g., 28.3.3 as of September 2025) to benefit from ongoing support and enhanced features. Always test thoroughly after installation, and maintain backups to mitigate data loss risks. If you encounter issues or need to adapt this process for other versions, consult Docker’s official repository or community forums for additional resources.
Secure Development with Docker: DockerCon 2023 Workshop
The DockerCon 2023 workshop, “Secure Development with Docker,” delivered by Yves Brissaud, James Carnegie, David Dooling, and Christian Dupuis from Docker, offered a comprehensive exploration of securing the software supply chain. Spanning over three hours, this session addressed the tension between developers’ need for speed and security teams’ focus on risk mitigation. Participants engaged in hands-on labs to identify and remediate common vulnerabilities, leverage Docker Scout for actionable insights, and implement provenance, software bills of materials (SBOMs), and policies. The workshop emphasized Docker’s developer-centric approach to security, empowering attendees to enhance their workflows without compromising safety. By integrating Docker Scout, attendees learned to secure every stage of the software development lifecycle, from code to deployment.
Tackling Common Vulnerabilities and Exposures (CVEs)
The workshop began with a focus on Common Vulnerabilities and Exposures (CVEs), a critical starting point for securing software. David Dooling introduced CVEs as publicly disclosed cybersecurity vulnerabilities in operating systems, dependencies like OpenSSL, or container images. Participants used Docker Desktop 4.24 and the Docker Scout CLI to scan images based on Alpine 3.14, identifying vulnerabilities in base images and added layers, such as npm packages (e.g., Express and its transitive dependency Qs). Hands-on exercises guided attendees to update base images to Alpine 3.18, using Docker Scout’s recommendations to select versions with fewer vulnerabilities. The CLI’s cve command and Desktop’s vulnerability view provided detailed insights, including severity filters and package details, enabling developers to remediate issues efficiently. This segment underscored that while scanning is essential, it’s only one part of a broader security strategy, setting the stage for a holistic approach.
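As a rough illustration of the CLI side of this exercise (the image reference is an example, not the exact application image used in the labs):
# Scan the old base image and filter to the most severe findings
docker scout cves --only-severity critical,high alpine:3.14
# Ask Scout which base-image updates would reduce the vulnerability count
docker scout recommendations alpine:3.14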
Understanding Software Supply Chain Security
The second segment, led by Dooling, introduced the software supply chain as a framework encompassing source code, dependencies, build processes, and deployment. Drawing an analogy to brewing coffee—where beans, water, and equipment have their own supply chains—the workshop highlighted risks like supply chain attacks, as outlined by CISA’s open-source security roadmap. These attacks, such as poisoning repositories, differ from CVEs by involving intentional tampering. Participants explored Docker Scout’s role as a supply chain management tool, not just a CVE scanner. Using the workshop’s GitHub repository (dc23-secure-workshop), attendees set up environment variables and Docker Compose to build images, learning how Scout tracks components across the lifecycle. This segment emphasized the need to secure every stage, from code creation to deployment, to prevent vulnerabilities and malicious injections.
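A sketch of that setup flow, with the repository owner and the environment variable name left as placeholders since the post does not spell them out:
# Clone the workshop repository and build its images locally
git clone https://github.com/<github-org>/dc23-secure-workshop.git
cd dc23-secure-workshop
export DOCKER_HUB_USER=<your-docker-hub-username>   # placeholder; see the repo's README for the exact variables
docker compose build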
Leveraging Docker Scout for Actionable Insights
Docker Scout was the cornerstone of the workshop, offering a developer-friendly interface to manage security. Yves Brissaud guided participants through hands-on labs using Docker Desktop and the Scout CLI to analyze images. Attendees explored vulnerabilities in a front-end image (using Express) and a Go-based back-end image, applying filters to focus on critical CVEs or specific package types (e.g., npm). Scout’s compare command allowed participants to assess changes between image versions, such as updating from Alpine 3.14 to 3.18, revealing added or removed packages and their impact on vulnerabilities. Desktop’s visual interface displayed recommended fixes, like updating base images or dependencies, while the CLI provided detailed outputs, including quick views for rapid assessments. This segment demonstrated Scout’s ability to integrate into CI/CD pipelines, providing early feedback to developers without disrupting workflows.
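A hedged sketch of those two commands (the tags are placeholders standing in for the front-end image before and after the base-image bump):
# Summarize how the rebuilt image differs from the previous one, including CVE deltas
docker scout compare frontend:alpine-3.18 --to frontend:alpine-3.14
# Get a one-screen overview of a single image's vulnerability posture
docker scout quickview frontend:alpine-3.18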
Implementing Provenance and Software Bill of Materials (SBOM)
The third segment focused on provenance and SBOMs, critical for supply chain transparency. Provenance, aligned with the SALSA framework’s Build Level 1, documents how an image is built, including base image tags, digests, and build metadata. SBOMs list all packages and their versions, ensuring consistency across environments. Participants rebuilt images with the --provenance and --sbom flags using BuildKit, generating attestations stored in Docker Hub. Brissaud demonstrated using the imagetools command to inspect provenance and SBOMs, revealing details like build timestamps and package licenses. The workshop highlighted the importance of embedding this metadata at build time to enable reproducible builds and accurate recommendations. By integrating Scout’s custom SBOM indexer, attendees ensured consistent vulnerability reporting across Desktop, CLI, and scout.docker.com, enhancing trust in the software’s integrity.
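In command form, the rebuild and inspection steps look roughly like this (the image reference is a placeholder; --push is used so the attestations land in Docker Hub alongside the image):
# Rebuild with provenance and SBOM attestations and push them with the image
docker buildx build --provenance=true --sbom=true -t <hub-user>/frontend:v2 --push .
# Inspect the stored attestations
docker buildx imagetools inspect <hub-user>/frontend:v2 --format '{{ json .Provenance }}'
docker buildx imagetools inspect <hub-user>/frontend:v2 --format '{{ json .SBOM }}'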
Enforcing Developer-Centric Policies
The final segment introduced Docker Scout’s policy enforcement, designed with a developer mindset to avoid unnecessary build failures. Dooling explained Scout’s “first do no harm” philosophy, rooted in Kaizen’s continuous improvement principles. Unlike traditional policies that block builds for existing CVEs, Scout compares new builds to production images, allowing progress if vulnerabilities remain unchanged. Participants explored four out-of-the-box policies in Early Access: fixing critical/high CVEs, updating base images, and avoiding deprecated tags. Using the scout policy command, attendees evaluated images against these policies, viewing compliance status on Desktop and scout.docker.com. The workshop also previewed upcoming GitHub Action integrations for pull request policy checks, enabling developers to assess changes before merging. This approach ensures security without hindering development, aligning with Docker’s mission to empower developers.
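A minimal sketch of the policy evaluation step (the organization and image names are placeholders):
# Evaluate an image against the organization's enabled policies
docker scout policy <hub-user>/frontend:v2 --org <your-docker-org>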
Links:
- DockerCon 2023 Workshop Video
- Docker Website
- Announcing Docker Scout GA: Actionable Insights for the Software Supply Chain
- Docker Scout: Securing The Complete Software Supply Chain (DockerCon 2023)
- What’s in My Container? Docker Scout CLI and CI to the Rescue (DockerCon 2023)
Hashtags: #DockerCon2023 #SoftwareSupplyChain #DockerScout #SecureDevelopment #CVEs #Provenance #SBOM #Policy #YvesBrissaud #JamesCarnegie #DavidDooling #ChristianDupuis