[DotJs2025] Supercharge Web Performance with Shared Dictionaries: The Next Frontier in HTTP Compression

In an era where digital payloads traverse global networks at breakneck speeds, the subtle art of data compression remains a cornerstone of efficient web delivery, often overlooked amid flashier optimizations. Antoine Caron, engineering manager for frontend teams at Scaleway, reignited this vital discourse at dotJS 2025, advocating for shared dictionaries as a transformative leap in HTTP efficiency. With a keen eye on performance bottlenecks, Antoine dissected how conventional compressors like Gzip and Brotli falter on repetitive assets, only to unveil a protocol that leverages prior transfers as reference tomes, slashing transfer volumes by up to 70% in real-world scenarios. This isn’t arcane theory; it’s a pragmatic evolution, already piloted in Chrome and poised for broader adoption via emerging standards.

Antoine’s clarion call stemmed from stark realities unearthed in the Web Almanac: a disconcerting fraction of sites neglect even basic compression, forfeiting gigabytes in needless transit. A Wikipedia load sans Gzip drags versus its zipped twin, a 15% velocity boon; jQuery’s minified bulk sheds over 50KB under maximal squeeze, a 70% payload purge sans semantic sacrifice. Yet Brotli’s binary prowess, while superior for static fare, stumbles on dynamic deltas—vendor bundles morphing across deploys. Enter shared dictionary compression: an HTTP extension where browsers cache antecedent responses as compression glossaries, enabling servers to encode novelties against these baselines. For jQuery’s trek from v3.6 to v3.7, a mere 8KB suffices; YouTube’s quarterly refresh yields 70% thrift, prior payloads priming the pump.

This mechanism, rooted in Google’s erstwhile SDCH (Shared Dictionary Compression over HTTP) and revived in IETF drafts like Compression Dictionary Transport, marries client-side retention with server-side savvy. Chrome’s 2024 rollout—flagged under chrome://flags/#shared-dictionary-compression—harnesses Zstandard or Brotli atop these shared tomes, with Microsoft Edge’s ZSDCH echoing for HTTPS. Antoine emphasized pattern matching: match patterns tag vendor globs, caching layers sequester these corpora, subsequent fetches invoking them via headers like Available-Dictionary. Caveats abound—staticity’s stasis, cache invalidation’s curse—but mitigations like periodic refreshes or hybrid fallbacks preserve robustness.
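
To make the handshake concrete, here is a hedged sketch of the exchange as described in the Compression Dictionary Transport drafts (header names per the drafts; the file names and hash value are illustrative):

# First fetch: the server nominates the vendor bundle as a dictionary
# for future requests matching the pattern.
GET /js/vendor.v3.6.js HTTP/1.1

HTTP/1.1 200 OK
Use-As-Dictionary: match="/js/vendor*.js"
Content-Encoding: br

# Later fetch: the browser advertises the cached dictionary by its hash,
# and the server answers with a delta encoded against that dictionary.
GET /js/vendor.v3.7.js HTTP/1.1
Available-Dictionary: :pZGm1Av0IEBKARczz7exkNYsZb8LzaMrV7J32a2fFG4=:
Accept-Encoding: gzip, br, dcb, dcz

HTTP/1.1 200 OK
Content-Encoding: dcb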

Antoine’s vision extends to edge cases: CDN confederacies propagating dictionaries, mobile’s miserly bandwidths reaping richest rewards. As Interop 2025 mandates cross-browser parity—Safari and Firefox intent-to-ship signaling convergence—this frontier beckons builders to audit headers, prototype pilots, and pioneer payloads’ parsimony. In a bandwidth-beleaguered world, shared dictionaries don’t merely optimize; they orchestrate a leaner, more equitable web.

The Mechanics of Mutual Memory

Antoine unraveled the protocol’s weave: clients stash responses in a dedicated echelon and advertise them via Available-Dictionary headers, servers encoding diffs against these reservoirs. Brotli’s static harbors, once rigid, now ripple with runtime references—Zstd’s dynamism amplifying for JS behemoths. Web Almanac’s diagnostics affirm: uncompressed ubiquity persists, yet 2025’s tide, per Chrome’s telemetry, portends proliferation.

Horizons of Header Harmony

Drafts delineate transport: dictionary dissemination via prior bodies or external anchors, invalidation via etags or TTLs. Antoine’s exhortation: audit via Lighthouse, experiment in canaries—Scaleway’s vantage yielding vendor variances tamed. As specs solidify, this symbiosis promises payloads pared, performance propelled.

Links:

[DotJs2024] Embracing Reactivity: Signals Unveiled in Modern Web Frameworks

As web architectures burgeon in intricacy, the quest for fluid state orchestration intensifies, demanding primitives that harmonize intuition with efficiency. Ruby Jane Cabagnot, an Oslo-based full-stack artisan and co-author of Practical Enterprise React, illuminated this quest at dotJS 2024. With a portfolio spanning cloud services and DevOps, Ruby dissected signals’ ascendancy in frameworks like SolidJS and Svelte, tracing their lineage from Knockout’s observables to today’s compile-time elixirs. Her exposition: a clarion for developers to harness these sentinels, streamlining reactivity while amplifying responsiveness.

Ruby’s odyssey commenced with historical moorings: Knockout’s MVVM pioneered observables, auto-propagating UI tweaks; AngularJS echoed with bidirectional bonds, model-view symphonies. React’s virtual DOM and hooks refined declarative flows, context cascades sans impurity. Yet SolidJS and Svelte champion signals—granular beacons tracking dependencies, updating solely perturbed loci. In Solid, createSignal births a reactive vessel: name tweaks ripple to inputs, paragraphs—minimal footprint, maximal sync. Svelte compiles bindings at build: $: value directives weave reactivity into markup, runtime overhead evaporated.
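
A minimal SolidJS sketch of the pattern (TypeScript; the logging example is an illustrative stand-in, not code from the talk):

import { createRoot, createSignal, createEffect } from "solid-js";

createRoot(() => {
  // createSignal returns a getter/setter pair; reads inside an effect
  // are tracked as dependencies automatically.
  const [name, setName] = createSignal("Ruby");

  createEffect(() => {
    // This effect depends solely on `name`.
    console.log(`Hello, ${name()}!`);
  });

  setName("Jane"); // only the effect reading `name` re-executes—no virtual DOM diff
});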

Vue’s ref system aligns, signals as breath-easy bindings. Ruby extolled their triad: intuitiveness supplants boilerplate bazaars; performance prunes needless re-renders, DOM diffs distilled; developer delight via declarative purity, codebases crystalline. Signals transcend UIs, infiltrating WebAssembly’s server tides, birthing omnipresent reactivity. Ruby’s entreaty: probe these pillars, propel paradigms where apps pulse as dynamically as their environs.

Evolutionary Echoes of Reactivity

Ruby retraced trails: Knockout’s observables ignited auto-updates; AngularJS’s bonds synchronized realms. React’s hooks democratized context; Solid/Svelte’s signals granularize, compile-time cunning curbing cascades—name flux mends markup sans wholesale refresh.

Signals’ Synergies in Action

Solid’s vessels auto-notify dependents; Svelte’s directives distill runtime to essence. Vue’s refs render reactivity reflexive. Ruby rejoiced: libraries obsolete, renders refined, ergonomics elevated—crafting canvases concise, performant, profound.

Links:

[DevoxxUK2025] Concerto for Java and AI: Building Production-Ready LLM Applications

At DevoxxUK2025, Thomas Vitale, a software engineer at Systematic, delivered an inspiring session on integrating generative AI into Java applications to enhance his music composition process. Combining his passion for music and software engineering, Thomas showcased a “composer assistant” application built with Spring AI, addressing real-world use cases like text classification, semantic search, and structured data extraction. Through live coding and a musical performance, he demonstrated how Java developers can leverage large language models (LLMs) for production-ready applications, emphasizing security, observability, and developer experience. His talk culminated in a live composition for an audience-chosen action movie scene, blending AI-driven suggestions with human creativity.

The Why Factor for AI Integration

Thomas introduced his “Why Factor” to evaluate hype technologies like generative AI. First, identify the problem: for his composer assistant, he needed to organize and access musical data efficiently. Second, assess production readiness: LLMs must be secure and reliable for real-world use. Third, prioritize developer experience: tools like Spring AI simplify integration without disrupting workflows. By focusing on these principles, Thomas avoided blindly adopting AI, ensuring it solved specific issues, such as automating data classification to free up time for creative tasks like composing music.

Enhancing Applications with Spring AI

Using a Spring Boot application with a Thymeleaf frontend, Thomas integrated Spring AI to connect to LLMs like those from Ollama (local) and Mistral AI (cloud). He demonstrated text classification by creating a POST endpoint to categorize musical data (e.g., “Irish tin whistle” as an instrument) using a chat client API. To mitigate risks like prompt injection attacks, he employed Java enumerations to enforce structured outputs, converting free text into JSON-parsed Java objects. This approach ensured security and usability, allowing developers to swap models without code changes, enhancing flexibility for production environments.
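
The enum-constraint idea generalizes beyond Spring AI; here is a hedged, framework-agnostic TypeScript sketch (callModel is a hypothetical stand-in for any chat-completion client, not Spring AI’s API):

// Constrain classification to a closed set so free-text (or injected)
// model output cannot smuggle arbitrary values into the system.
enum Category {
  Instrument = "INSTRUMENT",
  Genre = "GENRE",
  Mood = "MOOD",
}

// Hypothetical stand-in: a real implementation would call an LLM here.
async function callModel(_prompt: string): Promise<string> {
  return "INSTRUMENT";
}

async function classify(text: string): Promise<Category> {
  const allowed = Object.values(Category).join(", ");
  const raw = await callModel(
    `Classify "${text}" as exactly one of: ${allowed}. Reply with the label only.`
  );
  const label = raw.trim().toUpperCase();
  // Reject anything outside the enum rather than trusting the model.
  if (!(Object.values(Category) as string[]).includes(label)) {
    throw new Error(`Unexpected model output: ${raw}`);
  }
  return label as Category;
}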

Semantic Search and Retrieval-Augmented Generation

Thomas addressed the challenge of searching musical data by meaning, not just keywords, using semantic search. By leveraging embedding models in Spring AI, he converted text (e.g., “melancholic”) into numerical vectors stored in a PostgreSQL database, enabling searches for related terms like “sad.” He extended this with retrieval-augmented generation (RAG), where a chat client advisor retrieves relevant data before querying the LLM. For instance, asking, “What instruments for a melancholic scene?” returned suggestions like cello, based on his dataset, improving search accuracy and user experience.
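
Under the hood, semantic search reduces to nearest-neighbor ranking over embedding vectors. A hedged, self-contained TypeScript sketch (the three-dimensional vectors are made-up stand-ins for real embedding-model output):

type Embedded = { text: string; vector: number[] };

// Cosine similarity: 1 means identical direction, 0 means unrelated.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const store: Embedded[] = [
  { text: "melancholic cello piece", vector: [0.9, 0.1, 0.2] },
  { text: "upbeat brass fanfare", vector: [0.1, 0.9, 0.3] },
];

// "sad" should land near "melancholic" in embedding space.
const query = { text: "sad", vector: [0.85, 0.15, 0.25] };
const ranked = [...store].sort(
  (a, b) => cosine(b.vector, query.vector) - cosine(a.vector, query.vector)
);
console.log(ranked[0].text); // => "melancholic cello piece"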

Structured Data Extraction and Human Oversight

To streamline data entry, Thomas implemented structured data extraction, converting unstructured director notes (e.g., from audio recordings) into JSON objects for database storage. Spring AI facilitated this by defining a JSON schema for the LLM to follow, ensuring structured outputs. Recognizing LLMs’ potential for errors, he emphasized keeping humans in the loop, requiring users to review extracted data before saving. This approach, applied to his composer assistant, reduced manual effort while maintaining accuracy, applicable to scenarios like customer support ticket processing.

Tools and MCP for Enhanced Functionality

Thomas enhanced his application with tools, enabling LLMs to call internal APIs, such as saving composition notes. Using Spring Data, he annotated methods to make them accessible to the model, allowing automated actions like data storage. He also introduced the Model Context Protocol (MCP), implemented in Quarkus, to integrate with external music software via MIDI signals. This allowed the LLM to play chord progressions (e.g., in A minor) through his piano software, demonstrating how MCP extends AI capabilities across local processes, though he cautioned it’s not yet production-ready.

Observability and Live Composition

To ensure production readiness, Thomas integrated OpenTelemetry for observability, tracking LLM operations like token usage and prompt augmentation. During the session, he invited the audience to choose a movie scene (action won) and used his application to generate a composition plan, suggesting chord progressions (e.g., I-VI-III-VII) and instruments like percussion and strings. He performed the music live, copy-pasting AI-suggested notes into his software, fixing minor bugs, and adding creative touches, showcasing a practical blend of AI automation and human artistry.

Links:

[DevoxxFR2025] Boosting Java Application Startup Time: JVM and Framework Optimizations

In the world of modern application deployment, particularly in cloud-native and microservice architectures, fast startup time is a crucial factor impacting scalability, resilience, and cost efficiency. Slow-starting applications can delay deployments, hinder auto-scaling responsiveness, and consume resources unnecessarily. Olivier Bourgain, in his presentation, delved into strategies for significantly accelerating the startup time of Java applications, focusing on optimizations at both the Java Virtual Machine (JVM) level and within popular frameworks like Spring Boot. He explored techniques ranging from garbage collection tuning to leveraging emerging technologies like OpenJDK’s Project Leyden and Spring AOT (Ahead-of-Time Compilation) to make Java applications lighter, faster, and more efficient from the moment they start.

The Importance of Fast Startup

Olivier began by explaining why fast startup time matters in modern environments. In microservices architectures, applications are frequently started and stopped as part of scaling events, deployments, or rolling updates. A slow startup adds to the time it takes to scale up to handle increased load, potentially leading to performance degradation or service unavailability. In serverless or function-as-a-service environments, cold starts (the time it takes for an idle instance to become ready) are directly impacted by application startup time, affecting latency and user experience. Faster startup also improves developer productivity by reducing the waiting time during local development and testing cycles. Olivier emphasized that optimizing startup time is no longer just a minor optimization but a fundamental requirement for efficient cloud-native deployments.

JVM and Garbage Collection Optimizations

Optimizing the JVM configuration and understanding garbage collection behavior are foundational steps in improving Java application startup. Olivier discussed how different garbage collectors (like G1, Parallel, or ZGC) can impact startup time and memory usage. Tuning JVM arguments related to heap size, garbage collection pauses, and just-in-time (JIT) compilation tiers can influence how quickly the application becomes responsive. While JIT compilation is crucial for long-term performance, it can introduce startup overhead as the JVM analyzes and optimizes code during initial execution. Techniques like Class Data Sharing (CDS) were mentioned as a way to reduce startup time by sharing pre-processed class metadata between multiple JVM instances. Olivier provided practical tips and configurations for optimizing JVM settings specifically for faster startup, balancing it with overall application performance.
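
As a concrete illustration of CDS, a minimal sketch using dynamic AppCDS (available since JDK 13; the jar and archive names are illustrative):

# First run: record the classes the application loads and dump them
# into an archive when the JVM exits.
java -XX:ArchiveClassesAtExit=app-cds.jsa -jar app.jar

# Subsequent runs: map the pre-parsed class metadata from the archive,
# skipping much of the class loading work at startup.
java -XX:SharedArchiveFile=app-cds.jsa -jar app.jar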

Framework Optimizations: Spring Boot and Beyond

Popular frameworks like Spring Boot, while providing immense productivity benefits, can sometimes contribute to longer startup times due to their extensive features and reliance on reflection and classpath scanning during initialization. Olivier explored strategies within the Spring ecosystem and other frameworks to mitigate this. He highlighted Spring AOT (Ahead-of-Time Compilation) as a transformative technology that analyzes the application at build time and generates optimized code and configuration, reducing the work the JVM needs to do at runtime. This can significantly decrease startup time and memory footprint, making Spring Boot applications more suitable for resource-constrained environments and serverless deployments. Project Leyden in OpenJDK, aiming to enable static images and further AOT compilation for Java, was also discussed as a future direction for improving startup performance at the language level. Olivier demonstrated how applying these framework-specific optimizations and leveraging AOT compilation can have a dramatic impact on the startup speed of Java applications, making them competitive with applications written in languages traditionally known for faster startup.
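
For Spring AOT specifically, a hedged sketch of the runtime side (Spring Boot 3+; the property comes from the Spring Boot documentation, the artifact path is illustrative):

# Build with AOT processing enabled (the spring-boot-maven-plugin's
# process-aot goal, or Gradle's processAot task), then launch the jar
# with the AOT-generated configuration switched on:
java -Dspring.aot.enabled=true -jar target/app.jar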

Links:

[KotlinConf2024] DataFrame: Kotlin’s Dynamic Data Handling

At KotlinConf2024, Roman Belov, JetBrains’ Kotlin Moods group leader, showcased Kotlin DataFrame, a versatile library for managing flat and hierarchical data. Designed for general developers, not just data scientists, DataFrame handles CSV, JSON, and object subgraphs, enabling seamless data transformation and visualization. Roman demonstrated its integration with Kotlin Notebook for prototyping and a compiler plugin for dynamic type inference, using a KotlinConf app backend as an example. This talk highlighted how DataFrame empowers developers to build robust, interactive data pipelines.

DataFrame: A Versatile Data Structure

Kotlin DataFrame redefines data handling for Kotlin developers. Roman explained that, unlike traditional data classes, DataFrame supports dynamic column manipulation, akin to Excel tables. It can read, write, and transform data from formats like CSV or JSON, making it ideal for both analytics and general projects. For a KotlinConf app, DataFrame processed session data from a REST API, allowing developers to filter, sort, and pivot data effortlessly, providing a flexible alternative to rigid data class structures.

Prototyping with Kotlin Notebook

Kotlin Notebook, a plugin for IntelliJ IDEA Ultimate, enhances DataFrame’s prototyping capabilities. Roman demonstrated creating a scratch file to fetch session data via Ktor Client. The notebook’s auto-completion for dependencies, like Ktor or DataFrame, simplifies setup, downloading the latest versions from Maven Central. Interactive tables display hierarchical data, and each code fragment updates variable types, enabling rapid experimentation. This environment suits developers iterating on ideas, offering a low-friction way to test data transformations before production.

Dynamic Type Inference in Action

DataFrame’s compiler plugin, built for the K2 compiler, introduces on-the-fly type inference. Roman showed how it analyzes a DataFrame’s schema during execution, generating extension properties for columns. For example, accessing a title column in a sessions DataFrame feels like using a property, with auto-completion for column names and types. This eliminates manual schema definitions, streamlining data wrangling. Though experimental, the plugin cached schemas efficiently, ensuring performance, as seen when filtering multiplatform talk descriptions.

Handling Hierarchical Data

DataFrame excels with hierarchical structures, unlike flat data classes. Roman illustrated this with nested JSON from the KotlinConf API, converting categories into a DataFrame with grouped columns. Developers can navigate sub-DataFrames within cells, mirroring data class nesting. For instance, a category’s items array became a sub-DataFrame, accessible via intuitive APIs. This capability supports complex data like object subgraphs, enabling developers to transform and analyze nested structures without cumbersome manual mappings.

Building a KotlinConf Schedule

Roman walked through a practical example: creating a daily schedule for KotlinConf. Starting with session data, he converted startsAt strings to LocalDateTime, filtered out service sessions, and joined room IDs with room names from another DataFrame. Sorting by start time and pivoting by room produced a clean schedule, with nulls replaced by empty strings. The resulting HTML table, generated directly in the notebook, showcased DataFrame’s ability to transform REST API data into user-friendly outputs, all with concise, readable code.

Visualizing Data with Kandy

DataFrame integrates with Kandy, JetBrains’ visualization library, to create charts. Roman demonstrated analyzing GitHub commits from the Kotlin repository, grouping them by week to plot commit counts and average message lengths. The resulting chart revealed trends, like steady growth potentially tied to CI improvements. Kandy’s simple API, paired with DataFrame’s data manipulation, makes visualization accessible. Roman encouraged exploring Kandy’s website for examples, highlighting its role in turning raw data into actionable insights.

DataFrame in Production

Moving DataFrame to production is straightforward. Roman showed copying notebook code into IntelliJ’s EAP version, importing the generated schema to access columns as properties. The compiler plugin evolves schemas dynamically, supporting operations like adding a room column and using it immediately. This approach minimizes boilerplate, as seen when serializing a schedule to JSON. Though the plugin is experimental, its integration with K2 ensures reliability, making DataFrame a practical choice for building scalable backend systems, from APIs to data pipelines.

Links:

[DevoxxFR2025] Winamax’s Journey Towards Cross-Platform

In today’s multi-device world, providing a consistent and high-quality user experience across various platforms is a significant challenge for software companies. For online gaming and betting platforms like Winamax, reaching users on desktop, web, and mobile devices is paramount. Anthony Maffert and Florian Yger from Winamax shared their insightful experience report detailing the company’s ambitious journey to unify their frontend applications onto a single cross-platform engine. Their presentation explored the technical challenges, the architectural decisions, and the concrete lessons learned during this migration, showcasing how they leveraged modern web technologies like JavaScript, React, and WebGL to achieve a unified codebase for their desktop and mobile applications.

The Challenge of a Fragmented Frontend

Winamax initially faced a fragmented frontend landscape, with separate native applications for desktop (Windows, macOS) and mobile (iOS, Android), alongside their web platform. Maintaining and developing features across these disparate codebases was inefficient, leading to duplicated efforts, inconsistencies in user experience, and slower delivery of new features. The technical debt associated with supporting multiple platforms became a significant hurdle. Anthony and Florian highlighted the clear business and technical need to consolidate their frontend development onto a single platform that could target all the required devices while maintaining performance and a rich user experience, especially crucial for a real-time application like online poker.

Choosing a Cross-Platform Engine

After evaluating various options, Winamax made the strategic decision to adopt a cross-platform approach based on web technologies. They chose to leverage JavaScript, specifically within the React framework, for building their user interfaces. For rendering the complex and dynamic visuals required for a poker client, they opted for WebGL, a web standard for rendering 2D and 3D graphics within a browser, which can also be utilized in cross-platform frameworks. Their previous experience with JavaScript on their web platform played a role in this decision. The core idea was to build a single application logic and UI layer using these web technologies and then deploy it across desktop and mobile using wrapper technologies (like Electron for desktop and potentially variations for mobile, although the primary focus of this talk seemed to be the desktop migration).

The Migration Process and Lessons Learned

Anthony and Florian shared their experience with the migration process, which was a phased approach given the complexity of a live gaming platform. They discussed the technical challenges encountered, such as integrating native device functionalities (like file system access for desktop) within the web technology stack, optimizing WebGL rendering performance for different hardware, and ensuring a smooth transition for existing users. They touched upon the architectural changes required to support a unified codebase, potentially involving a clear separation between the cross-platform UI logic and any platform-specific native modules or integrations. Key lessons learned included the importance of careful planning, thorough testing on all target platforms, investing in performance optimization, and managing the technical debt during the transition. They also highlighted the benefits reaped from this migration, including faster feature development, reduced maintenance overhead, improved consistency across platforms, and the ability to leverage a larger pool of web developers. The presentation offered a valuable case study for other organizations considering a similar move towards cross-platform development using web technologies.

Links:

[KotlinConf2025] Two Years with Kotlin Multiplatform: From Zero to 55% Shared Code

The journey to unified mobile applications is a complex one, fraught with technical and organizational challenges. Rodrigo Sicarelli, a staff software engineer at StoneCo, a leading Latin American fintech company, shared a compelling real-world account of his company’s two-year transition to Kotlin Multiplatform (KMP). This exploration revealed the strategic decisions, hurdles, and impressive achievements that led to a remarkable 55% code sharing across two large-scale mobile applications.

The initial challenge for StoneCo was to evaluate various cross-platform frameworks to find one that could balance the efficiency of code sharing with the critical need for a seamless user experience in the financial sector. Rodrigo detailed the exhaustive process of assessment and the ultimate decision to adopt KMP, a choice that promised to unify their mobile development efforts. A key part of the journey was the organizational shift, which involved training 130 mobile engineers to embrace a new paradigm. Rodrigo emphasized that this was not merely a technical migration but a cultural and educational one, fostering a collaborative spirit and promoting knowledge sharing across teams.

As the adoption matured, the teams faced a number of technical hurdles. One of the primary challenges was ensuring consistent data models and a unified network layer. Rodrigo outlined how they tackled this by consolidating data sources and creating a shared codebase for networking logic, which streamlined development and reduced errors. Another significant obstacle was the integration of KMP into their iOS CI/CD pipeline. He provided a clear explanation of how they overcame this by creating custom Gradle tasks and optimizing their build process, which dramatically improved build times. He also touched upon the importance of addressing the specific needs of iOS developers, particularly concerning the generation of idiomatic Swift APIs from the shared Kotlin code.

A major win for the team was the development of a custom Gradle plugin to manage Kotlin Multiplatform dependencies. This innovation was designed to solve a problem with exposing external libraries to Swift, where the linker would sometimes struggle with duplicate symbols. By adding annotations, the team was able to improve the linking process and reduce build times. This solution not only streamlined their internal workflow but is also planned for open-sourcing, showcasing StoneCo’s commitment to giving back to the community.

Rodrigo concluded by looking to the future, outlining a vision for a single, unified application repository that is user-segment-aware and built with Compose Multiplatform. This forward-looking approach demonstrates a long-term commitment to KMP and a desire to continue pushing the boundaries of shared code. His talk provided invaluable, actionable insights for any organization considering or already in the process of scaling Kotlin Multiplatform.

Links:


How to Install an Old Version of Docker on a Recent Debian: A Case Study with Docker 20.10.9 on Debian 13 (Trixie)

In the rapidly evolving landscape of containerization technology, Docker remains a cornerstone for developers and system administrators. However, specific use cases—such as legacy application compatibility, testing, or reproducing historical environments—may necessitate installing an older version of Docker on a modern operating system. This guide provides a detailed walkthrough for installing Docker Engine 20.10.9, a release from September 2021, on Debian 13 (codename “Trixie”), the current Debian stable release as of September 2025. While the steps can be adapted for other versions or Debian releases, this case study addresses the unique challenges of downgrading Docker on a contemporary distribution.

Introduction: The Challenges and Rationale for Installing an Older Docker Version

Installing an outdated Docker version like 20.10.9 on a recent Debian release such as Trixie is fraught with challenges due to Docker’s evolution and Debian’s forward-looking package management. Docker transitioned from Calendar Versioning (CalVer, e.g., 20.10) to Semantic Versioning (SemVer, starting with 23.0 in May 2023), introducing significant updates in security, features, and dependencies. This creates several obstacles:

  • Package Availability and Compatibility: Docker’s official APT repository prioritizes current versions (e.g., 28.x in 2025) for supported Debian releases. Older versions like 20.10.9 are often archived and unavailable via apt for newer codenames like Trixie, requiring manual downloads of .deb packages from a compatible release (e.g., bullseye for Debian 11). This can lead to dependency mismatches or installation failures.
  • Security and Support Risks: Version 20.10.9 is end-of-life (EOL) since mid-2023, lacking official security patches for known vulnerabilities (e.g., CVEs in networking or containerd). This poses risks for production environments. Additionally, compatibility issues may arise with modern WSL2 networking in Windows Subsystem for Linux (WSL) environments.
  • Dependency Conflicts: Older Docker versions rely on specific versions of components like containerd.io, which may conflict with newer libraries on Debian 13, potentially causing installation or runtime errors.
  • Docker Compose Compatibility: Modern Docker integrates Compose as a plugin (docker compose), but older setups require the standalone docker-compose (v1, with hyphen), necessitating a separate binary installation.

Why pursue this downgrade? Legacy applications, specific toolchains, or compatibility with older Dockerfiles may require it—such as maintaining a telemetry stack with Elasticsearch, Kibana, and APM Server in a controlled environment. However, for production or security-sensitive deployments, upgrading to the latest Docker version (e.g., 28.3.3) is strongly recommended. This guide assumes a WSL/Debian Trixie setup but is applicable to native Debian installations, with precautions for data loss and system stability.

Prerequisites

Before proceeding, ensure the following:

  • A running Debian 13 (Trixie) system (verify with lsb_release -cs).
  • Administrative access (sudo privileges).
  • Backup of critical Docker data (list volumes with docker volume ls, then export their contents; see the sketch after this list).
  • Internet access for downloading packages.
  • Awareness of risks: Manual package installation bypasses APT’s dependency resolution, and EOL versions lack security updates.
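
One hedged way to snapshot a named volume before pruning (the volume name and paths are illustrative):

# Stream the volume's contents into a tarball via a throwaway container
docker run --rm -v my_volume:/data -v "$(pwd)":/backup alpine \
  tar czf /backup/my_volume.tgz -C /data .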

Step 1: Prune All Local Docker Resources

Why This Step is Needed

Before uninstalling the current Docker version (e.g., 28.3.3), pruning all local resources—images, containers, volumes, and networks—ensures a clean slate. This prevents conflicts from residual data, reclaims disk space, and prepares the system for the downgrade. Since pruning is irreversible, backing up critical data (e.g., telemetry stack volumes) is essential.

What It Does

The docker system prune command removes all unused Docker artifacts, including stopped containers, unused images, volumes, and networks, ensuring no remnants interfere with the new installation.

Commands


# Stop all running containers (if any)
docker stop $(docker ps -q) 2>/dev/null || true

# Prune everything: images, containers, volumes, networks, and build cache
docker system prune -a --volumes -f
    

Verification

Run these commands to confirm cleanup:


docker images -a  # Should list no images
docker volume ls  # Should be empty
docker network ls # Should show only defaults (bridge, host, none)
docker ps -a      # Should show no containers
    

If permission errors occur, verify your user is in the docker group (sudo usermod -aG docker $USER and log out/in) or use sudo.

Step 2: Remove the Current Docker Installation

Why This Step is Needed

Removing the existing Docker version (e.g., 28.3.3) eliminates potential conflicts in packages, configurations, or runtime components. Residual files or newer dependencies could cause the older 20.10.9 installation to fail or behave unpredictably.

What It Does

This step stops Docker services, purges installed packages, deletes data directories and configurations, and removes the Docker APT repository to prevent accidental upgrades to newer versions.

Commands


# Stop Docker services
sudo systemctl stop docker
sudo systemctl stop docker.socket

# Uninstall Docker packages
sudo apt-get purge -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-ce-rootless-extras
sudo apt-get autoremove -y --purge

# Remove Docker data and configs
sudo rm -rf /var/lib/docker
sudo rm -rf /var/lib/containerd
sudo rm -rf /etc/docker
sudo rm -f /etc/apparmor.d/docker
sudo rm -f /var/run/docker.sock
sudo groupdel docker 2>/dev/null || true

# Remove Docker repository
sudo rm -f /etc/apt/sources.list.d/docker.list
sudo rm -f /etc/apt/keyrings/docker.gpg
sudo apt-get update

# Verify removal
docker --version  # Should show "command not found"
    

Reboot WSL if needed (in Windows PowerShell: wsl --shutdown).

Step 3: Install Docker Engine 20.10.9 via Manual .deb Packages

Why This Step is Needed

Debian Trixie’s Docker repository does not include 20.10.9, as it is an EOL version from 2021, unsupported since mid-2023. The standard apt installation fails due to version mismatches, so we manually download and install .deb packages from Docker’s archive for Debian 11 (bullseye), which is compatible with Trixie. This approach bypasses repository limitations but requires careful dependency management.

What It Does

The commands download specific .deb files for Docker CE, CLI, and containerd.io, install them using dpkg, resolve dependencies with apt, and lock the version to prevent upgrades. The process also ensures Docker starts correctly and is accessible without root privileges.

Commands


# Install prerequisites
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release

# Create directory for downloads
mkdir -p ~/docker-install
cd ~/docker-install

# Download .deb packages for 20.10.9 (bullseye, amd64)
curl -O https://download.docker.com/linux/debian/dists/bullseye/pool/stable/amd64/containerd.io_1.4.13-1_amd64.deb
curl -O https://download.docker.com/linux/debian/dists/bullseye/pool/stable/amd64/docker-ce-cli_20.10.9~3-0~debian-bullseye_amd64.deb
curl -O https://download.docker.com/linux/debian/dists/bullseye/pool/stable/amd64/docker-ce_20.10.9~3-0~debian-bullseye_amd64.deb

# Verify file sizes (should be MBs, not bytes)
ls -lh *.deb

# Install .deb packages
sudo dpkg -i containerd.io_1.4.13-1_amd64.deb
sudo dpkg -i docker-ce-cli_20.10.9~3-0~debian-bullseye_amd64.deb
sudo dpkg -i docker-ce_20.10.9~3-0~debian-bullseye_amd64.deb

# Fix any dependency issues
sudo apt-get install -f

# Hold versions to prevent upgrades
sudo apt-mark hold docker-ce docker-ce-cli containerd.io

# Start and enable Docker
sudo systemctl start docker
sudo systemctl enable docker

# Add user to docker group (log out/in after)
sudo usermod -aG docker $USER

# Verify installation
docker --version  # Should show "Docker version 20.10.9, build ..."
docker run --rm hello-world  # Test pull and run

# Clean up downloaded files
cd ~  # leave the download directory before removing it
rm -rf ~/docker-install
    

Note: If the curl URLs return 404 errors, browse Docker’s bullseye pool to find the exact filenames (e.g., try docker-ce_20.10.10~3-0~debian-bullseye_amd64.deb if 20.10.9 is unavailable). Use containerd.io_1.4.11-1_amd64.deb if 1.4.13 fails.

Step 4: Install Standalone docker-compose (v1.29.2)

Why This Step is Needed

Modern Docker includes Compose as a plugin (docker compose, v2), but legacy setups like 20.10.9 often require the standalone docker-compose (v1, with hyphen) for compatibility with older workflows or scripts. This binary ensures the hyphenated command is available.

What It Does

Downloads the v1.29.2 binary (the last v1 release, compatible with 20.10.9) from GitHub, installs it to /usr/local/bin, and makes it executable.

Commands


sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Verify
docker-compose --version  # Should show "docker-compose version 1.29.2, build ..."
    

Step 5: Post-Installation and Verification

Why This Step is Needed

Post-installation steps ensure Docker and docker-compose function correctly, especially in a WSL environment where networking can be temperamental. Verifying the setup confirms compatibility with your telemetry stack (e.g., Elasticsearch, Kibana, APM Server).

What It Does

Restarts WSL to apply changes, tests Docker and docker-compose, and rebuilds your telemetry stack. It also checks network accessibility from Windows.

Commands


# Restart WSL (in Windows PowerShell)
wsl --shutdown

# Reopen WSL terminal and verify
docker --version  # Should show "Docker version 20.10.9, build ..."
docker-compose --version  # Should show "docker-compose version 1.29.2, build ..."
docker run --rm hello-world

Troubleshooting

If issues arise, consider these steps:

  • Download Failures: Check Docker’s bullseye pool for correct filenames. Use curl -I <URL> to verify HTTP status (200 OK).
  • Dependency Errors: Run sudo apt-get install -f to resolve.
  • Docker Not Starting: Check sudo systemctl status docker or journalctl -u docker.
  • WSL Networking: Update WSL (wsl --update) and restart Docker (sudo service docker restart).
  • Gather Diagnostics: Collect ls -lh ~/docker-install/*.deb, dpkg -l | grep docker, and any error messages before seeking help on forums or issue trackers.

Conclusion

Installing an older version of Docker, such as 20.10.9, on a recent Debian release like Trixie (Debian 13) is a complex but achievable task, requiring careful management of package dependencies and manual installation of archived .deb files. By pruning existing Docker resources, removing the current installation, and installing bullseye-compatible packages, you can successfully downgrade to meet legacy requirements. The addition of the standalone docker-compose ensures compatibility with older workflows.

However, this approach comes with caveats: version 20.10.9 is end-of-life, lacking security updates, and may face compatibility issues with modern tools or WSL2 networking. For production environments, consider using the latest Docker version (e.g., 28.3.3 as of September 2025) to benefit from ongoing support and enhanced features. Always test thoroughly after installation, and maintain backups to mitigate data loss risks. If you encounter issues or need to adapt this process for other versions, consult Docker’s official repository or community forums for additional resources.

[NDCOslo2024] The History of Computer Art – Anders Norås

In the incandescent interstice of innovation and imagination, where algorithms awaken aesthetics, Anders Norås, a Norwegian designer and digital dreamer, traces the tantalizing trajectory of computer-generated creativity. From 1960s Silicon Valley’s psychedelic pixels to 2020s generative galleries, Anders animates an anthology of artistic audacity, where hackers harnessed harmonics and hobbyists honed holograms. His odyssey, opulent with optical illusions and ontological inquiries, unveils code as canvas, querying: when does datum dance into divinity?

Anders ambles from Bay Area’s beatnik bytes—LSD-laced labs birthing bitmap beauties—to 1970s fine artists’ foray into fractals. Vera Molnár’s algorithmic abstractions and mechanical marks meld math with muse, manifesting minimalism’s machine-made magic.

Psychedelic Pixels: 1960s’ Subcultural Sparks

San Francisco’s hacker havens hummed with hallucinatory hacks: Ken Knowlton’s BEFLIX begat filmic fractals, A. Michael Noll’s noisy nudes nodded to neo-classics. Anders accentuates the alchemy: computers as collaborators, conjuring compositions that captivated cognoscenti.

Algorithmic Abstractions: 1970s’ Fine Art Fusion

Fine artists forayed into flux: Frieder Nake’s generative geometries, Georg Nees’s nested nests—exhibitions eclipsed elites, etching electronics into etudes. Harold Cohen’s AARON, an autonomous auteur, authored arabesques, blurring brushes and binaries.

Rebellious Renderings: 1980s’ Demoscene Dynamism

Demoscene’s defiant demos dazzled: Future Crew’s trance tunnels, Razor 1911’s ray-traced reveries—amateurs authored epics on 8-bits, echoing graffiti’s guerrilla glee. Anders applauds the anarchy: code as contraband, creativity’s clandestine cabal.

Digital Diaspora: Internet’s Infinite Installations

Web’s weave widened worlds: JODI’s jetset glitches, Rafael Lozano-Hemmer’s responsive realms—browsers birthed boundless biennales. Printouts prized: AARON auctions at astronomic asks, affirming artifacts’ allure.

Generative Galas: GenAI’s Grand Gesture

Anders assays AI’s ascent: Midjourney’s mirages, DALL-E’s dreams—yet decries detachment, DALL-E’s depthless depictions devoid of dialogue. Jeff Wall’s “A Sudden Gust of Wind” juxtaposed: human heft versus heuristic haze, where context conceals critique.

Anders’s axiom: art awakens awareness—ideas ignite, irrespective of instrument. His entreaty: etch eternally, hand hewn, honoring humanity’s hallowed hue.

Links:

[DotJs2025] Coding and ADHD: Where We Excel

In tech’s torrent, where focus frays and novelty beckons, ADHD’s archetype—attention’s anarchy—often masquerades as malaise, yet harbors hidden harmonics for code’s cadence. Abbey Perini, a full-stack artisan and technical scribe from Atlanta’s tech thicket, celebrated these synergies at dotJS 2025. A Nexcor developer passionate about accessibility and advocacy, Abbey unpacked DSM-5’s deficits—deficit a misnomer for regulation’s riddle—and subtypes’ spectrum (inattentive, hyperactive-impulsive, combined), reframing “disorder” as distress’s delimiter.

Abbey’s audit: ADHD’s allure in dev’s domain—dopamine’s deficit sated by puzzles’ pursuit, hyperfocus’s hurricane on hooks or heuristics. Rabbit holes reward: quirks queried, systems synthesized—Danny Donovan’s “itchy” unknowns quelled by Google’s grace. Creativity cascades: unconventional conundrums cracked, prototypes proliferating. Passion’s pendulum: “passionate programmer” badge, hobbies’ graveyard notwithstanding—novelty’s nectar, troubleshooting’s triumph.

Managerial missives: resets’ rapidity (forgetfulness as feature), sprints’ scaffolding (tickets as tethers, novelty’s nod). Praise’s potency: negativity’s nectar negated. Abbey’s anthem: fireworks in cubic confines—embrace eccentricity, harness hyperactivity for heuristics’ harvest.

Neurodiversity’s Nexus in Code

Abbey anatomized: DSM’s dated diction, subtypes’ shades—combined’s chaos, yet coding’s chemistry: dopamine drafts from debugging’s depths.

Strengths’ Spotlight and Strategies

Rabbit trails to resolutions, creativity’s cornucopia—Abbey’s arc: interviews’ “passion,” rabbit holes’ recall. Managerial mantra: sprints soothe, praise potentiates—ADHD’s assets amplified.

Links: