
PostHeaderIcon [DevoxxFR2025] Winamax’s Journey Towards Cross-Platform

In today’s multi-device world, providing a consistent and high-quality user experience across various platforms is a significant challenge for software companies. For online gaming and betting platforms like Winamax, reaching users on desktop, web, and mobile devices is paramount. Anthony Maffert and Florian Yger from Winamax shared their insightful experience report detailing the company’s ambitious journey to unify their frontend applications onto a single cross-platform engine. Their presentation explored the technical challenges, the architectural decisions, and the concrete lessons learned during this migration, showcasing how they leveraged modern web technologies like JavaScript, React, and WebGL to achieve a unified codebase for their desktop and mobile applications.

The Challenge of a Fragmented Frontend

Winamax initially faced a fragmented frontend landscape, with separate native applications for desktop (Windows, macOS) and mobile (iOS, Android), alongside their web platform. Maintaining and developing features across these disparate codebases was inefficient, leading to duplicated efforts, inconsistencies in user experience, and slower delivery of new features. The technical debt associated with supporting multiple platforms became a significant hurdle. Anthony and Florian highlighted the clear business and technical need to consolidate their frontend development onto a single platform that could target all the required devices while maintaining performance and a rich user experience, especially crucial for a real-time application like online poker.

Choosing a Cross-Platform Engine

After evaluating various options, Winamax made the strategic decision to adopt a cross-platform approach based on web technologies. They chose to leverage JavaScript, specifically within the React framework, for building their user interfaces. For rendering the complex and dynamic visuals required for a poker client, they opted for WebGL, a web standard for rendering 2D and 3D graphics within a browser, which can also be utilized in cross-platform frameworks. Their previous experience with JavaScript on their web platform played a role in this decision. The core idea was to build a single application logic and UI layer using these web technologies and then deploy it across desktop and mobile using wrapper technologies (like Electron for desktop and potentially variations for mobile, although the primary focus of this talk seemed to be the desktop migration).

The Migration Process and Lessons Learned

Anthony and Florian shared their experience with the migration process, which was a phased approach given the complexity of a live gaming platform. They discussed the technical challenges encountered, such as integrating native device functionalities (like file system access for desktop) within the web technology stack, optimizing WebGL rendering performance for different hardware, and ensuring a smooth transition for existing users. They touched upon the architectural changes required to support a unified codebase, potentially involving a clear separation between the cross-platform UI logic and any platform-specific native modules or integrations. Key lessons learned included the importance of careful planning, thorough testing on all target platforms, investing in performance optimization, and managing the technical debt during the transition. They also highlighted the benefits reaped from this migration, including faster feature development, reduced maintenance overhead, improved consistency across platforms, and the ability to leverage a larger pool of web developers. The presentation offered a valuable case study for other organizations considering a similar move towards cross-platform development using web technologies.


PostHeaderIcon [KotlinConf2025] Two Years with Kotlin Multiplatform: From Zero to 55% Shared Code

The journey to unified mobile applications is a complex one, fraught with technical and organizational challenges. Rodrigo Sicarelli, a staff software engineer at StoneCo, a leading Latin American fintech company, shared a compelling real-world account of his company’s two-year transition to Kotlin Multiplatform (KMP). This exploration revealed the strategic decisions, hurdles, and impressive achievements that led to a remarkable 55% code sharing across two large-scale mobile applications.

The initial challenge for StoneCo was to evaluate various cross-platform frameworks to find one that could balance the efficiency of code sharing with the critical need for a seamless user experience in the financial sector. Rodrigo detailed the exhaustive process of assessment and the ultimate decision to adopt KMP, a choice that promised to unify their mobile development efforts. A key part of the journey was the organizational shift, which involved training 130 mobile engineers to embrace a new paradigm. Rodrigo emphasized that this was not merely a technical migration but a cultural and educational one, fostering a collaborative spirit and promoting knowledge sharing across teams.

As the adoption matured, the teams faced a number of technical hurdles. One of the primary challenges was ensuring consistent data models and a unified network layer. Rodrigo outlined how they tackled this by consolidating data sources and creating a shared codebase for networking logic, which streamlined development and reduced errors. Another significant obstacle was the integration of KMP into their iOS CI/CD pipeline. He provided a clear explanation of how they overcame this by creating custom Gradle tasks and optimizing their build process, which dramatically improved build times. He also touched upon the importance of addressing the specific needs of iOS developers, particularly concerning the generation of idiomatic Swift APIs from the shared Kotlin code.

A major win for the team was the development of a custom Gradle plugin to manage Kotlin Multiplatform dependencies. This innovation was designed to solve a problem with exposing external libraries to Swift, where the linker would sometimes struggle with duplicate symbols. By adding annotations, the team was able to improve the linking process and reduce build times. This solution not only streamlined their internal workflow but is also planned for open-sourcing, showcasing StoneCo’s commitment to giving back to the community.

Rodrigo concluded by looking to the future, outlining a vision for a single, unified application repository that is user-segment-aware and built with Compose Multiplatform. This forward-looking approach demonstrates a long-term commitment to KMP and a desire to continue pushing the boundaries of shared code. His talk provided invaluable, actionable insights for any organization considering or already in the process of scaling Kotlin Multiplatform.


PostHeaderIcon How to Install an Old Version of Docker on a Recent Debian: A Case Study with Docker 20.10.9 on Debian 13 (Trixie)

In the rapidly evolving landscape of containerization technology, Docker remains a cornerstone for developers and system administrators. However, specific use cases—such as legacy application compatibility, testing, or reproducing historical environments—may necessitate installing an older version of Docker on a modern operating system. This guide provides a detailed walkthrough for installing Docker Engine 20.10.9, a release from September 2021, on Debian 13 (codename “Trixie”), which became Debian’s stable release in August 2025. While the steps can be adapted for other versions or Debian releases, this case study addresses the unique challenges of downgrading Docker on a contemporary distribution.

Introduction: The Challenges and Rationale for Installing an Older Docker Version

Installing an outdated Docker version like 20.10.9 on a recent Debian release such as Trixie is fraught with challenges due to Docker’s evolution and Debian’s forward-looking package management. Docker transitioned from Calendar Versioning (CalVer, e.g., 20.10) to Semantic Versioning (SemVer, starting with 23.0 in May 2023), introducing significant updates in security, features, and dependencies. This creates several obstacles:

  • Package Availability and Compatibility: Docker’s official APT repository prioritizes current versions (e.g., 28.x in 2025) for supported Debian releases. Older versions like 20.10.9 are often archived and unavailable via apt for newer codenames like Trixie, requiring manual downloads of .deb packages from a compatible release (e.g., bullseye for Debian 11). This can lead to dependency mismatches or installation failures.
  • Security and Support Risks: Version 20.10.9 is end-of-life (EOL) since mid-2023, lacking official security patches for known vulnerabilities (e.g., CVEs in networking or containerd). This poses risks for production environments. Additionally, compatibility issues may arise with modern WSL2 networking in Windows Subsystem for Linux (WSL) environments.
  • Dependency Conflicts: Older Docker versions rely on specific versions of components like containerd.io, which may conflict with newer libraries on Debian 13, potentially causing installation or runtime errors.
  • Docker Compose Compatibility: Modern Docker integrates Compose as a plugin (docker compose), but older setups require the standalone docker-compose (v1, with hyphen), necessitating a separate binary installation.

Why pursue this downgrade? Legacy applications, specific toolchains, or compatibility with older Dockerfiles may require it—such as maintaining a telemetry stack with Elasticsearch, Kibana, and APM Server in a controlled environment. However, for production or security-sensitive deployments, upgrading to the latest Docker version (e.g., 28.3.3) is strongly recommended. This guide assumes a WSL/Debian Trixie setup but is applicable to native Debian installations, with precautions for data loss and system stability.

Prerequisites

Before proceeding, ensure the following:

  • A running Debian 13 (Trixie) system (verify with lsb_release -cs).
  • Administrative access (sudo privileges).
  • Backup of critical Docker data (e.g., export volumes using docker volume ls).
  • Internet access for downloading packages.
  • Awareness of risks: Manual package installation bypasses APT’s dependency resolution, and EOL versions lack security updates.

Step 1: Prune All Local Docker Resources

Why This Step is Needed

Before uninstalling the current Docker version (e.g., 28.3.3), pruning all local resources—images, containers, volumes, and networks—ensures a clean slate. This prevents conflicts from residual data, reclaims disk space, and prepares the system for the downgrade. Since pruning is irreversible, backing up critical data (e.g., telemetry stack volumes) is essential.

What It Does

The docker system prune command removes all unused Docker artifacts, including stopped containers, unused images, volumes, and networks, ensuring no remnants interfere with the new installation.

Commands


# Stop all running containers (if any)
docker stop $(docker ps -q) 2>/dev/null || true

# Prune everything: images, containers, volumes, networks, and build cache
docker system prune -a --volumes -f
    

Verification

Run these commands to confirm cleanup:


docker images -a  # Should list no images
docker volume ls  # Should be empty
docker network ls # Should show only defaults (bridge, host, none)
docker ps -a      # Should show no containers
    

If permission errors occur, verify your user is in the docker group (sudo usermod -aG docker $USER and log out/in) or use sudo.

Step 2: Remove the Current Docker Installation

Why This Step is Needed

Removing the existing Docker version (e.g., 28.3.3) eliminates potential conflicts in packages, configurations, or runtime components. Residual files or newer dependencies could cause the older 20.10.9 installation to fail or behave unpredictably.

What It Does

This step stops Docker services, purges installed packages, deletes data directories and configurations, and removes the Docker APT repository to prevent accidental upgrades to newer versions.

Commands


# Stop Docker services
sudo systemctl stop docker
sudo systemctl stop docker.socket

# Uninstall Docker packages
sudo apt-get purge -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-ce-rootless-extras
sudo apt-get autoremove -y --purge

# Remove Docker data and configs
sudo rm -rf /var/lib/docker
sudo rm -rf /var/lib/containerd
sudo rm -rf /etc/docker
sudo rm -f /etc/apparmor.d/docker
sudo rm -f /var/run/docker.sock
sudo groupdel docker 2>/dev/null || true

# Remove Docker repository
sudo rm -f /etc/apt/sources.list.d/docker.list
sudo rm -f /etc/apt/keyrings/docker.gpg
sudo apt-get update

# Verify removal
docker --version  # Should show "command not found"
    

Reboot WSL if needed (in Windows PowerShell: wsl --shutdown).

Step 3: Install Docker Engine 20.10.9 via Manual .deb Packages

Why This Step is Needed

Debian Trixie’s Docker repository does not include 20.10.9, as it is an EOL version from 2021, unsupported since mid-2023. The standard apt installation fails due to version mismatches, so we manually download and install .deb packages from Docker’s archive for Debian 11 (bullseye), which is compatible with Trixie. This approach bypasses repository limitations but requires careful dependency management.

What It Does

The commands download specific .deb files for Docker CE, CLI, and containerd.io, install them using dpkg, resolve dependencies with apt, and lock the version to prevent upgrades. The process also ensures Docker starts correctly and is accessible without root privileges.

Commands


# Install prerequisites
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release

# Create directory for downloads
mkdir -p ~/docker-install
cd ~/docker-install

# Download .deb packages for 20.10.9 (bullseye, amd64)
curl -O https://download.docker.com/linux/debian/dists/bullseye/pool/stable/amd64/containerd.io_1.4.13-1_amd64.deb
curl -O https://download.docker.com/linux/debian/dists/bullseye/pool/stable/amd64/docker-ce-cli_20.10.9~3-0~debian-bullseye_amd64.deb
curl -O https://download.docker.com/linux/debian/dists/bullseye/pool/stable/amd64/docker-ce_20.10.9~3-0~debian-bullseye_amd64.deb

# Verify file sizes (should be MBs, not bytes)
ls -lh *.deb

# Install .deb packages
sudo dpkg -i containerd.io_1.4.13-1_amd64.deb
sudo dpkg -i docker-ce-cli_20.10.9~3-0~debian-bullseye_amd64.deb
sudo dpkg -i docker-ce_20.10.9~3-0~debian-bullseye_amd64.deb

# Fix any dependency issues
sudo apt-get install -f

# Hold versions to prevent upgrades
sudo apt-mark hold docker-ce docker-ce-cli containerd.io

# Start and enable Docker
sudo systemctl start docker
sudo systemctl enable docker

# Add user to docker group (log out/in after)
sudo usermod -aG docker $USER

# Verify installation
docker --version  # Should show "Docker version 20.10.9, build ..."
docker run --rm hello-world  # Test pull and run

# Clean up downloaded files
cd ~
rm -rf ~/docker-install
    

Note: If the curl URLs return 404 errors, browse Docker’s bullseye pool to find the exact filenames (e.g., try docker-ce_20.10.10~3-0~debian-bullseye_amd64.deb if 20.10.9 is unavailable). Use containerd.io_1.4.11-1_amd64.deb if 1.4.13 fails.

Step 4: Install Standalone docker-compose (v1.29.2)

Why This Step is Needed

Modern Docker includes Compose as a plugin (docker compose, v2), but legacy setups like 20.10.9 often require the standalone docker-compose (v1, with hyphen) for compatibility with older workflows or scripts. This binary ensures the hyphenated command is available.

What It Does

Downloads the v1.29.2 binary (the last v1 release, compatible with 20.10.9) from GitHub, installs it to /usr/local/bin, and makes it executable.

Commands


sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Verify
docker-compose --version  # Should show "docker-compose version 1.29.2, build ..."
    

Step 5: Post-Installation and Verification

Why This Step is Needed

Post-installation steps ensure Docker and docker-compose function correctly, especially in a WSL environment where networking can be temperamental. Verifying the setup confirms compatibility with your telemetry stack (e.g., Elasticsearch, Kibana, APM Server).

What It Does

Restarts WSL to apply changes, tests Docker and docker-compose, and rebuilds your telemetry stack. It also checks network accessibility from Windows.

Commands


# Restart WSL (in Windows PowerShell)
wsl --shutdown

# Reopen WSL terminal and verify
docker --version  # Should show "Docker version 20.10.9, build ..."
docker-compose --version  # Should show "docker-compose version 1.29.2, build ..."
docker run --rm hello-world

Troubleshooting

If issues arise, consider these steps:

  • Download Failures: Check Docker’s bullseye pool for correct filenames. Use curl -I <URL> to verify HTTP status (200 OK).
  • Dependency Errors: Run sudo apt-get install -f to resolve.
  • Docker Not Starting: Check sudo systemctl status docker or journalctl -u docker.
  • WSL Networking: Update WSL (wsl --update) and restart Docker (sudo service docker restart).
  • Gathering Diagnostics: Collect ls -lh ~/docker-install/*.deb, dpkg -l | grep docker, and any error messages to pinpoint the failure.

Conclusion

Installing an older version of Docker, such as 20.10.9, on a recent Debian release like Trixie (Debian 13) is a complex but achievable task, requiring careful management of package dependencies and manual installation of archived .deb files. By pruning existing Docker resources, removing the current installation, and installing bullseye-compatible packages, you can successfully downgrade to meet legacy requirements. The addition of the standalone docker-compose ensures compatibility with older workflows.

However, this approach comes with caveats: version 20.10.9 is end-of-life, lacking security updates, and may face compatibility issues with modern tools or WSL2 networking. For production environments, consider using the latest Docker version (e.g., 28.3.3 as of September 2025) to benefit from ongoing support and enhanced features. Always test thoroughly after installation, and maintain backups to mitigate data loss risks. If you encounter issues or need to adapt this process for other versions, consult Docker’s official repository or community forums for additional resources.

PostHeaderIcon [AWSReInforce2025] How AWS designs the cloud to be the most secure for your business (SEC201)

Lecturer

The presentation is delivered by AWS security engineering leaders who architect the foundational controls that underpin the global cloud infrastructure. Their expertise encompasses hardware security modules, hypervisor isolation, formal verification, and organizational separation of duties across planetary-scale systems.

Abstract

The exposition delineates AWS’s security design philosophy, demonstrating how deliberate architectural isolation, formal verification, and cultural reinforcement create a substrate that absorbs undifferentiated security burden. Through examination of Nitro System enclaves, independent control planes, and hardware-rooted attestation, it establishes that security constitutes the primary reliability pillar, enabling customers to prioritize application innovation over infrastructure protection.

Security as Cultural Imperative and Design Principle

Security permeates AWS culture as the paramount priority, manifesting in organizational structure and technical architecture. Every engineering decision undergoes security review; features ship only when security criteria are satisfied. This cultural commitment extends to compensation—security objectives weigh equally with availability and performance in promotion criteria.

The design principle of least privilege applies ubiquitously: services operate with minimal permissions, even internally. When compromise occurs, blast radius is constrained by default. This philosophy contrasts with traditional enterprises where security is bolted on; at AWS, it is the foundation upon which all else is built.

Hardware-Enforced Isolation via Nitro System

The Nitro System exemplifies security through custom silicon. Traditional servers commingle customer workloads with management firmware; Nitro segregates these domains into dedicated cards—compute, storage, networking—each with independent firmware update channels.

Customer VM → Nitro Hypervisor → Nitro Security Module → Physical CPU
          ↘ Independent Control Plane → Hardware Attestation

The Nitro Security Module (NSM) maintains cryptographic attestation of the entire software stack. Before a host accepts customer instances, NSM verifies firmware integrity against immutable measurements burned into one-time-programmable fuses. Any deviation prevents boot, eliminating persistent rootkits at the hardware layer.

Independent Control and Data Plane Separation

Control plane operations—API calls, console interactions—execute in isolated cells that never touch customer data. A misconfigured S3 bucket policy might grant public access from the data plane perspective, but the control plane maintains an independent audit stream that detects the anomaly within minutes. This separation ensures configuration drift cannot evade detection.

The demonstration illustrates a public bucket created intentionally for testing. Within 180 seconds, Amazon Macie identifies the exposure, GuardDuty generates a finding, and Security Hub triggers an automated remediation workflow via Lambda. The customer perceives no interruption, yet the risk is mitigated proactively.
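As an illustration only (this is not AWS’s actual implementation; the finding shape, field names, and the `plan_remediation` helper are hypothetical), the decision step of such an automated remediation workflow can be sketched as a small pure function that a Lambda handler would call before invoking the AWS SDK:

```python
def plan_remediation(finding):
    """Decision step of an automated remediation workflow: map a
    Security Hub-style finding to the S3 public-access-block settings
    a Lambda would then apply via the AWS SDK (SDK call omitted here)."""
    if finding.get("type") != "s3-bucket-public-access":
        return None  # not a finding this workflow handles
    return {
        "bucket": finding["bucket"],
        "public_access_block": {
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    }

plan = plan_remediation({"type": "s3-bucket-public-access", "bucket": "demo-bucket"})
print(plan["bucket"])  # demo-bucket
```

Keeping the decision pure and the SDK call separate makes the remediation logic trivially testable, mirroring the separation of planes the talk emphasizes.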

Formal Verification and Provable Security

AWS employs mathematical proof for critical components. The s2n TLS library undergoes formal verification using SAW (Software Analysis Workbench), proving absence of memory safety errors in encryption pathways. Similarly, the Firecracker microVM—underpinning Lambda and Fargate—uses TLA+ specifications to verify isolation properties under concurrency.

These proofs extend to hardware: the Nitro enclave attestation protocol is verified using ProVerif, ensuring man-in-the-middle attacks are impossible even if the host OS is compromised. This rigor transforms empirical testing into mathematical certainty for security invariants.

Organizational Isolation and Compensating Controls

Beyond technical boundaries, AWS enforces organizational separation. Teams that manage customer data cannot access control plane systems, and vice versa. This dual-key approach prevents insider threats: a malicious storage engineer cannot modify billing logic.

Compensating controls provide defense in depth. Even if a service principal is compromised, VPC endpoints restrict traffic to authorized networks. Immutable infrastructure—AMI baking, Infrastructure as Code—ensures configuration drift triggers automated replacement rather than manual fixes.

Customer Outcomes and Shared Fate

The infrastructure absorbs complexity so customers need not replicate it. Organizations avoid building global DDoS mitigation, hardware security module fleets, or formal verification teams. Instead, they compose higher-order security patterns: cell-based architectures, zero-trust microsegmentation, and automated compliance evidence collection.

This shared fate model extends to innovation velocity. When AWS hardens the substrate—introducing post-quantum cryptography in KMS, or confidential computing in EC2—customers inherit these capabilities instantly across all regions. Security becomes a force multiplier rather than a drag coefficient.

Conclusion: Security as Substratum for Civilization-Scale Computing

AWS designs security not as a feature but as the invariant property of the computing substrate. Through hardware isolation, formal verification, cultural reinforcement, and independent control planes, it creates a platform where compromise is detected and contained before customer impact. This foundation liberates organizations to build transformative applications—genomic sequencing at population scale, real-time fraud detection for billions of transactions—confident that the underlying security posture is mathematically sound and operationally resilient.


PostHeaderIcon [NDCOslo2024] The History of Computer Art – Anders Norås

In the incandescent interstice of innovation and imagination, where algorithms awaken aesthetics, Anders Norås, a Norwegian designer and digital dreamer, traces the tantalizing trajectory of computer-generated creativity. From 1960s Silicon Valley’s psychedelic pixels to 2020s generative galleries, Anders animates an anthology of artistic audacity, where hackers harnessed harmonics and hobbyists honed holograms. His odyssey, opulent with optical illusions and ontological inquiries, unveils code as canvas, querying: when does datum dance into divinity?

Anders ambles from Bay Area’s beatnik bytes—LSD-laced labs birthing bitmap beauties—to 1970s fine artists’ foray into fractals. Vera Molnar’s algorithmic abstractions and mechanical marks meld math with muse, manifesting minimalism’s machine-made magic.

Psychedelic Pixels: 1960s’ Subcultural Sparks

San Francisco’s hacker havens hummed with hallucinatory hacks: Ken Knowlton’s BEFLIX begat filmic fractals, A. Michael Noll’s noisy nudes nodded to neo-classics. Anders accentuates the alchemy: computers as collaborators, conjuring compositions that captivated cognoscenti.

Algorithmic Abstractions: 1970s’ Fine Art Fusion

Fine artists forayed into flux: Frieder Nake’s generative geometries, Georg Nees’s nested nests—exhibitions eclipsed elites, etching electronics into etudes. Harold Cohen’s AARON, an autonomous auteur, authored arabesques, blurring brushes and binaries.

Rebellious Renderings: 1980s’ Demoscene Dynamism

Demoscene’s defiant demos dazzled: Future Crew’s trance tunnels, Razor 1911’s ray-traced reveries—amateurs authored epics on 8-bits, echoing graffiti’s guerrilla glee. Anders applauds the anarchy: code as contraband, creativity’s clandestine cabal.

Digital Diaspora: Internet’s Infinite Installations

Web’s weave widened worlds: JODI’s jetset glitches, Rafael Lozano-Hemmer’s responsive realms—browsers birthed boundless biennales. Printouts prized: AARON auctions at astronomic asks, affirming artifacts’ allure.

Generative Galas: GenAI’s Grand Gesture

Anders assays AI’s ascent: Midjourney’s mirages, DALL-E’s dreams—yet decries detachment, depthless depictions devoid of dialogue. Jeff Wall’s “A Sudden Gust of Wind” juxtaposed: human heft versus heuristic haze, where context conceals critique.

Anders’s axiom: art awakens awareness—ideas ignite, irrespective of instrument. His entreaty: etch eternally, hand hewn, honoring humanity’s hallowed hue.


PostHeaderIcon [DotJs2025] Coding and ADHD: Where We Excel

In tech’s torrent, where focus frays and novelty beckons, ADHD’s archetype—attention’s anarchy—often masquerades as malaise, yet harbors hidden harmonics for code’s cadence. Abbey Perini, a full-stack artisan and technical scribe from Atlanta’s tech thicket, celebrated these synergies at dotJS 2025. A Nexcor developer passionate about accessibility and advocacy, Abbey unpacked DSM-5’s deficits—deficit a misnomer for regulation’s riddle—subtypes’ spectrum (inattentive, impulsive, combined), reframing “disorder” as distress’s delimiter.

Abbey’s audit: ADHD’s allure in dev’s domain—dopamine’s deficit sated by puzzles’ pursuit, hyperfocus’s hurricane on hooks or heuristics. Rabbit holes reward: quirks queried, systems synthesized—Danny Donovan’s “itchy” unknowns quelled by Google’s grace. Creativity cascades: unconventional conundrums cracked, prototypes proliferating. Passion’s pendulum: “passionate programmer” badge, hobbies’ graveyard notwithstanding—novelty’s nectar, troubleshooting’s triumph.

Managerial missives: resets’ rapidity (forgetfulness as feature), sprints’ scaffolding (tickets as tethers, novelty’s nod). Praise’s potency: negativity’s nectar negated. Abbey’s anthem: fireworks in cubic confines—embrace eccentricity, harness hyperactivity for heuristics’ harvest.

Neurodiversity’s Nexus in Code

Abbey anatomized: DSM’s dated diction, subtypes’ shades—combined’s chaos, yet coding’s chemistry: dopamine drafts from debugging’s depths.

Strengths’ Spotlight and Strategies

Rabbit trails to resolutions, creativity’s cornucopia—Abbey’s arc: interviews’ “passion,” rabbit holes’ recall. Managerial mantra: sprints soothe, praise potentiates—ADHD’s assets amplified.


PostHeaderIcon Leading Through Reliability: Coaching, Mentoring, and Decision-Making Under Pressure

SRE leadership isn’t only about systems—it’s about people, processes, and resilience under fire.

1) Coaching Team Members Through Debugging

When junior engineers struggle with incidents, I walk them through the scientific method of debugging:

  1. Reproduce the problem.
  2. Collect evidence (logs, metrics, traces).
  3. Form a hypothesis.
  4. Test, measure, refine.

For example, in a memory leak case, I let a junior take the heap dump and explain findings, stepping in only to validate conclusions.
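The hypothesis-test loop above can be made concrete in a few lines. Suppose the junior samples post-GC heap usage between identical workloads; a simple check over those samples either supports or refutes the leak hypothesis (the numbers and the `leak_suspected` helper below are hypothetical, for illustration only):

```python
def leak_suspected(heap_samples_mb, min_growth_mb=50):
    """Hypothesis test: does post-GC heap usage keep growing across
    samples taken between identical workloads?"""
    if len(heap_samples_mb) < 3:
        return False  # not enough evidence yet; keep collecting
    deltas = [b - a for a, b in zip(heap_samples_mb, heap_samples_mb[1:])]
    # The leak hypothesis is supported only if usage rises between every
    # sample AND the total growth is material.
    return all(d > 0 for d in deltas) and sum(deltas) >= min_growth_mb

# Evidence: post-GC heap (MB) after each batch of 1,000 identical requests
print(leak_suspected([512, 590, 660, 735]))  # True — supports the leak hypothesis
print(leak_suspected([512, 500, 515, 498]))  # False — refuted; refine and retest
```

The point of the exercise is the method, not the code: the junior states what result would refute the hypothesis before measuring.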

2) Introducing SRE Practices to New Teams

In teams without SRE culture, I start small:

  • Define a single SLO for a critical endpoint.
  • Introduce a burn-rate alert tied to that SLO.
  • Run a blameless postmortem after the first incident.

This creates buy-in without overwhelming the team with jargon.
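To make the burn-rate idea concrete, here is a minimal sketch (illustrative numbers and thresholds, not a production alerting rule): with a 99.9% SLO the error budget is 0.1% of requests, and the burn rate is the observed error rate divided by that budget.

```python
def burn_rate(observed_error_rate, slo=0.999):
    """How fast the error budget is being consumed: 1.0 means the budget
    runs out exactly at the end of the SLO window."""
    budget = 1.0 - slo  # a 99.9% SLO leaves a 0.1% error budget
    return observed_error_rate / budget

def should_page(short_window_rate, long_window_rate, threshold=14.4, slo=0.999):
    """Multi-window fast-burn rule: page only when both a short and a long
    window burn hot, which filters out momentary blips."""
    return (burn_rate(short_window_rate, slo) >= threshold
            and burn_rate(long_window_rate, slo) >= threshold)

print(burn_rate(0.02))            # ~20: consuming budget 20x too fast
print(should_page(0.02, 0.02))    # True — sustained fast burn
print(should_page(0.02, 0.0005))  # False — the long window is healthy
```

Requiring both windows to exceed the threshold is what keeps the first alert a team sees actionable rather than noisy.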

3) Prioritizing and Delegating in High-Pressure Situations

During outages, prioritization is key:

  • Delegate evidence gathering (thread dumps, logs) to one engineer.
  • Keep communication flowing with stakeholders (status every 15 minutes).
  • Focus leadership on mitigation and rollback decisions.

After stabilization, I lead the postmortem, ensuring learnings feed back into automation, monitoring, and runbooks.

PostHeaderIcon [DevoxxGR2025] Email Re-Platforming Case Study

George Gkogkolis from Travelite Group shared a 15-minute case study at Devoxx Greece 2025 on re-platforming to process 1 million emails per hour.

The Challenge

Travelite Group, a global OTA handling flight tickets in 75 countries, processes 350,000 emails daily, expected to hit 2 million. Previously, a SaaS ticketing system struggled with growing traffic, poor licensing, and subpar user experience. Sharding the system led to complex agent logins and multiplexing issues with the booking engine. Market research revealed no viable alternatives, as vendors’ licensing models couldn’t handle the scale, prompting an in-house solution.

The New Platform

The team built a cloud-native, microservices-based platform within a year, going live in December 2024. It features a receiving app, a React-based web UI built with Mantine, a Spring Boot backend, and Amazon DocumentDB, integrated with Amazon SES and S3. Emails land in a Postfix server, are stored in S3, and processed via EventBridge and SQS. Data migration was critical, moving terabytes of EML files and databases in under two months, achieving a peak throughput of 1 million emails per hour by scaling to 50 receiver instances.

Lessons Learned

Starting with migration would have eased performance optimization, as synthetic data didn’t match production scale. Cloud-native deployment simplified scaling, and a backward-compatible API eased integration. Open standards (EML, Open API) ensured reliability. Future plans include AI and LLM enhancements by 2025, automating domain allocation for scalability.


PostHeaderIcon Observability for Modern Systems: From Metrics to Traces

Good monitoring doesn’t just tell you when things are broken—it explains why.

1) White-Box vs Black-Box Monitoring

White-box: metrics from inside the system (CPU, memory, app metrics). Example: http_server_requests_seconds from Spring Actuator.

Black-box: synthetic probes simulating user behavior (ping APIs, load test flows). Example: periodic “buy flow” test in production.
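
A black-box probe can be as small as an HTTP client plus a pass/fail rule. A minimal sketch, assuming a hypothetical health endpoint and an illustrative 500 ms latency budget:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class SyntheticProbe {

    // Pure pass/fail rule: 2xx status within a 500 ms budget (illustrative SLO).
    static boolean healthy(int status, long latencyMillis) {
        return status >= 200 && status < 300 && latencyMillis <= 500;
    }

    // One probe round-trip; in production this would run on a schedule.
    static boolean probeOnce(String url) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))
                .build();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .timeout(Duration.ofSeconds(2))
                .GET()
                .build();
        long start = System.nanoTime();
        HttpResponse<Void> response =
                client.send(request, HttpResponse.BodyHandlers.discarding());
        long latencyMillis = (System.nanoTime() - start) / 1_000_000;
        return healthy(response.statusCode(), latencyMillis);
    }

    public static void main(String[] args) throws Exception {
        // Only hit the network when a target URL is supplied on the command line.
        if (args.length > 0) {
            System.out.println(probeOnce(args[0]) ? "OK" : "ALERT");
        }
    }
}
```

Keeping the pass/fail rule separate from the HTTP plumbing makes the alerting threshold easy to test and tune.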

2) Tracing Distributed Transactions

Use OpenTelemetry to propagate context across microservices:

// build.gradle: pull in the OTLP exporter so spans reach a collector
implementation "io.opentelemetry:opentelemetry-exporter-otlp:1.30.0"

// Java: obtain a Tracer and wrap the checkout flow in a span
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

Tracer tracer = GlobalOpenTelemetry.getTracer("checkout-service");

Span span = tracer.spanBuilder("checkout").startSpan();
try (Scope scope = span.makeCurrent()) {
    paymentService.charge(card);    // child spans inherit the current context
    inventoryService.reserve(item);
} finally {
    span.end();                     // always end the span, even on failure
}

These traces flow into Jaeger or Grafana Tempo to visualize bottlenecks across services.

3) Example Dashboard for a High-Value Service

  • Availability: % successful requests (SLO vs actual).
  • Latency: p95/p99 end-to-end response times.
  • Error Rate: 4xx vs 5xx breakdown.
  • Dependency Health: DB latency, cache hit ratio, downstream service SLOs.
  • User metrics: active sessions, checkout success rate.
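
The p95/p99 figures on such a dashboard come straight from the latency distribution. A minimal nearest-rank sketch (production systems typically aggregate with histograms such as HDR, but the arithmetic is the same idea):

```java
import java.util.Arrays;

// Derive dashboard percentiles from raw request latencies (nearest-rank method).
public class Percentiles {
    static long percentile(long[] sortedMillis, double p) {
        // Index of the smallest value covering fraction p of the samples.
        int rank = (int) Math.ceil(p * sortedMillis.length) - 1;
        return sortedMillis[Math.max(rank, 0)];
    }

    public static void main(String[] args) {
        long[] latencies = new long[100];
        for (int i = 0; i < 100; i++) latencies[i] = i + 1; // 1..100 ms
        Arrays.sort(latencies);
        System.out.println("p95=" + percentile(latencies, 0.95) + "ms"); // 95ms
        System.out.println("p99=" + percentile(latencies, 0.99) + "ms"); // 99ms
    }
}
```

The gap between p95 and p99 is often the first hint of a tail-latency problem that averages hide.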

PostHeaderIcon [GoogleIO2024] What’s New in ChromeOS: Advancements in Accessibility and Performance

The landscape of personal computing continues to evolve, with ChromeOS at the forefront of delivering intuitive and robust experiences. Marisol Ryu, alongside Emilie Roberts and Sam Richard, outlined the platform’s ongoing mission to democratize powerful technology. Their discussion emphasized enhancements that cater to diverse user needs, from premium hardware integrations to refined app ecosystems, ensuring that simplicity and capability go hand in hand.

Expanding Access Through Premium Hardware and AI Features

Marisol highlighted the core philosophy of ChromeOS, which has remained steadfast since its inception nearly fifteen years ago: to provide straightforward yet potent computing solutions for a global audience. This vision manifests in the introduction of Chromebook Plus, a premium lineup designed to meet the demands of users seeking elevated performance without compromising affordability.

Collaborations with manufacturers such as Acer, Asus, HP, and Lenovo have yielded eight new models, each boasting double the processing power of top-selling devices from 2022. Starting at $399, these laptops make high-end computing more attainable. Beyond hardware, the “Plus” designation incorporates advanced Google AI functionalities, like “Help Me Write,” which assists in crafting or refining short-form content such as blog titles or video descriptions. Available soon for U.S. users, this tool exemplifies how AI can streamline everyday tasks, fostering creativity and productivity.

Emilie expanded on the integration of AI to personalize user interactions, noting features that adapt to individual workflows. This approach aligns with broader industry trends toward user-centric design, where technology anticipates needs rather than reacting to them. The emphasis on accessibility ensures that these advancements benefit a wide spectrum of users, from students to professionals.

Enhancing Web and Android App Ecosystems

Sam delved into optimizations for web applications, introducing “tab modes” that allow seamless switching between tabbed and windowed views. This flexibility enhances multitasking, particularly on larger screens, and reflects feedback from developers aiming to create more immersive experiences. Native-like install prompts further bridge the gap between web and desktop apps, encouraging users to engage more deeply.

For Android apps, testing and debugging tools have seen significant upgrades. The Android Emulator’s resizable window supports various form factors, including foldables and tablets, enabling developers to simulate real-world scenarios accurately. Integration with ChromeOS’s virtual machine ensures consistent performance across devices.

Gaming capabilities have also advanced, with “game controls” allowing customizable mappings for touch-only titles. This addresses input challenges on non-touch Chromebooks, making games accessible via keyboards, mice, or gamepads. “Game Capture” facilitates sharing screenshots and videos without disrupting gameplay, boosting social engagement and app visibility.

These improvements stem from close partnerships with developers, resulting in polished experiences that leverage ChromeOS’s strengths in security and speed.

Fostering Developer Collaboration and Future Innovations

The session underscored the importance of community feedback in shaping ChromeOS. Resources like the developer newsletter and RSS feed keep creators informed of updates, while platforms such as g.co/chromeosdev invite ongoing dialogue.

Looking ahead, the team envisions further AI integrations to enhance accessibility, such as adaptive interfaces for diverse abilities. By prioritizing inclusivity, ChromeOS continues to empower users worldwide, transforming curiosity into connection and creativity.

Links: