Archive for the ‘en-US’ Category
From JMS and Message Queues to Kafka Streams: Why Kafka Had to Be Invented
For decades, enterprise systems relied on message queues and JMS-based brokers to decouple applications and ensure reliable communication. Technologies such as IBM MQ, ActiveMQ, and later RabbitMQ solved an important problem: how to move messages safely from one system to another without tight coupling.
However, as systems grew larger, more distributed, and more data-driven, the limitations of this model became increasingly apparent. Kafka — and later Kafka Streams — did not emerge because JMS and MQ were poorly designed. They emerged because JMS and MQ were designed for a different era and a different class of problems.
What JMS and MQ Were Designed to Do
Traditional message brokers focus on delivery. A producer sends a message, the broker stores it temporarily, and a consumer receives it. Once the message is acknowledged, it is typically removed. The broker’s primary responsibility is to guarantee that messages are delivered reliably and, in some cases, transactionally.
This model works very well for command-style interactions such as order submission, workflow orchestration, and request-driven integration between systems. Messages are transient by design, consumers are expected to be online, and the system’s success is measured by how quickly and reliably messages move through it.
For many years, this was sufficient.
The Problems That Started to Appear
As companies began operating at internet scale, the assumptions underlying JMS and MQ started to break down. Data volumes increased dramatically, and systems needed to handle not thousands, but millions of events per second. Message brokers that tracked delivery state per consumer became bottlenecks, both technically and operationally.
More importantly, the nature of the data changed. Events were no longer just instructions to be executed and discarded. They became facts: user actions, transactions, logs, metrics, and behavioral signals that needed to be stored, analyzed, and revisited.
With JMS and MQ, once a message was consumed, it was gone. Reprocessing required complex duplication strategies or external storage. Adding a new consumer meant replaying data manually, if it was even possible. The broker was optimized for delivery, not for history.
At the same time, architectures became more decoupled. Multiple teams wanted to consume the same data independently, at their own pace, and for different purposes. In a traditional queue-based system, this required copying messages or creating parallel queues, increasing cost and complexity.
These pressures revealed a fundamental mismatch between what message queues were built for and what modern systems required.
The Conceptual Shift That Led to Kafka
Kafka was created to answer a different question. Instead of asking how to deliver messages efficiently, its designers asked how to store events reliably at scale and allow many consumers to read them independently.
The key idea was deceptively simple: treat data as an append-only log. Producers write events to a log, and consumers read from that log at their own pace. Events are not deleted when consumed. They are retained for a configurable period, or even indefinitely.
In this model, the broker no longer tracks who consumed what. Each consumer keeps track of its own position. This small change eliminates a major scalability bottleneck and makes replay a natural operation rather than an exceptional one.
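A toy sketch in Python makes the contrast concrete — the log retains everything, and each consumer carries its own offset (an illustration of the model, not Kafka's actual implementation):

```python
# Toy append-only log with consumer-tracked offsets.
class Log:
    def __init__(self):
        self.events = []          # nothing is deleted on read

    def append(self, event):
        self.events.append(event)

class Consumer:
    def __init__(self, log):
        self.log = log
        self.offset = 0           # each consumer owns its position

    def poll(self):
        batch = self.log.events[self.offset:]
        self.offset = len(self.log.events)
        return batch

log = Log()
for e in ["signup", "click", "purchase"]:
    log.append(e)

a, b = Consumer(log), Consumer(log)
print(a.poll())   # ['signup', 'click', 'purchase']
b.offset = 1      # replay is just rewinding an offset
print(b.poll())   # ['click', 'purchase']
```

Because the broker never mutates the log on consumption, adding a tenth consumer costs the same as adding the first.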
Kafka’s architecture reflects this shift. It is disk-first rather than memory-first, optimized for sequential writes and reads. It scales horizontally through partitioning. It treats durability and throughput as complementary goals rather than trade-offs.
Kafka was not created to replace message queues; it was created to solve problems message queues were never meant to solve.
From Transport to Platform: Why Kafka Streams Exists
Kafka alone provides storage and distribution of events, but it does not process them. Early Kafka users still needed external systems to transform, aggregate, and analyze data flowing through Kafka.
Kafka Streams was created to close this gap.
Instead of introducing another centralized processing cluster, Kafka Streams embeds stream processing directly into applications. This is a deliberate contrast with both JMS consumers and large external processing frameworks.
In a JMS-based system, consumers typically process messages one at a time, often statelessly, and rely on external databases for aggregation and state. Rebuilding state after a failure is complex and error-prone.
Kafka Streams, by contrast, assumes that stateful processing is normal. It provides abstractions for event streams and for state that evolves over time. It stores state locally for performance and backs it up to Kafka so it can be restored automatically. Processing logic, state, and data history are all aligned around the same event log.
This approach turns Kafka from a passive transport layer into an active data platform.
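The state-plus-changelog idea can be simulated in a few lines of Python. This is a deliberately tiny sketch: in Kafka Streams the changelog is a compacted Kafka topic and restoration happens automatically on rebalance or restart.

```python
# Stateful processing with a changelog: every local state change is
# mirrored to a log so state can be rebuilt after a crash.
def process(events, state, changelog):
    for key in events:
        state[key] = state.get(key, 0) + 1
        changelog.append((key, state[key]))

def restore(changelog):
    rebuilt = {}
    for key, value in changelog:   # replay in order; last write wins
        rebuilt[key] = value
    return rebuilt

state, changelog = {}, []
process(["a", "b", "a"], state, changelog)
print(state)                        # {'a': 2, 'b': 1}
print(restore(changelog) == state)  # True: replaying the log recovers state
```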
What Kafka and Kafka Streams Do Differently
The fundamental difference between JMS/MQ and Kafka is not syntax or APIs, but philosophy.
Message queues focus on messages as transient instructions. Kafka focuses on events as durable facts. Message queues optimize for delivery guarantees. Kafka optimizes for scalability, retention, and replay. Message queues treat consumers as part of the broker’s responsibility. Kafka treats consumers as independent actors.
Kafka Streams builds on this by assuming that computation belongs close to the data. Instead of shipping data to a processing engine, it ships processing logic to where the data already is. This inversion dramatically simplifies architectures while increasing reliability.
Why Someone “Woke Up and Created Kafka”
Kafka was born out of necessity. At companies like LinkedIn, existing messaging systems could not handle the volume, variety, and longevity of data they were producing. They needed a system that could ingest everything, store it reliably, and make it available to many consumers without coordination.
Kafka Streams followed naturally. Once data became durable and replayable, processing it in a stateless, fire-and-forget manner was no longer sufficient. Systems needed to compute continuously, maintain state, and recover automatically — all while remaining simple to operate.
Kafka and Kafka Streams are the result of rethinking messaging from first principles, in response to scale, data-driven architectures, and the need to treat events as first-class citizens.
Conclusion
JMS and traditional message queues remain excellent tools for command-based integration and transactional workflows. Kafka was not designed to replace them, but to address a different category of problems.
Kafka introduced the idea of a distributed, durable event log as the backbone of modern systems. Kafka Streams extended that idea by embedding real-time processing directly into applications.
[DotJs2024] Encrypt All Transports
In the shadowed corridors of digital discourse, where data streams pulse like vital arteries, lurks the imperative to cloak communications in unbreakable veils. Eleanor McHugh, a freelance reality consultant and anonymity architect with three decades spanning avionics to blockchain, issued this mandate at dotJS 2024. Ellie, co-founder of Innovative Identity Solutions, decried surveillance’s specter—from Lenovo’s BIOS interlopers to AI’s voracious scans—positing developers as privacy’s vanguard. Her whirlwind primer: wield WebSockets, RSA, AES in Node and browser crucibles, forging transports impervious to prying eyes.
Ellie’s ire ignited with 2015’s scandals: adware proxies hijacking HTTPS, unmasking “secure” flows for monetization. Today’s AI fervor—Facebook, Microsoft, Apple coveting content—echoes, demanding defiance. Privacy’s etymology—privity’s pact, NDA’s shroud—binds us; yet CTOs crave visibility, debugging APIs dissecting deeds at dawn’s witching hour. Ellie indicted: we, the coders, perpetuate panopticons, outsourcing souls to Albanian bunkers or quakesafe vaults. Reclamation resides in crypto’s toolkit: symmetric ciphers scrambling payloads, asymmetric duos authenticating origins, signatures vouching veracity, zero-knowledge veiling proofs.
Ellie’s arsenal gleams in GitHub’s forge: WebSockets for bidirectional brooks, RSA’s key pairs partitioning public probes from private vaults, AES randomizing streams into gibberish. Node’s crypto module, browser’s SubtleCrypto—both tame these titans. A vignette: socket spawns, keys exchanged via Diffie-Hellman ephemera, payloads AES-encrypted, RSA-signed—interception yields noise, replay thwarted by nonces. Zero-knowledge crowns: prove solvency sans balances, age sans birthdates—zk-SNARKs succinct, verifiable.
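The key-exchange step in that vignette can be illustrated with a toy Diffie-Hellman in Python — the parameters below are far too small for real use, where you would reach for Node's crypto module or the browser's SubtleCrypto with 2048-bit groups or X25519:

```python
import secrets

# Toy Diffie-Hellman key agreement (NON-SECURE parameter sizes,
# for illustration only).
p = 2_147_483_647                  # a small Mersenne prime modulus
g = 5                              # generator

a = secrets.randbelow(p - 2) + 1   # private exponents, never transmitted
b = secrets.randbelow(p - 2) + 1

A = pow(g, a, p)                   # public values exchanged over the socket
B = pow(g, b, p)

shared_client = pow(B, a, p)       # both ends derive the same secret,
shared_server = pow(A, b, p)       # which can then key AES for the payload
assert shared_client == shared_server
```

An eavesdropper sees only p, g, A, and B; recovering the shared secret requires solving the discrete logarithm, which is what makes the exchange safe at real-world key sizes.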
Ellie’s entreaty: tinker this trove, erect enclosures where client secrets elude server spies. As liveness biometrics and encrypted enclaves evolve, her free chapter beckons—crypto sans cost, privacy paramount. In software’s void, we architect anonymity; shirk not this solemnity.
Crypto Primitives in Play
Ellie enumerated: AES symmetrizes speed, RSA asymmetrizes trust—public encrypts, private decrypts. Signatures seal integrity; zk proofs affirm attributes incognito. WebSockets underpin, channels churning ciphered chatter—Node’s forge, browser’s bastion.
Defending Against Digital Dragnets
From BIOS betrayals to AI appetites, Ellie’s exposé exhorted: encrypt endpoints, anonymize identities. Her slides loop—3:30 eternities—urging uptake: GitHub’s gallery, SlideShare’s scrolls. Consultations await; privacy’s perimeter, we patrol.
[GoogleIO2024] What’s New in Flutter: Cross-Platform Innovations and Performance Boosts
Flutter’s pillars—portability, performance, and openness—drive its evolution. Kevin Moore and John Ryan highlighted five key updates, from AI integrations to web assembly support, empowering developers to create seamless experiences across devices.
Portability Across Platforms with Gemini API
Kevin stressed Flutter’s code-sharing efficiency, achieving 97% reuse in Google’s apps. The Gemini API integration via Google AI Dart SDK enables generative features, like image-to-text in apps such as Brickit, which identifies Lego bricks for model suggestions.
Global Gamers Challenge with Global Citizen showcased Flutter’s gaming potential, with winners like “Save the Lot” addressing environmental issues. Resources for game development, including Casual Games Toolkit, facilitate cross-platform builds.
Performance Enhancements with Impeller and Macros
John introduced Impeller, Flutter’s rendering engine, now enabled on Android, reducing jank through precompiled shaders. Benchmarks show up to 50% frame time improvements, enhancing experiences on mid-range devices.
Dart macros, in experimental preview, automate boilerplate code for tasks like JSON serialization, boosting developer productivity without runtime overhead.
Web Optimization Through WebAssembly
WebAssembly compilation in Flutter 3.22 doubles performance, with up to 4x gains in demanding frames. This consistency minimizes jank, enabling richer web apps.
Collaborations with browser teams ensure broad compatibility, aligning with Flutter’s open ethos.
These 2024 updates solidify Flutter’s role in efficient, high-performance app development.
[DevoxxGR2025] Understanding Flow in Software Development
James Lewis, a ThoughtWorks consultant, delivered a 41-minute talk at Devoxx Greece 2025, exploring how work flows through software development, drawing on information theory and complexity science.
The Nature of Work as Information
Lewis framed software development as transforming “stuff” into more valuable outputs, akin to enterprise workflows before computers. Work, invisible as information, flows through value streams—from ideas to production code. However, invisibility causes issues like unnoticed backlogs or undeployed code, acting as costly inventory. Lewis cited Don Reinertsen’s Principles of Product Development Flow, emphasizing that untested or undeployed code represents lost revenue, unlike visible factory inventory, which signals inefficiencies immediately.
Visualizing Value Streams
Using a value stream map, Lewis illustrated a typical development cycle: three days for coding, ten days waiting for testing, and 30 days for deployment, totaling 47 days of lead time, with 42 days as idle inventory. Wait times stem from coordination (teams waiting on others), scheduling (e.g., architecture reviews), and queues (backlogs). Shared test environments exacerbate delays, costing more than provisioning new ones. Lewis advocated mapping workflows to expose economic losses, making a case for faster delivery to stakeholders.
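The economics of such a map are easy to compute. The sketch below uses illustrative stage durations (not necessarily the exact figures from the talk) to derive lead time and flow efficiency:

```python
# Flow efficiency for a hypothetical value stream. Each stage is
# (name, active_days, waiting_days); numbers are illustrative.
stages = [("code", 3, 0), ("test", 1, 10), ("deploy", 1, 30)]

active = sum(a for _, a, _ in stages)
waiting = sum(w for _, _, w in stages)
lead_time = active + waiting

print(lead_time)                     # 45 days from idea to production
print(round(active / lead_time, 2))  # 0.11 -> ~89% of the time is waiting
```

Making the waiting columns visible is exactly what a value stream map does on a whiteboard.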
Reducing Batch Sizes for Flow
Lewis emphasized reducing batch sizes to improve flow, a principle rooted in queuing theory. Smaller batches, like deploying twice as often, halve wait times, enabling faster revenue generation. Using agent-based models, he simulated agile (single-piece flow) versus waterfall (100% batch) teams, showing agile teams deliver value faster. Limiting work-in-progress and controlling queue sizes prevent congestion collapse, ensuring smoother, more predictable workflows.
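The batch-size effect can be checked with a back-of-the-envelope simulation (an illustrative model, not code from the talk): if work completes steadily, halving the release interval halves the average time a finished item sits waiting.

```python
# Average delay from "code complete" to "deployed" under periodic releases.
def mean_wait(release_interval_days, finish_times):
    waits = []
    for t in finish_times:
        # Each item waits for the next scheduled release after it finishes.
        next_release = ((t // release_interval_days) + 1) * release_interval_days
        waits.append(next_release - t)
    return sum(waits) / len(waits)

finish = [i + 0.5 for i in range(30)]  # one item completes each day
print(mean_wait(10, finish))  # 5.0 -> releasing every 10 days
print(mean_wait(5, finish))   # 2.5 -> releasing twice as often halves the wait
```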
[AWSReInforce2025] Secure and scalable customer IAM with Cognito: Wiz’s success story (IAM221)
Lecturers
Rahul Sharma is Principal Product Manager for Amazon Cognito at AWS, driving the roadmap for customer identity and access management (CIAM) at global scale. Alex Vorte is Field CTO for Login and RBAC at Wiz, leading identity transformation initiatives that support FedRAMP authorization and enterprise compliance.
Abstract
The case study examines Wiz’s migration of 100,000+ identities to Amazon Cognito, achieving FedRAMP High authorization, 99.9% availability, and 70% cost reduction. It establishes best practices for CIAM modernization—migration strategies, machine identity integration, and SLA alignment—that balance security, scalability, and user experience.
Migration Strategy and Execution Framework
Wiz executed a phased migration across three cohorts:
- Pilot (0-10% users): Parallel authentication flows
- Canary (10-50%): Gradual traffic shift with feature flags
- Cutover (50-100%): Automated bulk migration
# Bulk migration pseudocode
for user in legacy_db.batch(1000):
    cognito.admin_create_user(
        Username=user.email,
        TemporaryPassword=generate_secure_temp(),
        UserAttributes=user.profile,
    )
    trigger_password_reset_email(user)
The platform processed 100,000 identities in under one year, with zero downtime during cutover.
Security and Compliance Architecture
FedRAMP High requirements drove design decisions:
- Encryption: KMS customer-managed keys for data at rest
- Network: VPC-private user pools with AWS PrivateLink
- Audit: CloudTrail integration for all admin actions
- MFA: Mandatory WebAuthn with hardware key support
Cognito’s built-in compliance (SOC, PCI, ISO) eliminated third-party audit burden.
Scalability and Availability Engineering
Architecture supports 10,000 RPS authentication:
Global Accelerator → CloudFront → Cognito (multi-AZ)
↓
Lambda@Edge for custom auth
SLA achievement:
- RTO: < 4 hours via cross-region replication
- RPO: < 1 minute with continuous backups
- Availability: 99.9% through health checks and auto-scaling
Machine Identity Integration
Beyond human users, Cognito manages:
- Service accounts: OAuth2 client credentials flow
- CI/CD pipelines: Federated tokens via OIDC
- IoT devices: Custom authenticator with X.509 certificates
// CI/CD token acquisition (illustrative; in practice the OAuth2
// client-credentials grant is issued by the Cognito domain's
// /oauth2/token endpoint)
CognitoIdentityProvider client = ...
InitiateAuthRequest request = new InitiateAuthRequest()
    .withAuthFlow(AuthFlowType.CLIENT_CREDENTIALS)
    .withClientId(PIPELINE_CLIENT_ID);
This unified approach reduced identity sprawl by 60%.
Cost Optimization Outcomes
Migration yielded 70% reduction through:
- Elimination of legacy IdP licensing
- Pay-per-monthly-active-user pricing
- Removal of custom auth infrastructure
- Automated user lifecycle management
Best Practices for CIAM Modernization
- Choose migration strategy by risk tolerance: parallel runs for zero-downtime
- Leverage Cognito migration APIs: bulk import with password hash preservation
- Implement progressive enhancement: start with email/password, add MFA/social later
- Align with product roadmap: design partner relationship for feature priority
Conclusion: CIAM as Strategic Enabler
Wiz’s transformation demonstrates that modern CIAM need not compromise between security, scale, and cost. Amazon Cognito provides the managed substrate that absorbs authentication complexity, enabling security teams to focus on policy and governance rather than infrastructure. The migration framework—phased execution, machine identity integration, and SLA engineering—offers a repeatable pattern for enterprises undergoing digital transformation.
[DevoxxUK2025] The Art of Structuring Real-Time Data Streams into Actionable Insights
At DevoxxUK2025, Olena Kutsenko, a data streaming expert from Confluent, delivered a compelling session on transforming chaotic real-time data streams into structured, actionable insights using Apache Kafka, Apache Flink, and Apache Iceberg. Through practical demos involving IoT devices and social media data, Olena demonstrated how to build scalable, low-latency data pipelines that ensure high data quality and flexibility for downstream analytics and AI applications. Her talk highlighted the power of combining these open-source technologies to handle messy, high-volume data streams, making them accessible for querying, visualization, and decision-making.
Apache Kafka: The Scalable Message Bus
Olena introduced Apache Kafka as the foundation for handling high-speed data streams, acting as a scalable message bus that decouples data producers (e.g., IoT devices) from consumers. Kafka’s design, with topics and partitions likened to multi-lane roads, ensures high throughput and low latency. In her IoT demo, Olena used a JavaScript producer to ingest sensor data (temperature, battery levels) into a Kafka topic, handling messy data with duplicates or missing sensor IDs. Kafka’s ability to replicate data and retain it for a defined period ensures reliability, allowing reprocessing if needed, making it ideal for industries like banking and retail, such as REWE’s use of Kafka for processing sold items.
Apache Flink: Real-Time Data Processing
Apache Flink was showcased as the engine for cleaning and structuring Kafka streams in real time. Olena explained Flink’s ability to handle both unbounded (real-time) and bounded (historical) data, using SQL for transformations. In the IoT demo, she applied a row_number function to deduplicate records by sensor ID and timestamp, filtered out invalid data (e.g., null sensor IDs), and reformatted timestamps to include time zones. A 5-second watermark ignored late-arriving data, and a tumbling window aggregated data into one-minute buckets, enriched with averages and standard deviations, ensuring clean, structured data ready for analysis.
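A plain-Python imitation of the tumbling-window step shows the shape of the computation (Flink expresses this in SQL over unbounded streams; the sensor values here are made up):

```python
from collections import defaultdict
from statistics import mean, pstdev

# Tumbling one-minute windows: each reading lands in exactly one bucket,
# keyed by (sensor_id, minute). Dedup and watermarking are omitted here.
readings = [
    ("s1", 0,  20.0), ("s1", 30, 22.0),   # (sensor_id, seconds, temperature)
    ("s1", 70, 30.0), ("s2", 10, 18.0),
]

windows = defaultdict(list)
for sensor, ts, temp in readings:
    windows[(sensor, ts // 60)].append(temp)

aggregated = {k: (round(mean(v), 2), round(pstdev(v), 2))
              for k, v in windows.items()}
print(aggregated[("s1", 0)])  # (21.0, 1.0) -> mean and std-dev, first minute
```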
Apache Iceberg: Structured Storage for Analytics
Olena introduced Apache Iceberg as an open table format that brings data warehouse-like structure to data lakes. Developed at Netflix to address Apache Hive’s limitations, Iceberg ensures atomic transactions and schema evolution without rewriting data. Its metadata layer, including manifest files and snapshots, supports time travel and efficient querying. In the demo, Flink’s processed data was written to Iceberg-compatible Kafka topics using Confluent’s Kora engine, eliminating extra migrations. Iceberg’s structure enabled fast queries and versioning, critical for analytics and compliance in regulated environments.
Querying and Visualization with Trino and Superset
To make data actionable, Olena used Trino, a distributed query engine, to run fast queries on Iceberg tables, and Apache Superset for visualization. In the IoT demo, Superset visualized temperature and humidity distributions, highlighting outliers. In a playful social media demo using Bluesky data, Olena enriched posts with sentiment analysis (positive, negative, neutral) and category classification via a GPT-3.5 Turbo model, integrated via Flink. Superset dashboards displayed author activity and sentiment distributions, demonstrating how structured data enables intuitive insights for non-technical users.
Ensuring Data Integrity and Scalability
Addressing audience questions, Olena explained Flink’s exactly-once processing guarantee, using watermarks and snapshots to ensure data integrity, even during failures. Kafka’s retention policies allow reprocessing, critical for regulatory compliance, though she noted custom solutions are often needed for audit evidence in financial sectors. Flink’s parallel processing scales effectively with Kafka’s partitioned topics, handling high-volume data without bottlenecks, making the pipeline robust for dynamic workloads like IoT or fraud detection in banking.
[OxidizeConf2024] Building Cross-Platform GUIs with Slint – A Practical Introduction
Introducing Slint’s Versatility
Creating intuitive, cross-platform graphical user interfaces (GUIs) is a critical challenge in modern software development. At OxidizeConf2024, Olivier Goffart, co-founder of Slint, introduced this Rust-based GUI framework designed for desktop, embedded, and bare-metal MCU applications. With a background in Qt and KDE, Olivier demonstrated Slint’s capabilities through a live coding session, showcasing its ability to craft native applications with minimal platform-specific adjustments.
Slint combines a declarative markup language with Rust’s imperative logic, offering a balance of expressiveness and performance. Olivier highlighted its support for desktop, mobile, and web platforms via WebAssembly, though the web is secondary to native targets. His demo illustrated the creation of a simple button with dynamic styling, leveraging Slint’s markup to define layouts and Rust for logic, making it accessible for developers accustomed to imperative programming.
Live Coding a Responsive UI
Olivier’s live coding session was a highlight, demonstrating Slint’s ease of use. He built a button with a gray background, padding, and centered alignment, using Slint’s markup to define the UI. By adding a touch area and binding it to a click event, he enabled dynamic color changes—red when pressed, gray otherwise—with a 300ms animation for smooth transitions. Border radius and width further enhanced the button’s aesthetics, showcasing Slint’s flexibility in meeting designer specifications.
The demo underscored Slint’s portability. Olivier noted that the same code, with minor adaptations, can run on bare-metal MCUs using tools like probe-rs. This portability, enabled by Rust’s ecosystem, allows developers to target diverse platforms without extensive rewrites. Slint’s integration with cargo ensures seamless compilation, making it an efficient choice for embedded and desktop applications alike.
Streamlining Development with Slint
Slint’s design prioritizes developer productivity and application performance. Olivier emphasized its lightweight nature, suitable for resource-constrained environments like MCUs. The framework’s ability to handle complex layouts with minimal code reduces development time, while Rust’s safety guarantees prevent common UI bugs. For embedded systems, Slint’s compatibility with Rust’s ecosystem tools like cargo and probe-rs simplifies deployment, as demonstrated by Olivier’s assurance that the demo code could run on an MCU with minor tweaks.
By open-sourcing Slint, Olivier and his team encourage community contributions, fostering a growing ecosystem. His invitation to visit the demo booth reflects Slint’s collaborative spirit, aiming to refine the framework through developer feedback. Slint’s practical approach to cross-platform GUI development positions it as a powerful tool for Rust developers, streamlining the creation of responsive, reliable applications.
🐧 Solved: Troubleshooting Login and WiFi DNS Issues in antiX Linux
If you’ve recently installed antiX Linux (a lightweight, fast, and stable distribution perfect for older hardware), you might run into two very common initial hurdles: failed logins and the infamous “Connected, but no internet” WiFi bug.
Here is a detailed guide on how to troubleshoot and fix these common antiX configuration issues.
Part I: Fixing Login Failures After Installation
You’ve installed antiX, set your username (e.g., hello) and password (e.g., world), but when you reboot, the system refuses to let you in.
❔ Why This Happens
This usually boils down to one of two things:
- The Console Login: antiX often defaults to a text-based console login prompt. You might be mistyping your password because of an incorrect keyboard layout (e.g., if you are using a non-US layout).
- User Mix-up: You may be confusing your standard user with the administrative root user.
✅ The Fix
Try logging in using these exact credentials first:
| User Type | Login | Password |
|---|---|---|
| Standard User | hello | world |
| Root User | root | world (or the password you set for the user) |
🚀 Launching the Desktop
Once you successfully log in, you will be in the terminal (command line). To start the graphical desktop environment, simply type:
startx
This will load your familiar antiX desktop.
Part II: Fixing WiFi: Connected, But No Internet
Once you’re on the desktop and connect to your WiFi network via the Connman System Tray, it shows “Connected”, but you can’t browse the internet. Your Ethernet connection works, proving the issue is specific to the WiFi configuration.
💡 Why This Happens (DNS Resolution Failure)
The most common reason for this issue in many Linux distributions, especially those using network managers like Connman, is a DNS (Domain Name System) configuration problem.
Your computer is successfully connected to the router and has an IP address (the physical connection is fine), but it doesn’t know which server to ask when you type a website name (like google.com). It can’t resolve the name into an IP address.
🛠️ The Fix: Manually Set and Lock DNS Servers
The most reliable way to fix this is to manually configure your DNS server entries and prevent the network manager from overwriting them.
Step 1: Confirm the DNS Issue
Open a terminal and run a quick test. This pings Google’s public DNS server IP address:
ping -c 4 8.8.8.8
- If you get replies (Success): The issue IS DNS. Proceed to Step 2.
- If you get 100% loss (Failure): The problem is deeper (like a driver issue). Try restarting the service:
  sudo service connman restart
Step 2: Edit the DNS Configuration File
We will edit the system’s DNS configuration file, /etc/resolv.conf.
- Open the file using a text editor (we use leafpad as it’s common in antiX):
  sudo leafpad /etc/resolv.conf
- Replace all content in the file with two reliable, public DNS server addresses:
  nameserver 8.8.8.8 # Google Public DNS
  nameserver 1.1.1.1 # Cloudflare Public DNS
- Save the file and close the editor.
Step 3: Prevent Overwriting (Lock the File)
By default, Connman or other network tools will overwrite this file on the next connection or reboot. We must lock it using the chattr command:
sudo chattr +i /etc/resolv.conf
The +i flag makes the file immutable, meaning no program (including Connman) can modify it.
🎉 Conclusion
After locking the file, your internet browsing should now work perfectly over WiFi!
If you ever need to change your DNS settings again, you must first unlock the file using:
sudo chattr -i /etc/resolv.conf
[RivieraDev2025] Julien Sulpis – What is Color? The Science Behind the Pixels
Julien Sulpis took the Riviera DEV 2025 stage to unravel the science of color, blending biology, physics, and technology to explain the quirks of digital color representation. His presentation demystified why colors behave unexpectedly across platforms and introduced modern color spaces like OKLAB and OKLCH, offering developers tools to create visually coherent interfaces. Julien’s approachable yet rigorous exploration provided actionable insights for enhancing user experience through better color management.
Understanding Color: From Light to Perception
Julien began by defining color as light, an electromagnetic wave with wavelengths between 400 and 700 nanometers, visible to the human eye. He explained how retinal cells—rods for low-light vision and cones for color perception—process these wavelengths. Three types of cones, sensitive to short (blue), medium (green), and long (yellow-orange) wavelengths, combine signals to create the colors we perceive. This biological foundation sets the stage for understanding why digital color representations can differ from human perception.
He highlighted common issues, such as why yellow appears brighter than blue at equal luminosity or why identical RGB values (e.g., green at 0, 255, 0) look different in Figma versus CSS. These discrepancies stem from the limitations of color spaces and their interaction with display technologies, prompting a deeper dive into digital color systems.
Color Spaces and Their Limitations
Julien explored color spaces like sRGB and P3, which define the range of colors a device can display within the CIE 1931 chromaticity diagram. sRGB, the standard for most screens, covers a limited portion of visible colors, while P3, used in modern devices like Macs, offers a broader gamut. He demonstrated how the same RGB code can yield different results across these spaces, as seen in his Figma-CSS example, due to calibration differences and gamut mismatches.
The talk addressed how traditional notations like RGB and HSL fail to account for human perception, leading to issues like inconsistent contrast in UI design. For instance, colors on a chromatic wheel may appear mismatched in brightness, complicating efforts to ensure accessibility-compliant contrast ratios. Julien emphasized that understanding these limitations is crucial for developers aiming to create consistent and inclusive interfaces.
Modern Color Spaces: OKLAB and OKLCH
To address these challenges, Julien introduced OKLAB and OKLCH, perception-based color spaces designed to align with how humans see color. Unlike RGB, which interpolates colors linearly, OKLAB and OKLCH ensure smoother transitions in gradients and palettes by accounting for perceptual uniformity. Julien demonstrated how CSS now supports these spaces, allowing developers to define gradients that maintain consistent brightness and contrast, enhancing visual harmony.
He showcased practical applications, such as using OKLCH to create accessible color palettes or interpolating colors in JavaScript libraries. These tools simplify tasks like ensuring sufficient contrast for text readability, a critical factor in accessible design. Julien also addressed how browsers handle unsupported color spaces, using tone mapping to approximate colors within a device’s gamut, though results vary by implementation.
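The full OKLab math is beyond a short snippet, but the root of the problem — doing arithmetic on gamma-encoded values — is easy to demonstrate with the standard sRGB transfer curve:

```python
# Why naive RGB gradient midpoints look too dark: sRGB channels are
# gamma-encoded, so averaging them does not average light intensity.
def srgb_to_linear(c):   # c in 0..1, standard sRGB transfer curve
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

black, white = 0.0, 1.0
naive_mid = (black + white) / 2                     # 0.5 in encoded space
physical_mid = linear_to_srgb((srgb_to_linear(black) + srgb_to_linear(white)) / 2)
print(round(naive_mid, 3), round(physical_mid, 3))  # 0.5 vs ~0.735
```

Perceptual spaces such as OKLab go further still, interpolating along axes that track human brightness and hue perception rather than raw light intensity.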
Practical Applications for Developers
Julien concluded with actionable advice for developers, urging them to leverage OKLAB and OKLCH for more accurate color calculations. He recommended configuring design tools like Figma to match target color spaces (e.g., sRGB for web) and using media queries to adapt colors for displays supporting wider gamuts like P3. By understanding the science behind color, developers can avoid pitfalls like inconsistent rendering and create interfaces that are both aesthetically pleasing and accessible.
He also encouraged experimentation with provided code samples and libraries, available via a QR code, to explore color transformations. Julien’s emphasis on practical, perception-driven solutions empowers developers to enhance user experiences while meeting accessibility standards.
[DevoxxGR2025] Nx for Gradle – Faster Builds, Better DX
Katerina Skroumpelou, a senior engineer at Nx, delivered a 15-minute talk at Devoxx Greece 2025, showcasing how the @nx/gradle plugin enhances Gradle builds for monorepos, improving developer experience (DX).
Streamlining Gradle Monorepos
Skroumpelou introduced Nx as a build system optimized for monorepos, used by over half of Fortune 500 companies. Gradle’s strength lies in managing multi-project setups, where subprojects (e.g., core, API) share dependencies and tasks. However, large repositories grow complex, slowing builds. Nx integrates seamlessly with Gradle, acting as a thin layer atop existing projects without requiring a rewrite. By running nx init in a Gradle project, developers enable Nx’s smart task management, preserving Gradle’s functionality while adding efficiency.
Optimizing CI Pipelines
Slow CI pipelines frustrate developers and inflate costs. Skroumpelou explained how Nx slashes CI times through distributed task execution, caching, and affected task detection. Unlike Gradle’s task-level parallelism and caching, Nx identifies changes in a pull request and runs only impacted tasks, skipping unaffected ones. For instance, a 30-minute pipeline could drop to five minutes by leveraging Nx’s project graph to avoid redundant builds or tests. Nx also splits large tasks, like end-to-end tests, into smaller, distributable units, further accelerating execution.
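Affected-task detection boils down to a reverse-dependency walk over the project graph. A minimal sketch, with hypothetical project names (this is not Nx's actual code):

```python
# Walk the reverse dependency graph from the projects touched in a PR;
# anything unreachable can be skipped in CI.
deps = {"api": ["core"], "web": ["core", "api"], "docs": []}  # project -> deps

def affected(changed, deps):
    hit = set(changed)
    grew = True
    while grew:                       # keep expanding until a fixed point
        grew = False
        for project, project_deps in deps.items():
            if project not in hit and hit.intersection(project_deps):
                hit.add(project)
                grew = True
    return hit

print(sorted(affected({"core"}, deps)))  # ['api', 'core', 'web']; 'docs' is skipped
```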
Handling Flaky Tests
Flaky tests disrupt workflows, forcing developers to rerun entire pipelines. Nx automatically detects and retries failed tests in isolation, preventing delays. Skroumpelou highlighted that this automation ensures pipelines remain efficient, even during meetings or interruptions. Nx, open-source under the MIT license, integrates with tools like VS Code, offering developers a free, scalable solution to enhance Gradle-based CI.