[RivieraDev2025] Julien Sulpis – What is Color? The Science Behind the Pixels

Julien Sulpis took the Riviera DEV 2025 stage to unravel the science of color, blending biology, physics, and technology to explain the quirks of digital color representation. His presentation demystified why colors behave unexpectedly across platforms and introduced modern color spaces like OKLAB and OKLCH, offering developers tools to create visually coherent interfaces. Julien’s approachable yet rigorous exploration provided actionable insights for enhancing user experience through better color management.

Understanding Color: From Light to Perception

Julien began by defining color as light, an electromagnetic wave with wavelengths between 400 and 700 nanometers, visible to the human eye. He explained how retinal cells—rods for low-light vision and cones for color perception—process these wavelengths. Three types of cones, sensitive to short (blue), medium (green), and long (yellow-orange) wavelengths, combine signals to create the colors we perceive. This biological foundation sets the stage for understanding why digital color representations can differ from human perception.

He highlighted common issues, such as why yellow appears brighter than blue at the same nominal lightness, or why identical RGB values (e.g., pure green, rgb(0, 255, 0)) look different in Figma versus CSS. These discrepancies stem from the limitations of color spaces and their interaction with display technologies, prompting a deeper dive into digital color systems.

Color Spaces and Their Limitations

Julien explored color spaces like sRGB and P3, which define the range of colors a device can display within the CIE 1931 chromaticity diagram. sRGB, the standard for most screens, covers a limited portion of visible colors, while P3, used in modern devices like Macs, offers a broader gamut. He demonstrated how the same RGB code can yield different results across these spaces, as seen in his Figma-CSS example, due to calibration differences and gamut mismatches.

The talk addressed how traditional notations like RGB and HSL fail to account for human perception, leading to issues like inconsistent contrast in UI design. For instance, colors picked at the same lightness around an HSL color wheel can differ noticeably in perceived brightness, complicating efforts to ensure accessibility-compliant contrast ratios. Julien emphasized that understanding these limitations is crucial for developers aiming to create consistent and inclusive interfaces.
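To make the contrast point concrete, here is a small Kotlin sketch (not from the talk) that computes WCAG relative luminance and contrast ratios; it shows why pure yellow is nearly unusable on a white background while pure blue passes comfortably, even though both are "fully saturated" RGB values.

```kotlin
import kotlin.math.pow

// Relative luminance of an sRGB color, following the WCAG 2.x formula.
fun relativeLuminance(r: Int, g: Int, b: Int): Double {
    fun channel(c: Int): Double {
        val s = c / 255.0
        return if (s <= 0.03928) s / 12.92 else ((s + 0.055) / 1.055).pow(2.4)
    }
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)
}

// WCAG contrast ratio between two luminances, ranging from 1.0 to 21.0.
fun contrastRatio(l1: Double, l2: Double): Double =
    (maxOf(l1, l2) + 0.05) / (minOf(l1, l2) + 0.05)

fun main() {
    val yellow = relativeLuminance(255, 255, 0) // ~0.93
    val blue = relativeLuminance(0, 0, 255)     // ~0.07
    println("yellow on white: %.2f:1".format(contrastRatio(yellow, 1.0))) // ~1.07:1, unreadable
    println("blue on white:   %.2f:1".format(contrastRatio(blue, 1.0)))   // ~8.59:1, passes AA/AAA
}
```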

Modern Color Spaces: OKLAB and OKLCH

To address these challenges, Julien introduced OKLAB and OKLCH, perception-based color spaces designed to align with how humans see color. Unlike RGB, where numerically linear interpolation produces perceptually uneven results, OKLAB and OKLCH are built for perceptual uniformity, so gradients and palettes transition smoothly. Julien demonstrated how CSS now supports these spaces, allowing developers to define gradients that maintain consistent brightness and contrast, enhancing visual harmony.
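As a rough illustration of why a perceptual space interpolates better (a sketch, not Julien's code), the Kotlin below blends two colors in OKLCH, where lightness, chroma, and hue can be interpolated independently. Converting the result back to sRGB would need the OKLAB matrices, which are omitted here, and the endpoint coordinates are only approximate.

```kotlin
import kotlin.math.atan2
import kotlin.math.sqrt

// A color in OKLCH: perceptual lightness L, chroma C, and hue H in degrees.
data class Oklch(val l: Double, val c: Double, val h: Double)

// OKLCH is simply the polar form of OKLAB: C = sqrt(a^2 + b^2), H = atan2(b, a).
fun oklabToOklch(l: Double, a: Double, b: Double): Oklch =
    Oklch(l, sqrt(a * a + b * b), Math.toDegrees(atan2(b, a)))

// Interpolating L, C, and H independently keeps perceived lightness changing evenly,
// unlike channel-wise RGB interpolation, which often dips through muddy midtones.
fun mix(from: Oklch, to: Oklch, t: Double): Oklch {
    val dh = (to.h - from.h + 540.0) % 360.0 - 180.0 // shortest way around the hue circle
    return Oklch(
        l = from.l + (to.l - from.l) * t,
        c = from.c + (to.c - from.c) * t,
        h = (from.h + dh * t + 360.0) % 360.0
    )
}

fun main() {
    // Approximate OKLCH coordinates for a saturated blue and a saturated yellow.
    val blue = Oklch(l = 0.45, c = 0.31, h = 264.0)
    val yellow = Oklch(l = 0.97, c = 0.21, h = 110.0)
    for (i in 0..4) println(mix(blue, yellow, i / 4.0))
}
```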

He showcased practical applications, such as using OKLCH to create accessible color palettes or interpolating colors in JavaScript libraries. These tools simplify tasks like ensuring sufficient contrast for text readability, a critical factor in accessible design. Julien also addressed how browsers handle colors that fall outside a display's capabilities, using gamut mapping to approximate them within the device's gamut, though results vary by implementation.

Practical Applications for Developers

Julien concluded with actionable advice for developers, urging them to leverage OKLAB and OKLCH for more accurate color calculations. He recommended configuring design tools like Figma to match target color spaces (e.g., sRGB for web) and using media queries to adapt colors for displays supporting wider gamuts like P3. By understanding the science behind color, developers can avoid pitfalls like inconsistent rendering and create interfaces that are both aesthetically pleasing and accessible.

He also encouraged experimentation with provided code samples and libraries, available via a QR code, to explore color transformations. Julien’s emphasis on practical, perception-driven solutions empowers developers to enhance user experiences while meeting accessibility standards.

[DevoxxGR2025] Nx for Gradle – Faster Builds, Better DX

Katerina Skroumpelou, a senior engineer at Nx, delivered a 15-minute talk at Devoxx Greece 2025, showcasing how the @nx/gradle plugin enhances Gradle builds for monorepos, improving developer experience (DX).

Streamlining Gradle Monorepos

Skroumpelou introduced Nx as a build system optimized for monorepos, used by over half of Fortune 500 companies. Gradle’s strength lies in managing multi-project setups, where subprojects (e.g., core, API) share dependencies and tasks. However, large repositories grow complex, slowing builds. Nx integrates seamlessly with Gradle, acting as a thin layer atop existing projects without requiring a rewrite. By running nx init in a Gradle project, developers enable Nx’s smart task management, preserving Gradle’s functionality while adding efficiency.

Optimizing CI Pipelines

Slow CI pipelines frustrate developers and inflate costs. Skroumpelou explained how Nx slashes CI times through distributed task execution, caching, and affected task detection. Unlike Gradle’s task-level parallelism and caching, Nx identifies changes in a pull request and runs only impacted tasks, skipping unaffected ones. For instance, a 30-minute pipeline could drop to five minutes by leveraging Nx’s project graph to avoid redundant builds or tests. Nx also splits large tasks, like end-to-end tests, into smaller, distributable units, further accelerating execution.
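Conceptually, affected detection boils down to a reverse-reachability walk over the project graph. The Kotlin sketch below is not Nx code, just an illustration of the idea: given the set of changed projects, everything that depends on them, directly or transitively, is affected and must run, while the rest can be skipped.

```kotlin
// Project graph: each project maps to the projects it depends on (names are made up).
val dependsOn = mapOf(
    "api" to setOf("core"),
    "web" to setOf("api"),
    "reports" to setOf("core"),
    "core" to emptySet()
)

// Invert the edges: for each project, which projects depend on it?
fun dependentsOf(graph: Map<String, Set<String>>): Map<String, Set<String>> {
    val dependents = mutableMapOf<String, MutableSet<String>>()
    for ((project, deps) in graph) {
        for (dep in deps) dependents.getOrPut(dep) { mutableSetOf() }.add(project)
    }
    return dependents
}

// Everything reachable from the changed projects via "is depended on by" edges is affected.
fun affected(graph: Map<String, Set<String>>, changed: Set<String>): Set<String> {
    val dependents = dependentsOf(graph)
    val result = changed.toMutableSet()
    val queue = ArrayDeque(changed)
    while (queue.isNotEmpty()) {
        val next = queue.removeFirst()
        for (dependent in dependents[next].orEmpty()) {
            if (result.add(dependent)) queue.addLast(dependent)
        }
    }
    return result
}

fun main() {
    // A change in core ripples everywhere; a change in reports affects only reports.
    println(affected(dependsOn, setOf("core")))     // [core, api, reports, web]
    println(affected(dependsOn, setOf("reports")))  // [reports]
}
```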

Handling Flaky Tests

Flaky tests disrupt workflows, forcing developers to rerun entire pipelines. Nx automatically detects and retries failed tests in isolation, preventing delays. Skroumpelou highlighted that this automation ensures pipelines remain efficient, even during meetings or interruptions. Nx, open-source under the MIT license, integrates with tools like VS Code, offering developers a free, scalable solution to enhance Gradle-based CI.

Links:

[DevoxxFR2025] Spark 4 and Iceberg: The New Standard for All Your Data Projects

The world of big data is constantly evolving, with new technologies emerging to address the challenges of managing and processing ever-increasing volumes of data. Apache Spark has long been a dominant force in big data processing, and its evolution continues with Spark 4. Complementing this is Apache Iceberg, a modern table format that is rapidly becoming the standard for managing data lakes. Pierre Andrieux from Capgemini and Houssem Chihoub from Databricks joined forces to demonstrate how the combination of Spark 4 and Iceberg is set to revolutionize data projects, offering improved performance, enhanced data management capabilities, and a more robust foundation for data lakes.

Spark 4: Boosting Performance and Data Lake Support

Pierre and Houssem highlighted the major new features and enhancements in Apache Spark 4. A key area of improvement is performance, with a new query engine and automatic query optimization designed to accelerate data processing workloads. Spark 4 also brings enhanced native support for data lakes, simplifying interactions with data stored in formats like Parquet and ORC on distributed file systems. This tighter integration improves efficiency and reduces the need for external connectors or complex configurations. The presentation showcased performance comparisons illustrating the gains achieved with Spark 4, particularly when working with large datasets in a data lake environment.
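As a minimal sketch of the kind of data-lake access described here (not code from the talk), the snippet below uses Spark's Java API from Kotlin to read a Parquet dataset and run a simple aggregation; the s3a path and column names are invented for illustration.

```kotlin
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

fun main() {
    // Local session for experimentation; a real deployment would point at a cluster.
    val spark = SparkSession.builder()
        .appName("datalake-sketch")
        .master("local[*]")
        .getOrCreate()

    // Hypothetical Parquet dataset sitting in a data lake directory.
    val events = spark.read().parquet("s3a://my-lake/events/")

    // Count error events per service and print the result.
    events.filter(col("status").equalTo("ERROR"))
        .groupBy(col("service"))
        .count()
        .show()

    spark.stop()
}
```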

Apache Iceberg Demystified: A Next-Generation Table Format

Apache Iceberg addresses the limitations of traditional table formats used in data lakes. Houssem demystified Iceberg, explaining that it provides a layer of abstraction on top of data files, bringing database-like capabilities to data lakes. Key features of Iceberg include:
Time Travel: The ability to query historical snapshots of a table, enabling reproducible reports and simplified data rollbacks.
Schema Evolution: Support for safely evolving table schemas over time (e.g., adding, dropping, or renaming columns) without requiring costly data rewrites.
Hidden Partitioning: Iceberg manages partition values itself, deriving them from column transforms, so queries benefit from partition pruning without users maintaining partition columns or writing partition-aware filters.
Atomic Commits: Changes to a table are committed atomically, providing reliability and consistency even in concurrent, distributed environments.

These features solve many of the pain points associated with managing data lakes, such as schema management complexities, difficulty in handling updates and deletions, and lack of transactionality.

The Power of Combination: Spark 4 and Iceberg

The true power lies in combining the processing capabilities of Spark 4 with the data management features of Iceberg. Pierre and Houssem demonstrated through concrete use cases and practical demonstrations how this combination enables building modern data pipelines. They showed how Spark 4 can efficiently read from and write to Iceberg tables, leveraging Iceberg’s features like time travel for historical analysis or schema evolution for seamlessly integrating data with changing structures. The integration allows data engineers and data scientists to work with data lakes with greater ease, reliability, and performance, making this combination a compelling new standard for data projects. The talk covered best practices for implementing data pipelines with Spark 4 and Iceberg and discussed potential pitfalls to avoid, providing attendees with the knowledge to leverage these technologies effectively in their own data initiatives.
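The sketch below (not taken from the session) illustrates the shape of such a pipeline in Kotlin against Spark's Java API: it registers a local Iceberg catalog, creates and evolves an Iceberg table through Spark SQL, and shows where a time-travel query would slot in. The catalog name, warehouse path, and table are placeholders.

```kotlin
import org.apache.spark.sql.SparkSession

fun main() {
    // Spark session with the Iceberg SQL extensions and a local Hadoop-backed catalog.
    val spark = SparkSession.builder()
        .appName("iceberg-sketch")
        .master("local[*]")
        .config("spark.sql.extensions",
                "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
        .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
        .config("spark.sql.catalog.local.type", "hadoop")
        .config("spark.sql.catalog.local.warehouse", "/tmp/iceberg-warehouse")
        .getOrCreate()

    // Create an Iceberg table and append some rows.
    spark.sql("CREATE TABLE IF NOT EXISTS local.db.orders (id BIGINT, amount DOUBLE) USING iceberg")
    spark.sql("INSERT INTO local.db.orders VALUES (1, 9.99), (2, 24.50)")

    // Schema evolution: add a column without rewriting existing data files.
    spark.sql("ALTER TABLE local.db.orders ADD COLUMN currency STRING")

    // Time travel: list snapshots, then query the table as of an earlier snapshot.
    spark.sql("SELECT snapshot_id, committed_at FROM local.db.orders.snapshots").show()
    // Replace <snapshot-id> with a value from the query above:
    // spark.sql("SELECT * FROM local.db.orders VERSION AS OF <snapshot-id>").show()

    spark.stop()
}
```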

Links:

[NDCMelbourne2025] DIY Usability Testing When You Have No Time and No Budget – Bekah Rice

In an insightful presentation at NDC Melbourne 2025, Bekah Rice, a UX consultant from True Matter, delivers a practical guide to conducting usability testing without the luxury of extensive time or financial resources. Drawing from her experience at a South Carolina-based UX consultancy, Bekah outlines an eight-step process to gather meaningful qualitative data, enabling developers and designers to refine digital products effectively. Her approach, demonstrated through a live usability test, underscores the importance of observing real user interactions to uncover design flaws and enhance user experience, even with minimal resources.

Step One: Preparing the Test Material

Bekah begins by emphasizing the need for a testable artifact, which need not be a fully developed product. A simple sketch, paper prototype, or a digital mockup created in tools like Figma can suffice. The key is to ensure the prototype provides enough context to mimic real-world usage. For instance, Bekah shares her plan to test a 12-year-old hospital website, currently undergoing a redesign, to identify usability issues. This approach allows teams to evaluate user interactions early, even before development begins, ensuring the product aligns with user needs from the outset.

Crafting Effective Tasks

The second step involves designing realistic tasks that reflect the user’s typical interactions with the product. Bekah illustrates this with a scenario for the hospital website, where users are asked to make an appointment with a doctor for regular care after moving to a new town. By phrasing tasks as open-ended questions and avoiding UI-specific terminology, she ensures users are not inadvertently guided toward specific actions. This method, she explains, reveals genuine user behavior, including potential failures, which are critical for identifying design shortcomings.

Recruiting the Right Participants

Finding suitable testers is crucial, and Bekah advocates for a pragmatic approach when resources are scarce. Instead of recruiting strangers, she suggests leveraging colleagues from unrelated departments, friends, or family members who are unfamiliar with the product. For the hospital website test, she selects Adam, a 39-year-old artist and warehouse tech, as a representative user. Bekah warns against testing with stakeholders or developers, as their biases can skew results. Offering small incentives, like coffee or lunch, can encourage participation, making the process feasible even on a tight budget.

Setting Up and Conducting the Test

Creating a comfortable testing environment and using minimal equipment are central to Bekah’s DIY approach. A quiet space, such as a conference room or a coffee shop, can replicate the user’s typical context. During the live demo, Bekah uses Adam’s iPhone to conduct the test, highlighting that borrowed devices can work if they allow observation. She also stresses the importance of a note-taking “sidekick” to record patterns and insights, which proved invaluable when Adam repeatedly missed critical UI elements, revealing design flaws like unclear button labels and missing appointment options.

Analyzing and Reporting Findings

The final step involves translating observations into actionable insights. Bekah emphasizes documenting both successes and failures, as seen when Adam struggled with the hospital website’s navigation but eventually found a phone number as a fallback. Immediate reporting to the team ensures fresh insights drive improvements, such as adding a map to the interface or renaming buttons for clarity. By presenting findings in simple bullet lists or visually appealing reports, teams can effectively communicate changes to stakeholders, ensuring the product evolves to meet user needs.

Links:

[DevoxxUK2025] Maven Productivity Tips

Andres Almiray, a Java Champion and Senior Principal Product Manager at Oracle, shared practical Maven productivity tips at DevoxxUK2025, drawing from his 24 years of experience with the build tool. Through live demos and interactive discussions, he guided attendees on optimizing Maven builds for performance, reliability, and maintainability. Covering the Enforcer plugin, reproducible builds, dependency management, and performance enhancements like the Maven Daemon, Andres provided actionable strategies to streamline complex builds, emphasizing best practices and warning against common pitfalls such as reflexively running mvn clean install.

Why Avoid mvn clean install?

Andres humorously declared, “The first rule of Maven Club is you do not mvn clean install,” advocating for mvn verify instead. He explained that verify executes all phases up to verification, sufficient for most builds, while install unnecessarily copies artifacts to the local repository, slowing builds with I/O operations. Referencing a 2019 Devoxx Belgium talk by Robert Scholte, he noted that verify ensures the same build outcomes without the overhead, saving time unless artifacts must be shared across disconnected projects.

Harnessing the Enforcer Plugin

The Enforcer plugin was a centerpiece, with Andres urging all attendees to adopt it. He demonstrated configuring it to enforce Maven and Java versions (e.g., Maven 3.9.9, Java 21), plugin version specifications, and dependency convergence. In a live demo, a build failed due to missing Maven wrapper files and unspecified plugin versions, highlighting how Enforcer catches issues early. By fixing versions in the POM and using the Maven wrapper, Andres ensured consistent, reliable builds across local and CI environments.

Achieving Reproducible Builds

Andres emphasized reproducible builds for supply chain security and contractual requirements. Using the Maven Archiver configuration, he set a fixed output timestamp (a memorable date borrowed from Back to the Future) to ensure deterministic artifact creation. In a demo, he inspected a JAR's manifest and bytecode, confirming a consistent timestamp and Java 21 compatibility. This practice ensures bit-for-bit identical artifacts, enabling verification against tampering and simplifying compliance in regulated industries.

Streamlining Dependency Management

To manage dependencies effectively, Andres showcased the Dependency plugin’s analyze goal, identifying unused dependencies like Commons Lang and incorrectly scoped SLF4J implementations. He advised explicitly declaring dependencies (e.g., SLF4J API) to avoid relying on transitive dependencies, ensuring clarity and preventing runtime issues. In a multi-module project, he used plugin management to standardize plugin versions, reducing configuration errors across modules.

Profiles and Plugin Flexibility

Andres demonstrated Maven profiles to optimize builds, moving resource-intensive plugins like maven-javadoc-plugin and maven-source-plugin to a specific profile for Maven Central deployments. This reduced default build times, as these plugins were only activated when needed. He also showed how to invoke plugins like echo without explicit configuration, using default settings or execution IDs, enhancing flexibility for ad-hoc tasks.

Boosting Build Performance

To accelerate builds, Andres introduced the Maven Daemon and the Maven Build Cache Extension. In a demo, a clean verify build took 0.4 seconds initially but dropped to 0.2 seconds with caching, as unchanged results were reused. Paired with the Maven wrapper and tools like gum (which maps generic commands such as build to the corresponding Maven phase, e.g., verify), these tools simplify and speed up builds, especially in CI pipelines, by ensuring consistent Maven versions and caching outcomes.

Links:

Program of Conferences 2026

[KotlinConf2025] LangChain4j with Quarkus

In a collaboration between Red Hat and Twilio, Max Rydahl Andersen and Konstantin Pavlov presented an illuminating session on the powerful combination of LangChain4j and Quarkus for building AI-driven applications with Kotlin. The talk addressed the burgeoning demand for integrating artificial intelligence into modern software and the common difficulties developers encounter, such as complex setups and performance bottlenecks. By merging Kotlin’s expressive power, Quarkus’s rapid runtime, and LangChain4j’s AI capabilities, the presenters demonstrated a streamlined and effective solution for creating cutting-edge applications.

A Synergistic Approach to AI Integration

The core of the session focused on the seamless synergy between the three technologies. Andersen and Pavlov detailed how Kotlin’s idiomatic features simplify the development of AI workflows. They presented a compelling case for using LangChain4j, a versatile framework for building language model-based applications, within the Quarkus ecosystem. Quarkus, with its fast startup times and low memory footprint, proved to be an ideal runtime for these resource-intensive applications. The presenters walked through practical code samples, illustrating how to set up the environment, manage dependencies, and orchestrate AI tools efficiently. They emphasized that this integrated approach significantly reduces the friction typically associated with AI development, allowing engineers to focus on business logic rather than infrastructural challenges.
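The session's exact code isn't reproduced here, but a typical AI service with the Quarkus LangChain4j extension looks roughly like the Kotlin sketch below: an interface annotated with @RegisterAiService, whose generated implementation talks to the configured model, injected into a plain REST resource. The prompt text, endpoint path, and all names are invented for illustration.

```kotlin
import dev.langchain4j.service.SystemMessage
import dev.langchain4j.service.UserMessage
import io.quarkiverse.langchain4j.RegisterAiService
import jakarta.ws.rs.GET
import jakarta.ws.rs.Path
import jakarta.ws.rs.QueryParam

// An AI service: Quarkus generates the implementation and wires in the configured model.
@RegisterAiService
interface ReleaseNotesAssistant {

    @SystemMessage("You are a concise release-notes writer for a developer tool.")
    @UserMessage("Summarize these commit messages as release notes: {commits}")
    fun summarize(commits: String): String
}

// A plain REST resource that delegates to the AI service via constructor injection.
@Path("/release-notes")
class ReleaseNotesResource(private val assistant: ReleaseNotesAssistant) {

    @GET
    fun generate(@QueryParam("commits") commits: String): String =
        assistant.summarize(commits)
}
```

The model itself (OpenAI, Ollama, etc.) is selected through application configuration rather than code, which is part of what keeps the business logic free of infrastructural concerns.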

Enhancing Performance and Productivity

The talk also addressed the critical aspect of performance. The presenters demonstrated how the combination of LangChain4j and Quarkus enables the creation of high-performing, AI-powered applications. They discussed the importance of leveraging Quarkus’s native compilation capabilities, which can lead to dramatic improvements in startup time and resource utilization. Additionally, they touched on the ongoing work to optimize the Kotlin compiler’s interaction with the Quarkus build system. Andersen noted that while the current process is efficient, there are continuous efforts to further reduce build times and enhance developer productivity. This commitment to performance underscores the value of this tech stack for developers who need to build scalable and responsive AI solutions.

The Path Forward

Looking ahead, Andersen and Pavlov outlined the future roadmap for LangChain4j and its integration with Quarkus. They highlighted upcoming features, such as the native asynchronous API, which will provide enhanced support for Kotlin coroutines. While acknowledging the importance of coroutines for certain use cases, they also reminded the audience that traditional blocking and virtual threads remain perfectly viable and often preferred for a majority of applications. They also extended an open invitation to the community to contribute to the project, emphasizing that the development of these tools is a collaborative effort. The session concluded with a powerful message: this technology stack is not just about building applications; it’s about empowering developers to confidently tackle the next generation of AI-driven projects.

Links:

[DotAI2024] Daniel Phiri – Bridging the Multimodal Divide: From Monoliths to Mosaic Mastery

Daniel Phiri, Developer Advocate at Weaviate—an open-source vector vault pioneering AI-native navigation—and a scribe of scripts with a penchant for open-source odysseys, bridged breaches at DotAI 2024. His clarion countered consternation: AI’s archipelago awash in apparatuses, scant in sustainable structures—fear’s fetter, fractured faculties. Phiri prescribed pluralism: modalities as medley, not monad—systems symphonizing senses, from spectral scans to syntactic streams, piping predictions to prowess.

Dismantling the Monolithic Myth: Modalities as Multifaceted Melange

Phiri’s parable pulsed: truffle’s trio—pasta’s plinth, Parmesan’s piquancy, fungus’ finesse—mirroring modalities’ mosaic, where singular silos starve synergy. Models’ mirage: magic boxes masking machinations, inputs imploding into ineffable infinities.

Multimodality’s mantle: processing’s palette—images’ illuminations, videos’ vigor, depths’ delineations, code’s calculus—beyond binaries, beckoning breadth. Phiri posited pipelines: predictive pulses—confidence’s calculus—channeling to tool-calling’s trove or retrieval’s reservoir.

Embeddings’ empire: encoders etching essences, vectors vaulted in versatile voids—similarity’s summons spanning spectra. Near-image nexuses: base64’s bastion, querying quanta for cinematic kin—posters procured through proximity’s prism.

Piping Pluralities: Advanced Assemblies for Augmented Actions

Phiri forged the flux: intent’s inception—microchip’s mystery, snapshot’s summons—”what’s this?”—cascading through cognoscenti: vision’s verdict, textual translation, tool’s tether. Assumptions assailed: monolithic mandates misguided—modest modules, manifold models, melded through metrics.

Retrieval’s renaissance: multimodal matrices, embeddings’ expanse enabling eclectic echoes—text’s tendrils twining visuals’ veins. Code’s cadence: Weaviate’s weave, near-image invocations instantiating inquiries, pipelines pulsing predictions to praxis.
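To ground the near-image idea, here is a hedged Kotlin sketch (not Phiri's code) that base64-encodes an image and sends a nearImage GraphQL query to a local Weaviate instance over plain HTTP. The collection name, fields, file path, and endpoint are assumptions, and a real project would more likely use an official client library.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.nio.file.Files
import java.nio.file.Path
import java.util.Base64

// Minimal JSON string escaping for wrapping the GraphQL query in a JSON payload.
fun jsonString(s: String): String =
    "\"" + s.replace("\\", "\\\\").replace("\"", "\\\"").replace("\n", "\\n") + "\""

fun main() {
    // Base64-encode the query image (path is hypothetical).
    val imageB64 = Base64.getEncoder()
        .encodeToString(Files.readAllBytes(Path.of("poster.jpg")))

    // GraphQL Get query with a nearImage filter; "Movie" and "title" are assumed schema names.
    val graphql = """
        { Get { Movie(nearImage: { image: "$imageB64" }, limit: 3) { title } } }
    """.trimIndent()

    // Weaviate exposes GraphQL at /v1/graphql; the body wraps the query in JSON.
    val body = """{"query": ${jsonString(graphql)}}"""

    val request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8080/v1/graphql"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()

    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    println(response.body())
}
```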

Phiri’s provocation: ponder peripheries—tasteless trifles transcended, users uplifted through unum’s unraveling. Cross chasms with choruses: smaller sentinels, synergistic streams—building’s beckon, beyond’s bridge.

Links:

[DevoxxFR2025] Alert, Everything’s Burning! Mastering Technical Incidents

In the fast-paced world of technology, technical incidents are an unavoidable reality. When systems fail, the ability to quickly detect, diagnose, and resolve issues is paramount to minimizing impact on users and the business. Alexis Chotard, Laurent Leca, and Luc Chmielowski from PayFit shared their invaluable experience and strategies for mastering technical incidents, even as a rapidly scaling “unicorn” company. Their presentation went beyond just technical troubleshooting, delving into the crucial aspects of defining and evaluating incidents, effective communication, product-focused response, building organizational resilience, managing on-call duties, and transforming crises into learning opportunities through structured post-mortems.

Defining and Responding to Incidents

The first step in mastering incidents is having a clear understanding of what constitutes an incident and its severity. Alexis, Laurent, and Luc discussed how PayFit defines and categorizes technical incidents based on their impact on users and business operations. This often involves established severity levels and clear criteria for escalation. Their approach emphasized a rapid and coordinated response involving not only technical teams but also product and communication stakeholders to ensure a holistic approach. They highlighted the importance of clear internal and external communication during an incident, keeping relevant parties informed about the status, impact, and expected resolution time. This transparency helps manage expectations and build trust during challenging situations.

Technical Resolution and Product Focus

While quick technical mitigation to restore service is the immediate priority during an incident, the PayFit team stressed the importance of a product-focused approach. This involves understanding the user impact of the incident and prioritizing resolution steps that minimize disruption for customers. They discussed strategies for effective troubleshooting, leveraging monitoring and logging tools to quickly identify the root cause. Beyond immediate fixes, they highlighted the need to address the underlying issues to prevent recurrence. This often involves implementing technical debt reduction measures or improving system resilience as a direct outcome of incident analysis. Their experience showed that a strong collaboration between engineering and product teams is essential for navigating incidents effectively and ensuring that the user experience remains a central focus.

Organizational Resilience and Learning

Mastering incidents at scale requires building both technical and organizational resilience. The presenters discussed how PayFit has evolved its on-call rotation models to ensure adequate coverage while maintaining a healthy work-life balance for engineers. They touched upon the importance of automation in detecting and mitigating incidents faster. A core tenet of their approach was the implementation of structured post-mortems (or retrospectives) after every significant incident. These post-mortems are blameless, focusing on identifying the technical and process-related factors that contributed to the incident and defining actionable steps for improvement. By transforming crises into learning opportunities, PayFit continuously strengthens its systems and processes, reducing the frequency and impact of future incidents. Their journey over 18 months demonstrated that investing in these practices is crucial for any growing organization aiming to build robust and reliable systems.

Links:

[KotlinConf2024] Kotlin Multiplatform Powers Google Workspace

At KotlinConf2024, Jason Parachoniak, a Google Workspace engineer, detailed Google’s shift from a Java-based multiplatform system to Kotlin Multiplatform (KMP), starting with Google Docs. For over a decade, Workspace has relied on shared code for consistency across platforms, like Gmail’s synchronization layer. Jason shared how KMP enhances this approach, leveraging Kotlin’s ecosystem for better performance and native interop. The talk highlighted lessons from the migration, focusing on build efficiency, runtime latency, and memory challenges, offering insights for large-scale KMP adoption.

Why Kotlin Multiplatform for Workspace

Google Workspace has long used multiplatform code to ensure consistency, such as identical email drafts across devices in Gmail or uniform document models in Docs. Jason explained that their Java-based system, using transpilers like J2ObjC, was effective but complex. KMP offers a modern alternative, allowing developers to write Kotlin code that compiles to native platforms, improving runtime performance and ecosystem integration. By targeting business logic—everything beyond the UI—Workspace ensures native-feel apps while sharing critical functionality, aligning with user expectations for productivity tools.
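The shared-business-logic idea can be pictured with a minimal Kotlin Multiplatform sketch (not Google's code): common code declares an expect function, each platform supplies an actual implementation in its own source set, and everything above it stays shared.

```kotlin
// commonMain: shared business logic, compiled for Android, iOS, JVM, and other targets.
data class Draft(val to: String, val body: String)

// Each platform must provide an implementation of this declaration.
expect fun currentEpochMillis(): Long

// Shared code calls the expect declaration without knowing which platform it runs on.
fun stampDraft(draft: Draft): Pair<Draft, Long> = draft to currentEpochMillis()

// jvmMain / androidMain: actual implementation backed by the JVM clock.
actual fun currentEpochMillis(): Long = System.currentTimeMillis()

// iosMain would supply its own actual, e.g. based on NSDate from Foundation
// (shown only as a comment here, since each source set lives in its own directory):
// actual fun currentEpochMillis(): Long =
//     (platform.Foundation.NSDate().timeIntervalSince1970 * 1000).toLong()
```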

Google Docs: The Migration Testbed

The migration began with Google Docs, chosen for its heavily annotated codebase, which tracks build performance, latency, and memory usage. Jason described how Docs is rolling out on KMP, providing metrics to refine the Kotlin compiler and runtime. This controlled environment allowed Google to compare KMP against their legacy system, ensuring parity before expanding to other apps. Collaboration with JetBrains and the Android team has been key, with iterative improvements driven by real-world data, setting a foundation for broader Workspace adoption.

Tackling Build Performance

Build performance posed challenges, as Google's Bazel-like build system effectively performs clean builds, unlike Gradle's incremental approach. Jason recounted a 10-minute build-time increase after a Kotlin Native update that produced more heavily optimized LLVM bitcode: the change improved binary size and runtime speed but slowed compilation. Profiling revealed a slow LLVM pass, already fixed in a newer LLVM version. Google patched LLVM temporarily, reducing build times from 30 to 8 minutes, and is working with JetBrains to update the LLVM version bundled with Kotlin Native, prioritizing stability alongside the K2 compiler rollout.

Optimizing Runtime Latency

Runtime latency, critical for Workspace apps, required Kotlin Native garbage collection (GC) tweaks. Jason noted that JetBrains proactively adjusted GC before receiving Google’s metrics, but further heuristics were needed as latency issues emerged. String handling in the interop layer also caused bottlenecks, addressed with temporary workarounds. Google is designing long-term fixes with JetBrains, ensuring smooth performance across platforms. These efforts highlight KMP’s potential for high-performance apps, provided runtime challenges are systematically resolved through collaboration.

Addressing Memory Usage

Memory usage spikes were a surprise, particularly between iOS 15 and 16. Jason explained that iOS 16’s security-driven constant pool remapping marked Kotlin Native’s vtables as dirty, consuming megabytes of RAM. Google developed a heap dump tool generating HPROF files, compatible with IntelliJ’s Java heap analysis, to diagnose issues. This tool is being upstreamed to Kotlin Native’s runtime, enhancing debugging capabilities. These insights are guiding Google’s memory optimization strategy, ensuring KMP meets Workspace’s stringent performance requirements as the migration expands.

Links: