Posts Tagged ‘DevoxxUK2025’

[DevoxxUK2025] Zero-Bug Policy Success: A Journey to Developer Happiness

At DevoxxUK2025, Peter Hilton, a product manager at a Norwegian startup, shared an inspiring experience report on achieving a zero-bug policy. Drawing from his team’s journey in 2024, Peter narrated how a small, remote team transformed their development process by tackling a backlog of bugs, ultimately reaching a state of zero open bugs. His talk explored the practical steps, team dynamics, and challenges of implementing this approach, emphasizing its impact on developer morale, customer trust, and software quality. Through a blend of storytelling and data, Peter illustrated how a disciplined focus on fixing bugs can lead to a more predictable and joyful development environment.

The Pain of Bugs and the Vision for Change

Peter began by highlighting the chaos caused by an ever-growing bug backlog, which drained time, eroded team morale, and undermined customer confidence. In early 2024, his team faced a surge in bug reports following a marketing campaign for their Norwegian web shop, a circular economy platform selling reusable soap containers. The influx revealed testing gaps and consumed developer time, hindering experiments to boost customer conversions. Inspired by a blog post he wrote in 2021 and the “fix it now or delete it” infographic by Yasaman Farzan, Peter proposed a zero-bug policy—not as a mandate for bug-free software but as a target to clear open issues. The team, motivated by shared frustration, agreed to experiment, envisioning predictable support efforts and meaningful feature feedback.

Overcoming Resistance and Defining the Approach

Convincing a team to prioritize bug fixes over new features required navigating skepticism and detailed “what-if” scenarios from developers. Peter described how initial discussions risked paralysis, as developers questioned edge cases like handling multiple simultaneous bugs. To move forward, the team framed the policy as a safe experiment, setting clear goals: reducing time spent on bug discussions, improving software reliability, and enabling meaningful customer feedback. By April 2024, they committed to fixing bugs exclusively for two months, a bold move that demanded collective focus. Peter, as product manager, leveraged his role to align stakeholders, emphasizing business outcomes like increased customer conversions over bug counts, which helped secure buy-in.

The Hard Work of Bug Fixing

The transition to a zero-bug state was arduous but structured. Starting in May 2024, the team of six developers tackled 252 bugs over the year, fixing around five per week, with peaks of 10–15 during intense periods. Peter shared a chart showing the number of open bugs fluctuating but never exceeding 15, a manageable load compared to teams with hundreds of unresolved issues. The team’s small size and autonomy, as a fully remote group, allowed them to focus without external dependencies. By August, they reached “zero bug day,” a milestone celebrated as a turning point. This period also saw improved testing practices, as each fix included robust test coverage to prevent regressions, addressing technical debt accumulated from the rushed initial launch.

Sustaining Zero Bugs and Reaping Rewards

Post-August, the team entered a maintenance phase, fixing bugs as they arose—typically one or two at a time—while spending half their time on new features. Peter noted that this phase, with months starting at zero open bugs (e.g., March–May 2025), felt liberating. Developers spent less time in meetings, and Peter could focus on customer growth experiments without bugs skewing results. A calendar visualization for April 2025 showed most days bug-free, with only two minor issues fixed leisurely. The simplicity of handling bugs case-by-case, without complex prioritization, mirrored the “fix it now or delete it” mantra, fostering a happier, more productive team environment.

Lessons for Other Teams

Reflecting on the journey, Peter emphasized that a zero-bug policy requires team-wide commitment and a tolerance for initial discomfort. While their small, autonomous team faced no external dependencies, larger organizations might need to address inter-team coordination or legacy backlogs. He suggested a radical option: deleting large backlogs to focus on new reports, though he hadn’t tried it. The key takeaway was the value of simplicity—handling one bug at a time eliminated the need for intricate rules. Peter also highlighted that the process built psychological safety, as tackling a tough challenge together strengthened team cohesion, making it a worthwhile experiment for teams seeking better quality and morale.

[DevoxxUK2025] Kotlin: The New and Noteworthy

Anton Arhipov, a developer advocate from JetBrains, captivated the DevoxxUK2025 audience with an overview of Kotlin’s recent advancements and future roadmap. Focusing on Kotlin 2.0’s K2 compiler and upcoming features like guard conditions, context parameters, rich errors, and name-based destructuring, Anton highlighted how Kotlin balances conciseness, safety, and expressiveness. His interactive talk, enriched with personal anecdotes and live demos, underscored Kotlin’s evolution as a versatile, multi-platform language that empowers developers to write robust, readable code.

Kotlin 2.0 and the K2 Compiler

Anton introduced Kotlin 2.0, released nearly a year ago, emphasizing the K2 compiler’s new front-end intermediate representation (FIR) and control flow engine. K2 improved compilation performance by 40% in IntelliJ IDEA Ultimate, fixed numerous small bugs, and provided a scalable foundation for future features. By desugaring complex constructs (e.g., if to when expressions, for loops to iterators), K2 enhances smart casts and type inference, enabling seamless handling of nullable types and complex expressions without manual casting.
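The smart casts K2 strengthens are easiest to see in a small snippet (illustrative, not from the talk): once the compiler's flow analysis has proved a type and null check, the value can be used directly.

```kotlin
// Illustrative example: after the combined type and emptiness check,
// the compiler smart-casts `value` from Any? to String, so no explicit
// cast is needed on the following line.
fun describe(value: Any?): String {
    if (value is String && value.isNotEmpty()) {
        return value.uppercase() // used as a String here
    }
    return "nothing to describe"
}
```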

Guard Conditions for Safer Control Flow

Set to stabilize in Kotlin 2.2, guard conditions enhance when expressions by allowing conditional checks without variable binding. In a demo, Anton showed processing orders with guard conditions to handle subscriptions and discounts, reducing repetition and ensuring exhaustiveness. Unlike Java’s pattern matching, Kotlin leverages existing destructuring to avoid redundancy, with guard conditions adding logical safety by enforcing checks (e.g., amount > 100) directly in when branches, minimizing errors in complex control flows.
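The shape of a guarded branch can be sketched as follows; the Order hierarchy is a made-up stand-in for Anton's demo, and the guard syntax is the one previewed for Kotlin 2.2:

```kotlin
// Hypothetical Order types; guard syntax as previewed for Kotlin 2.2.
sealed interface Order {
    data class Subscription(val amount: Int) : Order
    data class OneOff(val amount: Int) : Order
}

fun discountPercent(order: Order): Int = when (order) {
    // the condition after `if` is the guard, checked in the branch itself
    is Order.Subscription if order.amount > 100 -> 20
    is Order.Subscription -> 10
    is Order.OneOff -> 0 // `when` stays exhaustive over the sealed hierarchy
}
```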

Name-Based Destructuring for Robustness

Anton discussed name-based destructuring, planned for experimental release in Kotlin 2.4. Unlike positional destructuring, which risks logical errors during refactoring, name-based destructuring matches variable names to class properties, improving readability and safety. This feature extends to non-data classes and sealed hierarchies, with plans to deprecate positional destructuring in future versions (e.g., Kotlin 3.0), ensuring long-term language consistency while maintaining backward compatibility.

Context Parameters for Scoped APIs

Context parameters, entering beta in Kotlin 2.2, enable scoped extension functions for type-safe builders, the mechanism behind Kotlin's DSL-style APIs. Anton demonstrated a client-building DSL where an infix extension function for dates (e.g., 10 March 2000) was restricted to a specific context, preventing global namespace pollution. This feature supports library developers in creating intuitive APIs, such as dependency injection-like logger scoping, reducing boilerplate and enhancing code clarity without compromising safety.
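A rough sketch of the idea, with hypothetical names; the `context(...)` syntax follows the Kotlin 2.2 beta proposal and may still change:

```kotlin
import java.time.LocalDate

// Hypothetical builder standing in for Anton's client DSL.
class ClientBuilder {
    var birthday: LocalDate? = null
}

// Resolvable only where a ClientBuilder is in scope, so the infix
// date notation (`10 March 2000`) does not leak into the global namespace.
context(builder: ClientBuilder)
infix fun Int.March(year: Int) {
    builder.birthday = LocalDate.of(year, 3, this)
}
```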

Rich Errors for Expressive Error Handling

Planned for experimental release in Kotlin 2.4, rich errors (previously called union types for errors) introduce a new error class syntax to distinguish error types explicitly. In a demo, Anton showed how rich errors improve over null-based error handling in functions like fetchUser and parseUser, enabling clear differentiation between network and parsing errors. Using when expressions, developers gain exhaustiveness checks and readable error handling, avoiding the verbosity of sealed hierarchies or result types.

Enhancing Compiler Safety with Return Value Checks

Anton highlighted a Kotlin 2.2 feature that mandates checking return values for standard library functions, preventing logical errors like missing return statements or incorrect function calls (e.g., using sort instead of sorted). By marking core functions with annotations, the compiler issues warnings for unused return values, reducing bugs like those in a demo where sorting a mutable list failed due to an overlooked return. This opt-in feature will expand to application code, enhancing reliability.

[DevoxxUK2025] The Art of Structuring Real-Time Data Streams into Actionable Insights

At DevoxxUK2025, Olena Kutsenko, a data streaming expert from Confluent, delivered a compelling session on transforming chaotic real-time data streams into structured, actionable insights using Apache Kafka, Apache Flink, and Apache Iceberg. Through practical demos involving IoT devices and social media data, Olena demonstrated how to build scalable, low-latency data pipelines that ensure high data quality and flexibility for downstream analytics and AI applications. Her talk highlighted the power of combining these open-source technologies to handle messy, high-volume data streams, making them accessible for querying, visualization, and decision-making.

Apache Kafka: The Scalable Message Bus

Olena introduced Apache Kafka as the foundation for handling high-speed data streams, acting as a scalable message bus that decouples data producers (e.g., IoT devices) from consumers. Kafka’s design, with topics and partitions likened to multi-lane roads, ensures high throughput and low latency. In her IoT demo, Olena used a JavaScript producer to ingest sensor data (temperature, battery levels) into a Kafka topic, handling messy data with duplicates or missing sensor IDs. Kafka’s ability to replicate data and retain it for a defined period ensures reliability, allowing reprocessing if needed, making it ideal for industries like banking and retail, such as REWE’s use of Kafka for processing sold items.

Apache Flink: Real-Time Data Processing

Apache Flink was showcased as the engine for cleaning and structuring Kafka streams in real time. Olena explained Flink’s ability to handle both unbounded (real-time) and bounded (historical) data, using SQL for transformations. In the IoT demo, she applied a row_number function to deduplicate records by sensor ID and timestamp, filtered out invalid data (e.g., null sensor IDs), and reformatted timestamps to include time zones. A 5-second watermark ignored late-arriving data, and a tumbling window aggregated data into one-minute buckets, enriched with averages and standard deviations, ensuring clean, structured data ready for analysis.
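The two transformations described above can be sketched in Flink SQL; the table and column names here are invented stand-ins, not Olena's actual schema:

```sql
-- Deduplicate: keep one row per sensor and event time, drop null sensor IDs.
SELECT sensor_id, temperature, battery, event_time
FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY sensor_id, event_time
                              ORDER BY event_time) AS row_num
    FROM iot_readings
)
WHERE row_num = 1 AND sensor_id IS NOT NULL;

-- Aggregate into one-minute tumbling windows with averages and deviations.
SELECT sensor_id,
       window_start,
       AVG(temperature)        AS avg_temperature,
       STDDEV_POP(temperature) AS stddev_temperature
FROM TABLE(TUMBLE(TABLE clean_readings, DESCRIPTOR(event_time), INTERVAL '1' MINUTE))
GROUP BY sensor_id, window_start;
```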

Apache Iceberg: Structured Storage for Analytics

Olena introduced Apache Iceberg as an open table format that brings data warehouse-like structure to data lakes. Developed at Netflix to address Apache Hive’s limitations, Iceberg ensures atomic transactions and schema evolution without rewriting data. Its metadata layer, including manifest files and snapshots, supports time travel and efficient querying. In the demo, Flink’s processed data was written to Iceberg-compatible Kafka topics using Confluent’s Kora engine, eliminating extra migrations. Iceberg’s structure enabled fast queries and versioning, critical for analytics and compliance in regulated environments.

Querying and Visualization with Trino and Superset

To make data actionable, Olena used Trino, a distributed query engine, to run fast queries on Iceberg tables, and Apache Superset for visualization. In the IoT demo, Superset visualized temperature and humidity distributions, highlighting outliers. In a playful social media demo using Bluesky data, Olena enriched posts with sentiment analysis (positive, negative, neutral) and category classification via a GPT-3.5 Turbo model, integrated via Flink. Superset dashboards displayed author activity and sentiment distributions, demonstrating how structured data enables intuitive insights for non-technical users.

Ensuring Data Integrity and Scalability

Addressing audience questions, Olena explained Flink’s exactly-once processing guarantee, using watermarks and snapshots to ensure data integrity, even during failures. Kafka’s retention policies allow reprocessing, critical for regulatory compliance, though she noted custom solutions are often needed for audit evidence in financial sectors. Flink’s parallel processing scales effectively with Kafka’s partitioned topics, handling high-volume data without bottlenecks, making the pipeline robust for dynamic workloads like IoT or fraud detection in banking.

[DevoxxUK2025] Maven Productivity Tips

Andres Almiray, a Java Champion and Senior Principal Product Manager at Oracle, shared practical Maven productivity tips at DevoxxUK2025, drawing from his 24 years of experience with the build tool. Through live demos and interactive discussions, he guided attendees on optimizing Maven builds for performance, reliability, and maintainability. Covering the Enforcer plugin, reproducible builds, dependency management, and performance enhancements like the Maven Daemon, Andres provided actionable strategies to streamline complex builds, emphasizing best practices over common pitfalls like overusing mvn clean install.

Why Avoid mvn clean install?

Andres humorously declared, “The first rule of Maven Club is you do not mvn clean install,” advocating for mvn verify instead. He explained that verify executes all phases up to verification, sufficient for most builds, while install unnecessarily copies artifacts to the local repository, slowing builds with I/O operations. Referencing a 2019 Devoxx Belgium talk by Robert Scholte, he noted that verify ensures the same build outcomes without the overhead, saving time unless artifacts must be shared across disconnected projects.

Harnessing the Enforcer Plugin

The Enforcer plugin was a centerpiece, with Andres urging all attendees to adopt it. He demonstrated configuring it to enforce Maven and Java versions (e.g., Maven 3.9.9, Java 21), plugin version specifications, and dependency convergence. In a live demo, a build failed due to missing Maven wrapper files and unspecified plugin versions, highlighting how Enforcer catches issues early. By fixing versions in the POM and using the Maven wrapper, Andres ensured consistent, reliable builds across local and CI environments.
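A configuration along these lines (a typical sketch, not Andres's exact POM; the version numbers are the ones from the talk) wires those rules into the build:

```xml
<!-- Sketch of an Enforcer setup: fails the build on a wrong Maven or Java
     version, unpinned plugin versions, or diverging transitive dependencies. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>enforce-environment</id>
      <goals><goal>enforce</goal></goals>
      <configuration>
        <rules>
          <requireMavenVersion><version>[3.9.9,)</version></requireMavenVersion>
          <requireJavaVersion><version>[21,)</version></requireJavaVersion>
          <requirePluginVersions/>
          <dependencyConvergence/>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```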

Achieving Reproducible Builds

Andres emphasized reproducible builds for supply chain security and contractual requirements. Using the Maven Archiver plugin, he set a fixed timestamp (e.g., a significant date like Back to the Future’s) to ensure deterministic artifact creation. In a demo, he inspected a JAR’s manifest and bytecode, confirming a consistent timestamp and Java 21 compatibility. This practice ensures bit-for-bit identical artifacts, enabling verification against tampering and simplifying compliance in regulated industries.
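Maven Archiver picks the fixed timestamp up from a single POM property; the date below is a placeholder, not the one Andres used:

```xml
<!-- Setting this property makes archive entries (JAR manifests, file dates)
     deterministic, so repeated builds produce bit-for-bit identical artifacts. -->
<properties>
  <project.build.outputTimestamp>2025-01-01T00:00:00Z</project.build.outputTimestamp>
</properties>
```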

Streamlining Dependency Management

To manage dependencies effectively, Andres showcased the Dependency plugin’s analyze goal, identifying unused dependencies like Commons Lang and incorrectly scoped SLF4J implementations. He advised explicitly declaring dependencies (e.g., SLF4J API) to avoid relying on transitive dependencies, ensuring clarity and preventing runtime issues. In a multi-module project, he used plugin management to standardize plugin versions, reducing configuration errors across modules.

Profiles and Plugin Flexibility

Andres demonstrated Maven profiles to optimize builds, moving resource-intensive plugins like maven-javadoc-plugin and maven-source-plugin to a specific profile for Maven Central deployments. This reduced default build times, as these plugins were only activated when needed. He also showed how to invoke plugins like echo without explicit configuration, using default settings or execution IDs, enhancing flexibility for ad-hoc tasks.
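The typical shape of such a profile looks like this (a sketch with a hypothetical profile id, not Andres's exact POM):

```xml
<!-- Javadoc and source jars are built only when the release profile is
     activated, e.g. with `mvn verify -Prelease`, keeping default builds fast. -->
<profiles>
  <profile>
    <id>release</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-javadoc-plugin</artifactId>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-source-plugin</artifactId>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
```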

Boosting Build Performance

To accelerate builds, Andres introduced the Maven Daemon and cache extension. In a demo, a clean verify build took 0.4 seconds initially but dropped to 0.2 seconds with caching, as unchanged results were reused. Paired with the Maven wrapper and tools like gum (which maps commands like build to verify), these tools simplify and speed up builds, especially in CI pipelines, by ensuring consistent Maven versions and caching outcomes.

[DevoxxUK2025] The Hidden Art of Thread-Safe Programming: Exploring java.util.concurrent

At DevoxxUK2025, Heinz Kabutz, a renowned Java expert, delivered an engaging session on the intricacies of thread-safe programming using java.util.concurrent. Drawing from his extensive experience, Heinz explored the subtleties of concurrency bugs, using the Vector class as a cautionary tale of hidden race conditions and deadlocks. Through live coding and detailed analysis, he showcased advanced techniques like lock striping in LongAdder, lock splitting in LinkedBlockingQueue, weakly consistent iteration in ArrayBlockingQueue, and check-then-act in CopyOnWriteArrayList. His interactive approach, starting with audience questions, provided practical insights into writing robust concurrent code, emphasizing the importance of using well-tested library classes over custom synchronizers.

The Perils of Concurrency Bugs

Heinz began with the Vector class, often assumed to be thread-safe due to its synchronized methods. However, he revealed its historical flaws: in Java 1.0, unsynchronized methods like size() caused visibility issues, and Java 1.1 introduced a race condition during serialization. By Java 1.4, fixes for these issues inadvertently added a deadlock risk when two vectors referenced each other during serialization. Heinz emphasized that concurrency bugs are elusive, often requiring specific conditions to manifest, making testing challenging. He recommended studying java.util.concurrent classes to understand robust concurrency patterns and avoid such pitfalls.

Choosing Reliable Concurrent Classes

Addressing an audience question about classes to avoid, Heinz advised against writing custom synchronizers, as recommended by Brian Goetz in Java Concurrency in Practice. Instead, use well-tested classes like ConcurrentHashMap and LinkedBlockingQueue, which are widely used in the JDK and have fewer reported bugs. For example, ConcurrentHashMap evolved from using ReentrantLock in Java 5 to synchronized blocks and red-black trees in Java 8, improving performance. In contrast, less-used classes like ConcurrentSkipListMap and LinkedBlockingDeque have known issues, making them riskier choices unless thoroughly tested.

Lock Striping with LongAdder

Heinz demonstrated the power of lock striping using LongAdder, which outperforms AtomicLong in high-contention scenarios. In a live demo, incrementing a counter 100 million times took 4.5 seconds with AtomicLong but only 84 milliseconds with LongAdder. This efficiency comes from LongAdder’s Striped64 base class, which uses a volatile long base and dynamically allocates cells (128 bytes each) to distribute contention across threads. Using a thread-local random probe, it minimizes clashes, capping at 16 cells to balance memory usage, making it ideal for high-throughput counters.
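The contrast is easy to reproduce in miniature (an illustrative demo, not Heinz's benchmark harness): several threads hammer one counter, and LongAdder absorbs the contention in separate cells that are only summed on read.

```java
import java.util.concurrent.atomic.LongAdder;

// Illustrative demo: concurrent increments land in LongAdder's internal
// cells instead of contending on a single CAS target, and sum() folds
// base plus cells into the final total.
class LongAdderDemo {
    static long count(int threads, int incrementsPerThread) {
        LongAdder adder = new LongAdder();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < incrementsPerThread; j++) {
                    adder.increment(); // contended updates spread across cells
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            try { t.join(); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return adder.sum(); // no increments lost despite the contention
    }
}
```

Swapping in an AtomicLong gives the same total but, under heavy contention, far more CAS retries, which is exactly the gap Heinz's 4.5-second vs 84-millisecond numbers illustrate.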

Lock Splitting in LinkedBlockingQueue

Exploring LinkedBlockingQueue, Heinz highlighted its use of lock splitting, employing separate locks for putting and taking operations to enable simultaneous producer-consumer actions. This design boosts throughput in single-producer, single-consumer scenarios, using an AtomicInteger to ensure visibility across locks. In a demo, LinkedBlockingQueue processed 10 million puts and takes in about 1 second, slightly outperforming LinkedBlockingDeque, which uses a single lock. However, in multi-consumer scenarios, contention between consumers can slow LinkedBlockingQueue, as shown in a two-consumer test taking 320 milliseconds.
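A minimal producer-consumer sketch (illustrative, not the talk's benchmark) shows the split locks at work: put() and take() proceed concurrently without blocking each other.

```java
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch: one producer and one consumer run concurrently.
// put() is guarded by the put lock, take() by the take lock, so neither
// thread serializes on the other.
class QueueDemo {
    static long sum(int items) {
        LinkedBlockingQueue<Integer> queue = new LinkedBlockingQueue<>(1024);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= items; i++) queue.put(i); // put lock
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        long total = 0;
        try {
            for (int i = 0; i < items; i++) total += queue.take(); // take lock
            producer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return total; // every produced item was consumed exactly once
    }
}
```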

Weakly Consistent Iteration in ArrayBlockingQueue

Heinz explained the unique iteration behavior of ArrayBlockingQueue, which uses a circular array and supports weakly consistent iteration. Unlike linked structures, its fixed array can overwrite data, complicating iteration. A demo showed an iterator caching the next item, continuing correctly even after modifications, thanks to weak references tracking iterators to prevent memory leaks. This design avoids ConcurrentModificationException but requires careful handling, as iterating past the array’s end can yield unexpected results, highlighting the complexity of seemingly simple concurrent structures.
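The behavior can be observed directly (an illustrative sketch, not Heinz's exact demo): the iterator keeps working while the queue is drained underneath it and never throws ConcurrentModificationException.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;

// Illustrative sketch of weakly consistent iteration: elements are removed
// mid-iteration, yet the iterator continues without an exception, possibly
// still returning its cached next item.
class WeakIterationDemo {
    static List<String> iterate() {
        ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<>(3);
        queue.add("a");
        queue.add("b");
        queue.add("c");
        Iterator<String> it = queue.iterator(); // tracked by the queue
        List<String> seen = new ArrayList<>();
        seen.add(it.next());  // "a", returned before any modification
        queue.poll();         // remove "a" while the iterator is live
        queue.poll();         // remove "b" as well
        while (it.hasNext()) {
            seen.add(it.next()); // no ConcurrentModificationException
        }
        return seen;
    }
}
```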

Check-Then-Act in CopyOnWriteArrayList

Delving into CopyOnWriteArrayList, Heinz showcased its check-then-act pattern to minimize locking. When removing an item, it checks the array snapshot without locking, only synchronizing if the item is found, reducing contention. A surprising discovery was a labeled if statement, a rare Java construct used to retry operations if the array changes, optimizing for the HotSpot compiler. Heinz noted this deliberate complexity underscores the expertise behind java.util.concurrent, encouraging developers to study these classes for better concurrency practices.
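The check-then-act in remove() is an internal optimization, but its foundation, the immutable array snapshot, is directly observable (an illustrative sketch, not from the talk): an iterator traverses the snapshot it started with, no matter how the list changes.

```java
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative sketch: the for-each loop iterates over the snapshot taken
// when iteration began, so removing every element mid-loop neither throws
// nor shortens the traversal.
class CowDemo {
    static int snapshotCount() {
        CopyOnWriteArrayList<Integer> list = new CopyOnWriteArrayList<>();
        for (int i = 0; i < 5; i++) list.add(i);
        int seen = 0;
        for (Integer i : list) { // pins the snapshot of all five elements
            list.remove(i);      // each remove copies the backing array
            seen++;
        }
        return seen; // 5, even though the list itself is now empty
    }
}
```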

Virtual Threads and Modern Concurrency

Answering an audience question about virtual threads, Heinz noted that Java 24 improved compatibility with wait and notify, reducing concerns compared to Java 21. However, he cautioned about pinning carrier threads in older versions, particularly in ConcurrentHashMap’s computeIfAbsent, which could exhaust thread pools. With Java 24, these issues are mitigated, making java.util.concurrent classes safer for virtual threads, though developers should remain vigilant about potential contention in high-thread scenarios.

[DevoxxUK2025] How to Ask Questions in 2025

Carly Richmond, a developer advocate at Elastic, delivered a concise and practical talk at DevoxxUK2025 on mastering developer forums in the AI era. Drawing from her experience as a front-end engineer and forum moderator, she shared strategies for asking and answering questions effectively on platforms like Stack Overflow, Discourse, and company-specific Slacks. Carly emphasized providing sufficient context, avoiding common pitfalls like exposing private data, and using AI-generated answers responsibly. Her engaging examples and actionable tips highlighted the importance of empathy and etiquette in fostering vibrant developer communities.

The Value of Developer Forums

Carly underscored that forums remain vital for connecting developers globally, offering solutions and collaboration opportunities. However, poor question quality—such as vague posts or failure to search existing answers—hampers effectiveness. She cited an example of a novice Kibana user posting “server not ready” without searching, missing readily available troubleshooting guides. Encouraging users to check documentation, search forums, or use Google first, Carly stressed that these habits save time and improve answer quality, especially for junior developers prone to panic.

Crafting Effective Questions

To get timely answers, Carly advised including key details: software versions, technology used (e.g., Elasticsearch, Logstash), code snippets, configuration examples, logs, and steps tried. Screenshots are useful for UI issues but not for code, which should be shared as text. For open-ended queries like best practices, specify the goal clearly to avoid intimidating responders. Carly shared an anonymized example of a vague post lacking version details, which led to follow-up questions, delaying resolution and frustrating both asker and community.

Avoiding Common Mistakes

Carly highlighted pitfalls like exposing sensitive information (e.g., API keys, proprietary code) in public forums, which can lead to security risks or platform bans. She recounted instances where moderators had to remove posts containing login credentials or endpoints. To prevent this, obfuscate sensitive data or use dummy values. Another mistake is impatience, such as repeatedly pinging moderators or hijacking others’ threads, which disrupts discussions. Carly advised waiting a few days before escalating and posting solutions if found independently.

Responsible Use of AI in Forums

With AI tools increasingly used in forums, Carly cautioned against posting unverified AI-generated answers. She shared a case where a well-meaning user posted incorrect RAG-generated responses from Elasticsearch documentation, later flagged by developers. To use AI responsibly, verify accuracy, disclose AI usage per forum rules, and avoid flooding threads with unhelpful content. Carly emphasized transparency, as some users prefer human-crafted answers, and unchecked AI responses can mislead or clutter discussions.

Maintaining Forum Etiquette

Carly stressed empathy in forums, noting that responders are developers, not chatbots. Rude behavior, like aggressive pings or irrelevant replies (e.g., pitching a cloud trial for an on-prem query), alienates the community. She also addressed irrelevant posts, like a user discussing their sick cat in a Java agent thread, which moderators should flag or remove. Adhering to the community’s code of conduct ensures constructive dialogue. For disputes, such as responders arguing over answers, Carly recommended flagging violations and focusing on testing suggested solutions.

Practical Tips for Unanswered Questions

When questions go unanswered, Carly suggested waiting a week before flagging to moderators, as forums offer best-effort support, not production-level urgency. If no response, add more context, like new attempts or error updates, to aid responders. For example, she advised a user whose week-old post went unanswered to refine their query with additional logs or context. Carly also encouraged sharing solutions to help future searchers, reinforcing the collaborative spirit of developer forums.

[DevoxxUK2025] Concerto for Java and AI: Building Production-Ready LLM Applications

At DevoxxUK2025, Thomas Vitale, a software engineer at Systematic, delivered an inspiring session on integrating generative AI into Java applications to enhance his music composition process. Combining his passion for music and software engineering, Thomas showcased a “composer assistant” application built with Spring AI, addressing real-world use cases like text classification, semantic search, and structured data extraction. Through live coding and a musical performance, he demonstrated how Java developers can leverage large language models (LLMs) for production-ready applications, emphasizing security, observability, and developer experience. His talk culminated in a live composition for an audience-chosen action movie scene, blending AI-driven suggestions with human creativity.

The Why Factor for AI Integration

Thomas introduced his “Why Factor” to evaluate hype technologies like generative AI. First, identify the problem: for his composer assistant, he needed to organize and access musical data efficiently. Second, assess production readiness: LLMs must be secure and reliable for real-world use. Third, prioritize developer experience: tools like Spring AI simplify integration without disrupting workflows. By focusing on these principles, Thomas avoided blindly adopting AI, ensuring it solved specific issues, such as automating data classification to free up time for creative tasks like composing music.

Enhancing Applications with Spring AI

Using a Spring Boot application with a Thymeleaf frontend, Thomas integrated Spring AI to connect to LLMs like those from Ollama (local) and Mistral AI (cloud). He demonstrated text classification by creating a POST endpoint to categorize musical data (e.g., “Irish tin whistle” as an instrument) using a chat client API. To mitigate risks like prompt injection attacks, he employed Java enumerations to enforce structured outputs, converting free text into JSON-parsed Java objects. This approach ensured security and usability, allowing developers to swap models without code changes, enhancing flexibility for production environments.
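The enum-constraint idea can be sketched independently of Spring AI (hypothetical categories, plain Java): whatever free text the model returns, only values that parse into the enum reach the domain model, so injected instructions cannot smuggle in arbitrary strings.

```java
import java.util.Locale;

// Minimal sketch of enum-constrained model output (not Spring AI's API):
// off-schema text, including prompt-injection payloads, collapses to a
// safe fallback value instead of being trusted.
class Classification {
    enum Category { INSTRUMENT, GENRE, MOOD, UNKNOWN }

    static Category parse(String modelOutput) {
        try {
            return Category.valueOf(modelOutput.trim().toUpperCase(Locale.ROOT));
        } catch (IllegalArgumentException e) {
            return Category.UNKNOWN; // anything outside the enum is rejected
        }
    }
}
```

Spring AI automates this shape by converting the model's response directly into the target Java type, which is what lets the application treat LLM output as ordinary typed data.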

Semantic Search and Retrieval-Augmented Generation

Thomas addressed the challenge of searching musical data by meaning, not just keywords, using semantic search. By leveraging embedding models in Spring AI, he converted text (e.g., “melancholic”) into numerical vectors stored in a PostgreSQL database, enabling searches for related terms like “sad.” He extended this with retrieval-augmented generation (RAG), where a chat client advisor retrieves relevant data before querying the LLM. For instance, asking, “What instruments for a melancholic scene?” returned suggestions like cello, based on his dataset, improving search accuracy and user experience.
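Under the hood, "searching by meaning" reduces to comparing embedding vectors; a minimal cosine-similarity sketch (the vectors in real use come from the embedding model, not hand-written values) shows the measure involved:

```java
// Illustrative sketch of the vector comparison behind semantic search:
// cosine similarity is 1 for identical directions, 0 for unrelated ones,
// so "melancholic" and "sad" embeddings score close to 1.
class Similarity {
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }
}
```

A vector store such as PostgreSQL with pgvector runs this kind of comparison at scale, returning the stored entries nearest to the query embedding.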

Structured Data Extraction and Human Oversight

To streamline data entry, Thomas implemented structured data extraction, converting unstructured director notes (e.g., from audio recordings) into JSON objects for database storage. Spring AI facilitated this by defining a JSON schema for the LLM to follow, ensuring structured outputs. Recognizing LLMs’ potential for errors, he emphasized keeping humans in the loop, requiring users to review extracted data before saving. This approach, applied to his composer assistant, reduced manual effort while maintaining accuracy, applicable to scenarios like customer support ticket processing.

Tools and MCP for Enhanced Functionality

Thomas enhanced his application with tools, enabling LLMs to call internal APIs, such as saving composition notes. Using Spring Data, he annotated methods to make them accessible to the model, allowing automated actions like data storage. He also introduced the Model Context Protocol (MCP), implemented in Quarkus, to integrate with external music software via MIDI signals. This allowed the LLM to play chord progressions (e.g., in A minor) through his piano software, demonstrating how MCP extends AI capabilities across local processes, though he cautioned it’s not yet production-ready.

Observability and Live Composition

To ensure production readiness, Thomas integrated OpenTelemetry for observability, tracking LLM operations like token usage and prompt augmentation. During the session, he invited the audience to choose a movie scene (action won) and used his application to generate a composition plan, suggesting chord progressions (e.g., I-VI-III-VII) and instruments like percussion and strings. He performed the music live, copy-pasting AI-suggested notes into his software, fixing minor bugs, and adding creative touches, showcasing a practical blend of AI automation and human artistry.

[DevoxxUK2025] Software Excellence in Large Orgs through Technical Coaching

Emily Bache, a seasoned technical coach, shared her expertise at DevoxxUK2025 on fostering software excellence in large organizations through technical coaching. Drawing on DORA research, which correlates high-quality code with faster delivery and better organizational outcomes, Emily emphasized practices like test-driven development (TDD) and refactoring to maintain code quality. She introduced technical coaching as a vital role, involving short, interactive learning hours and ensemble programming to build developer skills. Her talk, enriched with a refactoring demo and insights from Hartman’s proficiency taxonomy, offered a roadmap for organizations to reduce technical debt and enhance team performance.

The Importance of Code Quality

Emily began by referencing DORA research, which highlights capabilities like test automation, code maintainability, and small-batch development as predictors of high-performing teams. She cited a study by Adam Tornhill and Markus Borg, showing that poor-quality code can increase development time by up to 124%, with worst-case scenarios taking nine times longer. Technical debt, or “cruft,” slows feature delivery and makes schedules unpredictable. Practices like TDD, refactoring, pair programming, and clean architecture are essential to maintain code quality, ensuring software remains flexible and cost-effective to modify over time.

Technical Coaching as a Solution

In large organizations, Emily noted a gap in technical leadership, with architects often focused on high-level design and teams lacking dedicated tech leads. Technical coaches bridge this gap, working part-time across teams to teach skills and foster a quality culture. Unlike code reviews, which reinforce existing knowledge, coaching proactively builds skills through hands-on training. Emily’s approach involves collaborating with architects and tech leads, aligning with organizational goals while addressing low-level design practices like TDD and refactoring, which are often neglected but critical for maintainable code.

Learning Hours for Skill Development

Emily’s learning hours are short, interactive sessions inspired by Sharon Bowman’s training techniques. Developers work in pairs on exercises, such as refactoring katas (e.g., Tennis Refactoring Kata), to practice skills like extracting methods and naming conventions. A demo showcased decomposing a complex method into readable, well-named functions, emphasizing deterministic refactoring tools over AI assistants, which excel at writing new code but struggle with refactoring. These sessions teach vocabulary for discussing code quality and provide checklists for applying skills, ensuring developers can immediately use what they learn.
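To make the “decomposing a complex method” idea concrete, here is a minimal sketch of an extract-method refactoring in the spirit of a tennis-scoring kata. This is illustrative code, not the actual Tennis Refactoring Kata source: the class and method names are invented for the example.

```java
// Illustrative sketch of an extract-method refactoring, in the spirit of a
// tennis-scoring kata (not the actual kata code).
class TennisScore {
    // Before: one method mixes score lookup, the tie rule, and formatting.
    static String callBefore(int p1, int p2) {
        String[] calls = {"Love", "Fifteen", "Thirty", "Forty"};
        if (p1 == p2) {
            return p1 < 3 ? calls[p1] + "-All" : "Deuce";
        }
        return calls[p1] + "-" + calls[p2];
    }

    // After: extracting well-named methods makes each rule readable on its own.
    static String call(int p1, int p2) {
        if (p1 == p2) {
            return tiedScore(p1);
        }
        return scoreName(p1) + "-" + scoreName(p2);
    }

    private static String tiedScore(int points) {
        return points < 3 ? scoreName(points) + "-All" : "Deuce";
    }

    private static String scoreName(int points) {
        String[] calls = {"Love", "Fifteen", "Thirty", "Forty"};
        return calls[points];
    }
}
```

Both versions behave identically; the refactored one trades a few extra lines for methods whose names carry the vocabulary Emily wants teams to share when discussing code quality.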

Ensemble Programming for Real-World Application

Ensemble programming brings teams together to work on production code under a coach’s guidance. Unlike toy exercises, these sessions tackle real, complex problems, allowing developers to apply TDD and refactoring in context. Emily highlighted the collaborative nature of ensembles, where senior developers mentor juniors, fostering team learning. By addressing production code, coaches ensure skills translate to actual work, bridging the gap between training and practice. This approach helps teams internalize techniques like small-batch development and clean design, improving code quality incrementally.

Hartman’s Proficiency Taxonomy

Emily introduced Hartman’s proficiency taxonomy to explain skill acquisition, contrasting it with Bloom’s thinking-focused taxonomy. The stages—familiarity, comprehension, conscious effort, conscious action, proficiency, and expertise—map the journey from knowing a skill exists to applying it fluently in production. Learning hours help developers move from familiarity to conscious effort with exercises and feedback, while ensembles push them toward proficiency by applying skills to real code. Coaches tailor interventions based on a team’s proficiency level, ensuring steady progress toward mastery.

Getting Started with Technical Coaching

Emily encouraged organizations to adopt technical coaching, ideally led by tech leads with management support to allocate time for mentoring. She shared resources from her Samman Coaching website, including kata descriptions and learning hour guides, available through her nonprofit society for technical coaches. For mixed-experience teams, she pairs senior developers with juniors to foster mentoring, turning diversity into a strength. Her book, Samman Technical Coaching, and monthly online meetups provide further support for aspiring coaches, aiming to spread best practices and elevate code quality across organizations.

Links:

PostHeaderIcon [DevoxxUK2025] Passkeys in Practice: Implementing Passwordless Apps

At DevoxxUK2025, Daniel Garnier-Moiroux, a Spring Security team member at VMware, delivered an engaging talk on implementing passwordless authentication using passkeys and the WebAuthn specification. Highlighting the security risks of traditional passwords, Daniel demonstrated how passkeys leverage cryptographic keys stored on devices like YubiKeys, Macs, or smartphones to provide secure, user-friendly login flows. Using Spring Boot 3.4’s new WebAuthn support, he showcased practical steps to integrate passkeys into an existing application, emphasizing phishing resistance and simplified user experiences. His live coding demo and insights into Spring Security’s configuration made this a compelling session for developers seeking modern authentication solutions.

The Problem with Passwords

Daniel opened by underscoring the vulnerabilities of passwords, often reused or poorly secured, leading to frequent breaches. He introduced passwordless alternatives, starting with one-time tokens (OTTs), which Spring Security supports for temporary login links sent via email. While effective, OTTs require cumbersome steps like copying tokens across devices. Passkeys, based on the WebAuthn standard, offer a superior solution by using cryptographic keys tied to specific domains, eliminating password-related risks. Supported by major browsers and platforms like Apple, Google, and Microsoft, passkeys enable seamless authentication via biometrics, PINs, or physical devices, combining convenience with robust security.
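The random value behind a one-time-token login link can be generated with the JDK alone. The sketch below is a hypothetical helper, not Spring Security's own implementation (which manages token generation, delivery, and expiry for you); the class name is invented for illustration.

```java
import java.security.SecureRandom;
import java.util.Base64;

// Hypothetical helper: generates the random value a one-time-token (OTT)
// login link would embed. Spring Security's OTT support handles this,
// plus delivery and expiry, out of the box.
class OneTimeToken {
    private static final SecureRandom RANDOM = new SecureRandom();

    static String generate() {
        byte[] bytes = new byte[32];   // 256 bits of entropy
        RANDOM.nextBytes(bytes);
        // URL-safe encoding, so the token can sit inside an emailed link
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }
}
```

The copy-across-devices friction Daniel described is inherent to the flow itself, not the token format: however the token is encoded, the user must get it from their inbox into the login page.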

Understanding WebAuthn and Passkeys

Passkeys utilize asymmetric cryptography, where a private key remains on the user’s device (e.g., a YubiKey or iPhone) and a public key is shared with the server. Daniel explained the two-phase process: registration, where a key pair is generated and the public key is stored on the server, and authentication, where the server sends a challenge, the device signs it with the private key, and the server verifies it. This ensures phishing resistance, as keys are domain-specific and cannot be used on fraudulent sites. WebAuthn, a W3C standard backed by the FIDO Alliance, simplifies this process for developers by abstracting complex cryptography through browser APIs like navigator.credentials.create() and navigator.credentials.get().
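The cryptographic core of that challenge flow can be shown with the JDK's standard `java.security` APIs. This sketch demonstrates only the sign-and-verify principle using ECDSA over P-256 (the curve behind WebAuthn's default ES256 algorithm); real WebAuthn layers origin checks, signature counters, and CBOR-encoded attestation on top, all of which browsers and servers handle for you.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.SecureRandom;
import java.security.Signature;

// Sketch of the cryptographic core of a WebAuthn login: the authenticator
// signs a server challenge with its private key, and the server verifies
// the signature with the stored public key.
public class ChallengeDemo {
    static boolean authenticate() throws Exception {
        // Registration: the authenticator creates a key pair (ES256 = ECDSA P-256);
        // only the public key is sent to the server.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("EC");
        gen.initialize(256);
        KeyPair passkey = gen.generateKeyPair();

        // Authentication, step 1: the server sends a random challenge.
        byte[] challenge = new byte[32];
        new SecureRandom().nextBytes(challenge);

        // Step 2: the device signs the challenge with the private key,
        // which never leaves the device.
        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(passkey.getPrivate());
        signer.update(challenge);
        byte[] signature = signer.sign();

        // Step 3: the server verifies the signature with the stored public key.
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(passkey.getPublic());
        verifier.update(challenge);
        return verifier.verify(signature);
    }
}
```

Because each signature covers a fresh server-chosen challenge, a captured response cannot be replayed, and because the browser scopes credentials to the registering domain, a phishing site never receives a usable signature at all.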

Integrating Passkeys with Spring Security

Using a live demo, Daniel showed how to integrate passkeys into a Spring Boot 3.4 application. He added the spring-security-webauthn dependency and configured a security setup with the application name, relying party (RP) ID (e.g., localhost), and allowed origins. This minimal configuration enables a default passkey login page. For persistence, Spring Security 6.5 (releasing soon after the talk) offers JDBC support, requiring two tables: one for user credentials (storing public keys and metadata) and another linking passkeys to users. Daniel emphasized that Spring Security handles cryptographic validation, sparing developers from implementing complex WebAuthn logic manually.
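A minimal sketch of the configuration Daniel described might look as follows, assuming the WebAuthn DSL available from Spring Security 6.4 (the version bundled with Spring Boot 3.4); the application name and origins are placeholders, and this fragment is not runnable outside a Spring Boot project.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

// Sketch of a minimal passkey setup; values are placeholders for your app.
@Configuration
class PasskeyConfig {
    @Bean
    SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            .authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
            .formLogin(Customizer.withDefaults())  // default login page gains a passkey option
            .webAuthn(webAuthn -> webAuthn
                .rpName("My Demo App")                      // shown to the user at registration
                .rpId("localhost")                          // relying party ID: the site's domain
                .allowedOrigins("http://localhost:8080"));  // origins allowed to invoke WebAuthn
        return http.build();
    }
}
```

With only this in place, Spring Security serves a default login page offering passkey registration and sign-in, backed by in-memory credential storage until the JDBC tables Daniel mentioned are configured.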

Customizing the Passkey Experience

To enhance user experience, Daniel demonstrated creating a custom login page with a branded “Sign in with Passkey” button, styled with CSS (set in Comic Sans for humor). He highlighted the need for JavaScript to interact with WebAuthn APIs, copying Spring Security’s Apache-licensed sample code for the authentication flow. This involves handling CSRF tokens and redirecting users post-authentication. While minimal Java code is needed, developers must write some JavaScript to trigger the browser APIs. Daniel advised using Spring Security’s defaults for simplicity but encouraged customization for production apps, ensuring alignment with brand aesthetics.

Practical Considerations and Feedback

Daniel stressed that passkeys are not biometric data but cryptographic credentials, synced across devices via password managers or iCloud Keychain without server involvement. For organizations using identity providers like Keycloak or Azure Entra ID, passkey support is often a checkbox configuration, reducing implementation effort. He encouraged developers to provide feedback on Spring Security’s passkey support via GitHub issues, emphasizing community contributions to refine features. For those interested in deeper WebAuthn mechanics, he recommended Yubico’s developer guide over the dense W3C specification, offering practical insights for implementation.

Links:

PostHeaderIcon [DevoxxUK2025] Cracking the Code Review

Paco van Beckhoven, a senior software engineer at Hexagon’s HXDR division, delivered a comprehensive session at DevoxxUK2025 on improving code reviews to enhance code quality and team collaboration. Drawing from his experience with a cloud-based platform for 3D scans, Paco outlined strategies to streamline pull requests, provide constructive feedback, and leverage automated tools. Highlighting the staggering $316 billion cost of fixing bugs in 2013, he emphasized code reviews as a critical defense against defects. His practical tactics, from crafting concise pull requests to automating style checks, aim to reduce friction, foster learning, and elevate software quality, making code reviews a collaborative and productive process.

Streamlining Pull Requests

Paco stressed the importance of concise, well-documented pull requests to facilitate reviews. He advocated for descriptive titles, inspired by conventional commits, that include ticket numbers and context, such as “Fix null pointer in payment service.” Descriptions should outline the change, link related tickets or PRs, and explain design decisions to preempt reviewer questions. Templates with checklists ensure consistency, reminding developers to update documentation or verify tests. Paco also recommended self-reviewing PRs after a break to catch errors like unused code or typos, adding comments to clarify intent and reduce reviewer effort, ultimately speeding up the process.

Effective Feedback and Collaboration

Delivering constructive feedback is key to effective code reviews, Paco noted. He advised reviewers to start with the PR’s description and existing comments to understand context before diving into code. Reviews should prioritize design and functionality over minor style issues, ensuring tests are thoroughly checked for completeness. To foster collaboration, Paco suggested using “we” instead of “you” in comments to emphasize teamwork, posing questions rather than statements, and providing specific, actionable suggestions. Highlighting positive aspects, especially for junior developers, boosts confidence and encourages participation, creating a supportive review culture.

Leveraging Automated Tools

To reduce noise from trivial issues like code style, Paco showcased tools like Error Prone, OpenRewrite, Spotless, Checkstyle, and ArchUnit. Error Prone catches common mistakes and suggests fixes, while OpenRewrite automates migrations, such as JUnit 4 to 5. Spotless enforces consistent formatting across languages like Java and SQL, and Checkstyle ensures adherence to coding standards. ArchUnit enforces architectural rules, like preventing direct controller-to-persistence calls. Paco advised introducing these tools incrementally, involving the team in rule selection, and centralizing configurations in a parent POM to maintain consistency and minimize manual review efforts.
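The controller-to-persistence rule mentioned above can be expressed as an ArchUnit test. This is a sketch that assumes the ArchUnit dependency on the test classpath; the package names are placeholders for your own project, and in practice the method would be annotated as a JUnit test.

```java
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

// Sketch of the kind of ArchUnit rule Paco described: controllers must not
// bypass the service layer and call persistence code directly.
// Package names are placeholders.
class ArchitectureTest {
    void controllersDoNotTouchPersistence() {
        JavaClasses classes = new ClassFileImporter().importPackages("com.example.app");

        noClasses()
            .that().resideInAPackage("..controller..")
            .should().dependOnClassesThat().resideInAPackage("..persistence..")
            .check(classes);   // fails the build with the offending classes listed
    }
}
```

Because the rule runs as an ordinary test, an architectural violation fails CI with a named list of offending classes, so reviewers never have to police layering by hand.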

Links: