[DevoxxFR2025] Spark 4 and Iceberg: The New Standard for All Your Data Projects
The world of big data is constantly evolving, with new technologies emerging to address the challenges of managing and processing ever-increasing volumes of data. Apache Spark has long been a dominant force in big data processing, and its evolution continues with Spark 4. Complementing this is Apache Iceberg, a modern table format that is rapidly becoming the standard for managing data lakes. Pierre Andrieux from Capgemini and Houssem Chihoub from Databricks joined forces to demonstrate how the combination of Spark 4 and Iceberg is set to revolutionize data projects, offering improved performance, enhanced data management capabilities, and a more robust foundation for data lakes.
Spark 4: Boosting Performance and Data Lake Support
Pierre and Houssem highlighted the major new features and enhancements in Apache Spark 4. A key area of improvement is performance, with query-engine improvements and automatic query optimization designed to accelerate data processing workloads. Spark 4 also brings enhanced native support for data lakes, simplifying interactions with data stored in formats like Parquet and ORC on distributed file systems. This tighter integration improves efficiency and reduces the need for external connectors or complex configurations. The presentation included performance comparisons illustrating the gains achieved with Spark 4, particularly when working with large datasets in a data lake environment.
Apache Iceberg Demystified: A Next-Generation Table Format
Apache Iceberg addresses the limitations of traditional table formats used in data lakes. Houssem demystified Iceberg, explaining that it provides a layer of abstraction on top of data files, bringing database-like capabilities to data lakes. Key features of Iceberg include:
– Time Travel: The ability to query historical snapshots of a table, enabling reproducible reports and simplified data rollbacks.
– Schema Evolution: Support for safely evolving table schemas over time (e.g., adding, dropping, or renaming columns) without requiring costly data rewrites.
– Hidden Partitioning: Iceberg tracks partition values automatically, so queries benefit from partition pruning without users needing to know, or manually maintain, the partitioning scheme.
– Atomic Commits: Ensures that changes to a table are atomic, providing reliability and consistency even in distributed environments.
These features solve many of the pain points associated with managing data lakes, such as schema management complexities, difficulty in handling updates and deletions, and lack of transactionality.
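Under Spark, these capabilities surface as ordinary SQL against an Iceberg catalog. The statements below are a hedged sketch: the catalog, database, and table names are hypothetical, and a Spark session configured with an Iceberg catalog named lake is assumed.

```sql
-- Schema evolution: a metadata-only change, no data files are rewritten
ALTER TABLE lake.db.orders ADD COLUMN discount DECIMAL(10, 2);

-- Time travel: query a historical snapshot by timestamp or snapshot id
SELECT * FROM lake.db.orders TIMESTAMP AS OF '2025-01-01 00:00:00';
SELECT * FROM lake.db.orders VERSION AS OF 4348519340324;

-- Rollback: restore the table to an earlier snapshot
CALL lake.system.rollback_to_snapshot('db.orders', 4348519340324);
```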
The Power of Combination: Spark 4 and Iceberg
The true power lies in combining the processing capabilities of Spark 4 with the data management features of Iceberg. Pierre and Houssem demonstrated through concrete use cases and practical demonstrations how this combination enables building modern data pipelines. They showed how Spark 4 can efficiently read from and write to Iceberg tables, leveraging Iceberg’s features like time travel for historical analysis or schema evolution for seamlessly integrating data with changing structures. The integration allows data engineers and data scientists to work with data lakes with greater ease, reliability, and performance, making this combination a compelling new standard for data projects. The talk covered best practices for implementing data pipelines with Spark 4 and Iceberg and discussed potential pitfalls to avoid, providing attendees with the knowledge to leverage these technologies effectively in their own data initiatives.
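As a hedged illustration of that workflow (table names invented, again assuming an Iceberg-enabled Spark catalog named lake), creating, writing, and updating an Iceberg table from Spark is plain SQL:

```sql
-- Create an Iceberg table; days(ts) is a hidden partition transform
CREATE TABLE lake.db.events (id BIGINT, ts TIMESTAMP, payload STRING)
USING iceberg
PARTITIONED BY (days(ts));

-- Every write commits atomically as a new table snapshot
INSERT INTO lake.db.events VALUES (1, current_timestamp(), 'hello');

-- Row-level upserts, which raw file layouts cannot do safely
MERGE INTO lake.db.events t
USING staged_updates s
ON t.id = s.id
WHEN MATCHED THEN UPDATE SET t.payload = s.payload
WHEN NOT MATCHED THEN INSERT *;
```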
Links:
- Pierre Andrieux: https://www.linkedin.com/in/pierre-andrieux-33135810b/
- Houssem Chihoub: https://www.linkedin.com/in/houssemchihoub/
- Capgemini: https://www.capgemini.com/
- Databricks: https://www.databricks.com/
- Apache Spark: https://spark.apache.org/
- Apache Iceberg: https://iceberg.apache.org/
- Devoxx France LinkedIn: https://www.linkedin.com/company/devoxx-france/
- Devoxx France Bluesky: https://bsky.app/profile/devoxx.fr
- Devoxx France Website: https://www.devoxx.fr/
[NDCMelbourne2025] DIY Usability Testing When You Have No Time and No Budget – Bekah Rice
In an insightful presentation at NDC Melbourne 2025, Bekah Rice, a UX consultant from True Matter, delivers a practical guide to conducting usability testing without the luxury of extensive time or financial resources. Drawing from her experience at a South Carolina-based UX consultancy, Bekah outlines an eight-step process to gather meaningful qualitative data, enabling developers and designers to refine digital products effectively. Her approach, demonstrated through a live usability test, underscores the importance of observing real user interactions to uncover design flaws and enhance user experience, even with minimal resources.
Step One: Preparing the Test Material
Bekah begins by emphasizing the need for a testable artifact, which need not be a fully developed product. A simple sketch, paper prototype, or a digital mockup created in tools like Figma can suffice. The key is to ensure the prototype provides enough context to mimic real-world usage. For instance, Bekah shares her plan to test a 12-year-old hospital website, currently undergoing a redesign, to identify usability issues. This approach allows teams to evaluate user interactions early, even before development begins, ensuring the product aligns with user needs from the outset.
Crafting Effective Tasks
The second step involves designing realistic tasks that reflect the user’s typical interactions with the product. Bekah illustrates this with a scenario for the hospital website, where users are asked to make an appointment with a doctor for regular care after moving to a new town. By phrasing tasks as open-ended questions and avoiding UI-specific terminology, she ensures users are not inadvertently guided toward specific actions. This method, she explains, reveals genuine user behavior, including potential failures, which are critical for identifying design shortcomings.
Recruiting the Right Participants
Finding suitable testers is crucial, and Bekah advocates for a pragmatic approach when resources are scarce. Instead of recruiting strangers, she suggests leveraging colleagues from unrelated departments, friends, or family members who are unfamiliar with the product. For the hospital website test, she selects Adam, a 39-year-old artist and warehouse tech, as a representative user. Bekah warns against testing with stakeholders or developers, as their biases can skew results. Offering small incentives, like coffee or lunch, can encourage participation, making the process feasible even on a tight budget.
Setting Up and Conducting the Test
Creating a comfortable testing environment and using minimal equipment are central to Bekah’s DIY approach. A quiet space, such as a conference room or a coffee shop, can replicate the user’s typical context. During the live demo, Bekah uses Adam’s iPhone to conduct the test, highlighting that borrowed devices can work if they allow observation. She also stresses the importance of a note-taking “sidekick” to record patterns and insights, which proved invaluable when Adam repeatedly missed critical UI elements, revealing design flaws like unclear button labels and missing appointment options.
Analyzing and Reporting Findings
The final step involves translating observations into actionable insights. Bekah emphasizes documenting both successes and failures, as seen when Adam struggled with the hospital website’s navigation but eventually found a phone number as a fallback. Immediate reporting to the team ensures fresh insights drive improvements, such as adding a map to the interface or renaming buttons for clarity. By presenting findings in simple bullet lists or visually appealing reports, teams can effectively communicate changes to stakeholders, ensuring the product evolves to meet user needs.
Links:
[DevoxxUK2025] Maven Productivity Tips
Andres Almiray, a Java Champion and Senior Principal Product Manager at Oracle, shared practical Maven productivity tips at DevoxxUK2025, drawing from his 24 years of experience with the build tool. Through live demos and interactive discussions, he guided attendees on optimizing Maven builds for performance, reliability, and maintainability. Covering the Enforcer plugin, reproducible builds, dependency management, and performance enhancements like the Maven Daemon, Andres provided actionable strategies to streamline complex builds, emphasizing best practices over common pitfalls like overusing mvn clean install.
Why Avoid mvn clean install?
Andres humorously declared, “The first rule of Maven Club is you do not mvn clean install,” advocating for mvn verify instead. He explained that verify executes all phases up to verification, sufficient for most builds, while install unnecessarily copies artifacts to the local repository, slowing builds with I/O operations. Referencing a 2019 Devoxx Belgium talk by Robert Scholte, he noted that verify ensures the same build outcomes without the overhead, saving time unless artifacts must be shared across disconnected projects.
Harnessing the Enforcer Plugin
The Enforcer plugin was a centerpiece, with Andres urging all attendees to adopt it. He demonstrated configuring it to enforce Maven and Java versions (e.g., Maven 3.9.9, Java 21), plugin version specifications, and dependency convergence. In a live demo, a build failed due to missing Maven wrapper files and unspecified plugin versions, highlighting how Enforcer catches issues early. By fixing versions in the POM and using the Maven wrapper, Andres ensured consistent, reliable builds across local and CI environments.
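A minimal configuration in the spirit of the demo might look as follows; the plugin version and version bounds here are illustrative, not necessarily what Andres used:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <version>3.5.0</version>
  <executions>
    <execution>
      <id>enforce-versions</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <requireMavenVersion>
            <version>[3.9.9,)</version>
          </requireMavenVersion>
          <requireJavaVersion>
            <version>[21,)</version>
          </requireJavaVersion>
          <!-- Fails the build when any plugin version is unspecified -->
          <requirePluginVersions/>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```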
Achieving Reproducible Builds
Andres emphasized reproducible builds for supply chain security and contractual requirements. Using the Maven Archiver, he set a fixed timestamp (e.g., a significant date like Back to the Future’s) to ensure deterministic artifact creation. In a demo, he inspected a JAR’s manifest and bytecode, confirming a consistent timestamp and Java 21 compatibility. This practice produces bit-for-bit identical artifacts, enabling verification against tampering and simplifying compliance in regulated industries.
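In practice this hinges on a single POM property read by the Maven Archiver; a sketch, with the Back to the Future date used purely as an illustrative value:

```xml
<properties>
  <!-- Any fixed ISO-8601 instant makes archive entries deterministic -->
  <project.build.outputTimestamp>2015-10-21T07:28:00Z</project.build.outputTimestamp>
</properties>
```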
Streamlining Dependency Management
To manage dependencies effectively, Andres showcased the Dependency plugin’s analyze goal, identifying unused dependencies like Commons Lang and incorrectly scoped SLF4J implementations. He advised explicitly declaring dependencies (e.g., SLF4J API) to avoid relying on transitive dependencies, ensuring clarity and preventing runtime issues. In a multi-module project, he used plugin management to standardize plugin versions, reducing configuration errors across modules.
Profiles and Plugin Flexibility
Andres demonstrated Maven profiles to optimize builds, moving resource-intensive plugins like maven-javadoc-plugin and maven-source-plugin to a specific profile for Maven Central deployments. This reduced default build times, as these plugins were only activated when needed. He also showed how to invoke plugins like echo without explicit configuration, using default settings or execution IDs, enhancing flexibility for ad-hoc tasks.
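The shape of that change is a profile that attaches the expensive plugins only when activated. A hedged sketch follows; the profile id is invented for illustration:

```xml
<profiles>
  <profile>
    <id>publication</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-javadoc-plugin</artifactId>
          <executions>
            <execution>
              <id>attach-javadocs</id>
              <goals>
                <goal>jar</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-source-plugin</artifactId>
          <executions>
            <execution>
              <id>attach-sources</id>
              <goals>
                <goal>jar-no-fork</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
```

Activating the profile only when publishing, e.g. mvn -Ppublication verify, keeps everyday builds fast.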
Boosting Build Performance
To accelerate builds, Andres introduced the Maven Daemon and the build cache extension. In a demo, a clean verify build initially took 0.4 seconds but dropped to 0.2 seconds with caching, as unchanged results were reused. Paired with the Maven wrapper and tools like gum (which maps commands like build to verify), these tools simplify and speed up builds, especially in CI pipelines, by ensuring consistent Maven versions and caching outcomes.
Links:
[KotlinConf2025] LangChain4j with Quarkus
In a collaboration between Red Hat and Twilio, Max Rydahl Andersen and Konstantin Pavlov presented an illuminating session on the powerful combination of LangChain4j and Quarkus for building AI-driven applications with Kotlin. The talk addressed the burgeoning demand for integrating artificial intelligence into modern software and the common difficulties developers encounter, such as complex setups and performance bottlenecks. By merging Kotlin’s expressive power, Quarkus’s rapid runtime, and LangChain4j’s AI capabilities, the presenters demonstrated a streamlined and effective solution for creating cutting-edge applications.
A Synergistic Approach to AI Integration
The core of the session focused on the seamless synergy between the three technologies. Andersen and Pavlov detailed how Kotlin’s idiomatic features simplify the development of AI workflows. They presented a compelling case for using LangChain4j, a versatile framework for building language model-based applications, within the Quarkus ecosystem. Quarkus, with its fast startup times and low memory footprint, proved to be an ideal runtime for these resource-intensive applications. The presenters walked through practical code samples, illustrating how to set up the environment, manage dependencies, and orchestrate AI tools efficiently. They emphasized that this integrated approach significantly reduces the friction typically associated with AI development, allowing engineers to focus on business logic rather than infrastructural challenges.
Enhancing Performance and Productivity
The talk also addressed the critical aspect of performance. The presenters demonstrated how the combination of LangChain4j and Quarkus enables the creation of high-performing, AI-powered applications. They discussed the importance of leveraging Quarkus’s native compilation capabilities, which can lead to dramatic improvements in startup time and resource utilization. Additionally, they touched on the ongoing work to optimize the Kotlin compiler’s interaction with the Quarkus build system. Andersen noted that while the current process is efficient, there are continuous efforts to further reduce build times and enhance developer productivity. This commitment to performance underscores the value of this tech stack for developers who need to build scalable and responsive AI solutions.
The Path Forward
Looking ahead, Andersen and Pavlov outlined the future roadmap for LangChain4j and its integration with Quarkus. They highlighted upcoming features, such as the native asynchronous API, which will provide enhanced support for Kotlin coroutines. While acknowledging the importance of coroutines for certain use cases, they also reminded the audience that traditional blocking and virtual threads remain perfectly viable and often preferred for a majority of applications. They also extended an open invitation to the community to contribute to the project, emphasizing that the development of these tools is a collaborative effort. The session concluded with a powerful message: this technology stack is not just about building applications; it’s about empowering developers to confidently tackle the next generation of AI-driven projects.
Links:
[DotAI2024] DotAI 2024: Daniel Phiri – Bridging the Multimodal Divide: From Monoliths to Mosaic Mastery
Daniel Phiri, Developer Advocate at Weaviate, an open-source vector database for AI-native applications, took on a common source of anxiety at DotAI 2024: an AI landscape overflowing with tools yet short on sustainable architecture. His prescription was pluralism. Treat modalities as an ensemble rather than a monolith, and build systems that combine inputs ranging from images to text streams, routing model predictions into concrete actions.
Dismantling the Monolithic Myth: Modalities as a Mosaic
Phiri opened with a culinary analogy: a truffle pasta works because distinct ingredients, the pasta base, the Parmesan, the truffle, each contribute something the others cannot. Modalities are the same, and keeping them in separate silos starves that synergy. Too often, he argued, models are treated as magic boxes whose inner workings go unexamined.
Multimodality broadens the processing palette beyond text: images, video, depth maps, even code. Phiri proposed building pipelines around model predictions, using confidence scores to route each request either to tool calling or to retrieval as appropriate.
Embeddings make this practical: encoder models map inputs to vectors stored in a vector database, so similarity search can span modalities. He demonstrated near-image search, submitting a base64-encoded image as the query to retrieve visually similar movie posters.
Composing Pipelines for Augmented Actions
Phiri then walked through an end-to-end flow: a user photographs an unfamiliar microchip and asks “what’s this?”; the request cascades through a vision model’s classification, a textual translation of that output, and finally a tool call. His point was that the monolithic assumption is misguided: modest, specialized modules and multiple models can be composed, with metrics deciding how they connect.
Retrieval benefits as well: multimodal embeddings allow cross-modal matches, with text queries surfacing visual results and vice versa. In code, he showed Weaviate’s near-image invocations driving such queries, with pipeline predictions flowing into action.
He closed with a provocation: consider the modalities at the periphery of your applications, since users are best served when the pieces work as one. Cross the multimodal divide with a chorus of smaller models working in concert rather than a single monolith.
Links:
[DevoxxFR2025] Alert, Everything’s Burning! Mastering Technical Incidents
In the fast-paced world of technology, technical incidents are an unavoidable reality. When systems fail, the ability to quickly detect, diagnose, and resolve issues is paramount to minimizing impact on users and the business. Alexis Chotard, Laurent Leca, and Luc Chmielowski from PayFit shared their invaluable experience and strategies for mastering technical incidents, even as a rapidly scaling “unicorn” company. Their presentation went beyond just technical troubleshooting, delving into the crucial aspects of defining and evaluating incidents, effective communication, product-focused response, building organizational resilience, managing on-call duties, and transforming crises into learning opportunities through structured post-mortems.
Defining and Responding to Incidents
The first step in mastering incidents is having a clear understanding of what constitutes an incident and its severity. Alexis, Laurent, and Luc discussed how PayFit defines and categorizes technical incidents based on their impact on users and business operations. This often involves established severity levels and clear criteria for escalation. Their approach emphasized a rapid and coordinated response involving not only technical teams but also product and communication stakeholders to ensure a holistic approach. They highlighted the importance of clear internal and external communication during an incident, keeping relevant parties informed about the status, impact, and expected resolution time. This transparency helps manage expectations and build trust during challenging situations.
Technical Resolution and Product Focus
While quick technical mitigation to restore service is the immediate priority during an incident, the PayFit team stressed the importance of a product-focused approach. This involves understanding the user impact of the incident and prioritizing resolution steps that minimize disruption for customers. They discussed strategies for effective troubleshooting, leveraging monitoring and logging tools to quickly identify the root cause. Beyond immediate fixes, they highlighted the need to address the underlying issues to prevent recurrence. This often involves implementing technical debt reduction measures or improving system resilience as a direct outcome of incident analysis. Their experience showed that a strong collaboration between engineering and product teams is essential for navigating incidents effectively and ensuring that the user experience remains a central focus.
Organizational Resilience and Learning
Mastering incidents at scale requires building both technical and organizational resilience. The presenters discussed how PayFit has evolved its on-call rotation models to ensure adequate coverage while maintaining a healthy work-life balance for engineers. They touched upon the importance of automation in detecting and mitigating incidents faster. A core tenet of their approach was the implementation of structured post-mortems (or retrospectives) after every significant incident. These post-mortems are blameless, focusing on identifying the technical and process-related factors that contributed to the incident and defining actionable steps for improvement. By transforming crises into learning opportunities, PayFit continuously strengthens its systems and processes, reducing the frequency and impact of future incidents. Their journey over 18 months demonstrated that investing in these practices is crucial for any growing organization aiming to build robust and reliable systems.
Links:
- Alexis Chotard: https://www.linkedin.com/in/alexis-chotard/
- Laurent Leca: https://www.linkedin.com/in/laurent-leca/
- Luc Chmielowski: https://www.linkedin.com/in/luc-chmielowski/
- PayFit: https://payfit.com/
- Devoxx France LinkedIn: https://www.linkedin.com/company/devoxx-france/
- Devoxx France Bluesky: https://bsky.app/profile/devoxx.fr
- Devoxx France Website: https://www.devoxx.fr/
[KotlinConf2024] Kotlin Multiplatform Powers Google Workspace
At KotlinConf2024, Jason Parachoniak, a Google Workspace engineer, detailed Google’s shift from a Java-based multiplatform system to Kotlin Multiplatform (KMP), starting with Google Docs. For over a decade, Workspace has relied on shared code for consistency across platforms, like Gmail’s synchronization layer. Jason shared how KMP enhances this approach, leveraging Kotlin’s ecosystem for better performance and native interop. The talk highlighted lessons from the migration, focusing on build efficiency, runtime latency, and memory challenges, offering insights for large-scale KMP adoption.
Why Kotlin Multiplatform for Workspace
Google Workspace has long used multiplatform code to ensure consistency, such as identical email drafts across devices in Gmail or uniform document models in Docs. Jason explained that their Java-based system, using transpilers like J2ObjC, was effective but complex. KMP offers a modern alternative, allowing developers to write Kotlin code that compiles to native platforms, improving runtime performance and ecosystem integration. By targeting business logic—everything beyond the UI—Workspace ensures native-feel apps while sharing critical functionality, aligning with user expectations for productivity tools.
Google Docs: The Migration Testbed
The migration began with Google Docs, chosen for its heavily annotated codebase, which tracks build performance, latency, and memory usage. Jason described how Docs is rolling out on KMP, providing metrics to refine the Kotlin compiler and runtime. This controlled environment allowed Google to compare KMP against their legacy system, ensuring parity before expanding to other apps. Collaboration with JetBrains and the Android team has been key, with iterative improvements driven by real-world data, setting a foundation for broader Workspace adoption.
Tackling Build Performance
Build performance posed challenges, as Google’s Bazel-like build system effectively performs clean builds on every run, unlike Gradle’s incremental approach. Jason recounted a 10-minute build-time increase after a Kotlin/Native update optimized LLVM bitcode generation. While this improved binary size and speed, it slowed builds. Profiling revealed a slow LLVM pass, already fixed in a newer LLVM version. Google patched LLVM temporarily, reducing build times from 30 to 8 minutes, and is working with JetBrains to update Kotlin/Native’s LLVM, prioritizing stability alongside the K2 compiler rollout.
Optimizing Runtime Latency
Runtime latency, critical for Workspace apps, required Kotlin Native garbage collection (GC) tweaks. Jason noted that JetBrains proactively adjusted GC before receiving Google’s metrics, but further heuristics were needed as latency issues emerged. String handling in the interop layer also caused bottlenecks, addressed with temporary workarounds. Google is designing long-term fixes with JetBrains, ensuring smooth performance across platforms. These efforts highlight KMP’s potential for high-performance apps, provided runtime challenges are systematically resolved through collaboration.
Addressing Memory Usage
Memory usage spikes were a surprise, particularly between iOS 15 and 16. Jason explained that iOS 16’s security-driven constant pool remapping marked Kotlin Native’s vtables as dirty, consuming megabytes of RAM. Google developed a heap dump tool generating HPROF files, compatible with IntelliJ’s Java heap analysis, to diagnose issues. This tool is being upstreamed to Kotlin Native’s runtime, enhancing debugging capabilities. These insights are guiding Google’s memory optimization strategy, ensuring KMP meets Workspace’s stringent performance requirements as the migration expands.
Links:
[DevoxxGR2025] Unmasking Benchmarking Fallacies
Georgios Andrianakis, a Quarkus engineer at Red Hat, presented a 46-minute talk at Devoxx Greece 2025 dissecting benchmarking fallacies, building on material by performance expert Francisco Negro.
The Benchmarketing Problem
Andrianakis introduced “benchmarketing,” where benchmarks are manipulated for marketing. Inspired by Negro’s frustration with a claim that Helidon outperformed Quarkus in a TechEmpower benchmark, he explored how data can be misrepresented. Benchmarks should be relevant, representative, equitable, repeatable, cost-effective, scalable, and transparent. A misleading article claimed Helidon’s superiority, but Negro’s investigation revealed unfair comparisons, sparking this talk to expose such fallacies.
Dissecting a Flawed Claim
Focusing on equity, Negro analyzed the TechEmpower benchmark, which tests web frameworks on tasks like JSON serialization and database queries. The claim hinged on a test where Helidon used a raw database driver (Vert.x for PostgreSQL), while Quarkus used a full object-relational mapper (ORM) like Hibernate, incurring performance penalties. Filtering for full ORM tests, Quarkus topped the charts, with Helidon absent. Comparing both without ORMs, Quarkus still outperformed. This exposed the claim’s inequity, as it wasn’t apples-to-apples, misleading readers.
Critical Thinking in Benchmarks
Andrianakis emphasized skepticism, citing Hitchens’ razor: claims made without evidence can be dismissed without evidence. Using Brendan Gregg’s USE method, Negro identified CPU saturation, not database I/O, as the bottleneck, debunking assumptions. He urged active benchmarking (monitoring errors and resource usage while tests run) and measuring one level deeper to understand performance. Awareness of biases like confirmation bias, and preferring to explain discrepancies by incompetence rather than malice, ensures fair evaluation of benchmark claims.
Links:
[DevoxxBE2025] Finally, Final Means Final: A Deep Dive into Field Immutability in Java
Lecturer
Per Minborg is a Java Core Library Developer at Oracle, specializing in language features and performance improvements. He has contributed to projects enhancing Java’s concurrency and immutability models, drawing from his background in database acceleration tools like Speedment.
Abstract
This investigation explores Java’s ‘final’ keyword as a mechanism for immutability, examining its semantic guarantees from compilation to runtime execution. It contextualizes the balance between mutable flexibility and immutable predictability, delving into JVM trust dynamics and recent proposals like strict finals. Through code demonstrations and JVM internals, the narrative assesses approaches for stronger immutability, including lazy constants, and their effects on threading, optimization, and application speed. Forward implications for Java’s roadmap are considered, emphasizing enhanced reliability and efficiency.
Balancing Mutability and Immutability in Java
Java’s design philosophy accommodates both mutability for dynamic state changes and immutability for stability, yet tensions arise in performance and safety. Immutability, praised in resources like Effective Java, ensures objects maintain a single state, enabling safe concurrent access without locks. Immutable instances can be cached or reused, functioning reliably as keys in collections—unlike mutables that may cause hash inconsistencies after alterations.
Mutability, however, provides adaptability for evolving data, crucial in interactive systems. The ‘final’ keyword aims to enforce constancy, but runtime behaviors complicate this. Declaring ‘final int value = 42;’ implies permanence, yet techniques such as reflection or sun.misc.Unsafe can modify it, breaching expectations.
This duality impacts optimizations: trusted finals allow constant propagation, minimizing memory accesses. Yet, Java permits post-constructor updates for deserialization, eroding trust. Demonstrations show reflection altering finals, highlighting vulnerabilities where JVMs must hedge against changes, forgoing inlining.
Historically, Java prioritized flexibility, allowing such mutations for practicality, but contemporary demands favor integrity. Initiatives like “final truly final” seek to ban post-construction changes, bolstering trust for aggressive enhancements.
JVM Trust Mechanisms and Final Field Handling
JVM trust refers to assuming ‘final’ fields remain unchanged post-initialization, enabling efficiencies like constant folding. Current semantics, however, permit reflective or deserialization modifications, limiting optimizations.
Examples illustrate:
class Coordinate {
    final int coord;
    Coordinate(int coord) { this.coord = coord; }
}
Post-constructor, ‘coord’ is trusted, supporting optimizations. Reflection via Field.setAccessible(true) overrides this, as Unsafe manipulations demonstrate. Performance tests reveal trusted finals outperform volatiles in accesses, but untrusted ones underperform.
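The loophole described above can be reproduced with nothing but the reflection API. The standalone sketch below (class and method names invented for illustration) mutates a ‘final’ instance field:

```java
import java.lang.reflect.Field;

// Demonstrates the loophole: a non-static 'final' field
// mutated through plain reflection after construction.
public class FinalDemo {
    static class Coordinate {
        final int coord;
        Coordinate(int coord) { this.coord = coord; }
    }

    static int mutateFinal() throws Exception {
        Coordinate c = new Coordinate(42);
        Field f = Coordinate.class.getDeclaredField("coord");
        f.setAccessible(true); // permitted for non-static final instance fields
        f.setInt(c, 7);        // the 'final' value is silently overwritten
        return c.coord;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(mutateFinal()); // prints 7
    }
}
```

This is exactly the behavior that prevents the JVM from fully trusting such fields, and that the “final truly final” work aims to forbid.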
Java’s model historically allowed mutations for deserialization, but stricter enforcement proposals aim to restrict this. Implications include safer multi-threading, as finals ensure visibility without volatiles.
Advancements in Strict Finals and Leak Prevention
Strict finals strengthen guarantees by prohibiting constructor leaks—publishing ‘this’ before all finals are set. This prevents partial states in concurrent environments, where threads might observe defaults like zero before assignments.
Problematic code:
class LeakyClass {
    final int val = 42;
    LeakyClass() { Registry.register(this); } // leaks 'this' before finals are safely published
}
Leaks risk threads seeing val=0. Strict finals reject this at compile time, enforcing safe initialization.
Methodologically, this requires refactoring to avoid early publications, but yields threading reliability and optimization opportunities. Benchmarks quantify benefits: trusted paths execute quicker, with fewer barriers.
Lazy Constants for Deferred Initialization
Previewed in Java 25, lazy constants merge mutable deferral with final optimizations. Sketched in the talk with illustrative syntax along the lines of ‘lazy final int val;’, they compute once on first access via a supplier:
lazy final int heavy = () -> heavyComputation();
JVM views them as constants after initialization, supporting folding. Use cases include infrequent or expensive values, avoiding startup costs.
Unlike volatiles, lazy constants ensure at-most-once execution, with threads blocking on contention. This fits singletons or caches, surpassing synchronized options in efficiency.
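The previewed API itself is not reproduced here; instead, the at-most-once semantics can be pictured with a plain stand-in in today’s Java (class and method names invented), using double-checked locking on a volatile flag:

```java
import java.util.function.IntSupplier;

public class LazyDemo {
    // Minimal once-only holder: a stand-in for a lazy constant.
    static final class LazyInt {
        private final IntSupplier supplier;
        private volatile boolean computed;
        private int value;

        LazyInt(IntSupplier supplier) { this.supplier = supplier; }

        int get() {
            if (!computed) {
                synchronized (this) {
                    if (!computed) {      // second check under the lock
                        value = supplier.getAsInt();
                        computed = true;  // volatile write publishes 'value'
                    }
                }
            }
            return value;
        }
    }

    static int calls = 0;

    static int heavyComputation() { calls++; return 123; }

    public static void main(String[] args) {
        LazyInt heavy = new LazyInt(LazyDemo::heavyComputation);
        heavy.get();
        heavy.get();
        System.out.println(heavy.get() + " after " + calls + " call(s)");
        // prints: 123 after 1 call(s)
    }
}
```

The real feature improves on this hand-rolled pattern: the JVM can treat the value as a constant after initialization, which a volatile-based holder never permits.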
Roadmap and Performance Consequences
Strict finals and lazy constants fortify Java’s immutability, complementing concurrent trends. Consequences include accelerated, secure code, with JVMs exploiting trust for vector operations.
Developers can then choose deliberately: final fields for truly constant values, lazy constants for deferred ones. Roadmaps indicate stabilization in Java 26, expanding usage.
In overview, these evolutions make ‘final’ definitively final, boosting Java’s robustness and efficiency.
Links:
- Lecture video: https://www.youtube.com/watch?v=J754RsoUd00
- Per Minborg on Twitter/X: https://twitter.com/PMinborg
- Oracle website: https://www.oracle.com/
