
Posts Tagged ‘Java’

[DevoxxBE2023] Moving Java Forward Together: Community Power

Sharat Chander, Oracle’s Senior Director of Java Developer Engagement, delivered a compelling session at DevoxxBE2023, emphasizing the Java community’s pivotal role in driving the language’s evolution. With over 25 years in the IT industry, Sharat’s passion for Java and community engagement shone through as he outlined how developers can contribute to Java’s future, ensuring its relevance for decades to come.

The Legacy and Longevity of Java

Sharat began by reflecting on Java’s 28-year journey, a testament to its enduring impact on software development. He engaged the audience with a poll, revealing the diverse experience levels among attendees, from those using Java for five years to veterans with over 25 years of expertise. This diversity underscores Java’s broad adoption across industries, from small startups to large enterprises.

Java’s success, Sharat argued, stems from its thoughtful innovation strategy. Unlike the “move fast and break things” mantra, the Java team prioritizes stability and backward compatibility, ensuring that applications built on older versions remain functional. Projects like Amber, Panama, and the recent introduction of virtual threads in Java 21 exemplify this incremental yet impactful approach to innovation.
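
Virtual threads, finalized in Java 21 (JEP 444), illustrate this incremental approach well: blocking code keeps its familiar shape but scales to thousands of concurrent tasks. The sketch below is illustrative (the class name and task counts are not from the talk), assuming a Java 21+ runtime:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

class VirtualThreadsDemo {
    // Run n blocking tasks, each on its own virtual thread, and count completions.
    static int runTasks(int n) throws InterruptedException {
        AtomicInteger completed = new AtomicInteger();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                executor.submit(() -> {
                    Thread.sleep(5);            // blocking is cheap on a virtual thread
                    completed.incrementAndGet();
                    return null;
                });
            }
        } // close() waits for all submitted tasks to finish
        return completed.get();
    }
}
```

Because each task parks a virtual thread rather than a platform thread, the same imperative style that would exhaust an OS thread pool remains practical.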

Balancing Stability and Progress

Sharat addressed the tension between rapid innovation and maintaining stability, a challenge given Java’s extensive history. He highlighted the six-month release cadence, introduced to shorten the latency between innovation and delivery, allowing developers to adopt new features without waiting years for major releases. This approach, likened to a train arriving every three minutes, minimizes disruption and enhances accessibility.

The Java team’s commitment to trust, innovation, and predictability guides its development process. Sharat emphasized that Java’s design principles—established 28 years ago—continue to shape its evolution, ensuring it meets the needs of diverse applications, from AI and big data to emerging fields like quantum computing.

Community as the Heart of Java

The core of Sharat’s message was the community’s role in Java’s vitality. He debunked the “build it and they will come” myth, stressing that Java’s success relies on active community participation. Programs like the OpenJDK project invite developers to engage with mailing lists, review code check-ins, and contribute to technical decisions, fostering transparency and collaboration.

Sharat also highlighted foundational programs like the Java Community Process (JCP) and Java Champions, who advocate for Java independently, providing critical feedback to the Java team. He encouraged attendees to join Java User Groups (JUGs), noting the nearly 400 groups worldwide as vital hubs for knowledge sharing and networking.

Digital Engagement and Future Initiatives

Recognizing the digital era’s impact, Sharat discussed Oracle’s efforts to reach Java’s 10 million developers through platforms like dev.java. This portal aggregates learning resources, community content, and programs like JEP Café and Sip of Java, which offer digestible insights into Java’s features. The recently launched Java Playground provides a browser-based environment for experimenting with code snippets, accelerating feature adoption.

Sharat also announced the community contributions initiative on dev.java, featuring content from Java Champions like Venkat Subramaniam and Hannes Kutz. This platform aims to showcase community expertise, encouraging developers to submit their best practices via GitHub pull requests.

Nurturing Diversity and Inclusion

A poignant moment in Sharat’s talk was his call for greater gender diversity in the Java community. He acknowledged the industry’s shortcomings in achieving balanced representation and urged collective action to expand the community’s mindshare. Programs like JDuchess aim to create inclusive spaces, ensuring Java’s evolution benefits from diverse perspectives.


[DevoxxBE2023] Making Your @Beans Intelligent: Spring AI Innovations

At DevoxxBE2023, Dr. Mark Pollack delivered an insightful presentation on integrating artificial intelligence into Java applications using Spring AI, a project inspired by advancements in AI frameworks like LangChain and LlamaIndex. Mark, a seasoned Spring developer since 2003 and leader of the Spring Data project, explored how Java developers can harness pre-trained AI models to create intelligent applications that address real-world challenges. His talk introduced the audience to Spring AI’s capabilities, from simple “Hello World” examples to sophisticated use cases like question-and-answer systems over custom documents.

The Genesis of Spring AI

Mark began by sharing his journey into AI, sparked by the transformative impact of ChatGPT. Unlike traditional AI development, which often required extensive data cleaning and model training, pre-trained models like those from OpenAI offer accessible APIs and vast knowledge bases, enabling developers to focus on application engineering rather than data science. Mark highlighted how Spring AI emerged from his exploration of code generation, leveraging the structured nature of code within these models to create a framework tailored for Java developers. This framework abstracts the complexity of AI model interactions, making it easier to integrate AI into Spring-based applications.

Spring AI draws inspiration from Python’s AI ecosystem but adapts these concepts to Java’s idioms, emphasizing component abstractions and pluggability. Mark emphasized that this is not a direct port but a reimagination, aligning with the Spring ecosystem’s strengths in enterprise integration and batch processing. This approach positions Spring AI as a bridge between Java’s robust software engineering practices and the dynamic world of AI.

Core Components of AI Applications

A significant portion of Mark’s presentation focused on the architecture of AI applications, which extends beyond merely calling a model. He introduced a conceptual framework involving contextual data, AI frameworks, and models. Contextual data, akin to ETL (Extract, Transform, Load) processes, involves parsing and transforming data—such as PDFs—into embeddings stored in vector databases. These embeddings enable efficient similarity searches, crucial for use cases like question-and-answer systems.
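
The ETL-style pre-processing begins with splitting documents into fragments before embedding them. The following is a minimal, framework-free sketch of such a chunking step; the Chunker class and its parameters are illustrative, not Spring AI’s actual API:

```java
import java.util.ArrayList;
import java.util.List;

class Chunker {
    // Split text into fixed-size fragments with an overlap, so context straddling
    // a boundary appears in two chunks and is not lost to the embedding model.
    static List<String> chunk(String text, int size, int overlap) {
        if (size <= overlap) {
            throw new IllegalArgumentException("chunk size must exceed overlap");
        }
        List<String> chunks = new ArrayList<>();
        for (int start = 0; start < text.length(); start += size - overlap) {
            chunks.add(text.substring(start, Math.min(text.length(), start + size)));
            if (start + size >= text.length()) break; // last fragment reached
        }
        return chunks;
    }
}
```

Each fragment would then be passed to an embedding model and the resulting vector stored alongside the fragment in a vector database.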

Mark demonstrated a simple AI client in Spring AI, which abstracts interactions with various AI models, including OpenAI, Hugging Face, Amazon Bedrock, and Google Vertex. This portability allows developers to switch models without significant code changes. He also showcased the Spring CLI, a tool inspired by JavaScript’s Create React App, which simplifies project setup by generating starter code from existing repositories.

Prompt Engineering and Its Importance

Prompt engineering emerged as a critical theme in Mark’s talk. He explained that crafting effective prompts is essential for directing AI models to produce desired outputs, such as JSON-formatted responses or specific styles of answers. Spring AI’s PromptTemplate class facilitates this by allowing developers to create reusable, stateful templates with placeholders for dynamic content. Mark illustrated this with a demo where a prompt template generated a joke about a raccoon, highlighting the importance of roles (system and user) in defining the context and tone of AI responses.
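
The core idea behind such a template class is filling named placeholders with runtime values. The class below is a simplified, dependency-free stand-in that mimics the concept; it is not Spring AI’s actual PromptTemplate implementation:

```java
import java.util.Map;

// Simplified prompt template: named placeholders in braces, filled from a map.
class PromptTemplate {
    private final String template;

    PromptTemplate(String template) {
        this.template = template;
    }

    // Replace every {key} occurrence with its value; unknown placeholders are left intact.
    String render(Map<String, String> vars) {
        String out = template;
        for (var e : vars.entrySet()) {
            out = out.replace("{" + e.getKey() + "}", e.getValue());
        }
        return out;
    }
}
```

A template like "Tell me a joke about {topic}." can then be reused across requests with different bindings, which is the reusability Mark highlighted.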

He also touched on the concept of “dogfooding,” where AI models are used to refine prompts, creating a feedback loop that enhances their effectiveness. This iterative process, combined with evaluation techniques, ensures that applications deliver accurate and relevant responses, addressing challenges like model hallucinations—where AI generates plausible but incorrect information.

Retrieval Augmented Generation (RAG)

Mark introduced Retrieval Augmented Generation (RAG), a technique to overcome the limitations of AI models’ context windows, which restrict the amount of data they can process. RAG involves pre-processing data into smaller fragments, converting them into embeddings, and storing them in vector databases for similarity searches. This approach allows developers to provide only relevant data to the model, improving efficiency and accuracy.
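
At the heart of the retrieval step is a vector distance, commonly cosine similarity. The self-contained sketch below ranks stored embeddings against a query embedding; it is illustrative only — real vector databases use optimized indexes rather than this linear scan:

```java
import java.util.Arrays;
import java.util.List;

class SimilaritySearch {
    // Cosine similarity between two embedding vectors of equal length.
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Indices of the k stored fragments most similar to the query embedding.
    static List<Integer> topK(double[] query, List<double[]> store, int k) {
        Integer[] idx = new Integer[store.size()];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        Arrays.sort(idx, (i, j) ->
            Double.compare(cosine(query, store.get(j)), cosine(query, store.get(i))));
        return Arrays.asList(idx).subList(0, Math.min(k, idx.length));
    }
}
```

The top-ranked fragments are then stuffed into the prompt, so the model answers from relevant data instead of its entire training corpus.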

In a demo, Mark showcased RAG with a bicycle shop dataset, where a question about city-commuting bikes retrieved relevant product descriptions from a vector store. This process mirrors traditional search engines but leverages AI to synthesize answers, demonstrating how Spring AI integrates with vector databases like Milvus and PostgreSQL (via the pgvector extension) to handle complex queries.

Real-World Applications and Future Directions

Mark highlighted practical applications of Spring AI, such as enabling question-and-answer systems for financial documents, medical records, or government programs like Medicaid. These use cases illustrate AI’s potential to make complex information more accessible, particularly for non-technical users. He also discussed the importance of evaluation in AI development, advocating for automated scoring mechanisms to assess response quality beyond simple test passing.

Looking forward, Mark outlined Spring AI’s roadmap, emphasizing robust core abstractions and support for a growing number of models and vector databases. He encouraged developers to explore the project’s GitHub repository and participate in its evolution, underscoring the rapid pace of AI advancements and the need for community involvement.


[DevoxxBE2023] Securing the Supply Chain for Your Java Applications by Thomas Vitale

At Devoxx Belgium 2023, Thomas Vitale, a software engineer and architect at Systematic, delivered an authoritative session on securing the software supply chain for Java applications. As the author of Cloud Native Spring in Action and a passionate advocate for cloud-native technologies, Thomas provided a comprehensive exploration of securing every stage of the software lifecycle, from source code to deployment. Drawing on the SLSA framework and CNCF research, he demonstrated practical techniques for ensuring integrity, authenticity, and resilience using open-source tools like Gradle, Sigstore, and Kyverno. Through a blend of theoretical insights and live demonstrations, Thomas illuminated the critical importance of supply chain security in today’s threat landscape.

Safeguarding Source Code with Git Signatures

Thomas began by defining the software supply chain as the end-to-end process of delivering software, encompassing code, dependencies, tools, practices, and people. He emphasized the risks at each stage, starting with source code. Using Git as an example, Thomas highlighted its audit trail capabilities but cautioned that commit authorship can be manipulated. In a live demo, he showed how he could impersonate a colleague by altering Git’s username and email, underscoring the need for signed commits. By enforcing signed commits with GPG or SSH keys—or preferably a keyless approach via GitHub’s single sign-on—developers can ensure commit authenticity, establishing a verifiable provenance trail critical for supply chain security.

Managing Dependencies with Software Bills of Materials (SBOMs)

Moving to dependencies, Thomas stressed the importance of knowing exactly what libraries are included in a project, especially given vulnerabilities like Log4j. He introduced Software Bills of Materials (SBOMs) as a standardized inventory of software components, akin to a list of ingredients. Using the CycloneDX plugin for Gradle, Thomas demonstrated generating an SBOM during the build process, which provides precise dependency details, including versions, licenses, and hashes for integrity verification. This approach, integrated into Maven or Gradle, ensures accuracy over post-build scanning tools like Snyk, enabling developers to identify vulnerabilities, check license compliance, and verify component integrity before production.

Thomas further showcased Dependency-Track, an OWASP project, to analyze SBOMs and flag vulnerabilities, such as a critical issue in SnakeYAML. He introduced the Vulnerability Exploitability Exchange (VEX) standard, which complements SBOMs by documenting whether vulnerabilities affect an application. In his demo, Thomas marked a SnakeYAML vulnerability as a false positive due to Spring Boot’s safe deserialization, demonstrating how VEX communicates security decisions to stakeholders, reducing unnecessary alerts and ensuring compliance with emerging regulations.

Building Secure Artifacts with Reproducible Builds

The build phase, Thomas explained, is another critical juncture for security. Using Spring Boot as an example, he outlined three packaging methods: JAR files, native executables, and container images. He critiqued Dockerfiles for introducing non-determinism and maintenance overhead, advocating for Cloud Native Buildpacks as a reproducible, secure alternative. In a demo, Thomas built a container image with Buildpacks, highlighting its fixed creation timestamp (January 1, 1980) to ensure identical outputs for unchanged inputs, enhancing security by eliminating variability. This reproducibility, coupled with SBOM generation during the build, ensures artifacts are both secure and traceable.

Signing and Verifying Artifacts with SLSA

To ensure artifact integrity, Thomas introduced the SLSA framework, which provides guidelines for securing software artifacts across the supply chain. He demonstrated signing container images with Sigstore’s Cosign tool, using a keyless approach to avoid managing private keys. This process, integrated into a GitHub Actions pipeline, ensures that artifacts are authentically linked to their creator. Thomas further showcased SLSA’s provenance generation, which documents the artifact’s origin, including the Git commit hash and build steps. By achieving SLSA Level 3, his pipeline provided non-falsifiable provenance, ensuring traceability from source code to deployment.

Securing Deployments with Policy Enforcement

The final stage, deployment, requires validating artifacts to ensure they meet security standards. Thomas demonstrated using Cosign and the SLSA Verifier to validate signatures and provenance, ensuring only trusted artifacts are deployed. On Kubernetes, he introduced Kyverno, a policy engine that enforces signature and provenance checks, automatically rejecting non-compliant deployments. This approach ensures that production environments remain secure, aligning with the principle of validating metadata to prevent unauthorized or tampered artifacts from running.

Conclusion: A Holistic Approach to Supply Chain Security

Thomas’s session at Devoxx Belgium 2023 provided a robust framework for securing Java application supply chains. By addressing source code integrity, dependency management, build reproducibility, artifact signing, and deployment validation, he offered a comprehensive strategy to mitigate risks. His practical demonstrations, grounded in open-source tools and standards like SLSA and VEX, empowered developers to adopt these practices without overwhelming complexity. Thomas’s emphasis on asking “why” at each step encouraged attendees to tailor security measures to their context, ensuring both compliance and resilience in an increasingly regulated landscape.


[KotlinConf2024] Java and Kotlin: A Mutual Evolution

At KotlinConf2024, John Pampuch, Google’s production languages lead, delivered a history lesson on Java and Kotlin’s intertwined journeys. Battling jet lag with humor, John traced nearly three decades of Java and twelve years of Kotlin, emphasizing their complementary strengths. From Java’s robust ecosystem to Kotlin’s pragmatic innovation, the languages have shaped each other, accelerating progress. John’s talk, rooted in his experience since Java’s 1996 debut, explored design goals, feature cross-pollination, and future implications, urging developers to leverage Kotlin’s developer-friendly features while appreciating Java’s stability.

Design Philosophies: Pragmatism Meets Robustness

John opened by contrasting the languages’ origins. Java, launched in 1995, aimed for simplicity, security, and portability, aligning tightly with the JVM and JDK. Its ecosystem, bolstered by libraries and tooling, set a standard for enterprise development. Kotlin, announced in 2011 by JetBrains, prioritized pragmatism: concise syntax, interoperability with Java, and multiplatform flexibility. Unlike Java’s JVM dependency, Kotlin targets iOS, web, and beyond, enabling faster feature rollouts. John noted Kotlin’s design avoids Java’s rigidity, embracing object-oriented principles with practical tweaks like semicolon-free lines. Yet Java’s self-consistency, seen in its holistic lambda integration, complements Kotlin’s adaptability, creating a synergy where both thrive.

Feature Evolution: From Lambdas to Coroutines

The talk highlighted key milestones. Java’s 2014 release of JDK 8 introduced lambdas, default methods, and type inference, transforming APIs to support functional programming. Kotlin, with 1.0 in 2016, brought smart casts, string templates, and named arguments, prioritizing developer ease. By 2018, Kotlin’s coroutines revolutionized JVM asynchronous programming, offering a simpler mental model than Java’s threads. John praised coroutines as a potential game-changer, though Java’s 2023 virtual threads and structured concurrency aim to close the gap. Kotlin’s multiplatform support, cemented by Google’s 2017 Android endorsement, outpaces Java’s JVM-centric approach, but Java’s predictable six-month release cycle since 2017 ensures steady progress. These advancements reflect a race where each language pushes the other forward.

Mutual Influences: Sealed Classes and Beyond

John emphasized cross-pollination. Java’s 2021 records, inspired by frameworks like Lombok, mirror Kotlin’s data classes, though Kotlin’s named parameters reduce boilerplate further. Sealed classes, introduced in Java 17 and Kotlin 1.5 around 2021, emerged concurrently, suggesting shared inspiration. Kotlin’s string templates, a staple since its early days, influenced Java’s 2024 preview of flexible string templates, which John hopes Kotlin might adopt for localization. Java’s exploration of nullability annotations, potentially aligning with Kotlin’s robust null safety, shows ongoing convergence. John speculated that community demand could push Java toward features like named arguments, though JVM changes remain a hurdle. This mutual learning, fueled by competition with languages like Go and Rust, drives excitement and innovation.

Looking Ahead: Pragmatism and Compatibility

John concluded with a call to action: embrace Kotlin’s compact, readable features while valuing Java’s compile-time speed and ecosystem. Kotlin’s faster feature delivery and multiplatform prowess contrast with Java’s backwards compatibility and predictability. Yet both share a commitment to pragmatic evolution, avoiding breaks in millions of applications. Questions from the audience probed Java’s nullability and virtual threads, with John optimistic about eventual alignment but cautious about timelines. His talk underscored that Java and Kotlin’s competition isn’t zero-sum—it’s a catalyst for better tools, ideas, and developer experiences, ensuring both languages remain vital.

Hashtags: #Java #Kotlin

[DevoxxBE2023] The Panama Dojo: Black Belt Programming with Java 21 and the FFM API by Per Minborg

In an engaging session at Devoxx Belgium 2023, Per Minborg, a Java Core Library team member at Oracle and an OpenJDK contributor, guided attendees through the intricacies of the Foreign Function and Memory (FFM) API, a pivotal component of Project Panama. With a blend of theoretical insights and live coding, Per demonstrated how this API, in its third preview in Java 21, enables seamless interaction with native memory and functions using pure Java code. His talk, dubbed the “Panama Dojo,” showcased the API’s potential to enhance performance and safety, culminating in a hands-on demo of a lightweight microservice framework built with memory segments, arenas, and memory layouts.

Unveiling the FFM API’s Capabilities

Per introduced the FFM API as a solution to the limitations of Java Native Interface (JNI) and direct buffers. Unlike JNI, which requires cumbersome C stubs and inefficient data passing, the FFM API allows direct native memory access and function calls. Per illustrated this with a Point struct example, where a memory segment models a contiguous memory region with 64-bit addressing, supporting both heap and native segments. This eliminates the 2GB limit of direct buffers, offering greater flexibility and efficiency.

The API introduces memory segments with constraints like size, lifetime, and thread confinement, preventing out-of-bounds access and use-after-free errors. Per highlighted the importance of deterministic deallocation, contrasting Java’s automatic memory management with C’s manual approach. The FFM API’s arenas, such as confined and shared arenas, manage segment lifecycles, ensuring resources are freed explicitly, as demonstrated in a try-with-resources block that deterministically deallocates a segment.
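
In code, the deterministic deallocation Per described looks roughly like the sketch below, using the shape of the finalized FFM API (Java 22+; in Java 21 the same calls require the preview flag). The class name, offsets, and helper method are illustrative:

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

class PointSegment {
    // Allocate a native segment for two doubles (a Point), write and read it back.
    static double[] roundTrip(double x, double y) {
        try (Arena arena = Arena.ofConfined()) {  // confined: single-thread access only
            MemorySegment point = arena.allocate(2 * ValueLayout.JAVA_DOUBLE.byteSize());
            point.set(ValueLayout.JAVA_DOUBLE, 0, x);  // offset 0: x
            point.set(ValueLayout.JAVA_DOUBLE, 8, y);  // offset 8: y
            return new double[] {
                point.get(ValueLayout.JAVA_DOUBLE, 0),
                point.get(ValueLayout.JAVA_DOUBLE, 8)
            };
        } // segment freed deterministically here; any later access would throw
    }
}
```

Unlike a finalizer or a garbage-collected direct buffer, the arena frees the segment at a known point, and any dangling access fails fast instead of corrupting memory.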

Structuring Memory with Layouts and Arenas

Memory layouts, a key FFM API feature, provide a declarative way to define memory structures, reducing manual offset computations. Per showed how a Point layout with x and y doubles uses var handles to access fields safely, leveraging JIT optimizations for atomic operations. This approach minimizes bugs in complex structs, as var handles inherently account for offsets, unlike manual calculations.

Arenas further enhance safety by grouping segments with shared lifetimes. Per demonstrated a confined arena, restricting access to a single thread, and a shared arena, allowing multi-threaded access with thread-local handshakes for safe closure. These constructs bridge the gap between C’s flexibility and Rust’s safety, offering a balanced model for Java developers. In his live demo, Per used an arena to allocate a MarketInfo segment, showcasing deterministic deallocation and thread safety.

Building a Persistent Queue with Memory Mapping

The heart of Per’s session was a live coding demo constructing a persistent queue using memory mapping and atomic operations. He defined a MarketInfo record for stock exchange data, including timestamp, symbol, and price fields. Using a record mapper, Per serialized and deserialized records to and from memory segments, demonstrating immutability and thread safety. The mapper, a potential future JDK feature, simplifies data transfer between Java objects and native memory.

Per then implemented a memory-mapped queue, where a file-backed segment stores headers and payloads. Headers use atomic operations to manage mutual exclusion across threads and JVMs, ensuring safe concurrent access. In the demo, a producer appended MarketInfo records to the queue, while two consumers read them asynchronously, showcasing low-latency, high-performance data sharing. Per’s use of sparse files allowed a 1MB queue to scale virtually, highlighting the API’s efficiency.

Crafting a Microservice Framework

The session culminated in assembling these components into a microservice framework. Per’s queue, inspired by Chronicle Queue, supports persistent, high-performance data exchange across JVMs. The framework leverages memory mapping for durability, atomic operations for concurrency, and record mappers for clean data modeling. Per demonstrated its practical application by persisting a queue to a file and reading it in a separate JVM, underscoring its robustness for distributed systems.

He emphasized the reusability of these patterns across domains like machine learning and graphics processing, where native libraries are prevalent. Tools like jextract, briefly mentioned, further unlock native libraries like TensorFlow, enabling Java developers to integrate them effortlessly. Per’s framework, though minimal, illustrates how the FFM API can transform Java’s interaction with native code, offering a safer, faster alternative to JNI.

Performance and Safety in Harmony

Throughout, Per stressed the FFM API’s dual focus on performance and safety. Native function calls, faster than JNI, and memory segments with strict constraints outperform direct buffers while preventing common errors. The API’s integration with existing JDK features, like var handles, ensures compatibility and optimization. Per’s live coding, despite its complexity, flowed seamlessly, reinforcing the API’s practicality for real-world applications.

Conclusion: Embracing the Panama Dojo

Per’s session was a masterclass in leveraging the FFM API to push Java’s boundaries. By combining memory segments, layouts, arenas, and atomic operations, he crafted a framework that exemplifies the API’s potential. His call to action—experiment with the FFM API in Java 21—invites developers to explore this transformative tool, promising enhanced performance and safety for native interactions. The Panama Dojo left attendees inspired to break new ground in Java development.


[DevoxxBE2023] Java Language Update by Brian Goetz

At Devoxx Belgium 2023, Brian Goetz, Oracle’s Java Language Architect, delivered an insightful session on the evolution of Java, weaving together a narrative of recent advancements, current features in preview, and a vision for the language’s future. With his deep expertise, Brian illuminated how Java balances innovation with compatibility, ensuring it remains a cornerstone of modern software development. His talk explored the introduction of records, sealed classes, pattern matching, and emerging features like string templates and simplified program structures, all designed to enhance Java’s expressiveness and accessibility. Through a blend of technical depth and practical examples, Brian showcased Java’s commitment to readable, maintainable code while addressing contemporary programming challenges.

Reflecting on Java’s Recent Evolution

Brian began by recapping Java’s significant strides since his last Devoxx appearance, highlighting features like records, sealed classes, and pattern matching. Records, introduced as nominal tuples, provide a concise way to model data with named components, enhancing readability over structural tuples. For instance, a Point record with x and y coordinates is more intuitive than an anonymous tuple of integers. By deriving constructors, accessors, and equality methods from a state declaration, records eliminate boilerplate while making a clear semantic statement about data immutability. Brian emphasized that this semantic focus, rather than mere syntax reduction, distinguishes Java’s approach from alternatives like Lombok.
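
A record such as the Point Brian mentioned is a single line of state description, from which the constructor, accessors, equals, hashCode, and toString are all derived. A compact constructor can still enforce invariants; the NaN check below is an illustrative addition, not from the talk:

```java
record Point(double x, double y) {
    // Compact constructor: validation without restating field assignments.
    Point {
        if (Double.isNaN(x) || Double.isNaN(y)) {
            throw new IllegalArgumentException("coordinates must be numbers");
        }
    }
}
```

Because the record declares its state up front, equality and accessors follow automatically, which is the semantic statement about immutable data that Brian contrasted with boilerplate-reduction tools.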

Sealed classes, another recent addition, allow developers to restrict class hierarchies, specifying permitted subtypes explicitly. This enables libraries to expose abstract types while controlling implementations, as seen in the JDK’s use of method handles. Sealed classes also enhance exhaustiveness checking in switch statements, reducing runtime errors by ensuring all cases are covered. Brian illustrated this with a Shape hierarchy, where a sealed interface permits only Circle and Rectangle, allowing the compiler to verify switch completeness without a default clause.
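
The Shape hierarchy can be sketched as follows; because the interface is sealed, the switch is exhaustive and needs no default clause, and the compiler flags any newly permitted subtype that is not handled (names are illustrative):

```java
class Shapes {
    sealed interface Shape permits Circle, Rectangle {}
    record Circle(double radius) implements Shape {}
    record Rectangle(double w, double h) implements Shape {}

    // Exhaustive over the sealed hierarchy: no default branch required.
    static double area(Shape s) {
        return switch (s) {
            case Circle c    -> Math.PI * c.radius() * c.radius();
            case Rectangle r -> r.w() * r.h();
        };
    }
}
```

Adding a Triangle to the permits clause would turn this switch into a compile error until a matching case is written, which is exactly the evolution safety Brian described.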

Advancing Data Modeling with Pattern Matching

Pattern matching, a cornerstone of Java’s recent enhancements, fuses type testing, casting, and binding into a single operation, reducing errors from manual casts. Brian demonstrated how type patterns, like if (obj instanceof String s), streamline code by eliminating redundant casts. Record patterns extend this by deconstructing objects into components, enabling recursive matching for nested structures. For example, a Circle record with a Point center can be matched to extract x and y coordinates in one expression, enhancing both concision and safety.

The revamped switch construct, now an expression supporting patterns and guards, further leverages these capabilities. Brian highlighted its exhaustiveness checking, which uses sealing information to ensure all cases are handled, as in a Color interface sealed to Red, Yellow, and Green. This eliminates the need for default clauses, catching errors at compile time if the hierarchy evolves. By combining records, sealed classes, and pattern matching, Java now supports algebraic data types, offering a powerful framework for modeling complex domains like expressions, where a sealed Expression type can be traversed elegantly with pattern-based recursion.
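
Record patterns make this nested deconstruction concrete. In the sketch below (names illustrative), a Circle is matched and its center’s coordinates extracted in a single case, with a when guard refining the match:

```java
class PatternDemo {
    record Point(double x, double y) {}
    record Circle(Point center, double radius) {}

    static String describe(Object obj) {
        return switch (obj) {
            // One pattern deconstructs the Circle and its nested Point together.
            case Circle(Point(var x, var y), var r) when r > 0 ->
                "circle at (" + x + "," + y + ") r=" + r;
            case Circle c -> "degenerate circle";
            default       -> "not a shape";
        };
    }
}
```

There are no manual instanceof checks or casts anywhere; the compiler derives the bindings x, y, and r from the record declarations themselves.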

Introducing String Templates for Safe Aggregation

Looking to the future, Brian introduced string templates, a preview feature addressing the perils of string interpolation. Unlike traditional concatenation or formatting methods, string templates use a template processor to safely combine text fragments and expressions. A syntax like STR."Hello, \{name}!" invokes the processor to validate and compose the inputs, preventing issues like SQL injection. Brian envisioned a SQL template processor that balances quotes and produces a result set directly, bypassing string intermediaries for efficiency and security. Similarly, a JSON processor could streamline API development by constructing objects from raw fragments, enhancing performance.

This approach reframes interpolation as a broader aggregation problem, allowing developers to define custom processors for domain-specific needs. Brian’s emphasis on safety and flexibility underscores Java’s commitment to robust APIs, drawing inspiration from JavaScript’s tagged functions and Scala’s string interpolators, but tailored to Java’s ecosystem.

Simplifying Java’s On-Ramp and Beyond

To make Java more approachable for new developers, Brian discussed preview features like unnamed classes and patterns, which reduce boilerplate for simple programs. A minimal program might omit public static void main, allowing beginners to focus on core logic rather than complex object-oriented constructs. This aligns Java with languages like Python, where incremental learning is prioritized, easing the educational burden on instructors and students alike.

Future enhancements include reconstruction patterns for immutable objects, enabling concise updates like a with expression (p with { x = 0; }) to derive new records from existing ones. Brian also proposed deconstructor patterns for regular classes, mirroring constructors to enable pattern decomposition, enhancing API symmetry. These features aim to make aggregation and decomposition reversible, reducing error-prone asymmetries in object manipulation. For instance, a Person class could declare a deconstructor to extract first and last names, mirroring its constructor, streamlining data handling across Java’s object model.
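
Until reconstruction patterns arrive, deriving a changed copy of a record requires a hand-written “wither”. The sketch below shows today’s boilerplate that the proposed syntax would eliminate (the method name is illustrative):

```java
record Point(int x, int y) {
    // Hand-written wither: every unchanged component must be copied explicitly.
    // A future 'with' expression would derive this from the record's state description.
    Point withX(int newX) {
        return new Point(newX, y);
    }
}
```

With several components, each wither repeats all the others, which is exactly the asymmetry between construction and reconstruction that the proposal targets.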

Conclusion: Java’s Balanced Path Forward

Brian’s session underscored Java’s deliberate evolution, balancing innovation with compatibility. By prioritizing readable, maintainable code, Java addresses modern challenges like loosely coupled services and untyped data, positioning itself as a versatile language for data modeling. Features like string templates and simplified program structures promise greater accessibility, while pattern matching and deconstruction patterns enhance expressiveness. As Java continues to refine its features, it remains a testament to thoughtful design, ensuring developers can build robust, future-ready applications.


Navigating the Reactive Frontier: Oleh Dokuka’s Reactive Streams at Devoxx France 2023

On April 13, 2023, Oleh Dokuka commanded the Devoxx France stage with a 44-minute odyssey titled “From imperative to Reactive: the Reactive Streams adventure!” Delivered at Paris’s Palais des Congrès, Oleh, a reactive programming luminary, guided developers through the paradigm shift from imperative to reactive programming. Building on his earlier R2DBC talk, he unveiled the power of Reactive Streams, a specification for non-blocking, asynchronous data processing. His narrative was a thrilling journey, blending technical depth with practical insights, inspiring developers to embrace reactive systems for scalable, resilient applications.

Oleh began with a relatable scenario: a Java application overwhelmed by high-throughput data, such as a real-time analytics dashboard. Traditional imperative code, with its synchronous loops and blocking calls, buckles under pressure, leading to latency spikes and resource exhaustion. “We’ve all seen threads waiting idly for I/O,” Oleh quipped, his humor resonating with the audience. Reactive Streams, he explained, offer a solution by processing data asynchronously, using backpressure to balance producer and consumer speeds. Oleh’s passion for reactive programming set the stage for a deep dive into its principles, tools, and real-world applications.

Embracing Reactive Streams

Oleh’s first theme was the core of Reactive Streams: a specification for asynchronous stream processing with non-blocking backpressure. He introduced its four interfaces—Publisher, Subscriber, Subscription, and Processor—and their role in building reactive pipelines. Oleh likely demonstrated a simple pipeline using Project Reactor, a Reactive Streams implementation:

Flux.range(1, 100)
    .map(i -> processData(i))
    .subscribeOn(Schedulers.boundedElastic())
    .subscribe(System.out::println);

In this demo, a Flux emits numbers, processes them asynchronously, and prints results, all while respecting backpressure. Oleh showed how the Subscription controls data flow, preventing the subscriber from being overwhelmed. He contrasted this with imperative code, where a loop might block on I/O, highlighting reactive’s efficiency for high-throughput tasks like log processing or event streaming. The audience, familiar with synchronous Java, leaned in, captivated by the prospect of responsive systems.
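The same request-driven flow can be demonstrated with nothing but the JDK: since Java 9, the java.util.concurrent.Flow class hosts the four Reactive Streams interfaces verbatim. The sketch below (my illustration, not Oleh's demo) uses SubmissionPublisher and a subscriber that requests exactly one item at a time, so the Subscription itself enforces backpressure:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowBackpressureDemo {

    // Pushes 1..n through a Flow pipeline. The subscriber pulls one item per
    // request() call, so it is never overwhelmed: this is backpressure as the
    // Reactive Streams specification defines it.
    static List<Integer> processWithBackpressure(int n) {
        List<Integer> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription subscription;

                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1);                // pull the first item
                }
                @Override public void onNext(Integer item) {
                    received.add(item * 10);     // "process" the item
                    subscription.request(1);     // then ask for the next one
                }
                @Override public void onError(Throwable t) { done.countDown(); }
                @Override public void onComplete() { done.countDown(); }
            });

            for (int i = 1; i <= n; i++) {
                publisher.submit(i);             // blocks if the buffer is full
            }
        }                                        // close() signals onComplete
        try {
            done.await();                        // wait for async delivery
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return received;
    }

    public static void main(String[] args) {
        System.out.println(processWithBackpressure(5)); // [10, 20, 30, 40, 50]
    }
}
```

Reactor's Flux builds exactly this handshake for you; the value of seeing it spelled out is realizing that request(n) is the only thing standing between a fast producer and an exhausted consumer.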

Building Reactive Applications

Oleh’s narrative shifted to practical application, his second theme. He explored integrating Reactive Streams with Spring WebFlux, a reactive web framework. In a demo, Oleh likely built a REST API handling thousands of concurrent requests, using Mono and Flux for non-blocking responses:

@GetMapping("/events")
Flux<Event> getEvents() {
    return eventService.findAll();
}

This API, running on Netty’s non-blocking event loop, scaled effortlessly under load (virtual threads, the subject of José Paumard’s talk, offer a complementary route to the same scalability). Oleh emphasized backpressure strategies, such as onBackpressureBuffer(), to manage fast producers. He also addressed error handling, showing how onErrorResume() ensures resilience in reactive pipelines. For microservices or event-driven architectures, Oleh argued, Reactive Streams enable low-latency, resource-efficient systems, a must for cloud-native deployments.

Oleh shared real-world examples, noting how companies like Netflix use Reactor for streaming services. He recommended starting with small reactive components, such as a single endpoint, and monitoring performance with tools like Micrometer. His practical advice—test under load, tune buffer sizes—empowered developers to adopt reactive programming incrementally.

Reactive in the Ecosystem

Oleh’s final theme was Reactive Streams’ role in Java’s ecosystem. Libraries like Reactor, RxJava, and Akka Streams implement the specification, while frameworks like Spring Boot 3 integrate reactive data access via R2DBC (from his earlier talk). Oleh highlighted compatibility with databases like MongoDB and Kafka, ideal for reactive pipelines. He likely demonstrated a reactive Kafka consumer, processing messages with backpressure:

KafkaReceiver.create(receiverOptions)
    .receive()
    .flatMap(record -> processRecord(record))
    .subscribe();

This demo showcased seamless integration, reinforcing reactive’s versatility. Oleh urged developers to explore Reactor’s documentation and experiment with Spring WebFlux, starting with a prototype project. He cautioned about debugging challenges, suggesting tools like BlockHound to detect blocking calls. Looking ahead, Oleh envisioned reactive systems dominating data-intensive applications, from IoT to real-time analytics.

As the session closed, Oleh’s enthusiasm sparked hallway discussions about reactive programming’s potential. Developers left with a clear path: build a reactive endpoint, integrate with Reactor, and measure scalability. Oleh’s adventure through Reactive Streams was a testament to Java’s adaptability, inspiring a new era of responsive, cloud-ready applications.

[DevoxxPL2022] Bare Metal Java • Jarosław Pałka

Jarosław Pałka, a staff engineer at Neo4j, captivated the audience at Devoxx Poland 2022 with an in-depth exploration of low-level Java programming through the Foreign Function and Memory API. As a veteran of the JVM ecosystem, Jarosław shared his expertise in leveraging these experimental APIs to interact directly with native memory and C code, offering a glimpse into Java’s potential for high-performance, system-level programming. His presentation, blending technical depth with engaging demos, provided a roadmap for developers seeking to harness Java’s evolving capabilities.

The Need for Low-Level Access in Java

Jarosław began by contextualizing the necessity of low-level APIs in Java, a language traditionally celebrated for its managed runtime and safety guarantees. He outlined the trade-offs between safety and performance, noting that managed runtimes abstract complexities like memory management but limit optimization opportunities. In high-performance systems like Neo4j, Kafka, or Elasticsearch, direct memory access is critical to avoid garbage collection overhead. Jarosław introduced the Foreign Function and Memory API, incubating since Java 14 and consolidated into a single incubator module in Java 17, as a safer alternative to the sun.misc.Unsafe API, enabling developers to work with native memory while preserving Java’s safety principles.

Mastering Native Memory with Memory Segments

Delving into the API’s mechanics, Jarosław explained the concept of memory segments, which serve as pointers to native memory. These segments, managed through resource scopes, allow developers to allocate and deallocate memory explicitly, with safety mechanisms to prevent unauthorized access across threads. He demonstrated how memory segments support operations like setting and retrieving primitive values, using var handles for type-safe access. Jarosław emphasized the API’s flexibility, enabling seamless interaction with both heap and off-heap memory, and its potential to unify access to diverse memory types, including memory-mapped files and persistent memory.
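On Java 22 or later, where the API is final, a minimal round trip through native memory looks like the sketch below. This is an illustration of the concepts Jarosław described, using the convenience set/get accessors rather than var handles, and an Arena as the resource scope:

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

public class MemorySegmentDemo {

    static int roundTrip(int value) {
        // A confined arena scopes the native allocation: the off-heap memory
        // is freed deterministically when the try block exits, and any access
        // after that throws IllegalStateException (temporal safety). Reads and
        // writes are also bounds-checked against the segment's size (spatial
        // safety), unlike sun.misc.Unsafe.
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment segment = arena.allocate(ValueLayout.JAVA_INT);
            segment.set(ValueLayout.JAVA_INT, 0, value);   // write to native memory
            return segment.get(ValueLayout.JAVA_INT, 0);   // read it back
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(42)); // 42
    }
}
```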

Bridging Java and C with Foreign Functions

A highlight of Jarosław’s talk was the Foreign Function API, which simplifies calling C functions from Java and vice versa. He showcased a practical example of invoking the getpid C function to retrieve a process ID, illustrating the use of symbol lookups, function descriptors, and method handles to map C types to Java. Jarosław also explored upcalls, allowing C code to invoke Java methods, using a signal handler as a case study. This bidirectional integration eliminates the complexities of Java Native Interface (JNI), streamlining interactions with native libraries like SDL for game development.
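Assuming Java 22 or later (the finalized API) on a POSIX system where libc exposes getpid, the downcall Jarosław described can be sketched roughly as follows; this is an illustration of the lookup/descriptor/handle steps, not his exact code:

```java
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandle;

public class GetPidDemo {

    static int currentPid() {
        try {
            Linker linker = Linker.nativeLinker();

            // 1. symbol lookup: find getpid in the default (libc) lookup
            // 2. function descriptor: no arguments, returns a C int
            // 3. method handle: a Java-callable view of the C function
            MethodHandle getpid = linker.downcallHandle(
                    linker.defaultLookup().find("getpid").orElseThrow(),
                    FunctionDescriptor.of(ValueLayout.JAVA_INT));

            return (int) getpid.invokeExact();
        } catch (Throwable t) {
            throw new RuntimeException("getpid downcall failed", t);
        }
    }

    public static void main(String[] args) {
        System.out.println("pid = " + currentPid());
    }
}
```

Note how the three steps mirror the talk's structure: no JNI glue code, no native stub compilation, just a MethodHandle obtained entirely from Java.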

Practical Applications: A Java Game Demo

To illustrate the API’s power, Jarosław presented a live demo of a 2D game built using Java and the SDL library. By mapping C structures to Java memory layouts, he created sprites and handled events like keyboard inputs, demonstrating how Java can interface with hardware for real-time rendering. The demo highlighted the challenges of manual structure mapping and memory management, but also showcased the API’s potential to simplify these tasks. Jarosław noted that the jextract tool, developed under Project Panama and available alongside Java 19, automates this process by generating Java bindings from C header files, significantly reducing boilerplate.

Safety and Performance Considerations

Jarosław underscored the API’s safety features, such as temporal and spatial bounds checking, which prevent invalid memory access. He also discussed the cleaner mechanism, which integrates with Java’s garbage collector to manage native memory deallocation. While the API introduces overhead comparable to JNI, Jarosław highlighted its potential for optimization in future releases, particularly for serverless applications and caching. He cautioned developers to use these APIs judiciously, given their complexity and the need for careful error handling.

Future Prospects and Java’s Evolution

Looking ahead, Jarosław positioned the Foreign Function and Memory API as a transformative step in Java’s evolution, enabling developers to write high-performance applications traditionally reserved for languages like C or Rust. He encouraged exploration of these APIs for niche use cases like database development or game engines, while acknowledging their experimental nature. Jarosław’s vision of Java as a versatile platform for both high-level and low-level programming resonated, urging developers to embrace these tools to push the boundaries of what Java can achieve.


[DevoxxPL2022] Are Immortal Libraries Ready for Immutable Classes? • Tomasz Skowroński

At Devoxx Poland 2022, Tomasz Skowroński, a seasoned Java developer, delivered a compelling presentation exploring the readiness of Java libraries for immutable classes. With a focus on the evolving landscape of Java programming, Tomasz dissected the challenges and opportunities of adopting immutability in modern software development. His talk provided a nuanced perspective on balancing simplicity, clarity, and robustness in code design, offering practical insights for developers navigating the complexities of mutable and immutable paradigms.

The Allure and Pitfalls of Mutable Classes

Tomasz opened his discourse by highlighting the appeal of mutable classes, likening them to a “shy green boy” for their ease of use and rapid development. Mutable classes, with their familiar getters and setters, simplify coding and accelerate project timelines, making them a go-to choice for many developers. However, Tomasz cautioned that this simplicity comes at a cost. As fields and methods accumulate, mutable classes grow increasingly complex, undermining their initial clarity. The internal state becomes akin to a data structure, vulnerable to unintended modifications, which complicates maintenance and debugging. This fragility, he argued, often leads to issues like null pointer exceptions and challenges in maintaining a consistent state, particularly in large-scale systems.

The Promise of Immutability

Transitioning to immutability, Tomasz emphasized its role in fostering robust and predictable code. Immutable classes, by preventing state changes after creation, offer a safeguard against unintended modifications, making them particularly valuable in concurrent environments. He clarified that immutability extends beyond merely marking fields as final or using tools like Lombok. Instead, it requires a disciplined approach to design, ensuring objects remain unalterable. Tomasz highlighted Java records and constructor-based classes as practical tools for achieving immutability, noting their ability to streamline code while maintaining clarity. However, he acknowledged that immutability introduces complexity, requiring developers to rethink traditional approaches to state management.
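One concrete discipline behind Tomasz's point that immutability is more than final fields: a record whose component is itself mutable (a List, say) still leaks state. A compact constructor with a defensive copy closes the hole. The sketch below uses a hypothetical Order type of my own, not the speaker's code:

```java
import java.util.ArrayList;
import java.util.List;

// Records make the fields final, but a caller holding a reference to the
// original List could still mutate the record's state from outside. The
// compact constructor below takes a defensive copy; List.copyOf returns an
// unmodifiable list and also rejects null elements.
record Order(String id, List<String> items) {
    Order {
        items = List.copyOf(items);
    }
}

public class ImmutableRecordDemo {
    public static void main(String[] args) {
        List<String> cart = new ArrayList<>(List.of("book"));
        Order order = new Order("o-1", cart);

        cart.add("laptop");                  // mutating the original list...
        System.out.println(order.items());   // ...leaves the record untouched: [book]
    }
}
```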

Navigating Java Libraries with Immutability

A core focus of Tomasz’s presentation was the compatibility of Java libraries with immutable classes. He explored tools like Jackson for JSON deserialization, noting that while modern libraries support immutability through annotations like @ConstructorProperties, challenges persist. For instance, deserializing complex objects may require manual configuration or reliance on Lombok to reduce boilerplate. Tomasz also discussed Hibernate, where immutable entities, such as events or finalized invoices, can express domain constraints effectively. By using the @Immutable annotation and configuring Hibernate to throw exceptions on modification attempts, developers can enforce immutability, though direct database operations remain a potential loophole.

Practical Strategies for Immutable Design

Tomasz offered actionable strategies for integrating immutability into everyday development. He advocated for constructor-based dependency injection over field-based approaches, reducing boilerplate with tools like Lombok or Java records. For RESTful APIs, he suggested mapping query parameters to immutable DTOs, enhancing clarity and reusability. In the context of state management, Tomasz proposed modeling state transitions in immutable classes using interfaces and type-safe implementations, as illustrated by a rocket lifecycle example. This approach ensures predictable state changes without the risks associated with mutable methods. Additionally, he addressed performance concerns, arguing that the overhead of object creation in immutable designs is often overstated, particularly in web-based systems where network latency dominates.
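The rocket lifecycle example itself is not reproduced in the post; a plausible sketch of the pattern Tomasz described, using sealed interfaces and records (Java 17+, with type names of my own invention), might look like this:

```java
// Each state is an immutable record; a transition returns a NEW state object
// instead of mutating a field. Invalid transitions simply do not compile:
// there is no land() on a rocket that is still on the pad.
sealed interface RocketState permits OnPad, InFlight, Landed {}

record OnPad(String mission) implements RocketState {
    InFlight launch() { return new InFlight(mission); }
}

record InFlight(String mission) implements RocketState {
    Landed land() { return new Landed(mission); }
}

record Landed(String mission) implements RocketState {}

public class RocketDemo {
    public static void main(String[] args) {
        Landed done = new OnPad("demo").launch().land();
        System.out.println(done); // Landed[mission=demo]
    }
}
```

The type system, not runtime checks, guarantees that every rocket in a Landed state actually went through launch() and land(), which is the predictability Tomasz was after.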

Testing and Tooling Considerations

Testing immutable classes presents unique challenges, particularly with tools like Mockito. Tomasz noted that while Mockito supports final classes in newer versions, mocking immutable objects may indicate design flaws. Instead, he recommended creating real objects via constructors for testing, emphasizing their intentional design for construction. For developers working with legacy systems or external libraries, Tomasz advised cautious adoption of immutability, leveraging tools like Terraform for infrastructure consistency and Java’s evolving ecosystem to reduce boilerplate. His pragmatic approach underscored the importance of aligning immutability with project goals, avoiding dogmatic adherence to either mutable or immutable paradigms.

Embracing Immutability in Java’s Evolution

Concluding his talk, Tomasz positioned immutability as a cornerstone of Java’s ongoing evolution, from records to potential future enhancements like immutable collections. He urged developers to reduce mutation in their codebases and consider immutability beyond concurrency, citing benefits in caching, hashing, and overall design clarity. While acknowledging that mutable classes remain suitable for certain use cases, such as JPA entities in dynamic domains, Tomasz advocated for a mindful approach to code design, prioritizing immutability where it enhances robustness and maintainability.


[DevoxxPL2022] Integrate Hibernate with Your Elasticsearch Database • Bartosz de Boulange

At Devoxx Poland 2022, Bartosz de Boulange, a Java developer at BGŻ BNP Paribas, delivered an insightful presentation on Hibernate Search, a powerful tool that seamlessly integrates traditional Object-Relational Mapping (ORM) with NoSQL databases like Elasticsearch. Bartosz’s talk focused on enabling full-text search capabilities within SQL-based applications, offering a practical solution for developers seeking to enhance search functionality without migrating entirely to a NoSQL ecosystem. Through a blend of theoretical insights and hands-on coding demonstrations, he illustrated how Hibernate Search can address complex search requirements in modern applications.

The Power of Full-Text Search

Bartosz began by addressing the challenge of implementing robust search functionality in applications backed by SQL databases. For instance, in a bookstore application, users might need to search for specific phrases within thousands of reviews. Traditional SQL queries, such as LIKE statements, are often inadequate for such tasks due to their limited ability to handle complex text analysis. Hibernate Search solves this by enabling full-text search, which includes character filtering, tokenization, and normalization. These features allow developers to remove irrelevant characters, break text into searchable tokens, and standardize data for efficient querying. Unlike native SQL full-text search capabilities, Hibernate Search offers a more streamlined and scalable approach, making it ideal for applications requiring sophisticated search features.
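To make the three stages concrete, here is a toy analyzer in plain Java. It is emphatically not Hibernate Search's implementation (real analyzers come from Lucene or Elasticsearch); it only illustrates what character filtering, tokenization, and normalization each contribute:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class NaiveAnalyzer {

    // A toy version of a full-text analysis pipeline:
    //   1. character filtering: strip punctuation and other noise characters
    //   2. tokenization:        break the text into individual terms
    //   3. normalization:       lowercase so "Hibernate" matches "hibernate"
    public static List<String> analyze(String text) {
        String filtered = text.replaceAll("[^\\p{Alnum}\\s]", " ");
        return Arrays.stream(filtered.split("\\s+"))
                .filter(token -> !token.isBlank())
                .map(String::toLowerCase)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(analyze("Hibernate Search, in 5 steps!"));
        // [hibernate, search, in, 5, steps]
    }
}
```

An inverted index built over tokens like these is what lets Elasticsearch answer phrase and fuzzy queries in ways a SQL LIKE scan never could.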

Integrating Hibernate with Elasticsearch

The core of Bartosz’s presentation was a step-by-step guide to integrating Hibernate Search with Elasticsearch. He outlined five key steps: creating JPA entities, adding Hibernate Search dependencies, annotating entities for indexing, configuring fields for NoSQL storage, and performing initial indexing. By annotating entities with @Indexed, developers can create indexes in Elasticsearch at application startup. Fields are annotated as @FullTextField for tokenization and search, @KeywordField for sorting, or @GenericField for basic querying. Bartosz emphasized the importance of the @FullTextField, which enables advanced search capabilities like fuzzy matching and phrase queries. His live coding demo showcased how to set up a Docker Compose file with MySQL and Elasticsearch, configure the application, and index a bookstore’s data, demonstrating the ease of integrating these technologies.

Scalability and Synchronization Challenges

A significant advantage of using Elasticsearch with Hibernate Search is its scalability. Unlike Apache Lucene, which is limited to a single node and suited for smaller projects, Elasticsearch supports distributed data across multiple nodes, making it ideal for enterprise applications. However, Bartosz highlighted a key challenge: synchronization between SQL and NoSQL databases. Changes in the SQL database may not immediately reflect in Elasticsearch due to communication overhead. To address this, he introduced an experimental outbox polling coordination strategy, which uses additional SQL tables to maintain update order. While still in development, this feature promises to improve data consistency, a critical aspect for production environments.

Practical Applications and Benefits

Bartosz demonstrated practical applications of Hibernate Search through a bookstore example, where users could search for books by title, description, or reviews. His demo showed how to query Elasticsearch for terms like “Hibernate” or “programming,” retrieving relevant results ranked by relevance. Additionally, Hibernate Search supports advanced features like sorting by distance for geolocation-based queries and projections for retrieving partial documents, reducing reliance on the SQL database for certain operations. These capabilities make Hibernate Search a versatile tool for developers aiming to enhance search performance while maintaining their existing SQL infrastructure.
