Posts Tagged ‘DevoxxFR2025’

[DevoxxFR2025] Boosting Java Application Startup Time: JVM and Framework Optimizations

In the world of modern application deployment, particularly in cloud-native and microservice architectures, fast startup time is a crucial factor impacting scalability, resilience, and cost efficiency. Slow-starting applications can delay deployments, hinder auto-scaling responsiveness, and consume resources unnecessarily. Olivier Bourgain, in his presentation, delved into strategies for significantly accelerating the startup time of Java applications, focusing on optimizations at both the Java Virtual Machine (JVM) level and within popular frameworks like Spring Boot. He explored techniques ranging from garbage collection tuning to leveraging emerging technologies like OpenJDK’s Project Leyden and Spring AOT (Ahead-of-Time Compilation) to make Java applications lighter, faster, and more efficient from the moment they start.

The Importance of Fast Startup

Olivier began by explaining why fast startup time matters in modern environments. In microservices architectures, applications are frequently started and stopped as part of scaling events, deployments, or rolling updates. A slow startup adds to the time it takes to scale up to handle increased load, potentially leading to performance degradation or service unavailability. In serverless or function-as-a-service environments, cold starts (the time it takes for an idle instance to become ready) are directly impacted by application startup time, affecting latency and user experience. Faster startup also improves developer productivity by reducing the waiting time during local development and testing cycles. Olivier emphasized that optimizing startup time is no longer just a minor optimization but a fundamental requirement for efficient cloud-native deployments.

JVM and Garbage Collection Optimizations

Optimizing the JVM configuration and understanding garbage collection behavior are foundational steps in improving Java application startup. Olivier discussed how different garbage collectors (like G1, Parallel, or ZGC) can impact startup time and memory usage. Tuning JVM arguments related to heap size, garbage collection pauses, and just-in-time (JIT) compilation tiers can influence how quickly the application becomes responsive. While JIT compilation is crucial for long-term performance, it can introduce startup overhead as the JVM analyzes and optimizes code during initial execution. Techniques like Class Data Sharing (CDS) were mentioned as a way to reduce startup time by sharing pre-processed class metadata between multiple JVM instances. Olivier provided practical tips and configurations for optimizing JVM settings specifically for faster startup, balancing it with overall application performance.
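
For illustration (the particular flags below are standard JDK options, not necessarily the ones shown on Olivier's slides): limiting tiered compilation trades peak throughput for faster warm-up, and AppCDS reuses pre-parsed class metadata across runs.

```bash
# Favor startup over peak throughput: stop JIT compilation at tier 1 (C1).
# Trade-off: lower long-term peak performance.
java -XX:TieredStopAtLevel=1 -jar app.jar

# Application Class Data Sharing (AppCDS, JDK 13+):
# 1. Run once and record the loaded classes into an archive on exit.
java -XX:ArchiveClassesAtExit=app.jsa -jar app.jar
# 2. Start subsequent runs from the shared archive.
java -XX:SharedArchiveFile=app.jsa -jar app.jar
```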

Framework Optimizations: Spring Boot and Beyond

Popular frameworks like Spring Boot, while providing immense productivity benefits, can sometimes contribute to longer startup times due to their extensive features and reliance on reflection and classpath scanning during initialization. Olivier explored strategies within the Spring ecosystem and other frameworks to mitigate this. He highlighted Spring AOT (Ahead-of-Time Compilation) as a transformative technology that analyzes the application at build time and generates optimized code and configuration, reducing the work the JVM needs to do at runtime. This can significantly decrease startup time and memory footprint, making Spring Boot applications more suitable for resource-constrained environments and serverless deployments. Project Leyden in OpenJDK, aiming to enable static images and further AOT compilation for Java, was also discussed as a future direction for improving startup performance at the language level. Olivier demonstrated how applying these framework-specific optimizations and leveraging AOT compilation can have a dramatic impact on the startup speed of Java applications, making them competitive with applications written in languages traditionally known for faster startup.
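
As a hedged sketch of the workflow (the process-aot goal and the spring.aot.enabled property come from the Spring Boot documentation; the exact build wiring varies by Boot version):

```bash
# Build with AOT processing enabled, e.g. by binding the
# spring-boot-maven-plugin's process-aot goal in the pom:
mvn clean package

# Run on a regular JVM using the AOT-generated configuration
# instead of runtime classpath scanning and reflection:
java -Dspring.aot.enabled=true -jar target/app.jar
```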

[DevoxxFR2025] Winamax’s Journey Towards Cross-Platform

In today’s multi-device world, providing a consistent and high-quality user experience across various platforms is a significant challenge for software companies. For online gaming and betting platforms like Winamax, reaching users on desktop, web, and mobile devices is paramount. Anthony Maffert and Florian Yger from Winamax shared their insightful experience report detailing the company’s ambitious journey to unify their frontend applications onto a single cross-platform engine. Their presentation explored the technical challenges, the architectural decisions, and the concrete lessons learned during this migration, showcasing how they leveraged modern web technologies like JavaScript, React, and WebGL to achieve a unified codebase for their desktop and mobile applications.

The Challenge of a Fragmented Frontend

Winamax initially faced a fragmented frontend landscape, with separate native applications for desktop (Windows, macOS) and mobile (iOS, Android), alongside their web platform. Maintaining and developing features across these disparate codebases was inefficient, leading to duplicated efforts, inconsistencies in user experience, and slower delivery of new features. The technical debt associated with supporting multiple platforms became a significant hurdle. Anthony and Florian highlighted the clear business and technical need to consolidate their frontend development onto a single platform that could target all the required devices while maintaining performance and a rich user experience, especially crucial for a real-time application like online poker.

Choosing a Cross-Platform Engine

After evaluating various options, Winamax made the strategic decision to adopt a cross-platform approach based on web technologies. They chose to leverage JavaScript, specifically within the React framework, for building their user interfaces. For rendering the complex and dynamic visuals required for a poker client, they opted for WebGL, a web standard for rendering 2D and 3D graphics within a browser, which can also be utilized in cross-platform frameworks. Their previous experience with JavaScript on their web platform played a role in this decision. The core idea was to build a single application logic and UI layer using these web technologies and then deploy it across desktop and mobile using wrapper technologies (such as Electron on desktop, with variations for mobile), the desktop migration being the primary focus of this talk.

The Migration Process and Lessons Learned

Anthony and Florian shared their experience with the migration process, which was a phased approach given the complexity of a live gaming platform. They discussed the technical challenges encountered, such as integrating native device functionalities (like file system access for desktop) within the web technology stack, optimizing WebGL rendering performance for different hardware, and ensuring a smooth transition for existing users. They touched upon the architectural changes required to support a unified codebase, potentially involving a clear separation between the cross-platform UI logic and any platform-specific native modules or integrations. Key lessons learned included the importance of careful planning, thorough testing on all target platforms, investing in performance optimization, and managing the technical debt during the transition. They also highlighted the benefits reaped from this migration, including faster feature development, reduced maintenance overhead, improved consistency across platforms, and the ability to leverage a larger pool of web developers. The presentation offered a valuable case study for other organizations considering a similar move towards cross-platform development using web technologies.

[DevoxxFR2025] Dagger Modules: A Swiss Army Knife for Modern CI/CD Pipelines

Continuous Integration and Continuous Delivery (CI/CD) pipelines are the backbone of modern software development, automating the process of building, testing, and deploying applications. However, as these pipelines grow in complexity, they often become difficult to maintain, debug, and port across different execution platforms, frequently relying on verbose and platform-specific YAML configurations. Jean-Christophe Sirot, in his presentation, introduced Dagger as a revolutionary approach to CI/CD, allowing pipelines to be written as code, executable locally, testable, and portable. He explored Dagger Functions and Dagger Modules as key concepts for creating and sharing reusable, language-agnostic components for CI/CD workflows, positioning Dagger as a versatile “Swiss Army knife” for modernizing these critical pipelines.

The Pain Points of Traditional CI/CD

Jean-Christophe began by outlining the common frustrations associated with traditional CI/CD pipelines. Relying heavily on YAML or other declarative formats for defining pipelines can lead to complex, repetitive, and hard-to-read configurations, especially for intricate workflows. Debugging failures within these pipelines is often challenging, requiring pushing changes to a remote CI server and waiting for the pipeline to run. Furthermore, pipelines written for one CI platform (like GitHub Actions or GitLab CI) are often not easily transferable to another, creating vendor lock-in and hindering flexibility. This dependency on specific platforms and the difficulty in managing complex workflows manually are significant pain points for development and DevOps teams.

Dagger: CI/CD as Code

Dagger offers a fundamentally different approach by treating CI/CD pipelines as code. It allows developers to write their pipeline logic using familiar programming languages (like Go, Python, Java, or TypeScript) instead of platform-specific configuration languages. This brings the benefits of software development practices – such as code reusability, modularity, testing, and versioning – to CI/CD. Jean-Christophe explained that Dagger executes these pipelines using containers, ensuring consistency and portability across different environments. The Dagger engine runs the pipeline logic, orchestrates the necessary container operations, and manages dependencies. This allows developers to run and debug their CI/CD pipelines locally using the same code that will execute on the remote CI platform, significantly accelerating the debugging cycle.
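
As an illustrative sketch (not code from the talk), here is roughly what such a pipeline step looks like with the Dagger Go SDK; the Maven image and commands are assumptions for a hypothetical Java project:

```go
package main

import (
	"context"
	"fmt"
	"os"

	"dagger.io/dagger"
)

func main() {
	ctx := context.Background()

	// Connect to the local Dagger engine; the same code runs on a laptop or in CI.
	client, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stderr))
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Mount the project source and run the build inside a container.
	src := client.Host().Directory(".")
	out, err := client.Container().
		From("maven:3.9-eclipse-temurin-21").
		WithDirectory("/src", src).
		WithWorkdir("/src").
		WithExec([]string{"mvn", "-q", "package"}).
		Stdout(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```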

Dagger Functions and Modules

Key to Dagger’s power are Dagger Functions and Dagger Modules. Jean-Christophe described Dagger Functions as the basic building blocks of a pipeline – functions written in a programming language that perform specific CI/CD tasks (e.g., building a Docker image, running tests, deploying an application). These functions interact with the Dagger engine to perform container operations. Dagger Modules are collections of related Dagger Functions that can be packaged and shared. Modules allow teams to create reusable components for common CI/CD patterns or specific technologies, effectively creating a library of CI/CD capabilities. For example, a team could create a “Java Build Module” containing functions for compiling Java code, running Maven or Gradle tasks, and building JAR or WAR files. These modules can be easily imported and used in different projects, promoting standardization and reducing duplication across an organization’s CI/CD workflows. Jean-Christophe demonstrated how to create and use Dagger Modules, illustrating their potential for building composable and maintainable pipelines. He highlighted that Dagger’s language independence means that modules can be written in one language (e.g., Python) and used in a pipeline defined in another (e.g., Java), fostering collaboration between teams with different language preferences.
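
A Dagger Module then packages such logic behind named functions. The sketch below follows the conventions of a Go module scaffolded with `dagger init --sdk=go` (the scaffold generates the imports and the `dag` client); the JavaBuild name and function are hypothetical:

```go
// A module function: callable from the CLI or from other modules,
// regardless of the language those modules are written in.
type JavaBuild struct{}

// Package compiles a Maven project and returns the target directory
// containing the built artifacts. `dag` is the client that Dagger
// generates for module code.
func (m *JavaBuild) Package(src *dagger.Directory) *dagger.Directory {
	return dag.Container().
		From("maven:3.9-eclipse-temurin-21").
		WithDirectory("/src", src).
		WithWorkdir("/src").
		WithExec([]string{"mvn", "-q", "package"}).
		Directory("/src/target")
}
```

Such a function could then be invoked with something like `dagger call package --src=. export --path=./target`, or imported into another project’s pipeline.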

The Benefits: Composable, Maintainable, Portable

By adopting Dagger, teams can create CI/CD pipelines that are:
Composable: Pipelines can be built by combining smaller, reusable Dagger Modules and Functions.
Maintainable: Pipelines written as code are easier to read, understand, and refactor using standard development tools and practices.
Portable: Pipelines can run on any platform that supports Dagger and containers, eliminating vendor lock-in.
Testable: Individual Dagger Functions and modules can be unit tested, and the entire pipeline can be run and debugged locally.

Jean-Christophe’s presentation positioned Dagger as a versatile tool that modernizes CI/CD by bringing the best practices of software development to pipeline automation. The ability to write pipelines in code, leverage reusable modules, and execute locally makes Dagger a powerful “Swiss Army knife” for developers and DevOps engineers seeking more efficient, reliable, and maintainable CI/CD workflows.

[DevoxxFR2025] Go Without Frills: When the Standard Library Suffices

Go, the programming language designed by Google, has gained significant popularity for its simplicity, efficiency, and strong support for concurrent programming. A core philosophy of Go is its minimalist design and emphasis on a robust standard library, encouraging developers to “do a lot with a little.” Nathan Castelein, in his presentation, championed this philosophy, demonstrating how a significant portion of modern applications can be built effectively using only Go’s standard library, without resorting to numerous third-party dependencies. He explored various native packages and compared their functionalities to well-known third-party alternatives, showcasing why and how returning to the fundamentals can lead to simpler, more maintainable, and often equally performant Go applications.

The Go Standard Library: A Powerful Foundation

Nathan highlighted the richness and capability of Go’s standard library. Unlike some languages where the standard library is minimal, Go provides a comprehensive set of packages covering a wide range of functionalities, from networking and HTTP to encoding/decoding, cryptography, and testing. He emphasized that these standard packages are well-designed, thoroughly tested, and actively maintained, making them a reliable choice for building production-ready applications. Focusing on the standard library reduces the number of external dependencies, which simplifies project management, minimizes potential security vulnerabilities introduced by third-party code, and avoids the complexities of managing version conflicts. It also encourages developers to gain a deeper understanding of the language’s built-in capabilities.

Comparing Standard Packages to Third-Party Libraries

The core of Nathan’s talk involved comparing functionalities provided by standard Go packages with those offered by popular third-party libraries. He showcased examples in areas such as:
Web Development: building web servers and handling HTTP requests with the net/http package, contrasted with frameworks like Gin, Echo, or Fiber, showing that for many common web tasks the standard library provides sufficient features (see the sketch below).
Logging: structured logging with the log/slog package (introduced in Go 1.21), compared to libraries like Logrus or Zerolog, highlighting how log/slog provides modern logging features natively.
Testing: writing unit and integration tests with the testing package, which covers many common assertion scenarios without reaching for assertion libraries like Testify.

The comparison aimed to show that while third-party libraries often provide convenience or specialized features, the standard library has evolved to incorporate many commonly needed functionalities, often in a simpler and more idiomatic Go way.
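
For instance, here is a minimal sketch (mine, not Nathan's) of the kind of standard-library-only service this approach yields, combining net/http routing (method and path-parameter patterns since Go 1.22) with log/slog structured logging:

```go
package main

import (
	"fmt"
	"log/slog"
	"net/http"
	"os"
)

func main() {
	// Structured JSON logging, standard library only (Go 1.21+).
	slog.SetDefault(slog.New(slog.NewJSONHandler(os.Stdout, nil)))

	mux := http.NewServeMux()
	// Method and path-parameter routing, standard library only (Go 1.22+).
	mux.HandleFunc("GET /hello/{name}", func(w http.ResponseWriter, r *http.Request) {
		name := r.PathValue("name")
		slog.Info("handling request", "name", name)
		fmt.Fprintf(w, "hello %s\n", name)
	})

	slog.Info("listening", "addr", ":8080")
	if err := http.ListenAndServe(":8080", mux); err != nil {
		slog.Error("server stopped", "error", err)
	}
}
```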

The Benefits of a Minimalist Approach

Nathan articulated the benefits of embracing a “Go without frills” approach. Using the standard library more extensively leads to:
Reduced Complexity: Fewer dependencies mean a simpler project structure and fewer moving parts to understand and manage.
Improved Maintainability: Code relying on standard libraries is often easier to maintain over time, as the dependencies are stable and well-documented.
Enhanced Performance: Standard library implementations are often highly optimized and integrated with the Go runtime.
Faster Compilation: Fewer dependencies can lead to quicker build times.
Smaller Binaries: Avoiding large third-party libraries can result in smaller executable files.

He acknowledged that there are still valid use cases for third-party libraries, especially for highly specialized tasks or when a library provides significant productivity gains. However, the key takeaway was to evaluate the necessity of adding a dependency and to leverage the powerful standard library whenever it suffices. The talk encouraged developers to revisit the fundamentals and appreciate the elegance and capability of Go’s built-in tools for building robust and efficient applications.

[DevoxxFR2025] Building an Agentic AI with Structured Outputs, Function Calling, and MCP

The rapid advancements in Artificial Intelligence, particularly in large language models (LLMs), are enabling the creation of more sophisticated and autonomous AI agents – programs capable of understanding instructions, reasoning, and interacting with their environment to achieve goals. Building such agents requires effective ways for the AI model to communicate programmatically and to trigger external actions. Julien Dubois, in his deep-dive session, explored key techniques and a new protocol essential for constructing these agentic AI systems: Structured Outputs, Function Calling, and the Model Context Protocol (MCP). Using practical examples and the latest Java SDK developed by OpenAI, he demonstrated how to implement these features within LangChain4j, showcasing how developers can build AI agents that go beyond simple text generation.

Structured Outputs: Enabling Programmatic Communication

One of the challenges in building AI agents is getting LLMs to produce responses in a structured format that can be easily parsed and used by other parts of the application. Julien explained how Structured Outputs address this by allowing developers to define a specific JSON schema that the AI model must adhere to when generating its response. This ensures that the output is not just free-form text but follows a predictable structure, making it straightforward to map the AI’s response to data objects in programming languages like Java. He demonstrated how to provide the LLM with a JSON schema definition and constrain its output to match that schema, enabling reliable programmatic communication between the AI model and the application logic. This is crucial for scenarios where the AI needs to provide data in a specific format for further processing or action.
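
The talk's examples used the OpenAI Java SDK; as an equivalent, hedged sketch, LangChain4j can derive a JSON schema from a Java record and bind the model's constrained output to it (the WeatherReport type below is hypothetical):

```java
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.UserMessage;

// LangChain4j derives a JSON schema from this record and constrains
// the model's response to match it.
record WeatherReport(String city, double temperatureCelsius, String summary) {}

interface WeatherExtractor {
    @UserMessage("Extract a weather report from the following text: {{it}}")
    WeatherReport extract(String text);
}

// Usage, given a configured chat model (construction omitted):
// WeatherReport report = AiServices.create(WeatherExtractor.class, model)
//         .extract("In Paris it is a mild 18 degrees with light rain.");
```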

Function Calling: Giving AI the Ability to Act

To be truly agentic, an AI needs the ability to perform actions in the real world or interact with external tools and services. Julien introduced Function Calling as a powerful mechanism that allows developers to define functions in their code (e.g., Java methods) and expose them to the AI model. The LLM can then understand when a user’s request requires calling one of these functions and generate a structured output indicating which function to call and with what arguments. The application then intercepts this output, executes the corresponding function, and can provide the function’s result back to the AI, allowing for a multi-turn interaction where the AI reasons, acts, and incorporates the results into its subsequent responses. Julien demonstrated how to define function “signatures” that the AI can understand and how to handle the function calls triggered by the AI, showcasing scenarios like retrieving information from a database or interacting with an external API based on the user’s natural language request.
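
In LangChain4j terms (the talk showed the equivalent with the OpenAI Java SDK), a function is exposed by annotating a method; the BookingTools class below is a hypothetical example:

```java
import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.service.AiServices;

class BookingTools {
    // The model receives this description together with the method's
    // signature, and can request an invocation with extracted arguments.
    @Tool("Returns the status of a booking, given its id")
    String bookingStatus(String bookingId) {
        return "CONFIRMED"; // stand-in for a real database lookup
    }
}

interface Assistant {
    String chat(String userMessage);
}

// Usage, given a configured chat model (construction omitted):
// Assistant assistant = AiServices.builder(Assistant.class)
//         .chatLanguageModel(model)
//         .tools(new BookingTools())
//         .build();
// assistant.chat("What is the status of booking 42?");
```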

MCP: Standardizing LLM Interaction

While Structured Outputs and Function Calling provide the capabilities for AI communication and action, the Model Context Protocol (MCP) emerges as a new standard to streamline how LLMs interact with various data sources and tools. Julien discussed MCP as a protocol that standardizes the communication layer between the applications orchestrating an AI model (MCP clients) and the external tools and data sources they expose to it (MCP servers). This standardization can facilitate building more portable and interoperable agentic AI systems, allowing developers to switch between different LLMs or integrate new tools and data sources more easily. While details of MCP are still evolving, its goal is to provide a common interface for tasks like tool invocation, accessing external knowledge, and managing conversational state. Julien illustrated how libraries like LangChain4j are adopting these concepts and integrating with MCP to simplify the development of sophisticated AI agents. The presentation, rich in code examples using the OpenAI Java SDK, provided developers with the practical knowledge and tools to start building the next generation of agentic AI applications.
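
As a rough sketch of that integration (the class and builder names below are assumptions based on the langchain4j-mcp module and may differ across versions), an MCP server's tools can be plugged into an AI service as a tool provider:

```java
import dev.langchain4j.mcp.McpToolProvider;
import dev.langchain4j.mcp.client.DefaultMcpClient;
import dev.langchain4j.mcp.client.McpClient;
import dev.langchain4j.mcp.client.transport.stdio.StdioMcpTransport;
import dev.langchain4j.service.tool.ToolProvider;

import java.util.List;

public class McpDemo {
    public static void main(String[] args) {
        // Launch an MCP server as a subprocess and talk to it over stdio.
        McpClient mcpClient = new DefaultMcpClient.Builder()
                .transport(new StdioMcpTransport.Builder()
                        .command(List.of("npm", "exec", "@modelcontextprotocol/server-everything"))
                        .build())
                .build();

        // Expose the server's tools to the model, like local @Tool methods.
        ToolProvider toolProvider = McpToolProvider.builder()
                .mcpClients(List.of(mcpClient))
                .build();

        // Wire the provider into an AI service:
        // AiServices.builder(Assistant.class)
        //         .chatLanguageModel(model)
        //         .toolProvider(toolProvider)
        //         .build();
    }
}
```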
