
Posts Tagged ‘DevoxxFR2025’

[DevoxxFR2025] Spark 4 and Iceberg: The New Standard for All Your Data Projects

The world of big data is constantly evolving, with new technologies emerging to address the challenges of managing and processing ever-increasing volumes of data. Apache Spark has long been a dominant force in big data processing, and its evolution continues with Spark 4. Complementing this is Apache Iceberg, a modern table format that is rapidly becoming the standard for managing data lakes. Pierre Andrieux from Capgemini and Houssem Chihoub from Databricks joined forces to demonstrate how the combination of Spark 4 and Iceberg is set to revolutionize data projects, offering improved performance, enhanced data management capabilities, and a more robust foundation for data lakes.

Spark 4: Boosting Performance and Data Lake Support

Pierre and Houssem highlighted the major new features and enhancements in Apache Spark 4. A key area of improvement is performance, with a new query engine and automatic query optimization designed to accelerate data processing workloads. Spark 4 also brings enhanced native support for data lakes, simplifying interactions with data stored in formats like Parquet and ORC on distributed file systems. This tighter integration improves efficiency and reduces the need for external connectors or complex configurations. The presentation included performance comparisons illustrating the gains achieved with Spark 4, particularly when working with large datasets in a data lake environment.

Apache Iceberg Demystified: A Next-Generation Table Format

Apache Iceberg addresses the limitations of traditional table formats used in data lakes. Houssem demystified Iceberg, explaining that it provides a layer of abstraction on top of data files, bringing database-like capabilities to data lakes. Key features of Iceberg include:
Time Travel: The ability to query historical snapshots of a table, enabling reproducible reports and simplified data rollbacks.
Schema Evolution: Support for safely evolving table schemas over time (e.g., adding, dropping, or renaming columns) without requiring costly data rewrites.
Hidden Partitioning: Iceberg tracks partition values transparently, so queries benefit from partition pruning without users having to manage, or even know, the table's partition layout.
Atomic Commits: Ensures that changes to a table are atomic, providing reliability and consistency even in distributed environments.

These features solve many of the pain points associated with managing data lakes, such as schema management complexities, difficulty in handling updates and deletions, and lack of transactionality.
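To make these features concrete, here is a minimal sketch (not taken from the talk) of how they surface in Spark SQL when issued from a Java application. It assumes an Iceberg catalog named local has been configured for the Spark session, and the table name and timestamp are purely illustrative.

```java
import org.apache.spark.sql.SparkSession;

// Minimal sketch: an Iceberg catalog named "local" is assumed to be configured
// (spark.sql.catalog.local = org.apache.iceberg.spark.SparkCatalog).
public class IcebergFeaturesSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("iceberg-features")
                .getOrCreate();

        // Create an Iceberg table (hypothetical schema, for illustration only).
        spark.sql("CREATE TABLE IF NOT EXISTS local.db.events (id BIGINT, payload STRING) USING iceberg");

        // Schema evolution: add a column without rewriting existing data files.
        spark.sql("ALTER TABLE local.db.events ADD COLUMN country STRING");

        // Time travel: query the table as it was at an earlier point in time.
        spark.sql("SELECT * FROM local.db.events TIMESTAMP AS OF '2025-01-01 00:00:00'").show();

        spark.stop();
    }
}
```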

The Power of Combination: Spark 4 and Iceberg

The true power lies in combining the processing capabilities of Spark 4 with the data management features of Iceberg. Pierre and Houssem illustrated through concrete use cases and live demonstrations how this combination enables building modern data pipelines. They showed how Spark 4 can efficiently read from and write to Iceberg tables, leveraging Iceberg’s features like time travel for historical analysis or schema evolution for seamlessly integrating data with changing structures. The integration allows data engineers and data scientists to work with data lakes with greater ease, reliability, and performance, making this combination a compelling new standard for data projects. The talk covered best practices for implementing data pipelines with Spark 4 and Iceberg and discussed potential pitfalls to avoid, providing attendees with the knowledge to leverage these technologies effectively in their own data initiatives.
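As a companion sketch, again not taken from the talk, the same kind of pipeline can be expressed with Spark’s Java DataFrame API; the input path, table name, and snapshot id below are hypothetical.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkIcebergPipelineSketch {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("spark-iceberg-pipeline")
                .getOrCreate();

        // Read a source dataset (hypothetical Parquet input path).
        Dataset<Row> incoming = spark.read().parquet("s3://bucket/raw/events/");

        // Append into an Iceberg table through the DataFrameWriterV2 API;
        // the commit is atomic, so readers never see a partial write.
        incoming.writeTo("local.db.events").append();

        // Time travel from the DataFrame API: read a specific snapshot
        // (the snapshot id would come from the table's history metadata).
        Dataset<Row> asOfSnapshot = spark.read()
                .option("snapshot-id", "1234567890123456789")
                .table("local.db.events");
        asOfSnapshot.show();

        spark.stop();
    }
}
```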

Links:

[DevoxxFR2025] Alert, Everything’s Burning! Mastering Technical Incidents

In the fast-paced world of technology, technical incidents are an unavoidable reality. When systems fail, the ability to quickly detect, diagnose, and resolve issues is paramount to minimizing impact on users and the business. Alexis Chotard, Laurent Leca, and Luc Chmielowski from PayFit shared their invaluable experience and strategies for mastering technical incidents, even as a rapidly scaling “unicorn” company. Their presentation went beyond just technical troubleshooting, delving into the crucial aspects of defining and evaluating incidents, effective communication, product-focused response, building organizational resilience, managing on-call duties, and transforming crises into learning opportunities through structured post-mortems.

Defining and Responding to Incidents

The first step in mastering incidents is having a clear understanding of what constitutes an incident and its severity. Alexis, Laurent, and Luc discussed how PayFit defines and categorizes technical incidents based on their impact on users and business operations. This often involves established severity levels and clear criteria for escalation. Their approach emphasized a rapid and coordinated response involving not only technical teams but also product and communication stakeholders to ensure a holistic approach. They highlighted the importance of clear internal and external communication during an incident, keeping relevant parties informed about the status, impact, and expected resolution time. This transparency helps manage expectations and build trust during challenging situations.

Technical Resolution and Product Focus

While quick technical mitigation to restore service is the immediate priority during an incident, the PayFit team stressed the importance of a product-focused approach. This involves understanding the user impact of the incident and prioritizing resolution steps that minimize disruption for customers. They discussed strategies for effective troubleshooting, leveraging monitoring and logging tools to quickly identify the root cause. Beyond immediate fixes, they highlighted the need to address the underlying issues to prevent recurrence. This often involves implementing technical debt reduction measures or improving system resilience as a direct outcome of incident analysis. Their experience showed that a strong collaboration between engineering and product teams is essential for navigating incidents effectively and ensuring that the user experience remains a central focus.

Organizational Resilience and Learning

Mastering incidents at scale requires building both technical and organizational resilience. The presenters discussed how PayFit has evolved its on-call rotation models to ensure adequate coverage while maintaining a healthy work-life balance for engineers. They touched upon the importance of automation in detecting and mitigating incidents faster. A core tenet of their approach was the implementation of structured post-mortems (or retrospectives) after every significant incident. These post-mortems are blameless, focusing on identifying the technical and process-related factors that contributed to the incident and defining actionable steps for improvement. By transforming crises into learning opportunities, PayFit continuously strengthens its systems and processes, reducing the frequency and impact of future incidents. Their journey over 18 months demonstrated that investing in these practices is crucial for any growing organization aiming to build robust and reliable systems.

Links:

[DevoxxFR2025] Simplify Your Ideas’ Containerization!

For many developers and DevOps engineers, creating and managing Dockerfiles can feel like a tedious chore. Ensuring best practices, optimizing image layers, and keeping up with security standards often add friction to the containerization process. Thomas DA ROCHA from Lenra, in his presentation, introduced Dofigen as an open-source command-line tool designed to simplify this. He demonstrated how Dofigen allows users to generate optimized and secure Dockerfiles from a simple YAML or JSON description, making containerization quicker, easier, and less error-prone, even without deep Dockerfile expertise.

The Pain Points of Dockerfiles

Thomas began by highlighting the common frustrations associated with writing and maintaining Dockerfiles. These include:
Complexity: Writing effective Dockerfiles requires understanding various instructions, their order, and how they impact caching and layer size.
Time Consumption: Manually writing and optimizing Dockerfiles for different projects can be time-consuming.
Security Concerns: Ensuring that images are built securely, minimizing attack surface, and adhering to security standards can be challenging without expert knowledge.
Lack of Reproducibility: Small changes or inconsistencies in the build environment can sometimes lead to non-reproducible images.

These challenges can slow down development cycles and increase the risk of deploying insecure or inefficient containers.

Introducing Dofigen: Dockerfile Generation Simplified

Dofigen aims to abstract away the complexities of Dockerfile creation. Thomas explained that instead of writing a Dockerfile directly, users provide a simplified description of their application and its requirements in a YAML or JSON file. This description includes information such as the base image, application files, dependencies, ports, and desired security configurations. Dofigen then takes this description and automatically generates an optimized and standards-compliant Dockerfile. This approach allows developers to focus on defining their application’s needs rather than the intricacies of Dockerfile syntax and best practices. Thomas showed a live coding demo, transforming a simple application description into a functional Dockerfile using Dofigen.

Built-in Best Practices and Security Standards

A key advantage of Dofigen is its ability to embed best practices and security standards into the generated Dockerfiles automatically. Thomas highlighted that Dofigen incorporates knowledge about efficient layering, reducing image size, and minimizing the attack surface by following recommended guidelines. This means users don’t need to be experts in Dockerfile optimization or security to create robust images. The tool handles these aspects automatically based on the provided high-level description. Dofigen can also help with multi-stage builds and with user and permission best practices, which are crucial for building secure, production-ready images. By simplifying the process and baking in expertise, Dofigen empowers developers to containerize their applications quickly and confidently, ensuring that the resulting images are not only functional but also optimized and secure. The open-source nature of Dofigen also allows the community to contribute to improving its capabilities and keeping up with evolving best practices and security recommendations.

Links:

[DevoxxFR2025] Side Roads: Info Prof – An Experience Report

After years navigating the corporate landscape, particularly within the often-demanding environment of SSIIs (Sociétés de Services et d’Ingénierie en Informatique, the French term for IT consulting companies), many professionals reach a point of questioning their career path. Seeking a different kind of fulfillment or a better alignment with personal values, some choose to take a “side road” – a deliberate shift in their professional journey. Jerome BATON shared his personal experience taking such a path: transitioning from the world of IT services to becoming an IT professor. His presentation offered a candid look at this career change, exploring the motivations behind it, the realities of teaching, and why the next generation of IT professionals needs the experience and passion of those who have worked in the field.

The Turning Point: Seeking a Different Path

Jerome began by describing the feeling of reaching a turning point in his career within the SSII environment. While such roles offer valuable experience and exposure to diverse projects, they can also involve long hours, constant pressure, and a focus on deliverables that sometimes overshadow personal growth or the opportunity to share knowledge more broadly. He articulated the motivations that led him to consider a change, such as a desire for a better work-life balance, a need for a stronger sense of purpose, or a calling to contribute to the development of future talent. The idea of taking a “side road” suggests a deviation from a conventional linear career progression, a conscious choice to explore an alternative path that aligns better with personal aspirations.

The Reality of Being an Info Prof

Becoming an IT professor involves a different set of challenges and rewards compared to working in the industry. Jerome shared his experience in this new role, discussing the realities of teaching computer science or related subjects. This included aspects like curriculum development, preparing and delivering lectures and practical sessions, evaluating student progress, and engaging with the academic environment. He touched upon the satisfaction of sharing his industry knowledge and experience with students, guiding their learning, and witnessing their growth. The role also comes with administrative duties, the need to stay current with rapidly evolving technologies to keep course content relevant, and the distinct dynamics of working within an educational institution.

Why the Next Generation Needs Your Experience

A central message of Jerome’s presentation was the crucial role that experienced IT professionals can play in shaping the next generation. He argued that students benefit immensely from learning from individuals who have real-world experience, who understand the practical challenges of software development, and who can share insights beyond theoretical concepts. Industry professionals can provide valuable context, mentorship, and guidance, preparing students not just with technical skills but also with an understanding of industry best practices, teamwork, and problem-solving in real-world scenarios. Jerome’s own transition exemplified this, demonstrating how years of experience in IT services could be directly applied and leveraged in an educational setting to benefit aspiring developers. The talk served as a call to action, encouraging other experienced professionals to consider teaching or mentoring as a way to give back to the community and influence the future of the IT industry.

Links:

[DevoxxFR2025] Boosting Java Application Startup Time: JVM and Framework Optimizations

In the world of modern application deployment, particularly in cloud-native and microservice architectures, fast startup time is a crucial factor impacting scalability, resilience, and cost efficiency. Slow-starting applications can delay deployments, hinder auto-scaling responsiveness, and consume resources unnecessarily. Olivier Bourgain, in his presentation, delved into strategies for significantly accelerating the startup time of Java applications, focusing on optimizations at both the Java Virtual Machine (JVM) level and within popular frameworks like Spring Boot. He explored techniques ranging from garbage collection tuning to leveraging emerging technologies like OpenJDK’s Project Leyden and Spring AOT (Ahead-of-Time Compilation) to make Java applications lighter, faster, and more efficient from the moment they start.

The Importance of Fast Startup

Olivier began by explaining why fast startup time matters in modern environments. In microservices architectures, applications are frequently started and stopped as part of scaling events, deployments, or rolling updates. A slow startup adds to the time it takes to scale up to handle increased load, potentially leading to performance degradation or service unavailability. In serverless or function-as-a-service environments, cold starts (the time it takes for an idle instance to become ready) are directly impacted by application startup time, affecting latency and user experience. Faster startup also improves developer productivity by reducing the waiting time during local development and testing cycles. Olivier emphasized that optimizing startup time is no longer just a minor optimization but a fundamental requirement for efficient cloud-native deployments.

JVM and Garbage Collection Optimizations

Optimizing the JVM configuration and understanding garbage collection behavior are foundational steps in improving Java application startup. Olivier discussed how different garbage collectors (like G1, Parallel, or ZGC) can impact startup time and memory usage. Tuning JVM arguments related to heap size, garbage collection pauses, and just-in-time (JIT) compilation tiers can influence how quickly the application becomes responsive. While JIT compilation is crucial for long-term performance, it can introduce startup overhead as the JVM analyzes and optimizes code during initial execution. Techniques like Class Data Sharing (CDS) were mentioned as a way to reduce startup time by sharing pre-processed class metadata between multiple JVM instances. Olivier provided practical tips and configurations for optimizing JVM settings specifically for faster startup, balancing it with overall application performance.
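As an illustration of the CDS idea, the following commands sketch dynamic Application CDS as available in recent JDKs; app.jar and the archive name are hypothetical, and which flags are worth tuning depends on the JDK version and workload.

```
# 1) Training run: record loaded-class metadata into a shared archive on exit.
java -XX:ArchiveClassesAtExit=app-cds.jsa -jar app.jar

# 2) Subsequent starts: map the shared archive to skip part of class loading and verification.
java -XX:SharedArchiveFile=app-cds.jsa -jar app.jar

# Optional trade-off: favor startup over peak throughput by stopping JIT at the C1 tier.
java -XX:TieredStopAtLevel=1 -jar app.jar
```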

Framework Optimizations: Spring Boot and Beyond

Popular frameworks like Spring Boot, while providing immense productivity benefits, can sometimes contribute to longer startup times due to their extensive features and reliance on reflection and classpath scanning during initialization. Olivier explored strategies within the Spring ecosystem and other frameworks to mitigate this. He highlighted Spring AOT (Ahead-of-Time Compilation) as a transformative technology that analyzes the application at build time and generates optimized code and configuration, reducing the work the JVM needs to do at runtime. This can significantly decrease startup time and memory footprint, making Spring Boot applications more suitable for resource-constrained environments and serverless deployments. Project Leyden in OpenJDK, aiming to enable static images and further AOT compilation for Java, was also discussed as a future direction for improving startup performance at the language level. Olivier demonstrated how applying these framework-specific optimizations and leveraging AOT compilation can have a dramatic impact on the startup speed of Java applications, making them competitive with applications written in languages traditionally known for faster startup.
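For the Spring AOT part, a rough sketch of the workflow on a Maven-based Spring Boot 3 project might look as follows; the commands assume the standard spring-boot-maven-plugin setup and may differ slightly between Spring Boot versions.

```
# Build with AOT processing (the "native" profile triggers spring-boot:process-aot).
mvn -Pnative package

# Run the AOT-processed jar on a regular JVM with the generated optimizations enabled.
java -Dspring.aot.enabled=true -jar target/app.jar

# Or compile the application ahead of time into a GraalVM native executable.
mvn -Pnative native:compile
```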

Links:

[DevoxxFR2025] Winamax’s Journey Towards Cross-Platform

In today’s multi-device world, providing a consistent and high-quality user experience across various platforms is a significant challenge for software companies. For online gaming and betting platforms like Winamax, reaching users on desktop, web, and mobile devices is paramount. Anthony Maffert and Florian Yger from Winamax shared their insightful experience report detailing the company’s ambitious journey to unify their frontend applications onto a single cross-platform engine. Their presentation explored the technical challenges, the architectural decisions, and the concrete lessons learned during this migration, showcasing how they leveraged modern web technologies like JavaScript, React, and WebGL to achieve a unified codebase for their desktop and mobile applications.

The Challenge of a Fragmented Frontend

Winamax initially faced a fragmented frontend landscape, with separate native applications for desktop (Windows, macOS) and mobile (iOS, Android), alongside their web platform. Maintaining and developing features across these disparate codebases was inefficient, leading to duplicated efforts, inconsistencies in user experience, and slower delivery of new features. The technical debt associated with supporting multiple platforms became a significant hurdle. Anthony and Florian highlighted the clear business and technical need to consolidate their frontend development onto a single platform that could target all the required devices while maintaining performance and a rich user experience, especially crucial for a real-time application like online poker.

Choosing a Cross-Platform Engine

After evaluating various options, Winamax made the strategic decision to adopt a cross-platform approach based on web technologies. They chose to leverage JavaScript, specifically within the React framework, for building their user interfaces. For rendering the complex and dynamic visuals required for a poker client, they opted for WebGL, a web standard for rendering 2D and 3D graphics within a browser, which can also be utilized in cross-platform frameworks. Their previous experience with JavaScript on their web platform played a role in this decision. The core idea was to build a single application logic and UI layer using these web technologies and then deploy it across desktop and mobile using wrapper technologies (like Electron for desktop and potentially variations for mobile, although the primary focus of this talk seemed to be the desktop migration).

The Migration Process and Lessons Learned

Anthony and Florian shared their experience with the migration process, which was a phased approach given the complexity of a live gaming platform. They discussed the technical challenges encountered, such as integrating native device functionalities (like file system access for desktop) within the web technology stack, optimizing WebGL rendering performance for different hardware, and ensuring a smooth transition for existing users. They touched upon the architectural changes required to support a unified codebase, potentially involving a clear separation between the cross-platform UI logic and any platform-specific native modules or integrations. Key lessons learned included the importance of careful planning, thorough testing on all target platforms, investing in performance optimization, and managing the technical debt during the transition. They also highlighted the benefits reaped from this migration, including faster feature development, reduced maintenance overhead, improved consistency across platforms, and the ability to leverage a larger pool of web developers. The presentation offered a valuable case study for other organizations considering a similar move towards cross-platform development using web technologies.

Links:

[DevoxxFR2025] Dagger Modules: A Swiss Army Knife for Modern CI/CD Pipelines

Continuous Integration and Continuous Delivery (CI/CD) pipelines are the backbone of modern software development, automating the process of building, testing, and deploying applications. However, as these pipelines grow in complexity, they often become difficult to maintain, debug, and port across different execution platforms, frequently relying on verbose and platform-specific YAML configurations. Jean-Christophe Sirot, in his presentation, introduced Dagger as a revolutionary approach to CI/CD, allowing pipelines to be written as code, executable locally, testable, and portable. He explored Dagger Functions and Dagger Modules as key concepts for creating and sharing reusable, language-agnostic components for CI/CD workflows, positioning Dagger as a versatile “Swiss Army knife” for modernizing these critical pipelines.

The Pain Points of Traditional CI/CD

Jean-Christophe began by outlining the common frustrations associated with traditional CI/CD pipelines. Relying heavily on YAML or other declarative formats for defining pipelines can lead to complex, repetitive, and hard-to-read configurations, especially for intricate workflows. Debugging failures within these pipelines is often challenging, requiring pushing changes to a remote CI server and waiting for the pipeline to run. Furthermore, pipelines written for one CI platform (like GitHub Actions or GitLab CI) are often not easily transferable to another, creating vendor lock-in and hindering flexibility. This dependency on specific platforms and the difficulty in managing complex workflows manually are significant pain points for development and DevOps teams.

Dagger: CI/CD as Code

Dagger offers a fundamentally different approach by treating CI/CD pipelines as code. It allows developers to write their pipeline logic using familiar programming languages (like Go, Python, Java, or TypeScript) instead of platform-specific configuration languages. This brings the benefits of software development practices – such as code reusability, modularity, testing, and versioning – to CI/CD. Jean-Christophe explained that Dagger executes these pipelines using containers, ensuring consistency and portability across different environments. The Dagger engine runs the pipeline logic, orchestrates the necessary container operations, and manages dependencies. This allows developers to run and debug their CI/CD pipelines locally using the same code that will execute on the remote CI platform, significantly accelerating the debugging cycle.

Dagger Functions and Modules

Key to Dagger’s power are Dagger Functions and Dagger Modules. Jean-Christophe described Dagger Functions as the basic building blocks of a pipeline – functions written in a programming language that perform specific CI/CD tasks (e.g., building a Docker image, running tests, deploying an application). These functions interact with the Dagger engine to perform container operations. Dagger Modules are collections of related Dagger Functions that can be packaged and shared. Modules allow teams to create reusable components for common CI/CD patterns or specific technologies, effectively creating a library of CI/CD capabilities. For example, a team could create a “Java Build Module” containing functions for compiling Java code, running Maven or Gradle tasks, and building JAR or WAR files. These modules can be easily imported and used in different projects, promoting standardization and reducing duplication across an organization’s CI/CD workflows. Jean-Christophe demonstrated how to create and use Dagger Modules, illustrating their potential for building composable and maintainable pipelines. He highlighted that Dagger’s language independence means that modules can be written in one language (e.g., Python) and used in a pipeline defined in another (e.g., Java), fostering collaboration between teams with different language preferences.

The Benefits: Composable, Maintainable, Portable

By adopting Dagger, teams can create CI/CD pipelines that are:
Composable: Pipelines can be built by combining smaller, reusable Dagger Modules and Functions.
Maintainable: Pipelines written as code are easier to read, understand, and refactor using standard development tools and practices.
Portable: Pipelines can run on any platform that supports Dagger and containers, eliminating vendor lock-in.
Testable: Individual Dagger Functions and modules can be unit tested, and the entire pipeline can be run and debugged locally.

Jean-Christophe’s presentation positioned Dagger as a versatile tool that modernizes CI/CD by bringing the best practices of software development to pipeline automation. The ability to write pipelines in code, leverage reusable modules, and execute locally makes Dagger a powerful “Swiss Army knife” for developers and DevOps engineers seeking more efficient, reliable, and maintainable CI/CD workflows.

Links:

[DevoxxFR2025] Go Without Frills: When the Standard Library Suffices

Go, the programming language designed by Google, has gained significant popularity for its simplicity, efficiency, and strong support for concurrent programming. A core philosophy of Go is its minimalist design and emphasis on a robust standard library, encouraging developers to “do a lot with a little.” Nathan Castelein, in his presentation, championed this philosophy, demonstrating how a significant portion of modern applications can be built effectively using only Go’s standard library, without resorting to numerous third-party dependencies. He explored various native packages and compared their functionalities to well-known third-party alternatives, showcasing why and how returning to the fundamentals can lead to simpler, more maintainable, and often equally performant Go applications.

The Go Standard Library: A Powerful Foundation

Nathan highlighted the richness and capability of Go’s standard library. Unlike some languages where the standard library is minimal, Go provides a comprehensive set of packages covering a wide range of functionalities, from networking and HTTP to encoding/decoding, cryptography, and testing. He emphasized that these standard packages are well-designed, thoroughly tested, and actively maintained, making them a reliable choice for building production-ready applications. Focusing on the standard library reduces the number of external dependencies, which simplifies project management, minimizes potential security vulnerabilities introduced by third-party code, and avoids the complexities of managing version conflicts. It also encourages developers to gain a deeper understanding of the language’s built-in capabilities.

Comparing Standard Packages to Third-Party Libraries

The core of Nathan’s talk involved comparing functionalities provided by standard Go packages with those offered by popular third-party libraries. He showcased examples in areas such as:
Web Development: Demonstrating how to build web servers and handle HTTP requests using the net/http package, contrasting it with frameworks like Gin, Echo, or Fiber; for many common web tasks, net/http alone provides sufficient features.
Logging: Illustrating the capabilities of the log/slog package (introduced in Go 1.21) for structured logging, comparing it to libraries like Logrus or Zerolog, which it can replace for many use cases by providing structured, leveled logging natively.
Testing: Exploring the testing package for writing unit and integration tests, which for many common assertion scenarios can be used effectively without assertion libraries like Testify.

The comparison aimed to show that while third-party libraries often provide convenience or specialized features, the standard library has evolved to incorporate many commonly needed functionalities, often in a simpler and more idiomatic Go way.

The Benefits of a Minimalist Approach

Nathan articulated the benefits of embracing a “Go without frills” approach. Using the standard library more extensively leads to:
Reduced Complexity: Fewer dependencies mean a simpler project structure and fewer moving parts to understand and manage.
Improved Maintainability: Code relying on standard libraries is often easier to maintain over time, as the dependencies are stable and well-documented.
Enhanced Performance: Standard library implementations are often highly optimized and integrated with the Go runtime.
Faster Compilation: Fewer dependencies can lead to quicker build times.
Smaller Binaries: Avoiding large third-party libraries can result in smaller executable files.

He acknowledged that there are still valid use cases for third-party libraries, especially for highly specialized tasks or when a library provides significant productivity gains. However, the key takeaway was to evaluate the necessity of adding a dependency and to leverage the powerful standard library whenever it suffices. The talk encouraged developers to revisit the fundamentals and appreciate the elegance and capability of Go’s built-in tools for building robust and efficient applications.

Links:

[DevoxxFR2025] Building an Agentic AI with Structured Outputs, Function Calling, and MCP

The rapid advancements in Artificial Intelligence, particularly in large language models (LLMs), are enabling the creation of more sophisticated and autonomous AI agents – programs capable of understanding instructions, reasoning, and interacting with their environment to achieve goals. Building such agents requires effective ways for the AI model to communicate programmatically and to trigger external actions. Julien Dubois, in his deep-dive session, explored key techniques and a new protocol essential for constructing these agentic AI systems: Structured Outputs, Function Calling, and the Model Context Protocol (MCP). Using practical examples and the latest Java SDK developed by OpenAI, he demonstrated how to implement these features within LangChain4j, showcasing how developers can build AI agents that go beyond simple text generation.

Structured Outputs: Enabling Programmatic Communication

One of the challenges in building AI agents is getting LLMs to produce responses in a structured format that can be easily parsed and used by other parts of the application. Julien explained how Structured Outputs address this by allowing developers to define a specific JSON schema that the AI model must adhere to when generating its response. This ensures that the output is not just free-form text but follows a predictable structure, making it straightforward to map the AI’s response to data objects in programming languages like Java. He demonstrated how to provide the LLM with a JSON schema definition and constrain its output to match that schema, enabling reliable programmatic communication between the AI model and the application logic. This is crucial for scenarios where the AI needs to provide data in a specific format for further processing or action.
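As an illustration (a sketch in the spirit of the talk rather than Julien’s exact code), LangChain4j lets you express a structured output simply as a Java return type: the framework instructs the model to answer as JSON matching that type and deserializes the response. The record, interface, and prompt below are hypothetical, and exact type and builder names vary slightly across LangChain4j versions.

```java
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.UserMessage;

// Hypothetical target structure: LangChain4j asks the model for JSON matching it
// and maps the answer onto this record.
record TicketTriage(String category, int priority, String summary) {}

interface TriageAssistant {
    @UserMessage("Triage the following support ticket: {{it}}")
    TicketTriage triage(String ticketText);
}

public class StructuredOutputSketch {
    public static void main(String[] args) {
        OpenAiChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini")
                .build();

        TriageAssistant assistant = AiServices.create(TriageAssistant.class, model);

        // The returned object is regular Java data, ready for further processing.
        TicketTriage result = assistant.triage("My payslip PDF fails to download since this morning.");
        System.out.println(result);
    }
}
```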

Function Calling: Giving AI the Ability to Act

To be truly agentic, an AI needs the ability to perform actions in the real world or interact with external tools and services. Julien introduced Function Calling as a powerful mechanism that allows developers to define functions in their code (e.g., Java methods) and expose them to the AI model. The LLM can then understand when a user’s request requires calling one of these functions and generate a structured output indicating which function to call and with what arguments. The application then intercepts this output, executes the corresponding function, and can provide the function’s result back to the AI, allowing for a multi-turn interaction where the AI reasons, acts, and incorporates the results into its subsequent responses. Julien demonstrated how to define function “signatures” that the AI can understand and how to handle the function calls triggered by the AI, showcasing scenarios like retrieving information from a database or interacting with an external API based on the user’s natural language request.
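A minimal function-calling sketch with LangChain4j, again hypothetical rather than taken from the session, looks like this: a @Tool-annotated method is described to the model, and the framework executes it when the model decides to call it, then feeds the result back. Builder method names (for example chatLanguageModel versus chatModel) differ between LangChain4j versions.

```java
import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;

// Hypothetical tool: in a real agent this would query a database or an HTTP API.
class AccountTools {

    @Tool("Returns the current balance, in euros, of the given customer account")
    double getBalance(String accountId) {
        return 42.0; // stubbed value for the sketch
    }
}

interface SupportAgent {
    String chat(String userMessage);
}

public class FunctionCallingSketch {
    public static void main(String[] args) {
        OpenAiChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini")
                .build();

        // LangChain4j describes the @Tool methods to the model; when the model requests
        // a call, the framework runs the method and returns the result to the model.
        SupportAgent agent = AiServices.builder(SupportAgent.class)
                .chatLanguageModel(model)
                .tools(new AccountTools())
                .build();

        System.out.println(agent.chat("What is the balance of account FR-123?"));
    }
}
```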

MCP: Standardizing LLM Interaction

While Structured Outputs and Function Calling provide the capabilities for AI communication and action, the Model Context Protocol (MCP) emerges as a new standard to streamline how LLM-based applications connect to external data sources and tools. Julien discussed MCP as a protocol that standardizes the communication layer between AI applications and the servers that expose tools, data, and prompts to them, replacing ad-hoc, model-specific integrations with a common client–server interface. This standardization can facilitate building more portable and interoperable agentic systems, allowing developers to switch between different LLMs or integrate new tools and data sources more easily. While MCP is still evolving, its goal is to provide a common interface for capabilities such as tool invocation and access to external knowledge. Julien illustrated how libraries like LangChain4j are adopting these concepts and integrating with protocols like MCP to simplify the development of sophisticated AI agents. The presentation, rich in code examples using the OpenAI Java SDK, provided developers with the practical knowledge and tools to start building the next generation of agentic AI applications.

Links: