Posts Tagged ‘LangChain4j’
[DevoxxBE2025] Behavioral Software Engineering
Lecturer
Mario Fusco is a Senior Principal Software Engineer at Red Hat, where he leads the Drools project, a business rules management system, and contributes to initiatives like LangChain4j. As a Java Champion and open-source advocate, he co-authored “Java 8 in Action” with Raoul-Gabriel Urma and Alan Mycroft, published by Manning. Mario frequently speaks at conferences on topics ranging from functional programming to domain-specific languages.
Abstract
This examination draws parallels between behavioral economics and software engineering, highlighting cognitive biases that distort rational decision-making in technical contexts. It elucidates key heuristics identified by economists like Daniel Kahneman and Amos Tversky, situating them within engineering practices such as benchmarking, tool selection, and architectural choices. Through illustrative examples of performance evaluation flaws and hype-driven adoptions, the narrative scrutinizes methodological influences on project outcomes. Ramifications for collaborative dynamics, innovation barriers, and professional development are explored, proposing mindfulness as a countermeasure to enhance engineering efficacy.
Foundations of Behavioral Economics and Rationality Myths
Classical economic models presupposed fully efficient markets populated by perfectly logical agents, often termed Homo Economicus, who maximize utility through impeccable reasoning. However, pioneering work by psychologists Daniel Kahneman and Amos Tversky in the late 1970s challenged this paradigm, demonstrating that human judgment is riddled with systematic errors. Their prospect theory, for instance, revealed how individuals weigh losses more heavily than equivalent gains, leading to irrational risk aversion or seeking behaviors. This laid the groundwork for behavioral economics, which integrates psychological insights into economic analysis to explain deviations from predicted rational conduct.
In software engineering, a parallel illusion persists: that of the perfectly rational engineer, a technical counterpart to Homo Economicus, who approaches problems with unerring logic and objectivity. Yet, engineers are susceptible to the same mental shortcuts that Kahneman and Tversky cataloged. These heuristics, evolved for quick survival decisions in ancestral environments, often mislead in modern technical scenarios. For example, the anchoring effect, where initial information disproportionately influences subsequent judgments, can skew performance assessments. An engineer might fixate on a preliminary benchmark result, overlooking confounding variables like hardware variability or suboptimal test conditions.
The availability bias compounds this, prioritizing readily recalled information over comprehensive data. If recent experiences involve a particular technology failing, an engineer might unduly favor alternatives, even if statistical evidence suggests otherwise. Contextualized within the rapid evolution of software tools, these biases amplify during hype cycles, where media amplification creates illusory consensus. Implications extend to resource allocation: projects may pursue fashionable solutions, diverting efforts from proven, albeit less glamorous, approaches.
Heuristics in Performance Evaluation and Tool Adoption
Performance benchmarking exemplifies how cognitive shortcuts undermine objective analysis. The availability heuristic leads engineers to overemphasize memorable failures, such as a vivid recollection of a slow database query, while discounting broader datasets. This can result in premature optimizations or misguided architectural pivots. Similarly, anchoring occurs when initial metrics set unrealistic expectations; a prototype’s speed on high-end hardware might bias perceptions of production viability.
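A simple discipline against anchoring on a first measurement is to discard warm-up iterations and compare aggregates rather than single runs. The sketch below is plain, illustrative Java; a production suite would use a proper harness such as JMH instead of hand-rolled timing.

```java
import java.util.Arrays;

public class BenchmarkSketch {
    // The workload under test: deliberately trivial.
    static long workload() {
        long sum = 0;
        for (int i = 0; i < 100_000; i++) sum += i;
        return sum;
    }

    static long timeOnce() {
        long start = System.nanoTime();
        workload();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        // The tempting "anchor": a single cold measurement.
        long firstRun = timeOnce();

        // Discard warm-up iterations so JIT compilation settles.
        for (int i = 0; i < 1_000; i++) workload();

        // Aggregate many warmed runs instead of trusting one number.
        long[] samples = new long[50];
        for (int i = 0; i < samples.length; i++) samples[i] = timeOnce();
        Arrays.sort(samples);
        long median = samples[samples.length / 2];

        System.out.println("cold first run (ns): " + firstRun);
        System.out.println("median of warmed runs (ns): " + median);
    }
}
```

The cold first run typically looks far slower than the warmed median, which is exactly the number an anchored evaluation would latch onto.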
Tool adoption is equally fraught. The pro-innovation bias fosters an uncritical embrace of novel technologies, often without rigorous evaluation. Engineers might adopt container orchestration systems like Kubernetes for simple applications, incurring unnecessary complexity. The bandwagon effect reinforces this: perceived peer adoption creates social proof, pushing decisions made under uncertainty toward whatever the crowd appears to favor.
The not-invented-here syndrome further distorts choices, prompting reinvention of wheels due to overconfidence in proprietary solutions. Framing effects alter problem-solving: the same requirement, phrased differently—e.g., “build a scalable service” versus “optimize for cost”—yields divergent designs. Examples from practice include teams favoring microservices for “scalability” when monolithic structures suffice, driven by availability of success stories from tech giants.
Analysis reveals these heuristics degrade quality: biased evaluations lead to inefficient code, while hype-driven adoptions inflate maintenance costs. Implications urge structured methodologies, such as A/B testing or peer reviews, to counteract intuitive pitfalls.
Biases in Collaborative and Organizational Contexts
Team interactions amplify individual biases, creating collective delusions. The curse of knowledge hinders communication: experts assume shared understanding, leading to ambiguous requirements or overlooked edge cases. Hyperbolic discounting prioritizes immediate deliverables over long-term maintainability, accruing technical debt.
Organizational politics exacerbate these tendencies: non-technical leaders impose decisions, as in mandating unproven tools based on superficial appeal. The sunk cost fallacy sustains failing projects, ignoring opportunity costs. The Dunning-Kruger effect, where incompetence breeds overconfidence, manifests in unqualified critiques of sound engineering.
Confirmation bias selectively affirms preconceptions, dismissing contradictory evidence. In code reviews, this might involve defending flawed implementations by highlighting partial successes. Contextualized within agile methodologies, these biases undermine iterative improvements, fostering resistance to refactoring.
The implications for team dynamics are stark: eroded trust hampers collaboration and reduces innovation. Analysis suggests diverse teams dilute biases, as varied perspectives challenge assumptions.
Strategies to Mitigate Biases in Engineering Practices
Mitigation begins with awareness: educating on Kahneman’s System 1 (intuitive) versus System 2 (deliberative) thinking encourages reflective pauses. Structured decision frameworks, like weighted scoring for tool selection, counteract anchoring and availability.
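A weighted scoring matrix makes the System 2 step explicit: criteria and weights are agreed before candidates are evaluated, which blunts anchoring on whichever option was seen first. The sketch below shows the arithmetic; the criteria, weights, and ratings are invented for illustration.

```java
import java.util.Map;

public class ToolScoring {
    // Weights agreed up front, before looking at candidates (they sum to 1.0).
    static final Map<String, Double> WEIGHTS = Map.of(
            "maturity", 0.4, "team familiarity", 0.35, "performance", 0.25);

    // Weighted sum of a candidate's ratings under the agreed criteria.
    static double score(Map<String, Double> ratings) {
        return WEIGHTS.entrySet().stream()
                .mapToDouble(e -> e.getValue() * ratings.get(e.getKey()))
                .sum();
    }

    public static void main(String[] args) {
        // Ratings on a 0..10 scale; values are illustrative.
        Map<String, Double> provenTool = Map.of(
                "maturity", 9.0, "team familiarity", 8.0, "performance", 6.0);
        Map<String, Double> hypedTool = Map.of(
                "maturity", 4.0, "team familiarity", 3.0, "performance", 9.0);

        System.out.printf("proven: %.2f, hyped: %.2f%n",
                score(provenTool), score(hypedTool));
    }
}
```

With these numbers the proven tool scores 7.90 against 4.90 for the hyped one; the point is not the particular values but that the weighting was fixed before the comparison, so a flashy benchmark cannot retroactively dominate the decision.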
For performance, blind testing—evaluating without preconceptions—promotes objectivity. Debiasing techniques, such as devil’s advocacy, challenge bandwagon tendencies. Organizational interventions include bias training and diverse hiring to foster balanced views.
In practice, adopting evidence-based approaches, such as rigorous benchmarking protocols, enhances outcomes. The implication is that mindful engineering boosts efficiency and reduces rework. Future research could quantify bias impacts via metrics like defect rates.
In essence, recognizing human frailties transforms engineering from intuitive art to disciplined science, yielding superior software.
Links:
- Lecture video: https://www.youtube.com/watch?v=Aa2Zn8WFJrI
- Mario Fusco on LinkedIn: https://www.linkedin.com/in/mariofusco/
- Mario Fusco on Twitter/X: https://twitter.com/mariofusco
- Red Hat website: https://www.redhat.com/
[KotlinConf2025] LangChain4j with Quarkus
In a collaboration between Red Hat and Twilio, Max Rydahl Andersen and Konstantin Pavlov presented an illuminating session on the powerful combination of LangChain4j and Quarkus for building AI-driven applications with Kotlin. The talk addressed the burgeoning demand for integrating artificial intelligence into modern software and the common difficulties developers encounter, such as complex setups and performance bottlenecks. By merging Kotlin’s expressive power, Quarkus’s rapid runtime, and LangChain4j’s AI capabilities, the presenters demonstrated a streamlined and effective solution for creating cutting-edge applications.
A Synergistic Approach to AI Integration
The core of the session focused on the seamless synergy between the three technologies. Andersen and Pavlov detailed how Kotlin’s idiomatic features simplify the development of AI workflows. They presented a compelling case for using LangChain4j, a versatile framework for building language model-based applications, within the Quarkus ecosystem. Quarkus, with its fast startup times and low memory footprint, proved to be an ideal runtime for these resource-intensive applications. The presenters walked through practical code samples, illustrating how to set up the environment, manage dependencies, and orchestrate AI tools efficiently. They emphasized that this integrated approach significantly reduces the friction typically associated with AI development, allowing engineers to focus on business logic rather than infrastructural challenges.
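To give a sense of how little wiring the stack needs, a Quarkus project typically declares the Quarkiverse LangChain4j extension (for example, the `io.quarkiverse.langchain4j:quarkus-langchain4j-openai` artifact) in its build file and supplies a model API key via configuration, roughly as below; exact property names should be checked against the extension's current documentation.

```properties
# application.properties (illustrative; see the quarkus-langchain4j docs)
quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}
```

From there, an annotated interface is enough for Quarkus to generate and inject a working AI service, which is the friction reduction the presenters emphasized.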
Enhancing Performance and Productivity
The talk also addressed the critical aspect of performance. The presenters demonstrated how the combination of LangChain4j and Quarkus enables the creation of high-performing, AI-powered applications. They discussed the importance of leveraging Quarkus’s native compilation capabilities, which can lead to dramatic improvements in startup time and resource utilization. Additionally, they touched on the ongoing work to optimize the Kotlin compiler’s interaction with the Quarkus build system. Andersen noted that while the current process is efficient, there are continuous efforts to further reduce build times and enhance developer productivity. This commitment to performance underscores the value of this tech stack for developers who need to build scalable and responsive AI solutions.
The Path Forward
Looking ahead, Andersen and Pavlov outlined the future roadmap for LangChain4j and its integration with Quarkus. They highlighted upcoming features, such as the native asynchronous API, which will provide enhanced support for Kotlin coroutines. While acknowledging the importance of coroutines for certain use cases, they also reminded the audience that traditional blocking and virtual threads remain perfectly viable and often preferred for a majority of applications. They also extended an open invitation to the community to contribute to the project, emphasizing that the development of these tools is a collaborative effort. The session concluded with a powerful message: this technology stack is not just about building applications; it’s about empowering developers to confidently tackle the next generation of AI-driven projects.
[DevoxxFR2025] Building an Agentic AI with Structured Outputs, Function Calling, and MCP
The rapid advancements in Artificial Intelligence, particularly in large language models (LLMs), are enabling the creation of more sophisticated and autonomous AI agents – programs capable of understanding instructions, reasoning, and interacting with their environment to achieve goals. Building such agents requires effective ways for the AI model to communicate programmatically and to trigger external actions. Julien Dubois, in his deep-dive session, explored key techniques and a new protocol essential for constructing these agentic AI systems: Structured Outputs, Function Calling, and the Model Context Protocol (MCP). Using practical examples and the latest Java SDK developed by OpenAI, he demonstrated how to implement these features within LangChain4j, showcasing how developers can build AI agents that go beyond simple text generation.
Structured Outputs: Enabling Programmatic Communication
One of the challenges in building AI agents is getting LLMs to produce responses in a structured format that can be easily parsed and used by other parts of the application. Julien explained how Structured Outputs address this by allowing developers to define a specific JSON schema that the AI model must adhere to when generating its response. This ensures that the output is not just free-form text but follows a predictable structure, making it straightforward to map the AI’s response to data objects in programming languages like Java. He demonstrated how to provide the LLM with a JSON schema definition and constrain its output to match that schema, enabling reliable programmatic communication between the AI model and the application logic. This is crucial for scenarios where the AI needs to provide data in a specific format for further processing or action.
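In outline, the flow looks like the sketch below: the application ships a JSON schema alongside the prompt, the model is constrained to emit conforming JSON, and the reply maps onto a plain Java object. The model call is stubbed with a canned response here, and the hand-rolled parsing stands in for what LangChain4j or the OpenAI Java SDK would normally do for you.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StructuredOutputSketch {
    // The target shape for the model's reply.
    record Person(String name, int age) {}

    // Schema sent alongside the prompt; the model must conform to it.
    static final String SCHEMA = """
            {"type":"object",
             "properties":{"name":{"type":"string"},"age":{"type":"integer"}},
             "required":["name","age"]}""";

    // Stub for the LLM call: a real client would send prompt + SCHEMA
    // and the model would return schema-conforming JSON.
    static String callModel(String prompt) {
        return "{\"name\":\"Ada Lovelace\",\"age\":36}";
    }

    // Minimal extraction; a real application would use a JSON library.
    static Person parse(String json) {
        Matcher name = Pattern.compile("\"name\"\\s*:\\s*\"([^\"]+)\"").matcher(json);
        Matcher age = Pattern.compile("\"age\"\\s*:\\s*(\\d+)").matcher(json);
        if (!name.find() || !age.find()) throw new IllegalArgumentException("schema violation");
        return new Person(name.group(1), Integer.parseInt(age.group(1)));
    }

    public static void main(String[] args) {
        Person p = parse(callModel("Who wrote the first published algorithm?"));
        System.out.println(p); // Person[name=Ada Lovelace, age=36]
    }
}
```

Because the schema forbids free-form prose, the mapping from model output to the `Person` record is mechanical, which is precisely what makes the response usable by downstream application logic.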
Function Calling: Giving AI the Ability to Act
To be truly agentic, an AI needs the ability to perform actions in the real world or interact with external tools and services. Julien introduced Function Calling as a powerful mechanism that allows developers to define functions in their code (e.g., Java methods) and expose them to the AI model. The LLM can then understand when a user’s request requires calling one of these functions and generate a structured output indicating which function to call and with what arguments. The application then intercepts this output, executes the corresponding function, and can provide the function’s result back to the AI, allowing for a multi-turn interaction where the AI reasons, acts, and incorporates the results into its subsequent responses. Julien demonstrated how to define function “signatures” that the AI can understand and how to handle the function calls triggered by the AI, showcasing scenarios like retrieving information from a database or interacting with an external API based on the user’s natural language request.
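The round trip can be sketched as follows, with the model stubbed: the model replies with a structured call request instead of prose, the application dispatches it to a registered Java method, and the result would normally be fed back to the model for its final answer. The function name and dispatch table are invented for illustration; LangChain4j exposes the same idea declaratively through its @Tool annotation.

```java
import java.util.Map;
import java.util.function.Function;

public class FunctionCallingSketch {
    // A callable exposed to the model; its "signature" (name, parameter)
    // is what gets advertised in the request to the LLM.
    static String getWeather(String city) {
        return "18°C and sunny in " + city; // canned data for the sketch
    }

    // Registry mapping advertised function names to Java code.
    static final Map<String, Function<String, String>> TOOLS =
            Map.of("getWeather", FunctionCallingSketch::getWeather);

    // Stub for the LLM: a real model would decide the user's request
    // needs a tool and emit a structured call like this.
    static String[] callModel(String userMessage) {
        return new String[]{"getWeather", "Paris"}; // {function, argument}
    }

    public static void main(String[] args) {
        String[] call = callModel("What's the weather in Paris?");
        // Intercept the structured output and execute the requested function.
        String result = TOOLS.get(call[0]).apply(call[1]);
        // In a real loop, result is sent back to the model, which then
        // composes its natural-language answer from it.
        System.out.println(result);
    }
}
```

The key property is that the model never executes anything itself: it only names a function and supplies arguments, and the application keeps full control over what actually runs.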
MCP: Standardizing LLM Interaction
While Structured Outputs and Function Calling provide the capabilities for AI communication and action, the Model Context Protocol (MCP) emerges as a new standard to streamline how LLMs interact with various data sources and tools. Julien discussed MCP as a protocol that standardizes the communication layer between AI applications and the servers that expose tools, data, and prompts to the underlying model. This standardization can facilitate building more portable and interoperable agentic systems, allowing developers to switch between different LLMs or integrate new tools and data sources more easily. While the details of MCP are still evolving, its goal is to provide a common interface for tasks like tool invocation, access to external knowledge, and management of conversational context. Julien illustrated how libraries like LangChain4j are adopting these concepts and integrating with MCP to simplify the development of sophisticated AI agents. The presentation, rich in code examples using the OpenAI Java SDK, provided developers with the practical knowledge and tools to start building the next generation of agentic AI applications.
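Concretely, MCP frames these interactions as JSON-RPC 2.0 messages exchanged between a client and a tool-providing server; tool invocation uses the protocol's "tools/call" method. The sketch below builds such a request by hand to show the wire shape; the tool name and arguments are invented, and a real client (such as LangChain4j's MCP integration) handles this framing for you.

```java
public class McpMessageSketch {
    // Build a JSON-RPC 2.0 request invoking a tool on an MCP server.
    // MCP names this method "tools/call"; the server runs the tool
    // and returns its result in the matching JSON-RPC response.
    static String toolsCall(int id, String toolName, String argsJson) {
        return """
                {"jsonrpc":"2.0","id":%d,"method":"tools/call",
                 "params":{"name":"%s","arguments":%s}}"""
                .formatted(id, toolName, argsJson);
    }

    public static void main(String[] args) {
        // Hypothetical tool exposed by some MCP server.
        String request = toolsCall(1, "searchDocs", "{\"query\":\"structured outputs\"}");
        System.out.println(request);
    }
}
```

Because every MCP server speaks this same envelope, an agent can discover and call tools from any compliant server without bespoke glue code per integration.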
Links:
- Julien Dubois: https://www.linkedin.com/in/juliendubois/
- Microsoft: https://www.microsoft.com/
- LangChain4j on GitHub: https://github.com/langchain4j/langchain4j
- OpenAI: https://openai.com/
- Devoxx France LinkedIn: https://www.linkedin.com/company/devoxx-france/
- Devoxx France Bluesky: https://bsky.app/profile/devoxx.fr
- Devoxx France Website: https://www.devoxx.fr/