Posts Tagged ‘DevoxxGR2025’
[DevoxxGR2025] Angular Micro-Frontends
Dimitris Kaklamanis, a lead software engineer at CodeHub, delivered an 11-minute talk at Devoxx Greece 2025, exploring how Angular micro-frontends enable scalable web development.
Micro-Frontends Unveiled
Kaklamanis opened with a relatable scenario: a growing front-end monolith turning into a dependency nightmare. Micro-frontends, inspired by microservices, break the UI into smaller, independent pieces, each owned by a team. This enables parallel development, reduces risks, and enhances scalability. He outlined four principles: decentralization (team-owned UI parts), technology agnosticism (mixing frameworks like Angular, React, or Vue), resilience (isolated bugs don’t crash the app), and scalability (independent team scaling). A diagram showed teams building features in different frameworks, integrated at runtime via a shell app.
Pros and Cons
Micro-frontends offer scalability, tech flexibility, faster parallel development, resilience, and easier maintenance due to focused codebases. However, challenges include increased complexity (more coordination), performance overhead (multiple apps loading), communication issues (state sharing), and CI/CD complexity (separate pipelines). Kaklamanis highlighted Angular’s strengths: its component-based structure aligns with modularity, CLI tools manage multiple projects, and features like lazy loading and Webpack 5 module federation simplify implementation. Tools like Nx streamline monorepo management, making Angular a robust choice.
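To make the runtime integration concrete, here is a minimal sketch of how a shell application can lazy-load a remote micro-frontend via Webpack 5 Module Federation. The remote name `products` and the exposed `ProductsModule` are hypothetical; the talk did not show this exact code.

```typescript
// app.routes.ts in the shell application (all names hypothetical)
import { Routes } from '@angular/router';

export const routes: Routes = [
  {
    path: 'products',
    // 'products/Module' is a federated alias: the shell's Webpack
    // ModuleFederationPlugin config maps it to the products team's
    // independently deployed remoteEntry.js, resolved at runtime.
    // (A "declare module 'products/Module'" shim keeps the TypeScript
    // compiler happy, since the module only exists at runtime.)
    loadChildren: () =>
      import('products/Module').then((m) => m.ProductsModule),
  },
];
```

Because the shell only knows the remote by its alias and URL, the products team can deploy on its own cadence, which is exactly the independence the four principles describe.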
Implementation in Action
Kaklamanis demonstrated a live Angular store app with independent modules (orders, products, inventory). A change in the product component didn’t affect others, showcasing isolation. He recommended clear module ownership, careful intermodule communication, performance monitoring, and minimal shared libraries. For large, multi-team projects, he urged prototyping micro-frontends, starting small and iterating for scalability.
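On careful inter-module communication, one common approach (a sketch under assumptions, not the demo's code) is a deliberately tiny shared event bus, which also keeps the shared-library surface minimal as Kaklamanis recommends:

```typescript
// shared/event-bus.ts: a deliberately small surface shared between modules
import { Subject, Observable, filter, map } from 'rxjs';

export interface AppEvent {
  type: string;       // e.g. 'inventory:stock-changed' (hypothetical)
  payload?: unknown;
}

export class EventBus {
  private readonly events$ = new Subject<AppEvent>();

  emit(event: AppEvent): void {
    this.events$.next(event);
  }

  on<T>(type: string): Observable<T> {
    return this.events$.pipe(
      filter((e) => e.type === type),
      map((e) => e.payload as T),
    );
  }
}

// The shell creates one instance and hands it to each module, so modules
// stay decoupled from each other and only depend on this small contract.
export const eventBus = new EventBus();
```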
Links
[DevoxxGR2025] Simplifying LLM Integration: A Blueprint for Effective AI Systems
Efstratios Marinos captivated attendees at Devoxx Greece 2025 with a masterclass on streamlining large language model (LLM) integrations. By focusing on practical, modular patterns, Efstratios demonstrated how to construct robust, scalable AI systems that prioritize simplicity without sacrificing functionality, offering actionable strategies for developers.
Exploring the Complexity Continuum
Efstratios introduced the concept of a complexity continuum for LLM integrations, spanning from straightforward single calls to sophisticated agentic frameworks. At its simplest, a system comprises an LLM, a retrieval mechanism, and tool capabilities, delivering maintainability and ease of updates with minimal overhead. More intricate setups incorporate routers, APIs, and vector stores, enhancing functionality but complicating debugging. Efstratios emphasized that simplicity is a strategic choice, enabling rapid adaptation to evolving AI technologies. He showcased a concise Python implementation, where a single function manages retrieval and response generation in a handful of lines, contrasting this with a multi-step retrieval-augmented generation (RAG) workflow that involves encoding, indexing, and embedding, adding layers of complexity that demand careful justification.
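The talk's example was Python; a comparable TypeScript sketch of the simple end of the continuum might look like this, assuming an OpenAI-style chat endpoint and stubbing the retrieval helper (the endpoint, model name, and `searchDocs` are all placeholders):

```typescript
// Stubbed retrieval helper so the sketch is self-contained.
async function searchDocs(query: string): Promise<string> {
  return `...documents matching "${query}"...`;
}

// The simple end of the continuum: one function that retrieves context
// and generates an answer, in a handful of lines.
async function answer(question: string): Promise<string> {
  const context = await searchDocs(question);
  const res = await fetch('https://api.example.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.API_KEY}`,
    },
    body: JSON.stringify({
      model: 'some-model',
      messages: [
        { role: 'system', content: 'Answer using only the provided context.' },
        { role: 'user', content: `Context:\n${context}\n\nQuestion: ${question}` },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Everything a full RAG pipeline adds beyond this (encoding, indexing, embedding) should, in Efstratios's framing, earn its place with a measurable gain.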
Crafting Robust Interfaces
Central to Efstratios’s philosophy is the design of clean interfaces for LLMs, retrieval systems, tools, and memory components. He compared prompt crafting to API design, advocating for structured formats that clearly separate instructions, context, and queries. Well-documented tools, complete with detailed descriptions and practical examples, empower LLMs to perform effectively, while vague documentation leads to errors. Efstratios underscored the need for resilient error handling, such as fallback strategies for failed retrievals or tool invocations, to ensure system reliability. For example, a system might respond to a failed search by suggesting alternatives or retrying with adjusted parameters, improving usability and simplifying troubleshooting in production environments.
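A minimal sketch of both ideas, with invented section tags and a hypothetical `search` client:

```typescript
// Prompt-as-API: instructions, context, and query kept clearly separated.
function buildPrompt(instructions: string, context: string, query: string): string {
  return [
    '<instructions>', instructions, '</instructions>',
    '<context>', context, '</context>',
    '<query>', query, '</query>',
  ].join('\n');
}

// Hypothetical search client, stubbed so the sketch is self-contained.
async function search(query: string, opts: { maxResults: number }): Promise<string> {
  return `...${opts.maxResults} results for "${query}"...`;
}

// Resilient retrieval: retry with relaxed parameters, then degrade
// gracefully instead of failing the whole request.
async function retrieveWithFallback(query: string): Promise<string> {
  try {
    return await search(query, { maxResults: 5 });
  } catch {
    try {
      // broaden the query and widen the result window on the second attempt
      return await search(query.split(' ').slice(0, 3).join(' '), { maxResults: 10 });
    } catch {
      return 'Retrieval failed; answer from general knowledge and say so.';
    }
  }
}
```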
Enhancing Capabilities with Workflow Patterns
Efstratios explored three foundational workflow patterns—prompt chaining, routing, and parallelization—to optimize performance while managing complexity. Prompt chaining divides complex tasks into sequential steps, such as outlining, drafting, and refining content, enhancing clarity at the expense of increased latency. Routing employs an LLM to categorize inputs and direct them to specialized handlers, like a customer support bot distinguishing technical from financial queries, improving efficiency through focused processing. Parallelization, encompassing sectioning and voting, distributes tasks across multiple LLM instances, such as analyzing document segments concurrently, though it incurs higher computational costs. These patterns provide incremental enhancements, ideal for tasks requiring moderate sophistication.
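Hedged sketches of the three patterns, built on a stubbed `callLLM` helper (none of this is the speaker's code):

```typescript
// Stubbed single-call helper standing in for a real LLM client.
async function callLLM(prompt: string): Promise<string> {
  return `...model output for: ${prompt.slice(0, 40)}...`;
}

// Prompt chaining: outline, then draft, then refine, each its own call.
async function writeArticle(topic: string): Promise<string> {
  const outline = await callLLM(`Produce a bullet outline for: ${topic}`);
  const draft = await callLLM(`Write a draft following this outline:\n${outline}`);
  return callLLM(`Tighten and polish this draft:\n${draft}`);
}

// Routing: classify first, then hand off to a specialized handler.
async function route(query: string): Promise<string> {
  const label = await callLLM(
    `Classify as "technical" or "billing" (answer with one word): ${query}`,
  );
  return label.includes('technical')
    ? callLLM(`You are a support engineer. Help with: ${query}`)
    : callLLM(`You are a billing specialist. Help with: ${query}`);
}

// Parallelization (sectioning): analyze document segments concurrently.
async function summarizeSections(sections: string[]): Promise<string[]> {
  return Promise.all(sections.map((s) => callLLM(`Summarize:\n${s}`)));
}
```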
Advanced Patterns and Decision-Making Principles
For more demanding scenarios, Efstratios presented two advanced patterns: orchestrator-workers and evaluator-optimizer. The orchestrator-workers pattern dynamically breaks down tasks, with a central LLM coordinating specialized workers, perfect for complex coding projects or multi-faceted content creation. The evaluator-optimizer pattern establishes a feedback loop, where a generator LLM produces content and an evaluator refines it iteratively, mirroring human iterative processes. Efstratios outlined six decision-making principles—use case alignment, development effort, maintainability, performance granularity, latency, and cost—to guide pattern selection. Simple solutions suffice for tasks like summarization, while multi-step workflows excel in knowledge-intensive applications. He encouraged starting with minimal solutions, establishing performance baselines, identifying specific limitations, and adding complexity only when validated by measurable gains.
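A sketch of the evaluator-optimizer loop, again over a stubbed `callLLM` and with an invented APPROVED convention:

```typescript
// Stubbed single-call helper standing in for a real LLM client.
async function callLLM(prompt: string): Promise<string> {
  return 'APPROVED'; // stub: a real evaluator would critique the input
}

// Evaluator-optimizer: a generator produces, an evaluator critiques, and
// the loop stops when the evaluator approves or the round budget runs out.
async function generateWithFeedback(task: string, maxRounds = 3): Promise<string> {
  let output = await callLLM(`Complete this task:\n${task}`);
  for (let round = 0; round < maxRounds; round++) {
    const critique = await callLLM(
      `Evaluate this result for the task "${task}". ` +
        `Reply APPROVED if acceptable, otherwise list concrete fixes:\n${output}`,
    );
    if (critique.includes('APPROVED')) break;
    output = await callLLM(
      `Revise the result below, applying these fixes:\n${critique}\n\nResult:\n${output}`,
    );
  }
  return output;
}
```

The round budget is one way to honor the cost and latency principles: each extra iteration must be worth another two model calls.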
Links
[DevoxxGR2025] Orchestration vs. Choreography: Balancing Control and Flexibility in Microservices
At Devoxx Greece 2025, Laila Bougria, representing Particular Software, delivered an insightful presentation on the nuances of orchestration and choreography in microservice architectures. Leveraging her extensive banking industry experience, Laila provided a practical framework to navigate the trade-offs of these coordination strategies, using real-world scenarios to guide developers toward informed system design choices.
The Essence of Microservice Interactions
Laila opened with a relatable story about navigating the mortgage process, underscoring the complexity of interservice communication in microservices. She explained that while individual services are streamlined, the real challenge lies in orchestrating their interactions to deliver business value. Orchestration employs a centralized component to direct workflows, maintaining state and issuing commands, much like a conductor guiding a symphony. Choreography, by contrast, embraces an event-driven model where services operate autonomously, reacting to events with distributed state management. Through a loan broker example, Laila illustrated how orchestration simplifies processes like credit checks and offer ranking by centralizing control, yet risks creating dependencies that can halt workflows if services fail. Choreography, facilitated by an event bus, enhances autonomy but complicates tracking the overall process, potentially obscuring system behavior.
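The contrast can be sketched in a few lines of TypeScript; the domain types, toy in-memory bus, and stubbed services below are illustrative, not Laila's loan-broker demo:

```typescript
// Illustrative domain types and stubbed services (not the talk's demo code).
interface Application { applicantId: string; amount: number; }
interface Offer { bank: string; rate: number; }

const creditService = {
  async check(_applicantId: string): Promise<number> { return 720; }, // stub
};
const banks = [
  { name: 'A', async quote(_a: Application, _s: number): Promise<Offer> { return { bank: 'A', rate: 3.2 }; } },
  { name: 'B', async quote(_a: Application, _s: number): Promise<Offer> { return { bank: 'B', rate: 2.9 }; } },
];

// Orchestration: one component holds workflow state and issues commands.
async function processLoan(app: Application): Promise<Offer> {
  const score = await creditService.check(app.applicantId);              // command
  const offers = await Promise.all(banks.map((b) => b.quote(app, score)));
  return offers.sort((a, b) => a.rate - b.rate)[0];                      // rank centrally
}

// Choreography: services react to events; no single place sees the flow.
type Handler = (event: Record<string, unknown>) => Promise<void>;
class Bus {
  private handlers = new Map<string, Handler[]>();
  subscribe(type: string, h: Handler): void {
    this.handlers.set(type, [...(this.handlers.get(type) ?? []), h]);
  }
  async publish(type: string, event: Record<string, unknown>): Promise<void> {
    for (const h of this.handlers.get(type) ?? []) await h(event);
  }
}

const bus = new Bus();
bus.subscribe('LoanApplicationReceived', async (e) => {
  const score = await creditService.check(e.applicantId as string);
  await bus.publish('CreditChecked', { ...e, score });
});
bus.subscribe('CreditChecked', async (e) => {
  // a separate offers service would gather quotes and publish 'OffersReady'
  await bus.publish('OffersReady', e);
});
```

Note how the orchestrated version reads top to bottom while the choreographed one is scattered across subscriptions, which is precisely the visibility trade-off Laila described.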
Navigating Coupling and Resilience
Delving into the mechanics, Laila highlighted the distinct coupling profiles of each approach. Orchestration often leads to efferent coupling, with the central component relying on multiple downstream services, necessitating resilience mechanisms like retries or circuit breakers to mitigate failures. For instance, if a credit scoring service is unavailable, the orchestrator must handle retries or fallback strategies. Choreography, however, increases afferent coupling through event subscriptions, which can introduce bidirectional dependencies when addressing business failures, such as reversing a loan if a property deal collapses. Laila stressed the importance of understanding coupling types—temporal, contract, and control—to make strategic decisions. Asynchronous communication in orchestration reduces temporal coupling, while choreography’s event-driven nature supports scalability but challenges visibility, as seen in her banking workflow example where emergent behavior obscured process clarity.
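A minimal retry-with-fallback wrapper of the kind an orchestrator needs around each downstream call (attempt counts and backoff values are illustrative):

```typescript
// Retry with exponential backoff, then a fallback (e.g. a cached credit
// score or a "pending" marker) so the workflow is not halted outright.
async function withRetry<T>(
  call: () => Promise<T>,
  fallback: () => T,
  attempts = 3,
  delayMs = 500,
): Promise<T> {
  for (let i = 0; i < attempts; i++) {
    try {
      return await call();
    } catch {
      await new Promise((r) => setTimeout(r, delayMs * 2 ** i));
    }
  }
  return fallback();
}
```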
Addressing Business Failures and Workflow Evolution
Laila emphasized the critical role of managing business failures, or compensating flows, where actions must be undone due to unforeseen events, like a failed property transaction requiring the reversal of interest provisions or direct debits. Orchestration excels here, leveraging existing service connections to streamline reversals. In contrast, choreography demands additional event subscriptions, risking complex bidirectional coupling, as demonstrated when adding a background check to a loan process introduced order dependencies. Laila introduced the concept of “passive-aggressive publishers,” where services implicitly rely on others to act on events, akin to expecting a partner to address a chaotic kitchen without direct communication. She advocated for explicit command-driven interactions to clarify dependencies, ensuring system robustness. Additionally, Laila addressed workflow evolution, noting that orchestration simplifies modifications by centralizing changes, while choreography requires careful management to avoid disrupting event-driven flows.
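One way an orchestrator can express a compensating flow, sketched under assumptions (the concrete steps a real system would register, such as provisioning interest and reversing that provision, are only hinted at in comments):

```typescript
// Each completed step registers how to undo itself; on failure the
// orchestrator unwinds in reverse order.
interface Step {
  run: () => Promise<void>;        // e.g. provision interest, set up direct debit
  compensate: () => Promise<void>; // e.g. reverse the provision, cancel the debit
}

async function runWithCompensation(steps: Step[]): Promise<void> {
  const completed: Step[] = [];
  try {
    for (const step of steps) {
      await step.run();
      completed.push(step);
    }
  } catch (err) {
    // e.g. the property deal collapsed mid-flow: undo newest first
    for (const step of completed.reverse()) {
      await step.compensate();
    }
    throw err;
  }
}
```

In a choreographed system the same unwinding has to be stitched together from extra event subscriptions, which is where the bidirectional coupling Laila warned about creeps in.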
A Strategic Decision Framework
Concluding her talk, Laila offered a decision-making framework anchored in five questions: the nature of communication (synchronous or asynchronous), the complexity of prerequisites, the extent of compensating flows, the likelihood of domain changes, and the need for centralized responsibility. Orchestration suits critical workflows with frequent changes or complex dependencies, such as banking processes requiring clear state visibility. Choreography is ideal for stable domains with minimal prerequisites, like retail order systems. By segmenting workflows into sub-processes, developers can apply the appropriate pattern strategically, blending both approaches for optimal outcomes. Laila’s banking-inspired insights provide a practical guide for architects to craft systems that balance control, flexibility, and maintainability.
Links
[DevoxxGR2025] Engineering for Social Impact
Giorgos Anagnostaki and Konstantinos Petropoulos, from IKnowHealth, delivered a concise 15-minute talk at Devoxx Greece 2025, portraying software engineering as a creative process with profound social impact, particularly in healthcare.
Engineering as Art
Anagnostaki likened software engineering to creating art, blending design and problem-solving to build functional systems from scratch. In healthcare, this creativity carries immense responsibility, as their work at IKnowHealth supports radiology departments. Their platform, built for Greece’s national imaging repository, enables precise diagnoses, like detecting cancer or brain tumors, directly impacting patients’ lives. This human connection fuels their motivation, transforming code into life-saving tools.
The Radiology Platform
Petropoulos detailed their cloud-based platform on Azure, connecting hospitals and citizens. Hospitals send DICOM imaging files and HL7 diagnosis data via VPN, while citizens access their medical history through a portal, eliminating CDs and printed reports. The system supports remote diagnosis and collaboration, allowing radiologists to share anonymized cases for second opinions, enhancing accuracy and speeding up critical decisions, especially in understaffed regions.
Technical Challenges
The platform handles 2.5 petabytes of imaging data annually from over 100 hospitals, requiring robust storage and fast retrieval. High throughput (up to 600 requests per minute per hospital) demands scalable infrastructure. Front-end challenges include rendering thousands of DICOM images without overloading browsers, while GDPR-compliant security ensures data privacy. Integration with national health systems added complexity, but the platform’s impact—illustrated by Anagnostaki’s personal story of his father’s cancer detection—underscores its value.
Links
[DevoxxGR2025] AI Integration with MCPs
Kent C. Dodds, in his dynamic 22-minute talk at Devoxx Greece 2025, explored how Model Context Protocols (MCPs) enable AI assistants to interact with applications, envisioning a future where users have their own “Jarvis” from Iron Man.
The Vision of Jarvis
Dodds opened with a clip from Iron Man, showcasing Jarvis performing tasks like compiling databases, generating UI, and creating flight plans. He posed a question: why don’t we have such assistants today? Current technologies, like Google Assistant or Siri, fall short due to limited integrations. Dodds argued that MCPs, a standard protocol supported by Anthropic, OpenAI, and Google, bridge this gap by enabling AI to communicate with diverse services, from Slack to local government platforms, transforming user interaction.
MCP Architecture
MCPs sit between the host application (e.g., ChatGPT, Claude) and service tools, allowing seamless communication. Dodds explained that LLMs generate tokens but rely on host applications to execute actions. MCP servers, managed by service providers, connect to tools, enabling users to install them like apps. In a demo, Dodds showed an MCP server for his website, allowing an AI to search blog posts and subscribe users to newsletters, though client-side issues hindered reliability, highlighting the need for improved user experiences.
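A sketch of what such a server can look like with the TypeScript MCP SDK, following its documented quickstart shape; the tool body here is a stub, not the actual implementation behind Dodds's site:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "blog-mcp", version: "1.0.0" });

// A search tool the host application's LLM can invoke; the host decides
// when to call it based on the tool name and description.
server.tool(
  "search_posts",
  { query: z.string().describe("Full-text search over blog posts") },
  async ({ query }) => ({
    content: [{ type: "text" as const, text: `Posts matching "${query}": ...` }],
  }),
);

// stdio is the simplest transport for locally installed servers.
await server.connect(new StdioServerTransport());
```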
Challenges and Future
The primary challenge is the poor client experience for installing MCP servers, currently requiring manual JSON configuration. Dodds predicted a marketplace or auto-discovery system to streamline this, likening MCPs to the internet’s impact. Security concerns, similar to early browsers, need addressing, but Dodds sees AI hosts as the new browsers, promising a future where personalized AI assistants handle complex tasks effortlessly.
Links
[DevoxxGR2025] Optimized Kubernetes Scaling with Karpenter
Alex König, an AWS expert, delivered a 39-minute talk at Devoxx Greece 2025, exploring how Karpenter enhances Kubernetes cluster autoscaling for speed, cost-efficiency, and availability.
Karpenter’s Dynamic Autoscaling
König introduced Karpenter as an open-source, Kubernetes-native autoscaling solution, contrasting it with the traditional Cluster Autoscaler. Unlike the latter, which relies on uniform node groups (e.g., nodes with four CPUs and 16GB RAM), Karpenter uses the EC2 Fleet API to dynamically provision nodes tailored to workload needs. For instance, if a pod requires one CPU, Karpenter allocates a node with minimal excess capacity, avoiding resource waste. This right-sizing, combined with groupless scaling, enables faster and more cost-effective scaling, especially in dynamic environments.
Ensuring Availability with Constraints
König addressed availability challenges reported by users, emphasizing Kubernetes-native scheduling constraints to mitigate disruptions. Topology spread constraints distribute pods across availability zones, reducing the risk of downtime if a node fails. Pod disruption budgets, affinity/anti-affinity rules, and priority classes further ensure critical workloads are scheduled appropriately. For stateful workloads using EBS, König recommended setting the StorageClass volume binding mode to `WaitForFirstConsumer`, which delays volume creation until the pod is scheduled so the volume lands in the pod’s availability zone, preventing crashes and ensuring reliability.
Integrating with KEDA for Application Scaling
For advanced scaling, König highlighted combining Karpenter with KEDA for event-driven, application-specific scaling. KEDA scales pods based on metrics like Kafka topic sizes or SQS queues, beyond CPU/memory. Karpenter then provisions nodes for pending pods, enabling seamless scaling for workloads like flash sales. König outlined a four-step migration from Cluster Autoscaler to Karpenter, emphasizing its simplicity and open-source documentation.
Links
[DevoxxGR2025] Understanding Flow in Software Development
James Lewis, a ThoughtWorks consultant, delivered a 41-minute talk at Devoxx Greece 2025, exploring how work flows through software development, drawing on information theory and complexity science.
The Nature of Work as Information
Lewis framed software development as transforming “stuff” into more valuable outputs, much like the enterprise paper workflows that predate computers. Because the work is information, it is invisible as it flows through value streams, from ideas to production code; that invisibility lets backlogs and undeployed code pile up unnoticed as costly inventory. Lewis cited Don Reinertsen’s Principles of Product Development Flow, emphasizing that untested or undeployed code represents lost revenue, unlike visible factory inventory, which signals inefficiencies immediately.
Visualizing Value Streams
Using a value stream map, Lewis illustrated a typical development cycle in which roughly three days of actual coding are surrounded by waits, including ten days queued for testing and thirty more before deployment, stretching lead time to 47 days, of which 42 are idle inventory (a flow efficiency of barely ten percent). Wait times stem from coordination (teams waiting on others), scheduling (e.g., architecture reviews), and queues (backlogs). Shared test environments exacerbate delays, often costing more than provisioning new ones. Lewis advocated mapping workflows to expose these economic losses, making the case to stakeholders for faster delivery.
Reducing Batch Sizes for Flow
Lewis emphasized reducing batch sizes to improve flow, a principle rooted in queuing theory. Smaller batches, like deploying twice as often, halve wait times, enabling faster revenue generation. Using agent-based models, he simulated agile (single-piece flow) versus waterfall (100% batch) teams, showing agile teams deliver value faster. Limiting work-in-progress and controlling queue sizes prevent congestion collapse, ensuring smoother, more predictable workflows.
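A toy simulation makes the batch-size effect visible; the numbers are purely illustrative, not from Lewis's agent-based models:

```typescript
// The same 100 work items, released in batches. Value is only realized
// when a batch completes, so time-to-first-value and average lead time
// both grow with batch size.
function simulate(batchSize: number, items = 100, daysPerItem = 1) {
  let clock = 0;
  const completionTimes: number[] = [];
  for (let start = 0; start < items; start += batchSize) {
    const n = Math.min(batchSize, items - start);
    clock += n * daysPerItem; // the whole batch must finish together
    for (let i = 0; i < n; i++) completionTimes.push(clock);
  }
  const avg = completionTimes.reduce((a, b) => a + b, 0) / items;
  return { firstValueAt: completionTimes[0], averageLeadTime: avg };
}

console.log(simulate(100)); // waterfall-like: first value on day 100, avg 100
console.log(simulate(10));  // first value on day 10, average roughly halved
console.log(simulate(1));   // single-piece flow: value trickles out daily
```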
Links
[DevoxxGR2025] Nx for Gradle – Faster Builds, Better DX
Katerina Skroumpelou, a senior engineer at Nx, delivered a 15-minute talk at Devoxx Greece 2025, showcasing how the @nx/gradle plugin enhances Gradle builds for monorepos, improving developer experience (DX).
Streamlining Gradle Monorepos
Skroumpelou introduced Nx as a build system optimized for monorepos, used by over half of Fortune 500 companies. Gradle’s strength lies in managing multi-project setups, where subprojects (e.g., core, API) share dependencies and tasks. However, large repositories grow complex, slowing builds. Nx integrates seamlessly with Gradle, acting as a thin layer atop existing projects without requiring a rewrite. By running `nx init` in a Gradle project, developers enable Nx’s smart task management, preserving Gradle’s functionality while adding efficiency.
Optimizing CI Pipelines
Slow CI pipelines frustrate developers and inflate costs. Skroumpelou explained how Nx slashes CI times through distributed task execution, caching, and affected task detection. Unlike Gradle’s task-level parallelism and caching, Nx identifies changes in a pull request and runs only impacted tasks, skipping unaffected ones. For instance, a 30-minute pipeline could drop to five minutes by leveraging Nx’s project graph to avoid redundant builds or tests. Nx also splits large tasks, like end-to-end tests, into smaller, distributable units, further accelerating execution.
Handling Flaky Tests
Flaky tests disrupt workflows, forcing developers to rerun entire pipelines. Nx automatically detects and retries failed tests in isolation, preventing delays. Skroumpelou highlighted that this automation ensures pipelines remain efficient, even during meetings or interruptions. Nx, open-source under the MIT license, integrates with tools like VS Code, offering developers a free, scalable solution to enhance Gradle-based CI.