Posts Tagged ‘DevoxxGR2025’

[DevoxxGR2025] Nx for Gradle – Faster Builds, Better DX

Katerina Skroumpelou, a senior engineer at Nx, delivered a 15-minute talk at Devoxx Greece 2025, showcasing how the @nx/gradle plugin enhances Gradle builds for monorepos, improving developer experience (DX).

Streamlining Gradle Monorepos

Skroumpelou introduced Nx as a build system optimized for monorepos, used by over half of Fortune 500 companies. Gradle’s strength lies in managing multi-project setups, where subprojects (e.g., core, API) share dependencies and tasks. However, large repositories grow complex, slowing builds. Nx integrates seamlessly with Gradle, acting as a thin layer atop existing projects without requiring a rewrite. By running nx init in a Gradle project, developers enable Nx’s smart task management, preserving Gradle’s functionality while adding efficiency.
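Getting started is a single command from the repository root. The invocations below follow the standard Nx CLI; exact prompts and flags may vary by Nx version, so treat this as a sketch (the "app" subproject name is a placeholder):

```shell
# Add Nx on top of an existing Gradle multi-project build
# (run from the repository root; requires Node.js)
npx nx@latest init

# Gradle tasks become visible to Nx and can be run through it,
# gaining Nx's caching and task orchestration on top of Gradle:
npx nx run app:build
```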

Optimizing CI Pipelines

Slow CI pipelines frustrate developers and inflate costs. Skroumpelou explained how Nx slashes CI times through distributed task execution, caching, and affected task detection. Unlike Gradle’s task-level parallelism and caching, Nx identifies changes in a pull request and runs only impacted tasks, skipping unaffected ones. For instance, a 30-minute pipeline could drop to five minutes by leveraging Nx’s project graph to avoid redundant builds or tests. Nx also splits large tasks, like end-to-end tests, into smaller, distributable units, further accelerating execution.
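The affected-task workflow can be sketched with the standard Nx CLI; in CI these commands would compare against the pull request's base branch:

```shell
# Run only the build and test tasks affected by the current changes,
# skipping everything the project graph shows as untouched
npx nx affected -t build test

# Inspect the project graph Nx uses to decide what is affected
npx nx graph
```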

Handling Flaky Tests

Flaky tests disrupt workflows, forcing developers to rerun entire pipelines. Nx automatically detects and retries failed tests in isolation, preventing delays. Skroumpelou highlighted that this automation ensures pipelines remain efficient, even during meetings or interruptions. Nx, open-source under the MIT license, integrates with tools like VS Code, offering developers a free, scalable solution to enhance Gradle-based CI.

[DevoxxGR2025] Unmasking Benchmarking Fallacies

Georgios Andrianakis, a Quarkus engineer at Red Hat, presented a 46-minute talk at Devoxx Greece 2025, dissecting benchmarking fallacies, based on a talk by performance expert Francisco Negro.

The Benchmarketing Problem

Andrianakis introduced “benchmarketing,” where benchmarks are manipulated for marketing. Inspired by Negro’s frustration with a claim that Helidon outperformed Quarkus in a TechEmpower benchmark, he explored how data can be misrepresented. Benchmarks should be relevant, representative, equitable, repeatable, cost-effective, scalable, and transparent. A misleading article claimed Helidon’s superiority, but Negro’s investigation revealed unfair comparisons, sparking this talk to expose such fallacies.

Dissecting a Flawed Claim

Focusing on equity, Negro analyzed the TechEmpower benchmark, which tests web frameworks on tasks like JSON serialization and database queries. The claim hinged on a test where Helidon used a raw database driver (the Vert.x PostgreSQL client), while Quarkus used a full object-relational mapper (ORM), Hibernate, which carries inherent performance overhead. When the results were filtered to full-ORM configurations, Quarkus topped the charts and Helidon was absent entirely; when both frameworks were compared without an ORM, Quarkus still came out ahead. The original claim was therefore not an apples-to-apples comparison, misleading readers.

Critical Thinking in Benchmarks

Andrianakis emphasized skepticism, citing Hitchens’ Razor: claims without evidence can be dismissed. Using Brendan Gregg’s USE method, Negro identified CPU saturation, not database I/O, as the bottleneck, debunking assumptions. He urged active benchmarking—monitoring errors and resources—and measuring one level deeper to understand performance. Awareness of biases, like confirmation bias, and avoiding assumptions of malice over incompetence, ensures fair evaluation of benchmark claims.

[DevoxxGR2025] Why OpenTelemetry is the Future

Steve Flanders, a veteran in observability, delivered a 13-minute talk at Devoxx Greece 2025, outlining five reasons why OpenTelemetry (OTel) is poised to dominate observability.

Unified Data Collection

Flanders began by addressing a common pain point: managing multiple libraries for traces, metrics, and logs. OpenTelemetry, a CNCF project second only to Kubernetes in activity, offers a single, open-standard library for all telemetry signals, including profiling and real user monitoring. Supporting standards like W3C Trace Context, Zipkin, and Prometheus, OTel allows developers to instrument applications once, regardless of backend. This eliminates the need for proprietary libraries, simplifying integration and reducing rework when switching vendors.
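The W3C Trace Context standard mentioned above defines the traceparent header that propagates trace identity between services. A minimal parser, just to make the format concrete (the example header value follows the spec's own format):

```python
# Parse a W3C Trace Context "traceparent" header, the propagation
# format OpenTelemetry uses by default. Format:
#   version "-" trace-id (32 hex) "-" parent-id (16 hex) "-" flags
def parse_traceparent(header: str) -> dict:
    version, trace_id, parent_id, flags = header.split("-")
    if len(trace_id) != 32 or len(parent_id) != 16:
        raise ValueError("malformed traceparent header")
    return {
        "version": version,
        "trace_id": trace_id,
        "parent_id": parent_id,
        # Bit 0 of the flags byte marks the span as sampled
        "sampled": int(flags, 16) & 0x01 == 1,
    }

ctx = parse_traceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
```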

Flexible Data Control

The OpenTelemetry Collector, deployable as an agent or gateway, provides robust data processing. Flanders highlighted its ability to filter sensitive data, like personally identifiable information, before export. Developers can send full datasets to internal data lakes while sharing subsets with vendors, offering unmatched flexibility. OTel’s modularity means you can use its instrumentation, collector, or neither, integrating with existing systems. This vendor-agnostic approach ensures data portability, as switching backends requires only configuration changes, not re-instrumentation.
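A collector pipeline of the kind described might look like the sketch below: an attributes processor drops a PII field before the vendor export, while a file exporter keeps the full stream for an internal data lake. The component types are real collector components, but the endpoint, file path, and attribute key are placeholders:

```yaml
receivers:
  otlp:
    protocols:
      grpc: {}

processors:
  attributes/scrub-pii:
    actions:
      - key: user.email        # placeholder PII attribute
        action: delete
  batch: {}

exporters:
  otlphttp/vendor:
    endpoint: https://vendor.example.invalid   # placeholder endpoint
  file/datalake:
    path: ./telemetry-full.json                # full, unscrubbed stream

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [attributes/scrub-pii, batch]
      exporters: [otlphttp/vendor, file/datalake]
```

Switching vendors then means editing the exporter section, not re-instrumenting the application.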

Enhanced Problem Resolution

OTel’s context and correlation features link traces, metrics, and logs, accelerating issue resolution. Flanders showcased a service map visualizing errors and latency, enriched with resource metadata (e.g., Kubernetes pod, cloud provider). This allows pinpointing issues, like a faulty pod causing currency service errors, reducing mean-time-to-resolution. With broad adoption by vendors, users, and projects, and stable support for core signals, OTel is a production-ready standard reshaping observability.

[DevoxxGR2025] Mastering Indistractable Focus

Michela Bertaina, head of community at Codemotion, shared a 45-minute talk at Devoxx Greece 2025 on achieving indistractable focus.

Understanding Distraction

Bertaina began with her community manager role, overwhelmed by notifications across platforms, leading to unproductive days. She introduced Nir Eyal’s concept of traction (actions toward goals) versus distraction (actions pulling away). Internal triggers (90%), like boredom or stress, drive distraction more than external ones (10%). She shared her chaotic morning routine—checking notifications while eating—causing stress and cognitive overload. Research shows focus lasts 40 seconds, worsened by constant app stimuli, with people checking phones 60 times daily.

Four Keys to Focus

Bertaina outlined four strategies: manage internal triggers, schedule traction, eliminate external triggers, and make pacts. First, identify emotional triggers (e.g., procrastination) using the 10-minute rule to delay distractions. Second, time-box meaningful tasks, prioritizing outcomes over to-do lists, and schedule downtime to avoid multitasking, which costs 23 minutes to refocus. Third, use technology (focus modes, app timers, grayscale screens) to reduce external triggers. Finally, make pacts (money, effort, identity) to commit to goals, like Ulysses resisting sirens by binding himself.

Reframing Lifestyle

Bertaina added a fifth key: reframe lifestyle with mindfulness, dedicated spaces, healthy diet, and sleep. Journaling and retrospectives clarify thoughts, while separate physical/virtual workspaces enhance focus. She challenged attendees to avoid phones during commutes or try a week-long digital detox, urging experimentation to find personal focus strategies.

[DevoxxGR2025] Component Ownership in Feature Teams

Thanassis Bantios, VP of Engineering at T-Food, delivered a 17-minute talk at Devoxx Greece 2025 on managing component ownership in feature teams.

The Feature Team Dilemma

Bantios narrated a story of Helen, an entrepreneur scaling an online delivery startup. Initially, a small team communicated easily, but growth led to functional teams and a backend monolith, complicating contributions. Adopting microservices split critical components like orders and menu services, but communication broke down as features required multiple teams. Agile cross-functional teams solved this, enabling autonomy, but neglected component ownership, risking a “Frankenstein” codebase.

Defining Component Ownership

A component, deployable independently (e.g., backend service or client app), needs ownership to maintain health, architecture, documentation, and code reviews. Bantios stressed teams, not individuals, should own components to avoid risks like staff turnover. Using the Spotify matrix model, client components (e.g., Android) and critical backend services (e.g., menu service) are owned by chapters (craft-based groups like Android developers), ensuring knowledge sharing and manageable on-call rotations. Non-critical services, like ratings, can be team-owned.

Inner Sourcing for Speed

Inner sourcing allows any team to contribute to any component, reducing dependencies. Bantios emphasized standardization (language, CI/CD, architecture) to simplify contributions, focusing only on business logic. He suggested rating components on an inner-sourcing score (test coverage, documentation) and dedicating 20% of time to component backlogs. This prevents technical debt in feature-driven environments, ensuring fast, scalable development.
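The inner-sourcing score could be computed along these lines. The criteria and weights below are illustrative, not from the talk; the idea is simply a repeatable rating of how easy a component is for outside teams to contribute to:

```python
# Hypothetical sketch of an "inner-sourcing score": rate a component on
# how contribution-ready it is. Weights and criteria are illustrative.
def inner_sourcing_score(test_coverage: float, has_docs: bool,
                         uses_standard_ci: bool) -> float:
    """test_coverage is a fraction in [0, 1]; returns a score in [0, 1]."""
    score = 0.5 * test_coverage           # coverage is weighted heaviest
    score += 0.25 if has_docs else 0.0    # contribution docs present
    score += 0.25 if uses_standard_ci else 0.0  # standard CI/CD pipeline
    return round(score, 2)
```

A low-scoring component signals where the 20% component-backlog time is best spent before inviting outside contributions.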

[DevoxxGR2025] Email Re-Platforming Case Study

George Gkogkolis from Travelite Group shared a 15-minute case study at Devoxx Greece 2025 on re-platforming to process 1 million emails per hour.

The Challenge

Travelite Group, a global OTA handling flight tickets in 75 countries, processes 350,000 emails daily, expected to hit 2 million. Previously, a SaaS ticketing system struggled with growing traffic, poor licensing, and subpar user experience. Sharding the system led to complex agent logins and multiplexing issues with the booking engine. Market research revealed no viable alternatives, as vendors’ licensing models couldn’t handle the scale, prompting an in-house solution.

The New Platform

The team built a cloud-native, microservices-based platform within a year, going live in December 2024. It features a receiving app, a React-based web UI built with Mantine, a Spring Boot backend, and Amazon DocumentDB, integrated with Amazon SES and S3. Emails land on a Postfix server, are stored in S3, and are processed via EventBridge and SQS. Data migration was critical, moving terabytes of EML files and databases in under two months, achieving a peak throughput of 1 million emails per hour by scaling to 50 receiver instances.
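A quick back-of-the-envelope check puts those scaling figures in perspective: spreading the peak load across the receiver fleet yields a modest per-instance rate, which is what makes horizontal scaling effective here.

```python
# Back-of-the-envelope check of the figures from the talk:
# 1M emails/hour peak, spread across 50 receiver instances.
PEAK_PER_HOUR = 1_000_000
INSTANCES = 50

per_instance_hour = PEAK_PER_HOUR / INSTANCES   # emails/hour per instance
per_instance_sec = per_instance_hour / 3600     # emails/second per instance
```

Roughly 5.6 emails per second per instance is well within what a single SQS consumer can sustain, so the bottleneck shifts to storage and downstream processing rather than ingestion.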

Lessons Learned

Starting with migration would have eased performance optimization, as synthetic data didn’t match production scale. Cloud-native deployment simplified scaling, and a backward-compatible API eased integration. Open standards (EML, Open API) ensured reliability. Future plans include AI and LLM enhancements by 2025, automating domain allocation for scalability.

[DevoxxGR2025] Angular Micro-Frontends

Dimitris Kaklamanis, a lead software engineer at CodeHub, delivered an 11-minute talk at Devoxx Greece 2025, exploring how Angular micro-frontends revolutionize scalable web development.

Micro-Frontends Unveiled

Kaklamanis opened with a relatable scenario: a growing front-end monolith turning into a dependency nightmare. Micro-frontends, inspired by microservices, break the UI into smaller, independent pieces, each owned by a team. This enables parallel development, reduces risks, and enhances scalability. He outlined four principles: decentralization (team-owned UI parts), technology agnosticism (mixing frameworks like Angular, React, or Vue), resilience (isolated bugs don’t crash the app), and scalability (independent team scaling). A diagram showed teams building features in different frameworks, integrated at runtime via a shell app.

Pros and Cons

Micro-frontends offer scalability, tech flexibility, faster parallel development, resilience, and easier maintenance due to focused codebases. However, challenges include increased complexity (more coordination), performance overhead (multiple apps loading), communication issues (state sharing), and CI/CD complexity (separate pipelines). Kaklamanis highlighted Angular’s strengths: its component-based structure aligns with modularity, CLI tools manage multiple projects, and features like lazy loading and Webpack 5 module federation simplify implementation. Tools like NX streamline monorepo management, making Angular a robust choice.

Implementation in Action

Kaklamanis demonstrated a live Angular store app with independent modules (orders, products, inventory). A change in the product component didn’t affect others, showcasing isolation. He recommended clear module ownership, careful intermodule communication, performance monitoring, and minimal shared libraries. For large, multi-team projects, he urged prototyping micro-frontends, starting small and iterating for scalability.

[DevoxxGR2025] Simplifying LLM Integration: A Blueprint for Effective AI Systems

Efstratios Marinos captivated attendees at Devoxx Greece 2025 with a masterclass on streamlining large language model (LLM) integrations. By focusing on practical, modular patterns, Efstratios demonstrated how to construct robust, scalable AI systems that prioritize simplicity without sacrificing functionality, offering actionable strategies for developers.

Exploring the Complexity Continuum

Efstratios introduced the concept of a complexity continuum for LLM integrations, spanning from straightforward single calls to sophisticated agentic frameworks. At its simplest, a system comprises an LLM, a retrieval mechanism, and tool capabilities, delivering maintainability and ease of updates with minimal overhead. More intricate setups incorporate routers, APIs, and vector stores, enhancing functionality but complicating debugging. Efstratios emphasized that simplicity is a strategic choice, enabling rapid adaptation to evolving AI technologies. He showcased a concise Python implementation, where a single function manages retrieval and response generation in a handful of lines, contrasting this with a multi-step retrieval-augmented generation (RAG) workflow that involves encoding, indexing, and embedding, adding layers of complexity that demand careful justification.
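The "handful of lines" end of the continuum might look like the sketch below: one function that retrieves context and generates a response. The `search` and `llm` callables are stand-ins for a real retrieval backend and model call, not any specific library:

```python
# Simplest point on the complexity continuum: a single function that
# handles retrieval and response generation. `search` and `llm` are
# stand-ins for a real retrieval backend and model call.
def answer(query: str, search, llm) -> str:
    context = "\n".join(search(query, top_k=3))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return llm(prompt)

# Stand-in implementations, just to show the call shape:
docs = ["Nx is a build system.", "Gradle manages multi-project builds."]
result = answer("What is Nx?",
                search=lambda q, top_k=3: docs[:top_k],
                llm=lambda prompt: f"(model sees {len(prompt)} chars)")
```

Everything beyond this (routers, vector stores, agent loops) should earn its place by solving a measured limitation of this baseline.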

Crafting Robust Interfaces

Central to Efstratios’s philosophy is the design of clean interfaces for LLMs, retrieval systems, tools, and memory components. He compared prompt crafting to API design, advocating for structured formats that clearly separate instructions, context, and queries. Well-documented tools, complete with detailed descriptions and practical examples, empower LLMs to perform effectively, while vague documentation leads to errors. Efstratios underscored the need for resilient error handling, such as fallback strategies for failed retrievals or tool invocations, to ensure system reliability. For example, a system might respond to a failed search by suggesting alternatives or retrying with adjusted parameters, improving usability and simplifying troubleshooting in production environments.
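The fallback idea can be sketched as a thin wrapper around the retrieval call: retry a bounded number of times, then degrade to an empty context plus a note the caller can surface. Names here are illustrative, not from the talk:

```python
# Resilient retrieval sketch: bounded retries, then graceful degradation
# instead of a raw error. Function and field names are illustrative.
def retrieve_with_fallback(query: str, search, retries: int = 2) -> dict:
    last_error = None
    for _ in range(retries + 1):
        try:
            return {"results": search(query), "note": None}
        except Exception as exc:   # in production, catch narrower types
            last_error = exc
    # Degrade: empty context plus a note the caller can act on
    return {"results": [], "note": f"retrieval failed: {last_error}"}

def flaky(query):
    raise TimeoutError("search backend unavailable")

fallback = retrieve_with_fallback("anything", flaky)
```

The downstream prompt can then explain the missing context to the user instead of failing, which is exactly the kind of behavior that simplifies troubleshooting in production.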

Enhancing Capabilities with Workflow Patterns

Efstratios explored three foundational workflow patterns—prompt chaining, routing, and parallelization—to optimize performance while managing complexity. Prompt chaining divides complex tasks into sequential steps, such as outlining, drafting, and refining content, enhancing clarity at the expense of increased latency. Routing employs an LLM to categorize inputs and direct them to specialized handlers, like a customer support bot distinguishing technical from financial queries, improving efficiency through focused processing. Parallelization, encompassing sectioning and voting, distributes tasks across multiple LLM instances, such as analyzing document segments concurrently, though it incurs higher computational costs. These patterns provide incremental enhancements, ideal for tasks requiring moderate sophistication.
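The prompt-chaining pattern reduces to three sequential model calls, each consuming the previous output. The sketch below uses a stand-in `llm` callable that records the steps it was asked to perform:

```python
# Prompt chaining sketch: outline, draft, refine as sequential calls.
# `llm` is any completion function; here a stand-in that records calls.
def chained_write(topic: str, llm) -> str:
    outline = llm(f"Outline a short article about {topic}.")
    draft = llm(f"Write a draft following this outline:\n{outline}")
    return llm(f"Refine this draft for clarity:\n{draft}")

calls = []
def fake_llm(prompt: str) -> str:
    calls.append(prompt)
    return f"step-{len(calls)} output"

final = chained_write("Gradle monorepos", fake_llm)
```

The trade-off named in the talk is visible in the structure: three round trips of latency in exchange for a clearer, more debuggable task at each step.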

Advanced Patterns and Decision-Making Principles

For more demanding scenarios, Efstratios presented two advanced patterns: orchestrator-workers and evaluator-optimizer. The orchestrator-workers pattern dynamically breaks down tasks, with a central LLM coordinating specialized workers, perfect for complex coding projects or multi-faceted content creation. The evaluator-optimizer pattern establishes a feedback loop, where a generator LLM produces content and an evaluator refines it iteratively, mirroring human iterative processes. Efstratios outlined six decision-making principles—use case alignment, development effort, maintainability, performance granularity, latency, and cost—to guide pattern selection. Simple solutions suffice for tasks like summarization, while multi-step workflows excel in knowledge-intensive applications. He encouraged starting with minimal solutions, establishing performance baselines, identifying specific limitations, and adding complexity only when validated by measurable gains.
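The evaluator-optimizer loop can be sketched in a few lines: generate, score, feed the verdict back, and stop on acceptance or when the round budget runs out. All callables below are stand-ins:

```python
# Evaluator-optimizer sketch: a generator produces output, an evaluator
# scores it, and feedback loops back until acceptance or the round
# budget is exhausted. All callables are stand-ins for model calls.
def refine(task: str, generate, evaluate, max_rounds: int = 3) -> str:
    output = generate(task)
    for _ in range(max_rounds):
        verdict = evaluate(output)
        if verdict == "accept":
            break
        output = generate(f"{task}\nFeedback: {verdict}")
    return output

versions = iter(["draft-1", "draft-2", "draft-3"])
accepted = refine(
    "Summarize the talk",
    generate=lambda prompt: next(versions),
    evaluate=lambda text: "accept" if text == "draft-2" else "tighten the prose",
)
```

The `max_rounds` cap is the cost control: each extra round is another pair of model calls, which is where the latency and cost principles from the decision framework bite.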

[DevoxxGR2025] Orchestration vs. Choreography: Balancing Control and Flexibility in Microservices

At Devoxx Greece 2025, Laila Bougria, representing Particular Software, delivered an insightful presentation on the nuances of orchestration and choreography in microservice architectures. Leveraging her extensive banking industry experience, Laila provided a practical framework to navigate the trade-offs of these coordination strategies, using real-world scenarios to guide developers toward informed system design choices.

The Essence of Microservice Interactions

Laila opened with a relatable story about navigating the mortgage process, underscoring the complexity of interservice communication in microservices. She explained that while individual services are streamlined, the real challenge lies in orchestrating their interactions to deliver business value. Orchestration employs a centralized component to direct workflows, maintaining state and issuing commands, much like a conductor guiding a symphony. Choreography, by contrast, embraces an event-driven model where services operate autonomously, reacting to events with distributed state management. Through a loan broker example, Laila illustrated how orchestration simplifies processes like credit checks and offer ranking by centralizing control, yet risks creating dependencies that can halt workflows if services fail. Choreography, facilitated by an event bus, enhances autonomy but complicates tracking the overall process, potentially obscuring system behavior.
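The structural difference between the two styles can be reduced to a few lines. In the orchestrated version one component calls each service in an explicit order and holds the state; in the choreographed version services react to events on a bus with no central coordinator. The loan-broker service names below are illustrative:

```python
# Orchestration: a central workflow makes explicit calls in order.
def orchestrate(application, credit_check, rank_offers):
    score = credit_check(application)      # orchestrator holds the state
    return rank_offers(application, score)

# Choreography: services subscribe to events and react independently.
class EventBus:
    def __init__(self):
        self.handlers = {}
    def subscribe(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)
    def publish(self, event, payload):
        for handler in self.handlers.get(event, []):
            handler(payload)

bus = EventBus()
ranked = []
bus.subscribe("credit.scored", lambda p: ranked.append((p["app"], p["score"])))
bus.publish("credit.scored", {"app": "loan-42", "score": 710})
```

Reading the orchestrated function top to bottom reveals the whole workflow; in the choreographed version that flow only emerges from the set of subscriptions, which is exactly the visibility trade-off the talk highlights.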

Navigating Coupling and Resilience

Delving into the mechanics, Laila highlighted the distinct coupling profiles of each approach. Orchestration often leads to efferent coupling, with the central component relying on multiple downstream services, necessitating resilience mechanisms like retries or circuit breakers to mitigate failures. For instance, if a credit scoring service is unavailable, the orchestrator must handle retries or fallback strategies. Choreography, however, increases afferent coupling through event subscriptions, which can introduce bidirectional dependencies when addressing business failures, such as reversing a loan if a property deal collapses. Laila stressed the importance of understanding coupling types—temporal, contract, and control—to make strategic decisions. Asynchronous communication in orchestration reduces temporal coupling, while choreography’s event-driven nature supports scalability but challenges visibility, as seen in her banking workflow example where emergent behavior obscured process clarity.

Addressing Business Failures and Workflow Evolution

Laila emphasized the critical role of managing business failures, or compensating flows, where actions must be undone due to unforeseen events, like a failed property transaction requiring the reversal of interest provisions or direct debits. Orchestration excels here, leveraging existing service connections to streamline reversals. In contrast, choreography demands additional event subscriptions, risking complex bidirectional coupling, as demonstrated when adding a background check to a loan process introduced order dependencies. Laila introduced the concept of “passive-aggressive publishers,” where services implicitly rely on others to act on events, akin to expecting a partner to address a chaotic kitchen without direct communication. She advocated for explicit command-driven interactions to clarify dependencies, ensuring system robustness. Additionally, Laila addressed workflow evolution, noting that orchestration simplifies modifications by centralizing changes, while choreography requires careful management to avoid disrupting event-driven flows.

A Strategic Decision Framework

Concluding her talk, Laila offered a decision-making framework anchored in five questions: the nature of communication (synchronous or asynchronous), the complexity of prerequisites, the extent of compensating flows, the likelihood of domain changes, and the need for centralized responsibility. Orchestration suits critical workflows with frequent changes or complex dependencies, such as banking processes requiring clear state visibility. Choreography is ideal for stable domains with minimal prerequisites, like retail order systems. By segmenting workflows into sub-processes, developers can apply the appropriate pattern strategically, blending both approaches for optimal outcomes. Laila’s banking-inspired insights provide a practical guide for architects to craft systems that balance control, flexibility, and maintainability.

[DevoxxGR2025] Engineering for Social Impact

Giorgos Anagnostaki and Kostantinos Petropoulos, from IKnowHealth, delivered a concise 15-minute talk at Devoxx Greece 2025, portraying software engineering as a creative process with profound social impact, particularly in healthcare.

Engineering as Art

Anagnostaki likened software engineering to creating art, blending design and problem-solving to build functional systems from scratch. In healthcare, this creativity carries immense responsibility, as their work at IKnowHealth supports radiology departments. Their platform, built for Greece’s national imaging repository, enables precise diagnoses, like detecting cancer or brain tumors, directly impacting patients’ lives. This human connection fuels their motivation, transforming code into life-saving tools.

The Radiology Platform

Petropoulos detailed their cloud-based platform on Azure, connecting hospitals and citizens. Hospitals send DICOM imaging files and HL7 diagnosis data via VPN, while citizens access their medical history through a portal, eliminating CDs and printed reports. The system supports remote diagnosis and collaboration, allowing radiologists to share anonymized cases for second opinions, enhancing accuracy and speeding up critical decisions, especially in understaffed regions.

Technical Challenges

The platform handles 2.5 petabytes of imaging data annually from over 100 hospitals, requiring robust storage and fast retrieval. High throughput (up to 600 requests per minute per hospital) demands scalable infrastructure. Front-end challenges include rendering thousands of DICOM images without overloading browsers, while GDPR-compliant security ensures data privacy. Integration with national health systems added complexity, but the platform’s impact—illustrated by Anagnostaki’s personal story of his father’s cancer detection—underscores its value.
