Posts Tagged ‘DevoxxGreece2025’
[DevoxxGR2025] Optimized Kubernetes Scaling with Karpenter
Alex König, an AWS expert, delivered a 39-minute talk at Devoxx Greece 2025, exploring how Karpenter enhances Kubernetes cluster autoscaling for speed, cost-efficiency, and availability.
Karpenter’s Dynamic Autoscaling
König introduced Karpenter as an open-source, Kubernetes-native autoscaling solution, contrasting it with the traditional Cluster Autoscaler. Unlike the latter, which relies on uniform node groups (e.g., nodes with four CPUs and 16GB RAM), Karpenter uses the EC2 Fleet API to dynamically provision nodes tailored to workload needs. For instance, if a pod requires one CPU, Karpenter allocates a node with minimal excess capacity, avoiding resource waste. This right-sizing, combined with groupless scaling, enables faster and more cost-effective scaling, especially in dynamic environments.
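The right-sizing idea can be sketched in a few lines. This is a toy illustration of the concept, not Karpenter's actual algorithm; the instance list and costs are hypothetical, not real EC2 data.

```python
# Toy illustration of "right-sizing": pick the cheapest instance type that
# still fits a pod's resource request, instead of a fixed node-group shape.
# Instance names, sizes, and costs are hypothetical.

INSTANCE_TYPES = [
    # (name, vCPU, memory GiB, hourly cost in arbitrary units)
    ("small",  2,  4,  1.0),
    ("medium", 4, 16,  2.0),
    ("large",  8, 32,  4.0),
]

def right_size(cpu_request: float, mem_request: float):
    """Return the cheapest instance that satisfies the pod's request."""
    candidates = [
        t for t in INSTANCE_TYPES
        if t[1] >= cpu_request and t[2] >= mem_request
    ]
    return min(candidates, key=lambda t: t[3]) if candidates else None

# A one-CPU pod gets a small node, not a fixed 4-CPU/16GB group node.
print(right_size(1, 2)[0])   # small
print(right_size(4, 16)[0])  # medium
```

A fixed node group would hand the one-CPU pod a 4-CPU node regardless; the groupless approach pays only for what the workload asks for.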
Ensuring Availability with Constraints
König addressed availability challenges reported by users, emphasizing Kubernetes-native scheduling constraints to mitigate disruptions. Topology spread constraints distribute pods across availability zones, reducing the risk of downtime if a node fails. Pod disruption budgets, affinity/anti-affinity rules, and priority classes further ensure critical workloads are scheduled appropriately. For stateful workloads using EBS, König recommended setting the volume binding mode to “wait for first consumer” to avoid pod-volume mismatches across zones, preventing crashes and ensuring reliability.
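The constraints above have a concrete shape in the Kubernetes API. Below is a sketch written as Python dicts for brevity (in practice these are YAML manifests); the field names are real Kubernetes API fields, while the app label and StorageClass name are illustrative.

```python
# A topologySpreadConstraints entry: spread pods across availability zones
# so a single zone failure cannot take out every replica.
topology_spread = {
    "maxSkew": 1,                                  # zones may differ by at most one pod
    "topologyKey": "topology.kubernetes.io/zone",  # spread across availability zones
    "whenUnsatisfiable": "DoNotSchedule",
    "labelSelector": {"matchLabels": {"app": "web"}},
}

# StorageClass for EBS-backed stateful workloads: delay volume binding
# until a pod is scheduled, so the volume is created in the pod's zone.
storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "ebs-wait"},
    "provisioner": "ebs.csi.aws.com",
    "volumeBindingMode": "WaitForFirstConsumer",
}
```

Without `WaitForFirstConsumer`, the volume can be provisioned in one zone while the pod lands in another, producing exactly the zone-mismatch crashes König warned about.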
Integrating with KEDA for Application Scaling
For advanced scaling, König highlighted combining Karpenter with KEDA for event-driven, application-specific scaling. KEDA scales pods based on metrics like Kafka topic sizes or SQS queues, beyond CPU/memory. Karpenter then provisions nodes for pending pods, enabling seamless scaling for workloads like flash sales. König outlined a four-step migration from Cluster Autoscaler to Karpenter, emphasizing its simplicity and open-source documentation.
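A KEDA ScaledObject wiring pods to an SQS queue looks roughly like this (shown as a Python dict for brevity; the real object is YAML). The resource names, queue URL, and thresholds are illustrative.

```python
# Sketch of a KEDA ScaledObject that scales a Deployment on SQS queue
# depth; Karpenter then provisions nodes for any pods left pending.
scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "orders-scaler"},
    "spec": {
        "scaleTargetRef": {"name": "orders"},  # the Deployment to scale
        "minReplicaCount": 1,
        "maxReplicaCount": 50,
        "triggers": [{
            "type": "aws-sqs-queue",
            "metadata": {
                "queueURL": "https://sqs.eu-west-1.amazonaws.com/123456789012/orders",
                "queueLength": "5",            # target messages per replica
            },
        }],
    },
}
```

KEDA handles the pod count from the queue metric; Karpenter handles the node count from the resulting pending pods, which is the division of labor König described for flash-sale spikes.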
[DevoxxGR2025] Understanding Flow in Software Development
James Lewis, a ThoughtWorks consultant, delivered a 41-minute talk at Devoxx Greece 2025, exploring how work flows through software development, drawing on information theory and complexity science.
The Nature of Work as Information
Lewis framed software development as transforming “stuff” into more valuable outputs, akin to enterprise workflows before computers. Because work takes the form of information, it flows invisibly through value streams—from ideas to production code. That invisibility causes problems: backlogs go unnoticed and undeployed code sits as costly inventory. Lewis cited Don Reinertsen’s Principles of Product Development Flow, emphasizing that untested or undeployed code represents lost revenue, unlike visible factory inventory, which signals inefficiencies immediately.
Visualizing Value Streams
Using a value stream map, Lewis illustrated a typical development cycle: three days for coding, ten days waiting for testing, and 30 days for deployment, totaling 47 days of lead time, with 42 of those days spent as idle inventory. Wait times stem from coordination (teams waiting on others), scheduling (e.g., architecture reviews), and queues (backlogs). Shared test environments exacerbate delays, with the waiting often costing more than provisioning dedicated environments would. Lewis advocated mapping workflows to expose these economic losses, making the case to stakeholders for faster delivery.
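The economic point can be made with one ratio. Using the talk's figures of 47 days of lead time with 42 days idle:

```python
# Flow efficiency: what fraction of the lead time is actually spent
# working on the item, as opposed to it sitting in a queue.

def flow_efficiency(lead_time_days: float, wait_days: float) -> float:
    """Fraction of lead time spent actively working on the item."""
    return (lead_time_days - wait_days) / lead_time_days

eff = flow_efficiency(47, 42)
print(f"{eff:.1%}")  # roughly 10.6% — most of the lead time is inventory
```

A flow efficiency around 10% means nearly all of the cycle is waiting, which is the invisible inventory Lewis wants teams to surface.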
Reducing Batch Sizes for Flow
Lewis emphasized reducing batch sizes to improve flow, a principle rooted in queuing theory. Smaller batches, like deploying twice as often, halve wait times, enabling faster revenue generation. Using agent-based models, he simulated agile (single-piece flow) versus waterfall (100% batch) teams, showing agile teams deliver value faster. Limiting work-in-progress and controlling queue sizes prevent congestion collapse, ensuring smoother, more predictable workflows.
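The batch-size claim follows from simple queuing arithmetic. Assuming work completes at a uniform rate between releases (a simplifying assumption, not a result from the talk):

```python
# Work finished at a uniform rate waits, on average, half the release
# interval before it ships. Deploying twice as often therefore halves
# the average time finished work sits as undeployed inventory.

def average_wait(release_interval_days: float) -> float:
    """Mean time a finished item sits undeployed, assuming work
    completes uniformly between releases."""
    return release_interval_days / 2

monthly = average_wait(30)   # release every 30 days -> 15 days average wait
biweekly = average_wait(15)  # release twice as often -> 7.5 days
print(monthly, biweekly)
```

This is the same mechanism behind the agile-versus-waterfall simulation: single-piece flow drives the release interval, and hence the wait, toward zero.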
[DevoxxGR2025] Nx for Gradle – Faster Builds, Better DX
Katerina Skroumpelou, a senior engineer at Nx, delivered a 15-minute talk at Devoxx Greece 2025, showcasing how the @nx/gradle plugin enhances Gradle builds for monorepos, improving developer experience (DX).
Streamlining Gradle Monorepos
Skroumpelou introduced Nx as a build system optimized for monorepos, used by over half of Fortune 500 companies. Gradle’s strength lies in managing multi-project setups, where subprojects (e.g., core, API) share dependencies and tasks. However, large repositories grow complex, slowing builds. Nx integrates seamlessly with Gradle, acting as a thin layer atop existing projects without requiring a rewrite. By running nx init in a Gradle project, developers enable Nx’s smart task management, preserving Gradle’s functionality while adding efficiency.
Optimizing CI Pipelines
Slow CI pipelines frustrate developers and inflate costs. Skroumpelou explained how Nx slashes CI times through distributed task execution, caching, and affected task detection. Unlike Gradle’s task-level parallelism and caching, Nx identifies changes in a pull request and runs only impacted tasks, skipping unaffected ones. For instance, a 30-minute pipeline could drop to five minutes by leveraging Nx’s project graph to avoid redundant builds or tests. Nx also splits large tasks, like end-to-end tests, into smaller, distributable units, further accelerating execution.
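Affected-task detection boils down to a reachability query on the project graph. The sketch below illustrates the idea (it is not Nx's actual implementation): start from the projects a pull request touched and walk the reverse dependency edges.

```python
# Sketch of "affected" detection: run tasks only for changed projects
# and everything that depends on them, skipping the rest.
from collections import deque

def affected(dependencies: dict[str, list[str]], changed: set[str]) -> set[str]:
    """dependencies maps project -> projects it depends on."""
    # Invert the graph: project -> projects that depend on it.
    dependents: dict[str, list[str]] = {p: [] for p in dependencies}
    for project, deps in dependencies.items():
        for dep in deps:
            dependents.setdefault(dep, []).append(project)
    result, queue = set(changed), deque(changed)
    while queue:
        for dependent in dependents.get(queue.popleft(), []):
            if dependent not in result:
                result.add(dependent)
                queue.append(dependent)
    return result

graph = {"core": [], "api": ["core"], "web": ["api"], "docs": []}
print(sorted(affected(graph, {"core"})))  # ['api', 'core', 'web'] — docs skipped
```

A change to core forces api and web to rebuild, but docs is untouched and its tasks are skipped entirely, which is where the 30-minute-to-five-minute pipeline savings come from.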
Handling Flaky Tests
Flaky tests disrupt workflows, forcing developers to rerun entire pipelines. Nx automatically detects and retries failed tests in isolation, preventing delays. Skroumpelou highlighted that this automation ensures pipelines remain efficient, even during meetings or interruptions. Nx, open-source under the MIT license, integrates with tools like VS Code, offering developers a free, scalable solution to enhance Gradle-based CI.
[DevoxxGR2025] Unmasking Benchmarking Fallacies
Georgios Andrianakis, a Quarkus engineer at Red Hat, presented a 46-minute talk at Devoxx Greece 2025, dissecting benchmarking fallacies, based on a talk by performance expert Francesco Nigro.
The Benchmarketing Problem
Andrianakis introduced “benchmarketing,” where benchmarks are manipulated for marketing. Inspired by Nigro’s frustration with a claim that Helidon outperformed Quarkus in a TechEmpower benchmark, he explored how data can be misrepresented. Benchmarks should be relevant, representative, equitable, repeatable, cost-effective, scalable, and transparent. A misleading article claimed Helidon’s superiority, but Nigro’s investigation revealed unfair comparisons, sparking this talk to expose such fallacies.
Dissecting a Flawed Claim
Focusing on equity, Nigro analyzed the TechEmpower benchmark, which tests web frameworks on tasks like JSON serialization and database queries. The claim hinged on a test where Helidon used a raw database driver (Vert.x for PostgreSQL), while Quarkus used a full object-relational mapper (ORM) like Hibernate, incurring performance penalties. Filtering for full ORM tests, Quarkus topped the charts, with Helidon absent. Comparing both without ORMs, Quarkus still outperformed. This exposed the claim’s inequity: it wasn’t an apples-to-apples comparison, and it misled readers.
Critical Thinking in Benchmarks
Andrianakis emphasized skepticism, citing Hitchens’ Razor: claims without evidence can be dismissed. Using Brendan Gregg’s USE method, Nigro identified CPU saturation, not database I/O, as the bottleneck, debunking assumptions. He urged active benchmarking—monitoring errors and resources—and measuring one level deeper to understand performance. Awareness of biases, like confirmation bias, and avoiding assumptions of malice over incompetence, ensures fair evaluation of benchmark claims.
[DevoxxGR2025] Why OpenTelemetry is the Future
Steve Flanders, a veteran in observability, delivered a 13-minute talk at Devoxx Greece 2025, outlining five reasons why OpenTelemetry (OTel) is poised to dominate observability.
Unified Data Collection
Flanders began by addressing a common pain point: managing multiple libraries for traces, metrics, and logs. OpenTelemetry, a CNCF project second only to Kubernetes in activity, offers a single, open-standard library for all telemetry signals, including profiling and real user monitoring. Supporting standards like W3C Trace Context, Zipkin, and Prometheus, OTel allows developers to instrument applications once, regardless of backend. This eliminates the need for proprietary libraries, simplifying integration and reducing rework when switching vendors.
Flexible Data Control
The OpenTelemetry Collector, deployable as an agent or gateway, provides robust data processing. Flanders highlighted its ability to filter sensitive data, like personally identifiable information, before export. Developers can send full datasets to internal data lakes while sharing subsets with vendors, offering unmatched flexibility. OTel’s modularity means you can use its instrumentation, collector, or neither, integrating with existing systems. This vendor-agnostic approach ensures data portability, as switching backends requires only configuration changes, not re-instrumentation.
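The fan-out-with-filtering setup Flanders described maps onto the Collector's pipeline configuration. Below is its rough shape, written as a Python dict for brevity (the real file is YAML); the component names (`otlp`, `attributes`, `otlphttp`) are standard Collector components, while the endpoints and the scrubbed attribute key are illustrative.

```python
# Sketch of a Collector pipeline: scrub a PII attribute once, then fan
# the cleaned stream out to an internal data lake and a vendor backend.
collector_config = {
    "receivers": {"otlp": {"protocols": {"grpc": None, "http": None}}},
    "processors": {
        # Delete a sensitive attribute before anything leaves the Collector.
        "attributes/scrub": {
            "actions": [{"key": "user.email", "action": "delete"}],
        },
    },
    "exporters": {
        "otlphttp/lake":   {"endpoint": "https://datalake.internal:4318"},
        "otlphttp/vendor": {"endpoint": "https://vendor.example.com:4318"},
    },
    "service": {
        "pipelines": {
            "traces": {
                "receivers": ["otlp"],
                "processors": ["attributes/scrub"],
                "exporters": ["otlphttp/lake", "otlphttp/vendor"],
            },
        },
    },
}
```

Switching vendors means swapping an exporter entry in this file; the application's instrumentation is untouched, which is the data-portability argument in practice.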
Enhanced Problem Resolution
OTel’s context and correlation features link traces, metrics, and logs, accelerating issue resolution. Flanders showcased a service map visualizing errors and latency, enriched with resource metadata (e.g., Kubernetes pod, cloud provider). This allows pinpointing issues, like a faulty pod causing currency service errors, reducing mean-time-to-resolution. With broad adoption by vendors, users, and projects, and stable support for core signals, OTel is a production-ready standard reshaping observability.
[DevoxxGR2025] Mastering Indistractable Focus
Michela Bertaina, head of community at Codemotion, shared a 45-minute talk at Devoxx Greece 2025 on achieving indistractable focus.
Understanding Distraction
Bertaina began with her community manager role, overwhelmed by notifications across platforms, leading to unproductive days. She introduced Nir Eyal’s concept of traction (actions toward goals) versus distraction (actions pulling away). Internal triggers (90%), like boredom or stress, drive distraction far more than external ones (10%). She shared her chaotic morning routine—checking notifications while eating—causing stress and cognitive overload. Research shows that focus on a screen lasts an average of just 40 seconds, worsened by constant app stimuli, with people checking their phones 60 times daily.
Four Keys to Focus
Bertaina outlined four strategies: manage internal triggers, schedule traction, eliminate external triggers, and make pacts. First, identify emotional triggers (e.g., procrastination) using the 10-minute rule to delay distractions. Second, time-box meaningful tasks, prioritizing outcomes over to-do lists, and schedule downtime to avoid multitasking, which costs 23 minutes to refocus. Third, use technology (focus modes, app timers, grayscale screens) to reduce external triggers. Finally, make pacts (money, effort, identity) to commit to goals, like Ulysses resisting sirens by binding himself.
Reframing Lifestyle
Bertaina added a fifth key: reframe lifestyle with mindfulness, dedicated spaces, healthy diet, and sleep. Journaling and retrospectives clarify thoughts, while separate physical/virtual workspaces enhance focus. She challenged attendees to avoid phones during commutes or try a week-long digital detox, urging experimentation to find personal focus strategies.
[DevoxxGR2025] Component Ownership in Feature Teams
Thanassis Bantios, VP of Engineering at T-Food, delivered a 17-minute talk at Devoxx Greece 2025 on managing component ownership in feature teams.
The Feature Team Dilemma
Bantios narrated a story of Helen, an entrepreneur scaling an online delivery startup. Initially, a small team communicated easily, but growth led to functional teams and a backend monolith, complicating contributions. Adopting microservices split critical components like orders and menu services, but communication broke down as features required multiple teams. Agile cross-functional teams solved this, enabling autonomy, but neglected component ownership, risking a “Frankenstein” codebase.
Defining Component Ownership
A component, deployable independently (e.g., backend service or client app), needs ownership to maintain health, architecture, documentation, and code reviews. Bantios stressed teams, not individuals, should own components to avoid risks like staff turnover. Using the Spotify matrix model, client components (e.g., Android) and critical backend services (e.g., menu service) are owned by chapters (craft-based groups like Android developers), ensuring knowledge sharing and manageable on-call rotations. Non-critical services, like ratings, can be team-owned.
Inner Sourcing for Speed
Inner sourcing allows any team to contribute to any component, reducing dependencies. Bantios emphasized standardization (language, CI/CD, architecture) to simplify contributions, focusing only on business logic. He suggested rating components on an inner-sourcing score (test coverage, documentation) and dedicating 20% of time to component backlogs. This prevents technical debt in feature-driven environments, ensuring fast, scalable development.
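A hypothetical inner-sourcing score along the lines Bantios suggests could be as simple as a weighted blend of contributability signals. The weights and inputs below are illustrative, not from the talk.

```python
# Hypothetical inner-sourcing score: rate how easy a component is for
# outside teams to contribute to. Weights are arbitrary placeholders.

def inner_sourcing_score(test_coverage: float, docs_completeness: float) -> float:
    """Both inputs in [0, 1]; returns a 0-100 contributability score."""
    return round(100 * (0.6 * test_coverage + 0.4 * docs_completeness), 1)

print(inner_sourcing_score(0.8, 0.5))  # 68.0
```

Components scoring low are the ones where the 20% backlog time is best spent, since they are the hardest for other teams to contribute to safely.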
[DevoxxGR2025] Email Re-Platforming Case Study
George Gkogkolis from Travelite Group shared a 15-minute case study at Devoxx Greece 2025 on re-platforming to process 1 million emails per hour.
The Challenge
Travelite Group, a global OTA handling flight tickets in 75 countries, processes 350,000 emails daily, expected to hit 2 million. Previously, a SaaS ticketing system struggled with growing traffic, poor licensing, and subpar user experience. Sharding the system led to complex agent logins and multiplexing issues with the booking engine. Market research revealed no viable alternatives, as vendors’ licensing models couldn’t handle the scale, prompting an in-house solution.
The New Platform
The team built a cloud-native, microservices-based platform within a year, going live in December 2024. It features a receiving app, a React-based web UI built with Mantine, a Spring Boot backend, and Amazon DocumentDB, integrated with Amazon SES and S3. Emails land on a Postfix server, are stored in S3, and are processed via EventBridge and SQS. Data migration was critical: terabytes of EML files and databases were moved in under two months, and the platform achieved a peak throughput of 1 million emails per hour by scaling to 50 receiver instances.
Lessons Learned
Starting with migration would have eased performance optimization, as synthetic data didn’t match production scale. Cloud-native deployment simplified scaling, and a backward-compatible API eased integration. Open standards (EML, OpenAPI) ensured reliability. Future plans include AI and LLM enhancements in 2025, automating domain allocation for scalability.
[DevoxxGR2025] Angular Micro-Frontends
Dimitris Kaklamanis, a lead software engineer at CodeHub, delivered an 11-minute talk at Devoxx Greece 2025, exploring how Angular micro-frontends revolutionize scalable web development.
Micro-Frontends Unveiled
Kaklamanis opened with a relatable scenario: a growing front-end monolith turning into a dependency nightmare. Micro-frontends, inspired by microservices, break the UI into smaller, independent pieces, each owned by a team. This enables parallel development, reduces risks, and enhances scalability. He outlined four principles: decentralization (team-owned UI parts), technology agnosticism (mixing frameworks like Angular, React, or Vue), resilience (isolated bugs don’t crash the app), and scalability (independent team scaling). A diagram showed teams building features in different frameworks, integrated at runtime via a shell app.
Pros and Cons
Micro-frontends offer scalability, tech flexibility, faster parallel development, resilience, and easier maintenance due to focused codebases. However, challenges include increased complexity (more coordination), performance overhead (multiple apps loading), communication issues (state sharing), and CI/CD complexity (separate pipelines). Kaklamanis highlighted Angular’s strengths: its component-based structure aligns with modularity, CLI tools manage multiple projects, and features like lazy loading and Webpack 5 module federation simplify implementation. Tools like NX streamline monorepo management, making Angular a robust choice.
Implementation in Action
Kaklamanis demonstrated a live Angular store app with independent modules (orders, products, inventory). A change in the product component didn’t affect others, showcasing isolation. He recommended clear module ownership, careful intermodule communication, performance monitoring, and minimal shared libraries. For large, multi-team projects, he urged prototyping micro-frontends, starting small and iterating for scalability.
[DevoxxGR2025] Simplifying LLM Integration: A Blueprint for Effective AI Systems
Efstratios Marinos captivated attendees at Devoxx Greece 2025 with a masterclass on streamlining large language model (LLM) integrations. By focusing on practical, modular patterns, Efstratios demonstrated how to construct robust, scalable AI systems that prioritize simplicity without sacrificing functionality, offering actionable strategies for developers.
Exploring the Complexity Continuum
Efstratios introduced the concept of a complexity continuum for LLM integrations, spanning from straightforward single calls to sophisticated agentic frameworks. At its simplest, a system comprises an LLM, a retrieval mechanism, and tool capabilities, delivering maintainability and ease of updates with minimal overhead. More intricate setups incorporate routers, APIs, and vector stores, enhancing functionality but complicating debugging. Efstratios emphasized that simplicity is a strategic choice, enabling rapid adaptation to evolving AI technologies. He showcased a concise Python implementation, where a single function manages retrieval and response generation in a handful of lines, contrasting this with a multi-step retrieval-augmented generation (RAG) workflow that involves encoding, indexing, and embedding, adding layers of complexity that demand careful justification.
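The simple end of the continuum really can be a single function. The sketch below is a minimal illustration under assumed names: `llm()` is a stub standing in for any chat-completion API, and the keyword retrieval is a deliberate simplification of what a real system (embeddings, a vector store) would do.

```python
# Minimal "retrieve then generate" in one function, with stubs.

DOCS = {
    "returns":  "Items can be returned within 30 days.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def retrieve(query: str) -> str:
    """Naive keyword retrieval; a real system might use embeddings."""
    return "\n".join(text for key, text in DOCS.items() if key in query.lower())

def llm(prompt: str) -> str:
    return f"[answer based on prompt of {len(prompt)} chars]"  # stub LLM call

def answer(query: str) -> str:
    context = retrieve(query)
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return llm(prompt)

print(answer("What is your returns policy?"))
```

Everything a multi-step RAG pipeline adds on top of this (encoding, indexing, reranking) should have to justify itself against this baseline, which is the talk's central argument.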
Crafting Robust Interfaces
Central to Efstratios’s philosophy is the design of clean interfaces for LLMs, retrieval systems, tools, and memory components. He compared prompt crafting to API design, advocating for structured formats that clearly separate instructions, context, and queries. Well-documented tools, complete with detailed descriptions and practical examples, empower LLMs to perform effectively, while vague documentation leads to errors. Efstratios underscored the need for resilient error handling, such as fallback strategies for failed retrievals or tool invocations, to ensure system reliability. For example, a system might respond to a failed search by suggesting alternatives or retrying with adjusted parameters, improving usability and simplifying troubleshooting in production environments.
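The fallback idea looks like this in code. This is a sketch under assumed names: `search()` and its failure mode are hypothetical stand-ins for a real retrieval backend.

```python
# Degrade gracefully when retrieval fails instead of surfacing a raw error.

def search(query: str) -> list[str]:
    raise TimeoutError("search backend unavailable")  # simulated failure

def answer_with_fallback(query: str) -> str:
    try:
        results = search(query)
    except TimeoutError:
        # Fallback: admit the failure and offer an alternative path.
        return ("I couldn't reach the search index just now. "
                "Try rephrasing the question, or ask again in a moment.")
    return f"Top result: {results[0]}"

print(answer_with_fallback("pricing tiers"))
```

The user gets a usable next step rather than a stack trace, and the failure surface is one obvious place to debug in production.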
Enhancing Capabilities with Workflow Patterns
Efstratios explored three foundational workflow patterns—prompt chaining, routing, and parallelization—to optimize performance while managing complexity. Prompt chaining divides complex tasks into sequential steps, such as outlining, drafting, and refining content, enhancing clarity at the expense of increased latency. Routing employs an LLM to categorize inputs and direct them to specialized handlers, like a customer support bot distinguishing technical from financial queries, improving efficiency through focused processing. Parallelization, encompassing sectioning and voting, distributes tasks across multiple LLM instances, such as analyzing document segments concurrently, though it incurs higher computational costs. These patterns provide incremental enhancements, ideal for tasks requiring moderate sophistication.
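The routing pattern, for example, is a classifier step followed by a dispatch table. The classifier below is a stub (in a real system it would be an LLM call with a constrained output), and the handler prompts are illustrative.

```python
# Routing: categorize the input, then hand it to a specialized handler.

def classify(query: str) -> str:
    """Stub classifier; a real one would be an LLM with constrained output."""
    return "technical" if "error" in query.lower() else "billing"

HANDLERS = {
    "technical": lambda q: f"[tech support prompt for: {q}]",
    "billing":   lambda q: f"[billing prompt for: {q}]",
}

def route(query: str) -> str:
    return HANDLERS[classify(query)](query)

print(route("I get an error on login"))   # routed to the technical handler
print(route("Why was I charged twice?"))  # routed to the billing handler
```

Each handler can then carry a short, focused prompt for its category, which is where the efficiency gain over one giant do-everything prompt comes from.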
Advanced Patterns and Decision-Making Principles
For more demanding scenarios, Efstratios presented two advanced patterns: orchestrator-workers and evaluator-optimizer. The orchestrator-workers pattern dynamically breaks down tasks, with a central LLM coordinating specialized workers, perfect for complex coding projects or multi-faceted content creation. The evaluator-optimizer pattern establishes a feedback loop, where a generator LLM produces content and an evaluator refines it iteratively, mirroring human iterative processes. Efstratios outlined six decision-making principles—use case alignment, development effort, maintainability, performance granularity, latency, and cost—to guide pattern selection. Simple solutions suffice for tasks like summarization, while multi-step workflows excel in knowledge-intensive applications. He encouraged starting with minimal solutions, establishing performance baselines, identifying specific limitations, and adding complexity only when validated by measurable gains.
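The evaluator-optimizer loop can be sketched as follows, with both roles stubbed; in a real system `generate()` and `evaluate()` would be separate LLM calls, and the feedback string would carry the evaluator's critique.

```python
# Evaluator-optimizer: generate a draft, score it, and feed the critique
# back into another generation pass until it passes or the budget runs out.

def generate(task: str, feedback: str = "") -> str:
    draft = f"draft for {task}"
    return draft + " (revised)" if feedback else draft   # stub generator

def evaluate(draft: str) -> tuple[bool, str]:
    ok = "revised" in draft                              # stub evaluator
    return ok, "" if ok else "needs more detail"

def refine(task: str, max_iterations: int = 3) -> str:
    feedback = ""
    for _ in range(max_iterations):
        draft = generate(task, feedback)
        ok, feedback = evaluate(draft)
        if ok:
            return draft
    return draft

print(refine("executive summary"))
```

The iteration cap is the cost control Efstratios's principles call for: each extra pass buys quality at the price of latency and tokens, so the loop should stop as soon as the evaluator is satisfied.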