
Posts Tagged ‘DevoxxGreece2025’

[DevoxxGR2025] AI Integration with MCPs

Kent C. Dodds, in his dynamic 22-minute talk at Devoxx Greece 2025, explored how the Model Context Protocol (MCP) enables AI assistants to interact with applications, envisioning a future where every user has their own “Jarvis” from Iron Man.

The Vision of Jarvis

Dodds opened with a clip from Iron Man, showcasing Jarvis performing tasks like compiling databases, generating UI, and creating flight plans. He posed a question: why don’t we have such assistants today? Current technologies, like Google Assistant or Siri, fall short because their integrations are limited. Dodds argued that MCP, a standard protocol backed by Anthropic, OpenAI, and Google, bridges this gap by letting AI communicate with diverse services, from Slack to local government platforms, transforming how users interact with software.

MCP Architecture

An MCP server sits between the host application (e.g., ChatGPT, Claude) and a service’s tools, giving the two a common language. Dodds explained that LLMs only generate tokens; they rely on the host application to actually execute actions. MCP servers, maintained by service providers, expose those actions as tools that users can install much like apps. In a demo, Dodds showed an MCP server for his own website that let an AI search his blog posts and subscribe users to his newsletter, though client-side issues hindered reliability, underscoring how much the user experience still needs to improve.
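
To make the shape of such a server concrete, here is a minimal sketch in TypeScript using the official MCP SDK. The tool names, schemas, and return values are illustrative assumptions, not Dodds’s actual implementation:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A hypothetical MCP server for a personal website, exposing two tools
// an AI host (Claude, ChatGPT, ...) can discover and call.
const server = new McpServer({ name: "my-blog", version: "1.0.0" });

server.tool(
  "search_posts", // hypothetical tool name
  { query: z.string() },
  async ({ query }) => {
    // A real server would query the site's search index here.
    return { content: [{ type: "text", text: `Posts matching "${query}": ...` }] };
  },
);

server.tool(
  "subscribe_to_newsletter", // hypothetical tool name
  { email: z.string().email() },
  async ({ email }) => {
    // A real server would call the newsletter provider's API here.
    return { content: [{ type: "text", text: `Subscribed ${email}.` }] };
  },
);

// stdio transport: the host application launches this process and
// exchanges JSON-RPC messages with it over stdin/stdout.
await server.connect(new StdioServerTransport());
```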

Challenges and Future

The primary challenge today is the poor client experience: installing an MCP server still means editing JSON configuration by hand. Dodds predicted a marketplace or auto-discovery system will streamline this, and likened MCP’s potential impact to that of the early internet. Security concerns, reminiscent of those early browsers faced, still need addressing, but Dodds sees AI hosts as the new browsers, promising a future where personalized AI assistants handle complex tasks effortlessly.


[DevoxxGR2025] Optimized Kubernetes Scaling with Karpenter

Alex König, an AWS expert, delivered a 39-minute talk at Devoxx Greece 2025, exploring how Karpenter enhances Kubernetes cluster autoscaling for speed, cost-efficiency, and availability.

Karpenter’s Dynamic Autoscaling

König introduced Karpenter as an open-source, Kubernetes-native autoscaling solution, contrasting it with the traditional Cluster Autoscaler. Unlike the latter, which relies on uniform node groups (e.g., nodes with four CPUs and 16GB RAM), Karpenter uses the EC2 Fleet API to dynamically provision nodes tailored to workload needs. For instance, if a pod requires one CPU, Karpenter allocates a node with minimal excess capacity, avoiding resource waste. This right-sizing, combined with groupless scaling, enables faster and more cost-effective scaling, especially in dynamic environments.
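
As a rough illustration of what replaces fixed node groups, below is a minimal Karpenter NodePool, built as a TypeScript object and serialized to YAML for kubectl. The field names follow the karpenter.sh v1 CRD, while the pool name, limits, and requirement values are assumptions for this sketch, not recommendations:

```typescript
import { dump } from "js-yaml";

// Instead of a fixed "4 CPU / 16 GB" node group, the NodePool states
// loose requirements and lets Karpenter pick right-sized instances.
const nodePool = {
  apiVersion: "karpenter.sh/v1",
  kind: "NodePool",
  metadata: { name: "general-purpose" }, // hypothetical name
  spec: {
    template: {
      spec: {
        nodeClassRef: { group: "karpenter.k8s.aws", kind: "EC2NodeClass", name: "default" },
        requirements: [
          // Allow Spot and On-Demand; Karpenter chooses per pending pod.
          { key: "karpenter.sh/capacity-type", operator: "In", values: ["spot", "on-demand"] },
          { key: "kubernetes.io/arch", operator: "In", values: ["amd64"] },
        ],
      },
    },
    limits: { cpu: "100" }, // cap the total capacity the pool may provision
    disruption: { consolidationPolicy: "WhenEmptyOrUnderutilized" },
  },
};

console.log(dump(nodePool)); // pipe to: kubectl apply -f -
```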

Ensuring Availability with Constraints

König addressed availability challenges reported by users, emphasizing Kubernetes-native scheduling constraints to mitigate disruptions. Topology spread constraints distribute pods across availability zones, reducing the risk of downtime if a node fails. Pod disruption budgets, affinity/anti-affinity rules, and priority classes further ensure critical workloads are scheduled appropriately. For stateful workloads using EBS, König recommended setting the StorageClass volume binding mode to WaitForFirstConsumer, so volumes are provisioned in the same zone as the pods that consume them, preventing cross-zone mismatches and the crashes they cause.
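
Two of those constraints are easy to show, again as TypeScript objects mirroring the Kubernetes manifests; the labels and names are hypothetical:

```typescript
import { dump } from "js-yaml";

// Spread replicas across availability zones so a single zone outage
// cannot take every replica down (goes in a pod template's spec).
const topologySpreadConstraints = [
  {
    maxSkew: 1,
    topologyKey: "topology.kubernetes.io/zone",
    whenUnsatisfiable: "DoNotSchedule",
    labelSelector: { matchLabels: { app: "web" } }, // hypothetical label
  },
];

// Delay EBS volume creation until the pod is scheduled, so the volume
// lands in the same zone as the node Karpenter provisions for the pod.
const storageClass = {
  apiVersion: "storage.k8s.io/v1",
  kind: "StorageClass",
  metadata: { name: "ebs-wait-for-consumer" }, // hypothetical name
  provisioner: "ebs.csi.aws.com",
  volumeBindingMode: "WaitForFirstConsumer",
};

console.log(dump({ topologySpreadConstraints }), dump(storageClass));
```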

Integrating with KEDA for Application Scaling

For advanced scaling, König highlighted combining Karpenter with KEDA for event-driven, application-specific scaling. KEDA scales pods based on metrics like Kafka topic sizes or SQS queues, beyond CPU/memory. Karpenter then provisions nodes for pending pods, enabling seamless scaling for workloads like flash sales. König outlined a four-step migration from Cluster Autoscaler to Karpenter, emphasizing its simplicity and open-source documentation.
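
A sketch of the KEDA side, assuming a Kafka-backed workload; the deployment name, topic, and thresholds are invented for illustration:

```typescript
import { dump } from "js-yaml";

// Scale the (hypothetical) "checkout" Deployment on Kafka consumer lag;
// KEDA adds pods, and Karpenter adds nodes for any pods that don't fit.
const scaledObject = {
  apiVersion: "keda.sh/v1alpha1",
  kind: "ScaledObject",
  metadata: { name: "checkout-scaler" }, // hypothetical name
  spec: {
    scaleTargetRef: { name: "checkout" },
    minReplicaCount: 1,
    maxReplicaCount: 50,
    triggers: [
      {
        type: "kafka",
        metadata: {
          bootstrapServers: "kafka:9092", // hypothetical endpoint
          consumerGroup: "checkout",
          topic: "orders",
          lagThreshold: "100", // scale out roughly one pod per 100 messages of lag
        },
      },
    ],
  },
};

console.log(dump(scaledObject)); // pipe to: kubectl apply -f -
```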


[DevoxxGR2025] Understanding Flow in Software Development

James Lewis, a ThoughtWorks consultant, delivered a 41-minute talk at Devoxx Greece 2025, exploring how work flows through software development, drawing on information theory and complexity science.

The Nature of Work as Information

Lewis framed software development as the transformation of “stuff” into more valuable outputs, much like the enterprise workflows that predate computers. Because that work is information, it is invisible as it flows through value streams, from ideas to production code. The invisibility causes problems: backlogs swell unnoticed and undeployed code piles up as costly inventory. Citing Don Reinertsen’s Principles of Product Development Flow, Lewis argued that untested or undeployed code represents lost revenue, whereas visible factory inventory signals its inefficiency immediately.

Visualizing Value Streams

Using a value stream map, Lewis illustrated a typical development cycle: three days of coding, ten days waiting for testing, and thirty days waiting for deployment. In his example, the lead time came to 47 days, of which 42 were spent as idle inventory. The wait times stem from coordination (teams waiting on each other), scheduling (e.g., architecture reviews), and queues (backlogs). Shared test environments exacerbate the delays, ultimately costing more than provisioning new environments would. Lewis advocated mapping workflows to expose these economic losses, building the case for faster delivery with stakeholders.
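
The economic point falls out of simple arithmetic on the figures reported in the talk:

```typescript
// Flow efficiency for the value stream in Lewis's example:
// 47 days of lead time, 42 of them spent waiting in queues.
const leadTimeDays = 47;
const idleDays = 42;
const touchTimeDays = leadTimeDays - idleDays; // 5 days of actual work

const flowEfficiency = touchTimeDays / leadTimeDays;
console.log(`Flow efficiency: ${(flowEfficiency * 100).toFixed(1)}%`); // ~10.6%
```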

Reducing Batch Sizes for Flow

Lewis emphasized reducing batch sizes to improve flow, a principle rooted in queuing theory: deploying twice as often, for example, halves average wait times, letting revenue start flowing sooner. Using agent-based models, he simulated agile teams (single-piece flow) against waterfall teams (a single 100% batch), showing that the agile teams deliver value far earlier. Limiting work in progress and controlling queue sizes prevent congestion collapse, keeping workflows smooth and predictable, as the sketch below illustrates.
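
A toy model, assuming one work item finishes per day and items ship only when a full batch is ready, makes the batch-size effect visible. It is a simplification in the spirit of Lewis’s simulations, not a reproduction of them:

```typescript
// Toy model: one work item is finished per day, but items only ship
// when a batch of `batchSize` items is complete. How long does the
// average item sit as inventory before shipping?
function averageWaitDays(batchSize: number, items: number): number {
  let totalWait = 0;
  for (let i = 0; i < items; i++) {
    // Item i (done on day i) ships when the last item of its batch is done.
    const shipDay = Math.ceil((i + 1) / batchSize) * batchSize - 1;
    totalWait += shipDay - i;
  }
  return totalWait / items;
}

for (const batchSize of [100, 50, 10, 1]) {
  console.log(`batch of ${batchSize}: avg wait ${averageWaitDays(batchSize, 1000)} days`);
}
// batch of 100: 49.5 days, 50: 24.5, 10: 4.5, 1: 0;
// halving the batch size roughly halves the wait.
```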


[DevoxxGR2025] Nx for Gradle – Faster Builds, Better DX

Katerina Skroumpelou, a senior engineer at Nx, delivered a 15-minute talk at Devoxx Greece 2025, showcasing how the @nx/gradle plugin enhances Gradle builds for monorepos, improving developer experience (DX).

Streamlining Gradle Monorepos

Skroumpelou introduced Nx as a build system optimized for monorepos, used by over half of Fortune 500 companies. Gradle’s strength lies in managing multi-project setups, where subprojects (e.g., core, API) share dependencies and tasks. However, large repositories grow complex, slowing builds. Nx integrates seamlessly with Gradle, acting as a thin layer atop existing projects without requiring a rewrite. By running nx init in a Gradle project, developers enable Nx’s smart task management, preserving Gradle’s functionality while adding efficiency.

Optimizing CI Pipelines

Slow CI pipelines frustrate developers and inflate costs. Skroumpelou explained how Nx slashes CI times through distributed task execution, caching, and affected task detection. Unlike Gradle’s task-level parallelism and caching, Nx identifies changes in a pull request and runs only impacted tasks, skipping unaffected ones. For instance, a 30-minute pipeline could drop to five minutes by leveraging Nx’s project graph to avoid redundant builds or tests. Nx also splits large tasks, like end-to-end tests, into smaller, distributable units, further accelerating execution.
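
Conceptually, affected detection is a reverse walk of the project graph from whatever a pull request touched. A simplified sketch follows; the graph and project names are made up, and this is not Nx’s actual implementation:

```typescript
// Given which projects a PR touched, walk reverse dependencies in the
// project graph to find everything whose build or tests could change.
const dependsOn: Record<string, string[]> = {
  app: ["api", "ui"], // hypothetical monorepo layout
  api: ["core"],
  ui: ["core"],
  core: [],
};

function affected(changed: Set<string>): Set<string> {
  const result = new Set(changed);
  let grew = true;
  while (grew) {
    grew = false;
    for (const [project, deps] of Object.entries(dependsOn)) {
      if (!result.has(project) && deps.some((d) => result.has(d))) {
        result.add(project); // a dependency changed, so this project is affected
        grew = true;
      }
    }
  }
  return result;
}

console.log([...affected(new Set(["core"]))]); // core, api, ui, app: run everything
console.log([...affected(new Set(["ui"]))]);   // ui, app only: skip core and api tasks
```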

Handling Flaky Tests

Flaky tests disrupt workflows by forcing developers to rerun entire pipelines. Nx instead detects failed tests and retries them in isolation, so a pipeline keeps moving without anyone babysitting it through meetings or interruptions. Nx is open source under the MIT license and integrates with tools like VS Code, giving developers a free, scalable way to speed up Gradle-based CI.
