[SpringIO2025] A cloud cost saving journey: Strategies to balance CPU for containerized JAVA workloads in K8s

Lecturer

Laurentiu Marinescu is a Lead Software Engineer at ASML, specializing in building resilient, cloud-native platforms with a focus on full-stack development. With expertise in problem-solving and software craftsmanship, he serves as a tech lead responsible for next-generation cloud platforms at ASML. He holds a degree from the Faculty of Economic Cybernetics and is an advocate for pair programming and emerging technologies. Ajith Ganesan is a System Engineer at ASML with over 15 years of experience in software solutions, particularly in lithography process control applications. His work emphasizes data platform requirements and strategy, with a strong interest in AI opportunities. He holds degrees from Eindhoven University of Technology and is passionate about system design and optimization.

Abstract

This article investigates strategies for optimizing CPU resource utilization in Kubernetes environments for containerized Java workloads, emphasizing cost reduction and performance enhancement. It analyzes the trade-offs in resource allocation, including requests and limits, and presents data-driven approaches to minimize idle CPU cycles. Through examination of workload characteristics, scaling mechanisms, and JVM configurations, the discussion highlights practical implementations that balance efficiency, stability, and operational expenses in on-premises deployments.

Contextualizing Cloud Costs and CPU Utilization Challenges

The escalating costs of cloud infrastructure represent a significant challenge for organizations deploying containerized applications. Annual expenditures on cloud services have surpassed $600 billion, with many entities exceeding budgets by over 17%. In Kubernetes clusters, average CPU utilization hovers around 10%, even in large-scale environments exceeding 1,000 CPUs, where it reaches only 17%. This underutilization implies that up to 90% of provisioned resources remain idle, akin to maintaining expensive infrastructure on perpetual standby.

The inefficiency stems not from collective oversight but from inherent design trade-offs. Organizations deploy expansive clusters to ensure capacity for peak demands, yet this leads to substantial idle resources. The opportunity lies in reclaiming these for cost savings; even doubling utilization to 20% could yield significant reductions. This requires understanding application behaviors, load profiles, and the interplay between Kubernetes scheduling and Java Virtual Machine (JVM) dynamics.

In simulated scenarios with balanced nodes and containers, tight packing minimizes rollout costs but introduces risks. With limited spare capacity (e.g., 25% headroom), containers must be upgraded sequentially, which can rule out zero-downtime deployments. Scaling demands may go unmet due to resource constraints, necessitating cluster expansions that inflate expenses. These examples underscore the need for strategies that optimize utilization without compromising reliability.

Resource Allocation Strategies: Requests, Limits, and Workload Profiling

Effective CPU management in Kubernetes hinges on judicious setting of resource requests and limits. Requests guarantee minimum allocation for scheduling, while limits cap maximum usage to prevent monopolization. For Java workloads, these must align with JVM ergonomics, which adapt heap and thread pools based on detected CPU cores.

Workload profiling is essential, categorizing applications into mission-critical (requiring deterministic latency) and non-critical (tolerant of variability). In practice, reducing requests by up to 75% for critical workloads counterintuitively enhanced performance by allowing burstable access to idle resources. Experiments demonstrated halved hardware, energy, and real estate costs, with improved stability.

A binary search over request values identified the optimum, but assumptions such as non-simultaneous peaks were validated through rigorous testing. For non-critical applications, minimal requests (sharing 99% of resources) maximized utilization. Scaling based on application-specific metrics, rather than default CPU thresholds, proved superior. For example, autoscaling on heap usage or queue sizes avoided premature scaling triggered by garbage collection spikes, as sketched below.
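
As an illustrative sketch, a HorizontalPodAutoscaler can target such an application-specific metric instead of raw CPU. The metric name below (jvm_heap_used_ratio) is a hypothetical custom metric assumed to be exposed through a metrics adapter such as the Prometheus adapter; it targets the java-app Deployment shown in the next example:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: java-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: java-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: jvm_heap_used_ratio   # hypothetical custom metric from a metrics adapter
      target:
        type: AverageValue
        averageValue: "750m"        # scale out when average heap usage exceeds ~75%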

Code example for configuring Kubernetes resources in a Deployment YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: java-app      # required: selector must match the pod template labels
  template:
    metadata:
      labels:
        app: java-app
    spec:
      containers:
      - name: app
        image: java-app:latest
        resources:
          requests:
            cpu: "500m"  # Reduced request for sharing
          limits:
            cpu: "2"     # Expanded limit for bursts

This configuration enables overcommitment, assuming workload diversity prevents concurrent peaks.

JVM and Application-Level Optimizations for Efficiency

Java workloads introduce unique considerations due to JVM behaviors like garbage collection (GC) and thread management. Default JVM settings often lead to inefficiencies; for instance, GC pauses can spike CPU usage, triggering unnecessary scaling. Tuning collectors (e.g., ZGC for low latency) and limiting threads reduced contention.
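
A hedged sketch of applying such tuning in the container spec of the Deployment shown earlier, via the JAVA_TOOL_OPTIONS environment variable: the flags are standard HotSpot options, but the specific values here are illustrative assumptions, not the presenters' settings.

        env:
        - name: JAVA_TOOL_OPTIONS
          # ZGC for low pause times; size the heap relative to the container memory limit;
          # fix the CPU count JVM ergonomics see, so pools match the request rather than the node
          value: "-XX:+UseZGC -XX:MaxRAMPercentage=75.0 -XX:ActiveProcessorCount=2"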

Servlet containers like Tomcat exhibited high overhead; profiling revealed excessive thread creation. Switching to Undertow, with its non-blocking I/O, halved resource usage while maintaining throughput. Reactive applications benefited from Netty, leveraging asynchronous processing for better utilization.

Thread management is critical: unbounded queues in executors caused out-of-memory errors under load. Implementing bounded queues with rejection policies ensured stability. For example:

import java.util.concurrent.ThreadPoolExecutor;

import org.springframework.context.annotation.Bean;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Bean
public ThreadPoolTaskExecutor executor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(10);  // Keep a small core of worker threads
    executor.setMaxPoolSize(20);   // Cap total threads to bound CPU contention
    executor.setQueueCapacity(50); // Bounded queue prevents unbounded memory growth
    // When saturated, run the task on the caller's thread instead of dropping it
    executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
    return executor;
}

Monitoring tools like Prometheus and Grafana facilitated iterative tuning, adapting to evolving workloads.

Cluster-Level Interventions and Success Metrics

Cluster-wide optimizations complement application-level efforts. Overcommitment, by reducing requests while expanding limits, smoothed resource contention. Pre-optimization graphs showed erratic throttling; post-optimization, latency decreased 10-20%, with 7x more requests handled.

Success hinged on validating assumptions through experiments. Despite risks of simultaneous scaling, diverse workloads ensured viability. Continuous monitoring—via vulnerability scans and metrics—enabled proactive adjustments.

Key metrics included reduced throttling, stabilized performance, and halved costs. Policies at namespace and node levels aligned with overcommitment strategies, incorporating backups for node failures.
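
A minimal sketch of such a namespace-level policy, assuming a hypothetical java-apps namespace: a LimitRange can default new containers to low requests and generous limits, matching the overcommitment strategy.

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-overcommit-defaults
  namespace: java-apps          # hypothetical namespace
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: "250m"               # low default request encourages sharing
    default:
      cpu: "2"                  # generous default limit permits bursts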

Implications for Sustainable Infrastructure Management

Optimizing CPU for Java in Kubernetes demands balancing trade-offs: determinism versus sharing, cost versus performance. Strategies emphasize application understanding, JVM tuning, and adaptive scaling. While mission-critical apps benefit from resource sharing under validated assumptions, non-critical ones maximize efficiency with minimal requests.

Future implications involve AI-driven predictions for peak avoidance, enhancing sustainability by reducing energy consumption. Organizations must iterate: monitor, fine-tune, adapt—treating efficiency as a dynamic goal.

Links:

[DevoxxUK2025] Zero-Bug Policy Success: A Journey to Developer Happiness

At DevoxxUK2025, Peter Hilton, a product manager at a Norwegian startup, shared an inspiring experience report on achieving a zero-bug policy. Drawing from his team’s journey in 2024, Peter narrated how a small, remote team transformed their development process by tackling a backlog of bugs, ultimately reaching a state of zero open bugs. His talk explored the practical steps, team dynamics, and challenges of implementing this approach, emphasizing its impact on developer morale, customer trust, and software quality. Through a blend of storytelling and data, Peter illustrated how a disciplined focus on fixing bugs can lead to a more predictable and joyful development environment.

The Pain of Bugs and the Vision for Change

Peter began by highlighting the chaos caused by an ever-growing bug backlog, which drained time, eroded team morale, and undermined customer confidence. In early 2024, his team faced a surge in bug reports following a marketing campaign for their Norwegian web shop, a circular economy platform selling reusable soap containers. The influx revealed testing gaps and consumed developer time, hindering experiments to boost customer conversions. Inspired by a blog post he wrote in 2021 and the “fix it now or delete it” infographic by Yasaman Farzan, Peter proposed a zero-bug policy—not as a mandate for bug-free software but as a target to clear open issues. The team, motivated by shared frustration, agreed to experiment, envisioning predictable support efforts and meaningful feature feedback.

Overcoming Resistance and Defining the Approach

Convincing a team to prioritize bug fixes over new features required navigating skepticism and detailed “what-if” scenarios from developers. Peter described how initial discussions risked paralysis, as developers questioned edge cases like handling multiple simultaneous bugs. To move forward, the team framed the policy as a safe experiment, setting clear goals: reducing time spent on bug discussions, improving software reliability, and enabling meaningful customer feedback. By April 2024, they committed to fixing bugs exclusively for two months, a bold move that demanded collective focus. Peter, as product manager, leveraged his role to align stakeholders, emphasizing business outcomes like increased customer conversions over bug counts, which helped secure buy-in.

The Hard Work of Bug Fixing

The transition to a zero-bug state was arduous but structured. Starting in May 2024, the team of six developers tackled 252 bugs over the year, fixing around five per week, with peaks of 10–15 during intense periods. Peter shared a chart showing the number of open bugs fluctuating but never exceeding 15, a manageable load compared to teams with hundreds of unresolved issues. The team’s small size and autonomy, as a fully remote group, allowed them to focus without external dependencies. By August, they reached “zero bug day,” a milestone celebrated as a turning point. This period also saw improved testing practices, as each fix included robust test coverage to prevent regressions, addressing technical debt accumulated from the rushed initial launch.

Sustaining Zero Bugs and Reaping Rewards

Post-August, the team entered a maintenance phase, fixing bugs as they arose—typically one or two at a time—while spending half their time on new features. Peter noted that this phase, with months starting at zero open bugs (e.g., March–May 2025), felt liberating. Developers spent less time in meetings, and Peter could focus on customer growth experiments without bugs skewing results. A calendar visualization for April 2025 showed most days bug-free, with only two minor issues fixed leisurely. The simplicity of handling bugs case-by-case, without complex prioritization, mirrored the “fix it now or delete it” mantra, fostering a happier, more productive team environment.

Lessons for Other Teams

Reflecting on the journey, Peter emphasized that a zero-bug policy requires team-wide commitment and a tolerance for initial discomfort. While their small, autonomous team faced no external dependencies, larger organizations might need to address inter-team coordination or legacy backlogs. He suggested a radical option: deleting large backlogs to focus on new reports, though he hadn’t tried it. The key takeaway was the value of simplicity—handling one bug at a time eliminated the need for intricate rules. Peter also highlighted that the process built psychological safety, as tackling a tough challenge together strengthened team cohesion, making it a worthwhile experiment for teams seeking better quality and morale.

Links:

[DotAI2024] DotAI 2024: Clara Chappaz – Dispatches from the Helm of AI and Digital Stewardship

Clara Chappaz, Secretary of State for Artificial Intelligence and Digital Affairs—nominated by the Prime Minister, appointed by President Macron on September 21, 2024—and erstwhile vanguard of La French Tech, where she galvanized Gallic innovation from ESSEC’s environs to Harvard’s halls and Vestiaire Collective’s vanguard, conveyed conviction at DotAI 2024. Her missive marked a milestone: AI’s ascension to ministerial marquee, France’s fervor forged since 2018—positioned for the fray, funding fervent, fostering fusion.

France’s Forward March: From Policy Pillars to Practical Progress

Chappaz’s chronicle commenced with camaraderie: technical titans as table-setters, government’s gateway—AI’s atelier, attributions amplified. Macron’s manifesto: metamorphosis manifest, equilibrium etched—regulation’s restraint, innovation’s ignition.

France’s firmament: 25 billion euros’ endowment—25% for sovereignty’s sinews, 75% for collaborative quests—deep tech’s dynamo, quantum’s quarry, biotech’s bastion. Sovereignty’s spectrum: Mistral’s mantle, Dataiku’s dominion—open-source odysseys orbiting excellence.

Adoption’s arc: enterprises’ embrace—L’Oréal’s lore, SNCF’s swiftness—yet, latency lingers; Chappaz championed catalysts: training’s torrent for territorial talents, diffusion’s decree through decrees and dialogues.

Summit’s Symphony: Harmonizing Horizons in the City of Light

Chappaz crescendoed to confluence: February 2025’s zenith, AI Action Summit—Macron’s mosaic, global gathering—scientific salons preceding, business bastions ensuing. Pragmatism’s pledge: pragmatists convened, counterparts courted—corporates’ cadence, diffusion’s drive.

Draghi’s dossier: Europe’s edge eroded—adoption’s arrears, societal synergy’s summons—talents tempered, trusts tendered, transformations tallied. Chappaz’s coda: convergence’s clarion—innovators intertwined, intelligences ignited—France’s forum, future’s forge.

In gratitude, Chappaz galvanized: gratitude’s ground, growth’s genesis—society’s scaffold, where AI augments aspirations, antiquity’s allure allied with avant-garde.

Links:

[RivieraDev2025] Rachel Dubois – Spotify: An Insider View

Rachel Dubois offered a captivating glimpse into Spotify’s evolution during her Riviera DEV 2025 presentation, tracing the company’s journey from a fledgling startup to a streaming powerhouse. As a former Agile Coach at Spotify, Rachel shared anecdotes from her time there, emphasizing the role of engineering excellence, adaptive structures, and a nurturing culture in driving sustained growth. Through the lens of a fictional software engineer named Anna, she illustrated how Spotify balances innovation with operational agility, revealing that true success stems not from rigid frameworks but from trust, experimentation, and resilience.

The Genesis of a Disruptive Vision

Rachel opened by transporting the audience back to 2006, a tumultuous era for the music sector reeling from widespread piracy and a 75% revenue plunge since the 1990s. Enter Daniel Ek and Martin Lorentzon, two Swedish visionaries with a bold plan to salvage the industry through legal, accessible streaming. Daniel, an affluent engineer with a passion for music, teamed up with the sales-savvy Martin to craft a service that mirrored the convenience of illicit downloads while compensating creators fairly.

Their initial prototype, a desktop application, emerged after two years of relentless effort by a compact team of 20 elite seniors steeped in extreme programming principles. Rachel highlighted Daniel’s philosophy: hire talent surpassing his own and step aside to let them innovate. This trust fostered self-organization, tool selection, and process refinement from day one, laying the groundwork for Spotify’s debut in 2008 amid fierce label negotiations and technical hurdles like bandwidth constraints.

The early days were marked by rapid iteration and user-centric design, prioritizing high-fidelity audio and seamless access. Rachel noted how this engineer-led ethos—prioritizing technical prowess over business acumen—enabled breakthroughs, such as peer-to-peer streaming to sidestep infrastructure costs, proving that passion and expertise could upend entrenched industries.

Fostering an Engineering-Centric Culture

Central to Spotify’s allure is its vibrant engineering environment, where autonomy and collaboration reign. Rachel described how the company recruits for curiosity and skill, ensuring teams comprise diverse, high-caliber individuals who thrive on complex challenges. This mirrors Daniel’s founding belief: empower smarter minds to navigate ambiguity, yielding solutions unattainable through top-down directives.

Daily standups evolve into dynamic forums for knowledge exchange, while pair programming and code reviews reinforce collective ownership. Rachel recounted Anna’s typical day, blending feature development with exploratory spikes—dedicated time for prototyping without immediate deliverables. Such practices cultivate psychological safety, where failure is a learning tool, not a setback, aligning with Spotify’s mantra of “fail fast, learn faster.”

Moreover, the culture extends beyond code: wellness initiatives like mandatory two-day monthly “brain boosts” for personal growth—be it conferences, reading, or side projects—ensure sustained creativity. Annual hack weeks unite cross-functional squads in frenzied innovation, birthing 960 shippable prototypes in 2023 alone, many translating to revenue-generating features. Rachel stressed that this isn’t mere perk; it’s strategic investment in human capital, yielding outsized returns through engaged, inventive teams.

Scaling Agility: Beyond the Squad Model

Rachel demystified Spotify’s famed organizational model, cautioning against its rote imitation. While squads (autonomous feature teams), tribes (squad clusters), chapters (skill-based guilds), and guilds (interest communities) provide loose alignment, they represent just one facet of a fluid structure. Introduced in 2012, this framework promotes loose coupling and high autonomy, but Rachel urged focusing on underlying principles: transparency, empowerment, and adaptability over hierarchical silos.

Continuous discovery integrates user feedback loops with delivery pipelines, ensuring products evolve in tandem with listener needs. Release trains synchronize deployments across services, minimizing coordination friction in a microservices landscape. Data-informed decisions, powered by robust analytics, guide prioritization, while AB testing validates assumptions swiftly.

Yet, Rachel candidly addressed pitfalls: the 2023 layoffs, slashing 27% of staff amid tech sector woes, eroded trust despite prior “family-like” bonds. Attempts to impose tools like Jira backfired, reverting to chaos-embracing norms. This pendulum swing between order and disorder, Rachel explained, is deliberate—acknowledging that over-structure stifles innovation. True agility, she asserted, demands cultural bedrock: vulnerability, shared purpose, and engineering reverence, enabling rebound from adversity.

Innovation Amidst Adversity: Lessons from the Trenches

Even giants falter, and Rachel didn't shy from Spotify's stumbles. Early missteps, like premium-only strategies amid stagnant growth, necessitated painful pivots. The 2023 crisis tested resilience: abrupt redundancies and channel curbs sparked backlash, yet grassroots revival (a Slack resurgence, tool rollbacks) reaffirmed employee agency.

Wellness weeks, granting universal paid breaks with stipends, exemplify proactive care, halting global operations sans catastrophe (barring critical sectors). Rachel tied this to broader ethos: treat talent as assets warranting recharge, fostering loyalty and ingenuity.

Concluding with Swedish flair—”tack” for thanks, “hej då” for farewell—Rachel invited feedback, underscoring Spotify’s human core. Her narrative posits that enduring triumph arises not from flawless execution but from cultures honoring people: empowering engineers, celebrating experimentation, and navigating turmoil with grace. For developers, the takeaway is clear: emulate the spirit—trust, iteration, humanity—over the skeleton of any model.

Links:

[GoogleIO2024] What’s New in Angular: Enhancements in Performance and Developer Experience

Angular continues to evolve as a robust framework for building dynamic web applications, with recent updates focusing on efficiency, stability, and innovation. Minko Gechev, Jessica Janiuk, and Jeremy Elbourn shared insights into the platform’s progress, highlighting features that streamline development workflows and enhance application reliability. Their presentation underscored the community’s enthusiasm, often referred to as an “Angular Renaissance,” driven by consistent advancements that empower creators to deliver high-quality experiences.

Recent Releases and Community-Driven Improvements

Minko opened by reflecting on the framework’s trajectory, noting the integration of deferrable views following a request for comments (RFC) process that garnered substantial feedback. This feature allows for lazy loading of components, significantly reducing initial load times and improving perceived performance in complex applications. Developers have reported smoother user interactions in production environments, aligning with Angular’s commitment to real-world usability.

Jessica elaborated on the new control flow syntax introduced in version 17, which simplifies template logic and reduces boilerplate code. This syntax, inspired by community input, offers a more intuitive way to handle conditionals and loops, making templates cleaner and easier to maintain. The update has been praised for bridging the gap between Angular’s declarative style and modern JavaScript practices, facilitating quicker iterations during development.

Jeremy discussed the adoption of Material 3 in Angular Material, bringing updated design tokens and components that align with Google’s evolving design system. A dedicated blog post on angular.dev provides migration guides and examples, helping teams transition seamlessly. These enhancements not only modernize the visual aspects but also ensure consistency across applications, reducing the effort needed for custom styling.

The team’s emphasis on RFCs exemplifies Angular’s collaborative ethos, with over 1,000 comments on signals alone shaping its direction. This approach ensures that features resonate with users, fostering a vibrant ecosystem where contributions drive progress.

Advancements in Reactivity with Signals

A pivotal focus has been on signals, introduced via an RFC less than a year ago. Minko explained how signals provide a fine-grained reactivity system, allowing for precise change detection that outperforms traditional zone-based mechanisms. This leads to faster rendering and lower resource consumption, particularly in large-scale applications.

Jessica highlighted practical implementations, such as signal-based inputs and outputs in components, which eliminate the need for decorators like @Input and @Output. This simplifies code structure and reduces errors, as demonstrated in examples where computed signals derive values efficiently without redundant computations.

Jeremy addressed zoneless applications, a future milestone where signals enable full reactivity without NgZone, potentially halving bundle sizes and improving startup times. Early experiments show promising results, with applications running up to twice as fast. The gradual adoption path allows teams to migrate incrementally, minimizing disruption.

These reactivity improvements, set for broader rollout in version 19, position Angular as a leader in performance optimization, drawing from lessons learned in other frameworks while maintaining its unique strengths.

Build Optimizations and Tooling Upgrades

Build performance has seen substantial gains through esbuild and Vite integration in the Angular CLI. Minko noted that these changes, stable since version 17, accelerate compilation and serving, with benchmarks indicating up to 15 times faster production builds for large projects.

Jessica covered Angular DevTools enhancements, including a profiler that visualizes change detection cycles and identifies bottlenecks. This tool, available in browser extensions, aids in debugging zoneless apps and understanding signal flows.

Jeremy introduced the revamped documentation site at angular.dev, featuring interactive tutorials powered by StackBlitz and WebContainers. This hands-on learning approach lowers the entry barrier for newcomers, with embedded code examples allowing immediate experimentation.

Upcoming priorities include full zoneless support, hot module replacement for faster development cycles, and streaming server-side rendering to improve time-to-first-byte. Component authoring enhancements, like macro-based selectors, aim to eliminate redundant imports, flipping standalone flags by default for cleaner code.

Future Directions and Stability Commitments

The backlog includes macros for reducing boilerplate in directives and pipes, ensuring Angular remains adaptable. Minko stressed the team’s dedication to stability, with semver adherence and long-term support for versions like 18.

Jessica emphasized community involvement through RFCs, inviting feedback on evolving features. Jeremy concluded by reiterating the mission: enabling confident web app delivery through faster builds, superior tools, and unwavering reliability.

These developments solidify Angular’s role in modern web development, blending innovation with proven stability.

Links:

[DevoxxBE2025] Virtual Threads, Structured Concurrency, and Scoped Values: Putting It All Together

Lecturer

Balkrishna Rawool leads IT chapters at ING Bank, focusing on scalable software solutions and Java concurrency. He actively shares insights on Project Loom through conferences and writings, drawing from practical implementations in financial systems.

Abstract

This review dissects Project Loom’s enhancements to Java’s concurrency: virtual threads for efficient multitasking, structured concurrency for task orchestration, and scoped values for secure data sharing. Set in a web development context, it explains their APIs and combined usage via a Spring Boot loan processing app. The evaluation covers integration techniques, traditional threading issues, and the effects on readability, scalability, and maintainability of parallel code.

Project Loom Foundations and Virtual Threads

Project Loom overhauls Java concurrency with lightweight alternatives to OS-bound platform threads, whose memory and scheduling overheads limit scale. Virtual threads, managed by the JVM, enable vast concurrency on a small pool of carrier threads, ideal for IO-heavy web services.

In the loan app, which computes offers via credit, account, and loan calls, virtual threads parallelize the work without straining resources. Configuring Tomcat to use them boosts TPS from hundreds to thousands, since blocking calls unmount virtual threads from their carriers rather than occupying them.

The API mirrors traditional threading: Thread.ofVirtual().start(task). Internally, continuations suspend and resume virtual threads, allowing carriers to be reused. The consequences are lower memory consumption and natural exception flow.
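
A minimal sketch of the API on JDK 21, with fetchOffer as a hypothetical IO-bound call (as an aside, Spring Boot 3.2+ can switch Tomcat to virtual threads with spring.threads.virtual.enabled=true):

import java.util.concurrent.Executors;

// Start a single virtual thread
Thread vt = Thread.ofVirtual().name("offer-handler").start(() -> fetchOffer());

// Or run many IO-bound tasks, one virtual thread per task
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    for (int i = 0; i < 10_000; i++) {
        executor.submit(() -> fetchOffer()); // blocking IO unmounts the virtual thread from its carrier
    }
} // close() waits for all submitted tasks to complete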

Care is needed with pinning: blocking inside a synchronized block pins a virtual thread to its carrier, stalling the carrier; ReentrantLock avoids this and sustains performance.
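
A minimal sketch of the substitution, with cache as a hypothetical shared map: replacing a synchronized block with ReentrantLock lets a waiting virtual thread unmount instead of pinning its carrier.

import java.util.concurrent.locks.ReentrantLock;

private final ReentrantLock lock = new ReentrantLock();

void updateCache(String key, Object value) {
    lock.lock();   // a blocked virtual thread unmounts here rather than pinning its carrier
    try {
        cache.put(key, value);
    } finally {
        lock.unlock();
    }
}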

Structured Concurrency for Unified Task Control

Structured concurrency organizes subtasks as cohesive units, addressing the scattered lifecycles that raw executors produce. StructuredTaskScope confines forked subtasks to a scope, ensuring all complete before execution proceeds.

In the app, scoping credit/account/loan forks with ShutdownOnFailure cancels on errors, avoiding leaks. Example:

import java.util.concurrent.StructuredTaskScope; // preview API in JDK 21

try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
    var credit = scope.fork(() -> getCredit(request));
    var account = scope.fork(() -> getAccount(request));
    var loan = scope.fork(() -> calculateLoan(request));
    scope.join().throwIfFailed(); // Wait for all subtasks; rethrow the first failure
    // Aggregate results via credit.get(), account.get(), loan.get()
} catch (Exception e) {
    // All subtasks have completed or been cancelled by this point
}

This ensures orderly shutdown, in contrast to unstructured background threads. The effects are simpler debugging and no dangling tasks.

Scoped Values for Immutable Inheritance

Scoped values supplant ThreadLocals for virtual threads, binding data immutably within a scope. ThreadLocals are mutable and risk inconsistencies; scoped values are inherited safely by child tasks.

For request IDs in logs: ScopedValue.where(ID, uuid).run(() -> tasks); IDs propagate to forks via scopes.

Example:

// Preview API (JDK 21); the value is bound for the dynamic extent of run()
private static final ScopedValue<UUID> REQ_ID = ScopedValue.newInstance();

ScopedValue.where(REQ_ID, UUID.randomUUID()).run(() -> {
    // Forked subtasks inside this scope read the bound value via REQ_ID.get()
});

This avoids the per-thread copying that makes ThreadLocals impractical across millions of virtual threads. The effect is safe, read-only sharing throughout task hierarchies.

Combined Usage and Prospects

Together the features yield maintainable concurrency: virtual threads provide scale, scopes provide structure, and scoped values provide sharing. The app processes requests concurrently yet stays organized, with request IDs traceable end to end.
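
A minimal sketch combining the three features, using the preview APIs as they appear in JDK 21 and the app's hypothetical getCredit/getAccount/aggregate helpers: the scoped request ID is inherited by subtasks forked inside the scope, which run on virtual threads by default.

static final ScopedValue<UUID> REQ_ID = ScopedValue.newInstance();

LoanOffer processLoan(LoanRequest request) throws Exception {
    // Bind a request ID for everything executed inside this scope
    return ScopedValue.where(REQ_ID, UUID.randomUUID()).call(() -> {
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            var credit = scope.fork(() -> getCredit(request));   // forks inherit REQ_ID
            var account = scope.fork(() -> getAccount(request)); // and run on virtual threads
            scope.join().throwIfFailed();
            return aggregate(credit.get(), account.get());
        }
    });
}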

The effects are higher IO throughput and easier maintenance. Looking ahead, framework integrations are poised to reshape how Java concurrency is written.

In summary, Loom’s features enable efficient, readable parallel systems.

Links:

  • Lecture video: https://www.youtube.com/watch?v=iO79VR0zAhQ
  • Balkrishna Rawool on LinkedIn: https://nl.linkedin.com/in/balkrishnarawool
  • Balkrishna Rawool on Twitter/X: https://twitter.com/BalaRawool
  • ING Bank website: https://www.ing.com/

[DevoxxUK2025] Kotlin: The New and Noteworthy

Anton Arhipov, a developer advocate from JetBrains, captivated the DevoxxUK2025 audience with an overview of Kotlin’s recent advancements and future roadmap. Focusing on Kotlin 2.0’s K2 compiler and upcoming features like guard conditions, context parameters, rich errors, and name-based destructuring, Anton highlighted how Kotlin balances conciseness, safety, and expressiveness. His interactive talk, enriched with personal anecdotes and live demos, underscored Kotlin’s evolution as a versatile, multi-platform language that empowers developers to write robust, readable code.

Kotlin 2.0 and the K2 Compiler

Anton introduced Kotlin 2.0, released nearly a year ago, emphasizing the K2 compiler’s new front-end intermediate representation (FIR) and control flow engine. K2 improved compilation performance by 40% in IntelliJ IDEA Ultimate, fixed numerous small bugs, and provided a scalable foundation for future features. By desugaring complex constructs (e.g., if to when expressions, for loops to iterators), K2 enhances smart casts and type inference, enabling seamless handling of nullable types and complex expressions without manual casting.

Guard Conditions for Safer Control Flow

Set to stabilize in Kotlin 2.2, guard conditions enhance when expressions by allowing conditional checks without variable binding. In a demo, Anton showed processing orders with guard conditions to handle subscriptions and discounts, reducing repetition and ensuring exhaustiveness. Unlike Java’s pattern matching, Kotlin leverages existing destructuring to avoid redundancy, with guard conditions adding logical safety by enforcing checks (e.g., amount > 100) directly in when branches, minimizing errors in complex control flows.

Name-Based Destructuring for Robustness

Anton discussed name-based destructuring, planned for experimental release in Kotlin 2.4. Unlike positional destructuring, which risks logical errors during refactoring, name-based destructuring matches variable names to class properties, improving readability and safety. This feature extends to non-data classes and sealed hierarchies, with plans to deprecate positional destructuring in future versions (e.g., Kotlin 3.0), ensuring long-term language consistency while maintaining backward compatibility.

Context Parameters for Scoped APIs

Context parameters, entering beta in Kotlin 2.2, enable scoped extension functions for type-safe builders, often mistaken for DSLs. Anton demonstrated a client-building DSL where an infix extension function for dates (e.g., 10 March 2000) was restricted to a specific context, preventing global namespace pollution. This feature supports library developers in creating intuitive APIs, such as dependency injection-like logger scoping, reducing boilerplate and enhancing code clarity without compromising safety.

Rich Errors for Expressive Error Handling

Planned for experimental release in Kotlin 2.4, rich errors (previously called union types for errors) introduce a new error class syntax to distinguish error types explicitly. In a demo, Anton showed how rich errors improve over null-based error handling in functions like fetchUser and parseUser, enabling clear differentiation between network and parsing errors. Using when expressions, developers gain exhaustiveness checks and readable error handling, avoiding the verbosity of sealed hierarchies or result types.

Enhancing Compiler Safety with Return Value Checks

Anton highlighted a Kotlin 2.2 feature that mandates checking return values for standard library functions, preventing logical errors like missing return statements or incorrect function calls (e.g., using sort instead of sorted). By marking core functions with annotations, the compiler issues warnings for unused return values, reducing bugs like those in a demo where sorting a mutable list failed due to an overlooked return. This opt-in feature will expand to application code, enhancing reliability.

Links:

[AWSReInforce2025] AWS Heroes launch insights (COM220)

Lecturer

The panel comprises AWS Heroes who contribute extensively to the global cloud community through technical content, open-source projects, and educational initiatives. Their collective expertise spans serverless architecture, security automation, and generative AI integration across AWS services.

Abstract

The discussion analyzes keynote announcements through the lens of practicing architects, emphasizing simplification of security onboarding, unified interfaces for AI model management, and enhanced visibility into complex systems. The Heroes establish that while new capabilities emerge, the overarching theme centers on reducing operational friction without sacrificing control.

Simplification as Strategic Imperative

Security complexity impedes adoption. The keynote reveals multiple features designed to streamline configuration:

  • WAF Console Redesign: Natural language rule creation reduces setup time from hours to minutes
  • Shield Network Security Director: Centralized policy orchestration across accounts and regions
  • IAM Access Analyzer Internal Findings: Automated detection of unused roles and cross-account assumptions

These enhancements transform security from a configuration burden into an enablement layer. The Heroes note that practitioners often avoid modifying working CloudFront distributions due to fear of regression; simplified interfaces mitigate this paralysis.

Model Context Protocol (MCP): A Unified Interface

The Model Context Protocol (MCP) introduces a standardized interface for AI model interaction:

MCP Endpoint → Authentication → Rate Limiting → Model Routing

Analogous to USB-C, MCP eliminates custom integration per provider. However, the panel cautions that universal interfaces require rigorous trust validation—public charging stations demonstrate how convenience enables supply chain attacks. Organizations must implement:

  • Provider allowlisting
  • Request signing verification
  • Response integrity checks

Visibility and Operational Confidence

New dashboards and AI-powered summaries in Security Hub provide contextual intelligence:

{
  "finding": "CryptoMining EC2",
  "ai_summary": "Instance i-1234567890 shows 5000+ connections to known mining pools",
  "recommended_action": "Isolate and scan"
}

The Heroes emphasize that visibility without action creates alert fatigue. Integration with EventBridge enables automated containment—revoking sessions, quarantining instances—closing the loop from detection to resolution.
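
As a hedged sketch of that detection-to-resolution loop, an EventBridge rule can match imported Security Hub findings and route them to a remediation target. The event source and detail type below are the documented Security Hub values; the severity filter is an illustrative assumption:

{
  "source": ["aws.securityhub"],
  "detail-type": ["Security Hub Findings - Imported"],
  "detail": {
    "findings": {
      "Severity": { "Label": ["HIGH", "CRITICAL"] }
    }
  }
}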

Generative AI Risk Management

Security must not lag innovation. The panel discusses patterns for safe adoption:

  1. Prompt Injection Prevention: Input validation, output filtering via Bedrock Guardrails
  2. Model Version Pinning: Immutable references in CodePipeline
  3. Audit Trail Preservation: Structured logging of prompt/response pairs

They stress that hype cycles drive premature adoption; organizations should maintain baseline controls before experimenting with emerging capabilities.

Community Perspective on Innovation Velocity

The Heroes observe that AWS prioritizes practitioner feedback. Features like exportable ACM certificates and active threat defense in Network Firewall address real operational pain points. This collaborative evolution ensures security keeps pace with development velocity.

Conclusion: Security as Innovation Substrate

The keynote demonstrates that mature cloud platforms succeed by reducing cognitive load while preserving granularity. Simplified interfaces, unified control planes, and contextual visibility create an environment where security enables rather than impedes progress. The Heroes conclude that organizations which treat security as infrastructure will achieve both velocity and resilience.

Links:

[KotlinConf2025] Build your Kotlin and Android apps with Buck2

Sergei Rybalkin, a software engineer from Meta, introduced the audience to Buck2, an open-source build system that is now equipped to support Kotlin and Android applications. In a concise and informative presentation, Rybalkin detailed how Buck2, a successor to the original Buck system, addresses the need for a fast and scalable build solution, particularly for large-scale projects like those at Meta. The talk centered on Buck2’s core principles and its capabilities for accelerating development cycles and ensuring consistent, reliable builds.

The Power of a Scalable Build System

Rybalkin began by outlining the motivation behind Buck2. He explained that as projects grow in size and complexity, traditional build systems often struggle to keep up, leading to slow incremental iterations and hindered developer productivity. Buck2 was designed to overcome these challenges by focusing on key areas such as parallelism and a highly opinionated approach to optimization. The talk revealed that Buck2’s architecture allows it to execute build tasks with remarkable efficiency, a crucial factor for Meta’s own internal development processes. Rybalkin also touched on advanced capabilities like Remote Execution and the Build Tools API, which further enhance the system’s performance and flexibility.

A Blueprint for Optimization

The presentation also shed light on Buck2’s philosophy of “opinionated optimization.” Rybalkin clarified that this means the system takes a firm stance on how things should be done to achieve the best results. For example, if a particular feature or integration does not perform well, the Buck2 team may choose to drop support for it entirely, rather than provide a subpar experience. This selective approach ensures that the build system remains fast and reliable, even as it handles a multitude of dependencies and complex configurations. Rybalkin underscored the fact that the open-source version of Buck2 is almost identical to the internal solution used at Meta, offering the community the same powerful tools and optimizations that drive one of the world’s largest development teams. He concluded by encouraging the audience to try Buck2 and provide feedback, underscoring the collaborative nature of open-source development.

Links:

[NDCOslo2024] Hub-Spoke Virtual Networks in Azure – Bastiaan Wassenaar

In the labyrinthine landscape of Azure’s azure architecture, where connectivity contends with compliance, Bastiaan Wassenaar, a cloud custodian and connectivity connoisseur, clarifies the conundrum of hub-spoke virtual networks. As a Dutch devops dynamo, Bastiaan blueprints the bedrock—endpoints, peering, policies—propelling practitioners past perplexities in private provisioning. His session, a symphony of safeguards and setups, spotlights the saga from service sentinels to spoke sanctuaries, ensuring egress elegance and ingress integrity.

Bastiaan banters on boredom’s behalf: vnet vexations vex veterans, yet victory vaults with vigilance. He heralds history: vnet’s 2014 genesis, endpoints’ evolution, private peers’ precision—pivoting from public perils to partitioned paradises.

Foundations of Fortification: Endpoints and Evolutions

Service endpoints erect ramparts: subnet sentries shielding storage, SQL sans sprawl. Bastiaan bewails bandwidth burdens—10Gbps ceilings—yet blesses them as basics, bridging to private endpoints’ purity: dedicated daisy-chains, DNS delegations demystified.

Hub-spoke’s heartbeat: central hub harboring firewalls, spokes siphoning spokes—peering propagates prefixes, UDRs usher unicast. Bastiaan blueprints: Azure Firewall’s fabric, forced tunneling fortifying flows.

Orchestrating the Orbit: Peering, Policies, and Proxies

Peering’s pact: global gateways, transitive taboos—spokes supplicate hubs for harmony. Bastiaan bemoans BGP’s burdens—bidirectional broadcasts—yet bows to basics: static routes suffice for simplicity.

Policies propel protection: NSGs nestle at NICs, FW’s finesse filters flows. Bastiaan broadcasts best bets: hub’s hegemony, spokes’ seclusion—egress egressing exclusively, ingress inspecting intently.

DNS’s Dominion: Delegations and Dilemmas

DNS dances delicately: private endpoints’ FQDNs, hub’s handlers hijacking queries. Bastiaan bemoans blunders—external IPs eclipsing internals—yet extols overrides: custom configurations, conditional forwarding.

His hack: hosts’ hacks for haste, yet hub’s hegemony harmonizes hordes. Bastiaan broadcasts: reboot realms for resolution—vnet’s vicissitudes vanquished.

Victory’s Vista: Vigilance in Vastness

Bastiaan’s benediction: hub-spoke as haven, harmonizing hazards—history heeded, hurdles hurdled. His hurrah: harness helpers, heed heuristics—Azure’s arsenal awaits.

Links: