
PostHeaderIcon Can GraalVM Outperform C++ for Real-Time Performance? A Deep Technical Analysis

(long answer to this comment on LinkedIn)

As GraalVM continues to mature, a recurring question surfaces among architects and performance engineers: can it realistically outperform traditional C++ in real-time systems?

The answer is nuanced. While GraalVM represents a major leap forward for managed runtimes, it does not fundamentally overturn the performance model that gives C++ its edge in deterministic environments. However, the gap is narrowing in ways that materially change architectural decisions.

Reframing the Question: What Do We Mean by “Real-Time”?

Before comparing technologies, it is critical to define “real-time.” In engineering practice, this term is frequently overloaded.

There are two distinct categories:

  • Hard real-time: strict guarantees on worst-case latency (e.g., missing a deadline is a system failure)
  • Soft real-time: latency matters, but occasional deviations are acceptable

Most backend systems fall into the second category, even when they are described as “low-latency.” This distinction is essential because it directly determines whether GraalVM is even a viable candidate.

Execution Models: Native vs Managed

C++: Deterministic by Design

C++ provides a minimal abstraction over hardware:

  • Ahead-of-time (AOT) compilation to native code
  • No implicit garbage collection
  • Full control over memory layout and allocation strategies
  • Predictable interaction with CPU caches and NUMA characteristics

This enables precise control over latency, which is why C++ dominates in domains such as embedded systems, game engines, and high-frequency trading infrastructure.

GraalVM: A Spectrum of Execution Modes

GraalVM is not a single execution model but a platform offering multiple strategies:

  • JIT mode (JVM-based): dynamic compilation with runtime profiling
  • Native Image (AOT): static compilation into a standalone binary
  • Polyglot execution: interoperability across languages

Each mode introduces different trade-offs in terms of startup time, peak performance, and latency stability.

JIT Compilation: Peak Performance vs Predictability

GraalVM’s JIT compiler is one of its strongest assets. It performs deep optimizations based on runtime profiling, including:

  • Inlining across abstraction boundaries
  • Escape analysis and allocation elimination
  • Speculative optimizations with deoptimization fallback

In long-running services, this can produce highly optimized machine code that rivals native implementations.
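As an illustration of the escape-analysis point above: an object that never escapes its allocating method can be scalar-replaced by the JIT, removing the heap allocation entirely. The sketch below is illustrative only; whether the optimization actually fires depends on the JIT and the shape of the code.

```java
// The Point below never leaves distanceSquared(), so escape analysis can
// scalar-replace it: in the optimized code, no heap allocation occurs in
// the hot loop even though the source allocates on every call.
public class EscapeDemo {
    record Point(double x, double y) {}

    static double distanceSquared(double x, double y) {
        Point p = new Point(x, y);              // candidate for scalar replacement
        return p.x() * p.x() + p.y() * p.y();
    }

    public static void main(String[] args) {
        double total = 0;
        for (int i = 0; i < 1_000_000; i++) {
            total += distanceSquared(i % 10, i % 7);
        }
        System.out.println(total);
    }
}
```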

However, this optimization model introduces variability:

  • Warmup phase: performance improves over time
  • Deoptimization events: speculative assumptions can be invalidated
  • Compilation overhead: CPU cycles are consumed by the compiler itself

For systems requiring stable latency from the first request, this behavior is inherently problematic.
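A crude way to observe the warmup effect is to time successive batches of an identical workload: on a JIT runtime, the first batches typically run slower than later ones, once the compiler has profiled and optimized the hot method. The probe below is an illustrative sketch, not a rigorous benchmark; a real measurement would use a harness such as JMH.

```java
// Illustrative warmup probe: the same workload is timed in batches.
// Under a JIT, early batches generally report higher times than later ones.
public class WarmupProbe {
    // A small hot method the JIT can inline and optimize.
    static long sumOfSquares(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += (long) i * i;
        }
        return total;
    }

    public static void main(String[] args) {
        for (int batch = 0; batch < 5; batch++) {
            long start = System.nanoTime();
            long result = 0;
            for (int i = 0; i < 10_000; i++) {
                result += sumOfSquares(1_000);
            }
            long elapsedMicros = (System.nanoTime() - start) / 1_000;
            System.out.printf("batch %d: %d us (checksum %d)%n", batch, elapsedMicros, result);
        }
    }
}
```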

Native Image: Reducing the Gap

GraalVM Native Image shifts compilation to build time, eliminating JIT behavior at runtime. This results in:

  • Fast startup times
  • Lower memory footprint
  • Reduced latency variance

However, these benefits come with trade-offs:

  • Loss of dynamic optimizations available in JIT mode
  • Restrictions on reflection and dynamic class loading
  • Generally lower peak performance compared to JIT-optimized code

Even in this mode, C++ retains advantages in fine-grained memory control and instruction-level optimization.
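The reflection restriction noted above is normally handled by declaring reflective access at build time through reachability metadata. A minimal sketch of a `reflect-config.json` entry follows; the class name is hypothetical, chosen only for illustration:

```json
[
  {
    "name": "com.example.OrderSerializer",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true
  }
]
```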

Garbage Collection and Latency

Garbage collection is one of the most significant differentiators between GraalVM and C++.

Modern collectors (e.g., G1, ZGC, Shenandoah) have dramatically reduced pause times, but they do not eliminate them entirely. More importantly, they introduce uncertainty:

  • Pause times may vary depending on allocation patterns
  • Concurrent phases still compete for CPU resources
  • Memory pressure can trigger unexpected behavior

In contrast, C++ allows engineers to:

  • Use stack allocation or object pools
  • Avoid heap allocation in critical paths
  • Guarantee upper bounds on allocation latency

This difference is decisive in hard real-time systems.
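Allocation-free patterns are not exclusive to C++: latency-sensitive JVM code often borrows the same object-pool idea to keep garbage collection out of the hot path. A minimal sketch, with illustrative class and field names:

```java
import java.util.ArrayDeque;

// Minimal object pool: all allocation happens up front, so the hot path
// acquires and releases objects without creating garbage.
public class MessagePool {
    public static final class Message {
        public long sequence;
        public double price;
        void reset() { sequence = 0; price = 0.0; }
    }

    private final ArrayDeque<Message> free = new ArrayDeque<>();

    public MessagePool(int capacity) {
        for (int i = 0; i < capacity; i++) {
            free.push(new Message());   // preallocated at startup
        }
    }

    public Message acquire() {
        Message m = free.poll();
        // Falling back to allocation defeats the purpose; a hard-real-time
        // design would instead size the pool for the worst case.
        return (m != null) ? m : new Message();
    }

    public void release(Message m) {
        m.reset();
        free.push(m);
    }
}
```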

Microarchitectural Considerations

At the highest level of performance engineering, factors such as cache locality, branch prediction, and instruction pipelines dominate.

C++ offers direct control over:

  • Data layout (AoS vs SoA)
  • Alignment and padding
  • SIMD/vectorization strategies

While GraalVM’s JIT can optimize some of these aspects automatically, it operates under constraints imposed by the language and runtime. As a result, it cannot consistently match the level of control available in C++.
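To make the AoS-versus-SoA distinction concrete, here is an illustrative Java sketch: the SoA variant stores each field in its own primitive array, giving dense, prefetch-friendly sequential scans that an array of heap objects cannot guarantee on the JVM.

```java
// Array-of-Structures: each element is a separate heap object, so a scan
// chases pointers and touches scattered cache lines.
class ParticleAoS {
    double x, y;
}

// Structure-of-Arrays: each field lives in one contiguous primitive array,
// so a scan over x is a dense sequential read.
class ParticlesSoA {
    final double[] x;
    final double[] y;

    ParticlesSoA(int n) {
        x = new double[n];
        y = new double[n];
    }

    double sumX() {
        double total = 0;
        for (double v : x) total += v;   // streams through one cache-dense array
        return total;
    }
}
```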

Latency Profiles: A Practical Comparison

From a systems perspective, the difference can be summarized as follows:

Characteristic           C++         GraalVM (JIT)              GraalVM (Native Image)
-----------------------  ----------  -------------------------  ----------------------
Startup Time             Fast        Slow                       Very fast
Peak Throughput          Excellent   Excellent (after warmup)   Good
Latency Predictability   Excellent   Moderate                   Good
Memory Control           Full        Limited                    Limited

Where GraalVM Is a Strong Choice

Despite its limitations in strict real-time environments, GraalVM excels in several domains:

Low-Latency Microservices

Native Image significantly reduces cold start times and memory usage, making it ideal for containerized workloads and serverless environments.

High-Throughput Systems

In long-running services, JIT optimizations can deliver excellent throughput with acceptable latency characteristics.

Polyglot Architectures

GraalVM enables seamless interoperability across multiple languages, simplifying system design in heterogeneous environments.

Developer Productivity

Compared to C++, the Java ecosystem offers faster iteration, richer tooling, and lower cognitive overhead for most teams.

Where C++ Remains Unmatched

C++ continues to dominate in scenarios where performance constraints are absolute:

  • Hard real-time systems (avionics, medical devices, robotics)
  • High-frequency trading engines with microsecond budgets
  • Game engines and real-time rendering pipelines
  • High-performance computing (HPC)

In these domains, even minor unpredictability is unacceptable, and the control offered by C++ is indispensable.

Strategic Takeaway

The most important shift is not that GraalVM surpasses C++, but that it redefines the boundary where managed runtimes are viable.

Historically, many systems defaulted to C++ purely for performance reasons. Today, GraalVM enables teams to achieve sufficiently high performance with significantly better developer productivity and ecosystem support.

This changes the optimization calculus:

  • Use C++ when you need guarantees
  • Use GraalVM when you need performance and agility

Conclusion

GraalVM does not replace C++ in real-time systems—but it does erode its dominance in adjacent domains.

For hard real-time applications, C++ remains the gold standard due to its deterministic execution model and fine-grained control over system resources.

For everything else, the decision is no longer obvious. GraalVM offers a compelling middle ground, delivering strong performance while dramatically improving developer velocity.

In modern system design, that trade-off is often more valuable than raw speed alone.

PostHeaderIcon [GoogleIO2025] What’s new in Android

Keynote Speakers

John Zoeller operates as a Developer Relations Engineer at Google, advocating for Wear OS and high-quality Android experiences. Educated at the University of Washington, he shares insights on code documentation and platform integrations to foster developer communities.

Jingyu Shi functions as a Developer Relations Engineer at Google, specializing in AI Edge technologies for Android. With a background from Columbia University, she guides developers in deploying on-device models and enhancing intelligent app features.

Jolanda Verhoef serves as a Developer Relations Engineer at Google, specializing in Android development with a focus on Jetpack Compose and user interface tooling. Based in Utrecht, she advocates for modern UI practices, drawing from her education at the University of Utrecht to educate developers on building efficient, adaptive applications.

Abstract

This comprehensive inquiry examines forthcoming Android 16 capabilities and developmental trajectories, focusing on crafting superior applications across varied hardware, including wearables, televisions, and automotive systems. It dissects integrations of AI via Gemini models, productivity boosts through Jetpack Compose and Kotlin Multiplatform, and Gemini-assisted tooling in Android Studio. By analyzing methodologies for on-device intelligence, media handling, and cross-platform logic, the discussion appraises contexts of user delight and developer velocity, with ramifications for scalable, privacy-conscious software engineering.

Productivity Amplifications in Development Tooling

Jolanda Verhoef commences by chronicling Jetpack Compose’s ascent, now adopted by 60% of premier apps for its declarative prowess. She delineates enhancements accelerating workflows, such as autofill via semantics rewrites, autosizing text for adaptive displays, and animateBounds for seamless transitions.

Visibility APIs like onLayoutRectChanged enable efficient tracking, with alpha extensions for fractional visibility aiding media optimizations. Performance surges from compiler skips and UI refinements yield 20-30% gains, while stability purges 32% of experimental APIs.

Navigation 3 rethinks routing with Compose primitives, supporting adaptive architectures. Media3 and CameraX offer modular composables, as in Androidify’s video tutorials.

Jingyu Shi introduces Kotlin Multiplatform (KMP) for shared logic across Android and iOS, stabilizing in Kotlin 2.0. Methodologies involve common modules for business rules, with platform-specific UI, implying reduced duplication and unified testing.

Code sample for KMP setup:

// commonMain/kotlin
expect class Platform() {
    val name: String
}

// androidMain/kotlin
actual class Platform {
    actual val name: String = "Android"
}

// iosMain/kotlin
actual class Platform {
    actual val name: String = "iOS"
}

Implications encompass streamlined maintenance, though full parity still requires further ecosystem maturity.

AI Integrations for Intelligent Experiences

Shi emphasizes on-device AI via Gemini Nano and cloud access, liberating apps from server dependencies. GenAI APIs handle text/image tasks with minimal code, expanding to multimodal interactions.

Gemini Live API via Firebase enables bidirectional audio, fostering agentic apps. Home APIs incorporate Gemini for smart automations, accessing 750 million devices.

Methodologies prioritize privacy in on-device processing, with implications for real-time personalization sans latency. Contexts include solving tangible issues, like fitness tracking or content generation.

Media and Camera Advancements for Rich Interactions

Updates in Jetpack Media3 and CameraX facilitate effects sharing for grayscale filters across capture and editing. Low-light boosts via ML extend brightness adjustments to broader hardware.

PreloadManager optimizes short-form video feeds, reducing startups for swipeable interfaces. Native PCM offload in NDK conserves battery during audio playback by delegating to DSPs.

Professional features in Android 16 enhance creator tools, implying elevated content quality across ecosystems.

Cross-Device Excellence and Future Paradigms

John Zoeller, whose focus is Wear OS, joins the other speakers in advocating multi-form-factor designs, with Android 16’s live updates and Material 3 Expressive for engaging UIs.

Implications span unified experiences, with AI as the differentiator for “wow” moments, urging ethical, performant integrations.


PostHeaderIcon [DevoxxUK2025] Scripting on the JVM with JBang: Simplifying Java Development

Yassine Benabbas, a data engineer at Worldline and a passionate educator, delivered an engaging session at DevoxxUK2025 on JBang, a tool that simplifies Java, Kotlin, and Groovy scripting. Through live demos using Jupyter notebooks, Yassine showcased how JBang streamlines small-scale development, making it as intuitive as Python. His talk covered JBang’s setup, core features, and advanced capabilities like remote script execution and catalog integration, emphasizing its value for prototyping, teaching, and sharing code. Yassine’s enthusiasm for JBang highlighted its potential to make Java development accessible and enjoyable for diverse audiences.

JBang’s Core Features: Scripting Made Simple

Yassine introduced JBang as a command-line tool that transforms Java development by enabling single-file scripts, akin to shell or Python scripts. Using a shebang line (#!/usr/bin/env jbang), developers can run Java files directly from the terminal, as demonstrated with a “Hello World” example. JBang’s init command generates a basic Java file, and its export capabilities produce JARs, fat JARs, or native images effortlessly. For prototyping, JBang supports converting scripts into Maven or Gradle projects, maintaining simplicity while scaling to full-fledged applications. This flexibility makes it ideal for educators, demo developers, and hobbyists who need quick, lightweight solutions without complex build tools.
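The single-file model described above can be sketched as follows: save the file as `hello.java`, make it executable, and run it straight from the shell. The greeting logic is illustrative, not from the talk.

```java
///usr/bin/env jbang "$0" "$@" ; exit $?

// A complete JBang script: one file, no build tool, runnable from a terminal
// thanks to the jbang shebang line above (a valid Java line comment).
public class hello {
    static String greeting(String[] args) {
        return "Hello " + (args.length > 0 ? args[0] : "World");
    }

    public static void main(String[] args) {
        System.out.println(greeting(args));
    }
}
```

Running `./hello.java JBang` would then print the greeting, with JBang resolving the JDK and compiling behind the scenes.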

Templates and Multi-Language Support

JBang’s template system was a highlight, allowing developers to bootstrap projects with pre-configured code. Yassine showcased templates for command-line apps (using Picocli) and Quarkus REST APIs, demonstrating how JBang creates single-file REST APIs in Java, rivaling other languages’ simplicity. Beyond Java, JBang supports Kotlin and Groovy, and even executes Java code blocks within Markdown files. A demo showed initializing a Kotlin script with a shebang line, running seamlessly via JBang. These features empower developers to reuse code efficiently, making JBang a versatile tool for rapid development across JVM languages.

Remote Execution and Catalog Integration

Yassine explored JBang’s ability to run remote scripts by referencing URLs, such as a GitHub-hosted Java file, with a caching mechanism for efficiency. He introduced JBang catalogs, JSON files hosted on platforms like GitHub, which simplify referencing scripts and templates with short aliases (e.g., @yassine/pal-cli for a palindrome checker). The JBang app store, a community-driven UI, allows browsing catalogs, enhancing script discoverability. Yassine also demonstrated a JavaFX template for creating presentations, showcasing how JBang fetches dependencies and runs complex applications, further broadening its applicability for creative and educational use cases.


PostHeaderIcon [VoxxedDaysTicino2026] Backlog.md: The Simplest Project Management Tool for the AI Era

Lecturer

Alex Gavrilescu is a full-stack developer with extensive experience in .NET and Vue.js technologies. He has been actively involved in software development for many years and has shifted his focus toward artificial intelligence since last year. Alex developed Backlog.md as a side project starting from the end of May 2025, while maintaining a full-time role in the casino industry. He shares insights through blog articles on platforms like LinkedIn and X (formerly Twitter). Relevant links include his LinkedIn profile (https://www.linkedin.com/in/alex-gavrilescu/) and X account (https://x.com/alexgavrilescu).

Abstract

This article examines Alex Gavrilescu’s presentation on his journey in AI-assisted software development and the creation of Backlog.md, a terminal-based project management tool designed to enhance predictability and structure in workflows involving AI agents. Drawing from personal experiences, the discussion analyzes the evolution from unstructured prompting to a systematic approach, emphasizing task decomposition, context management, and delegation modes. It explores the tool’s features, limitations, and implications for spec-driven AI development, highlighting how such methodologies foster deterministic outcomes in non-deterministic AI environments.

Context of AI Integration in Development Workflows

In the evolving landscape of software engineering, the integration of artificial intelligence agents has transformed traditional practices. Alex begins by contextualizing his experiences, noting the shift from basic code completions in integrated development environments (IDEs) like Visual Studio’s IntelliSense, which relied on simple machine learning or pattern matching, to more advanced tools. The advent of models like ChatGPT allowed developers to query and incorporate code snippets, reducing friction but still requiring manual transfers.

The introduction of GitHub Copilot marked a significant advancement, embedding AI directly into IDEs for contextual queries and modifications. However, the true leap came with agent modes, where AI operates in a loop, utilizing tools and gathering context autonomously until task completion. Alex distinguishes between “steer mode,” where developers iteratively guide AI through prompts and approvals, and “delegate mode,” where comprehensive instructions are provided upfront for independent execution. His focus leans toward delegation, aiming for reliable outcomes without constant intervention.

This context is crucial as AI models are inherently non-deterministic, yielding varied results from identical prompts. Alex draws parallels to human collaboration, where structured information—clarifying the “why,” “what,” and “how”—ensures success. He references practices like Gherkin scenarios (given-when-then) but simplifies them to acceptance criteria and definitions of done, adapting them for AI efficiency. Early challenges, such as limited context windows in models like those from May 2025, necessitated task breakdown to avoid information loss during compaction.

The implications are profound: unstructured AI use often leads to abandonment, as complexity escalates failure rates. Alex classifies developers into categories like “vibe coders” (improvisational prompting without code review) and “AI product managers” (structured delegation with final reviews), illustrating how his journey from near-abandonment to 95% success stemmed from imposing structure.

Development and Features of Backlog.md

Backlog.md emerged as Alex’s solution to the limitations of manual task structuring. Initially, he created tasks in Markdown files, logging them in Git repositories for sharing and history. This allowed referencing between tasks, scoping to prevent derailment, and assigning tasks to specialized agents (e.g., Opus for UI, Codex for backend). By avoiding database or API dependencies, agents could directly read files, enhancing efficiency.

The tool formalizes this into a command-line interface (CLI) resembling Git commands: backlog task create, edit, list. Tasks are stored as Markdown with a front-matter section for metadata (title, ID, dependencies, status). Sections include “why” for problem context, acceptance criteria with checkboxes for self-verification, implementation plans generated by agents, and notes/summaries for pull request descriptions.
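Based on the structure described, a task file might look like the following sketch; the field names and values are illustrative, not the tool’s exact schema:

```markdown
---
id: task-42
title: Add dark mode toggle
status: To Do
dependencies: [task-17]
---

## Why
Users working at night have requested a darker theme.

## Acceptance Criteria
- [ ] Toggle persists across sessions
- [ ] All views render correctly in dark mode

## Implementation Plan
(generated by the agent when the task is started)

## Notes
(summary reused for the pull request description)
```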

Backlog.md supports subtasks, dependencies (e.g., “relates to” or “blocked by”), and a web interface for easier editing, including rich text and dark mode. It operates offline, uses Git for synchronization across branches, and avoids conflicts by leveraging repository permissions for security. Notably, 99% of its code was AI-generated, with Alex reviewing initial tasks, demonstrating the tool’s recursive utility.

Limitations include no direct task initiation from the interface, self-hosting requirements, single-repo support, experimental documentation/decisions sections, and absent integrations like GitHub Issues or Jira. As a solo side project, it lacks production-grade support, but welcomes community contributions via issues or pull requests.

In practice, Alex showcases Backlog.md in a live demo for spec-driven development. Starting with a product requirements document (PRD) generated by an agent like Claude, tasks are decomposed. Implementation plans are reviewed per task to adapt to changes, ensuring accuracy. Sub-agents orchestrate parallel planning, with human checkpoints at description, plan, and code stages.

Methodological Implications for Spec-Driven AI Development

Spec-driven AI development, as outlined, requires clear intent expression before execution. Backlog.md facilitates this by breaking projects into manageable tasks, delegating to agents for research, planning, and coding. A feedback loop refines agent instructions, specs, and processes.

Alex’s workflow begins with PRD creation, followed by task decomposition adhering to Backlog.md guidelines. Agents generate plans only upon task start, preventing obsolescence. For a task-scheduling feature, he demonstrates PRD prompting, task creation, and sub-agent orchestration for plans, emphasizing acceptance criteria for verification.

The methodology promotes one-task-per-context-window sessions, referencing summaries to avoid bloat. Definitions of done, global across projects, enforce testing, linting, and security checks. This counters “vibe coding’s” directional uncertainty, ensuring guardrails like unit tests prevent premature completion claims.

Implications extend to project readiness: documentation for agent onboarding mirrors human processes, with skills, code styles, and self-verification loops enhancing efficiency. Alex references a Factory.ai article on AI-ready maturity levels, underscoring documentation’s role.

Challenges persist in UI verification, requiring human QA, and complex integrations. Yet, the approach allows iterations without full restarts, leveraging cheap tokens for refinements.

Consequences and Future Directions

Backlog.md’s simplicity yields repeatability, boosting success from 50% (slot-machine-like prompting) to 95%. By structuring delegation, it mitigates AI’s non-determinism, fostering predictable workflows. Consequences include democratized AI use—no prior experience needed beyond basic Git—potentially broadening adoption.

For teams, Git synchronization enables collaboration, though self-hosting limits non-technical access. Future enhancements might include multi-repo support, integrations, and improved documentation, driven by its 4,600 GitHub stars and community feedback.

Broader implications question AI’s role: accepting “good enough” results accelerates development, but human input remains vital for steering and verification. As models improve (e.g., Opus 5.6’s million-token window), tools like Backlog.md evolve, but foundational structure endures.

In conclusion, Alex’s tool and methodology exemplify pragmatic AI integration, balancing innovation with reliability in an era where agents redefine development.


PostHeaderIcon Clojure: A Modern Lisp for Durable, Concurrent Software


In the evolving landscape of programming languages, Clojure distinguishes itself not through novelty or aggressive feature growth, but through deliberate restraint. Rather than chasing trends, it revisits enduring principles of computer science—immutability, functional composition, and symbolic computation—and applies them rigorously to contemporary software systems. This approach results in a language that feels both deeply rooted in tradition and sharply attuned to modern challenges.

Clojure appeals to developers and organizations that prioritize long-term correctness, conceptual coherence, and system resilience over short-term convenience.

What Clojure Is and What It Aims to Solve

Clojure is a functional, dynamically typed programming language that runs primarily on the Java Virtual Machine. It is a modern Lisp, and as such it adopts a uniform syntax in which code is represented as structured data. This design choice enables powerful programmatic manipulation of code itself, while also enforcing consistency across the language.

Unlike many earlier Lisp dialects, Clojure was explicitly designed for production systems. It assumes the presence of large codebases, multiple teams, and long-lived services. As a result, its design is deeply influenced by concerns such as concurrency, data integrity, and integration with existing ecosystems.

Historical Context and Design Motivation

Rich Hickey introduced Clojure publicly in 2007 after years of observing recurring failures in large software systems. His critique focused on the way mainstream languages conflate identity, state, and value. In mutable systems, objects change over time, and those changes must be coordinated explicitly when concurrency is involved. The resulting complexity often exceeds human reasoning capacity.

Clojure responds by redefining the problem. Instead of allowing values to change, it treats values as immutable and represents change as a controlled transition between values. This shift in perspective underpins nearly every aspect of the language.

Immutability as a Foundational Principle

In Clojure, immutability is the default. Data structures such as vectors, maps, and sets never change in place. Instead, operations that appear to modify data return new versions that share most of their internal structure with the original.

(def user {:name "Alice" :role "admin"})
(def updated-user (assoc user :role "editor"))

;; user remains unchanged
;; updated-user reflects the new role

Because values never mutate, functions cannot introduce hidden side effects. This dramatically simplifies reasoning, testing, and debugging, especially in concurrent environments.

Functional Composition in Practice

Clojure encourages developers to express computation as the transformation of data through a series of functions. Rather than focusing on control flow and state transitions, programs describe what should happen to data.

(defn even-squares [numbers]
  (->> numbers
       (filter even?)
       (map #(* % %))))

In this example, data flows through a pipeline of transformations. Each function is small, focused, and easily testable, which encourages reuse and composability over time.

Concurrency Through Explicit State Management

Clojure’s concurrency model separates identity from value. State is managed through explicit reference types, while the values themselves remain immutable. This design makes concurrent programming safer and more predictable.

(def counter (atom 0))
(swap! counter inc)

For coordinated updates across multiple pieces of state, Clojure provides software transactional memory, allowing several changes to occur atomically.

(def account-a (ref 100))
(def account-b (ref 50))

(dosync
  (alter account-a - 10)
  (alter account-b + 10))

Macros and Language Extension

Because Clojure code is represented as data, macros can transform programs before evaluation. This allows developers to introduce new syntactic constructs that feel native to the language rather than external utilities.

(defmacro unless [condition & body]
  `(if (not ~condition)
     (do ~@body)))

Although macros should be used with care, they play an important role in building expressive and coherent abstractions.

Interoperability with Java

Despite its distinct philosophy, Clojure integrates seamlessly with Java. Java classes can be instantiated and methods invoked directly, allowing developers to reuse existing libraries and infrastructure.

(import java.time.LocalDate)
(LocalDate/now)

Comparison with Java

Although Clojure and Java share the JVM, they differ fundamentally in how they model software. Java emphasizes object-oriented design, mutable state, and explicit control flow. Clojure emphasizes immutable data, functional composition, and explicit state transitions.

While Java has incorporated functional features over time, its underlying model remains object-centric. Clojure offers a more radical rethinking of program structure, often resulting in smaller and more predictable systems.

Comparison with Scala

Scala and Clojure are often compared as functional alternatives on the JVM, yet their philosophies diverge significantly. Scala embraces expressive power through advanced typing and rich abstractions, whereas Clojure seeks to reduce complexity by minimizing the language itself.

Both approaches are valid, but they reflect different beliefs about how developers best manage complexity.

Closing Perspective

Clojure is not designed for universal adoption. It demands a shift in how developers think about state, time, and behavior. However, for teams willing to embrace its principles, it offers a disciplined and coherent approach to building software that remains understandable, correct, and adaptable over time.

PostHeaderIcon [DevoxxBE2025] Robotics and GraalVM Native Libraries

Lecturer

Florian Enner is the co-founder and chief software engineer at HEBI Robotics, a company specializing in modular robotic systems. With a background in electrical and computer engineering from Carnegie Mellon University, Florian has extensive experience in developing software for real-time robotic control and has contributed to various open-source projects in the field. His work focuses on creating flexible, high-performance robotic solutions for industrial and research applications.

Abstract

This article delves into the integration of Java in custom robotics development, emphasizing real-time control systems and the potential of GraalVM native shared libraries to supplant segments of C++ codebases. It identifies key innovations in modular hardware components and software architectures, situated within HEBI Robotics’ approach to building autonomous and inspection-capable robots. Through demonstrations of robotic platforms and code compilations, the narrative highlights methodologies for cross-platform compatibility, performance optimization, and safety features. The discussion evaluates contextual challenges in embedded systems, implications for development efficiency and scalability, and provides insights into transitioning legacy code for enhanced productivity.

Modular Building Blocks for Custom Robotics

HEBI Robotics specializes in creating versatile components that function as high-end building blocks for constructing bespoke robotic systems, akin to advanced modular kits. These include actuators, cameras, mobile bases, and batteries, designed to enable rapid assembly of robots for diverse applications such as industrial inspections or autonomous navigation. The innovation lies in the actuators’ integrated design, combining motors, encoders, and controllers into compact units that can be daisy-chained, simplifying wiring and reducing complexity in multi-joint systems.

Contextually, this addresses the fragmentation in robotics where off-the-shelf solutions often fail to meet specific needs. By providing standardized yet customizable modules, HEBI facilitates experimentation in research and industry, allowing users to focus on higher-level logic rather than hardware intricacies. For instance, actuators support real-time control at 1 kHz, with features like voltage compensation for battery-powered operations and safety timeouts to prevent uncontrolled movements.

Methodologically, the software stack is cross-platform, supporting languages like Java, C++, Python, and MATLAB, ensuring broad accessibility. Demonstrations showcase robots like hexapods or wheeled platforms controlled via Ethernet or WiFi, highlighting the system’s robustness in real-world scenarios. Implications include lowered barriers for R&D teams, enabling faster iterations and safer deployments, particularly for novices or in educational settings.

Java’s Role in Real-Time Robotic Control

Java’s utilization in robotics challenges conventional views of its suitability for time-critical tasks, traditionally dominated by lower-level languages. At HEBI, Java powers control loops on embedded Linux systems, leveraging its rich ecosystem for productivity while achieving deterministic performance. Key to this is managing garbage collection pauses through careful allocation strategies and using scoped values for thread-local data.

The API abstracts hardware complexities, allowing Java clients to issue commands and receive feedback in real time. For example, a short Java program can orchestrate a robotic arm’s movements:

import com.hebi.robotics.*;

public class ArmControl {
    public static void main(String[] args) {
        ModuleSet modules = ModuleSet.fromDiscovery("arm");
        Group group = modules.createGroup();
        Command cmd = Command.create();
        cmd.setPosition(new double[]{0.0, Math.PI/2, 0.0}); // Set joint positions
        group.sendCommand(cmd);
    }
}

This code discovers modules, forms a control group, and issues position commands. Safety integrates via command lifetimes: if no new command arrives within a specified period (e.g., 100 ms), the system shuts down, preventing hazards.
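The command-lifetime mechanism can be illustrated with a small, self-contained sketch. This is a conceptual model of the timeout logic, not HEBI's implementation; timestamps are passed in explicitly to keep the example deterministic.

```java
import java.util.concurrent.TimeUnit;

public class CommandWatchdog {
    private final long lifetimeNanos;
    private long lastCommandNanos;

    public CommandWatchdog(long lifetimeMillis) {
        this.lifetimeNanos = TimeUnit.MILLISECONDS.toNanos(lifetimeMillis);
    }

    /** Record that a command arrived at the given timestamp. */
    public void onCommand(long nowNanos) {
        lastCommandNanos = nowNanos;
    }

    /** True once the lifetime has elapsed with no new command: the actuator must stop. */
    public boolean expiredAt(long nowNanos) {
        return nowNanos - lastCommandNanos > lifetimeNanos;
    }

    public static void main(String[] args) {
        CommandWatchdog watchdog = new CommandWatchdog(100); // 100 ms lifetime
        watchdog.onCommand(0);
        System.out.println(watchdog.expiredAt(TimeUnit.MILLISECONDS.toNanos(50)));  // fresh command
        System.out.println(watchdog.expiredAt(TimeUnit.MILLISECONDS.toNanos(150))); // lifetime exceeded
    }
}
```

In a real deployment the check runs on the actuator itself, so a crashed or disconnected client cannot leave the arm executing a stale command.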

Contextually, this approach contrasts with C++’s dominance in embedded systems, offering Java’s advantages in readability and rapid development. Analysis shows Java matching C++ latencies in control loops, with minor overheads mitigated by optimizations like ahead-of-time compilation. Implications extend to team composition: Java’s familiarity attracts diverse talent, accelerating project timelines while maintaining reliability.

GraalVM Native Compilation for Shared Libraries

GraalVM’s native image compilation transforms Java code into standalone executables or shared libraries, presenting an opportunity to replace performance-critical C++ components. At HEBI, this is explored for creating .so files callable from C++, blending Java’s productivity with native efficiency.

The process involves configuring GraalVM for reflections and resources, then compiling:

native-image --shared -jar mylib.jar -H:Name=mylib

This generates a shared library with JNI exports. A simple example compiles a Java class with methods exposed for C++ invocation:

import org.graalvm.nativeimage.IsolateThread;
import org.graalvm.nativeimage.c.function.CEntryPoint;

public class Devoxx {
    // @CEntryPoint (from the GraalVM SDK) exports the method as a plain
    // C symbol named "add"; every entry point takes an IsolateThread.
    @CEntryPoint(name = "add")
    public static int add(IsolateThread thread, int a, int b) {
        return a + b;
    }
}

Compiled to libdevoxx.so, it’s callable from C++. Demonstrations show successful executions, with “Hello Devoxx” printed from Java-originated code.

Contextualized within robotics’ need for low-latency libraries, this bridges languages, allowing Java for logic and C++ for interfaces. Performance evaluations indicate near-native speeds, with startup advantages over JVMs. Implications include simplified maintenance: Java’s safety features reduce bugs in controls, while native compilation ensures compatibility with existing C++ ecosystems.

Performance Analysis and Case Studies

Performance benchmarks compare GraalVM libraries to C++ equivalents: in loops, latencies are comparable, with Java’s GC managed for determinism. Case studies include snake-like inspectors navigating pipes, controlled via Java for path planning.

Analysis reveals GraalVM’s potential in embedded scenarios, where quick compilations (under 5 minutes for small libraries) enable rapid iterations. Safety features, like velocity limits, integrate seamlessly.

Implications: hybrid codebases leverage the strengths of both languages, enhancing scalability for complex robots such as self-balancing platforms demonstrated with children.

Future Prospects in Robotic Software Stacks

GraalVM promises polyglot libraries, enabling seamless multi-language calls. HEBI envisions full Java controls, reducing C++ reliance for better productivity.

Challenges: ensuring real-time guarantees in compiled code. Future: broader adoption in robotics frameworks.

In conclusion, GraalVM empowers Java in robotics, merging efficiency with developer-friendly tools for innovative systems.

Links:

  • Lecture video: https://www.youtube.com/watch?v=md2JFgegN7U
  • Florian Enner on LinkedIn: https://www.linkedin.com/in/florian-enner-59b81466/
  • Florian Enner on GitHub: https://github.com/ennerf
  • HEBI Robotics website: https://www.hebirobotics.com/

PostHeaderIcon Kerberos in the JDK: A Deep Technical Guide for Java Developers and Architects

Kerberos remains one of the most important authentication protocols in enterprise computing. Although it is often perceived as legacy infrastructure, it continues to underpin authentication in corporate networks, distributed data platforms, and Windows domains. For Java developers working in enterprise environments, understanding how Kerberos integrates with the JDK is not optional — it is frequently essential.

This article provides a comprehensive, architectural-level explanation of the Kerberos tooling available directly within the JDK. The objective is not merely to demonstrate configuration snippets, but to clarify how the pieces interact internally so that developers, architects, and staff engineers can reason about authentication flows, diagnose failures, and design secure systems with confidence.

Kerberos Support in the JDK: An Architectural Overview

The JDK provides native support for Kerberos through three primary layers: the internal Kerberos protocol implementation, JAAS (Java Authentication and Authorization Service), and JGSS (Java Generic Security Services API). These layers operate together to allow a Java process to authenticate as a principal, acquire credentials, and establish secure contexts with remote services.

At the lowest level, the JDK contains a complete Kerberos protocol stack implementation located in sun.security.krb5. This implementation performs the AS, TGS, and AP exchanges defined by the Kerberos protocol. Although this layer is not intended for direct application use, it is important to understand that the JVM does not require external Kerberos libraries to function as a Kerberos client.

Above the protocol implementation sits JAAS, which is responsible for authentication and credential acquisition. JAAS provides the abstraction layer that allows a Java process to log in as a principal using a password, a keytab, or an existing ticket cache.

Finally, the JDK exposes JGSS through the org.ietf.jgss package. JGSS is the API used to generate and validate Kerberos tokens, negotiate security mechanisms such as SPNEGO, and establish secure contexts between clients and services.

In practice, enterprise Java applications almost always use JAAS to obtain credentials and JGSS to perform service authentication.

JAAS and the Krb5LoginModule

JAAS serves as the authentication entry point for Kerberos within the JVM. The central class is javax.security.auth.login.LoginContext, which delegates authentication to one or more login modules defined in a JAAS configuration file.

For Kerberos authentication, the relevant module is com.sun.security.auth.module.Krb5LoginModule, which is bundled with the JDK. This login module supports multiple credential acquisition strategies, including interactive password login, keytab-based login for services, and reuse of an existing operating system ticket cache.

A typical JAAS configuration for a service using a keytab might look as follows:

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/etc/security/keytabs/app.keytab"
  principal="appuser@COMPANY.COM"
  storeKey=true
  doNotPrompt=true;
};

Once authentication succeeds, JAAS produces a Subject. This object represents the authenticated identity within the JVM and contains the Kerberos principal along with private credentials such as the Ticket Granting Ticket (TGT).

The Subject becomes the in-memory security identity for the application. Code can be executed under this identity using Subject.doAs, which ensures that downstream security operations use the acquired Kerberos credentials.

JGSS and Security Context Establishment

After credentials are acquired, the next step is to authenticate to a remote service. This is performed through the Java GSS-API implementation provided in the org.ietf.jgss package.

The central abstraction in JGSS is the GSSContext, which represents a security context between two peers. The GSSManager factory is used to create names, credentials, and contexts. During context establishment, Kerberos tickets are exchanged and validated transparently by the JVM.

On the client side, the application creates a GSSName representing the service principal, then initializes a GSSContext. The resulting token is transmitted to the server, often via an HTTP Authorization: Negotiate header.

On the server side, the application accepts the token using acceptSecContext, which validates the ticket, verifies authenticity, and establishes a shared session key. Mutual authentication can be requested so that both client and server verify each other’s identities.

Under the hood, JGSS relies on the Kerberos mechanism identified by OID 1.2.840.113554.1.2.2. When SPNEGO is involved, the negotiation mechanism uses OID 1.3.6.1.5.5.2 to determine the appropriate underlying security protocol.
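Both mechanism OIDs can be constructed directly with the JDK's org.ietf.jgss.Oid class; a minimal sketch:

```java
import org.ietf.jgss.GSSException;
import org.ietf.jgss.Oid;

public class MechOids {
    /** Kerberos v5 mechanism OID, used by JGSS under the hood. */
    public static Oid krb5() throws GSSException {
        return new Oid("1.2.840.113554.1.2.2");
    }

    /** SPNEGO negotiation mechanism OID. */
    public static Oid spnego() throws GSSException {
        return new Oid("1.3.6.1.5.5.2");
    }

    public static void main(String[] args) throws GSSException {
        System.out.println(krb5());
        System.out.println(spnego());
    }
}
```

These OIDs are typically passed to GSSManager.createCredential or GSSManager.createContext to select the mechanism explicitly rather than relying on the default.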

Kerberos Configuration in the JVM

The JVM reads Kerberos configuration from a krb5.conf file, typically located under ${java.home}/lib/security or specified via the -Djava.security.krb5.conf system property.

Several JVM system properties significantly influence Kerberos behavior. For example, enabling -Dsun.security.krb5.debug=true produces extremely detailed protocol-level logs, including encryption types, ticket exchanges, and key version numbers. This flag is invaluable when diagnosing authentication failures.

Another important property is -Djavax.security.auth.useSubjectCredsOnly. When set to true (the default), the JVM will only use credentials present in the current Subject. When set to false, the JVM may fall back to native operating system credentials, which is often necessary in SPNEGO-enabled web applications.

Ticket Cache and Operating System Integration

The JDK can integrate with an operating system’s Kerberos ticket cache. On Unix systems, this typically corresponds to the cache generated by the kinit command. JAAS can be configured with useTicketCache=true to reuse these credentials instead of requiring a password or keytab.
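A JAAS configuration that reuses the operating system's ticket cache (for example, one populated by kinit) might look like the following minimal fragment:

```
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true
  doNotPrompt=true;
};
```

With doNotPrompt=true, login fails fast when no valid cached ticket exists instead of prompting for a password.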

On Windows, the JVM can integrate with the Local Security Authority (LSA), allowing Java applications to authenticate transparently as the currently logged-in domain user.

SASL and GSSAPI Support

Beyond HTTP authentication, the JDK also provides SASL support through the javax.security.sasl package. The GSSAPI mechanism enables Kerberos authentication for protocols such as LDAP, SMTP, and custom TCP services.

Technologies such as Apache Kafka, enterprise LDAP servers, and distributed data platforms frequently leverage SASL/GSSAPI under the hood. From the JVM’s perspective, the mechanism ultimately delegates to the same JGSS implementation used for HTTP-based SPNEGO authentication.

Encryption Types and Cryptographic Considerations

Modern JDK versions support AES-based encryption types, including AES-128 and AES-256. Older algorithms such as DES have been removed or disabled due to security concerns. Since Java now ships with unlimited cryptographic strength enabled by default, no additional policy configuration is typically required for strong Kerberos encryption.

Encryption type mismatches between the KDC and the JVM are a frequent source of authentication errors, particularly in legacy environments.
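To avoid such mismatches, the permitted encryption types can be pinned explicitly in krb5.conf. A minimal fragment (the realm name is reused from the earlier JAAS example) might look like:

```
[libdefaults]
    default_realm = COMPANY.COM
    permitted_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
```

Restricting the list to AES types on both the JVM and the KDC side makes negotiation failures surface as explicit configuration errors rather than opaque authentication exceptions.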

Debugging and Operational Realities

Most Kerberos failures in Java applications are not caused by cryptographic defects but by configuration issues. Common causes include DNS misconfiguration, principal mismatches, clock skew between systems, and incorrect keytab versions.

Effective troubleshooting requires correlating JVM debug logs with KDC logs and verifying ticket cache state using operating system tools. Engineers who understand the protocol exchange sequence can usually isolate failures quickly by determining whether the breakdown occurs during AS exchange, TGS exchange, or service ticket validation.

What the JDK Does Not Provide

It is important to clarify that the JDK does not include a Kerberos Key Distribution Center, administrative tools such as kadmin, or command-line utilities for ticket management. Those capabilities are provided by implementations such as MIT Kerberos, Heimdal, or Active Directory.

The JDK functions strictly as a Kerberos client and service runtime.

Conclusion

Kerberos support in the JDK is both mature and deeply integrated. Through JAAS, the JVM can acquire credentials using password, keytab, or ticket cache. Through JGSS, it can establish secure, mutually authenticated contexts with remote services. Through SASL, it can extend this authentication model to non-HTTP protocols.

For architects and staff engineers, understanding these layers is essential when designing secure enterprise systems. For junior developers, gaining familiarity with JAAS, Subject, and GSSContext provides a strong foundation for working within corporate authentication environments.

Kerberos may not be fashionable, but it remains foundational. In the Java ecosystem, it is not an external add-on — it is part of the platform itself.

PostHeaderIcon [DotJs2025] Clarke’s Third Law in Action: Having Fun with ES Proxies

Any sufficiently advanced technology is indistinguishable from magic—or so Arthur C. Clarke conjured—yet ES Proxies embody this enigma, transfiguring mundane objects into oracles of orchestration. Christophe Porteneuve, frontend lead at Doctolib and Prototype.js alumnus, conjured this conjury at dotJS 2025, tracing a path from Rails’ metaprogramming to JS’s proxy prowess. A 1995 web warlock, Christophe chronicled proxies’ palette: traps transmuting traps, revocable realms, Immer’s immutable incantations.

Christophe’s chronicle commenced with OOP’s ontology: proxies as intercessors, wrappers weaving whims—get’s gaze, set’s scribe, apply’s invocation. Revocable’s rite: realms retractable, realms revocable—Proxy.revocable(target, handler) yielding revocable proxies, revoke’s rupture rending references. Christophe cavorted with conundrums: negative indices navigated (proxy[-1]), symbols’ summons (proxy[Symbol.iterator]()), even functions’ facades—proxied predicates, applicables as arguments.

Immer’s immutability: drafts’ dominion, mutations’ mirage—produce(base, draft => {draft.foo = 'bar'}) birthing branches sans sprawl. Christophe commended: Redux Toolkit’s reliance, React’s reducer rapport—deep derivations distilled, verbosity vanquished. Proxies’ pedigree: performance’s parity (descriptors’ drag dwarfed), ubiquity’s umbrella (ES6’s endowment, polyfills passé).

This thaumaturgy tantalizes: metaprogramming’s muse, DX’s deity—proxies as pandora’s portal.

Proxies’ Prismatic Patterns

Christophe cataloged traps: get’s glean, set’s seal—revocable’s recall, negative’s nuance. Functions’ finesse: predicates proxied, iterators invoked—magic’s mantle.

Immer’s Immutable Incantations

Drafts’ dominion: mutations’ masquerade, branches’ bloom—Redux’s reliance, React’s rapport. Christophe’s creed: verbosity’s vanquish, performance’s parity.


PostHeaderIcon [AWSReInforce2025] Cyber for Industry 4.0: What is CPS protection anyway? (NIS123)

Lecturer

Sean Gillson serves as Global Head of Cloud Alliances at Claroty, architecting solutions that bridge IT and OT security domains. Gillson Wilson leads the Security Competency for GSIs and ISVs at AWS, driving partner-enabled protection for cyber-physical systems across industrial environments.

Abstract

The presentation defines cyber-physical systems (CPS) protection within the context of IT/OT convergence, examining threat vectors that exploit interconnected industrial assets. Through architectural patterns and real-world deployments, it establishes specialized controls that maintain operational continuity while enabling digital transformation in manufacturing, energy, and healthcare sectors.

CPS Threat Landscape Evolution

Cyber-physical systems encompass operational technology (OT), IoT devices, and building management systems that increasingly connect to enterprise networks. This convergence delivers efficiency gains—predictive maintenance, remote monitoring, sustainability optimization—but expands the attack surface dramatically.

Traditional IT threats now target physical processes:

  • Ransomware encrypting PLC configurations
  • Supply chain compromise via firmware updates
  • Insider threats leveraging legitimate remote access

The 2021 Colonial Pipeline incident exemplifies how IT breaches cascade into physical disruption, highlighting the need for unified security posture.

IT/OT Convergence Architectural Patterns

Successful convergence requires deliberate segmentation while preserving data flow:

Level 0: Physical Processes → PLC/RTU
Level 1: Basic Control → SCADA/DCS
Level 2: Supervisory Control → Historian
Level 3: Operations → MES
Level 4: Business → ERP (IT Network)

Claroty implements micro-segmentation at Level 2/3 boundary using AWS Transit Gateway with Network Firewall rules that permit only known protocols (Modbus, OPC-UA) between zones.

Asset Discovery and Risk Prioritization

Industrial environments contain thousands of unmanaged devices. Claroty’s passive monitoring identifies:

  • Device inventory with firmware versions
  • Communication patterns and dependencies
  • Vulnerability mapping to CVSS and EPSS scores

{
  "asset": "Siemens S7-1500",
  "firmware": "V2.9.2",
  "vulnerabilities": ["CVE-2023-1234"],
  "risk_score": 9.2,
  "business_criticality": "high"
}

This contextual intelligence enables prioritization—patching a chiller controller impacts comfort; patching a turbine controller impacts revenue.
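That prioritization logic can be sketched as a simple sort over the fields shown in the JSON above. The Asset record and the weighting below are hypothetical illustrations, not Claroty's actual scoring model:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class RiskPrioritizer {
    /** Hypothetical asset record mirroring the JSON fields above. */
    public record Asset(String name, double riskScore, String businessCriticality) {}

    /** Order assets: high business criticality first, then descending risk score. */
    public static List<Asset> prioritize(List<Asset> assets) {
        List<Asset> sorted = new ArrayList<>(assets);
        sorted.sort(Comparator
                .comparing((Asset a) -> a.businessCriticality().equals("high") ? 0 : 1)
                .thenComparing(Comparator.comparingDouble(Asset::riskScore).reversed()));
        return sorted;
    }

    public static void main(String[] args) {
        List<Asset> assets = List.of(
                new Asset("Chiller controller", 8.1, "low"),
                new Asset("Siemens S7-1500", 9.2, "high"),
                new Asset("Historian server", 6.4, "high"));
        prioritize(assets).forEach(a -> System.out.println(a.name()));
    }
}
```

Ranking on business criticality before raw CVSS-style scores is what makes the revenue-impacting turbine controller outrank a comfort-impacting chiller, even when their vulnerability scores are close.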

Secure Remote Access Patterns

Industry 4.0 demands remote expertise. Traditional VPNs expose entire OT networks. The solution implements:

  • Zero-trust access via AWS Verified Access
  • Session recording and justification logging
  • Time-bound credentials tied to change windows

Engineers connect to bastion hosts in DMZ segments; protocol translation occurs through data diodes that permit only outbound historian data.

Edge-to-Cloud Security Fabric

AWS IoT Greengrass enables secure edge processing:

components:
  - com.claroty.asset-discovery
  - com.aws.secure-tunnel
local_storage: /opt/ot-data

Devices operate autonomously during connectivity loss, syncing vulnerability state when reconnected. Security Hub aggregates findings from edge agents alongside cloud workloads.

Regulatory and Compliance Framework

Standards evolve rapidly:

  • IEC 62443: Security levels for industrial automation
  • NIST CSF 2.0: OT-specific controls
  • EU NIS2 Directive: Critical infrastructure requirements

The architecture generates compliance evidence automatically—asset inventories, access logs, patch verification—reducing audit preparation from months to days.

Conclusion: Unified Security for Digital Industry

CPS protection requires specialized approaches that respect operational constraints while leveraging cloud-native controls. The convergence of IT and OT security creates resilient industrial systems that withstand cyber threats without compromising production. Organizations that implement layered defenses—asset intelligence, micro-segmentation, secure remote access—achieve Industry 4.0 benefits while maintaining safety and reliability.


PostHeaderIcon [GoogleIO2024] What’s New in the Web: Baseline Features for Interoperable Development

The web platform advances steadily, with interoperability as a key focus for enhancing developer confidence. Rachel Andrew’s session explored Baseline, an initiative that clarifies feature availability across browsers, aiding creators in adopting innovations reliably.

Understanding Baseline and Its Impact on Development

Rachel introduced Baseline as a mechanism to track when web features achieve cross-browser support, categorizing them as “widely available” or “newly available.” Launched at Google I/O 2023, it addresses challenges like keeping pace with standards, as 21% of developers cite this as a top hurdle per Google’s research.

Baseline integrates with resources like MDN, CanIUse, and web.dev, providing clear status indicators. Features qualify as newly available upon implementation in Chrome, Firefox, and Safari stables, progressing to widely available after 30 months. This timeline reflects adoption cycles, ensuring stability.

The initiative fosters collaboration among browser vendors, aligning on consistent APIs. Rachel emphasized how Baseline empowers informed decisions, reducing fragmentation and encouraging broader feature use.

Key Layout and Styling Enhancements

Size container queries, newly available, enable responsive designs based on element dimensions, revolutionizing adaptive layouts. Rachel demonstrated their utility in card components, where styles adjust dynamically without media queries.

The :has() pseudo-class, a “parent selector,” allows targeting based on child presence, simplifying conditional styling. Widely available, it enhances accessibility by managing states like form validations.

CSS nesting, inspired by preprocessors, permits embedded rules for cleaner code. Newly available, it improves maintainability while adhering to specificity rules.

Linear easing functions and trigonometric support in CSS expand animation capabilities, enabling precise effects without JavaScript.

Accessibility and JavaScript Improvements

The inert attribute, newly available, disables elements from interaction, aiding modal focus trapping and improving accessibility. Rachel highlighted its role in preventing unintended activations.

Compression streams in JavaScript, widely available, facilitate efficient data handling in streams, useful for real-time applications.

Declarative Shadow DOM, newly available, enables server-side rendering of custom elements, enhancing SEO and performance for web components.

Popover API, newly available, simplifies accessible overlays, reducing custom code for tooltips and menus.

Future Tools and Community Engagement

Rachel discussed upcoming integrations, like Rum Vision for usage metrics, aiding feature adoption analysis. She urged tooling providers to incorporate Baseline data, enhancing ecosystems.

The 2024 Baseline features promise further advancements, with web.dev offering updates. This collaborative effort aims to streamline development, making the web more robust.
