
[DevoxxBE2025] Robotics and GraalVM Native Libraries

Lecturer

Florian Enner is the co-founder and chief software engineer at HEBI Robotics, a company specializing in modular robotic systems. With a background in electrical and computer engineering from Carnegie Mellon University, Florian has extensive experience in developing software for real-time robotic control and has contributed to various open-source projects in the field. His work focuses on creating flexible, high-performance robotic solutions for industrial and research applications.

Abstract

This article delves into the integration of Java in custom robotics development, emphasizing real-time control systems and the potential of GraalVM native shared libraries to supplant segments of C++ codebases. It identifies key innovations in modular hardware components and software architectures, situated within HEBI Robotics’ approach to building autonomous and inspection-capable robots. Through demonstrations of robotic platforms and code compilations, the narrative highlights methodologies for cross-platform compatibility, performance optimization, and safety features. The discussion evaluates contextual challenges in embedded systems, implications for development efficiency and scalability, and provides insights into transitioning legacy code for enhanced productivity.

Modular Building Blocks for Custom Robotics

HEBI Robotics specializes in creating versatile components that function as high-end building blocks for constructing bespoke robotic systems, akin to advanced modular kits. These include actuators, cameras, mobile bases, and batteries, designed to enable rapid assembly of robots for diverse applications such as industrial inspections or autonomous navigation. The innovation lies in the actuators’ integrated design, combining motors, encoders, and controllers into compact units that can be daisy-chained, simplifying wiring and reducing complexity in multi-joint systems.

Contextually, this addresses the fragmentation in robotics where off-the-shelf solutions often fail to meet specific needs. By providing standardized yet customizable modules, HEBI facilitates experimentation in research and industry, allowing users to focus on higher-level logic rather than hardware intricacies. For instance, actuators support real-time control at 1 kHz, with features like voltage compensation for battery-powered operations and safety timeouts to prevent uncontrolled movements.

Methodologically, the software stack is cross-platform, supporting languages like Java, C++, Python, and MATLAB, ensuring broad accessibility. Demonstrations showcase robots like hexapods or wheeled platforms controlled via Ethernet or WiFi, highlighting the system’s robustness in real-world scenarios. Implications include lowered barriers for R&D teams, enabling faster iterations and safer deployments, particularly for novices or in educational settings.

Java’s Role in Real-Time Robotic Control

Java’s utilization in robotics challenges conventional views of its suitability for time-critical tasks, traditionally dominated by lower-level languages. At HEBI, Java powers control loops on embedded Linux systems, leveraging its rich ecosystem for productivity while achieving deterministic performance. Key to this is managing garbage collection pauses through careful allocation strategies and using scoped values for thread-local data.

The API abstracts hardware complexities, allowing Java clients to issue commands and receive feedback in real time. For example, a short Java program can orchestrate a robotic arm’s movements (the API shown here is simplified for illustration and does not reproduce the exact HEBI client classes):

import com.hebi.robotics.*;

public class ArmControl {
    public static void main(String[] args) {
        ModuleSet modules = ModuleSet.fromDiscovery("arm");
        Group group = modules.createGroup();
        Command cmd = Command.create();
        cmd.setPosition(new double[]{0.0, Math.PI/2, 0.0}); // Set joint positions
        group.sendCommand(cmd);
    }
}

This code discovers modules, forms a control group, and issues position commands. Safety integrates via command lifetimes: if no new command arrives within a specified period (e.g., 100 ms), the system shuts down, preventing hazards.
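The command-lifetime mechanism can be sketched in plain Java. The class below is a simplified, hypothetical stand-in (not the HEBI API): output stays enabled only while fresh commands keep arriving within the lifetime window.

```java
// Simplified sketch of a command-lifetime safety check. Illustrative
// only, not the real HEBI API: the output remains enabled only while
// commands keep arriving within the lifetime window.
public class CommandLifetimeDemo {
    static final long LIFETIME_MS = 100; // e.g. 100 ms, as in the talk

    private long lastCommandMs = Long.MIN_VALUE;

    void sendCommand(long nowMs) {
        lastCommandMs = nowMs; // every command refreshes the watchdog
    }

    boolean outputEnabled(long nowMs) {
        // If no command arrived within LIFETIME_MS, shut the output down.
        return nowMs - lastCommandMs <= LIFETIME_MS;
    }

    public static void main(String[] args) {
        CommandLifetimeDemo ctl = new CommandLifetimeDemo();
        ctl.sendCommand(0);
        System.out.println(ctl.outputEnabled(50));   // within lifetime
        System.out.println(ctl.outputEnabled(150));  // expired: shut down
    }
}
```

In the real system the same check runs on the actuator itself, so a crashed or disconnected client cannot leave a joint moving.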

Contextually, this approach contrasts with C++’s dominance in embedded systems, offering Java’s advantages in readability and rapid development. Analysis shows Java matching C++ latencies in control loops, with minor overheads mitigated by optimizations like ahead-of-time compilation. Implications extend to team composition: Java’s familiarity attracts diverse talent, accelerating project timelines while maintaining reliability.

GraalVM Native Compilation for Shared Libraries

GraalVM’s native image compilation transforms Java code into standalone executables or shared libraries, presenting an opportunity to replace performance-critical C++ components. At HEBI, this is explored for creating .so files callable from C++, blending Java’s productivity with native efficiency.

The process involves configuring GraalVM for reflections and resources, then compiling:

native-image --shared -jar mylib.jar -H:Name=mylib

This generates a shared library with JNI exports. A simple example compiles a Java class with methods exposed for C++ invocation:

public class Devoxx {
    public static int add(int a, int b) {
        return a + b;
    }
}

Compiled to libdevoxx.so, it’s callable from C++. Demonstrations show successful executions, with “Hello Devoxx” printed from Java-originated code.
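For completeness: with GraalVM’s Native Image C API, exported entry points are usually declared with @CEntryPoint rather than hand-written JNI glue. The sketch below assumes the GraalVM SDK dependency and a native-image build; the symbol name devoxx_add is illustrative, not taken from the talk.

```java
import org.graalvm.nativeimage.IsolateThread;
import org.graalvm.nativeimage.c.function.CEntryPoint;

public class DevoxxExports {
    // Exposed as the C symbol "devoxx_add" in the header that
    // native-image --shared generates; the IsolateThread parameter
    // carries the Graal isolate context required by every entry point.
    @CEntryPoint(name = "devoxx_add")
    public static int add(IsolateThread thread, int a, int b) {
        return a + b;
    }
}
```

The generated header declares the matching C prototype, so the C++ side can call the function after creating an isolate with graal_create_isolate.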

Contextualized within robotics’ need for low-latency libraries, this bridges languages, allowing Java for logic and C++ for interfaces. Performance evaluations indicate near-native speeds, with startup advantages over JVMs. Implications include simplified maintenance: Java’s safety features reduce bugs in controls, while native compilation ensures compatibility with existing C++ ecosystems.

Performance Analysis and Case Studies

Performance benchmarks compare GraalVM-built libraries to their C++ equivalents: control-loop latencies are comparable, with Java’s garbage collection managed for determinism. Case studies include snake-like inspectors navigating pipes, controlled via Java for path planning.

Analysis reveals GraalVM’s potential in embedded scenarios, where quick compilations (under 5 minutes for small libraries) enable rapid iterations. Safety features, like velocity limits, integrate seamlessly.

Implications: hybrid codebases combine the strengths of both languages, improving scalability for complex machines such as self-balancing platforms demonstrated with children aboard.

Future Prospects in Robotic Software Stacks

GraalVM promises polyglot shared libraries that can be called seamlessly across languages. HEBI envisions a control stack written entirely in Java, reducing its reliance on C++ and improving productivity.

Challenges remain in guaranteeing real-time behavior from ahead-of-time-compiled code; looking ahead, broader adoption of these techniques in robotics frameworks seems likely.

In conclusion, GraalVM empowers Java in robotics, merging efficiency with developer-friendly tools for innovative systems.

Links:

  • Lecture video: https://www.youtube.com/watch?v=md2JFgegN7U
  • Florian Enner on LinkedIn: https://www.linkedin.com/in/florian-enner-59b81466/
  • Florian Enner on GitHub: https://github.com/ennerf
  • HEBI Robotics website: https://www.hebirobotics.com/

Kerberos in the JDK: A Deep Technical Guide for Java Developers and Architects

Kerberos remains one of the most important authentication protocols in enterprise computing. Although it is often perceived as legacy infrastructure, it continues to underpin authentication in corporate networks, distributed data platforms, and Windows domains. For Java developers working in enterprise environments, understanding how Kerberos integrates with the JDK is not optional — it is frequently essential.

This article provides a comprehensive, architectural-level explanation of the Kerberos tooling available directly within the JDK. The objective is not merely to demonstrate configuration snippets, but to clarify how the pieces interact internally so that developers, architects, and staff engineers can reason about authentication flows, diagnose failures, and design secure systems with confidence.

Kerberos Support in the JDK: An Architectural Overview

The JDK provides native support for Kerberos through three primary layers: the internal Kerberos protocol implementation, JAAS (Java Authentication and Authorization Service), and JGSS (Java Generic Security Services API). These layers operate together to allow a Java process to authenticate as a principal, acquire credentials, and establish secure contexts with remote services.

At the lowest level, the JDK contains a complete Kerberos protocol stack implementation located in sun.security.krb5. This implementation performs the AS, TGS, and AP exchanges defined by the Kerberos protocol. Although this layer is not intended for direct application use, it is important to understand that the JVM does not require external Kerberos libraries to function as a Kerberos client.

Above the protocol implementation sits JAAS, which is responsible for authentication and credential acquisition. JAAS provides the abstraction layer that allows a Java process to log in as a principal using a password, a keytab, or an existing ticket cache.

Finally, the JDK exposes JGSS through the org.ietf.jgss package. JGSS is the API used to generate and validate Kerberos tokens, negotiate security mechanisms such as SPNEGO, and establish secure contexts between clients and services.

In practice, enterprise Java applications almost always use JAAS to obtain credentials and JGSS to perform service authentication.

JAAS and the Krb5LoginModule

JAAS serves as the authentication entry point for Kerberos within the JVM. The central class is javax.security.auth.login.LoginContext, which delegates authentication to one or more login modules defined in a JAAS configuration file.

For Kerberos authentication, the relevant module is com.sun.security.auth.module.Krb5LoginModule, which is bundled with the JDK. This login module supports multiple credential acquisition strategies, including interactive password login, keytab-based login for services, and reuse of an existing operating system ticket cache.

A typical JAAS configuration for a service using a keytab might look as follows:

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/etc/security/keytabs/app.keytab"
  principal="appuser@COMPANY.COM"
  storeKey=true
  doNotPrompt=true;
};

Once authentication succeeds, JAAS produces a Subject. This object represents the authenticated identity within the JVM and contains the Kerberos principal along with private credentials such as the Ticket Granting Ticket (TGT).

The Subject becomes the in-memory security identity for the application. Code can be executed under this identity using Subject.doAs, which ensures that downstream security operations use the acquired Kerberos credentials.
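A minimal sketch of the doAs mechanics, using a hand-built Subject instead of one produced by LoginContext.login(), so it runs without a KDC:

```java
import java.security.PrivilegedAction;
import javax.security.auth.Subject;
import javax.security.auth.kerberos.KerberosPrincipal;

public class SubjectDemo {
    public static void main(String[] args) {
        // In production the Subject comes from LoginContext.login();
        // here we assemble one manually just to show the doAs mechanics.
        Subject subject = new Subject();
        subject.getPrincipals().add(new KerberosPrincipal("appuser@COMPANY.COM"));

        // Code inside the action runs "as" the subject; downstream
        // security operations would pick up its Kerberos credentials.
        String name = Subject.doAs(subject, (PrivilegedAction<String>) () ->
                subject.getPrincipals().iterator().next().getName());
        System.out.println(name);
    }
}
```

Note that Subject.doAs is deprecated in recent JDKs in favor of Subject.callAs, but the execution model is the same.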

JGSS and Security Context Establishment

After credentials are acquired, the next step is to authenticate to a remote service. This is performed through the Java GSS-API implementation provided in the org.ietf.jgss package.

The central abstraction in JGSS is the GSSContext, which represents a security context between two peers. The GSSManager factory is used to create names, credentials, and contexts. During context establishment, Kerberos tickets are exchanged and validated transparently by the JVM.

On the client side, the application creates a GSSName representing the service principal, then initializes a GSSContext. The resulting token is transmitted to the server, often via an HTTP Authorization: Negotiate header.

On the server side, the application accepts the token using acceptSecContext, which validates the ticket, verifies authenticity, and establishes a shared session key. Mutual authentication can be requested so that both client and server verify each other’s identities.

Under the hood, JGSS relies on the Kerberos mechanism identified by OID 1.2.840.113554.1.2.2. When SPNEGO is involved, the negotiation mechanism uses OID 1.3.6.1.5.5.2 to determine the appropriate underlying security protocol.
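These OIDs can be constructed directly with the JGSS API; a small, self-contained check:

```java
import org.ietf.jgss.GSSException;
import org.ietf.jgss.Oid;

public class MechOids {
    public static void main(String[] args) throws GSSException {
        Oid krb5 = new Oid("1.2.840.113554.1.2.2"); // Kerberos v5 mechanism
        Oid spnego = new Oid("1.3.6.1.5.5.2");      // SPNEGO pseudo-mechanism
        System.out.println(krb5);
        System.out.println(spnego);
    }
}
```

These are the same Oid values you pass to GSSManager when creating credentials or contexts for a specific mechanism.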

Kerberos Configuration in the JVM

The JVM reads Kerberos configuration from a krb5.conf file, typically located under ${java.home}/conf/security (or lib/security on pre-JDK 9 releases) or specified via the -Djava.security.krb5.conf system property.

Several JVM system properties significantly influence Kerberos behavior. For example, enabling -Dsun.security.krb5.debug=true produces extremely detailed protocol-level logs, including encryption types, ticket exchanges, and key version numbers. This flag is invaluable when diagnosing authentication failures.

Another important property is -Djavax.security.auth.useSubjectCredsOnly. When set to true (the default), the JVM will only use credentials present in the current Subject. When set to false, the JVM may fall back to native operating system credentials, which is often necessary in SPNEGO-enabled web applications.

Ticket Cache and Operating System Integration

The JDK can integrate with an operating system’s Kerberos ticket cache. On Unix systems, this typically corresponds to the cache generated by the kinit command. JAAS can be configured with useTicketCache=true to reuse these credentials instead of requiring a password or keytab.
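A JAAS entry reusing the OS ticket cache might look as follows (the entry name is illustrative; the cache must already have been populated, e.g. by kinit):

```
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true
  doNotPrompt=true;
};
```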

On Windows, the JVM can integrate with the Local Security Authority (LSA), allowing Java applications to authenticate transparently as the currently logged-in domain user.

SASL and GSSAPI Support

Beyond HTTP authentication, the JDK also provides SASL support through the javax.security.sasl package. The GSSAPI mechanism enables Kerberos authentication for protocols such as LDAP, SMTP, and custom TCP services.

Technologies such as Apache Kafka, enterprise LDAP servers, and distributed data platforms frequently leverage SASL/GSSAPI under the hood. From the JVM’s perspective, the mechanism ultimately delegates to the same JGSS implementation used for HTTP-based SPNEGO authentication.

Encryption Types and Cryptographic Considerations

Modern JDK versions support AES-based encryption types, including AES-128 and AES-256. Older algorithms such as DES have been removed or disabled due to security concerns. Since Java now ships with unlimited cryptographic strength enabled by default, no additional policy configuration is typically required for strong Kerberos encryption.

Encryption type mismatches between the KDC and the JVM are a frequent source of authentication errors, particularly in legacy environments.

Debugging and Operational Realities

Most Kerberos failures in Java applications are not caused by cryptographic defects but by configuration issues. Common causes include DNS misconfiguration, principal mismatches, clock skew between systems, and incorrect keytab versions.

Effective troubleshooting requires correlating JVM debug logs with KDC logs and verifying ticket cache state using operating system tools. Engineers who understand the protocol exchange sequence can usually isolate failures quickly by determining whether the breakdown occurs during AS exchange, TGS exchange, or service ticket validation.

What the JDK Does Not Provide

It is important to clarify that the JDK does not include a Kerberos Key Distribution Center, administrative tools such as kadmin, or command-line utilities for ticket management. Those capabilities are provided by implementations such as MIT Kerberos, Heimdal, or Active Directory.

The JDK functions strictly as a Kerberos client and service runtime.

Conclusion

Kerberos support in the JDK is both mature and deeply integrated. Through JAAS, the JVM can acquire credentials using password, keytab, or ticket cache. Through JGSS, it can establish secure, mutually authenticated contexts with remote services. Through SASL, it can extend this authentication model to non-HTTP protocols.

For architects and staff engineers, understanding these layers is essential when designing secure enterprise systems. For junior developers, gaining familiarity with JAAS, Subject, and GSSContext provides a strong foundation for working within corporate authentication environments.

Kerberos may not be fashionable, but it remains foundational. In the Java ecosystem, it is not an external add-on — it is part of the platform itself.

[DotJs2025] Clarke’s Third Law in Action: Having Fun with ES Proxies

Any sufficiently advanced technology is indistinguishable from magic, as Arthur C. Clarke observed, and ES Proxies often feel exactly that way, turning ordinary objects into programmable intermediaries. Christophe Porteneuve, frontend lead at Doctolib and a Prototype.js alumnus who has been building for the web since 1995, explored them at dotJS 2025, tracing a line from Rails-style metaprogramming to JavaScript’s own proxy machinery: handler traps, revocable proxies, and the immutable-update tricks behind Immer.

He began with the core model: a proxy wraps a target and intercepts operations through handler traps such as get (property reads), set (property writes), and apply (function calls). Proxy.revocable(target, handler) returns both a proxy and a revoke function; once revoked, any operation on the proxy throws, so references can be handed out and later withdrawn. From there he toured playful applications: negative indexing (proxy[-1]), intercepting well-known symbols such as Symbol.iterator, and proxying functions so that calls themselves become interceptable.

Immer shows proxies at production scale. produce(base, draft => {draft.foo = 'bar'}) lets code mutate a draft while Immer records the changes through proxy traps and emits a new immutable state, structurally sharing the unchanged branches. Redux Toolkit relies on this, which is why its reducers can safely use mutating syntax, and it pairs naturally with React’s reducer patterns. On practicalities, Christophe noted that proxy overhead is usually dwarfed by property-descriptor gymnastics, and that proxies have shipped in every engine since ES2015, so no polyfills are required.

The takeaway: proxies are a potent metaprogramming tool and a genuine developer-experience boost when used judiciously.

Proxies’ Prismatic Patterns

Christophe cataloged the principal traps, get and set foremost, demonstrated revocable proxies, and showed refinements such as negative indexing. Proxying functions extends the same interception model to calls, so predicates and iterators can be generated on demand.

Immer’s Immutable Incantations

Immer’s drafts make updates read like mutations while producing immutable results, with unchanged branches shared between states; this mechanism underpins Redux Toolkit and fits React reducers naturally. Christophe’s verdict: far less verbosity at essentially no performance cost.


[AWSReInforce2025] Cyber for Industry 4.0: What is CPS protection anyway? (NIS123)

Lecturer

Sean Gillson serves as Global Head of Cloud Alliances at Claroty, architecting solutions that bridge IT and OT security domains. Gillson Wilson leads the Security Competency for GSIs and ISVs at AWS, driving partner-enabled protection for cyber-physical systems across industrial environments.

Abstract

The presentation defines cyber-physical systems (CPS) protection within the context of IT/OT convergence, examining threat vectors that exploit interconnected industrial assets. Through architectural patterns and real-world deployments, it establishes specialized controls that maintain operational continuity while enabling digital transformation in manufacturing, energy, and healthcare sectors.

CPS Threat Landscape Evolution

Cyber-physical systems encompass operational technology (OT), IoT devices, and building management systems that increasingly connect to enterprise networks. This convergence delivers efficiency gains—predictive maintenance, remote monitoring, sustainability optimization—but expands the attack surface dramatically.

Traditional IT threats now target physical processes:

  • Ransomware encrypting PLC configurations
  • Supply chain compromise via firmware updates
  • Insider threats leveraging legitimate remote access

The 2021 Colonial Pipeline incident exemplifies how IT breaches cascade into physical disruption, highlighting the need for unified security posture.

IT/OT Convergence Architectural Patterns

Successful convergence requires deliberate segmentation while preserving data flow:

Level 0: Physical Processes → PLC/RTU
Level 1: Basic Control → SCADA/DCS
Level 2: Supervisory Control → Historian
Level 3: Operations → MES
Level 4: Business → ERP (IT Network)

Claroty implements micro-segmentation at Level 2/3 boundary using AWS Transit Gateway with Network Firewall rules that permit only known protocols (Modbus, OPC-UA) between zones.

Asset Discovery and Risk Prioritization

Industrial environments contain thousands of unmanaged devices. Claroty’s passive monitoring identifies:

  • Device inventory with firmware versions
  • Communication patterns and dependencies
  • Vulnerability mapping to CVSS and EPSS scores
{
  "asset": "Siemens S7-1500",
  "firmware": "V2.9.2",
  "vulnerabilities": ["CVE-2023-1234"],
  "risk_score": 9.2,
  "business_criticality": "high"
}

This contextual intelligence enables prioritization—patching a chiller controller impacts comfort; patching a turbine controller impacts revenue.

Secure Remote Access Patterns

Industry 4.0 demands remote expertise. Traditional VPNs expose entire OT networks. The solution implements:

  • Zero-trust access via AWS Verified Access
  • Session recording and justification logging
  • Time-bound credentials tied to change windows

Engineers connect to bastion hosts in DMZ segments; protocol translation occurs through data diodes that permit only outbound historian data.

Edge-to-Cloud Security Fabric

AWS IoT Greengrass enables secure edge processing:

components:
  - com.claroty.asset-discovery
  - com.aws.secure-tunnel
local_storage: /opt/ot-data

Devices operate autonomously during connectivity loss, syncing vulnerability state when reconnected. Security Hub aggregates findings from edge agents alongside cloud workloads.

Regulatory and Compliance Framework

Standards evolve rapidly:

  • IEC 62443: Security levels for industrial automation
  • NIST CSF 2.0: OT-specific controls
  • EU NIS2 Directive: Critical infrastructure requirements

The architecture generates compliance evidence automatically—asset inventories, access logs, patch verification—reducing audit preparation from months to days.

Conclusion: Unified Security for Digital Industry

CPS protection requires specialized approaches that respect operational constraints while leveraging cloud-native controls. The convergence of IT and OT security creates resilient industrial systems that withstand cyber threats without compromising production. Organizations that implement layered defenses—asset intelligence, micro-segmentation, secure remote access—achieve Industry 4.0 benefits while maintaining safety and reliability.


[GoogleIO2024] What’s New in the Web: Baseline Features for Interoperable Development

The web platform advances steadily, with interoperability as a key focus for enhancing developer confidence. Rachel Andrew’s session explored Baseline, an initiative that clarifies feature availability across browsers, aiding creators in adopting innovations reliably.

Understanding Baseline and Its Impact on Development

Rachel introduced Baseline as a mechanism to track when web features achieve cross-browser support, categorizing them as “widely available” or “newly available.” Launched at Google I/O 2023, it addresses challenges like keeping pace with standards, as 21% of developers cite this as a top hurdle per Google’s research.

Baseline integrates with resources like MDN, CanIUse, and web.dev, providing clear status indicators. Features qualify as newly available upon implementation in Chrome, Firefox, and Safari stables, progressing to widely available after 30 months. This timeline reflects adoption cycles, ensuring stability.

The initiative fosters collaboration among browser vendors, aligning on consistent APIs. Rachel emphasized how Baseline empowers informed decisions, reducing fragmentation and encouraging broader feature use.

Key Layout and Styling Enhancements

Size container queries, newly available, enable responsive designs based on element dimensions, revolutionizing adaptive layouts. Rachel demonstrated their utility in card components, where styles adjust dynamically without media queries.

The :has() pseudo-class, a “parent selector,” allows targeting based on child presence, simplifying conditional styling. Widely available, it enhances accessibility by managing states like form validations.

CSS nesting, inspired by preprocessors, permits embedded rules for cleaner code. Newly available, it improves maintainability while adhering to specificity rules.

Linear easing functions and trigonometric support in CSS expand animation capabilities, enabling precise effects without JavaScript.

Accessibility and JavaScript Improvements

The inert attribute, newly available, disables elements from interaction, aiding modal focus trapping and improving accessibility. Rachel highlighted its role in preventing unintended activations.

Compression streams in JavaScript, widely available, facilitate efficient data handling in streams, useful for real-time applications.

Declarative Shadow DOM, newly available, enables server-side rendering of custom elements, enhancing SEO and performance for web components.

Popover API, newly available, simplifies accessible overlays, reducing custom code for tooltips and menus.

Future Tools and Community Engagement

Rachel discussed upcoming integrations, like RUMvision for usage metrics, aiding feature adoption analysis. She urged tooling providers to incorporate Baseline data, enhancing ecosystems.

The 2024 Baseline features promise further advancements, with web.dev offering updates. This collaborative effort aims to streamline development, making the web more robust.


[SpringIO2025] Modern Authentication Demystified: A Deep Dive into Spring Security’s Latest Innovations @ Spring IO

Lecturer

Andreas Falk is an Executive Consultant at CGI, specializing in software architecture, application and cloud security, and identity and access management (IAM). As an iSAQB Certified Architect, security expert, trainer, and public speaker, he has over 25 years of experience in enterprise application development. Falk is renowned for his contributions to Spring Security, including workshops on reactive security and presentations on modern authentication mechanisms.

Abstract

This article provides an in-depth analysis of recent advancements in Spring Security, focusing on features introduced from version 6.3 onward. It examines compromised password checking, OAuth token exchange, one-time token login, and passkey authentication, alongside improvements in filter chain management and emerging specifications like Demonstrating Proof of Possession (DPoP). The discussion elucidates implementation strategies, security implications, and best practices for integrating these innovations into Spring-based applications.

Enhancements in Password Management and Token Handling

Spring Security 6.3 introduces compromised password checking, leveraging Troy Hunt’s Have I Been Pwned API to validate passwords against known breaches. Passwords are hashed before transmission, ensuring privacy. This feature integrates into password policies, enforcing minimum lengths (e.g., 12 characters) and maximums (up to 100), while permitting diverse characters, including emojis, to enhance security without undue restrictions.
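The privacy property comes from the Have I Been Pwned range API’s k-anonymity scheme: the password is hashed with SHA-1 locally, and only the first five hex characters of the digest are transmitted. A self-contained sketch of the client-side step:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

public class PwnedPrefix {
    public static void main(String[] args) throws Exception {
        // Hash locally; only the 5-character prefix leaves the machine.
        byte[] sha1 = MessageDigest.getInstance("SHA-1")
                .digest("password".getBytes(StandardCharsets.UTF_8));
        String hex = HexFormat.of().withUpperCase().formatHex(sha1);
        System.out.println(hex.substring(0, 5)); // prefix sent to the API
        // The API returns every known suffix for that prefix; the client
        // checks for its own suffix without revealing the full hash.
    }
}
```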

Implementation involves registering a CompromisedPasswordChecker bean:

@Bean
public CompromisedPasswordChecker compromisedPasswordChecker() {
    return new HaveIBeenPwnedRestApiPasswordChecker();
}

This checker can be invoked in registration endpoints or policy validators, rejecting compromised inputs like “password” while accepting stronger alternatives.

OAuth token exchange addresses scenarios with multiple microservices, reducing attack surfaces by exchanging short-lived JSON Web Tokens (JWTs) for domain-specific ones. A client obtains a JWT, exchanges it at the authorization server for a scoped token, minimizing risks in high-stakes services like payments versus low-risk ones like recommendations.

Advanced Authentication Mechanisms: One-Time Tokens and Passkeys

One-time tokens (OTT) secure sensitive actions, such as password resets or transaction approvals, without persistent sessions. Spring Security supports OTT generation and validation, configurable via beans like OneTimeTokenService. Users receive tokens via email or SMS, granting temporary access.

Passkeys, based on WebAuthn and FIDO2, offer passwordless, phishing-resistant authentication using biometric or hardware keys. Spring Security’s WebAuthn support integrates this seamlessly, covering both registration and authentication flows. Devices like YubiKeys or smartphones generate public-private key pairs, with public keys stored server-side.

Example configuration:

@Bean
public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
    http
        .authorizeHttpRequests(authz -> authz
            .anyRequest().authenticated()
        )
        .webAuthn(webAuthn -> webAuthn
            .rpName("Example Application")
            .rpId("example.com")
            .allowedOrigins("https://example.com")
        );
    return http.build();
}

This enables biometric logins, eliminating passwords and enhancing usability.

Filter Chain Improvements and Emerging Specifications

Spring Security’s filter chain, often a source of configuration errors, now includes error detection for misordered chains in version 6.3+. Annotations like @Order ensure proper sequencing, preventing silent failures.

Version 6.4 adds OAuth2 support for the RestClient, automatic bearer token inclusion, and OpenSAML 5 for SAML assertions. Kotlin enhancements improve pre/post-filter annotations.

Demonstrating Proof of Possession (DPoP) in 6.5 binds tokens cryptographically to clients, thwarting theft in single-page applications (SPAs). Mutual TLS complements this for server-side frameworks. JWT profiles specify token types (e.g., access tokens), ensuring intended usage.

Pushed Authorization Requests (PAR) secure initial OAuth flows by posting parameters first, receiving a unique link devoid of sensitive data.

Implications for Secure Application Development

These innovations mitigate root causes of breaches—weak passwords and token vulnerabilities—promoting passwordless paradigms. Token exchange and DPoP reduce attack surfaces in microservices architectures. However, developers must address persistence for passkeys (e.g., database storage) and secure actuator endpoints.

Future versions (7.0+) mandate lambda DSL for configurations, removing deprecated code and defaulting to Proof Key for Code Exchange (PKCE) for all clients. Authorization server advancements, like multi-tenancy and backend-for-frontend patterns, facilitate scalable, secure ecosystems.

In conclusion, Spring Security’s evolution empowers developers to build resilient, user-friendly authentication systems, aligning with modern threats and standards.


[AWSReInventPartnerSessions2024] Powering Technology Lifecycle Innovation with AWS Services &amp; Amazon Q (AIM124)

Lecturer

Luke Higgins serves as the Chief Architect for global asset and automation deployment at Accenture, where he focuses on integrating artificial intelligence and automation into service delivery frameworks. With over twenty years at Accenture, Luke has contributed to numerous innovations, including award-winning projects in generative AI applications. Kishor Panth leads global asset engineering at Accenture, overseeing the development of software assets, tools, and automations for client solutions. With more than twenty years in the firm, Kishor specializes in applying AI and automation to software development and platform management.

Abstract

This comprehensive analysis delves into the integration of generative artificial intelligence within service delivery platforms, drawing from Accenture’s experiences with its proprietary GenWizard system. It explores the contextual evolution from traditional automation to AI-driven workflows, methodological approaches to embedding Amazon Q and foundation models, and the broader implications for operational efficiency, decision-making, and innovation across technology lifecycles. By examining real-world applications in software engineering, application management, and platform optimization, the article highlights how these technologies foster accelerated project timelines and customized client outcomes.

Evolution from Automation to Generative AI in Service Delivery

The journey toward incorporating generative AI in technology services reflects a shift from rule-based automation to intelligent, adaptive systems. Initially, efforts focused on streamlining processes through predefined scripts and AI models for tasks like anomaly detection and predictive maintenance. However, the advent of large language models introduced capabilities for natural language processing and code generation, transforming how organizations approach software development and operations.

At Accenture, this evolution culminated in the GenWizard platform, designed to enhance service areas by leveraging AWS services such as Amazon Q. The platform addresses challenges in managing complex technology lifecycles, where traditional methods often led to inefficiencies in code migration, application rationalization, and incident resolution. By infusing generative AI, GenWizard enables forward and reverse engineering, allowing for rapid analysis of legacy systems and generation of modern equivalents.

This transition was driven by the need to handle vast codebases—often millions of lines—across diverse languages like COBOL, Java, and .NET. Reverse engineering, for instance, involves creating visual representations of code structures to identify dependencies and inefficiencies, while forward engineering automates the creation of new code based on specifications. The integration of Amazon Q facilitates natural language queries, making these processes accessible to non-experts and accelerating timelines from months to days.
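The dependency-mapping step of reverse engineering can be illustrated with a small sketch. This is a hypothetical example using Python's standard `ast` module, not the GenWizard implementation, which targets languages such as COBOL, Java, and .NET:

```python
import ast

def extract_dependencies(module_name, source):
    """Return (module, imported-module) edges found in a Python source string."""
    edges = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            edges.extend((module_name, alias.name) for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            edges.append((module_name, node.module))
    return edges

source = "import os\nfrom json import loads\n"
print(extract_dependencies("billing", source))
# [('billing', 'os'), ('billing', 'json')]
```

Collected over an entire codebase, edges like these form the dependency graph from which visual representations and inefficiency reports can be derived.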

Methodological Integration of AWS Services in GenWizard

GenWizard’s architecture employs a multi-agent framework powered by AWS foundation models, where agents specialize in tasks such as code analysis, generation, and testing. This methodology draws from software development best practices, incorporating continuous integration and deployment loops to ensure reliability.

A key component is the use of Amazon Q for contextual understanding and response generation. For example, in code migration, agents analyze source code, infer intent, and produce target language equivalents, followed by automated testing against predefined criteria. This reduces human error and enhances consistency, as demonstrated in projects converting legacy mainframe applications to cloud-native formats.

In application management, the platform’s event operations module rationalizes incident tickets by identifying duplicates and correlating issues with configuration management databases. This involves clustering related events and suggesting resolutions from a knowledge base, significantly cutting resolution times.
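The duplicate-rationalization step described above can be sketched as follows. The token-based similarity measure and threshold are illustrative assumptions, not details of the actual event operations module:

```python
def jaccard(a, b):
    """Token-set Jaccard similarity between two ticket summaries."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def cluster_tickets(tickets, threshold=0.6):
    """Greedily group tickets whose summaries exceed the similarity threshold."""
    clusters = []
    for ticket in tickets:
        for cluster in clusters:
            if jaccard(ticket, cluster[0]) >= threshold:
                cluster.append(ticket)  # likely duplicate of this cluster
                break
        else:
            clusters.append([ticket])   # no match: start a new cluster
    return clusters

tickets = [
    "database connection timeout on host db01",
    "connection timeout database host db01",
    "disk full on app server",
]
print(len(cluster_tickets(tickets)))  # 2 clusters
```

In a production system, each resulting cluster would then be correlated against the configuration management database and matched to candidate resolutions from the knowledge base.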

Platform engineering benefits from predictive analytics, where AI models forecast resource needs and optimize configurations. The methodology emphasizes data-driven insights, using metadata from development workflows to inform decisions.

Code sample illustrating a basic agentic workflow in Python, simulating code generation and testing. The helper functions below are an illustrative sketch, not part of the actual GenWizard API:

```python
def generate_code(specification):
    # In a real pipeline this step would call a foundation model (for example
    # via Amazon Q); here we return a trivial stub derived from the specification.
    return f"def solution(x):\n    return x  # TODO: implement '{specification}'"

def run_tests(code, test_cases):
    # Execute the generated code and validate it against expected outputs,
    # mirroring the automated testing stage of the agentic loop.
    namespace = {}
    exec(code, namespace)
    solution = namespace["solution"]
    return all(solution(inp) == expected for inp, expected in test_cases)

generated = generate_code("identity transform")
print(run_tests(generated, [(1, 1), (2, 2)]))  # True for this trivial stub
```

PostHeaderIcon Some Points on the Defense Agreements between France and the United Arab Emirates

In the complex architecture of international relations, certain defense treaties stand out for an operational density that goes beyond the mere framework of diplomatic cooperation. This is precisely the case of the organic bond between France and the United Arab Emirates (UAE). While it is often said that France is the Emirati federation's closest strategic partner, it is worth examining the legal and military foundations that give this agreement a singular binding force, surpassing in some respects the commitments undertaken within the Atlantic Alliance.

A Historical Legacy: From Arms Sales to a Shared Sanctuary

The genesis of this alliance dates back to the mid-1970s, shortly after the birth of the Emirati federation. Very early on, under the impetus of Sheikh Zayed, the Emirates made the sovereign choice to diversify their security partners so as not to depend exclusively on Anglo-Saxon influence. France, seeing an opportunity to anchor itself in a region vital to its energy and geopolitical interests, responded with a first military cooperation agreement in 1977.

The real turning point, however, came in 2009. That year, the relationship changed in nature with the signing of a revised defense treaty and the creation of the French Forces in the United Arab Emirates (FFEAU). For the first time in its modern history, France established a permanent military base abroad not in a former colonial territory but at the express request of a sovereign partner state. This joint land, naval, and air deployment turned France into a littoral power of the Gulf, indissolubly linking its security destiny to that of Abu Dhabi.

The Nature of the Agreement: A "High-Intensity" Security Clause

The 2009 agreement rests on Articles 3 and 4, which stipulate that France commits to "participating in the defense of the security, sovereignty, territorial integrity, and independence" of the Emirates. Unlike conventional cooperation agreements limited to training or equipment, this text defines a genuine mutual assistance clause.

In capability terms, the partnership is backed by unprecedented industrial integration. The Emirates' recent acquisition of 80 Rafale fighters in the F4 standard illustrates this drive for full interoperability. In a major crisis, Emirati and French forces share the same technological platforms, the same intelligence, and the same combat doctrines, creating in effect a ready-to-deploy coalition army.

The Paradox of Constraint: A 2009 Agreement More Binding than the Washington Treaty

The most remarkable aspect of this partnership lies in its comparison with NATO. Although Article 5 of the North Atlantic Treaty is often perceived as the pinnacle of security guarantees, a close legal reading reveals that the bilateral Franco-Emirati agreement is, in many respects, more prescriptive.

Where NATO's Article 5 leaves room for subjectivity—each member state undertaking to take "such action as it deems necessary," which does not automatically imply an armed response—the 2009 treaty commits France to explicit military assistance. Moreover, the implementation of Atlantic solidarity is subject to a process of political consultation and consensus within the North Atlantic Council, a procedure that can prove slow or prone to diplomatic deadlock.

By contrast, Paris's commitment to Abu Dhabi is immediate and bilateral. The physical presence of French troops on Emirati soil acts as a strategic tripwire: any aggression against Emirati territory would mechanically place French forces in a situation of self-defense. In short, while NATO remains a collective life-insurance policy whose trigger clauses are left to the allies' judgment, the France-UAE agreement resembles a close-protection contract in which the protector is already deployed alongside its partner, ready to commit the full measure of its firepower.

In a Middle East in perpetual flux, this treaty remains the keystone of French strategy in the Indo-Pacific, proving that a power's credibility sometimes lies less in the number of its allies than in the clarity of its commitments.

PostHeaderIcon [DevoxxBE2025] Behavioral Software Engineering

Lecturer

Mario Fusco is a Senior Principal Software Engineer at Red Hat, where he leads the Drools project, a business rules management system, and contributes to initiatives like LangChain4j. As a Java Champion and open-source advocate, he co-authored “Java 8 in Action” with Raoul-Gabriel Urma and Alan Mycroft, published by Manning. Mario frequently speaks at conferences on topics ranging from functional programming to domain-specific languages.

Abstract

This examination draws parallels between behavioral economics and software engineering, highlighting cognitive biases that distort rational decision-making in technical contexts. It elucidates key heuristics identified by economists like Daniel Kahneman and Amos Tversky, situating them within engineering practices such as benchmarking, tool selection, and architectural choices. Through illustrative examples of performance evaluation flaws and hype-driven adoptions, the narrative scrutinizes methodological influences on project outcomes. Ramifications for collaborative dynamics, innovation barriers, and professional development are explored, proposing mindfulness as a countermeasure to enhance engineering efficacy.

Foundations of Behavioral Economics and Rationality Myths

Classical economic models presupposed fully efficient markets populated by perfectly logical agents, often termed Homo Economicus, who maximize utility through impeccable reasoning. However, pioneering work by psychologists Daniel Kahneman and Amos Tversky in the late 1970s challenged this paradigm, demonstrating that human judgment is riddled with systematic errors. Their prospect theory, for instance, revealed how individuals weigh losses more heavily than equivalent gains, leading to irrational risk aversion or seeking behaviors. This laid the groundwork for behavioral economics, which integrates psychological insights into economic analysis to explain deviations from predicted rational conduct.

In software engineering, a parallel illusion persists: the notion of an idealized engineer—a technical counterpart to Homo Economicus—who approaches problems with unerring logic and objectivity. Yet, engineers are susceptible to the same mental shortcuts that Kahneman and Tversky cataloged. These heuristics, evolved for quick survival decisions in ancestral environments, often mislead in modern technical scenarios. For example, the anchoring effect—where initial information disproportionately influences subsequent judgments—can skew performance assessments. An engineer might fixate on a preliminary benchmark result, overlooking confounding variables like hardware variability or suboptimal test conditions.

The availability bias compounds this, prioritizing readily recalled information over comprehensive data. If recent experiences involve a particular technology failing, an engineer might unduly favor alternatives, even if statistical evidence suggests otherwise. Contextualized within the rapid evolution of software tools, these biases amplify during hype cycles, where media amplification creates illusory consensus. Implications extend to resource allocation: projects may pursue fashionable solutions, diverting efforts from proven, albeit less glamorous, approaches.

Heuristics in Performance Evaluation and Tool Adoption

Performance benchmarking exemplifies how cognitive shortcuts undermine objective analysis. The availability heuristic leads engineers to overemphasize memorable failures, such as a vivid recollection of a slow database query, while discounting broader datasets. This can result in premature optimizations or misguided architectural pivots. Similarly, anchoring occurs when initial metrics set unrealistic expectations; a prototype’s speed on high-end hardware might bias perceptions of production viability.

Tool adoption is equally fraught. The pro-innovation bias fosters an uncritical embrace of novel technologies, often without rigorous evaluation. Engineers might adopt container orchestration systems like Kubernetes for simple applications, incurring unnecessary complexity. The bandwagon effect reinforces this, as perceived peer adoption creates social proof, echoing Tversky’s work on conformity under uncertainty.

The not-invented-here syndrome further distorts choices, prompting reinvention of wheels due to overconfidence in proprietary solutions. Framing effects alter problem-solving: the same requirement, phrased differently—e.g., “build a scalable service” versus “optimize for cost”—yields divergent designs. Examples from practice include teams favoring microservices for “scalability” when monolithic structures suffice, driven by availability of success stories from tech giants.

Analysis reveals these heuristics degrade quality: biased evaluations lead to inefficient code, while hype-driven adoptions inflate maintenance costs. Implications urge structured methodologies, such as A/B testing or peer reviews, to counteract intuitive pitfalls.

Biases in Collaborative and Organizational Contexts

Team interactions amplify individual biases, creating collective delusions. The curse of knowledge hinders communication: experts assume shared understanding, leading to ambiguous requirements or overlooked edge cases. Hyperbolic discounting prioritizes immediate deliverables over long-term maintainability, accruing technical debt.

Organizational politics exacerbate these: non-technical leaders impose decisions, as in mandating unproven tools based on superficial appeal. The sunk cost fallacy sustains failing projects, ignoring opportunity costs. The Dunning-Kruger effect, where incompetence breeds overconfidence, manifests in unqualified critiques of sound engineering.

Confirmation bias selectively affirms preconceptions, dismissing contradictory evidence. In code reviews, this might involve defending flawed implementations by highlighting partial successes. Contextualized within agile methodologies, these biases undermine iterative improvements, fostering resistance to refactoring.

Implications for dynamics: eroded trust hampers collaboration, reducing innovation. Analysis suggests diverse teams dilute biases, as varied perspectives challenge assumptions.

Strategies to Mitigate Biases in Engineering Practices

Mitigation begins with awareness: educating on Kahneman’s System 1 (intuitive) versus System 2 (deliberative) thinking encourages reflective pauses. Structured decision frameworks, like weighted scoring for tool selection, counteract anchoring and availability.
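A weighted-scoring framework of the kind mentioned above can be sketched in a few lines. The criteria, weights, and candidate scores below are invented for illustration:

```python
def weighted_score(scores, weights):
    """Combine per-criterion scores (0-10) using normalized weights."""
    total_weight = sum(weights.values())
    return sum(scores[c] * w for c, w in weights.items()) / total_weight

# Criteria and weights are fixed by the team *before* any candidate is named,
# which blunts anchoring and bandwagon effects during the comparison.
weights = {"maturity": 3, "team_familiarity": 2, "operational_cost": 2, "performance": 1}
candidates = {
    "tool_a": {"maturity": 9, "team_familiarity": 8, "operational_cost": 5, "performance": 6},
    "tool_b": {"maturity": 6, "team_familiarity": 4, "operational_cost": 8, "performance": 9},
}
ranked = sorted(candidates, key=lambda t: weighted_score(candidates[t], weights), reverse=True)
print(ranked[0])  # tool_a under these weights
```

The value lies less in the arithmetic than in the discipline: agreeing on criteria and weights up front forces System 2 deliberation before System 1 intuitions can anchor the outcome.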

For performance, blind testing—evaluating without preconceptions—promotes objectivity. Debiasing techniques, such as devil’s advocacy, challenge bandwagon tendencies. Organizational interventions include bias training and diverse hiring to foster balanced views.

In practice, adopting evidence-based approaches—rigorous benchmarking protocols—enhances outcomes. Implications: mindful engineering boosts efficiency, reducing rework. Future research could quantify bias impacts via metrics like defect rates.

In essence, recognizing human frailties transforms engineering from intuitive art to disciplined science, yielding superior software.

Links:

  • Lecture video: https://www.youtube.com/watch?v=Aa2Zn8WFJrI
  • Mario Fusco on LinkedIn: https://www.linkedin.com/in/mariofusco/
  • Mario Fusco on Twitter/X: https://twitter.com/mariofusco
  • Red Hat website: https://www.redhat.com/

PostHeaderIcon [KotlinConf2025] Blueprints for Scale: What AWS Learned Building a Massive Multiplatform Project

Ian Botsford and Matis Lazdins from Amazon Web Services (AWS) shared their experiences and insights from developing the AWS SDK for Kotlin, a truly massive multiplatform project. This session provided a practical blueprint for managing the complexities of a large-scale Kotlin Multiplatform (KMP) project, offering firsthand lessons on design, development, and scaling. The speakers detailed the strategies they adopted to maintain sanity while dealing with a codebase that spans over 300 services and targets eight distinct platforms.

Architectural and Development Strategies

Botsford and Lazdins began by breaking down the project’s immense scale, explaining that it is distributed across four different repositories and consists of nearly 500 Gradle projects. They emphasized the importance of a well-defined project structure and the strategic use of Gradle to manage dependencies and build processes. A key lesson they shared was the necessity of designing for Kotlin Multiplatform from the very beginning, rather than attempting to retrofit it later. They also highlighted the critical role of maintaining backward compatibility, a practice that is essential for a project with such a large user base. The speakers explained the various design trade-offs they had to make and how these decisions ultimately shaped the project’s architecture and long-term sustainability.

The Maintainer Experience

The discussion moved beyond technical architecture to focus on the human element of maintaining such a vast project. Lazdins spoke about the importance of automating repetitive and mundane processes to free up maintainers’ time for more complex tasks. He detailed the implementation of broad checks to catch issues before they are merged, a proactive approach that prevents regressions and ensures code quality. These checks are designed to be highly informative while remaining overridable, giving developers the autonomy to make informed decisions. The presenters stressed that a positive maintainer experience is crucial for the health of any large open-source project, as it encourages contributions and fosters a collaborative environment.

Lessons for the Community

In their concluding remarks, Botsford and Lazdins offered a summary of the most valuable lessons they learned. They reiterated the importance of owning your own dependencies, structuring projects for scale, and designing for KMP from the outset. By sharing their experiences with a real-world, large-scale project, they provided the Kotlin community with actionable insights that can be applied to projects of any size. The session served as a powerful testament to the capabilities of Kotlin Multiplatform and the importance of a thoughtful, strategic approach to software development at scale.

Links: