Posts Tagged ‘SpringBoot’

[DevoxxFR2025] Boosting Java Application Startup Time: JVM and Framework Optimizations

In the world of modern application deployment, particularly in cloud-native and microservice architectures, fast startup time is a crucial factor impacting scalability, resilience, and cost efficiency. Slow-starting applications can delay deployments, hinder auto-scaling responsiveness, and consume resources unnecessarily. Olivier Bourgain, in his presentation, delved into strategies for significantly accelerating the startup time of Java applications, focusing on optimizations at both the Java Virtual Machine (JVM) level and within popular frameworks like Spring Boot. He explored techniques ranging from garbage collection tuning to leveraging emerging technologies like OpenJDK’s Project Leyden and Spring AOT (Ahead-of-Time Compilation) to make Java applications lighter, faster, and more efficient from the moment they start.

The Importance of Fast Startup

Olivier began by explaining why fast startup time matters in modern environments. In microservices architectures, applications are frequently started and stopped as part of scaling events, deployments, or rolling updates. A slow startup adds to the time it takes to scale up to handle increased load, potentially leading to performance degradation or service unavailability. In serverless or function-as-a-service environments, cold starts (the time it takes for an idle instance to become ready) are directly impacted by application startup time, affecting latency and user experience. Faster startup also improves developer productivity by reducing the waiting time during local development and testing cycles. Olivier emphasized that optimizing startup time is no longer just a minor optimization but a fundamental requirement for efficient cloud-native deployments.

JVM and Garbage Collection Optimizations

Optimizing the JVM configuration and understanding garbage collection behavior are foundational steps in improving Java application startup. Olivier discussed how different garbage collectors (like G1, Parallel, or ZGC) can impact startup time and memory usage. Tuning JVM arguments related to heap size, garbage collection pauses, and just-in-time (JIT) compilation tiers can influence how quickly the application becomes responsive. While JIT compilation is crucial for long-term performance, it can introduce startup overhead as the JVM analyzes and optimizes code during initial execution. Techniques like Class Data Sharing (CDS) were mentioned as a way to reduce startup time by sharing pre-processed class metadata between multiple JVM instances. Olivier provided practical tips and configurations for optimizing JVM settings specifically for faster startup, balancing it with overall application performance.
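
As a rough illustration of the Class Data Sharing idea (a sketch rather than Olivier's exact demo; the flags exist in recent JDKs and the archive path is a placeholder):

    # First run: record loaded classes into a dynamic CDS archive (JDK 13+)
    java -XX:ArchiveClassesAtExit=app.jsa -jar app.jar
    # Subsequent runs: start from the shared archive to skip repeated class loading work
    java -XX:SharedArchiveFile=app.jsa -jar app.jar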

Framework Optimizations: Spring Boot and Beyond

Popular frameworks like Spring Boot, while providing immense productivity benefits, can sometimes contribute to longer startup times due to their extensive features and reliance on reflection and classpath scanning during initialization. Olivier explored strategies within the Spring ecosystem and other frameworks to mitigate this. He highlighted Spring AOT (Ahead-of-Time Compilation) as a transformative technology that analyzes the application at build time and generates optimized code and configuration, reducing the work the JVM needs to do at runtime. This can significantly decrease startup time and memory footprint, making Spring Boot applications more suitable for resource-constrained environments and serverless deployments. Project Leyden in OpenJDK, aiming to enable static images and further AOT compilation for Java, was also discussed as a future direction for improving startup performance at the language level. Olivier demonstrated how applying these framework-specific optimizations and leveraging AOT compilation can have a dramatic impact on the startup speed of Java applications, making them competitive with applications written in languages traditionally known for faster startup.
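
As a sketch of what adopting Spring AOT can look like with Spring Boot 3 and Maven (commands and property names as documented by Spring Boot; adjust to your own build setup):

    # The Spring Boot Maven plugin exposes a process-aot goal that generates AOT sources;
    # with spring-boot-starter-parent this is typically activated through the "native" profile
    mvn -Pnative package
    # Start on the JVM with the AOT-generated optimizations enabled
    java -Dspring.aot.enabled=true -jar target/app.jar
    # Or go all the way to a GraalVM native executable
    mvn -Pnative native:compile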

Links:

Java/Spring Troubleshooting: From Memory Leaks to Database Bottlenecks

Practical strategies and hands-on tips for diagnosing and fixing performance issues in production Java applications.

1) Approaching Memory Leaks

Memory leaks in Java often manifest as OutOfMemoryError exceptions or rising heap usage visible in monitoring dashboards. My approach:

  1. Reproduce in staging: Apply the same traffic profile (e.g., JMeter load test).
  2. Collect a heap dump:
    jmap -dump:format=b,file=heap.hprof <PID>
  3. Analyze with tools: Eclipse MAT, VisualVM, or YourKit to detect uncollected references.
  4. Fix common causes:
    • Unclosed streams or ResultSets.
    • Static collections holding references.
    • Caches without eviction policies (e.g., replace HashMap with Caffeine).
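
For the last point, a minimal sketch of a bounded cache with eviction, assuming the com.github.ben-manes.caffeine:caffeine dependency is on the classpath (Product and the loader are placeholders):

[java]
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

import java.time.Duration;

public class ProductCache {

    // Bounded cache: entries are evicted by size and age instead of accumulating forever.
    private final Cache<String, Product> cache = Caffeine.newBuilder()
            .maximumSize(10_000)
            .expireAfterWrite(Duration.ofMinutes(10))
            .build();

    public Product findById(String id) {
        // Loads on a cache miss; the loader here stands in for the real repository call.
        return cache.get(id, this::loadFromDatabase);
    }

    private Product loadFromDatabase(String id) {
        return new Product(id); // placeholder for the real lookup
    }

    record Product(String id) {}
}
[/java]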

2) Profiling and Fixing High CPU Usage

High CPU can stem from tight loops, inefficient queries, or excessive logging.

  • Step 1: Sample threads
    jstack <PID> > thread-dump.txt

    Identify “hot” threads consuming CPU.

  • Step 2: Profile with low-overhead profilers such as async-profiler or Java Flight Recorder.
    java -XX:StartFlightRecording=duration=60s,filename=recording.jfr -jar app.jar
  • Step 3: Refactor:
    • Replace String concatenation in loops with StringBuilder.
    • Optimize regex: reuse a compiled Pattern instead of calling String.matches() repeatedly (see the snippet below).
    • Review logging level (DEBUG inside loops is expensive).
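
A quick sketch of the Pattern-reuse point (hypothetical validation code):

[java]
import java.util.regex.Pattern;

public class SkuValidator {

    // Compile once and reuse; String.matches() recompiles the regex on every call.
    private static final Pattern SKU = Pattern.compile("[A-Z]{3}-\\d{4}");

    public static boolean isValid(String input) {
        return SKU.matcher(input).matches();
    }
}
[/java]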

3) Tuning GC for Low-Latency Services

Garbage collection (GC) can cause pauses. For trading, gaming, or API services, tuning matters:

  • Choose the right collector:
    • G1GC for balanced throughput and latency (default in recent JDKs).
    • ZGC or Shenandoah for ultra-low latency workloads (<10ms pauses).
  • Sample configs:
    -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:+ParallelRefProcEnabled
  • Monitor GC logs with GC Toolkit or Grafana dashboards.
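
To produce the GC logs those dashboards consume, unified JVM logging (JDK 9+) can be enabled, for example:

    -Xlog:gc*:file=gc.log:time,uptime,level,tags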

4) Handling Database Bottlenecks

Spring apps often hit bottlenecks in DB queries rather than CPU.

  1. Enable SQL logging: in application.properties
    spring.jpa.show-sql=true
  2. Profile queries: Use p6spy or database-side reports (e.g., Oracle AWR).
  3. Fixes:
    • Add missing indexes (EXPLAIN ANALYZE is your friend).
    • Batch inserts (saveAll() in Spring Data with hibernate.jdbc.batch_size).
    • Introduce caching (Spring Cache, Redis) for hot reads.
    • Use connection pools like HikariCP with tuned settings:
      spring.datasource.hikari.maximum-pool-size=30
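
A sketch of how the batching and pooling settings above can be combined in application.properties (illustrative values, tune for your workload):

    spring.jpa.properties.hibernate.jdbc.batch_size=50
    spring.jpa.properties.hibernate.order_inserts=true
    spring.datasource.hikari.maximum-pool-size=30
    spring.datasource.hikari.connection-timeout=30000
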
Bottom line: Troubleshooting is both art and science—measure, hypothesize, fix, and validate with metrics.

Efficient Inter-Service Communication with Feign and Spring Cloud in Multi-Instance Microservices

In a world where systems are becoming increasingly distributed and cloud-native, microservices have emerged as the de facto architecture. But as we scale
microservices horizontally—running multiple instances for each service—one of the biggest challenges becomes inter-service communication.

How do we ensure that our services talk to each other reliably, efficiently, and in a way that’s resilient to failures?

Welcome to the world of Feign and Spring Cloud.


The Challenge: Multi-Instance Microservices

Imagine you have a user-service that needs to talk to an order-service, and your order-service runs 5 instances behind a
service registry like Eureka. Hardcoding URLs? That’s brittle. Manual load balancing? Not scalable.

You need:

  • Service discovery to dynamically resolve where to send the request
  • Load balancing across instances
  • Resilience for timeouts, retries, and fallbacks
  • Clean, maintainable code that developers love

The Solution: Feign + Spring Cloud

OpenFeign is a declarative web client. Think of it as a smart HTTP client where you only define interfaces — no more boilerplate REST calls.

When combined with Spring Cloud, Feign becomes a first-class citizen in a dynamic, scalable microservices ecosystem.

✅ Features at a Glance:

  • Declarative REST client
  • Automatic service discovery (Eureka, Consul)
  • Client-side load balancing (Spring Cloud LoadBalancer)
  • Integration with Resilience4j for circuit breaking
  • Easy integration with Spring Boot config and observability tools

Step-by-Step Setup

1. Add Dependencies

[xml]
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-openfeign</artifactId>
    <!-- version is typically managed by the Spring Cloud BOM -->
</dependency>
[/xml]

If using Eureka:

[xml]
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>
[/xml]


2. Enable Feign Clients

In your main Spring Boot application class:

[java]
@SpringBootApplication
@EnableFeignClients
public class UserServiceApplication { … }
[/java]


3. Define Your Feign Interface

[java]
@FeignClient(name = "order-service")
public interface OrderClient {

    @GetMapping("/orders/{id}")
    OrderDTO getOrder(@PathVariable("id") Long id);
}
[/java]

Spring will automatically:

  • Register this as a bean
  • Resolve order-service from Eureka
  • Load-balance across all its instances

4. Add Resilience with Fallbacks

You can configure a fallback to handle failures gracefully:

[java]
@FeignClient(name = "order-service", fallback = OrderClientFallback.class)
public interface OrderClient {

    @GetMapping("/orders/{id}")
    OrderDTO getOrder(@PathVariable Long id);
}
[/java]

The fallback:

[java]
@Component
public class OrderClientFallback implements OrderClient {

    @Override
    public OrderDTO getOrder(Long id) {
        return new OrderDTO(id, "Fallback Order", LocalDate.now());
    }
}
[/java]
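
Note: for the fallback to be picked up, Spring Cloud OpenFeign generally needs circuit breaker support switched on (the exact property prefix varies slightly across Spring Cloud versions); for example:

[yml]
feign:
  circuitbreaker:
    enabled: true
[/yml]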


⚙️ Configuration Tweaks

Customize Feign timeouts in application.yml:

[yml]
feign:
  client:
    config:
      default:
        connectTimeout: 3000
        readTimeout: 500
[/yml]

Enable retry:

[yml]
feign:
  client:
    config:
      default:
        retryer:
          maxAttempts: 3
          period: 1000
          maxPeriod: 2000
[/yml]


What Happens Behind the Scenes?

When user-service calls order-service:

  1. Spring Cloud uses Eureka to resolve all instances of order-service.
  2. Spring Cloud LoadBalancer picks an instance using round-robin (or your chosen strategy).
  3. Feign sends the HTTP request to that instance.
  4. If it fails, Resilience4j (or your fallback) handles it gracefully.

Observability & Debugging

Use Spring Boot Actuator to expose Feign metrics:

[xml]
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
[/xml]

And tools like Spring Cloud Sleuth + Zipkin for distributed tracing across Feign calls.


Beyond the Basics

To go even further:

  • Integrate with Spring Cloud Gateway for API routing and external access.
  • Use Spring Cloud Config Server to centralize configuration across environments.
  • Secure Feign calls with OAuth2 via Spring Security and OpenID Connect.

✨ Final Thoughts

Using Feign with Spring Cloud transforms service-to-service communication from a tedious, error-prone task into a clean, scalable, and cloud-native solution.
Whether you’re scaling services across zones or deploying in Kubernetes, Feign ensures your services communicate intelligently and resiliently.

SpringBatch: How to have different schedules per environment, for instance keep fixedDelay=60000 in prod but schedule with a cron expression in local dev?

Case

In SpringBatch, a batch is scheduled in a bean JobScheduler with

[java]
@Scheduled(fixedDelay = 60000)
void doSomething() { … }
[/java]

How can we keep the fixedDelay=60000 in prod but schedule with a cron expression in local dev?

Solution

Add this block to the JobScheduler:

[java]
@Value("${jobScheduler.scheduling.enabled:true}")
private boolean schedulingEnabled;

@Value("${jobScheduler.scheduling.type:fixedDelay}")
private String scheduleType;

@Value("${jobScheduler.scheduling.fixedDelay:60000}")
private long fixedDelay;

@Value("${jobScheduler.scheduling.initialDelay:0}")
private long initialDelay;

@Value("${jobScheduler.scheduling.cron:}")
private String cronExpression;

// Both @Scheduled methods are registered; the jobScheduler.scheduling.type property
// (read into scheduleType) decides which one actually triggers the job.
@Scheduled(fixedDelayString = "${jobScheduler.scheduling.fixedDelay:60000}", initialDelayString = "${jobScheduler.scheduling.initialDelay:0}")
public void scheduleFixedDelay() throws Exception {
    if ("fixedDelay".equals(scheduleType) || "initialDelayFixedDelay".equals(scheduleType)) {
        doSomething();
    }
}

// Cron variant: the default expression fires daily at 01:00 unless overridden.
@Scheduled(cron = "${jobScheduler.scheduling.cron:0 0 1 * * ?}")
public void scheduleCron() throws Exception {
    if ("cron".equals(scheduleType)) {
        doSomething();
    }
}
[/java]

In application.yml (the default, used in prod), add:

[yml]
jobScheduler:
  scheduling:
    enabled: true
    type: fixedDelay
    fixedDelay: 60000
    initialDelay: 0
    cron: 0 0 1 31 2 ? # every 31st of February… which means: never
[/yml]

(Note the cron expression: leaving it empty may prevent Spring Boot from starting.)

For local dev, switch to a cron schedule by adding, for instance in application-local.yml:

[xml]
jobScheduler:
# noinspection GrazieInspection
scheduling:
type: cron
cron: 0 0 1 * * ?
[/xml]

It should work now ;-).

[DevoxxUK2024] Breaking AI: Live Coding and Hacking Applications with Generative AI by Simon Maple and Brian Vermeer

Simon Maple and Brian Vermeer, both seasoned developer advocates with extensive experience at Snyk and other tech firms, delivered an electrifying live coding session at DevoxxUK2024, exploring the double-edged sword of generative AI in software development. Simon, who recently transitioned to a stealth-mode startup, and Brian, a current Snyk advocate, demonstrate how tools like GitHub Copilot and ChatGPT can accelerate coding velocity while introducing significant security risks. Through a live-coded Spring Boot coffee shop application, they expose vulnerabilities such as SQL injection, directory traversal, and cross-site scripting, emphasizing the need for rigorous validation and security practices. Their engaging, demo-driven approach underscores the balance between innovation and caution, offering developers actionable insights for leveraging AI safely.

Accelerating Development with Generative AI

Simon and Brian kick off by highlighting the productivity boost offered by generative AI tools, citing studies that suggest a 55% increase in developer efficiency and a 27% higher likelihood of meeting project goals. They build a Spring Boot application with a Thymeleaf front end, using Copilot to generate a homepage with a banner and product table. The process showcases AI’s ability to rapidly produce code snippets, such as HTML fragments, based on minimal prompts. However, they caution that this speed comes with risks, as AI often prioritizes completion over correctness, potentially embedding vulnerabilities. Their live demo illustrates how Copilot’s suggestions evolve with context, but also how developers must critically evaluate outputs to ensure functionality and security.

Exposing SQL Injection Vulnerabilities

The duo dives into a search functionality for their coffee shop application, where Copilot generates a query to filter products by name or description. However, the initial code concatenates user input directly into an SQL query, creating a classic SQL injection vulnerability. Brian demonstrates an exploit by injecting malicious input to set product prices to zero, highlighting how unchecked AI-generated code can compromise a system. They then refactor the code using prepared statements, showing how parameterization separates user input from the query execution plan, effectively neutralizing the vulnerability. This example underscores the importance of understanding AI outputs and applying secure coding practices, as tools like Copilot may not inherently prioritize security.
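
A minimal sketch of the parameterized version (not the exact demo code; table and column names are made up):

[java]
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class ProductSearch {

    // User input is bound as a parameter, never concatenated into the SQL text.
    public List<String> search(Connection connection, String keyword) throws SQLException {
        String sql = "SELECT name FROM products WHERE name LIKE ? OR description LIKE ?";
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            ps.setString(1, "%" + keyword + "%");
            ps.setString(2, "%" + keyword + "%");
            try (ResultSet rs = ps.executeQuery()) {
                List<String> names = new ArrayList<>();
                while (rs.next()) {
                    names.add(rs.getString("name"));
                }
                return names;
            }
        }
    }
}
[/java]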

Mitigating Directory Traversal Risks

Next, Simon and Brian tackle a profile picture upload feature, where Copilot generates code to save files to a directory. The initial implementation concatenates user-provided file names with a base path, opening the door to directory traversal attacks. Using Burp Suite, they demonstrate how an attacker could overwrite critical files by manipulating the file name with “../” sequences. To address this, they refine the code to normalize paths, ensuring files remain within the intended directory. The session highlights the limitations of AI in detecting complex vulnerabilities like path traversal, emphasizing the need for developer vigilance and tools like Snyk to catch issues early in the development cycle.
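
A sketch of the path check they describe (the base directory is an assumption, not the talk's exact code):

[java]
import java.nio.file.Path;
import java.nio.file.Paths;

public class UploadPathResolver {

    private static final Path BASE_DIR = Paths.get("/var/app/uploads").toAbsolutePath().normalize();

    // Resolve and normalize the user-supplied name, then verify it stays inside the base directory.
    public static Path resolve(String fileName) {
        Path target = BASE_DIR.resolve(fileName).normalize();
        if (!target.startsWith(BASE_DIR)) {
            throw new IllegalArgumentException("Invalid file name: " + fileName);
        }
        return target;
    }
}
[/java]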

Addressing Cross-Site Scripting Threats

The final vulnerability explored is cross-site scripting (XSS) in a product page feature. The AI-generated code directly embeds user input (product names) into HTML without sanitization, allowing Brian to inject a malicious script that captures session cookies. They demonstrate both reflective and stored XSS, showing how attackers could exploit these to hijack user sessions. While querying ChatGPT for a code review fails to pinpoint the XSS issue, Simon and Brian advocate for using established libraries like Spring Utils for input sanitization. This segment reinforces the necessity of combining AI tools with robust security practices and automated scanning to mitigate risks that AI might overlook.
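
For the escaping they recommend, a minimal sketch using Spring's HtmlUtils (one possible choice; the talk refers to "Spring Utils" more generally, and the rendering code here is hypothetical):

[java]
import org.springframework.web.util.HtmlUtils;

public class ProductView {

    // Escape user-controlled text before it is written into HTML output.
    public static String renderName(String productName) {
        return "<td>" + HtmlUtils.htmlEscape(productName) + "</td>";
    }
}
[/java]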

Balancing Innovation and Security

Throughout the session, Simon and Brian stress that generative AI, while transformative, demands a cautious approach. They liken AI tools to junior developers, capable of producing functional code but requiring oversight to avoid errors or vulnerabilities. Real-world examples, such as a Samsung employee leaking sensitive code via ChatGPT, underscore the risks of blindly trusting AI outputs. They advocate for education, clear guidelines, and security tooling to complement AI-assisted development. By integrating tools like Snyk for vulnerability scanning and fostering a culture of code review, developers can harness AI’s potential while safeguarding their applications against threats.

Links:

[SpringIO2024] Mind the Gap: Connecting High-Performance Systems at a Leading Crypto Exchange @ Spring I/O 2024

At Spring I/O 2024, Marcos Maia and Lars Werkman from Bitvavo, Europe’s leading cryptocurrency exchange, unveiled the architectural intricacies of their high-performance trading platform. Based in the Netherlands, Bitvavo processes thousands of transactions per second with sub-millisecond latency. Marcos and Lars detailed how they integrate ultra-low-latency systems with Spring Boot applications, offering a deep dive into their strategies for scalability and performance. Their talk, rich with technical insights, challenged conventional software practices, urging developers to rethink performance optimization.

Architecting for Ultra-Low Latency

Marcos opened by highlighting Bitvavo’s mission to enable seamless crypto trading for nearly two million customers. The exchange’s hot path, where orders are processed, demands microsecond response times. To achieve this, Bitvavo employs the Aeron framework, an open-source tool designed for high-performance messaging. By using memory-mapped files, UDP-based communication, and lock-free algorithms, the platform minimizes latency. Marcos explained how they bypass traditional databases, opting for in-memory processing with eventual disk synchronization, ensuring deterministic outcomes critical for trading fairness.

Optimizing the Hot Path

The hot path’s design is uncompromising, as Marcos elaborated. Bitvavo avoids garbage collection by preallocating and reusing objects, ensuring predictable memory usage. Single-threaded processing, counterintuitive to many, leverages CPU caches for nanosecond-level performance. The platform uses distributed state machines, guaranteeing consistent outputs across executions. Lars complemented this by discussing inter-process communication via shared memory and DPDK for kernel-bypassing network operations. These techniques, rooted in decades of trading system expertise, enable Bitvavo to handle peak loads of 30,000 transactions per second.

Bridging with Spring Boot

Integrating high-performance systems with the broader organization poses significant challenges. Marcos detailed the “cold sink,” a Spring Boot application that consumes data from the hot path’s Aeron archive, feeding it into Kafka and MySQL for downstream processing. By batching requests and using object pools, the cold sink minimizes garbage collection, maintaining performance under heavy loads. Fine-tuning batch sizes and applying backpressure ensure the system keeps pace with the hot path’s output, preventing data lags in Bitvavo’s 24/7 operations.

Enhancing JWT Signing Performance

Lars concluded with a case study on optimizing JWT token signing, a “warm path” process targeting sub-millisecond latency. Initially, their RSA-based signing took 8.8 milliseconds, far from the goal. By switching to symmetric HMAC signing and adopting Azul Prime’s JVM, they achieved a 30x performance boost, reaching 260-280 microsecond response times. Lars emphasized the importance of benchmarking with JMH and leveraging Azul’s features like Falcon JIT compiler for stable throughput. This optimization underscores Bitvavo’s commitment to performance across all system layers.
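
To make the RSA-versus-HMAC switch concrete, here is a minimal HMAC-SHA256 (HS256-style) signing sketch using the JDK's own crypto API; it is an illustration, not Bitvavo's implementation:

[java]
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class HmacJwtSigner {

    // Signs the "header.payload" part of a JWT with a shared symmetric key.
    public static String sign(String headerDotPayload, byte[] secret) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        byte[] signature = mac.doFinal(headerDotPayload.getBytes(StandardCharsets.UTF_8));
        return headerDotPayload + "." + Base64.getUrlEncoder().withoutPadding().encodeToString(signature);
    }
}
[/java]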

Links:

[DevoxxBE2023] Securing the Supply Chain for Your Java Applications by Thomas Vitale

At Devoxx Belgium 2023, Thomas Vitale, a software engineer and architect at Systematic, delivered an authoritative session on securing the software supply chain for Java applications. As the author of Cloud Native Spring in Action and a passionate advocate for cloud-native technologies, Thomas provided a comprehensive exploration of securing every stage of the software lifecycle, from source code to deployment. Drawing on the SLSA framework and CNCF research, he demonstrated practical techniques for ensuring integrity, authenticity, and resilience using open-source tools like Gradle, Sigstore, and Kyverno. Through a blend of theoretical insights and live demonstrations, Thomas illuminated the critical importance of supply chain security in today’s threat landscape.

Safeguarding Source Code with Git Signatures

Thomas began by defining the software supply chain as the end-to-end process of delivering software, encompassing code, dependencies, tools, practices, and people. He emphasized the risks at each stage, starting with source code. Using Git as an example, Thomas highlighted its audit trail capabilities but cautioned that commit authorship can be manipulated. In a live demo, he showed how he could impersonate a colleague by altering Git’s username and email, underscoring the need for signed commits. By enforcing signed commits with GPG or SSH keys—or preferably a keyless approach via GitHub’s single sign-on—developers can ensure commit authenticity, establishing a verifiable provenance trail critical for supply chain security.

Managing Dependencies with Software Bills of Materials (SBOMs)

Moving to dependencies, Thomas stressed the importance of knowing exactly what libraries are included in a project, especially given vulnerabilities like Log4j. He introduced Software Bills of Materials (SBOMs) as a standardized inventory of software components, akin to a list of ingredients. Using the CycloneDX plugin for Gradle, Thomas demonstrated generating an SBOM during the build process, which provides precise dependency details, including versions, licenses, and hashes for integrity verification. This approach, integrated into Maven or Gradle, ensures accuracy over post-build scanning tools like Snyk, enabling developers to identify vulnerabilities, check license compliance, and verify component integrity before production.

Thomas further showcased Dependency-Track, an OWASP project, to analyze SBOMs and flag vulnerabilities, such as a critical issue in SnakeYAML. He introduced the Vulnerability Exploitability Exchange (VEX) standard, which complements SBOMs by documenting whether vulnerabilities affect an application. In his demo, Thomas marked a SnakeYAML vulnerability as a false positive due to Spring Boot’s safe deserialization, demonstrating how VEX communicates security decisions to stakeholders, reducing unnecessary alerts and ensuring compliance with emerging regulations.

Building Secure Artifacts with Reproducible Builds

The build phase, Thomas explained, is another critical juncture for security. Using Spring Boot as an example, he outlined three packaging methods: JAR files, native executables, and container images. He critiqued Dockerfiles for introducing non-determinism and maintenance overhead, advocating for Cloud Native Buildpacks as a reproducible, secure alternative. In a demo, Thomas built a container image with Buildpacks, highlighting its fixed creation timestamp (January 1, 1980) to ensure identical outputs for unchanged inputs, enhancing security by eliminating variability. This reproducibility, coupled with SBOM generation during the build, ensures artifacts are both secure and traceable.

Signing and Verifying Artifacts with SLSA

To ensure artifact integrity, Thomas introduced the SLSA framework, which provides guidelines for securing software artifacts across the supply chain. He demonstrated signing container images with Sigstore’s Cosign tool, using a keyless approach to avoid managing private keys. This process, integrated into a GitHub Actions pipeline, ensures that artifacts are authentically linked to their creator. Thomas further showcased SLSA’s provenance generation, which documents the artifact’s origin, including the Git commit hash and build steps. By achieving SLSA Level 3, his pipeline provided non-falsifiable provenance, ensuring traceability from source code to deployment.

Securing Deployments with Policy Enforcement

The final stage, deployment, requires validating artifacts to ensure they meet security standards. Thomas demonstrated using Cosign and the SLSA Verifier to validate signatures and provenance, ensuring only trusted artifacts are deployed. On Kubernetes, he introduced Kyverno, a policy engine that enforces signature and provenance checks, automatically rejecting non-compliant deployments. This approach ensures that production environments remain secure, aligning with the principle of validating metadata to prevent unauthorized or tampered artifacts from running.

Conclusion: A Holistic Approach to Supply Chain Security

Thomas’s session at Devoxx Belgium 2023 provided a robust framework for securing Java application supply chains. By addressing source code integrity, dependency management, build reproducibility, artifact signing, and deployment validation, he offered a comprehensive strategy to mitigate risks. His practical demonstrations, grounded in open-source tools and standards like SLSA and VEX, empowered developers to adopt these practices without overwhelming complexity. Thomas’s emphasis on asking “why” at each step encouraged attendees to tailor security measures to their context, ensuring both compliance and resilience in an increasingly regulated landscape.

Links:

[DevoxxBE2023] REST Next Level: Crafting Domain-Driven Web APIs by Julien Topçu

At Devoxx Belgium 2023, Julien Topçu, a technical coach at Shadow, delivered a compelling session on elevating REST APIs by embedding domain-driven design principles. With a rich background in crafting software using Domain-Driven Design (DDD), Extreme Programming, and Kanban, Julien illuminated the pitfalls of traditional REST implementations and proposed a transformative approach to encapsulate business intent within APIs. His talk, centered around a fictional space travel booking system, demonstrated how to align APIs with user actions, preserve business workflows, and enhance consumer experience through hypermedia controls. Through a blend of theoretical insights and practical demonstrations, Julien showcased a methodology to create APIs that are not only functional but also semantically rich and workflow-driven.

The Pitfalls of Traditional REST APIs

Julien began by highlighting a pervasive issue in software architecture: the loss of business intent when translating domain logic into REST APIs. Typically, business logic resides in the backend to avoid duplication across consumers like web or mobile applications. However, REST’s uniform interface, with its limited vocabulary of CRUD operations (Create, Read, Update, Delete), often distorts this logic. For instance, in a train reservation system, a user’s intent to “search for trains” is reduced to “create a search resource,” stripping away domain-specific semantics like destinations or schedules. This mismatch, Julien argued, stems from REST’s standardized approach, formalized by Roy Fielding in his PhD thesis, which prioritizes simplicity over application-specific needs. As a result, APIs lose expressiveness, forcing consumers to reconstruct business workflows, leading to what Julien termed “accidental complexity of adaptation.”

To illustrate, Julien presented a scenario where a user performs a search for space trains from Earth to the Moon. The traditional REST API translates this into a POST request to create a search resource, devoid of domain context. This not only obscures the user’s intent but also couples consumers to the backend’s implementation, making changes—like switching from “bound” to “journey index” for multi-destination trips—disruptive. Julien’s live demo underscored this fragility: altering a request parameter broke the API, highlighting the risks of tight coupling between consumers and backend models.

Encapsulating Business Intent with Semantic Endpoints

To address these shortcomings, Julien proposed aligning REST endpoints with user actions rather than backend models. Instead of exposing implementation details, such as updating a sub-resource like “selection” within a search, APIs should reflect behaviors like “select a space train with a fare.” This approach involves using classifiers in URLs, such as POST /searches/{id}/spacetrains/{number}/fares/{code}/select, which clearly convey the intent of selecting a fare for a specific train. Julien emphasized that this does not violate REST principles, debunking the myth that verbs in URLs are forbidden. As long as verbs align with HTTP methods (e.g., POST for creating a resource), they enhance semantic clarity without breaking the uniform interface.

This shift decouples consumers from the backend’s internal structure. For example, changing the backend’s data model (e.g., using booleans instead of a selection object) no longer impacts consumers, as the API exposes behaviors rather than state. Julien’s demo further showcased this by demonstrating how a frontend could adapt to backend changes (e.g., from “bound” to “journey index”) without modification, thanks to semantic endpoints. This approach not only preserves business intent but also simplifies consumer logic, reducing the cognitive load of interpreting CRUD-based APIs.

Encapsulating Workflows with Hypermedia Controls

A critical challenge Julien addressed is the lack of workflow definition in traditional REST APIs. Typically, consumers must hardcode business workflows, such as the sequence of selecting outbound and inbound trains before booking. This leads to duplicated logic and potential errors, like displaying a booking button prematurely. Julien introduced hypermedia controls, specifically HATEOAS (Hypermedia As The Engine Of Application State), as a solution. By embedding links in API responses, the backend can guide consumers through the workflow dynamically.

In his demo, Julien showed how a search response includes links like select-outbound and all-inbounds, which guide the consumer to the next valid actions. For instance, after selecting an outbound train, the response provides a link to select an inbound train, ensuring only compatible options are available. This encapsulation of workflow logic in the backend eliminates the need for consumers to understand the sequence of actions, reducing errors and enhancing maintainability. Julien highlighted that this approach, part of the Richardson Maturity Model’s Level 3, makes APIs discoverable and resilient to backend changes, as consumers rely on links rather than hardcoded URLs.

Practical Implementation and Limitations

Julien’s live coding demo brought these concepts to life, showcasing a Spring Boot backend in Kotlin that dynamically generates links based on the application state. For example, the create-booking link only appears when the selection is complete, ensuring consumers cannot book prematurely. This dynamic guidance, facilitated by Spring HATEOAS, allows the frontend to display UI elements like the booking button based solely on available links, streamlining development and enhancing user experience.
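
In Java terms, the kind of link assembly Julien showed (his demo was in Kotlin) might look roughly like this with Spring HATEOAS; Selection, BookingController, and the URL are placeholders, not the demo's actual types:

[java]
import org.springframework.hateoas.EntityModel;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.linkTo;
import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.methodOn;

public class SelectionModelAssembler {

    // Hypothetical selection state: the booking link only appears once the selection is complete.
    public record Selection(String id, boolean complete) {}

    @RestController
    public static class BookingController {
        @PostMapping("/selections/{id}/create-booking")
        public ResponseEntity<Void> createBooking(@PathVariable String id) {
            return ResponseEntity.ok().build();
        }
    }

    // Expose the next valid action as a link so consumers never hardcode the workflow.
    public EntityModel<Selection> toModel(Selection selection) {
        EntityModel<Selection> model = EntityModel.of(selection);
        if (selection.complete()) {
            model.add(linkTo(methodOn(BookingController.class).createBooking(selection.id()))
                    .withRel("create-booking"));
        }
        return model;
    }
}
[/java]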

However, Julien acknowledged limitations. For complex forms requiring extensive user input, the hypermedia approach may need supplementation with predefined payloads, as consumers must know what data to send. Additionally, long URLs, while not a practical issue in Julien’s experience at Expedia, could pose challenges in some contexts. Despite these constraints, the approach excels in domains with well-defined workflows, offering a robust framework for building expressive, maintainable APIs.

Conclusion: A New Paradigm for REST APIs

Julien’s session at Devoxx Belgium 2023 offered a transformative vision for REST APIs, emphasizing the power of domain-driven design and hypermedia controls. By aligning endpoints with user actions, encapsulating behaviors, and guiding workflows through links, developers can create APIs that are both semantically rich and resilient to change. This approach not only enhances consumer experience but also aligns with the principles of DDD, ensuring that business intent remains at the forefront of API design. Julien’s practical insights and engaging demo left attendees inspired to rethink their API strategies, fostering a deeper appreciation for REST’s potential when infused with domain-driven principles.

Links:

[SpringIO2023] Managing Spring Boot Application Secrets: Badr Nass Lahsen

In a compelling session at Spring I/O 2023, Badr Nass Lahsen, a DevSecOps expert at CyberArk, tackled the critical challenge of securing secrets in Spring Boot applications. With the rise of cloud-native architectures and Kubernetes, secrets like database credentials or API keys have become prime targets for attackers. Badr’s talk, enriched with demos and real-world insights, introduced CyberArk’s Conjur solution and various patterns to eliminate hard-coded credentials, enhance authentication, and streamline secrets management, fostering collaboration between developers and security teams.

The Growing Threat to Application Secrets

Badr opened with alarming statistics: in 2021, software supply chain attacks surged by 650%, with 71% of organizations experiencing such breaches. He cited the 2022 Uber attack, where a PowerShell script with hard-coded credentials enabled attackers to escalate privileges across AWS, Google Suite, and other systems. Using the SLSA threat model, Badr highlighted vulnerabilities like compromised source code (e.g., Okta’s leaked access token) and build processes (e.g., SolarWinds). These examples underscored the need to eliminate hard-coded secrets, which are difficult to rotate, track, or audit, and often exposed inadvertently. Badr advocated for “shifting security left,” integrating security from the design phase to mitigate risks early.

Introducing Application Identity Security

Badr introduced the concept of non-human identities, noting that machine identities (e.g., SSH keys, database credentials) outnumber human identities 45 to 1 in enterprises. These secrets, if compromised, grant attackers access to critical resources. To address this, Badr presented CyberArk’s Conjur, an open-source secrets management solution that authenticates workloads, enforces policies, and rotates credentials. He emphasized the “secret zero problem”—the initial secret needed at application startup—and proposed authenticators like JWT or certificate-based authentication to solve it. Conjur’s attribute-based access control (ABAC) ensures least privilege, enabling scalable, auditable workflows that balance developer autonomy and security requirements.

Patterns for Securing Spring Boot Applications

Through a series of demos using the Spring Pet Clinic application, Badr showcased five patterns for secrets management in Kubernetes. The API pattern integrates Conjur’s SDK, using Spring’s @Value annotations to inject secrets without changing developer workflows. The Secrets Provider pattern updates Kubernetes secrets from Conjur, minimizing code changes but offering less security. The Push-to-File pattern stores secrets in shared memory, updating application YAML files securely. The Summon pattern uses a process wrapper to inject secrets as environment variables, ideal for apps relying on such variables. Finally, the Secretless Broker pattern proxies connections to resources like MySQL, hiding secrets entirely from applications and developers. Badr demonstrated credential rotation with zero downtime using Spring Cloud Kubernetes, ensuring resilience for critical applications.

Enhancing Kubernetes Security and Auditing

Badr cautioned that Kubernetes secrets, being base64-encoded and unencrypted by default, are insecure without etcd encryption. He introduced KubeScan, an open-source tool to identify risky roles and permissions in clusters. His demos highlighted Conjur’s auditing capabilities, logging access to secrets and enabling security teams to track usage. By centralizing secrets management, Conjur eliminates “security islands” created by disparate tools like AWS Secrets Manager or Azure Key Vault, ensuring compliance and visibility. Badr stressed the need for a federated governance model to manage secrets across diverse technologies, empowering developers while maintaining robust security controls.

Links:

[SpringIO2023] Going Native: Fast and Lightweight Spring Boot Applications with GraalVM

At Spring I/O 2023 in Barcelona, Alina Yurenko, a developer advocate at Oracle Labs, captivated the audience with her deep dive into GraalVM Native Image support for Spring Boot 3.0. Her session, a blend of technical insights, live demos, and community engagement, showcased how GraalVM transforms Spring Boot applications into fast-starting, lightweight native executables that eliminate the need for a JVM. By leveraging GraalVM’s ahead-of-time (AOT) compilation, developers can achieve significant performance gains, reduced memory usage, and enhanced security, making it a game-changer for cloud-native deployments.

GraalVM: Beyond a Traditional JDK

Alina began by demystifying GraalVM, a versatile platform that extends beyond a standard JDK. While it can run Java applications using the OpenJDK HotSpot VM with an optimized Graal compiler, the spotlight was on its Native Image feature. This AOT compilation process converts a Spring Boot application into a standalone native executable, stripping away runtime code loading and compilation. The result? Applications that start in fractions of a second and consume minimal memory. Alina emphasized that GraalVM’s ability to include only reachable code—application logic, dependencies, and necessary JDK classes—reduces binary size and enhances efficiency, a critical advantage for cloud environments where resources are costly.

Performance and Resource Efficiency in Action

Through live demos, Alina illustrated GraalVM’s impact using the Spring Pet Clinic application. On her laptop, the JVM version took 1.5 seconds to start, while the native executable launched in just 0.3 seconds—a fivefold improvement. The native version was also significantly smaller, at roughly 50 MB without compression, compared to the JVM’s bulkier footprint. To stress-test performance, Alina ran a million requests against a simple Spring Boot app, comparing JVM and native modes. The JVM achieved 80k requests per second, while the native image hit 67k. However, with profile-guided optimizations (PGO), which mimic JVM’s runtime profiling at build time, the optimized native version reached 81k requests per second, rivaling JVM peak throughput. These demos underscored GraalVM’s ability to balance startup speed, low memory usage, and competitive throughput.

Security and Compact Packaging

Alina highlighted GraalVM’s security benefits, noting that native images eliminate runtime code loading, reducing attack vectors like those targeting just-in-time compilation. Only reachable code is included, minimizing the risk of unused dependencies introducing vulnerabilities. Dynamic features like reflection require explicit configuration, ensuring deliberate control over runtime behavior. On packaging, Alina showcased how native images can be compressed using tools like UPX, achieving sizes as low as a few megabytes, though she cautioned about potential runtime decompression trade-offs. These features make GraalVM ideal for deploying compact, secure applications in constrained environments like Kubernetes or serverless platforms.

Practical Integration with Spring Boot

The session also covered GraalVM’s seamless integration with Spring Boot 3.0, which graduated Native Image support from the experimental Spring Native project to general availability in November 2022. Spring Boot’s AOT processing step optimizes applications for native compilation, reducing reflective calls and generating configuration files for GraalVM. Alina demonstrated how Maven and Gradle plugins, along with the GraalVM Reachability Metadata Repository, simplify builds by automatically handling library configurations. For developers, this means minimal changes to existing workflows, with tools like the tracing agent and Spring’s runtime hints easing the handling of dynamic features. Alina’s practical advice—develop on the JVM for fast feedback, then compile to native in CI/CD pipelines—resonated with attendees aiming to adopt GraalVM.
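
As a quick sketch of that workflow (commands as documented by GraalVM and Spring Boot; adjust paths and project names to your setup):

    # Capture reachability metadata for dynamic features while exercising the app on the JVM
    java -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/native-image -jar target/app.jar
    # Build the native executable in CI/CD
    mvn -Pnative native:compile        # Maven
    ./gradlew nativeCompile            # Gradle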

Links: