
Archive for the ‘en-US’ Category

Enabling and Using the WordPress REST API on OVH Hosting

I recently started migrating my WordPress site from Free.fr to OVHcloud hosting. The migration is still in progress, but along the way I needed to enable and validate
programmatic publishing through the WordPress REST API. This post documents the full process end to end, including OVH-specific gotchas and troubleshooting.

My last migration was many years ago, from DotClear 2 to WordPress…

Why move from Free.fr to OVH?

  • Performance: More CPU/RAM and faster PHP execution make WordPress snappier.
  • Modern PHP: Current PHP versions and extensions are available and easy to select.
  • HTTPS (SSL): Essential for secure logins and required for Application Passwords.
  • Better control: You can tweak .htaccess, install custom/mu-plugins, and adjust config.
  • Scalability: Easier to upgrade plans and resources as your site grows.

What is the WordPress REST API?

WordPress ships with a built-in REST API at /wp-json/. It lets you read and write content, upload media, and automate publishing from scripts or external systems (curl, Python, Node.js, CI, etc.).

Step 1 — Confirm the API is reachable

  1. Open https://yourdomain.com/wp-json/ in a browser. You should see a JSON index of routes.
  2. Optional: check https://yourdomain.com/wp-json/wp/v2 or
    https://yourdomain.com/wp-json/wp/v2/types/post to view available endpoints and fields.

Step 2 — Enable authentication with Application Passwords

  1. Sign in to /wp-admin/ with a user that can create/publish posts.
  2. Go to Users → Profile (your profile page).
  3. In Application Passwords, add a new password (e.g., “API access from laptop”). WordPress shows a 24-character value in spaced groups of four, e.g., ABCD EFgh IjKl M123 n951 Q7xY (the spaces are displayed as part of it).
  4. Copy the generated password (you’ll only see it once). Keep it secure.

You will authenticate via HTTP Basic Auth using username:application-password over HTTPS.
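For scripting, the same Basic credentials can be prepared programmatically. A minimal Python sketch (the username and Application Password below are placeholders):

```python
import base64

def basic_auth_header(username: str, app_password: str) -> str:
    # Build the value for the HTTP Authorization header.
    # The Application Password can be pasted with its display spaces;
    # WordPress strips non-alphanumeric characters when validating it.
    token = base64.b64encode(f"{username}:{app_password}".encode()).decode()
    return f"Basic {token}"

# Placeholder credentials -- replace with your own.
header = basic_auth_header("USERNAME", "ABCD EFgh IjKl M123")
```

Send this value in the Authorization header of every request, always over HTTPS.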

Step 3 — Test authentication (curl)

Replace the placeholders before running:

curl -i -u 'USERNAME:APP_PASSWORD' \
  https://yourdomain.com/wp-json/wp/v2/users/me

Expected result: 200 OK with your user JSON. If you get 401 or 403, see Troubleshooting below.

Important on OVH — The Authorization header may be stripped

On some OVH hosting configurations, the HTTP Authorization header isn’t passed to PHP.
If that happens, WordPress cannot see your Application Password and responds with:

{"code":"rest_not_logged_in","message":"You are not currently logged in.","data":{"status":401}}

To confirm you’re sending the header, try explicitly setting it:

curl -i -H "Authorization: Basic $(echo -n 'USERNAME:APP_PASSWORD' | base64)" \
  https://yourdomain.com/wp-json/wp/v2/users/me

If you still get 401, fix the server so PHP receives the header.

Step 4 — Fixing Authorization headers on OVH

Option A — Add rules to .htaccess

Connect via FTP, browse to the “www” folder, and edit the .htaccess file. Add these lines above the “BEGIN WordPress” block:

<IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
</IfModule>

<IfModule mod_setenvif.c>
    SetEnvIf Authorization "(.*)" HTTP_AUTHORIZATION=$1
</IfModule>

Option B — Tiny must-use plugin

Create wp-content/mu-plugins/ if missing, then add fix-authorization.php:

<?php
/**
 * Plugin Name: Fix Authorization Header
 * Description: Ensures HTTP Authorization header is passed to WordPress for Application Passwords.
 */
add_action('init', function () {
    if (!isset($_SERVER['HTTP_AUTHORIZATION'])) {
        if (isset($_SERVER['REDIRECT_HTTP_AUTHORIZATION'])) {
            $_SERVER['HTTP_AUTHORIZATION'] = $_SERVER['REDIRECT_HTTP_AUTHORIZATION'];
        } elseif (function_exists('apache_request_headers')) {
            $headers = apache_request_headers();
            if (isset($headers['Authorization'])) {
                $_SERVER['HTTP_AUTHORIZATION'] = $headers['Authorization'];
            }
        }
    }
});

Upload and reload: authentication should now succeed.

Step 5 — Create and publish a complete post via API

Optional: create a category and a tag

# Create a category
curl -i -X POST \
  -u 'USERNAME:APP_PASSWORD' \
  -H "Content-Type: application/json" \
  -d '{ "name": "Tech" }' \
  https://yourdomain.com/wp-json/wp/v2/categories

# Create a tag
curl -i -X POST \
  -u 'USERNAME:APP_PASSWORD' \
  -H "Content-Type: application/json" \
  -d '{ "name": "API" }' \
  https://yourdomain.com/wp-json/wp/v2/tags

Upload a featured image

curl -i -X POST \
  -u 'USERNAME:APP_PASSWORD' \
  -H "Content-Disposition: attachment; filename=header.jpg" \
  -H "Content-Type: image/jpeg" \
  --data-binary @/full/path/to/header.jpg \
  https://yourdomain.com/wp-json/wp/v2/media

Note the returned MEDIA_ID.

Create and publish the post

curl -i -X POST \
  -u 'USERNAME:APP_PASSWORD' \
  -H "Content-Type: application/json" \
  -d '{
        "title": "Hello from the API",
        "content": "<p>Created automatically 🚀</p>",
        "status": "publish",
        "categories": [CAT_ID],
        "tags": [TAG_ID],
        "featured_media": MEDIA_ID
      }' \
  https://yourdomain.com/wp-json/wp/v2/posts

Optionally update excerpt or slug

POST_ID=REPLACE_WITH_ID

curl -i -X POST \
  -u 'USERNAME:APP_PASSWORD' \
  -H "Content-Type: application/json" \
  -d '{ "excerpt": "Short summary", "slug": "hello-from-the-api" }' \
  https://yourdomain.com/wp-json/wp/v2/posts/$POST_ID
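The publishing step can also be scripted. Here is a standard-library Python sketch that prepares the request without sending it (the domain, IDs, and auth header value are placeholders, and status defaults to draft so a dry run is harmless):

```python
import json
import urllib.request

def build_post_request(base_url, auth_header, title, content,
                       categories=None, tags=None, featured_media=None,
                       status="draft"):
    # Prepare (but do not send) POST /wp-json/wp/v2/posts.
    # Pass the result to urllib.request.urlopen() to actually publish.
    payload = {"title": title, "content": content, "status": status}
    if categories:
        payload["categories"] = categories
    if tags:
        payload["tags"] = tags
    if featured_media:
        payload["featured_media"] = featured_media
    return urllib.request.Request(
        f"{base_url}/wp-json/wp/v2/posts",
        data=json.dumps(payload).encode(),
        headers={"Authorization": auth_header,
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_post_request("https://yourdomain.com", "Basic PLACEHOLDER",
                         "Hello from the API", "<p>Created automatically</p>",
                         categories=[12], tags=[34], featured_media=56)
```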

Troubleshooting

  • 401 Unauthorized / rest_not_logged_in: The Authorization header isn’t reaching PHP. Add the .htaccess rules or the mu-plugin above. Re-test with
    -H "Authorization: Basic …".
  • 403 Forbidden: The user lacks capabilities (e.g., Contributors can’t publish). Use "status":"draft" or run publishing as an Editor/Admin.
  • Media upload fails: Check upload_max_filesize, post_max_size, and file permissions. Try a smaller file to isolate the issue.
  • Categories/Tags not applied: Use numeric IDs, not names. Fetch them with /wp-json/wp/v2/categories and /wp-json/wp/v2/tags.
  • Permalinks: Prefer non-Plain permalinks. If using Plain, you can call endpoints with the fallback:
    https://yourdomain.com/?rest_route=/wp/v2/posts.

Conclusion

Moving from Free.fr to OVH brings better performance, modern PHP, and full HTTPS, which is perfect for automation and scheduling.
After ensuring the Authorization header reaches WordPress (via .htaccess or a tiny mu-plugin), the REST API works smoothly for creating posts, uploading media, and managing taxonomy.
My migration is still ongoing, but having a reliable API in place is already a big win.

Guarding the Web: Understanding and Mitigating Modern Application Security Risks: XSS • CSRF • Clickjacking • CORS • SSRF

Modern web applications operate in a hostile environment. Attackers exploit input handling, browser trust, and server connectivity to inject code (XSS), trigger unauthorized actions (CSRF), trick users into clicking hidden UI (clickjacking), abuse permissive cross-origin policy (CORS), or make the server itself fetch sensitive resources (SSRF). This article explains the mechanics of each threat and provides concrete, production-grade mitigation patterns you can adopt today.


Cross-Site Scripting (XSS): When Untrusted Input Becomes Code

Threat model: User-provided data is rendered into a page without correct encoding or policy restrictions, allowing script execution in the victim’s browser.

Typical Exploits

Stored XSS (comments, profiles):

<script>fetch('/api/session/steal', {credentials:'include'})</script>

Reflected XSS (search query):

GET /search?q=<script>alert(1)</script>

DOM-based XSS (dangerous sinks):

// Anti-pattern: innerHTML with untrusted input
const result = new URLSearchParams(location.search).get('msg');
document.getElementById('banner').innerHTML = result; // XSS sink

High-Confidence Mitigations

  • Contextual Output Encoding: Encode before insertion, based on context (HTML, attribute, URL, JS, CSS).
  • Prefer Safe APIs: Use textContent / setAttribute over innerHTML; avoid eval/Function constructors.
  • Framework Defaults: Modern templating (e.g., React, Angular) auto-escapes. Avoid escape bypasses (e.g., dangerouslySetInnerHTML) unless you sanitize with robust libraries.
  • Content Security Policy (CSP): Block inline scripts and restrict sources; use nonces.
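To make contextual encoding concrete, here is a toy sketch in Python (chosen only for brevity; the same rule applies in any server-side stack): the same untrusted string needs a different encoder for each output context.

```python
import html
import urllib.parse

def encode_for_html(value: str) -> str:
    # HTML body/attribute context: escape &, <, >, ", '
    return html.escape(value, quote=True)

def encode_for_url(value: str) -> str:
    # URL query context: percent-encode everything outside the unreserved set
    return urllib.parse.quote(value, safe="")

payload = "<script>alert(1)</script>"
safe_html = encode_for_html(payload)  # -> &lt;script&gt;alert(1)&lt;/script&gt;
safe_url = encode_for_url(payload)    # -> %3Cscript%3E...
```

Using the HTML encoder in a URL context (or vice versa) still leaves an injection open, which is why the encoding must match the insertion point.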

CSP Example (Nonce-Based)

Content-Security-Policy: default-src 'none';
  script-src 'self' 'nonce-r4nd0mNonce';
  style-src 'self';
  img-src 'self' data:;
  connect-src 'self';
  base-uri 'self';
  frame-ancestors 'self';
  form-action 'self';

Server/Framework Snippets

Express.js: encode + CSP headers

app.use((req, res, next) => {
  res.set('Content-Security-Policy',
    "default-src 'none'; script-src 'self'; style-src 'self'; img-src 'self' data:; connect-src 'self'; base-uri 'self'; frame-ancestors 'self'");
  res.set('X-Content-Type-Options', 'nosniff');
  next();
});

Do sanitize if you must render HTML: Use a vetted sanitizer (e.g., DOMPurify on the client, OWASP Java HTML Sanitizer on the server) with an allowlist.


Cross-Site Request Forgery (CSRF): Abusing Session Trust

Threat model: A logged-in user’s browser sends a forged state-changing request (e.g., money transfer) to a trusted site because cookies are automatically included.

Exploit Example

<form action="https://bank.example/transfer" method="POST">
  <input type="hidden" name="to" value="attacker">
  <input type="hidden" name="amount" value="1000">
</form>
<script>document.forms[0].submit();</script>

Mitigations That Work Together

  • Synchronizer (Anti-CSRF) Tokens: Embed a per-session/per-request token in each state-changing form or AJAX request; verify server-side.
  • SameSite Cookies: Set session cookies to SameSite=Lax or Strict and Secure to block cross-site inclusion.
  • Method & Content Checks: Require POST/PUT/DELETE with Content-Type: application/json; reject unexpected content types.
  • Origin/Referer Validation: For sensitive endpoints, verify Origin (preferred) or Referer.
  • Re-authentication / Step-Up: For high-value actions, require a second factor or password confirmation.

Example (Express + CSRF Token)

// Pseudocode with csurf-like middleware
app.post('/transfer', requireAuth, verifyCsrfToken, (req, res) => {
  // process transfer
});

SPA + API note: Prefer token-based auth (Authorization header) to avoid ambient cookies. If you must use cookies, combine SameSite with CSRF tokens.
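A minimal sketch of the synchronizer-token idea in Python (illustrative only; real frameworks ship hardened implementations, and the key handling here is an assumption): derive a token bound to the session and compare it server-side in constant time.

```python
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # per-deployment secret (assumed managed securely)

def issue_csrf_token(session_id: str) -> str:
    # Embed this token in each state-changing form or AJAX request header.
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_id: str, token: str) -> bool:
    # Recompute server-side and compare in constant time.
    return hmac.compare_digest(issue_csrf_token(session_id), token)

token = issue_csrf_token("session-abc123")
```

An attacker’s cross-site form can replay the victim’s cookies but cannot read or guess this token, so the forged request fails verification.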


Clickjacking: UI Redress and Hidden Frames

Threat model: An attacker overlays your site in an invisible <iframe> and tricks users into clicking sensitive controls.

Exploit Sketch

<iframe src="https://target.example/approve" style="opacity:0;position:absolute;top:0;left:0;width:100%;height:100%"></iframe>

Mitigations

  • X-Frame-Options (XFO): DENY or SAMEORIGIN. (Legacy but still widely respected.)
  • CSP frame-ancestors: Modern, fine-grained embedding control.
  • UI Hardening: Re-prompt for confirmation on dangerous actions; disable one-click irreversible changes.

Header Examples

X-Frame-Options: DENY
Content-Security-Policy: frame-ancestors 'self'

Server Config

Nginx:

add_header X-Frame-Options "DENY" always;
add_header Content-Security-Policy "frame-ancestors 'self'" always;

Apache:

Header always set X-Frame-Options "DENY"
Header always set Content-Security-Policy "frame-ancestors 'self'"

CORS: Safe Cross-Origin Requests Without Overexposure

Threat model: Overly permissive CORS allows arbitrary origins to read sensitive API responses or send authenticated requests.

Dangerous Patterns

  • Access-Control-Allow-Origin: * combined with Access-Control-Allow-Credentials: true (browsers will ignore, but this signals confusion and often pairs with other mistakes).
  • Reflecting the request Origin wholesale without an allowlist.
  • Forgetting Vary: Origin, causing caches/CDNs to serve one origin’s CORS response to all.

Safe Configuration

Access-Control-Allow-Origin: https://app.example
Access-Control-Allow-Credentials: true
Access-Control-Allow-Methods: GET, POST, PUT, DELETE
Access-Control-Allow-Headers: Authorization, Content-Type
Access-Control-Expose-Headers: ETag, X-Request-Id
Vary: Origin

Implementation Tip

// Pseudocode: strict allowlist
const allowed = new Set(['https://app.example', 'https://admin.example']);
const origin = req.headers.origin;
// Always vary on Origin, even for rejected origins, so caches/CDNs
// never serve one origin's CORS response to another.
res.setHeader('Vary', 'Origin');
if (allowed.has(origin)) {
  res.setHeader('Access-Control-Allow-Origin', origin);
}

Principle: CORS is not an auth mechanism—treat it as a read-permission gate. Apply per-endpoint scoping; don’t turn it on globally.


Server-Side Request Forgery (SSRF): Turning Your Server Into a Proxy

Threat model: The application fetches remote resources based on user input (URLs), allowing attackers to reach internal networks or cloud metadata endpoints.

Exploit Examples

  • Cloud metadata theft: http://169.254.169.254/latest/meta-data/ (AWS), http://metadata.google.internal/ (GCP), http://169.254.169.254/metadata/instance (Azure; requires the Metadata: true header).
  • Reaching services on localhost or RFC1918 IPs (e.g., Redis, internal dashboards).
  • Scheme abuse: file://, gopher://, or ftp:// if unexpected URL schemes aren’t blocked.
  • Open redirect chains or DNS rebinding that convert a safe-looking hostname into an internal IP at resolution time.

Defense-in-Depth

  • Strict Allowlists: Only permit fetching from specific hosts/paths; reject everything else.
  • URL & IP Validation: Parse the URL; resolve DNS; block private and link-local ranges; re-validate after redirects.
  • Egress Proxy with ACL: Force all outbound HTTP(S) through a proxy that enforces destination policies and logs requests.
  • Cloud Hardening: Require IMDSv2 (AWS); block server access to metadata endpoints unless strictly needed.
  • Timeouts & Size Limits: Short connect/read timeouts and response byte caps to prevent internal scanning and DoS.

Hardening Snippet (Pseudocode)

function safeFetch(userUrl) {
  const url = new URL(userUrl);
  // allowlist scheme
  if (!['https:'].includes(url.protocol)) throw new Error('Blocked scheme');
  // DNS resolve and block private IPs
  const ip = resolveToIP(url.hostname);
  if (isPrivate(ip) || isLinkLocal(ip) || isLoopback(ip)) throw new Error('Blocked destination');
  // enforce egress proxy
  return proxyHttpGet(url, { timeoutMs: 2000, maxBytes: 1_000_000, followRedirects: 3, revalidateIPOnRedirect: true });
}

Header note: SSRF is best mitigated by network policy and URL validation; “headers” alone can’t prevent SSRF, but you may set request policies (e.g., disallow redirects, strip sensitive headers) and enforce gateway rules.
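The private-range checks from the pseudocode above can be made concrete with Python’s standard library (a sketch, not a complete defense: redirects and DNS rebinding still need the proxy-level controls described above):

```python
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"https"}

def is_blocked_ip(ip_text: str) -> bool:
    # Reject loopback, link-local (incl. 169.254.169.254 metadata),
    # RFC1918/private, and reserved ranges.
    ip = ipaddress.ip_address(ip_text)
    return ip.is_loopback or ip.is_link_local or ip.is_private or ip.is_reserved

def validate_outbound_url(url: str) -> str:
    # Return the resolved IP if the URL is safe to fetch; raise otherwise.
    # The caller should connect to this exact IP (pinning) and re-validate
    # on every redirect to defeat rebinding tricks.
    parts = urlparse(url)
    if parts.scheme not in ALLOWED_SCHEMES:
        raise ValueError("blocked scheme")
    ip_text = socket.gethostbyname(parts.hostname)
    if is_blocked_ip(ip_text):
        raise ValueError("blocked destination")
    return ip_text
```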


Complementary Security Headers

  • Strict-Transport-Security: Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
  • X-Content-Type-Options: nosniff to block MIME type sniffing.
  • Referrer-Policy: e.g., strict-origin-when-cross-origin.
  • Permissions-Policy: Restrict powerful APIs (camera, geolocation, etc.).
  • Cross-Origin-Opener-Policy / Resource-Policy / Embedder-Policy: Strengthen isolation where feasible.

Operational Controls & SDLC Integration

  • Threat Modeling: Map trust boundaries (browser ⇄ app ⇄ services ⇄ internet) and identify high-risk data flows.
  • Static/Dynamic Testing: SAST, DAST, and dependency scanning in CI; add security unit tests for templating and request flows.
  • Observability: Log security-relevant events (CSP violations via report-to, WAF blocks, CSRF failures, SSRF proxy denies) and alert.
  • WAF/Gateway Policies: Enforce header baselines, block known bad payloads, and constrain egress with explicit ACLs.
  • Secrets Hygiene: Short-lived tokens, mTLS for service-to-service, and key rotation.
  • Least Privilege by Default: Scope CORS per endpoint, narrow IAM roles, network segmentation.

Deployment Checklist

  • XSS: Contextual encoding, CSP with nonces, sanitizer for any HTML rendering.
  • CSRF: Anti-CSRF tokens, SameSite + Secure cookies, Origin checks, step-up auth on critical actions.
  • Clickjacking: X-Frame-Options: DENY and CSP frame-ancestors 'self'.
  • CORS: Explicit allowlist, credentials only when necessary, Vary: Origin, scoped to sensitive endpoints.
  • SSRF: Allowlist destinations, block internal ranges, egress proxy with ACL, timeouts/size caps, re-validate after redirects.
  • Headers: HSTS, XCTO, Referrer-Policy, Permissions-Policy in place.
  • Ops: Logging/alerting on policy violations; SAST/DAST integrated; WAF rules tuned to app.
Security is not a single feature; it is a posture expressed through code, configuration, and continuous verification.

[NDC Security 2025] Hacking History: The First Computer Worm

Håvard Opheim, a software developer at Kaa, took the audience at NDC Security 2025 in Oslo on a captivating journey through the history of the Morris Worm, the first significant malware to disrupt the early internet. Through a blend of historical narrative and technical analysis, Håvard explored the worm’s impact, its technical mechanisms, and the enduring lessons it offers for modern cybersecurity. His talk, rich with anecdotes and technical insights, highlighted how vulnerabilities exploited in 1988 remain relevant today.

The Dawn of the Morris Worm

Håvard set the stage by describing the internet of 1988, a nascent network connecting research institutions and defense installations via ARPANET. With minimal security controls, this “walled garden” fostered trust among users, allowing easy data sharing but also exposing systems to exploitation. On November 2, 1988, the Morris Worm, created by Cornell graduate student Robert Morris, brought this trust to its knees. Håvard recounted how the worm rendered computers across North America unusable, affecting universities, NASA, and the Department of Defense.

The worm’s rapid spread, Håvard explained, was not a deliberate attack but the result of a coding error by Robert. Intended as a proof-of-concept to highlight internet vulnerabilities, the worm’s aggressive replication turned it into a denial-of-service (DoS) fork bomb, overwhelming systems. Håvard’s narrative brought to life the chaos of that night, with system administrators scrambling to mitigate the damage as the worm reinfected systems despite reboots.

Technical Exploits and Vulnerabilities

Delving into the worm’s mechanics, Håvard outlined its exploitation of multiple vulnerabilities. The worm targeted Unix-based systems, leveraging flaws in the finger and sendmail programs. The finger daemon, used to query user information, suffered from a buffer overflow vulnerability due to the gets function, which lacked bounds checking. By sending a 536-byte payload—exceeding the 512-byte buffer—the worm overwrote memory to execute a remote shell, granting attackers full access.

Similarly, the sendmail program, running in debug mode on BSD 4.2 and 4.3, allowed commands in the recipient field, enabling the worm to send itself as an email and execute on the recipient’s system. Håvard also highlighted the worm’s password-cracking capabilities, exploiting predictable user behaviors, such as using usernames as passwords or simple variations like reversed usernames. These flaws, combined with insecure remote execution tools like rexec and rsh, allowed the worm to propagate rapidly across trusted networks.

Response and Legacy

Håvard described the community’s swift response, with ad-hoc working groups at Berkeley and MIT dissecting the worm overnight. By November 3, 1988, researchers had identified and patched the vulnerabilities, and within days, the worm’s source code was decompiled, revealing its inner workings. The incident, Håvard noted, marked a turning point, introducing the term “internet” to mainstream media and prompting the creation of the Computer Emergency Response Team (CERT).

The legal aftermath saw Robert convicted under the newly enacted Computer Fraud and Abuse Act (CFAA) of 1986, the first such conviction. Despite the worm’s benign intent, its impact—estimated at $100,000 to $10 million in damages—underscored the need for robust cybersecurity. Håvard emphasized that Robert’s career rebounded, with contributions to e-commerce and the founding of Y Combinator, but the incident left a lasting mark on the industry.

Enduring Lessons for Cybersecurity

Reflecting on the worm’s legacy, Håvard highlighted its relevance to modern cybersecurity. The vulnerabilities it exploited—buffer overflows, weak passwords, and insecure configurations—persist in today’s systems, albeit in patched forms. He stressed that human behavior remains a weak link, with users still prone to predictable password patterns. The worm’s unintended DoS effect also serves as a cautionary tale about the risks of untested code in production environments.

Håvard advocated for proactive measures, such as regular patching, strong authentication, and threat modeling, to mitigate similar risks today. He underscored the importance of learning from history, noting that the internet’s growth has amplified the stakes. By understanding past incidents like the Morris Worm, developers can build more resilient systems, recognizing that no system is inherently secure.

Hashtags: #MorrisWorm #CybersecurityHistory #NDCSecurity2025 #HåvardOpheim #Kaa #InternetSecurity #Malware

[DevoxxFR 2025] Be More Productive with IntelliJ IDEA

Presented by Marit van Dijk (JetBrains)

IntelliJ IDEA is renowned for being a powerful and intelligent Integrated Development Environment (IDE) designed to help developers stay in the flow and maximize their productivity. With its rich set of features, including a smart editor, powerful refactorings, seamless navigation, and integrated tools for various technologies, IntelliJ IDEA aims to provide a comprehensive development experience without the need to leave the IDE. Marit van Dijk from JetBrains showcases how to leverage these capabilities to become a happier and more productive developer.

Marit’s talk delves into the myriad of features that contribute to developer productivity in IntelliJ IDEA. She highlights how the IDE supports various workflows and provides tools for everything from writing and reading code to debugging, testing, and working with databases and version control systems.

Staying in the Flow with a Smart IDE

Maintaining focus and staying in the “flow state” is crucial for developer productivity. Frequent context switching, interruptions, and wrestling with inefficient tools can easily break this flow. Marit van Dijk emphasizes that IntelliJ IDEA is designed to minimize these distractions and help developers stay focused on writing code.

She showcases the IDE’s intelligent code editor, which provides smart code completion, code analysis, and quick fixes. Features like intention actions and context-aware suggestions help developers write code more efficiently and accurately, reducing the need to manually search for syntax or API usage.

Powerful Refactorings and Navigation

Refactoring code is an essential part of maintaining code quality and improving the design of an application. IntelliJ IDEA offers a wide range of powerful automated refactorings that can significantly speed up this process and reduce the risk of introducing errors. Marit demonstrates some of the most useful refactorings, such as renaming variables or methods, extracting methods or interfaces, and changing method signatures.

Seamless navigation within a codebase is also critical for understanding existing code and quickly jumping between different parts of the project. Marit highlights IntelliJ IDEA’s navigation features, such as jumping to declarations or usages, navigating through recent files and locations, and searching for symbols or files by name. These features allow developers to explore their codebase efficiently and find the information they need quickly.

Integrated Tools for a Comprehensive Workflow

Modern software development involves working with a variety of tools and technologies beyond just the code editor. IntelliJ IDEA integrates with a wide range of popular tools, providing a unified experience within the IDE. Marit van Dijk showcases how IntelliJ IDEA seamlessly integrates with:

  • Build Tools: Maven and Gradle for managing project dependencies and building applications.
  • Version Control Systems: Git and others for managing code changes and collaborating with team members.
  • Databases: Tools for connecting to databases, browsing schemas, writing and executing queries, and managing data.
  • Test Tools: Integration with testing frameworks like JUnit and TestNG for writing, running, and debugging tests.
  • Debugging: A powerful debugger for stepping through code, inspecting variables, and diagnosing issues.

By providing these integrated tools, IntelliJ IDEA allows developers to perform most of their tasks without leaving the IDE, minimizing context switching and improving productivity.

AI-Powered Assistance

In addition to its traditional features, IntelliJ IDEA is also incorporating AI-powered assistance to further enhance developer productivity. Marit touches upon features like the AI Assistant, which can provide code suggestions, generate documentation, and even explain complex code snippets.

She also touches on Junie, JetBrains’ coding agent, which can perform more complex coding tasks, such as generating boilerplate code or creating prototypes. These AI-powered features aim to automate repetitive tasks and provide developers with intelligent assistance throughout their workflow.

Conclusion: A Happier and More Productive Developer

Marit van Dijk concludes by reinforcing the message that leveraging the features of IntelliJ IDEA can make developers happier and more productive. By providing a smart editor, powerful refactorings, seamless navigation, integrated tools, and AI-powered assistance, the IDE helps developers stay in the flow, write better code, and focus on delivering value.

The talk encourages developers to explore the full potential of IntelliJ IDEA and customize it to fit their specific workflows. By making the most of the IDE’s capabilities, developers can significantly improve their efficiency and enjoy a more productive and fulfilling coding experience.

 

Hashtags: #DevoxxFR2025 #IntelliJIDEA #IDE #DeveloperProductivity #Java #Coding #Refactoring #Debugging #AI #JetBrains #MaritvanDijk

[Voxxed Amsterdam 2025] From Zero to AI: Building Smart Java or Kotlin Applications with Spring AI

At VoxxedDaysAmsterdam2025, Christian Tzolov, a Spring AI team member at VMware and lead of the MCP Java SDK, delivered a comprehensive session titled “From Zero to AI: Building Smart Java or Kotlin Applications with Spring AI.” Spanning nearly two hours, the session provided a deep dive into integrating generative AI into Java and Kotlin applications using Spring AI, a framework designed to connect enterprise data and APIs with AI models. Through live coding demos, Tzolov showcased practical use cases, including conversation memory, tool/function calling, retrieval-augmented generation (RAG), and multi-agent systems, while addressing challenges like AI hallucinations and observability. Attendees left with actionable insights to start building AI-driven applications, leveraging Spring AI’s portable abstractions and the Model Context Protocol (MCP).

Overcoming LLM Limitations with Spring AI

Tzolov began by outlining the challenges of large language models (LLMs): they are stateless, frozen in time, and lack domain-specific knowledge, requiring developers to provide context, manage state, and handle interactions with external systems. Spring AI addresses these issues with high-level abstractions like the ChatClient, similar to Spring’s RestClient or WebClient, enabling seamless integration with models like OpenAI’s GPT-4o, Anthropic’s Claude, or open-source alternatives like LLaMA. A live demo of a flight booking assistant illustrated these concepts. Tzolov started with a basic Spring Boot application connected to OpenAI, demonstrating a simple chat interface. To ground the model, he used system prompts to define its behavior as a customer support agent for “Fun Air,” ensuring contextually appropriate responses. He then introduced conversation memory using Spring AI’s ChatMemoryAdvisor, which retains a chronological list of messages to maintain state, addressing the stateless nature of LLMs. For long-term memory, Tzolov employed a vector store (Chroma) to store conversation history semantically, retrieving only relevant data for queries, thus overcoming context window limitations. This setup allowed the assistant to respond accurately to queries like “What is my flight status?” by fetching booking details (e.g., booking number 103) from a mock database.

Enhancing AI Applications with Tool Calling and RAG

To enable LLMs to interact with external systems, Tzolov demonstrated tool/function calling, where Spring AI wraps existing services (e.g., a flight booking service) as tools with metadata (name, description, JSON schema). In the demo, the assistant used a getBookingDetails tool to query a database, allowing it to provide accurate flight status updates. Tzolov emphasized the importance of descriptive tool metadata to guide the LLM in deciding when and how to invoke tools, reducing the risk of misinterpretation. For domain-specific knowledge, he introduced prompt stuffing—injecting additional context into prompts—and RAG for dynamic data retrieval. In a RAG demo, cancellation policies were loaded into a Chroma vector store, chunked into meaningful segments, and retrieved dynamically based on user queries. This approach mitigated hallucinations, as seen when the assistant correctly cited a 50% refund policy for premium economy bookings within 40 hours. Tzolov highlighted advanced RAG techniques, such as data compression and reranking, supported by Spring AI’s APIs, and stressed the importance of evaluating responses to ensure relevance, referencing frameworks like those from contributor Thomas Vitali.

Building Multi-Agent Systems with MCP

Tzolov explored the Model Context Protocol (MCP), initiated by Anthropic, as a standardized way to integrate AI applications with external tools and resources across platforms. Using Spring AI’s MCP Java SDK, he demonstrated how to build and consume MCP-compliant tools. In one demo, a Spring AI application connected to MCP servers for Brave Search (JavaScript-based) and file system access, enabling an agent to answer queries about Spring AI support for MCP and write summaries to a file. Another demo reversed the setup, exposing a Spring AI weather tool (using Open-Meteo) as an MCP server, accessible by third-party clients like Claude Desktop via standard I/O or HTTP/SSE transports. Tzolov explained MCP’s bidirectional architecture, where clients can act as servers, supporting features like sampling (allowing servers to request LLM processing from clients). He addressed security concerns, noting Spring AI’s integration with Spring Security (referencing a blog by Daniel Garnier-Moiroux) to secure MCP servers with OAuth 2.1. The session also introduced agentic systems, where LLMs act as a “brain” for planning and tools as a “body” for interaction, with an agentic loop evaluating and refining responses. A work-in-progress demo showcased an orchestration pattern, delegating tasks to searcher, fact-checker, and writer agents, to be published on the Spring AI Community Portal.

Observability and Multimodality for Robust AI Systems

Observability was a key focus, as Tzolov underscored its importance in debugging complex AI interactions. Spring AI integrates with Micrometer to provide metrics (e.g., token usage, model throughput, latency), tracing, and logging (via Loki). A dashboard demo displayed real-time metrics for the flight booking assistant, highlighting tool calls and errors, crucial for diagnosing issues in agentic systems. Tzolov also explored multimodality, demonstrating a voice assistant using OpenAI’s GPT-4o audio preview, which processes audio input and output. Configured as “Marvin the Paranoid Android,” the assistant responded to voice queries with humorous, contextually appropriate replies, showcasing Spring AI’s support for non-text modalities like images, PDFs, and videos (e.g., Gemini’s video support). Tzolov noted that multimodality enables richer interactions, such as analyzing images or converting PDFs to markdown, and Spring AI’s abstractions handle these seamlessly. He concluded by encouraging developers to explore Spring AI’s documentation, experiment with MCP, and contribute to the community, emphasizing its role in building robust, interoperable AI applications.

Hashtags: #SpringAI #GenerativeAI #ModelContextProtocol #ChristianTzolov #VoxxedDaysAmsterdam2025 #AIAgents #RAG #Observability

PostHeaderIcon [Voxxed Amsterdam 2025] How to Survive as a Developer in the Exponential Age of AI

In a dynamic and fast-paced session at VoxxedDaysAmsterdam2025, Sander Hoogendoorn, CTO at iBOOD.com, explores the transformative impact of artificial intelligence (AI) on software development. With over four decades of coding experience, Hoogendoorn demystifies the hype around AI, examining its practical benefits, challenges, and implications for developers. Far from signaling the end of programming careers, he argues that AI empowers developers to tackle broader and more complex problems—provided they adapt and remain committed to quality and lifelong learning.

AI: A Developer’s Ally

Hoogendoorn clarifies AI’s role in development, highlighting tools like Cursor AI, which can extract components, fix linting issues, and generate unit tests using natural language prompts. He recounts an experience where his non-technical colleague Elmo built an iOS app with Cursor AI, illustrating AI’s democratizing potential. However, Sander warns that such tools require supervision to ensure code quality and compliance with project standards. AI’s ability to automate repetitive tasks—such as refactoring or test generation—frees developers to focus on complex problem-solving. At iBOOD, AI has enabled the team to create content with a unique tone, retrieve competitor pricing, and automate invoice recognition—tasks that previously required external expertise or significant manual effort.

The rise of AI-assisted development—especially “vibe coding,” where problems are described in natural language to generate code—introduces new forms of technical debt. Hoogendoorn references Ward Cunningham’s metaphor of technical debt as a loan that accelerates development but demands repayment through refactoring. AI-generated code, while fast to produce, often lacks context or long-term maintainability. For instance, Cursor AI struggled to integrate with iBOOD’s custom markdown components, resulting in complex solutions that required manual fixes. Research suggests that AI can amplify technical debt if used without rigorous validation, emphasizing the need for developers to stay vigilant and prioritize code quality over short-term gains.

Thriving in an AI-Centric Future

Far from replacing developers, Sander Hoogendoorn asserts that AI enhances their capabilities, enabling them to tackle ambitious challenges. He reminds us that developers are not mere typists—they are problem-solvers who think critically and collaborate to meet business needs. Historical shifts, from COBOL to cloud computing, have always empowered developers to solve bigger problems, and AI is no exception. By thoughtfully experimenting with AI—integrating it into workflows for content creation, price retrieval, or invoice processing—Sander’s team at iBOOD has unlocked previously unreachable efficiencies. The key to thriving, he says, lies in relentless learning and a willingness to adapt, ensuring that developers remain indispensable in an AI-driven world.

Hashtags: #AI #SoftwareDevelopment #TechnicalDebt #CursorAI #iBOOD #SanderHoogendoorn #VoxxedDaysAmsterdam2025

PostHeaderIcon Understanding Kubernetes for Docker and Docker Compose Users

TL;DR

Kubernetes may look like an overly complicated version of Docker Compose, but it operates on a different level entirely. Where Compose excels at quick, local orchestration of containers, Kubernetes is a robust, distributed platform designed for automated scaling, fault tolerance, and production-grade deployments across multi-node clusters. This article provides a comprehensive comparison and shows how ArgoCD enhances GitOps-based Kubernetes workflows.


Docker Compose vs Kubernetes – Similarities and First Impressions

At a high level, Docker Compose and Kubernetes share similar concepts: containers, services, configuration, and volumes. This often leads to the assumption that Kubernetes is just a verbose, harder-to-write Compose replacement. However, Kubernetes is more than a runtime. It’s a control plane, a state manager, and a policy enforcer.

Concept mapping (Docker Compose vs. Kubernetes):

  • Service definition: docker-compose.yml vs. Deployment, Service, and other YAML manifests
  • Networking: shared bridge network with service discovery by name vs. cluster DNS, internal IPs, ClusterIP, NodePort, Ingress
  • Volume management: volumes: vs. PersistentVolume, PersistentVolumeClaim, StorageClass
  • Secrets and configs: .env and environment: vs. ConfigMap, Secret, ServiceAccount
  • Dependency management: depends_on vs. initContainers, readinessProbe, livenessProbe
  • Scaling: manual (--scale flag or duplicated services) vs. declarative replicas and automatic scaling via the Horizontal Pod Autoscaler (HPA)

Real-Life Use Cases – Docker Compose vs Kubernetes Examples

Tomcat + Oracle + MongoDB + NGINX Stack

Docker Compose


version: '3'
services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    depends_on:
      - tomcat

  tomcat:
    image: tomcat:9
    ports:
      - "8080:8080"
    environment:
      DB_URL: jdbc:oracle:thin:@oracle:1521:orcl

  oracle:
    image: oracle/database:19.3.0-ee
    environment:
      ORACLE_PWD: secretpass
    volumes:
      - oracle-data:/opt/oracle/oradata

  mongo:
    image: mongo:5
    volumes:
      - mongo-data:/data/db

volumes:
  oracle-data:
  mongo-data:

Kubernetes Equivalent

  • Each service becomes a Deployment and a Service.
  • Environment variables and passwords are stored in Secrets.
  • Volumes are defined with PVC and StorageClass.

apiVersion: v1
kind: Secret
metadata:
  name: oracle-secret
type: Opaque
data:
  ORACLE_PWD: c2VjcmV0cGFzcw==

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat:9
        ports:
        - containerPort: 8080
        env:
        - name: DB_URL
          value: jdbc:oracle:thin:@oracle:1521:orcl
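
To complete the mapping above, each Deployment is typically paired with a Service that gives the pods a stable, discoverable name. A minimal sketch (the name and labels are assumed to match the Deployment shown here):

```yaml
# ClusterIP Service exposing the Tomcat pods inside the cluster.
# Other pods (e.g., nginx) can then reach them at http://tomcat:8080
# via cluster DNS, mirroring Compose's discovery-by-service-name.
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  selector:
    app: tomcat        # must match the Deployment's pod labels
  ports:
  - port: 8080
    targetPort: 8080
```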

NodeJS + Express + MySQL + NGINX

Docker Compose


services:
  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: rootpass
    volumes:
      - mysql-data:/var/lib/mysql

  api:
    build: ./api
    environment:
      DB_USER: root
      DB_PASS: rootpass
      DB_HOST: mysql

  nginx:
    image: nginx:latest
    ports:
      - "80:80"

Kubernetes Equivalent


apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  MYSQL_ROOT_PASSWORD: cm9vdHBhc3M=
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: node-app:latest
        env:
        - name: DB_PASS
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: MYSQL_ROOT_PASSWORD
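
In the Compose file, NGINX publishes port 80 directly on the host; in Kubernetes that role is usually played by an Ingress routing to a Service. A minimal sketch (the hostname, service name, and port are illustrative):

```yaml
# Routes external HTTP traffic for api.example.com to the api Service.
# Assumes an ingress controller (e.g., ingress-nginx) is installed
# in the cluster, which replaces the hand-rolled nginx container.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api
            port:
              number: 3000
```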

⚙️ Docker Compose vs kubectl – Command Mapping

Task mapping (Docker Compose vs. kubectl):

  • Start services: docker-compose up -d vs. kubectl apply -f .
  • Stop/cleanup: docker-compose down vs. kubectl delete -f .
  • View logs: docker-compose logs -f vs. kubectl logs -f <pod-name>
  • Scale a service: docker-compose up --scale web=3 vs. kubectl scale deployment web --replicas=3
  • Shell into a container: docker-compose exec app sh vs. kubectl exec -it <pod-name> -- /bin/sh

ArgoCD – GitOps Made Practical

ArgoCD is a Kubernetes-native continuous deployment tool. It uses Git as the single source of truth, enabling declarative infrastructure and GitOps workflows.

✨ Key Features

  • Declarative sync of Git and cluster state
  • Drift detection and automatic repair
  • Multi-environment and multi-namespace support
  • CLI and Web UI available

Example ArgoCD Commands


argocd login argocd.myorg.com
argocd app create my-app \
  --repo https://github.com/org/app.git \
  --path k8s \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace production

argocd app sync my-app
argocd app get my-app
argocd app diff my-app

Sample ArgoCD Application Manifest


apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-api
spec:
  destination:
    namespace: default
    server: https://kubernetes.default.svc
  project: default
  source:
    path: k8s/app
    repoURL: https://github.com/org/api.git
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

✅ Conclusion

Docker Compose is perfect for prototyping and local dev. Kubernetes is built for cloud-native workloads, distributed systems, and high availability. ArgoCD makes declarative, Git-based continuous deployment simple, scalable, and observable.

PostHeaderIcon [Oracle Dev Days 2025] Optimizing Java Performance: Choosing the Right Garbage Collector

Jean-Philippe Bempel, a seasoned developer at Datadog and a Java Champion, delivered an insightful presentation on selecting and tuning Garbage Collectors (GCs) in OpenJDK to enhance Java application performance. His talk, rooted in practical expertise, unraveled the complexities of GCs, offering a roadmap for developers to align their choices with specific application needs. By dissecting the characteristics of various GCs and their suitability for different workloads, Jean-Philippe provided actionable strategies to optimize memory management, reduce production issues, and boost efficiency.

Understanding Garbage Collectors in OpenJDK

Garbage Collectors are pivotal in Java’s memory management, silently handling memory allocation and reclamation. However, as Jean-Philippe emphasized, a misconfigured GC can lead to significant performance bottlenecks in production environments. OpenJDK offers a suite of GCs—Serial GC, Parallel GC, G1, Shenandoah, and ZGC—each designed with distinct characteristics to cater to diverse application requirements. The challenge lies in selecting the one that best matches the workload, whether it prioritizes throughput or low latency.

Jean-Philippe began by outlining the foundational concepts of GCs, particularly the generational model. Most GCs in OpenJDK are generational, dividing memory into the Young Generation (for short-lived objects) and the Old Generation (for longer-lived objects). The Young Generation is further segmented into the Eden space, where new objects are allocated, and Survivor spaces, which hold objects that survive initial collections before promotion to the Old Generation. Additionally, the Metaspace stores class metadata, a critical but often overlooked component of memory management.

Serial GC: Simplicity for Constrained Environments

The Serial GC, one of the oldest collectors, operates with a single thread and employs a stop-the-world approach, pausing all application threads during collection. Jean-Philippe highlighted its suitability for small-scale applications, particularly those running in containers with less than 2 GB of RAM, where it serves as the default GC. Its simplicity makes it ideal for environments with limited resources, but its stop-the-world nature can introduce noticeable pauses, making it less suitable for latency-sensitive applications.
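
Selecting the Serial GC explicitly is a single flag; the heap bounds below are illustrative values for a small containerized workload:

```shell
# Force the Serial GC (the default anyway in small containers)
# and cap the heap for a memory-constrained environment.
java -XX:+UseSerialGC -Xms256m -Xmx512m -jar app.jar
```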

To illustrate, Jean-Philippe explained the mechanics of the Young Generation’s Survivor spaces. These spaces, S0 and S1, alternate roles as source and destination during minor GC cycles, copying live objects to manage memory efficiently. Objects surviving multiple cycles are promoted to the Old Generation, reducing the overhead of frequent collections. This generational approach leverages the hypothesis that most objects die young, minimizing the cost of memory reclamation.

Parallel GC: Maximizing Throughput

For applications prioritizing throughput, such as batch processing jobs, the Parallel GC offers significant advantages. Unlike the Serial GC, it leverages multiple threads to reclaim memory, making it efficient for systems with ample CPU cores. Jean-Philippe noted that it was the default GC through JDK 8 (G1 took over in JDK 9) and remains a strong choice for throughput-oriented workloads like Spark jobs, Kafka consumers, or ETL processes.

The Parallel GC, also stop-the-world, excels in scenarios where total execution time matters more than individual pause durations. Jean-Philippe shared a benchmark using a JFR (Java Flight Recorder) file parsing application, where Parallel GC outperformed others, achieving a throughput of 97% (time spent in application versus GC). By tuning the Young Generation size to reduce frequent minor GCs, developers can further minimize object copying, enhancing overall performance.
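
The tuning described here comes down to a few launch flags; the sizes below are illustrative, not prescriptive:

```shell
# Parallel GC with an enlarged Young Generation (-Xmn) to reduce
# the frequency of minor GCs and the copying of surviving objects.
java -XX:+UseParallelGC -Xmx4g -Xmn2g -jar batch-job.jar
```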

G1 GC: Balancing Throughput and Latency

The G1 (Garbage-First) GC, default since JDK 9 for heaps larger than 2 GB, strikes a balance between throughput and latency. Jean-Philippe described its region-based memory management, dividing the heap into smaller regions (Eden, Survivor, Old, and Humongous for large objects). This structure allows G1 to focus on regions with the most garbage, optimizing memory reclamation with minimal copying.

In his benchmark, G1 showed a throughput of 85%, with average pause times of 76 milliseconds, aligning with its target of 200 milliseconds. However, Jean-Philippe pointed out challenges with Humongous objects, which can increase GC frequency if not managed properly. By adjusting region sizes (up to 32 MB), developers can mitigate these issues, improving throughput for applications like batch jobs while maintaining reasonable pause times.
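
Both adjustments mentioned above map to standard G1 flags (values here are illustrative):

```shell
# Larger regions keep big allocations below the Humongous threshold
# (half a region); the pause target steers G1's collection effort.
java -XX:+UseG1GC -XX:G1HeapRegionSize=32m -XX:MaxGCPauseMillis=200 -jar app.jar
```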

Shenandoah and ZGC: Prioritizing Low Latency

For latency-sensitive applications, such as HTTP servers or microservices, Shenandoah and ZGC are the go-to choices. These concurrent GCs minimize pause times, often below a millisecond, by performing most operations alongside the running application. Jean-Philippe highlighted Shenandoah’s non-generational approach (though a generational version is in development) and ZGC’s generational support since JDK 21, making the latter particularly efficient for large heaps.

In a latency-focused benchmark using a Spring PetClinic application, Jean-Philippe demonstrated that Shenandoah and ZGC maintained request latencies below 200 milliseconds, significantly outperforming Parallel GC’s 450 milliseconds at the 99th percentile. ZGC’s use of colored pointers and load/store barriers ensures rapid memory reclamation, allowing regions to be freed early in the GC cycle, a key advantage over Shenandoah.
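
Enabling either collector is straightforward; flag names below are as of JDK 21 (in later releases generational ZGC becomes the default mode):

```shell
# Generational ZGC, available since JDK 21:
java -XX:+UseZGC -XX:+ZGenerational -Xmx8g -jar service.jar

# Shenandoah (non-generational in current releases):
java -XX:+UseShenandoahGC -Xmx8g -jar service.jar
```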

Tuning Strategies for Optimal Performance

Tuning GCs is as critical as selecting the right one. For Parallel GC, Jean-Philippe recommended sizing the Young Generation to reduce the frequency of minor GCs, ideally exceeding 50% of the heap to minimize object copying. For G1, adjusting region sizes can address Humongous object issues, while setting a maximum pause time target (e.g., 50 milliseconds) can shift its behavior toward latency sensitivity, though it may not compete with Shenandoah or ZGC in extreme cases.

For concurrent GCs like Shenandoah and ZGC, ensuring sufficient heap size and CPU cores prevents allocation stalls, where threads wait for memory to be freed. Jean-Philippe emphasized that Shenandoah requires careful heap sizing to avoid full GCs, while ZGC’s rapid region reclamation reduces such risks, making it more forgiving for high-allocation-rate applications.

Selecting the Right GC for Your Workload

Jean-Philippe concluded by categorizing workloads into two types: throughput-oriented and latency-sensitive. For throughput-oriented workloads, such as batch jobs or ETL processes, Parallel GC or G1 are optimal, with Parallel GC offering easier tuning for predictable performance. For latency-sensitive applications, like microservices or databases (e.g., Cassandra), ZGC’s generational efficiency and Shenandoah’s low-pause capabilities shine, with ZGC being particularly effective for large heaps.

By analyzing workload characteristics and leveraging tools like GC Easy for log analysis, developers can make informed GC choices. Jean-Philippe’s benchmarks underscored the importance of tailoring GC configurations to specific use cases, ensuring both performance and stability in production environments.
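
Tools like GC Easy work from unified JVM GC logs, which can be enabled with a flag along these lines:

```shell
# Unified logging: detailed GC events with timestamps, written to
# gc.log for offline analysis (e.g., uploading to GC Easy).
java -Xlog:gc*:file=gc.log:time,uptime,level,tags -jar app.jar
```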


Hashtags: #Java #GarbageCollector #OpenJDK #Performance #Tuning #Datadog #JeanPhilippeBempel #OracleDevDays2025


PostHeaderIcon [Oracle Dev Days 2025] From JDK 21 to JDK 25: Jean-Michel Doudoux on Java’s Evolution

Jean-Michel Doudoux, a renowned Java Champion and Sciam consultant, delivered a session, charting Java’s evolution from JDK 21 to JDK 25. As the next Long-Term Support (LTS) release, JDK 25 introduces transformative features that redefine Java development. Jean-Michel’s talk provided a comprehensive guide to new syntax, APIs, JVM enhancements, and security measures, equipping developers to navigate Java’s future with confidence.

Enhancing Syntax and APIs

Jean-Michel began by exploring syntactic improvements that streamline Java code. JEP 456 in JDK 22 introduces unnamed variables using _, improving clarity for unused variables. JDK 23’s JEP 467 adds Markdown support for Javadoc, easing documentation. In JDK 25, JEP 511 simplifies module imports, while JEP 512’s implicit classes and simplified main methods make Java more beginner-friendly. JEP 513 enhances constructor flexibility, enabling pre-constructor logic. These changes collectively minimize boilerplate, boosting developer efficiency.

Expanding Capabilities with New APIs

The session highlighted APIs that broaden Java’s scope. The Foreign Function & Memory API (JEP 454) enables safer native code integration, replacing sun.misc.Unsafe. Stream Gatherers (JEP 485) enhance data processing, while the Class-File API (JEP 484) simplifies bytecode manipulation. Scoped Values (JEP 506) improve concurrency with a lightweight alternative to thread-local variables. Jean-Michel’s practical examples demonstrated how these APIs empower developers to craft modern, robust applications.

Strengthening JVM and Security

Jean-Michel emphasized JVM and security advancements. JEP 472 in JDK 25 restricts native code access via --enable-native-access, enhancing system integrity. The deprecation of sun.misc.Unsafe aligns with safer alternatives. The removal of 32-bit support, the Security Manager, and certain JMX features reflects Java’s modern focus. Performance boosts in HotSpot JVM, Garbage Collectors (G1, ZGC), and startup times via Project Leyden (JEP 483) ensure Java’s competitiveness.

Boosting Productivity with Tools

Jean-Michel covered enhancements to Java’s tooling ecosystem, including upgraded Javadoc, JCMD, and JAR utilities, which streamline workflows. New Java Flight Recorder (JFR) events improve diagnostics. He urged developers to test JDK 25’s early access builds to prepare for the LTS release, highlighting how these tools enhance efficiency and scalability in application development.

Jean-Michel wrapped up by emphasizing JDK 25’s role as an LTS release with extended support. He encouraged proactive engagement with early access programs to adapt to new features and deprecations. His session offered a clear, actionable roadmap, empowering developers to leverage JDK 25’s innovations confidently. Jean-Michel’s expertise illuminated Java’s trajectory, inspiring attendees to embrace its evolving landscape.

Hashtags: #Java #JDK25 #LTS #JVM #Security #Sciam #JeanMichelDoudoux