PostHeaderIcon [DevoxxUK2025] Passkeys in Practice: Implementing Passwordless Apps

At DevoxxUK2025, Daniel Garnier-Moiroux, a Spring Security team member at VMware, delivered an engaging talk on implementing passwordless authentication using passkeys and the WebAuthn specification. Highlighting the security risks of traditional passwords, Daniel demonstrated how passkeys leverage cryptographic keys stored on devices like YubiKeys, Macs, or smartphones to provide secure, user-friendly login flows. Using Spring Boot 3.4’s new WebAuthn support, he showcased practical steps to integrate passkeys into an existing application, emphasizing phishing resistance and simplified user experiences. His live coding demo and insights into Spring Security’s configuration made this a compelling session for developers seeking modern authentication solutions.

The Problem with Passwords

Daniel opened by underscoring the vulnerabilities of passwords, often reused or poorly secured, leading to frequent breaches. He introduced passwordless alternatives, starting with one-time tokens (OTTs), which Spring Security supports for temporary login links sent via email. While effective, OTTs require cumbersome steps like copying tokens across devices. Passkeys, based on the WebAuthn standard, offer a superior solution by using cryptographic keys tied to specific domains, eliminating password-related risks. Supported by major browsers and platforms like Apple, Google, and Microsoft, passkeys enable seamless authentication via biometrics, PINs, or physical devices, combining convenience with robust security.

Understanding WebAuthn and Passkeys

Passkeys utilize asymmetric cryptography, where a private key remains on the user’s device (e.g., a YubiKey or iPhone) and a public key is shared with the server. Daniel explained the two-phase process: registration, where a key pair is generated and the public key is stored on the server, and authentication, where the server sends a challenge, the device signs it with the private key, and the server verifies it. This ensures phishing resistance, as keys are domain-specific and cannot be used on fraudulent sites. WebAuthn, a W3C standard backed by the FIDO Alliance, simplifies this process for developers by abstracting complex cryptography through browser APIs like navigator.credentials.create() and navigator.credentials.get().
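
The challenge-response core of this flow can be sketched in a few lines of Python. This is an illustration only, using the third-party `cryptography` package and Ed25519 as a stand-in for the authenticator's key pair; real WebAuthn adds origin binding, attestation, counters, and CBOR encoding on top of this idea:

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the authenticator generates a key pair.
private_key = Ed25519PrivateKey.generate()   # never leaves the device
public_key = private_key.public_key()        # sent to and stored by the server

# Authentication: the server sends a random challenge...
challenge = os.urandom(32)
# ...the device signs it with the private key...
signature = private_key.sign(challenge)
# ...and the server verifies the signature with the stored public key.
public_key.verify(signature, challenge)      # raises InvalidSignature on mismatch
print("challenge verified")
```

Because the server only ever holds the public key, a database breach leaks nothing an attacker can log in with.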

Integrating Passkeys with Spring Security

Using a live demo, Daniel showed how to integrate passkeys into a Spring Boot 3.4 application. He added the spring-security-webauthn dependency and configured a security setup with the application name, relying party (RP) ID (e.g., localhost), and allowed origins. This minimal configuration enables a default passkey login page. For persistence, Spring Security 6.5 (releasing soon after the talk) offers JDBC support, requiring two tables: one for user credentials (storing public keys and metadata) and another linking passkeys to users. Daniel emphasized that Spring Security handles cryptographic validation, sparing developers from implementing complex WebAuthn logic manually.

Customizing the Passkey Experience

To enhance user experience, Daniel demonstrated creating a custom login page with a branded “Sign in with Passkey” button, styled with CSS (featuring a comic sans font for humor). He highlighted the need for JavaScript to interact with WebAuthn APIs, copying Spring Security’s Apache-licensed sample code for authentication flows. This involves handling CSRF tokens and redirecting users post-authentication. While minimal Java code is needed, developers must write some JavaScript to trigger browser APIs. Daniel advised using Spring Security’s defaults for simplicity but encouraged customization for production apps, ensuring alignment with brand aesthetics.

Practical Considerations and Feedback

Daniel stressed that passkeys are not biometric data but cryptographic credentials, synced across devices via password managers or iCloud Keychain without server involvement. For organizations using identity providers like Keycloak or Azure Entra ID, passkey support is often a checkbox configuration, reducing implementation effort. He encouraged developers to provide feedback on Spring Security’s passkey support via GitHub issues, emphasizing community contributions to refine features. For those interested in deeper WebAuthn mechanics, he recommended Yubico’s developer guide over the dense W3C specification, offering practical insights for implementation.

PostHeaderIcon [DefCon32] DEF CON 32: Finding & Exploiting Local Attacks on 1Password Mac Desktop App

J. Hoffman and Colby Morgan, offensive security engineers at Robinhood, delivered a compelling presentation at DEF CON 32, exploring vulnerabilities in the 1Password macOS desktop application. Focusing on the risks posed by compromised endpoints, they unveiled multiple attack vectors to dump local vaults, exposing weaknesses in 1Password’s software architecture and IPC mechanisms. Their research, blending technical rigor with practical demonstrations, offered critical insights into securing password managers against local threats.

Probing 1Password’s Security Assumptions

J. and Colby opened by highlighting the immense trust users place in password managers like 1Password, which safeguard sensitive credentials. They posed a critical question: how secure are these credentials if a device is compromised? Their research targeted the macOS application, uncovering vulnerabilities that could allow attackers to access vaults. By examining 1Password’s reliance on inter-process communication (IPC) and open-source components, they revealed how seemingly robust encryption fails under local attacks, setting the stage for their detailed findings.

Exploiting Application Vulnerabilities

The duo detailed several vulnerabilities, including an XPC validation bypass that enabled unauthorized access to 1Password’s processes. Their live demonstrations showcased how attackers could exploit these flaws to extract vault data, even on locked systems. They also identified novel bugs in Google Chrome’s interaction with 1Password’s browser extension, amplifying the attack surface. J. and Colby’s meticulous approach, including proof-of-concept scripts released on Morgan’s GitHub, underscored the need for robust validation in password manager software.

Mitigating Local Threats

Addressing mitigation, J. and Colby recommended upgrading to the latest 1Password versions, noting fixes in versions 8.10.18 and 8.10.36 for their disclosed issues. They urged organizations to enhance endpoint security, emphasizing that password managers are prime targets for red teamers seeking cloud credentials or API keys. Their findings, developed over a month of intensive research, highlighted the importance of proactive patching and monitoring to safeguard sensitive data on compromised devices.

Engaging the Security Community

Concluding, J. and Colby encouraged the DEF CON community to extend their research to other password managers, noting that similar vulnerabilities likely exist. They shared their code to inspire further exploration and emphasized responsible disclosure, having worked with 1Password to address the issues. Their call to action invited attendees to collaborate on improving password manager security, reinforcing the collective effort needed to protect critical credentials in an era of sophisticated local attacks.

PostHeaderIcon [OxidizeConf2024] Panel: What Has to Change to Increase Rust Adoption in Industrial Companies?

Overcoming Barriers to Adoption

The promise of memory-safe programming has positioned Rust as a leading candidate for industrial applications, yet its adoption in traditional sectors faces significant hurdles. At OxidizeConf2024, a panel moderated by Florian Gilcher from Ferrous Systems, featuring Michał Fita, James Munns from OneVariable UG, and Steve Klabnik from Oxide Computer Company, explored strategies to enhance Rust’s uptake in industrial companies. The discussion, enriched by audience questions, addressed cultural, technical, and managerial barriers, offering actionable insights for developers and organizations.

Michał highlighted the managerial perspective, noting that industrial companies often prioritize stability and cost over innovation. The challenge lies in convincing decision-makers of Rust’s benefits, particularly when hiring skilled Rust developers is difficult. James added that industrial users, rooted in mechanical and electrical engineering, are less likely to share challenges publicly, complicating efforts to gauge their needs. Florian emphasized the role of initiatives like Ferrocene, a safety-compliant Rust compiler, in opening doors to regulated industries like automotive.

Technical and Cultural Shifts

Steve underscored Rust’s technical advantages, such as memory safety and concurrency guarantees, which align with regulatory pressures from organizations like the NSA advocating for memory-safe languages. However, he cautioned against framing Rust as a direct replacement for C++, which risks alienating skilled C++ developers. Instead, the panel advocated for a collaborative approach, highlighting Rust’s total cost of ownership benefits—fewer bugs, faster debugging, and improved maintainability. James noted that tools like cargo-deny and cargo-tree enhance security and dependency management, addressing industrial concerns about reliability.

Cultural resistance also plays a role, particularly in companies reliant on trade secrets. Michał pointed out that Rust’s open-source ethos can clash with proprietary mindsets, requiring tailored strategies to demonstrate value. The panel suggested focusing on high-impact areas, such as safety-critical components, where Rust’s guarantees provide immediate benefits. By integrating Rust incrementally, companies can leverage existing C++ codebases while transitioning to safer, more modern practices.

Engaging Stakeholders and Building Community

Convincing stakeholders requires a nuanced approach, avoiding dismissive rhetoric about legacy languages. Florian stressed the importance of meeting developers where they are, respecting the expertise of C++ practitioners while showcasing Rust’s practical advantages. Steve highlighted successful adoptions, such as Oxide’s server stack, as case studies to inspire confidence. The panel also discussed the role of community efforts, such as the Rust Foundation, in providing resources and certifications to ease adoption.

Audience input reinforced the need for positive messaging. A C++ developer cautioned against framing Rust as a mandate driven by external pressures, advocating for dialogue that emphasizes mutual benefits. The panel agreed, suggesting that events like OxidizeConf and open-source contributions can bridge gaps between communities, fostering collaboration. By addressing technical, cultural, and managerial challenges, Rust can gain traction in industrial settings, driving innovation without discarding legacy expertise.

PostHeaderIcon [DevoxxFR2025] Dagger Modules: A Swiss Army Knife for Modern CI/CD Pipelines

Continuous Integration and Continuous Delivery (CI/CD) pipelines are the backbone of modern software development, automating the process of building, testing, and deploying applications. However, as these pipelines grow in complexity, they often become difficult to maintain, debug, and port across different execution platforms, frequently relying on verbose and platform-specific YAML configurations. Jean-Christophe Sirot, in his presentation, introduced Dagger as a revolutionary approach to CI/CD, allowing pipelines to be written as code, executable locally, testable, and portable. He explored Dagger Functions and Dagger Modules as key concepts for creating and sharing reusable, language-agnostic components for CI/CD workflows, positioning Dagger as a versatile “Swiss Army knife” for modernizing these critical pipelines.

The Pain Points of Traditional CI/CD

Jean-Christophe began by outlining the common frustrations associated with traditional CI/CD pipelines. Relying heavily on YAML or other declarative formats for defining pipelines can lead to complex, repetitive, and hard-to-read configurations, especially for intricate workflows. Debugging failures within these pipelines is often challenging, requiring pushing changes to a remote CI server and waiting for the pipeline to run. Furthermore, pipelines written for one CI platform (like GitHub Actions or GitLab CI) are often not easily transferable to another, creating vendor lock-in and hindering flexibility. This dependency on specific platforms and the difficulty in managing complex workflows manually are significant pain points for development and DevOps teams.

Dagger: CI/CD as Code

Dagger offers a fundamentally different approach by treating CI/CD pipelines as code. It allows developers to write their pipeline logic using familiar programming languages (like Go, Python, Java, or TypeScript) instead of platform-specific configuration languages. This brings the benefits of software development practices – such as code reusability, modularity, testing, and versioning – to CI/CD. Jean-Christophe explained that Dagger executes these pipelines using containers, ensuring consistency and portability across different environments. The Dagger engine runs the pipeline logic, orchestrates the necessary container operations, and manages dependencies. This allows developers to run and debug their CI/CD pipelines locally using the same code that will execute on the remote CI platform, significantly accelerating the debugging cycle.
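
To make the "pipelines as code" idea concrete, here is a deliberately simplified sketch in plain Python. It is not Dagger's actual SDK (which orchestrates real containers); it only illustrates the structural point that each stage becomes an ordinary function you can compose, refactor, and unit test:

```python
# Illustrative only: pipeline stages as plain functions, not Dagger's real API.
def compile_sources(sources: list[str]) -> list[str]:
    """Pretend-compile each source file into an artifact name."""
    return [src.replace(".java", ".class") for src in sources]

def run_tests(artifacts: list[str]) -> bool:
    """Pretend test stage: succeeds when there is something to test."""
    return len(artifacts) > 0

def package(artifacts: list[str]) -> str:
    """Pretend packaging stage producing a single deliverable."""
    return f"app.jar ({len(artifacts)} classes)"

def pipeline(sources: list[str]) -> str:
    """Compose the stages, exactly as you would compose ordinary code."""
    artifacts = compile_sources(sources)
    if not run_tests(artifacts):
        raise RuntimeError("tests failed")
    return package(artifacts)

print(pipeline(["Main.java", "Util.java"]))  # prints: app.jar (2 classes)
```

Because each stage is just a function, the whole pipeline runs identically on a laptop and on a CI server, which is the property Dagger generalizes with containers.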

Dagger Functions and Modules

Key to Dagger’s power are Dagger Functions and Dagger Modules. Jean-Christophe described Dagger Functions as the basic building blocks of a pipeline – functions written in a programming language that perform specific CI/CD tasks (e.g., building a Docker image, running tests, deploying an application). These functions interact with the Dagger engine to perform container operations.

Dagger Modules are collections of related Dagger Functions that can be packaged and shared. Modules allow teams to create reusable components for common CI/CD patterns or specific technologies, effectively creating a library of CI/CD capabilities. For example, a team could create a “Java Build Module” containing functions for compiling Java code, running Maven or Gradle tasks, and building JAR or WAR files. These modules can be easily imported and used in different projects, promoting standardization and reducing duplication across an organization’s CI/CD workflows.

Jean-Christophe demonstrated how to create and use Dagger Modules, illustrating their potential for building composable and maintainable pipelines. He highlighted that Dagger’s language independence means that modules can be written in one language (e.g., Python) and used in a pipeline defined in another (e.g., Java), fostering collaboration between teams with different language preferences.

The Benefits: Composable, Maintainable, Portable

By adopting Dagger, teams can create CI/CD pipelines that are:
Composable: Pipelines can be built by combining smaller, reusable Dagger Modules and Functions.
Maintainable: Pipelines written as code are easier to read, understand, and refactor using standard development tools and practices.
Portable: Pipelines can run on any platform that supports Dagger and containers, eliminating vendor lock-in.
Testable: Individual Dagger Functions and modules can be unit tested, and the entire pipeline can be run and debugged locally.

Jean-Christophe’s presentation positioned Dagger as a versatile tool that modernizes CI/CD by bringing the best practices of software development to pipeline automation. The ability to write pipelines in code, leverage reusable modules, and execute locally makes Dagger a powerful “Swiss Army knife” for developers and DevOps engineers seeking more efficient, reliable, and maintainable CI/CD workflows.

PostHeaderIcon [AWSReInforce2025] How AWS’s global threat intelligence transforms cloud protection (SEC302)

Lecturer

The presentation features AWS security leadership and engineering experts who architect the global threat intelligence platform. Their collective expertise spans distributed systems, machine learning, and real-time security operations across AWS’s planetary-scale infrastructure.

Abstract

The session examines AWS’s threat intelligence lifecycle—from sensor deployment through data processing to automated disruption—demonstrating how global telemetry volume enables precision defense at scale. It reveals the architectural patterns and machine learning models that convert billions of daily security events into actionable mitigations, establishing security as a reliability function within the shared responsibility model.

Global Sensor Network and Telemetry Foundation

AWS operates the world’s largest sensor network for security telemetry, spanning every Availability Zone, edge location, and service endpoint. This includes hypervisor introspection, network flow logs, DNS query monitoring, and host-level signals from EC2 instances. The scale is staggering: thousands of potential security events are blocked daily before customer impact, derived from petabytes of raw telemetry.

Sensors are purpose-built for specific threat classes. Network sensors detect C2 beaconing patterns; host sensors identify cryptominer process trees; DNS sensors flag domain generation algorithms. This layered approach ensures coverage across the attack lifecycle—from reconnaissance through exploitation to persistence.

Data Processing Pipeline and Intelligence Generation

Raw telemetry flows through a multi-stage pipeline. First, deterministic rules filter known bad indicators—IP addresses from botnet controllers, certificate hashes of phishing kits. Surviving events enter machine learning models trained on historical compromise patterns.

The models operate in two modes: supervised classification for known attack families, and unsupervised anomaly detection for zero-day behaviors. Feature engineering extracts behavioral fingerprints—process lineage entropy, network flow burstiness, file system access velocity. Models refresh hourly using federated learning across regions, preventing single-point compromise.
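
As a toy illustration of the unsupervised side (this is not AWS's actual model, just the underlying idea), a minimal z-score detector over a synthetic "flow burstiness" series flags the point that deviates sharply from the baseline:

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=2.5):
    """Return indices whose z-score exceeds the threshold."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Synthetic "network flow burstiness" telemetry: mostly steady, one spike.
telemetry = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 1.1, 9.5, 1.0, 0.9]
print(zscore_anomalies(telemetry))  # the spike at index 7 is flagged: [7]
```

Production systems replace this single statistic with learned, multi-dimensional features, but the shape of the decision is the same: score a behavior against a baseline and act only above a confidence threshold.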

Intelligence quality gates require precision above 99.9% to minimize false positives. When confidence thresholds are met, signals become actionable intelligence with metadata: actor attribution, campaign identifiers, TTP mappings to MITRE ATT&CK.

Automated Disruption and Attacker Cost Imposition

Intelligence drives automated responses through three mechanisms. First, infrastructure-level blocks: malicious IPs are null-routed at the network edge within seconds. Second, service-level mitigations: compromised credentials trigger forced password rotation and session termination. Third, customer notifications via GuardDuty findings with remediation playbooks.

The disruption philosophy focuses on increasing attacker cost. By blocking C2 infrastructure early, campaigns lose command visibility. By rotating compromised keys rapidly, lateral movement becomes expensive. By publishing indicators publicly, defenders globally benefit from AWS’s visibility.

Shared Outcomes in the Responsibility Model

The shared responsibility model extends to outcomes, not just controls. AWS secures the cloud—hypervisors, network fabric, physical facilities—while customers secure their workloads. Threat intelligence bridges this divide: AWS’s global view detects campaigns targeting multiple customers, enabling proactive protection before individual compromise.

This manifests in services like Shield Advanced, which absorbs DDoS attacks at the network perimeter, and Macie, which identifies exposed PII across S3 buckets. Customers focus on application logic—input validation, business rule enforcement—while AWS handles undifferentiated heavy lifting.

Machine Learning at Security Scale

Scaling threat intelligence requires automation beyond human capacity. Data scientists build models that generalize across attack variations while maintaining low false positive rates. Techniques include:

  • Graph neural networks to detect credential abuse chains
  • Time-series analysis for cryptominer thermal signatures
  • Natural language processing on phishing email corpora

Model interpretability ensures security analysts can validate decisions. Feature importance rankings and counterfactual examples explain why a particular IP was blocked, maintaining operational trust.

Operational Integration and Customer Impact

Intelligence integrates into customer-facing services seamlessly. GuardDuty consumes the same models used internally, surfacing findings with evidence packages. Security Hub centralizes signals from AWS and partner solutions. WAF rulesets update automatically with emerging threat patterns.

The impact compounds: a campaign targeting one customer is disrupted globally. A novel malware strain detected in one region triggers protections everywhere. This network effect makes the internet safer collectively.

Conclusion: Security as Reliability Engineering

Threat intelligence at AWS scale transforms security from reactive defense to proactive reliability engineering. By investing in sensors, processing, and automation, AWS prevents disruptions before they affect customer operations. The shared outcomes model—where infrastructure protection enables application innovation—creates a virtuous cycle: more secure workloads generate better telemetry, improving intelligence quality, which prevents more disruptions.

PostHeaderIcon [DefCon32] DEF CON 32: Laundering Money

Michael Orlitzky, a multifaceted security researcher and mathematician, captivated the DEF CON 32 audience with a provocative presentation on bypassing payment mechanisms in CSC ServiceWorks’ pay-to-play laundry machines. By exploiting physical vulnerabilities in Speed Queen washers and dryers, Michael demonstrated how to run these machines without payment, framing his actions as a response to CSC’s exploitative practices. His talk, rich with technical detail and humor, shed light on the intersection of physical security and consumer frustration, urging attendees to question predatory business models.

Uncovering CSC’s Predatory Practices

Michael began by introducing CSC ServiceWorks, a major provider of coin- and app-operated laundry machines in residential buildings. He detailed their business model, which charges tenants for laundry despite rent covering utilities, often trapping users with non-refundable prepaid cards or unreliable apps like CSC GO. Michael recounted personal grievances, such as machines eating quarters or failing to deliver services, supported by widespread customer complaints citing CSC’s poor maintenance and refund processes. His narrative positioned CSC as a corporate antagonist, justifying his exploration of hardware bypasses as a form of reclaiming fairness.

Bypassing Coin Slots with Hardware Hacks

Delving into the technical core, Michael explained how to access the service panels of CSC-branded Speed Queen machines, which use standardized keys available online. By short-circuiting red and black wires in the coin-drop mechanism, he tricked the machine into registering payment, enabling free cycles without damage. His live demonstration, complete with safety warnings about grounding and electrical risks, showcased the simplicity of the bypass—achievable in seconds with minimal tools. Michael’s approach, detailed on his personal website, emphasized accessibility, requiring only determination and basic equipment.

Addressing CSC’s Security Upgrades

Michael also addressed CSC’s response to his findings, noting that days before DEF CON 32, the company upgraded his building’s machines with new tubular locks and security Torx screws. Undeterred, he demonstrated how to bypass these using a tubular lockpick or a flathead screwdriver, highlighting CSC’s superficial fixes. His candid tone and humorous defiance—acknowledging the machines’ internet-connected logs—underscored the low risk of repercussions, as CSC’s focus on profit over maintenance left such vulnerabilities unaddressed. This segment reinforced the talk’s theme of exploiting systemic flaws in poorly secured systems.

Ethical Implications and Community Call

Concluding, Michael framed his work as a protest against CSC’s exploitative practices, encouraging attendees to consider the ethics of bypassing systems that exploit consumers. He shared resources, including manuals and his write-up, to empower others while cautioning about legal risks. His talk sparked reflection on the balance between technical ingenuity and corporate accountability, urging the DEF CON community to challenge predatory systems through informed action.

PostHeaderIcon [DotAI2024] DotAI 2024: Maxim Zaks – Mojo: Beyond Buzz, Toward a Systems Symphony

Maxim Zaks, polymath programmer from IDEs to data ducts, and FlatBuffers’ fleet-footed forger, interrogated Mojo’s mettle at DotAI 2024. As Mojo’s communal curator—contributing to its canon sans corporate crest—Zaks, unyoked to Modular, affirmed its ascent: not ephemeral éclat, but enduring edifice for AI artisans and systems smiths alike.

Echoes of Eras: From Procedural Progenitors to Pythonic Prodigies

Zaks zested with zeitgeist: Married… with Children’s clan conjuring C’s centrality, Smalltalk’s sparkle, BASIC’s benevolence—80s archetypes amid enterprise esoterica. Fast-forward: Java’s juggernaut, Python’s pliant poise—yet performance’s plaint persists, Python’s pyrotechnics paling in precision’s precinct.

Mojo manifests as meld: Python’s patois, systems’ sinew—superset sans schism, scripting’s suavity fused with C’s celerity. Zaks zinged its zygote: 2023’s stealthy spawn, Howard’s herald as “decades’ dawn”—now TIOBE’s 48th, browser-bound for barrierless baptism.

Empowering Engineers: From Syntax to SIMD

Zaks zoomed to zealots: high-performance heralds harnessing SIMD sorcery, data designs deftly dispatched—SIMD intrinsics summoning speedups sans syntax strain. Mojo’s mantle: multithreading’s mastery, inline ML’s alchemy—CPUs as canvases, GPUs on horizon.

For non-natives, Zaks zapped a prefix-sum parable: prosaic Python plodding, Mojo’s baseline brisk, SIMD’s spike surging eightfold—arcane accessible, sans secondary syntaxes like Zig’s ziggurats or Rust’s runes.
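
For reference, the prefix-sum kernel behind that parable, shown here in plain Python rather than Mojo (the vectorized Mojo version applies the same accumulation across SIMD lanes instead of one element at a time):

```python
from itertools import accumulate

def prefix_sum(values):
    """Running totals: out[i] = values[0] + ... + values[i]."""
    out, running = [], 0
    for v in values:
        running += v
        out.append(running)
    return out

data = [1, 2, 3, 4]
assert prefix_sum(data) == list(accumulate(data))
print(prefix_sum(data))  # [1, 3, 6, 10]
```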

Community’s crucible: inclusive incubator, tools transcendent—VS Code’s vassal, REPL’s rapture. Zaks’ zest: Mojo’s mirthful meld, where whimsy weds wattage, inviting idiomatic idioms.

In finale, Zaks flung a flourish: browser beckons at mojo.modular.com—forge futures, unfettered.

PostHeaderIcon Enabling and Using the WordPress REST API on OVH Hosting

I recently started migrating my WordPress site from Free.fr to OVHcloud hosting. The migration is still in progress, but along the way I needed to enable and validate
programmatic publishing through the WordPress REST API (RPC/API calls). This post documents the full process end-to-end, including OVH-specific gotchas and troubleshooting.

My last migration was many years ago, from DotClear 2 to WordPress…

Why move from Free.fr to OVH?

  • Performance: More CPU/RAM and faster PHP execution make WordPress snappier.
  • Modern PHP: Current PHP versions and extensions are available and easy to select.
  • HTTPS (SSL): Essential for secure logins and required for Application Passwords.
  • Better control: You can tweak .htaccess, install custom/mu-plugins, and adjust config.
  • Scalability: Easier to upgrade plans and resources as your site grows.

What is the WordPress REST API?

WordPress ships with a built-in REST API at /wp-json/. It lets you read and write content, upload media, and automate publishing from scripts or external systems (curl, Python, Node.js, CI, etc.).

Step 1 — Confirm the API is reachable

  1. Open https://yourdomain.com/wp-json/ in a browser. You should see a JSON index of routes.
  2. Optional: check https://yourdomain.com/wp-json/wp/v2 or
    https://yourdomain.com/wp-json/wp/v2/types/post to view available endpoints and fields.

Step 2 — Enable authentication with Application Passwords

  1. Sign in to /wp-admin/ with a user that can create/publish posts.
  2. Go to Users → Profile (your profile page).
  3. In Application Passwords, enter a name for the new password (e.g., “API access from laptop”) and add it.
  4. Copy the generated password, which looks like ABCD EFgh IjKl M123 n951 (spaces included). You’ll only see it once; keep it secure.

You will authenticate via HTTP Basic Auth using username:application-password over HTTPS.
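
For scripted clients, that header can also be built by hand. This Python sketch (with hypothetical credentials) produces exactly the Basic value that curl's `-u` flag sends:

```python
import base64

def basic_auth_header(username: str, app_password: str) -> dict:
    """Build the HTTP Basic Auth header WordPress checks for Application Passwords."""
    token = base64.b64encode(f"{username}:{app_password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Hypothetical credentials; the spaces WordPress puts in the password are fine as-is.
print(basic_auth_header("alice", "ABCD EFgh IjKl M123"))
```

Passing this dict as the headers of any HTTP client request is equivalent to `curl -u 'USERNAME:APP_PASSWORD'`.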

Step 3 — Test authentication (curl)

Replace the placeholders before running:

curl -i -u 'USERNAME:APP_PASSWORD' \
  https://yourdomain.com/wp-json/wp/v2/users/me

Expected result: 200 OK with your user JSON. If you get 401 or 403, see Troubleshooting below.

Important on OVH — The Authorization header may be stripped

On some OVH hosting configurations, the HTTP Authorization header isn’t passed to PHP.
If that happens, WordPress cannot see your Application Password and responds with:

{"code":"rest_not_logged_in","message":"You are not currently logged in.","data":{"status":401}}

To confirm you’re sending the header, try explicitly setting it:

curl -i -H "Authorization: Basic $(echo -n 'USERNAME:APP_PASSWORD' | base64)" \
  https://yourdomain.com/wp-json/wp/v2/users/me

If you still get 401, fix the server so PHP receives the header.

Step 4 — Fixing Authorization headers on OVH

Option A — Add rules to .htaccess

Connect via FTP, browse to the www folder, and edit the .htaccess file. Add these lines above the “BEGIN WordPress” block:

<IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
</IfModule>

<IfModule mod_setenvif.c>
    SetEnvIf Authorization "(.*)" HTTP_AUTHORIZATION=$1
</IfModule>

Option B — Tiny must-use plugin

Create wp-content/mu-plugins/ if missing, then add fix-authorization.php:

<?php
/**
 * Plugin Name: Fix Authorization Header
 * Description: Ensures HTTP Authorization header is passed to WordPress for Application Passwords.
 */
add_action('init', function () {
    if (!isset($_SERVER['HTTP_AUTHORIZATION'])) {
        if (isset($_SERVER['REDIRECT_HTTP_AUTHORIZATION'])) {
            $_SERVER['HTTP_AUTHORIZATION'] = $_SERVER['REDIRECT_HTTP_AUTHORIZATION'];
        } elseif (function_exists('apache_request_headers')) {
            $headers = apache_request_headers();
            if (isset($headers['Authorization'])) {
                $_SERVER['HTTP_AUTHORIZATION'] = $headers['Authorization'];
            }
        }
    }
});

Upload and reload: authentication should now succeed.

Step 5 — Create and publish a complete post via API

Optional: create a category and a tag

# Create a category
curl -i -X POST \
  -u 'USERNAME:APP_PASSWORD' \
  -H "Content-Type: application/json" \
  -d '{ "name": "Tech" }' \
  https://yourdomain.com/wp-json/wp/v2/categories

# Create a tag
curl -i -X POST \
  -u 'USERNAME:APP_PASSWORD' \
  -H "Content-Type: application/json" \
  -d '{ "name": "API" }' \
  https://yourdomain.com/wp-json/wp/v2/tags

Upload a featured image

curl -i -X POST \
  -u 'USERNAME:APP_PASSWORD' \
  -H "Content-Disposition: attachment; filename=header.jpg" \
  -H "Content-Type: image/jpeg" \
  --data-binary @/full/path/to/header.jpg \
  https://yourdomain.com/wp-json/wp/v2/media

Note the returned MEDIA_ID.

Create and publish the post

curl -i -X POST \
  -u 'USERNAME:APP_PASSWORD' \
  -H "Content-Type: application/json" \
  -d '{
        "title": "Hello from the API",
        "content": "<p>Created automatically 🚀</p>",
        "status": "publish",
        "categories": [CAT_ID],
        "tags": [TAG_ID],
        "featured_media": MEDIA_ID
      }' \
  https://yourdomain.com/wp-json/wp/v2/posts
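The same trick captures the new post's ID so it doesn't have to be copied manually; a sketch with the same placeholder credentials, assuming python3 is available:

```shell
# Create the post and keep its ID for later updates.
POST_ID=$(curl -s -X POST \
  -u 'USERNAME:APP_PASSWORD' \
  -H "Content-Type: application/json" \
  -d '{ "title": "Hello from the API", "status": "publish" }' \
  https://yourdomain.com/wp-json/wp/v2/posts \
  | python3 -c 'import sys, json; print(json.load(sys.stdin)["id"])')
echo "POST_ID=$POST_ID"
```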

Optionally update excerpt or slug

POST_ID=REPLACE_WITH_ID

curl -i -X POST \
  -u 'USERNAME:APP_PASSWORD' \
  -H "Content-Type: application/json" \
  -d '{ "excerpt": "Short summary", "slug": "hello-from-the-api" }' \
  https://yourdomain.com/wp-json/wp/v2/posts/$POST_ID

Troubleshooting

  • 401 Unauthorized / rest_not_logged_in: The Authorization header isn’t reaching PHP. Add the .htaccess rules or the mu-plugin above, then re-test with
    -H "Authorization: Basic …".
  • 403 Forbidden: The user lacks the required capabilities (e.g., Authors can’t publish others’ posts). Use "status":"draft" or publish as an Editor/Admin.
  • Media upload fails: Check upload_max_filesize, post_max_size, and file permissions. Try a smaller file to isolate the issue.
  • Categories/Tags not applied: Use numeric IDs, not names. Fetch them from /wp-json/wp/v2/categories and /wp-json/wp/v2/tags.
  • Permalinks: Prefer non-Plain permalinks. If using Plain, call endpoints via the fallback
    https://yourdomain.com/?rest_route=/wp/v2/posts.

Conclusion

Moving from Free.fr to OVH brings better performance, modern PHP, and full HTTPS, which is perfect for automation and scheduling.
After ensuring the Authorization header reaches WordPress (via .htaccess or a tiny mu-plugin), the REST API works smoothly for creating posts, uploading media, and managing taxonomies.
My migration is still ongoing, but having a reliable API in place is already a big win.

PostHeaderIcon [OxidizeConf2024] The Fullest Stack by Anatol Ulrich

Embracing Rust’s Cross-Platform Versatility

Rust’s ability to operate seamlessly across diverse platforms—from embedded devices to web applications—positions it as a uniquely versatile language for full-stack development. At OxidizeConf2024, Anatol Ulrich, a freelance developer with extensive experience in web, mobile, and embedded systems, presented a compelling vision of the “fullest stack” built entirely in Rust. Anatol’s talk demonstrated how Rust’s “write once, compile anywhere” philosophy enables low-friction, vertically integrated projects, spanning embedded devices, cloud services, and web or native clients.

Anatol showcased a project integrating a fleet of embedded devices with a cloud backend and a web-based UI, all written in Rust. Using the postcard crate for serialization and remote procedure calls (RPC), he achieved seamless communication between an STM32 microcontroller and a browser via WebUSB. This setup allowed real-time data exchange, demonstrating Rust’s ability to unify disparate platforms. Anatol’s approach leverages Rust’s type safety and zero-cost abstractions, ensuring robust performance across the stack while minimizing development complexity.

Streamlining Development with Open-Source Tools

A key aspect of Anatol’s presentation was the use of open-source Rust crates to streamline development. The dioxus crate enabled cross-platform UI development, supporting both web and native clients with a single codebase. For embedded communication, Anatol employed postcard for efficient serialization, agnostic to the underlying transport layer—whether WebUSB, Web Serial, or MQTT. This flexibility allowed him to focus on application logic rather than platform-specific details, reducing friction in multi-platform projects.

Anatol also introduced a crate for auto-generating UIs based on type introspection, simplifying the creation of user interfaces for complex data structures. By sprinkling minimal hints onto the code, developers can generate dynamic UIs, a feature particularly useful for rapid prototyping. Despite challenges like long compile times and WebAssembly debugging difficulties, Anatol’s open-source contributions, soon to be published, invite community collaboration to enhance Rust’s full-stack capabilities.

Future Directions and Community Collaboration

Anatol’s vision extends beyond his current project, aiming to inspire broader adoption of Rust in full-stack development. He highlighted areas for improvement, such as WebAssembly debugging and the orphan rule, which complicates crate composition. Tools like Servo, a Rust-based browser engine, could enhance WebUSB support, further bridging embedded and web ecosystems. Anatol’s call for contributors underscores the community-driven nature of Rust, encouraging developers to collaborate on platforms like GitHub and Discord to address these challenges.

The talk also touched on advanced techniques, such as dynamic type handling, which Anatol found surprisingly manageable compared to macro-heavy alternatives. By sharing his experiences and open-source tools, Anatol fosters a collaborative environment where Rust’s ecosystem can evolve to support increasingly complex applications. His work exemplifies Rust’s potential to deliver cohesive, high-performance solutions across the entire technology stack.


PostHeaderIcon [DefCon32] DEF CON 32: Measuring the Tor Network

Silvia Puglisi and Roger Dingledine, key figures in the Tor Project, delivered an insightful presentation at DEF CON 32, shedding light on the Tor network’s metrics and community-driven efforts to maintain its health. As millions rely on Tor to evade surveillance and censorship, Silvia and Roger detailed how the Tor Project collects safe metrics, detects attacks, and fosters a vibrant relay operator community. Their talk provided a window into the challenges of sustaining an anonymity network and invited attendees to contribute to its mission of preserving internet freedom.

Collecting Safe Metrics for Anonymity

Silvia opened by explaining the Tor Project’s approach to gathering metrics without compromising user anonymity. By analyzing usage patterns and relay performance, the network health team identifies unusual activity, such as potential attacks or misconfigured relays. Silvia highlighted tools like Tor Weather, which notifies operators of relay issues, and the network status API, which supports data analysis. These efforts ensure the network remains robust while prioritizing user privacy, a delicate balance in an anonymity-focused ecosystem.

Detecting and Mitigating Network Threats

Roger delved into the strategies for identifying and countering attacks on the Tor network, which runs on over seven thousand volunteer-operated relays. He discussed how metrics help detect malicious relays and unusual traffic patterns, enabling rapid response to threats. Roger cited historical examples, such as the 2009 Green Movement in Iran, where Tor empowered activists, underscoring the network’s role in global activism. By sharing these insights, he emphasized the importance of community vigilance in maintaining network integrity.

Fostering a Diverse Relay Community

The duo highlighted the Tor Project’s efforts to grow its community of relay operators, encouraging attendees to run relays, bridges, or Snowflake proxies. Silvia detailed initiatives like the formal relay operator meetup planned for future conferences, aiming to strengthen community ties. Roger stressed that contributing to Tor supports activists worldwide, particularly those without institutional protections. Their call to action invited DEF CON attendees to join the network health team or contribute to projects like rewriting tools in Rust for better performance.

Future Challenges and Community Engagement

Concluding, Silvia and Roger outlined ongoing challenges, such as improving data visualization and scaling the network to handle increasing demand. They encouraged contributions to the Tor Project’s wiki and open-source tools, emphasizing that every relay or code contribution aids the fight for privacy and anonymity. Their interactive session at the Tor booth post-talk invited attendees to explore further, reinforcing the collaborative spirit that drives the Tor ecosystem forward.

Links: