[DevoxxFR2025] Dagger Modules: A Swiss Army Knife for Modern CI/CD Pipelines
Continuous Integration and Continuous Delivery (CI/CD) pipelines are the backbone of modern software development, automating the process of building, testing, and deploying applications. However, as these pipelines grow in complexity, they often become difficult to maintain, debug, and port across different execution platforms, frequently relying on verbose and platform-specific YAML configurations. Jean-Christophe Sirot, in his presentation, introduced Dagger as a revolutionary approach to CI/CD, allowing pipelines to be written as code, executable locally, testable, and portable. He explored Dagger Functions and Dagger Modules as key concepts for creating and sharing reusable, language-agnostic components for CI/CD workflows, positioning Dagger as a versatile “Swiss Army knife” for modernizing these critical pipelines.
The Pain Points of Traditional CI/CD
Jean-Christophe began by outlining the common frustrations associated with traditional CI/CD pipelines. Relying heavily on YAML or other declarative formats for defining pipelines can lead to complex, repetitive, and hard-to-read configurations, especially for intricate workflows. Debugging failures within these pipelines is often challenging, requiring pushing changes to a remote CI server and waiting for the pipeline to run. Furthermore, pipelines written for one CI platform (like GitHub Actions or GitLab CI) are often not easily transferable to another, creating vendor lock-in and hindering flexibility. This dependency on specific platforms and the difficulty in managing complex workflows manually are significant pain points for development and DevOps teams.
Dagger: CI/CD as Code
Dagger offers a fundamentally different approach by treating CI/CD pipelines as code. It allows developers to write their pipeline logic using familiar programming languages (like Go, Python, Java, or TypeScript) instead of platform-specific configuration languages. This brings the benefits of software development practices – such as code reusability, modularity, testing, and versioning – to CI/CD. Jean-Christophe explained that Dagger executes these pipelines using containers, ensuring consistency and portability across different environments. The Dagger engine runs the pipeline logic, orchestrates the necessary container operations, and manages dependencies. This allows developers to run and debug their CI/CD pipelines locally using the same code that will execute on the remote CI platform, significantly accelerating the debugging cycle.
Dagger Functions and Modules
Key to Dagger’s power are Dagger Functions and Dagger Modules. Jean-Christophe described Dagger Functions as the basic building blocks of a pipeline – functions written in a programming language that perform specific CI/CD tasks (e.g., building a Docker image, running tests, deploying an application). These functions interact with the Dagger engine to perform container operations. Dagger Modules are collections of related Dagger Functions that can be packaged and shared. Modules allow teams to create reusable components for common CI/CD patterns or specific technologies, effectively creating a library of CI/CD capabilities. For example, a team could create a “Java Build Module” containing functions for compiling Java code, running Maven or Gradle tasks, and building JAR or WAR files. These modules can be easily imported and used in different projects, promoting standardization and reducing duplication across an organization’s CI/CD workflows. Jean-Christophe demonstrated how to create and use Dagger Modules, illustrating their potential for building composable and maintainable pipelines. He highlighted that Dagger’s language independence means that modules can be written in one language (e.g., Python) and used in a pipeline defined in another (e.g., Java), fostering collaboration between teams with different language preferences.
The Benefits: Composable, Maintainable, Portable
By adopting Dagger, teams can create CI/CD pipelines that are:
– Composable: Pipelines can be built by combining smaller, reusable Dagger Modules and Functions.
– Maintainable: Pipelines written as code are easier to read, understand, and refactor using standard development tools and practices.
– Portable: Pipelines can run on any platform that supports Dagger and containers, eliminating vendor lock-in.
– Testable: Individual Dagger Functions and modules can be unit tested, and the entire pipeline can be run and debugged locally.
Jean-Christophe’s presentation positioned Dagger as a versatile tool that modernizes CI/CD by bringing the best practices of software development to pipeline automation. The ability to write pipelines in code, leverage reusable modules, and execute locally makes Dagger a powerful “Swiss Army knife” for developers and DevOps engineers seeking more efficient, reliable, and maintainable CI/CD workflows.
Links:
- Jean-Christophe Sirot: https://www.linkedin.com/in/jcsirot/
- Decathlon: https://www.decathlon.com/
- Dagger: https://dagger.io/
- Devoxx France LinkedIn: https://www.linkedin.com/company/devoxx-france/
- Devoxx France Bluesky: https://bsky.app/profile/devoxx.fr
- Devoxx France Website: https://www.devoxx.fr/
[AWSReInforce2025] How AWS’s global threat intelligence transforms cloud protection (SEC302)
Lecturer
The presentation features AWS security leadership and engineering experts who architect the global threat intelligence platform. Their collective expertise spans distributed systems, machine learning, and real-time security operations across AWS’s planetary-scale infrastructure.
Abstract
The session examines AWS’s threat intelligence lifecycle—from sensor deployment through data processing to automated disruption—demonstrating how global telemetry volume enables precision defense at scale. It reveals the architectural patterns and machine learning models that convert billions of daily security events into actionable mitigations, establishing security as a reliability function within the shared responsibility model.
Global Sensor Network and Telemetry Foundation
AWS operates the world’s largest sensor network for security telemetry, spanning every Availability Zone, edge location, and service endpoint. This includes hypervisor introspection, network flow logs, DNS query monitoring, and host-level signals from EC2 instances. The scale is staggering: thousands of potential security events are blocked daily before customer impact, derived from petabytes of raw telemetry.
Sensors are purpose-built for specific threat classes. Network sensors detect C2 beaconing patterns; host sensors identify cryptominer process trees; DNS sensors flag domain generation algorithms. This layered approach ensures coverage across the attack lifecycle—from reconnaissance through exploitation to persistence.
Data Processing Pipeline and Intelligence Generation
Raw telemetry flows through a multi-stage pipeline. First, deterministic rules filter known bad indicators—IP addresses from botnet controllers, certificate hashes of phishing kits. Surviving events enter machine learning models trained on historical compromise patterns.
The models operate in two modes: supervised classification for known attack families, and unsupervised anomaly detection for zero-day behaviors. Feature engineering extracts behavioral fingerprints—process lineage entropy, network flow burstiness, file system access velocity. Models refresh hourly using federated learning across regions, preventing single-point compromise.
Intelligence quality gates require precision above 99.9% to minimize false positives. When confidence thresholds are met, signals become actionable intelligence with metadata: actor attribution, campaign identifiers, TTP mappings to MITRE ATT&CK.
Automated Disruption and Attacker Cost Imposition
Intelligence drives automated responses through three mechanisms. First, infrastructure-level blocks: malicious IPs are null-routed at the network edge within seconds. Second, service-level mitigations: compromised credentials trigger forced password rotation and session termination. Third, customer notifications via GuardDuty findings with remediation playbooks.
The disruption philosophy focuses on increasing attacker cost. By blocking C2 infrastructure early, campaigns lose command visibility. By rotating compromised keys rapidly, lateral movement becomes expensive. By publishing indicators publicly, defenders globally benefit from AWS’s visibility.
Shared Outcomes in the Responsibility Model
The shared responsibility model extends to outcomes, not just controls. AWS secures the cloud—hypervisors, network fabric, physical facilities—while customers secure their workloads. Threat intelligence bridges this divide: AWS’s global view detects campaigns targeting multiple customers, enabling proactive protection before individual compromise.
This manifests in services like Shield Advanced, which absorbs DDoS attacks at the network perimeter, and Macie, which identifies exposed PII across S3 buckets. Customers focus on application logic—input validation, business rule enforcement—while AWS handles undifferentiated heavy lifting.
Machine Learning at Security Scale
Scaling threat intelligence requires automation beyond human capacity. Data scientists build models that generalize across attack variations while maintaining low false positive rates. Techniques include:
- Graph neural networks to detect credential abuse chains
- Time-series analysis for cryptominer thermal signatures
- Natural language processing on phishing email corpora
Model interpretability ensures security analysts can validate decisions. Feature importance rankings and counterfactual examples explain why a particular IP was blocked, maintaining operational trust.
Operational Integration and Customer Impact
Intelligence integrates into customer-facing services seamlessly. GuardDuty consumes the same models used internally, surfacing findings with evidence packages. Security Hub centralizes signals from AWS and partner solutions. WAF rulesets update automatically with emerging threat patterns.
The impact compounds: a campaign targeting one customer is disrupted globally. A novel malware strain detected in one region triggers protections everywhere. This network effect makes the internet safer collectively.
Conclusion: Security as Reliability Engineering
Threat intelligence at AWS scale transforms security from reactive defense to proactive reliability engineering. By investing in sensors, processing, and automation, AWS prevents disruptions before they affect customer operations. The shared outcomes model—where infrastructure protection enables application innovation—creates a virtuous cycle: more secure workloads generate better telemetry, improving intelligence quality, which prevents more disruptions.
[DefCon32] DEF CON 32: Laundering Money
Michael Orlitzky, a multifaceted security researcher and mathematician, captivated the DEF CON 32 audience with a provocative presentation on bypassing payment mechanisms in CSC ServiceWorks’ pay-to-play laundry machines. By exploiting physical vulnerabilities in Speed Queen washers and dryers, Michael demonstrated how to run these machines without payment, framing his actions as a response to CSC’s exploitative practices. His talk, rich with technical detail and humor, shed light on the intersection of physical security and consumer frustration, urging attendees to question predatory business models.
Uncovering CSC’s Predatory Practices
Michael began by introducing CSC ServiceWorks, a major provider of coin- and app-operated laundry machines in residential buildings. He detailed their business model, which charges tenants for laundry despite rent covering utilities, often trapping users with non-refundable prepaid cards or unreliable apps like CSC GO. Michael recounted personal grievances, such as machines eating quarters or failing to deliver services, supported by widespread customer complaints citing CSC’s poor maintenance and refund processes. His narrative positioned CSC as a corporate antagonist, justifying his exploration of hardware bypasses as a form of reclaiming fairness.
Bypassing Coin Slots with Hardware Hacks
Delving into the technical core, Michael explained how to access the service panels of CSC-branded Speed Queen machines, which use standardized keys available online. By short-circuiting red and black wires in the coin-drop mechanism, he tricked the machine into registering payment, enabling free cycles without damage. His live demonstration, complete with safety warnings about grounding and electrical risks, showcased the simplicity of the bypass—achievable in seconds with minimal tools. Michael’s approach, detailed on his personal website, emphasized accessibility, requiring only determination and basic equipment.
Addressing CSC’s Security Upgrades
Michael also addressed CSC’s response to his findings, noting that days before DEF CON 32, the company upgraded his building’s machines with new tubular locks and security Torx screws. Undeterred, he demonstrated how to bypass these using a tubular lockpick or a flathead screwdriver, highlighting CSC’s superficial fixes. His candid tone and humorous defiance—acknowledging the machines’ internet-connected logs—underscored the low risk of repercussions, as CSC’s focus on profit over maintenance left such vulnerabilities unaddressed. This segment reinforced the talk’s theme of exploiting systemic flaws in poorly secured systems.
Ethical Implications and Community Call
Concluding, Michael framed his work as a protest against CSC’s exploitative practices, encouraging attendees to consider the ethics of bypassing systems that exploit consumers. He shared resources, including manuals and his write-up, to empower others while cautioning about legal risks. His talk sparked reflection on the balance between technical ingenuity and corporate accountability, urging the DEF CON community to challenge predatory systems through informed action.
[DotAI2024] DotAI 2024: Maxim Zaks – Mojo: Beyond Buzz, Toward a Systems Symphony
Maxim Zaks — a programmer whose work ranges from IDEs to data pipelines, and a longtime contributor to FlatBuffers — examined Mojo's substance at DotAI 2024. A community contributor to Mojo with no ties to Modular, he argued that the language is not passing buzz but a durable foundation for AI practitioners and systems programmers alike.
Echoes of Eras: From Procedural Progenitors to Pythonic Prodigies
Zaks opened with a pop-culture time capsule: the 1980s world of Married… with Children was also the era of C's dominance, Smalltalk's elegance, and BASIC's approachability, alongside plenty of enterprise esoterica. Fast-forward through Java's rise to Python's present ubiquity, and one complaint keeps recurring: Python's performance falls short where speed and precision matter.
Mojo presents itself as the merger: Python's syntax joined to systems-language muscle — a superset rather than a schism, fusing scripting convenience with C-like speed. Launched in 2023 and famously hailed by Jeremy Howard as a once-in-decades advance, it had already reached 48th in the TIOBE index at the time of the talk and can be tried directly in the browser.
Empowering Engineers: From Syntax to SIMD
Zaks then addressed the performance-minded: Mojo exposes SIMD intrinsics and deliberate control over data layout, so substantial speedups no longer demand syntactic contortions. Multithreading and inline ML kernels are supported on CPUs today, with GPUs on the horizon.
For newcomers, he walked through a prefix-sum example: plain Python plods, a straightforward Mojo port is already brisk, and a SIMD version runs roughly eight times faster still — low-level power made accessible without dropping into a second language such as Zig or Rust.
He also credited the project's community and tooling — an inclusive, welcoming culture, VS Code support, and a REPL — and the playful spirit that makes the language inviting to idiomatic experimentation.
In closing, Zaks pointed the audience to the browser playground at mojo.modular.com: forge futures, unfettered.
Enabling and Using the WordPress REST API on OVH Hosting
I recently started migrating my WordPress site from Free.fr to OVHcloud hosting. The migration is still in progress, but along the way I needed to enable and validate
programmatic publishing through the WordPress REST API (RPC/API calls). This post documents the full process end-to-end, including OVH-specific gotchas and troubleshooting.
My last migration was many years ago, from DotClear 2 to WordPress…
Why move from Free.fr to OVH?
- Performance: More CPU/RAM and faster PHP execution make WordPress snappier.
- Modern PHP: Current PHP versions and extensions are available and easy to select.
- HTTPS (SSL): Essential for secure logins and required for Application Passwords.
- Better control: You can tweak .htaccess, install custom/mu-plugins, and adjust config.
- Scalability: Easier to upgrade plans and resources as your site grows.
What is the WordPress REST API?
WordPress ships with a built-in REST API at /wp-json/. It lets you read and write content, upload media, and automate publishing from scripts or external systems (curl, Python, Node.js, CI, etc.).
Step 1 — Confirm the API is reachable
- Open https://yourdomain.com/wp-json/ in a browser. You should see a JSON index of routes.
- Optional: check https://yourdomain.com/wp-json/wp/v2 or https://yourdomain.com/wp-json/wp/v2/types/post to view available endpoints and fields.
Step 2 — Enable authentication with Application Passwords
- Sign in to /wp-admin/ with a user that can create/publish posts.
- Go to Users → Profile (your profile page).
- In Application Passwords, add a new password (e.g., “API access from laptop”). It should look like ABCD EFgh IjKl M123 n951 (including spaces).
- Copy the generated password (you’ll only see it once). Keep it secure.
You will authenticate via HTTP Basic Auth using username:application-password over HTTPS.
Step 3 — Test authentication (curl)
Replace the placeholders before running:
curl -i -u 'USERNAME:APP_PASSWORD' \
https://yourdomain.com/wp-json/wp/v2/users/me
Expected result: 200 OK with your user JSON. If you get 401 or 403, see Troubleshooting below.
Important on OVH — The Authorization header may be stripped
On some OVH hosting configurations, the HTTP Authorization header isn’t passed to PHP.
If that happens, WordPress cannot see your Application Password and responds with:
{"code":"rest_not_logged_in","message":"You are not currently logged in.","data":{"status":401}}
To confirm you’re sending the header, try explicitly setting it:
curl -i -H "Authorization: Basic $(echo -n 'USERNAME:APP_PASSWORD' | base64)" \
https://yourdomain.com/wp-json/wp/v2/users/me
If you still get 401, fix the server so PHP receives the header.
Step 4 — Fixing Authorization headers on OVH
Option A — Add rules to .htaccess
Connect via FTP (or the OVH file manager), open the “www” folder, and edit the .htaccess file. Add these lines above the “BEGIN WordPress” block:
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
</IfModule>
<IfModule mod_setenvif.c>
SetEnvIf Authorization "(.*)" HTTP_AUTHORIZATION=$1
</IfModule>
Option B — Tiny must-use plugin
Create wp-content/mu-plugins/ if missing, then add fix-authorization.php:
<?php
/**
* Plugin Name: Fix Authorization Header
* Description: Ensures HTTP Authorization header is passed to WordPress for Application Passwords.
*/
add_action('init', function () {
    // Some server setups (CGI/FastCGI) don't expose the Authorization
    // header to PHP directly; recover it from where Apache may stash it.
    if (!isset($_SERVER['HTTP_AUTHORIZATION'])) {
        if (isset($_SERVER['REDIRECT_HTTP_AUTHORIZATION'])) {
            $_SERVER['HTTP_AUTHORIZATION'] = $_SERVER['REDIRECT_HTTP_AUTHORIZATION'];
        } elseif (function_exists('apache_request_headers')) {
            $headers = apache_request_headers();
            if (isset($headers['Authorization'])) {
                $_SERVER['HTTP_AUTHORIZATION'] = $headers['Authorization'];
            }
        }
    }
});
Upload and reload: authentication should now succeed.
Step 5 — Create and publish a complete post via API
Optional: create a category and a tag
# Create a category
curl -i -X POST \
-u 'USERNAME:APP_PASSWORD' \
-H "Content-Type: application/json" \
-d '{ "name": "Tech" }' \
https://yourdomain.com/wp-json/wp/v2/categories
# Create a tag
curl -i -X POST \
-u 'USERNAME:APP_PASSWORD' \
-H "Content-Type: application/json" \
-d '{ "name": "API" }' \
https://yourdomain.com/wp-json/wp/v2/tags
Upload a featured image
curl -i -X POST \
-u 'USERNAME:APP_PASSWORD' \
-H "Content-Disposition: attachment; filename=header.jpg" \
-H "Content-Type: image/jpeg" \
--data-binary @/full/path/to/header.jpg \
https://yourdomain.com/wp-json/wp/v2/media
Note the returned MEDIA_ID.
Create and publish the post
curl -i -X POST \
-u 'USERNAME:APP_PASSWORD' \
-H "Content-Type: application/json" \
-d '{
"title": "Hello from the API",
"content": "<p>Created automatically 🚀</p>",
"status": "publish",
"categories": [CAT_ID],
"tags": [TAG_ID],
"featured_media": MEDIA_ID
}' \
https://yourdomain.com/wp-json/wp/v2/posts
Optionally update excerpt or slug
POST_ID=REPLACE_WITH_ID
curl -i -X POST \
-u 'USERNAME:APP_PASSWORD' \
-H "Content-Type: application/json" \
-d '{ "excerpt": "Short summary", "slug": "hello-from-the-api" }' \
https://yourdomain.com/wp-json/wp/v2/posts/$POST_ID
Troubleshooting
- 401 Unauthorized / rest_not_logged_in: the Authorization header isn’t reaching PHP. Add the .htaccess rules or the mu-plugin above, then re-test with -H "Authorization: Basic …".
- 403 Forbidden: the user lacks capabilities (e.g., Authors can’t publish globally). Use "status":"draft" or run publishing as an Editor/Admin.
- Media upload fails: check upload_max_filesize, post_max_size, and file permissions. Try a smaller file to isolate the issue.
- Categories/Tags not applied: use numeric IDs, not names. Fetch them with /wp-json/wp/v2/categories and /wp-json/wp/v2/tags.
- Permalinks: prefer non-Plain permalinks. If using Plain, call endpoints with the fallback https://yourdomain.com/?rest_route=/wp/v2/posts.
Conclusion
Moving from Free.fr to OVH brings better performance, modern PHP, and full HTTPS, which is perfect for automation and scheduling.
After ensuring the Authorization header reaches WordPress (via .htaccess or a tiny mu-plugin), the REST API works smoothly for creating posts, uploading media, and managing taxonomy.
My migration is still ongoing, but having a reliable API in place is already a big win.
[OxidizeConf2024] The Fullest Stack by Anatol Ulrich
Embracing Rust’s Cross-Platform Versatility
Rust’s ability to operate seamlessly across diverse platforms—from embedded devices to web applications—positions it as a uniquely versatile language for full-stack development. At OxidizeConf2024, Anatol Ulrich, a freelance developer with extensive experience in web, mobile, and embedded systems, presented a compelling vision of the “fullest stack” built entirely in Rust. Anatol’s talk demonstrated how Rust’s “write once, compile anywhere” philosophy enables low-friction, vertically integrated projects, spanning embedded devices, cloud services, and web or native clients.
Anatol showcased a project integrating a fleet of embedded devices with a cloud backend and a web-based UI, all written in Rust. Using the postcard crate for serialization and remote procedure calls (RPC), he achieved seamless communication between an STM32 microcontroller and a browser via Web USB. This setup allowed real-time data exchange, demonstrating Rust’s ability to unify disparate platforms. Anatol’s approach leverages Rust’s type safety and zero-cost abstractions, ensuring robust performance across the stack while minimizing development complexity.
Streamlining Development with Open-Source Tools
A key aspect of Anatol’s presentation was the use of open-source Rust crates to streamline development. The dioxus crate enabled cross-platform UI development, supporting both web and native clients with a single codebase. For embedded communication, Anatol employed postcard for efficient serialization, agnostic to the underlying transport layer—whether Web USB, Web Serial, or MQTT. This flexibility allowed him to focus on application logic rather than platform-specific details, reducing friction in multi-platform projects.
Anatol also introduced a crate for auto-generating UIs based on type introspection, simplifying the creation of user interfaces for complex data structures. By sprinkling minimal hints onto the code, developers can generate dynamic UIs, a feature particularly useful for rapid prototyping. Despite challenges like long compile times and WebAssembly debugging difficulties, Anatol’s open-source contributions, soon to be published, invite community collaboration to enhance Rust’s full-stack capabilities.
Future Directions and Community Collaboration
Anatol’s vision extends beyond his current project, aiming to inspire broader adoption of Rust in full-stack development. He highlighted areas for improvement, such as WebAssembly debugging and the orphan rule, which complicates crate composition. Tools like Servo, a Rust-based browser engine, could enhance Web USB support, further bridging embedded and web ecosystems. Anatol’s call for contributors underscores the community-driven nature of Rust, encouraging developers to collaborate on platforms like GitHub and Discord to address these challenges.
The talk also touched on advanced techniques, such as dynamic type handling, which Anatol found surprisingly manageable compared to macro-heavy alternatives. By sharing his experiences and open-source tools, Anatol fosters a collaborative environment where Rust’s ecosystem can evolve to support increasingly complex applications. His work exemplifies Rust’s potential to deliver cohesive, high-performance solutions across the entire technology stack.
[DefCon32] DEF CON 32: Measuring the Tor Network
Silvia Puglisi and Roger Dingledine, key figures in the Tor Project, delivered an insightful presentation at DEF CON 32, shedding light on the Tor network’s metrics and community-driven efforts to maintain its health. As millions rely on Tor to evade surveillance and censorship, Silvia and Roger detailed how the Tor Project collects safe metrics, detects attacks, and fosters a vibrant relay operator community. Their talk provided a window into the challenges of sustaining an anonymity network and invited attendees to contribute to its mission of preserving internet freedom.
Collecting Safe Metrics for Anonymity
Silvia opened by explaining the Tor Project’s approach to gathering metrics without compromising user anonymity. By analyzing usage patterns and relay performance, the network health team identifies unusual activity, such as potential attacks or misconfigured relays. Silvia highlighted tools like Tor Weather, which notifies operators of relay issues, and the network status API, which supports data analysis. These efforts ensure the network remains robust while prioritizing user privacy, a delicate balance in an anonymity-focused ecosystem.
Detecting and Mitigating Network Threats
Roger delved into the strategies for identifying and countering attacks on the Tor network, which supports over seven thousand volunteer-operated relays. He discussed how metrics help detect malicious relays and unusual traffic patterns, enabling rapid response to threats. Roger cited historical examples, such as the 2009 Green Party Movement in Iran, where Tor empowered activists, underscoring the network’s role in global activism. By sharing these insights, he emphasized the importance of community vigilance in maintaining network integrity.
Fostering a Diverse Relay Community
The duo highlighted the Tor Project’s efforts to grow its community of relay operators, encouraging attendees to run relays, bridges, or Snowflake proxies. Silvia detailed initiatives like the formal relay operator meetup planned for future conferences, aiming to strengthen community ties. Roger stressed that contributing to Tor supports activists worldwide, particularly those without institutional protections. Their call to action invited DEF CON attendees to join the network health team or contribute to projects like rewriting tools in Rust for better performance.
Future Challenges and Community Engagement
Concluding, Silvia and Roger outlined ongoing challenges, such as improving data visualization and scaling the network to handle increasing demand. They encouraged contributions to the Tor Project’s wiki and open-source tools, emphasizing that every relay or code contribution aids the fight for privacy and anonymity. Their interactive session at the Tor booth post-talk invited attendees to explore further, reinforcing the collaborative spirit that drives the Tor ecosystem forward.
[DevoxxFR2025] Go Without Frills: When the Standard Library Suffices
Go, the programming language designed by Google, has gained significant popularity for its simplicity, efficiency, and strong support for concurrent programming. A core philosophy of Go is its minimalist design and emphasis on a robust standard library, encouraging developers to “do a lot with a little.” Nathan Castelein, in his presentation, championed this philosophy, demonstrating how a significant portion of modern applications can be built effectively using only Go’s standard library, without resorting to numerous third-party dependencies. He explored various native packages and compared their functionalities to well-known third-party alternatives, showcasing why and how returning to the fundamentals can lead to simpler, more maintainable, and often equally performant Go applications.
The Go Standard Library: A Powerful Foundation
Nathan highlighted the richness and capability of Go’s standard library. Unlike some languages where the standard library is minimal, Go provides a comprehensive set of packages covering a wide range of functionalities, from networking and HTTP to encoding/decoding, cryptography, and testing. He emphasized that these standard packages are well-designed, thoroughly tested, and actively maintained, making them a reliable choice for building production-ready applications. Focusing on the standard library reduces the number of external dependencies, which simplifies project management, minimizes potential security vulnerabilities introduced by third-party code, and avoids the complexities of managing version conflicts. It also encourages developers to gain a deeper understanding of the language’s built-in capabilities.
Comparing Standard Packages to Third-Party Libraries
The core of Nathan’s talk involved comparing functionalities provided by standard Go packages with those offered by popular third-party libraries. He showcased examples in areas such as:
– Web Development: Demonstrating how to build web servers and handle HTTP requests using the net/http package, contrasting it with frameworks like Gin, Echo, or Fiber, and showing that for many common web tasks the standard library provides sufficient features.
– Logging: Illustrating the capabilities of the log/slog package (introduced in Go 1.21) for structured logging, comparing it to libraries like Logrus or Zerolog, and highlighting how log/slog provides modern logging features natively.
– Testing: Exploring the testing package for writing unit and integration tests, showing how it can be used effectively without assertion libraries like Testify for many common assertion scenarios.
The comparison aimed to show that while third-party libraries often provide convenience or specialized features, the standard library has evolved to incorporate many commonly needed functionalities, often in a simpler and more idiomatic Go way.
The Benefits of a Minimalist Approach
Nathan articulated the benefits of embracing a “Go without frills” approach. Using the standard library more extensively leads to:
– Reduced Complexity: Fewer dependencies mean a simpler project structure and fewer moving parts to understand and manage.
– Improved Maintainability: Code relying on standard libraries is often easier to maintain over time, as the dependencies are stable and well-documented.
– Enhanced Performance: Standard library implementations are often highly optimized and integrated with the Go runtime.
– Faster Compilation: Fewer dependencies can lead to quicker build times.
– Smaller Binaries: Avoiding large third-party libraries can result in smaller executable files.
He acknowledged that there are still valid use cases for third-party libraries, especially for highly specialized tasks or when a library provides significant productivity gains. However, the key takeaway was to evaluate the necessity of adding a dependency and to leverage the powerful standard library whenever it suffices. The talk encouraged developers to revisit the fundamentals and appreciate the elegance and capability of Go’s built-in tools for building robust and efficient applications.
Links:
- Nathan Castelein: https://www.linkedin.com/in/nathan-castelein/
- Shodo Lille: https://shodo.io/
- Devoxx France LinkedIn: https://www.linkedin.com/company/devoxx-france/
- Devoxx France Bluesky: https://bsky.app/profile/devoxx.fr
- Devoxx France Website: https://www.devoxx.fr/
[DefCon32] DEF CON 32: Exploiting Cloud Provider Vulnerabilities for Initial Access
Nick Frichette, a cloud security expert, enthralled the DEF CON 32 audience with a deep dive into vulnerabilities within Amazon Web Services (AWS) that enable initial access to cloud environments. Moving beyond traditional misconfiguration exploits, Nick explored flaws in AWS services like AppSync and Amplify, demonstrating how attackers can hijack Identity and Access Management (IAM) roles. His presentation offered practical defensive strategies, empowering organizations to secure their cloud infrastructure against sophisticated attacks.
Understanding IAM Role Exploits
Nick began by explaining how IAM roles establish trust within AWS, relying on mechanisms like sts:AssumeRoleWithWebIdentity to prevent unauthorized access across accounts. He detailed a confused deputy vulnerability in AWS AppSync that allowed attackers to assume roles in other accounts, bypassing trust boundaries. Through a real-world case study, Nick illustrated how this flaw enabled unauthorized access, emphasizing the importance of understanding trust relationships in cloud environments to prevent such breaches.
Amplify Vulnerabilities and Zero-Day Risks
Delving deeper, Nick revealed a critical vulnerability in AWS Amplify that exposed customer IAM roles to takeover, granting attackers a foothold in victim accounts. His demonstration highlighted how adversaries could exploit this flaw without authentication, underscoring the severity of zero-day vulnerabilities in cloud services. Nick’s meticulous analysis of Amplify’s architecture provided insights into how such flaws arise, urging security practitioners to scrutinize service configurations for hidden risks.
Defensive Strategies for Cloud Security
Nick concluded with actionable recommendations, advocating for the use of condition keys in IAM trust policies to block cross-tenant attacks. He demonstrated how setting account-specific conditions thwarted his AppSync exploit, offering a defense-in-depth approach. Nick encouraged organizations to audit IAM roles, particularly those using web identity federation, and to test configurations rigorously before deployment. His work, available at Security Labs, equips defenders with tools to fortify AWS environments.
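The mitigation Nick described can be sketched as a trust policy for a role assumed by a service principal: a condition key pins the role to a single calling account, blocking the cross-tenant confused-deputy path. This is an illustrative fragment, not the exact policy from the talk; the service principal and account ID are placeholders.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "appsync.amazonaws.com" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "aws:SourceAccount": "111122223333" }
      }
    }
  ]
}
```

Without the Condition block, any tenant able to drive the service into assuming roles could target this role ARN; with it, requests originating from other accounts fail the trust evaluation.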
[AWSReInventPartnerSessions2024] Revolutionizing Enterprise Resource Planning through AI-Infused Cloud-Native SaaS Architectures: The SAP and AWS Convergence
Lecturer
Lauren Houon directs the Grow with SAP product marketing team at SAP, formulating strategies for cloud ERP market penetration. Elena Toader leads go-to-market operations for Grow with SAP, coordinating deployment acceleration and partner ecosystem development.
Abstract
This analytical discourse unveils the strategic integration of Grow with SAP within the AWS Marketplace, presenting a transformative procurement model for cloud enterprise resource planning. It systematically addresses prevailing organizational impediments—agility deficits, process fragmentation, transparency shortages, security vulnerabilities, and legacy system constraints—through a tripartite framework emphasizing operational simplification, business expansion, and success assurance. Customer case studies illustrate rapid value realization, cost optimization, and resistance mitigation, while technical specifications underscore reliability and extensibility.
Tripartite Strategic Framework for Cloud ERP Transformation
Contemporary enterprises grapple with multifaceted operational challenges that undermine competitiveness. Organizational inflexibility impedes adaptation to structural shifts or geographic expansion; disconnected systems spawn inefficiencies; opaque data flows obstruct automation; digital threats escalate; outdated platforms restrict scalability.
Grow with SAP on AWS counters these through marketplace-enabled acquisition—a pioneering development reflecting deepened SAP-AWS collaboration. The offering crystallizes around three interdependent pillars.
Operational Simplification deploys agile business templates, automates workflows via fifty years of embedded industry best practices, integrates artificial intelligence for enhanced transparency and strategic prioritization, and delivers continuous security/compliance updates across ninety-plus certifications.
Business Expansion accommodates multinational operations through fifty-nine out-of-the-box localizations, thirty-three languages, and localization-as-a-service for additional jurisdictions. The platform further supports mergers, divestitures, and subsidiary management within unified governance structures.
Success Assurance manifests through deployment methodologies yielding go-live timelines of eight to twelve weeks, extensible Business Technology Platform for intellectual property encapsulation, and SaaS characteristics including 99.9% availability, elastic scaling across three-tier landscapes, and biannual feature releases.
Empirical Validation via Diverse Customer Implementations
Practical efficacy emerges through heterogeneous customer narratives spanning multiple sectors.
MOD Pizza initiated its SAP journey with human resources modernization, subsequently recognizing inextricable finance-HR interdependencies. Integration enabled predictive impact assessment across four hundred monthly transactions, fostering cross-functional collaboration and process streamlining.
Aair, a major industrial raw materials distributor, replaced decade-old on-premises infrastructure plagued by talent retention difficulties and paper-based warehouse operations. Grow with SAP digitized twelve facilities, eliminating manual invoicing while revitalizing information technology career prospects.
Western Sugar Cooperative confronted thirty-year legacy ERP entrenchment compounded by employee change resistance. Methodological guidance and embedded best practices facilitated disruption-minimized transition, achieving five percent information technology cost reduction and twenty percent efficiency improvement.
# Conceptual BTP extension configuration
apiVersion: sap.btp/v1
kind: ExtensionModule
metadata:
  name: custom-localization
spec:
  targetCountries: ["additional-jurisdictions"]
  languageSupport: ["extended-set"]
  deploymentTimeline: "8-weeks"
Industry breadth—encompassing quick-service dining, industrial distribution, agricultural processing—validates the platform’s versatile end-to-end process coverage. Partner ecosystem contributions from Accenture, Deloitte, Cognitus, Navigator, and Syntax amplify implementation expertise.
Strategic Implications and Enterprise Transformation Pathways
The marketplace procurement model democratizes access to sophisticated ERP capabilities, compressing adoption cycles while preserving customization flexibility. Tripartite pillar alignment ensures that simplification catalyzes expansion, which success assurance sustains.
Organizational consequences include liberated strategic focus through automation, regulatory compliance through perpetual updates, and scalable growth infrastructure. The paradigm shifts enterprise resource planning from administrative overhead to competitive differentiator, with artificial intelligence integration promising continual value augmentation.