[Devoxx France 2022] Securing Applications with HTTP Headers: A Survey of Attacks and Defenses

At Devoxx France 2022, Mathieu Humbert, a tech lead at Accenture with over 15 years of development experience, navigates the complex landscape of HTTP security headers. Mathieu demystifies headers like CSP, HSTS, XFO, and CORS, explaining their role in protecting web applications from threats like XSS, CSRF, and SSRF. Through a clear and engaging presentation, he outlines common attacks, their risks, and how specific headers can mitigate them, concluding with practical tools and resources for implementation.

Understanding HTTP Security Headers

Mathieu begins by introducing HTTP security headers as critical tools for safeguarding web applications. He explains headers like Content Security Policy (CSP), which restricts the sources from which content can be loaded, and HTTP Strict Transport Security (HSTS), which enforces HTTPS connections. These headers, though complex, are essential for mitigating risks in an ever-evolving threat landscape. Mathieu’s experience at Accenture informs his approach, emphasizing that understanding the purpose of each header is key to effective implementation.

By mapping headers to specific threats, Mathieu provides clarity on their practical applications. For instance, Cross-Site Scripting (XSS) attacks, where malicious scripts are injected into web pages, can be mitigated with CSP, while Cross-Site Request Forgery (CSRF) risks are reduced through proper header configurations. His accessible explanations make the technical subject approachable, ensuring developers grasp the importance of these defenses.
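
To make this concrete, here is a minimal sketch (not from the talk) of setting such headers in a Java servlet filter; it assumes a jakarta.servlet stack, and the header values are deliberately conservative examples:

```java
import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;
import jakarta.servlet.http.HttpServletResponse;
import java.io.IOException;

// Adds a baseline set of security headers to every response.
public class SecurityHeadersFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        // Force HTTPS for one year, including subdomains (HSTS).
        response.setHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
        // Only load resources from our own origin (CSP).
        response.setHeader("Content-Security-Policy", "default-src 'self'");
        // Refuse to be embedded in frames (clickjacking defense, XFO).
        response.setHeader("X-Frame-Options", "DENY");
        chain.doFilter(req, res);
    }
}
```

In practice, CSP policies grow directive by directive; starting from default-src 'self' and loosening deliberately is a common approach.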

Mitigating Common Web Attacks

Delving into specific attacks, Mathieu outlines how headers counter vulnerabilities. He discusses XSS, where attackers exploit input fields to inject harmful code, and CSRF, where unauthorized actions are triggered on behalf of users. Headers like X-Frame-Options (XFO) prevent clickjacking by restricting how pages are framed, while CORS configurations ensure safe cross-origin requests. Mathieu also addresses Server-Side Request Forgery (SSRF), highlighting headers that limit unauthorized server requests.

Through real-world examples, Mathieu illustrates the consequences of neglecting these headers, such as data breaches or session hijacking. He stresses that proactive header implementation can significantly reduce these risks, providing a robust first line of defense for web applications. His insights, drawn from years of tackling technical challenges, underscore the necessity of staying vigilant in a dynamic threat environment.

Practical Implementation and Tools

Mathieu offers actionable guidance for integrating security headers into development workflows. He recommends starting with the OWASP Secure Headers Project, which provides comprehensive documentation for configuring headers effectively. For testing, he suggests platforms like WebGoat, designed to simulate vulnerabilities, allowing developers to practice identifying and fixing issues. Mathieu also highlights the importance of automated scanners, such as Burp Suite, to detect missing or misconfigured headers.

His experience with distributed architectures and agile teams at Accenture informs his practical approach. Mathieu advises incremental implementation, starting with critical headers like HSTS and CSP, and regularly reviewing configurations to adapt to new threats. This methodical strategy ensures that security remains a priority without overwhelming development teams.

Hashtags: #WebSecurity #HTTPHeaders #Cybersecurity #DevoxxFR2022 #MathieuHumbert #Accenture #OWASP

[DevoxxFR 2022] Log4Shell: Is It the Apache Foundation’s Fault?

At Devoxx France 2022, Emmanuel Lécharny, Jean-Baptiste Onofré, and Hervé Boutemy, all active contributors to the Apache Software Foundation, tackle the infamous Log4Shell vulnerability that shook the tech world in December 2021. Their collaborative presentation dissects the origins, causes, and responses to the Log4J security flaw, addressing whether the Apache Foundation bears responsibility. By examining the incident’s impact, the trio provides a transparent analysis of open-source security practices, offering insights into preventing future vulnerabilities and fostering community involvement. Their expertise and candid reflections make this a vital discussion for developers and organizations alike.

Unpacking the Log4Shell Incident

Emmanuel, Jean-Baptiste, and Hervé begin by tracing the history of Log4J and the emergence of Log4Shell, a critical vulnerability that allowed remote code execution, impacting countless systems worldwide. They outline the technical root causes, including flaws in Log4J’s message lookup functionality, which enabled attackers to exploit untrusted inputs. The presenters emphasize the rapid response from the Apache community, which released patches and mitigations under intense pressure, highlighting the challenges of maintaining widely-used open-source libraries.

The session provides a sobering look at the incident’s widespread effects, from internal projects to global enterprises. By sharing a detailed post-mortem, the trio illustrates how Log4Shell exposed vulnerabilities in dependency management, urging organizations to prioritize robust software supply chain practices.

Apache’s Security Practices and Challenges

The presenters delve into the Apache Foundation’s approach to managing Common Vulnerabilities and Exposures (CVEs). They explain that the foundation relies on a small, dedicated group of volunteer committers—often fewer than 15 per project—making comprehensive code reviews challenging. Emmanuel, Jean-Baptiste, and Hervé acknowledge that limited resources and the sheer volume of contributions can create gaps, as seen in Log4Shell. However, they defend the open-source model, noting its transparency and community-driven ethos as strengths that enable rapid response to issues.

They highlight systemic challenges, such as the difficulty of auditing complex codebases and the reliance on volunteer efforts. The trio calls for greater community participation, emphasizing that open-source projects like Apache thrive on collective contributions, which can enhance security and resilience.

Solutions and Future Prevention

To prevent future vulnerabilities, Emmanuel, Jean-Baptiste, and Hervé propose several strategies. They advocate for enhanced code review processes, including automated tools and mandatory audits, to catch issues early. They also discuss the potential for increased funding to support open-source maintenance, noting that financial backing could enable more robust security practices. However, they stress that money alone is insufficient; better organizational structures and community engagement are equally critical.

The presenters highlight emerging regulations, such as those in the U.S. and Europe, that hold software vendors accountable for their dependencies. These laws underscore the need for organizations to actively manage their open-source components, fostering a collaborative relationship between developers and users to ensure security.

Engaging the Community

In their closing remarks, the trio urges developers to become active contributors to open-source projects like Apache. They emphasize that even small contributions, such as reporting issues or participating in code reviews, can significantly enhance project security. Jean-Baptiste, Emmanuel, and Hervé invite attendees to engage with the Apache community, noting that projects like Log4J rely on collective effort to thrive. Their call to action underscores the shared responsibility of securing the open-source ecosystem, making it a compelling invitation for developers to get involved.

Hashtags: #Log4Shell #OpenSource #Cybersecurity #DevoxxFR2022 #EmmanuelLécharny #JeanBaptisteOnofré #HervéBoutemy #Apache

[DevoxxFR 2022] Do You Really Know JWT?

Karim Pinchon, a backend developer at Ornikar, delivered an illuminating talk titled “Do You Really Know JWT?” (watch on YouTube). With a decade of experience across Java, PHP, and Go, Karim dives into JSON Web Tokens (JWT), a standard for secure data transfer in authentication and authorization. This session explores JWT’s structure, cryptographic foundations, vulnerabilities, and best practices, moving beyond common usage in OAuth2 and OpenID Connect.

Understanding JWT Structure and Cryptography

Karim begins by demystifying JWT, a compact, secure token for transferring JSON data, often used in HTTP headers for authentication. A JWT comprises three parts—header, payload, and signature—each Base64url-encoded and concatenated with dots. The header specifies the cryptographic algorithm (e.g., HMAC, RSA), the payload contains claims (data), and the signature ensures integrity. Karim demonstrates this using jwt.io, showing how decoding reveals JSON objects.

He distinguishes token types: reference tokens (database-backed) and value tokens (self-contained, like JWT). JWT supports two forms: compact (Base64-encoded) and JSON (with additional features like multiple signatures). Karim introduces related standards under JOSE (JSON Object Signing and Encryption), including JWS (signed tokens), JWE (encrypted tokens), JWK (key management), and JWA (algorithms). Cryptographic operations like signing (for integrity) and encryption (for confidentiality) underpin JWT’s security.
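
Because the first two parts are plain Base64url-encoded JSON, the decoding step Karim demonstrates on jwt.io can be reproduced with nothing but the JDK. A minimal sketch (the token value passed in is hypothetical):

```java
import java.util.Base64;

public class JwtDecode {
    public static void main(String[] args) {
        // A compact JWT is three Base64url-encoded parts joined by dots:
        // header.payload.signature
        String token = args[0]; // e.g. "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiI0MiJ9.signature"
        String[] parts = token.split("\\.");
        Base64.Decoder decoder = Base64.getUrlDecoder();
        System.out.println("Header:  " + new String(decoder.decode(parts[0])));
        System.out.println("Payload: " + new String(decoder.decode(parts[1])));
        // The signature (parts[2]) is binary; it is verified, not read.
    }
}
```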

Payload Claims and Use Cases

The payload is JWT’s core, divided into three claim types:

  • Registered Claims: Standard fields like issuer (iss), audience (aud), expiration (exp), and token ID (jti) for validation.
  • Public Claims: Defined by IANA for protocols like OpenID Connect, carrying user data (e.g., name, email) in ID tokens.
  • Private Claims: Custom data agreed upon by parties, kept minimal for compactness.

Karim highlights JWT’s versatility in:

  • API Authentication: Tokens in Authorization headers validate requests without database lookups.
  • OAuth2: Access tokens may be JWTs, carrying authorization data.
  • OpenID Connect: ID tokens propagate user identity.
  • Stateless Sessions: Storing session data (e.g., e-commerce carts) client-side, enhancing scalability.

He cautions that stateless sessions require careful implementation to avoid complexity.

Security Vulnerabilities and Attacks

Karim dedicates significant time to JWT’s security risks, demonstrating attacks via a PHP library on his GitHub. Common vulnerabilities include:

  • Unsecured Tokens: Setting the header’s algorithm to none bypasses signature verification, a flaw exploited in some libraries. Karim shows a test where a modified token passes validation due to this (a forging sketch follows this list).
  • RSA Public Key as Shared Key: An attacker changes the algorithm from RSA to HMAC, using the public key as a shared secret, tricking servers into validating tampered tokens.
  • Brute Force: Weak secrets (e.g., “azerty”) are vulnerable to brute-force attacks.
  • Encrypted Data Modification: Some encryption algorithms allow payload tampering (e.g., flipping is_admin from false to true) without breaking the cipher.
  • Token Substitution: Using a token from one service (where the user is admin) on another without proper audience validation.
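
Here is that forging sketch: a hypothetical Java rendering (not Karim’s PHP demo) of the unsecured-token trick. Per RFC 7519, an unsecured JWT is two Base64url parts followed by a trailing dot where the signature would be:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class NoneTokenForgery {
    public static void main(String[] args) {
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        // Claim any identity: with alg=none there is no signature to check.
        String header  = enc.encodeToString(
                "{\"alg\":\"none\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String payload = enc.encodeToString(
                "{\"sub\":\"admin\",\"is_admin\":true}".getBytes(StandardCharsets.UTF_8));
        // An unsecured JWT ends with a dot and an empty signature part.
        System.out.println(header + "." + payload + ".");
    }
}
```

A correctly configured library rejects such a token outright, because none is not on its algorithm allow-list.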

Karim emphasizes the JWT paradox: the header, which specifies validation details, can’t be trusted until the token is validated. He attributes these issues to developers’ reliance on unvetted libraries, not poor coding.

Best Practices for Secure JWT Usage

To mitigate risks, Karim offers practical advice:

  • Protect Secrets: Use strong, rotated keys. Avoid sharing symmetric keys with external partners; prefer asymmetric keys (e.g., RSA).
  • Restrict Algorithms: Servers should only accept predefined algorithms (e.g., one or two), ignoring the header’s alg field.
  • Validate Claims: Check iss, aud, and exp to ensure the token’s legitimacy. Reject tokens not intended for your service (see the validation sketch after this list).
  • Use Trusted Libraries: Avoid custom implementations. Modern libraries require explicit algorithm whitelists, reducing none algorithm risks.
  • Short Token Lifespans: Minimize revocation needs with short-lived tokens. Avoid external revocation lists, as they undermine JWT’s autonomy.
  • Ensure Confidentiality: Since JWS payloads are Base64-encoded (readable), avoid sensitive data. Use JWE for encryption if needed, and transmit over HTTPS.
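
A minimal validation sketch along these lines, using the Nimbus JOSE+JWT library (com.nimbusds:nimbus-jose-jwt, one common choice, not necessarily the one Karim used); the HS256 choice, names, and claim checks are illustrative:

```java
import com.nimbusds.jose.JWSAlgorithm;
import com.nimbusds.jose.crypto.MACVerifier;
import com.nimbusds.jwt.JWTClaimsSet;
import com.nimbusds.jwt.SignedJWT;
import java.util.Date;

public class JwtValidation {
    // Only this algorithm is acceptable, whatever the token's header claims.
    private static final JWSAlgorithm EXPECTED_ALG = JWSAlgorithm.HS256;

    static JWTClaimsSet validate(String token, byte[] secret, String expectedAudience)
            throws Exception {
        SignedJWT jwt = SignedJWT.parse(token);
        // 1. Pin the algorithm: never trust the header's alg field.
        if (!EXPECTED_ALG.equals(jwt.getHeader().getAlgorithm())) {
            throw new SecurityException("Unexpected algorithm");
        }
        // 2. Check the signature (secret must be at least 256 bits for HS256).
        if (!jwt.verify(new MACVerifier(secret))) {
            throw new SecurityException("Bad signature");
        }
        JWTClaimsSet claims = jwt.getJWTClaimsSet();
        // 3. Validate claims: audience and expiration.
        if (!claims.getAudience().contains(expectedAudience)) {
            throw new SecurityException("Token not meant for this service");
        }
        if (claims.getExpirationTime() == null
                || claims.getExpirationTime().before(new Date())) {
            throw new SecurityException("Token expired");
        }
        return claims;
    }
}
```

Pinning the algorithm before verifying closes both the none bypass and the RSA-to-HMAC confusion described above.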

Karim also mentions alternatives like Biscuit (from Clever Cloud), PASETO, and Google’s Macaroons, which address JWT’s flaws, such as untrusted headers.

Hashtags: #DevoxxFrance #KarimPinchon #JWT #Security #Cryptography #Authentication #Authorization #OAuth2 #OpenIDConnect #JWS #JWE #JWK #Ornikar #PHP #Java

A Decade of Devoxx FR and Java Evolution: A Detailed Retrospective and Forward-Looking Analysis

Introduction:

The Devoxx FR conference has served as a key barometer of the Java platform’s dynamic evolution over the past ten years. This period has been marked by numerous releases, including major advancements that have significantly reshaped how we architect, develop, and deploy Java applications. This presentation offers a detailed retrospective analysis of significant announcements and the substantial changes within Java, emphasizing the critical importance of embracing these enhancements to optimize our applications for performance, maintainability, and security. Beyond a surface-level examination of syntax and API modifications, this session provides a comprehensive rationale for migrating to newer Java versions, addressing the common concerns and challenges that often accompany such transitions with practical insights and actionable strategies.

1. A Detailed Look Back: Java’s Evolution Over the Past Decade

Jean-Michel “JM” Doudoux begins the session by establishing a parallel timeline of the ten-year history of the Devoxx FR conference and Java’s continuous development. He emphasizes the importance of understanding the reception and adoption rates of different Java versions to contextualize the current state of the Java ecosystem.

Java 8:

JM highlights Java 8 as a watershed release, noting its widespread adoption and the introduction of transformative features that fundamentally changed Java development. Key features, a few of which are sketched in code after this list, include:

  • Lambda Expressions: Revolutionized functional programming in Java, enabling more concise and expressive code.
  • Stream API: Introduced a powerful and efficient way to process collections of data.
  • Method References: Simplified the syntax for referring to methods, further enhancing code readability.
  • New Date/Time API (java.time): Addressed the shortcomings of the old java.util.Date and java.util.Calendar APIs, providing a more robust and intuitive way to handle date and time.
  • Default Methods in Interfaces: Allowed adding new methods to interfaces without breaking backward compatibility.
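
A compact sketch combining several of these Java 8 features:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class Java8Demo {
    public static void main(String[] args) {
        List<String> features = Arrays.asList("Lambdas", "Streams", "java.time", "Defaults");
        // Stream pipeline using a lambda expression and a method reference.
        String result = features.stream()
                .filter(f -> f.length() > 7)       // lambda: keep the longer names
                .map(String::toUpperCase)          // method reference
                .collect(Collectors.joining(", "));
        System.out.println(result); // prints: JAVA.TIME, DEFAULTS
    }
}
```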

Java 11:

JM points out the slower adoption rate of Java 11, despite being a Long-Term Support (LTS) release, which typically encourages enterprise adoption due to extended support guarantees. Notable features include:

  • HTTP Client API: Introduced a new and improved HTTP client supporting HTTP/2 and WebSocket (a short example follows).
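
A minimal example of the Java 11 client (the URL is a placeholder):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HttpClientDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2) // negotiates HTTP/2, falls back to 1.1
                .build();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.org/"))
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```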

Java 17:

Characterized as a release that has garnered significant developer enthusiasm, building upon the foundation laid by previous versions and further refining the language.

Java 9:

Acknowledged as a disruptive release, primarily due to the introduction of the Java Platform Module System (JPMS), which brought modularity to Java. Doudoux discusses the profound impact of modularity on the Java ecosystem, affecting code organization, accessibility, and deployment.

Java 10, 12-16:

These releases are characterized as more transient feature releases, with less widespread adoption compared to the LTS versions. However, they introduced valuable features such as the following (illustrated in the sketch after the list):

  • Local Variable Type Inference (var): Simplified variable declaration.
  • Enhanced Switch Expressions: Improved the switch statement, making it more expressive and usable as an expression.
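
A short sketch showing both features together:

```java
public class FeatureDemo {
    public static void main(String[] args) {
        // Local variable type inference (Java 10): the compiler infers List<String>.
        var names = java.util.List.of("Ada", "Linus", "Grace");

        // Switch expression (finalized in Java 14): yields a value, no fall-through.
        for (var name : names) {
            var length = switch (name.length()) {
                case 3 -> "short";
                case 5 -> "medium";
                default -> "long";
            };
            System.out.println(name + " is " + length);
        }
    }
}
```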

2. Navigating Migration: Java 17 and Strategic Considerations

The presentation transitions to a practical discussion on the complexities of migrating to newer Java versions, with a strong emphasis on the benefits and challenges of migrating to Java 17. Doudoux addresses the common obstacles developers encounter when advocating for migration within their organizations, particularly the challenge of securing buy-in from operations teams and management.

Strategies for Persuasion:

The speaker offers valuable strategies to help developers build a compelling case for migration, focusing on:

  • Highlighting Performance Improvements: Emphasizing the performance gains offered by newer Java versions.
  • Improved Security: Stressing the importance of security updates and enhancements.
  • Increased Developer Productivity: Showcasing how new language features can streamline development workflows.
  • Long-Term Maintainability: Arguing that staying on older versions increases technical debt and maintenance costs in the long run.

Migration Considerations:

While a detailed, step-by-step migration guide is beyond the scope of the session, Doudoux outlines the essential high-level considerations and key steps involved in the migration process, such as:

  • Dependency Analysis: Assessing compatibility with updated libraries and frameworks.
  • Testing: Thoroughly testing the application after migration.
  • Gradual Rollouts: Considering phased deployments to minimize risk.

3. The Future of Java: Trends and Directions

The session concludes with a concise yet insightful look at the future trajectory of the Java platform. This segment provides a glimpse into upcoming features, emerging trends, and the ongoing evolution of Java, ensuring the audience is aware of the continuous innovation within the Java ecosystem.

Summary:

This presentation provides a detailed and comprehensive overview of Java’s journey over the past decade, carefully contextualized within the parallel evolution of the Devoxx FR conference. It goes beyond a simple recitation of features, offering in-depth analysis of the impact of key advancements, practical guidance on navigating the complexities of Java migration, and a valuable perspective on the future of the platform.

[Devoxx Poland 2022] Understanding Zero Trust Security with Service Mesh

At Devoxx Poland 2022, Viktor Gamov, a dynamic developer advocate at Kong, delivered an engaging presentation on zero trust security and its integration with service mesh technologies. With a blend of humor and technical depth, Viktor demystified the complexities of securing modern microservice architectures, emphasizing a philosophy that eliminates implicit trust to bolster system resilience. His talk, rich with practical demonstrations, offered developers and architects actionable insights into implementing zero trust principles using tools like Kong’s Kuma service mesh, making a traditionally daunting topic accessible and compelling.

The Philosophy of Zero Trust

Viktor begins by challenging the conventional notion of trust, using the poignant analogy of The Lion King to illustrate its exploitable nature. Trust, he argues, is a vulnerability when relied upon for system access, as it can be manipulated by malicious actors. Zero trust, conversely, operates on the premise that no entity—human or service—should be inherently trusted. This philosophy, not a product or framework, redefines security by requiring continuous verification of identity and access. Viktor outlines four pillars critical to zero trust in microservices: identity, automation, default denial, and observability. These principles guide the secure communication between services, ensuring robust protection in distributed environments.

Identity in Microservices

In the realm of microservices, identity is paramount. Viktor likens service identification to a passport, issued by a trusted authority, which verifies legitimacy without relying on trust. Traditional security models, akin to fortified castles with IP-based firewalls, are inadequate in dynamic cloud environments where services span multiple platforms. He introduces the concept of embedding identity within cryptographic certificates, specifically using the Subject Alternative Name (SAN) in TLS to encode service identities. This approach, facilitated by service meshes like Kuma, allows for encrypted communication and automatic identity validation, reducing the burden on individual services and enhancing security across heterogeneous systems.

Automation and Service Mesh

Automation is a cornerstone of effective zero trust implementation, particularly in managing the complexity of certificate generation and rotation. Viktor demonstrates how Kuma, a CNCF sandbox project built on Envoy, automates these tasks through its control plane. By acting as a certificate authority, Kuma provisions and rotates certificates seamlessly, ensuring encrypted mutual TLS (mTLS) communication between services. This automation alleviates manual overhead, enabling developers to focus on application logic rather than security configurations. During a live demo, Viktor showcases how Kuma integrates a gateway into the mesh, enabling mTLS from browser to service, highlighting the ease of securing traffic in real-time.

Deny by Default and Observability

The principle of denying all access by default is central to zero trust, ensuring that only explicitly authorized communications occur. Viktor illustrates how Kuma’s traffic permissions allow precise control over service interactions, preventing unauthorized access. For instance, a user service can be restricted to only communicate with an invoice service, eliminating wildcard permissions that expose vulnerabilities. Additionally, observability is critical for detecting and responding to threats. By integrating with tools like Prometheus, Loki, and Grafana, Kuma provides real-time metrics, logs, and traces, enabling developers to monitor service interactions and maintain an up-to-date system overview. Viktor’s demo of a microservices application underscores how observability enhances security and operational efficiency.

Practical Implementation with Kuma

Viktor’s hands-on approach culminates in a demonstration of deploying a containerized application within a Kuma mesh. By injecting sidecar proxies, Kuma ensures encrypted communication and centralized policy management without altering application code. He highlights advanced use cases, such as leveraging Open Policy Agent (OPA) to enforce fine-grained access controls, like restricting a service to read-only HTTP GET requests. This infrastructure-level security decouples policy enforcement from application logic, offering flexibility and scalability. Viktor’s emphasis on developer-friendly tools and real-time feedback loops empowers teams to adopt zero trust practices with minimal friction, fostering a culture of security-first development.

Hashtags: #ZeroTrust #ServiceMesh #Microservices #Security #Kuma #Kong #DevoxxPoland #ViktorGamov

[DevoxxFR 2022] Père Castor 🐻, raconte-nous une histoire (d’OPS)

At Devoxx France 2022, David Aparicio, a Data Ops engineer at OVHcloud, gave a 44-minute talk on learning from failure in IT operations. David analyzed post-mortems of major incidents at giants such as GitHub, Amazon, Google, OVHcloud, Apple, Fastly, Microsoft, GitLab, and Facebook. By exploring root causes, remediations, and good practices, he showed how to turn mistakes into lessons that strengthen system resilience. Follow OVHcloud at ovhcloud.com and twitter.com/OVHcloud.

Understanding Post-Mortems

David began by explaining what a post-mortem is: a document written after an incident to understand what happened, identify the causes, and prevent recurrences. It covers the incident timeline, information flows, the organization (who acted, with which team), customer communication channels, resource usage, and the processes followed. David stressed the importance of transparency, citing initiatives such as developer meetups where failures are shared openly to demystify incidents.

He illustrated the point with a fictional story about Elliot, a junior engineer who accidentally deletes a production database while following poorly structured documentation. This incident, inspired by real cases at AWS (2017), GitLab, and DigitalOcean, shows the dangers of uncontrolled access to production. David recommended guardrails such as manual approvals for critical commands (for example, DROP TABLE), strict RBAC roles, and regular backup tests to guarantee their reliability.

Personal Incidents: Legacy Systems Put to the Test

David shared a personal experience at OVHcloud, where he runs the data lake that replicates internal data. While on call one summer weekend, he was alerted to a problem on a legacy infrastructure with no clear documentation. A service was saturating its connection queue (1,024 clients maximum), causing refused connections. With no answer from the developers, David opted for a KISS (Keep It Simple, Stupid) solution: a probe checking connectivity every five minutes and restarting the service when needed. That script, in place for a year and a half, has restarted the service 70 times and prevented any further incidents.
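
The original guard was an ops script; as a rough Java rendering of the same KISS idea (the port, timeout, and service name here are invented for illustration):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ConnectivityProbe {
    public static void main(String[] args) throws Exception {
        // Hypothetical port and service name; run from cron every five minutes.
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("localhost", 8080), 3_000);
            // Connection accepted: nothing to do.
        } catch (IOException refused) {
            // Connection queue saturated or service down: bounce it.
            new ProcessBuilder("systemctl", "restart", "legacy-service")
                    .inheritIO().start().waitFor();
        }
    }
}
```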

Another incident involved a legacy Java application that kept falling over after 20 to 40 minutes despite restarts. The logs showed ZooKeeper disconnections and JVM crashes. Rather than tuning the heap, David dug through the history and found a proprietary cleanup script. Applied twice a week, it fixed the problem for good. These cases illustrate the importance of understanding legacy systems, avoiding needlessly complex solutions, and documenting fixes.

Major Outages: CDNs and Networks

David analyzed the Fastly incident of June 2021, when a 503 error hit sites including The Guardian, The New York Times, Amazon, Twitter, and the White House. The cause: a customer configuration deployed without testing on June 8, triggered by a request made on May 12, revealing a single point of failure (SPoF) in the CDN. Resolved in 45 minutes, the incident underlines the importance of testing changes in pre-production (for example, via blue-green deployments or shadow traffic) and of customizing error pages to improve the user experience.

Another striking case is the Facebook outage of October 2021, caused by a BGP (Border Gateway Protocol) update. The DNS servers, unable to reach the datacenters, went into protection mode, cutting access to Facebook, Instagram, WhatsApp, and even internal tools (Messenger, LDAP). Employees could neither badge in nor consult documentation, forcing a physical intervention with angle grinders to reach the racks. David recommended longer DNS TTLs (Time To Live), separate communication channels, and fallback routes through other cloud providers.

Best Practices and a Culture of Failure

David concluded by insisting that individuals should not be blamed, as in Elliot’s case, but that processes should be strengthened. He proposed regular backup testing, chaos engineering exercises (for example, simulating a 500 error on a Friday afternoon), and the adoption of DevSecOps practices to bake security in from the unit-test stage. He also suggested reading public post-mortems (such as GitLab’s or Elasticsearch’s) for inspiration and using tools like Terraform to automate safe deployments. Finally, he encouraged the audience to join OVHcloud to experiment and learn from incidents in a transparent environment.

Kafka Streams @ Carrefour: Lightning-Fast Big Data Processing

At Devoxx France 2022, François Sarradin and Jérémy Sebayhi, members of Carrefour’s data teams, shared a 45-minute experience report on using Kafka Streams for real-time big data pipelines. François, technical lead at Moshi, and Jérémy, senior engineer at Carrefour, detailed their transition from batch Spark and Hadoop systems to reactive stream processing on Google Cloud Platform (GCP). Their talk covered the adoption of Kafka Streams for stock and price computation, the challenges encountered, and the creative solutions they put in place. Find Carrefour at carrefour.com and Moshi at moshi.fr.

From Batch to Stream Processing

François and Jérémy opened by comparing batch and stream processing. Carrefour’s legacy platform, dating from 2014, relied on Spark and Hadoop for batch jobs, treating data as files with clear inputs and outputs. Errors were manageable: fix the input files and rerun the pipeline. Streaming, by contrast, deals with continuous event flows through Kafka topics, where errors must be handled in real time without disrupting the pipeline. A corrupted event cannot simply be deleted, and since historical data may span years, reprocessing is impractical.

Kafka Streams, a reactive framework built on Apache Kafka, enabled Carrefour’s move to stream processing. It relies on Kafka for scalable data transit and on RocksDB for low-latency, co-located state storage. François explained that developers define topologies (directed acyclic graphs, much like Spark’s) with operations such as map, flatMap, reduce, and join. Kafka Streams automatically handles topic creation, state stores, and resilience, simplifying development. Integration with GCP services (GCS, GKE, BigTable) and Carrefour’s internal systems enabled real-time stock and price calculations at national scale.
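
As a flavor of what such a topology looks like in code, here is a minimal Kafka Streams sketch in Java; the topic names and the stock-aggregation logic are hypothetical, not Carrefour’s actual pipeline:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class StockTopology {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        // Each event is (productId -> quantity delta); fold deltas into a running stock level.
        KTable<String, Long> stock = builder
                .stream("stock-movements", Consumed.with(Serdes.String(), Serdes.Long()))
                .groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
                .reduce(Long::sum); // state lives in a co-located RocksDB store

        stock.toStream().to("stock-levels", Produced.with(Serdes.String(), Serdes.Long()));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "stock-aggregator");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        new KafkaStreams(builder.build(), props).start();
    }
}
```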

Overcoming Adoption Challenges

Adopting Kafka Streams at Carrefour was not without obstacles. Jérémy noted that many teams lacked Kafka experience, but familiarity with Spark eased the transition, since both use similar transformation paradigms. Teams independently developed practices for monitoring, configuration, and deployment, which were later consolidated into shared best practices. This pragmatic approach created a common foundation for new projects and accelerated adoption.

The change required cultural adaptation beyond technical skills. Carrefour’s data platform, which handles massive volumes and high-velocity data (stocks, prices, orders), demanded a shift in mindset from batch to reactive. Stream processing involves continuous joins with external databases, unlike the static datasets of batch jobs. François and Jérémy stressed the importance of early documentation and expert support to navigate the complexities of Kafka Streams, especially during production deployments.

Best Practices and Architectures

François and Jérémy shared the key practices that emerged over two years. For topic schemas, they used Schema Registry to type the data, preferring mandatory keys to keep partitioning stable and avoiding optional key fields that could break contracts. Message values included optional fields for flexibility, with mandatory fields such as IDs and timestamps for debugging and event ordering.

Maintaining stateful topologies posed challenges. Adding new transformations (for example, a new data source) required reprocessing historical data, at the risk of emitting duplicates. They proposed solutions such as blue-green deployments, where the new version builds its state without producing output until it is ready, or using compacted topics as snapshots that store only the latest state per key. These approaches minimized disruption but demanded rigorous planning, since blue-green deployments temporarily double resource requirements.

Metrics and Monitoring

Monitoring Kafka Streams applications was crucial. François highlighted the key metrics: lag (messages waiting per topic and consumer group), which reveals contention points; end-to-end latency, which measures processing time per topology node; and rebalance events, triggered by consumer-group changes, which can hurt performance. Carrefour used Prometheus to collect metrics and Grafana for dashboards, ensuring proactive problem detection. Jérémy stressed the value of custom metrics exposed through a web layer for health checks, since Kafka Streams’ JMX metrics are not always sufficient.

They also covered deployment challenges, running on Kubernetes (GKE) with readiness probes to track application state. CPU over-commitment could delay health-check responses and cause consumer-group evictions, hence the importance of precise resource tuning. François and Jérémy closed by praising Kafka Streams’ robust ecosystem (connectors, test libraries, documentation) while noting that its event-driven nature demands a mindset distinct from batch. Their experience at Carrefour demonstrated the power of Kafka Streams for large-scale real-time data, and they invited the audience to share their own experiences.

[DevoxxFR 2022] Cracking Enigma: A Tale of Espionage and Mathematics

In his captivating 45-minute talk at Devoxx France 2022, Jean-Christophe Sirot, a cloud telephony expert from Sherweb, takes the audience on a historical journey through the cryptanalysis of the Enigma machine, used by German forces during World War II. Jean-Christophe weaves a narrative that blends espionage, mathematics, and technological innovation, highlighting the lesser-known contributions of Polish cryptanalysts like Marian Rejewski alongside Alan Turing’s famed efforts. His presentation, recorded in April 2022 in Paris, reveals how Enigma’s secrets were unraveled through a combination of human ingenuity and mathematical rigor, ushering cryptography into the modern era. This post summarizes the key themes, from early Polish breakthroughs to Turing’s machines, and reflects on their lasting impact.

The Polish Prelude: Cryptography in a Time of War

Jean-Christophe sets the stage in post-World War I Poland, a nation caught between Soviet Russia and a resurgent Germany. In 1919, during the Polish-Soviet War, Polish radio interception units, staffed by former German army officers, cracked Soviet codes, securing a decisive victory at the Battle of Warsaw. This success underscored the strategic importance of cryptography, prompting Poland to invest in codebreaking. By 1929, a curious incident at Warsaw’s central station revealed Germany’s use of Enigma machines. A German embassy official’s attempt to retrieve a misrouted “radio equipment” package—later identified as a commercial Enigma—alerted Polish intelligence.

Recognizing the complexity of Enigma, a machine with rotors, a reflector, and a plugboard generating billions of possible configurations, Poland innovated. Instead of relying on puzzle-solvers, as was common, they recruited mathematicians. At a new cryptography chair in western Poland, young talents like Marian Rejewski, Henryk Zygalski, and Jerzy Różycki began applying group theory and permutation mathematics to Enigma’s ciphers. Their work marked a shift from intuitive codebreaking to a systematic, mathematical approach, laying the groundwork for future successes.

Espionage and Secrets: The German Defector

The narrative shifts to 1931 Berlin, where Hans-Thilo Schmidt, a disgruntled former German officer, offered to sell Enigma’s secrets to the French. Schmidt, driven by financial troubles and resentment after being demobilized post-World War I, had access to Enigma key tables and technical manuals through his brother, an officer in Germany’s cipher bureau. Meeting French intelligence in Verviers, Belgium, Schmidt handed over critical documents. However, the French, lacking advanced cryptanalysis expertise, passed the materials to their Polish allies.

The Poles, already studying Enigma, seized the opportunity. Rejewski and his team exploited a flaw in the German protocol: operators sent a three-letter message key twice at the start of each transmission. Using permutation theory, they analyzed these repeated letters to deduce rotor settings. By cataloging cycle structures for all possible rotor configurations—a year-long effort—they cracked 70–80% of Enigma messages by the late 1930s. Jean-Christophe emphasizes the audacity of this mathematical feat, achieved with minimal computational resources, and the espionage that made it possible.

Turing and Bletchley Park: Scaling the Attack

As Germany invaded Poland in 1939, the Polish cryptanalysts shared their findings with the Allies, providing documentation and a reconstructed Enigma machine. This transfer was pivotal, as Germany had upgraded Enigma, increasing rotors from three to five and plugboard connections from six to ten, exponentially raising the number of possible keys. The Polish method, reliant on the repeated message key, became obsolete when Germany reduced repetitions to once.

Enter Alan Turing and the team at Bletchley Park, Britain’s codebreaking hub. Turing devised a new approach: the “known plaintext attack.” By assuming certain messages contained predictable phrases, like weather forecasts for the Bay of Biscay, cryptanalysts could test rotor settings. Turing’s genius lay in automating this process with the “Bombe,” an electromechanical device that tested rotor and plugboard configurations in parallel. Jean-Christophe explains how the Bombe used electrical circuits to detect inconsistencies in assumed settings, drastically reducing the time needed to crack a message. By running multiple Bombes, Bletchley Park decrypted messages within hours, providing critical intelligence that shortened the war by an estimated one to two years.

The Legacy of Enigma: Modern Cryptography’s Dawn

Jean-Christophe concludes by reflecting on Enigma’s broader impact. The machine, despite its complexity, was riddled with flaws, such as the inability to map a letter to itself and the exploitable key repetition protocol. These vulnerabilities, exposed by Polish and British cryptanalysts, highlighted the need for robust algorithms and secure protocols. Enigma’s cryptanalysis marked a turning point, transforming cryptography from a craft of puzzle enthusiasts to a rigorous discipline grounded in mathematics and, later, computer science.

He draws parallels to modern cryptographic failures, like the flawed WEP protocol for early Wi-Fi, which used secure algorithms but a weak protocol, and the PlayStation 3’s disk encryption, undone by poor key management. Jean-Christophe’s key takeaway for developers: avoid custom cryptography, use industry standards, and prioritize protocol design. The Enigma story, blending human drama and technical innovation, underscores the enduring importance of secure communication in today’s digital world.

Resources:

  • Enigma by Dermot Turing
  • Our Spy in Hitler’s Office by Paul Paillole
  • The Code Book by Simon Singh
  • The Codebreakers by David Kahn

[DevoxxFR 2022] Easily Calling Native Functions with Project Panama from Java

At Devoxx France 2022, Brice Dutheil gave a 28-minute talk on Project Panama, an initiative that aims to simplify calling native functions from Java without the complexity of JNI or third-party libraries. Brice, an active contributor to the Java ecosystem, introduced the Foreign Function & Memory API (JEP-419), showing how it bridges Java’s managed world and native code in C, Swift, or Rust. Through live-coding demonstrations, Brice illustrated Panama’s potential for seamless native integrations. Follow Brice on Twitter at twitter.com/Brice_Dutheil for more Java insights.

Simplifying Native Code Integration

Brice started by explaining Project Panama’s mission: connecting Java’s managed environment, with its garbage collector, to the native world of C, Swift, or Rust, closer to the machine. Traditionally, JNI imposed laborious steps: writing wrapper classes, loading libraries, and generating headers at build time. These processes were error-prone and time-consuming. Alternatives like JNA and JNR improved the developer experience by generating bindings at runtime, but they were slower and less safe.

Launched in 2014, Project Panama addresses these challenges with three strands: the vector API (not covered here), foreign function calls, and memory management. Brice focused on the Foreign Function & Memory API (JEP-419), available as an incubator module in JDK 18. Unlike JNI, Panama removes build-time complexity and offers near-native performance on every platform. It introduces a robust security model that restricts dangerous operations, and future Java releases may constrain JNI further (for example, Java 25 could require a flag to enable JNI). Brice highlighted the use of method handles and dynamic invocation instructions, inspired by advances in JVM bytecode, to generate assembly instructions for native calls efficiently.

Hands-On Demonstrations with Panama

Brice demonstrated Panama’s capabilities through live coding, starting with a simple example calling the C standard library’s getpid function. Using the system linker, he performed a symbol lookup to locate getpid, created a method handle with a function descriptor defining the signature, and invoked it to retrieve the process ID. The whole process bypassed JNI’s heaviness, requiring only a few lines of Java. Brice stressed enabling native access with the --enable-native-access flag in JDK 18, reinforcing Panama’s security model by limiting access to specific modules.
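
The incubator names from JEP-419 shifted between releases; the sketch below uses the API as it was later finalized in java.lang.foreign (JDK 22), but the shape (symbol lookup, function descriptor, method handle, invoke) matches the demo:

```java
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.invoke.MethodHandle;
import static java.lang.foreign.ValueLayout.JAVA_INT;

public class GetPid {
    public static void main(String[] args) throws Throwable {
        Linker linker = Linker.nativeLinker();
        // Find getpid in the default (C standard library) lookup and
        // describe its native signature: no arguments, returns a C int.
        MethodHandle getpid = linker.downcallHandle(
                linker.defaultLookup().find("getpid").orElseThrow(),
                FunctionDescriptor.of(JAVA_INT));
        int pid = (int) getpid.invokeExact();
        System.out.println("pid = " + pid);
    }
}
```

On recent JDKs, starting the JVM with --enable-native-access for the calling module silences the restricted-method warning.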

He then walked through a more complex example with the crypto_box function from the Libsodium cryptography library, which is portable to platforms such as Android. Brice allocated memory segments with a ResourceScope and a native allocator, guaranteeing memory safety by releasing resources automatically after use, unlike JNI, which depends on the garbage collector. The ResourceScope prevents memory leaks, a significant improvement over traditional native buffers. Brice also covered calling Swift code through C-compatible interfaces, demonstrating Panama’s versatility.
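
In the finalized API, ResourceScope evolved into Arena; a minimal sketch of deterministic native allocation under those later names:

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

public class ScopedMemory {
    public static void main(String[] args) {
        // All segments allocated from the arena are freed when it closes,
        // deterministically, without waiting for the garbage collector.
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment buf = arena.allocate(32); // 32 bytes of native memory
            buf.set(ValueLayout.JAVA_BYTE, 0, (byte) 42);
            System.out.println(buf.get(ValueLayout.JAVA_BYTE, 0));
        } // native memory released here
    }
}
```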

Tooling and Future Potential

Brice introduced jextract, a Panama tool that generates Java mappings from C/C++ headers, simplifying the integration of libraries such as Blake3, a high-performance hash function written in Rust. In a demo, he showed how jextract produced Panama-compatible bindings for Blake3’s data structures and functions, letting Java developers tap native performance without hand-written bindings. Despite a few hiccups, the demo underlined Panama’s potential for seamless native integration.

Brice concluded by highlighting Panama’s advantages: simplicity, speed, cross-platform compatibility, and stronger memory safety. He noted its ongoing evolution, with JEP-419 incubating in JDK 18 and a second preview planned for JDK 19. For developers of desktop applications or critical systems, Panama offers a powerful way to exploit OS-specific functions or optimized libraries such as Libsodium. Brice encouraged the audience to experiment with Panama and to reach out with questions on Twitter.

[VivaTech 2021] Emmanuel Macron: Championing European Scale-Ups and Innovation

Abstract

At VivaTech 2021, French President Emmanuel Macron joined a panel of European scale-up CEOs to discuss the future of Europe’s tech ecosystem. In a 66-minute conversation, Macron emphasized the need for a robust financial ecosystem, streamlined regulations, and a unified European market to support scale-ups. The panel, featuring leaders from Believe, Aledia, Neuroelectrics, and Klarna, highlighted Europe’s potential to lead in innovation through ethical, sustainable, and citizen-centric approaches. This article explores Macron’s vision for fostering European champions, addressing challenges in funding, regulation, and talent, and positioning Europe as a global tech leader.

Introduction

In June 2021, VivaTech, Europe’s premier startup and tech event, hosted a landmark panel featuring French President Emmanuel Macron alongside CEOs of leading European scale-ups. Moderated by Nicolas Barré of Les Échos, the discussion showcased Europe’s burgeoning tech landscape through the lens of companies like Believe (digital music distribution), Aledia (LED displays), Neuroelectrics (neuroscience), and Klarna (fintech). Macron articulated a bold vision for transforming Europe into a hub for innovation by strengthening its financial ecosystem, reducing regulatory barriers, and embracing a distinctly European approach that blends science, ethics, and ambition. This article delves into the key themes of the panel, weaving a narrative around Macron’s call for speed, scale, and sovereignty in European tech.

Building a Thriving Tech Ecosystem

Believe: Scaling Digital Music

Denis Ladegaillerie, CEO of Believe, opened the panel by sharing his company’s journey from a three-person startup in his living room to a global leader supporting 850,000 artists across 50 countries. Believe, which recently went public via an IPO, aims to dominate digital music distribution by offering artists transparency, better economics, and digital-first expertise. Ladegaillerie credited France’s Next 40 and French Tech initiatives for creating a supportive environment for Believe’s Paris-based IPO, noting that Europe is set to become the second-largest music market by 2028. He urged Macron to foster more IPOs by attracting talent, educating investors, and building a pipeline of listed companies to create a virtuous cycle.

Macron responded by emphasizing the need for a robust financial ecosystem to provide liquidity for investors through mergers and acquisitions (M&As) and IPOs. He highlighted France’s Tibi Initiative, which redirected 6 billion euros of institutional savings to tech investments, unlocking 20 billion euros for the sector. Macron proposed scaling this model to the European level, encouraging banks and insurers to invest more in tech equity and fostering cooperation with large corporations for M&A exits. He stressed that successful IPOs like Believe’s enhance Europe’s credibility, attracting analysts and investors to fuel further growth.

Aledia: Industrializing Deep Tech

Giorgio Anania, CEO of Aledia, brought a deep-tech perspective, focusing on energy-efficient LED displays poised to revolutionize augmented reality (AR) within five years. With experience across startups in the U.S., U.K., Germany, and France, Anania praised France’s supportive environment, particularly BPI France’s assistance in choosing France over Singapore for Aledia’s manufacturing plant. However, he highlighted Europe’s lag in capital access compared to the U.S. and China, where “infinite money” fuels rapid scaling. Anania posed three questions to Macron: how to match U.S./China capital access, accelerate European reforms within three years, and simplify regulations for small companies transitioning to industrial scale.

Macron agreed that “speediness and scale” are critical, advocating for a European strategy to attract U.S. and Chinese investors by positioning Europe as business-friendly and innovative. He proposed rethinking procurement to favor startups over “usual suspects” in deep-tech sectors like energy, mobility, and defense, citing SpaceX’s disruption of aerospace as a model. Macron emphasized that deep tech is a matter of European sovereignty, warning that missing the current innovation wave could leave Europe dependent on U.S. or Chinese technologies. To support industrialization, he committed to streamlining regulations to ease the growth of small companies like Aledia.

The European Way: Science, Ethics, and Impact

Neuroelectrics: Innovating in Healthcare

Ana Maiques, CEO of Neuroelectrics, shared her Barcelona-based company’s mission to modulate brain activity for conditions like epilepsy and depression. Demonstrating a cap that monitors and stimulates brain signals in real time, Maiques highlighted Neuroelectrics’ FDA breakthrough designation for reducing seizures in children non-invasively. She emphasized Europe’s potential to address healthcare challenges—mental health, aging, and neurodegeneration—through responsible innovation. Having scaled her company to Boston, Maiques asked Macron how the “European way” could attract the next generation and how the pandemic reshaped his healthcare vision.

Macron described the European way as a unique blend of science, ethics, and economic ambition, resilient to globalization due to its ability to navigate complexity. Unlike the U.S., which prioritizes market efficiency, or China, Europe embeds democratic values and ethical considerations in innovation. He argued that sustainable business requires regulation to protect human rights and prevent unchecked data exploitation, citing the risks of private platforms controlling brain data or insurers using it to discriminate. Macron positioned Europe’s General Data Protection Regulation (GDPR), Digital Markets Act (DMA), and Digital Services Act (DSA) as frameworks for ethical innovation, ensuring transparency and citizen trust.

On healthcare, Macron identified education and healthcare as key investment pillars, advocating for personalization and prevention through AI and deep tech. He highlighted France’s centralized healthcare data as a competitive advantage, enabling secure, innovative solutions if access is managed transparently. Post-pandemic, Macron saw innovation as critical to shifting healthcare from hospital-centric models to citizen-focused systems, reducing costs and preventing chronic diseases through personalized approaches.

Disrupting with Purpose

Klarna: Fintech and Open Banking

Sebastian Siemiatkowski, CEO of Klarna, represented Sweden’s vibrant tech scene, with Klarna’s 90 million users and $45 billion valuation disrupting retail banking. He praised Macron’s business-friendly leadership but criticized Brussels’ slow and ineffective regulations, particularly on open banking and GDPR. Siemiatkowski argued that GDPR’s cookie-consent overload (by his estimate, a collective 142 human lifetimes spent on consent banners every day) fails to enhance privacy, while open banking regulations fall short of enabling the data mobility needed to drive competition. He urged Macron to push for consumer-centric regulations that foster innovation and position Europe as a global leader.

Macron defended GDPR as a necessary foundation, ensuring legal accountability and consumer awareness, but acknowledged that regulations blocking innovation are counterproductive. He candidly admitted governments’ reluctance to fully embrace disruptive models like Klarna’s, which can eliminate retail banking jobs. Macron clarified his dual role: supporting innovation that adds new services without destroying jobs, while balancing economic and social priorities. He cited Singapore’s open banking success as a model, suggesting that forward-leaning regulation could attract investment and create jobs, but emphasized the need for European players to lead disruption to maintain sovereignty.

A Call for Speed and Sovereignty

Macron concluded by reiterating the urgency of building a single European market, lifting sectoral barriers, and replicating France’s Next 40 and FT 120 initiatives at the European level. He committed to prioritizing these goals during France’s EU presidency in early 2022, aiming for concrete results. Macron underscored the political dimension of innovation, framing it as a matter of sovereignty to ensure Europe develops its own champions and technologies. By fostering trust through regulation, attracting global capital, and empowering startups, Europe can seize the current wave of innovation to shape a sustainable, ethical future.

Conclusion

The VivaTech 2021 panel with Emmanuel Macron and European scale-up leaders was a powerful testament to Europe’s potential as a global tech hub. From Believe’s digital music revolution to Aledia’s deep-tech displays, Neuroelectrics’ brain health innovations, and Klarna’s fintech disruption, the panel showcased diverse visions united by a commitment to impact. Macron’s vision—rooted in speed, scale, and the European way—offers a roadmap for building a resilient ecosystem. By strengthening financial markets, streamlining regulations, and championing ethical innovation, Europe can lead the next decade’s technological wave, ensuring sovereignty and prosperity for its citizens.