Posts Tagged ‘DevoxxFR’

[DevoxxFR 2022] Do You Really Know JWT?

Do You Really Know JWT? Insights from Devoxx France 2022

Karim Pinchon, a backend developer at Ornikar, delivered an illuminating talk titled “Do You Really Know JWT?” (watch on YouTube). With a decade of experience across Java, PHP, and Go, Karim dives into JSON Web Tokens (JWT), a standard for secure data transfer in authentication and authorization. This session explores JWT’s structure, cryptographic foundations, vulnerabilities, and best practices, moving beyond common usage in OAuth2 and OpenID Connect.

Understanding JWT Structure and Cryptography

Karim begins by demystifying JWT, a compact, secure token for transferring JSON data, often used in HTTP headers for authentication. A JWT comprises three parts—header, payload, and signature—encoded in Base64 and concatenated with dots. The header specifies the cryptographic algorithm (e.g., HMAC, RSA), the payload contains claims (data), and the signature ensures integrity. Karim demonstrates this using jwt.io, showing how decoding reveals JSON objects.
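
To make that structure concrete, here is a minimal sketch in plain Java that splits a compact token on its dots and Base64URL-decodes the header and payload (the token value is an illustrative placeholder, not a real credential):

import java.util.Base64;

public class JwtDecode {
    public static void main(String[] args) {
        // Illustrative token: header.payload.signature (not a real credential)
        String token = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9."
                + "eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIn0."
                + "c2lnbmF0dXJlLWJ5dGVzLWhlcmU";
        String[] parts = token.split("\\.");
        Base64.Decoder decoder = Base64.getUrlDecoder();
        // Header and payload are plain Base64URL-encoded JSON, readable by anyone
        System.out.println("Header:  " + new String(decoder.decode(parts[0])));
        System.out.println("Payload: " + new String(decoder.decode(parts[1])));
        // The third part is the signature; it provides integrity, not secrecy
    }
}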

He distinguishes token types: reference tokens (database-backed) and value tokens (self-contained, like JWT). JWT supports two forms: compact (Base64-encoded) and JSON (with additional features like multiple signatures). Karim introduces related standards under JOSE (JSON Object Signing and Encryption), including JWS (signed tokens), JWE (encrypted tokens), JWK (key management), and JWA (algorithms). Cryptographic operations like signing (for integrity) and encryption (for confidentiality) underpin JWT’s security.
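
As a sketch of the signing step behind a JWS, the following plain-Java fragment computes an HMAC-SHA256 signature over header.payload, which is exactly what the third token segment carries (the key and claims are illustrative assumptions):

import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class JwsSign {
    public static void main(String[] args) throws Exception {
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        // Illustrative header and payload, Base64URL-encoded without padding
        String header = enc.encodeToString("{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes());
        String payload = enc.encodeToString("{\"sub\":\"42\",\"iss\":\"demo\"}".getBytes());
        String signingInput = header + "." + payload;

        // HS256 = HMAC-SHA256 over the signing input with a shared secret
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec("use-a-long-random-secret".getBytes(), "HmacSHA256"));
        String signature = enc.encodeToString(mac.doFinal(signingInput.getBytes()));

        System.out.println(signingInput + "." + signature); // compact JWS
    }
}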

Payload Claims and Use Cases

The payload is JWT’s core, divided into three claim types:

  • Registered Claims: Standard fields like issuer (iss), audience (aud), expiration (exp), and token ID (jti) for validation.
  • Public Claims: Defined by IANA for protocols like OpenID Connect, carrying user data (e.g., name, email) in ID tokens.
  • Private Claims: Custom data agreed upon by parties, kept minimal for compactness.

Karim highlights JWT’s versatility in:

  • API Authentication: Tokens in Authorization headers validate requests without database lookups.
  • OAuth2: Access tokens may be JWTs, carrying authorization data.
  • OpenID Connect: ID tokens propagate user identity.
  • Stateless Sessions: Storing session data (e.g., e-commerce carts) client-side, enhancing scalability.

He cautions that stateless sessions require careful implementation to avoid complexity.

Security Vulnerabilities and Attacks

Karim dedicates significant time to JWT’s security risks, demonstrating attacks via a PHP library on his GitHub. Common vulnerabilities include:

  • Unsecured Tokens: Setting the header’s algorithm to none bypasses signature verification, a flaw exploited in some libraries. Karim shows a test where a modified token passes validation due to this.
  • RSA Public Key as Shared Key: An attacker changes the algorithm from RSA to HMAC, using the public key as a shared secret, tricking servers into validating tampered tokens.
  • Brute Force: Weak secrets (e.g., “azerty”) are vulnerable to brute-force attacks.
  • Encrypted Data Modification: Some encryption algorithms allow payload tampering (e.g., flipping is_admin from false to true) without breaking the cipher.
  • Token Substitution: Using a token from one service (where the user is admin) on another without proper audience validation.

Karim emphasizes the JWT paradox: the header, which specifies validation details, can’t be trusted until the token is validated. He attributes these issues to developers’ reliance on unvetted libraries, not poor coding.

Best Practices for Secure JWT Usage

To mitigate risks, Karim offers practical advice:

  • Protect Secrets: Use strong, rotated keys. Avoid sharing symmetric keys with external partners; prefer asymmetric keys (e.g., RSA).
  • Restrict Algorithms: Servers should only accept predefined algorithms (e.g., one or two), ignoring the header’s alg field.
  • Validate Claims: Check iss, aud, and exp to ensure the token’s legitimacy. Reject tokens not intended for your service (see the validation sketch after this list).
  • Use Trusted Libraries: Avoid custom implementations. Modern libraries require explicit algorithm whitelists, reducing none algorithm risks.
  • Short Token Lifespans: Minimize revocation needs with short-lived tokens. Avoid external revocation lists, as they undermine JWT’s autonomy.
  • Ensure Confidentiality: Since JWS payloads are Base64-encoded (readable), avoid sensitive data. Use JWE for encryption if needed, and transmit over HTTPS.
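
As one way to encode these rules, here is a hedged sketch using the jjwt library (0.11.x-era API; the key, issuer, and audience values are illustrative). Pinning an HMAC key fixes the accepted algorithm family, and the required claims are checked before the token is trusted:

import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jws;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.security.Keys;
import javax.crypto.SecretKey;

public class JwtValidation {
    public static Jws<Claims> validate(String token, byte[] secret) {
        // hmacShaKeyFor rejects secrets that are too short for HS256,
        // guarding against the weak-secret brute-force scenario above
        SecretKey key = Keys.hmacShaKeyFor(secret);
        return Jwts.parserBuilder()
                .setSigningKey(key)                        // rejects alg=none and RSA/HMAC confusion
                .requireIssuer("https://auth.example.com") // iss check (illustrative value)
                .requireAudience("my-service")             // aud check (illustrative value)
                .build()
                .parseClaimsJws(token);                    // also enforces exp/nbf
    }
}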

Karim also mentions alternatives like Biscuits (from Clever Cloud), PASETO, and Google’s Macaroons, which address JWT’s flaws, such as untrusted headers.

Hashtags: #DevoxxFrance #KarimPinchon #JWT #Security #Cryptography #Authentication #Authorization #OAuth2 #OpenIDConnect #JWS #JWE #JWK #Ornikar #PHP #Java

[DevoxxFR 2022] Père Castor 🐻, Tell Us a Story (About Ops)

At Devoxx France 2022, David Aparicio, Data Ops at OVHcloud, gave a 44-minute talk on learning from failure in IT operations. David analyzed post-mortems of major incidents at companies such as GitHub, Amazon, Google, OVHcloud, Apple, Fastly, Microsoft, GitLab, and Facebook. By exploring root causes, remediations, and good practices, he showed how to learn from mistakes to strengthen system resilience. Follow OVHcloud at ovhcloud.com and twitter.com/OVHcloud.

Understanding Post-Mortems

David started by explaining what a post-mortem is: a document written after an incident to understand what happened, identify the causes, and prevent recurrences. It covers the incident timeline, information flows, the organization (who acted, with which team), customer communication channels, resource usage, and the processes followed. David stressed the importance of transparency, citing initiatives such as developer meetups where failures are shared to demystify incidents.

He illustrated the point with a fictional story about Elliot, a junior engineer who accidentally deletes a production database while following poorly structured documentation. This incident, inspired by real cases at AWS (2017), GitLab, and DigitalOcean, shows the dangers of uncontrolled access to production. David recommended guardrails such as manual approvals for critical commands (for example, DROP TABLE), strict RBAC roles, and regular backup tests to guarantee their reliability.

Personal Incidents: Legacy Systems Put to the Test

David shared a personal experience at OVHcloud, where he manages the data lake that replicates internal data. While on call one summer weekend, he was alerted to a problem on a legacy infrastructure with no clear documentation. A service was saturating its connection queue (1,024 clients maximum), causing connection refusals. With no answer from the developers, David opted for a KISS (Keep It Simple, Stupid) solution: a probe checking connectivity every five minutes and restarting the service when needed. In place for a year and a half, this script has restarted the service 70 times, preventing further incidents.

Another incident involved a legacy Java application that kept falling over after 20 to 40 minutes despite restarts. The logs showed ZooKeeper disconnections and JVM crashes. Rather than tuning the heap, David dug through the history and found a proprietary cleanup script. Applied twice a week, it fixed the problem for good. These cases illustrate the importance of understanding legacy systems, avoiding overly complex solutions, and documenting fixes.

Major Outages: CDNs and Networks

David analyzed the June 2021 Fastly incident, in which a 503 error hit sites including The Guardian, The New York Times, Amazon, Twitter, and the White House. The cause: a customer configuration deployed without testing on June 8, triggered by a request dating from May 12, revealing a single point of failure (SPoF) in the CDN. Resolved in 45 minutes, this incident underlines the importance of testing changes in pre-production (for example, via blue-green deployments or shadow traffic) and of customizing error messages to improve the user experience.

Another landmark case is the September 2021 Facebook outage, caused by an update to BGP (Border Gateway Protocol). The DNS servers, unable to reach the datacenters, switched into protection mode, cutting off access to Facebook, Instagram, WhatsApp, and even internal tools (Messenger, LDAP). Employees could neither badge in nor consult documentation, forcing a physical intervention with angle grinders to reach the racks. David recommended longer DNS TTLs (Time To Live), separate communication channels, and fallback routes through other cloud providers.

Good Practices and a Culture of Failure

David concluded by insisting on not blaming individuals, as in Elliot’s case, but on strengthening processes instead. He proposed regular backup testing, chaos engineering exercises (for example, simulating a 500 error on a Friday afternoon), and adopting DevSecOps practices to build security in from the unit tests onward. He also suggested reading public post-mortems (such as GitLab’s or ElasticSearch’s) for inspiration, and using tools like Terraform to automate secure deployments. Finally, he encouraged the audience to join OVHcloud to experiment and learn from incidents in a transparent environment.

[DevoxxFR 2022] Easily Calling Native Functions from Java with Project Panama

At Devoxx France 2022, Brice Dutheil gave a 28-minute talk on Project Panama, an initiative to simplify calling native functions from Java without the complexity of JNI or third-party libraries. Brice, an active contributor to the Java ecosystem, introduced the Foreign Function & Memory API (JEP-419), showing how it bridges Java’s managed world and native code written in C, Swift, or Rust. Through live-coding demonstrations, Brice illustrated Panama’s potential for seamless native integrations. Follow Brice on Twitter at twitter.com/Brice_Dutheil for more Java insights.

Simplifying Native Code Integration

Brice began by explaining Project Panama’s mission: connecting Java’s managed environment, with its garbage collector, to the native world of C, Swift, or Rust, closer to the machine. Traditionally, JNI imposed laborious steps: writing wrapper classes, loading libraries, and generating headers during builds. These processes were error-prone and time-consuming. Alternatives such as JNA and JNR improved the developer experience by generating bindings at runtime, but they were slower and less safe.

Launched in 2014, Project Panama tackles these challenges with three components: the vector APIs (not covered here), foreign function calls, and memory management. Brice focused on the Foreign Function & Memory API (JEP-419), available in incubation in JDK 18. Unlike JNI, Panama removes build-time complexity and delivers near-native performance on every platform. It introduces a robust security model that restricts dangerous operations, and future Java releases may restrict JNI itself (Java 25, for example, could require a flag to enable JNI). Brice highlighted the use of method handles and dynamic invocation instructions, inspired by JVM bytecode advances, to generate assembly instructions for native calls efficiently.

Hands-On Demonstrations with Panama

Brice demonstrated Panama’s capabilities through live coding, starting with a simple example calling the getpid function from the C standard library. Using the system linker, he performed a symbol lookup to locate getpid, created a method handle with a function descriptor defining the signature (returning a Java long), and invoked it to retrieve the process ID. This bypassed JNI’s heavy machinery, requiring only a few lines of Java code. Brice stressed enabling native access with the --enable-native-access flag in JDK 18, reinforcing Panama’s security model by limiting access to specific modules.
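
For reference, here is a hedged reconstruction of that first demo against the finalized java.lang.foreign API (JDK 22); the JDK 18 incubator names from JEP-419, such as CLinker, differ slightly, and getpid (a C pid_t) is mapped to a Java int here:

import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandle;

public class GetPid {
    public static void main(String[] args) throws Throwable {
        Linker linker = Linker.nativeLinker();
        // Symbol lookup in the C standard library, then a downcall handle
        MethodHandle getpid = linker.downcallHandle(
                linker.defaultLookup().find("getpid").orElseThrow(),
                FunctionDescriptor.of(ValueLayout.JAVA_INT)); // pid_t -> int
        int pid = (int) getpid.invokeExact();
        System.out.println("pid = " + pid);
    }
}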

He then walked through a more complex example with the crypto_box function from the Libsodium cryptographic library, which is portable to platforms such as Android. Brice allocated memory segments with a ResourceScope and a native allocator, guaranteeing memory safety by automatically freeing resources after use, unlike JNI, which depends on the garbage collector. The ResourceScope prevents memory leaks, a significant improvement over traditional native buffers. Brice also covered calling Swift code through C-compatible interfaces, demonstrating Panama’s versatility.
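
In the finalized API, ResourceScope evolved into Arena; a minimal sketch of deterministic native allocation under that assumption (sizes and values are illustrative):

import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

public class ScopedAlloc {
    public static void main(String[] args) {
        // try-with-resources frees the native memory deterministically,
        // without waiting for the garbage collector
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment buf = arena.allocate(32); // 32 bytes of native memory
            buf.set(ValueLayout.JAVA_BYTE, 0, (byte) 0x2A);
            System.out.println(buf.get(ValueLayout.JAVA_BYTE, 0));
        } // the segment is no longer accessible here
    }
}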

Tooling and Future Potential

Brice introduced jextract, a Panama tool that generates Java mappings from C/C++ headers, simplifying the integration of libraries such as Blake3, a high-performance hash function written in Rust. In a demo, he showed how jextract created Panama-compatible bindings for Blake3’s data structures and functions, letting Java developers tap into native performance without hand-written bindings. Despite a few hiccups, the demo underscored Panama’s potential for transparent native integration.

Brice concluded by highlighting Panama’s advantages: simplicity, speed, cross-platform compatibility, and stronger memory safety. He noted its ongoing evolution, with JEP-419 incubating in JDK 18 and a second preview planned for JDK 19. For developers of desktop applications or critical systems, Panama offers a powerful way to exploit OS-specific functions or optimized libraries such as Libsodium. Brice encouraged the audience to experiment with Panama and to ask questions, and he remains reachable on Twitter.

[DevoxxFR 2021] IoT Open Source at Home

At Devoxx France 2021, François Mockers, an IoT enthusiast, delivered a 32-minute talk titled IoT open source à la maison (YouTube). This session shared his decade-long journey managing over 300 open-source IoT devices at home, likening home automation to production IT challenges. From connected light bulbs to zoned heating and sunlight-responsive shutters, Mockers explored protocols (ZigBee, Z-Wave, 433MHz, Wi-Fi) and tools (Home Assistant, ESPHome, Node-RED, Ansible, InfluxDB, Grafana). Aligned with Devoxx’s IoT and cloud themes, the talk offered practical insights for developers building cost-effective, secure home automation systems.

IoT: A Growing Home Ecosystem

Mockers began by highlighting the ubiquity of IoT devices, asking the audience how many owned connected devices (00:00:30–00:00:45). Most had over five, some over 50, and Mockers himself managed ~300, from Philips Hue bulbs to custom-built sensors (00:00:45–00:01:00). He started with commercial devices a decade ago but shifted to DIY solutions five years ago for cost savings and flexibility (00:00:15–00:00:30). His setup mirrors production environments, with “unhappy users” (family), legacy systems, and protocol sprawl, making it a relatable challenge for developers.

IoT Protocols: A Diverse Landscape

Mockers provided a technical overview of IoT protocols, each with unique strengths and challenges (00:01:00–00:08:15):

  • ZigBee: Used by Philips Hue and IKEA, ZigBee supports lights, switches, plugs, motion sensors, and shutters in a mesh network for extended range. Devices like battery-powered switches consume minimal power, while plugged-in bulbs act as repeaters. Security issues, like a past Philips Hue hack allowing remote on/off control, highlight risks (00:01:15–00:02:15).
  • Z-Wave: Similar to ZigBee but less common, used by Fibaro and Aeotec. It supports up to 232 devices (vs. ZigBee’s 65,000) with similar mesh functionality (00:02:15–00:02:45).
  • 433.92 MHz: A frequency band hosting protocols like Oregon Scientific (sensors), Somfy (shutters), and Chacon/DIO (switches). These are cheap (~€10 vs. €50 for ZigBee/Z-Wave) but insecure, allowing neighbors’ devices to be controlled with a powerful transceiver. Car keys and security boxes also use this band, complicating urban use (00:02:45–00:04:00).
  • Wi-Fi: Popular for startups like Netatmo (weather, security), LIFX (bulbs), and Tuya (garden devices). Wi-Fi devices are plug-and-play but power-hungry and reliant on external cloud APIs, posing risks if internet or vendor services fail. Security is a concern, as hacked Wi-Fi devices fueled major botnets (00:04:15–00:06:00).
  • Bluetooth: Used for lights, speakers, and beacons, Bluetooth offers localization but requires phone proximity, limiting automation (00:06:00–00:06:30).
  • Powerline (CPL) and Fil Pilote: Protocols like X10 and fil pilote (for electric radiators) use electrical wiring but depend on home wiring quality. Infrared signals control AV equipment and air conditioners but require line-of-sight and lack status feedback (00:06:45–00:08:00).
  • LoRaWAN/Sigfox: Long-range protocols for smart cities, not home use (00:08:00–00:08:15).

Open-Source Tools for Home Automation

Mockers detailed his open-source toolchain, emphasizing flexibility and integration (00:08:15–00:20:45):

Home Assistant

Home Assistant, with 1,853 integrations, is Mockers’ central hub, supporting Alexa, Google Assistant, and Siri. It offers mobile apps, automation, and dashboards but becomes unwieldy with many devices, so Mockers disabled its database and UI, using it solely for device discovery (00:08:30–00:09:45). Alternative hubs such as OpenHAB (2,526 integrations) and Domoticz (500 integrations) cover different device ecosystems.

ESPHome

ESPHome deploys ESP8266/ESP32 chips for custom sensors, connecting via Wi-Fi or Bluetooth. Mockers builds temperature, humidity, and light sensors for ~€10 (vs. €50 commercial equivalents). Configuration via YAML files integrates sensors directly into Home Assistant (00:10:00–00:11:45). Example:

# Minimal ESPHome configuration; Wi-Fi credentials and the I2C bus setup
# are omitted here for brevity
esphome:
  name: sensor_t1_mini
  platform: ESP8266
api:  # native API that Home Assistant uses for discovery
  services:
    - service: update
      then:
        - logger.log: "Updating firmware"
output:
  - platform: gpio
    pin: GPIO4  # on-board LED
    id: led
sensor:
  - platform: bme280  # temperature/pressure/humidity sensor on the I2C bus
    temperature:
      name: "Temperature"
    pressure:
      name: "Pressure"
    humidity:
      name: "Humidity"

Node-RED

Node-RED, with 3,485 integrations, handles automation via low-code event-driven flows. Mockers routes all Home Assistant events to Node-RED, creating rules like bridging 433MHz remotes to ZigBee bulbs. Its responsive dashboard outperforms Home Assistant’s (00:12:00–00:14:00).

InfluxDB and Grafana

InfluxDB stores time-series data from devices, replacing Home Assistant’s PostgreSQL. Mockers experimented with machine learning for anomaly detection and room occupancy prediction, though the latter was unpopular with his family (00:14:15–00:15:15). Grafana visualizes historical data, like weekly temperature trends, with polished dashboards (00:15:15–00:15:45).

Telegraf

Telegraf runs scripts for devices lacking Home Assistant integration, sending data to InfluxDB. It also monitors network and CPU usage.

Ansible and Pi-hole

Ansible automates Docker container deployment on Raspberry Pis, with roles for each service and a web page listing services. Pi-hole, a DNS-based ad blocker, caches queries and logs IoT device DNS requests, exposing suspicious activity.

Security and Deployment

Security is critical with IoT’s attack surface. Mockers recommends:

  • A separate Wi-Fi network for IoT devices to isolate them from PCs.
  • Limiting internet access for devices supporting local mode.
  • A VPN for remote access, avoiding open ports.
  • Factory-resetting devices before disposal to erase Wi-Fi credentials.

Deployment uses Docker containers on Raspberry Pis, managed by Ansible. Mockers avoids Kubernetes due to Raspberry Pi constraints, opting for custom scripts. Hardware includes Raspberry Pis, 433MHz transceivers, and Wemos ESP8266 boards with shields for sensors (00:19:45–00:20:45).

Audience Interaction and Lessons

Mockers engaged the audience with questions (00:00:30) and a Q&A, addressing:

  • Usability for family (transparent for his wife, usable by his six-year-old)
  • Home Assistant backups via Ansible and hourly NAS snapshots
  • Insecure 433MHz devices (cheap but risky)
  • Air conditioning control via infrared and fil pilote for radiators
  • A universal remote consolidating five protocols, reducing complexity
  • A humorous “divorce threat” from a beeping device, emphasizing user experience

Conclusion

Mockers’ talk showcased IoT as an accessible, developer-friendly domain using open-source tools. His setup, blending ZigBee, Wi-Fi, and DIY sensors with Home Assistant, Node-RED, and Grafana, offers a scalable, cost-effective model. Security and automation align with Devoxx’s cloud and IoT focus, inspiring developers to experiment safely. The key takeaway: quality data and user experience are critical for home automation success.


[DevoxxFR 2021] Maximizing Productivity with Programmable Ergonomic Keyboards: Insights from Alexandre Navarro

In an enlightening session at Devoxx France 2021, Alexandre Navarro, a seasoned Java backend developer, captivated the audience with a deep dive into the world of programmable ergonomic keyboards. His presentation, titled “Maximizing Your Productivity with a Programmable Ergonomic Keyboard,” unveils the historical evolution of keyboards, the principles of ergonomic design, and practical strategies for customizing keyboards to enhance coding efficiency. Alexandre’s expertise, honed over eleven years of typing in the Bépo layout and eight years on a TextBlade, offers developers a compelling case for rethinking their primary input device. This post explores the key themes of his talk, providing actionable insights for programmers seeking to optimize their workflow.

A Journey Through Keyboard History

Alexandre begins by tracing the lineage of keyboards, a journey that illuminates why our modern layouts exist. In the 1870s, early typewriters resembled pianos with alphabetical key arrangements, mere prototypes of today’s devices. By 1874, the Sholes and Glidden typewriter introduced a layout resembling QWERTY, a design often misunderstood as a deliberate attempt to slow typists to prevent jamming. Alexandre debunks this myth, explaining that QWERTY was shaped by practical needs, such as placing frequent English digraphs like “TH” and “ER” for efficient typing. The addition of a number row and user feedback further refined the layout, with quirks like the absence of dedicated “0” and “1” keys—substituted by “O” and “I”—reflecting telegraphy influences.

This historical context sets the stage for understanding why QWERTY persists despite its limitations. Alexandre notes that modern keyboards, like the iconic IBM model, retain QWERTY’s staggered rows and non-aligned letters, a legacy of mechanical constraints irrelevant to today’s technology. His narrative underscores a critical point: many developers use keyboards designed for a bygone era, prompting a reevaluation of tools that dominate their daily work.

Defining Ergonomic Keyboards

Transitioning to ergonomics, Alexandre outlines the hallmarks of a keyboard designed for comfort and speed. He categorizes ergonomic features into three domains: physical key arrangement, letter layout, and key customization. Physically, an ergonomic keyboard should be orthogonal (straight rows, unlike QWERTY’s stagger), symmetrical to match human hand anatomy, flat to reduce tendon strain, and accessible to minimize finger travel. These principles challenge conventional designs, where number pads skew symmetry and elevated keys stress wrists.

Alexandre highlights two exemplary models: the Keyboardio Model 01 and the ErgoDox. The Keyboardio, which he uses, boasts orthogonal, symmetrical keys and accessible layouts, while the ErgoDox offers customizable switches and curvature. These keyboards prioritize user comfort, aligning with the natural positioning of hands to reduce fatigue during long coding sessions. By contrasting these with traditional keyboards, Alexandre emphasizes that ergonomic design is not a luxury but a necessity for developers who spend hours typing.

Optimizing with Programmable Keyboards

The heart of Alexandre’s talk lies in programming keyboards to unlock productivity. Programmable keyboards, like the ErgoDox and Keyboardio, emerged around 2011, powered by microcontrollers that developers can flash with custom firmware, often using Arduino-based C code or graphical tools. This flexibility allows users to redefine key functions, creating layouts tailored to their workflows.

Alexandre introduces key programming concepts, such as layers (up to 32, akin to switching between QWERTY and number pad modes), macros (single keys triggering complex shortcuts like “Ctrl+F”), and tap/hold behaviors (e.g., a key typing “A” when tapped but acting as “Ctrl” when held). These features enable developers to streamline repetitive tasks, such as navigating code or executing IDE shortcuts, directly from their home row. Alexandre’s personal setup, using the Bépo layout optimized for French, exemplifies how customization can enhance efficiency, even for English-heavy programming tasks.

Why Embrace Ergonomic Keyboards?

Alexandre concludes by addressing the “why” behind adopting ergonomic keyboards. Beyond speed, these devices offer comfort, reducing the risk of repetitive strain injuries—a concern for developers typing extensively. He shares his experience with the Bépo layout, which, while not optimized for English, outperforms QWERTY and AZERTY due to shared frequent letters and better hand alternation. For those hesitant to switch, Alexandre suggests starting with a blank keyboard to learn touch typing, ensuring all fingers engage without glancing at keys.

His call to action resonates with developers: mastering your keyboard is as essential as mastering your IDE. By investing in an ergonomic, programmable keyboard, programmers can transform a mundane tool into a productivity powerhouse. Alexandre’s insights, grounded in years of experimentation, inspire a shift toward tools that align with modern coding demands.

[DevoxxFR 2019] Micronaut: The Ultra-Light JVM Framework of the Future

At Devoxx France 2019, Olivier Revial, a developer at Stackeo in Toulouse, presented Micronaut: The Ultra-Light JVM Framework of the Future. This session introduced Micronaut, a modern JVM framework designed for microservices and serverless applications, offering sub-second startup times and a 10MB memory footprint. Through slides and demos, Revial showcased Micronaut’s cloud-native approach and its potential to redefine JVM development.

Limitations of Existing Frameworks

Revial began by contrasting Micronaut with established frameworks like Spring Boot and Grails. While Spring Boot simplifies development with auto-configuration and standalone applications, it suffers from runtime dependency injection and reflection, leading to slow startup times (20–25 seconds) and high memory usage. As codebases grow, these issues worsen, complicating testing and deployment, especially in serverless environments where rapid startup is critical. Frameworks like Spring create a barrier between unit and integration tests, as long-running servers are often relegated to separate CI processes.

Micronaut addresses these pain points by eliminating reflection and using Ahead-of-Time (AOT) compilation, performing dependency injection and configuration at build time. This reduces startup times and memory usage, making it ideal for containerized and serverless deployments.

Micronaut’s Innovative Approach

Micronaut, created by Grails’ founder Graeme Rocher and Spring contributors, builds on the strengths of existing frameworks—dependency injection, auto-configuration, service discovery, and HTTP client/server simplicity—while introducing innovations. It supports Java, Kotlin, and Groovy, using annotation processors and AST transformations for AOT compilation. This eliminates runtime overhead, enabling sub-second startups and minimal memory footprints.

Micronaut is cloud-native, with built-in support for MongoDB, Kafka, JDBC, and providers like Kubernetes and AWS. It embraces reactive programming via Reactor, supports GraalVM for native compilation, and simplifies testing by allowing integration tests to run alongside unit tests. Security features, including JWT and basic authentication, and metrics for Prometheus, enhance its enterprise readiness. Despite its youth (version 1.0 released in 2018), Micronaut’s ecosystem is rapidly growing.

Demonstration

Revial’s demo showcased Micronaut’s capabilities. He used the Micronaut CLI to create a “hello world” application in Kotlin, adding a controller with REST endpoints, one returning a reactive Flowable. The application started in 1–2 seconds locally (6 seconds in the demo due to environment differences) and handled HTTP requests efficiently. A second demo featured a Twitter crawler storing tweets in MongoDB using a reactive driver. It demonstrated dependency injection, validation, scheduled tasks, and security (basic authentication with role-based access). A GraalVM-compiled version started in 20 milliseconds, with a 70MB Docker image compared to 160MB for a JVM-based image, highlighting Micronaut’s efficiency for serverless use cases.
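
As a hedged sketch of what such a controller looks like (Micronaut 1.x-era annotations with RxJava 2; names are illustrative, and the live demo itself used Kotlin):

import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.reactivex.Flowable;

@Controller("/hello")
public class HelloController {

    @Get("/")
    public String index() {
        return "Hello world"; // plain blocking endpoint
    }

    @Get("/stream")
    public Flowable<String> stream() {
        // reactive endpoint: items are emitted without blocking the event loop
        return Flowable.just("hello", "world");
    }
}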

Hashtags: #Micronaut #Microservices #DevoxxFR2019 #OlivierRevial #JVMFramework #CloudNative

Navigating the Application Lifecycle in Kubernetes

At Devoxx France 2019, Charles Sabourdin and Jean-Christophe Sirot, seasoned professionals in cloud-native technologies, delivered an extensive exploration of managing application lifecycles within Kubernetes. Charles, an architect with over 15 years in Linux and Java, and Jean-Christophe, a Docker expert since 2002, combined their expertise to demystify Docker’s underpinnings, Kubernetes’ orchestration, and the practicalities of continuous integration and delivery (CI/CD). Through demos and real-world insights, they addressed security challenges across development and business-as-usual (BAU) phases, proposing organizational strategies to streamline containerized workflows. This post captures their comprehensive session, offering a roadmap for developers and operations teams navigating Kubernetes ecosystems.

Docker’s Foundations: Isolation and Layered Efficiency

Charles opened the session by revisiting Docker’s core principles, emphasizing its reliance on Linux kernel features like namespaces and control groups (cgroups). Unlike virtual machines (VMs), which bundle entire operating systems, Docker containers share the host kernel, isolating processes within lightweight environments. This design achieves hyper-density, allowing more containers to run on a single machine compared to VMs. Charles demonstrated launching a container, highlighting its process isolation using commands like ps within a containerized bash session, contrasting it with the host’s process list. He introduced Docker’s layer system, where images are built as immutable, stacked deltas, optimizing storage through shared base layers. Tools like Dive, he noted, help inspect these layers, revealing command histories and suggesting size optimizations. This foundation sets the stage for Kubernetes, enabling efficient, portable application delivery across environments.

Kubernetes: Orchestrating Scalable Deployments

Jean-Christophe transitioned to Kubernetes, describing it as a resource orchestrator that manages containerized applications across node pools. Kubernetes abstracts infrastructure complexities, using declarative configurations to maintain desired application states. Key components include pods—the smallest deployable units housing containers—replica sets for scaling, and deployments for managing updates. Charles demonstrated creating a namespace and deploying a sample application using kubectl run, which scaffolds deployments, replica sets, and pods. He showcased rolling updates, where Kubernetes progressively replaces pods to ensure zero downtime, configurable via parameters like maxSurge and maxUnavailable. The duo emphasized Kubernetes’ auto-scaling capabilities, which adjust pod counts based on load, and the importance of defining resource limits to prevent performance bottlenecks. Their demo underscored Kubernetes’ role in achieving resilient, scalable deployments, aligning with hyper-density goals.
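
As an illustrative manifest (names and numbers are assumptions, not taken from the demo), a Deployment combining the rolling-update and resource-limit settings discussed above might look like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # at most one extra pod during the rollout
      maxUnavailable: 0     # never drop below the desired replica count
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - name: app
          image: registry.example.com/sample-app:1.11.2
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:             # explicit limits prevent noisy-neighbor bottlenecks
              memory: "512Mi"
              cpu: "500m"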

CI/CD Pipelines: Propagating Versions Seamlessly

The session delved into CI/CD pipelines, illustrating how Docker tags facilitate version propagation across development, pre-production, and production environments. Charles outlined a standard process: developers build Docker images tagged with version numbers (e.g., 1.11.2) or environment labels (e.g., prod, staging). These images, stored in registries like Docker Hub or private repositories, are pulled by Kubernetes clusters for deployment. Jean-Christophe highlighted debates around tagging strategies, noting that version-based tags ensure traceability, while environment tags simplify environment-specific deployments. Their demo integrated tools like Jenkins and JFrog Artifactory, automating builds, tests, and deployments. They stressed the need for robust pipeline configurations to avoid resource overuse, citing Jenkins’ default manual build triggers for tagged releases as a safeguard. This pipeline approach ensures consistent, automated delivery, bridging development and production.

Security Across the Lifecycle: Development vs. BAU

Security emerged as a central theme, with Charles contrasting development and BAU phases. During development, teams rapidly address Common Vulnerabilities and Exposures (CVEs) with frequent releases, leveraging tools like JFrog Xray and Clair to scan images for vulnerabilities. Xray integrates with Artifactory, while Clair, an open-source solution, scans registry images for known CVEs. However, in BAU, where releases are less frequent, unpatched vulnerabilities pose greater risks. Charles shared an anecdote about a PHP project where a dependency switch broke builds after two years, underscoring the need for ongoing maintenance. They advocated for practices like running containers in read-only mode and using non-root users to minimize attack surfaces. Tools like OWASP Dependency-Track, they suggested, could enhance visibility into library vulnerabilities, though current scanners often miss non-package dependencies. This dichotomy highlights the need for automated, proactive security measures throughout the lifecycle.

Organizational Strategies: Balancing Complexity and Responsibility

Drawing from their experiences, Charles and Jean-Christophe proposed organizational solutions to manage Kubernetes complexity. They introduced a “1-2-3 model” for image management: Level 1 uses vendor-provided images (e.g., official MySQL images) managed by operations; Level 2 involves base images built by dedicated teams, incorporating standardized tooling; and Level 3 allows project-specific images, with teams assuming maintenance responsibilities. This model clarifies ownership, reducing risks like disappearing maintainers when projects transition to BAU. They emphasized cross-team collaboration, encouraging developers and operations to share knowledge and align on practices like Dockerfile authorship and resource allocation in YAML configurations. Charles reflected on historical DevOps silos, advocating for shared vocabularies and traceable decisions to navigate evolving best practices. Their return-of-experience underscored the importance of balancing automation with human oversight to maintain robust, secure Kubernetes environments.

Hashtags: #Kubernetes #Docker #DevOps #CICD #Security #DevoxxFR #CharlesSabourdin #JeanChristopheSirot #JFrog #Clair

[DevoxxFR 2019] Back to Basics: Stop Wasting Time with Dates

At Devoxx France 2019, Frédéric Camblor, a web developer at 4SH in Bordeaux, delivered an insightful session on mastering date and time handling in software applications. Drawing from years of noting real-world issues in a notebook, Frédéric aimed to equip developers with the right questions to ask when working with dates, ensuring they avoid common pitfalls like time zone mismatches, daylight saving time (DST) quirks, and leap seconds.

Understanding Time Fundamentals

Frédéric began by exploring the historical context of time measurement, contrasting ancient solar-based “true time” with modern standardized systems. He introduced Greenwich Mean Time (GMT), now deprecated in favor of Coordinated Universal Time (UTC), which is based on International Atomic Time. UTC, defined by the highly regular oscillations of cesium-133 atoms (9,192,631,770 per second), is geopolitically agnostic, free from DST or seasonal shifts, with its epoch set at January 1, 1970, 00:00:00 Greenwich time.

The distinction between GMT and UTC lies in the irregularity of Earth’s rotation, affected by tidal forces and earthquakes. To align astronomical time (UT1) with atomic time, leap seconds are introduced every six months by the International Earth Rotation and Reference Systems Service (IERS). In Java, these leap seconds are smoothed over the last 1,000 seconds of June or December, making them transparent to developers. Frédéric emphasized the role of the Network Time Protocol (NTP), which synchronizes computer clocks to atomic time via a global network of root nodes, ensuring sub-second accuracy despite local quartz oscillator drift.

Time Representations in Software

Frédéric outlined three key time representations developers encounter: timestamps, ISO 8601 datetimes, and local dates/times. Timestamps, the simplest, count seconds or milliseconds since the 1970 epoch but face limitations, such as the 2038 overflow issue on 32-bit systems (though mitigated in Java). ISO 8601 datetimes (e.g., 2019-04-18T12:00:00+01:00) offer human-readable precision with time zone offsets, enhancing clarity over raw timestamps. Local dates/times, however, are complex, often lacking explicit time zone or DST context, leading to ambiguities in scenarios like recurring meetings.

Each representation has trade-offs. Timestamps are precise but opaque, ISO 8601 is readable but requires parsing, and local times carry implicit assumptions that can cause bugs if not clarified. Frédéric urged developers to choose representations thoughtfully based on application needs.

Time Zones and Daylight Saving Time

Time zones, defined by the IANA database, are geopolitical regions with uniform time rules, distinct from time zone offsets (e.g., UTC+1). Frédéric clarified that a time zone like Europe/Paris can yield different offsets (UTC+1 or UTC+2) depending on DST, which requires a time zone table to resolve. These tables, updated frequently (e.g., nine releases in 2018), reflect geopolitical changes, such as Russia’s abrupt time zone shifts or the EU’s 2018 consultation to abolish DST by 2023. Frédéric highlighted the importance of updating time zone data in systems like Java (via JRE updates or TZUpdater), MySQL, or Node.js to avoid outdated rules.

DST introduces further complexity, creating “local time gaps” during spring transitions (e.g., 2:00–3:00 AM doesn’t exist) and overlaps in fall (e.g., 2:00–3:00 AM occurs twice). Libraries handle these differently: Moment.js adjusts invalid times, while Java throws exceptions. Frédéric warned against scheduling tasks like CRON jobs at local times prone to DST shifts (e.g., 2:30 AM), recommending UTC-based scheduling to avoid missed or duplicated executions.
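
A short java.time illustration of the spring gap, using the 2019 Europe/Paris transition during which 2:30 AM did not exist:

import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class DstGap {
    public static void main(String[] args) {
        ZoneId paris = ZoneId.of("Europe/Paris");
        // 2:30 AM on 31 March 2019 falls inside the spring-forward gap
        LocalDateTime inGap = LocalDateTime.of(2019, 3, 31, 2, 30);
        ZonedDateTime resolved = inGap.atZone(paris);
        // java.time resolves the gap by shifting forward:
        // prints 2019-03-31T03:30+02:00[Europe/Paris]
        System.out.println(resolved);
    }
}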

Common Pitfalls and Misconceptions

Frédéric debunked several myths, such as “a day is always 24 hours” or “comparing dates is simple.” DST can result in 23- or 25-hour days, and leap years (every four years, except centurial years not divisible by 400) add complexity. For instance, 2000 was a leap year, but 2100 won’t be. Comparing dates requires distinguishing between equality (same moment) and identity (same time zone), as Java’s equals() and isEqual() methods behave differently.
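
A brief java.time sketch of that equality-versus-identity distinction (values are illustrative):

import java.time.OffsetDateTime;
import java.time.ZoneOffset;

public class DateCompare {
    public static void main(String[] args) {
        OffsetDateTime paris = OffsetDateTime.parse("2019-04-18T12:00:00+02:00");
        OffsetDateTime utc = paris.withOffsetSameInstant(ZoneOffset.UTC);

        System.out.println(paris.equals(utc));  // false: same instant, different offset
        System.out.println(paris.isEqual(utc)); // true: same point on the timeline
    }
}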

JavaScript’s Date object was singled out for its flaws, including inconsistent parsing (dashes vs. slashes shift time zones), zero-based months, and unreliable handling of pre-1970 dates. Frédéric recommended using libraries like Moment.js, Moment-timezone, or Luxon to mitigate these issues. He also highlighted edge cases, such as the non-existent December 30, 2011, in Samoa due to a time zone shift, which can break calendar applications.

Best Practices for Robust Date Handling

Frédéric shared practical strategies drawn from real-world experience. Servers and databases should operate in UTC to avoid DST issues and expose conversion bugs when client and server time zones differ. For searches involving local dates (e.g., retrieving messages by date), he advocated defining a date range (e.g., 00:00–23:59 in the user’s time zone) rather than a single date to account for implicit time zone assumptions. Storing future dates requires capturing the user’s time zone to handle potential rule changes.

For time-only patterns (e.g., recurring 3:00 PM meetings), storing the user’s time zone is critical to resolve DST ambiguities. Frédéric advised against storing times in datetime fields (e.g., as 1970-01-01T15:00:00), recommending string storage with time zone metadata. For date-only patterns like birthdays, using dedicated data structures prevents inappropriate operations, and storing at 12:00 UTC minimizes time zone shift bugs. Finally, he cautioned against local datetimes without time zones, as they cannot be reliably placed on a timeline.

Frédéric concluded by urging developers to question assumptions, update time zone data, and use appropriate time scales. His engaging talk, blending humor, history, and hard-earned lessons, left attendees ready to tackle date and time challenges with confidence.

Hashtags: #DateTime #TimeZones #DST #ISO8601 #UTC #DevoxxFR2019 #FrédéricCamblor #4SH #Java #JavaScript

Gradle: A Love-Hate Journey at Margot Bank

At Devoxx France 2019, David Wursteisen and Jérémy Martinez, developers at Margot Bank, delivered a candid talk on their experience with Gradle while building a core banking system from scratch. Their 45-minute session, “Gradle, je t’aime: moi non plus,” explored why they chose Gradle over alternatives, its developer-friendly features, script maintenance strategies, and persistent challenges like memory consumption. This post dives into their insights, offering a comprehensive guide for developers navigating build tools in complex projects.

Choosing Gradle for a Modern Banking System

Margot Bank, a startup redefining corporate banking, embarked on an ambitious project in 2017 to rebuild its IT infrastructure, including a core banking system (CBS) with Kotlin and Java modules. The CBS comprised applications for payments, data management, and a central “core” module, all orchestrated with microservices. Selecting a build tool was critical, given the need for speed, flexibility, and scalability. The team evaluated Maven, SBT, Bazel, and Gradle. Maven, widely used in Java ecosystems, lacked frequent updates, risking obsolescence. SBT’s Scala-based DSL added complexity, unsuitable for a Kotlin-focused stack. Bazel, while powerful for monorepos, didn’t support generic languages well. Gradle emerged as the winner, thanks to its task-based architecture, where tasks like compile, jar, and assemble form a dependency graph, executing only modified components. This incremental build system saved time, crucial for Margot’s rapid iterations. Frequent releases (e.g., Gradle 5.1.1 in 2019) and a dynamic Groovy DSL further cemented its appeal, aligning with Devoxx’s emphasis on modern build tools.

Streamlining Development with Gradle’s Features

Gradle’s developer experience shone at Margot Bank, particularly with IntelliJ IDEA integration. The IDE auto-detected source sets (e.g., main, test, integrationTest) and tasks, enabling seamless task execution. Eclipse support, though less polished, handled basic imports. The Gradle Wrapper, a binary committed to repositories, automated setup by downloading the specified Gradle version (e.g., 5.1.1) from a custom URL, secured with checksums. This ensured consistency across developer machines, a boon for onboarding. Dependency management leveraged dynamic configurations like api and implementation. For example, marking a third-party client like AmazingMail as implementation in a web app module hid its classes from transitive dependencies, reducing coupling. Composite builds, introduced in recent Gradle versions, allowed local projects (e.g., a mailer module) to be linked without publishing to Maven Local, streamlining multi-project workflows. A notable pain point was disk usage: open-source projects’ varying Gradle versions accumulated 4GB on developers’ machines, as IntelliJ redundantly downloaded sources alongside binaries. Addressing an audience question, the team emphasized selective caching (e.g., wrapper binaries) to mitigate overhead, highlighting Gradle’s balance of power and complexity.

Enhancing Builds with Plugins and Kotlin DSL

For script maintainers, standardizing configurations across Margot’s projects was paramount. The team developed an internal Gradle plugin to centralize settings for linting (e.g., Ktlint), Nexus repositories, and releases. Applied via apply plugin: 'com.margotbank.standard', it ensured uniformity, reducing configuration drift. For project-specific logic, buildSrc proved revolutionary. This module housed Kotlin code for tasks like version management, keeping build.gradle files declarative. For instance, a Versions.kt object centralized dependency versions (e.g., junit:5.3.1), with unused ones grayed out in IntelliJ for cleanup. Migrating from Groovy to Kotlin DSL brought static typing benefits: autocompletion, refactoring, and navigation. A sourceSet.create("integrationTest") call, though verbose, clarified intent compared to Groovy’s dynamic integrationTest {}. Migration was iterative, file by file, avoiding disruptions. Challenges included verbose syntax for plugins like JaCoCo, requiring explicit casts. A buildSrc extension for commit message parsing (e.g., extracting Git SHAs) exemplified declarative simplicity. This approach, inspired by Devoxx’s focus on maintainable scripts, empowered developers to contribute to shared tooling, fostering collaboration across teams.
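
A hedged sketch of the buildSrc pattern described above (file contents are illustrative, not Margot Bank’s actual code):

// buildSrc/src/main/kotlin/Versions.kt -- one place for every dependency version
object Versions {
    const val junit = "5.3.1"
}

// build.gradle.kts -- statically typed, so Versions autocompletes and refactors
dependencies {
    testImplementation("org.junit.jupiter:junit-jupiter-api:${Versions.junit}")
}

// Explicit, if more verbose than Groovy's dynamic `integrationTest {}` block
sourceSets.create("integrationTest")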

Taming Memory Consumption and CI Performance

Gradle’s performance, driven by daemons that keep processes in memory, was a double-edged sword. Daemons reduced startup time, but multiple instances (e.g., 5.1.1 and 5.0.10) occasionally ran concurrently, consuming excessive RAM. On CI servers, Gradle crashed under heavy loads, prompting tweaks: disabling daemons, adjusting Docker memory, and upgrading to Gradle 4.4.5 for better memory optimization. Diagnostics remained elusive, as crashes stemmed from either Gradle or the Kotlin compiler. Configuration tweaks like enabling caching (org.gradle.caching=true) and parallel task execution (org.gradle.parallel=true) improved build times, but required careful tuning. The team allocated maximum heap space (-Xmx4g) upfront to handle large builds, reflecting Margot’s resource-intensive CI pipeline. An audience question on caching underscored selective imports (e.g., excluding redundant sources) to optimize costs. Looking ahead, Margot planned to leverage build caching for granular task reuse and explore tools like Build Queue for cleaner pipelines. Despite frustrations, Gradle’s flexibility and evolving features—showcased at Devoxx—made it indispensable, though memory management demanded ongoing vigilance.
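
The configuration tweaks mentioned above live in gradle.properties; an illustrative example (note that comments must sit on their own lines in properties files):

# gradle.properties (illustrative)
# Reuse task outputs across builds
org.gradle.caching=true
# Run independent subproject tasks concurrently
org.gradle.parallel=true
# Disable the daemon on CI to cap memory use
org.gradle.daemon=false
# Allocate maximum heap up front
org.gradle.jvmargs=-Xmx4g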


Hashtags: #Gradle #KotlinDSL #BuildTools #DavidWursteisen #JeremyMartinez #DevoxxFrance2019

[DevoxxFR 2018] Java in Docker: Best Practices for Production

The practice of running Java applications within Docker containers has become widely adopted in modern software deployment, yet it is not devoid of potential challenges, particularly when transitioning to production environments. Charles Sabourdin, a freelance architect, and Jean-Christophe Sirot, an engineer at Docker, collaborated at DevoxxFR 2018 to share their valuable experiences and disseminate best practices for optimizing Java applications inside Docker containers. Their insightful talk directly addressed common and often frustrating issues, such as containers crashing unexpectedly, applications consuming excessive RAM leading to node instability, and encountering CPU throttling. They offered practical solutions and configurations aimed at ensuring smoother and more reliable production deployments for Java workloads.

The presenters initiated their session with a touch of humor, explaining why operations teams might exhibit a degree of apprehension when tasked with deploying a containerized Java application into a production setting. It’s a common scenario: containers that perform flawlessly on a developer’s local machine can begin to behave erratically or fail outright in production. This discrepancy often stems from a fundamental misunderstanding of how the Java Virtual Machine (JVM) interacts with the resource limits imposed by the container’s control groups (cgroups). Several key problems frequently surface in this context. Perhaps the most common is memory mismanagement; the JVM, particularly older versions, might not be inherently aware of the memory limits defined for its container by the cgroup. This lack of awareness can lead the JVM to attempt to allocate and use more memory than has been allocated to the container by the orchestrator or runtime. Such overconsumption inevitably results in the container being abruptly terminated by the operating system’s Out-Of-Memory (OOM) killer, a situation that can be difficult to diagnose without understanding this interaction.

Similarly, CPU resource allocation can present challenges. The JVM might not accurately perceive the CPU resources available to it within the container, such as CPU shares or quotas defined by cgroups. This can lead to suboptimal decisions in sizing internal thread pools (like the common ForkJoinPool or garbage collection threads) or can cause the application to experience unexpected CPU throttling, impacting performance. Another frequent issue is Docker image bloat. Overly large Docker images not only increase deployment times across the infrastructure but also expand the potential attack surface by including unnecessary libraries or tools, thereby posing security vulnerabilities. The talk aimed to equip developers and operations personnel with the knowledge to anticipate and mitigate these common pitfalls. During the presentation, a demonstration application, humorously named “ressources-munger,” was used to simulate these problems, clearly showing how an application could consume excessive memory leading to an OOM kill by Docker, or how it might trigger excessive swapping if not configured correctly, severely degrading performance.

JVM Memory Management and CPU Considerations within Containers

A significant portion of the discussion was dedicated to the intricacies of JVM memory management within the containerized environment. Charles and Jean-Christophe elaborated that older JVM versions, specifically those prior to Java 8 update 131 and Java 9, were not inherently “cgroup-aware”. This lack of awareness meant that the JVM’s default heap sizing heuristics—for example, typically allocating up to one-quarter of the physical host’s memory for the heap—would be based on the total resources of the host machine rather than the specific limits imposed on the container by its cgroup. This behavior is a primary contributor to unexpected OOM kills when the container’s actual memory limit is much lower than what the JVM assumes based on the host.

Several best practices were shared to address these memory-related issues effectively. The foremost recommendation is to use cgroup-aware JVM versions. Modern Java releases, particularly Java 8 update 191 and later, and Java 10 and newer, incorporate significantly improved cgroup awareness. For older Java 8 updates (specifically 8u131 to 8u190), experimental flags such as -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap can be employed to enable the JVM to better respect container memory limits. In Java 10 and subsequent versions, this behavior became standard and often requires no special flags. However, even with cgroup-aware JVMs, explicitly setting the heap size using parameters like -Xms for the initial heap size and -Xmx for the maximum heap size is frequently a recommended practice for predictability and control. Newer JVMs also offer options like -XX:MaxRAMPercentage, allowing for more dynamic heap sizing relative to the container’s allocated memory. It’s crucial to understand that the JVM’s total memory footprint extends beyond just the heap; it also requires memory for metaspace (which replaced PermGen in Java 8+), thread stacks, native libraries, and direct memory buffers. Therefore, when allocating memory to a container, it is essential to account for this total footprint, not merely the -Xmx value. A common guideline suggests that the Java heap might constitute around 50-75% of the total memory allocated to the container, with the remainder reserved for these other essential JVM components and any other processes running within the container. Tuning metaspace parameters, such as -XX:MetaspaceSize and -XX:MaxMetaspaceSize, can also prevent excessive native memory consumption, particularly in applications that dynamically load many classes.
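
As a hedged summary of those flags, the resulting startup commands might look like this (heap percentages and metaspace sizes are illustrative):

# Java 8u131-8u190: opt in to cgroup-aware heap sizing (experimental flags)
java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -jar app.jar

# Java 10+: cgroup-aware by default; size the heap relative to the container limit
java -Xms256m -XX:MaxRAMPercentage=75.0 \
     -XX:MetaspaceSize=64m -XX:MaxMetaspaceSize=128m -jar app.jar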

Regarding CPU resources, the presenters noted that the JVM’s perception of available processors is also influenced by its cgroup awareness. In environments where CPU resources are constrained, using flags like -XX:ActiveProcessorCount can be beneficial to explicitly inform the JVM about the number of CPUs it should consider for sizing its internal thread pools, such as the common ForkJoinPool or the threads used for garbage collection. Optimizing the Docker image itself is another critical aspect of preparing Java applications for production. This involves choosing a minimal base image, such as alpine-jre, distroless, or official “slim” JRE images, instead of a full operating system distribution, to reduce the image size and potential attack surface. Utilizing multi-stage builds in the Dockerfile is a highly recommended technique; this allows developers to use a larger image containing build tools like Maven or Gradle and a full JDK in an initial stage, and then copy only the necessary application artifacts (like the JAR file) and a minimal JRE into a final, much smaller runtime image. Furthermore, being mindful of Docker image layering by combining related commands in the Dockerfile where possible can help reduce the number of layers and optimize image size. For applications on Java 9 and later, tools like jlink can be used to create custom, minimal JVM runtimes that include only the Java modules specifically required by the application, further reducing the image footprint. The session strongly emphasized that a collaborative approach between development and operations teams, combined with a thorough understanding of both JVM internals and Docker containerization principles, is paramount for successfully and reliably running Java applications in production environments.
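
A multi-stage Dockerfile sketch combining these recommendations (base images, paths, and flags are illustrative assumptions, not the presenters’ exact setup):

# Build stage: full JDK and Maven, discarded after the build
FROM maven:3.8-eclipse-temurin-17 AS build
WORKDIR /src
COPY . .
RUN mvn -q package

# Runtime stage: minimal JRE, non-root user, explicit memory settings
FROM eclipse-temurin:17-jre-alpine
RUN adduser -D appuser
USER appuser
COPY --from=build /src/target/app.jar /app/app.jar
# -XX:ActiveProcessorCount could be added here for CPU-constrained containers
ENTRYPOINT ["java", "-XX:MaxRAMPercentage=75.0", "-jar", "/app/app.jar"]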


Hashtags: #Java #Docker #JVM #Containerization #DevOps #Performance #MemoryManagement #DevoxxFR2018 #CharlesSabourdin #JeanChristopheSirot #BestPractices #ProductionReady #CloudNative