Secure Development with Docker: DockerCon 2023 Workshop
The DockerCon 2023 workshop, “Secure Development with Docker,” delivered by Yves Brissaud, James Carnegie, David Dooling, and Christian Dupuis from Docker, offered a comprehensive exploration of securing the software supply chain. Spanning over three hours, this session addressed the tension between developers’ need for speed and security teams’ focus on risk mitigation. Participants engaged in hands-on labs to identify and remediate common vulnerabilities, leverage Docker Scout for actionable insights, and implement provenance, software bills of materials (SBOMs), and policies. The workshop emphasized Docker’s developer-centric approach to security, empowering attendees to enhance their workflows without compromising safety. By integrating Docker Scout, attendees learned to secure every stage of the software development lifecycle, from code to deployment.
Tackling Common Vulnerabilities and Exposures (CVEs)
The workshop began with a focus on Common Vulnerabilities and Exposures (CVEs), a critical starting point for securing software. David Dooling introduced CVEs as publicly disclosed cybersecurity vulnerabilities in operating systems, dependencies like OpenSSL, or container images. Participants used Docker Desktop 4.24 and the Docker Scout CLI to scan images based on Alpine 3.14, identifying vulnerabilities in base images and added layers, such as npm packages (e.g., Express and its transitive dependency Qs). Hands-on exercises guided attendees to update base images to Alpine 3.18, using Docker Scout’s recommendations to select versions with fewer vulnerabilities. The CLI’s cves command and Desktop’s vulnerability view provided detailed insights, including severity filters and package details, enabling developers to remediate issues efficiently. This segment underscored that while scanning is essential, it is only one part of a broader security strategy, setting the stage for a holistic approach.
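A typical CLI session from this segment might look like the following sketch (the image name is hypothetical, and exact flags depend on your Docker Scout CLI version):

```shell
# Quick overview of an image's vulnerability posture (image name is illustrative)
docker scout quickview myorg/frontend:v1

# List CVEs, filtered down to critical severity and npm packages only
docker scout cves --only-severity critical --only-package-type npm myorg/frontend:v1

# Ask Scout which base-image updates would reduce the vulnerability count
docker scout recommendations myorg/frontend:v1
```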
Understanding Software Supply Chain Security
The second segment, led by Dooling, introduced the software supply chain as a framework encompassing source code, dependencies, build processes, and deployment. Drawing an analogy to brewing coffee—where beans, water, and equipment have their own supply chains—the workshop highlighted risks like supply chain attacks, as outlined by CISA’s open-source security roadmap. These attacks, such as poisoning repositories, differ from CVEs by involving intentional tampering. Participants explored Docker Scout’s role as a supply chain management tool, not just a CVE scanner. Using the workshop’s GitHub repository (dc23-secure-workshop), attendees set up environment variables and Docker Compose to build images, learning how Scout tracks components across the lifecycle. This segment emphasized the need to secure every stage, from code creation to deployment, to prevent vulnerabilities and malicious injections.
Leveraging Docker Scout for Actionable Insights
Docker Scout was the cornerstone of the workshop, offering a developer-friendly interface to manage security. Yves Brissaud guided participants through hands-on labs using Docker Desktop and the Scout CLI to analyze images. Attendees explored vulnerabilities in a front-end image (using Express) and a Go-based back-end image, applying filters to focus on critical CVEs or specific package types (e.g., npm). Scout’s compare command allowed participants to assess changes between image versions, such as updating from Alpine 3.14 to 3.18, revealing added or removed packages and their impact on vulnerabilities. Desktop’s visual interface displayed recommended fixes, like updating base images or dependencies, while the CLI provided detailed outputs, including quick views for rapid assessments. This segment demonstrated Scout’s ability to integrate into CI/CD pipelines, providing early feedback to developers without disrupting workflows.
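A hedged sketch of that comparison workflow (both tags are hypothetical):

```shell
# Compare a new build against the previous tag; the report lists packages
# and CVEs added, removed, or changed between the two images
docker scout compare myorg/frontend:v2 --to myorg/frontend:v1
```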
Implementing Provenance and Software Bill of Materials (SBOM)
The third segment focused on provenance and SBOMs, critical for supply chain transparency. Provenance, aligned with the SLSA framework’s Build Level 1, documents how an image is built, including base image tags, digests, and build metadata. SBOMs list all packages and their versions, ensuring consistency across environments. Participants rebuilt images with the --provenance and --sbom flags using BuildKit, generating attestations stored in Docker Hub. Brissaud demonstrated using the imagetools command to inspect provenance and SBOMs, revealing details like build timestamps and package licenses. The workshop highlighted the importance of embedding this metadata at build time to enable reproducible builds and accurate recommendations. By integrating Scout’s custom SBOM indexer, attendees ensured consistent vulnerability reporting across Desktop, CLI, and scout.docker.com, enhancing trust in the software’s integrity.
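Assuming a buildx builder is configured and the image is pushed to a registry (names are hypothetical), the build and inspection steps could look like:

```shell
# Build with provenance and SBOM attestations via BuildKit
docker buildx build --provenance=true --sbom=true -t myorg/frontend:v2 --push .

# Inspect the attestations attached to the image in the registry
docker buildx imagetools inspect myorg/frontend:v2 --format '{{ json .Provenance }}'
docker buildx imagetools inspect myorg/frontend:v2 --format '{{ json .SBOM }}'
```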
Enforcing Developer-Centric Policies
The final segment introduced Docker Scout’s policy enforcement, designed with a developer mindset to avoid unnecessary build failures. Dooling explained Scout’s “first do no harm” philosophy, rooted in Kaizen’s continuous-improvement principles. Unlike traditional policies that block builds for pre-existing CVEs, Scout compares new builds to production images, allowing progress as long as vulnerabilities remain unchanged. Participants explored the out-of-the-box policies available in Early Access, including fixing critical and high CVEs, updating base images, and avoiding deprecated tags. Using the scout policy command, attendees evaluated images against these policies, viewing compliance status on Desktop and scout.docker.com. The workshop also previewed upcoming GitHub Action integrations for pull request policy checks, enabling developers to assess changes before merging. This approach ensures security without hindering development, aligning with Docker’s mission to empower developers.
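A minimal sketch of a policy evaluation (the organization and image names are hypothetical, and since the command was in Early Access at the time, its flags may have changed):

```shell
# Evaluate an image against the organization's configured policies
docker scout policy myorg/frontend:v2 --org myorg
```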
Links:
- DockerCon 2023 Workshop Video
- Docker Website
- Announcing Docker Scout GA: Actionable Insights for the Software Supply Chain
- Docker Scout: Securing The Complete Software Supply Chain (DockerCon 2023)
- What’s in My Container? Docker Scout CLI and CI to the Rescue (DockerCon 2023)
Hashtags: #DockerCon2023 #SoftwareSupplyChain #DockerScout #SecureDevelopment #CVEs #Provenance #SBOM #Policy #YvesBrissaud #JamesCarnegie #DavidDooling #ChristianDupuis
Decoding Shazam: Unraveling Music Recognition Technology
This post delves into Moustapha AGACK’s Devoxx FR 2023 presentation, “Jay-Z, Maths and Signals! How to clone Shazam 🎧,” exploring the technology behind the popular song identification application, Shazam. AGACK shares his journey to understand and replicate Shazam’s functionality, explaining the core concepts of sound, signals, and frequency analysis.
Understanding Shazam’s Core Functionality
Moustapha AGACK begins by captivating the audience with a demonstration of Shazam’s seemingly magical ability to identify songs from brief audio snippets, often recorded in noisy and challenging acoustic environments. He emphasizes the robustness of Shazam’s identification process, noting its ability to function even with background conversations, ambient noise, or variations in recording quality. This remarkable capability sparked Moustapha’s curiosity as a developer, prompting him to embark on a quest to investigate the inner workings of the application.
Moustapha mentions that his exploration started with the seminal paper authored by Avery Wang, a co-founder of Shazam, which meticulously details the design and implementation of the Shazam algorithm. This paper, a cornerstone of music information retrieval, provides deep insights into the signal processing techniques, data structures, and search strategies employed by Shazam. However, Moustapha humorously admits to experiencing initial difficulty in fully grasping the paper’s complex mathematical formalisms and dense signal processing jargon. He acknowledges the steep learning curve associated with the field of digital signal processing, which requires a solid foundation in mathematics, physics, and computer science. Despite the initial challenges, Moustapha emphasizes the importance of visual aids within the paper, such as insightful graphs and illustrative spectrograms, which greatly aided his conceptual understanding and provided valuable intuition.
The Physics of Sound: A Deep Dive
Moustapha explains that sound, at its most fundamental level, is a mechanical wave phenomenon. It originates from the vibration of objects, which disturbs the surrounding air molecules. These molecules collide with their neighbors, transferring the energy of the vibration and causing a chain reaction that propagates the disturbance through the air as a wave. This wave travels through the air at a finite speed (approximately 343 meters per second at room temperature) and eventually reaches our ears, where it is converted into electrical signals that our brains interpret as sound.
These sound waves are typically represented mathematically as sinusoidal signals, also known as sine waves. A sine wave is a smooth, continuous, and periodic curve that oscillates between a maximum and minimum value. Two key properties characterize these signals: frequency and amplitude.
- Frequency is defined as the number of complete cycles of the wave that occur in one second, measured in Hertz (Hz). One Hertz is equivalent to one cycle per second. Frequency is the primary determinant of the perceived pitch of the sound. High-frequency waves correspond to high-pitched sounds (treble), while low-frequency waves correspond to low-pitched sounds (bass). For example, a sound wave oscillating at 440 Hz is perceived as the musical note A above middle C. The higher the frequency, the more rapidly the air molecules are vibrating, and the higher the perceived pitch.
- Amplitude refers to the maximum displacement of the wave from its equilibrium position. It is a measure of the wave’s intensity or strength and directly correlates with the perceived volume or loudness of the sound. A large amplitude corresponds to a loud sound, meaning the air molecules are vibrating with greater force, while a small amplitude corresponds to a quiet sound, indicating gentler vibrations.
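To make frequency and amplitude concrete, here is a small Python sketch (illustrative, not from the talk) that samples a pure sine wave the way audio hardware would:

```python
import math

def sine_wave(freq_hz, amplitude, duration_s, sample_rate=44100):
    """Sample amplitude * sin(2*pi*f*t) at discrete time steps."""
    n = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n)]

# A 440 Hz tone at half amplitude: the wave completes 440 cycles per second,
# and every sample stays within [-0.5, 0.5], the chosen amplitude.
tone = sine_wave(440, 0.5, 0.01)
```

Doubling `freq_hz` packs twice as many cycles into the same duration (higher pitch); raising `amplitude` scales every sample up (louder sound).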
Moustapha notes that the human auditory system possesses a limited range of frequency perception, typically spanning from 20 Hz to 20 kHz. This means that humans can generally hear sounds with frequencies as low as 20 cycles per second and as high as 20,000 cycles per second. However, it’s important to note that this range can vary slightly between individuals and tends to decrease with age, particularly at the higher frequency end. Furthermore, Moustapha points out that very high frequencies (above 2000 Hz) can often be perceived as unpleasant or even painful due to the sensitivity of the ear to rapid pressure changes.
Connecting Musical Notes and Frequencies
Moustapha draws a direct and precise relationship between musical notes and specific frequencies, a fundamental concept in music theory and acoustics. He uses the A440 standard as a prime example. The A440 standard designates the A note above middle C (also known as concert pitch) as having a frequency of exactly 440 Hz. This standard is crucial in music, as it provides a universal reference for tuning musical instruments, ensuring that musicians playing together are in harmony.
Moustapha elaborates on the concept of octaves. An octave represents a doubling or halving of frequency: doubling a note’s frequency yields the same note one octave higher, and halving it yields the same note one octave lower. This logarithmic relationship between pitch and frequency is essential for understanding musical scales, chords, and harmonies.
For instance:
- The A note in the octave below A440 has a frequency of 220 Hz (440 Hz / 2).
- The A note in the octave above A440 has a frequency of 880 Hz (440 Hz * 2).
This consistent doubling or halving of frequency for each octave creates a predictable and harmonious relationship between notes, which is exploited by Shazam’s algorithms to identify musical patterns and structures.
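As a small illustration (not from the talk), the octave rule, and the standard equal-temperament semitone rule it generalizes to, can be written as:

```python
def a_note(octave_offset):
    """Frequency of the note A, shifted by whole octaves from A440."""
    return 440.0 * 2.0 ** octave_offset

def equal_temperament(semitones_from_a440):
    """Standard equal-temperament tuning: each semitone scales pitch by 2**(1/12)."""
    return 440.0 * 2.0 ** (semitones_from_a440 / 12.0)

print(a_note(-1), a_note(1))           # 220.0 880.0, matching the octaves above
print(round(equal_temperament(3), 2))  # C5, roughly 523.25 Hz
```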
The Complexity of Real-World Sound Signals
Moustapha emphasizes that real-world sound is significantly more complex than the idealized pure sine waves often used for basic explanations. Instead, real-world sound signals are typically composed of a superposition, or sum, of numerous sine waves, each with its own unique frequency, amplitude, and phase. These constituent sine waves interact with each other, through a process called interference, creating complex and intricate waveforms.
Furthermore, real-world sounds often contain harmonics, additional frequencies that accompany the fundamental frequency of a sound. The fundamental frequency is the lowest frequency component of a complex sound and is typically perceived as the primary pitch. Harmonics are integer multiples of the fundamental frequency (the harmonics above the fundamental are also known as overtones). For example, if the fundamental frequency is 440 Hz, the second harmonic will be 880 Hz (2 × 440 Hz), the third harmonic will be 1320 Hz (3 × 440 Hz), and so on.
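The harmonic series is simple to compute; a minimal sketch:

```python
def harmonic_series(fundamental_hz, count):
    """First `count` harmonics: integer multiples of the fundamental frequency."""
    return [fundamental_hz * k for k in range(1, count + 1)]

# For A440 the series begins 440, 880, 1320, 1760 Hz; an instrument's timbre
# comes from the relative strengths of these components, not their positions.
print(harmonic_series(440.0, 4))
```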
Moustapha illustrates this complexity with the example of a piano playing the A440 note. While the piano will produce a strong fundamental frequency at 440 Hz, it will simultaneously generate a series of weaker harmonic frequencies. These harmonics are not considered “noise” or “parasites” in the context of music; they are integral to the rich and distinctive sound of the instrument. The specific set of harmonics and their relative amplitudes, or strengths, are what give a piano its characteristic timbre, allowing us to distinguish it from a guitar, a flute, or other instruments playing the same fundamental note.
Moustapha further explains that the physical characteristics of musical instruments, such as the materials from which they are constructed (e.g., wood, metal), their shape and size, the way they produce sound (e.g., strings vibrating, air resonating in a tube), and the presence of resonance chambers, all significantly influence the production and relative intensities of these harmonics. For instance, a violin’s hollow body amplifies certain harmonics, creating its characteristic warm and resonant tone, while a trumpet’s brass construction and flared bell shape emphasize different harmonics, resulting in its bright and piercing sound. This is why a violin and a piano, or a trumpet and a flute, sound so different, even when playing the same fundamental pitch.
He also points out that the human voice is an exceptionally complex sound source. The vocal cords, resonance chambers in the throat and mouth, the shape of the oral cavity, and the position of the tongue and lips all contribute to the unique harmonic content and timbre of each individual’s voice. These intricate interactions make voice recognition and speech analysis challenging tasks, as the acoustic characteristics of speech can vary significantly between speakers and even within the same speaker depending on emotional state and context.
To further emphasize the difference between idealized sine waves and real-world sound, Moustapha contrasts the pure sine wave produced by a tuning fork (an instrument specifically designed to produce a nearly pure tone with minimal harmonics) with the complex waveforms generated by various musical instruments playing the same note. The tuning fork’s waveform is a smooth, regular sine wave, devoid of significant overtones, while the instruments’ waveforms are jagged, irregular, and rich in harmonic content, reflecting the unique timbral characteristics of each instrument.
Harnessing the Power of Fourier Transform
To effectively analyze these complex sound signals and extract the individual frequencies and their amplitudes, Moustapha introduces the Fourier Transform. He acknowledges Joseph Fourier, the French mathematician and physicist (1768–1830), as the “father of signal theory” for his groundbreaking work in this area. Fourier’s mathematical insights revolutionized signal processing and have found applications in diverse fields far beyond audio analysis, including image compression (e.g., JPEG), telecommunications, medical imaging (e.g., MRI), seismology, and even quantum mechanics.
The Fourier Transform is presented as a powerful mathematical tool that decomposes any complex, time-domain signal into a sum of simpler sine waves, each with its own unique frequency, amplitude, and phase. In essence, it performs a transformation of the signal from the time domain, where the signal is represented as a function of time (i.e., amplitude versus time), to the frequency domain, where the signal is represented as a function of frequency (i.e., amplitude versus frequency). This transformation allows us to see the frequency content of the signal, revealing which frequencies are present and how strong they are.
Moustapha provides a simplified explanation of how the Fourier Transform works conceptually. He first illustrates how it would analyze pure sine waves. If the input signal is a single sine wave, the Fourier Transform will precisely identify the frequency of that sine wave and its amplitude. The output in the frequency domain will be a spike or peak at that specific frequency, with the height of the spike corresponding to the amplitude (strength) of the sine wave.
He then emphasizes that the true power and utility of the Fourier Transform become apparent when analyzing complex signals that are the sum of multiple sine waves. In this case, the Fourier Transform will decompose the complex signal into its individual sine wave components, revealing the presence, amplitude, and phase of each frequency. This is precisely the nature of real-world sound, which, as previously discussed, is a mixture of many frequencies and harmonics. By applying the Fourier Transform to an audio signal, it becomes possible to determine the constituent frequencies and their relative strengths, providing valuable information for music analysis, audio processing, and, crucially, song identification as used by Shazam.
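The decomposition can be sketched with a naive discrete Fourier transform in pure Python (an O(n²) illustration for clarity; real systems such as Shazam use the much faster FFT):

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive DFT: magnitude of each frequency bin up to the Nyquist limit."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def dominant_frequency(samples, sample_rate):
    """Hz value of the strongest frequency bin."""
    mags = dft_magnitudes(samples)
    k = max(range(len(mags)), key=mags.__getitem__)
    return k * sample_rate / len(samples)

# A 440 Hz tone plus a weaker 880 Hz harmonic, sampled at 8 kHz for 0.1 s:
rate = 8000
mix = [math.sin(2 * math.pi * 440 * t / rate)
       + 0.3 * math.sin(2 * math.pi * 880 * t / rate)
       for t in range(800)]
# dominant_frequency(mix, rate) picks out the 440 Hz fundamental, while the
# spectrum also shows a smaller peak at the 880 Hz harmonic.
```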
Incident Management: Talk the Talk, Walk the Walk
At Devoxx France 2023, Hila Fish delivered a captivating 47-minute talk titled “Incident Management – Talk the Talk, Walk the Walk” (YouTube link), offering a roadmap for effective incident management. Recorded in April 2023 at the Palais des Congrès in Paris, Hila, a senior DevOps engineer at Wix (Wix website), drew on her 15 years in tech, highlighting proactive strategies and structured processes for handling production incidents. Her talk, rich in practical advice and real-world anecdotes, inspired attendees not only to talk about incident management but to excel at it. This article explores Hila’s framework, showing how to prepare for and resolve incidents while preserving business value and sleep.
Rethinking Incidents with a Business Mindset
Hila began by reframing how incidents are perceived, urging a shift from a narrow technical view to a business-oriented one. She defined incidents as events that risk revenue loss, customer dissatisfaction, data breaches, or reputational damage, distinguishing them from minor alerts. Without proper management, incidents can lead to downtime, reduced productivity, and service-level agreement (SLA) violations, all costly to businesses. Hila insisted that developers and engineers must understand the “why” of their systems: how outages affect revenue, customers, and reputation.
Quoting Werner Vogels, CTO of AWS, Hila recalled that “everything fails all the time,” from production systems to human endurance. This reality makes incidents inevitable, not emergencies to panic over. By anticipating failure, teams can approach incidents calmly, armed with a structured process. Hila’s business mindset encourages engineers to prioritize outcomes aligned with organizational goals, such as minimizing downtime and maintaining customer trust. This perspective lays the groundwork for her structured incident management framework, designed to avoid chaos and maximize efficiency.
A Structured Process for Incident Resolution
Hila presented a five-pillar process for handling incidents, adapted from PagerDuty’s framework and refined by her own experience: Identify and Categorize, Notify and Escalate, Investigate and Diagnose, Resolve and Recover, and Incident Closure. Each pillar includes key questions to guide engineers toward resolution.
- Identify and Categorize: Hila advises assessing the incident’s scope and business impact. Questions such as “Do I understand the full extent of the problem?” and “Can this wait until business hours?” determine urgency. If an alert comes from a customer complaint rather than from tools like PagerDuty, that signals a detection gap to close after the incident.
- Notify and Escalate: Communication is crucial. Hila stressed notifying support teams, customer-facing engineers, and dependent teams to maintain transparency and honor SLAs. Misclassified alerts should be adjusted to reflect their true severity.
- Investigate and Diagnose: Focus on relevant information to avoid wasting time. Hila shared an example in which engineers debated irrelevant flow details, delaying resolution. Asking “Have I found the root cause?” keeps the investigation moving, with escalation if it stalls.
- Resolve and Recover: The fastest fix that preserves system stability is the ideal one. Hila warned against “quick and dirty” fixes, such as restarting a service without addressing the underlying cause, which can resurface and erode reliability. Permanent fixes and preventive measures are essential.
- Incident Closure: After resolution, inform all stakeholders, verify the alerts, update the runbooks, and decide whether a post-mortem is needed. Hila insisted on documenting lessons immediately to capture details accurately, fostering a blameless learning culture.
This structured process reduces mean time to resolution, minimizes costs, and improves system reliability, in line with Hila’s business-first philosophy.
Essential Traits of Incident Managers
Hila detailed ten traits crucial to effective incident management, offering practical ways to develop each:
- Quick thinking: Incidents often involve unfamiliar problems, demanding fast, creative decisions. Hila suggested training this through brainstorming sessions or team exercises such as paintball to build adaptability.
- Filtering relevant information: Knowing a system’s flows helps separate critical data from noise. Familiarity with the system architecture sharpens this skill and speeds up debugging.
- Working under pressure: Hila told the story of a colleague paralyzed by 300 alerts during his first on-call shift. Collecting the relevant data reduces stress by restoring a sense of control, and learning system flows ahead of time builds confidence.
- Methodical work: Following her pillar-based process ensures steady progress, even under pressure.
- Humility: Asking for help puts business needs above ego. Hila encouraged escalating unresolved problems rather than wasting time on them.
- Problem-solving and a proactive attitude: A positive, can-do approach drives solutions. Hila recounted pushing reluctant colleagues to try suggested fixes, avoiding stagnation.
- Ownership and initiative: Even after escalating, incident managers should check on progress, as Hila did by following up with an unresponsive DBA.
- Communication: Clear, concise updates to teams and customers are vital. For less communicative engineers, Hila recommended predefined guidelines covering channels and content.
- Leadership without authority: Confidence and calm inspire trust, enabling incident managers to lead teams effectively.
- Commitment: Passion for the role fuels ownership and initiative. Hila warned that apathy can signal burnout or a poor job fit.
These traits, honed through practice and reflection, enable engineers to handle incidents with clarity and determination.
Proactive Preparation for Incident Success
Hila’s central message was the power of proactivity, which she compared to listening actively in class to prepare for an exam. She detailed proactive steps for day-to-day work and post-incident actions to ensure preparedness:
- Post-incident actions: Write end-of-shift on-call reports to document recurring problems, useful for team awareness and audits. Note observations immediately for a post-mortem, even without a formal meeting, to capture the lessons. Open tasks to prevent future incidents, fix false-positive alerts, update runbooks, and automate fixes for self-healing problems. Share detailed knowledge through handbooks or briefings to help teams learn from the debugging process.
- Day-to-day proactivity: Read teammates’ end-of-shift reports to stay informed about production changes. Know the escalation contacts for other domains (for example, the developers of specific services) to avoid delays. Study the system architecture and application flows to identify weak points and streamline troubleshooting. Monitor teammates’ tasks and production changes to anticipate their impact. Be a go-to person, sharing knowledge to build trust and reduce information-gathering effort.
Hila’s proactive approach ensures engineers are ready when PagerDuty or OpsGenie alerts arrive, minimizing downtime and supporting business success.
Conclusion
Hila Fish’s presentation at Devoxx France 2023 was a masterclass in incident management, blending structured processes, essential traits, and proactive strategies. By adopting a business mindset, following a clear resolution framework, cultivating key skills, and preparing diligently, engineers can turn chaotic incidents into manageable challenges. Her emphasis on preparation and collaboration ensures efficient resolutions while preserving sleep, a win for engineers and businesses alike.
Watch the full talk on YouTube to explore Hila’s insights further. Her work at Wix (Wix website) reflects a commitment to DevOps excellence, and additional resources are available through Devoxx France (Devoxx France website). As Hila reminded the audience, mastering incident management means preparing, staying calm, and always prioritizing the business, because when an incident strikes, you will be ready to act.
Meet with Others: Tools for Speech
In a world increasingly dominated by digital communication and remote work, the ability to connect with others and speak confidently has become more challenging yet more valuable than ever. At Devoxx France 2023, Alex Casanova delivered an engaging workshop on overcoming the barriers to effective communication and public speaking, drawing from her extensive experience as an actress, trainer, and sophrologist.
The Importance of Human Connection
Alex began her presentation with an interactive exercise, asking the audience to identify what prevents people from speaking in public. The responses came quickly: shyness, lack of confidence, fear of judgment, feeling illegitimate, and the intimidation of speaking after someone more articulate has already spoken. She then asked what would help overcome these barriers: confidence, feeling safe, stress management, feedback, a supportive atmosphere, and practice.
“The development of digital technology, artificial intelligence, and social distancing following lockdowns has had an impact on human beings and our self-confidence,” Alex explained. “It increases fear—fear of going out, fear of approaching others, fear of not knowing what to say—because the professional world is demanding and always asks for more: more ideas, more spontaneity, more innovation.”
As a professional actress, trainer, and sophrologist, Alex shared that she too has experienced impostor syndrome and naturally tends toward introversion. Her life path consciously or unconsciously led her to theater, which provided tools to express herself better, feel comfortable in front of an audience, and create a space where she could be fully herself.
Understanding Communication Types
Alex outlined three types of communication we encounter:
- Interpersonal communication – Between two people, involving an emitter and a receiver
- Group communication – One person addressing a group, such as in presentations or conferences
- Mass communication – Multiple sources addressing large audiences through various channels
The workshop focused primarily on the first two types, which are most relevant to professional settings.
The Hero’s Journey to Better Communication
Alex framed the workshop as a hero’s journey where participants would face and overcome four challenges that prevent effective communication:
Challenge 1: Breaking Mental and Physical Isolation
The first monster to defeat is the fear of leaving our comfort zone. Alex guided the audience through a sophrological relaxation exercise focusing on:
- Posture awareness and alignment
- Square breathing technique (inhale, hold, exhale, hold)
- Visualization of a safe, comforting place
- Recalling a memory of personal excellence and confidence
This simple but powerful tool helps create grounding and calm, strengthens personal resources, offers perspective on emotions, and brings focus to the present moment.
Challenge 2: Public Speaking and Self-Confidence
The second challenge involves overcoming stage fright, anxiety, and various fears:
- Fear of not being understood
- Fear of being judged
- Fear of not being good enough
- Fear of losing composure
Alex demonstrated the “Victory V” posture—standing tall with arms raised in a V shape—based on Amy Cuddy’s research on body language and its influence on mental state. Maintaining this posture for 30 seconds releases hormones that boost confidence and create an optimistic, open mindset.
“Body language truly puts you in an attitude of openness,” Alex explained, contrasting it with closed postures associated with fear or sadness. She shared a personal anecdote of using this technique at a networking event where she felt out of place, which led to the event organizer approaching her and introducing her to others.
Challenge 3: Team Relationships and Quick Thinking
The third challenge addresses conflict avoidance, difficulty collaborating, lack of self-confidence, fear of not knowing what to say, viewing others as enemies, and fear of rejection.
Alex led the audience through a word association exercise:
- First individually, thinking of a word and making associations (e.g., bottle → alcohol → cocktail → vacation)
- Then collectively, with audience members building on each other’s associations
This simple activity immediately created engagement, spontaneity, and connection among strangers, demonstrating the philosophy of improvisation.
“Improv puts you in a state of play, exchange, meeting, letting go, and self-confidence,” Alex explained. She has used improvisation tools to help anesthesiologists improve their listening skills, multitasking abilities, and patient interaction, as well as with high-ranking military personnel who needed to develop active listening to communicate with civilians.
Challenge 4: Creativity and Innovation
The final challenge involves overcoming:
- Fear of failure
- Fear of not measuring up
- Fear of leaving one’s comfort zone
- Fear of not being original
As an exercise, Alex asked participants to list five positive adjectives about themselves, including one starting with the first letter of their name, and then say them aloud together.
This tool helps transform limiting beliefs into motivating ones, shifting from a closed to an open state, from procrastination to action.
The Virtuous Circle
Alex concluded by presenting the virtuous circle that replaces the vicious circle of self-doubt:
- I live, therefore I exist – Recognizing your inherent right to exist and take up space
- I recognize my qualities and experiences – Building on small successes
- I welcome errors as opportunities to learn – Seeing challenges as feedback rather than failure
- I reach my goals at my own pace – Bringing compassion and kindness to yourself
“It’s really up to you to be your best ally,” Alex emphasized.
Applying These Tools
Alex’s approach combines inspiration and action—balancing periods of calm, introspection, and theory with practice, simulation, and implementation. Her multidisciplinary background allows her to use theatrical improvisation, psychology, sophrology, and coaching to adapt to individual and corporate needs.
Her ultimate goal is to help people develop greater self-confidence and what psychologist Carl Rogers calls “congruence”—alignment and coherence between our thoughts, feelings, words, and actions. This authenticity creates empathy and acceptance of ourselves and others.
About Alex Casanova
Alex Casanova is an actress, trainer, and sophrologist who specializes in helping individuals develop confidence through experiential learning. Her multidisciplinary approach combines the performing arts, psychology, and therapeutic techniques to create personalized development pathways for both individuals and organizations.
Through her work, she aims to bring more humanity, respect, and tolerance into corporate environments by focusing on authentic communication and personal growth. Her “INSPIR’ACTION” methodology balances introspection with practical application to create sustainable behavioral change.
Navigating the Reactive Frontier: Oleh Dokuka’s Reactive Streams at Devoxx France 2023
On April 13, 2023, Oleh Dokuka commanded the Devoxx France stage with a 44-minute odyssey titled “From imperative to Reactive: the Reactive Streams adventure!” Delivered at Paris’s Palais des Congrès, Oleh, a reactive programming luminary, guided developers through the paradigm shift from imperative to reactive programming. Building on his earlier R2DBC talk, he unveiled the power of Reactive Streams, a specification for non-blocking, asynchronous data processing. His narrative was a thrilling journey, blending technical depth with practical insights, inspiring developers to embrace reactive systems for scalable, resilient applications.
Oleh began with a relatable scenario: a Java application overwhelmed by high-throughput data, such as a real-time analytics dashboard. Traditional imperative code, with its synchronous loops and blocking calls, buckles under pressure, leading to latency spikes and resource exhaustion. “We’ve all seen threads waiting idly for I/O,” Oleh quipped, his humor resonating with the audience. Reactive Streams, he explained, offer a solution by processing data asynchronously, using backpressure to balance producer and consumer speeds. Oleh’s passion for reactive programming set the stage for a deep dive into its principles, tools, and real-world applications.
Embracing Reactive Streams
Oleh’s first theme was the core of Reactive Streams: a specification for asynchronous stream processing with non-blocking backpressure. He introduced its four interfaces—Publisher, Subscriber, Subscription, and Processor—and their role in building reactive pipelines. Oleh likely demonstrated a simple pipeline using Project Reactor, a Reactive Streams implementation:
Flux.range(1, 100)                             // emit the integers 1..100
    .map(i -> processData(i))                  // processData is a placeholder transformation
    .subscribeOn(Schedulers.boundedElastic())  // run the pipeline off the calling thread
    .subscribe(System.out::println);           // print each result as it arrives
In this demo, a Flux emits numbers, processes them asynchronously, and prints results, all while respecting backpressure. Oleh showed how the Subscription controls data flow, preventing the subscriber from being overwhelmed. He contrasted this with imperative code, where a loop might block on I/O, highlighting reactive’s efficiency for high-throughput tasks like log processing or event streaming. The audience, familiar with synchronous Java, leaned in, captivated by the prospect of responsive systems.
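The four interfaces Oleh described also ship with the JDK itself (since Java 9) as `java.util.concurrent.Flow`. As a minimal, library-free sketch of backpressure, and not code from the talk, the subscriber below requests exactly one item at a time, so the publisher can never run ahead of it:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackpressureDemo {
    // Collect the integers 1..n through a backpressured pipeline,
    // requesting a single element at a time.
    static List<Integer> collect(int n) throws InterruptedException {
        List<Integer> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription subscription;

                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1);            // backpressure: pull one item only
                }
                @Override public void onNext(Integer item) {
                    received.add(item);
                    subscription.request(1); // ask for the next item only when ready
                }
                @Override public void onError(Throwable t) { done.countDown(); }
                @Override public void onComplete() { done.countDown(); }
            });
            for (int i = 1; i <= n; i++) {
                publisher.submit(i);         // blocks if the subscriber's buffer is full
            }
        } // close() signals onComplete once all buffered items are delivered
        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(collect(5)); // prints [1, 2, 3, 4, 5]
    }
}
```

The same demand-driven `request(n)` contract is what Reactor's Flux implements under the hood, which is why the two interoperate so cleanly.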
Building Reactive Applications
Oleh’s narrative shifted to practical application, his second theme. He explored integrating Reactive Streams with Spring WebFlux, a reactive web framework. In a demo, Oleh likely built a REST API handling thousands of concurrent requests, using Mono and Flux for non-blocking responses:
@GetMapping("/events")
Flux<Event> getEvents() {
    return eventService.findAll(); // non-blocking: events stream to the client as they arrive
}
This API, running on Netty and leveraging virtual threads (echoing José Paumard’s talk), scaled effortlessly under load. Oleh emphasized backpressure strategies, such as onBackpressureBuffer(), to manage fast producers. He also addressed error handling, showing how onErrorResume() ensures resilience in reactive pipelines. For microservices or event-driven architectures, Oleh argued, Reactive Streams enable low-latency, resource-efficient systems, a must for cloud-native deployments.
Oleh shared real-world examples, noting how companies like Netflix use Reactor for streaming services. He recommended starting with small reactive components, such as a single endpoint, and monitoring performance with tools like Micrometer. His practical advice—test under load, tune buffer sizes—empowered developers to adopt reactive programming incrementally.
Reactive in the Ecosystem
Oleh’s final theme was Reactive Streams’ role in Java’s ecosystem. Libraries like Reactor, RxJava, and Akka Streams implement the specification, while frameworks like Spring Boot 3 integrate reactive data access via R2DBC (from his earlier talk). Oleh highlighted compatibility with databases like MongoDB and Kafka, ideal for reactive pipelines. He likely demonstrated a reactive Kafka consumer, processing messages with backpressure:
KafkaReceiver.create(receiverOptions)
    .receive()                                // Flux of incoming Kafka records
    .flatMap(record -> processRecord(record)) // process asynchronously, honoring backpressure
    .subscribe();
This demo showcased seamless integration, reinforcing reactive’s versatility. Oleh urged developers to explore Reactor’s documentation and experiment with Spring WebFlux, starting with a prototype project. He cautioned about debugging challenges, suggesting tools like BlockHound to detect blocking calls. Looking ahead, Oleh envisioned reactive systems dominating data-intensive applications, from IoT to real-time analytics.
As the session closed, Oleh’s enthusiasm sparked hallway discussions about reactive programming’s potential. Developers left with a clear path: build a reactive endpoint, integrate with Reactor, and measure scalability. Oleh’s adventure through Reactive Streams was a testament to Java’s adaptability, inspiring a new era of responsive, cloud-ready applications.
What If We Talked a Little About Security? A Guide for Developers
Introduction
Julien Legras, a security expert at SFEIR, delivers an engaging presentation at Devoxx France 2023 titled “Et si on parlait un peu de sécurité ?” (“What if we talked a little about security?”). In this 14-minute talk, Legras demystifies application security for developers, highlighting practical steps to protect software without having to become a security specialist. Drawing on his work at SFEIR, a digital transformation consultancy, he offers a roadmap for building secure applications in fast-moving environments.
Key Insights
Legras begins by challenging a widespread assumption among developers: that security is someone else’s job. He argues that developers are the first line of defense, because they write the code that attackers target. At SFEIR, whose clients range from startups to large enterprises, Legras has seen how small security mistakes lead to major breaches.
He outlines three essential practices:
- Input validation: Sanitize all user input to prevent injection attacks such as SQL injection or XSS.
- API security: Use authentication (e.g., OAuth) and rate limiting to protect endpoints.
- Dependency management: Regularly update libraries and scan for vulnerabilities with tools like Dependabot.
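To make the first practice concrete, here is a minimal whitelist-validation sketch in Java. The field names and rules are hypothetical, invented for illustration rather than taken from the talk: the idea is to accept only input matching an expected shape and reject everything else.

```java
import java.util.regex.Pattern;

public class InputValidator {
    // Whitelist patterns: accept only what we expect, reject everything else.
    private static final Pattern USERNAME = Pattern.compile("^[A-Za-z0-9_]{3,32}$");
    private static final Pattern EMAIL    = Pattern.compile("^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$");

    static boolean isValidUsername(String s) {
        return s != null && USERNAME.matcher(s).matches();
    }

    static boolean isValidEmail(String s) {
        return s != null && EMAIL.matcher(s).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidUsername("alice_42"));                    // true
        System.out.println(isValidUsername("alice'; DROP TABLE users;--")); // false: rejected
        System.out.println(isValidEmail("dev@example.com"));                // true
    }
}
```

Validation complements, but does not replace, parameterized queries (`PreparedStatement` in JDBC) for SQL and output encoding for XSS; those remain the primary defenses against injection.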
Legras shares a case study from a client’s e-commerce platform, where implementing HTTPS and secure session management prevented a data leak. He also stresses the importance of logging and monitoring to detect anomalies early. The talk balances technical advice with cultural tips, such as fostering a “security-first” mindset through team training.
Lessons Learned
Legras’s presentation offers practical takeaways:
- Own security: Developers must build security into their daily work rather than delegate it.
- Use tools wisely: Automated scanners and linters catch issues early, but human judgment remains key.
- Train teams: Regular security workshops build awareness and reduce risk.
These ideas are crucial for developers working on public-facing applications or in regulated industries. Legras’s accessible style makes security feel achievable rather than intimidating.
Conclusion
Julien Legras’s talk empowers developers to take ownership of application security through practical, actionable measures. His experience at SFEIR underscores the importance of proactive steps to protect software. This presentation is essential viewing for developers looking to build secure, resilient applications without slowing down delivery.
What Open Source Can Learn from Startups: A Fresh Perspective
Introduction
In his Devoxx France 2023 presentation, “Ce que l’open source peut apprendre des startups” (“What open source can learn from startups”), Adrien Pessu, a technical evangelist at MongoDB, explores how startup principles can revitalize open source communities. This 14-minute talk draws on his experience at the intersection of startups and open source software. By applying entrepreneurial strategies, he argues, open source projects can become more sustainable and impactful, offering valuable lessons for developers and maintainers.
Key Insights
Pessu begins by highlighting the challenges facing open source: maintainer burnout, fragmented contributions, and lack of funding. He contrasts this with startups, which thrive on agility, user focus, and iterative growth. At MongoDB, where open source tools like MongoDB Atlas are central, Pessu has observed how startup-inspired practices strengthened community engagement.
He proposes three startup-inspired strategies:
- User-centered design: Involve users early to shape features, just as startups validate product-market fit.
- Lean iteration: Ship minimum viable features and iterate on feedback, avoiding perfectionism.
- Community as customers: Treat contributors like valued customers, with clear documentation and responsive support.
Pessu shares the example of MongoDB’s open source drivers, where a simplified contribution process, inspired by startup onboarding, boosted community participation. He also addresses funding, suggesting models such as sponsored development or foundations, similar to startup fundraising. The talk emphasizes measurable goals, such as tracking contributor retention, to assess project health.
Lessons Learned
Pessu’s insights are actionable for open source maintainers:
- Prioritize users: Build features that solve real problems, validated by community feedback.
- Simplify contributions: Clear guidelines and quick wins encourage sustained participation.
- Ensure sustainability: Explore funding models that support maintainers without compromising open source values.
These lessons resonate with developers involved in open source or looking to make their projects more inclusive. Pessu’s startup perspective offers a fresh framework for tackling long-standing challenges.
Conclusion
Adrien Pessu’s presentation bridges the gap between startups and open source, showing how entrepreneurial tactics can strengthen communities. His experience at MongoDB illustrates the power of user focus and iteration in making projects sustainable. This talk is a must-watch for developers and maintainers aiming to make open source more vibrant and resilient.
Speaker: Adrien Pessu
[DevoxxFR 2023] Tests, an Investment for the Future: Building Reliable Software
Introduction
In “Les tests, un investissement pour l’avenir,” presented at Devoxx France 2023, Julien Deniau, a developer at Amadeus, champions software testing as a cornerstone of sustainable development. This 14-minute quickie draws from his work on airline reservation systems, where reliability is non-negotiable. Deniau’s passionate case for testing offers developers practical strategies to ensure code quality while accelerating delivery.
Key Insights
Deniau frames testing as an investment, not a cost, emphasizing its role in preventing regressions and enabling fearless refactoring. At Amadeus, where systems handle billions of transactions annually, comprehensive tests are critical. He outlines a testing pyramid:
- Unit Tests: Fast, isolated tests for individual components, forming the pyramid’s base.
- Integration Tests: Validate interactions between modules, such as APIs and databases.
- End-to-End Tests: Simulate user journeys, used sparingly due to complexity.
Deniau shares a case study of refactoring a booking system, where a robust test suite allowed the team to rewrite critical components without introducing bugs. He advocates for Test-Driven Development (TDD) to clarify requirements before coding and recommends tools like JUnit and Cucumber for Java-based projects. The talk also addresses cultural barriers, such as convincing stakeholders to allocate time for testing, achieved by demonstrating reduced maintenance costs.
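To make the base of the pyramid concrete, here is a minimal unit-test sketch in plain Java. In practice the team would use JUnit, as Deniau recommends; the fare-calculation rule below is invented for illustration and is not from the talk. The point is the shape of a good unit test: a pure function, no I/O, one behavior per check.

```java
public class FareTests {
    // Hypothetical unit under test: a pure pricing rule, fast and isolated.
    static long fareWithTaxCents(long baseCents, int taxPercent) {
        if (baseCents < 0 || taxPercent < 0) {
            throw new IllegalArgumentException("fare and tax rate must be non-negative");
        }
        return baseCents + baseCents * taxPercent / 100;
    }

    public static void main(String[] args) {
        // Unit tests: no I/O, no database, milliseconds to run.
        check(fareWithTaxCents(10_000, 20) == 12_000, "20% tax on 100.00");
        check(fareWithTaxCents(0, 20) == 0, "zero fare stays zero");
        boolean rejected = false;
        try {
            fareWithTaxCents(-1, 20);
        } catch (IllegalArgumentException expected) {
            rejected = true;
        }
        check(rejected, "negative fares are rejected");
        System.out.println("all tests passed");
    }

    static void check(boolean condition, String name) {
        if (!condition) throw new AssertionError(name);
    }
}
```

Because the rule is pure and works in integer cents, the tests are deterministic and instantaneous, which is exactly what lets a large suite at the pyramid's base run on every commit.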
Lessons Learned
Deniau’s talk provides key takeaways:
- Test Early, Test Often: Writing tests upfront saves time during debugging and refactoring.
- Balance the Pyramid: Prioritize unit tests for speed, but don’t neglect integration tests.
- Sell Testing: Highlight business benefits, like faster delivery and fewer outages, to gain buy-in.
These insights are crucial for teams in high-stakes industries or those struggling with legacy code. Deniau’s enthusiasm makes testing feel like an empowering tool rather than a chore.
Conclusion
Julien Deniau’s quickie reframes testing as a strategic asset for building reliable, maintainable software. His Amadeus experience underscores the long-term value of a disciplined testing approach. This talk is a must-watch for developers seeking to future-proof their codebases.
[DevoxxFR 2023] Hexagonal Architecture in 15 Minutes: Simplifying Complex Systems
Introduction
Julien Topçu, a tech lead at LesFurets, delivers a concise yet powerful Devoxx France 2023 quickie titled “L’architecture hexagonale en 15 minutes.” In this 17-minute talk, Topçu introduces hexagonal architecture (also known as ports and adapters) as a solution for building maintainable, testable systems. Drawing from his experience at LesFurets, a French insurance comparison platform, he provides a practical guide for developers navigating complex codebases.
Key Insights
Topçu explains hexagonal architecture as a way to decouple business logic from external systems, like databases or APIs. At LesFurets, where rapid feature delivery is critical, this approach reduced technical debt and improved testing. The architecture organizes code into:
- Core Business Logic: Pure functions or classes that handle the application’s rules.
- Ports: Interfaces defining interactions with the outside world.
- Adapters: Implementations of ports, such as database connectors or HTTP clients.
Topçu shares a refactoring example, where a tightly coupled insurance quote system was restructured. By isolating business rules in a core module, the team simplified unit testing and swapped out a legacy database without changing the core logic. He highlights tools like Java’s interfaces and Spring’s dependency injection to implement ports and adapters efficiently. The talk also addresses trade-offs, such as the initial overhead of defining ports, balanced by long-term flexibility.
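The three roles above can be sketched in a few lines of plain Java. This is a hedged illustration in the spirit of the talk, not LesFurets code: the names (`QuoteRepository`, `QuoteService`) and the pricing rule are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class HexagonalSketch {
    // Port: an interface owned by the core, describing what it needs from the outside.
    interface QuoteRepository {
        void save(String customerId, double premium);
        Optional<Double> findPremium(String customerId);
    }

    // Core business logic: depends only on the port, never on a concrete technology.
    static class QuoteService {
        private final QuoteRepository repository;
        QuoteService(QuoteRepository repository) { this.repository = repository; }

        double quote(String customerId, double basePremium, boolean isHighRisk) {
            double premium = isHighRisk ? basePremium * 1.5 : basePremium; // business rule
            repository.save(customerId, premium);
            return premium;
        }
    }

    // Adapter: one implementation of the port. A JPA or legacy-database adapter
    // could replace it without touching QuoteService.
    static class InMemoryQuoteRepository implements QuoteRepository {
        private final Map<String, Double> store = new HashMap<>();
        public void save(String customerId, double premium) { store.put(customerId, premium); }
        public Optional<Double> findPremium(String customerId) {
            return Optional.ofNullable(store.get(customerId));
        }
    }

    public static void main(String[] args) {
        QuoteService service = new QuoteService(new InMemoryQuoteRepository());
        System.out.println(service.quote("client-1", 100.0, true)); // 150.0
    }
}
```

Unit tests wire the core to the in-memory adapter; production wires it to the real database adapter, typically via Spring's dependency injection as Topçu noted. The core never changes.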
Lessons Learned
Topçu’s insights are actionable:
- Decouple Early: Separating business logic prevents future refactoring pain.
- Testability First: Hexagonal architecture enables comprehensive unit tests without mocks.
- Start Small: Apply the pattern incrementally to avoid overwhelming teams.
These lessons resonate with developers maintaining evolving systems or adopting Domain-Driven Design. Topçu’s clear explanations make hexagonal architecture accessible even to newcomers.
Conclusion
Julien Topçu’s quickie offers a masterclass in hexagonal architecture, proving its value in real-world applications. His LesFurets example shows how to build systems that are robust yet adaptable. This talk is essential for developers aiming to create clean, maintainable codebases.
Event Sourcing Without a Framework: A Practical Approach
Introduction
In his Devoxx France 2023 quickie, “Et si on faisait du Event Sourcing sans framework ?”, Jonathan Lermitage, a developer at Worldline, challenges the reliance on complex frameworks for event sourcing. This 17-minute talk explores how his team implemented event sourcing from scratch to meet the needs of a payment processing system. Lermitage’s practical approach, grounded in Worldline’s high-stakes environment, offers developers a clear path to adopting event sourcing without overwhelming dependencies.
Key Insights
Lermitage begins by explaining event sourcing, where application state is derived from a sequence of events rather than a static database. At Worldline, which processes millions of transactions daily, event sourcing ensures auditability and resilience. However, frameworks like Axon or EventStore introduced complexity that clashed with the team’s need for simplicity and control.
Instead, Lermitage’s team built a custom solution using:
- PostgreSQL for Event Storage: Storing events as JSON objects in a single table, with indexes for performance.
- Kafka for Event Streaming: Ensuring scalability and real-time processing.
- Java for Business Logic: Simple classes to handle event creation, storage, and replay.
He shares a case study of tracking payment statuses, where events like PaymentInitiated or PaymentConfirmed formed an auditable trail. Lermitage emphasizes minimalism, avoiding over-engineered patterns and focusing on readable code. The talk also covers challenges, such as managing event schema evolution and ensuring idempotency during replays, solved with versioned events and unique identifiers.
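The core idea, deriving state by replaying an event log, fits in a few lines of plain Java. The sketch below is an illustration under assumed names (`PaymentEvent`, `currentStatus`), not Worldline's implementation; it shows how versioned events and unique identifiers support the schema evolution and idempotency concerns Lermitage raised.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class PaymentEventSourcing {
    // A minimal event: the version supports schema evolution; the unique id
    // makes replays idempotent when the same event is delivered twice.
    record PaymentEvent(String eventId, int version, String type, String paymentId) {}

    // State is never stored directly; it is always derived by replaying the log.
    static String currentStatus(String paymentId, List<PaymentEvent> log) {
        Set<String> seen = new LinkedHashSet<>(); // skip duplicate deliveries
        String status = "UNKNOWN";
        for (PaymentEvent e : log) {
            if (!e.paymentId().equals(paymentId) || !seen.add(e.eventId())) {
                continue;
            }
            switch (e.type()) {
                case "PaymentInitiated" -> status = "INITIATED";
                case "PaymentConfirmed" -> status = "CONFIRMED";
                default -> { /* unknown event types ignored for forward compatibility */ }
            }
        }
        return status;
    }

    public static void main(String[] args) {
        List<PaymentEvent> log = new ArrayList<>();
        log.add(new PaymentEvent("e1", 1, "PaymentInitiated", "p42"));
        log.add(new PaymentEvent("e2", 1, "PaymentConfirmed", "p42"));
        log.add(new PaymentEvent("e2", 1, "PaymentConfirmed", "p42")); // duplicate, ignored
        System.out.println(currentStatus("p42", log)); // CONFIRMED
    }
}
```

In the full system the log would live in PostgreSQL and new events would flow through Kafka, but the replay logic stays this simple, which is precisely Lermitage's argument against heavyweight frameworks.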
Lessons Learned
Lermitage’s experience offers key takeaways:
- Keep It Simple: Avoid frameworks if your use case demands lightweight solutions.
- Prioritize Auditability: Event sourcing shines in systems requiring traceability, like payments.
- Plan for Evolution: Design events with versioning in mind to handle future changes.
These insights are valuable for developers in regulated industries or those wary of framework lock-in. Lermitage’s focus on practicality makes event sourcing approachable for teams of varying expertise.
Conclusion
Jonathan Lermitage’s talk demystifies event sourcing by showing how to implement it without heavy frameworks. His Worldline case study proves that simplicity and control can coexist in complex systems. This quickie is a must-watch for developers seeking flexible, auditable architectures.