Secure Development with Docker: DockerCon 2023 Workshop
The DockerCon 2023 workshop, “Secure Development with Docker,” delivered by Yves Brissaud, James Carnegie, David Dooling, and Christian Dupuis from Docker, offered a comprehensive exploration of securing the software supply chain. Spanning over three hours, this session addressed the tension between developers’ need for speed and security teams’ focus on risk mitigation. Participants engaged in hands-on labs to identify and remediate common vulnerabilities, leverage Docker Scout for actionable insights, and implement provenance, software bills of materials (SBOMs), and policies. The workshop emphasized Docker’s developer-centric approach to security, empowering attendees to enhance their workflows without compromising safety. By integrating Docker Scout, attendees learned to secure every stage of the software development lifecycle, from code to deployment.
Tackling Common Vulnerabilities and Exposures (CVEs)
The workshop began with a focus on Common Vulnerabilities and Exposures (CVEs), a critical starting point for securing software. David Dooling introduced CVEs as publicly disclosed cybersecurity vulnerabilities in operating systems, dependencies like OpenSSL, or container images. Participants used Docker Desktop 4.24 and the Docker Scout CLI to scan images based on Alpine 3.14, identifying vulnerabilities in base images and added layers, such as npm packages (e.g., Express and its transitive dependency Qs). Hands-on exercises guided attendees to update base images to Alpine 3.18, using Docker Scout’s recommendations to select versions with fewer vulnerabilities. The CLI’s cves command and Desktop’s vulnerability view provided detailed insights, including severity filters and package details, enabling developers to remediate issues efficiently. This segment underscored that while scanning is essential, it is only one part of a broader security strategy, setting the stage for a holistic approach.
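The scan-and-remediate loop described above can be sketched with a few Scout CLI invocations (the image name is a placeholder, and the commands assume Docker Desktop 4.24+ with the Scout CLI installed):

```shell
# Quick risk overview of a locally built image (hypothetical tag)
docker scout quickview myorg/frontend:latest

# List CVEs, filtered to the most urgent severities
docker scout cves --only-severity critical,high myorg/frontend:latest

# Ask Scout for base-image updates (e.g., Alpine 3.14 -> 3.18) with fewer CVEs
docker scout recommendations myorg/frontend:latest
```

The quickview gives the at-a-glance summary the workshop started from; the cves and recommendations subcommands drive the actual remediation steps.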
Understanding Software Supply Chain Security
The second segment, led by Dooling, introduced the software supply chain as a framework encompassing source code, dependencies, build processes, and deployment. Drawing an analogy to brewing coffee—where beans, water, and equipment have their own supply chains—the workshop highlighted risks like supply chain attacks, as outlined by CISA’s open-source security roadmap. These attacks, such as poisoning repositories, differ from CVEs by involving intentional tampering. Participants explored Docker Scout’s role as a supply chain management tool, not just a CVE scanner. Using the workshop’s GitHub repository (dc23-secure-workshop), attendees set up environment variables and Docker Compose to build images, learning how Scout tracks components across the lifecycle. This segment emphasized the need to secure every stage, from code creation to deployment, to prevent vulnerabilities and malicious injections.
Leveraging Docker Scout for Actionable Insights
Docker Scout was the cornerstone of the workshop, offering a developer-friendly interface to manage security. Yves Brissaud guided participants through hands-on labs using Docker Desktop and the Scout CLI to analyze images. Attendees explored vulnerabilities in a front-end image (using Express) and a Go-based back-end image, applying filters to focus on critical CVEs or specific package types (e.g., npm). Scout’s compare command allowed participants to assess changes between image versions, such as updating from Alpine 3.14 to 3.18, revealing added or removed packages and their impact on vulnerabilities. Desktop’s visual interface displayed recommended fixes, like updating base images or dependencies, while the CLI provided detailed outputs, including quick views for rapid assessments. This segment demonstrated Scout’s ability to integrate into CI/CD pipelines, providing early feedback to developers without disrupting workflows.
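A comparison along these lines can be run directly from the CLI (the image references here are placeholders):

```shell
# Diff two tags of the same image: packages added/removed, CVE deltas,
# and base-image changes between the two builds
docker scout compare --to myorg/frontend:v1.0 myorg/frontend:v1.1
```

The --to flag sets the baseline, so the output reads as "what changed in v1.1 relative to v1.0" — the same view the workshop used to confirm that the Alpine upgrade removed vulnerabilities.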
Implementing Provenance and Software Bill of Materials (SBOM)
The third segment focused on provenance and SBOMs, critical for supply chain transparency. Provenance, aligned with the SLSA framework’s Build Level 1, documents how an image is built, including base image tags, digests, and build metadata. SBOMs list all packages and their versions, ensuring consistency across environments. Participants rebuilt images with the --provenance and --sbom flags using BuildKit, generating attestations stored in Docker Hub. Brissaud demonstrated using the imagetools command to inspect provenance and SBOMs, revealing details like build timestamps and package licenses. The workshop highlighted the importance of embedding this metadata at build time to enable reproducible builds and accurate recommendations. By integrating Scout’s custom SBOM indexer, attendees ensured consistent vulnerability reporting across Desktop, CLI, and scout.docker.com, enhancing trust in the software’s integrity.
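As a rough sketch of the build-and-inspect flow (the image name is a placeholder; a BuildKit-enabled builder and push access to a registry are assumed):

```shell
# Build and push with provenance and SBOM attestations attached
docker buildx build --provenance=true --sbom=true -t myorg/app:latest --push .

# Inspect the attestations stored alongside the image
docker buildx imagetools inspect myorg/app:latest --format '{{ json .Provenance }}'
docker buildx imagetools inspect myorg/app:latest --format '{{ json .SBOM }}'
```

Because the attestations travel with the image in the registry, any later consumer can verify how the image was built and exactly which packages it contains.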
Enforcing Developer-Centric Policies
The final segment introduced Docker Scout’s policy enforcement, designed with a developer mindset to avoid unnecessary build failures. Dooling explained Scout’s “first do no harm” philosophy, rooted in Kaizen’s continuous improvement principles. Unlike traditional policies that block builds for existing CVEs, Scout compares new builds to production images, allowing progress if vulnerabilities remain unchanged. Participants explored the out-of-the-box policies available in Early Access, including fixing critical and high CVEs, updating base images, and avoiding deprecated tags. Using the scout policy command, attendees evaluated images against these policies, viewing compliance status on Desktop and scout.docker.com. The workshop also previewed upcoming GitHub Action integrations for pull request policy checks, enabling developers to assess changes before merging. This approach ensures security without hindering development, aligning with Docker’s mission to empower developers.
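A policy evaluation along these lines can be run from the CLI (organization and image names are placeholders):

```shell
# Evaluate an image against the organization's Scout policies and
# report pass/fail status per policy
docker scout policy myorg/app:latest --org myorg
```

The same compliance results surface in Docker Desktop and on scout.docker.com, so developers and security teams see one consistent verdict.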
Links:
- DockerCon 2023 Workshop Video
- Docker Website
- Announcing Docker Scout GA: Actionable Insights for the Software Supply Chain
- Docker Scout: Securing The Complete Software Supply Chain (DockerCon 2023)
- What’s in My Container? Docker Scout CLI and CI to the Rescue (DockerCon 2023)
Hashtags: #DockerCon2023 #SoftwareSupplyChain #DockerScout #SecureDevelopment #CVEs #Provenance #SBOM #Policy #YvesBrissaud #JamesCarnegie #DavidDooling #ChristianDupuis
[DevoxxBE2023] Build a Generative AI App in Project IDX and Firebase by Prakhar Srivastav
At Devoxx Belgium 2023, Prakhar Srivastav, a software engineer at Google, unveiled the power of Project IDX and Firebase in crafting a generative AI mobile application. His session illuminated how developers can harness these tools to streamline full-stack, multiplatform app development directly from the browser, eliminating cumbersome local setups. Through a live demonstration, Prakhar showcased the creation of “Listed,” a Flutter-based app that leverages Google’s PaLM API to break down user-defined goals into actionable subtasks, offering a practical tool for task management. His engaging presentation, enriched with real-time coding, highlighted the synergy of cloud-based development environments and AI-driven solutions.
Introducing Project IDX: A Cloud-Based Development Revolution
Prakhar introduced Project IDX as a transformative cloud-based development environment designed to simplify the creation of multiplatform applications. Unlike traditional setups requiring hefty binaries like Xcode or Android Studio, Project IDX enables developers to work entirely in the browser. Prakhar demonstrated this by running Android and iOS emulators side-by-side within the browser, showcasing a Flutter app that compiles to multiple platforms—Android, iOS, web, Linux, and macOS—from a single codebase. This eliminates the need for platform-specific configurations, making development accessible even on lightweight devices like Chromebooks.
The live demo featured “Listed,” a mobile app where users input a goal, such as preparing for a tech talk, and receive AI-generated subtasks and tips. For instance, entering “give a tech talk at a conference” yielded steps like choosing a relevant topic and practicing the presentation, with a tip to have a backup plan for technical issues. Prakhar’s real-time tweak—changing the app’s color scheme from green to red—illustrated the iterative development flow, where changes are instantly reflected in the emulator, enhancing productivity and experimentation.
Harnessing the PaLM API for Generative AI
Central to the app’s functionality is Google’s PaLM API, which Prakhar utilized to integrate generative AI capabilities. He explained that large language models (LLMs), like those powering the PaLM API, act as sophisticated autocomplete systems, predicting likely text outputs based on extensive training data. For “Listed,” the text API was chosen for its suitability in single-turn interactions, such as generating subtasks from a user’s query. Prakhar emphasized the importance of crafting effective prompts, comparing a vague prompt like “the sky is” to a precise one like “complete the sentence: the sky is,” which yields more relevant results.
To enhance the AI’s output, Prakhar employed few-shot prompting, providing the model with examples of desired responses. For instance, for the query “go camping,” the prompt included sample subtasks like choosing a campsite and packing meals, along with a tip about wildlife safety. This structured approach ensured the model generated contextually accurate and actionable suggestions, making the app intuitive for users tackling complex tasks.
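The few-shot structure described above can be sketched in a few lines of Python (the instruction text, example content, and function name are illustrative stand-ins, not the app’s actual PromptBuilder):

```python
# Illustrative few-shot prompt assembly: worked examples are prepended so the
# model imitates their structure when answering the new query.
EXAMPLES = [
    ("go camping",
     "Subtasks: choose a campsite; plan and pack meals.\n"
     "Tip: store food securely to keep wildlife away."),
]

def build_prompt(query: str) -> str:
    parts = ["Break the goal into subtasks and finish with one tip."]
    for goal, answer in EXAMPLES:
        parts.append(f"Goal: {goal}\n{answer}")
    parts.append(f"Goal: {query}")  # the model completes this final block
    return "\n\n".join(parts)

print(build_prompt("give a tech talk at a conference"))
```

The model sees the completed "go camping" block and is strongly biased to answer the final, open block in the same subtasks-plus-tip format.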
Securing AI Integration with Firebase Extensions
Integrating the PaLM API into a mobile app poses security challenges, particularly around API key exposure. Prakhar addressed this by leveraging Firebase Extensions, which provide pre-packaged solutions to streamline backend integration. Specifically, he used a Firebase Extension to securely call the PaLM API via Cloud Functions, avoiding the need to embed sensitive API keys in the client-side Flutter app. This setup not only enhances security but also simplifies infrastructure management, as the extension handles logging, monitoring, and optional App Check for client verification.
In the live demo, Prakhar navigated the Firebase Extensions Marketplace, selecting the “Call PaLM API Securely” extension. With a few clicks, he deployed Cloud Functions that exposed a POST API for sending prompts and receiving AI-generated responses. The code walkthrough revealed a straightforward implementation in Dart, where the app constructs a JSON payload with the prompt, model name (text-bison-001), and temperature (0.25 for deterministic outputs), ensuring seamless and secure communication with the backend.
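The request body described above can be sketched as follows (a Python stand-in for the app’s Dart code; the field names follow the talk’s description and may not match the extension’s exact schema):

```python
import json

def build_request(prompt: str) -> str:
    # A low temperature biases the model toward deterministic completions.
    return json.dumps({
        "prompt": prompt,
        "model": "text-bison-001",
        "temperature": 0.25,
    })

print(build_request("complete the sentence: the sky is"))
```

The client only ever posts this payload to the Cloud Function; the API key stays server-side with the extension.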
Building the Flutter App: Simplicity and Collaboration
The Flutter app’s architecture, built within Project IDX, was designed for simplicity and collaboration. Prakhar walked through the main.dart file, which scaffolds the app’s UI with a material-themed interface, an input field for user queries, and a list to display AI-generated tasks. The app uses anonymous Firebase authentication to secure backend calls without requiring user logins, enhancing accessibility. A PromptBuilder class dynamically constructs prompts by combining predefined prefixes and examples, ensuring flexibility in handling varied user inputs.
Project IDX’s integration with Visual Studio Code’s open-source framework added collaborative features. Prakhar demonstrated how developers can invite colleagues to a shared workspace, enabling real-time collaboration. Additionally, the IDE’s AI capabilities allow users to explain selected code or generate new snippets, streamlining development. For instance, selecting the PromptBuilder class and requesting an explanation provided detailed insights into its parameters, showcasing how Project IDX enhances developer productivity.
Decoding Shazam: Unraveling Music Recognition Technology
This post delves into Moustapha AGACK’s Devoxx FR 2023 presentation, “Jay-Z, Maths and Signals! How to clone Shazam 🎧,” exploring the technology behind the popular song identification application, Shazam. AGACK shares his journey to understand and replicate Shazam’s functionality, explaining the core concepts of sound, signals, and frequency analysis.
Understanding Shazam’s Core Functionality
Moustapha AGACK begins by captivating the audience with a demonstration of Shazam’s seemingly magical ability to identify songs from brief audio snippets, often recorded in noisy and challenging acoustic environments. He emphasizes the robustness of Shazam’s identification process, noting its ability to function even with background conversations, ambient noise, or variations in recording quality. This remarkable capability sparked Moustapha’s curiosity as a developer, prompting him to embark on a quest to investigate the inner workings of the application.
Moustapha mentions that his exploration started with the seminal paper authored by Avery Wang, a co-founder of Shazam, which meticulously details the design and implementation of the Shazam algorithm. This paper, a cornerstone of music information retrieval, provides deep insights into the signal processing techniques, data structures, and search strategies employed by Shazam. However, Moustapha humorously admits to experiencing initial difficulty in fully grasping the paper’s complex mathematical formalisms and dense signal processing jargon. He acknowledges the steep learning curve associated with the field of digital signal processing, which requires a solid foundation in mathematics, physics, and computer science. Despite the initial challenges, Moustapha emphasizes the importance of visual aids within the paper, such as insightful graphs and illustrative spectrograms, which greatly aided his conceptual understanding and provided valuable intuition.
The Physics of Sound: A Deep Dive
Moustapha explains that sound, at its most fundamental level, is a mechanical wave phenomenon. It originates from the vibration of objects, which disturbs the surrounding air molecules. These molecules collide with their neighbors, transferring the energy of the vibration and causing a chain reaction that propagates the disturbance through the air as a wave. This wave travels through the air at a finite speed (approximately 343 meters per second at room temperature) and eventually reaches our ears, where it is converted into electrical signals that our brains interpret as sound.
These sound waves are typically represented mathematically as sinusoidal signals, also known as sine waves. A sine wave is a smooth, continuous, and periodic curve that oscillates between a maximum and minimum value. Two key properties characterize these signals: frequency and amplitude.
- Frequency is defined as the number of complete cycles of the wave that occur in one second, measured in Hertz (Hz). One Hertz is equivalent to one cycle per second. Frequency is the primary determinant of the perceived pitch of the sound. High-frequency waves correspond to high-pitched sounds (treble), while low-frequency waves correspond to low-pitched sounds (bass). For example, a sound wave oscillating at 440 Hz is perceived as the musical note A above middle C. The higher the frequency, the more rapidly the air molecules are vibrating, and the higher the perceived pitch.
- Amplitude refers to the maximum displacement of the wave from its equilibrium position. It is a measure of the wave’s intensity or strength and directly correlates with the perceived volume or loudness of the sound. A large amplitude corresponds to a loud sound, meaning the air molecules are vibrating with greater force, while a small amplitude corresponds to a quiet sound, indicating gentler vibrations.
Moustapha notes that the human auditory system possesses a limited range of frequency perception, typically spanning from 20 Hz to 20 kHz. This means that humans can generally hear sounds with frequencies as low as 20 cycles per second and as high as 20,000 cycles per second. However, it’s important to note that this range can vary slightly between individuals and tends to decrease with age, particularly at the higher frequency end. Furthermore, Moustapha points out that very high frequencies (above 2000 Hz) can often be perceived as unpleasant or even painful due to the sensitivity of the ear to rapid pressure changes.
Connecting Musical Notes and Frequencies
Moustapha draws a direct and precise relationship between musical notes and specific frequencies, a fundamental concept in music theory and acoustics. He uses the A440 standard as a prime example. The A440 standard designates the A note above middle C (also known as concert pitch) as having a frequency of exactly 440 Hz. This standard is crucial in music, as it provides a universal reference for tuning musical instruments, ensuring that musicians playing together are in harmony.
Moustapha elaborates on the concept of octaves, a fundamental concept in music theory and acoustics. An octave represents a doubling or halving of frequency. When the frequency of a note is doubled, it corresponds to the same note but one octave higher. Conversely, when the frequency is halved, it corresponds to the same note but one octave lower. This logarithmic relationship between pitch and frequency is essential for understanding musical scales, chords, and harmonies.
For instance:
- The A note in the octave below A440 has a frequency of 220 Hz (440 Hz / 2).
- The A note in the octave above A440 has a frequency of 880 Hz (440 Hz * 2).
This consistent doubling or halving of frequency for each octave creates a predictable and harmonious relationship between notes, which is exploited by Shazam’s algorithms to identify musical patterns and structures.
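The octave arithmetic above is easy to verify with a one-line function (a minimal sketch using the A440 reference from the talk):

```python
def a_note_frequency(octave_offset: int) -> float:
    """Frequency of the note A, `octave_offset` octaves away from A440."""
    return 440.0 * 2.0 ** octave_offset

print(a_note_frequency(-1))  # 220.0 Hz, one octave below concert A
print(a_note_frequency(0))   # 440.0 Hz, concert pitch
print(a_note_frequency(1))   # 880.0 Hz, one octave above
```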
The Complexity of Real-World Sound Signals
Moustapha emphasizes that real-world sound is significantly more complex than the idealized pure sine waves often used for basic explanations. Instead, real-world sound signals are typically composed of a superposition, or sum, of numerous sine waves, each with its own unique frequency, amplitude, and phase. These constituent sine waves interact with each other, through a process called interference, creating complex and intricate waveforms.
Furthermore, real-world sounds often contain harmonics, which are additional frequencies that accompany the fundamental frequency of a sound. The fundamental frequency is the lowest frequency component of a complex sound and is typically perceived as the primary pitch. Harmonics occur at integer multiples of the fundamental frequency; those above the fundamental are also known as overtones. For example, if the fundamental frequency is 440 Hz, the second harmonic (the first overtone) is 880 Hz (2 × 440 Hz), the third harmonic is 1320 Hz (3 × 440 Hz), and so on.
Moustapha illustrates this complexity with the example of a piano playing the A440 note. While the piano will produce a strong fundamental frequency at 440 Hz, it will simultaneously generate a series of weaker harmonic frequencies. These harmonics are not considered “noise” or “parasites” in the context of music; they are integral to the rich and distinctive sound of the instrument. The specific set of harmonics and their relative amplitudes, or strengths, are what give a piano its characteristic timbre, allowing us to distinguish it from a guitar, a flute, or other instruments playing the same fundamental note.
Moustapha further explains that the physical characteristics of musical instruments, such as the materials from which they are constructed (e.g., wood, metal), their shape and size, the way they produce sound (e.g., strings vibrating, air resonating in a tube), and the presence of resonance chambers, all significantly influence the production and relative intensities of these harmonics. For instance, a violin’s hollow body amplifies certain harmonics, creating its characteristic warm and resonant tone, while a trumpet’s brass construction and flared bell shape emphasize different harmonics, resulting in its bright and piercing sound. This is why a violin and a piano, or a trumpet and a flute, sound so different, even when playing the same fundamental pitch.
He also points out that the human voice is an exceptionally complex sound source. The vocal cords, resonance chambers in the throat and mouth, the shape of the oral cavity, and the position of the tongue and lips all contribute to the unique harmonic content and timbre of each individual’s voice. These intricate interactions make voice recognition and speech analysis challenging tasks, as the acoustic characteristics of speech can vary significantly between speakers and even within the same speaker depending on emotional state and context.
To further emphasize the difference between idealized sine waves and real-world sound, Moustapha contrasts the pure sine wave produced by a tuning fork (an instrument specifically designed to produce a nearly pure tone with minimal harmonics) with the complex waveforms generated by various musical instruments playing the same note. The tuning fork’s waveform is a smooth, regular sine wave, devoid of significant overtones, while the instruments’ waveforms are jagged, irregular, and rich in harmonic content, reflecting the unique timbral characteristics of each instrument.
Harnessing the Power of Fourier Transform
To effectively analyze these complex sound signals and extract the individual frequencies and their amplitudes, Moustapha introduces the Fourier Transform. He acknowledges Joseph Fourier, the renowned French mathematician and physicist (1768–1830), as the “father of signal theory” for his groundbreaking work in this area. Fourier’s mathematical insights revolutionized signal processing and have found applications in diverse fields far beyond audio analysis, including image compression (e.g., JPEG), telecommunications, medical imaging (e.g., MRI), seismology, and even quantum mechanics.
The Fourier Transform is presented as a powerful mathematical tool that decomposes any complex, time-domain signal into a sum of simpler sine waves, each with its own unique frequency, amplitude, and phase. In essence, it performs a transformation of the signal from the time domain, where the signal is represented as a function of time (i.e., amplitude versus time), to the frequency domain, where the signal is represented as a function of frequency (i.e., amplitude versus frequency). This transformation allows us to see the frequency content of the signal, revealing which frequencies are present and how strong they are.
Moustapha provides a simplified explanation of how the Fourier Transform works conceptually. He first illustrates how it would analyze pure sine waves. If the input signal is a single sine wave, the Fourier Transform will precisely identify the frequency of that sine wave and its amplitude. The output in the frequency domain will be a spike or peak at that specific frequency, with the height of the spike corresponding to the amplitude (strength) of the sine wave.
He then emphasizes that the true power and utility of the Fourier Transform become apparent when analyzing complex signals that are the sum of multiple sine waves. In this case, the Fourier Transform will decompose the complex signal into its individual sine wave components, revealing the presence, amplitude, and phase of each frequency. This is precisely the nature of real-world sound, which, as previously discussed, is a mixture of many frequencies and harmonics. By applying the Fourier Transform to an audio signal, it becomes possible to determine the constituent frequencies and their relative strengths, providing valuable information for music analysis, audio processing, and, crucially, song identification as used by Shazam.
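A toy version of this analysis fits in a few lines of standard-library Python: synthesize one second of a 440 Hz fundamental plus a half-amplitude 880 Hz harmonic, then measure each candidate frequency’s strength with a naive discrete Fourier transform (a direct, unoptimized sketch; real systems use the FFT):

```python
import math

def dft_magnitude(signal, k):
    """Magnitude of DFT bin k; with a one-second window, bin k is k Hz."""
    n = len(signal)
    re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
    im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
    return math.hypot(re, im)

rate = 4000  # samples per second (toy value, comfortably above 2 * 880 Hz)
signal = [
    math.sin(2 * math.pi * 440 * t / rate)          # fundamental, amplitude 1.0
    + 0.5 * math.sin(2 * math.pi * 880 * t / rate)  # harmonic, amplitude 0.5
    for t in range(rate)  # one second of samples
]

for freq in (440, 880, 1320):
    amplitude = dft_magnitude(signal, freq) / (rate / 2)  # normalize to amplitude
    print(f"{freq} Hz -> amplitude {amplitude:.2f}")
```

Because the window is exactly one second, bin k lines up with k Hz: the 440 Hz and 880 Hz components show up at their original amplitudes, while 1320 Hz, absent from the signal, measures near zero — precisely the frequency-domain picture the talk describes.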
Meet with Others: Tools for Speech
In a world increasingly dominated by digital communication and remote work, the ability to connect with others and speak confidently has become more challenging yet more valuable than ever. At Devoxx France 2023, Alex Casanova delivered an engaging workshop on overcoming the barriers to effective communication and public speaking, drawing from her extensive experience as an actress, trainer, and sophrologist.
The Importance of Human Connection
Alex began her presentation with an interactive exercise, asking the audience to identify what prevents people from speaking in public. The responses came quickly: shyness, lack of confidence, fear of judgment, feeling illegitimate, and the intimidation of speaking after someone more articulate has already spoken. She then asked what would help overcome these barriers: confidence, feeling safe, stress management, feedback, a supportive atmosphere, and practice.
“The development of digital technology, artificial intelligence, and social distancing following lockdowns has had an impact on human beings and our self-confidence,” Alex explained. “It increases fear—fear of going out, fear of approaching others, fear of not knowing what to say—because the professional world is demanding and always asks for more: more ideas, more spontaneity, more innovation.”
As a professional actress, trainer, and sophrologist, Alex shared that she too has experienced impostor syndrome and naturally tends toward introversion. Her life path consciously or unconsciously led her to theater, which provided tools to express herself better, feel comfortable in front of an audience, and create a space where she could be fully herself.
Understanding Communication Types
Alex outlined three types of communication we encounter:
- Interpersonal communication – Between two people, involving an emitter and a receiver
- Group communication – One person addressing a group, such as in presentations or conferences
- Mass communication – Multiple sources addressing large audiences through various channels
The workshop focused primarily on the first two types, which are most relevant to professional settings.
The Hero’s Journey to Better Communication
Alex framed the workshop as a hero’s journey where participants would face and overcome four challenges that prevent effective communication:
Challenge 1: Breaking Mental and Physical Isolation
The first monster to defeat is the fear of leaving our comfort zone. Alex guided the audience through a sophrological relaxation exercise focusing on:
- Posture awareness and alignment
- Square breathing technique (inhale, hold, exhale, hold)
- Visualization of a safe, comforting place
- Recalling a memory of personal excellence and confidence
This simple but powerful tool helps create grounding and calm, strengthen personal resources, gain perspective on emotions, and bring focus to the present moment.
Challenge 2: Public Speaking and Self-Confidence
The second challenge involves overcoming stage fright, anxiety, and various fears:
- Fear of not being understood
- Fear of being judged
- Fear of not being good enough
- Fear of losing composure
Alex demonstrated the “Victory V” posture—standing tall with arms raised in a V shape—based on Amy Cuddy’s research on body language and its influence on mental state. Maintaining this posture for 30 seconds releases hormones that boost confidence and create an optimistic, open mindset.
“Body language truly puts you in an attitude of openness,” Alex explained, contrasting it with closed postures associated with fear or sadness. She shared a personal anecdote of using this technique at a networking event where she felt out of place, which led to the event organizer approaching her and introducing her to others.
Challenge 3: Team Relationships and Quick Thinking
The third challenge addresses conflict avoidance, difficulty collaborating, lack of self-confidence, fear of not knowing what to say, viewing others as enemies, and fear of rejection.
Alex led the audience through a word association exercise:
- First individually, thinking of a word and making associations (e.g., bottle → alcohol → cocktail → vacation)
- Then collectively, with audience members building on each other’s associations
This simple activity immediately created engagement, spontaneity, and connection among strangers, demonstrating the philosophy of improvisation.
“Improv puts you in a state of play, exchange, meeting, letting go, and self-confidence,” Alex explained. She has used improvisation tools to help anesthesiologists improve their listening skills, multitasking abilities, and patient interaction, as well as with high-ranking military personnel who needed to develop active listening to communicate with civilians.
Challenge 4: Creativity and Innovation
The final challenge involves overcoming:
- Fear of failure
- Fear of not measuring up
- Fear of leaving one’s comfort zone
- Fear of not being original
As an exercise, Alex asked participants to list five positive adjectives about themselves, including one starting with the first letter of their name, and then say them aloud together.
This tool helps transform limiting beliefs into motivating ones, shifting from a closed to an open state, from procrastination to action.
The Virtuous Circle
Alex concluded by presenting the virtuous circle that replaces the vicious circle of self-doubt:
- I live, therefore I exist – Recognizing your inherent right to exist and take up space
- I recognize my qualities and experiences – Building on small successes
- I welcome errors as opportunities to learn – Seeing challenges as feedback rather than failure
- I reach my goals at my own pace – Bringing compassion and kindness to yourself
“It’s really up to you to be your best ally,” Alex emphasized.
Applying These Tools
Alex’s approach combines inspiration and action—balancing periods of calm, introspection, and theory with practice, simulation, and implementation. Her multidisciplinary background allows her to use theatrical improvisation, psychology, sophrology, and coaching to adapt to individual and corporate needs.
Her ultimate goal is to help people develop greater self-confidence and what psychologist Carl Rogers calls “congruence”—alignment and coherence between our thoughts, feelings, words, and actions. This authenticity creates empathy and acceptance of ourselves and others.
About Alex Casanova
Alex Casanova is an actress, trainer, and sophrologist who specializes in helping individuals develop confidence through experiential learning. Her multidisciplinary approach combines the performing arts, psychology, and therapeutic techniques to create personalized development pathways for both individuals and organizations.
Through her work, she aims to bring more humanity, respect, and tolerance into corporate environments by focusing on authentic communication and personal growth. Her “INSPIR’ACTION” methodology balances introspection with practical application to create sustainable behavioral change.
Navigating the Reactive Frontier: Oleh Dokuka’s Reactive Streams at Devoxx France 2023
On April 13, 2023, Oleh Dokuka commanded the Devoxx France stage with a 44-minute odyssey titled “From imperative to Reactive: the Reactive Streams adventure!” Delivered at Paris’s Palais des Congrès, Oleh, a reactive programming luminary, guided developers through the paradigm shift from imperative to reactive programming. Building on his earlier R2DBC talk, he unveiled the power of Reactive Streams, a specification for non-blocking, asynchronous data processing. His narrative was a thrilling journey, blending technical depth with practical insights, inspiring developers to embrace reactive systems for scalable, resilient applications.
Oleh began with a relatable scenario: a Java application overwhelmed by high-throughput data, such as a real-time analytics dashboard. Traditional imperative code, with its synchronous loops and blocking calls, buckles under pressure, leading to latency spikes and resource exhaustion. “We’ve all seen threads waiting idly for I/O,” Oleh quipped, his humor resonating with the audience. Reactive Streams, he explained, offer a solution by processing data asynchronously, using backpressure to balance producer and consumer speeds. Oleh’s passion for reactive programming set the stage for a deep dive into its principles, tools, and real-world applications.
Embracing Reactive Streams
Oleh’s first theme was the core of Reactive Streams: a specification for asynchronous stream processing with non-blocking backpressure. He introduced its four interfaces—Publisher, Subscriber, Subscription, and Processor—and their role in building reactive pipelines. Oleh likely demonstrated a simple pipeline using Project Reactor, a Reactive Streams implementation:
Flux.range(1, 100)
    .map(i -> processData(i))
    .subscribeOn(Schedulers.boundedElastic())
    .subscribe(System.out::println);
In this demo, a Flux emits numbers, processes them asynchronously, and prints results, all while respecting backpressure. Oleh showed how the Subscription controls data flow, preventing the subscriber from being overwhelmed. He contrasted this with imperative code, where a loop might block on I/O, highlighting reactive’s efficiency for high-throughput tasks like log processing or event streaming. The audience, familiar with synchronous Java, leaned in, captivated by the prospect of responsive systems.
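The request/supply handshake Oleh described can be sketched without any third-party library: since Java 9, the JDK ships the same four interfaces under java.util.concurrent.Flow, with SubmissionPublisher as a stock Publisher. The sketch below is ours, not from the talk, and the class name is illustrative; the subscriber requests exactly one item at a time, which is the backpressure mechanism in miniature:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;
import java.util.concurrent.TimeUnit;

public class BackpressureDemo {

    // Publishes 1..count and consumes them one at a time via request(1).
    static List<Integer> receiveAll(int count) throws InterruptedException {
        List<Integer> received = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>();
        publisher.subscribe(new Flow.Subscriber<Integer>() {
            private Flow.Subscription subscription;

            @Override public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1);            // backpressure: demand exactly one item
            }
            @Override public void onNext(Integer item) {
                received.add(item);
                subscription.request(1); // pull the next item only when ready
            }
            @Override public void onError(Throwable t) { done.countDown(); }
            @Override public void onComplete()          { done.countDown(); }
        });

        for (int i = 1; i <= count; i++) {
            publisher.submit(i);         // blocks if the subscriber's buffer fills up
        }
        publisher.close();               // signals onComplete once pending items drain
        done.await(5, TimeUnit.SECONDS);
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(receiveAll(5)); // prints [1, 2, 3, 4, 5]
    }
}
```

Reactor’s Flux implements these same interfaces, so the demand-driven flow shown here is exactly what happens inside the Flux pipeline above, just without the operator vocabulary.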
Building Reactive Applications
Oleh’s narrative shifted to practical application, his second theme. He explored integrating Reactive Streams with Spring WebFlux, a reactive web framework. In a demo, Oleh likely built a REST API handling thousands of concurrent requests, using Mono and Flux for non-blocking responses:
@GetMapping("/events")
Flux<Event> getEvents() {
    return eventService.findAll();
}
This API, running on Netty and leveraging virtual threads (echoing José Paumard’s talk), scaled effortlessly under load. Oleh emphasized backpressure strategies, such as onBackpressureBuffer(), to manage fast producers. He also addressed error handling, showing how onErrorResume() ensures resilience in reactive pipelines. For microservices or event-driven architectures, Oleh argued, Reactive Streams enable low-latency, resource-efficient systems, a must for cloud-native deployments.
Oleh shared real-world examples, noting how companies like Netflix use Reactor for streaming services. He recommended starting with small reactive components, such as a single endpoint, and monitoring performance with tools like Micrometer. His practical advice—test under load, tune buffer sizes—empowered developers to adopt reactive programming incrementally.
Reactive in the Ecosystem
Oleh’s final theme was Reactive Streams’ role in Java’s ecosystem. Libraries like Reactor, RxJava, and Akka Streams implement the specification, while frameworks like Spring Boot 3 integrate reactive data access via R2DBC (from his earlier talk). Oleh highlighted compatibility with databases like MongoDB and Kafka, ideal for reactive pipelines. He likely demonstrated a reactive Kafka consumer, processing messages with backpressure:
KafkaReceiver.create(receiverOptions)
    .receive()
    .flatMap(record -> processRecord(record))
    .subscribe();
This demo showcased seamless integration, reinforcing reactive’s versatility. Oleh urged developers to explore Reactor’s documentation and experiment with Spring WebFlux, starting with a prototype project. He cautioned about debugging challenges, suggesting tools like BlockHound to detect blocking calls. Looking ahead, Oleh envisioned reactive systems dominating data-intensive applications, from IoT to real-time analytics.
As the session closed, Oleh’s enthusiasm sparked hallway discussions about reactive programming’s potential. Developers left with a clear path: build a reactive endpoint, integrate with Reactor, and measure scalability. Oleh’s adventure through Reactive Streams was a testament to Java’s adaptability, inspiring a new era of responsive, cloud-ready applications.
[DevoxxFR 2023] Tests, an Investment for the Future: Building Reliable Software
Introduction
In “Les tests, un investissement pour l’avenir,” presented at Devoxx France 2023, Julien Deniau, a developer at Amadeus, champions software testing as a cornerstone of sustainable development. This 14-minute quickie draws from his work on airline reservation systems, where reliability is non-negotiable. Deniau’s passionate case for testing offers developers practical strategies to ensure code quality while accelerating delivery.
Key Insights
Deniau frames testing as an investment, not a cost, emphasizing its role in preventing regressions and enabling fearless refactoring. At Amadeus, where systems handle billions of transactions annually, comprehensive tests are critical. He outlines a testing pyramid:
- Unit Tests: Fast, isolated tests for individual components, forming the pyramid’s base.
- Integration Tests: Validate interactions between modules, such as APIs and databases.
- End-to-End Tests: Simulate user journeys, used sparingly due to complexity.
Deniau shares a case study of refactoring a booking system, where a robust test suite allowed the team to rewrite critical components without introducing bugs. He advocates for Test-Driven Development (TDD) to clarify requirements before coding and recommends tools like JUnit and Cucumber for Java-based projects. The talk also addresses cultural barriers, such as convincing stakeholders to allocate time for testing, achieved by demonstrating reduced maintenance costs.
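The base of the pyramid can be illustrated with a self-contained sketch. The fare rule and class names below are hypothetical, not Amadeus code, and plain assert statements stand in for the JUnit assertions Deniau recommends so the example needs no dependencies:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class FareCalculatorTest {

    // Hypothetical unit under test: a pure pricing rule, trivial to test in isolation.
    static BigDecimal totalFare(BigDecimal base, BigDecimal taxRate) {
        if (base.signum() < 0) throw new IllegalArgumentException("negative fare");
        return base.add(base.multiply(taxRate))
                   .setScale(2, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        // Base-of-the-pyramid tests: fast, no I/O, one behaviour per check.
        assert totalFare(new BigDecimal("100"), new BigDecimal("0.20"))
                .equals(new BigDecimal("120.00"));
        assert totalFare(BigDecimal.ZERO, new BigDecimal("0.20"))
                .equals(new BigDecimal("0.00"));
        System.out.println("all fare tests passed");
    }
}
```

Because the rule is a pure function with no database or network dependency, hundreds of such tests run in milliseconds, which is what makes the base of the pyramid cheap to keep wide.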
Lessons Learned
Deniau’s talk provides key takeaways:
- Test Early, Test Often: Writing tests upfront saves time during debugging and refactoring.
- Balance the Pyramid: Prioritize unit tests for speed, but don’t neglect integration tests.
- Sell Testing: Highlight business benefits, like faster delivery and fewer outages, to gain buy-in.
These insights are crucial for teams in high-stakes industries or those struggling with legacy code. Deniau’s enthusiasm makes testing feel like an empowering tool rather than a chore.
Conclusion
Julien Deniau’s quickie reframes testing as a strategic asset for building reliable, maintainable software. His Amadeus experience underscores the long-term value of a disciplined testing approach. This talk is a must-watch for developers seeking to future-proof their codebases.
[DevoxxFR 2023] Hexagonal Architecture in 15 Minutes: Simplifying Complex Systems
Introduction
Julien Topçu, a tech lead at LesFurets, delivers a concise yet powerful Devoxx France 2023 quickie titled “L’architecture hexagonale en 15 minutes.” In this 17-minute talk, Topçu introduces hexagonal architecture (also known as ports and adapters) as a solution for building maintainable, testable systems. Drawing from his experience at LesFurets, a French insurance comparison platform, he provides a practical guide for developers navigating complex codebases.
Key Insights
Topçu explains hexagonal architecture as a way to decouple business logic from external systems, like databases or APIs. At LesFurets, where rapid feature delivery is critical, this approach reduced technical debt and improved testing. The architecture organizes code into:
- Core Business Logic: Pure functions or classes that handle the application’s rules.
- Ports: Interfaces defining interactions with the outside world.
- Adapters: Implementations of ports, such as database connectors or HTTP clients.
Topçu shares a refactoring example, where a tightly coupled insurance quote system was restructured. By isolating business rules in a core module, the team simplified unit testing and swapped out a legacy database without changing the core logic. He highlights tools like Java’s interfaces and Spring’s dependency injection to implement ports and adapters efficiently. The talk also addresses trade-offs, such as the initial overhead of defining ports, balanced by long-term flexibility.
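The three layers can be sketched in a few lines of Java, using a plain interface for the port and Java’s own constructor injection in place of Spring’s. All names here are illustrative, not LesFurets code; swapping the in-memory adapter for a JDBC or HTTP one would leave the core untouched, which is the whole point of the pattern:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class HexagonDemo {

    // Port: the core's only view of the outside world.
    interface QuoteRepository {
        Optional<Double> findBasePremium(String profile);
    }

    // Core business logic: depends on the port, never on a concrete database.
    static class QuoteService {
        private final QuoteRepository repository;
        QuoteService(QuoteRepository repository) { this.repository = repository; }

        double quoteFor(String profile) {
            double base = repository.findBasePremium(profile)
                    .orElseThrow(() -> new IllegalArgumentException("unknown profile"));
            return base + 25.0; // toy business rule: flat service fee
        }
    }

    // Adapter: one implementation of the port, easy to swap for a real database.
    static class InMemoryQuoteRepository implements QuoteRepository {
        private final Map<String, Double> premiums = new HashMap<>();
        InMemoryQuoteRepository() { premiums.put("young-driver", 500.0); }

        @Override public Optional<Double> findBasePremium(String profile) {
            return Optional.ofNullable(premiums.get(profile));
        }
    }

    public static void main(String[] args) {
        QuoteService service = new QuoteService(new InMemoryQuoteRepository());
        System.out.println(service.quoteFor("young-driver")); // prints 525.0
    }
}
```

In a unit test, an in-memory adapter like this replaces the real database, which is how the pattern delivers the testability Topçu emphasizes.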
Lessons Learned
Topçu’s insights are actionable:
- Decouple Early: Separating business logic prevents future refactoring pain.
- Testability First: Hexagonal architecture enables comprehensive unit tests without mocks.
- Start Small: Apply the pattern incrementally to avoid overwhelming teams.
These lessons resonate with developers maintaining evolving systems or adopting Domain-Driven Design. Topçu’s clear explanations make hexagonal architecture accessible even to newcomers.
Conclusion
Julien Topçu’s quickie offers a masterclass in hexagonal architecture, proving its value in real-world applications. His LesFurets example shows how to build systems that are robust yet adaptable. This talk is essential for developers aiming to create clean, maintainable codebases.
Event Sourcing Without a Framework: A Practical Approach
Introduction
In his Devoxx France 2023 quickie, “Et si on faisait du Event Sourcing sans framework ?”, Jonathan Lermitage, a developer at Worldline, challenges the reliance on complex frameworks for event sourcing. This 17-minute talk explores how his team implemented event sourcing from scratch to meet the needs of a payment processing system. Lermitage’s practical approach, grounded in Worldline’s high-stakes environment, offers developers a clear path to adopting event sourcing without overwhelming dependencies.
Key Insights
Lermitage begins by explaining event sourcing, where application state is derived from a sequence of events rather than a static database. At Worldline, which processes millions of transactions daily, event sourcing ensures auditability and resilience. However, frameworks like Axon or EventStore introduced complexity that clashed with the team’s need for simplicity and control.
Instead, Lermitage’s team built a custom solution using:
- PostgreSQL for Event Storage: Storing events as JSON objects in a single table, with indexes for performance.
- Kafka for Event Streaming: Ensuring scalability and real-time processing.
- Java for Business Logic: Simple classes to handle event creation, storage, and replay.
He shares a case study of tracking payment statuses, where events like PaymentInitiated or PaymentConfirmed formed an auditable trail. Lermitage emphasizes minimalism, avoiding over-engineered patterns and focusing on readable code. The talk also covers challenges, such as managing event schema evolution and ensuring idempotency during replays, solved with versioned events and unique identifiers.
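A stripped-down sketch of the approach in plain Java, with a versioned event record and an idempotency guard for replays. The names are illustrative, not Worldline’s code, and an in-memory list stands in for their PostgreSQL-backed log; state is never stored directly, only derived by replaying events:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class EventSourcingDemo {

    // Each event carries a unique id (for idempotent replays) and a schema version.
    record PaymentEvent(String eventId, int version, String type, String paymentId) {}

    static class EventStore {
        private final List<PaymentEvent> log = new ArrayList<>(); // append-only
        private final Set<String> seenIds = new HashSet<>();      // idempotency guard

        void append(PaymentEvent event) {
            if (seenIds.add(event.eventId())) { // ignore duplicates on redelivery
                log.add(event);
            }
        }

        // Current state is derived by replaying the full event history.
        String statusOf(String paymentId) {
            String status = "UNKNOWN";
            for (PaymentEvent e : log) {
                if (e.paymentId().equals(paymentId)) {
                    status = switch (e.type()) {
                        case "PaymentInitiated" -> "PENDING";
                        case "PaymentConfirmed" -> "CONFIRMED";
                        default -> status;
                    };
                }
            }
            return status;
        }
    }

    public static void main(String[] args) {
        EventStore store = new EventStore();
        store.append(new PaymentEvent("e1", 1, "PaymentInitiated", "pay-42"));
        store.append(new PaymentEvent("e2", 1, "PaymentConfirmed", "pay-42"));
        store.append(new PaymentEvent("e2", 1, "PaymentConfirmed", "pay-42")); // duplicate, ignored
        System.out.println(store.statusOf("pay-42")); // prints CONFIRMED
    }
}
```

The version field does no work in this toy example, but carrying it from day one is what lets upcasting logic be added later without rewriting stored events, which is the schema-evolution strategy Lermitage describes.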
Lessons Learned
Lermitage’s experience offers key takeaways:
- Keep It Simple: Avoid frameworks if your use case demands lightweight solutions.
- Prioritize Auditability: Event sourcing shines in systems requiring traceability, like payments.
- Plan for Evolution: Design events with versioning in mind to handle future changes.
These insights are valuable for developers in regulated industries or those wary of framework lock-in. Lermitage’s focus on practicality makes event sourcing approachable for teams of varying expertise.
Conclusion
Jonathan Lermitage’s talk demystifies event sourcing by showing how to implement it without heavy frameworks. His Worldline case study proves that simplicity and control can coexist in complex systems. This quickie is a must-watch for developers seeking flexible, auditable architectures.
“A monolith, or nothing!”: Embracing the Monolith at Ornikar
Introduction
In “Un monolithe sinon rien,” presented at Devoxx France 2023, Nicolas Demengel, a tech lead at Ornikar, makes a bold case for sticking with a monolithic architecture. In this 14-minute quickie, Demengel challenges the microservices trend, arguing that a well-structured monolith can be a powerful choice for startups like Ornikar, a French online driving school platform. His talk offers a refreshing perspective for developers weighing architectural trade-offs.
Key Insights
Demengel begins by acknowledging the allure of microservices: scalability, independence, and modern appeal. However, he argues that for Ornikar, a monolith provided simplicity and speed during rapid growth. The talk details Ornikar’s architecture, where a single Ruby on Rails application handles everything from user onboarding to payment processing. This centralized approach reduced complexity for a small team, enabling faster feature delivery.
Demengel shares how Ornikar maintains its monolith’s health through rigorous testing and modular design. He highlights practices like domain-driven boundaries within the codebase to prevent spaghetti code. The talk also addresses scaling challenges, such as handling increased traffic during peak enrollment periods, which Ornikar solved with database optimizations rather than a microservices overhaul.
Lessons Learned
Demengel’s talk offers practical takeaways:
- Simplicity First: A monolith can accelerate development for startups with limited resources.
- Discipline Matters: Modular design and testing keep a monolith maintainable.
- Context is Key: Architectural choices should align with team size, expertise, and business goals.
These insights are valuable for startups and small teams evaluating whether to follow industry trends or stick with simpler solutions. Demengel’s pragmatic approach encourages developers to prioritize outcomes over dogma.
Conclusion
Nicolas Demengel’s “Un monolithe sinon rien” is a thought-provoking defense of the monolith in an era dominated by microservices hype. By sharing Ornikar’s success story, Demengel inspires developers to make context-driven architectural decisions. This talk is a must-watch for teams navigating the monolith vs. microservices debate.
Navigating the Challenges of Legacy Systems
Introduction
In her Devoxx France 2023 quickie, “Votre pire cauchemar : être responsable du legacy,” Camille Pillot, a consultant at Takima, tackles the daunting reality of managing legacy code. With humor and pragmatism, Pillot shares strategies for transforming legacy systems from a developer’s nightmare into an opportunity for growth. This 14-minute talk, rooted in her experience at Takima, a consultancy specializing in software modernization, offers actionable advice for developers tasked with maintaining aging codebases.
Key Insights
Pillot opens by defining legacy code as software that’s critical yet outdated, often poorly documented and resistant to change. She draws from her work at Takima, where teams frequently inherit complex systems. The talk outlines a three-step approach to managing legacy:
- Assessment: Understand the system’s architecture and dependencies, using tools like code audits and dependency graphs.
- Stabilization: Implement tests and monitoring to prevent regressions, even if the code remains brittle.
- Modernization: Gradually refactor or rewrite components, prioritizing high-impact areas.
Pillot shares a case study from a Takima project, where a legacy e-commerce platform was stabilized by introducing unit tests, then partially refactored to improve performance. She emphasizes the importance of stakeholder buy-in, as modernization efforts often require time and budget. The talk also addresses the emotional toll of legacy work, encouraging developers to find value in incremental improvements.
Lessons Learned
Pillot’s insights are a lifeline for developers facing legacy challenges:
- Start Small: Small, targeted improvements build momentum and trust.
- Communicate Value: Articulate the business benefits of modernization to secure resources.
- Embrace Patience: Legacy work is a marathon, not a sprint, requiring resilience.
These strategies are particularly relevant for consultancy roles, where developers must balance technical debt with client expectations. Pillot’s empathetic approach makes the talk relatable and inspiring.
Conclusion
Camille Pillot’s talk transforms the fear of legacy code into a call to action. By offering a clear framework and real-world examples, she empowers developers to tackle legacy systems with confidence. This quickie is essential viewing for anyone navigating the complexities of maintaining critical but outdated software.