Archive for the ‘General’ Category
[DefCon32] Defeating EDR-Evading Malware with Memory Forensics
Andrew Case, a core developer on the Volatility memory analysis project and Director of Research at V-Soft Consulting, joined colleagues Sellers and Richard to present a groundbreaking session at DEF CON 32. Their talk focused on new memory forensics techniques to detect malware that evades Endpoint Detection and Response (EDR) systems. Andrew and his team developed plugins for Volatility 3, addressing sophisticated bypass techniques like direct system calls and malicious exception handlers. Their work, culminating in a comprehensive white paper, offers practical solutions for countering advanced malware threats.
The Arms Race with EDR Systems
Andrew opened by outlining the growing prominence of EDR systems, which perform deep system inspections to detect malware beyond traditional antivirus capabilities. However, malware developers have responded with advanced evasion techniques, such as code injection and manipulation of debug registers, fueling an ongoing arms race. Andrew’s research at V-Soft Consulting focuses on analyzing these techniques during incident response, revealing how attackers exploit low-level hardware and software components to bypass EDR protections, as seen in high-profile ransomware attacks.
New Memory Forensics Techniques
Delving into their research, Andrew detailed the development of Volatility 3 plugins to detect EDR bypasses. These plugins target techniques like direct and indirect system calls, module overwriting, and abuse of exception handlers. By enumerating handlers and applying static disassembly, their tools identify malicious processes generically, even when attackers tamper with functions like AMSI’s scan buffer. Andrew highlighted a plugin that detects patched AMSI functions and catches both vectored exception handler and debug register abuses, ensuring EDRs cannot be fooled by malicious PowerShell or macros.
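For orientation, this is what running Volatility 3 against a memory sample looks like with two of its standard plugins. The talk’s new EDR-bypass plugins are distributed alongside the white paper, so only stock plugin names are shown here; the sample file name is illustrative.

```shell
# Standard Volatility 3 invocations against a captured memory image:
vol -f memory.dmp windows.pslist    # enumerate running processes
vol -f memory.dmp windows.malfind   # flag injected or suspicious code regions
```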
Practical Applications and Detection
The team’s plugins enable real-time detection of EDR-evading malware, providing defenders with actionable insights. Andrew demonstrated how their tools identify suspicious behaviors, such as breakpoints set on critical functions, allowing malicious code to execute undetected. He emphasized the importance of their 19-page white paper, available on the DEF CON website, which documents every known EDR bypass technique in userland. This resource, combined with the open-source plugins, empowers security professionals to strengthen their defenses against sophisticated threats.
Empowering the Cybersecurity Community
Concluding, Andrew encouraged attendees to explore the released plugins and white paper, which include 40 references for in-depth understanding. He stressed the collaborative nature of their work, inviting feedback to refine the Volatility framework. By sharing these tools, Andrew and his team aim to equip defenders with the means to counter evolving malware, ensuring EDR systems remain effective. Their session underscored the critical role of memory forensics in staying ahead of attackers in the cybersecurity landscape.
Links:
[NDCMelbourne2025] A Look At Modern Web APIs You Might Not Know – Julian Burr
As web technologies evolve, the capabilities of browsers have expanded far beyond their traditional roles, often making native applications unnecessary for certain functionalities. Julian Burr, a front-end engineer with a passion for design systems, delivers an engaging exploration of modern web APIs at NDC Melbourne 2025. Through his demo application, stopme.io—a stopwatch-as-a-service platform—Julian showcases how these APIs can enhance user experiences while adhering to the principle of progressive enhancement. His talk illuminates the power of web APIs to bridge the gap between web and native app experiences, offering practical insights for developers.
The Philosophy of Progressive Enhancement
Julian begins by championing progressive enhancement, a design philosophy that ensures baseline functionality for all users while delivering enhanced experiences for those with modern browsers. Quoting Mozilla, he defines it as providing essential content to as many users as possible while optimizing for advanced environments. This approach is critical when integrating web APIs, as it prevents over-reliance on features that may not be universally supported. For instance, in stopme.io, Julian ensures core stopwatch functionality remains accessible, with APIs adding value only when available. This philosophy guides his exploration, ensuring inclusivity and robustness in application design.
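The pattern Julian describes can be sketched as a tiny helper: always run the baseline, and layer the enhancement on only when the capability exists. The helper and the Vibration check below are illustrative, not code from stopme.io.

```javascript
// Minimal progressive-enhancement sketch: the baseline always runs, the
// enhancement only when the browser exposes the capability.
function withEnhancement(baseline, featureAvailable, enhancement) {
  const result = baseline();
  if (featureAvailable) enhancement(result);
  return result;
}

// Example: the stopwatch always renders elapsed time as text (baseline);
// haptic feedback is layered on only where the Vibration API exists.
const display = withEnhancement(
  () => "00:42",                                              // baseline
  typeof navigator !== "undefined" && "vibrate" in navigator, // feature guard
  () => navigator.vibrate(200)                                // enhancement
);
```

In an environment without the Vibration API, `display` is still the baseline text, which is exactly the inclusivity guarantee the philosophy asks for.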
Observing the Web: Resize and Intersection Observers
The first category Julian explores is observability APIs, starting with the Resize Observer and Intersection Observer. These APIs, widely supported, allow developers to monitor changes in DOM element sizes and visibility within the viewport. In stopme.io, Julian uses the Intersection Observer to load JavaScript chunks only when components become visible, optimizing performance. While CSS container queries address styling needs, these APIs enable dynamic behavioral changes, making them invaluable for frameworks like Astro that rely on code splitting. Julian emphasizes their relevance, as they underpin many modern front-end optimizations, enhancing user experience without compromising accessibility.
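A sketch of that lazy-loading pattern (structure assumed, not stopme.io’s actual code): the pure `entriesToLoad` helper keeps the decision logic testable outside a browser, while the wiring shows the Intersection Observer call itself.

```javascript
// Decide which observed entries should trigger a chunk load: visible targets
// that have not been loaded yet.
function entriesToLoad(entries, alreadyLoaded) {
  return entries
    .filter((e) => e.isIntersecting && !alreadyLoaded.has(e.target))
    .map((e) => e.target);
}

// Browser wiring (requires a DOM; shown for illustration):
function observeLazy(elements, loadChunk) {
  const loaded = new Set();
  const observer = new IntersectionObserver((entries) => {
    for (const target of entriesToLoad(entries, loaded)) {
      loaded.add(target);
      loadChunk(target);          // e.g. a dynamic import() of the component
      observer.unobserve(target); // one-shot: stop watching once loaded
    }
  });
  elements.forEach((el) => observer.observe(el));
  return observer;
}
```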
Enhancing User Context: Network and Battery Status
Julian then delves into APIs that provide contextual awareness, such as the Page Visibility API, Network Information API, and Battery Status API. The Page Visibility API allows stopme.io to update the browser title bar with the timer status when the tab is inactive, enabling users to multitask. The Network Information API offers insights into connection types, allowing developers to serve lower-resolution assets on cellular networks. Similarly, the Battery Status API warns users of potential disruptions due to low battery, as demonstrated when stopme.io alerts users about long-running timers. Julian cautions about fingerprinting risks, noting that browser vendors intentionally reduce accuracy to protect privacy, aligning with progressive enhancement principles.
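The title-bar trick can be sketched as follows. The helper names and title format are hypothetical, not stopme.io’s actual code; the Battery Status call follows the spec’s promise-based shape and is guarded because support is Chromium-centric.

```javascript
// Pure helper: what the document title should show for a given tab state.
function titleFor(hidden, elapsed, appName = "stopme.io") {
  return hidden ? `(${elapsed}) ${appName}` : appName;
}

// Page Visibility wiring (needs a DOM):
function mirrorTimerInTitle(getElapsed) {
  document.addEventListener("visibilitychange", () => {
    document.title = titleFor(document.hidden, getElapsed());
  });
}

// Battery Status API with a feature guard; returns whether a warning is due.
async function shouldWarnLowBattery(threshold = 0.15) {
  if (typeof navigator === "undefined" || !("getBattery" in navigator)) return false;
  const battery = await navigator.getBattery();
  return !battery.charging && battery.level < threshold;
}
```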
Elevating Components: Screen Wake Lock and Vibration
Moving to component enhancement, Julian highlights the Screen Wake Lock and Vibration APIs. The Screen Wake Lock API prevents devices from entering sleep mode during critical tasks, such as keeping stopme.io’s timer visible. The Vibration API adds haptic feedback, like notifying users when a timer finishes, with customizable patterns for engaging effects. Julian stresses user control, suggesting toggles to avoid intrusive experiences. While these APIs—often Chrome-centric—enhance interactivity, Julian underscores the need for fallback options to maintain functionality across browsers, ensuring no user is excluded.
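Both APIs are small enough to sketch directly; the wake-lock shape follows the spec, while the pattern-building helper is hypothetical.

```javascript
// Screen Wake Lock with a feature guard. The returned sentinel's release()
// method ends the lock; the request can be rejected (e.g. page hidden), so
// failure falls back to doing nothing.
async function keepScreenAwake() {
  if (typeof navigator === "undefined" || !("wakeLock" in navigator)) return null;
  try {
    return await navigator.wakeLock.request("screen");
  } catch {
    return null;
  }
}

// Vibration patterns alternate vibrate/pause durations in milliseconds.
// Hypothetical helper: n short buzzes separated by short pauses.
function timerFinishedPattern(beeps = 3) {
  return Array.from({ length: beeps * 2 - 1 }, (_, i) => (i % 2 === 0 ? 200 : 100));
}

// In a browser: navigator.vibrate(timerFinishedPattern());
```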
Native-Like Experiences: File System, Clipboard, and Share APIs
Julian showcases APIs that rival native app capabilities, including the File System, Clipboard, and Share APIs. The File System API enables file picker interactions, while the Clipboard API facilitates seamless copy-paste operations. The Share API, used in stopme.io, triggers native sharing dialogs, simplifying content distribution across platforms. These APIs, inspired by tools like Cordova, reflect the web’s evolution toward native-like functionality. Julian notes their security mechanisms, such as transient activation for Clipboard API, which require user interaction to prevent misuse, ensuring both usability and safety.
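A sketch of the share-with-clipboard-fallback flow: the decision is factored into a pure function over a navigator-like object so it can be tested outside a browser. Names are illustrative; both APIs require transient activation, so the wiring belongs in a click handler.

```javascript
// Decide how to hand content to the user: native share sheet, clipboard copy,
// or a manual fallback when neither API is available.
function sharePlan(nav, data) {
  if (nav && typeof nav.share === "function" && (!nav.canShare || nav.canShare(data))) {
    return "share";
  }
  if (nav && nav.clipboard && typeof nav.clipboard.writeText === "function") {
    return "copy";
  }
  return "manual";
}

// Browser wiring; call from a user-gesture handler:
async function shareOrCopy(data) {
  const plan = sharePlan(typeof navigator !== "undefined" ? navigator : undefined, data);
  if (plan === "share") await navigator.share(data);
  else if (plan === "copy") await navigator.clipboard.writeText(data.url ?? data.text ?? "");
  return plan;
}
```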
Streamlining Authentication: Web OTP and Credential Management
Authentication APIs, such as Web OTP and Credential Management, offer seamless login experiences. The Web OTP API automates SMS-based one-time password entry, as demonstrated in stopme.io, where Chrome facilitates code sharing across devices. The Credential Management API streamlines password storage and retrieval, reducing login friction. Julian highlights their synergy with the Web Authentication API, which supports passwordless logins via biometrics. These APIs, widely available, enhance security and user convenience, making them essential for modern web applications.
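The Web OTP API relies on origin-bound SMS messages whose last line takes the form `@origin #code`. The browser does this parsing itself; the helper below only illustrates the format, and the wiring shows the actual `navigator.credentials.get` call (Chromium on Android, paired with an `<input autocomplete="one-time-code">`).

```javascript
// Illustrative parser for the origin-bound OTP SMS format, e.g. a message
// ending in "@stopme.io #123456".
function parseOtpSms(message, origin) {
  const match = message.match(/@(\S+) #(\d+)\s*$/);
  return match && match[1] === origin ? match[2] : null;
}

// Actual API usage with a feature guard:
async function fillOtp(input) {
  if (typeof OTPCredential === "undefined") return; // unsupported browser
  const cred = await navigator.credentials.get({ otp: { transport: ["sms"] } });
  if (cred) input.value = cred.code;
}
```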
Links:
[OxidizeConf2024] C++ Migration Strategies
Bridging Legacy and Modern Systems
The push for memory-safe software has highlighted the limitations of C++ in today’s data-driven landscape, prompting organizations to explore Rust as a modern alternative. At OxidizeConf2024, Til Adam from KDAB and Florian Gilcher from Ferrous Systems presented a strategic approach to migrating from C++ to Rust, emphasizing an “Aloha” philosophy of compassion and pragmatism. With decades of experience in C++ development, Til and Florian offered insights into integrating Rust into existing codebases without requiring a full rewrite, ensuring efficiency and safety.
Drawing an analogy to music, Til compared C++ to a complex electric guitar and Rust to a simpler, joy-inducing ukulele. While C++ remains prevalent in legacy systems, its complexity can lead to errors, particularly in concurrency and memory management. Rust’s memory safety and modern features address these issues, but Florian stressed that migration must be practical, leveraging the C Application Binary Interface (C ABI) as a lingua franca for interoperability. This allows Rust and C++ to coexist, enabling incremental adoption in existing projects.
Practical Migration Techniques
The speakers outlined a multi-level migration strategy, starting with low-level integration using the C ABI. This involves wrapping C++ code in extern C functions to interface with Rust, a process Florian described as effective but limited by platform-specific complexities. Tools like cxx and autocxx simplify this by automating bindings, though challenges remain with templates and generics. Til shared examples from KDAB’s projects, where Rust was integrated into C++ codebases for specific components, reducing security risks without disrupting existing functionality.
At a higher level, Florian advocated for a component-based approach, where Rust modules replace C++ components in areas like concurrency or security-critical code. This strategy maximizes Rust’s benefits, such as its borrow checker, while preserving investments in C++ code. The speakers emphasized identifying high-value areas—such as modules prone to concurrency issues—where Rust’s safety guarantees provide immediate benefits, aligning with organizational goals like regulatory compliance and developer productivity.
Balancing Innovation and Pragmatism
Migration decisions are driven by a balance of enthusiasm, regulatory pressure, and resource constraints. Til noted that younger developers often champion Rust for its modern features, while executives respond to mandates like the NSA’s push for memory-safe languages. However, budget and time limitations necessitate a pragmatic approach, focusing on areas where Rust delivers significant value. Florian highlighted successful migrations at KDAB and Ferrous Systems, where Rust was adopted for new features or rewrites of problematic components, improving safety and maintainability.
The speakers also addressed challenges, such as the lack of direct support for C++ templates in Rust. While tools like autocxx show promise, ongoing community efforts are needed to enhance interoperability. By sharing best practices and real-world examples, Til and Florian underscored Rust’s potential to modernize legacy systems, fostering a collaborative approach to innovation that respects existing investments while embracing future-ready technologies.
Links:
[DotJs2024] Your App Crashes My Browser
Amid the ceaseless churn of web innovation, a stealthy saboteur lurks: memory leaks that silently erode browser vitality, culminating in the dreaded “Out of Memory” epitaph. Stoyan Stefanov, a trailblazing entrepreneur and web performance sage with roots at Facebook, confronted this scourge at dotJS 2024. Once a fixture in optimizing vast social feeds, Stoyan transitioned from crisis aversion—hard reloads post-navigation—to empowerment through diagnostics. His manifesto: arm developers with telemetry and sleuthing to exorcise these phantoms, ensuring apps endure without devouring RAM.
Stoyan’s alarm rang true via Nolan Lawson’s audit: a decade’s top SPAs unanimously hemorrhaged memory, underscoring leaks’ ubiquity even among elite codebases. Personal scars abounded—from a social giant’s browser-crushing sprawl, mitigated by crude resets—to the thrill of unearthing culprits sans autopsy. The panacea commences with candor: the Reporting API, a beacon flagging crashes in the wild, piping diagnostics to endpoints for pattern mining. This passive vigilance—triggered on OOM—unmasks field frailties, from rogue closures retaining DOM vestiges to event sentinels orphaned post-unmount.
Diagnosis demands ritual: heap snapshots bracketing actions, GC sweeps purifying baselines, diffs revealing retainers. Stoyan evangelized Memlab, Facebook’s CLI oracle, automating Puppeteer-driven cycles—load, act, revert—yielding lucid diffs: “1,000 objects via EventListener cascade.” For the uninitiated, his Recorder extension chronicles clicks into scenario scripts, demystifying Puppeteer. Leaks manifest insidiously: un-nullified globals, listener phantoms in React class components—addEventListener sans symmetric removal—hoarding callbacks eternally.
Remediation rings simple: sever references—null assignments, framework hooks like useEffect cleanups—unleashing GC’s mercy. Stoyan’s ethos: paranoia pays; leaks infest, but tools tame. From Memlab’s precision on map apps—hotel overlays persisting post-dismiss—to listener audits, vigilance yields fluidity. In an age of sprawling SPAs, this vigilance isn’t luxury but lifeline, sparing users frustration and browsers’ demise.
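The leak-and-fix pattern can be sketched with the built-in EventTarget (available in Node 15+ as well as browsers); the `Widget` class here is a hypothetical stand-in, not code from the talk.

```javascript
// A handler registered with addEventListener retains its closure (and anything
// it references, such as detached DOM nodes) until a symmetric
// removeEventListener — exactly the "listener phantom" described above.
class Widget {
  constructor(target) {
    this.target = target;
    this.retained = new Array(1000).fill("big"); // stands in for DOM state
    this.onTick = this.onTick.bind(this);        // stable reference, needed for removal
    this.target.addEventListener("tick", this.onTick);
    this.ticks = 0;
  }
  onTick() { this.ticks += 1; }
  destroy() {
    // Without this, target → handler → this → retained stays reachable forever,
    // which is the retain chain a heap-snapshot diff (or Memlab) would surface.
    this.target.removeEventListener("tick", this.onTick);
  }
}

const bus = new EventTarget();
const widget = new Widget(bus);
bus.dispatchEvent(new Event("tick")); // handler fires once
widget.destroy();
bus.dispatchEvent(new Event("tick")); // no effect: the handler is gone
```

In React, the same symmetry lives in a useEffect cleanup: return `() => target.removeEventListener(...)` from the effect so unmounting severs the reference.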
Unveiling Leaks in the Wild
Stoyan spotlighted Reporting API’s prowess: crash telemetry streams to logs, correlating OOM with usage spikes. Nolan’s Speedline probe affirmed: elite apps falter uniformly, from unpruned caches to eternal timers. Proactive profiling—snapshots pre/post actions—exposes retain cycles, Memlab automating to spotlight listener detritus or closure traps.
Tools and Tactics for Eradication
Memlab’s symphony: Puppeteer orchestration, intelligent diffs tracing leaks to sources—e.g., 1K objects via unremoved handlers. Stoyan’s Recorder eases entry, click-to-script. Fixes favor finality: removeEventListener in disposals, nulls for orphans. Paranoia’s yield: resilient apps, jubilant users.
Links:
[DevoxxUK2025] Platform Engineering: Shaping the Future of Software Delivery
Paula Kennedy, co-founder and COO of Cintaso, delivered a compelling lightning talk at DevoxxUK2025, tracing the evolution of platform engineering and its impact on software delivery. Drawing from over a decade of experience, Paula explored how platforms have shifted from siloed operations to force multipliers for developer productivity. Referencing the journey from DevOps to PaaS to Kubernetes, she highlighted current trends like inner sourcing and offered practical strategies for assessing platform maturity. Her narrative, infused with lessons from the past and present, underscored the importance of a user-centered approach to avoid the pitfalls of hype and ensure platforms drive innovation.
The Evolution of Platforms
Paula began by framing platforms as foundations that elevate development, drawing on Gregor Hohpe’s analogy of a Volkswagen chassis enabling diverse car models. She recounted her career, starting in 2002 at Acturus, a SaaS provider with rigid silos between developers and operations. The DevOps movement, sparked in 2009, sought to bridge these divides, but its “you build it, you run it” mantra often overwhelmed teams. The rise of Platform-as-a-Service (PaaS), exemplified by Cloud Foundry, simplified infrastructure management, allowing developers to focus on code. However, Paula noted, the complexity of Kubernetes led organizations to build custom internal platforms, sometimes losing sight of the original value proposition.
Current Trends and Challenges
Today, platform engineering is at a crossroads, with Gartner predicting that by 2026, 80% of large organizations will have dedicated teams. Paula highlighted principles like self-service APIs, internal developer portals (e.g., Backstage), and golden paths that guide developers to best practices. She emphasized treating platforms as products, applying product management practices to align with user needs. However, the 2024 DORA report reveals challenges: while platforms boost organizational performance, they often fail to improve software reliability or delivery throughput. Paula attributed this to automation complacency and “platform complacency,” where trust in internal platforms leads to reduced scrutiny, urging teams to prioritize observability and guardrails.
Links:
[Oracle Dev Days 2025] Optimizing Java Performance: Choosing the Right Garbage Collector
Jean-Philippe Bempel, a seasoned developer at Datadog and a Java Champion, delivered an insightful presentation on selecting and tuning Garbage Collectors (GCs) in OpenJDK to enhance Java application performance. His talk, rooted in practical expertise, unraveled the complexities of GCs, offering a roadmap for developers to align their choices with specific application needs. By dissecting the characteristics of various GCs and their suitability for different workloads, Jean-Philippe provided actionable strategies to optimize memory management, reduce production issues, and boost efficiency.
Understanding Garbage Collectors in OpenJDK
Garbage Collectors are pivotal in Java’s memory management, silently handling memory allocation and reclamation. However, as Jean-Philippe emphasized, a misconfigured GC can lead to significant performance bottlenecks in production environments. OpenJDK offers a suite of GCs—Serial GC, Parallel GC, G1, Shenandoah, and ZGC—each designed with distinct characteristics to cater to diverse application requirements. The challenge lies in selecting the one that best matches the workload, whether it prioritizes throughput or low latency.
Jean-Philippe began by outlining the foundational concepts of GCs, particularly the generational model. Most GCs in OpenJDK are generational, dividing memory into the Young Generation (for short-lived objects) and the Old Generation (for longer-lived objects). The Young Generation is further segmented into the Eden space, where new objects are allocated, and Survivor spaces, which hold objects that survive initial collections before promotion to the Old Generation. Additionally, the Metaspace stores class metadata, a critical but often overlooked component of memory management.
Serial GC: Simplicity for Constrained Environments
The Serial GC, one of the oldest collectors, operates with a single thread and employs a stop-the-world approach, pausing all application threads during collection. Jean-Philippe highlighted its suitability for small-scale applications, particularly those running in containers with less than 2 GB of RAM, where it serves as the default GC. Its simplicity makes it ideal for environments with limited resources, but its stop-the-world nature can introduce noticeable pauses, making it less suitable for latency-sensitive applications.
To illustrate, Jean-Philippe explained the mechanics of the Young Generation’s Survivor spaces. These spaces, S0 and S1, alternate roles as source and destination during minor GC cycles, copying live objects to manage memory efficiently. Objects surviving multiple cycles are promoted to the Old Generation, reducing the overhead of frequent collections. This generational approach leverages the hypothesis that most objects die young, minimizing the cost of memory reclamation.
Parallel GC: Maximizing Throughput
For applications prioritizing throughput, such as batch processing jobs, the Parallel GC offers significant advantages. Unlike the Serial GC, it leverages multiple threads to reclaim memory, making it efficient for systems with ample CPU cores. Jean-Philippe noted that it was the default GC until JDK 8 and remains a strong choice for throughput-oriented workloads like Spark jobs, Kafka consumers, or ETL processes.
The Parallel GC, also stop-the-world, excels in scenarios where total execution time matters more than individual pause durations. Jean-Philippe shared a benchmark using a JFR (Java Flight Recorder) file parsing application, where Parallel GC outperformed others, achieving a throughput of 97% (time spent in application versus GC). By tuning the Young Generation size to reduce frequent minor GCs, developers can further minimize object copying, enhancing overall performance.
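That sizing advice translates into flags along these lines; heap values and the jar name are illustrative, not from the talk.

```shell
# Throughput-oriented setup: fixed heap (avoids resize pauses), Young
# Generation at >= 50% of it, and unified GC logging for later analysis.
java -XX:+UseParallelGC -Xms4g -Xmx4g -Xmn2g \
     -Xlog:gc*:file=gc.log -jar jfr-parser.jar
```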
G1 GC: Balancing Throughput and Latency
The G1 (Garbage-First) GC, default since JDK 9 for heaps larger than 2 GB, strikes a balance between throughput and latency. Jean-Philippe described its region-based memory management, dividing the heap into smaller regions (Eden, Survivor, Old, and Humongous for large objects). This structure allows G1 to focus on regions with the most garbage, optimizing memory reclamation with minimal copying.
In his benchmark, G1 showed a throughput of 85%, with average pause times of 76 milliseconds, aligning with its target of 200 milliseconds. However, Jean-Philippe pointed out challenges with Humongous objects, which can increase GC frequency if not managed properly. By adjusting region sizes (up to 32 MB), developers can mitigate these issues, improving throughput for applications like batch jobs while maintaining reasonable pause times.
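Raising the region size keeps large allocations under the humongous threshold (an object is humongous when it exceeds half a region); the values below are illustrative.

```shell
# G1 with 32 MB regions (the maximum) and an explicit pause-time target:
java -XX:+UseG1GC -XX:G1HeapRegionSize=32m -XX:MaxGCPauseMillis=200 \
     -Xmx8g -jar batch-job.jar
```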
Shenandoah and ZGC: Prioritizing Low Latency
For latency-sensitive applications, such as HTTP servers or microservices, Shenandoah and ZGC are the go-to choices. These concurrent GCs minimize pause times, often below a millisecond, by performing most operations alongside the running application. Jean-Philippe highlighted Shenandoah’s non-generational approach (though a generational version is in development) and ZGC’s generational support since JDK 21, making the latter particularly efficient for large heaps.
In a latency-focused benchmark using a Spring PetClinic application, Jean-Philippe demonstrated that Shenandoah and ZGC maintained request latencies below 200 milliseconds, significantly outperforming Parallel GC’s 450 milliseconds at the 99th percentile. ZGC’s use of colored pointers and load/store barriers ensures rapid memory reclamation, allowing regions to be freed early in the GC cycle, a key advantage over Shenandoah.
Tuning Strategies for Optimal Performance
Tuning GCs is as critical as selecting the right one. For Parallel GC, Jean-Philippe recommended sizing the Young Generation to reduce the frequency of minor GCs, ideally exceeding 50% of the heap to minimize object copying. For G1, adjusting region sizes can address Humongous object issues, while setting a maximum pause time target (e.g., 50 milliseconds) can shift its behavior toward latency sensitivity, though it may not compete with Shenandoah or ZGC in extreme cases.
For concurrent GCs like Shenandoah and ZGC, ensuring sufficient heap size and CPU cores prevents allocation stalls, where threads wait for memory to be freed. Jean-Philippe emphasized that Shenandoah requires careful heap sizing to avoid full GCs, while ZGC’s rapid region reclamation reduces such risks, making it more forgiving for high-allocation-rate applications.
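Selecting the low-latency collectors is a single flag each; generational ZGC requires JDK 21+, and the jar names are illustrative.

```shell
# Generational ZGC (JDK 21+):
java -XX:+UseZGC -XX:+ZGenerational -Xmx16g -jar service.jar

# Shenandoah:
java -XX:+UseShenandoahGC -Xmx16g -jar service.jar
```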
Selecting the Right GC for Your Workload
Jean-Philippe concluded by categorizing workloads into two types: throughput-oriented (SPOT) and latency-sensitive. For SPOT workloads, such as batch jobs or ETL processes, Parallel GC or G1 are optimal, with Parallel GC offering easier tuning for predictable performance. For latency-sensitive applications, like microservices or databases (e.g., Cassandra), ZGC’s generational efficiency and Shenandoah’s low-pause capabilities shine, with ZGC being particularly effective for large heaps.
By analyzing workload characteristics and leveraging tools like GC Easy for log analysis, developers can make informed GC choices. Jean-Philippe’s benchmarks underscored the importance of tailoring GC configurations to specific use cases, ensuring both performance and stability in production environments.
Links:
Hashtags: #Java #GarbageCollector #OpenJDK #Performance #Tuning #Datadog #JeanPhilippeBempel #OracleDevDays2025
[DefCon32] Digital Emblems—When Markings Are Required, but You Have No Rattle-Can
Bill Woodcock, a seasoned contributor to the Internet Engineering Task Force (IETF), presented an insightful session at DEF CON 32 on the development of digital emblems. These digital markers aim to replace or supplement physical markings required under international law, such as those on ISO containers, press vests, or humanitarian symbols like UN blue helmets. Bill’s work, conducted within the IETF, leverages protocols like DNS and DNSSEC to create a global, cryptographically secure marking system. His talk explored the technical and security implications of this standardization effort, inviting feedback from the DEF CON community on potential vulnerabilities.
The Need for Digital Emblems
Bill introduced the concept of digital emblems, explaining their necessity in an increasingly digitized world. Physical markings, such as serial numbers on shipping containers or symbols on humanitarian vehicles, are critical for compliance with international regulations. However, as processes like border transport and battlefield protections become digitized, these markings must transition to machine-readable formats. Bill outlined how the IETF’s proposed standard aims to create a unified protocol for digital emblems, ensuring they are scannable, cryptographically verifiable, and adaptable to various use cases, from logistics to military operations.
Technical Foundations and Challenges
Delving into the technical details, Bill described how the digital emblem system builds on existing protocols like DNS and DNSSEC, enabling robust validation without constant network connectivity. He highlighted the ability to embed significant data in devices like RFID tags, allowing offline validation through cached root signatures. However, Bill acknowledged challenges, particularly in ensuring the security of these emblems against adversarial tampering. He noted that military use cases, where covert validation is critical, pose unique risks, as adversaries could mislabel objects to deceive validators, necessitating strong cryptographic protections.
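As a rough illustration of the DNSSEC-validated lookups the proposal builds on, a validator could query a signed record and request the DNSSEC material with the reply. The record name and type here are hypothetical, not the draft’s actual schema.

```shell
# Request a signed answer plus its DNSSEC records for offline-style validation:
dig +dnssec TXT emblem.example.org
```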
Security and Privacy Considerations
Bill addressed the security and privacy concerns raised by digital emblems, particularly in adversarial scenarios. He explained that the system allows for covert inspection, enabling validators to check emblems without alerting potential attackers. However, he cautioned that physical binding remains a weak point, as malicious actors could exploit mislabeled objects in conflict zones. Bill invited the DEF CON audience to scrutinize the proposed standard for vulnerabilities, emphasizing the importance of community input to harden the system against attacks, especially in high-stakes military and humanitarian contexts.
Shaping the Future of Digital Standards
Concluding, Bill underscored the potential of digital emblems to streamline global processes while enhancing security. He encouraged the DEF CON community to engage with the IETF’s ongoing work, accessible via the provided URLs, to contribute to refining the standard. By addressing vulnerabilities and ensuring robust cryptographic validation, Bill envisions a future where digital emblems enhance trust and compliance across borders and battlefields. His call to action resonated with the audience, inviting hackers to play a pivotal role in shaping this emerging technology.
Links:
Mapping pages
- [DevoxxFR] Kill Your Branches, Do Feature Toggles
- [DevoxxFR] An Ultrasonic Adventure!
- [DevoxxFR] How to be a Tech Lead in an XXL Pizza Team Without Drowning
- [DevoxxFR 2017] Why Your Company Should Store All Its Code in a Single Repo
- [DevoxxFR 2017] Introduction to the Philosophy of Artificial Intelligence
- [DevoxxFR 2017] Terraform 101: Infrastructure as Code Made Simple
- [DevoxxFR 2018] Deploying Microservices on AWS: Compute Options Explored at Devoxx France 2018
- [DevoxxFR 2018] Software Heritage: Preserving Humanity's Software Legacy
- [DevoxxFR 2018] Apache Kafka: Beyond the Brokers – Exploring the Ecosystem
- [DevoxxFR 2018] Are you "merge" or "rebase" oriented?
- A Decade of Devoxx FR and Java Evolution: A Detailed Retrospective and Forward-Looking Analysis
- [DevoxxFR 2022] Exploiter facilement des fonctions natives avec le Projet Panama depuis Java
- Kafka Streams @ Carrefour : Traitement big data à la vitesse de l’éclair
- [DevoxxFR 2022] Père Castor 🐻, raconte-nous une histoire (d’OPS)
- [DevoxxFR 2022] Cracking Enigma: A Tale of Espionage and Mathematics
- [VivaTech 2021] Tech to Rethink Our Workplace at VivaTech 2021
- [VivaTech 2021] Emmanuel Macron : Championing European Scale-Ups and Innovation
- [DevoxxFR 2021] Maximizing Productivity with Programmable Ergonomic Keyboards: Insights from Alexandre Navarro
- [Devoxx FR 2021] IoT Open Source at Home
- [VivaTech 2019] Funding and Growing Tomorrow's Unicorns
- [VivaTech 2019] What's Your Next Bet
- [Devoxx France 2021] Overcoming Impostor Syndrome: Practical Tips
- [VivaTech 2018] How VCs Are Growing Tomorrow's Euro-corns
- [ScalaDays 2019] Techniques for Teaching Scala
- Kotlin Native Concurrency Explained by Kevin Galligan
- [DevoxxFR 2018] Java in Docker: Best Practices for Production
- [DevoxxFR 2018] Watch Out! Don't Plug in That USB, You Might Get Seriously Hacked!
- Gradle: A Love-Hate Journey at Margot Bank
- Navigating the Application Lifecycle in Kubernetes
- [DevoxxFR 2019] Back to Basics: Stop Wasting Time with Dates
- "All Architects !": Empowering Every Developer as an Architect
- Navigating the Challenges of Legacy Systems
- "A monolith, or nothing!": Embracing the Monolith at Ornikar
- Event Sourcing Without a Framework: A Practical Approach
- [DevoxxFR 2023] Hexagonal Architecture in 15 Minutes: Simplifying Complex Systems
- [DevoxxFR 2023] Tests, an Investment for the Future: Building Reliable Software
- Ce que l’open source peut apprendre des startups : Une perspective nouvelle
- Et si on parlait un peu de sécurité ? Un guide pour les développeurs
- Navigating the Reactive Frontier: Oleh Dokuka’s Reactive Streams at Devoxx France 2023
- Meet with Others: tools for speech
- Gestion des incidents : Parler et agir
- Decoding Shazam: Unraveling Music Recognition Technology
- [AWS Summit Berlin 2023] Go-to-Market with Your Startup: Tips and Best Practices from VC Investors
- [KotlinConf'23] The Future of Kotlin is Bright and Multiplatform
- [KotlinConf'2023] Coroutines and Loom: A Deep Dive into Goals and Implementations
- [GopherCon UK 2022] Leading in Tech
- [Spring I/O 2023] Multitenant Mystery: Only Rockers in the Building by Thomas Vitale
- [Spring I/O 2023] Do You Really Need Hibernate?
- [Spring I/O 2023] Managing Spring Boot Application Secrets: Badr Nass Lahsen
- [KotlinConf2023] Java and Kotlin: A Mutual Evolution
- A Tricky Java Question
- SnowFlake❄: Why does SUM() return NULL instead of 0?
- JpaSystemException: A collection with cascade="all-delete-orphan" was no longer referenced by the owning entity instance
- SpringBatch: How to have different schedules, per environment, for instance: keep the fixedDelay=60000 in prod, but schedule with a Cron expression in local dev?
- 🚀 Making Spring AOP Work with Struts 2: A Powerful Combination! 🚀
- 🚀 Mastering Flyway Migrations with Maven
- [Devoxx FR 2024] Instrumenting Java Applications with OpenTelemetry: A Comprehensive Guide
- [Devoxx FR 2024] Mastering Reproducible Builds with Apache Maven: Insights from Hervé Boutemy
- Understanding Dependency Management and Resolution: A Look at Java, Python, and Node.js
- [PyData Paris 2024] Exploring Quarto Dashboard for Impactful and Visual Communication
- Onyxia: A User-Centric Interface for Data Scientists in the Cloud Age
- Boosting AI Reliability: Uncertainty Quantification with MAPIE
- Predictive Modeling and the Illusion of Signal
- Building Intelligent Data Products at Scale
- Renovate/Dependabot: How to Take Control of Dependency Updates
- [PyData Global 2024] Making Gaussian Processes Useful
- [AWS Summit Paris 2024] Winning Fundraising Strategies for 2024
- [DevoxxFR 2024] Super Tech’Rex World: The Assembler Strikes Back
- [KotlinConf2024] Hacking Sony Cameras with Kotlin
- [DevoxxFR 2024] Going AOT: Mastering GraalVM for Java Applications
- Java's Emerging Role in AI and Machine Learning: Bridging the Gap to Production
- CTO Perspective: Choosing a Tech Stack for Mainframe Rebuild
- Elastic APM: When to Use @CaptureSpan vs. @CaptureTransaction?
- How to Bypass Elasticsearch’s 10,000-Result Limit with the Scroll API
- Problem: Spring JMS MessageListener Stuck / Not Receiving Messages
- Efficient Inter-Service Communication with Feign and Spring Cloud in Multi-Instance Microservices
- Mastering DNS Configuration: A, AAAA, CNAME, and Best Practices with OVH
- Advanced Encoding in Java, Kotlin, Node.js, and Python
- Understanding volatile in Java: A Deep Dive with a Cloud-Native Use Case
- Quick and dirty script to convert WordPress export file to Blogger / Atom XML
- Why Project Managers Must Guard Against “Single Points of Failure” in Human Capital
- RSS to EPUB Converter: Create eBooks from RSS Feeds
- Essential Security Considerations for Docker Networking
- CTO's Wisdom: Feature Velocity Over Premature Scalability in Early-Stage Startups
- AWS S3 Warning: “No Content Length Specified for Stream Data” – What It Means and How to Fix It
- The CTO's Tightrope Walk: Deeper into the Hire vs. Outsource Dilemma
- Understanding Chi-Square Tests: A Comprehensive Guide for Developers
- Creating EPUBs from Images: A Developer's Guide to Digital Publishing
- The Fractional CTO: A Strategic Ally or a Risky Gamble?
- Orchestrating Progress: A CTO's Strategy for Balancing Innovation and Stability
- Bridging the Divide: CTO Communication with Aliens (aka: Non-Technical Stakeholders)
- 5 Classic Software Security Holes Every Developer Should Know
- Advanced Java Security: 5 Critical Vulnerabilities and Mitigation Strategies
- Using Redis as a Shared Cache in AWS: Architecture, Code, and Best Practices
- 🗄️ AWS S3 vs. MinIO – Choosing the Right Object Storage
- Demystifying Parquet: The Power of Efficient Data Storage in the Cloud
- ️ Prototype Pollution: The Silent JavaScript Vulnerability You Shouldn’t Ignore
- Mastering Information Structure: A Deep Dive into Lists and Nested Lists Across Document Formats
- Applications Web with Spring Boot 2.0
- Script to clean WSL and remove Ubuntu from Windows 11
- [Oracle Dev Days 2025] From JDK 21 to JDK 25: Jean-Michel Doudoux on Java’s Evolution
[Oracle Dev Days 2025] From JDK 21 to JDK 25: Jean-Michel Doudoux on Java’s Evolution
Jean-Michel Doudoux, a renowned Java Champion and Sciam consultant, delivered a session charting Java’s evolution from JDK 21 to JDK 25. As the next Long-Term Support (LTS) release, JDK 25 introduces transformative features that redefine Java development. Jean-Michel’s talk provided a comprehensive guide to new syntax, APIs, JVM enhancements, and security measures, equipping developers to navigate Java’s future with confidence.
Enhancing Syntax and APIs
Jean-Michel began by exploring syntactic improvements that streamline Java code. JEP 456 in JDK 22 introduces unnamed variables, written as _, clarifying intent when a variable is deliberately unused. JDK 23’s JEP 467 adds Markdown support to Javadoc comments, easing documentation. In JDK 25, JEP 511 simplifies module imports, while JEP 512’s compact source files and instance main methods make Java more approachable for beginners. JEP 513 introduces flexible constructor bodies, allowing statements to run before the super(...) or this(...) call. Together, these changes cut boilerplate and boost developer efficiency.
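To make the syntax changes concrete, here is a minimal sketch of JEP 456’s unnamed variables; the class and method names are illustrative, and a JDK 22 or newer compiler is assumed:

```java
import java.util.List;

// Sketch of JEP 456 (JDK 22+): `_` marks a variable whose value
// is intentionally unused, making that intent explicit to readers.
public class UnnamedVariablesDemo {

    // Count elements without binding each one to a named variable.
    static int countItems(List<String> items) {
        int count = 0;
        for (String _ : items) {
            count++;
        }
        return count;
    }

    // Ignore the exception object when only success/failure matters.
    static boolean isInteger(String s) {
        try {
            Integer.parseInt(s);
            return true;
        } catch (NumberFormatException _) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(countItems(List.of("a", "b", "c"))); // 3
        System.out.println(isInteger("42"));                    // true
    }
}
```

Before JDK 22, both spots required a throwaway name like `ignored`, which the compiler could not distinguish from an accidental dead variable.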
Expanding Capabilities with New APIs
The session highlighted APIs that broaden Java’s scope. The Foreign Function & Memory API (JEP 454) enables safer native interop, offering a supported replacement for JNI and for sun.misc.Unsafe-style memory access. Stream Gatherers (JEP 485) add custom intermediate operations to the Stream API, while the Class-File API (JEP 484) simplifies bytecode parsing and generation. Scoped Values (JEP 506) improve concurrency as a lightweight, immutable alternative to thread-local variables. Jean-Michel’s practical examples demonstrated how these APIs empower developers to craft modern, robust applications.
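As one illustration of Stream Gatherers, the built-in Gatherers.windowFixed factory groups a stream into fixed-size chunks, an operation that previously required a custom collector or manual index bookkeeping. The class and method names below are illustrative, and JDK 24 or newer is assumed:

```java
import java.util.List;
import java.util.stream.Gatherers;

// Sketch of JEP 485 (JDK 24+): gather(...) plugs a reusable
// intermediate operation into an ordinary stream pipeline.
public class GatherersDemo {

    // Split the input into consecutive windows of at most `size` elements.
    static List<List<Integer>> windows(List<Integer> input, int size) {
        return input.stream()
                .gather(Gatherers.windowFixed(size))
                .toList();
    }

    public static void main(String[] args) {
        System.out.println(windows(List.of(1, 2, 3, 4, 5), 2));
        // [[1, 2], [3, 4], [5]]
    }
}
```

Other built-in gatherers such as windowSliding, fold, and scan follow the same pattern, and the Gatherer interface lets you write your own.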
Strengthening JVM and Security
Jean-Michel emphasized JVM and security advancements. JEP 472, integrated in JDK 24, prepares to restrict JNI use: code that calls native libraries must be granted access explicitly via --enable-native-access, enhancing system integrity. The deprecation of sun.misc.Unsafe memory-access methods steers developers toward safer alternatives. The removal of 32-bit support, the Security Manager, and certain JMX features reflects Java’s modern focus. Performance gains in the HotSpot JVM and the G1 and ZGC garbage collectors, plus faster startup through Project Leyden’s ahead-of-time class loading (JEP 483), keep Java competitive.
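As a sketch of the native-access restriction in practice (the jar, module, and path names here are hypothetical), access must now be granted explicitly on the launcher command line:

```shell
# Allow all code on the class path to call native libraries (JDK 24+);
# without this flag the JVM emits warnings, and later releases will refuse.
java --enable-native-access=ALL-UNNAMED -jar app.jar

# Grant access only to one specific named module instead of everything.
java --enable-native-access=com.example.nativebridge \
     --module-path mods -m com.example.app
```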
Boosting Productivity with Tools
Jean-Michel covered enhancements to Java’s tooling ecosystem, including upgraded Javadoc, JCMD, and JAR utilities, which streamline workflows. New Java Flight Recorder (JFR) events improve diagnostics. He urged developers to test JDK 25’s early access builds to prepare for the LTS release, highlighting how these tools enhance efficiency and scalability in application development.
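A typical diagnostic session with this tooling might look like the following; the PID and file names are placeholders:

```shell
# Start a 60-second Java Flight Recorder recording on a running JVM.
jcmd <pid> JFR.start name=diag duration=60s filename=diag.jfr

# Dump the recording to disk before it finishes, if needed.
jcmd <pid> JFR.dump name=diag filename=diag-snapshot.jfr
```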
Navigating JDK 25’s LTS Future
Jean-Michel wrapped up by emphasizing JDK 25’s role as an LTS release with extended support. He encouraged proactive engagement with early access programs to adapt to new features and deprecations. His session offered a clear, actionable roadmap, empowering developers to leverage JDK 25’s innovations confidently. Jean-Michel’s expertise illuminated Java’s trajectory, inspiring attendees to embrace its evolving landscape.
Hashtags: #Java #JDK25 #LTS #JVM #Security #Sciam #JeanMichelDoudoux
[DefCon32] Changing Global Threat Landscape
Rob Joyce, a distinguished former National Security Agency (NSA) official, joined Jeff Moss, known as The Dark Tangent and founder of DEF CON, for a riveting fireside chat at DEF CON 32. Their discussion delved into the dynamic evolution of global cyber threats, with a particular focus on the transformative role of artificial intelligence (AI) in reshaping cybersecurity. Rob, recently retired after 34 years at the NSA, brought a wealth of experience from roles such as Cybersecurity Coordinator at the White House and head of the NSA’s Tailored Access Operations. Jeff facilitated a conversation that explored how AI is redefining defense strategies and the broader implications for global security, offering insights into the challenges and opportunities ahead.
The Evolution of Cyber Threats
Rob began by reflecting on his extensive career at the NSA, where he witnessed the transformation of cyber threats from isolated incidents to sophisticated, state-sponsored campaigns. He highlighted how adversaries now leverage AI to enhance attack vectors, such as spear-phishing and polymorphic malware, which adapt dynamically to evade detection. Rob emphasized that the scale and speed of these threats demand a shift from reactive to proactive defenses, underscoring the importance of understanding adversaries’ intentions through signals intelligence. His experience during the Iraq War as an issue manager provided a unique perspective on the strategic use of cyber intelligence to counter evolving threats.
AI’s Dual Role in Cybersecurity
The conversation pivoted to AI’s dual nature as both a tool for attackers and defenders. Rob explained how AI enables rapid analysis of vast datasets, allowing defenders to identify patterns and anomalies that would be impossible for human analysts alone. However, he cautioned that adversaries exploit similar capabilities to craft advanced persistent threats (APTs) and automate large-scale attacks. Jeff probed the balance between automation and human oversight, to which Rob responded that AI-driven tools, like those developed by the NSA, are critical for scaling defenses, particularly for protecting critical infrastructure. The integration of AI, he noted, is essential to keep pace with the growing complexity of cyber threats.
Strengthening Defenses Through Collaboration
Rob stressed the importance of bipartisan support for cybersecurity, noting that stopping foreign adversaries is a shared goal across administrations. He highlighted the role of the Office of the National Cyber Director (ONCD) in convening agencies to synchronize efforts, citing examples where ground-up collaboration among agencies has led to effective threat mitigation. Jeff asked about the resource gap, and Rob acknowledged that the scope of threats often outpaces available resources. He advocated for widespread adoption of two-factor authentication and secure software development practices, such as moving away from memory-unsafe languages, to build more defensible systems.
Building a Resilient Future
Concluding, Rob expressed optimism about the trajectory of cybersecurity, emphasizing that automation can alleviate the burden on security teams, particularly for 24/7 operations. He underscored the need for robust teams and innovative technologies to address the relentless pace of vulnerabilities exploited by attackers. Jeff echoed this sentiment, encouraging the DEF CON community to contribute to shaping a secure digital landscape. Their dialogue highlighted the critical role of collaboration between government, industry, and the hacker community in navigating the ever-changing threat landscape.