
[NDCOslo2024] Underwhelming Game Development with PICO-8 – Jonas Winje

In the shadowed underbelly of enterprise engineering, where sprawling systems breed suffocation and deadlines devour delight, Jonas Winje, a Norwegian developer grappling with the grind of grand-scale governance, seeks solace in simplicity. His sanctuary: PICO-8, a pint-sized fantasy console that curtails chaos with charming constraints: a 128×128-pixel display, a 16-color palette, and Lua’s lean lexicon. His talk, a jaunty juxtaposition of corporate colossalness and cartridge compactness, celebrates the catharsis of unencumbered creation, where felines frolic freely and feedback flows fleetly, reminding us that restraint can reignite the rapture of the craft.

Jonas greets the room with a grin, confessing his conference conundrum: a final-day slot, no sleep, yet plenty of stories of structured suffering. He contrasts the essence of enterprise work, overwhelm buried beneath burgeoning backlogs, with PICO-8’s playful parameters: a devkit that distills development to its delightful core. No need for neural networks or native-toolchain nightmares; here, Lua’s lightness lets logic leap lightly.

Constraints as Catalysts: The Charm of Curtailment

PICO-8’s precepts propel productivity: 8×8 sprites sidestep artistic angst, the 128×128 canvas confines complexity, and a few sound slots spur succinct scores. Jonas jokes about his own ineptitude with pixels, yet praises the palette’s pardon, where 16 shades shield shoddy art from scrutiny. Lua, the lingua franca, keeps programs to a handful of luminous loops, forestalling the fog of feature frenzy.

His homage goes to the IDE’s intimacy, which interleaves art and iteration: code a cat, cue its caper, cycle ceaselessly. There is no compilation consternation; changes cascade crisply, cultivating a cadence of continuous communion with the creation. Jonas’s kernel: constraints kindle creativity, turning “tiny” into triumph, where a jumping feline fosters flow states forgotten in feature factories.

Feline Frolics: From Pixel to Playable

Jonas’s journey: sketch a sprite (tabby torso, whisker wisps), then script its saunter. _update() runs the game logic each frame and _draw() renders the scene; a velocity variable vaults the varmint vertically while gravity grounds its gambols. He highlights hitches, from clashing hues to hitbox hassles, yet lauds the latitude for “good enough,” unyoked from scalability’s specter.
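
The loop he describes is easy to picture in code. PICO-8 itself is programmed in Lua and supplies _update() and _draw() as callbacks, so the following is only a rough transliteration of that pattern into TypeScript; the constants, the input flag, and the draw callback are invented for the sketch.

```typescript
// Hypothetical transliteration of a PICO-8-style jumping cat.
// In real PICO-8 (Lua), input comes from btn() and drawing from spr().
type Cat = { x: number; y: number; dy: number; grounded: boolean };

const GROUND = 100;   // y-coordinate of the floor, arbitrary for this sketch
const GRAVITY = 0.4;  // downward acceleration per frame
const JUMP = -4;      // initial upward velocity

const cat: Cat = { x: 60, y: GROUND, dy: 0, grounded: true };

// PICO-8 calls _update() 30 times per second for game logic.
function _update(jumpPressed: boolean): void {
  if (jumpPressed && cat.grounded) {
    cat.dy = JUMP;          // launch upward
    cat.grounded = false;
  }
  cat.dy += GRAVITY;        // gravity pulls the cat back down
  cat.y += cat.dy;
  if (cat.y >= GROUND) {    // land on the floor
    cat.y = GROUND;
    cat.dy = 0;
    cat.grounded = true;
  }
}

// PICO-8 calls _draw() after each update to render the frame.
function _draw(drawSprite: (x: number, y: number) => void): void {
  drawSprite(cat.x, cat.y);
}
```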

PICO-8’s perks permeate: fast feedback fuels focus, free of enterprise’s endless entanglements. Jonas also values what the console leaves out: no AI add-ons, just a 2 MB memory ceiling, ensuring the essentials endure. His epiphany: such sanctuaries salvage sanity, seeding skills that seep back into salaried spheres, from quick prototypes to pixel-perfect pursuits.

Wholesome Whims: The Wholesome Workflow

Jonas’s jubilation: PICO-8’s purity prompts pleasure, a palliative for procedural purgatory. The cats’ cuteness compounds the calm, while whimsy wards off weariness. His hope: harness this haven to hone habits, from fast iteration to handmade craft, bridging boutique builds and behemoth backends.

In this microcosm, Jonas joyously affirms: underwhelm to uplift, where whims whisper wisdom.

Links:

[KotlinConf2025] Charts, Code, and Sails: Winning a Regatta with Kotlin Notebook

In the high-stakes world of competitive sailing, where every decision can mean the difference between victory and defeat, an extraordinary tool has emerged: Kotlin Notebook. Roman Belov, a distinguished member of the JetBrains team, shared a captivating account of leveraging this innovative technology to triumph in a 24-hour regatta. The narrative transcends a simple code demonstration, illustrating how interactive programming becomes a critical asset in a dynamic, unpredictable environment like the open sea.

This journey highlights the power of Kotlin Notebook as more than just a development tool; it’s a platform for real-time problem-solving. Though a seasoned developer, Roman wears his yachtsman’s hat most proudly. He uses the notebook to translate complex nautical challenges into actionable, data-driven decisions. The essence of the task is to navigate a course, which is essentially a graph with nodes representing locations and edges representing the legs between them. However, unlike a typical graph problem, the rules of sailing introduce complex variables. The boat cannot sail directly into the wind, and its speed depends heavily on the angle to the wind. This means the graph is constantly changing, rendering static route-planning algorithms ineffective.

The solution required a tool that could rapidly process data, visualize outcomes, and allow for on-the-fly adjustments. This is where Kotlin Notebook excelled, providing a live, interactive environment. Roman outlined how he could use the notebook to perform crucial tasks in the middle of the race: visualizing the race course on a map, calculating the fastest path based on current wind conditions, and dynamically adjusting the route as the wind shifted. This is achieved by creating a “sailable roads” model, which evaluates every potential path on the graph at regular intervals and discards any that are impossible given the wind direction. For the remaining paths, the notebook computes the optimal boat speed and time to complete that segment, effectively modeling the race in real time.
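
A toy version of that “sailable roads” idea might look like the sketch below. The minimum pointing angle, the crude polar function, and the data shapes are all assumptions made for illustration (the talk’s actual model, wind data, and Kotlin code are not reproduced here).

```typescript
// Toy "sailable roads" model: discard upwind edges, time the rest.
type Edge = { from: string; to: string; bearingDeg: number; distanceNm: number };

const MIN_WIND_ANGLE = 40; // assumption: boats can't point closer than ~40° to the wind

// Smallest angle between the sailing bearing and the direction the wind blows from.
function angleToWind(bearingDeg: number, windFromDeg: number): number {
  const diff = Math.abs((((bearingDeg - windFromDeg) % 360) + 360) % 360);
  return Math.min(diff, 360 - diff);
}

// Crude placeholder polar: boat speed (knots) from true wind angle and wind speed.
function boatSpeed(twaDeg: number, windKnots: number): number {
  return windKnots * Math.sin((twaDeg * Math.PI) / 180) * 0.6;
}

// Hours to sail the edge, or null if the edge points into the wind and is unsailable.
function edgeTime(edge: Edge, windFromDeg: number, windKnots: number): number | null {
  const twa = angleToWind(edge.bearingDeg, windFromDeg);
  if (twa < MIN_WIND_ANGLE) return null; // discard: cannot sail this close to the wind
  return edge.distanceNm / boatSpeed(twa, windKnots);
}
```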

Roman then showcased the brute-force search algorithm that was used to find the optimal path. The code, written in Kotlin, was surprisingly straightforward and demonstrated the language’s elegance and readability. The algorithm, running within the notebook, would constantly iterate through the potential paths, calculating the time to finish for each one and discarding any that were slower than the best time found so far. The visual output of the notebook, which could render the different routes directly on the map, was a game-changer. It transformed abstract data and calculations into a clear, visual representation that allowed the sailors to make quick, informed decisions.
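
The pruning logic described translates to a depth-first search that abandons any partial route already slower than the best finish time found so far. This sketch reuses the hypothetical Edge and edgeTime from the previous snippet; it illustrates the technique, not Roman’s actual Kotlin code.

```typescript
// Brute-force search with pruning: explore all routes, keep the fastest.
function bestPath(
  graph: Map<string, Edge[]>,
  start: string,
  goal: string,
  windFromDeg: number,
  windKnots: number
): { path: string[]; hours: number } | null {
  let best: { path: string[]; hours: number } | null = null;

  function explore(node: string, path: string[], hours: number): void {
    if (best && hours >= best.hours) return; // prune: already slower than the best route
    if (node === goal) {
      best = { path, hours };                // new fastest finish
      return;
    }
    for (const edge of graph.get(node) ?? []) {
      if (path.includes(edge.to)) continue;  // avoid revisiting nodes
      const t = edgeTime(edge, windFromDeg, windKnots);
      if (t === null) continue;              // unsailable under the current wind
      explore(edge.to, [...path, edge.to], hours + t);
    }
  }

  explore(start, [start], 0);
  return best;
}
```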

The application of Kotlin Notebook in this unconventional scenario proves its versatility beyond traditional data science or development tasks. It demonstrated how a tool designed for rapid experimentation can be applied to complex, real-world problems. The interactive nature of the notebook allowed Roman to combine data analysis, algorithm execution, and visual feedback into a single, cohesive workflow, enabling him and his crew to stay ahead of the competition and ultimately win the race. This story is a testament to the power of a modern programming language and an adaptable toolchain, turning a challenging maritime endeavor into an exciting display of computational prowess.

Links:


[DefCon32] Exploiting the Unexploitable: Insights from the Kibana Bug Bounty

Mikhail Shcherbakov, a PhD candidate at KTH Royal Institute of Technology in Stockholm, captivated the DEF CON 32 audience with his deep dive into exploiting seemingly unexploitable vulnerabilities in modern JavaScript and TypeScript applications. Drawing from his participation in the Kibana Bug Bounty Program, Mikhail shared case studies that reveal how persistence and creative exploitation can transform low-impact vulnerabilities into critical remote code execution (RCE) chains. His presentation, rooted in his research on code reuse attacks, offered actionable techniques for security researchers and robust mitigation strategies for defenders.

Navigating the Kibana Bug Bounty

Mikhail began by outlining his journey in the Kibana Bug Bounty Program, where he encountered vulnerabilities initially deemed “by design” or unexploitable by triage teams. His work at KTH, focusing on static and dynamic program analysis, equipped him to challenge these assumptions. Mikhail explained how he identified prototype pollution vulnerabilities in Kibana, a popular data visualization platform, that could crash applications in seconds. By combining these with novel exploitation primitives, he achieved RCE, demonstrating the hidden potential of overlooked flaws.

Unlocking Prototype Pollution Exploits

Delving into technical specifics, Mikhail detailed his approach to exploiting prototype pollution, a common JavaScript vulnerability. He showcased how merge functions in popular libraries like Lodash could be manipulated to pollute object prototypes, enabling attackers to inject malicious properties. Mikhail’s innovative chain involved polluting a runner object and triggering a backup handler, resulting in RCE. He emphasized that even fixed prototype pollution cases could be combined with unfixed ones across unrelated application features, amplifying their impact and bypassing conventional defenses.
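
To make the vulnerability class concrete, here is a minimal, self-contained illustration of how a recursive merge over attacker-controlled JSON can pollute Object.prototype. This is the textbook pattern for this bug class, not Kibana’s or Lodash’s actual code.

```typescript
// Minimal illustration of prototype pollution via an unsafe recursive merge.
// A merge that copies attacker-controlled keys can write through "__proto__"
// into Object.prototype, affecting every object in the runtime.
function unsafeMerge(target: any, source: any): any {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === "object" && source[key] !== null) {
      if (!target[key]) target[key] = {};
      unsafeMerge(target[key], source[key]); // recurses into "__proto__"
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// JSON.parse creates "__proto__" as an ordinary own key, so the merge walks into it.
const payload = JSON.parse('{"__proto__": {"polluted": true}}');
unsafeMerge({}, payload);

// Every object now inherits the injected property:
console.log(({} as any).polluted); // true
```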

Advanced Exploitation Techniques

Mikhail introduced new primitives and gadgets that elevate prototype pollution beyond denial-of-service (DoS) attacks. He demonstrated how carefully crafted payloads could exploit Kibana’s internal structures, leveraging runtimes like Node.js and Deno to execute arbitrary code. His research also touched on network-based attacks, such as ARP spoofing in Kubernetes environments, highlighting the complexity of securing modern applications. Mikhail’s findings, documented in papers like “Silent Spring” and “Dust,” provide a roadmap for researchers to uncover similar vulnerabilities in other JavaScript ecosystems.

Mitigating and Defending Against RCE

Concluding, Mikhail offered practical recommendations for mitigating these threats, urging developers to adopt secure coding practices and validate inputs rigorously. He encouraged researchers to persist in exploring seemingly unexploitable bugs, sharing resources like his collection of server-side prototype pollution gadgets. His work, accessible via his blog posts and Twitter updates, inspires the cybersecurity community to push boundaries in vulnerability research while equipping defenders with tools to fortify JavaScript applications against sophisticated attacks.

Links:

[DefCon32] Defeating EDR-Evading Malware with Memory Forensics

Andrew Case, a core developer on the Volatility memory analysis project and Director of Research at Volexity, joined colleagues Sellers and Richard to present a groundbreaking session at DEF CON 32. Their talk focused on new memory forensics techniques to detect malware that evades Endpoint Detection and Response (EDR) systems. Andrew and his team developed plugins for Volatility 3, addressing sophisticated bypass techniques like direct system calls and malicious exception handlers. Their work, culminating in a comprehensive white paper, offers practical solutions for countering advanced malware threats.

The Arms Race with EDR Systems

Andrew opened by outlining the growing prominence of EDR systems, which perform deep system inspections to detect malware beyond traditional antivirus capabilities. However, malware developers have responded with advanced evasion techniques, such as code injection and manipulation of debug registers, fueling an ongoing arms race. Andrew’s research at Volexity focuses on analyzing these techniques during incident response, revealing how attackers exploit low-level hardware and software components to bypass EDR protections, as seen in high-profile ransomware attacks.

New Memory Forensics Techniques

Delving into their research, Andrew detailed the development of Volatility 3 plugins to detect EDR bypasses. These plugins target techniques like direct and indirect system calls, module overwriting, and abuse of exception handlers. By enumerating handlers and applying static disassembly, their tools identify malicious processes generically, even when attackers tamper with functions like AMSI’s AmsiScanBuffer. Andrew highlighted a plugin that detects AMSI patching and catches both vectored exception handlers and debug register abuse, ensuring EDRs cannot be fooled by malicious PowerShell or macros.
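
The generic idea behind such checks can be sketched independently of Volatility: given handler addresses recovered from a memory image and the address ranges of legitimately loaded modules, flag any handler pointing into unbacked memory. Everything below (types, data, names) is hypothetical; real plugins extract these values from the memory image itself.

```typescript
// Toy version of the generic check described: an exception handler whose
// target address falls outside every known loaded module is a strong
// indicator of injected code.
type ModuleRange = { name: string; base: bigint; size: bigint };

function findSuspiciousHandlers(
  handlers: bigint[],          // handler addresses enumerated from the image
  modules: ModuleRange[]       // legitimately loaded modules and their ranges
): bigint[] {
  const inKnownModule = (addr: bigint): boolean =>
    modules.some((m) => addr >= m.base && addr < m.base + m.size);
  return handlers.filter((h) => !inKnownModule(h)); // flag the outliers
}
```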

Practical Applications and Detection

The team’s plugins enable real-time detection of EDR-evading malware, providing defenders with actionable insights. Andrew demonstrated how their tools identify suspicious behaviors, such as breakpoints set on critical functions, allowing malicious code to execute undetected. He emphasized the importance of their 19-page white paper, available on the DEF CON website, which documents every known EDR bypass technique in userland. This resource, combined with the open-source plugins, empowers security professionals to strengthen their defenses against sophisticated threats.

Empowering the Cybersecurity Community

Concluding, Andrew encouraged attendees to explore the released plugins and white paper, which include 40 references for in-depth understanding. He stressed the collaborative nature of their work, inviting feedback to refine the Volatility framework. By sharing these tools, Andrew and his team aim to equip defenders with the means to counter evolving malware, ensuring EDR systems remain effective. Their session underscored the critical role of memory forensics in staying ahead of attackers in the cybersecurity landscape.

Links:

[NDCMelbourne2025] A Look At Modern Web APIs You Might Not Know – Julian Burr

As web technologies evolve, the capabilities of browsers have expanded far beyond their traditional roles, often rendering the need for native applications obsolete for certain functionalities. Julian Burr, a front-end engineer with a passion for design systems, delivers an engaging exploration of modern web APIs at NDC Melbourne 2025. Through his demo application, stopme.io—a stopwatch-as-a-service platform—Julian showcases how these APIs can enhance user experiences while adhering to the principle of progressive enhancement. His talk illuminates the power of web APIs to bridge the gap between web and native app experiences, offering practical insights for developers.

The Philosophy of Progressive Enhancement

Julian begins by championing progressive enhancement, a design philosophy that ensures baseline functionality for all users while delivering enhanced experiences for those with modern browsers. Quoting Mozilla, he defines it as providing essential content to as many users as possible while optimizing for advanced environments. This approach is critical when integrating web APIs, as it prevents over-reliance on features that may not be universally supported. For instance, in stopme.io, Julian ensures core stopwatch functionality remains accessible, with APIs adding value only when available. This philosophy guides his exploration, ensuring inclusivity and robustness in application design.

Observing the Web: Resize and Intersection Observers

The first category Julian explores is observability APIs, starting with the Resize Observer and Intersection Observer. These APIs, widely supported, allow developers to monitor changes in DOM element sizes and visibility within the viewport. In stopme.io, Julian uses the Intersection Observer to load JavaScript chunks only when components become visible, optimizing performance. While CSS container queries address styling needs, these APIs enable dynamic behavioral changes, making them invaluable for frameworks like Astro that rely on code splitting. Julian emphasizes their relevance, as they underpin many modern front-end optimizations, enhancing user experience without compromising accessibility.
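
A minimal sketch of that pattern, assuming a stopwatch widget and a hypothetical module path, could look like this; note the eager-load fallback in the spirit of progressive enhancement.

```typescript
// Defer loading a component's code until it scrolls into view.
const target = document.querySelector("#lap-history"); // hypothetical element

if (target && "IntersectionObserver" in window) {
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        void import("./lap-history-widget.js"); // hypothetical chunk, loaded on demand
        observer.disconnect();                  // one-shot: stop observing afterwards
      }
    }
  });
  observer.observe(target);
} else if (target) {
  void import("./lap-history-widget.js"); // fallback: no observer support, load eagerly
}
```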

Enhancing User Context: Network and Battery Status

Julian then delves into APIs that provide contextual awareness, such as the Page Visibility API, Network Information API, and Battery Status API. The Page Visibility API allows stopme.io to update the browser title bar with the timer status when the tab is inactive, enabling users to multitask. The Network Information API offers insights into connection types, allowing developers to serve lower-resolution assets on cellular networks. Similarly, the Battery Status API warns users of potential disruptions due to low battery, as demonstrated when stopme.io alerts users about long-running timers. Julian cautions about fingerprinting risks, noting that browser vendors intentionally reduce accuracy to protect privacy, aligning with progressive enhancement principles.
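
A condensed sketch of two of these checks follows; visibilitychange and navigator.getBattery() are real APIs, though the latter is not universally supported, and the timer helper and title format are invented here.

```typescript
declare function formatElapsed(): string; // assumed helper from the stopwatch app

// Page Visibility API: mirror the timer into the tab title while the page is hidden.
document.addEventListener("visibilitychange", () => {
  document.title = document.hidden ? `(${formatElapsed()}) stopme.io` : "stopme.io";
});

// Battery Status API: warn before a long-running timer might be cut short.
if ("getBattery" in navigator) {
  (navigator as any).getBattery().then((battery: { level: number; charging: boolean }) => {
    if (battery.level < 0.1 && !battery.charging) {
      console.warn("Battery low: a long-running timer may be interrupted.");
    }
  });
}
```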

Elevating Components: Screen Wake Lock and Vibration

Moving to component enhancement, Julian highlights the Screen Wake Lock and Vibration APIs. The Screen Wake Lock API prevents devices from entering sleep mode during critical tasks, such as keeping stopme.io’s timer visible. The Vibration API adds haptic feedback, like notifying users when a timer finishes, with customizable patterns for engaging effects. Julian stresses user control, suggesting toggles to avoid intrusive experiences. While these APIs—often Chrome-centric—enhance interactivity, Julian underscores the need for fallback options to maintain functionality across browsers, ensuring no user is excluded.
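
Both APIs feature-detect cleanly, so a sketch along these lines degrades to a no-op in browsers that lack them; the function names are illustrative.

```typescript
// Keep the display awake while a timer is on screen (largely Chromium-only).
async function keepScreenOn(): Promise<void> {
  if (!("wakeLock" in navigator)) return; // unsupported: quietly skip
  try {
    const sentinel = await (navigator as any).wakeLock.request("screen");
    // The lock auto-releases when the tab is hidden; calling sentinel.release()
    // later is how a user-facing toggle would opt out.
    void sentinel;
  } catch {
    // Request can be refused (e.g. battery saver); fall back silently.
  }
}

// Haptic feedback when the timer finishes: vibrate, pause, vibrate (milliseconds).
function buzzOnFinish(): void {
  if ("vibrate" in navigator) {
    navigator.vibrate([200, 100, 200]);
  }
}
```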

Native-Like Experiences: File System, Clipboard, and Share APIs

Julian showcases APIs that rival native app capabilities, including the File System, Clipboard, and Share APIs. The File System API enables file picker interactions, while the Clipboard API facilitates seamless copy-paste operations. The Share API, used in stopme.io, triggers native sharing dialogs, simplifying content distribution across platforms. These APIs, inspired by tools like Cordova, reflect the web’s evolution toward native-like functionality. Julian notes their security mechanisms, such as transient activation for the Clipboard API, which requires user interaction to prevent misuse, ensuring both usability and safety.
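
A layered share helper shows how these APIs compose under progressive enhancement: try the native share sheet, fall back to the clipboard, otherwise do nothing. The title and URL are placeholders.

```typescript
// Share with graceful fallbacks; both branches require a user gesture.
async function shareResult(url: string): Promise<void> {
  if (navigator.share) {
    await navigator.share({ title: "My stopwatch result", url }); // native share dialog
  } else if (navigator.clipboard) {
    await navigator.clipboard.writeText(url); // transient activation required
  }
}

// Typical wiring: button.addEventListener("click", () => shareResult(location.href));
```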

Streamlining Authentication: Web OTP and Credential Management

Authentication APIs, such as Web OTP and Credential Management, offer seamless login experiences. The Web OTP API automates SMS-based one-time password entry, as demonstrated in stopme.io, where Chrome facilitates code sharing across devices. The Credential Management API streamlines password storage and retrieval, reducing login friction. Julian highlights their synergy with the Web Authentication API, which supports passwordless logins via biometrics. These APIs, widely available, enhance security and user convenience, making them essential for modern web applications.
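
A hedged sketch of the Web OTP flow appears below; OTPCredential and the otp option are Chromium-era APIs, and the form selectors are made up.

```typescript
// Web OTP: the browser offers the code from an incoming, origin-bound SMS.
const input = document.querySelector<HTMLInputElement>("#otp"); // hypothetical field

if (input && "OTPCredential" in window) {
  const ac = new AbortController();
  navigator.credentials
    .get({ otp: { transport: ["sms"] }, signal: ac.signal } as any)
    .then((otp: any) => {
      if (otp) input.value = otp.code; // auto-fill the one-time password
    })
    .catch(() => {
      /* user dismissed or timed out: they can still type manually */
    });
  // Abort the pending request if the user submits the form first.
  document.querySelector("form")?.addEventListener("submit", () => ac.abort());
}
```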

Links:

[OxidizeConf2024] C++ Migration Strategies

Bridging Legacy and Modern Systems

The push for memory-safe software has highlighted the limitations of C++ in today’s data-driven landscape, prompting organizations to explore Rust as a modern alternative. At OxidizeConf2024, Til Adam from KDAB and Florian Gilcher from Ferrous Systems presented a strategic approach to migrating from C++ to Rust, emphasizing an “Aloha” philosophy of compassion and pragmatism. With decades of experience in C++ development, Til and Florian offered insights into integrating Rust into existing codebases without requiring a full rewrite, ensuring efficiency and safety.

Drawing an analogy to music, Til compared C++ to a complex electric guitar and Rust to a simpler, joy-inducing ukulele. While C++ remains prevalent in legacy systems, its complexity can lead to errors, particularly in concurrency and memory management. Rust’s memory safety and modern features address these issues, but Florian stressed that migration must be practical, leveraging the C Application Binary Interface (C ABI) as a lingua franca for interoperability. This allows Rust and C++ to coexist, enabling incremental adoption in existing projects.

Practical Migration Techniques

The speakers outlined a multi-level migration strategy, starting with low-level integration using the C ABI. This involves wrapping C++ code in extern "C" functions to interface with Rust, a process Florian described as effective but limited by platform-specific complexities. Tools like cxx and autocxx simplify this by automating bindings, though challenges remain with templates and generics. Til shared examples from KDAB’s projects, where Rust was integrated into C++ codebases for specific components, reducing security risks without disrupting existing functionality.

At a higher level, Florian advocated for a component-based approach, where Rust modules replace C++ components in areas like concurrency or security-critical code. This strategy maximizes Rust’s benefits, such as its borrow checker, while preserving investments in C++ code. The speakers emphasized identifying high-value areas—such as modules prone to concurrency issues—where Rust’s safety guarantees provide immediate benefits, aligning with organizational goals like regulatory compliance and developer productivity.

Balancing Innovation and Pragmatism

Migration decisions are driven by a balance of enthusiasm, regulatory pressure, and resource constraints. Til noted that younger developers often champion Rust for its modern features, while executives respond to mandates like the NSA’s push for memory-safe languages. However, budget and time limitations necessitate a pragmatic approach, focusing on areas where Rust delivers significant value. Florian highlighted successful migrations at KDAB and Ferrous Systems, where Rust was adopted for new features or rewrites of problematic components, improving safety and maintainability.

The speakers also addressed challenges, such as the lack of direct support for C++ templates in Rust. While tools like autocxx show promise, ongoing community efforts are needed to enhance interoperability. By sharing best practices and real-world examples, Til and Florian underscored Rust’s potential to modernize legacy systems, fostering a collaborative approach to innovation that respects existing investments while embracing future-ready technologies.

Links:

[DotJs2024] Your App Crashes My Browser

Amid the ceaseless churn of web innovation lurks a stealthy saboteur: memory leaks that silently erode browser vitality until the dreaded “Out of Memory” crash. Stoyan Stefanov, entrepreneur and web performance veteran with roots at Facebook, confronted this scourge at dotJS 2024. Once responsible for optimizing vast social feeds, Stoyan has moved from crisis aversion (hard reloads after navigation) to empowerment through diagnostics. His manifesto: arm developers with telemetry and detective work to exorcise these phantoms, so apps endure without devouring RAM.

Stoyan’s alarm is borne out by Nolan Lawson’s audit, which found that the past decade’s top SPAs leaked memory without exception, underscoring how pervasive the problem is even in elite codebases. Stoyan carries his own scars, from a social giant’s browser-crushing sprawl mitigated by crude resets to the thrill of unearthing culprits without a post-mortem. The remedy starts with candor: the Reporting API acts as a beacon that flags crashes in the wild, piping diagnostics to an endpoint for pattern mining. This passive vigilance, triggered on out-of-memory crashes, unmasks failures in the field, from rogue closures retaining detached DOM nodes to event listeners orphaned after unmount.
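
A sketch of that telemetry setup might look like the following. Crash reports (including out-of-memory) are delivered out-of-band to an endpoint declared in a response header, while an in-page ReportingObserver surfaces deprecations and browser interventions; the endpoint URL is a placeholder.

```typescript
// Server side, declare where the browser should deliver reports, e.g.:
//
//   Reporting-Endpoints: default="https://telemetry.example.com/reports"
//
// Crash reports arrive at that endpoint even though the page itself is gone.
// In-page, a ReportingObserver catches deprecations and interventions:
if ("ReportingObserver" in window) {
  const observer = new (window as any).ReportingObserver(
    (reports: any[]) => {
      for (const report of reports) {
        console.log(report.type, report.body); // or forward to your own endpoint
      }
    },
    { buffered: true } // include reports generated before the observer existed
  );
  observer.observe();
}
```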

Diagnosis follows a ritual: take heap snapshots bracketing an action, run garbage collection sweeps to purify the baseline, and diff the snapshots to reveal retainers. Stoyan evangelized Memlab, Facebook’s open-source CLI, which automates Puppeteer-driven cycles (load, act, revert) and yields lucid diffs such as “1,000 objects retained via an EventListener cascade.” For the uninitiated, his Recorder extension chronicles clicks into scenario scripts, demystifying Puppeteer. Leaks manifest insidiously: globals never nulled, listener phantoms in React class components where addEventListener has no symmetric removal, hoarding callbacks eternally.
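
A Memlab scenario is just a small module naming the page, the suspect action, and how to undo it; this minimal example assumes memlab is installed, and the URL, selectors, and import path are placeholders.

```typescript
// leak-scenario.ts: run with `npx memlab run --scenario <compiled file>`,
// or write the same object shape in plain JavaScript.
import type { IScenario } from "@memlab/api"; // import path is an assumption

const scenario: IScenario = {
  url: () => "https://example.com/app",
  // action: the interaction suspected of leaking (e.g. open a panel)
  action: async (page) => {
    await page.click("#open-details");
  },
  // back: revert the interaction; whatever stays retained is a leak candidate
  back: async (page) => {
    await page.click("#close-details");
  },
};

export default scenario;
```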

Remediation is usually simple: sever references, whether through null assignments or framework hooks like useEffect cleanups, and let garbage collection reclaim the rest. Stoyan’s ethos: paranoia pays. Leaks infest every codebase, but tools tame them, from Memlab’s precision on a maps app (hotel overlays persisting after dismissal) to routine listener audits. In an age of sprawling SPAs, this vigilance is not a luxury but a lifeline, sparing users frustration and browsers their demise.
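
The fix pattern he recommends amounts to a symmetric teardown for every subscription, for example with a React useEffect cleanup; the resize handler here is purely illustrative.

```typescript
import { useEffect } from "react";

// Subscribe on mount, unsubscribe on unmount: the listener and everything
// it closes over become collectable again.
function useWindowResizeLog(): void {
  useEffect(() => {
    const onResize = () => console.log(window.innerWidth);
    window.addEventListener("resize", onResize);
    // Without this cleanup, every mount leaks a handler and its closure.
    return () => window.removeEventListener("resize", onResize);
  }, []);
}
```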

Unveiling Leaks in the Wild

Stoyan spotlighted the Reporting API’s prowess: crash telemetry streams to logs, letting teams correlate out-of-memory crashes with usage spikes. Nolan Lawson’s survey affirmed that elite apps falter uniformly, from unpruned caches to timers that never stop. Proactive profiling, with snapshots taken before and after key actions, exposes retain cycles, and Memlab automates the process to spotlight listener detritus and closure traps.

Tools and Tactics for Eradication

Memlab’s workflow pairs Puppeteer orchestration with intelligent diffs that trace leaks to their sources, for example a thousand objects held by unremoved handlers. Stoyan’s Recorder extension eases entry with click-to-script capture. Fixes favor finality: removeEventListener in disposal paths, null assignments for orphans. Paranoia’s yield: resilient apps and jubilant users.

Links:

[DevoxxUK2025] Platform Engineering: Shaping the Future of Software Delivery

Paula Kennedy, co-founder and COO of Cintaso, delivered a compelling lightning talk at DevoxxUK2025, tracing the evolution of platform engineering and its impact on software delivery. Drawing from over a decade of experience, Paula explored how platforms have shifted from siloed operations to force multipliers for developer productivity. Referencing the journey from DevOps to PaaS to Kubernetes, she highlighted current trends like inner sourcing and offered practical strategies for assessing platform maturity. Her narrative, infused with lessons from the past and present, underscored the importance of a user-centered approach to avoid the pitfalls of hype and ensure platforms drive innovation.

The Evolution of Platforms

Paula began by framing platforms as foundations that elevate development, drawing on Gregor Hohpe’s analogy of a Volkswagen chassis enabling diverse car models. She recounted her career, starting in 2002 at Acturus, a SaaS provider with rigid silos between developers and operations. The DevOps movement, sparked in 2009, sought to bridge these divides, but its “you build it, you run it” mantra often overwhelmed teams. The rise of Platform-as-a-Service (PaaS), exemplified by Cloud Foundry, simplified infrastructure management, allowing developers to focus on code. However, Paula noted, the complexity of Kubernetes led organizations to build custom internal platforms, sometimes losing sight of the original value proposition.

Current Trends and Challenges

Today, platform engineering is at a crossroads, with Gartner predicting that by 2026, 80% of large organizations will have dedicated teams. Paula highlighted principles like self-service APIs, internal developer portals (e.g., Backstage), and golden paths that guide developers to best practices. She emphasized treating platforms as products, applying product management practices to align with user needs. However, the 2024 DORA report reveals challenges: while platforms boost organizational performance, they often fail to improve software reliability or delivery throughput. Paula attributed this to automation complacency and “platform complacency,” where trust in internal platforms leads to reduced scrutiny, urging teams to prioritize observability and guardrails.

Links:

[Oracle Dev Days 2025] Optimizing Java Performance: Choosing the Right Garbage Collector

Jean-Philippe Bempel, a seasoned developer at Datadog and a Java Champion, delivered an insightful presentation on selecting and tuning Garbage Collectors (GCs) in OpenJDK to enhance Java application performance. His talk, rooted in practical expertise, unraveled the complexities of GCs, offering a roadmap for developers to align their choices with specific application needs. By dissecting the characteristics of various GCs and their suitability for different workloads, Jean-Philippe provided actionable strategies to optimize memory management, reduce production issues, and boost efficiency.

Understanding Garbage Collectors in OpenJDK

Garbage Collectors are pivotal in Java’s memory management, silently handling memory allocation and reclamation. However, as Jean-Philippe emphasized, a misconfigured GC can lead to significant performance bottlenecks in production environments. OpenJDK offers a suite of GCs—Serial GC, Parallel GC, G1, Shenandoah, and ZGC—each designed with distinct characteristics to cater to diverse application requirements. The challenge lies in selecting the one that best matches the workload, whether it prioritizes throughput or low latency.

Jean-Philippe began by outlining the foundational concepts of GCs, particularly the generational model. Most GCs in OpenJDK are generational, dividing memory into the Young Generation (for short-lived objects) and the Old Generation (for longer-lived objects). The Young Generation is further segmented into the Eden space, where new objects are allocated, and Survivor spaces, which hold objects that survive initial collections before promotion to the Old Generation. Additionally, the Metaspace stores class metadata, a critical but often overlooked component of memory management.

Serial GC: Simplicity for Constrained Environments

The Serial GC, one of the oldest collectors, operates with a single thread and employs a stop-the-world approach, pausing all application threads during collection. Jean-Philippe highlighted its suitability for small-scale applications, particularly those running in containers with less than 2 GB of RAM, where it serves as the default GC. Its simplicity makes it ideal for environments with limited resources, but its stop-the-world nature can introduce noticeable pauses, making it less suitable for latency-sensitive applications.

To illustrate, Jean-Philippe explained the mechanics of the Young Generation’s Survivor spaces. These spaces, S0 and S1, alternate roles as source and destination during minor GC cycles, copying live objects to manage memory efficiently. Objects surviving multiple cycles are promoted to the Old Generation, reducing the overhead of frequent collections. This generational approach leverages the hypothesis that most objects die young, minimizing the cost of memory reclamation.
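
The copying scheme is easier to see in a toy model. The sketch below, in TypeScript purely for illustration (real collectors operate on memory regions inside the JVM, not object arrays), shows Eden and the active survivor space emptying into the other survivor space, with sufficiently aged objects promoted to the old generation; the tenure threshold is an assumption.

```typescript
// Toy model of a minor GC in a generational, copying collector.
type HeapObj = { id: number; age: number; live: boolean };

const TENURE_AGE = 6; // promotion threshold; the JVM's is adaptive (often up to 15)

function minorGC(
  eden: HeapObj[],
  survivorFrom: HeapObj[],
  oldGen: HeapObj[]
): HeapObj[] {
  const survivorTo: HeapObj[] = []; // destination survivor space, empty before the cycle
  for (const obj of [...eden, ...survivorFrom]) {
    if (!obj.live) continue;        // dead objects are simply abandoned, not copied
    obj.age += 1;
    if (obj.age >= TENURE_AGE) {
      oldGen.push(obj);             // survived enough cycles: promote to old generation
    } else {
      survivorTo.push(obj);         // copy live object into the other survivor space
    }
  }
  eden.length = 0;                  // Eden is empty after a minor GC
  return survivorTo;                // S0/S1 swap roles for the next cycle
}
```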

Parallel GC: Maximizing Throughput

For applications prioritizing throughput, such as batch processing jobs, the Parallel GC offers significant advantages. Unlike the Serial GC, it leverages multiple threads to reclaim memory, making it efficient for systems with ample CPU cores. Jean-Philippe noted that it was the default GC until JDK 8 and remains a strong choice for throughput-oriented workloads like Spark jobs, Kafka consumers, or ETL processes.

The Parallel GC, also stop-the-world, excels in scenarios where total execution time matters more than individual pause durations. Jean-Philippe shared a benchmark using a JFR (Java Flight Recorder) file parsing application, where Parallel GC outperformed others, achieving a throughput of 97% (time spent in application versus GC). By tuning the Young Generation size to reduce frequent minor GCs, developers can further minimize object copying, enhancing overall performance.

G1 GC: Balancing Throughput and Latency

The G1 (Garbage-First) GC, default since JDK 9 for heaps larger than 2 GB, strikes a balance between throughput and latency. Jean-Philippe described its region-based memory management, dividing the heap into smaller regions (Eden, Survivor, Old, and Humongous for large objects). This structure allows G1 to focus on regions with the most garbage, optimizing memory reclamation with minimal copying.

In his benchmark, G1 showed a throughput of 85%, with average pause times of 76 milliseconds, aligning with its target of 200 milliseconds. However, Jean-Philippe pointed out challenges with Humongous objects, which can increase GC frequency if not managed properly. By adjusting region sizes (up to 32 MB), developers can mitigate these issues, improving throughput for applications like batch jobs while maintaining reasonable pause times.

Shenandoah and ZGC: Prioritizing Low Latency

For latency-sensitive applications, such as HTTP servers or microservices, Shenandoah and ZGC are the go-to choices. These concurrent GCs minimize pause times, often below a millisecond, by performing most operations alongside the running application. Jean-Philippe highlighted Shenandoah’s non-generational approach (though a generational version is in development) and ZGC’s generational support since JDK 21, making the latter particularly efficient for large heaps.

In a latency-focused benchmark using a Spring PetClinic application, Jean-Philippe demonstrated that Shenandoah and ZGC maintained request latencies below 200 milliseconds, significantly outperforming Parallel GC’s 450 milliseconds at the 99th percentile. ZGC’s use of colored pointers and load/store barriers ensures rapid memory reclamation, allowing regions to be freed early in the GC cycle, a key advantage over Shenandoah.

Tuning Strategies for Optimal Performance

Tuning GCs is as critical as selecting the right one. For Parallel GC, Jean-Philippe recommended sizing the Young Generation to reduce the frequency of minor GCs, ideally exceeding 50% of the heap to minimize object copying. For G1, adjusting region sizes can address Humongous object issues, while setting a maximum pause time target (e.g., 50 milliseconds) can shift its behavior toward latency sensitivity, though it may not compete with Shenandoah or ZGC in extreme cases.

For concurrent GCs like Shenandoah and ZGC, ensuring sufficient heap size and CPU cores prevents allocation stalls, where threads wait for memory to be freed. Jean-Philippe emphasized that Shenandoah requires careful heap sizing to avoid full GCs, while ZGC’s rapid region reclamation reduces such risks, making it more forgiving for high-allocation-rate applications.

Selecting the Right GC for Your Workload

Jean-Philippe concluded by categorizing workloads into two types: throughput-oriented and latency-sensitive. For throughput-oriented workloads, such as batch jobs or ETL processes, Parallel GC or G1 are optimal, with Parallel GC offering easier tuning for predictable performance. For latency-sensitive applications, like microservices or databases (e.g., Cassandra), ZGC’s generational efficiency and Shenandoah’s low-pause capabilities shine, with ZGC particularly effective for large heaps.

By analyzing workload characteristics and leveraging tools like GC Easy for log analysis, developers can make informed GC choices. Jean-Philippe’s benchmarks underscored the importance of tailoring GC configurations to specific use cases, ensuring both performance and stability in production environments.

Links:

Hashtags: #Java #GarbageCollector #OpenJDK #Performance #Tuning #Datadog #JeanPhilippeBempel #OracleDevDays2025

[DefCon32] Digital Emblems—When Markings Are Required, but You Have No Rattle-Can

Bill Woodcock, a seasoned contributor to the Internet Engineering Task Force (IETF), presented an insightful session at DEF CON 32 on the development of digital emblems. These digital markers aim to replace or supplement physical markings required under international law, such as those on ISO containers, press vests, or humanitarian symbols like UN blue helmets. Bill’s work, conducted within the IETF, leverages protocols like DNS and DNSSEC to create a global, cryptographically secure marking system. His talk explored the technical and security implications of this standardization effort, inviting feedback from the DEF CON community on potential vulnerabilities.

The Need for Digital Emblems

Bill introduced the concept of digital emblems, explaining their necessity in an increasingly digitized world. Physical markings, such as serial numbers on shipping containers or symbols on humanitarian vehicles, are critical for compliance with international regulations. However, as processes like border transport and battlefield protections become digitized, these markings must transition to machine-readable formats. Bill outlined how the IETF’s proposed standard aims to create a unified protocol for digital emblems, ensuring they are scannable, cryptographically verifiable, and adaptable to various use cases, from logistics to military operations.

Technical Foundations and Challenges

Delving into the technical details, Bill described how the digital emblem system builds on existing protocols like DNS and DNSSEC, enabling robust validation without constant network connectivity. He highlighted the ability to embed significant data in devices like RFID tags, allowing offline validation through cached root signatures. However, Bill acknowledged challenges, particularly in ensuring the security of these emblems against adversarial tampering. He noted that military use cases, where covert validation is critical, pose unique risks, as adversaries could mislabel objects to deceive validators, necessitating strong cryptographic protections.

Security and Privacy Considerations

Bill addressed the security and privacy concerns raised by digital emblems, particularly in adversarial scenarios. He explained that the system allows for covert inspection, enabling validators to check emblems without alerting potential attackers. However, he cautioned that physical binding remains a weak point, as malicious actors could exploit mislabeled objects in conflict zones. Bill invited the DEF CON audience to scrutinize the proposed standard for vulnerabilities, emphasizing the importance of community input to harden the system against attacks, especially in high-stakes military and humanitarian contexts.

Shaping the Future of Digital Standards

Concluding, Bill underscored the potential of digital emblems to streamline global processes while enhancing security. He encouraged the DEF CON community to engage with the IETF’s ongoing work, accessible via the provided URLs, to contribute to refining the standard. By addressing vulnerabilities and ensuring robust cryptographic validation, Bill envisions a future where digital emblems enhance trust and compliance across borders and battlefields. His call to action resonated with the audience, inviting hackers to play a pivotal role in shaping this emerging technology.

Links: