Archive for the ‘en-US’ Category
[NDCOslo2024] Get Old, Go Slow, Write Code! – Tobias Modig
In the inexorable march toward maturity within the software realm, where velocity often eclipses wisdom, Tobias Modig, a veteran developer and agile enthusiast, delivers a defiant ode to senescence and serenity. With decades of debugging and deployment under his belt, Tobias dismantles the dread of obsolescence, championing the virtues of deliberate deliberation over frenetic fervor. His manifesto, infused with humor and historical homage, reframes aging as an asset, urging seasoned coders to linger in their craft, cultivating depth that outlasts the dash of youth.
Tobias sets the stage with three audacious aims: extol the merits of maturation, savor the elegance of unhurried execution, and exhort eternal engagement with the keyboard. He concedes the tribulations of tenure—framework flux, fledgling fluency—yet counters with conviction: the elder’s edge lies in equanimity, a measured mastery that millennials might mistake for malaise. Drawing from personal peregrinations, Tobias recounts races against rookies, where haste harvested hazards, while patience polished prowess.
Embracing Maturity: The Gifts of Graying Grace
Aging, Tobias asserts, accrues acuity: accumulated anecdotes afford anticipation of anomalies, sparing the squad from snafus. He invokes the Peter Principle’s peril—that of ascending to incompetence—warning against the siren song of supervisory seclusion. Developers, he declares, thrive in trenches, where tactile troubleshooting trumps theoretical tenure. His anecdote: a mid-career pivot to management, marred by monotony, until a return to roots reignited rapture.
Deliberation distinguishes the doyen: novices navigate novelties nimbly, yet veterans vet viability, averting avoidable adventures. Tobias’s tenet: slowness safeguards sustainability, yielding code that’s not just correct but crafted with care, comprehensible to cohorts centuries hence.
Deliberate Deliberation: The Delights of Dawdling Development
Haste, Tobias laments, harbors hubris: crammed calendars court catastrophe, as unforeseen exigencies eclipse equilibrium. He likens laden ledgers to jammed junctions—a single snag spawns stalemates. His remedy: infuse interstices—unallocated intervals for introspection, ideation, or intercession—transforming tension into tranquility.
This tempo tempers teams: slack spaces spawn serendipity, where neighboring novices nurture under seasoned scrutiny, sans overtime’s overhang. Tobias’s triumph: a project propelled by pauses, where prototypes pondered yielded paradigms that persisted, proving premeditation’s primacy.
Perpetual Pursuit: Coding as Continuum
Tobias’s triad culminates in commitment: code ceaselessly, defying the drift to desks. He bewails the “developer lifecycle”—from coder to curator to custodian—as a cul-de-sac of creativity. His exhortation: evade elevation, or equilibrate it with engagements that endure—pairing, mentoring, moonlighting.
His horizon: harness hoariness as hegemony, letting longevity lead, as the world whirls while wisdom waits.
Links:
[DefCon32] Spies and Bytes: Victory in the Digital Age
Cyber warfare reshapes global security, demanding agility and collaboration. General Paul M. Nakasone, retired U.S. Army and former director of the NSA and U.S. Cyber Command, shares insights from his career defending against nation-state hackers. His narrative, rooted in real-world operations, highlights strategies for securing critical infrastructure and countering sophisticated threats. Now founding director of Vanderbilt University’s Institute for National Security, Paul envisions a future where adaptive cyber strategies and new leadership tackle emerging challenges.
Paul’s experiences, from thwarting cyberattacks to fostering international alliances, underscore the importance of transparency and intelligence sharing. His forward-looking vision emphasizes resilience and interdisciplinary approaches to safeguard the digital frontier.
Defending Against Nation-State Threats
Paul recounts operations against adversaries like China and Russia, where rapid intelligence sharing thwarted attacks on U.S. infrastructure. As NSA director, he prioritized real-time collaboration with allies, disrupting cyber campaigns targeting elections and utilities.
These efforts highlight the need for dynamic defenses, adapting to adversaries’ evolving tactics in a borderless digital battlefield.
Building Resilient Cyber Defenses
At U.S. Cyber Command, Paul oversaw strategies integrating offensive and defensive operations. He describes fortifying critical systems, like power grids, through persistent engagement—proactively disrupting attacker infrastructure. Partnerships with the private sector, including tech giants, amplified these efforts, leveraging collective expertise.
Transparency in operations, he argues, builds trust and deters adversaries, a lesson drawn from high-stakes missions.
The Role of Intelligence and Alliances
International cooperation was central to Paul’s tenure. Alliances with NATO and Five Eyes nations enabled coordinated responses to threats, such as ransomware campaigns. Intelligence-driven operations, blending human and technical sources, provided actionable insights, often preventing attacks before they materialized.
This collaborative model sets a benchmark for future cyber defense, emphasizing shared responsibility.
Shaping the Future of Cybersecurity
At Vanderbilt, Paul aims to cultivate young leaders through the Institute for National Security, launching in October 2025. By integrating AI, cybersecurity, and decision-making, the institute addresses the industry’s age gap, where most professionals are over 50. He invites the DEF CON community to join, fostering innovation through partnerships and open dialogue.
Links:
[DevoxxBE2024] AI and Code Quality: Building a Synergy with Human Intelligence by Arthur Magne
In a session at Devoxx Belgium 2024, Arthur Magne, CPO and co-founder of Packmind, explored how AI can enhance code quality when guided by human expertise. Addressing the rapid rise of AI-generated code through tools like GitHub Copilot, Arthur highlighted the risks of amplifying poor practices and the importance of aligning AI outputs with team standards. His talk showcased Packmind’s approach to integrating AI with human-defined best practices, enabling teams to maintain high-quality, maintainable code while leveraging AI’s potential to accelerate learning and enforce standards.
The Double-Edged Sword of AI in Software Development
Arthur opened with Marc Andreessen’s 2011 phrase, “Software is eating the world,” updating it to reflect AI’s current dominance in code generation. Tools like GitHub Copilot and Codium produce vast amounts of code, but their outputs reflect the quality of their training data—often outdated or flawed, as noted by Veracode’s Chris Wysopal. A 2024 Uplevel study found 41% more bugs in AI-assisted code among 800 developers, and GitClear’s analysis showed a 100% increase in code churn since AI’s rise in 2022, indicating potential technical debt. Arthur argued that while AI boosts individual productivity (88% of developers feel faster, per GitHub), team-level benefits are limited without proper guidance, as code reviews and bug fixes offset time savings.
The Role of Human Guidance in AI-Driven Development
AI lacks context about team-specific practices, such as security, accessibility, or architecture preferences, leading to generic or suboptimal code. Arthur emphasized the need for human guidance to steer AI outputs. By explicitly defining best practices—covering frameworks like Spring, security protocols, or testing strategies—teams can ensure AI generates code aligned with their standards. However, outdated documentation, like neglected Confluence pages, can mislead AI, amplifying hidden issues. Arthur advocated for a critical human-in-the-loop approach, where developers validate AI suggestions and integrate company context to produce high-quality, maintainable code.
Packmind’s Solution: AI as a Technical Coach
Packmind, a tool developed over four years, acts as an IDE and browser extension for platforms like GitHub and GitLab, enabling teams to define and share best practices. Arthur demonstrated how Packmind identifies practices during code reviews, such as preferring flatMap over for loops with concatenation in TypeScript or Java for performance. Developers can flag negative examples (e.g., inefficient loops) or positive ones (e.g., standardized loggers) and create structured practice descriptions with AI assistance, including “what,” “why,” and “how to fix” sections. These practices are validated through team discussions or communities of practice, ensuring consensus before enforcement. Packmind’s AI suggests improvements, generates guidelines, and integrates with tools like GitHub Copilot to produce code adhering to team standards.
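For illustration, the flatMap practice Arthur mentions can be sketched in Java. The class and method names below are ours, not Packmind’s; the point is the “negative example / positive example” pairing a team might record:

```java
import java.util.ArrayList;
import java.util.List;

public class FlatMapPractice {
    // Negative example: rebuilding the result list on every iteration
    // turns a linear merge into an O(n^2) one.
    static List<String> mergeWithConcat(List<List<String>> groups) {
        List<String> result = List.of();
        for (List<String> group : groups) {
            List<String> merged = new ArrayList<>(result); // full copy each time
            merged.addAll(group);
            result = merged;
        }
        return result;
    }

    // Positive example: one flatMap pipeline, no intermediate copies.
    static List<String> mergeWithFlatMap(List<List<String>> groups) {
        return groups.stream()
                     .flatMap(List::stream)
                     .toList();
    }

    public static void main(String[] args) {
        List<List<String>> groups = List.of(List.of("a", "b"), List.of("c"));
        System.out.println(mergeWithFlatMap(groups)); // [a, b, c]
    }
}
```

Both methods return the same result; a recorded practice would pair the “why” (quadratic copying) with the “how to fix” (the stream pipeline).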
Enforcing Standards and Upskilling Teams
Once validated, practices are enforced via Packmind’s IDE extension, which flags violations and suggests fixes tailored to team conventions, akin to a customized SonarQube. For example, a team preferring TestNG over JUnit can configure AI to generate compliant test cases. Arthur highlighted Packmind’s role in upskilling, allowing junior developers to propose practices and learn from team feedback. AI-driven practice reviews, conducted biweekly or monthly, foster collaboration and spread expertise across organizations. Studies cited by Arthur suggest that teams using AI without understanding underlying practices struggle to maintain code post-project, underscoring the need for AI to augment, not replace, human expertise.
Balancing Productivity and Long-Term Quality
Quoting Kent Beck, Arthur noted that AI automates 80% of repetitive tasks, freeing developers to focus on high-value expertise. Packmind’s process ensures AI amplifies team knowledge rather than generic patterns, reducing code review overhead and technical debt. By pushing validated practices to tools like GitHub Copilot, teams achieve consistent, high-quality code. Arthur concluded by stressing the importance of explicit standards and critical evaluation to harness AI’s potential, inviting attendees to discuss further at Packmind’s booth. His talk underscored a synergy where AI accelerates development while human intelligence ensures lasting quality.
Links:
[DefCon32] Redefining V2G: How to Use Your Vehicle as a Game Controller
Modern vehicles, intricate networks of computers on wheels, offer more than mobility—they can become game controllers. Timm Lauser and Jannis Hamborg, researchers from P3 Group, present Vehicle-to-Game (V2G), a Python-based project that transforms cars into Bluetooth gamepads. By leveraging the CAN bus or OBD2 port, V2G maps vehicle inputs like steering or pedals to game controls, blending automotive hacking with playful innovation.
Timm and Jannis, driven by curiosity about vehicle networks, developed V2G to run on laptops or Raspberry Pi Zero WH, requiring reverse-engineering of CAN messages or UDS diagnostics. Their work, accessible via a public GitHub repository, invites enthusiasts to explore car interfaces while highlighting the accessibility of automotive security research.
Understanding Vehicle Networks
Timm explains vehicle architectures, where CAN buses and diagnostic ports like OBD2 facilitate communication between ECUs. V2G intercepts signals from components like the steering wheel or accelerator, translating them into gamepad inputs. This requires understanding proprietary CAN messages, often unique to each vehicle model.
Their Volkswagen ID.3 demo showcases real-time mapping of driving inputs to game controls, illustrating the project’s practicality.
Building the V2G Framework
Jannis details V2G’s implementation, using Python to interface with CAN buses via affordable hardware. The framework supports Bluetooth gamepad emulation, allowing cars to control games like racing simulators. Reverse-engineering CAN signals, though labor-intensive, is achievable with tools like can-utils, making V2G adaptable to various vehicles.
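The core of such a mapping can be sketched in a few lines of Python. The CAN ID, byte layout, and scaling below are invented for illustration—real values are model-specific and must be reverse-engineered, which is exactly the labor-intensive step Jannis describes:

```python
import struct

# Hypothetical CAN frame layout for a steering-angle signal: the real ID,
# byte offsets, and scaling differ per vehicle model.
STEERING_CAN_ID = 0x3D2   # invented example ID
RAW_CENTER = 0x8000       # raw value when the wheel is centered

def steering_to_axis(frame_data: bytes) -> float:
    """Map a raw 16-bit steering value to a gamepad axis in [-1.0, 1.0]."""
    (raw,) = struct.unpack_from(">H", frame_data, 0)
    return max(-1.0, min(1.0, (raw - RAW_CENTER) / RAW_CENTER))

# Reading frames from a real bus would use python-can, roughly:
#   import can
#   bus = can.interface.Bus(channel="can0", interface="socketcan")
#   for msg in bus:
#       if msg.arbitration_id == STEERING_CAN_ID:
#           axis = steering_to_axis(msg.data)
```

The decoded axis value would then be fed into the Bluetooth gamepad emulation layer.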
The open-source release encourages community contributions, with QR codes linking to the repository for further development.
Creative Applications and Challenges
Beyond gaming, V2G sparks interest in automotive interfaces, such as heads-up display integration. Timm and Jannis explore connecting to in-car screens via adapters, though cost remains a barrier. Flight simulator mapping, suggested by an audience member, highlights V2G’s versatility for unconventional inputs.
Challenges include model-specific CAN protocols and hardware costs, but the project lowers barriers for hobbyists and researchers.
Implications for Automotive Security
While playful, V2G underscores the accessibility of vehicle networks, a double-edged sword for security. Exposed interfaces like OBD2 ports are potential attack vectors, urging manufacturers to secure diagnostic communications. Timm and Jannis advocate responsible exploration, fostering learning without compromising safety.
Links:
[DevoxxGR2025] Optimized Kubernetes Scaling with Karpenter
Alex König, an AWS expert, delivered a 39-minute talk at Devoxx Greece 2025, exploring how Karpenter enhances Kubernetes cluster autoscaling for speed, cost-efficiency, and availability.
Karpenter’s Dynamic Autoscaling
König introduced Karpenter as an open-source, Kubernetes-native autoscaling solution, contrasting it with the traditional Cluster Autoscaler. Unlike the latter, which relies on uniform node groups (e.g., nodes with four CPUs and 16GB RAM), Karpenter uses the EC2 Fleet API to dynamically provision nodes tailored to workload needs. For instance, if a pod requires one CPU, Karpenter allocates a node with minimal excess capacity, avoiding resource waste. This right-sizing, combined with groupless scaling, enables faster and more cost-effective scaling, especially in dynamic environments.
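As a sketch of this groupless, right-sized provisioning, a minimal NodePool might look like the following (field names follow the karpenter.sh/v1 API; the capacity types, limits, and names are illustrative, not recommendations):

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: general-purpose
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
  limits:
    cpu: "100"   # cap total CPU this pool may provision
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
```

Rather than fixing an instance type, the requirements leave Karpenter free to pick the cheapest node that fits the pending pods.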
Ensuring Availability with Constraints
König addressed availability challenges reported by users, emphasizing Kubernetes-native scheduling constraints to mitigate disruptions. Topology spread constraints distribute pods across availability zones, reducing the risk of downtime if a node fails. Pod disruption budgets, affinity/anti-affinity rules, and priority classes further ensure critical workloads are scheduled appropriately. For stateful workloads using EBS, König recommended setting the volume binding mode to “wait for first consumer” to avoid pod-volume mismatches across zones, preventing crashes and ensuring reliability.
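Concretely, two of the constraints König describes can be sketched as Kubernetes manifests (the labels and names are illustrative):

```yaml
# Pod-spec fragment: spread replicas across availability zones so the
# loss of one zone cannot take down every pod.
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: my-api
---
# For EBS-backed stateful workloads: delay volume creation until the pod
# is scheduled, so the volume and node land in the same zone.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-wait
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
```

The `WaitForFirstConsumer` binding mode is what implements the “wait for first consumer” recommendation, preventing the pod-volume zone mismatches König warns about.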
Integrating with KEDA for Application Scaling
For advanced scaling, König highlighted combining Karpenter with KEDA for event-driven, application-specific scaling. KEDA scales pods based on metrics like Kafka topic sizes or SQS queues, beyond CPU/memory. Karpenter then provisions nodes for pending pods, enabling seamless scaling for workloads like flash sales. König outlined a four-step migration from Cluster Autoscaler to Karpenter, emphasizing its simplicity and open-source documentation.
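A hypothetical KEDA trigger for the flash-sale scenario might look like this (queue URL, names, and thresholds are invented for illustration):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: checkout-worker
spec:
  scaleTargetRef:
    name: checkout-worker          # the Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 100
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.eu-west-1.amazonaws.com/123456789012/checkout
        queueLength: "20"          # target messages per replica
        awsRegion: eu-west-1
```

KEDA scales the pods on queue depth; any pods that cannot fit on existing nodes go pending, and Karpenter provisions capacity for them.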
Links:
[DefCon32] Splitting the Email Atom: Exploiting Parsers to Bypass Access Controls
Email addresses, seemingly mundane, harbor complexities that can unravel security controls. Gareth Heyes, a security researcher at PortSwigger, exposes how arcane RFC standards governing email parsing enable attackers to bypass access controls. By crafting RFC-compliant email addresses, Gareth demonstrates spoofing domains, accessing internal systems, and executing blind CSS injection. His toolkit, integrated with Burp Suite, automates these attacks, revealing vulnerabilities in applications and libraries.
Gareth’s exploration, rooted in parser discrepancies, shows how seemingly valid emails can route to unintended destinations, undermining Zero Trust architectures. His methodology and open-source tools empower researchers to probe email-handling systems, urging developers to fortify defenses against these subtle yet potent attacks.
The Chaos of Email RFCs
Gareth begins with the convoluted RFCs defining email syntax, which allow exotic encodings like Unicode overflows and encoded words. These standards, often misunderstood, lead to parser inconsistencies. For example, an email ending in @example.com might route elsewhere due to mishandled Unicode or Punycode, breaking domain-based authorization.
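Gareth’s real payloads are more elaborate, but the underlying class of bug—two parsers disagreeing about where the domain begins—can be sketched in a few lines of Python (the address below is an illustrative RFC-compliant example, not one of his actual exploits):

```python
# An RFC-compliant address whose quoted local part contains an "@".
address = '"attacker@evil.example"@example.com'

def naive_domain(addr: str) -> str:
    # Naive parser: everything after the *first* "@" is the domain.
    return addr.split("@", 1)[1]

def quoted_aware_domain(addr: str) -> str:
    # A parser honoring quoted local parts takes the text after the
    # "@" outside the quotes -- here, simply the last "@".
    return addr.rsplit("@", 1)[1]

print(naive_domain(address))         # evil.example"@example.com
print(quoted_aware_domain(address))  # example.com
```

If a registration check uses one parser and the mail router uses the other, an address that “ends in @example.com” passes authorization yet delivers elsewhere.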
Case studies illustrate real-world exploits, including bypassing employee-only registrations and accessing internal systems by exploiting parser flaws.
Exploiting Parser Discrepancies
Using tools like Hackvertor and Turbo Intruder, Gareth automates the generation of malicious email addresses. His Punycode fuzzer, for instance, substitutes placeholders with random characters, uncovering exploitable parser behaviors. A notable exploit involved GitHub’s handling of null characters, found via Turbo Intruder, leading to unauthorized access.
These techniques transform harmless inputs into payloads that misroute emails or inject CSS, compromising application security.
Defensive Strategies
Gareth advocates filtering encoded words and verifying email addresses before use, even from trusted SSO providers. Relying solely on domains for authorization is perilous, as demonstrated by his exploits. Regular expression sanitization and strict validation can mitigate risks, ensuring emails route as intended.
He references influential blog posts by researchers like Pep Villa, emphasizing community knowledge-sharing to bolster defenses.
Tools and Future Research
Gareth’s toolkit, including a Burp Suite wordlist and a vulnerable Joomla Docker instance, enables researchers to replicate his attacks. A Web Security Academy CTF further hones skills in email splitting. He encourages exploring additional parser vulnerabilities, such as those in mailer libraries, to uncover new attack vectors.
Links:
[GoogleIO2024] Under the Hood with Google AI: Exploring Research, Impact, and Future Horizons
Delving into AI’s foundational elements, Jeff Dean, James Manyika, and Koray Kavukcuoglu, moderated by Laurie Segall, discussed Google’s trajectory. Their dialogue traced historical shifts, current breakthroughs, and societal implications, offering profound perspectives on technology’s evolution.
Tracing AI’s Evolution and Key Milestones
Jeff recounted AI’s journey from rule-based systems to machine learning, highlighting neural networks’ resurgence around 2010 due to computational advances. Early applications at Google, like spelling corrections, paved the way for vision, speech, and language tasks. Koray noted hardware investments’ role in enabling generative methods, transforming content creation across fields.
James emphasized AI’s multiplier effect, reshaping sciences like biology and software development. The panel agreed that multimodal, long-context models like Gemini represent the culmination of algorithmic and infrastructural progress, allowing generalization to novel challenges.
Addressing Societal Impacts and Ethical Considerations
James stressed AI’s mirror to humanity, prompting grapples with bias, fairness, and values—issues societies must collectively resolve. Koray advocated responsible deployment, integrating safety from inception through techniques like watermarking and red-teaming. Jeff highlighted balancing innovation with safeguards, ensuring models align with human intent while mitigating harms.
Discussions touched on global accessibility, with efforts to support underrepresented languages and equitable benefits. The leaders underscored collaborative approaches, involving diverse stakeholders to navigate complexities.
Envisioning AI’s Future Applications and Challenges
Koray envisioned AI accelerating healthcare, solving diseases efficiently worldwide. Jeff foresaw enhancements across human endeavors, from education to scientific discovery, if pursued thoughtfully. James hoped AI fosters better humanity, aiding complex problem-solving.
Challenges include advancing agentic systems for multi-step reasoning, improving evaluation beyond benchmarks, and ensuring inclusivity. The panel expressed optimism, viewing AI as an amplifier for positive change when guided responsibly.
Links:
[PHPForumParis 2024] Is WordPress a lost cause?
Cyrille Coquard, a seasoned WordPress developer, takes the stage at PHP Forum Paris 2024 to address a contentious question: is WordPress a lost cause? With humor and insight, Cyrille tackles the platform’s reputation, often marred by perceptions of outdated code and technical debt. By drawing parallels between WordPress and PHP’s evolution, he argues that WordPress is undergoing a quiet transformation toward professionalism. His talk, aimed at PHP developers, demystifies WordPress’s architecture and advocates for modern development practices to elevate its potential.
The Shared Legacy of WordPress and PHP
Cyrille opens by highlighting the intertwined histories of WordPress and PHP, both born in an era of amateur-driven development. This shared origin, while fostering accessibility, has led to technical debt that tarnishes their reputations. He compares WordPress to a “Fiat or Clio”—a practical, accessible tool for the masses—contrasting it with frameworks like Symfony, likened to a high-end race car. This metaphor underscores WordPress’s mission to democratize web creation, prioritizing user-friendliness over developer-centric complexity. Cyrille emphasizes that the platform’s early design choices, while not always optimal, reflect its commitment to simplicity and affordability.
Plugins and Themes: Extending WordPress’s Power
A core strength of WordPress lies in its extensibility through plugins and themes. Cyrille explains how themes allow for visual customization, enabling the 40% of websites powered by WordPress to look unique. Plugins, meanwhile, add functionality or modify behavior, addressing both generic and specific user needs. He illustrates this with examples like WooCommerce for e-commerce and Gravity Forms for form creation. By leveraging pre-existing plugins, developers can meet common requirements efficiently, reserving custom development for unique challenges. This approach, Cyrille notes, significantly reduces costs, as seen in his work at WP Rocket, where a single plugin optimizes performance across millions of sites.
Modernizing WordPress Development with Launchpad
To address WordPress’s development challenges, Cyrille introduces Launchpad, his open-source framework designed to bring modern PHP practices to the WordPress ecosystem. Inspired by Laravel and Symfony, Launchpad simplifies plugin creation by introducing concepts like subscribers and service providers. These patterns, familiar to PHP developers, reduce the learning curve for newcomers while promoting clean, maintainable code. Cyrille demonstrates how to create a simple plugin, emphasizing event-driven development that hooks into WordPress’s core functionality. By providing a standardized, well-documented framework, Launchpad aims to bridge the gap between WordPress’s amateur roots and professional standards.
Links:
Hashtags: #WordPress #PHP #WebDevelopment #Launchpad #PHPForumParis2024 #CyrilleCoquard #WPRocket
[DefCon32] Sshamble: Unexpected Exposures in the Secure Shell
The Secure Shell (SSH), a cornerstone of secure communication, powers a vast array of systems beyond traditional POSIX environments, from network devices to Windows file transfer tools. HD Moore and Rob King, security researchers at Rumble, Inc., delve into the lesser-known implementations of SSH, uncovering surprising vulnerabilities. Their presentation introduces “Sshamble,” an open-source tool designed to probe SSH services, revealing weaknesses in diverse implementations. With OpenSSH dominating 80% of deployments, HD and Rob explore the long tail of alternative servers, exposing flaws like null byte password acceptance in honeypots and key mismanagement.
Their journey, sparked by the XZ backdoor investigation, reveals tens of thousands of vulnerable SSH instances. By analyzing server behaviors and handshake anomalies, Sshamble empowers researchers to identify and exploit misconfigurations, urging a reevaluation of SSH’s assumed security.
The Landscape of SSH Implementations
HD outlines SSH’s evolution from a remote shell to a ubiquitous transport protocol, second only to TLS. While OpenSSH prevails, alternatives like Dropbear and niche libraries in devices and forges introduce variability. Their research uncovers servers accepting invalid credentials or mangled requests, often indicative of honeypots or flawed implementations. For instance, many honeypots accept null byte passwords, a trait absent in legitimate OpenSSH setups.
This diversity, while functional, creates an attack surface ripe for exploitation, as non-standard servers deviate from expected security models.
Sshamble: A Tool for Discovery
Rob introduces Sshamble, a versatile tool that scans SSH services across specified ports, performing handshakes to detect anomalies. It identifies honeypots by exploiting behaviors like accepting any public key or malformed passwords. The tool’s open-source release on GitHub encourages community contributions, enhancing its ability to catalog and test SSH implementations.
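The honeypot heuristic Rob describes can be sketched as a small classifier. The field and label names here are our own shorthand for the probe outcomes discussed in the talk, not Sshamble’s internal model:

```python
from dataclasses import dataclass

@dataclass
class AuthProbeResult:
    """Outcome of probing one SSH server with deliberately bogus credentials."""
    accepts_random_password: bool     # logged in with a random password
    accepts_null_byte_password: bool  # logged in with a "x\x00y"-style password
    accepts_any_pubkey: bool          # accepted an unknown public key

def classify(result: AuthProbeResult) -> str:
    # Legitimate servers (e.g. stock OpenSSH) reject all three probes;
    # accepting any of them is characteristic of honeypots or badly
    # broken implementations.
    if result.accepts_null_byte_password or result.accepts_any_pubkey:
        return "likely-honeypot"
    if result.accepts_random_password:
        return "misconfigured-or-honeypot"
    return "normal"
```

Sshamble performs the actual handshakes; logic of this shape then turns the raw probe results into a verdict per host.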
Demonstrations show Sshamble pinpointing vulnerable servers, including those misconfigured to accept arbitrary credentials, highlighting the need for rigorous server validation.
Exploiting SSH Weaknesses
HD details specific vulnerabilities, such as key generation issues in libraries and servers that bypass standard authentication. While client-side tools like PuTTY were not the focus, server-side flaws dominate, with some implementations ignoring protocol specifications. These gaps allow attackers to bypass authentication or inject malicious data, compromising systems.
The XZ backdoor, though not directly exploitable, inspired their broader exploration, revealing systemic issues in SSH ecosystems.
Mitigating SSH Risks
Rob emphasizes hardening SSH deployments through strict configuration and regular audits. Disabling null byte passwords, enforcing strong key management, and monitoring handshake behaviors mitigate risks. Sshamble aids defenders by identifying weak implementations, urging organizations to standardize on robust servers like OpenSSH.
The talk concludes with a call for ongoing research into SSH’s evolving attack surface, leveraging tools like Sshamble to bolster defenses.
Links:
[DevoxxBE2024] The Next Phase of Project Loom and Virtual Threads by Alan Bateman
At Devoxx Belgium 2024, Alan Bateman delivered a comprehensive session on the advancements in Project Loom, focusing on virtual threads and their impact on Java concurrency. As a key contributor to OpenJDK, Alan explored how virtual threads enable high-scale server applications with a thread-per-task model, addressing challenges like pinning, enhancing serviceability, and introducing structured concurrency. His talk provided practical insights into leveraging virtual threads for simpler, more scalable code, while detailing ongoing improvements in JDK 24 and beyond.
Understanding Virtual Threads and Project Loom
Project Loom, a transformative initiative in OpenJDK, aims to enhance concurrency in Java by introducing virtual threads—lightweight, user-mode threads that support a thread-per-task model. Unlike traditional platform threads, which are resource-intensive and often pooled, virtual threads are cheap, allowing millions to run within a single JVM. Alan emphasized that virtual threads enable developers to write simple, synchronous, blocking code that is easy to read and debug, avoiding the complexity of reactive or asynchronous models. Finalized in JDK 21 after two preview releases, virtual threads have been widely adopted by frameworks like Spring and Quarkus, with performance and reliability proving robust, though challenges like pinning remain.
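The thread-per-task model Alan describes looks like this in code (standard JDK 21 APIs; the task count and sleep duration are arbitrary):

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadDemo {
    // One cheap virtual thread per task: submitting thousands of blocking
    // tasks is fine because virtual threads unmount while blocked.
    static int runTasks(int count) {
        AtomicInteger done = new AtomicInteger();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < count; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(Duration.ofMillis(10)); // blocking is cheap here
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    done.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return done.get();
    }

    public static void main(String[] args) {
        System.out.println(runTasks(10_000)); // prints 10000
    }
}
```

The code reads as plain synchronous, blocking Java—no callbacks or reactive pipelines—which is precisely the readability argument Alan makes.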
The Pinning Problem and Its Resolution
A significant pain point with virtual threads is “pinning,” where a virtual thread cannot unmount from its carrier thread during blocking operations within synchronized methods or blocks, hindering scalability. Alan detailed three scenarios causing pinning: blocking inside synchronized methods, contention on synchronized methods, and object wait/notify operations. These can lead to scalability issues or even deadlocks if all carrier threads are pinned. JEP 444 acknowledged this as a quality-of-implementation issue, not a flaw in the synchronized keyword itself. JEP 491, currently in Early Access for JDK 24, addresses this by allowing carrier threads to be released during such operations, eliminating the need to rewrite code to use java.util.concurrent.locks.ReentrantLock. Alan urged developers to test these Early Access builds to validate reliability and performance, noting successful feedback from initial adopters.
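A minimal sketch of the pinning pattern and the pre-JEP-491 workaround (the sleep stands in for any blocking I/O; with JEP 491 applied, the `synchronized` version no longer pins):

```java
import java.time.Duration;
import java.util.concurrent.locks.ReentrantLock;

public class PinningExample {
    private final Object monitor = new Object();
    private final ReentrantLock lock = new ReentrantLock();

    private static void blockingCall() {
        try {
            Thread.sleep(Duration.ofMillis(5)); // stand-in for I/O
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Pinning case: before JEP 491, a virtual thread blocking inside a
    // synchronized block cannot unmount, so its carrier thread is stuck.
    boolean pinned() {
        synchronized (monitor) {
            blockingCall();
        }
        return true;
    }

    // Pre-JDK-24 workaround: with ReentrantLock the virtual thread
    // unmounts while blocked; JEP 491 makes this rewrite unnecessary.
    boolean unpinned() {
        lock.lock();
        try {
            blockingCall();
            return true;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        PinningExample ex = new PinningExample();
        Thread t1 = Thread.startVirtualThread(ex::pinned);   // pins its carrier while sleeping
        Thread t2 = Thread.startVirtualThread(ex::unpinned); // unmounts while sleeping
        t1.join();
        t2.join();
    }
}
```

Both methods behave identically from the caller’s perspective; the difference only shows up as lost scalability (or deadlock) when many virtual threads hit the `synchronized` path at once.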
Enhancing Serviceability for Virtual Threads
With millions of virtual threads in production, diagnosing issues is critical. Alan highlighted improvements in serviceability tools, such as thread dumps that now distinguish carrier threads and include stack traces for mounted virtual threads in JDK 24. A new JSON-based thread dump format, introduced with virtual threads, supports parsing for visualization and preserves thread groupings, aiding debugging of complex applications. For pinning, JFR (Java Flight Recorder) events now capture stack traces when blocking occurs in synchronized methods, with expanded support for FFM and JNI in JDK 24. Heap dumps in JDK 23 include unmounted virtual thread stacks, and new JMX-based monitoring interfaces allow dynamic inspection of the virtual thread scheduler, enabling fine-tuned control over parallelism.
Structured Concurrency: Simplifying Concurrent Programming
Structured concurrency, a preview feature in JDK 21–23, addresses the complexity of managing concurrent tasks. Alan presented a motivating example of aggregating data from a web service and a database, comparing sequential and concurrent approaches using thread pools. Traditional thread pools with Future.get() can lead to leaks or wasted cycles if tasks fail, requiring complex cancellation logic. The StructuredTaskScope API simplifies this by ensuring all subtasks complete before the main task proceeds, using a single join method to wait for results. If a subtask fails, others are canceled, preventing leaks and preserving task relationships in a tree-like structure. An improved API in Loom Early Access builds, planned for JDK 24 preview, introduces static factory methods and streamlined exception handling, making structured concurrency a powerful complement to virtual threads.
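Because `StructuredTaskScope` is still a preview API, the runnable sketch below reproduces its fail-fast join with `CompletableFuture` on virtual threads; the commented-out fragment shows the shape of the scope API Alan presents (the fetch methods are stand-ins we invented):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Aggregator {
    // With the preview API this would read roughly:
    //   try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
    //       var user  = scope.fork(() -> fetchUser());
    //       var order = scope.fork(() -> fetchOrder());
    //       scope.join().throwIfFailed();
    //       return user.get() + " / " + order.get();
    //   }
    static String aggregate() {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            CompletableFuture<String> user =
                CompletableFuture.supplyAsync(Aggregator::fetchUser, executor);
            CompletableFuture<String> order =
                CompletableFuture.supplyAsync(Aggregator::fetchOrder, executor);
            // If either subtask fails, cancel the sibling instead of leaking it --
            // the behavior StructuredTaskScope's ShutdownOnFailure automates.
            return user.thenCombine(order, (u, o) -> u + " / " + o)
                       .whenComplete((r, e) -> {
                           if (e != null) { user.cancel(true); order.cancel(true); }
                       })
                       .join();
        }
    }

    static String fetchUser()  { return "user-42"; }  // stand-in for a web-service call
    static String fetchOrder() { return "order-7"; }  // stand-in for a database query

    public static void main(String[] args) {
        System.out.println(aggregate()); // user-42 / order-7
    }
}
```

The manual cancellation plumbing in `whenComplete` is exactly the bookkeeping the `StructuredTaskScope` API removes, which is the motivating argument of this section.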
Future Directions and Community Engagement
Alan outlined Project Loom’s roadmap, focusing on JEP 491 for pinning resolution, enhanced diagnostics, and structured concurrency’s evolution. He emphasized that virtual threads are not a performance boost for individual methods but excel in scalability through sheer numbers. Misconceptions, like replacing all platform threads with virtual threads or pooling them, were debunked, urging developers to focus on task migration. Structured concurrency’s simplicity aligns with virtual threads’ lightweight nature, promising easier debugging and maintenance. Alan encouraged feedback on Early Access builds for JEP 491 and structured concurrency (JEP 480), highlighting their importance for production reliability. Links to JEP 444, JEP 491, and JEP 480 provide further details for developers eager to explore.