
[DevoxxGR2024] Butcher Virtual Threads Like a Pro at Devoxx Greece 2024 by Piotr Przybyl

Piotr Przybyl, a Java Champion and developer advocate at Elastic, captivated audiences at Devoxx Greece 2024 with a dynamic exploration of Java 21’s virtual threads. Through vivid analogies, practical demos, and a touch of humor, Piotr demystified virtual threads, highlighting their potential and pitfalls. His talk, rich with real-world insights, offered developers a guide to leveraging this transformative feature while avoiding common missteps. As a seasoned advocate for technologies like Elasticsearch and Testcontainers, Piotr’s presentation was a masterclass in navigating modern Java concurrency.

Understanding Virtual Threads

Piotr began by contextualizing virtual threads within Java’s concurrency evolution. Introduced in Java 21 under Project Loom, virtual threads address the limitations of traditional platform threads, which are costly to create and limited in number. Unlike platform threads, virtual threads are lightweight, managed by a scheduler that mounts and unmounts them from carrier threads during I/O operations. This enables a thread-per-request model, scaling applications to handle millions of concurrent tasks. Piotr likened virtual threads to taxis in a busy city like Athens, efficiently transporting passengers (tasks) without occupying resources during idle periods.

However, virtual threads are not a universal solution. Piotr emphasized that they do not inherently speed up individual requests but improve scalability by handling more concurrent tasks. Their API remains familiar, aligning with existing thread practices, making adoption seamless for developers accustomed to Java’s threading model.
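The thread-per-request model Piotr described can be sketched with Java 21's standard API. This is a minimal illustration, not code from the talk; the task count and sleep duration are arbitrary stand-ins for real blocking I/O:

```java
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadsDemo {
    // Runs n blocking tasks, one virtual thread per task; returns how many completed.
    static int runTasks(int n) {
        AtomicInteger completed = new AtomicInteger();
        // One lightweight virtual thread per task; no pooling required.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                executor.submit(() -> {
                    try {
                        // Simulated blocking I/O: the virtual thread unmounts
                        // from its carrier thread while sleeping.
                        Thread.sleep(Duration.ofMillis(10));
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println("Completed: " + runTasks(10_000));
    }
}
```

Running ten thousand such tasks on platform threads would exhaust a typical thread pool; on virtual threads each sleeper simply unmounts, so the carrier threads stay busy with other work.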

Common Pitfalls and Pinning

A central theme of Piotr’s talk was “pinning,” a performance issue in which a virtual thread stays mounted on its carrier thread, negating the benefits. Pinning occurs during blocking I/O or native calls inside synchronized blocks, akin to keeping a taxi running during a lunch break. Piotr demonstrated this with a legacy Elasticsearch client, using Testcontainers and Toxiproxy to simulate slow network calls. By enabling tracing with the -Djdk.tracePinnedThreads=full flag, he identified and resolved pinning issues, replacing synchronized methods with modern, non-blocking clients.
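The pinning scenario can be reduced to a few lines. This is a minimal sketch rather than Piotr's demo code: a blocking call inside a synchronized block keeps the virtual thread mounted, which Java 21 reports when launched with -Djdk.tracePinnedThreads=full:

```java
import java.time.Duration;

public class PinningDemo {
    private static final Object LOCK = new Object();

    // Blocking while holding a monitor pins the virtual thread to its carrier:
    // the scheduler cannot unmount it until the synchronized block is exited.
    static String blockWhilePinned() {
        synchronized (LOCK) {
            try {
                Thread.sleep(Duration.ofMillis(50)); // blocking call inside synchronized
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return "done";
    }

    public static void main(String[] args) throws InterruptedException {
        // Run with: java -Djdk.tracePinnedThreads=full PinningDemo
        // to see the stack trace of the pinning frame.
        Thread t = Thread.ofVirtual().start(PinningDemo::blockWhilePinned);
        t.join();
        System.out.println("task finished (check the console for pinned-thread traces)");
    }
}
```

On Java 21, replacing the synchronized block with a java.util.concurrent.locks.ReentrantLock lets the virtual thread unmount while it waits, which is the kind of fix Piotr applied to the legacy client.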

Piotr cautioned against misuses like pooling or reusing virtual threads, which disrupt their lightweight design. He advocated careful monitoring with JFR events to verify that threads remain unpinned, preserving optimal performance in production environments.

Structured Concurrency and Scoped Values

Piotr explored structured concurrency, a preview feature in Java 21, designed to eliminate thread leaks and cancellation delays. By creating scopes that manage forks, developers can ensure tasks complete or fail together, simplifying error handling. He demonstrated a shutdown-on-failure scope, where a single task failure cancels all others, contrasting this with the complexity of managing interdependent futures.

Scoped Values, another preview feature, offer immutable, one-way thread-locals that prevent bugs like data leakage in thread pools. Piotr illustrated their use in maintaining request context, warning against mutability to preserve reliability. These features, he argued, complement virtual threads, fostering robust, maintainable concurrent applications.

Practical Debugging and Best Practices

Through live coding, Piotr showcased how debugging with logging can inadvertently introduce I/O, unmounting virtual threads and degrading performance. He compared this to a concert where logging scatters tasks, reducing completion rates. To mitigate this, he recommended avoiding I/O in critical paths and using structured concurrency for monitoring.

Piotr’s best practices included using framework-specific annotations (e.g., Quarkus, Spring) to enable virtual threads and ensuring tasks are interruptible. He urged developers to test thoroughly, leveraging tools like Testcontainers to simulate real-world conditions. His blog post on testing unpinned threads provides further guidance for practitioners.

Conclusion

Piotr’s presentation was a clarion call to embrace virtual threads with enthusiasm and caution. By understanding their mechanics, avoiding pitfalls like pinning, and leveraging structured concurrency, developers can unlock unprecedented scalability. His engaging analogies and practical demos made complex concepts accessible, empowering attendees to modernize Java applications responsibly. As Java evolves, Piotr’s insights ensure developers remain equipped to navigate its concurrency landscape.


[DefCon32] Closing Ceremonies & Awards

As the echoes of innovation and collaboration fade from the halls of the Las Vegas Convention Center, the closing ceremonies of DEF CON 32 encapsulate the spirit of a community that thrives on engagement, resilience, and shared purpose. Hosted by Jeff Moss, known as Dark Tangent, alongside contributors like Mar Williams and representatives from various teams, the event reflects on achievements, honors trailblazers, and charts a course forward. Amid reflections on past giants and celebrations of current triumphs, the gathering underscores the hacker ethos: pushing boundaries while fostering inclusivity and growth.

Jeff opens with a tone of relief and gratitude, acknowledging the unforeseen venue shift that tested the community’s adaptability. What began as a potential setback transformed into a revitalized experience, with attendees praising the spacious layout that evoked the intimacy of earlier conventions. This backdrop sets the stage for a moment of solemnity, where participants pause to honor those who paved the way—mentors, innovators, and unsung heroes whose legacies endure in the collective memory.

The theme of “engage” permeates the proceedings, inspiring initiatives that extend the conference’s impact beyond its annual confines. Jeff highlights two new ventures aimed at channeling the community’s expertise toward societal good and personal advancement. These efforts embody a commitment to proactive involvement, bridging the gap between hacker ingenuity and real-world challenges.

Honoring the Past: A Moment of Reflection

In a poignant start, Jeff calls for silence to remember predecessors whose contributions form the foundation of today’s cybersecurity landscape. This ritual serves as a reminder that progress stems from accumulated wisdom, urging attendees to carry forward the ethos of giving back. The gesture resonates deeply, connecting generations and reinforcing the communal bonds that define DEF CON.

Transitioning to celebration, the ceremonies spotlight individuals and organizations embodying selfless dedication. Jeff presents the Uber Contributor Award to The Prophet, a figure whose decades-long involvement spans writing for 2600 magazine, educating newcomers, and organizing events like Telephreak Challenge and QueerCon. His journey from phreaker to multifaceted influencer exemplifies the transformative power of sustained engagement. The Prophet’s acceptance speech captures the magic of the community, where dreams materialize through collective effort.

Similarly, the Electronic Frontier Foundation (EFF) receives recognition for over two decades of advocacy, raising $130,000 this year alone to support speakers and defend digital rights. Their representative emphasizes EFF’s role in amplifying security research for global benefit, aligning with DEF CON’s mission to empower ethical hacking.

Embracing the Theme: Engagement in Action

The “engage” motif drives discussions on evolving the community’s role in an increasingly complex digital world. Jeff articulates how this concept prompted bold experiments, acknowledging the uncertainties but embracing potential failures as learning opportunities. This mindset reflects the hacker’s adaptability, turning challenges into catalysts for innovation.

Attendees share feedback on the new venue, noting reduced overcrowding and a more relaxed atmosphere reminiscent of DEF CON’s earlier editions. Such observations validate the rapid pivot from the previous location, a decision thrust upon organizers by an unexpected contract termination. Jeff recounts the whirlwind process with humor, crediting quick alliances and the community’s resilience for the seamless transition.

Spotlight on Creativity: The Badge Unveiled

Mar Williams takes the stage to demystify the DEF CON 32 badge, a testament to accessible design and collaborative artistry. Drawing from a concept rooted in inclusivity, Mar aimed to create something approachable for novices while offering depth for experts. Partnering with Raspberry Pi, the badge incorporates layers of interactivity—from loading custom ROMs to developing games via GB Studio.

Acknowledgments flow to the team: Bonnie Finley for 3D modeling and game art, Chris Maltby for plugins and development, Nutmeg for additional game work, Will Tuttle for narrative input, Ada Rose Cannon for character creation, Legion 303 for audio, and others like ICSN for manufacturing. Mar’s vision emphasizes community participation, with the badge’s game dedicating itself to players who engage and make an impact. Challenges like SOS signals and proximity interactions foster connections, while post-conference resources encourage ongoing tinkering.

Triumphs in Competition: Village and Challenge Winners

The ceremonies burst with energy as winners from myriad contests are announced, showcasing the breadth of skills within the community. From the AI Village Capture the Flag, where teams like AI Cyber Challenge victors demonstrate prowess in emerging tech, to the Aviation Village’s high-flying achievements, each victory highlights specialized expertise.

Notable accolades include the AppSec Village’s top performers in secure coding, the Biohacking Village’s innovative health hacks, and the Car Hacking Village’s vehicular exploits. The Cloud Village CTF crowns champions in scalable defenses, while the Crypto & Privacy Village recognizes cryptographic ingenuity. Diversity shines through in the ICS Village’s industrial control triumphs and the IoT Village’s device dissections.

Special mentions go to the Lockpick Village’s dexterity masters, the Misinformation Village’s truth-seekers, and the Packet Hacking Village’s network ninjas. The Password Cracking Contest and Physical Pentest Challenge celebrate brute force and subtle infiltration, respectively. The Policy Village engages in advocacy wins, and the Recon Village excels in intelligence gathering.

Celebrating Hands-On Innovation: More Contest Highlights

The Red Team Village’s strategic simulations yield victors in offensive operations, complemented by the RFID Village’s access control breakthroughs. Rogue Access Point contests reward wireless wizardry, while the Soldering Skills Village honors precise craftsmanship.

The Space Security Village pushes boundaries in orbital defenses, and the Tamper Evident Village masters detection of intrusions. Telecom and Telephreak challenges revive analog artistry, with the Vishing Competition testing social engineering finesse. The Voting Village exposes electoral vulnerabilities, and the WiFi Village dominates spectrum battles.

The Wireless CTF and Wordle Hacking contests round out the roster, each contributing to a tapestry of technical mastery and creative problem-solving.

Organizational Gratitude: Behind-the-Scenes Heroes

Jeff extends heartfelt thanks to the departments, goons, and volunteers who orchestrated the event amid upheaval. Retiring goons like GMark, Noise, Ira, Estang, Gataca, Duna, The Samorphix, Brick, Wham, and Casper receive nods for their service, earning lifetime attendance. New “noons” are welcomed, injecting fresh energy.

Gold badge holders, signifying a decade of dedication, are celebrated for their enduring commitment. This segment underscores the human element sustaining DEF CON’s scale and vibrancy.

Looking Ahead: Community and Continuity

Social channels keep the conversation alive year-round, from Discord movie nights to YouTube archives and Instagram updates. The DEF CON Social Mastodon server offers a moderated space adhering to the code of conduct, providing a haven amid social media fragmentation.

A lighthearted anecdote from Jeff about the badge’s “dark chocolate” Easter egg illustrates serendipitous joy, where proximity triggers whimsical interactions. Such moments encapsulate the conference’s blend of seriousness and play.

Finally, anticipation builds for DEF CON 33, slated for August 7-10 at the same venue. Jeff reflects on the positive reception, affirming the space’s role in reducing FOMO and enhancing connections. With content continually uploaded online, the community remains engaged, ready to disengage only until the next convergence.


[SpringIO2024] Revving Up the Good Old Samaritan: Spring Boot Admin by Jatin Makhija @ Spring I/O 2024

At Spring I/O 2024 in Barcelona, Jatin Makhija, an engineering leader at Deutsche Telekom Digital Labs, delivered an insightful presentation on leveraging Spring Boot Admin to enhance application monitoring and management. With a rich background in startups like Exigo and VWO, Jatin shared practical use cases and live demonstrations, illustrating how Spring Boot Admin empowers developers to streamline operations in complex, distributed systems. This talk, filled with actionable insights, highlighted the tool’s versatility in addressing real-world challenges, from log management to feature flag automation.

Empowering Log Management

Jatin began by addressing a universal pain point for developers: debugging production issues. He emphasized the critical role of logs in resolving incidents, noting that Spring Boot Admin allows engineers to dynamically adjust log levels—from info to trace—in seconds without redeploying applications. Through a live demo, Jatin showcased how to filter logs at the class level, enabling precise debugging. However, he cautioned about the costs of excessive logging, both in infrastructure and compliance with GDPR. By masking personally identifiable information (PII) and reverting log levels promptly, teams can maintain security and efficiency. This capability ensures rapid issue resolution while keeping customers satisfied, as Jatin illustrated with real-time log adjustments.
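Behind the UI, Spring Boot Admin drives the standard Actuator loggers endpoint. A minimal sketch of the request involved — the base URL and logger name below are hypothetical, and the code only builds the request rather than sending it:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class LogLevelRequest {
    // Builds the POST that switches a logger's level at runtime via the
    // Actuator /actuator/loggers endpoint (the mechanism Spring Boot Admin uses).
    static HttpRequest traceRequest(String baseUrl, String loggerName) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/actuator/loggers/" + loggerName))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"configuredLevel\":\"TRACE\"}"))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = traceRequest("http://localhost:8080",
                "com.example.orders.OrderService");
        System.out.println(req.method() + " " + req.uri());
    }
}
```

Sending the same request with `"configuredLevel": null` restores the logger's default level — the prompt reversion Jatin recommended to keep logging costs and GDPR exposure in check.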

Streamlining Feature Flags

Feature flags are indispensable in modern applications, particularly in multi-tenant environments. Jatin explored how Spring Boot Admin simplifies their management, allowing teams to toggle features without redeploying. He presented two compelling use cases: a legacy discount system and a mobile exchange program. In the latter, Jatin demonstrated dynamically switching locales (e.g., from German to English) to adapt third-party integrations, ensuring seamless user experiences across regions. By refreshing application contexts on the fly, Spring Boot Admin reduces downtime and enhances testing coverage. Jatin’s approach empowers product owners to experiment confidently, minimizing technical debt and ensuring robust feature validation.

Automating Operations

Automation is a cornerstone of efficient development, and Jatin showcased how Spring Boot Admin’s REST APIs can be harnessed to automate testing workflows. By integrating with CI/CD pipelines like Jenkins and test frameworks such as Selenium, teams can dynamically patch configurations and validate multi-tenant setups. A recorded demo illustrated an automated test toggling a mobile exchange feature, highlighting increased test coverage and early defect detection. Jatin emphasized that this automation reduces manual effort, boosts regression testing accuracy, and enables scalable deployments, allowing teams to ship with confidence.

Scaling Monitoring and Diagnostics

Monitoring distributed systems is complex, but Spring Boot Admin simplifies it with built-in metrics and diagnostics. Jatin demonstrated accessing health statuses, thread dumps, and heap dumps through the tool’s intuitive interface. He shared a story of debugging a Kubernetes pod misconfiguration, where Spring Boot Admin revealed discrepancies in CPU allocation, preventing application instability. By integrating the Git Commit Plugin, teams can track deployment details like commit IDs and timestamps, enhancing traceability in microservices. Jatin also addressed scalability, showcasing a deployment managing 374 instances across 24 applications, proving Spring Boot Admin’s robustness in large-scale environments.


[DefCon32] Counter Deception: Defending Yourself in a World Full of Lies

The digital age promised universal access to knowledge, yet it has evolved into a vast apparatus for misinformation. Tom Cross and Greg Conti examine this paradox, tracing deception’s roots from ancient stratagems to modern cyber threats. Drawing on military doctrines and infosec experiences, they articulate principles for crafting illusions and, crucially, for dismantling them. Their discourse empowers individuals to navigate an ecosystem where truth is obscured, fostering tools and mindsets to reclaim clarity.

Deception, at its essence, conceals reality to gain advantage, influencing decisions or inaction. Historical precedents abound: the Trojan Horse’s cunning infiltration, Civil War Quaker guns mimicking artillery, or the Persian Gulf War’s feigned amphibious assault diverting attention from a land offensive. In contemporary conflicts, like Russia’s invasion of Ukraine, fabricated narratives such as the “Ghost of Kyiv” bolster morale while masking intentions. These tactics transcend eras, targeting not only laypersons but experts, code, and emerging AI systems.

In cybersecurity, falsehoods manifest at every layer: spoofed signals in the electromagnetic spectrum, false flags in malware attribution, or fabricated personas for network access and influence propagation. Humans fall prey through phishing, typo-squatting, or mimicry, while specialists encounter deceptive metadata or rotating infrastructures. Malware detection evades scrutiny via polymorphism or fileless techniques, and AI succumbs to data poisoning or jailbreaks. Strategically, deception scales from tactical engagements to national objectives, concealing capabilities or projecting alternatives.

Maxims of Effective Deception

Military thinkers have distilled deception into enduring guidelines. Sun Tzu advocated knowing adversaries intimately while veiling one’s own plans, emphasizing preparation and adaptability. Von Clausewitz viewed war—and by extension, conflict—as enveloped in uncertainty, where illusions amplify fog. Modern doctrines, like those from the U.S. Joint Chiefs, outline six tenets: focus on key decision-makers, integration with operations, centralized control for consistency, timeliness to exploit windows, security to prevent leaks, and adaptability to evolving conditions.

These principles manifest in cyber realms. Attackers exploit cognitive biases—confirmation, anchoring, availability—embedding falsehoods in blind spots. Narratives craft compelling stories, leveraging emotions like fear or outrage to propagate. Coordination ensures unified messaging across channels, while adaptability counters defenses. In practice, state actors deploy bot networks for amplification, or cybercriminals use deepfakes for social engineering. Understanding these offensive strategies illuminates defensive countermeasures.

Inverting Principles for Countermeasures

Flipping offensive maxims yields defensive strategies. To counter focus, broaden information sources, triangulating across diverse perspectives to mitigate echo chambers. Against integration, scrutinize contexts: does a claim align with broader evidence? For centralized control, identify coordination patterns—sudden surges in similar messaging signal orchestration.

Timeliness demands vigilance during critical periods, like elections, where rushed judgments invite errors. Security’s inverse promotes transparency, fostering open verification. Adaptability encourages continuous learning, refining discernment amid shifting tactics.

Practically, countering biases involves self-awareness: question assumptions, seek disconfirming evidence. Triangulation cross-references claims against reliable outlets, fact-checkers, or archives. Detecting narratives entails pattern recognition—recurring themes, emotional triggers, or inconsistencies. Tools like reverse image searches or metadata analyzers expose fabrications.

Applying Counter Deception in Digital Ecosystems

The internet’s structure amplifies deceit, yet hackers’ ingenuity can reclaim agency. Social media, often ego-centric, distorts realities through algorithmic funhouse mirrors. Curating expert networks—via follows, endorsements—filters noise, prioritizing credible voices. Protocols for machine-readable endorsements, akin to LinkedIn but open, enable querying endorsed specialists on topics, surfacing informed commentary.

Innovative protocols like backlinks—envisioned by pioneers such as Vannevar Bush, Douglas Engelbart, and Ted Nelson—remain underexplored. These allow viewing inbound references, revealing critiques or extensions. Projects like Xanadu or Hyperscope hint at potentials: annotating documents with trusted overlays, highlighting recent edits for scrutiny. Content moderation challenges stymied widespread adoption, but coupling with decentralized systems like Mastodon offers paths forward.

Large language models (LLMs) present dual edges: prone to hallucinations, yet adept at structuring unstructured data. Dispassionate analysis could unearth omitted facts from narratives, or map expertise by parsing academic sites to link profiles. Defensive tools might flag biases or inconsistencies, augmenting human judgment per Engelbart’s augmentation ethos.

Scaling countermeasures involves education: embedding media literacy in curricula, emphasizing critical inquiry. Resources like Media Literacy Now provide K-12 frameworks, while frameworks like “48 Critical Thinking Questions” prompt probing—who benefits, where’s the origin? Hackers, adept at discerning falsehoods, can prototype tools—feed analyzers, narrative detectors—leveraging open protocols for innovation.

Ultimately, countering deception demands vigilance and creativity. By inverting offensive doctrines, individuals fortify perceptions, transforming the internet from a misinformation conduit into a truth-seeking engine.


[NodeCongress2024] The Architecture of Asynchronous Code Context and Package Resolution in Node.js

Lecturer: Yagiz Nizipli

Yagiz Nizipli is a respected software architect, entrepreneur, and prominent contributor to the Node.js ecosystem, with a Master’s degree in Computer Science from Fordham University. He is an active member of the Node.js Technical Steering Committee (TSC) and a voting member of the OpenJS Foundation. His primary academic and professional focus is on improving the performance of Node.js, exemplified by his creation of the Ada URL parser, which has been adopted into Node.js core and is considered the fastest WHATWG-compliant URL parser. He has held roles as a Senior Software Engineer and currently works at Sentry, specializing in error tracking and performance.

Relevant Links:
* Professional Website: https://www.yagiz.co/
* GitHub Profile: https://github.com/anonrig
* X/Twitter: https://twitter.com/yagiznizipli

Abstract

This article analyzes the intricate mechanisms of package resolution within the Node.js runtime, comparing the established CommonJS (CJS) module system with the modern ECMAScript Modules (ESM) specification. It explores the performance overhead inherent in the CJS resolution algorithm, which relies on extensive filesystem traversal, and identifies key developer methodologies that can significantly mitigate these bottlenecks. The analysis highlights how adherence to modern standards, such as explicit file extensions and the use of the package.json exports field, is crucial for building performant and maintainable Node.js applications.

The Dual Modality of Package Resolution in Node.js

Context and Methodology

The Node.js runtime employs distinct, yet interoperable, mechanisms for locating and loading dependencies based on whether the module utilizes the legacy CommonJS (require) system or the modern ECMAScript Modules (import) system.

The CJS resolution algorithm is complex and contributes to runtime latency. When a package path is provided without an extension, the CJS resolver performs synchronous filesystem operations, sequentially checking for .js, .json, and .node extensions. If the target is a directory, it attempts to resolve the module via the entry point specified in a local package.json file, or by sequentially checking for index.js, index.json, and so on. Crucially, if the required module is not found locally, the resolver traverses up the directory tree, checking the node_modules folder of each ancestor directory until the filesystem root is reached, incurring a significant performance penalty due to the volume of Input/Output (I/O) operations.

In contrast, ESM resolution is strictly defined by the WHATWG specification, mandating that all imports must include the full file extension. The module system determines whether a file is CJS or ESM by checking the type field in the nearest package.json file, falling back to CJS if the field is absent or set to "commonjs", and defaulting to ESM if set to "module".

Performance Implications and Optimization Strategies

The primary performance bottleneck in Node.js package loading stems from the synchronous filesystem traversal and redundant extension checks inherent in the legacy CJS resolution process.

To address this, the following optimization methodologies are recommended:

  1. Mandatory Extension Usage: Developers should always include file extensions in require() or import statements, even where the CJS specification allows omission. This practice eliminates the need for the CJS resolver to check multiple extensions (.js, .json, .node) sequentially, which directly reduces I/O latency.
  2. Explicit Module Type Declaration: For one-off scripts and projects without a package.json file, the use of explicit extensions like .mjs for ESM and .cjs for CJS is advised. This provides an immediate, unambiguous hint to the runtime, eliminating the need for slow directory traversal to locate an ancestor package.json file.
  3. Modern Package Manifest Fields: The exports field in package.json represents a modern innovation that significantly improves resolution performance and security. This field explicitly defines the package’s public entry points, thereby:
    • Accelerating Resolution: The resolver is immediately directed to the correct entry point, bypassing ambiguous path searching.
    • Encapsulation: It restricts external access to internal, private files (deep imports), enforcing a clean package boundary.
      The related imports field allows for internal aliasing within a package, facilitating faster resolution of inter-package dependencies.
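The manifest fields above can be sketched in a hypothetical package.json — the package name, paths, and subpaths here are purely illustrative:

```json
{
  "name": "my-lib",
  "type": "module",
  "exports": {
    ".": "./dist/index.js",
    "./utils": "./dist/utils.js"
  },
  "imports": {
    "#internal/*": "./src/internal/*.js"
  }
}
```

With this manifest, only `my-lib` and `my-lib/utils` resolve from outside the package; a deep import such as `my-lib/dist/private.js` is rejected, while `#internal/...` specifiers resolve directly inside the package without any directory traversal.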

While experimental flags like --experimental-detect-module exist to allow .js files without explicit extensions or package.json fields, they are cautioned against due to their experimental status and known instability. The adoption of strict resolution practices is therefore the more reliable, long-term strategy for ensuring optimal API and application performance.


[DefCon32] AMDSinkclose – Universal Ring2 Privilege Escalation

In the intricate landscape of hardware security, vulnerabilities often lurk within architectural designs that have persisted for years. Enrique Nissim and Krzysztof Okupski, principal security consultants at IOActive, unravel a profound flaw in AMD processors, dubbed AMDSinkclose. Their exploration reveals how this issue enables attackers to escalate privileges to System Management Mode (SMM), granting unparalleled access to system resources. By dissecting the mechanics of SMM and the processor’s memory handling, they demonstrate exploitation paths that bypass traditional safeguards, affecting a vast array of devices from laptops to servers.

SMM represents one of the most potent execution environments in x86 architectures, offering unrestricted control over I/O devices and memory. It operates stealthily, invisible to operating systems, hypervisors, and security tools like antivirus or endpoint detection systems. During boot, firmware initializes hardware and loads SMM code into a protected memory region called SMRAM. At runtime, the OS can invoke SMM services for tasks such as power management or security checks via System Management Interrupts (SMIs). When an SMI triggers, the processor saves its state in SMRAM, executes the necessary operations, and resumes normal activity. This isolation makes SMM an attractive target for persistence mechanisms, including bootkits or firmware implants.

The duo’s prior research focused on vendor misconfigurations and software flaws in SMM components, yielding tools for vulnerability detection and several CVEs in 2023. However, AMDSinkclose shifts the lens to an inherent processor defect. Unlike Intel systems, where SMM-related Model-Specific Registers (MSRs) are accessible only within SMM, AMD allows ring-0 access to these registers. While an SMM lock bit prevents runtime tampering with key configurations, a critical oversight in the documentation exposes two fields—TClose and AClose—not covered by this lock. TClose, in particular, redirects data accesses in SMM to Memory-Mapped I/O (MMIO) instead of SMRAM, creating a pathway for manipulation.

Architectural Foundations and the Core Vulnerability

At the heart of SMM security lies the memory controller’s role in protecting SMRAM. Firmware configures registers like TSEG Base, TSEG Mask, and SMM Base to overlap and shield this region. The TSEG Mask includes fields for enabling protections, but the unlocked TClose bit allows ring-0 users to set it, altering behavior without violating the lock. When activated, instruction fetches in SMM remain directed to DRAM, but data accesses divert to MMIO. This split enables attackers to control execution by mapping malicious content into the MMIO space.

The feature originated around 2006 to allow SMM code to access I/O devices using SMRAM’s physical addresses, though no vendors appear to utilize it. Documentation warns against leaving TClose set upon SMM exit, as it could misdirect state saves to MMIO. Yet, from ring-0, setting this bit and triggering an SMI causes immediate system instability—freezes or hangs—due to erroneous data handling. This echoes the 2015 Memory Sinkhole attack by Christopher Domas, which remapped the APIC to overlap TSEG, but AMDSinkclose affects the entire TSEG region, amplifying the impact.

Brainstorming exploits, Enrique and Krzysztof considered remapping PCI devices to overlay SMRAM, but initial attempts failed due to hardware restrictions. Instead, they targeted the SMM entry point, a vendor-defined layout typically following EDK2 standards. This includes a core area for support code, per-core SMM bases with entry points at offset 0x8000, and save states at 0xFE00. By setting TClose and invoking an SMI, data reads from these offsets redirect to MMIO, allowing control if an attacker maps a suitable device there.

Exploitation Techniques and Multi-Core Challenges

Exploiting AMDSinkclose requires precise manipulation of the Global Descriptor Table (GDT) and Interrupt Descriptor Table (IDT) within SMM. Upon SMI entry, the processor operates in real mode, loading a GDT from the save state to transition to protected mode. By controlling data fetches via TClose, attackers can supply a malicious GDT, enabling arbitrary code execution. The challenge lies in aligning MMIO mappings with SMM offsets, as direct PCI remapping proved ineffective.

The solution involves leveraging the processor’s address wraparound behavior. In protected mode, addresses exceeding 4GB wrap around, but SMM’s real-mode entry point operates at a lower level where this wraparound can be exploited. By setting the SMM base to a high address like 0xFFFFFFF0, data accesses wrap to low MMIO regions (0x0 to 0xFFF), where integrated devices like the Local APIC reside. This allows overwriting the GDT with controlled content from the APIC’s registers.
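The wraparound trick is plain modular arithmetic. A sketch using the offsets named earlier (entry point at 0x8000, save state at 0xFE00) and the hypothetical high SMM base from the talk shows where the data accesses actually land:

```python
# Sketch: 32-bit address wraparound used to redirect SMM data accesses.
# The base and offsets are illustrative values from the talk, not a recipe.

WRAP = 1 << 32  # physical addresses wrap modulo 4 GiB

def effective_address(smm_base: int, offset: int) -> int:
    """Physical address actually touched once base + offset exceeds 4 GiB."""
    return (smm_base + offset) % WRAP

smm_base = 0xFFFFFFF0  # hypothetical base near the top of the 32-bit space

entry = effective_address(smm_base, 0x8000)       # wraps down to 0x7FF0
save_state = effective_address(smm_base, 0xFE00)  # wraps down to 0xFDF0
```

With the base pinned that high, every data access past the first few bytes wraps into low physical memory, which is exactly where a relocatable integrated device can be parked to feed the SMM handler attacker-controlled bytes.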

Multi-core systems introduce complexity, as all cores enter SMM simultaneously during a broadcast SMI. The exploit must handle concurrent execution, ensuring only one core performs the malicious action while others halt safely. Disabling Simultaneous Multithreading (SMT) simplifies this, but wraparound enables targeting specific cores. Testing on Ryzen laptops confirmed reliability, with code injection succeeding across threads.

Impact on Firmware and Mitigation Strategies

The ramifications extend to firmware persistence. Once in SMM, attackers disable SPI flash protections like ROM Armor, enabling writes to non-volatile storage. Depending on configurations—such as Platform Secure Boot (PSB)—outcomes vary. Fully enabled protections limit writes to variables, potentially breaking Secure Boot by altering keys. Absent PSB, full firmware implants become feasible, resistant to OS reinstalls or updates, as malware can intercept and falsify flash operations.

Research on vendor configurations reveals widespread vulnerabilities: many systems lack ROM Armor or PSB, exposing them to implants. Even with protections, bootkits remain possible, executing before the OS loader. A fused disable of PSB leaves a machine perpetually vulnerable.

AMD’s microcode update addresses the issue, though coverage may vary across product lines. OEMs can patch SMM entry points to detect TClose activation and halt, a mitigation that can be integrated into EDK2 or Coreboot. Users can also trap accesses to the relevant MSR via a hypervisor. Reported to AMD in October 2023, the vulnerability was assigned CVE-2023-31315, and an advisory has since been published. Exploit code is forthcoming, underscoring the need for deeper architectural scrutiny.


[DotJs2024] How to Test Web Applications

Tracing the sinews of testing evolution unveils a saga of ingenuity amid constraints, where manual pokes birthed automated sentinels. Jessica Sachs, a product-minded frontend engineer at Ionic with a penchant for vintage tech, chronicled this odyssey at dotJS 2024. From St. Augustine’s cobblestone allure—America’s eldest city, founded 1565—she drew parallels to web dev’s storied paths, unearthing undocumented timelines via Wayback Machine dives and Twitter lore. Sachs’s quest: demystify the proliferation of test runners, revealing how historical exigencies—from CGI pains to Node’s ascent—shaped today’s arsenal, advocating patience for tools that integrate seamlessly into workflows.

Sachs ignited with a Twitter thread amassing 178 responses, crowdsourcing pre-2011 practices. The ’90s dawned with CGI scripts in C or Perl, rendering dynamic content via URL params—a nightmare for verification. Absent browsers, coders FTP’d to prod, editing vi in situ, then paraded to webmasters’ desks for eyeball tests on finicky monitors. Issues skewed infrastructural: network glitches, deployment fumbles, not logic lapses. Enter Selenium circa 2011, Sachs’s genesis as manual QA tapping iPads, automating browser puppeteering. Predecessors? Fragmented: HTTPUnit for server mocks, early Selenium precursors like Kantara for JavaScript injection.

The aughts splintered further. jQuery’s 2006 surge spawned QUnit; Yahoo UI birthed YUITest; Scriptaculous, Ruby-infused, shipped bespoke runners amid TDD fervor. Pushback mounted: velocity killers, JS’s ancillary role to backend logic. Breakthrough: 2007’s JS Test Driver, Miško Hevery’s Java-forged Google tool, spawning browsers, watching files, reporting terminals—paving for Testacular (Karma’s cheeky forebear). PhantomJS enabled headless CI, universally loathed yet indispensable till Node. Sachs unearthed Ryan Florence’s GitHub plea rebranding Testacular to Karma, easing corporate qualms.

Node’s 2011 arrival unified: Jest, open-sourced by Facebook in 2014 (conceived 2011), tackled module transforms, fake DOMs for builds. Sachs lauded its webpack foresight, supplanting concatenation. Yet, sprawl persists: Bun, Deno, edge functions defy file systems; ESM, TypeScript confound. Vitest ascends, context-switching via jsdom, HappyDOM, browser modes, E2E orchestration—bundler-agnostic, coupling to transformers sans custom ones. Sachs’s epiphany: runners mirror environments; history’s lessons—manual sufficed for Android pre-automation—affirm: prioritize speed, workflow harmony. Novel tools demand forbearance; value accrues organically.

Sachs’s tapestry reminds: testing’s not punitive but enabler, evolving from ad-hoc to ecosystem symbiote, ensuring robustness amid flux.

Unearthing Testing’s Archaic Roots

Sachs’s archival foray exposed ’90s drudgery: CGI’s prod edits via vi, manual verifications on webmaster rigs, network woes trumping semantics. Selenium’s 2011 automation eclipsed this, but antecedents like HTTPUnit hinted at mocks. The 2000s fragmented—YUITest, QUnit tying to libs—yet JS Test Driver unified, birthing Karma’s headless era via PhantomJS, Node’s prelude.

The Node Era and Modern Convergence

Jest’s 2014 debut addressed builds, modules; Vitest now reigns, emulating DOMs diversely, launching browsers, integrating E2E. Sachs spotlighted bundlers as logic proxies, ESM/TS as Jest’s Achilles; Vitest’s flexibility heralds adaptability. Android’s manual heritage validates: tools must accelerate, not hinder—foster adoption through velocity.


[DevoxxBE2024] A Kafka Producer’s Request: Or, There and Back Again by Danica Fine

Danica Fine, a developer advocate at Confluent, took Devoxx Belgium 2024 attendees on a captivating journey through the lifecycle of a Kafka producer’s request. Her talk demystified the complex process of getting data into Apache Kafka, often treated as a black box by developers. Using a Hobbit-themed example, Danica traced a producer.send() call from client to broker and back, detailing configurations and metrics that impact performance and reliability. By breaking down serialization, partitioning, batching, and broker-side processing, she equipped developers with tools to debug issues and optimize workflows, making Kafka less intimidating and more approachable.

Preparing the Journey: Serialization and Partitioning

Danica began with a simple schema for tracking Hobbit whereabouts, stored in a topic with six partitions and a replication factor of three. The first step in producing data is serialization, converting objects into bytes for brokers, controlled by key and value serializers. Misconfigurations here can lead to errors, so monitoring serialization metrics is crucial. Next, partitioning determines which partition receives the data. The default partitioner uses a key’s hash or sticky partitioning for keyless records to distribute data evenly. Configurations like partitioner.class, partitioner.ignore.keys, and partitioner.adaptive.partitioning.enable allow fine-tuning, with adaptive partitioning favoring faster brokers to avoid hot partitions, especially in high-throughput scenarios like financial services.
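The default keyed path boils down to hash-then-modulo. Kafka’s real implementation hashes the serialized key with murmur2 before taking it modulo the partition count; the sketch below substitutes a stand-in hash (md5, labeled as such) just to show the shape of the logic and why keyed records keep per-key ordering:

```python
import hashlib

# Sketch of Kafka's default keyed partitioning: hash(key) % num_partitions.
# Kafka actually uses murmur2 on the serialized key bytes; md5 below is a
# stand-in so the example stays short and deterministic.

def choose_partition(key: bytes, num_partitions: int) -> int:
    digest = hashlib.md5(key).digest()
    h = int.from_bytes(digest[:4], "big") & 0x7FFFFFFF  # force non-negative
    return h % num_partitions

# Six partitions, as in the Hobbit-tracking topic from the talk.
p = choose_partition(b"frodo", 6)
assert 0 <= p < 6
# The same key always lands on the same partition, preserving per-key order.
assert p == choose_partition(b"frodo", 6)
```

Keyless records skip this entirely and use sticky (or adaptive) partitioning instead, filling one partition's batch before rotating to the next.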

Batching for Efficiency

To optimize throughput, Kafka groups records into batches before sending them to brokers. Danica explained key configurations: batch.size (default 16KB) sets the maximum batch size, while linger.ms (default 0) controls how long to wait to fill a batch. Setting linger.ms above zero introduces latency but reduces broker load by sending fewer requests. buffer.memory (default 32MB) allocates space for batches, and misconfigurations can cause memory issues. Metrics like batch-size-avg, records-per-request-avg, and buffer-available-bytes help monitor batching efficiency, ensuring optimal throughput without overwhelming the client.
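The settings above can be collected in one place. The keys below are the real Kafka producer config names with the defaults quoted in the talk; the surrounding dict is only an illustration of the knobs, not a client API:

```python
# Sketch: the batching-related producer settings discussed above, with the
# defaults quoted in the talk. Keys are genuine Kafka producer config names;
# the dict itself is illustration, not a client API.

producer_batching = {
    "batch.size": 16_384,         # max bytes per batch (16 KB default)
    "linger.ms": 0,               # how long to wait for a batch to fill
    "buffer.memory": 33_554_432,  # 32 MB of client-side batch buffer
}

# Rough intuition: raising linger.ms trades a little latency for fewer,
# fuller requests. E.g. 100-byte records into a default-sized batch:
records_per_full_batch = producer_batching["batch.size"] // 100
```

With linger.ms at its default of 0, batches are sent as soon as a sender thread is available, so low-throughput producers often ship near-empty batches; a few milliseconds of linger can dramatically raise records-per-request-avg.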

Sending the Request: Configurations and Metrics

Once batched, data is sent via a produce request over TCP, with configurations like max.request.size (default 1MB) limiting batch volume and acks determining how many replicas must acknowledge the write. Setting acks to “all” ensures high durability but increases latency, while acks=1 or 0 prioritizes speed. enable.idempotence and transactional.id prevent duplicates, with transactions ensuring consistency across sessions. Metrics like request-rate, requests-in-flight, and request-latency-avg provide visibility into request performance, helping developers identify bottlenecks or overloaded brokers.
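The idempotence mechanism can be modeled in a few lines: the broker remembers the last sequence number seen per producer and partition, and silently drops replays caused by retries. This is a simplified model of that bookkeeping, not broker code:

```python
# Sketch of how enable.idempotence avoids duplicates: the broker tracks the
# last sequence number per (producer id, partition) and drops replays.
# Simplified model of the mechanism, not broker code.

class DedupLog:
    def __init__(self):
        self.last_seq = {}   # (producer_id, partition) -> last sequence seen
        self.records = []

    def append(self, producer_id, partition, seq, record) -> bool:
        key = (producer_id, partition)
        if seq <= self.last_seq.get(key, -1):
            return False  # duplicate retry: acknowledged but not re-written
        self.last_seq[key] = seq
        self.records.append(record)
        return True

log = DedupLog()
assert log.append("p1", 0, 0, "frodo@shire")
assert not log.append("p1", 0, 0, "frodo@shire")  # retried send, dropped
assert log.append("p1", 0, 1, "frodo@bree")
```

This is why a producer can safely retry on a timed-out request without risking duplicate writes, and why transactional.id extends the same guarantee across producer restarts.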

Broker-Side Processing: From Socket to Disk

On the broker, requests enter the socket receive buffer, then are processed by network threads (default 3) and added to the request queue. IO threads (default 8) validate data with a cyclic redundancy check and write it to the page cache, later flushing to disk. Configurations like num.network.threads, num.io.threads, and queued.max.requests control thread and queue sizes, with metrics like network-processor-avg-idle-percent and request-handler-avg-idle-percent indicating thread utilization. Data is stored in a commit log with log, index, and snapshot files, supporting efficient retrieval and idempotency. The log.flush.rate and local-time-ms metrics help verify that data reaches durable storage promptly.
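The validate-then-write step the IO threads perform can be sketched with a checksum check. Kafka uses CRC32C over the record batch; zlib's plain CRC32 stands in here to show the flow:

```python
import zlib

# Sketch: the cyclic redundancy check IO threads run before writing a batch.
# Kafka uses CRC32C over the record batch; zlib's plain CRC32 is a stand-in
# to show the validate-then-write flow.

def validate_batch(payload: bytes, claimed_crc: int) -> bool:
    """Recompute the checksum and compare against the one the client sent."""
    return zlib.crc32(payload) == claimed_crc

payload = b"hobbit-whereabouts-batch"
crc = zlib.crc32(payload)  # computed client-side, carried in the request

assert validate_batch(payload, crc)             # intact: write to page cache
assert not validate_batch(payload + b"x", crc)  # corrupted in transit: reject
```

A batch that fails this check is rejected before it ever touches the commit log, so corruption on the wire cannot silently land on disk.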

Replication and Response: Completing the Journey

Unfinished requests await replication in a “purgatory” data structure, with follower brokers fetching updates every 500ms (often faster). The remote-time-ms metric tracks replication duration, critical for acks=all. Once replicated, the broker builds a response, handled by network threads and queued in the response queue. Metrics like response-queue-time-ms and total-time-ms measure the full request lifecycle. Danica emphasized that understanding these stages empowers developers to collaborate with operators, tweaking configurations like default.replication.factor or topic-level settings to optimize performance.
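The purgatory logic reduces to a completeness check: under acks=all, a request may respond only once every in-sync replica has fetched the write. A simplified model of that rule, not broker code:

```python
# Sketch: why acks=all requests wait in "purgatory" until followers catch up.
# A request completes only once the required replicas have acknowledged.
# Simplified model of the completion rule, not broker code.

def request_complete(acked_replicas: set, isr: set, acks: str) -> bool:
    if acks == "0":
        return True                       # fire and forget: never waits
    if acks == "1":
        return "leader" in acked_replicas # leader write is enough
    return isr <= acked_replicas          # acks=all: every ISR member

isr = {"leader", "follower-1", "follower-2"}

assert not request_complete({"leader"}, isr, "all")  # still in purgatory
assert request_complete({"leader"}, isr, "1")        # would respond already
assert request_complete(isr, isr, "all")             # replicated: respond
```

The gap between the second and third cases is exactly what remote-time-ms measures: time spent waiting on follower fetches rather than on the leader's own write.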

Empowering Developers with Kafka Knowledge

Danica concluded by encouraging developers to move beyond treating Kafka as a black box. By mastering configurations and monitoring metrics, they can proactively address issues, from serialization errors to replication delays. Her talk highlighted resources like Confluent Developer for guides and courses on Kafka internals. This knowledge not only simplifies debugging but also fosters better collaboration with operators, ensuring robust, efficient data pipelines.


[DotAI2024] Neil Zeghidour – Forging Multimodal Foundations for Voice AI

Neil Zeghidour, co-founder and Chief Modeling Officer at Kyutai, demystified multimodal language models at DotAI 2024. Transitioning from Google DeepMind’s generative audio vanguard—pioneering text-to-music APIs and neural codecs—to Kyutai’s open-science bastion, Zeghidour chronicled Moshi’s genesis: the inaugural open-source, real-time voice AI blending text fluency with auditory nuance.

Elevating Text LLMs to Sensory Savants

Zeghidour contextualized text LLMs’ ubiquity—from translation relics to coding savants—yet lamented their sensory myopia. True assistants demand perceptual breadth: visual discernment, auditory acuity, and generative expressivity like image synthesis or fluid discourse.

Moshi embodies this fusion, channeling voice bidirectionally with duplex latency under 200ms. Unlike predecessors—Siri’s scripted retorts or ChatGPT’s turn-taking delays—Moshi interweaves streams, parsing interruptions sans artifacts via multi-stream modeling: discrete tokens for phonetics, continuous for prosody.

This architecture, Zeghidour detailed, disentangles content from timbre, enabling role-aware training. Voice actress Alice’s emotive recordings—whispers to cowboy drawls—seed synthetic dialogues, yielding hundreds of thousands of hours where Moshi learns deference, yielding floors fluidly.

Unveiling Technical Ingenuity and Open Horizons

Zeghidour dissected Mimi, Kyutai’s streaming codec: outperforming FLAC in fidelity while slashing bandwidth, it encodes raw audio into manageable tokens for LLM ingestion. Training on vast, permissioned corpora—podcasts, audiobooks—Moshi masters accents, emotions, and interruptions, rivaling human cadence.
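A back-of-envelope calculation shows why such a codec makes audio tractable for an LLM. The figures below (12.5 Hz frames, 8 residual codebooks of 2048 entries) are taken from Kyutai's published Moshi/Mimi materials and should be treated as assumptions for illustration:

```python
import math

# Back-of-envelope sketch: why a neural codec makes audio LLM-friendly.
# Figures (12.5 Hz frames, 8 codebooks, 2048 entries each) come from
# Kyutai's published Moshi/Mimi report; treat them as assumptions.

frame_rate_hz = 12.5   # codec frames per second of audio
codebooks = 8          # residual quantizers per frame
codebook_size = 2048   # entries per codebook

tokens_per_second = frame_rate_hz * codebooks             # 100 tokens/s
bits_per_token = math.log2(codebook_size)                 # 11 bits
bitrate_kbps = tokens_per_second * bits_per_token / 1000  # ~1.1 kbps
```

A hundred discrete tokens per second is a sequence length a language model can ingest in real time, which is what lets Moshi treat raw speech as just another token stream alongside text.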

Challenges abounded: duplexity’s echo cancellation, prosody’s subtlety. Yet, open-sourcing weights, code, and a 60-page treatise democratizes replication, from MacBook quantization to commercial scaling.

Zeghidour’s Moshi-Moshi vignette hinted at emergent quirks—self-dialogues veering philosophical—while inviting scrutiny via Twitter. Kyutai’s mandate: propel voice agents through transparency, fostering adoption in research and beyond.

In Moshi, Zeghidour glimpsed assistants unbound by text’s tyranny, conversing as kin— a sonic stride toward AGI’s empathetic embrace.


[PHPForumParis2023] Women in Tech: Challenges and Solutions – Isabelle Collet

Isabelle Collet, a sociologist and expert in gender studies, delivered a thought-provoking keynote at Forum PHP 2023, addressing the underrepresentation of women in the tech industry. Drawing from her extensive research, Isabelle challenged common assumptions about gender equality in programming, offering a nuanced perspective on systemic barriers and actionable solutions. Her engaging approach, infused with humor and real-world examples, invited the PHP community to reflect on fostering inclusivity and supporting diverse talent in technology.

Unpacking Gender Stereotypes

Isabelle opened by confronting a common sentiment: “I don’t see gender, only skills.” While well-intentioned, she argued, this overlooks systemic biases that shape tech’s male-dominated landscape. Using a playful exercise, she asked attendees to identify the gender of their neighbors, highlighting how societal cues—like clothing or beards—often guide assumptions. Isabelle explained that these unconscious biases influence hiring and retention, with statistics showing women are significantly underrepresented in tech roles globally. Her candid approach set the stage for a deeper exploration of structural challenges.

Cultural and Social Barriers

Delving into global perspectives, Isabelle noted that women’s participation in tech varies by region. In countries like Malaysia and India, women make up a higher proportion of tech professionals due to fewer cultural stereotypes about programming. Conversely, in Western nations, “geek” stereotypes rooted in pop culture deter women from entering the field. She highlighted unique cases, such as Pakistan, where women dominate image processing roles due to cultural norms around privacy. These insights underscored the complex interplay of culture, opportunity, and representation in shaping tech’s gender landscape.

Encouraging Women’s Participation

Isabelle proposed practical solutions to boost women’s involvement in tech. She emphasized early education, advocating for programs that introduce girls to coding in supportive environments. Addressing workplace challenges, she cited testimonies from women who love programming but face isolation or bias, leading some to leave the industry. Isabelle urged companies to foster inclusive cultures, mentor junior talent, and challenge stereotypes. Her own journey—pivoting from potential programmer to sociologist—highlighted how supportive environments could retain diverse talent.

Building an Inclusive Future

Concluding her talk, Isabelle called on the PHP community to take responsibility for change. She encouraged developers to mentor women, support diversity initiatives, and question biases in hiring and team dynamics. By sharing stories of women who thrive in tech despite obstacles, Isabelle inspired attendees to create environments where everyone can excel, regardless of gender. Her keynote left a lasting impression, urging collective action to make tech a more equitable space.