PostHeaderIcon [DefCon32] DEF CON 32: NTLM The Last Ride

Jim Rush and Tomais Williamson, security researchers from Wellington, New Zealand, electrified DEF CON 32 with a deep dive into exploiting NTLM authentication before its planned phase-out in Windows 11 and beyond. Representing CyberCX, they unveiled new vulnerabilities, bypassed existing fixes, and exposed insecure defaults in Microsoft’s NTLM-related controls. Their fast-paced presentation, infused with humor and technical depth, offered a final hurrah for NTLM hacking, urging attendees to turn off NTLM where possible.

Revisiting NTLM’s Persistent Flaws

Jim and Tomais began by contextualizing NTLM, a 25-year-old authentication protocol still prevalent despite its known weaknesses. They highlighted Microsoft’s plan to deprecate NTLM, yet emphasized its lingering presence in legacy systems. Their research uncovered new bugs, including a bypass of a previously patched CVE, allowing attackers to coerce NTLM hashes from various applications. By exposing these flaws, Jim and Tomais underscored the urgency of transitioning to more secure protocols like Kerberos.

Novel Exploitation Techniques

The duo detailed their innovative approaches, combining multiple bug classes to extract NTLM hashes from unexpected sources, such as document processors and build servers. Their live demonstrations showcased “cooked” bugs—exploits leveraging URL inputs to trigger hash leaks. Jim’s anecdotes about their discoveries, including a nod to their CyberCX colleague’s assistance, highlighted the collaborative nature of their work. These techniques revealed NTLM’s fragility, especially in environments with permissive defaults.

Insecure Defaults and Systemic Gaps

Focusing on Microsoft’s NTLM security controls, Jim and Tomais exposed glaring gaps, such as libraries allowing unauthenticated hash extraction. They demonstrated how attackers could exploit these defaults in applications like Microsoft Teams or PDF generators, turning innocuous features into attack vectors. Their findings, supported by CyberCX’s research efforts, emphasized the need for organizations to audit NTLM usage and disable it wherever feasible to prevent hash coercion.

Community Collaboration and Future Steps

Concluding, Jim and Tomais called for community engagement, inviting attendees to share ideas for extracting hashes from novel sources like video games. They praised Microsoft’s MSRC team for their responsiveness and urged continued disclosure to advance research. Their advice to “turn off NTLM, then turn it back on when someone screams” humorously captured the challenge of legacy system dependencies, encouraging proactive steps toward more secure authentication frameworks.

Links:

PostHeaderIcon [RivieraDev2025] Stanley Servical and Louis Fredice Njako Molom – Really Inaccessible

At Riviera DEV 2025, Stanley Servical and Louis Fredice Njako Molom presented an immersive workshop titled “Really Inaccessible,” designed as an escape game to spotlight the challenges of digital accessibility. Through a hands-on, interactive experience, Stanley and Louis guided participants into the perspectives of users with visual, auditory, motor, and cognitive disabilities. Their session not only highlighted the barriers faced by these users but also provided practical strategies for building inclusive digital solutions. This engaging format, combined with a focus on actionable improvements, underscores the critical role of accessibility in modern software development.

Immersive Learning Through an Escape Game

Stanley and Louis kicked off their workshop with an innovative escape game, inviting participants to navigate a digital environment deliberately designed with accessibility flaws. The game, accessible via a provided URL, immersed attendees in scenarios mimicking real-world challenges faced by individuals with disabilities. Participants were encouraged to use headphones for a fully immersive experience, engaging with tasks that highlighted issues like poor color contrast, missing link styles, and inaccessible form elements. The open-source nature of the game, as Stanley emphasized, allows developers to adapt and reuse it, fostering broader awareness within teams and organizations.

The escape game served as a powerful tool to simulate the frustrations of inaccessible interfaces, such as navigating without a mouse or interpreting low-contrast text. Feedback from participants underscored the game’s impact, with one developer noting how it deepened their understanding of motor and auditory challenges, reinforcing the need for inclusive design. Louis highlighted that the game’s public availability enables it to be shared with colleagues or even non-technical audiences, amplifying its educational reach.

The State of Digital Accessibility

Following the escape game, Stanley and Louis transitioned to a debrief, offering a comprehensive overview of digital accessibility’s current landscape. They emphasized that accessibility extends beyond screen readers, encompassing motor, cognitive, and visual impairments. The European Accessibility Act, effective since June 28, 2025, was cited as a pivotal legal driver, mandating inclusive digital services across public and private sectors. However, they framed this not as a mere compliance obligation but as an opportunity to enhance user experience and reach broader audiences.

The speakers identified common accessibility pitfalls, such as unstyled links or insufficient color contrast, which disrupt user navigation. They stressed that accessibility challenges are highly individualized, requiring flexible solutions that adapt to diverse needs. Tools like screen readers and keyboard navigation aids were discussed, with Stanley noting their limitations when applications lack proper semantic structure. This segment underscored the necessity of integrating accessibility from the earliest stages of design and development to avoid retrofitting costs.
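The contrast problems the speakers call out are measurable: WCAG 2.x defines a relative-luminance formula and requires a ratio of at least 4.5:1 for normal body text. A minimal sketch of that computation (class and method names are illustrative, not from the talk):

```java
public class ContrastCheck {
    // Linearize one sRGB channel (0-255) per the WCAG 2.x definition.
    static double linearize(int channel) {
        double c = channel / 255.0;
        return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    }

    // Relative luminance of a packed 0xRRGGBB color.
    static double luminance(int rgb) {
        return 0.2126 * linearize((rgb >> 16) & 0xFF)
             + 0.7152 * linearize((rgb >> 8) & 0xFF)
             + 0.0722 * linearize(rgb & 0xFF);
    }

    // Contrast ratio between two colors, always >= 1.0.
    static double contrastRatio(int a, int b) {
        double l1 = Math.max(luminance(a), luminance(b));
        double l2 = Math.min(luminance(a), luminance(b));
        return (l1 + 0.05) / (l2 + 0.05);
    }

    public static void main(String[] args) {
        // Black on white: the maximum possible contrast, 21:1.
        System.out.println(contrastRatio(0x000000, 0xFFFFFF)); // 21.0
        // Light gray on white fails the 4.5:1 threshold for body text.
        System.out.println(contrastRatio(0xCCCCCC, 0xFFFFFF) >= 4.5); // false
    }
}
```

A check like this can run in a CI pipeline against a design system's color tokens, catching low-contrast pairings before they reach users.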

User-Centric Testing for Inclusive Design

A core theme of the workshop was the adoption of a user-centric testing approach to ensure accessibility. Louis introduced tools like Playwright and Cypress, which integrate accessibility checks into end-to-end testing workflows. By simulating user interactions—such as keyboard navigation or form completion—these tools help developers identify and address issues like focus traps in pop-ups or inaccessible form inputs. For instance, Louis demonstrated a test scenario where a form’s number input required specific accessibility roles to ensure compatibility with assistive technologies.

The speakers emphasized that user-centric testing aligns accessibility with functional requirements, enhancing overall application quality. They showcased how tools like Axe-core can be embedded in testing pipelines to scan single-page applications (SPAs) for accessibility violations on a per-use-case basis, rather than just page-level checks. This approach, as Stanley noted, ensures that tests remain relevant to real-world user interactions, making accessibility a seamless part of the development process.
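To make the idea of an automated accessibility rule concrete, here is a toy check in the spirit of axe-core's "image-alt" rule: flag `<img>` tags with no `alt` attribute. This is not axe-core itself, and real scanners inspect the rendered DOM rather than raw HTML, so treat it purely as an illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AltTextRule {
    // Collect every <img> tag in the markup that lacks an alt attribute.
    static List<String> missingAlt(String html) {
        List<String> violations = new ArrayList<>();
        Matcher m = Pattern.compile("<img\\b[^>]*>", Pattern.CASE_INSENSITIVE).matcher(html);
        while (m.find()) {
            String tag = m.group();
            if (!tag.toLowerCase().contains("alt=")) {
                violations.add(tag);
            }
        }
        return violations;
    }

    public static void main(String[] args) {
        String page = "<img src=\"logo.png\" alt=\"Company logo\"><img src=\"deco.png\">";
        System.out.println(missingAlt(page)); // [<img src="deco.png">]
    }
}
```

Embedding such rules per use case, as the speakers suggest, means a violation fails the same test that exercises the feature, rather than surfacing in a separate page-level audit.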

Practical Strategies for Improvement

Stanley and Louis concluded with actionable strategies for improving accessibility, drawing from real-world case studies. They advocated for simple yet impactful practices, such as ensuring proper focus management in pop-ups, using semantic HTML, and maintaining high contrast ratios. For example, they highlighted the importance of updating page titles dynamically in SPAs to aid screen reader users, a practice often overlooked in dynamic web applications.

They also addressed the integration of accessibility into existing workflows, recommending manual testing for critical user journeys and automated checks for scalability. The open-source ecosystem around their escape game, including plugins and VS Code extensions, was presented as a resource for developers to streamline accessibility testing. Louis emphasized collaboration between developers and manual testers to avoid redundant efforts, ensuring that accessibility enhancements align with business goals.

Leveraging Open-Source and Community Feedback

The workshop’s open-source ethos was a recurring theme, with Stanley and Louis encouraging participants to contribute to the escape game’s evolution. They highlighted its flexibility, noting that developers can tailor scenarios to specific accessibility challenges, such as color blindness or motor impairments. The inclusion of a “glitch code” to bypass bugs in the game demonstrated their commitment to practical usability, even in an educational tool.

Participant feedback was actively solicited, with suggestions like adding a menu to navigate specific game sections directly. Stanley acknowledged this as a valuable enhancement, noting that relative URLs for individual challenges are already available in the game’s repository. This collaborative approach, paired with the workshop’s emphasis on community-driven improvement, positions the escape game as a living project that evolves with user input.

Legal and Ethical Imperatives

Beyond technical solutions, Stanley and Louis underscored the ethical and legal imperatives of accessibility. The European Accessibility Act, alongside frameworks like the RGAA (Référentiel Général d’Amélioration de l’Accessibilité), provides a structured guide for compliance. However, they framed accessibility as more than a regulatory checkbox—it’s a commitment to inclusivity that enhances user trust and broadens market reach. By designing for the most marginalized users, developers can create applications that are more robust and user-friendly for all.

The speakers also addressed emerging trends, such as voice-activated navigation, referencing tools like Dragon NaturallySpeaking. While not yet fully integrated into their framework, they expressed openness to exploring such technologies, inviting community contributions to tackle these challenges. This forward-looking perspective ensures that accessibility remains dynamic, adapting to new user needs and technological advancements.

Empowering Developers for Change

The workshop closed with a call to action, urging developers to apply their learnings immediately. Stanley and Louis encouraged attendees to share the escape game, integrate accessibility testing into their workflows, and advocate for inclusive design within their organizations. They emphasized that small, consistent efforts—such as verifying keyboard navigation or ensuring proper ARIA roles—can yield significant improvements. By fostering a culture of accessibility, developers can drive meaningful change, aligning technical innovation with social responsibility.

Links:

  • None available

PostHeaderIcon [DevoxxUK2025] Passkeys in Practice: Implementing Passwordless Apps

At DevoxxUK2025, Daniel Garnier-Moiroux, a Spring Security team member at VMware, delivered an engaging talk on implementing passwordless authentication using passkeys and the WebAuthn specification. Highlighting the security risks of traditional passwords, Daniel demonstrated how passkeys leverage cryptographic keys stored on devices like YubiKeys, Macs, or smartphones to provide secure, user-friendly login flows. Using Spring Boot 3.4’s new WebAuthn support, he showcased practical steps to integrate passkeys into an existing application, emphasizing phishing resistance and simplified user experiences. His live coding demo and insights into Spring Security’s configuration made this a compelling session for developers seeking modern authentication solutions.

The Problem with Passwords

Daniel opened by underscoring the vulnerabilities of passwords, often reused or poorly secured, leading to frequent breaches. He introduced passwordless alternatives, starting with one-time tokens (OTTs), which Spring Security supports for temporary login links sent via email. While effective, OTTs require cumbersome steps like copying tokens across devices. Passkeys, based on the WebAuthn standard, offer a superior solution by using cryptographic keys tied to specific domains, eliminating password-related risks. Supported by major browsers and platforms like Apple, Google, and Microsoft, passkeys enable seamless authentication via biometrics, PINs, or physical devices, combining convenience with robust security.

Understanding WebAuthn and Passkeys

Passkeys utilize asymmetric cryptography, where a private key remains on the user’s device (e.g., a YubiKey or iPhone) and a public key is shared with the server. Daniel explained the two-phase process: registration, where a key pair is generated and the public key is stored on the server, and authentication, where the server sends a challenge, the device signs it with the private key, and the server verifies it. This ensures phishing resistance, as keys are domain-specific and cannot be used on fraudulent sites. WebAuthn, a W3C standard backed by the FIDO Alliance, simplifies this process for developers by abstracting complex cryptography through browser APIs like navigator.credentials.create() and navigator.credentials.get().
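The registration/authentication split can be modeled with plain JDK crypto. The toy round trip below uses only `java.security` primitives (no WebAuthn library, no attestation or domain binding) to show why a leaked public key is useless without the device's private key:

```java
import java.security.GeneralSecurityException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PublicKey;
import java.security.SecureRandom;
import java.security.Signature;

public class PasskeySketch {
    // Simulate one registration followed by one authentication.
    static boolean roundTrip() {
        try {
            // Registration: the authenticator creates a key pair; the server
            // stores only the public key alongside the user account.
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
            kpg.initialize(256);
            KeyPair deviceKeys = kpg.generateKeyPair();
            PublicKey storedOnServer = deviceKeys.getPublic();

            // Authentication: the server issues a fresh random challenge...
            byte[] challenge = new byte[32];
            new SecureRandom().nextBytes(challenge);

            // ...the device signs it with the private key that never leaves it...
            Signature signer = Signature.getInstance("SHA256withECDSA");
            signer.initSign(deviceKeys.getPrivate());
            signer.update(challenge);
            byte[] assertion = signer.sign();

            // ...and the server verifies against the stored public key.
            Signature verifier = Signature.getInstance("SHA256withECDSA");
            verifier.initVerify(storedOnServer);
            verifier.update(challenge);
            return verifier.verify(assertion);
        } catch (GeneralSecurityException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip()); // true
    }
}
```

Because each challenge is random and signatures are bound to it, a captured assertion cannot be replayed; WebAuthn additionally binds credentials to the origin, which is what defeats phishing.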

Integrating Passkeys with Spring Security

Using a live demo, Daniel showed how to integrate passkeys into a Spring Boot 3.4 application. He added the spring-security-webauthn dependency and configured a security setup with the application name, relying party (RP) ID (e.g., localhost), and allowed origins. This minimal configuration enables a default passkey login page. For persistence, Spring Security 6.5 (releasing soon after the talk) offers JDBC support, requiring two tables: one for user credentials (storing public keys and metadata) and another linking passkeys to users. Daniel emphasized that Spring Security handles cryptographic validation, sparing developers from implementing complex WebAuthn logic manually.
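The configuration surface really is small. A sketch of the DSL along the lines Daniel showed (assuming Spring Boot 3.4 / Spring Security 6.4+; the application name and origin below are placeholders):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class PasskeyConfig {
    @Bean
    SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            .authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
            .formLogin(Customizer.withDefaults())
            .webAuthn(webAuthn -> webAuthn
                .rpName("Example App")                       // name shown to the user
                .rpId("localhost")                           // relying party ID = effective domain
                .allowedOrigins("http://localhost:8080"));   // origins permitted to register/authenticate
        return http.build();
    }
}
```

With this in place, Spring Security serves a default login page offering passkey registration and sign-in; the persistence tables mentioned above are only needed once credentials must survive restarts.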

Customizing the Passkey Experience

To enhance user experience, Daniel demonstrated creating a custom login page with a branded “Sign in with Passkey” button, styled with CSS (featuring a Comic Sans font for humor). He highlighted the need for JavaScript to interact with WebAuthn APIs, copying Spring Security’s Apache-licensed sample code for authentication flows. This involves handling CSRF tokens and redirecting users post-authentication. While minimal Java code is needed, developers must write some JavaScript to trigger browser APIs. Daniel advised using Spring Security’s defaults for simplicity but encouraged customization for production apps, ensuring alignment with brand aesthetics.

Practical Considerations and Feedback

Daniel stressed that passkeys are not biometric data but cryptographic credentials, synced across devices via password managers or iCloud Keychain without server involvement. For organizations using identity providers like Keycloak or Azure Entra ID, passkey support is often a checkbox configuration, reducing implementation effort. He encouraged developers to provide feedback on Spring Security’s passkey support via GitHub issues, emphasizing community contributions to refine features. For those interested in deeper WebAuthn mechanics, he recommended Yubico’s developer guide over the dense W3C specification, offering practical insights for implementation.

Links:

PostHeaderIcon [DefCon32] DEF CON 32: Finding & Exploiting Local Attacks on 1Password Mac Desktop App

J. Hoffman and Colby Morgan, offensive security engineers at Robinhood, delivered a compelling presentation at DEF CON 32, exploring vulnerabilities in the 1Password macOS desktop application. Focusing on the risks posed by compromised endpoints, they unveiled multiple attack vectors to dump local vaults, exposing weaknesses in 1Password’s software architecture and IPC mechanisms. Their research, blending technical rigor with practical demonstrations, offered critical insights into securing password managers against local threats.

Probing 1Password’s Security Assumptions

J. and Colby opened by highlighting the immense trust users place in password managers like 1Password, which safeguard sensitive credentials. They posed a critical question: how secure are these credentials if a device is compromised? Their research targeted the macOS application, uncovering vulnerabilities that could allow attackers to access vaults. By examining 1Password’s reliance on inter-process communication (IPC) and open-source components, they revealed how seemingly robust encryption fails under local attacks, setting the stage for their detailed findings.

Exploiting Application Vulnerabilities

The duo detailed several vulnerabilities, including an XPC validation bypass that enabled unauthorized access to 1Password’s processes. Their live demonstrations showcased how attackers could exploit these flaws to extract vault data, even on locked systems. They also identified novel bugs in Google Chrome’s interaction with 1Password’s browser extension, amplifying the attack surface. J. and Colby’s meticulous approach, including proof-of-concept scripts released on Morgan’s GitHub, underscored the need for robust validation in password manager software.

Mitigating Local Threats

Addressing mitigation, J. and Colby recommended upgrading to the latest 1Password versions, noting fixes in versions 8.10.18 and 8.10.36 for their disclosed issues. They urged organizations to enhance endpoint security, emphasizing that password managers are prime targets for red teamers seeking cloud credentials or API keys. Their findings, developed over a month of intensive research, highlighted the importance of proactive patching and monitoring to safeguard sensitive data on compromised devices.

Engaging the Security Community

Concluding, J. and Colby encouraged the DEF CON community to extend their research to other password managers, noting that similar vulnerabilities likely exist. They shared their code to inspire further exploration and emphasized responsible disclosure, having worked with 1Password to address the issues. Their call to action invited attendees to collaborate on improving password manager security, reinforcing the collective effort needed to protect critical credentials in an era of sophisticated local attacks.

Links:

PostHeaderIcon [OxidizeConf2024] Panel: What Has to Change to Increase Rust Adoption in Industrial Companies?

Overcoming Barriers to Adoption

The promise of memory-safe programming has positioned Rust as a leading candidate for industrial applications, yet its adoption in traditional sectors faces significant hurdles. At OxidizeConf2024, a panel moderated by Florian Gilcher from Ferrous Systems, featuring Michał Fita, James Munns from OneVariable UG, and Steve Klabnik from Oxide Computer Company, explored strategies to enhance Rust’s uptake in industrial companies. The discussion, enriched by audience questions, addressed cultural, technical, and managerial barriers, offering actionable insights for developers and organizations.

Michał highlighted the managerial perspective, noting that industrial companies often prioritize stability and cost over innovation. The challenge lies in convincing decision-makers of Rust’s benefits, particularly when hiring skilled Rust developers is difficult. James added that industrial users, rooted in mechanical and electrical engineering, are less likely to share challenges publicly, complicating efforts to gauge their needs. Florian emphasized the role of initiatives like Ferrocene, a safety-compliant Rust compiler, in opening doors to regulated industries like automotive.

Technical and Cultural Shifts

Steve underscored Rust’s technical advantages, such as memory safety and concurrency guarantees, which align with regulatory pressures from organizations like the NSA advocating for memory-safe languages. However, he cautioned against framing Rust as a direct replacement for C++, which risks alienating skilled C++ developers. Instead, the panel advocated for a collaborative approach, highlighting Rust’s total cost of ownership benefits—fewer bugs, faster debugging, and improved maintainability. James noted that tools like cargo-deny and cargo-tree enhance security and dependency management, addressing industrial concerns about reliability.

Cultural resistance also plays a role, particularly in companies reliant on trade secrets. Michał pointed out that Rust’s open-source ethos can clash with proprietary mindsets, requiring tailored strategies to demonstrate value. The panel suggested focusing on high-impact areas, such as safety-critical components, where Rust’s guarantees provide immediate benefits. By integrating Rust incrementally, companies can leverage existing C++ codebases while transitioning to safer, more modern practices.

Engaging Stakeholders and Building Community

Convincing stakeholders requires a nuanced approach, avoiding dismissive rhetoric about legacy languages. Florian stressed the importance of meeting developers where they are, respecting the expertise of C++ practitioners while showcasing Rust’s practical advantages. Steve highlighted successful adoptions, such as Oxide’s server stack, as case studies to inspire confidence. The panel also discussed the role of community efforts, such as the Rust Foundation, in providing resources and certifications to ease adoption.

Audience input reinforced the need for positive messaging. A C++ developer cautioned against framing Rust as a mandate driven by external pressures, advocating for dialogue that emphasizes mutual benefits. The panel agreed, suggesting that events like OxidizeConf and open-source contributions can bridge gaps between communities, fostering collaboration. By addressing technical, cultural, and managerial challenges, Rust can gain traction in industrial settings, driving innovation without discarding legacy expertise.

Links:

PostHeaderIcon [DevoxxFR2025] Dagger Modules: A Swiss Army Knife for Modern CI/CD Pipelines

Continuous Integration and Continuous Delivery (CI/CD) pipelines are the backbone of modern software development, automating the process of building, testing, and deploying applications. However, as these pipelines grow in complexity, they often become difficult to maintain, debug, and port across different execution platforms, frequently relying on verbose and platform-specific YAML configurations. Jean-Christophe Sirot, in his presentation, introduced Dagger as a revolutionary approach to CI/CD, allowing pipelines to be written as code, executable locally, testable, and portable. He explored Dagger Functions and Dagger Modules as key concepts for creating and sharing reusable, language-agnostic components for CI/CD workflows, positioning Dagger as a versatile “Swiss Army knife” for modernizing these critical pipelines.

The Pain Points of Traditional CI/CD

Jean-Christophe began by outlining the common frustrations associated with traditional CI/CD pipelines. Relying heavily on YAML or other declarative formats for defining pipelines can lead to complex, repetitive, and hard-to-read configurations, especially for intricate workflows. Debugging failures within these pipelines is often challenging, requiring pushing changes to a remote CI server and waiting for the pipeline to run. Furthermore, pipelines written for one CI platform (like GitHub Actions or GitLab CI) are often not easily transferable to another, creating vendor lock-in and hindering flexibility. This dependency on specific platforms and the difficulty in managing complex workflows manually are significant pain points for development and DevOps teams.

Dagger: CI/CD as Code

Dagger offers a fundamentally different approach by treating CI/CD pipelines as code. It allows developers to write their pipeline logic using familiar programming languages (like Go, Python, Java, or TypeScript) instead of platform-specific configuration languages. This brings the benefits of software development practices – such as code reusability, modularity, testing, and versioning – to CI/CD. Jean-Christophe explained that Dagger executes these pipelines using containers, ensuring consistency and portability across different environments. The Dagger engine runs the pipeline logic, orchestrates the necessary container operations, and manages dependencies. This allows developers to run and debug their CI/CD pipelines locally using the same code that will execute on the remote CI platform, significantly accelerating the debugging cycle.

Dagger Functions and Modules

Key to Dagger’s power are Dagger Functions and Dagger Modules. Jean-Christophe described Dagger Functions as the basic building blocks of a pipeline – functions written in a programming language that perform specific CI/CD tasks (e.g., building a Docker image, running tests, deploying an application). These functions interact with the Dagger engine to perform container operations.

Dagger Modules are collections of related Dagger Functions that can be packaged and shared. Modules allow teams to create reusable components for common CI/CD patterns or specific technologies, effectively creating a library of CI/CD capabilities. For example, a team could create a “Java Build Module” containing functions for compiling Java code, running Maven or Gradle tasks, and building JAR or WAR files. These modules can be easily imported and used in different projects, promoting standardization and reducing duplication across an organization’s CI/CD workflows.

Jean-Christophe demonstrated how to create and use Dagger Modules, illustrating their potential for building composable and maintainable pipelines. He highlighted that Dagger’s language independence means that modules can be written in one language (e.g., Python) and used in a pipeline defined in another (e.g., Java), fostering collaboration between teams with different language preferences.

The Benefits: Composable, Maintainable, Portable

By adopting Dagger, teams can create CI/CD pipelines that are:

  • Composable: Pipelines can be built by combining smaller, reusable Dagger Modules and Functions.
  • Maintainable: Pipelines written as code are easier to read, understand, and refactor using standard development tools and practices.
  • Portable: Pipelines can run on any platform that supports Dagger and containers, eliminating vendor lock-in.
  • Testable: Individual Dagger Functions and modules can be unit tested, and the entire pipeline can be run and debugged locally.

Jean-Christophe’s presentation positioned Dagger as a versatile tool that modernizes CI/CD by bringing the best practices of software development to pipeline automation. The ability to write pipelines in code, leverage reusable modules, and execute locally makes Dagger a powerful “Swiss Army knife” for developers and DevOps engineers seeking more efficient, reliable, and maintainable CI/CD workflows.

Links:

PostHeaderIcon Understanding `elastic.apm.instrument_ancient_bytecode=true` in Elastic APM

Elastic APM (Application Performance Monitoring) is a powerful tool designed to provide visibility into your application’s performance by instrumenting code at runtime. Most of the time, Elastic APM dynamically attaches itself to Java applications, weaving in the necessary instrumentation logic to capture transactions, spans, and errors. However, some applications, especially legacy systems or those running on older bytecode, may require additional configuration. This is where the parameter `elastic.apm.instrument_ancient_bytecode=true` becomes relevant.

What Does This Parameter Do?

By default, the Elastic APM agent is optimized for modern JVM bytecode, typically generated by more recent versions of Java compilers. However, in certain environments, applications may rely on very old Java bytecode compiled with legacy compilers, or on classes transformed in ways that deviate from expected patterns. In such cases, the default instrumentation mechanisms may fail.

Setting `elastic.apm.instrument_ancient_bytecode=true` explicitly tells the agent to attempt instrumentation on bytecode that does not fully conform to current JVM standards. It essentially relaxes some of the agent’s safeguards and fallback logic, allowing it to process “ancient” or non-standard bytecode.
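In practice the flag is passed like any other agent option, either as a system property at JVM startup or via `elasticapm.properties`. The agent path and service name below are placeholders for your own deployment:

```shell
java -javaagent:/opt/elastic/elastic-apm-agent.jar \
     -Delastic.apm.service_name=legacy-app \
     -Delastic.apm.instrument_ancient_bytecode=true \
     -jar legacy-app.jar
```

The equivalent `elasticapm.properties` entry is simply `instrument_ancient_bytecode=true`.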

When Is This Necessary?

Most modern Java applications do not require this parameter. However, it becomes useful in scenarios such as:

  • Legacy Applications: Systems still running on bytecode generated by Java 5, 6, or even earlier.
  • Bytecode Manipulation: Applications that make heavy use of frameworks or tools that dynamically generate or transform bytecode in unusual ways.
  • Incompatible Class Structures: Some libraries written long ago may use patterns that modern instrumentation cannot safely parse.

Examples of Differences

Without the Parameter

  • The Elastic APM agent may skip certain classes entirely, resulting in gaps in transaction traces.
  • Errors such as “class not instrumented” may appear in logs when working with older or unusual bytecode.
  • Performance metrics may look incomplete, missing critical spans in legacy code paths.

With the Parameter Enabled

  • The agent attempts a broader set of instrumentation strategies, even for outdated or malformed bytecode.
  • Legacy classes and libraries are more likely to be traced successfully, providing a fuller view of application performance.
  • Developers gain visibility into workflows that would otherwise remain opaque, such as old JDBC calls or proprietary frameworks compiled years ago.

Trade-offs and Risks

While enabling this parameter may seem like a straightforward fix, it should be approached with caution:

  • Stability Risks: Forcing instrumentation of very old bytecode could lead to runtime issues if the agent misinterprets structures.
  • Performance Overhead: Instrumenting non-standard classes may come with higher CPU or memory costs.
  • Support Limitations: Elastic primarily supports mainstream JVM versions, so using this parameter places the application in less-tested territory.

Best Practices

  • Enable `elastic.apm.instrument_ancient_bytecode` only if you detect missing traces or errors in the agent logs related to class instrumentation.
  • Test thoroughly in a staging environment before applying it to production.
  • Document which modules require this setting and track their eventual migration to modern Java versions.

Conclusion

The `elastic.apm.instrument_ancient_bytecode=true` parameter is a niche but valuable option for teams maintaining legacy Java systems. By enabling it, organizations can bridge the gap between outdated bytecode and modern observability needs, ensuring that even older applications benefit from the insights provided by Elastic APM. However, this should be viewed as a temporary measure on the journey toward modernizing application stacks, not as a permanent fix.


Hashtags:
#ElasticAPM #JavaMonitoring #ApplicationPerformance #LegacySystems #DevOps #Observability #JavaDevelopment #PerformanceMonitoring #ElasticStack #SoftwareMaintenance

PostHeaderIcon [AWSReInforce2025] How AWS’s global threat intelligence transforms cloud protection (SEC302)

Lecturer

The presentation features AWS security leadership and engineering experts who architect the global threat intelligence platform. Their collective expertise spans distributed systems, machine learning, and real-time security operations across AWS’s planetary-scale infrastructure.

Abstract

The session examines AWS’s threat intelligence lifecycle—from sensor deployment through data processing to automated disruption—demonstrating how global telemetry volume enables precision defense at scale. It reveals the architectural patterns and machine learning models that convert billions of daily security events into actionable mitigations, establishing security as a reliability function within the shared responsibility model.

Global Sensor Network and Telemetry Foundation

AWS operates the world’s largest sensor network for security telemetry, spanning every Availability Zone, edge location, and service endpoint. This includes hypervisor introspection, network flow logs, DNS query monitoring, and host-level signals from EC2 instances. The scale is staggering: thousands of potential security events are blocked daily before customer impact, derived from petabytes of raw telemetry.

Sensors are purpose-built for specific threat classes. Network sensors detect C2 beaconing patterns; host sensors identify cryptominer process trees; DNS sensors flag domain generation algorithms. This layered approach ensures coverage across the attack lifecycle—from reconnaissance through exploitation to persistence.
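To make the DNS-sensor idea concrete, here is a minimal sketch of one classic heuristic for flagging domain generation algorithms: algorithmically generated labels tend to have higher character entropy than human-chosen names. This is an illustrative toy, not AWS's actual detector, and the `3.5` threshold is an arbitrary assumption:

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a domain label."""
    counts = Counter(label)
    total = len(label)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_generated(domain: str, threshold: float = 3.5) -> bool:
    """Flag a domain whose first label has suspiciously high entropy."""
    label = domain.split(".")[0]
    return shannon_entropy(label) >= threshold

# A dictionary-word label scores low; a random-looking one scores high.
print(looks_generated("google.com"))              # low entropy
print(looks_generated("xk7qz9w2mf4rt8ypb3.example"))  # high entropy
```

Real detectors layer many more features (n-gram frequencies, TLD reputation, query timing), but the entropy signal captures the core intuition.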

Data Processing Pipeline and Intelligence Generation

Raw telemetry flows through a multi-stage pipeline. First, deterministic rules filter known bad indicators—IP addresses from botnet controllers, certificate hashes of phishing kits. Surviving events enter machine learning models trained on historical compromise patterns.
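The first pipeline stage can be sketched as a simple set-membership pre-filter that short-circuits known-bad indicators before anything reaches the ML models. The indicator values below are placeholders from the TEST-NET documentation ranges, not real threat data:

```python
# Illustrative deterministic pre-filter; a production system would use
# continuously refreshed indicator feeds, not a hard-coded set.
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}

def triage(events):
    """Split events into immediately blocked vs. forwarded to ML models."""
    blocked, to_ml = [], []
    for ev in events:
        (blocked if ev["src_ip"] in KNOWN_BAD_IPS else to_ml).append(ev)
    return blocked, to_ml

blocked, survivors = triage([
    {"src_ip": "203.0.113.7", "port": 443},
    {"src_ip": "192.0.2.1", "port": 53},
])
```

Cheap deterministic checks first, expensive probabilistic models second—the ordering is what lets the pipeline absorb petabyte-scale input.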

The models operate in two modes: supervised classification for known attack families, and unsupervised anomaly detection for zero-day behaviors. Feature engineering extracts behavioral fingerprints—process lineage entropy, network flow burstiness, file system access velocity. Models refresh hourly using federated learning across regions, preventing single-point compromise.

Intelligence quality gates require precision above 99.9% to minimize false positives. When confidence thresholds are met, signals become actionable intelligence with metadata: actor attribution, campaign identifiers, TTP mappings to MITRE ATT&CK.
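The 99.9% precision gate reduces to a simple check: of the events a model flags, what fraction are genuinely malicious? A minimal sketch of such a gate, with the threshold taken from the figure above:

```python
def precision(true_positives: int, false_positives: int) -> float:
    """Fraction of flagged events that are genuinely malicious."""
    return true_positives / (true_positives + false_positives)

def passes_quality_gate(tp: int, fp: int, threshold: float = 0.999) -> bool:
    """Promote a signal to actionable intelligence only above the threshold."""
    return precision(tp, fp) >= threshold

print(passes_quality_gate(9990, 10))  # exactly 99.9% — passes
print(passes_quality_gate(999, 2))    # ~99.8% — rejected
```

At this bar, even one false positive per thousand detections is the ceiling—appropriate when a passing signal triggers automated blocks against live infrastructure.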

Automated Disruption and Attacker Cost Imposition

Intelligence drives automated responses through three mechanisms. First, infrastructure-level blocks: malicious IPs are null-routed at the network edge within seconds. Second, service-level mitigations: compromised credentials trigger forced password rotation and session termination. Third, customer notifications via GuardDuty findings with remediation playbooks.

The disruption philosophy focuses on increasing attacker cost. By blocking C2 infrastructure early, campaigns lose command visibility. By rotating compromised keys rapidly, lateral movement becomes expensive. By publishing indicators publicly, defenders globally benefit from AWS’s visibility.

Shared Outcomes in the Responsibility Model

The shared responsibility model extends to outcomes, not just controls. AWS secures the cloud—hypervisors, network fabric, physical facilities—while customers secure their workloads. Threat intelligence bridges this divide: AWS’s global view detects campaigns targeting multiple customers, enabling proactive protection before individual compromise.

This manifests in services like Shield Advanced, which absorbs DDoS attacks at the network perimeter, and Macie, which identifies exposed PII across S3 buckets. Customers focus on application logic—input validation, business rule enforcement—while AWS handles undifferentiated heavy lifting.

Machine Learning at Security Scale

Scaling threat intelligence requires automation beyond human capacity. Data scientists build models that generalize across attack variations while maintaining low false positive rates. Techniques include:

  • Graph neural networks to detect credential abuse chains
  • Time-series analysis for cryptominer thermal signatures
  • Natural language processing on phishing email corpora

Model interpretability ensures security analysts can validate decisions. Feature importance rankings and counterfactual examples explain why a particular IP was blocked, maintaining operational trust.

Operational Integration and Customer Impact

Intelligence integrates into customer-facing services seamlessly. GuardDuty consumes the same models used internally, surfacing findings with evidence packages. Security Hub centralizes signals from AWS and partner solutions. WAF rulesets update automatically with emerging threat patterns.

The impact compounds: a campaign targeting one customer is disrupted globally. A novel malware strain detected in one region triggers protections everywhere. This network effect makes the internet safer collectively.

Conclusion: Security as Reliability Engineering

Threat intelligence at AWS scale transforms security from reactive defense to proactive reliability engineering. By investing in sensors, processing, and automation, AWS prevents disruptions before they affect customer operations. The shared outcomes model—where infrastructure protection enables application innovation—creates a virtuous cycle: more secure workloads generate better telemetry, improving intelligence quality, which prevents more disruptions.

Links:

PostHeaderIcon [DefCon32] DEF CON 32: Laundering Money

Michael Orlitzky, a multifaceted security researcher and mathematician, captivated the DEF CON 32 audience with a provocative presentation on bypassing payment mechanisms in CSC ServiceWorks’ pay-to-play laundry machines. By exploiting physical vulnerabilities in Speed Queen washers and dryers, Michael demonstrated how to run these machines without payment, framing his actions as a response to CSC’s exploitative practices. His talk, rich with technical detail and humor, shed light on the intersection of physical security and consumer frustration, urging attendees to question predatory business models.

Uncovering CSC’s Predatory Practices

Michael began by introducing CSC ServiceWorks, a major provider of coin- and app-operated laundry machines in residential buildings. He detailed their business model, which charges tenants for laundry despite rent covering utilities, often trapping users with non-refundable prepaid cards or unreliable apps like CSC GO. Michael recounted personal grievances, such as machines eating quarters or failing to deliver services, supported by widespread customer complaints citing CSC’s poor maintenance and refund processes. His narrative positioned CSC as a corporate antagonist, justifying his exploration of hardware bypasses as a form of reclaiming fairness.

Bypassing Coin Slots with Hardware Hacks

Delving into the technical core, Michael explained how to access the service panels of CSC-branded Speed Queen machines, which use standardized keys available online. By short-circuiting red and black wires in the coin-drop mechanism, he tricked the machine into registering payment, enabling free cycles without damage. His live demonstration, complete with safety warnings about grounding and electrical risks, showcased the simplicity of the bypass—achievable in seconds with minimal tools. Michael’s approach, detailed on his personal website, emphasized accessibility, requiring only determination and basic equipment.

Addressing CSC’s Security Upgrades

Michael also addressed CSC’s response to his findings, noting that days before DEF CON 32, the company upgraded his building’s machines with new tubular locks and security Torx screws. Undeterred, he demonstrated how to bypass these using a tubular lockpick or a flathead screwdriver, highlighting CSC’s superficial fixes. His candid tone and humorous defiance—acknowledging the machines’ internet-connected logs—underscored the low risk of repercussions, as CSC’s focus on profit over maintenance left such vulnerabilities unaddressed. This segment reinforced the talk’s theme of exploiting systemic flaws in poorly secured systems.

Ethical Implications and Community Call

Concluding, Michael framed his work as a protest against CSC’s exploitative practices, encouraging attendees to consider the ethics of bypassing systems that exploit consumers. He shared resources, including manuals and his write-up, to empower others while cautioning about legal risks. His talk sparked reflection on the balance between technical ingenuity and corporate accountability, urging the DEF CON community to challenge predatory systems through informed action.

Links:

PostHeaderIcon [DotAI2024] DotAI 2024: Maxim Zaks – Mojo: Beyond Buzz, Toward a Systems Symphony

Maxim Zaks, a polymath programmer whose work spans IDEs, data pipelines, and FlatBuffers, examined Mojo's substance at DotAI 2024. A community contributor to the language—unaffiliated with Modular—Zaks argued that Mojo is not fleeting hype but a durable foundation for AI practitioners and systems programmers alike.

From Procedural Roots to Pythonic Present

Zaks opened with a cultural flashback: the era of Married… with Children evoked the 1980s computing landscape of C, Smalltalk, and BASIC amid enterprise esoterica. Fast-forward through Java's dominance to Python's pliant rise—yet the performance complaint persists, as Python falls short wherever speed and precision matter.

Mojo, he argued, is a merger: Python's syntax with systems-level sinew—a superset rather than a schism, fusing scripting ergonomics with C-like speed. Launched quietly in 2023 and hailed by Jeremy Howard as a once-in-decades advance, Mojo now ranks 48th on the TIOBE index and can be tried directly in the browser for a barrier-free first encounter.

Empowering Engineers: From Syntax to SIMD

Zaks then turned to performance enthusiasts: Mojo exposes SIMD intrinsics that deliver speedups without syntactic strain, and its data structures are designed for efficient dispatch. Its scope includes multithreading and inline machine-learning kernels—CPUs as the canvas today, GPUs on the horizon.

For newcomers, Zaks walked through a prefix-sum example: a plain Python version plods, a Mojo baseline is already brisk, and the SIMD variant surges roughly eightfold further—making low-level optimization accessible without resorting to a second language such as Zig or Rust.
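For reference, the "plain Python" end of that comparison looks like the sketch below (in Python, since Mojo itself is outside this post's scope); `itertools.accumulate` gives the same result with its loop in C, and Mojo's SIMD version pushes the same computation further still:

```python
from itertools import accumulate

def prefix_sum(xs):
    """Naive running total — the plain-Python baseline in Zaks's parable."""
    out, total = [], 0
    for x in xs:
        total += x
        out.append(total)
    return out

data = [3, 1, 4, 1, 5]
print(prefix_sum(data))           # [3, 4, 8, 9, 14]
print(list(accumulate(data)))     # same result, loop executed in C
```

The point of the parable is that Mojo lets you keep this readable shape while reaching hand-vectorized performance.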

He also praised the community and tooling: an inclusive, welcoming project with a VS Code extension and a REPL. Mojo's playful character, Zaks suggested, is where whimsy meets wattage—approachability paired with raw power.

In closing, Zaks pointed attendees to the browser playground at mojo.modular.com—an invitation to start forging, unfettered.

Links: