[DefCon32] DEF CON 32: Using ALPC Security Features to Compromise RPC Services
WanJunJie Zhang and Yisheng He, security researchers from Huorong Network Security, delivered a compelling presentation at DEF CON 32 on exploiting Windows Advanced Local Procedure Call (ALPC) security mechanisms to compromise Remote Procedure Call (RPC) services. Their research uncovered a subtle flaw in ALPC’s security checks, enabling unauthorized users to escalate to system privileges. WanJunJie and Yisheng’s detailed analysis of ALPC and RPC internals, combined with their innovative exploitation techniques, provided a fresh perspective on Windows kernel vulnerabilities.
Understanding ALPC and RPC Mechanics
WanJunJie opened by demystifying ALPC, a Windows kernel mechanism for inter-process communication, and its integration with RPC services. He explained the marshal/unmarshal routines that handle data exchange between processes, an area that had previously received little scrutiny. Their research at Huorong Network Security identified how ALPC’s security measures, designed to validate data and context, could be subverted. By analyzing historical ALPC and RPC bugs, such as time-of-check to time-of-use (TOCTOU) issues, WanJunJie set the stage for their discovery of a novel vulnerability.
Exploiting the Security Flaw
Yisheng detailed the critical flaw they uncovered in ALPC’s security mechanism, which they dubbed “defeating magic by magic.” This vulnerability allowed them to bypass strict kernel checks, achieving system-level privilege escalation. By manipulating ALPC syscalls in the Windows kernel (ntoskrnl), they crafted an exploit that leveraged a small oversight in the security validation process. Yisheng’s demonstration highlighted multiple exploitation paths, showcasing the versatility of their approach in targeting RPC services.
Lessons from Bug Hunting
The duo shared their bug hunting philosophy, emphasizing the importance of distrusting vendor patches, which may fail to fully address vulnerabilities. WanJunJie advocated for creative and critical analysis during patch reviews, noting that side effects from patches can introduce new flaws. Their experience, drawn from Huorong’s rigorous testing, underscored the need for patience and persistence in uncovering kernel-level bugs. They also highlighted the potential for automation in extracting RPC interface information to streamline future exploit development.
Enhancing Windows Security
Concluding, Yisheng offered insights into fortifying ALPC and RPC security, urging Microsoft to refine validation mechanisms and reduce reliance on backward compatibility. They encouraged the DEF CON community to explore RPC’s specialized features for new attack surfaces and share innovative ideas. Their references to prior works, such as Clement Rouault’s Hack.lu 2017 talk, provide a foundation for further research, inspiring attendees to probe Windows kernel vulnerabilities with renewed vigor.
Links:
[Voxxed Amsterdam 2025] From Zero to AI: Building Smart Java or Kotlin Applications with Spring AI
At VoxxedDaysAmsterdam2025, Christian Tzolov, a Spring AI team member at VMware and lead of the MCP Java SDK, delivered a comprehensive session titled “From Zero to AI: Building Smart Java or Kotlin Applications with Spring AI.” Spanning nearly two hours, the session provided a deep dive into integrating generative AI into Java and Kotlin applications using Spring AI, a framework designed to connect enterprise data and APIs with AI models. Through live coding demos, Tzolov showcased practical use cases, including conversation memory, tool/function calling, retrieval-augmented generation (RAG), and multi-agent systems, while addressing challenges like AI hallucinations and observability. Attendees left with actionable insights to start building AI-driven applications, leveraging Spring AI’s portable abstractions and the Model Context Protocol (MCP).
Overcoming LLM Limitations with Spring AI
Tzolov began by outlining the challenges of large language models (LLMs): they are stateless, frozen in time, and lack domain-specific knowledge, requiring developers to provide context, manage state, and handle interactions with external systems. Spring AI addresses these issues with high-level abstractions like the ChatClient, similar to Spring’s RestClient or WebClient, enabling seamless integration with models like OpenAI’s GPT-4o, Anthropic’s Claude, or open-source alternatives like LLaMA. A live demo of a flight booking assistant illustrated these concepts. Tzolov started with a basic Spring Boot application connected to OpenAI, demonstrating a simple chat interface. To ground the model, he used system prompts to define its behavior as a customer support agent for “Fun Air,” ensuring contextually appropriate responses. He then introduced conversation memory using Spring AI’s ChatMemoryAdvisor, which retains a chronological list of messages to maintain state, addressing the stateless nature of LLMs. For long-term memory, Tzolov employed a vector store (Chroma) to store conversation history semantically, retrieving only relevant data for queries, thus overcoming context window limitations. This setup allowed the assistant to respond accurately to queries like “What is my flight status?” by fetching booking details (e.g., booking number 103) from a mock database.
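To make the pattern concrete, a minimal sketch of such an assistant is shown below. The `ChatClient` fluent API is Spring AI’s; the controller shape, the “Fun Air” system prompt wording, and the endpoint path are assumptions for illustration, and the memory/vector-store advisors are only noted in comments because their class names vary between Spring AI releases.

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
class AssistantController {

    private final ChatClient chatClient;

    // Spring Boot auto-configures a ChatClient.Builder for the configured model (e.g. OpenAI).
    AssistantController(ChatClient.Builder builder) {
        this.chatClient = builder
                // System prompt grounds the model as a "Fun Air" support agent (wording assumed).
                .defaultSystem("You are a customer support agent for the airline Fun Air. "
                        + "Answer politely and only about bookings and flights.")
                // Conversation memory and vector-store retrieval are plugged in here via advisors
                // (a chat-memory advisor and a question-answer advisor); exact class names differ
                // across Spring AI releases, so they are left out of this sketch.
                .build();
    }

    @GetMapping("/chat")
    String chat(@RequestParam String message) {
        // Sends the user message with the default system prompt and returns the model's text reply.
        return chatClient.prompt()
                .user(message)
                .call()
                .content();
    }
}
```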
Enhancing AI Applications with Tool Calling and RAG
To enable LLMs to interact with external systems, Tzolov demonstrated tool/function calling, where Spring AI wraps existing services (e.g., a flight booking service) as tools with metadata (name, description, JSON schema). In the demo, the assistant used a getBookingDetails tool to query a database, allowing it to provide accurate flight status updates. Tzolov emphasized the importance of descriptive tool metadata to guide the LLM in deciding when and how to invoke tools, reducing the risk of misinterpretation. For domain-specific knowledge, he introduced prompt stuffing—injecting additional context into prompts—and RAG for dynamic data retrieval. In a RAG demo, cancellation policies were loaded into a Chroma vector store, chunked into meaningful segments, and retrieved dynamically based on user queries. This approach mitigated hallucinations, as seen when the assistant correctly cited a 50% refund policy for premium economy bookings within 40 hours. Tzolov highlighted advanced RAG techniques, such as data compression and reranking, supported by Spring AI’s APIs, and stressed the importance of evaluating responses to ensure relevance, referencing frameworks like those from contributor Thomas Vitali.
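A sketch of the tool-calling side follows, using the `@Tool` annotation found in recent Spring AI releases (older milestones registered function beans instead). The `BookingTools` class, its hard-coded reply, and the exact prompt are illustrative assumptions rather than the demo’s actual code.

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.tool.annotation.Tool;

class BookingTools {

    // The description is the metadata the model uses to decide when and how to call this tool.
    @Tool(description = "Looks up the status and details of a flight booking by its booking number")
    String getBookingDetails(String bookingNumber) {
        // In the talk's demo this queried a mock database; hard-coded here for illustration.
        return "Booking " + bookingNumber + ": flight FA-42 to Amsterdam, status CONFIRMED";
    }
}

class ToolCallingExample {

    String flightStatus(ChatClient chatClient) {
        // The model chooses to invoke getBookingDetails based on the tool metadata,
        // then folds the returned data into its natural-language answer.
        return chatClient.prompt()
                .user("What is the status of booking 103?")
                .tools(new BookingTools())
                .call()
                .content();
    }
}
```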
Building Multi-Agent Systems with MCP
Tzolov explored the Model Context Protocol (MCP), initiated by Anthropic, as a standardized way to integrate AI applications with external tools and resources across platforms. Using Spring AI’s MCP Java SDK, he demonstrated how to build and consume MCP-compliant tools. In one demo, a Spring AI application connected to MCP servers for Brave Search (JavaScript-based) and file system access, enabling an agent to answer queries about Spring AI support for MCP and write summaries to a file. Another demo reversed the setup, exposing a Spring AI weather tool (using Open-Meteo) as an MCP server, accessible by third-party clients like Claude Desktop via standard I/O or HTTP/SSE transports. Tzolov explained MCP’s bidirectional architecture, where clients can act as servers, supporting features like sampling (allowing servers to request LLM processing from clients). He addressed security concerns, noting Spring AI’s integration with Spring Security (referencing a blog by Daniel Garnier-Moiroux) to secure MCP servers with OAuth 2.1. The session also introduced agentic systems, where LLMs act as a “brain” for planning and tools as a “body” for interaction, with an agentic loop evaluating and refining responses. A work-in-progress demo showcased an orchestration pattern, delegating tasks to searcher, fact-checker, and writer agents, to be published on the Spring AI Community Portal.
Observability and Multimodality for Robust AI Systems
Observability was a key focus, as Tzolov underscored its importance in debugging complex AI interactions. Spring AI integrates with Micrometer to provide metrics (e.g., token usage, model throughput, latency), tracing, and logging (via Loki). A dashboard demo displayed real-time metrics for the flight booking assistant, highlighting tool calls and errors, crucial for diagnosing issues in agentic systems. Tzolov also explored multimodality, demonstrating a voice assistant using OpenAI’s GPT-4o audio preview, which processes audio input and output. Configured as “Marvin the Paranoid Android,” the assistant responded to voice queries with humorous, contextually appropriate replies, showcasing Spring AI’s support for non-text modalities like images, PDFs, and videos (e.g., Gemini’s video support). Tzolov noted that multimodality enables richer interactions, such as analyzing images or converting PDFs to markdown, and Spring AI’s abstractions handle these seamlessly. He concluded by encouraging developers to explore Spring AI’s documentation, experiment with MCP, and contribute to the community, emphasizing its role in building robust, interoperable AI applications.
Links:
- Christian Tzolov on LinkedIn
- Spring AI
- Model Context Protocol
- VMware
- Anthropic
- OpenAI
- Brave Search API
- Open-Meteo
- Chroma
- Micrometer
- Loki
- Spring AI GitHub
Hashtags: #SpringAI #GenerativeAI #ModelContextProtocol #ChristianTzolov #VoxxedDaysAmsterdam2025 #AIAgents #RAG #Observability
[Voxxed Amsterdam 2025] How to Survive as a Developer in the Exponential Age of AI
In a dynamic and fast-paced session at VoxxedDaysAmsterdam2025, Sander Hoogendoorn, CTO at iBOOD.com, explores the transformative impact of artificial intelligence (AI) on software development. With over four decades of coding experience, Hoogendoorn demystifies the hype around AI, examining its practical benefits, challenges, and implications for developers. Far from signaling the end of programming careers, he argues that AI empowers developers to tackle broader and more complex problems—provided they adapt and remain committed to quality and lifelong learning.
AI: A Developer’s Ally
Hoogendoorn clarifies AI’s role in development, highlighting tools like Cursor AI, which can extract components, fix linting issues, and generate unit tests using natural language prompts. He recounts an experience where his non-technical colleague Elmo built an iOS app with Cursor AI, illustrating AI’s democratizing potential. However, Sander warns that such tools require supervision to ensure code quality and compliance with project standards. AI’s ability to automate repetitive tasks—such as refactoring or test generation—frees developers to focus on complex problem-solving. At iBOOD, AI has enabled the team to create content with a unique tone, retrieve competitor pricing, and automate invoice recognition—tasks that previously required external expertise or significant manual effort.
Navigating Technical Debt in the AI Era
The rise of AI-assisted development—especially “vibe coding,” where problems are described in natural language to generate code—introduces new forms of technical debt. Hoogendoorn references Ward Cunningham’s metaphor of technical debt as a loan that accelerates development but demands repayment through refactoring. AI-generated code, while fast to produce, often lacks context or long-term maintainability. For instance, Cursor AI struggled to integrate with iBOOD’s custom markdown components, resulting in complex solutions that required manual fixes. Research suggests that AI can amplify technical debt if used without rigorous validation, emphasizing the need for developers to stay vigilant and prioritize code quality over short-term gains.
Thriving in an AI-Centric Future
Far from replacing developers, Sander Hoogendoorn asserts that AI enhances their capabilities, enabling them to tackle ambitious challenges. He reminds us that developers are not mere typists—they are problem-solvers who think critically and collaborate to meet business needs. Historical shifts, from COBOL to cloud computing, have always empowered developers to solve bigger problems, and AI is no exception. By thoughtfully experimenting with AI—integrating it into workflows for content creation, price retrieval, or invoice processing—Sander’s team at iBOOD has unlocked previously unreachable efficiencies. The key to thriving, he says, lies in relentless learning and a willingness to adapt, ensuring that developers remain indispensable in an AI-driven world.
Links:
Hashtags: #AI #SoftwareDevelopment #TechnicalDebt #CursorAI #iBOOD #SanderHoogendoorn #VoxxedDaysAmsterdam2025
Understanding Kubernetes for Docker and Docker Compose Users
TL;DR
Kubernetes may look like an overly complicated version of Docker Compose, but it operates on a different level entirely. Where Compose excels at quick, local orchestration of containers, Kubernetes is a robust, distributed platform designed for automated scaling, fault-tolerance, and production-grade deployments across multi-node clusters. This article provides a comprehensive comparison and shows how ArgoCD enhances GitOps-based Kubernetes workflows.
Docker Compose vs Kubernetes – Similarities and First Impressions
At a high level, Docker Compose and Kubernetes share similar concepts: containers, services, configuration, and volumes. This often leads to the assumption that Kubernetes is just a verbose, harder-to-write Compose replacement. However, Kubernetes is more than a runtime. It’s a control plane, a state manager, and a policy enforcer.
| Concept | Docker Compose | Kubernetes |
|---|---|---|
| Service definition | `docker-compose.yml` | `Deployment`, `Service`, etc. YAML manifests |
| Networking | Shared bridge network, service discovery by name | DNS, internal IPs, `ClusterIP`, `NodePort`, `Ingress` |
| Volume management | `volumes:` | `PersistentVolume`, `PersistentVolumeClaim`, `StorageClass` |
| Secrets and configs | `.env`, `environment:` | `ConfigMap`, `Secret`, `ServiceAccount` |
| Dependency management | `depends_on` | `initContainers`, `readinessProbe`, `livenessProbe` |
| Scaling | Manual (`--scale` flag or duplicate services) | Declarative (`replicas`), automatic via HPA |
Real-Life Use Cases – Docker Compose vs Kubernetes Examples
Tomcat + Oracle + MongoDB + NGINX Stack
Docker Compose
```yaml
version: '3'
services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    depends_on:
      - tomcat

  tomcat:
    image: tomcat:9
    ports:
      - "8080:8080"
    environment:
      DB_URL: jdbc:oracle:thin:@oracle:1521:orcl

  oracle:
    image: oracle/database:19.3.0-ee
    environment:
      ORACLE_PWD: secretpass
    volumes:
      - oracle-data:/opt/oracle/oradata

  mongo:
    image: mongo:5
    volumes:
      - mongo-data:/data/db

volumes:
  oracle-data:
  mongo-data:
```
Kubernetes Equivalent
- Each service becomes a `Deployment` and a `Service`.
- Environment variables and passwords are stored in `Secret`s.
- Volumes are defined with a `PersistentVolumeClaim` (PVC) and a `StorageClass`.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: oracle-secret
type: Opaque
data:
  ORACLE_PWD: c2VjcmV0cGFzcw==
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
        - name: tomcat
          image: tomcat:9
          ports:
            - containerPort: 8080
          env:
            - name: DB_URL
              value: jdbc:oracle:thin:@oracle:1521:orcl
```
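Each of these Deployments is typically paired with a `Service` so other pods (for example NGINX) can reach it by name; a minimal sketch for Tomcat, with the name and labels assumed to match the Deployment above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  type: ClusterIP        # internal-only; use NodePort or an Ingress for outside traffic
  selector:
    app: tomcat          # must match the pod labels set by the Deployment
  ports:
    - port: 8080         # port other pods connect to (e.g. nginx proxying to tomcat:8080)
      targetPort: 8080   # containerPort inside the pod
```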
NodeJS + Express + MySQL + NGINX
Docker Compose
```yaml
services:
  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: rootpass
    volumes:
      - mysql-data:/var/lib/mysql

  api:
    build: ./api
    environment:
      DB_USER: root
      DB_PASS: rootpass
      DB_HOST: mysql

  nginx:
    image: nginx:latest
    ports:
      - "80:80"

# Top-level definition required for the named volume referenced above
volumes:
  mysql-data:
```
Kubernetes Equivalent
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  MYSQL_ROOT_PASSWORD: cm9vdHBhc3M=
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:            # required by apps/v1; must match the pod template labels
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: node-app:latest
          env:
            - name: DB_PASS
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: MYSQL_ROOT_PASSWORD
```
⚙️ Docker Compose vs kubectl – Command Mapping
| Task | Docker Compose | Kubernetes |
|---|---|---|
| Start services | `docker-compose up -d` | `kubectl apply -f .` |
| Stop/cleanup | `docker-compose down` | `kubectl delete -f .` |
| View logs | `docker-compose logs -f` | `kubectl logs -f pod-name` |
| Scale a service | `docker-compose up --scale web=3` | `kubectl scale deployment web --replicas=3` |
| Shell into container | `docker-compose exec app sh` | `kubectl exec -it pod-name -- /bin/sh` |
ArgoCD – GitOps Made Practical
ArgoCD is a Kubernetes-native continuous deployment tool. It uses Git as the single source of truth, enabling declarative infrastructure and GitOps workflows.
✨ Key Features
- Declarative sync of Git and cluster state
- Drift detection and automatic repair
- Multi-environment and multi-namespace support
- CLI and Web UI available
Example ArgoCD Commands
```sh
argocd login argocd.myorg.com

argocd app create my-app \
  --repo https://github.com/org/app.git \
  --path k8s \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace production

argocd app sync my-app
argocd app get my-app
argocd app diff my-app
```
Sample ArgoCD Application Manifest
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-api
spec:
  destination:
    namespace: default
    server: https://kubernetes.default.svc
  project: default
  source:
    path: k8s/app
    repoURL: https://github.com/org/api.git
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
✅ Conclusion
Docker Compose is perfect for prototyping and local dev. Kubernetes is built for cloud-native workloads, distributed systems, and high availability. ArgoCD makes declarative, Git-based continuous deployment simple, scalable, and observable.
[NDCOslo2024] Underwhelming Game Development with PICO-8 – Jonas Winje
In the shadowed underbelly of enterprise engineering, where sprawling systems spawn suffocation and deadlines devour delight, Jonas Winje, a Norwegian developer grappling with the grind of grand-scale governance, seeks solace in simplicity. His sanctuary: PICO-8, a pint-sized fantasy console that curtails chaos with charming constraints—128×128 pixels, 16 hues, Lua’s lean lexicon. Jonas’s jaunt, a jaunty juxtaposition of corporate colossalness and cartridge compactness, celebrates the catharsis of creation unencumbered, where felines frolic freely and feedback flows fleetly, reminding us that restraint can reignite the rapture of the craft.
Jonas greets with a grin, confessing his conference conundrum: a final-day slot, sans sleep, yet surging with stories of structured suffering. Enterprise’s essence, he evokes—overwhelm’s octet, buried beneath burgeoning backlogs—contrasts PICO-8’s playful parameters, a devkit distilling development to its delightful distillate. No need for neural networks or native nightmares; here, Lua’s lightness lets logic leap lightly.
Constraints as Catalysts: The Charm of Curtailment
PICO-8’s precepts propel productivity: 8×8 sprites sidestep artistic angst, 128×128 canvases confine complexity, sound slots spur succinct scores. Jonas jests at his ineptitude—pixels as proxies for prowess—yet praises the palette’s pardon, where 16 shades shield the shoddy from scrutiny. Lua, the lingua franca, limits lines to luminous loops, forestalling the fog of feature frenzy.
His homage: the IDE’s intimacy, interleaving ink and iteration—code a cat, cue its caper, cycle ceaselessly. No compilation consternation; changes cascade crisply, cultivating a cadence of continuous communion with the creation. Jonas’s kernel: constraints kindle creativity, turning “tiny” into triumph, where a jumping feline fosters flow states forgotten in feature factories.
Feline Frolics: From Pixel to Playable
Jonas’s journey: sketching a sprite—tabby torso, whisker wisps—then scripting its saunter. _update() orchestrates orbits, _draw() depicts dances; variables vault the varmint vertically, gravity grounding its gambols. He highlights hitches—hues’ harmony, hitbox hassles—yet lauds the latitude for “good enough,” unyoked from scalability’s specter.
PICO-8’s perks permeate: fast feedback fuels focus, eschewing enterprise’s endless entanglements. Jonas nods to nootropics’ absence—no AI accretions, just 2MB heaps—ensuring essence endures. His epiphany: such sanctuaries salvage sanity, seeding skills that seep into salaried spheres—prompt prototypes, pixel-perfect pursuits.
Wholesome Whims: The Wholesome Workflow
Jonas’s jubilation: PICO-8’s purity prompts pleasure, a palliative for procedural purgatory. Cats’ cuteness compounds calm, while whimsy wards weariness. His horizon: harness this haven to hone habits—hasty harmony, human handiwork—bridging boutique builds to behemoth backends.
In this microcosm, Jonas joyously affirms: underwhelm to uplift, where whims whisper wisdom.
Links:
[KotlinConf2025] Charts, Code, and Sails: Winning a Regatta with Kotlin Notebook
In the high-stakes world of competitive sailing, where every decision can mean the difference between victory and defeat, an extraordinary tool has emerged: Kotlin Notebook. Roman Belov, a distinguished member of the JetBrains team, shared a captivating account of leveraging this innovative technology to triumph in a 24-hour regatta. The narrative transcends a simple code demonstration, illustrating how interactive programming becomes a critical asset in a dynamic, unpredictable environment like the open sea.
This journey highlights the power of Kotlin Notebook as more than a development tool; it is a platform for real-time problem-solving. Although Roman is a seasoned developer, the hat he cherishes most is that of a yachtsman, and he uses the notebook to translate complex nautical challenges into actionable, data-driven decisions. The task is to navigate a course, essentially a graph with nodes representing locations and edges representing the paths between them. Unlike a typical graph problem, however, the rules of sailing introduce complex variables: the boat cannot sail directly into the wind, and its speed depends heavily on the angle to the wind. This means the graph is constantly changing, making traditional route-planning algorithms obsolete.
The solution required a tool that could rapidly process data, visualize outcomes, and allow for on-the-fly adjustments. This is where Kotlin Notebook excelled, providing a live, interactive environment. Roman outlined how he could use the notebook to perform crucial tasks in the middle of the race: visualizing the race course on a map, calculating the fastest path based on current wind conditions, and dynamically adjusting the route as the wind shifted. This is achieved by creating a “sailable roads” model, which evaluates every potential path on the graph at regular intervals and discards any that are impossible given the wind direction. For the remaining paths, the notebook computes the optimal boat speed and time to complete that segment, effectively modeling the race in real time.
Roman then showcased the brute-force search algorithm that was used to find the optimal path. The code, written in Kotlin, was surprisingly straightforward and demonstrated the language’s elegance and readability. The algorithm, running within the notebook, would constantly iterate through the potential paths, calculating the time to finish for each one and discarding any that were slower than the best time found so far. The visual output of the notebook, which could render the different routes directly on the map, was a game-changer. It transformed abstract data and calculations into a clear, visual representation that allowed the sailors to make quick, informed decisions.
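To make the search concrete, here is a heavily simplified, hypothetical sketch in Kotlin: a course graph whose edge times depend on a single wind reading, explored depth-first and pruned as soon as a partial route is slower than the best finish time found so far. None of this is Roman’s actual notebook code; the polar-speed model in particular is reduced to a toy function.

```kotlin
import kotlin.math.abs

// Nodes are marks on the course; the time to sail an edge depends on the wind when it is sailed.
data class Edge(val to: Int, val bearingDeg: Double, val distanceNm: Double)

// Toy wind/speed model: no sailing closer than ~40° to the wind, fastest on a reach.
fun boatSpeedKnots(bearingDeg: Double, windDirDeg: Double, windKnots: Double): Double {
    val angle = abs(((bearingDeg - windDirDeg + 540) % 360) - 180) // true wind angle, 0..180
    return when {
        angle < 40 -> 0.0          // "no-go" zone: this edge is not sailable right now
        angle < 90 -> windKnots * 0.5
        angle < 140 -> windKnots * 0.7
        else -> windKnots * 0.55
    }
}

fun fastestRoute(
    graph: Map<Int, List<Edge>>,
    start: Int,
    finish: Int,
    windDirDeg: Double,
    windKnots: Double,
): Double {
    var best = Double.MAX_VALUE

    // Depth-first brute force with pruning: abandon any partial route already slower than the best.
    fun explore(node: Int, elapsedH: Double, visited: Set<Int>) {
        if (elapsedH >= best) return
        if (node == finish) { best = elapsedH; return }
        for (edge in graph[node].orEmpty()) {
            if (edge.to in visited) continue
            val speed = boatSpeedKnots(edge.bearingDeg, windDirDeg, windKnots)
            if (speed <= 0.0) continue  // discard edges that cannot be sailed in this wind
            explore(edge.to, elapsedH + edge.distanceNm / speed, visited + edge.to)
        }
    }

    explore(start, 0.0, setOf(start))
    return best
}
```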
The application of Kotlin Notebook in this unconventional scenario proves its versatility beyond traditional data science or development tasks. It demonstrated how a tool designed for rapid experimentation can be applied to complex, real-world problems. The interactive nature of the notebook allowed Roman to combine data analysis, algorithm execution, and visual feedback into a single, cohesive workflow, enabling him and his crew to stay ahead of the competition and ultimately, win the race. This story is a testament to the power of a modern programming language and an adaptable toolchain, turning a challenging maritime endeavor into an exciting display of computational prowess.
Links:
- Roman Belov on JetBrains Blog
- JetBrains website
- Charts, Code, and Sails: Winning a Regatta with Kotlin Notebook | Roman Belov
[DefCon32] DEF CON 32: Exploiting the Unexploitable: Insights from the Kibana Bug Bounty
Mikhail Shcherbakov, a PhD candidate at KTH Royal Institute of Technology in Stockholm, captivated the DEF CON 32 audience with his deep dive into exploiting seemingly unexploitable vulnerabilities in modern JavaScript and TypeScript applications. Drawing from his participation in the Kibana Bug Bounty Program, Mikhail shared case studies that reveal how persistence and creative exploitation can transform low-impact vulnerabilities into critical remote code execution (RCE) chains. His presentation, rooted in his research on code reuse attacks, offered actionable techniques for security researchers and robust mitigation strategies for defenders.
Navigating the Kibana Bug Bounty
Mikhail began by outlining his journey in the Kibana Bug Bounty Program, where he encountered vulnerabilities initially deemed “by design” or unexploitable by triage teams. His work at KTH, focusing on static and dynamic program analysis, equipped him to challenge these assumptions. Mikhail explained how he identified prototype pollution vulnerabilities in Kibana, a popular data visualization platform, that could crash applications in seconds. By combining these with novel exploitation primitives, he achieved RCE, demonstrating the hidden potential of overlooked flaws.
Unlocking Prototype Pollution Exploits
Delving into technical specifics, Mikhail detailed his approach to exploiting prototype pollution, a common JavaScript vulnerability. He showcased how merge functions in popular libraries like Lodash could be manipulated to pollute object prototypes, enabling attackers to inject malicious properties. Mikhail’s innovative chain involved polluting a runner object and triggering a backup handler, resulting in RCE. He emphasized that even fixed prototype pollution cases could be combined with unfixed ones across unrelated application features, amplifying their impact and bypassing conventional defenses.
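To illustrate the class of bug Mikhail exploited, here is a stand-alone, deliberately vulnerable recursive merge; it is not Kibana’s or Lodash’s actual code, but merge functions in older Lodash versions behaved analogously.

```typescript
// A naive deep merge with no key filtering: the classic prototype-pollution sink.
function unsafeMerge(target: any, source: any): any {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === "object" && source[key] !== null) {
      target[key] = unsafeMerge(target[key] ?? {}, source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// Attacker-controlled JSON: "__proto__" survives JSON.parse as a plain own property...
const payload = JSON.parse('{"__proto__": {"isAdmin": true}}');
unsafeMerge({}, payload);

// ...but the merge walks into target["__proto__"], i.e. Object.prototype, and pollutes it.
console.log(({} as any).isAdmin); // true — every object now appears to "have" isAdmin
```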
Advanced Exploitation Techniques
Mikhail introduced new primitives and gadgets that elevate prototype pollution beyond denial-of-service (DoS) attacks. He demonstrated how carefully crafted payloads could exploit Kibana’s internal structures, leveraging tools like Node.js and Deno to execute arbitrary code. His research also touched on network-based attacks, such as ARP spoofing in Kubernetes environments, highlighting the complexity of securing modern applications. Mikhail’s findings, documented in papers such as “Silent Spring” and “Dust,” provide a roadmap for researchers to uncover similar vulnerabilities in other JavaScript ecosystems.
Mitigating and Defending Against RCE
Concluding, Mikhail offered practical recommendations for mitigating these threats, urging developers to adopt secure coding practices and validate inputs rigorously. He encouraged researchers to persist in exploring seemingly unexploitable bugs, sharing resources like his collection of server-side prototype pollution gadgets. His work, accessible via his blog posts and Twitter updates, inspires the cybersecurity community to push boundaries in vulnerability research while equipping defenders with tools to fortify JavaScript applications against sophisticated attacks.
Links:
[DefCon32] Defeating EDR-Evading Malware with Memory Forensics
Andrew Case, a core developer on the Volatility memory analysis project and Director of Research at V-Soft Consulting, joined colleagues Sellers and Richard to present a groundbreaking session at DEF CON 32. Their talk focused on new memory forensics techniques to detect malware that evades Endpoint Detection and Response (EDR) systems. Andrew and his team developed plugins for Volatility 3, addressing sophisticated bypass techniques like direct system calls and malicious exception handlers. Their work, culminating in a comprehensive white paper, offers practical solutions for countering advanced malware threats.
The Arms Race with EDR Systems
Andrew opened by outlining the growing prominence of EDR systems, which perform deep system inspections to detect malware beyond traditional antivirus capabilities. However, malware developers have responded with advanced evasion techniques, such as code injection and manipulation of debug registers, fueling an ongoing arms race. Andrew’s research at V-Soft Consulting focuses on analyzing these techniques during incident response, revealing how attackers exploit low-level hardware and software components to bypass EDR protections, as seen in high-profile ransomware attacks.
New Memory Forensics Techniques
Delving into their research, Andrew detailed the development of Volatility 3 plugins to detect EDR bypasses. These plugins target techniques like direct and indirect system calls, module overwriting, and abuse of exception handlers. By enumerating handlers and applying static disassembly, their tools identify malicious processes generically, even when attackers tamper with functions like AMSI’s scan buffer. Andrew highlighted a specific plugin, Patchus AMSI, which catches both vector exception handlers and debug register abuses, ensuring EDRs cannot be fooled by malicious PowerShell or macros.
Practical Applications and Detection
The team’s plugins enable real-time detection of EDR-evading malware, providing defenders with actionable insights. Andrew demonstrated how their tools identify suspicious behaviors, such as breakpoints set on critical functions, allowing malicious code to execute undetected. He emphasized the importance of their 19-page white paper, available on the DEF CON website, which documents every known EDR bypass technique in userland. This resource, combined with the open-source plugins, empowers security professionals to strengthen their defenses against sophisticated threats.
Empowering the Cybersecurity Community
Concluding, Andrew encouraged attendees to explore the released plugins and white paper, which include 40 references for in-depth understanding. He stressed the collaborative nature of their work, inviting feedback to refine the Volatility framework. By sharing these tools, Andrew and his team aim to equip defenders with the means to counter evolving malware, ensuring EDR systems remain effective. Their session underscored the critical role of memory forensics in staying ahead of attackers in the cybersecurity landscape.
Links:
[NDCMelbourne2025] A Look At Modern Web APIs You Might Not Know – Julian Burr
As web technologies evolve, the capabilities of browsers have expanded far beyond their traditional roles, often rendering the need for native applications obsolete for certain functionalities. Julian Burr, a front-end engineer with a passion for design systems, delivers an engaging exploration of modern web APIs at NDC Melbourne 2025. Through his demo application, stopme.io—a stopwatch-as-a-service platform—Julian showcases how these APIs can enhance user experiences while adhering to the principle of progressive enhancement. His talk illuminates the power of web APIs to bridge the gap between web and native app experiences, offering practical insights for developers.
The Philosophy of Progressive Enhancement
Julian begins by championing progressive enhancement, a design philosophy that ensures baseline functionality for all users while delivering enhanced experiences for those with modern browsers. Quoting Mozilla, he defines it as providing essential content to as many users as possible while optimizing for advanced environments. This approach is critical when integrating web APIs, as it prevents over-reliance on features that may not be universally supported. For instance, in stopme.io, Julian ensures core stopwatch functionality remains accessible, with APIs adding value only when available. This philosophy guides his exploration, ensuring inclusivity and robustness in application design.
Observing the Web: Resize and Intersection Observers
The first category Julian explores is observability APIs, starting with the Resize Observer and Intersection Observer. These APIs, widely supported, allow developers to monitor changes in DOM element sizes and visibility within the viewport. In stopme.io, Julian uses the Intersection Observer to load JavaScript chunks only when components become visible, optimizing performance. While CSS container queries address styling needs, these APIs enable dynamic behavioral changes, making them invaluable for frameworks like Astro that rely on code splitting. Julian emphasizes their relevance, as they underpin many modern front-end optimizations, enhancing user experience without compromising accessibility.
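As an illustration of the lazy-loading pattern described above, a sketch follows; the `#timer-panel` selector and the `mountTimerWidget` placeholder are hypothetical, not stopme.io’s actual code.

```typescript
// Placeholder for the heavy initialisation we want to defer (a dynamic import in a real app).
function mountTimerWidget(el: Element): void {
  el.textContent = "00:00.0";
}

const panel = document.querySelector("#timer-panel");

if (panel && "IntersectionObserver" in window) {
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        mountTimerWidget(entry.target);   // initialise only once the panel is actually visible
        observer.unobserve(entry.target); // one-shot: stop observing after the first load
      }
    }
  });
  observer.observe(panel);
} else if (panel) {
  // Progressive enhancement: if the API is unavailable, initialise eagerly instead of breaking.
  mountTimerWidget(panel);
}
```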
Enhancing User Context: Network and Battery Status
Julian then delves into APIs that provide contextual awareness, such as the Page Visibility API, Network Information API, and Battery Status API. The Page Visibility API allows stopme.io to update the browser title bar with the timer status when the tab is inactive, enabling users to multitask. The Network Information API offers insights into connection types, allowing developers to serve lower-resolution assets on cellular networks. Similarly, the Battery Status API warns users of potential disruptions due to low battery, as demonstrated when stopme.io alerts users about long-running timers. Julian cautions about fingerprinting risks, noting that browser vendors intentionally reduce accuracy to protect privacy, aligning with progressive enhancement principles.
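A sketch of two of these checks, assuming a hypothetical title string and a 15% battery threshold; `navigator.getBattery()` is non-standard and missing from many browsers, hence the feature tests.

```typescript
// Page Visibility API: surface the timer state in the tab title when the tab is hidden.
document.addEventListener("visibilitychange", () => {
  document.title = document.hidden ? "⏱ Timer still running…" : "stopme.io";
});

// Battery Status API (not universally supported): warn before a long-running timer dies with the device.
interface BatteryLike {
  level: number; // 0.0 – 1.0
  charging: boolean;
  addEventListener(type: "levelchange", listener: () => void): void;
}

const nav = navigator as Navigator & { getBattery?: () => Promise<BatteryLike> };

if (nav.getBattery) {
  nav.getBattery().then((battery) => {
    const warnIfLow = () => {
      if (!battery.charging && battery.level < 0.15) {
        console.warn("Battery below 15% — your timer may be interrupted.");
      }
    };
    warnIfLow();
    battery.addEventListener("levelchange", warnIfLow);
  });
}
```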
Elevating Components: Screen Wake Lock and Vibration
Moving to component enhancement, Julian highlights the Screen Wake Lock and Vibration APIs. The Screen Wake Lock API prevents devices from entering sleep mode during critical tasks, such as keeping stopme.io’s timer visible. The Vibration API adds haptic feedback, like notifying users when a timer finishes, with customizable patterns for engaging effects. Julian stresses user control, suggesting toggles to avoid intrusive experiences. While these APIs—often Chrome-centric—enhance interactivity, Julian underscores the need for fallback options to maintain functionality across browsers, ensuring no user is excluded.
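The sketch below shows how these two APIs are typically wired up, with the feature detection and user opt-in Julian recommends; the function names and the vibration pattern are assumptions.

```typescript
// Screen Wake Lock API: keep the display on while a timer is running (Chromium-based browsers).
let wakeLock: WakeLockSentinel | null = null;

async function keepScreenAwake(): Promise<void> {
  if ("wakeLock" in navigator) {
    try {
      wakeLock = await navigator.wakeLock.request("screen");
    } catch {
      // Request denied (e.g. battery saver) — the timer still works, just without the lock.
    }
  }
}

function releaseScreen(): void {
  wakeLock?.release();
  wakeLock = null;
}

// Vibration API: a short haptic pattern when the timer finishes, only if the user opted in.
function buzzOnFinish(hapticsEnabled: boolean): void {
  if (hapticsEnabled && "vibrate" in navigator) {
    navigator.vibrate([200, 100, 200]); // vibrate–pause–vibrate, in milliseconds
  }
}
```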
Native-Like Experiences: File System, Clipboard, and Share APIs
Julian showcases APIs that rival native app capabilities, including the File System, Clipboard, and Share APIs. The File System API enables file picker interactions, while the Clipboard API facilitates seamless copy-paste operations. The Share API, used in stopme.io, triggers native sharing dialogs, simplifying content distribution across platforms. These APIs, inspired by tools like Cordova, reflect the web’s evolution toward native-like functionality. Julian notes their security mechanisms, such as transient activation for Clipboard API, which require user interaction to prevent misuse, ensuring both usability and safety.
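A small sketch of the copy-and-share flow; the share payload text is an assumption, and both calls must run inside a user gesture because of the transient-activation requirement mentioned above.

```typescript
// Clipboard API: copy a result link; requires a user gesture (transient activation).
async function copyResultLink(url: string): Promise<void> {
  await navigator.clipboard.writeText(url);
}

// Web Share API: hand the result off to the platform's native share sheet, with a fallback.
async function shareResult(url: string): Promise<void> {
  const data = { title: "My stopwatch result", text: "Timed with stopme.io", url };
  if (navigator.share && (!navigator.canShare || navigator.canShare(data))) {
    await navigator.share(data);
  } else {
    await copyResultLink(url); // fallback: at least put the link on the clipboard
  }
}
```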
Streamlining Authentication: Web OTP and Credential Management
Authentication APIs, such as Web OTP and Credential Management, offer seamless login experiences. The Web OTP API automates SMS-based one-time password entry, as demonstrated in stopme.io, where Chrome facilitates code sharing across devices. The Credential Management API streamlines password storage and retrieval, reducing login friction. Julian highlights their synergy with the Web Authentication API, which supports passwordless logins via biometrics. These APIs, widely available, enhance security and user convenience, making them essential for modern web applications.
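For illustration, a WebOTP sketch appears below; the `#otp-input` selector is hypothetical, and the `otp` option is cast because it is not yet part of the standard TypeScript DOM typings.

```typescript
// WebOTP API: auto-fill an SMS one-time code (Chrome on Android); fall back to manual entry elsewhere.
async function readSmsOtp(signal: AbortSignal): Promise<string | null> {
  if (!("OTPCredential" in window)) return null; // feature detection

  const options = { otp: { transport: ["sms"] }, signal } as CredentialRequestOptions;
  const credential = await navigator.credentials.get(options);

  return credential && "code" in credential
    ? (credential as unknown as { code: string }).code
    : null;
}

// Abort the SMS wait if the user submits the code manually.
const abortOtp = new AbortController();
readSmsOtp(abortOtp.signal).then((code) => {
  const input = document.querySelector<HTMLInputElement>("#otp-input");
  if (code && input) input.value = code;
});
```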
Links:
[DotJs2024] Your App Crashes My Browser
Amid the ceaseless churn of web innovation, a stealthy saboteur lurks: memory leaks that silently erode browser vitality, culminating in the dreaded “Out of Memory” epitaph. Stoyan Stefanov, a trailblazing entrepreneur and web performance sage with roots at Facebook, confronted this scourge at dotJS 2024. Once a fixture in optimizing vast social feeds, Stoyan transitioned from crisis aversion—hard reloads post-navigation—to empowerment through diagnostics. His manifesto: arm developers with telemetry and sleuthing to exorcise these phantoms, ensuring apps endure without devouring RAM.
Stoyan’s alarm rang true via Nolan Lawson’s audit: a decade’s top SPAs unanimously hemorrhaged, underscoring leaks’ ubiquity even among elite codebases. Personal scars abounded—from a social giant’s browser-crushing sprawl, mitigated by crude resets—to the thrill of unearthing culprits sans autopsy. The panacea commences with candor: the Reporting API, a beacon flagging crashes in the wild, piping diagnostics to endpoints for pattern mining. This passive vigilance—triggered on OOM—unmasks field frailties, from rogue closures retaining DOM vestiges to event sentinels orphaned post-unmount.
Diagnosis demands ritual: heap snapshots bracketing actions, GC sweeps purifying baselines, diffs revealing retainers. Stoyan evangelized Memlab, Facebook’s CLI oracle, automating Puppeteer-driven cycles—load, act, revert—yielding lucid diffs: “1,000 objects via EventListener cascade.” For the uninitiated, his Recorder extension chronicles clicks into scenario scripts, demystifying Puppeteer. Leaks manifest insidiously: un-nullified globals, listener phantoms in React class components—addEventListener sans symmetric removal—hoarding callbacks eternally.
Remediation rings simple: sever references—null assignments, framework hooks like useEffect cleanups—unleashing GC’s mercy. Stoyan’s ethos: paranoia pays; leaks infest, but tools tame. From Memlab’s precision on map apps—hotel overlays persisting post-dismiss—to listener audits, vigilance yields fluidity. In an age of sprawling SPAs, this vigilance isn’t luxury but lifeline, sparing users frustration and browsers demise.
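As a concrete version of the listener pattern Stoyan described, here is a leaky class-style widget and its symmetric cleanup, sketched framework-free; the React variant is shown only as a comment.

```typescript
class TimerWidget {
  // Bound once so that add/removeEventListener see the same function reference.
  private readonly onResize = () => this.render();

  mount(): void {
    window.addEventListener("resize", this.onResize);
  }

  unmount(): void {
    // Without this symmetric removal, window keeps a reference to onResize,
    // which keeps `this` (and any DOM it holds) alive forever: a classic leak.
    window.removeEventListener("resize", this.onResize);
  }

  private render(): void {
    /* redraw the widget */
  }
}

// React equivalent (as a comment, to keep this sketch framework-free):
// useEffect(() => {
//   window.addEventListener("resize", onResize);
//   return () => window.removeEventListener("resize", onResize); // cleanup on unmount
// }, []);
```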
Unveiling Leaks in the Wild
Stoyan spotlighted Reporting API’s prowess: crash telemetry streams to logs, correlating OOM with usage spikes. Nolan’s Speedline probe affirmed: elite apps falter uniformly, from unpruned caches to eternal timers. Proactive profiling—snapshots pre/post actions—exposes retain cycles, Memlab automating to spotlight listener detritus or closure traps.
Tools and Tactics for Eradication
Memlab’s symphony: Puppeteer orchestration, intelligent diffs tracing leaks to sources—e.g., 1K objects via unremoved handlers. Stoyan’s Recorder eases entry, click-to-script. Fixes favor finality: removeEventListener in disposals, nulls for orphans. Paranoia’s yield: resilient apps, jubilant users.
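For readers who want to try Memlab, a scenario file follows the load–act–revert shape described above; the URL and selectors are placeholders, and CLI details may differ between Memlab versions.

```typescript
// Memlab scenario: load a page, perform the suspected-leaky interaction, then revert it.
// Typical invocation: memlab run --scenario hotel-overlay.js (after compiling this file to JS).
import type { Page } from "puppeteer";

module.exports = {
  // Initial page to load (placeholder URL).
  url: (): string => "https://example.com/map",

  // Action suspected of leaking: open the hotel overlay (placeholder selector).
  action: async (page: Page): Promise<void> => {
    await page.click("#open-hotel-overlay");
  },

  // Revert the action: Memlab diffs heap snapshots around this cycle to find retained objects.
  back: async (page: Page): Promise<void> => {
    await page.click("#close-hotel-overlay");
  },
};
```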