[NodeCongress2024] Strategies for High-Performance Node.js API Microservices
Lecturer: Tamar Twena-Stern
Tamar Twena-Stern is an experienced software professional, serving as a developer, manager, and architect with a decade of expertise spanning server-side development, big data, mobile, web technologies, and security. She possesses a deep specialization in Node.js server architecture and performance optimization. Her work is centered on practical strategies for improving Node.js REST API performance, encompassing areas from database interaction and caching to efficient framework and library selection.
Relevant Links:
* GitNation Profile (Talks): https://gitnation.com/person/tamar_twenastern
* Lecture Video: Implementing a performant URL parser from scratch
Abstract
This article systematically outlines and analyzes key strategies for optimizing the performance of Node.js-based REST API microservices, a requirement necessitated by the high concurrency demands of modern, scalable web services. The analysis is segmented into three primary areas: I/O optimization (database access and request parallelism), data locality and caching, and strategic library and framework selection. Key methodologies, including the use of connection pooling, distributed caching with technologies like Redis, and the selection of low-overhead utilities (e.g., Fastify and Pino), are presented as essential mechanisms for minimizing latency and maximizing API throughput.
Performance Engineering in Node.js API Architecture
I/O Optimization: Database and Concurrency
The performance of a Node.js API is heavily constrained by Input/Output (I/O) operations, particularly those involving database queries or external network requests. Optimizing this layer is paramount for achieving speed at scale:
- Database Connection Pooling: At high transaction volumes, the overhead of opening and closing a new database connection for every incoming request becomes a critical bottleneck. The established pattern of connection pooling is mandatory, as it enables the reuse of existing, idle connections, significantly reducing connection establishment latency.
- Native Drivers vs. ORMs: For applications operating at large scale, performance gains can be realized by preferring native database drivers over traditional Object-Relational Mappers (ORMs). While ORMs offer abstraction and development convenience, they introduce a layer of overhead that can be detrimental to raw request throughput.
- Parallel Execution: Latency within a single request often results from sequential execution of independent I/O tasks (e.g., multiple database queries or external service calls). Using `Promise.all` executes these tasks in parallel, ensuring that the overall response time is determined by the slowest task, rather than the sum of all tasks.
- Query Efficiency: Fundamental to performance is ensuring an efficient database architecture and optimizing all underlying database queries.
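The parallel-execution pattern can be sketched in plain Node.js; `fetchUser` and `fetchOrders` below are hypothetical stand-ins for independent database queries or external calls:

```javascript
// Hypothetical independent I/O tasks (stand-ins for DB queries or HTTP calls).
const delay = (ms, value) => new Promise((resolve) => setTimeout(() => resolve(value), ms));
const fetchUser = (id) => delay(100, { id, name: 'Ada' });
const fetchOrders = (id) => delay(150, [{ orderId: 1 }]);

async function getProfileSequential(id) {
  const user = await fetchUser(id);     // ~100 ms
  const orders = await fetchOrders(id); // + ~150 ms => ~250 ms total
  return { user, orders };
}

async function getProfileParallel(id) {
  // Both tasks start immediately; total time is roughly the slowest task (~150 ms).
  const [user, orders] = await Promise.all([fetchUser(id), fetchOrders(id)]);
  return { user, orders };
}
```

The parallel variant only applies when the tasks are truly independent; if one query needs the other's result, sequential awaiting remains necessary.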
Data Locality and Caching Strategies
Caching is an essential architectural pattern for reducing I/O load and decreasing request latency for frequently accessed or computationally expensive data.
- Distributed Caching: In-memory caching is strongly discouraged for services deployed in multiple replicas or instances, as it leads to data inconsistency and scalability issues. The professional standard is distributed caching, utilizing technologies such as Redis or `etcd`. A distributed cache ensures all service instances access a unified, shared source of cached data.
- Cache Candidates: Data recommended for caching includes results of complex DB queries, computationally intensive cryptographic operations (e.g., JWT parsing), and external HTTP requests.
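A minimal cache-aside sketch of this pattern follows. In production the `store` would be a shared Redis instance; an in-process `Map` stands in here only to keep the example self-contained, and `expensiveQuery` is a hypothetical stand-in for a costly DB query:

```javascript
// Cache-aside sketch. In production the store would be a distributed cache
// (e.g., Redis); a Map stands in here so the example is self-contained.
const store = new Map(); // key -> { value, expiresAt }

async function cached(key, ttlMs, compute) {
  const hit = store.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value; // cache hit
  const value = await compute();                           // expensive miss path
  store.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}

// Example: cache the result of an expensive query for 30 seconds.
let queries = 0;
const expensiveQuery = async () => { queries += 1; return { rows: [1, 2, 3] }; };

async function demo() {
  await cached('report:today', 30_000, expensiveQuery);
  await cached('report:today', 30_000, expensiveQuery);
  return queries; // the second call is served from the cache
}
```

Swapping the `Map` for a Redis client is what turns this into the distributed variant the text recommends; the call sites do not change.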
Strategic Selection of Runtime Libraries
The choice of third-party libraries and frameworks has a profound impact on the efficiency of the Node.js event loop.
- Web Framework Selection: Choosing a high-performance HTTP framework is a fundamental optimization. Frameworks like Fastify or Hapi offer superior throughput and lower overhead compared to more generalized alternatives like Express.
- Efficient Serialization: Performance profiling reveals that JSON serialization can be a significant bottleneck when handling large payloads. Utilizing high-speed serialization libraries, such as `fast-json-stringify`, to replace the slower default `JSON.stringify` can drastically improve response times.
- Logging and I/O: Logging is an I/O operation and, if handled inefficiently, can impede the main thread. The selection of a high-throughput, low-overhead logging utility like Pino is necessary to mitigate this risk.
- Request Parsing Optimization: Computational tasks executed on the main thread, such as parsing components of an incoming request (e.g., JWT token decoding), should be optimized, as they contribute directly to request latency.
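On the serialization point: `fast-json-stringify` gains its speed by compiling a serializer from a known schema ahead of time. The toy sketch below shows that idea only; it is not the library's API or implementation:

```javascript
// Toy illustration of schema-driven serialization, the idea behind
// fast-json-stringify: because the payload shape is known ahead of time,
// the serializer is generated once and skips per-call shape discovery.
// This is NOT the library's implementation, just a sketch of the concept.
function compileSerializer(schema) {
  // schema: { fieldName: 'string' | 'number' } (flat objects only, for brevity)
  const parts = Object.entries(schema).map(([key, type]) => {
    const prefix = JSON.stringify(JSON.stringify(key) + ':'); // e.g. '"\\"id\\":"'
    const valueExpr = type === 'string' ? `JSON.stringify(obj.${key})` : `String(obj.${key})`;
    return `${prefix} + ${valueExpr}`;
  });
  // Build a specialized function once; reuse it for every response.
  return new Function('obj', `return '{' + [${parts.join(', ')}].join(',') + '}';`);
}

const serializeUser = compileSerializer({ id: 'number', name: 'string' });
```

For the flat shapes it handles, the compiled function produces the same output as `JSON.stringify` while doing far less work per call.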
Links
- Lecture Video: JS Perf Wins & New Node.js Features with Yagiz Nizipli – Syntax #716
- Lecturer’s GitNation Profile: https://gitnation.com/person/tamar_twenastern
[NodeCongress2024] Bridging Runtimes: Advanced Testing Strategies for Cloudflare Workers with Vitest
Lecturer: Brendan Coll
Brendan Coll is a software engineer and key contributor to the Cloudflare Workers ecosystem. He is recognized as the creator of Miniflare, an open-source, fully-local simulator designed for the development and testing of Cloudflare Workers. His work focuses heavily on improving the developer experience for serverless and edge computing environments, particularly concerning local development, robust testing, and TypeScript integration. He has played a crucial role in leveraging and contributing to the open-source Workers runtime, workerd, to enhance performance and local fidelity.
Relevant Links:
* Cloudflare Author Profile: https://blog.cloudflare.com/author/brendan-coll/
* Cloudflare TV Discussion on Miniflare: https://cloudflare.tv/event/fireside-chat-with-brendan-coll-the-creator-of-miniflare/dgMlnqZD
* Cloudflare Developer Platform: https://pages.cloudflare.com/
Abstract
This article investigates the architectural methodology employed to integrate the Vitest testing framework, a Node.js-centric tool, with the Cloudflare Workers environment, which utilizes the custom `workerd` runtime. The analysis focuses on the development of a Custom Pool for process management, the fundamental architectural modifications required within `workerd` to support dynamic code evaluation, and the introduction of advanced developer experience features such as isolated per-test storage and declarative mocking. The integration serves as a significant case study in porting widely adopted testing standards to alternative serverless runtimes.
Custom Runtimes and the Vitest Testing Architecture
The Context of Alternative Runtimes
Cloudflare Workers operate on the `workerd` runtime, a V8-based environment optimized for high concurrency and low latency in a serverless, edge context. Developers interact with this environment locally through the Miniflare simulator and the Wrangler command-line interface. The objective of this methodology was to enable the use of Vitest, a popular Node.js testing library that typically relies on Node.js-specific primitives like worker threads, within the `workerd` runtime.
Methodology: Implementing the Custom Pool
The core innovation for this integration lies in the implementation of a Custom Pool within Vitest. Vitest typically uses pools (e.g., `threads`, `forks`) to manage the parallel execution of tests. The Cloudflare methodology replaced the standard Node.js thread management with a Custom Pool designed to orchestrate communication between the Node.js driver process (which runs Vitest itself) and the dedicated `workerd` process (where the actual Worker code executes).
This Custom Pool utilizes a two-way Inter-Process Communication (IPC) channel, typically established over sockets, to send test code and configuration, and to receive results and logging from the isolated `workerd` environment.
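As a rough illustration of such a channel, newline-delimited JSON is one minimal framing a socket-based IPC link might use. The message shapes below are hypothetical, not the actual Vitest pool protocol:

```javascript
// Sketch of message framing for a two-way IPC channel like the one the
// custom pool establishes between the Vitest driver and workerd. Real
// protocols are richer; newline-delimited JSON is a common minimal choice.
function encodeMessage(msg) {
  return JSON.stringify(msg) + '\n';
}

// Incrementally decode a byte stream into complete messages.
// Returns { messages, rest } where `rest` is a partial trailing frame.
function decodeChunk(buffered, chunk) {
  const data = buffered + chunk;
  const lines = data.split('\n');
  const rest = lines.pop(); // last element is incomplete (or '')
  return { messages: lines.filter(Boolean).map((l) => JSON.parse(l)), rest };
}
```

The key property is that messages arriving split across TCP chunks are reassembled correctly, which is what makes request/response exchange over a socket reliable.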
Architectural Challenges: Dynamic Code Evaluation
A major architectural challenge arose from `workerd`’s initial lack of support for dynamic code evaluation methods such as `eval()` or `new Function()`, which are essential for test runners like Vitest to process and execute test files dynamically.
The solution involved introducing a new primitive into the `workerd` runtime called the Module Inspector. This primitive enables the runtime to accept code dynamically and execute it as a module, thereby satisfying the requirements of the Vitest framework. This necessary modification to the underlying runtime highlights the complexity involved in making non-Node.js environments compatible with the Node.js testing ecosystem.
Enhanced Developer Experience (DX) and Test Isolation
The integration extends beyond mere execution compatibility by introducing features focused on improving testing ergonomics and isolation:
- Isolated Storage: The use of Miniflare enables hermetic, per-test isolation of all storage resources, including KV (Key-Value storage), R2 (Object storage), and D1 (Serverless Database). This is achieved by creating and utilizing a temporary directory for each test run, ensuring that no test can pollute the state of another, which is a fundamental requirement for reliable unit and integration testing.
- Durable Object Test Helpers: A specialized helper function, “get and wait for durable object”, was developed to simplify the testing of Durable Objects (Cloudflare’s stateful serverless primitive). This allows developers to interact with a Durable Object instance directly, treating it effectively as a standard JavaScript class for testing purposes.
- Declarative HTTP Mocking: To facilitate isolated testing of external dependencies, the methodology leverages the `undici` MockAgent for declarative HTTP request mocking. This system intercepts all outgoing `fetch` requests, using `undici`’s `DispatchHandlers` to match and return mocked responses, thereby eliminating reliance on external network access during testing. The `onComplete` handler is utilized to construct and return a standard `Response` object based on the mocked data.
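The declarative-mocking idea can be sketched with a hypothetical rule registry; undici's actual `MockAgent` API differs, so treat this only as an illustration of the match-and-reply flow:

```javascript
// Conceptual sketch of declarative HTTP mocking in the style of undici's
// MockAgent: rules are declared up front, and a dispatcher intercepts
// outgoing requests. The registry and matching here are hypothetical
// simplifications, not undici's API.
const mocks = [];

function intercept(rule, reply) {
  // rule: { origin, path, method }; reply: { status, body }
  mocks.push({ rule, reply });
}

function dispatch(method, url) {
  const { origin, pathname } = new URL(url);
  const hit = mocks.find(
    (m) => m.rule.origin === origin && m.rule.path === pathname && m.rule.method === method
  );
  // With net connections disabled, an unmatched request is an error.
  if (!hit) throw new Error(`network disabled: no mock for ${method} ${url}`);
  return hit.reply; // a real onComplete-style handler would build a Response here
}

intercept({ origin: 'https://api.example.com', method: 'GET', path: '/data' },
          { status: 200, body: { ok: true } });
```

Failing loudly on unmatched requests is the property that makes tests hermetic: nothing can silently reach the network.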
Links
- Lecture Video: Yagiz Nizipli – Node.js Performance
- Lecturer’s Cloudflare Author Profile: https://blog.cloudflare.com/author/brendan-coll/
- Cloudflare Workers SDK GitHub: (Implied project link)
[NodeCongress2024] The Architecture of Asynchronous Code Context and Package Resolution in Node.js
Lecturer: Yagiz Nizipli
Yagiz Nizipli is a respected software architect, entrepreneur, and prominent contributor to the Node.js ecosystem, with a Master’s degree in Computer Science from Fordham University. He is an active member of the Node.js Technical Steering Committee (TSC) and a voting member of the OpenJS Foundation. His primary academic and professional focus is on improving the performance of Node.js, exemplified by his creation of the Ada URL parser, which has been adopted into Node.js core and is considered the fastest WHATWG-compliant URL parser. He has held roles as a Senior Software Engineer and currently works at Sentry, specializing in error tracking and performance.
Relevant Links:
* Professional Website: https://www.yagiz.co/
* GitHub Profile: https://github.com/anonrig
* X/Twitter: https://twitter.com/yagiznizipli
Abstract
This article analyzes the intricate mechanisms of package resolution within the Node.js runtime, comparing the established CommonJS (CJS) module system with the modern ECMAScript Modules (ESM) specification. It explores the performance overhead inherent in the CJS resolution algorithm, which relies on extensive filesystem traversal, and identifies key developer methodologies that can significantly mitigate these bottlenecks. The analysis highlights how adherence to modern standards, such as explicit file extensions and the use of the `package.json` `exports` field, is crucial for building performant and maintainable Node.js applications.
The Dual Modality of Package Resolution in Node.js
Context and Methodology
The Node.js runtime employs distinct, yet interoperable, mechanisms for locating and loading dependencies based on whether the module utilizes the legacy CommonJS (`require`) system or the modern ECMAScript Modules (`import`) system.
The CJS resolution algorithm is complex and contributes to runtime latency. When a package path is provided without an extension, the CJS resolver performs synchronous filesystem operations, sequentially checking for `.js`, `.json`, and `.node` extensions. If the target is a directory, it attempts to resolve the module via entry points specified in a local `package.json` file, or by sequentially checking for `index.js`, `index.json`, etc. Crucially, if the required module is not found locally, the resolver recursively traverses up the directory tree, checking the `node_modules` folder of every ancestor directory until the filesystem root is reached, incurring a significant performance penalty due to high Input/Output (I/O) operations.
In contrast, ESM resolution is strictly specified, mandating that all relative imports include the full file extension. The module system determines whether a file is CJS or ESM by checking the `type` field in the nearest `package.json` file, falling back to CJS if the field is absent or set to `"commonjs"`, and defaulting to ESM if set to `"module"`.
Performance Implications and Optimization Strategies
The primary performance bottleneck in Node.js package loading stems from the synchronous filesystem traversal and redundant extension checks inherent in the legacy CJS resolution process.
To address this, the following optimization methodologies are recommended:
- Mandatory Extension Usage: Developers should always include file extensions in `require()` or `import` statements, even where the CJS specification allows omission. This practice eliminates the need for the CJS resolver to check multiple extensions (`.js`, `.json`, `.node`) sequentially, which directly reduces I/O latency.
- Explicit Module Type Declaration: For projects, particularly one-time scripts without a `package.json` file, the use of explicit extensions like `.mjs` for ESM and `.cjs` for CJS is advised. This provides an immediate, unambiguous hint to the runtime, eliminating the need for slow directory traversal to locate an ancestor `package.json` file.
- Modern Package Manifest Fields: The `exports` field in `package.json` represents a modern innovation that significantly improves resolution performance and security. This field explicitly defines the package’s public entry points, thereby:
  - Accelerating Resolution: The resolver is immediately directed to the correct entry point, bypassing ambiguous path searching.
  - Encapsulation: It restricts external access to internal, private files (deep imports), enforcing a clean package boundary.
  The related `imports` field allows for internal aliasing within a package, facilitating faster resolution of inter-package dependencies.
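A hypothetical `package.json` illustrating the `type`, `exports`, and `imports` fields discussed above (package name and paths are invented for the example):

```json
{
  "name": "my-lib",
  "type": "module",
  "exports": {
    ".": "./dist/index.js",
    "./utils": "./dist/utils.js"
  },
  "imports": {
    "#internal/*": "./src/internal/*.js"
  }
}
```

With this manifest, `import 'my-lib/utils'` resolves directly to `./dist/utils.js`, any deep import such as `my-lib/dist/secret.js` is rejected, and `#internal/...` specifiers resolve without directory traversal.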
While experimental flags like `--experimental-detect-module` exist to allow `.js` files to run without explicit extensions or `package.json` fields, they are cautioned against due to their experimental status and known instability. The adoption of strict resolution practices is therefore the more reliable, long-term strategy for ensuring optimal API and application performance.
Links
- Lecture Video: Road to a fast url parser in Node.js
- Lecturer’s Professional Page: https://www.yagiz.co/
- X/Twitter: https://twitter.com/yagiznizipli
[NodeCongress2024] Deep Dive into Undici: Architecture, Performance, and the Future of HTTP in Node.js
Lecturer: Matteo Collina
Matteo Collina is an internationally recognized expert in Node.js and open-source architecture, serving as the Co-Founder and CTO of Platformatic. He holds a Ph.D. in “Application Platforms for the Internet of Things”. A member of the Node.js Technical Steering Committee (TSC), he is a major contributor to the platform’s core, with a focus on streams, diagnostics, and the HTTP stack. He is the original author of the highly successful, high-performance web framework Fastify and the ultra-fast JSON logger Pino. His open-source modules are downloaded billions of times annually.
- Institutional Profile: Matteo Collina
- Publications: Matteo Collina’s talks, articles, workshops, certificates – GitNation
Abstract
This article presents a technical analysis of Undici, the high-performance, standards-compliant HTTP/1.1 client that serves as the foundation for the native `fetch()` API in Node.js. It explains the motivation for Undici’s creation—addressing critical performance and protocol deficiencies in the legacy Node.js stack. The article details the core architectural components, particularly the Client and Dispatcher abstractions, and explains how Undici achieves superior efficiency through advanced connection management and HTTP/1.1 pipelining. The final analysis explores the methodological implications of Undici’s modularity, including enabling zero-overhead internal testing and powering highly efficient modular monolith and microservice runtimes.
Context: Limitations of the Legacy Node.js HTTP Stack
The legacy Node.js HTTP client suffered from several long-standing limitations, primarily in performance and compliance with modern standards. Specifically, it lacked proper support for HTTP/1.1 pipelining—the ability to send multiple requests sequentially over a single connection without waiting for the first response. Furthermore, its connection pool management was inefficient, often failing to enforce proper limits, leading to potential resource exhaustion and performance bottlenecks. Undici was developed to resolve these architectural deficiencies, becoming the native engine for `fetch()` within Node.js core.
Architecture and Methodology of Undici
Undici’s design is centered around optimizing connection usage and abstracting the request lifecycle:
- The Client and Connection Pools: The core component is the `Client`, which is scoped to a single origin (protocol, hostname, and port). The Client manages a pool of TCP connections and is responsible for implementing the efficiency of the HTTP protocol.
- Pipelining for Performance: Undici explicitly implements HTTP/1.1 pipelining. This methodology permits the efficient use of the network and is essential for maximum HTTP/1.1 performance, particularly when connecting to modern servers that support the feature.
- The Dispatcher Abstraction: Undici utilizes a pluggable `Dispatcher` interface. This abstraction governs the process of taking a request, managing connection logic, and writing the request to a socket. Key Dispatcher implementations include the standard `Client` (for a single origin) and the `Agent` (for multiple origins).
- Connection Management: The pooling mechanism employs a strategy to retire connections gracefully to allow DNS changes and resource rotation, contrasting with legacy systems that often held connections indefinitely.
Consequences and Architectural Innovations
Undici’s modular and abstracted architecture has led to significant innovations beyond core HTTP performance:
- In-Process Request Testing: The Dispatcher model allows for the implementation of a `MockClient`, which completely bypasses the network stack (the `light-my-request` module offers a similar injection pattern). This permits the injection of HTTP requests directly into a running Node.js server within the same process, enabling zero-overhead, high-speed unit and integration testing without opening any actual sockets.
- Internal Mesh Networking: The architecture enables a unique pattern for running multiple microservices within a single process. Using a custom dispatcher (`fastify-undici-dispatcher`), internal HTTP requests can be routed directly to other services (e.g., Fastify instances) running in the same process via an in-memory mesh network, completely bypassing the network layer for inter-service communication. This methodology, employed in the Platformatic runtime, allows developers to transition from a modular monolith to a microservice architecture with minimal code changes, retaining maximum performance for inter-service calls.
Links
- Lecture Video: Deep Dive into Undici – Matteo Collina, Node Congress 2024
- Lecturer’s X/Twitter: https://twitter.com/matteocollina
- Organization: https://platformatic.dev/
Hashtags: #Undici #NodeJS #HTTPClient #Fastify #Microservices #PerformanceEngineering #Platformatic
[NodeCongress2024] Asynchronous Context Tracking in Modern JavaScript Runtimes: From `AsyncLocalStorage` to the `AsyncContext` Standard
Lecturer: James M Snell
James M Snell is a distinguished open-source contributor and software engineer, currently serving as a Principal Engineer on the Cloudflare Workers team. He is a long-standing core contributor to the Node.js Technical Steering Committee (TSC), where his technical leadership has been instrumental in modernizing Node.js’s networking stack, including the implementation of HTTP/2, the WHATWG URL implementation, and the QUIC protocol. Snell is also a key founder and participant in the WinterCG (Web-interoperable Runtimes Community Group), an effort dedicated to aligning standards across disparate JavaScript runtimes.
- Institutional Profile: James M Snell – The Cloudflare Blog
- GitHub: James M Snell jasnell – GitHub
Abstract
This article provides an analytical deep dive into the concept and implementation of Asynchronous Context Tracking in JavaScript runtimes, focusing on Node.js’s existing `AsyncLocalStorage` (ALS) API and the proposed `AsyncContext` standard. It explains the critical problem of preserving request-specific contextual data (e.g., request IDs or transaction details) across asynchronous I/O boundaries in highly concurrent environments. The article details the technical methodology, which relies on Async Hooks and a Context Frame Stack, and discusses the implications of the TC39 standardization effort to create a portable, globally accessible `AsyncContext` API across runtimes like Node.js, Cloudflare Workers, Deno, and Bun.
Context: The Challenge of Asynchronous Execution Flow
In a concurrent, non-blocking I/O model like Node.js, the execution of a single logical operation (e.g., handling one HTTP request) is typically fragmented across multiple asynchronous callbacks. The JavaScript engine often switches between different logical requests while waiting for I/O operations to complete, making it impossible to rely on simple global or thread-local variables for storing request-specific metadata. The challenge is ensuring that contextual information (such as a unique request identifier or security principal) is preserved and accessible to every segment of the logical operation’s flow, regardless of how many other concurrent operations interleave with it.
Methodology: Context Frames and Async Hooks
Asynchronous Context Tracking solves this by establishing a mechanism to associate a context frame (a logical map of key/value pairs) with the execution flow of an asynchronous operation.
- The Role of Async Hooks: The foundation of this system is the Async Hook API (or its internal equivalent in other runtimes). The runtime uses these hooks to trace the lifecycle of asynchronous resources (e.g., timers, network requests). Every time an asynchronous operation is created or executed, the runtime utilizes the hooks to push and pop context frames onto a dedicated stack for that specific asynchronous flow.
- The `run` and `getStore`/`get` Methods: The primary interface for managing context is the `run` method (available on both `AsyncLocalStorage` and `AsyncContext`). When a function is wrapped in `store.run(value, callback)`, it initiates a new context frame containing that `value`, ensuring that all subsequent asynchronous operations originating from the callback have access to the frame. The `getStore` (ALS) or `get` (AsyncContext) method then accesses the value from the current frame on the stack.
- Copy-on-Run Principle: Critically, the `run` method ensures that context is copied and isolated for the new frame. Modifying a context value within a `run` call does not affect the context of the calling function, preventing data leakage or corruption between concurrent requests.
The Evolution to AsyncContext and Interoperability
The `AsyncLocalStorage` API in Node.js, initially residing in `node:async_hooks`, has proven the utility of this model, leading to its adoption in other runtimes. The subsequent step is the standardization of `AsyncContext` by the TC39 committee. The changes between the two APIs are minimal—primarily making the API a global object and renaming `getStore` to `get`—but the implications are profound. The standardization effort ensures that this crucial pattern for context propagation becomes portable and interoperable across the entire JavaScript ecosystem, benefiting Node.js, Cloudflare Workers, Deno, and Bun.
Links
- Lecture Video: Understanding Async Context – James Snell, Node Congress 2024
- Lecturer’s GitHub: https://github.com/jasnell
- Organization Blog: Cloudflare Blog
Hashtags: #AsyncContext #AsyncLocalStorage #NodeJS #JavaScriptRuntimes #AsyncHooks #WinterCG #TC39
[NodeCongress2024] The Supply Chain Security Crisis in Open Source: A Shift from Vulnerabilities to Malicious Attacks
Lecturer: Feross Aboukhadijeh
Feross Aboukhadijeh is an entrepreneur, prolific open-source programmer, and the Founder and CEO of Socket, a developer-first security platform. He is renowned in the JavaScript ecosystem for creating widely adopted open-source projects such as WebTorrent and Standard JS, and for maintaining over 100 npm packages. Academically, he serves as a Lecturer at Stanford University, where he has taught the course CS 253 Web Security. His professional career includes roles at major technology companies like Quora, Facebook, Yahoo, and Intel.
- Institutional Profile: Feross Aboukhadijeh Bio
- Professional Page: Socket Security
Abstract
This article analyzes the escalating threat landscape within the open-source software (OSS) supply chain, focusing specifically on malicious package attacks as opposed to traditional security vulnerabilities. Drawing from a scholarly lecture, it outlines the primary attack vectors, including typosquatting, dependency confusion, and sophisticated account takeover (e.g., the XZ Utils backdoor). The analysis highlights the methodological shortcomings of the existing vulnerability reporting system (CVE/GHSAs) in detecting these novel risks. Finally, it details the emerging innovation of using static analysis, dynamic runtime analysis, and Large Language Models (LLMs) to proactively audit package behavior and safeguard the software supply chain.
Context: The Evolving Open Source Threat Model
The dependency model of modern software development, characterized by the massive reuse of third-party open-source packages, has created a fertile ground for large-scale security breaches. The fundamental issue is the inherent trust placed in thousands of transitive dependencies, which collectively form the software supply chain. The context of security has shifted from managing known vulnerabilities to defending against deliberate malicious injection.
Analysis of Primary Attack Vectors
Attackers employ several cunning strategies to compromise the supply chain:
- Typosquatting and Name Confusion: This low-effort but high-impact method involves publishing a package with a name slightly misspelled from a popular one (e.g., `eslunt` instead of `eslint`). Developers accidentally install the malicious version, which often contains code to exfiltrate environment variables, system information, or credentials.
- Dependency Confusion: This technique exploits automated build tools in private development environments. By publishing a malicious package to a public registry (like npm) with the same name as a private internal dependency, the public package is often inadvertently downloaded and prioritized, leading to unauthorized code execution.
- Account Takeover and Backdoors: This represents the most sophisticated class of attack, exemplified by the XZ Utils incident. Attackers compromise a maintainer’s account (often via phishing) and subtly introduce a backdoor into a critical, widely used project. The XZ Utils attack, in particular, was characterized by years of preparation and extremely complex code obfuscation, which utilized a Trojanized `m4` macro to hide the malicious payload and only execute it under specific conditions (e.g., when run on a Linux distribution with `sshd` installed).
Methodological Innovations in Defense
The traditional security model, reliant on the Common Vulnerabilities and Exposures (CVE) database, is inadequate for detecting these malicious behaviors. A new, analytical methodology is required, focusing on package auditing and behavioral analysis:
- Static Manifest Analysis: Packages can be analyzed for red flags in their manifest file (`package.json`), such as the use of risky `postinstall` scripts, which execute code immediately upon installation and are often used by malware.
- Runtime Behavioral Analysis (Sandboxing): The most effective defense is to run the package installation and observe its behavior in a sandboxed environment, checking for undesirable actions like networking activity or shell command execution.
- LLM-Assisted Analysis: Advanced security tools are now using Large Language Models (LLMs) to reason about the relationship between a package’s declared purpose and its actual code. An LLM can be prompted to assess whether a dependency that claims to be a utility function is legitimately opening network connections, providing a powerful, context-aware method for identifying behavioral anomalies.
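A minimal sketch of the static-manifest check described above, flagging install-time lifecycle scripts; the manifest contents and helper names are hypothetical, and this is nowhere near a complete scanner:

```javascript
// Minimal static manifest check in the spirit described above: flag npm
// lifecycle scripts that execute code at install time. A hypothetical
// sketch, not a production security scanner.
const RISKY_SCRIPTS = ['preinstall', 'install', 'postinstall'];

function auditManifest(manifest) {
  const scripts = manifest.scripts || {};
  return RISKY_SCRIPTS
    .filter((name) => name in scripts)
    .map((name) => ({ script: name, command: scripts[name] }));
}

// Hypothetical manifest: postinstall runs arbitrary code on every install.
const flags = auditManifest({
  name: 'left-padz',
  scripts: { postinstall: 'node ./collect-env.js', test: 'vitest' },
});
```

Real tools combine checks like this with the runtime and LLM-assisted analyses described above, since a manifest alone cannot reveal what the installed code actually does.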
Conclusion and Implications for Robust Software Engineering
The rise of malicious supply chain attacks mandates a paradigm shift in how developers approach dependency management. The existing vulnerability-centric system is too noisy and fails to address the root cause of these sophisticated exploits. For secure and robust software engineering, the definition of “open-source security” must be expanded beyond traditional vulnerability scanning to include maintenance risks (unmaintained or low-quality packages). Proactive defense requires the implementation of continuous, behavioral auditing tools that leverage advanced techniques like LLMs to identify deviations from expected package behavior.
Links
- Lecture Video: The Dark Side of Open Source – Feross Aboukhadijeh, Node Congress 2024
- Lecturer’s X/Twitter: https://x.com/feross
- Lecturer’s LinkedIn: https://www.linkedin.com/in/feross
- Organization: https://socket.dev/
Hashtags: #OpenSourceSecurity #SupplyChainAttack #SoftwareSupplyChain #LLMSecurity #Typosquatting #NodeCongress