Posts Tagged ‘NodeCongress2023’
[NodeCongress2023] The Road to Async Context: Standardizing Contextual Data Tracking in Asynchronous JavaScript
Lecturer: James M Snell
James M Snell is a Systems Engineer on the Cloudflare Workers team. He is a highly influential figure in the JavaScript runtime space, serving as a core contributor to the Node.js project and a member of the Node.js Technical Steering Committee (TSC). His work focuses on driving the adoption of web-compatible standard APIs across diverse JavaScript runtime environments, including Node.js, Deno, and Cloudflare Workers. Before his current role, he spent 16 years working on open technologies and standards at IBM. He is actively involved in the Web-interoperable Runtimes Community Group (WinterCG).
- Institutional Profile/Professional Page: jasnell.me
- X (Twitter): @jasnell
- Organization: Cloudflare Workers
Abstract
This article examines the evolution and standardization efforts surrounding Async Context in JavaScript runtimes, transitioning from Node.js’s AsyncLocalStorage to the proposed AsyncContext API within the TC39 standards committee. The analysis defines the core problem of tracking contextual data across asynchronous boundaries and explains the mechanism by which AsyncContext provides a deterministic, reliable way to manage this state, which is vital for modern diagnostic, security, and feature management tools. The article highlights the methodology of the Web-interoperable Runtimes Community Group (WinterCG) in establishing a portable subset of this API for immediate cross-runtime compatibility.
Context: The Asynchronous State Problem
In a synchronous programming environment, state—such as user identity, transaction ID, or locale settings—is managed within a thread’s local memory (thread-local storage). However, modern JavaScript runtimes operate on a single thread with a shared event loop, where a single incoming request often forks into multiple asynchronous operations (I/O, network calls, timers) that execute non-sequentially. The fundamental challenge is maintaining this contextual information reliably across these asynchronous function boundaries. Traditional solutions, like passing context through function arguments, are impractical and violate encapsulation.
Methodology and Mechanisms
Async Context Tracking
The core concept of Async Context (first implemented as AsyncLocalStorage in Node.js) involves a model that links contextual information to the asynchronous flow of execution.
- Asynchronous Resource Stack: Context tracking is achieved by building a stack of “asynchronous resources”. When an asynchronous operation (e.g., a promise, a timer, or an HTTP request) begins, a new entry is added to this conceptual stack.
- The `run` Method: The primary public API for setting context is the `run` method, which executes a function within a new, dedicated context frame. Any asynchronous work initiated within this function will inherit that context.
The Move to AsyncContext
The standardization effort in TC39 aims to introduce `AsyncContext` as a native language feature, replacing runtime-specific APIs like `AsyncLocalStorage`. The key difference in the future `AsyncContext` model is a move towards immutability within the context frame, specifically deprecating mutable methods like `enter` and `exit` that Node.js historically experimented with in `AsyncResource`. The consensus is to maintain the determinism and integrity of the context by requiring a new frame to be created for any changes, thus making the context “immutable across asynchronous boundaries”.
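For orientation, the TC39 proposal centers on an `AsyncContext.Variable` with the same `run` discipline. This is a sketch of the proposed shape only; the proposal had not shipped in any runtime at the time of the talk, and names may still change:

```javascript
// Proposed TC39 AsyncContext API (subject to change; not yet runnable).
const requestId = new AsyncContext.Variable();

requestId.run('abc-123', async () => {
  await doSomeAsyncWork();
  // There is deliberately no enter()/exit(): the value is fixed for the
  // whole run() frame, so reads are deterministic across awaits.
  console.log(requestId.get()); // 'abc-123'
});
```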
Consequences and Interoperability
The implications of standardized Async Context are significant, primarily for Observability and Cross-Runtime Compatibility:
- Observability (Diagnostics): The context mechanism is critical for application performance monitoring (APM) and diagnostics. It allows instrumentation to reliably attach a request ID, correlation ID, or span data to every operation performed during the lifecycle of a single incoming request, which is essential for distributed tracing.
- Runtime Interoperability: The Web-interoperable Runtimes Community Group (WinterCG) is actively defining a “portable subset” of the `AsyncLocalStorage` API. This subset is designed to be compatible with the forthcoming `AsyncContext` standard and is being implemented across multiple runtimes (Node.js, Cloudflare Workers, Deno, Bun) in advance. This collective effort is paving the way for truly portable JavaScript code, where contextual state management is a reliable, universal primitive.
Conclusion
The standardization of Async Context represents a pivotal development in the maturity of server-side JavaScript. By integrating a reliable mechanism for tracking contextual state across asynchronous flows, the community is solving a long-standing architectural complexity. The collaboration within WinterCG ensures that this critical feature is implemented uniformly, fostering a more robust, standards-compliant, and portable ecosystem for all major JavaScript runtimes.
Relevant links and hashtags
- Lecture Video: The Road to Async Context – James M Snell, Node Congress 2023
- Lecturer Professional Links:
- Professional Page: jasnell.me
- X (Twitter): @jasnell
- Organization: Cloudflare Workers
Hashtags: #AsyncContext #AsyncLocalStorage #TC39 #WinterCG #NodeJS #CloudflareWorkers #JavaScriptStandards #Observability #NodeCongress
[NodeCongress2023] Building a Modular Monolith in Node.js: The Fastify Architecture for Scalable Development
Lecturer: Matteo Collina
Matteo Collina is the Co-Founder and Chief Technology Officer (CTO) of Platformatic.dev, focusing on reducing friction in backend development. He is a prominent figure in the JavaScript and Node.js open-source communities, serving as a member of the Node.js Technical Steering Committee (TSC), concentrating on streams, diagnostics, and HTTP. Dr. Collina is the creator and maintainer of several foundational Node.js projects, including the high-performance web framework Fastify and the super-fast JSON logger Pino. He completed his Ph.D. in 2014, with his thesis focusing on “Application Platforms for the Internet of Things”.
- Institutional Profile/Professional Page: nodeland.dev
- X (Twitter): @matteocollina
- LinkedIn: in/matteocollina
- Organization: Platformatic.dev
Abstract
This article explores the architectural pattern of the modular monolith as a superior alternative to monolithic or premature microservice designs in the Node.js ecosystem, using the Fastify framework as the primary methodology. The analysis highlights how Fastify’s plugin system allows developers to create well-organized, testable, and maintainable applications by enforcing clear separation of concerns and rigorously avoiding the anti-pattern of global Singletons. Furthermore, it details how this architectural choice establishes a robust foundation that facilitates a near-frictionless migration to a microservices architecture when required.
Context: The Challenge of Free-Form Development
The inherent flexibility of the Node.js development model—while powerful—often leads to organizational and structural issues in large codebases, frequently resulting in “big ball of mud” monoliths. A key contributor to this technical debt is the liberal use of Singletons, global objects (like a database connection or configuration store) that hide dependencies and make components non-reusable and difficult to test in isolation.
Methodology: Fastify and the Modular Monolith
The proposed solution is the modular monolith, implemented using Fastify’s architectural features:
- Fastify Plugin System: Fastify’s core design dictates that any component (routes, business logic, configuration) must be encapsulated as a plugin. This system is fundamentally based on a Directed Acyclic Graph (DAG) architecture.
- Encapsulation and Scoping: When a plugin registers a dependency (e.g., a database connection), it gets its own encapsulated scope. Subsequent plugins registered underneath it inherit that dependency, but plugins registered in parallel do not. This rigorous encapsulation is the mechanism that prevents global Singletons, ensuring clear dependency flow and isolation of modules.
- Code-First Configuration: Using tools like `platformatic/service`, the structure of the application, including plugins, databases, and general configuration, can be defined declaratively in a single configuration file.
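The encapsulation rules above can be sketched in a few lines of Fastify. This assumes `fastify` and `fastify-plugin` are installed; the route paths and decorator names are invented for illustration:

```javascript
// Fastify's encapsulation model: plain plugins keep their decorations to
// themselves, while fastify-plugin deliberately breaks encapsulation so a
// shared dependency flows down to later siblings.
const Fastify = require('fastify');
const fp = require('fastify-plugin');

const app = Fastify();

// Wrapped in fp(): `db` is attached to the parent scope, so every plugin
// registered after this one can see it. No global Singleton required.
app.register(fp(async (instance) => {
  instance.decorate('db', { query: async () => ({ rows: [] }) });
}));

// A plain plugin gets its own scope: `usersOnly` stays inside it.
app.register(async (instance) => {
  instance.decorate('usersOnly', true); // invisible to sibling plugins
  instance.get('/users', async () => instance.db.query());
});

// A sibling plugin sees `db` (via fp) but not `usersOnly`.
app.register(async (instance) => {
  instance.get('/health', async () => ({ db: typeof instance.db }));
});
```

Because each plugin declares what it decorates and inherits only from its ancestors, the dependency graph is explicit, which is what makes extracting a plugin into its own service later largely a configuration exercise.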
Analysis of Implications
The modular monolith, when built with this methodology, offers significant consequences for engineering workflow and application lifecycle:
- Improved Testability and Organization: The strict encapsulation ensures that each module (plugin) can be tested in isolation, knowing precisely its inputs and outputs, leading to a codebase that can “stand the test of time”.
- Production Readiness: The architecture easily accommodates production-level features such as automatic OpenAPI documentation and Prometheus metrics integration via simple configuration toggles.
- Seamless Microservice Migration: By maintaining clear separation of concerns and avoiding shared state (i.e., a “share nothing architecture”), the architectural components created as Fastify plugins are already structurally prepared to be extracted into independent microservices. The transition is reduced to primarily a configuration step, validating the modular monolith as an intelligent starting point.
Conclusion
The use of Fastify to architect a modular monolith is a powerful and pragmatic solution for scalable Node.js development. It resolves the core issues of structural degradation and hidden dependencies inherent in free-form Node.js development by leveraging a robust plugin system that enforces encapsulation. This pattern ensures maintainability, simplifies testing, and provides a clear, low-friction pathway for future transition to a fully distributed microservices architecture.
Relevant links and hashtags
- Lecture Video: Building a modular monolith with Fastify – Matteo Collina, Node Congress 2023
- Lecturer Professional Links:
- Professional Page: nodeland.dev
- X (Twitter): @matteocollina
- LinkedIn: in/matteocollina
- Organization: Platformatic.dev
Hashtags: #ModularMonolith #Fastify #NodeJSArchitecture #Microservices #SoftwareDesign #OpenSource #NodeCongress
[NodeCongress2023] Architectural Strategies for Achieving 40 Million Operations Per Second in a Distributed Database
Lecturer: Michael Hirschberg
Michael Hirschberg is a Solutions Engineer with extensive operational experience in distributed database systems, particularly with Couchbase. He is affiliated with Couchbase and has previously served as a Senior System Engineer for eight years at Amadeus. His work focuses on advising companies on optimal database architecture, performance, and scalability, with a notable specialization in handling extremely high-throughput environments. He is based in Erding, Bavaria.
- Institutional Profile/Professional Page: Michael Hirschberg’s talks, articles, workshops, certificates – GitNation
- LinkedIn: in/hirschbergm
- Organization: Couchbase
Abstract
This article investigates the architectural principles and methodological innovations required to sustain database throughput rates of up to 40 million operations per second. The analysis highlights the critical role of in-memory data storage, sophisticated horizontal scaling, and the utilization of “smart clients” to bypass traditional database bottlenecks. Furthermore, the article explores specialized deployments, such as mobile databases designed for an offline-first strategy, and the diverse data access mechanisms necessary for high-performance applications.
Context: The Imperative of Latency and Throughput
In modern distributed computing, especially in applications developed using environments like Node.js, the database often becomes the critical bottleneck to achieving high performance and low latency. The architecture needed to support extremely high operations per second (Ops/S) must diverge significantly from traditional relational or monolithic NoSQL designs.
Methodology: Distributed In-Memory Architecture
The core methodology for achieving extreme throughput centers on an optimized, distributed, in-memory data platform:
- In-Memory Storage: The initial and primary method of storing data is in RAM, which is foundational to the “lightning” speed described for operation execution.
- Sharding and Distribution: The architecture relies on horizontal scaling by sharding the data across multiple nodes. This mechanism distributes the load and ensures that no single machine becomes a point of failure or congestion.
- Smart Clients/SDKs: Crucially, the system utilizes “smart clients” or SDKs that incorporate the sharding logic. These clients calculate the exact node where the data resides and connect directly to that node, bypassing any centralized routing or proxy layer which would otherwise introduce latency.
Analysis of Specialised Data Models and Deployment
Data Structure and Access
The database is built to efficiently digest data in two specific formats: JSON documents and raw binaries.
- Access Mechanisms: Developers can interact with the data using several high-level methods, including:
- SQL for JSON (N1QL): A declarative query language that allows SQL-like querying of JSON data.
- Full Text Search (FTS): Enabling complex, efficient text-based searches across the dataset.
- The lecturer explicitly notes that vector database workloads are not supported.
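To illustrate the N1QL access path, a query of roughly this shape selects fields out of JSON documents with familiar SQL syntax. The bucket name `travel` and the fields used here are hypothetical:

```sql
-- N1QL ("SQL for JSON"): declarative querying over JSON documents.
SELECT t.destination, t.price
FROM travel AS t
WHERE t.type = "booking"
  AND t.price < 500
ORDER BY t.price ASC
LIMIT 10;
```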
Mobile Database Implementation
A complementary lightweight version of the database is designed for mobile devices, web browsers, and edge hardware like Raspberry Pi.
- Offline-First: This design is built to prioritize working offline, storing data locally on the device.
- Synchronization: Data is synchronized with the main database in the cloud or on-premises via a special component. This component ensures that the mobile device receives only the data it is authorized and supposed to access, maintaining security and data integrity. Mobile databases can also communicate peer-to-peer.
Conclusion
The capability to handle 40 million Ops/S is achieved through a multi-faceted architectural approach that leverages in-memory data, aggressive horizontal sharding, and the crucial innovation of smart clients that eliminate centralized bottlenecks. This methodology minimizes network hops and maximizes read/write performance. Furthermore, specialized components for mobile and edge deployment extend the high-performance model to offline and low-bandwidth environments, confirming the system’s relevance for globally distributed, modern application needs.
Relevant links and hashtags
- Lecture Video: The Database Magic Behind 40MIO Ops/S – Michael Hirschberg, Node Congress 2023
- Lecturer Professional Links:
- LinkedIn: in/hirschbergm
- Organization: Couchbase
Hashtags: #NoSQL #DatabaseArchitecture #HighPerformance #40MIOOpsS #Couchbase #DistributedSystems #NodeCongress
[NodeCongress2023] Deconstructing the JavaScript Runtime: V8, Libuv, and the Mechanics of Performance
Lecturer: Erick Wendel
Erick Wendel is a highly active member of the JavaScript and Node.js open-source community, serving as a Node.js core committer and international keynote speaker. He is recognized as a professional educator and holds multiple prestigious awards, including Google Developer Expert (GDE), Microsoft Most Valuable Professional (MVP), and GitHub Star. Mr. Wendel has made significant contributions to the Node.js core, specifically in modules like the native test runner and child process, and is known for his work in recreating the Node.js runtime from scratch as an educational exercise. He is the founder of EW Academy, a platform dedicated to advanced JavaScript education, having trained over 100,000 developers globally.
- Institutional Profile/Professional Page: Erick Wendel
- X (Twitter): @erickwendel_
- LinkedIn: in/erickwendel
Abstract
This article provides a scholarly analysis of the fundamental architectural components of modern JavaScript runtimes, such as Node.js, Deno, and Bun, by examining the core technologies that enable their high performance. The focus is on the essential interaction between the V8 JavaScript engine, the Libuv library for asynchronous I/O, and the C++ bindings (Core Modules) that interface with the operating system. The study highlights the methodological challenges involved in bridging JavaScript’s garbage-collected memory model with C++’s direct memory management, which is key to understanding runtime optimization.
Context: The Runtime Landscape
The recent proliferation of new JavaScript runtimes (Bun, Deno) underscores the need for deep optimization and architectural choices that address modern system demands. These runtimes are not monolithic; they are complex systems composed of distinct, specialized components. Understanding these components is crucial to grasping the functional differences and performance advantages between the various environments.
Methodology and Core Components
The construction of a JavaScript runtime hinges on three primary architectural pillars:
- V8 JavaScript Engine: V8 is the foundation, responsible for parsing, compiling (Just-In-Time), and executing JavaScript code. Its primary functions include managing the garbage-collected memory heap and the microtask queue (the event loop itself is supplied by Libuv, below). It provides an interface (an `isolate`) that must be used to communicate with the JavaScript environment.
- Libuv (Library for Unix/Windows V8): Libuv is a cross-platform asynchronous I/O library that provides the core threading, event loop management, and non-blocking networking capabilities. It abstracts the differences in operating system APIs (e.g., Windows vs. Linux) to ensure a consistent interface for I/O operations.
- Core Modules/Bindings (The Bridge): These are the C++ functions that act as the essential link between the JavaScript environment (managed by V8) and the operating system’s capabilities (often accessed via Libuv). They expose low-level operating system features (like file system access or network sockets) to JavaScript.
Analysis: The Interoperability Challenge
The most critical challenge in runtime design is managing the interaction between V8 and the C++ code. Since V8 manages its own memory (garbage collection) and C++ uses direct memory allocation, a robust, safe interface is required. Any attempt to call a C++ function from JavaScript necessitates crossing this boundary, which requires obtaining a V8 isolate instance. Furthermore, to handle asynchronous operations, C++ functions must receive a callback, execute their blocking I/O on the Libuv thread pool, and then signal V8 to process the result on the main event loop.
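The V8/Libuv split is observable from plain JavaScript: promise callbacks live on V8’s microtask queue and drain completely before Libuv’s timer phase gets control back. A minimal demonstration:

```javascript
// Ordering across the V8/libuv boundary: synchronous code first, then
// V8 microtasks (promises), then libuv's timers phase.
const order = [];

setTimeout(() => order.push('timer (libuv phase)'), 0);
Promise.resolve().then(() => order.push('microtask (V8)'));
order.push('sync');

setTimeout(() => console.log(order.join(' -> ')), 5);
// sync -> microtask (V8) -> timer (libuv phase)
```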
Consequences and Implications
The lecture demonstrates that core differences between runtimes often stem from the specific C++ implementations and optimization choices made within the Core Modules/Bindings and how they interact with underlying system calls and Libuv. While V8 and Libuv are largely standardized components, the performance gains and unique features of runtimes like Bun (which uses Zig and JavaScriptCore instead of V8) are realized by optimizing the “glue” layer. The current environment suggests collaboration rather than outright competition, with core contributors and key figures moving between and contributing to the various runtime organizations (Node.js, Deno), fostering a shared advancement of the ecosystem.
Relevant links and hashtags
- Lecture Video: Bun, Deno, Node.js? Recreating a JavaScript runtime from Scratch – Erick Wendel, Node Congress 2023
- Lecturer Professional Links:
- Professional Page: Erick Wendel
- X (Twitter): @erickwendel_
- LinkedIn: in/erickwendel
- Organization: EW Academy
Hashtags: #JSRuntime #V8Engine #Libuv #NodeJS #Deno #Bun #SystemsProgramming #CoreModules #NodeCongress
[NodeCongress2023] Evolving the JavaScript Backend – Architectural Shifts in Deno 2.0
Lecturer: Ryan Dahl
Ryan Dahl is an American software engineer and entrepreneur, widely recognized as the creator of the Node.js JavaScript runtime, which he released in 2009. Following his initial work on Node.js, he later created the Deno JavaScript/TypeScript runtime to address what he perceived as fundamental architectural issues in Node.js. Mr. Dahl studied mathematics at the University of California, San Diego (UCSD) and pursued algebraic topology at the University of Rochester for graduate school before pivoting to software engineering, which he found more applicable to real life. He currently serves as the Co-Founder and Chief Executive Officer of Deno Land Inc. His work emphasizes moving away from centralized module systems and toward conservative, browser-like security models.
- Institutional Profile/Professional Page: Ryan Dahl – Wikipedia
- Organization: Deno Land Inc.
Abstract
This article analyzes the strategic architectural and functional changes introduced in Deno 2.0, interpreting them as a move toward enhanced interoperability with the existing Node.js ecosystem and a strong commitment to cloud-native development paradigms. The analysis focuses on key innovations, including dependency management enhancements (package.json auto-discovery and bare specifiers), the introduction of built-in distributed primitives (Deno.KV), and the philosophical shift from optimizing local servers to building optimal global services by restricting programs to distributed cloud primitives.
Context: The Evolution of JavaScript Server Runtimes
The initial philosophy behind the Node.js runtime was to restrict I/O primitives to asynchronous methods, enabling developers to build optimal local servers. However, the proliferation of cloud computing and serverless architectures necessitated a rethinking of runtime design. Deno 2.0 is positioned as an expanded version of this initial philosophy, focusing on restricting programs to distributed Cloud Primitives to facilitate the development of optimal Global Services.
Analysis of Architectural Innovations
Interoperability and Dependency Management
A central focus of Deno 2.0 is improving backwards compatibility and reducing friction for developers migrating from or using npm packages.
- `package.json` Auto-Discovery: Deno 2.0 introduces automatic detection and configuration based on an existing `package.json` file, significantly streamlining the process of using npm packages.
- Bare Specifiers: The update adds support for bare specifiers (e.g., `import { serve } from 'std/http'`), enabling modules to be imported without requiring a fully qualified URL, which improves code readability and familiarity for many developers.
- Import Maps: The use of import maps is highlighted as a solution to address critical issues in the JavaScript ecosystem, specifically the pervasive problem of duplicate dependencies and the issue of disappearing or unmaintained dependencies.
- `deno:` Specifiers and Registry: Built-in support for `deno:` specifiers on the `deno.land/x` registry provides a recommended and streamlined path for publishing reusable code, promoting internal consistency.
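An import map wired through Deno’s config file is what makes a bare specifier like `std/http` resolve. A sketch of a `deno.jsonc` entry, with an illustrative std-library version pin:

```jsonc
// deno.jsonc — maps the bare specifier to a fully qualified URL once,
// so every module in the project imports the same pinned version.
{
  "imports": {
    "std/http": "https://deno.land/std@0.177.0/http/server.ts"
  }
}
```

Centralizing resolution this way is also what addresses the duplicate-dependency problem: two modules importing `std/http` cannot silently pull in two different copies.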
The Shift to Distributed Primitives
The most significant philosophical shift in Deno 2.0 is the direct integration of distributed systems primitives into the runtime. This moves beyond the I/O layer (like Node.js) to address the needs of modern globally distributed applications.
- `Deno.KV` (Key-Value Store): This innovation introduces a built-in, globally distributed key-value store. It is designed to be a durable, globally replicated, and transactionally correct database, providing developers with a default persistence layer that is natively integrated and prepared to scale. The concept aims to force optimization by offering a scalable default for state management.
- Other Cloud Primitives: Other features are under development to support global services, including persistent caches, background workers, and object storage.
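A sketch of the `Deno.KV` call shape (this requires the Deno runtime, and at the time of the talk the API was shipping behind an unstable flag; the keys and values below are illustrative):

```javascript
// Deno KV: hierarchical array keys, durable writes, and optimistic
// transactions via versionstamp checks.
const kv = await Deno.openKv();

await kv.set(["users", "alice"], { plan: "pro" });

const entry = await kv.get(["users", "alice"]);
console.log(entry.value); // -> { plan: "pro" }

// Transactionally correct update: commit only if the entry is unchanged
// since we read it (its versionstamp still matches).
await kv.atomic()
  .check(entry)
  .set(["users", "alice"], { plan: "team" })
  .commit();
```

The point of the design is that this persistence layer is a runtime default: no driver, connection string, or external provisioning is needed before the first write.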
Consequences and Implications
The Deno 2.0 feature set represents a concerted effort to optimize JavaScript for the serverless and edge computing landscape. By including distributed primitives like Deno.KV, Deno is reducing the boilerplate and external configuration traditionally required to build a scalable, production-ready backend. The expanded backward compatibility features indicate a pragmatic approach to ecosystem adoption, balancing Deno’s core security and design principles with the practical necessity of using existing npm modules.
This new model reflects an emerging computing abstraction, articulated by the analogy: “bash is to JavaScript as ELF is to Wasm”. This suggests that JavaScript, running in modern, standards-compliant runtimes, is moving into a “post-Unix future,” becoming the universal scripting and service layer that replaces traditional shell scripting and native binaries in the cloud environment.
Conclusion
Deno 2.0’s innovations solidify its role as a forward-thinking JavaScript runtime designed explicitly for the era of global, distributed services. The focus on integrated cloud primitives and improved interoperability addresses key challenges in modern backend development, pushing the JavaScript ecosystem toward more opinionated, secure, and globally performant architectures. The movement, which includes collaboration in standards bodies like the Web-interoperable Runtimes Community Group (WinterCG), indicates a broad industry consensus on the need for a unified, standards-based approach to server-side JavaScript.
Relevant links and hashtags
- Lecture Video: Deno 2.0 – Ryan Dahl, Node Congress 2023
- Lecturer Professional Links:
- Organization: Deno Land Inc.
- Professional Profile: Ryan Dahl – Wikipedia
Hashtags: #Deno20 #JavaScriptRuntime #CloudNative #GlobalServices #DenoKV #WebInteroperability #NodeCongress