[KotlinConf’23] Coroutines and Loom: A Deep Dive into Goals and Implementations
The advent of OpenJDK’s Project Loom and its virtual threads has sparked considerable discussion within the Java and Kotlin communities, particularly regarding its relationship with Kotlin Coroutines. Roman Elizarov, Project Lead for Kotlin at JetBrains, addressed this topic head-on at KotlinConf’23 in his talk, “Coroutines and Loom behind the scenes”. His goal was not just to answer whether Loom would make coroutines obsolete (the answer being a clear “no”), but to delve into the distinct design goals, implementations, and trade-offs of each, clarifying how they can coexist and even complement each other. Background on Project Loom itself is available through OpenJDK resources and articles such as those on Baeldung.
Roman began by noting that Project Loom, introducing virtual threads to the JVM, was nearing stability, targeted for Java 21 (late 2023). He emphasized that understanding the goals behind each technology is crucial, as these goals heavily influence their design and optimal use cases.
Project Loom: Simplifying Server-Side Concurrency
Project Loom’s primary design goal, as Roman Elizarov explained, is to preserve the thread-per-request programming style prevalent in many existing Java server-side applications, while dramatically increasing scalability. Traditionally, assigning one platform thread per incoming request becomes a bottleneck due to the high cost of platform threads. Virtual threads aim to solve this by providing lightweight, JVM-managed threads that can run existing synchronous, blocking Java code with minimal or no changes. This allows legacy applications to scale much better without requiring a rewrite to asynchronous or reactive patterns.
Loom achieves this by “unmounting” a virtual thread from its carrier (platform) thread when it encounters a blocking operation (like I/O) that has been integrated with Loom. The carrier thread is then free to run other virtual threads. When the blocking operation completes, the virtual thread is “remounted” on a carrier thread to continue execution. This mechanism is largely transparent to the application code. However, Roman pointed out a potential pitfall: if blocking operations occur within synchronized blocks or native JNI calls that haven’t been adapted for Loom, the carrier thread can get “pinned,” preventing unmounting and potentially negating some of Loom’s benefits in those specific scenarios.
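To make the mechanics concrete, here is a minimal sketch, assuming a JDK 21 runtime, of blocking code running on virtual threads from Kotlin (the task count and sleep stand in for real I/O):

```kotlin
import java.util.concurrent.Executors

fun main() {
    // One virtual thread per task; a blocking sleep() unmounts the
    // virtual thread instead of holding its carrier platform thread.
    Executors.newVirtualThreadPerTaskExecutor().use { executor ->
        repeat(10_000) { i ->
            executor.execute {
                Thread.sleep(100) // stands in for blocking I/O
                if (i % 1_000 == 0) println("task $i on ${Thread.currentThread()}")
            }
        }
    } // use {} closes the executor, waiting for submitted tasks
}
```

Spawning 10,000 platform threads this way would strain most systems; with virtual threads it is routine.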
Kotlin Coroutines: Fine-Grained, Structured Concurrency
In contrast, Kotlin Coroutines were designed with different primary goals:
- Enable fine-grained concurrency: Allowing developers to easily launch tens of thousands or even millions of concurrent tasks without performance issues, suitable for highly concurrent applications like UI event handling or complex data processing pipelines.
- Provide structured concurrency: Ensuring that the lifecycle of coroutines is managed within scopes, simplifying cancellation and preventing resource leaks. This is particularly critical for UI applications where tasks need to be cancelled when UI components are destroyed.
Kotlin Coroutines achieve this through suspendable functions (suspend fun) and a compiler-based transformation. When a coroutine suspends, it doesn’t block its underlying thread; instead, its state is saved, and the thread is released to do other work. This is fundamentally different from Loom’s approach, which aims to make blocking calls non-problematic for virtual threads. Coroutines explicitly distinguish between suspending and non-suspending code, a design choice that enables features like structured concurrency but requires a different programming model than traditional blocking code.
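A minimal sketch of that model in Kotlin (the delays stand in for real work):

```kotlin
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

// A suspending function: it pauses without blocking its thread.
suspend fun fetchUser(id: Int): String {
    delay(100) // stands in for a network call
    return "user-$id"
}

fun main() = runBlocking {
    // Structured concurrency: the scope returns only after every child
    // coroutine completes, and cancelling it cancels all children.
    coroutineScope {
        repeat(10_000) { id ->
            launch { println(fetchUser(id)) }
        }
    }
    println("all done")
}
```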
Comparing Trade-offs and Performance
Roman Elizarov presented a detailed comparison:
- Programming Model: Loom aims for compatibility with existing blocking code. Coroutines introduce a new model with suspend functions, which is more verbose for simple blocking calls but enables powerful features like structured concurrency and explicit cancellation. Forcing blocking calls into a coroutine world requires wrappers like withContext(Dispatchers.IO), while Loom handles blocking calls transparently on virtual threads.
- Cost of Operations:
- Launching: Launching a coroutine is significantly cheaper than starting even a virtual thread, as coroutines are lighter-weight objects.
- Yielding/Suspending: Suspending a coroutine is generally cheaper than a virtual thread yielding (unmounting/remounting), due to compiler optimizations in Kotlin for state machine management. Roman showed benchmarks indicating lower memory allocation and faster execution for coroutine suspension compared to virtual thread context switching in preview builds of Loom.
- Error Handling & Cancellation: Coroutines have built-in, robust support for structured cancellation. Loom’s virtual threads rely on Java’s traditional thread interruption mechanisms, which are less integrated into the programming model for cooperative cancellation.
- Debugging: Loom’s virtual threads offer a debugging experience very similar to traditional threads, with understandable stack traces. Coroutines, due to their state-machine nature, can sometimes have more complex stack traces, though IDE support has improved this.
Coexistence and Future Synergies
Roman Elizarov concluded that Loom and coroutines are designed for different primary use cases and will coexist effectively.
- Loom excels for existing Java applications using the thread-per-request model that need to scale without major rewrites.
- Coroutines excel for applications requiring fine-grained, highly concurrent operations, structured concurrency, and explicit cancellation management, often seen in UI applications or complex backend services with many interacting components.
He also highlighted a potential future synergy: Kotlin Coroutines could leverage Loom’s virtual threads for their Dispatchers.IO (or a similar dispatcher) when running on newer JVMs. This could allow blocking calls within coroutines (those wrapped in withContext(Dispatchers.IO)) to benefit from Loom’s efficient handling of blocking operations, potentially eliminating the need for a large, separate thread pool for I/O-bound tasks in coroutines. This would combine the benefits of both: coroutines for structured, fine-grained concurrency and Loom for efficient handling of any unavoidable blocking calls.
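The public kotlinx.coroutines API already allows this experiment by hand. A speculative sketch, assuming a JDK 21 runtime (this is not an official Dispatchers.IO implementation):

```kotlin
import java.util.concurrent.Executors
import kotlinx.coroutines.asCoroutineDispatcher
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.withContext

fun main() = runBlocking {
    // A dispatcher backed by virtual threads: each blocking call parks
    // a cheap virtual thread rather than a pooled platform thread.
    val loomIO = Executors.newVirtualThreadPerTaskExecutor().asCoroutineDispatcher()
    try {
        val data = withContext(loomIO) {
            Thread.sleep(50) // a legacy blocking call, now Loom-friendly
            "payload"
        }
        println("got $data")
    } finally {
        loomIO.close() // release the underlying executor
    }
}
```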
Hashtags: #Kotlin #Coroutines #ProjectLoom #Java #JVM #Concurrency #AsynchronousProgramming #RomanElizarov #JetBrains
[KotlinConf’23] The Future of Kotlin is Bright and Multiplatform
KotlinConf’23 kicked off with an energizing keynote, marking a highly anticipated return to an in-person format in Amsterdam. Hosted by Hadi Hariri from JetBrains, the session brought together key figures from both JetBrains and Google, including Roman Elizarov, Svetlana Isakova, Egor Tolstoy, and Grace Kloba (VP of Engineering for Android Developer Experience at Google), to share exciting updates and future directions for the Kotlin language and its ecosystem. The conference also boasted a global reach with KotlinConf Global events held across 41 countries. For those unable to attend, the key announcements from the keynote are also available in a comprehensive blog post on the official Kotlin blog.
The keynote began by celebrating Kotlin’s impressive growth, with statistics underscoring its widespread adoption: in Android development it stands as the most popular language, used in over 95% of the top 1000 Android applications. Significant emphasis was placed on the forthcoming Kotlin 2.0, centered on the new K2 compiler, which is nearing completion and promises major performance improvements, enhanced stability, and a robust foundation for the language’s future evolution. Additionally, the IntelliJ IDEA plugin will adopt the K2 frontend, ensuring alignment with IntelliJ releases and a consistent developer experience.
The Evolution of Kotlin: K2 Compiler and Language Features
The K2 compiler was a central theme of the keynote, signifying a major milestone for Kotlin. This re-architected compiler frontend, which also powers the IDE, is designed to be faster, more stable, and to enable quicker development of new language features and tooling capabilities. Kotlin 2.0, built upon the K2 compiler, is set to bring these profound benefits to all Kotlin developers, improving both compiler performance and IDE responsiveness.
Beyond the immediate horizon of Kotlin 2.0, the speakers provided a glimpse into potential future language features that are currently under consideration. These exciting prospects included:
Prospective Language Enhancements
- Static Extensions: This feature aims to allow static resolution of extension functions, which could potentially improve performance and code clarity.
- Collection Literals: The introduction of a more concise syntax for creating collections, such as using square brackets for lists, with efficient underlying implementations, is on the cards.
- Name-Based Destructuring: Offering a more flexible way to destructure objects based on property names rather than simply their positional order.
- Context Receivers: A powerful capability designed to provide contextual information to functions in a more implicit and structured manner. This feature, however, is being approached with careful consideration to ensure it aligns well with Kotlin’s core principles and doesn’t introduce undue complexity. (A tentative sketch of the experimental syntax follows this list.)
- Explicit Fields: This would provide developers with more direct control over the backing fields of properties, offering greater flexibility in certain scenarios.
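As a taste of one of these directions, here is a minimal sketch of context receivers as they existed experimentally at the time (Kotlin 1.6.20+, behind the -Xcontext-receivers compiler flag); the final design may well differ:

```kotlin
interface Logger {
    fun info(message: String)
}

// `greet` requires a Logger in the caller's context, without taking it
// as an explicit parameter or being an extension on it.
context(Logger)
fun greet(name: String) {
    info("Hello, $name") // resolves against the Logger context receiver
}

object ConsoleLogger : Logger {
    override fun info(message: String) = println("INFO: $message")
}

fun main() {
    with(ConsoleLogger) { // brings a Logger into context
        greet("KotlinConf")
    }
}
```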
The JetBrains team underscored a cautious and deliberate approach to language evolution, ensuring that any new features are meticulously designed and maintainable within the Kotlin ecosystem. Compiler plugins were also highlighted as a powerful mechanism for extending Kotlin’s capabilities without altering its core.
Kotlin in the Ecosystem: Google’s Investment and Multiplatform Growth
Grace Kloba from Google took the stage to reiterate Google’s strong and unwavering commitment to Kotlin. She shared insights into Google’s substantial investments in the Kotlin ecosystem, including the development of Kotlin Symbol Processing (KSP) and the continuous emphasis on Kotlin as the default choice for Android development. Google officially championed Kotlin for Android development as early as 2017, a pivotal moment for the language’s widespread adoption. Furthermore, the Kotlin DSL is now the default for Gradle build scripts within Android Studio, significantly enhancing the developer experience with features such as semantic syntax highlighting and advanced code completion. Google also actively contributes to the Kotlin Foundation and encourages community participation through initiatives like the Kotlin Foundation Grants Program, which specifically focuses on supporting multiplatform libraries and frameworks.
Kotlin Multiplatform (KMP) emerged as another major highlight of the keynote, emphasizing its increasing maturity and widespread adoption. The overarching vision for KMP is to empower developers to share code across a diverse range of platforms—Android, iOS, desktop, web, and server-side—while retaining the crucial ability to write platform-specific code when necessary for optimal integration and performance. The keynote celebrated the burgeoning number of multiplatform libraries and tools, including KMM Bridge, which are simplifying KMP development workflows. The future of KMP appears exceptionally promising, with ongoing efforts to further enhance the developer experience and expand its capabilities across even more platforms.
Compose Multiplatform and Emerging Technologies
The keynote also featured significant advancements in Compose Multiplatform, JetBrains’ declarative UI framework for building cross-platform user interfaces. A particularly impactful announcement was the alpha release of Compose Multiplatform for iOS. This groundbreaking development allows developers to write their UI code once in Kotlin and deploy it seamlessly across Android and iOS, and even to desktop and web targets. This opens up entirely new avenues for code sharing and promises accelerated development cycles for mobile applications, breaking down traditional platform barriers.
Finally, the JetBrains team touched upon Kotlin’s expansion into truly emerging technologies, such as WebAssembly (Wasm). JetBrains is actively developing a new compiler backend for Kotlin specifically targeting WebAssembly, leveraging the WebAssembly garbage collection (WasmGC) proposal. This ambitious effort aims to deliver high-performance Kotlin code directly within the browser environment. Experiments involving the execution of Compose applications within the browser using WebAssembly were also mentioned, hinting at a future where Kotlin could offer a unified development experience across an even broader spectrum of platforms. The keynote concluded with an enthusiastic invitation to the community to delve deeper into these subjects during the conference sessions and to continue contributing to Kotlin’s vibrant and ever-expanding ecosystem.
Links:
- Blog Post on KotlinConf’23 Keynote Highlights
- JetBrains Website
- JetBrains on LinkedIn
- Grace Kloba – KotlinConf’23 Speaker Profile
Hashtags: #Keynote #JetBrains #Google #K2Compiler #Kotlin2 #Multiplatform #ComposeMultiplatform #WebAssembly
[DevoxxBE2023] A Deep Dive into Advanced TypeScript: A Live Coding Expedition by Christian Wörz
Christian Wörz, a seasoned full-stack engineer and freelancer, captivated the Devoxx Belgium 2023 audience with a hands-on exploration of advanced TypeScript features. Through live coding, Christian illuminated powerful yet underutilized constructs like mapped types, template literals, conditional types, and recursion, demonstrating their practical applications in real-world scenarios. His session, blending technical depth with accessibility, empowered developers to leverage TypeScript’s full potential for creating robust, type-safe codebases.
Mastering Mapped Types and Template Literals
Christian kicked off with mapped types, showcasing their ability to dynamically generate type-safe structures. He defined an Events type with add and remove properties and created an OnEvent type to prefix event keys with “on” (e.g., onAdd, onRemove). Using the keyof operator and template literal syntax, he ensured that OnEvent mirrored Events, enforcing consistency. For instance, adding a move event to Events required updating OnEvent to onMove, providing compile-time safety. He enhanced this with TypeScript’s intrinsic Capitalize string type to capitalize property names, ensuring precise naming conventions.
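A minimal reconstruction of that mapped type (event names are illustrative):

```typescript
type Events = {
  add: (item: string) => void;
  remove: (item: string) => void;
};

// Mapped type with key remapping: each key of Events is renamed via a
// template literal plus the intrinsic Capitalize type (add -> onAdd).
type OnEvent = {
  [K in keyof Events as `on${Capitalize<string & K>}`]: Events[K];
};

const handlers: OnEvent = {
  onAdd: (item) => console.log(`added ${item}`),
  onRemove: (item) => console.log(`removed ${item}`),
};
```

Adding a move member to Events now forces handlers to declare onMove, exactly the compile-time safety described above.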
Template literals were explored through a chessboard example, where Christian generated all possible positions (e.g., A1, B2) by combining letter and number types. He extended this to a CSS validator, defining a GapCSS type for properties like margin-left and padding-top, paired with valid CSS sizes (e.g., rem, px). This approach narrowed string types to enforce specific formats, preventing errors like invalid CSS values at compile time.
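Roughly what those template-literal types look like (unions trimmed to the essentials):

```typescript
// All 64 chessboard squares as a union of string literals.
type File = "A" | "B" | "C" | "D" | "E" | "F" | "G" | "H";
type Rank = 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8;
type Square = `${File}${Rank}`; // "A1" | "A2" | ... | "H8"

// Narrowed CSS: gap-like properties paired only with valid size units.
type GapProperty = `${"margin" | "padding"}-${"top" | "right" | "bottom" | "left"}`;
type Size = `${number}${"px" | "rem" | "em" | "%"}`;
type GapCSS = Partial<Record<GapProperty, Size>>;

const gap: GapCSS = {
  "margin-left": "1rem",
  "padding-top": "4px",
  // "margin-left": "1meter" would be rejected at compile time
};
```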
Leveraging Conditional Types and Never for Safety
Christian delved into conditional types and the never type to enhance compile-time safety. He introduced a NoEmptyString type that prevents empty strings from being assigned, using a conditional check to return never for invalid inputs. Applying this to a failOnEmptyString function ensured that only non-empty strings were accepted, catching errors before runtime. He also demonstrated exhaustive switch cases, using never to enforce complete coverage. For a getCountryForLocation function, assigning an unhandled London case to a never-typed variable triggered a compile-time error, ensuring all cases were addressed.
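The two techniques in compact form (type and function names follow the talk; bodies are illustrative):

```typescript
// An empty string maps to `never`, so no argument can satisfy it.
type NoEmptyString<S extends string> = S extends "" ? never : S;

function failOnEmptyString<S extends string>(value: NoEmptyString<S>): void {
  console.log(value);
}

failOnEmptyString("hello");
// failOnEmptyString(""); // compile-time error: "" is not assignable to never

// Exhaustive switch: the `never` assignment only type-checks once
// every member of the union is handled above it.
type City = "Paris" | "Berlin" | "London";

function getCountryForLocation(city: City): string {
  switch (city) {
    case "Paris": return "France";
    case "Berlin": return "Germany";
    case "London": return "United Kingdom";
    default: {
      const unhandled: never = city; // errors if a case goes missing
      throw new Error(`Unhandled city: ${unhandled}`);
    }
  }
}
```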
Unraveling Types with Infer and Recursion
The infer keyword was a highlight, enabling Christian to extract type information dynamically. He created a custom MyReturnType to mimic TypeScript’s ReturnType, inferring a function’s return type (e.g., number for an addition function). This was particularly useful for complex type manipulations. Recursion was showcased through an UnnestArray type, unwrapping deeply nested arrays to access their inner types (e.g., extracting string from string[][][]). He also built a recursive Tuple type, generating fixed-length arrays with specified element types, such as a three-element RGB tuple, with an accumulator to collect elements during recursion.
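Compact versions of the three constructs (names per the talk):

```typescript
// infer: extract the return type from any function type.
type MyReturnType<F> = F extends (...args: any[]) => infer R ? R : never;

const add = (a: number, b: number) => a + b;
type Sum = MyReturnType<typeof add>; // number

// Recursion: unwrap arbitrarily nested arrays down to the element type.
type UnnestArray<T> = T extends Array<infer Inner> ? UnnestArray<Inner> : T;
type Element = UnnestArray<string[][][]>; // string

// Recursion with an accumulator: a fixed-length tuple type.
type Tuple<T, N extends number, Acc extends T[] = []> =
  Acc["length"] extends N ? Acc : Tuple<T, N, [...Acc, T]>;

type RGB = Tuple<number, 3>; // [number, number, number]
const red: RGB = [255, 0, 0];
```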
Branded Types for Enhanced Type Safety
Christian concluded with branded types, a technique to distinguish specific string formats, like emails, without runtime overhead. By defining an Email type as a string intersected with an object containing a _brand property, he ensured that only validated strings could be assigned. A type guard function, isValidEmail, checked for an @ symbol, allowing safe usage in functions like sendEmail. This approach maintained the simplicity of primitive types while enforcing strict validation, applicable to formats like UUIDs or custom date strings.
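A small sketch of the branding technique (the validation is deliberately simplistic):

```typescript
// The brand exists only in the type system; at runtime an Email is a
// plain string, with zero overhead.
type Email = string & { readonly _brand: "Email" };

// Type guard: the single gateway through which strings become Emails.
function isValidEmail(value: string): value is Email {
  return value.includes("@");
}

function sendEmail(to: Email, body: string): void {
  console.log(`Sending to ${to}: ${body}`);
}

const input = "dev@example.com";
if (isValidEmail(input)) {
  sendEmail(input, "Hello from Devoxx!"); // OK: narrowed to Email
}
// sendEmail("not-an-email", "..."); // rejected at compile time
```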
Secure Development with Docker: DockerCon 2023 Workshop
The DockerCon 2023 workshop, “Secure Development with Docker,” delivered by Yves Brissaud, James Carnegie, David Dooling, and Christian Dupuis from Docker, offered a comprehensive exploration of securing the software supply chain. Spanning over three hours, this session addressed the tension between developers’ need for speed and security teams’ focus on risk mitigation. Participants engaged in hands-on labs to identify and remediate common vulnerabilities, leverage Docker Scout for actionable insights, and implement provenance, software bills of materials (SBOMs), and policies. The workshop emphasized Docker’s developer-centric approach to security, empowering attendees to enhance their workflows without compromising safety. By integrating Docker Scout, attendees learned to secure every stage of the software development lifecycle, from code to deployment.
Tackling Common Vulnerabilities and Exposures (CVEs)
The workshop began with a focus on Common Vulnerabilities and Exposures (CVEs), a critical starting point for securing software. David Dooling introduced CVEs as publicly disclosed cybersecurity vulnerabilities in operating systems, dependencies like OpenSSL, or container images. Participants used Docker Desktop 4.24 and the Docker Scout CLI to scan images based on Alpine 3.14, identifying vulnerabilities in base images and added layers, such as npm packages (e.g., Express and its transitive dependency Qs). Hands-on exercises guided attendees to update base images to Alpine 3.18, using Docker Scout’s recommendations to select versions with fewer vulnerabilities. The CLI’s cves command and Desktop’s vulnerability view provided detailed insights, including severity filters and package details, enabling developers to remediate issues efficiently. This segment underscored that while scanning is essential, it’s only one part of a broader security strategy, setting the stage for a holistic approach.
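The commands used in this segment look roughly like the following (image names are placeholders; subcommands per the Docker Scout CLI documentation):

```bash
# One-line risk summary of an image.
docker scout quickview myorg/front-end:latest

# Full CVE listing, filtered to critical and high severities.
docker scout cves --only-severity critical,high myorg/front-end:latest

# Base-image upgrade suggestions (e.g., Alpine 3.14 -> 3.18).
docker scout recommendations myorg/front-end:latest
```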
Understanding Software Supply Chain Security
The second segment, led by Dooling, introduced the software supply chain as a framework encompassing source code, dependencies, build processes, and deployment. Drawing an analogy to brewing coffee—where beans, water, and equipment have their own supply chains—the workshop highlighted risks like supply chain attacks, as outlined by CISA’s open-source security roadmap. These attacks, such as poisoning repositories, differ from CVEs by involving intentional tampering. Participants explored Docker Scout’s role as a supply chain management tool, not just a CVE scanner. Using the workshop’s GitHub repository (dc23-secure-workshop), attendees set up environment variables and Docker Compose to build images, learning how Scout tracks components across the lifecycle. This segment emphasized the need to secure every stage, from code creation to deployment, to prevent vulnerabilities and malicious injections.
Leveraging Docker Scout for Actionable Insights
Docker Scout was the cornerstone of the workshop, offering a developer-friendly interface to manage security. Yves Brissaud guided participants through hands-on labs using Docker Desktop and the Scout CLI to analyze images. Attendees explored vulnerabilities in a front-end image (using Express) and a Go-based back-end image, applying filters to focus on critical CVEs or specific package types (e.g., npm). Scout’s compare command allowed participants to assess changes between image versions, such as updating from Alpine 3.14 to 3.18, revealing added or removed packages and their impact on vulnerabilities. Desktop’s visual interface displayed recommended fixes, like updating base images or dependencies, while the CLI provided detailed outputs, including quick views for rapid assessments. This segment demonstrated Scout’s ability to integrate into CI/CD pipelines, providing early feedback to developers without disrupting workflows.
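A sketch of that comparison step (tags are placeholders):

```bash
# Diff two builds: packages added or removed and the resulting change
# in vulnerabilities, e.g., after bumping the base image.
docker scout compare myorg/front-end:alpine-3.18 --to myorg/front-end:alpine-3.14
```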
Implementing Provenance and Software Bill of Materials (SBOM)
The third segment focused on provenance and SBOMs, critical for supply chain transparency. Provenance, aligned with the SLSA framework’s Build Level 1, documents how an image is built, including base image tags, digests, and build metadata. SBOMs list all packages and their versions, ensuring consistency across environments. Participants rebuilt images with the --provenance and --sbom flags using BuildKit, generating attestations stored in Docker Hub. Brissaud demonstrated using the imagetools command to inspect provenance and SBOMs, revealing details like build timestamps and package licenses. The workshop highlighted the importance of embedding this metadata at build time to enable reproducible builds and accurate recommendations. By integrating Scout’s custom SBOM indexer, attendees ensured consistent vulnerability reporting across Desktop, CLI, and scout.docker.com, enhancing trust in the software’s integrity.
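Rebuilding with attestations follows this shape (image names are placeholders; flags per the buildx documentation):

```bash
# Build and push an image with provenance and SBOM attestations.
docker buildx build --provenance=true --sbom=true -t myorg/app:1.0 --push .

# Inspect the provenance attestation stored alongside the image.
docker buildx imagetools inspect myorg/app:1.0 --format '{{ json .Provenance }}'
```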
Enforcing Developer-Centric Policies
The final segment introduced Docker Scout’s policy enforcement, designed with a developer mindset to avoid unnecessary build failures. Dooling explained Scout’s “first do no harm” philosophy, rooted in Kaizen’s continuous improvement principles. Unlike traditional policies that block builds for existing CVEs, Scout compares new builds to production images, allowing progress if vulnerabilities remain unchanged. Participants explored the out-of-the-box policies available in Early Access, including fixing critical and high CVEs, updating base images, and avoiding deprecated base-image tags. Using the scout policy command, attendees evaluated images against these policies, viewing compliance status on Desktop and scout.docker.com. The workshop also previewed upcoming GitHub Action integrations for pull request policy checks, enabling developers to assess changes before merging. This approach ensures security without hindering development, aligning with Docker’s mission to empower developers.
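Evaluating an image against those policies from the CLI looks like this (organization and tag are placeholders):

```bash
# Show policy compliance status for an image in a Docker organization.
docker scout policy myorg/app:1.0 --org myorg
```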
Links:
- DockerCon 2023 Workshop Video
- Docker Website
- Announcing Docker Scout GA: Actionable Insights for the Software Supply Chain
- Docker Scout: Securing The Complete Software Supply Chain (DockerCon 2023)
- What’s in My Container? Docker Scout CLI and CI to the Rescue (DockerCon 2023)
Hashtags: #DockerCon2023 #SoftwareSupplyChain #DockerScout #SecureDevelopment #CVEs #Provenance #SBOM #Policy #YvesBrissaud #JamesCarnegie #DavidDooling #ChristianDupuis
[SpringIO2023] Managing Spring Boot Application Secrets: Badr Nass Lahsen
In a compelling session at Spring I/O 2023, Badr Nass Lahsen, a DevSecOps expert at CyberArk, tackled the critical challenge of securing secrets in Spring Boot applications. With the rise of cloud-native architectures and Kubernetes, secrets like database credentials or API keys have become prime targets for attackers. Badr’s talk, enriched with demos and real-world insights, introduced CyberArk’s Conjur solution and various patterns to eliminate hard-coded credentials, enhance authentication, and streamline secrets management, fostering collaboration between developers and security teams.
The Growing Threat to Application Secrets
Badr opened with alarming statistics: in 2021, software supply chain attacks surged by 650%, with 71% of organizations experiencing such breaches. He cited the 2022 Uber attack, where a PowerShell script with hard-coded credentials enabled attackers to escalate privileges across AWS, Google Suite, and other systems. Using the SLSA threat model, Badr highlighted vulnerabilities like compromised source code (e.g., Okta’s leaked access token) and build processes (e.g., SolarWinds). These examples underscored the need to eliminate hard-coded secrets, which are difficult to rotate, track, or audit, and often exposed inadvertently. Badr advocated for “shifting security left,” integrating security from the design phase to mitigate risks early.
Introducing Application Identity Security
Badr introduced the concept of non-human identities, noting that machine identities (e.g., SSH keys, database credentials) outnumber human identities 45 to 1 in enterprises. These secrets, if compromised, grant attackers access to critical resources. To address this, Badr presented CyberArk’s Conjur, an open-source secrets management solution that authenticates workloads, enforces policies, and rotates credentials. He emphasized the “secret zero problem”—the initial secret needed at application startup—and proposed authenticators like JWT or certificate-based authentication to solve it. Conjur’s attribute-based access control (ABAC) ensures least privilege, enabling scalable, auditable workflows that balance developer autonomy and security requirements.
Patterns for Securing Spring Boot Applications
Through a series of demos using the Spring Pet Clinic application, Badr showcased five patterns for secrets management in Kubernetes. The API pattern integrates Conjur’s SDK, using Spring’s @Value annotations to inject secrets without changing developer workflows. The Secrets Provider pattern updates Kubernetes secrets from Conjur, minimizing code changes but offering less security. The Push-to-File pattern stores secrets in shared memory, updating application YAML files securely. The Summon pattern uses a process wrapper to inject secrets as environment variables, ideal for apps relying on such variables. Finally, the Secretless Broker pattern proxies connections to resources like MySQL, hiding secrets entirely from applications and developers. Badr demonstrated credential rotation with zero downtime using Spring Cloud Kubernetes, ensuring resilience for critical applications.
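For the API pattern in particular, the developer-facing side stays plain Spring. A minimal sketch, assuming a property source wired to the secrets manager (the property name is illustrative, not Conjur’s actual configuration key):

```kotlin
import org.springframework.beans.factory.annotation.Value
import org.springframework.stereotype.Component

@Component
class DatabaseCredentials(
    // Resolved at startup from a property source backed by the secrets
    // manager, so the credential never appears in code or in the image.
    @Value("\${db.password}") private val dbPassword: String,
) {
    fun describe() = "secret loaded (${dbPassword.length} chars)"
}
```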
Enhancing Kubernetes Security and Auditing
Badr cautioned that Kubernetes secrets, being base64-encoded and unencrypted by default, are insecure without etcd encryption. He introduced KubiScan, an open-source tool to identify risky roles and permissions in clusters. His demos highlighted Conjur’s auditing capabilities, logging access to secrets and enabling security teams to track usage. By centralizing secrets management, Conjur eliminates “security islands” created by disparate tools like AWS Secrets Manager or Azure Key Vault, ensuring compliance and visibility. Badr stressed the need for a federated governance model to manage secrets across diverse technologies, empowering developers while maintaining robust security controls.
[DevoxxBE2023] Build a Generative AI App in Project IDX and Firebase by Prakhar Srivastav
At Devoxx Belgium 2023, Prakhar Srivastav, a software engineer at Google, unveiled the power of Project IDX and Firebase in crafting a generative AI mobile application. His session illuminated how developers can harness these tools to streamline full-stack, multiplatform app development directly from the browser, eliminating cumbersome local setups. Through a live demonstration, Prakhar showcased the creation of “Listed,” a Flutter-based app that leverages Google’s PaLM API to break down user-defined goals into actionable subtasks, offering a practical tool for task management. His engaging presentation, enriched with real-time coding, highlighted the synergy of cloud-based development environments and AI-driven solutions.
Introducing Project IDX: A Cloud-Based Development Revolution
Prakhar introduced Project IDX as a transformative cloud-based development environment designed to simplify the creation of multiplatform applications. Unlike traditional setups requiring hefty binaries like Xcode or Android Studio, Project IDX enables developers to work entirely in the browser. Prakhar demonstrated this by running Android and iOS emulators side-by-side within the browser, showcasing a Flutter app that compiles to multiple platforms—Android, iOS, web, Linux, and macOS—from a single codebase. This eliminates the need for platform-specific configurations, making development accessible even on lightweight devices like Chromebooks.
The live demo featured “Listed,” a mobile app where users input a goal, such as preparing for a tech talk, and receive AI-generated subtasks and tips. For instance, entering “give a tech talk at a conference” yielded steps like choosing a relevant topic and practicing the presentation, with a tip to have a backup plan for technical issues. Prakhar’s real-time tweak—changing the app’s color scheme from green to red—illustrated the iterative development flow, where changes are instantly reflected in the emulator, enhancing productivity and experimentation.
Harnessing the PaLM API for Generative AI
Central to the app’s functionality is Google’s PaLM API, which Prakhar utilized to integrate generative AI capabilities. He explained that large language models (LLMs), like those powering the PaLM API, act as sophisticated autocomplete systems, predicting likely text outputs based on extensive training data. For “Listed,” the text API was chosen for its suitability in single-turn interactions, such as generating subtasks from a user’s query. Prakhar emphasized the importance of crafting effective prompts, comparing a vague prompt like “the sky is” to a precise one like “complete the sentence: the sky is,” which yields more relevant results.
To enhance the AI’s output, Prakhar employed few-shot prompting, providing the model with examples of desired responses. For instance, for the query “go camping,” the prompt included sample subtasks like choosing a campsite and packing meals, along with a tip about wildlife safety. This structured approach ensured the model generated contextually accurate and actionable suggestions, making the app intuitive for users tackling complex tasks.
Securing AI Integration with Firebase Extensions
Integrating the PaLM API into a mobile app poses security challenges, particularly around API key exposure. Prakhar addressed this by leveraging Firebase Extensions, which provide pre-packaged solutions to streamline backend integration. Specifically, he used a Firebase Extension to securely call the PaLM API via Cloud Functions, avoiding the need to embed sensitive API keys in the client-side Flutter app. This setup not only enhances security but also simplifies infrastructure management, as the extension handles logging, monitoring, and optional AppCheck for client verification.
In the live demo, Prakhar navigated the Firebase Extensions Marketplace, selecting the “Call PaLM API Securely” extension. With a few clicks, he deployed Cloud Functions that exposed a POST API for sending prompts and receiving AI-generated responses. The code walkthrough revealed a straightforward implementation in Dart, where the app constructs a JSON payload with the prompt, model name (text-bison-001), and temperature (0.25 for deterministic outputs), ensuring seamless and secure communication with the backend.
Building the Flutter App: Simplicity and Collaboration
The Flutter app’s architecture, built within Project IDX, was designed for simplicity and collaboration. Prakhar walked through the main.dart file, which scaffolds the app’s UI with a material-themed interface, an input field for user queries, and a list to display AI-generated tasks. The app uses anonymous Firebase authentication to secure backend calls without requiring user logins, enhancing accessibility. A PromptBuilder class dynamically constructs prompts by combining predefined prefixes and examples, ensuring flexibility in handling varied user inputs.
Project IDX’s integration with Visual Studio Code’s open-source framework added collaborative features. Prakhar demonstrated how developers can invite colleagues to a shared workspace, enabling real-time collaboration. Additionally, the IDE’s AI capabilities allow users to explain selected code or generate new snippets, streamlining development. For instance, selecting the PromptBuilder class and requesting an explanation provided detailed insights into its parameters, showcasing how Project IDX enhances developer productivity.
[PHPForumParis2022] BFF: Our Best Friend Forever for Frontend Applications? – Valentin Claras
Valentin Claras, a seasoned team leader at Bedrock, delivered a compelling session at PHP Forum Paris 2022, exploring the Backend for Frontend (BFF) pattern as a solution for managing complex frontend applications. With over a decade of development experience, Valentin shared insights from his work at Bedrock, formerly M6 Web, illustrating how BFF streamlines frontend-backend interactions. His presentation, dense with practical examples, highlighted the pattern’s potential to enhance performance and maintainability in PHP-driven projects.
Understanding the BFF Pattern
Valentin introduced the BFF pattern as a specialized backend layer tailored to specific frontend needs, acting as a “glue” between diverse APIs and client applications. Drawing from Bedrock’s streaming platform, he explained how BFF aggregates data from multiple backend services, simplifying frontend development. By reducing the complexity of direct API calls, BFF enables faster iteration and better user experiences, particularly for applications with varied frontend requirements like web and mobile interfaces.
Optimizing Performance with Asynchronous Processing
Addressing performance concerns, Valentin detailed Bedrock’s use of the Tornado engine to handle asynchronous API calls within the BFF layer. He explained how parallelizing 10 to 20 API requests ensures reasonable response times, even under heavy loads. Valentin referenced prior talks by colleague Benoit Viguier, emphasizing the importance of non-sequential processing to maintain efficiency. This approach, he argued, mitigates the risk of performance bottlenecks, making BFF a viable solution for high-traffic applications.
Maintaining Clear Boundaries
Valentin emphasized the importance of keeping BFF’s responsibilities minimal to avoid it becoming a monolithic service. At Bedrock, the BFF focuses solely on data aggregation and transformation, leaving business logic to dedicated services. This clear separation ensures maintainability and scalability, preventing the BFF from absorbing unrelated responsibilities. Valentin’s insights, grounded in real-world challenges, offered a blueprint for developers aiming to implement BFF effectively in their PHP projects.
Fostering Collaborative Development
Concluding, Valentin highlighted BFF’s role in fostering collaboration between frontend and backend teams. By providing a unified interface, BFF reduces miscommunication and aligns development efforts. He encouraged developers to adopt BFF incrementally, leveraging its flexibility to enhance project workflows. Valentin’s practical approach inspired attendees to explore BFF as a tool for building robust, frontend-friendly PHP applications, drawing from Bedrock’s successful implementation.
[NodeCongress2021] Can We Double HTTP Client Throughput? – Matteo Collina
HTTP clients are often treated as a fixed cost in distributed systems, yet they hold untapped performance. Matteo Collina, Node.js TSC member, Fastify co-creator, and author of Pino, challenges that assumption with Undici, an HTTP/1.1 client for Node.js that can double or even triple the throughput of the core http client by working around head-of-line blocking.
Matteo started from TCP/IP fundamentals: Nagle’s algorithm coalesces small packets and delays ACKs—elegant for telnet-era traffic, but harmful for HTTP’s request/response patterns. HTTP keep-alive keeps sockets open for reuse across requests, yet Node’s core http client still allows only one in-flight request per connection, which bottlenecks bursts of traffic.
Undici changes this: connection pools run requests in parallel across sockets, and pipelining dispatches several requests on one socket without waiting for each response. Matteo’s benchmarks showed the native client peaking at its baseline while Undici’s agents, with configurable concurrency, delivered roughly a threefold gain, its stream API further reducing JSON parsing overhead.
Mitigating Head-of-Line Blocking
Head-of-line blocking—where one stalled response delays everything queued behind it—is contained by Undici’s ordered response queues, which slot responses back without reordering. In code, fetch-style wrappers can stand in for the native API, and agents are tuned per origin; raising the pipelining depth above the default unleashes the throughput gains.
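In code, the pooled, pipelined client looks roughly like this (per the undici documentation; the URL and tuning values are illustrative):

```typescript
import { Agent, request } from "undici";

// One agent per process: up to 10 pooled connections per origin, each
// allowed 4 pipelined in-flight requests.
const agent = new Agent({ connections: 10, pipelining: 4 });

const { statusCode, body } = await request("http://localhost:3000/items", {
  dispatcher: agent,
});
console.log(statusCode, await body.json());
```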
Comparisons affirmed that Undici is stricter where the core client is lenient, and its API diverges, offering request and stream methods for finer-grained control. Undici grew out of the needs of Fastify’s proxying work, and Robert Nagy’s polish primed it for production.
Matteo’s closing advice—treat agents as mandatory; Undici is transformative—ushers in an HTTP renaissance, slashing latencies in microservice meshes.
[NodeCongress2023] Architectural Strategies for Achieving 40 Million Operations Per Second in a Distributed Database
Lecturer: Michael Hirschberg
Michael Hirschberg is a Solutions Engineer with extensive operational experience in distributed database systems, particularly with Couchbase. He is affiliated with Couchbase and has previously served as a Senior System Engineer for eight years at Amadeus. His work focuses on advising companies on optimal database architecture, performance, and scalability, with a notable specialization in handling extremely high-throughput environments. He is based in Erding, Bavaria.
- Institutional Profile/Professional Page: Michael Hirschberg’s talks, articles, workshops, certificates – GitNation
- LinkedIn: in/hirschbergm
- Organization: Couchbase
Abstract
This article investigates the architectural principles and methodological innovations required to sustain database throughput rates of up to 40 million operations per second. The analysis highlights the critical role of in-memory data storage, sophisticated horizontal scaling, and the utilization of “smart clients” to bypass traditional database bottlenecks. Furthermore, the article explores specialized deployments, such as mobile databases designed for an offline-first strategy, and the diverse data access mechanisms necessary for high-performance applications.
Context: The Imperative of Latency and Throughput
In modern distributed computing, especially in applications developed using environments like Node.js, the database often becomes the critical bottleneck to achieving high performance and low latency. The architecture needed to support extremely high operations per second (Ops/S) must diverge significantly from traditional relational or monolithic NoSQL designs.
Methodology: Distributed In-Memory Architecture
The core methodology for achieving extreme throughput centers on an optimized, distributed, in-memory data platform:
- In-Memory Storage: The initial and primary method of storing data is in RAM, which is foundational to the “lightning” speed described for operation execution.
- Sharding and Distribution: The architecture relies on horizontal scaling by sharding the data across multiple nodes. This mechanism distributes the load and ensures that no single machine becomes a point of failure or congestion.
- Smart Clients/SDKs: Crucially, the system utilizes “smart clients” or SDKs that incorporate the sharding logic. These clients calculate the exact node where the data resides and connect directly to that node, bypassing any centralized routing or proxy layer that would otherwise introduce latency. (A conceptual sketch follows this list.)
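To illustrate the principle, a conceptual sketch (not Couchbase’s actual SDK code): the client hashes each key into one of a fixed number of partitions and resolves the owner from a locally cached cluster map.

```kotlin
import java.util.zip.CRC32

// No central router in the data path: hash the key to a partition,
// look up the owning node in the cluster map, connect directly.
const val PARTITIONS = 1024

fun partitionFor(key: String): Int {
    val crc = CRC32().apply { update(key.toByteArray()) }
    return (crc.value % PARTITIONS).toInt()
}

class ClusterMap(private val ownerByPartition: List<String>) {
    init { require(ownerByPartition.size == PARTITIONS) }
    fun nodeFor(key: String): String = ownerByPartition[partitionFor(key)]
}

fun main() {
    val map = ClusterMap(List(PARTITIONS) { "node-${it % 4}.example:11210" })
    println(map.nodeFor("user::42")) // the client dials this node directly
}
```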
Analysis of Specialised Data Models and Deployment
Data Structure and Access
The database is built to efficiently digest data in two specific formats: JSON documents and raw binaries.
- Access Mechanisms: Developers can interact with the data using several high-level methods, including:
- SQL for JSON (N1QL): A declarative query language that allows SQL-like querying of JSON data.
- Full Text Search (FTS): Enabling complex, efficient text-based searches across the dataset.
- Vector search, however, was noted as not supported by the platform at the time of the talk.
Mobile Database Implementation
A complementary lightweight version of the database is designed for mobile devices, web browsers, and edge hardware like Raspberry Pi.
- Offline-First: This design is built to prioritize working offline, storing data locally on the device.
- Synchronization: Data is synchronized with the main database in the cloud or on-premises via a special component. This component ensures that the mobile device receives only the data it is authorized and supposed to access, maintaining security and data integrity. Mobile databases can also communicate peer-to-peer.
Conclusion
The capability to handle 40 million Ops/S is achieved through a multi-faceted architectural approach that leverages in-memory data, aggressive horizontal sharding, and the crucial innovation of smart clients that eliminate centralized bottlenecks. This methodology minimizes network hops and maximizes read/write performance. Furthermore, specialized components for mobile and edge deployment extend the high-performance model to offline and low-bandwidth environments, confirming the system’s relevance for globally distributed, modern application needs.
Relevant links and hashtags
- Lecture Video: The Database Magic Behind 40MIO Ops/S – Michael Hirschberg, Node Congress 2023
- Lecturer Professional Links:
- LinkedIn: in/hirschbergm
- Organization: Couchbase
Hashtags: #NoSQL #DatabaseArchitecture #HighPerformance #40MIOOpsS #Couchbase #DistributedSystems #NodeCongress
[SpringIO2023] Going Native: Fast and Lightweight Spring Boot Applications with GraalVM
At Spring I/O 2023 in Barcelona, Alina Yurenko, a developer advocate at Oracle Labs, captivated the audience with her deep dive into GraalVM Native Image support for Spring Boot 3.0. Her session, a blend of technical insights, live demos, and community engagement, showcased how GraalVM transforms Spring Boot applications into fast-starting, lightweight native executables that eliminate the need for a JVM. By leveraging GraalVM’s ahead-of-time (AOT) compilation, developers can achieve significant performance gains, reduced memory usage, and enhanced security, making it a game-changer for cloud-native deployments.
GraalVM: Beyond a Traditional JDK
Alina began by demystifying GraalVM, a versatile platform that extends beyond a standard JDK. While it can run Java applications using the OpenJDK HotSpot VM with an optimized Graal compiler, the spotlight was on its Native Image feature. This AOT compilation process converts a Spring Boot application into a standalone native executable, stripping away runtime code loading and compilation. The result? Applications that start in fractions of a second and consume minimal memory. Alina emphasized that GraalVM’s ability to include only reachable code—application logic, dependencies, and necessary JDK classes—reduces binary size and enhances efficiency, a critical advantage for cloud environments where resources are costly.
Performance and Resource Efficiency in Action
Through live demos, Alina illustrated GraalVM’s impact using the Spring Pet Clinic application. On her laptop, the JVM version took 1.5 seconds to start, while the native executable launched in just 0.3 seconds—a fivefold improvement. The native version was also significantly smaller, at roughly 50 MB without compression, compared to the JVM’s bulkier footprint. To stress-test performance, Alina ran a million requests against a simple Spring Boot app, comparing JVM and native modes. The JVM achieved 80k requests per second, while the native image hit 67k. However, with profile-guided optimizations (PGO), which mimic JVM’s runtime profiling at build time, the optimized native version reached 81k requests per second, rivaling JVM peak throughput. These demos underscored GraalVM’s ability to balance startup speed, low memory usage, and competitive throughput.
Security and Compact Packaging
Alina highlighted GraalVM’s security benefits, noting that native images eliminate runtime code loading, reducing attack vectors like those targeting just-in-time compilation. Only reachable code is included, minimizing the risk of unused dependencies introducing vulnerabilities. Dynamic features like reflection require explicit configuration, ensuring deliberate control over runtime behavior. On packaging, Alina showcased how native images can be compressed using tools like UPX, achieving sizes as low as a few megabytes, though she cautioned about potential runtime decompression trade-offs. These features make GraalVM ideal for deploying compact, secure applications in constrained environments like Kubernetes or serverless platforms.
Practical Integration with Spring Boot
The session also covered GraalVM’s seamless integration with Spring Boot 3.0, which graduated Native Image support from the experimental Spring Native project to general availability in November 2022. Spring Boot’s AOT processing step optimizes applications for native compilation, reducing reflective calls and generating configuration files for GraalVM. Alina demonstrated how Maven and Gradle plugins, along with the GraalVM Reachability Metadata Repository, simplify builds by automatically handling library configurations. For developers, this means minimal changes to existing workflows, with tools like the tracing agent and Spring’s runtime hints easing the handling of dynamic features. Alina’s practical advice—develop on the JVM for fast feedback, then compile to native in CI/CD pipelines—resonated with attendees aiming to adopt GraalVM.
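Where the tracing agent or the metadata repository does not cover a dynamic feature, Spring’s runtime-hints API lets the application declare it explicitly. A minimal sketch (the DTO and resource pattern are illustrative):

```kotlin
import org.springframework.aot.hint.MemberCategory
import org.springframework.aot.hint.RuntimeHints
import org.springframework.aot.hint.RuntimeHintsRegistrar
import org.springframework.context.annotation.Configuration
import org.springframework.context.annotation.ImportRuntimeHints

data class ReportDto(val title: String)

// Declares reflection and resource usage that GraalVM's closed-world
// static analysis cannot discover on its own.
class AppHints : RuntimeHintsRegistrar {
    override fun registerHints(hints: RuntimeHints, classLoader: ClassLoader?) {
        hints.reflection().registerType(
            ReportDto::class.java,
            MemberCategory.INVOKE_DECLARED_CONSTRUCTORS,
            MemberCategory.INVOKE_DECLARED_METHODS,
        )
        hints.resources().registerPattern("reports/*.template")
    }
}

@Configuration
@ImportRuntimeHints(AppHints::class)
class NativeHintsConfiguration
```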