
Onyxia: A User-Centric Interface for Data Scientists in the Cloud Age

Watch the video

Introduction

The team from INSEE presents Onyxia, an open-source, Kubernetes-based platform designed to offer flexible, collaborative, and powerful cloud environments for data scientists.

Rethinking Data Science Infrastructure

Traditional local development faces issues like configuration divergence, data duplication, and limited compute resources. Onyxia solves these by offering isolated namespaces, integrated object storage, and a seamless user interface that abstracts Kubernetes and S3 complexities.

Versatile Deployment

With a few clicks, users can launch preconfigured environments — including Jupyter notebooks, VS Code, Postgres, and MLflow — empowering fast innovation without heavy IT overhead. Organizations can extend Onyxia by adding custom services, ensuring future-proof, evolvable data labs.

Success Stories

Adopted across French universities and research labs, Onyxia enables students and professionals alike to work in secure, scalable, and fully-featured environments without managing infrastructure manually.

Conclusion

Onyxia democratizes access to powerful cloud tools for data scientists, streamlining collaboration and fostering innovation.

[NDCOslo2024] Reusable Ideas About the Reuse of Software – Audun Fauchald Strand & Trond Arve Wasskog

In the sprawling digital expanse of Norway’s welfare agency, NAV, home to some 143 million lines of code, Audun Fauchald Strand and Trond Arve Wasskog, principal engineers, confront the Sisyphean challenge of maintenance. Their discourse, a clarion call for strategic reuse, dissects NAV’s labyrinthine codebase, advocating shared components to curb redundancy. With a nod to domain-driven design and Conway’s Law, Audun and Trond weave a narrative of organizational alignment, technical finesse, and cultural recalibration, urging a shift from ad-hoc replication to deliberate commonality.

NAV, serving Norway’s social safety net, grapples with legacy sprawl. Audun and Trond, seasoned navigators of this terrain, challenge the mantra “reuse should be discovered, not designed.” Their thesis: intentional reuse, underpinned by product thinking, demands ownership, incentives, and architecture harmonized with organizational contours. From open-source libraries to shared services, they map a spectrum of reuse, balancing technical feasibility with social dynamics.

Redefining Reuse: From Code to Culture

Reuse begins with understanding context. Audun outlines NAV’s scale: thousands of developers, hundreds of teams, and a codebase ballooning through modernization. Copy-pasting code—tempting for speed—breeds technical debt. Instead, they champion shared libraries and services, like payment gateways or journaling systems, already reused across NAV’s ecosystem. Open-source, they note, exemplifies external success; internally, however, reuse falters without clear ownership.

Trond delves into Conway’s Law: systems mirror organizational structures. NAV’s fragmented teams spawn siloed solutions unless guided by unified governance. Their solution: designate component owners, aligning incentives to prioritize maintenance over novelty. A payment service, reused across domains, exemplifies success, reducing duplication while enhancing reliability.

Technical Tactics and Organizational Orchestration

Technically, reuse demands robust infrastructure. Audun advocates platforms—centralized APIs, standardized pipelines—to streamline integration. Shared libraries, versioned meticulously, prevent divergence, while microservices enable modular reuse. Yet technical prowess alone does not suffice; the social dimension is equally decisive. Trond emphasizes cross-team collaboration, ensuring components like letter-sending services are maintained by dedicated squads, not orphaned.

Their lesson: reuse is a socio-technical dance. Without organizational buy-in—financing, accountability, clear roles—components decay. NAV’s pivot to product-oriented teams, guided by domain-driven design, fosters reusable assets, aligning technical solutions with business imperatives.

Navigating Pitfalls: Ownership and Maintenance

The core challenge lies in the “blue box”—NAV’s monolithic systems. Audun and Trond dissect failures: reused components falter when unowned, leading to outages or obsolescence. Their antidote: explicit ownership models, where teams steward components, supported by funding and metrics. They cite successes—journaling services, payment APIs—where ownership ensures longevity.

Their vision: an internal open-source ethos, where teams contribute to and consume shared assets, mirrored by external triumphs like Kubernetes. By realigning incentives, NAV aims to transform reuse from serendipity to strategy, reducing code bloat while accelerating delivery.

Fostering a Reuse-First Mindset

Audun and Trond conclude with a cultural clarion: reuse thrives on intentionality. Teams must evaluate trade-offs—forking versus libraries, services versus platforms—within their context. Their call to action: join NAV’s mission, where reuse reshapes welfare delivery, blending technical rigor with societal impact.


Renovate/Dependabot: How to Take Control of Dependency Updates

At Devoxx France 2024, held in April at the Palais des Congrès in Paris, Jean-Philippe Baconnais and Lise Quesnel, consultants at Zenika, presented a 30-minute talk titled Renovate/Dependabot, ou comment reprendre le contrôle sur la mise à jour de ses dépendances (“Renovate/Dependabot, or how to take back control of your dependency updates”). The session explored how tools like Dependabot and Renovate automate dependency updates, reducing the tedious and error-prone manual process. Through a demo and lessons from open-source and client projects, they shared practical tips for implementing Renovate, highlighting its benefits and pitfalls. 🚀

The Pain of Dependency Updates

The talk opened with a relatable skit: Lise, working on a side project (a simple Angular 6 app showcasing women in tech), admitted to neglecting updates due to the effort involved. Jean-Philippe emphasized that this is a common issue across projects, especially in microservice architectures with numerous components. Updating dependencies is critical for:

  • Security: Applying patches to reduce exploitable vulnerabilities.
  • Features: Accessing new functionalities.
  • Bug Fixes: Benefiting from the latest corrections.
  • Performance: Leveraging optimizations.
  • Attractiveness: Using modern tech stacks (e.g., Node 20 vs. Node 8) to appeal to developers.

However, the process is tedious, repetitive, and complex due to transitive dependencies (e.g., a median of 683 for NPM projects) and cascading updates, where one update triggers others.

Automating with Dependabot and Renovate

Dependabot (acquired by GitHub) and Renovate (from Mend) address this by scanning project files (e.g., package.json, Maven POM, Dockerfiles) and opening pull requests (PRs) or merge requests (MRs) for available updates. These tools:

  • Check registries (NPM, Maven Central, Docker Hub) for new versions.
  • Provide visibility into dependency status.
  • Save time by automating version checks, especially in microservice setups.
  • Enhance reactivity, critical for applying security patches quickly.

Setting Up the Tools

Dependabot: Configured via a dependabot.yml file, specifying ecosystems (e.g., NPM), directories, and update schedules (e.g., weekly). On GitHub, it integrates natively via project settings. GitLab users can use a similar approach.

# dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"

Renovate: Configured via a renovate.json file, extending default presets. It supports GitHub and GitLab via apps or CI/CD pipelines (e.g., GitLab CI with a Docker image). For self-hosted setups, Renovate can run as a Docker container or Kubernetes CronJob.

// renovate.json
{
  "extends": [
    "config:recommended"
  ]
}

In their demo, Jean-Philippe and Lise showcased Renovate on a GitLab project, using a .gitlab-ci.yml pipeline to run Renovate on a schedule, creating MRs for updates like rxjs (from 6.3.2 to 6.6.7).
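A scheduled GitLab CI job along these lines could drive Renovate in such a setup (a minimal sketch; the image tag and token variable are assumptions, not details from the talk):

```yaml
# .gitlab-ci.yml — run Renovate from a scheduled pipeline (illustrative)
renovate:
  image: renovate/renovate:latest
  script:
    # RENOVATE_TOKEN (a GitLab access token) is assumed to be set as a CI/CD variable
    - renovate --platform gitlab --endpoint "$CI_API_V4_URL" "$CI_PROJECT_PATH"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
```

Restricting the job to `schedule` pipelines keeps dependency scans off the critical path of regular commits.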

Customizing Renovate

Renovate’s strength lies in its flexibility through presets and custom configurations:

  • Presets: Predefined rules (e.g., npm:unpublishSafe waits 3 days before proposing updates). Presets can extend others, forming a hierarchy (e.g., config:recommended extends base presets).
  • Custom Presets: Organizations can define reusable configs in a dedicated repository (e.g., renovate-config) and apply them across projects.
// renovate-config/default.json
{
  "extends": [
    "config:recommended",
    ":npm"
  ]
}
  • Grouping Updates: Combine related updates (e.g., all ESLint packages) using packageRules or presets like group:recommendedLinters to reduce PR noise.
{
  "packageRules": [
    {
      "matchPackagePatterns": ["^eslint"],
      "groupName": "eslint packages"
    }
  ]
}
  • Dependency Dashboard: An issue tracking open, rate-limited, or ignored MRs, activated via the dependencyDashboard field or preset.
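Enabling the dashboard is a single setting in renovate.json (a minimal sketch):

```json
{
  "extends": ["config:recommended"],
  "dependencyDashboard": true
}
```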

Going Further: Automerge and Beyond

To streamline updates, Renovate supports automerge, automatically merging MRs if the pipeline passes, relying on robust tests. Options include:

  • automerge: true for all updates.
  • automergeType: "pr" (the default) or "branch", which merges the update branch directly without opening a PR.
  • Presets like automerge:patch for patch updates only.
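Put together, a configuration that automerges only patch-level updates might look like this (a sketch using Renovate’s documented packageRules options):

```json
{
  "packageRules": [
    {
      "matchUpdateTypes": ["patch"],
      "automerge": true
    }
  ]
}
```

Keeping automerge to patches limits the blast radius to changes that are least likely to break the build.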

The demo showed an automerged rxjs update, triggering a new release (v1.2.1) via semantic-release, tagged, and deployed to Google Cloud Run. A failed Angular update (due to a major version gap) demonstrated how failing tests block automerge, ensuring safety.

Renovate can also update itself and its configuration (e.g., deprecated fields) via the config:migration preset, creating MRs for self-updates.

Lessons Learned and Recommendations

From their experiences, Jean-Philippe and Lise shared key tips:

  • Manage PR Overload: Limit concurrent PRs (e.g., prConcurrentLimit: 5) and group related updates to reduce noise.
  • Use Schedules: Run Renovate at off-peak times (e.g., nightly) to avoid overloading CI runners and impacting production deployments.
  • Ensure Robust Tests: Automerge relies on trustworthy tests; weak test coverage can lead to broken builds.
  • Balance Frequency: Frequent runs catch updates quickly but risk conflicts; infrequent runs may miss critical patches.
  • Monitor Resource Usage: Excessive pipelines can strain runners and increase costs in autoscaling environments (e.g., cloud platforms).
  • Handle Transitive Dependencies: Renovate manages them like direct dependencies, but cascading updates require careful review.
  • Support Diverse Ecosystems: Renovate works well with Java (e.g., Spring Boot, Quarkus), Scala, and NPM, with grouping to manage high-dependency ecosystems like NPM.
  • Internal Repositories: Configure Renovate to scan private registries by specifying URLs.
  • Major Updates: Use presets to stage major updates incrementally, avoiding risky automerge for breaking changes.
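Several of these tips translate directly into configuration; for instance (the values here are illustrative, not recommendations from the talk):

```json
{
  "extends": ["config:recommended"],
  "prConcurrentLimit": 5,
  "schedule": ["after 10pm and before 5am every weekday"]
}
```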

Takeaways

Jean-Philippe and Lise’s talk highlighted how Dependabot and Renovate transform dependency management from a chore to a streamlined process. Their demo and practical advice showed how Renovate’s flexibility—via presets, automerge, and dashboards—empowers teams to stay secure and up-to-date, especially in complex microservice environments. However, success requires careful configuration, robust testing, and resource management to avoid overwhelming teams or infrastructure. 🌟

[DevoxxUK2024] Project Leyden: Capturing Lightning in a Bottle by Per Minborg

Per Minborg, a seasoned member of Oracle’s Core Library team, delivered an insightful session at DevoxxUK2024, unveiling the ambitions of Project Leyden, a transformative initiative to enhance Java application performance. Focused on slashing startup time, accelerating warmup, and reducing memory footprint, Per’s talk explores how Java can evolve to meet modern demands while preserving its dynamic nature. By strategically shifting computations to optimize execution, Project Leyden introduces innovative techniques like condensers and enhanced Class Data Sharing (CDS). This session provides a roadmap for developers seeking to harness Java’s potential in high-performance environments, balancing flexibility with efficiency.

The Vision of Project Leyden

Per begins by outlining the core objectives of Project Leyden: improving startup time, warmup time, and memory footprint. Startup time, the duration from launching an application to its first meaningful output (e.g., a “Hello World” or serving a web request), is critical for user experience. Warmup time, the period until an application reaches peak performance through JIT compilation, can hinder responsiveness in dynamic systems. Footprint, encompassing memory and storage use, impacts scalability, especially in cloud environments. Per emphasizes that the best approach is to eliminate unnecessary computations, but when that’s not feasible, shifting them temporally—either earlier to compile time or later to runtime—can yield significant gains. This philosophy underpins Leyden’s strategy to refine Java’s execution model.

Shifting Computations for Efficiency

A cornerstone of Project Leyden is the concept of temporal computation shifting. Per explains that Java’s dynamic nature—encompassing dynamic class loading, JIT compilation, and runtime optimizations—enables expressive programming but can inflate startup and warmup times. By moving computations to build time, such as through constant folding or ahead-of-time (AOT) compilation, Leyden reduces runtime overhead. Alternatively, lazy evaluation postpones non-critical tasks, streamlining startup. Per introduces condensers, a novel mechanism that transforms program representations by shifting computations earlier, adding metadata, or imposing constraints on dynamism. Condensers are composable, meaning-preserving, and selectable, allowing developers to tailor optimizations based on application needs. For instance, a condenser might precompile lambda expressions into bytecode at build time, slashing runtime costs.

Enhancing Class Data Sharing (CDS)

Per delves into Class Data Sharing (CDS), a long-standing Java feature that Project Leyden enhances to achieve dramatic performance boosts. CDS allows pre-initialized JDK classes to be stored in a file, bypassing costly class loading during startup. With CDS++, Leyden extends this to include application classes, compiled code, and resolved constant pool references. Per shares compelling benchmarks: a test compiling 100 small Java files achieved a 2x startup improvement, while an XML parsing workload saw an 8x boost. For the Spring Pet Clinic benchmark, Leyden’s optimizations, including early class loading and cached compiled code, yielded up to 4x faster startup. These gains stem from a training run approach, where a representative execution gathers profiling data to inform optimizations, ensuring compatibility across platforms.
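The training-run workflow maps onto the JDK’s existing dynamic CDS flags; a sketch of the two-step flow (app.jsa and app.jar are placeholder names, and Leyden’s extended archives go beyond what these stock flags capture):

```shell
# Step 1 — training run: execute a representative workload and
# dump the set of loaded classes into an archive on exit
java -XX:ArchiveClassesAtExit=app.jsa -jar app.jar

# Step 2 — subsequent runs: map the pre-parsed archive at startup,
# skipping repeated class loading and verification work
java -XX:SharedArchiveFile=app.jsa -jar app.jar
```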

Balancing Dynamism and Performance

Java’s dynamism—encompassing dynamic typing, class loading, and reflection—empowers developers but complicates optimization. Per proposes selective constraints to balance this trade-off. For example, developers can restrict dynamic class loading for specific modules, enabling aggressive optimizations without sacrificing Java’s flexibility. The stable value feature, initially part of Leyden but now a standalone JEP, allows delayed initialization of final fields while maintaining performance akin to compile-time constants. Per illustrates this with a Fibonacci computation example, where memoization using stable values drastically reduces recursive overhead. By offering a “mixer board” of concessions, Leyden empowers developers to fine-tune performance, ensuring compatibility and preserving program semantics across diverse use cases.


[NodeCongress2024] Asynchronous Context Tracking in Modern JavaScript Runtimes: From `AsyncLocalStorage` to the `AsyncContext` Standard

Lecturer: James M Snell

James M Snell is a distinguished open-source contributor and software engineer, currently serving as a Principal Engineer on the Cloudflare Workers team. He is a long-standing core contributor to the Node.js Technical Steering Committee (TSC), where his technical leadership has been instrumental in modernizing Node.js’s networking stack, including the implementation of HTTP/2, the WHATWG URL implementation, and the QUIC protocol. Snell is also a key founder and participant in the WinterCG (Web-interoperable Runtimes Community Group), an effort dedicated to aligning standards across disparate JavaScript runtimes.

Abstract

This article provides an analytical deep dive into the concept and implementation of Asynchronous Context Tracking in JavaScript runtimes, focusing on Node.js’s existing AsyncLocalStorage (ALS) API and the proposed AsyncContext standard. It explains the critical problem of preserving request-specific contextual data (e.g., request IDs or transaction details) across asynchronous I/O boundaries in highly concurrent environments. The article details the technical methodology, which relies on Async Hooks and a Context Frame Stack, and discusses the implications of the TC-39 standardization effort to create a portable, globally accessible AsyncContext API across runtimes like Node.js, Cloudflare Workers, Deno, and Bun.

Context: The Challenge of Asynchronous Execution Flow

In a concurrent, non-blocking I/O model like Node.js, the execution of a single logical operation (e.g., handling one HTTP request) is typically fragmented across multiple asynchronous callbacks. The JavaScript engine often switches between different logical requests while waiting for I/O operations to complete, making it impossible to rely on simple global or thread-local variables for storing request-specific metadata. The challenge is ensuring that contextual information (such as a unique request identifier or security principal) is preserved and accessible to every segment of the logical operation’s flow, regardless of how many other concurrent operations interleave with it.

Methodology: Context Frames and Async Hooks

Asynchronous Context Tracking solves this by establishing a mechanism to associate a context frame (a logical map of key/value pairs) with the execution flow of an asynchronous operation.

  • The Role of Async Hooks: The foundation of this system is the Async Hook API (or its internal equivalent in other runtimes). The runtime uses these hooks to trace the lifecycle of asynchronous resources (e.g., timers, network requests). Every time an asynchronous operation is created or executed, the runtime utilizes the hooks to push and pop context frames onto a dedicated stack for that specific asynchronous flow.
  • The run and getStore/get Methods: The primary interface for managing context is the run method (available on both AsyncLocalStorage and AsyncContext). When a function is wrapped in store.run(value, callback), it initiates a new context frame containing that value, ensuring that all subsequent asynchronous operations originating from the callback have access to the frame. The getStore (ALS) or get (AsyncContext) method then accesses the value from the current frame on the stack.
  • Copy-on-Run Principle: Critically, the run method ensures that context is copied and isolated for the new frame. Modifying a context value within a run call does not affect the context of the calling function, preventing data leakage or corruption between concurrent requests.

The Evolution to AsyncContext and Interoperability

The AsyncLocalStorage API in Node.js, initially residing in node:async_hooks, has proven the utility of this model, leading to its adoption in other runtimes. The subsequent step is the standardization of AsyncContext by the TC-39 committee. The changes between the two APIs are minimal—primarily making the API a global object and renaming getStore to get—but the implications are profound. The standardization effort ensures that this crucial pattern for context propagation becomes portable and interoperable across the entire JavaScript ecosystem, benefiting Node.js, Cloudflare Workers, Deno, and Bun.


Hashtags: #AsyncContext #AsyncLocalStorage #NodeJS #JavaScriptRuntimes #AsyncHooks #WinterCG #TC39

[DevoxxGR2024] Small Steps Are the Fastest Way Forward: Navigating Chaos in Software Development

Sander Hoogendoorn, CTO at iBOOD, delivered an engaging and dynamic talk at Devoxx Greece 2024, addressing the challenges of software development in a rapidly changing world. Drawing from his extensive experience as a programmer, architect, and leader, Sander explored how organizations can overcome technical debt and the innovator’s dilemma by embracing continuous experimentation, small teams, and short delivery cycles. His narrative, peppered with real-world anecdotes, offered practical strategies for navigating complexity and fostering innovation in a post-agile landscape.

Understanding Technical Debt and Quality

Sander opened by tackling the elusive concept of software quality, contrasting it with tangible products like coffee or cars, where higher quality correlates with higher cost. In software, quality—encompassing maintainability, testability, and reliability—is harder to quantify and often lacks a direct price relationship. He introduced Ward Cunningham’s concept of technical debt, where initial shortcuts accelerate development but, if unaddressed, can cripple organizations. Sander shared an example from an insurance company with 18 million lines of COBOL and 12 million lines of Java, where outdated code and retiring developers created a maintenance nightmare. Similarly, at iBOOD, a patchwork of systems led to “technical death,” where maintenance consumed all resources, stifling innovation.

To mitigate technical debt, Sander advocated for continuous refactoring as part of daily work, rather than a separate task requiring approval. He emphasized finding a balance between quality and cost, tailored to the organization’s goals—whether building a quick mobile app or a long-lasting banking system.

The Innovator’s Dilemma and Continuous Renovation

Sander introduced the innovator’s dilemma, where successful products reach a saturation point, and new entrants with innovative technologies disrupt the market. He recounted his experience at a company that pioneered smart thermostats but failed to reinvent itself, leading to its acquisition and dissolution. To avoid this fate, organizations must operate in “continuous renovation mode,” maintaining existing systems while incrementally building new features. This approach, inspired by John Gall’s law—that complex systems evolve from simple, working ones—requires small, iterative steps rather than large-scale rebuilds.

At iBOOD, Sander implemented this by allocating 70% of resources to innovation and 30% to maintenance, ensuring the “shop stays open” while progressing toward strategic goals. He emphasized the importance of defining a clear “dot on the horizon,” such as iBOOD’s ambition to become Europe’s leading deal site, to guide these efforts.

Navigating Complexity with the Cynefin Framework

To navigate the chaotic and complex nature of modern software development, Sander introduced the Cynefin framework, which categorizes problems into clear, complicated, complex, and chaotic zones. Most software projects reside in the complex zone, where no best practices exist, and experimentation is essential. He cautioned against treating complex problems as complicated, citing failed attempts at iBOOD’s insurance client to rebuild systems from scratch. Instead, organizations should run small experiments, accepting the risk of failure as a path to learning.

Sander illustrated this with iBOOD’s decision-making process, where a cross-functional team evaluates ideas based on their alignment with strategic goals, feasibility, and size. Ideas too large are broken into smaller pieces, ensuring manageable experiments that deliver quick feedback.

Delivering Features in Short Cycles

Sander argued that traditional project-based approaches and even Scrum’s sprint model are outdated in a world demanding rapid iteration. He advocated for continuous delivery, where features are deployed multiple times daily, minimizing dependencies and enabling immediate feedback. At iBOOD, features are released in basic versions, refined based on business input, and prioritized over less critical tasks. This approach, supported by automated CI/CD pipelines and extensive testing, ensures quality is built into the process, reducing reliance on manual inspections.

He shared iBOOD’s pipeline, which includes unit tests, static code analysis, and production testing, allowing developers to code with confidence. By breaking features into small, independent services, iBOOD achieves flexibility and resilience, avoiding the pitfalls of monolithic systems.

Empowering Autonomous Micro-Teams

Finally, Sander addressed the human element of software development, arguing that the team, not the individual, is the smallest unit of delivery. He advocated for autonomous “micro-teams” that self-organize around tasks, drawing an analogy to jazz ensembles where musicians form sub-groups based on skills. At iBOOD, developers choose their tasks and collaborators, fostering learning and flexibility. This autonomy, while initially uncomfortable for some, encourages ownership and innovation.

Sander emphasized minimizing rules to promote critical thinking, citing an Amsterdam experiment where removing traffic signs improved road safety through communication. By eliminating Scrum rituals like sprints and retrospectives, iBOOD’s teams focus on solving one problem daily, enhancing efficiency and morale.

Conclusion

Sander Hoogendoorn’s talk at Devoxx Greece 2024 offered a refreshing perspective on thriving in software development’s chaotic landscape. By addressing technical debt, embracing the innovator’s dilemma, and leveraging the Cynefin framework, organizations can navigate complexity through small, experimental steps. Continuous delivery and autonomous micro-teams further empower teams to innovate rapidly and sustainably. Sander’s practical insights, grounded in his leadership at iBOOD, provide a compelling blueprint for organizations seeking to evolve in a post-agile world.


[GoogleIO2024] Developer Keynote: Innovations in AI and Development Tools at Google I/O 2024

The Developer Keynote at Google I/O 2024 showcased a transformative vision for software creation, emphasizing how generative artificial intelligence is reshaping the landscape for creators worldwide. Delivered by a team of Google experts, the session highlighted accessible AI models, enhanced productivity across platforms, and new tools designed to simplify complex workflows. This presentation underscored Google’s commitment to empowering millions of developers through an ecosystem that spans billions of devices, fostering innovation without the burden of underlying infrastructure challenges.

Advancing AI Accessibility and Model Integration

A core theme of the keynote revolved around making advanced AI capabilities available to every programmer. The speakers introduced Gemini 1.5 Flash, a lightweight yet powerful model optimized for speed and cost-effectiveness, now accessible globally via the Gemini API in Google AI Studio. This tool balances quality, efficiency, and affordability, enabling developers to experiment with multimodal applications that incorporate audio, video, and extensive context windows. For instance, Jacqueline demonstrated a personal workflow where voice memos and prior blog posts were synthesized into a draft article, illustrating how large context windows—up to two million tokens—unlock novel interactions while reducing computational expenses through features like context caching.

This approach extends beyond simple API calls, as the team emphasized techniques such as model tuning and system instructions to personalize outputs. Real-world examples included Loc.AI’s use of Gemini for renaming elements in frontend designs from Figma, enhancing code readability by interpreting nondescript labels. Similarly, Invision leverages the model’s speed for real-time environmental descriptions aiding low-vision users, while Zapier automates podcast editing by removing filler words from audio uploads. These cases highlight how Gemini empowers practical transformations, from efficiency gains to user delight, encouraging participation in the Gemini API developer competition for innovative applications.

Enhancing Mobile Development with Android and Gemini

Shifting focus to mobile ecosystems, the keynote delved into Android’s evolution as an AI-centric operating system. With over three billion devices, Android now integrates Gemini to enable on-device experiences that prioritize privacy and low latency. Gemini Nano, the most efficient model for edge computing, powers features like smart replies in messaging without data leaving the device, available on select hardware like the Pixel 8 Pro and Samsung Galaxy S24 series, with broader rollout planned.

Early adopters such as Patreon and Grammarly showcased its potential: Patreon for summarizing community chats, and Grammarly for intelligent suggestions. Maru elaborated on Kotlin Multiplatform support in Jetpack libraries, allowing shared business logic across Android, iOS, and web, as seen in Google Docs migrations. Compose advancements, including performance boosts and adaptive layouts, were highlighted, with examples from SoundCloud demonstrating faster UI development and cross-form-factor compatibility. Testing improvements, like Android Device Streaming via Firebase and resizable emulators, ensure robust validation for diverse hardware.

Jamal illustrated Gemini’s role in Android Studio, evolving from Studio Bot to provide code optimizations, translations, and multimodal inputs for rapid prototyping. A demo converted a wireframe image into functional Jetpack Compose code, underscoring how AI accelerates from ideation to implementation.

Revolutionizing Web and Cross-Platform Experiences

The web’s potential was amplified through AI integrations, marking its 35th anniversary with tools like WebGPU and WebAssembly for on-device inference. John discussed how these enable efficient model execution across devices, with examples like Bilibili’s 30% session duration increase via MediaPipe’s image recognition. Chrome’s enhancements, including AI-powered dev tools for error explanations and code suggestions, streamline debugging, as shown in a Boba tea app troubleshooting CORS issues.

Aaron introduced Project IDX, now in public beta, as an integrated workspace for full-stack, multiplatform development, incorporating Google Maps, DevTools, and soon Checks for privacy compliance. Flutter’s updates, including WebAssembly support for up to 2x performance gains, were exemplified by Bricket’s cross-platform expansion. Firebase’s evolution, with Data Connect for SQL integration, App Hosting for scalable web apps, and Genkit for seamless AI workflows, further simplifies backend connections.

Customizing AI Models and Future Prospects

Shabani and Lawrence explored open models like Gemma, with new variants such as PaliGemma for vision-language tasks and the upcoming Gemma 2 for enhanced performance on optimized hardware. A demo in Colab illustrated fine-tuning Gemma for personalized book recommendations, using synthetic data from Gemini and on-device inference via MediaPipe. Project Gameface’s Android expansion demonstrated accessibility advancements, while an early data science agent concept showcased multi-step reasoning with long context.

The keynote concluded with resources like accelerators and the Google Developer Program, emphasizing community-driven innovation. Eugene AI’s emissions reduction via DeepMind research exemplified real-world impact, reinforcing Google’s ecosystem for reaching global audiences.

Links:

PostHeaderIcon [DotJs2024] Generative UI: Bring your React Components to AI Today!

The fusion of artificial intelligence and frontend development is reshaping how we conceive interactive experiences, placing JavaScript engineers at the vanguard of this transformation. Malte Ubl, CTO at Vercel, captivated audiences at dotJS 2024 with a compelling exploration of Generative UI, a pivotal advancement in Vercel’s AI SDK. Originally hailing from Germany and now entrenched in Silicon Valley’s innovation hub, Ubl reflected on the serendipitous echoes of past tech eras—from CGI uploads via FTP to his own contributions like Whiz at Google—before pivoting to AI’s seismic impact. His message was unequivocal: frontend expertise isn’t obsolete in the AI surge; it’s indispensable, empowering developers to craft dynamic, context-aware interfaces that transcend textual exchanges.

Ubl framed the narrative around a paradigm shift from Software 1.0’s laborious machine learning to Software 2.0’s accessible, API-driven intelligence. Where once PhD-level Python tinkering dominated, today’s landscape favors TypeScript applications invoking large language models (LLMs) as services. Models have ballooned in scale and savvy, rendering fine-tuning optional and prompting paramount. This velocity—shipping products in days rather than years—democratizes AI development, yet disrupts traditional roles. Ubl’s optimism stems from a clear positioning: frontend developers as architects of human-AI symbiosis, leveraging React components to ground abstract prompts in tangible interactions.

Central to his demonstration was a conversational airline booking interface, where users query seat changes via natural language. Conventional AI might bombard users with options like 14C or 19D, overwhelming without context. Generative UI elevates this: the LLM invokes React server components as functions, streaming an interactive seat map pre-highlighting viable window seats. Users manipulate the UI directly—selecting, visualizing availability—bypassing verbose back-and-forth. Ubl showcased the underlying simplicity: a standard React project with TypeScript files for boarding passes and seat maps, hot-module-reloading enabled, running locally. The magic unfolds via AI functions—React server components that embed client-side state, synced back to the LLM through an “AI state” mechanism. Selecting 19C triggers a callback: “User selected seat 19C,” enabling seamless continuations like checkout flows yielding digital boarding passes.

This isn’t mere novelty; Ubl underscored practical ramifications. End-user chatbots gain depth, support teams wield company-specific components for real-time adjustments, and search engines like the open-source Wary (a Perplexity analog) integrate existing product renderers for enriched results. Accessibility leaps forward too: retrofitting legacy sites with AI state turns static pages into voice-navigable experiences, empowering non-traditional input modalities. Ubl likened AI to a potent backend—API calls fetching not raw data, but rendered intelligence—amplifying human-computer dialogue beyond text. As models from OpenAI, Google Gemini, Anthropic’s Claude, and Mistral proliferate, frontend differentiation via intuitive UIs becomes the competitive edge, uplifting the stack’s user-facing stratum.

Ubl’s closing exhortation: embrace this disruption by viewing React components as AI-native building blocks. Vercel’s AI SDK examples offer starter chatbots primed for customization, accelerating prototyping. In a world where AI smarts escalate, frontend artisans—adept at state orchestration and visual storytelling—emerge as the revolution’s fulcrum, forging empathetic, efficient digital realms.

The Dawn of AI-Infused Interfaces

Ubl vividly contrasted archaic AI chats with generative prowess, using an airline scenario to highlight contextual rendering’s superiority. Prompts yield not lists, but explorable maps—streamed via server components—where selections feed back into the AI loop. This bidirectional flow, powered by AI state, ensures coherence, transforming passive queries into collaborative sessions. Ubl’s live demo, from flight selection to boarding pass issuance, revealed the unobtrusive elegance: plain React, no arcane setups, just LLM-orchestrated functions bridging intent and action.

Empowering Developers in the AI Era

Beyond demos, Ubl advocated for strategic adoption, spotlighting use cases like e-commerce search enhancements and accessibility overlays. Existing components slot into AI workflows effortlessly, while diverse models foster toolkit pluralism. The SDK’s documentation and examples lower barriers, inviting experimentation. Ubl’s thesis: as AI commoditizes logic, frontend’s artistry—crafting modality-agnostic interactions—secures its primacy, heralding an inclusive future where developers orchestrate intelligence with familiar tools.

Links:

PostHeaderIcon [DevoxxFR 2024] Debugging Your Salary: Winning Strategies for Successful Negotiation

At Devoxx France 2024, Shirley Almosni Chiche, an independent IT recruiter and career agent, delivered a dynamic session titled “Debuggez votre salaire ! Mes stratégies gagnantes pour réussir sa négociation salariale.” With over a decade of recruitment experience, Shirley unpacked the complexities of salary negotiation, offering actionable strategies to overcome common obstacles. Through humor, personas, and real-world insights, she empowered developers to approach salary discussions with confidence and preparation, transforming a daunting process into a strategic opportunity.

Shirley opened with a candid acknowledgment: salary discussions are fraught with tension, myths, and frustrations. Drawing from her role at Build RH, her recruitment firm, she likened salary negotiation to a high-stakes race, where candidates endure lengthy recruitment processes only to face disappointing offers. Common employer excuses—“we must follow the salary grid,” “we can’t pay more than existing staff,” or “the budget is tight”—often derail negotiations, leaving candidates feeling undervalued.

To frame her approach, Shirley introduced six “bugs” that justify low salaries, each paired with a persona representing typical employer archetypes. These included the rigid “Big Corp” manager enforcing salary grids, the team-focused “Didier Deschamps” avoiding pay disparities, and the budget-conscious “François Damiens” citing financial constraints. Other personas, like the overly technical “Elon” scrutinizing code, the relentless negotiator “Patrick,” and the discriminatory “Hubert,” highlighted diverse challenges candidates face.

Shirley shared market insights, noting a 2023–2024 tech slowdown with 200,000 global layoffs, reduced venture funding, and a shift toward cost-conscious industries like banking and retail. This context, she argued, demands strategic preparation to secure fair compensation.

Countering the Bugs: Tactical Responses

For each bug, Shirley offered counter-arguments rooted in empathy and alignment with employer priorities. Against the salary grid, she advised exploring non-salary benefits like profit-sharing or PERCO plans, common in large firms. Using a “mirror empathy” tactic, candidates can frame salary needs in the employer’s language—e.g., linking pay to productivity. Challenging outdated grids by highlighting market research or internal surveys also strengthens arguments.

For the “Didier Deschamps” persona, Shirley suggested emphasizing unique skills (e.g., full-stack expertise in a backend-heavy team) to justify higher pay without disrupting team cohesion. Proposing contributions like speaking at conferences or aiding recruitment can further demonstrate value. She shared a success story where a candidate engaged the team directly, securing a better offer through collective dialogue.

When facing “François Damiens” and financial constraints, Shirley recommended focusing on risk mitigation. For startups, candidates can negotiate stock options or bonuses, arguing that their expertise accelerates product delivery, saving recruitment costs. Highlighting polyvalence—combining skills like development, data, and security—positions candidates as multi-role assets, justifying premium pay.

For technical critiques from “Elon,” Shirley urged immediate feedback post-interview to address perceived weaknesses. If gaps exist, candidates should negotiate training opportunities to ensure long-term fit. Pointing out evaluation mismatches (e.g., testing frontend skills for a backend role) can redirect discussions to relevant strengths.

Against “Patrick,” the negotiator, Shirley advised setting firm boundaries—two rounds of negotiation max—to avoid endless haggling. Highlighting project flaws tactfully and aligning expertise with business goals can shift the dynamic from adversarial to collaborative.

Addressing Discrimination: A Sobering Reality

Shirley tackled the “Hubert” persona, representing discriminatory practices, with nuance. Beyond gender pay gaps, she highlighted biases against older candidates, neurodivergent individuals, those with disabilities, and career switchers. Citing her mother’s experience as a Maghrebi woman facing a 20% pay cut, Shirley acknowledged the harsh realities for marginalized groups.

Rather than dismissing discriminatory offers outright, she advised viewing them as career stepping stones. Candidates can leverage such roles for training or experience, using “mirror empathy” to negotiate non-salary benefits like remote work or learning opportunities. While acknowledging privilege, Shirley urged resilience, encouraging candidates to “lend an ear to learning” and rebound from setbacks.

Mastering Preparation: Anticipating the Negotiation

Shirley emphasized proactive preparation as the cornerstone of successful negotiation. Understanding one’s relationship with money—shaped by upbringing, traumas, or social pressures—is critical. Some candidates undervalue themselves due to impostor syndrome, while others see salary as a status symbol or family lifeline. Recognizing these drivers informs negotiation strategies.

She outlined key preparation steps:

  • Job Selection: Target roles within your expertise and in high-paying sectors (e.g., cloud, security) for better leverage. Data roles can yield 7–13% salary gains.
  • Market Research: Use resources like Choose Your Boss or APEC barometers to benchmark salaries. Shirley noted Île-de-France salaries exceed regional ones by 10–15K, with a 70K ceiling for seniors in 2023.
  • Company Analysis: Assess financial health via LinkedIn or job ad longevity. Long-posted roles signal negotiation flexibility.
  • Recruiter Engagement: Treat initial recruiter calls as data-gathering opportunities, probing team culture, hiring urgency, and technical expectations.
  • Value Proposition: Highlight impact—product roadmaps, technical migrations, or team mentoring—early in interviews to set a premium tone.

Shirley cautioned against oversharing personal financial details (e.g., current salary or expenses) during salary discussions. Instead, provide a specific range (e.g., “around 72K”) based on market data and role demands. Mentioning parallel offers tactfully can spur employers to act swiftly.

Sealing the Deal: Confidence and Coherence

In the final negotiation phase, Shirley advised a 48-hour reflection period after receiving an offer, consulting trusted peers for perspective. Counteroffers should be fact-based, reiterating interview insights and using empathetic language. Timing matters—avoid Mondays or late Fridays for discussions.

Citing APEC data, Shirley noted that 80% of executives who negotiate are satisfied, with 65% securing their target salary or higher. She urged candidates to remain consistent, avoiding last-minute demands that erode trust. Beyond salary, consider workplace culture, inclusion, and work-life balance to ensure long-term fit.

Shirley closed with a rallying call: don’t undervalue your skills or settle for less. By blending preparation, empathy, and resilience, candidates can debug their salary negotiations and secure rewarding outcomes.

Hashtags: #SalaryNegotiation #DevoxxFrance #CareerDevelopment #TechRecruitment

PostHeaderIcon [PHPForumParis2023] You Build It, You Run It: Observability for Developers – Smaïne Milianni

Smaïne Milianni, a former taxi driver turned PHP developer, delivered an engaging talk at Forum PHP 2023, exploring the “You Build It, You Run It” philosophy and the critical role of observability in modern development. Now an Engineering Manager at Yousign, Smaïne shared insights from his decade-long journey in PHP, emphasizing how observability tools like logs, metrics, traces, and alerts empower developers to maintain robust applications. His practical approach and humorous delivery offered actionable strategies for PHP developers to enhance system reliability and foster a culture of continuous improvement.

The Essence of Observability

Smaïne introduced observability as the cornerstone of the “You Build It, You Run It” model, where developers are responsible for both building and maintaining their applications. He explained how observability encompasses logs, metrics, traces, and custom alerts to monitor system health. Using real-world examples, Smaïne illustrated how these tools help identify issues, such as application errors or system outages, before they escalate. His emphasis on proactive monitoring resonated with developers seeking to ensure their PHP applications remain stable and performant.

Implementing Observability in PHP

Diving into practical applications, Smaïne outlined how to integrate observability into PHP projects. He highlighted tools like Datadog for collecting metrics and traces, and demonstrated how to set up alerts for critical incidents, such as P1 outages that trigger SMS and email notifications. Smaïne stressed the importance of prioritizing alerts based on severity to avoid notification fatigue. His examples, drawn from his experience at Yousign, provided a clear roadmap for developers to implement observability, ensuring rapid issue detection and resolution.

The Power of Post-Mortems

Smaïne concluded by emphasizing the role of post-mortems in fostering a virtuous cycle of improvement. Responding to an audience question, he explained how his team conducts weekly manager reviews to track post-mortem actions, ensuring they are prioritized and addressed. By treating errors as learning opportunities rather than failures, Smaïne’s approach encourages developers to refine their code and systems iteratively. His talk inspired attendees to adopt observability practices that enhance both technical reliability and team collaboration.

Links: