
Posts Tagged ‘Google’

[KotlinConf2025] Closing Panel

The concluding panel of KotlinConf2025 offered a vibrant and candid discussion, serving as the capstone to the conference. The diverse group of experts from JetBrains, Netflix, and Google engaged in a wide-ranging dialogue, reflecting on the state of Kotlin, its evolution, and the path forward. They provided a unique blend of perspectives, from language design and backend development to mobile application architecture and developer experience. The conversation was an unfiltered look into the challenges and opportunities facing the Kotlin community, touching on everything from compiler performance to the future of multiplatform development.

The Language and its Future

A central theme of the discussion was the ongoing development of the Kotlin language itself. The panel members, including Simon from the K2 compiler team and Michael from language design, shared insights into the rigorous process of evolving Kotlin. They addressed questions about new language features and the careful balance between adding functionality and maintaining simplicity. A notable point of contention and discussion was the topic of coroutines and the broader asynchronous programming landscape. The experts debated the best practices for managing concurrency and how Kotlin’s native features are designed to simplify these complex tasks. There was a consensus that while new features are exciting, the primary focus remains on stability, performance, and enhancing the developer experience.
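The concurrency point is easiest to see in code. Below is a minimal sketch of structured concurrency with kotlinx.coroutines, not drawn from the panel itself: two child coroutines run under one parent scope, so a failure in either cancels the other and surfaces as a single exception. The suspend functions and values are invented purely for illustration.

```kotlin
import kotlinx.coroutines.async
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.runBlocking

// Hypothetical lookups, standing in for real I/O.
suspend fun fetchUserName(): String { delay(100); return "Grace" }
suspend fun fetchUserScore(): Int { delay(150); return 42 }

// coroutineScope ties both children to the caller: if one fails,
// the other is cancelled and the exception propagates upward.
suspend fun loadProfile(): String = coroutineScope {
    val name = async { fetchUserName() }
    val score = async { fetchUserScore() }
    "${name.await()} has ${score.await()} points"
}

fun main() = runBlocking {
    println(loadProfile()) // prints "Grace has 42 points"
}
```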

The State of Multiplatform Development

The conversation naturally shifted to Kotlin Multiplatform (KMP), which has become a cornerstone of the Kotlin ecosystem. The panelists explored the challenges and successes of building applications that run seamlessly across different platforms. Representatives from companies like Netflix and AWS, who are using KMP for large-scale projects, shared their experiences. They discussed the complexities of managing shared codebases, ensuring consistent performance, and maintaining a robust build system. The experts emphasized that while KMP offers immense benefits in terms of code reuse, it also requires a thoughtful approach to architecture and toolchain management. The panel concluded that KMP is a powerful tool, but its success depends on careful planning and a deep understanding of the underlying platforms.

Community and Ecosystem

Beyond the technical discussions, the panel also reflected on the health and vibrancy of the Kotlin community. A developer advocate, SA, and others spoke about the importance of fostering an inclusive environment and the role of the community in shaping the language. They highlighted the value of feedback from developers and the critical role it plays in guiding the direction of the language and its tooling. The discussion also touched upon the broader ecosystem, including the various libraries and frameworks that have emerged to support Kotlin development. The panel’s enthusiasm for the community was palpable, and they expressed optimism about Kotlin’s continued growth and adoption in the years to come.

Links:

[KotlinConf2025] The Life and Death of a Kotlin Native Object

The journey of an object within a computer’s memory is a topic that is often obscured from the everyday developer. In a highly insightful session, Troels Lund, a leader on the Kotlin/Native team at Google, delves into the intricacies of what transpires behind the scenes when an object is instantiated and subsequently discarded within the Kotlin/Native runtime. This detailed examination provides a compelling look at a subject that is usually managed automatically, demonstrating the sophisticated mechanisms at play to ensure efficient memory management and robust application performance.

The Inner Workings of the Runtime

Lund begins by exploring the foundational elements of the Kotlin/Native runtime, highlighting its role in bridging the gap between high-level Kotlin code and the native environment. The runtime is responsible for a variety of critical tasks, including memory layout, garbage collection, and managing object lifecycles. One of the central tenets of this system is its ability to handle memory allocation and deallocation with minimal developer intervention. The talk illustrates how an object’s structure is precisely defined in memory, a crucial step for both performance and predictability. This low-level perspective offers a new appreciation for the seamless operation that developers have come to expect.

A Deep Dive into Garbage Collection

The talk then progresses to the sophisticated mechanisms of garbage collection. A deep dive into the Kotlin/Native memory model reveals a system designed for both performance and concurrency. Lund describes two collector configurations: parallel mark with concurrent sweep, and concurrent mark and sweep. The former maximizes throughput by parallelizing the marking phase across threads, while the latter minimizes pause times by letting marking proceed alongside application execution. The session details how these processes identify and reclaim memory from objects that are no longer in use, preventing memory leaks and maintaining system stability. The discussion also touches upon weak references and their role in memory management. Lund explains how these references are cleared in a timely manner, ensuring that objects which should be garbage-collected are not accidentally resurrected.
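To make the weak-reference behaviour concrete, here is a small Kotlin/Native sketch. It assumes kotlin.native.ref.WeakReference and the opt-in kotlin.native.runtime.GC API; the exact GC entry point has moved between releases, so treat the collect() call as illustrative rather than canonical.

```kotlin
import kotlin.native.ref.WeakReference
import kotlin.native.runtime.GC
import kotlin.native.runtime.NativeRuntimeApi

class Session(val id: Int)

@OptIn(NativeRuntimeApi::class)
fun main() {
    var strong: Session? = Session(1)
    val weak = WeakReference(strong!!)

    println(weak.get()?.id) // 1: the object is still strongly reachable

    strong = null           // drop the last strong reference
    GC.collect()            // request a full collection (API surface varies by Kotlin version)

    println(weak.get()?.id) // expected: null, the weak reference has been cleared
}
```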

Final Thoughts on the Runtime

In his concluding remarks, Lund offers a final summary of the Kotlin/Native runtime. He reiterates that this is a snapshot of what is happening now, and that the details are subject to change over time as new features are added and existing ones are optimized. He emphasizes that the goal of the team is to ensure that the developer experience is as smooth and effortless as possible, with the intricate details of memory management handled transparently by the runtime. The session serves as a powerful reminder of the complex engineering that underpins the simplicity and elegance of the Kotlin language, particularly in its native context.

Links:

[DotJs2024] Our Future Without Passwords

Picture a horizon where authentication dissolves into biometric whispers and cryptographic confidences, banishing the tyranny of forgotten passphrases. Maud Nalpas, a fervent advocate for web security at Google, charted this trajectory at dotJS 2024, escorting audiences through passkeys’ ascent—a paradigm supplanting passwords with phishing-proof, breach-resistant elegance. With a lens honed on Chrome’s privacy vanguard, Maud dissected the relic’s frailties, from passwords’ implication in roughly 81% of breaches to mnemonic mayhem, before unveiling passkeys as the seamless salve.

Maud’s reverie evoked 1999’s innocence: Solitaire sessions interrupted by innocuous files, now echoed in 2024’s tax-season tedium—yet passwords persist, unyielding. Their design flaws—reusability, server-side secrets—fuel epidemics, mitigated marginally by managers yet unsolved at root. Enter passkeys: cryptographic duos, private halves cradled in device enclaves, publics enshrined server-side. Creation’s choreography: a GitHub prompt summons Google’s credential vault, fingerprint affirms, yielding a named token. Login? A tap unlocks biometrics, end-to-end encryption syncing across ecosystems—iCloud, 1Password—sans exposure.

This ballet boasts trifecta virtues. Usability gleams: no rote recall, mere device nudge. Economics entice: dual-role as MFA slashes SMS tolls. Security soars: no server secrets—biometrics localize, publics inert—phishing foiled by domain-binding; faux sites summon voids. Adoption surges—Amazon, PayPal vanguard—spanning web and native, browsers from Chrome to Safari, platforms Android to macOS. Caveats linger: Linux/Firefox lags, cross-ecosystem QR fallbacks bridge. Maud heralded 2024’s synchrony strides, Google’s Password Manager poised for ubiquity.

Implementation beckons via passkeys.directory: libraries like @simplewebauthn streamline, UX paramount—progressive prompts easing novices. Maud’s missive: trial as user, embed as architect; this future, phishing-free and frictionless, awaits invocation.

Passkeys’ Cryptographic Core

Maud illuminated the duo: private keys, hardware-harbored, sign challenges; publics verify, metadata minimal. Sync veils in E2EE—Google’s vault, Apple’s chain—device recovery via QR or recreation. Phishing’s nemesis: origin-tied, spoofed realms elicit absences, thwarting lures.

Adoption Accelerants and Horizons

Cross-platform chorus—Windows Edge, iOS Safari—minus Linux/Firefox snags, soon salved. Costs dwindle via MFA fusion; UX evolves prompts contextually. Maud’s clarion: libraries scaffold, inspiration abounds—forge passwordless realms resilient and radiant.

Links:

[DefCon32] The Way to Android Root: Exploiting Smartphone GPU

Xiling Gong and Eugene Rodionov, security researchers at Google, delve into the vulnerabilities of Qualcomm’s Adreno GPU, a critical component in Android devices. Their presentation uncovers nine exploitable flaws leading to kernel code execution, demonstrating a novel exploit method that bypasses Android’s CFI and W^X mitigations. Xiling and Eugene’s work highlights the risks in GPU drivers and proposes actionable mitigations to enhance mobile security.

Uncovering Adreno GPU Vulnerabilities

Xiling opens by detailing their analysis of the Adreno GPU kernel module, prevalent in Qualcomm-based devices. Their research identified nine vulnerabilities, including race conditions and integer overflows, exploitable from unprivileged apps. These flaws, discovered through meticulous fuzzing, expose the GPU’s complex attack surface, making it a prime target for local privilege escalation.

Novel Exploit Techniques

Eugene describes their innovative exploit method, leveraging GPU features to achieve arbitrary physical memory read/write. By exploiting a race condition, they achieved kernel code execution with 100% success on a fully patched Android device. This technique bypasses Control-flow Integrity (CFI) and Write XOR Execute (W^X) protections, demonstrating the potency of GPU-based attacks and the need for robust defenses.

Challenges in GPU Security

The duo highlights the difficulties in securing GPU drivers, which are accessible to untrusted code and critical for performance. Xiling notes that Android’s reliance on in-process GPU handling, unlike isolated IPC mechanisms, exacerbates risks. Their fuzzing efforts, tailored for concurrent code, revealed the complexity of reproducing and mitigating these vulnerabilities, underscoring the need for advanced testing.

Proposing Robust Mitigations

Concluding, Eugene suggests moving GPU operations to out-of-process handling and adopting memory-safe languages to reduce vulnerabilities. Their work, published via Google’s Android security research portal, calls for vendor action to limit attack surfaces. By sharing their exploit techniques, Xiling and Eugene inspire the community to strengthen mobile security against evolving threats.

Links:

[DefCon32] Sudos and Sudon’ts: Peering Inside Sudo for Windows

In a groundbreaking move, Microsoft introduced Sudo for Windows in February 2024, bringing a Unix-like privilege elevation mechanism to Windows 11 Insider Preview. Michael Torres, a security researcher at Google, delves into the architecture of this novel feature, exploring its implementation, inter-process communication, and potential vulnerabilities. Michael’s analysis, rooted in reverse engineering and Rust’s interaction with Windows APIs, uncovers security flaws that challenge the tool’s robustness. His open-source approach invites the community to scrutinize and enhance Sudo for Windows, ensuring it balances usability with security.

Understanding Sudo for Windows

Michael begins by demystifying Sudo for Windows, a utility designed to allow users to execute commands with elevated permissions directly from a non-elevated console. Unlike its Unix counterpart, it leverages User Account Control (UAC) for elevation and Advanced Local Procedure Call (ALPC) for communication between processes. Available in Windows 11 version 24H2, the tool supports three configurations: running commands in a new window, disabling input in the current window, or inline execution akin to Linux sudo. Michael highlights its open-source nature, hosted on GitHub, which enables researchers to dissect its codebase for potential weaknesses.

Security Implications and Rust Challenges

Delving into the technical intricacies, Michael examines how Sudo for Windows interoperates with Windows APIs through Rust, a language touted for memory safety. However, invoking native Windows APIs requires “unsafe” Rust code, introducing risks of memory corruption vulnerabilities—counterintuitive to Rust’s safety guarantees. He identifies non-critical issues reported to Microsoft’s Security Response Center (MSRC) and one embargoed vulnerability, emphasizing the need for rigorous scrutiny. For bug hunters, Michael advises focusing on unsafe Rust boundaries, where Windows API calls create exploitable seams.

Path Resolution and Process Coordination

Michael explores the path resolution process, critical for handling file and relative path inputs in Sudo for Windows. The tool’s reliance on ALPC for coordinating elevated and non-elevated processes introduces complexity, as it must maintain secure communication across privilege boundaries. Missteps in path handling or process elevation could lead to unintended escalations, a concern Michael flags for further investigation. His analysis underscores the delicate balance between functionality and security in this new feature.

Community Engagement and Future Directions

Encouraging community involvement, Michael praises the open-source release, urging researchers to probe the codebase for additional vulnerabilities. As Sudo for Windows rolls out to mainline Windows 11, its adoption could reshape administrative workflows, but only if security holds. He advocates for responsible bug hunting to prevent malicious exploitation, ensuring the tool delivers on its promise of seamless elevation without compromising system integrity.

Links:

[DefCon32] QuickShell: Sharing Is Caring About RCE Attack Chain on QuickShare

In the interconnected world of file sharing, Google’s QuickShare, bridging Android and Windows, presents a deceptively inviting attack surface. Or Yair and Shmuel Cohen, researchers at SafeBreach, uncover ten vulnerabilities, culminating in QuickShell, a remote code execution (RCE) chain exploiting five flaws. Their journey, sparked by QuickShare’s Windows expansion, reveals logical weaknesses that enable file writes, traffic redirection, and system crashes, ultimately combining into a sophisticated RCE.

Or, a vulnerability research lead, and Shmuel, formerly of Check Point, dissect QuickShare’s Protobuf-based protocol. Initial fuzzing yields crashes but no exploits, prompting a shift to logical vulnerabilities. Their findings, responsibly disclosed to Google, lead to patches and two CVEs, addressing persistent Wi-Fi connections and file approval bypasses.

QuickShare’s design, facilitating seamless device communication, lacks robust validation, allowing attackers to manipulate file transfers and network connections. The RCE chain combines these flaws, achieving unauthorized code execution on Windows systems.

Protocol Analysis and Fuzzing

Or and Shmuel begin with QuickShare’s protocol, using hooks to decode Protobuf messages. Their custom fuzzer targets the Windows app, identifying crashes but lacking exploitable memory corruptions. This pivot to logical flaws uncovers issues like unauthenticated file writes and path traversals, exposing user directories.

Tools built for device communication enable precise vulnerability discovery, revealing weaknesses in QuickShare’s trust model.

Vulnerability Discoveries

The researchers identify ten issues: file write bypasses, denial-of-service (DoS) crashes, and Wi-Fi redirection via crafted access points. Notable vulnerabilities include forcing file approvals without user consent and redirecting traffic to malicious networks.

A novel HTTPS MITM technique amplifies the attack, intercepting communications to escalate privileges. These flaws, present in both Android and Windows, highlight systemic design oversights.

Crafting the RCE Chain

QuickShell chains five vulnerabilities: a DoS to destabilize QuickShare, a file write to plant malicious payloads, a path traversal to target system directories, a Wi-Fi redirection to control connectivity, and a final exploit triggering RCE. This unconventional chain leverages seemingly minor bugs, transforming them into a potent attack.

Demonstrations show persistent connections and code execution, underscoring the chain’s real-world impact.

Takeaways for Developers and Defenders

Or and Shmuel emphasize that minor bugs, often dismissed, can cascade into severe threats. The DoS flaw, critical to their chain, exemplifies how non-security issues enable attacks. They advocate holistic security assessments, beyond memory corruptions, to evaluate logical behaviors.

Google’s responsive fixes, completed by January 2025, validate the research’s impact. The team’s open-source tools invite further exploration, urging developers to prioritize robust validation in file-sharing systems.

Links:

[DotJs2024] Converging Web Frameworks

In the ever-evolving landscape of web development, frameworks like Angular and React have long stood as pillars of innovation, each carving out distinct philosophies while addressing the core challenge of synchronizing application state with the user interface. Minko Gechev, an engineering and product leader at Google with deep roots in Angular’s evolution, recently illuminated this dynamic during his presentation at dotJS 2024. Drawing from his extensive experience, including the convergence of Angular with Google’s internal Wiz framework, Gechev unpacked how these tools, once perceived as divergent paths, are now merging toward shared foundational principles. This shift not only streamlines developer workflows but also promises more efficient, performant applications that better serve modern web demands.

Gechev began by challenging a common misconception: despite their surface-level differences—Angular’s class-based templates versus React’s functional JSX components—these frameworks operate under remarkably similar mechanics. At their heart, both construct an abstract component tree, a hierarchical data structure encapsulating state that the framework must propagate to the DOM. This reactivity, as Gechev termed it, was historically managed through traversal algorithms in both ecosystems. For instance, updating a shopping cart’s item quantity in a nested component tree would trigger a full or optimized scan, starting from the root and pruning unaffected branches via Angular’s OnPush strategy or React’s memoization. Yet, as applications scale to thousands of components, these manual optimizations falter, demanding deeper introspection into runtime behaviors.

What emerges from Gechev’s analysis is a narrative of maturation. Benchmarks from the prior year revealed Angular and React grappling similarly with role-swapping scenarios, where entire subtrees require recomputation, while excelling in partial updates. Real-world apps, however, amplify these inefficiencies; traversing vast trees repeatedly erodes performance. Angular’s response? Embracing signals—a reactive primitive now uniting a constellation of frameworks including Ember, Solid, and Vue. Signals enable granular tracking of dependencies at compile time, distinguishing static from dynamic view elements. In Angular, assigning a signal to a property like title and reading it in a template flags precise update loci, minimizing unnecessary DOM touches. React, meanwhile, pursues a compiler-driven path yielding analogous outputs, underscoring a broader industry alignment on static analysis for reactivity.

This convergence extends beyond reactivity. Gechev highlighted dependency injection patterns, akin to React’s Context API, fostering modular state management. Looking ahead, he forecasted alignment on event replay for seamless hydration—capturing user interactions during server-side rendering gaps and replaying them post-JavaScript execution—and fine-grained code loading via partial hydration or island architectures. Angular’s defer views, for example, delineate interactivity islands, hydrating only triggered sections like a navigation bar upon user engagement, slashing initial JavaScript payloads. Coupled with libraries like JSAction for event dispatch, this approach, battle-tested in Google Search, bridges the interactivity chasm without compromising user fidelity.

Gechev’s insights resonate profoundly in an era where framework selection feels paralyzing. With ecosystems like Angular boasting backward compatibility across 4,500 internal applications—each rigorously tested during upgrades—the emphasis tilts toward stability and inclusivity. Developers, he advised, should prioritize tools with robust longevity and vibrant communities, recognizing that syntactic variances mask converging implementations. As web apps demand finer control over performance and user experience, this unification equips builders to craft resilient, scalable solutions unencumbered by paradigm silos.

Signals as the Reactivity Keystone

Delving deeper into signals, Gechev positioned them as the linchpin of modern reactivity, transcending mere state updates to forge dependency graphs that anticipate change propagation. Unlike traditional observables, signals track where their values are read, ensuring updates cascade only to affected nodes. This granularity shines in Angular’s implementation, where signals integrate seamlessly with zoneless change detection, obviating runtime polling. Gechev illustrated this with a user profile and shopping cart example: altering cart quantities ripples solely through relevant branches, sparing unrelated UI like profile displays. React’s compiler echoes this, optimizing JSX into signal-like structures for efficient re-renders. The result? Frameworks shedding legacy traversal overheads, aligning on a primitive that empowers developers to author intuitive, responsive interfaces without exhaustive profiling.
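The mechanic is framework-agnostic, so a toy sketch can convey it without leaning on any Angular or React API. The Kotlin snippet below is invented for illustration, not taken from the talk: a read inside an effect registers a dependency, and a write notifies only the effects that actually read that signal.

```kotlin
// A toy signal: reads performed inside `effect` register dependencies,
// and a write re-runs only the effects that read this particular signal.
class Signal<T>(private var value: T) {
    private val subscribers = mutableSetOf<() -> Unit>()

    fun get(): T {
        currentEffect?.let { subscribers += it } // record the current reader, if any
        return value
    }

    fun set(newValue: T) {
        if (newValue == value) return
        value = newValue
        subscribers.toList().forEach { it() }    // notify only the dependents
    }

    companion object {
        var currentEffect: (() -> Unit)? = null
    }
}

fun effect(body: () -> Unit) {
    Signal.currentEffect = body
    body()                                       // the first run records dependencies
    Signal.currentEffect = null
}

fun main() {
    val cartQuantity = Signal(1)
    val profileName = Signal("Ada")

    effect { println("cart quantity: ${cartQuantity.get()}") }
    effect { println("profile: ${profileName.get()}") }

    cartQuantity.set(3) // re-runs only the cart effect; the profile effect is untouched
}
```

Production implementations add computed signals, equality hooks, and dependency cleanup, but the propagation principle is the same.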

Horizons of Hydration and Modularity

Peering into future convergences, Gechev envisioned event replay and modular loading as transformative forces. Event replay, via tools like JSAction now in Angular’s developer preview, mitigates hydration delays by queuing interactions during static markup rendering. Meanwhile, defer views pioneer island-based hydration, loading JavaScript incrementally based on viewport or interaction cues—much like Astro’s server-side islands or Remix’s partial strategies. Dependency injection further unifies this, providing scoped services that mirror React’s context while scaling to enterprise needs. Gechev’s vision: a web where frameworks dissolve into interoperable primitives, letting developers focus on delighting users rather than wrangling abstractions.

Links:

[KotlinConf2023] KotlinConf’23 Closing Panel: Community Questions and Future Insights

KotlinConf’23 concluded with its traditional Closing Panel, an open forum where attendees could pose their burning questions to a diverse group of experts from the Kotlin community, including key figures from JetBrains and Google. The panel, moderated by Hadi Hariri, featured prominent names such as Roman Elizarov, Egor Tolstoy, Maxim Shafirov (CEO of JetBrains), Svetlana Isakova, Pamela Hill, Sebastian Aigner (all JetBrains), Grace Kloba, Kevin Galligan, David Blanc, Wenbo, Jeffrey van Gogh (all Google), Jake Wharton (Cash App), and Zac Sweers (Slack), among others.

The session was lively, covering a wide range of topics from language features and tooling to ecosystem development and the future of Kotlin across different platforms.

Kotlin’s Ambitions and Language Evolution

One of the initial questions addressed Kotlin’s overarching goal, humorously framed as whether Kotlin aims to “get rid of other programming languages”. Roman Elizarov quipped they only want to get rid of “bad ones,” while Egor Tolstoy clarified that Kotlin’s focus is primarily on application development (services, desktop, web, mobile) rather than systems programming.

Regarding Kotlin 2.0 and the possibility of removing features, the panel indicated a strong preference for maintaining backward compatibility. If a feature were ever considered for removal, it would likely be one with a clearly superior alternative, for instance an older mechanism such as KAPT hypothetically giving way to newer K2-era compiler plugins. The discussion also touched on the desire for a unified, official Kotlin style guide and formatter to reduce community fragmentation around tooling, though Zac Sweers noted that even with an official tool, community alternatives would likely persist.

Multiplatform, Compose, and Ecosystem

A significant portion of the Q&A revolved around Kotlin Multiplatform (KMP) and Compose Multiplatform.
* Dart Interoperability: Questions arose about interoperability between Kotlin/Native (especially for Compose on iOS which uses Skia) and Dart/Flutter. While direct, deep interoperability wasn’t presented as a primary focus, the general sentiment was that both ecosystems are strong, and developers choose based on their needs. The panel emphasized that Compose for iOS aims for a native feel and deep integration with iOS platform features.
* Compose UI for iOS and Material Design: A recurring concern was whether Compose UI on iOS would feel “too Material Design” and not native enough for iOS users. Panelists from JetBrains and Google acknowledged this, stressing ongoing efforts to ensure Compose components on iOS adhere to Cupertino (iOS native) design principles and feel natural on the platform. Jake Wharton added that making Kotlin APIs feel idiomatic to iOS developers is crucial for adoption.
* Future of KMP: The panel expressed strong optimism for KMP’s future, highlighting its stability and growing library support. They see KMP becoming the default way to build applications when targeting multiple platforms with Kotlin. The focus is on making KMP robust and ensuring a great developer experience across all supported targets.

Performance, Tooling, and Emerging Areas

  • Build Times: Concerns about Kotlin/Native build times, especially for iOS, were acknowledged. The team is continuously working on improving compiler performance and reducing build times, with K2 expected to bring further optimizations.
  • Project Loom and Coroutines: Roman Elizarov reiterated points from his earlier talk, stating that Loom is excellent for migrating existing blocking Java code, while Kotlin Coroutines offer finer-grained control and structured concurrency, especially beneficial for UI and complex asynchronous workflows. They are not mutually exclusive and can coexist.
  • Kotlin in Gaming: While not a primary focus historically, the panel acknowledged growing interest and some community libraries for game development with Kotlin. The potential for KMP in this area was also noted.
  • Documentation: The importance of clear, comprehensive, and up-to-date documentation was a recurring theme, with the panel acknowledging it as an ongoing effort.
  • AI and Kotlin: When asked about AI taking developers’ jobs, Zac Sweers offered a pragmatic take: AI won’t take your job, but someone who knows how to use AI effectively might. The panel highlighted that Kotlin is well-suited for building AI tools and applications.

The panel concluded with the exciting reveal of Kotlin’s reimagined mascot, Kodee (spelled K-O-D-E-E), a cute, modern character designed to represent the language and its community. Pins of Kodee were made available to attendees, adding a fun, tangible takeaway to the conference’s close.

Links:

[DevoxxBE2023] Build a Generative AI App in Project IDX and Firebase by Prakhar Srivastav

At Devoxx Belgium 2023, Prakhar Srivastav, a software engineer at Google, unveiled the power of Project IDX and Firebase in crafting a generative AI mobile application. His session illuminated how developers can harness these tools to streamline full-stack, multiplatform app development directly from the browser, eliminating cumbersome local setups. Through a live demonstration, Prakhar showcased the creation of “Listed,” a Flutter-based app that leverages Google’s PaLM API to break down user-defined goals into actionable subtasks, offering a practical tool for task management. His engaging presentation, enriched with real-time coding, highlighted the synergy of cloud-based development environments and AI-driven solutions.

Introducing Project IDX: A Cloud-Based Development Revolution

Prakhar introduced Project IDX as a transformative cloud-based development environment designed to simplify the creation of multiplatform applications. Unlike traditional setups requiring hefty binaries like Xcode or Android Studio, Project IDX enables developers to work entirely in the browser. Prakhar demonstrated this by running Android and iOS emulators side-by-side within the browser, showcasing a Flutter app that compiles to multiple platforms—Android, iOS, web, Linux, and macOS—from a single codebase. This eliminates the need for platform-specific configurations, making development accessible even on lightweight devices like Chromebooks.

The live demo featured “Listed,” a mobile app where users input a goal, such as preparing for a tech talk, and receive AI-generated subtasks and tips. For instance, entering “give a tech talk at a conference” yielded steps like choosing a relevant topic and practicing the presentation, with a tip to have a backup plan for technical issues. Prakhar’s real-time tweak—changing the app’s color scheme from green to red—illustrated the iterative development flow, where changes are instantly reflected in the emulator, enhancing productivity and experimentation.

Harnessing the PaLM API for Generative AI

Central to the app’s functionality is Google’s PaLM API, which Prakhar utilized to integrate generative AI capabilities. He explained that large language models (LLMs), like those powering the PaLM API, act as sophisticated autocomplete systems, predicting likely text outputs based on extensive training data. For “Listed,” the text API was chosen for its suitability in single-turn interactions, such as generating subtasks from a user’s query. Prakhar emphasized the importance of crafting effective prompts, comparing a vague prompt like “the sky is” to a precise one like “complete the sentence: the sky is,” which yields more relevant results.

To enhance the AI’s output, Prakhar employed few-shot prompting, providing the model with examples of desired responses. For instance, for the query “go camping,” the prompt included sample subtasks like choosing a campsite and packing meals, along with a tip about wildlife safety. This structured approach ensured the model generated contextually accurate and actionable suggestions, making the app intuitive for users tackling complex tasks.
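Since few-shot prompting is ultimately careful text assembly, a small sketch makes the shape clear. The app itself is written in Dart, so the Kotlin snippet below only illustrates the prompt structure; the instruction and example text are paraphrased from the talk, not the app’s actual prompt.

```kotlin
// Builds a few-shot prompt: an instruction, worked examples, then the user's goal.
fun buildFewShotPrompt(userGoal: String): String {
    val instruction = "Break the goal into a few actionable subtasks and add one practical tip."

    val examples = listOf(
        "Goal: go camping" to
            "Subtasks: choose a campsite; reserve the spot; pack meals and water.\n" +
            "Tip: store food securely to avoid attracting wildlife."
    )

    return buildString {
        appendLine(instruction)
        appendLine()
        examples.forEach { (goal, answer) ->
            appendLine(goal)
            appendLine(answer)
            appendLine()
        }
        appendLine("Goal: $userGoal")
        append("Subtasks:")
    }
}

fun main() {
    // The model behind the extension (text-bison-001 in the demo) completes the text
    // after "Subtasks:", following the format established by the example.
    println(buildFewShotPrompt("give a tech talk at a conference"))
}
```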

Securing AI Integration with Firebase Extensions

Integrating the PaLM API into a mobile app poses security challenges, particularly around API key exposure. Prakhar addressed this by leveraging Firebase Extensions, which provide pre-packaged solutions to streamline backend integration. Specifically, he used a Firebase Extension to securely call the PaLM API via Cloud Functions, avoiding the need to embed sensitive API keys in the client-side Flutter app. This setup not only enhances security but also simplifies infrastructure management, as the extension handles logging, monitoring, and optional AppCheck for client verification.

In the live demo, Prakhar navigated the Firebase Extensions Marketplace, selecting the “Call PaLM API Securely” extension. With a few clicks, he deployed Cloud Functions that exposed a POST API for sending prompts and receiving AI-generated responses. The code walkthrough revealed a straightforward implementation in Dart, where the app constructs a JSON payload with the prompt, model name (text-bison-001), and temperature (0.25 for deterministic outputs), ensuring seamless and secure communication with the backend.

Building the Flutter App: Simplicity and Collaboration

The Flutter app’s architecture, built within Project IDX, was designed for simplicity and collaboration. Prakhar walked through the main.dart file, which scaffolds the app’s UI with a material-themed interface, an input field for user queries, and a list to display AI-generated tasks. The app uses anonymous Firebase authentication to secure backend calls without requiring user logins, enhancing accessibility. A PromptBuilder class dynamically constructs prompts by combining predefined prefixes and examples, ensuring flexibility in handling varied user inputs.

Project IDX’s integration with Visual Studio Code’s open-source framework added collaborative features. Prakhar demonstrated how developers can invite colleagues to a shared workspace, enabling real-time collaboration. Additionally, the IDE’s AI capabilities allow users to explain selected code or generate new snippets, streamlining development. For instance, selecting the PromptBuilder class and requesting an explanation provided detailed insights into its parameters, showcasing how Project IDX enhances developer productivity.

Links:

[KotlinConf2023] KotlinConf’23 Keynote: The Future of Kotlin is Bright and Multiplatform

KotlinConf’23 kicked off with an energizing keynote, marking a much-anticipated return to an in-person format in Amsterdam. Hosted by Hadi Hariri of JetBrains, the session brought together key figures from both JetBrains and Google, including Roman Elizarov, Svetlana Isakova, Egor Tolstoy, and Grace Kloba (VP Engineering for Android Developer Experience at Google), to share exciting updates and future directions for the Kotlin language and its ecosystem. The conference also featured a global reach with KotlinConf Global events in 41 countries. The main announcements from the keynote are also available in a blog post on the Kotlin blog.

The keynote celebrated Kotlin’s impressive growth, with statistics highlighting its widespread adoption, particularly in Android development where it’s the most popular language, used in over 95% of the top 1000 Android apps. A major focus was the upcoming Kotlin 2.0, centered around the new K2 compiler, which promises significant performance improvements, stability, and a foundation for future language evolution. The K2 compiler is nearing completion and is set to be released as Kotlin 2.0. The IntelliJ IDEA plugin will also adopt the K2 frontend, aligning with IntelliJ releases.

The Evolution of Kotlin: K2 Compiler and Language Features

The K2 compiler was a central theme of the keynote, marking a major milestone for Kotlin. This new compiler front-end, which also powers the IDE, is designed to be faster, more stable, and enable the development of new language features and tooling capabilities more rapidly. Kotlin 2.0, built on K2, is expected to bring these benefits to all Kotlin developers, enhancing both compiler performance and IDE responsiveness.

Looking beyond Kotlin 2.0, the speakers provided a glimpse into potential future language features that are under consideration. These included:
* Static Extensions: Allowing extension functions to be resolved statically, potentially improving performance and clarity.
* Collection Literals: Introducing a more concise syntax for creating collections, such as using square brackets for lists, with efficient implementations (a brief illustrative sketch follows this list).
* Name-Based Destructuring: Offering a more flexible way to destructure objects based on property names rather than just position.
* Context Receivers: A powerful feature for providing contextual information to functions in a more implicit and structured manner. This feature, however, is being approached carefully to ensure it aligns well with Kotlin’s principles.
* Explicit Fields: Providing more control over backing fields for properties.
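To make the collection-literals idea concrete, the sketch below contrasts today’s library functions with the bracketed form mentioned in the keynote; the commented-out line is illustrative only, since no final syntax has been committed to.

```kotlin
fun main() {
    // Today: collections are created through standard library functions.
    val primes = listOf(2, 3, 5, 7)
    val capitals = mapOf("FR" to "Paris", "NL" to "Amsterdam")

    // Direction discussed in the keynote (illustrative only, not final syntax):
    // val primes: List<Int> = [2, 3, 5, 7]

    println(primes.sum())   // 17
    println(capitals["NL"]) // Amsterdam
}
```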

The team emphasized a careful approach to evolving the language, ensuring new features are well-designed and maintainable. Compiler plugins were also highlighted as an avenue for extending Kotlin’s capabilities.

Kotlin in the Ecosystem: Google’s Investment and Multiplatform Growth

Grace Kloba from Google took the stage to reiterate Google’s strong commitment to Kotlin. She shared insights into Google’s investments in the Kotlin ecosystem, including the development of Kotlin Symbol Processing (KSP) and the continued focus on making Kotlin the default choice for Android development. Google announced official support for Kotlin on Android at Google I/O in 2017. The Kotlin DSL is now the default for Gradle build scripts in Android Studio, enhancing developer experience with features like semantic highlighting and code completion. Google also actively contributes to the Kotlin Foundation and encourages the community to participate through programs like the Kotlin Foundation Grants Program, which focuses on supporting multiplatform libraries and frameworks.

Kotlin Multiplatform (KMP) was another major highlight, showcasing its growing maturity and adoption. The vision for KMP is to allow developers to share code across various platforms—Android, iOS, desktop, web, and server-side—while retaining the ability to write platform-specific code when needed. The keynote celebrated the increasing number of multiplatform libraries and tools, including KMM Bridge, that simplify KMP development. The future of KMP looks bright, with efforts to further improve the developer experience and expand its capabilities.
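That sharing model rests on Kotlin’s expect/actual mechanism. The minimal sketch below shows how a declaration splits across source sets of one module; the pieces are shown together only for readability, and the function name is invented for illustration.

```kotlin
// commonMain — shared code declares what every platform must supply.
expect fun platformName(): String

fun greeting(): String = "Hello from ${platformName()}"

// androidMain — the Android source set provides its implementation.
actual fun platformName(): String = "Android"

// iosMain — the iOS source set provides another (kept as a comment here,
// since each actual lives in its own source set in a real project).
// actual fun platformName(): String = "iOS"
```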

Compose Multiplatform and Emerging Technologies

The keynote also showcased the advancements in Compose Multiplatform, JetBrains’ declarative UI framework for building cross-platform user interfaces. A significant announcement was the alpha release of Compose Multiplatform for iOS, enabling developers to write their UI once in Kotlin and deploy it on both Android and iOS, and even desktop and web. This opens up new possibilities for code sharing and faster development cycles for mobile applications.
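Because the shared UI is ordinary composable code, a counter written once in commonMain can render on Android, iOS, and desktop. The sketch below is a generic example, not code from the keynote.

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.material.Button
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue

// Shared UI in commonMain: the same composable runs on every Compose Multiplatform target.
@Composable
fun Counter() {
    var count by remember { mutableStateOf(0) }
    Column {
        Text("Clicked $count times")
        Button(onClick = { count++ }) {
            Text("Click me")
        }
    }
}
```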

Finally, the team touched upon Kotlin’s expansion into emerging technologies like WebAssembly (Wasm). JetBrains is developing a new compiler backend for Kotlin targeting WebAssembly with its garbage collection proposal, aiming for high-performance Kotlin code in the browser. Experiments with running Compose applications in the browser using WebAssembly were also mentioned, hinting at a future where Kotlin could offer a unified development experience across an even wider range of platforms. The keynote concluded with an invitation for the community to dive deeper into these topics during the conference and to continue contributing to Kotlin’s vibrant ecosystem.

Links: