
Posts Tagged ‘GoogleIO2025’

[GoogleIO2025] Adaptive Android development makes your app shine across devices

Keynote Speakers

Alex Vanyo works as a Developer Relations Engineer at Google, concentrating on adaptive applications for the Android platform. His expertise encompasses user interface design and responsive layouts, contributing to tools that facilitate cross-device compatibility.

Emilie Roberts serves as a Developer Relations Engineer at Google, specializing in Android integration with Chrome OS. She advocates for optimized experiences on large-screen devices, drawing from her background in software engineering to guide developers in multi-form factor adaptations.

Abstract

This analysis explores the principles of adaptive development for Android applications, emphasizing strategies to ensure seamless performance across diverse hardware ecosystems including smartphones, tablets, foldables, automotive interfaces, and extended reality setups. It examines emerging platform modifications in Android 16, updates to Jetpack libraries, and innovative tooling in Android Studio, elucidating their conceptual underpinnings, implementation approaches, and potential effects on user retention and developer workflows. By evaluating practical demonstrations and case studies, the discussion reveals how these elements promote versatile, future-proof software engineering in a fragmented device landscape.

Rationale for Adaptive Strategies in Expanding Ecosystems

Alex Vanyo and Emilie Roberts commence by articulating the imperative for adaptive methodologies in Android development, tracing the evolution from monolithic computing to ubiquitous mobile paradigms. They posit that contemporary applications must transcend single-form-factor constraints to embrace an array of interfaces, from wrist-worn gadgets to vehicular displays and immersive headsets. This perspective is rooted in the observation that users anticipate fluid functionality across all touchpoints, transforming software from mere utilities into integral components of daily interactions.

Contextually, this arises from Android’s proliferation beyond traditional handhelds. Roberts highlights the integration of adaptive apps into automotive environments via Android Automotive OS and Android Auto, where apps in permitted categories can now run while the vehicle is parked, without requiring bespoke versions. This leverages existing mobile codebases, extending reach to in-vehicle screens that serve as de facto tablets.

Furthermore, Android 16 introduces desktop windowing enhancements, enabling phones, foldables, and tablets to morph into free-form computing spaces upon connection to external monitors. With over 500 million active large-screen units, this shift democratizes desktop-like productivity, allowing arbitrary resizing and multitasking. Vanyo notes the foundational AOSP support for connected displays, poised for developer previews, which underscores a methodological pivot toward hardware-agnostic design.

The advent of Android XR further diversifies the landscape, positioning headsets as spatial computing hubs where apps inhabit immersive realms. Home space mode permits 2D window placement in three dimensions, akin to boundless desktops, while full space grants exclusive environmental control for volumetric content. Roberts emphasizes that Play Store-distributed mobile apps inherently support XR, with adaptive investments yielding immediate benefits in this nascent arena.

Implications manifest in heightened user engagement; multi-device owners exhibit tripled usage in streaming services compared to single-device counterparts. Methodologically, this encourages a unified codebase strategy, averting fragmentation while maximizing monetization. However, it demands foresight in engineering to accommodate unforeseen hardware, fostering resilience against ecosystem volatility.

Core Principles and Mindset of Adaptive Design

Delving into the ethos, Vanyo defines adaptivity as a comprehensive tactic that anticipates the Android spectrum’s variability, encompassing screen dimensions, input modalities, and novel inventions. This mindset advocates for a singular application adaptable to phones, tablets, foldables, Chromebooks, connected displays, XR, and automotive contexts, eschewing siloed variants.

Roberts illustrates via personal anecdote: transitioning from phone-based music practice to tablet or monitor-enhanced sessions necessitates consistent features like progress tracking and interface familiarity. Disparities risk user attrition, as alternatives offering cross-device coherence gain preference. This user-centric lens complements business incentives, where adaptive implementations correlate with doubled retention rates, as evidenced by games like Asphalt Legends Unite.

Practically, demonstrations of the Socialite app—available on GitHub—exemplify this through a list-detail paradigm via Compose Adaptive. Running identical code across six devices, it dynamically adjusts: XR home space resizes panes fluidly, automotive interfaces optimize for parked interactions, and desktop modes support free-form windows. Such versatility stems from libraries detecting postures like tabletop on foldables, enabling tailored views without codebase bifurcation.
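
As an illustrative sketch rather than code from the session, posture detection with the Jetpack WindowManager library could look like the following; the activity host and callback name are assumptions made for the example:

import androidx.activity.ComponentActivity
import androidx.lifecycle.lifecycleScope
import androidx.window.layout.FoldingFeature
import androidx.window.layout.WindowInfoTracker
import kotlinx.coroutines.launch

// Observes window layout changes and reports the "tabletop" posture
// (device half-opened with a horizontal fold), which an adaptive UI
// can use to split content above and below the hinge.
fun observeTabletopPosture(
    activity: ComponentActivity,
    onTabletopChanged: (Boolean) -> Unit
) {
    activity.lifecycleScope.launch {
        WindowInfoTracker.getOrCreate(activity)
            .windowLayoutInfo(activity)
            .collect { layoutInfo ->
                val fold = layoutInfo.displayFeatures
                    .filterIsInstance<FoldingFeature>()
                    .firstOrNull()
                val isTabletop = fold != null &&
                    fold.state == FoldingFeature.State.HALF_OPENED &&
                    fold.orientation == FoldingFeature.Orientation.HORIZONTAL
                onTabletopChanged(isTabletop)
            }
    }
}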

Analytically, this approach mitigates development overhead by centralizing logic, yet requires vigilant testing against configuration shifts to preserve state and avoid visual artifacts. Implications extend to inclusivity, accommodating diverse user scenarios while positioning developers to capitalize on emerging markets like XR, projected to burgeon.

Innovations in Tooling and Libraries for Responsiveness

Roberts and Vanyo spotlight Compose Adaptive 1.1, a Jetpack library facilitating responsive UIs via canonical patterns. It categorizes windows into compact, medium, and expanded classes, guiding layout decisions—e.g., bottom navigation for narrow views versus side rails for wider ones. The library’s supporting pane abstraction manages list-detail flows, automatically transitioning based on space availability.
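
To make that guidance concrete, a minimal sketch (assuming the material3-adaptive-navigation-suite artifact; the destinations are illustrative) shows how a single scaffold picks the appropriate navigation component from the current window size class:

import androidx.compose.material.icons.Icons
import androidx.compose.material.icons.filled.Home
import androidx.compose.material.icons.filled.Settings
import androidx.compose.material3.Icon
import androidx.compose.material3.Text
import androidx.compose.material3.adaptive.navigationsuite.NavigationSuiteScaffold
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableIntStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue

@Composable
fun AdaptiveShell() {
    var selected by remember { mutableIntStateOf(0) }
    // NavigationSuiteScaffold renders a bottom bar, a navigation rail, or a
    // drawer depending on the current window size class, with no manual checks.
    NavigationSuiteScaffold(
        navigationSuiteItems = {
            item(
                selected = selected == 0,
                onClick = { selected = 0 },
                icon = { Icon(Icons.Filled.Home, contentDescription = "Home") },
                label = { Text("Home") }
            )
            item(
                selected = selected == 1,
                onClick = { selected = 1 },
                icon = { Icon(Icons.Filled.Settings, contentDescription = "Settings") },
                label = { Text("Settings") }
            )
        }
    ) {
        // Content for the selected destination goes here.
    }
}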

Code exemplar (a sketch against the navigator-based API in material3-adaptive; ListContent and DetailContent stand in for application composables):

val navigator = rememberSupportingPaneScaffoldNavigator()
SupportingPaneScaffold(
    directive = navigator.scaffoldDirective,
    value = navigator.scaffoldValue,
    mainPane = { AnimatedPane { ListContent() } },
    supportingPane = { AnimatedPane { DetailContent() } }
)

This snippet illustrates dynamic pane revelation, adapting to resizes without explicit orientation handling. Navigation 3 complements this, decoupling navigation graphs from UI elements for reusable, posture-aware routing.

Android Studio’s enhancements, like the adaptive UI template wizard, streamline initiation by generating responsive scaffolds. Visual linting detects truncation or overflow in varying configurations, while emulators simulate XR and automotive scenarios for holistic validation.

Methodologically, these tools embed adaptivity into workflows, leveraging Compose’s declarative paradigm for runtime adjustments. Contextually, they address historical assumptions about fixed orientations, preparing for Android 16’s disregard of such restrictions on large displays. Implications include reduced iteration cycles and elevated quality, though necessitate upskilling in reactive design principles.

Platform Shifts and Preparation for Android 16

A pivotal revelation concerns Android 16 ceasing to honor orientation, resizability, and aspect ratio restrictions on displays at least 600dp wide. For apps targeting SDK 36, activities must accommodate arbitrary window sizes, as portrait and landscape mandates are ignored in favor of user preference. This standardization echoes existing OEM overrides, enforcing free-form adaptability.

Common pitfalls include clipped elements, distorted previews, or state loss during rotations—issues users encounter via overrides today. Vanyo advises comprehensive testing, layout revisions, and state preservation. Transitional aids encompass opt-out flags until SDK 37, user toggles, and game exemptions via manifest or Play categories.
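
On the state preservation point, a general-purpose sketch (not taken from the session) shows the Compose idiom that keeps user input intact across the resizes and rotations described above:

import androidx.compose.material3.TextField
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.saveable.rememberSaveable
import androidx.compose.runtime.setValue

@Composable
fun SearchBox() {
    // rememberSaveable survives configuration changes such as window resizes,
    // rotations, and free-form repositioning on large displays, so in-progress
    // input is not lost when the activity is recreated.
    var query by rememberSaveable { mutableStateOf("") }
    TextField(value = query, onValueChange = { query = it })
}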

For games, Unity 6 integrates configuration APIs, enabling seamless handling of size and density alterations. Samples guide optimizations, while titles like Dungeon Hunter 5 demonstrate foldable integrations yielding retention boosts.

Case studies reinforce: Luminar Neo’s Compose-built editor excels offline via Tensor SDK; Cubasis 3 offers robust audio workstations on Chromebooks; Algoriddim’s djay explores XR scratching. These exemplify methodological fusion of libraries and testing, implying market advantages through device ubiquity.

Strategic Implications and Forward Outlook

Adaptivity emerges as a strategic imperative amid Android’s diversification, where single codebases span ecosystems, enhancing loyalty and revenue. Platform evolutions like desktop windowing and XR demand foresight, with tools mitigating complexities.

Future trajectories involve deeper integrations, potentially with AI-driven layouts, ensuring longevity. Developers are urged to iterate compatibly, avoiding presumptions to future-proof against innovations, ultimately enriching user experiences across the Android continuum.


[GoogleIO2025] Google I/O ’25 Keynote

Keynote Speakers

Sundar Pichai serves as the Chief Executive Officer of Alphabet Inc. and Google, overseeing the company’s strategic direction with a focus on artificial intelligence integration across products and services. Born in India, he holds degrees from the Indian Institute of Technology Kharagpur, Stanford University, and the Wharton School, and has been instrumental in advancing Google’s cloud computing and AI initiatives since joining the firm in 2004.

Demis Hassabis acts as the Co-Founder and Chief Executive Officer of Google DeepMind, leading efforts in artificial general intelligence and breakthroughs in areas like protein folding and game-playing AI. A former child chess prodigy with a PhD in cognitive neuroscience from University College London, he has received knighthood for his contributions to science and technology.

Liz Reid holds the position of Vice President of Search at Google, directing product management and engineering for core search functionalities. She joined Google in 2003 as its first female engineer in the New York office and has spearheaded innovations in local search and AI-enhanced experiences.

Johanna Voolich functions as the Chief Product Officer at YouTube, guiding product strategies for the platform’s global user base. With extensive experience at Google in search, Android, and Workspace, she emphasizes AI-driven enhancements for content creation and consumption.

Dave Burke previously served as Vice President of Engineering for Android at Google, contributing to the platform’s development for over a decade before transitioning to advisory roles in AI and biotechnology.

Donald Glover is an acclaimed American actor, musician, writer, and director, known professionally as Childish Gambino in his music career. Born in 1983, he has garnered multiple Emmy and Grammy awards for his work in television series like Atlanta and music albums exploring diverse themes.

Sameer Samat operates as President of the Android Ecosystem at Google, responsible for the operating system’s user and developer experiences worldwide. Holding a bachelor’s degree in computer science from the University of California San Diego, he has held leadership roles in product management across Google’s mobile and ecosystem divisions.

Abstract

This examination delves into the pivotal announcements from the Google I/O 2025 keynote, centering on breakthroughs in artificial intelligence models, agentic systems, search enhancements, generative media, and extended reality platforms. It dissects the underlying methodologies driving these advancements, their contextual evolution from research prototypes to practical implementations, and the far-reaching implications for technological accessibility, societal problem-solving, and ethical AI deployment. By analyzing demonstrations and strategic integrations, the discourse illuminates how Google’s full-stack approach fosters rapid innovation while addressing real-world challenges.

Evolution of AI Models and Infrastructure

The keynote commences with Sundar Pichai highlighting the accelerated pace of AI development within Google’s ecosystem, emphasizing the transition from foundational research to widespread application. Central to this narrative is the Gemini model family, which has seen substantial enhancements since its inception. Pichai notes the deployment of over a dozen models and features in the past year, underscoring a methodology that prioritizes swift iteration and integration. For instance, the Gemini 2.5 Pro model achieves top rankings on benchmarks like the LMArena leaderboard, reflecting a 300-point increase in Elo score, a metric evaluating model performance across diverse tasks.

This progress is underpinned by Google’s proprietary infrastructure, exemplified by the seventh-generation TPU named Ironwood. Designed for both training and inference at scale, it offers a tenfold performance boost over predecessors, enabling 42.5 exaflops per pod. Such hardware advancements facilitate cost reductions and efficiency gains, allowing models to process outputs at unprecedented speeds—Gemini models dominate the top three positions for tokens per second on leading leaderboards. The implications extend to democratizing AI, as lower prices and higher performance make advanced capabilities accessible to developers and users alike.

Demis Hassabis elaborates on the intelligence layer, positioning Gemini 2.5 Pro as the world’s premier foundation model. Updated previews have empowered creators to generate interactive applications from sketches or simulate urban environments, demonstrating multimodal reasoning that spans text, code, and visuals. The incorporation of LearnLM, a specialized educational model, elevates its utility in learning scenarios, topping relevant benchmarks. Meanwhile, the refined Gemini 2.5 Flash serves as an efficient alternative, appealing to developers for its balance of speed and affordability.

Methodologically, these models leverage vast datasets and advanced training techniques, including reinforcement learning from human feedback, to enhance reasoning and contextual understanding. The context of this evolution lies in Google’s commitment to a full-stack AI strategy, integrating hardware, software, and research. Implications include fostering an ecosystem where AI augments human creativity, though challenges like computational resource demands necessitate ongoing optimizations to ensure equitable access.

Agentic Systems and Personalization Strategies

A significant portion of the presentation explores agentic AI, where systems autonomously execute tasks while remaining under user oversight. Pichai introduces concepts like Project Starline evolving into Google Beam, a 3D video platform that merges multiple camera feeds via AI to create immersive communications. This innovation, collaborating with HP, employs real-time rendering at 60 frames per second, implying enhanced remote interactions that mimic physical presence.

Building on this, Project Astra’s capabilities migrate to Gemini Live, enabling contextual awareness through camera and screen sharing. Demonstrations reveal its application in everyday scenarios, such as interview preparation or fitness training. The introduction of multitasking in Project Mariner allows oversight of up to ten tasks, utilizing “teach and repeat” mechanisms where agents learn from single demonstrations. Available via the Gemini API, this tool invites developer experimentation, with partners like UiPath integrating it for automation.

The agent ecosystem is bolstered by protocols like the open Agent2Agent (A2A) protocol and Model Context Protocol (MCP) compatibility in the Gemini SDK, facilitating inter-agent communication and service access. In practice, agent mode in the Gemini app exemplifies this by sourcing apartment listings, applying filters, and scheduling tours, streamlining complex workflows.

Personalization emerges as a complementary frontier, with “personal context” allowing models to draw from user data across Google apps, ensuring privacy through user controls. An example in Gmail illustrates personalized smart replies that emulate individual styles by analyzing past communications and documents. This methodology relies on secure data handling and fine-tuned models, implying deeper user engagement but raising ethical considerations around data consent and bias mitigation.

Overall, these agentic and personalized approaches shift AI from reactive tools to proactive assistants, contextualized within Google’s product suite. The implications are transformative for productivity, yet require robust governance to balance utility with user autonomy.

Innovations in Search and Information Retrieval

Liz Reid advances the discussion on search evolution, framing AI Overviews and AI Mode as pivotal shifts. With over 1.5 billion monthly users, AI Overviews synthesize responses from web content, enhancing query resolution. AI Mode extends this into conversational interfaces, supporting complex, multi-step inquiries like travel planning by integrating reasoning, tool usage, and web interaction.

Methodologically, this involves grounding models in real-time data, ensuring factual accuracy through citations and diverse perspectives. Demonstrations showcase handling ambiguous queries, such as dietary planning, by breaking them into sub-tasks and verifying outputs. The introduction of video understanding allows analysis of uploaded content, providing step-by-step guidance.

Contextually, these features address information overload in an era of abundant data, implying improved user satisfaction—evidenced by higher engagement metrics. However, implications include potential disruptions to content ecosystems, necessitating transparency in sourcing to maintain trust.

Generative Media and Creative Tools

Johanna Voolich and Donald Glover spotlight generative media, with the Imagen 4 and Veo 3 models enabling high-fidelity image and video creation. Imagen 4’s stylistic versatility and Veo 3’s narrative consistency allow seamless editing, as Glover illustrates in crafting a short film.

The Flow tool democratizes filmmaking by generating clips from prompts, supporting extensions and refinements. Methodologically, these leverage diffusion-based architectures trained on vast datasets, ensuring coherence across outputs.

Context lies in empowering creators, with implications for industries like entertainment—potentially lowering barriers but raising concerns over authenticity and intellectual property. Subscription plans like Google AI Pro and Ultra provide access, fostering experimentation.

Android XR Platform and Ecosystem Expansion

Sameer Samat introduces Android XR, optimized for headsets and glasses, integrating Gemini for contextual assistance. Project Moohan, built with Samsung, offers immersive experiences, while glasses prototypes enable hands-free interactions like navigation and translation.

Partnerships with Gentle Monster and Warby Parker emphasize style, with developer previews forthcoming. Methodologically, this builds on Android’s ecosystem, ensuring app compatibility.

Implications include redefining human-computer interaction, enhancing accessibility, but demanding advancements in battery life and privacy.

Societal Impacts and Prospective Horizons

The keynote culminates in applications like FireSat for wildfire detection and drone-based relief during disasters, showcasing AI’s role in societal challenges. Pichai envisions near-term realizations in robotics, medicine, quantum computing, and autonomous vehicles.

This forward-looking context underscores ethical deployment, with implications for global equity. Personal anecdotes reinforce technology’s inspirational potential, urging collaborative progress.


[GoogleIO2025] Google I/O ’25 Developer Keynote

Keynote Speakers

Josh Woodward serves as the Vice President of Google Labs, where he leads teams focused on advancing AI products, including the Gemini app and innovative tools like NotebookLM and AI Studio. His work emphasizes turning AI research into practical applications that align with Google’s mission to organize the world’s information.

Logan Kilpatrick is the Lead Product Manager for Google AI Studio, specializing in the Gemini API and artificial general intelligence initiatives. With a background in computer science from Harvard and Oxford, and prior experience at NASA and OpenAI, he drives product development to make AI accessible for developers.

Paige Bailey holds the position of Lead Product Manager for Generative Models at Google DeepMind. Her expertise lies in machine learning, with a focus on democratizing advanced AI technologies to enable developers to create innovative applications.

Diana Wong is a Group Product Manager at Google, contributing to Android ecosystem advancements. She oversees product strategies that enhance user experiences across devices, drawing from her education at Carnegie Mellon University.

Florina Muntenescu is a Developer Relations Manager at Google, specializing in Android development. With a background in computer science from Babeș-Bolyai University, she advocates for tools like Jetpack Compose and promotes best practices in app performance and adaptability.

Addy Osmani is the Head of Chrome Developer Experience at Google, serving as a Senior Staff Engineering Manager. He leads efforts to improve developer tools in Chrome, with a strong emphasis on performance, AI integration, and web standards.

David East is the Developer Relations Lead for Project IDX at Google, with extensive experience in Firebase. He has been instrumental in backend-as-a-service products, focusing on cloud-based development workspaces.

Gus Martins is the Product Manager for the Gemma family of open models at Google DeepMind. His role involves making AI models adaptable for various domains, including healthcare and multilingual applications, while fostering community contributions.

Abstract

This article examines the key innovations presented in the Google I/O 2025 Developer Keynote, focusing on advancements in AI-driven development tools across Google’s ecosystem. It explores updates to the Gemini API, Android enhancements, web technologies, Firebase Studio, and the Gemma open models, analyzing their technical foundations, practical implementations, and broader implications for software engineering. By dissecting demonstrations and announcements, the discussion highlights how these tools facilitate rapid prototyping, multimodal AI integration, and cross-platform development, ultimately aiming to empower developers in creating performant, adaptive applications.

Advancements in Gemini API and AI Studio

The keynote opens with a strong emphasis on the Gemini API, showcasing its evolution as a cornerstone for building intelligent applications. Josh Woodward introduces the concept of blending code and design through experimental tools like Stitch, which leverages Gemini 2.5 Flash for rapid interface generation. This model, noted for its speed and cost-efficiency, enables developers to transition from textual prompts to functional designs and markup in minutes. For instance, a prompt to create an app for discovering California activities generates editable screens in Figma format, complete with customizable themes such as dark mode with lime green accents.

Logan Kilpatrick delves deeper into AI Studio, positioning it as a prototyping environment that answers whether ideas can be realized with Gemini. The introduction of the 2.5 Flash native audio model enhances voice agent capabilities, supporting 24 languages and ignoring extraneous noises—ideal for real-world applications. Key improvements include function calling, search grounding, and URL context, allowing models to fetch and integrate web data dynamically. An example demonstrates grounding responses with developer docs, where a prompt yields a concise summary of function calling: connecting models to external APIs for real-world actions.

A practical illustration involves generating a text adventure game using Gemini and Imagen, where the model reasons through specifications, generates code, and self-corrects errors. This iterative, multi-turn process underscores the API’s role in accelerating development cycles. Furthermore, support for the Model Context Protocol (MCP) in the GenAI SDK facilitates integration with open-source tools, expanding the ecosystem.
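
The Gemini API side of such a loop can be approximated in Kotlin with the Google AI client SDK; this is a sketch only, and the model identifier and prompts are placeholders rather than the demo’s actual code:

import com.google.ai.client.generativeai.GenerativeModel

// A minimal multi-turn exchange: the first reply is kept in chat history,
// so a follow-up turn can ask the model to review and correct its own output.
suspend fun iterateOnSpec(apiKey: String, spec: String): String? {
    val model = GenerativeModel(
        modelName = "gemini-2.5-flash", // placeholder model id
        apiKey = apiKey
    )
    val chat = model.startChat()
    val draft = chat.sendMessage("Write Kotlin code for: $spec").text
    val revised = chat.sendMessage(
        "Review your previous answer, fix any errors, and return only the corrected code."
    ).text
    return revised ?: draft
}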

Paige Bailey extends this by remixing a maps app into a “keynote companion” agent named Casey, demonstrating live audio processing and UI updates. Using functions like increment_utterance_count, the agent tracks mentions of Gemini-related terms, showcasing sliding context windows for long-running sessions. Asynchronous function calls enable non-blocking operations, such as fetching fun facts via search grounding, while structured JSON outputs ensure UI consistency.
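
Structured output of this kind is requested through the generation config; the following sketch assumes the Kotlin client SDK in use exposes a responseMimeType option, and the model id, function name, and prompt are illustrative:

import com.google.ai.client.generativeai.GenerativeModel
import com.google.ai.client.generativeai.type.generationConfig

// Asking for JSON keeps replies machine-parseable, so the UI can bind to
// known fields instead of scraping free-form text.
suspend fun funFactAsJson(apiKey: String, topic: String): String? {
    val model = GenerativeModel(
        modelName = "gemini-2.5-flash", // placeholder model id
        apiKey = apiKey,
        generationConfig = generationConfig {
            responseMimeType = "application/json" // assumed option; requests structured output
        }
    )
    return model.generateContent(
        "Return a JSON object with fields \"topic\" and \"fact\" about $topic."
    ).text
}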

These advancements reflect a methodological shift toward agentive AI, where models not only process inputs but execute actions autonomously. The implications are profound: developers can build conversational apps for e-commerce or navigation with minimal code, reducing latency and enhancing user engagement. However, challenges like ensuring data privacy in multimodal inputs warrant careful consideration in production environments.

AI Integration in Android Development

Shifting to mobile ecosystems, Diana Wong and Florina Muntenescu highlight how AI powers “excellent” Android apps—defined by delight, performance, and cross-device compatibility. The Androidify app exemplifies this, using selfies and image generation to create personalized Android bots. Under the hood, Gemini’s multimodal capabilities process images via generate_content, followed by Imagen 3 for robot rendering, all orchestrated through Firebase with just five lines of code.
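
A sketch of that orchestration, using the Vertex AI in Firebase Kotlin SDK (the model id and prompt are illustrative, and the production Androidify pipeline may differ):

import android.graphics.Bitmap
import com.google.firebase.Firebase
import com.google.firebase.vertexai.type.content
import com.google.firebase.vertexai.vertexAI

// Sends a selfie plus an instruction to Gemini through Firebase; the returned
// description can then seed a separate image-generation step.
suspend fun describeSelfie(selfie: Bitmap): String? {
    val model = Firebase.vertexAI.generativeModel("gemini-2.0-flash") // assumed model id
    val response = model.generateContent(
        content {
            image(selfie)
            text("Describe this person as a friendly Android bot.")
        }
    )
    return response.text
}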

On-device AI via Gemini Nano offers APIs for tasks like summarization and rewriting, ensuring privacy by avoiding server transmissions. The Material 3 Expressive update introduces playful elements, such as cookie-shaped buttons and morphing animations, available in Compose Material Alpha. Live updates in Android 16 provide time-sensitive notifications, enhancing user relevance.

Performance optimizations, including R8 and baseline profiles, yield significant gains, as evidenced by Reddit’s reported one-star rating increase. API changes in Android 16 eliminate orientation restrictions, promoting responsive UIs. Collaboration with Samsung on desktop windowing and adaptive layouts in Compose supports foldables, tablets, Chromebooks, cars, and XR devices like Project Moohan and Project Aura.
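
For reference, the R8 optimization noted above is typically enabled per build type in the module’s Gradle configuration; this is a generic sketch rather than any specific app’s setup, and options vary by project:

// build.gradle.kts (app module)
android {
    buildTypes {
        release {
            isMinifyEnabled = true      // run R8 code shrinking and optimization
            isShrinkResources = true    // remove unused resources after shrinking
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"
            )
        }
    }
}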

Developer productivity tools in Android Studio leverage Gemini for natural language-based end-to-end testing. For example, a journey script selects photos via descriptions like “woman with a pink dress,” automating assertions without manual synchronization. An AI agent for dependency updates scans projects, suggesting migrations like Kotlin 2.0, streamlining maintenance.

The contextual implications are clear: AI reduces barriers to creating adaptive, performant apps, boosting engagement metrics—Canva reports twice-weekly usage among cross-device users. Methodologically, this integrates cloud and on-device models, balancing power and privacy, but requires developers to optimize for diverse hardware, potentially increasing testing complexity.

Enhancing Web Development with Chrome Tools

Addy Osmani and Yuna Shin focus on web innovations, advocating for a “powerful web made easier” through AI-infused tools. Project IDX, now Firebase Studio, enables prompt-based app creation, but the web segment emphasizes Chrome DevTools and built-in AI APIs.

Baseline integration in VS Code and ESLint provides browser compatibility checks directly in tooltips, warning on unsupported features. AI assistance in DevTools uses natural language to debug issues, such as misaligned buttons fixed via transform properties, applying changes to workspaces without context switching.

The redesigned performance panel identifies layout shifts, with Gemini suggesting fixes like font optimizations. Seven new AI APIs, backed by Gemini Nano, support on-device processing for privacy-sensitive scenarios. Multimodal capabilities process audio and images, demonstrated by extracting ticket details to highlight seats in a theater app.

Hybrid solutions with Firebase allow fallback to cloud models, ensuring cross-browser compatibility. Partners like Deote leverage these for faster onboarding, projecting 30% efficiency gains.

Analytically, this methodology embeds AI in workflows, reducing debugging time and enabling scalable features. Implications include broader AI adoption in regulated sectors, but raise questions about model biases in automated fixes. The fine-tuning for web contexts ensures relevance, fostering a more inclusive developer experience.

Innovations in Firebase Studio

David East presents Firebase Studio as a cloud-based AI workspace for full-stack app generation. Importing Figma designs via Builder.io translates to functional components, as shown with a furniture store app. Gemini assists in extending designs, creating product detail pages with routing, data flow, and add-to-cart features using 2.5 Pro.

Automatic backend provisioning detects needs for databases or authentication, generating blueprints and code. This open, extensible VM allows custom stacks, with deployment to Firebase Hosting.

The approach streamlines prototyping, breaking changes into reviewable steps and auto-generating descriptions for placeholders. Implications extend to rapid iteration, lowering entry barriers for non-coders, though dependency on AI prompts necessitates clear specifications to avoid errors.

Expanding the Gemma Family of Open Models

Gus Martins introduces Gemma 3n, a lightweight model running in as little as 2GB of RAM with audio understanding, available in AI Studio and open-source tools. MedGemma advances healthcare applications, analyzing radiology images.

Fine-tuning demonstrations use LoRA in Google Colab, creating personalized emoji translators. The new AI-first Colab transforms prompts into UIs, facilitating comparisons between base and tuned models.

Community-driven variants, like Navarasa for Indic languages and SignGemma for sign languages, highlight multilingual prowess. DolphinGemma, fine-tuned on dolphin vocalization data, aids marine research.

This open model strategy democratizes AI, enabling domain-specific adaptations. Implications include ethical advancements in accessibility and science, but require safeguards against misuse in sensitive areas like healthcare.

Implications and Future Directions

Collectively, these innovations signal a paradigm where AI augments every development stage, from ideation to deployment. Methodologically, multimodal models and agentive tools reduce boilerplate, fostering creativity. Contexts like privacy and performance drive hybrid approaches, with implications for inclusive tech—empowering global developers.

Future directions may involve deeper ecosystem integrations, addressing scalability and bias. As tools mature, they promise transformative impacts on software paradigms, urging ethical considerations in AI adoption.
