
Posts Tagged ‘DotAI2024’

[DotAI2024] Eliot Andres – From Scratch to Scale: Crafting and Cascading Foundational Image Models

Eliot Andres, co-founder and CTO of Photoroom, chronicled the odyssey of bespoke vision at DotAI 2024. With nine years honing deep learning for e-commerce imagery, Andres propelled Photoroom’s ascent—a YC S20 alum serving global audiences. His narrative championed in-house genesis over off-the-shelf oracles, unveiling diffusion’s dawn-to-dusk: bespoke blueprints, data distillations, compute conquests, and feedback forges yielding models three times faster for millions of users.

Forging Foundations Beyond Borrowed Blueprints

Andres interrogated imitation’s insufficiency: Stable Diffusion’s splendor suits savants, yet falters for Photoroom’s precinct—product portraits purged of props, shadows sculpted sans seams. Off-the-shelf oracles, he observed, ossify on outliers: e-commerce ephemera demands domain devotion.

Thus, genesis from void: custom cascades commencing with chromatic chaos—splashes sans structure—escalating to entity emergence post-thousand-hour tutelage, culminating in crystalline compositions after 40,000 epochs. Andres attributed ascent to architectural autonomy: latent labyrinths laced with proprietary priors, data distilled from decamillions of dealer dossiers—curated for commerce’s cadence.

Compute’s crucible: H100 hordes harnessed in harmonious herds, mitigating mishaps via meticulous monitoring—automated guards catching loss spikes and non-finite gradients before they derail a run.
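The kind of gradient monitoring alluded to can be sketched in a few lines of plain Python. This is a hypothetical illustration, not Photoroom’s actual tooling: scan per-step gradient norms and flag anomalies so a training loop can skip, rescale, or halt the offending step.

```python
import math

def gradient_guard(grad_norms, max_norm=10.0):
    """Inspect per-step gradient norms and report anomalies.

    Returns a list of (step, reason) alerts so the caller can
    decide whether to skip, rescale, or stop the run.
    """
    alerts = []
    for step, norm in enumerate(grad_norms):
        if math.isnan(norm) or math.isinf(norm):
            alerts.append((step, "non-finite gradient"))
        elif norm > max_norm:
            alerts.append((step, "exploding gradient"))
    return alerts

# A mostly healthy run with one spike and one NaN:
history = [0.8, 1.1, 0.9, 250.0, float("nan"), 1.0]
print(gradient_guard(history))
# → [(3, 'exploding gradient'), (4, 'non-finite gradient')]
```

In a real distributed run the norms would come from the optimizer after each backward pass, and an alert would typically trigger a checkpoint rollback rather than a print.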

Navigating Novelties and Nurturing at Nascent

Andres aired adversities: data’s deluge demands discernment—deduping dross, equilibrating epochs—while scaling summons stability, feedback’s fount from frontline forges finessing flaws. Photoroom’s polity: purveyors as partners, iterating on idiosyncrasies like luminous lapses or artifact anomalies.
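The deduplication step mentioned above can be illustrated with a stdlib-only sketch (hypothetical, not Photoroom’s pipeline): hash each record’s canonical form and keep only the first occurrence.

```python
import hashlib

def dedupe(records):
    """Drop exact duplicates by content hash, keeping first occurrence.

    Records are canonicalized (stripped, lowercased) before hashing,
    so trivially re-encoded duplicates collapse too.
    """
    seen, kept = set(), []
    for rec in records:
        digest = hashlib.sha256(rec.strip().lower().encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(rec)
    return kept

print(dedupe(["red shoe", "Red shoe ", "blue bag"]))
# → ['red shoe', 'blue bag']
```

Production pipelines typically add near-duplicate detection (e.g. perceptual hashes for images) on top of this exact-match pass.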

Deployment’s decree: distillation’s dual dance—LCM’s leapfrog lessons compressing cascades to sprints—and TensorRT’s transmutations, fusing fluxions for fleet-footed fruition, doubling dispatch sans diminishment.

France 2030’s fellowship fuels what comes next: grander models, more verisimilar visions—velocity unyielding. Andres beckoned builders to GitHub’s groves: datasets as doorways, teams as talismans—collaborative conquests crowning communal code.

In tableau, Andres toasted tandemry: machine learning’s mosaic, indivisible from ingenuity’s impetus—Photoroom’s pantheon, propelling pixels to panoramas.


[DotAI2024] Sri Satish Ambati – Open-Source Multi-Agent Frameworks as Catalysts for Universal Intelligence

Sri Satish Ambati, visionary founder and CEO of H2O.ai, extolled the emancipatory ethos of communal code at DotAI 2024. Architecting H2O.ai since 2012 to universalize AI—spanning 20,000 organizations and spearheading o2forindia.org’s life-affirming logistics—Ambati views open-source as sovereignty’s salve. His manifesto positioned multi-agent orchestration as tomorrow’s score, where LLMs conduct collectives, transmuting solitary sparks into societal symphonies.

The Imperative of Inclusive Innovation

Ambati evoked AGI’s communal cradle: mathematics and melody as heirlooms, AI as extension—public patrimony, not proprietorial prize. Open-source’s vanguard—Meta’s LLaMA kin—eclipses enclosures, birthing bespoke brains via synthetic seeds and scaling sagas.

H2O’s odyssey mirrors this: from nascent nets to agentic ensembles, where h2oGPT’s modular components meld models, morphing monoliths into mosaics. Ambati dissected LLM lineages: from encoder sentinels to decoder dynamos, now agent architects—reasoning relays, tool tenders, memory marshals.

This progression, he averred, democratizes dominion: agents as apprentices, apprising actions, auditing anomalies—autonomy amplified, not abdicated.

Orchestrating Agentic Alliances for Societal Surplus

Ambati unveiled h2oGPTe’s polyphonic prowess: document diviners, code conjurers, RAG refiners—each a specialist in symphonic service. Multi-agent marvels emerge: debate dynamos deliberating dilemmas, hierarchical heralds delegating duties, self-reflective sages self-correcting.

He heralded horizontal harmonies—peers polling peers for probabilistic prudence—and vertical vigils, overseers overseeing outputs. Ambati’s canvas: marketing maestros mirroring motifs, scientific scribes sifting syntheses—abundance assured, from temporal treasures to spatial expanses.

Yet, perils persist: viral venoms, martial mirages, disinformation deluges. Ambati’s antidote: AI as altruism’s ally, open-source as oversight’s oracle—fostering forges where innovation inoculates inequities.

In epilogue, Ambati summoned a selfie symphony, a nod to global galvanizers—from Parisian pulses to San Franciscan surges—where communal code kindles collective conquests.

Forging Futures Through Federated Fabrics

Ambati’s coda canvassed consumption’s crest: prompts as prolific progeny, birthing billion-thought tapestries. AI devours dogmas—SaaS supplanted, Nobels nipped—yet nourishes novelty, urging utility’s uplift.

H2O’s horizon: agentic abundances, ethical engines—open-source as equalizer, ensuring enlightenment’s equity.


[DotAI2024] Armand Joulin – Elevating Compact Open Language Models to Frontier Efficacy

Armand Joulin, Research Director at Google DeepMind overseeing Gemma’s open iterations, chronicled the alchemy of accessible intelligence at DotAI 2024. Transitioning from Meta’s EMEA stewardship—nurturing LLaMA, DINO, and FastText—Joulin now democratizes Gemini’s essence, crafting lightweight sentinels that rival titans thrice their heft. Gemma 2’s odyssey, spanning 2B to 27B parameters, exemplifies architectural finesse and pedagogical pivots, empowering myriad minds with potent, pliable cognition.

Reforging Architectures for Scalable Savvy

Joulin queried Google’s open gambit: why divulge amid proprietary prowess? The rejoinder: ubiquity. Developers dwell in open realms; arming them fosters diversity, curbing monopolies while seeding innovations that loop back—derivatives surpassing progenitors via communal cunning.

Gemma 2’s scaffold tweaks transformers: rotary embeddings for positional poise, interleaved local-global attention curbing quadratic quagmires. Joulin spotlighted the 2B and 9B variants, schooled not by raw next-token clairvoyance alone but by knowledge distillation from larger teachers—honing discernment over divination.

These evolutions yield compacts that converse competently: multilingual fluency, coding camaraderie, safety sans shackles. Joulin lauded derivatives: Hugging Face teems with Gemma-spun specialists, from role-play virtuosos to knowledge navigators, underscoring open’s osmotic gains.

Nurturing Ecosystems Through Pervasive Accessibility

Deployment’s democracy demands pervasiveness: Gemma graces Hugging Face, NVIDIA’s bastions, even AWS’s arches—agnostic to allegiance. Joulin tallied 20 million downloads in half a year, birthing a constellation of adaptations that eclipse originals in niches, a testament to collaborative cresting.

Use cases burgeon: multilingual muses for global dialogues, role enactors for immersive interfaces, knowledge curators for scholarly scaffolds. Joulin envisioned this as empowerment’s engine—students scripting savants, enthusiasts engineering epiphanies—where AI pockets transcend privilege.

In closing, Joulin affirmed open’s mandate: not largesse, but leverage—furnishing foundations for futures forged collectively, where size yields to sagacity.


[DotAI2024] Dr Laure Seugé and Arthur Talpaert – Enhancing Compassion and Safeguarding Sensitive Health Information in AI

Dr Laure Seugé, a pediatric nephrologist and rheumatologist practicing at the Children’s Institute and Necker-Enfants Malades Hospital, alongside Arthur Talpaert, Head of AI Product for Consultation Assistant at Doctolib, unveiled a groundbreaking tool at DotAI 2024. As a medical expert advising Doctolib’s innovation teams, Seugé brings frontline insights into patient care, while Talpaert, with his PhD in applied mathematics and tenure at McKinsey Digital, steers AI deployments that prioritize ethical rigor. Their collaboration heralds the Consultation Assistant, a system poised to redefine physician-patient interactions by automating administrative burdens, thereby fostering deeper empathy and upholding stringent data protections.

Cultivating Deeper Human Connections Through Intelligent Augmentation

Seugé painted a vivid portrait of the Consultation Assistant’s inception, rooted in the daily tribulations of clinicians who juggle diagnostic acuity with clerical demands. Envision a consultation where the physician maintains unwavering eye contact, unencumbered by keyboard drudgery—notes transcribed in real-time, summaries generated instantaneously, and prescriptions streamlined. This vision, she articulated, stems from co-creation: medical advisors like herself interrogated prototypes, infusing domain knowledge to ensure outputs align with clinical precision.

Talpaert elaborated on the architecture’s dual pillars—empathy and reliability. The assistant leverages speech recognition to capture dialogues verbatim, then employs large language models fine-tuned on anonymized, consented datasets to distill insights. Hallucinations, those elusive inaccuracies plaguing generative systems, are mitigated through iterative validation prompts, compelling users to scrutinize and amend drafts. This “nudge” mechanism, Talpaert explained, embeds accountability, transforming potential pitfalls into teachable reinforcements.
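The “nudge” mechanism can be sketched as a simple sign-off gate. This is an illustrative sketch, not Doctolib’s implementation: `review_draft` and the reviewer callback are hypothetical names, and the point is only that no generated note is committed without an explicit human decision.

```python
def review_draft(draft, reviewer):
    """Force clinician sign-off on a generated note before saving.

    The draft is never committed directly: the reviewer callback
    must return the (possibly amended) final text, or None to
    reject the draft outright.
    """
    final = reviewer(draft)
    if final is None:
        raise ValueError("draft rejected; consultation note not saved")
    return final

# A reviewer amending a dosage the model transcribed wrongly:
draft = "Prescribed amoxicillin 500mg twice daily."
final = review_draft(draft, lambda d: d.replace("500mg", "250mg"))
print(final)  # → Prescribed amoxicillin 250mg twice daily.
```

The design choice is that the accountable step lives outside the model: the system cannot skip validation, it can only make amendment cheap.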

Moreover, the tool’s interface anticipates workflow friction: contextual suggestions surface relevant guidelines or drug interactions, drawn from European pharmacopeias, without disrupting narrative flow. Seugé recounted beta trials where pediatricians reported reclaimed consultation minutes—time redirected toward nuanced histories or family counseling. Such reallocations, she posited, amplify relational bonds, where vulnerability meets expertise unhindered by screens.

Fortifying Privacy and Ensuring Clinical Integrity

Central to their ethos is an unyielding commitment to data sovereignty, a bulwark against breaches in healthcare’s trust economy. Talpaert delineated the fortress: training corpora comprise solely explicitly consented records, purged post-optimization to preclude retention. Inference phases encrypt transients—audio evanesces upon processing—while persistent records adhere to GDPR’s pseudonymization mandates, hosted on health-certified European clouds.
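GDPR-style pseudonymization of the kind described can be sketched with a keyed hash. This is illustrative only, not Doctolib’s implementation; `SECRET_KEY` is a stand-in for key material that would live in a vault or HSM, never in source code.

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me"  # placeholder: real keys live in an HSM/vault

def pseudonymize(patient_id):
    """Replace a direct identifier with a keyed pseudonym.

    An HMAC keeps the mapping one-way for anyone without the key,
    while the same patient always maps to the same pseudonym so
    records remain linkable internally.
    """
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

assert pseudonymize("patient-42") == pseudonymize("patient-42")  # stable
assert pseudonymize("patient-42") != pseudonymize("patient-43")  # distinct
```

Unlike plain hashing, the key requirement blocks dictionary attacks over the (small) space of patient identifiers.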

Seugé underscored patient agency: opt-ins are granular, revocable, and transparent, mirroring her consultations where data stewardship precedes diagnostics. This parity fosters reciprocity—patients entrust narratives, assured of containment. Talpaert complemented with probabilistic safeguards: models calibrate uncertainty, flagging low-confidence inferences for manual override, thus preserving therapeutic latitude.

Their synergy extends to error ecosystems: post-deployment monitoring aggregates anonymized feedback, fueling refinements that eclipse isolated incidents. Seugé’s advocacy for interdisciplinary loops—developers shadowed by clinicians—ensures evolutions honor human frailties, not exacerbate them. As Talpaert reflected, AI’s potency lies in amplification: augmenting discernment without supplanting it, yielding consultations where empathy flourishes amid efficiency.

In unveiling this assistant, Seugé and Talpaert not only launch a product but ignite a paradigm—AI as steward, not sovereign, in medicine’s sacred dialogues.


[DotAI2024] Gael Varoquaux – Streamlining Tabular Data for ML Readiness

Gael Varoquaux, Inria research director and scikit-learn co-founder, championed data alchemy at DotAI 2024. Advising Probabl while helming the Soda team, Varoquaux tackled tabular toil—the unsung drudgery eclipsing AI glamour. His spotlight on Skrub, a nascent library, vows to eclipse wrangling woes, funneling more cycles toward modeling insights.

Alleviating the Burden of Tabular Taming

Varoquaux underscored tables’ ubiquity: organizational goldmines in healthcare and logistics, yet mired in heterogeneity—strings, numerics, outliers demanding normalization. Scikit-learn’s 100M+ downloads dwarf PyTorch’s, underscoring preparation’s primacy; pandas reigns not for prophecy, but plumbing.

Deep learning faltered here: trees outshine nets on sparse, categorical sprawls. Skrub intervenes with ML-infused transformers: automated imputation via neighbors, outlier culling sans thresholds, encoding that fuses categoricals with targets for richer signals.

Varoquaux showcased dirty-to-gleaming transformations: messy merges resolved via fuzzy matching, strings standardized through embeddings—slashing manual heuristics.
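The fuzzy-matching idea can be illustrated with the standard library alone. Skrub’s own string encoders are far richer, so treat this as a sketch of the concept, not the library’s algorithm; `fuzzy_join` is a hypothetical helper name.

```python
import difflib

def fuzzy_join(dirty_keys, reference, cutoff=0.6):
    """Map messy string keys onto a clean reference vocabulary.

    For each dirty key, pick the closest reference string above the
    similarity cutoff, or None when nothing is close enough.
    """
    mapping = {}
    for key in dirty_keys:
        hits = difflib.get_close_matches(key, reference, n=1, cutoff=cutoff)
        mapping[key] = hits[0] if hits else None
    return mapping

print(fuzzy_join(["Franse", "Germny"], ["France", "Germany", "Spain"]))
# → {'Franse': 'France', 'Germny': 'Germany'}
```

Once keys are standardized this way, an ordinary exact join (pandas `merge`) completes the messy merge.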

Bridging Data Frames to Predictive Pipelines

Skrub’s API mirrors pandas fluidity, yet weaves ML natively: multi-table joins with learned aggregations, pipelines composable into scikit-learn estimators for holistic optimization. Graphs underpin reproducibility—reapply transformations on fresh inflows, parallelizing recomputes.

Open-source ethos drives: Inria’s taxpayer-fueled labors spin to Probabl for acceleration, inviting contributions to hasten maturity. Varoquaux envisioned production graphs: optimized for sparsity, caching intermediates to slash latencies.

This paradigm—cognitive relief via abstraction—erodes engineer-scientist divides, liberating tabular troves for AI’s discerning gaze. Skrub, he averred, heralds an epoch where preparation propels, not paralyzes, discovery.


[DotAI2024] Pierre Stock – Unleashing Edge Agents with Compact Powerhouses

Pierre Stock, VP of Science Operations at Mistral AI and a vanguard in efficient deployment, dissected edge AI’s promise at DotAI 2024. From Meta’s privacy-preserving federated learning to Mistral’s inaugural hire, Stock champions compact models—1-3B parameters—that rival behemoths in latency-bound realms like mobiles and wearables, prioritizing confidentiality and responsiveness.

Sculpting Efficiency in Constrained Realms

Stock introduced the Ministral family: 3B and 8B variants, leaner than Llama-3’s 8B kin yet surpassing it on coding benchmarks via native function calling. Pixtral 12B, a vision-text hybrid, outpaces Llama-3-Vision 90B in captioning, underscoring scale’s diminishing returns for edge viability.

Customization reigns: fine-tuning on domain corpora—legal tomes or medical scans—tailors inference without ballooning footprints. Stock advocated speculative decoding and quantization—4-bit weights halving memory—to squeeze sub-second latencies on smartphones.
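The 4-bit idea can be shown with a toy symmetric quantizer. Real deployments use grouped or calibration-aware schemes, so this is only a sketch of why 4-bit weights shrink memory to a quarter of fp16’s footprint while keeping reconstruction error bounded by the quantization step.

```python
def quantize_4bit(weights):
    """Symmetric 4-bit quantization of a weight vector.

    Maps floats to integers in [-7, 7] with one shared scale;
    each weight then needs 4 bits instead of 16.
    """
    scale = max(abs(w) for w in weights) / 7 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the 4-bit codes."""
    return [v * scale for v in q]

w = [0.05, -0.7, 0.31, 0.0]
q, s = quantize_4bit(w)
approx = dequantize(q, s)
print(q)  # small integers in [-7, 7]
print(max(abs(a - b) for a, b in zip(w, approx)) < s)  # error < one step
```

The accuracy-memory trade is visible directly: the reconstruction error of every weight stays below one quantization step `s`.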

Agents thrive here: function calling, where models invoke tools via JSON schemas, conserves tokens—each call standing in for thousands of tokens of context—enabling tool orchestration sans exhaustive prompts.
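The JSON-schema pattern can be sketched as a tiny dispatcher. The `TOOLS` registry and `get_weather` tool are hypothetical, and this is not Mistral’s API; the point is only the shape of the exchange: the model emits a compact JSON call, the runtime validates it against the tool’s declared schema and executes it.

```python
import json

# Hypothetical tool registry: name → (parameter schema, implementation).
TOOLS = {
    "get_weather": {
        "schema": {"type": "object",
                   "properties": {"city": {"type": "string"}},
                   "required": ["city"]},
        "fn": lambda args: f"18°C and clear in {args['city']}",
    },
}

def dispatch(model_output):
    """Parse a model's JSON tool call, check required args, execute."""
    call = json.loads(model_output)
    tool = TOOLS[call["name"]]
    for field in tool["schema"]["required"]:
        if field not in call["arguments"]:
            raise ValueError(f"missing argument: {field}")
    return tool["fn"](call["arguments"])

# The model emits a short call instead of carrying the data in context:
print(dispatch('{"name": "get_weather", "arguments": {"city": "Paris"}}'))
# → 18°C and clear in Paris
```

Chaining is then just feeding each tool result back to the model until it emits a final answer instead of another call.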

Orchestrating Autonomous Edge Ecosystems

Stock demoed Le Chat’s agentic scaffolding: high-level directives trigger context retrieval and tool chains, like calendaring via API handoffs. Native chaining—parallel tool summons—amplifies autonomy, from SQL queries to transaction validations.

Mistral’s platform simplifies: select models, infuse instructions, connect externalities—yielding JSON-formatted outputs for seamless integration. This modularity, Stock asserted, demystifies agency: no arcane rituals, just declarative intents yielding executable flows.

Future vistas: on-device personalization, where federated updates hone models sans data exodus. Stock urged experimentation—build agents atop Ministral, probe boundaries—heralding an era where intelligence permeates pockets, unhindered by clouds.


[DotAI2024] Neil Zeghidour – Forging Multimodal Foundations for Voice AI

Neil Zeghidour, co-founder and Chief Modeling Officer at Kyutai, demystified multimodal language models at DotAI 2024. Transitioning from Google DeepMind’s generative audio vanguard—pioneering text-to-music APIs and neural codecs—to Kyutai’s open-science bastion, Zeghidour chronicled Moshi’s genesis: the inaugural open-source, real-time voice AI blending text fluency with auditory nuance.

Elevating Text LLMs to Sensory Savants

Zeghidour contextualized text LLMs’ ubiquity—from translation relics to coding savants—yet lamented their sensory myopia. True assistants demand perceptual breadth: visual discernment, auditory acuity, and generative expressivity like image synthesis or fluid discourse.

Moshi embodies this fusion, channeling voice bidirectionally with duplex latency under 200ms. Unlike predecessors—Siri’s scripted retorts or ChatGPT’s turn-taking delays—Moshi interweaves streams, parsing interruptions sans artifacts via multi-stream modeling: discrete tokens for phonetics, continuous for prosody.

This architecture, Zeghidour detailed, disentangles content from timbre, enabling role-aware training. Voice actress Alice’s emotive recordings—whispers to cowboy drawls—seed synthetic dialogues, yielding hundreds of thousands of hours from which Moshi learns conversational deference, ceding the floor fluidly.

Unveiling Technical Ingenuity and Open Horizons

Zeghidour dissected Mimi, Kyutai’s streaming codec: outperforming FLAC in fidelity while slashing bandwidth, it encodes raw audio into manageable tokens for LLM ingestion. Training on vast, permissioned corpora—podcasts, audiobooks—Moshi masters accents, emotions, and interruptions, rivaling human cadence.

Challenges abounded: duplexity’s echo cancellation, prosody’s subtlety. Yet, open-sourcing weights, code, and a 60-page treatise democratizes replication, from MacBook quantization to commercial scaling.

Zeghidour’s Moshi-Moshi vignette hinted at emergent quirks—self-dialogues veering philosophical—while inviting scrutiny via Twitter. Kyutai’s mandate: propel voice agents through transparency, fostering adoption in research and beyond.

In Moshi, Zeghidour glimpsed assistants unbound by text’s tyranny, conversing as kin—a sonic stride toward AGI’s empathetic embrace.


[DotAI2024] Romain Huet and Katia Gil Guzman – Pioneering AI Innovations at OpenAI

Romain Huet and Katia Gil Guzman, stalwarts of OpenAI’s Developer Experience team, charted the horizon of AI integration at DotAI 2024. Huet, Head of Developer Experience with roots at Stripe and Twitter, alongside Guzman—a solutions architect turned advocate for scalable tools—illuminated iterative deployment’s ethos. Their dialogue unveiled OpenAI’s trajectory from GPT-3’s nascent API to multimodal frontiers, empowering builders to conjure native AI paradigms.

From Experimentation to Ecosystem Maturity

Huet reminisced on GPT-3’s 2020 launch: an API inviting tinkering yielded unforeseen gems like AI Dungeon’s narrative weaves or code autocompletions. This exploratory ethos, he emphasized, birthed a vibrant ecosystem—now boasting Assistants API for persistent threads and fine-tuning for bespoke adaptations.

Guzman delved into Assistants’ evolution: function calling bridges models to externalities, orchestrating tools like databases or calendars sans hallucination pitfalls. Retrieval threads embed knowledge bases, fostering context-aware dialogues that scale from prototypes to enterprises.

Their synergy underscored OpenAI’s research-to-product cadence: iterative releases, from GPT-4’s multimodal prowess to o1’s reasoning chains, democratize AGI pursuits. Huet spotlighted the Pioneers Program, which partners with select founders on custom fine-tunes, accelerating innovation while gleaning real-world insights.

Multimodal Horizons and Real-Time Interactions

Guzman demoed Realtime API’s alchemy: low-latency voice pipelines fuse speech-to-text with tool invocation, enabling immersive exchanges—like querying cosmic data mid-conversation, visualizing trajectories via integrated visuals. Audio’s debut heralds vision’s integration, birthing interfaces that converse fluidly across senses.

Huet envisioned this as interface reinvention: beyond text, agents navigate worlds, leveraging GPT-4’s perceptual depth for grounded actions. Early adopters, he noted, craft speech-to-speech odysseys—piloting virtual realms or debugging via vocal cues—portending conversational computing’s renaissance.

As Paris beckons with a forthcoming office, Huet and Guzman rallied the French tech vanguard: leverage these primitives to reforge software legacies into intuitive symphonies. Their clarion: wield this vanguard toolkit to author humanity’s AGI narrative.

Forging the Next Wave of AI Natives

Huet’s closing evoked a collaborative odyssey: developers as AGI co-pilots, surfacing use cases that refine models iteratively. Guzman’s parting wisdom: harness exclusivity—early access begets advantage in modality-rich vistas.

Together, they affirmed OpenAI’s mantle: not solitary savants, but enablers of collective ingenuity, where APIs evolve into canvases for tomorrow’s intelligences.


[DotAI2024] Ines Montani – Crafting Resilient NLP Systems in the Generative Era

Ines Montani, co-founder and CEO of Explosion AI, illuminated the pitfalls and potentials of natural language processing pipelines at DotAI 2024. As a core contributor to spaCy—an open-source NLP powerhouse—and Prodigy, a data annotation suite, Montani champions modular tools that blend human intuition with computational might. Her address critiqued the “prompts suffice” ethos, advocating hybrid architectures that fuse rules, examples, and generative flair for robust, production-viable solutions.

Harmonizing Paradigms for Enduring Intelligence

Montani traced instruction evolution: from rigid rules yielding brittle systems to supervised learning’s nuanced exemplars, now augmented by in-context prompts’ linguistic alchemy. Rules shine in clarity for novices yet crumble under data flux; examples infuse domain savvy but demand curation toil; prompts democratize prototyping yet hallucinate sans anchors.

The synergy? Layered pipelines where rules scaffold prompts, examples calibrate outputs, and LLMs infuse creativity. Montani showcased spaCy’s evolution: rule-based tokenizers ensure consistency, while generative components handle ambiguity, like entity resolution in noisy texts. This modularity mitigates drift, preserving fidelity across model swaps.
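The layered idea can be sketched with a toy pipeline: a rule component resolves the unambiguous cases deterministically, and a statistical component (stubbed here) only contributes what the rules left uncovered. This is an illustration of the pattern, not spaCy’s actual component API; all names are hypothetical.

```python
import re

def rule_component(text):
    """Deterministic layer: ISO dates are unambiguous, so use a rule."""
    return [(m.group(), "DATE")
            for m in re.finditer(r"\b\d{4}-\d{2}-\d{2}\b", text)]

def pipeline(text, stub_model):
    """Rules first; the model only fills in spans the rules missed."""
    entities = rule_component(text)
    covered = {span for span, _ in entities}
    entities += [(s, lbl) for s, lbl in stub_model(text)
                 if s not in covered]
    return entities

# Stub standing in for a learned NER component:
stub = lambda t: [("Berlin", "CITY")] if "Berlin" in t else []
print(pipeline("Invoice dated 2024-10-17, shipped from Berlin.", stub))
# → [('2024-10-17', 'DATE'), ('Berlin', 'CITY')]
```

Because the rule layer is deterministic, swapping the underlying model cannot change how dates are tokenized or extracted, which is exactly the drift mitigation described above.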

In industrial extraction—parsing resumes or contracts—Montani stressed data’s primacy: raw inputs reveal logic gaps, prompting refactorings that unearth “window-knocking machines”—flawed proxies mistaking correlation for causation. A chatbot querying calendars, she analogized, falters if oblivious to time zones; true utility demands holistic orchestration.

Fostering Modularity Amid Generative Hype

Montani cautioned against abstraction overload: leaky layers spawn brittle facades, where one-liners unravel on edge cases. Instead, embrace transparency—Prodigy’s active learning loops refine datasets iteratively, blending human oversight with AI proposals to curb over-reliance.

Retrieval-augmented generation (RAG) exemplifies balanced integration: LLMs query structured stores, yielding chat interfaces atop databases, supplanting clunky GUIs. Yet, Montani warned, context dictates efficacy; for analytical dives, raw views trump conversational veils.
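A minimal RAG loop over a structured store can be sketched as follows. The store, the keyword retriever, and the stubbed LLM are all hypothetical simplifications (real systems use embeddings and an actual model); the shape of the loop—retrieve, then generate only from the retrieved context—is the point.

```python
# Toy structured store standing in for a database table:
STORE = [
    {"id": 1, "topic": "refunds", "text": "Refunds are issued within 14 days."},
    {"id": 2, "topic": "shipping", "text": "Shipping takes 3-5 business days."},
]

def retrieve(query, store):
    """Crude keyword retrieval: rows whose topic appears in the query."""
    terms = set(query.lower().split())
    return [row for row in store if terms & set(row["topic"].split())]

def answer(query, llm, store=STORE):
    """Retrieve context, then let the LLM answer only from it."""
    context = retrieve(query, store)
    prompt = f"Context: {context}\nQuestion: {query}"
    return llm(prompt, context)

# Stub LLM: grounded in context, refuses when retrieval is empty.
stub_llm = lambda prompt, ctx: ctx[0]["text"] if ctx else "I don't know."
print(answer("how do refunds work", stub_llm))
# → Refunds are issued within 14 days.
```

The refusal branch illustrates the anchoring Montani calls for: with no retrieved context, the system declines rather than hallucinating.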

Her ethos: interrogate intent—who wields the tool, what risks lurk? Surprise greets data dives, unveiling bespoke logics that generative magic alone can’t conjure. Efficiency, privacy, and modularity—spaCy’s hallmarks—thwart big-tech monoliths, empowering bespoke ingenuity.

In sum, Montani’s blueprint rejects compromise: generative AI amplifies, not supplants, principled engineering, birthing interfaces that endure and elevate.


[DotAI2024] Marcin Detyniecki – Navigating Bias Toward Equitable AI Outcomes

Marcin Detyniecki, Group Chief Data Scientist and Head of AI Research at AXA, probed the ethical frontiers of artificial intelligence at DotAI 2024. Steering AXA’s R&D toward fair, interpretable ML amid insurance’s high-stakes decisions, Detyniecki dissected algorithmic bias through predictive justice lenses. His exploration grappled with AI’s paradoxical promise: a “black box” oracle that, if harnessed judiciously, could forge impartial futures despite inherent opacity.

Unmasking Inherent Prejudices in Decision Engines

Detyniecki commenced with COMPAS, a U.S. recidivism predictor that flagged disproportionate risks for Black defendants, igniting bias debates. Yet, he challenged snap judgments: human intuitions, too, falter—his own unease at a “shady” visage mirroring the tool’s contested outputs. This duality reveals bias as endemic, not algorithmic anomaly; data mirrors societal skews, amplifying inequities unless confronted.

In insurance, parallels abound: pricing models risk entrenching disparities by correlating proxies like zip codes with peril, sidelining root causes. Detyniecki advocated reconstructing “sensitive variables”—demographics or vulnerabilities—within models to enforce equity, inverting the blind-justice archetype. Justice, he posited, demands vigilant oversight, not ignorance, to calibrate decisions across strata.

Fairness metrics proliferate—demographic parity, equalized odds—yet clash irreconcilably: precision for individuals versus solidarity in groups. Detyniecki’s Fairness Compass, an open GitHub toolkit, simulates trade-offs, logging rationales for transparency. This framework recasts metrics as tunable dials, enabling stakeholders to align algorithms with values, be it meritocracy or diversity.
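The clash between metrics can be made concrete with a toy example. This is an illustration of the definitions, not the Fairness Compass implementation: the same set of predictions satisfies demographic parity exactly while violating equalized odds.

```python
def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between groups A and B."""
    rate = lambda g: (sum(p for p, grp in zip(preds, groups) if grp == g)
                      / groups.count(g))
    return abs(rate("A") - rate("B"))

def equalized_odds_gap(preds, labels, groups):
    """Absolute difference in true-positive rates between groups A and B."""
    def tpr(g):
        pos = [p for p, y, grp in zip(preds, labels, groups)
               if grp == g and y == 1]
        return sum(pos) / len(pos)
    return abs(tpr("A") - tpr("B"))

preds  = [1, 0, 1, 1, 0, 1]
labels = [1, 0, 1, 0, 1, 1]
groups = ["A", "A", "A", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))      # → 0.0 (parity holds)
print(equalized_odds_gap(preds, labels, groups))  # → 0.5 (odds violated)
```

Both groups receive positive predictions at the same rate, yet qualified members of group B are recognized only half as often—precisely the irreconcilability Detyniecki describes, and why the metric choice is a value judgment rather than a technicality.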

Architecting Transparent Pathways to Just Applications

Detyniecki unveiled AXA’s causal architectures, embedding interventions to disentangle correlations from causations. By modeling “what-ifs”—altering features sans sensitive ties—models simulate equitable scenarios, outperforming ad-hoc debiasing. In hiring analogies, this yields top talent sans gender skew; in premiums, it mutualizes risks across cohorts, balancing acuity with solidarity.

Challenges persist: metric incompatibility demands philosophical reckoning, and sensitive data access invites misuse. Detyniecki urged guarded stewardship—reconstructing attributes internally to audit without exposure—ensuring AI amplifies equity, not erodes it.

Ultimately, Detyniecki affirmed AI’s redemptive arc: though veiled, its levers, when pulled ethically, illuminate fairer horizons. Trust, he concluded, bridges the chasm—humans guiding machines toward benevolence.
