Adaption Labs, Founded by Ex-Cohere Leaders, Raises $50M Seed Round
Sara Hooker, a prominent AI researcher known for pushing to make AI systems cheaper to run and less computationally demanding, is striking out on her own.
Previously vice president of research at the AI firm Cohere, with extensive prior experience at Google DeepMind, she has secured $50 million in seed funding for her new startup, Adaption Labs.
Together with co-founder Sudip Roy, formerly Cohere’s director of inference computing, she is building AI systems that consume significantly less computing power and are cheaper to operate than most of today’s top-tier AI models. The pair also want their models to draw on a range of strategies to adapt to whatever task they are given, which is reflected in the company’s name.
The round is led by Emergence Capital Partners, with additional backing from Mozilla Ventures, the venture capital firm Fifty Years, Threshold Ventures, Alpha Intelligence Capital, e14 Fund, and Neo. Adaption Labs, which is headquartered in San Francisco, declined to disclose its valuation following the round.
Speaking to Fortune, Hooker said she wants to build AI models that can learn continuously without costly retraining or fine-tuning, and without the intensive prompt engineering and context management that enterprises typically rely on to tailor AI to their own applications.
Enabling models to learn continuously is one of the most pressing unsolved problems in artificial intelligence. Hooker called it “probably the most important problem that I’ve worked on.”
Adaption Labs is a direct challenge to the dominant view in the AI sector that improving AI capabilities requires scaling up large language models (LLMs) and training them on ever-larger datasets. As major technology companies pour billions into ever-bigger training runs, Hooker argues the strategy is yielding diminishing returns. “Most labs won’t quadruple the size of their model each year, mainly because we’re seeing saturation in the architecture,” she said.
According to Hooker, the AI industry has reached a critical “reckoning point,” where further advancements will not stem from merely enlarging models but from designing systems that can efficiently and economically adjust to particular tasks.
Adaption Labs joins a growing wave of “neolabs,” new frontier AI laboratories inspired by the success of established players like OpenAI, Anthropic, and Google DeepMind and exploring novel AI architectures, including ones aimed at continuous learning. Jerry Tworek, a senior OpenAI researcher, recently left to found Core Automation, where he is pursuing advanced AI techniques for continuous learning. David Silver, a leading Google DeepMind researcher, departed last month to start Ineffable Intelligence, which focuses on reinforcement learning approaches that let AI learn from its own actions rather than from fixed datasets, potentially enabling continuous adaptation in some settings.
Hooker’s company organizes its research and development around three “pillars”: adaptive data, in which AI systems generate and process the data they need to solve a problem in real time rather than depending on vast, static training datasets; adaptive intelligence, which scales computational resources automatically to match the complexity of the task; and adaptive interfaces, which learn from user interactions to improve the system’s performance.
Since her time at Google, Hooker has built a reputation in AI circles as a critic of the “scale is all you need” philosophy embraced by many of her peers. Her influential 2020 paper, “The Hardware Lottery,” argued that AI ideas often succeed or fail based on how well they fit the prevailing hardware rather than on their conceptual merit. More recently, her paper “On the Slow Death of Scaling” showed how smaller models, given better training methods, can outperform much larger ones.
During her time at Cohere, Hooker led the Aya initiative, partnering with 3,000 computer scientists across 119 nations to deliver cutting-edge AI functionalities to numerous languages underserved by mainstream frontier models—all achieved with relatively modest model sizes. This project illustrated how innovative data curation and training methodologies can offset the need for sheer scale.
Among the ideas Adaption Labs is exploring is “gradient-free learning.” Today’s AI models are enormous neural networks made up of billions of digital neurons. Conventional training uses gradient descent, which works a bit like a blindfolded hiker feeling the slope underfoot and taking tentative steps downhill. The system incrementally tweaks billions of internal parameters, known as “weights,” that determine how much each neuron’s output is influenced by the neurons connected to it, and after each adjustment it checks whether its answers have become more accurate. The process demands immense computational resources and can take weeks or months. Once training is complete, the weights are frozen.
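For readers who want a concrete picture, the sketch below shows gradient descent on a toy model in Python. The dataset, model size, and settings are invented purely for illustration and have nothing to do with Adaption Labs’ actual systems.

```python
# A minimal, illustrative gradient-descent loop on a toy linear model (NumPy).
# All numbers here are made up for the example.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))             # toy training data
true_w = rng.normal(size=8)
y = X @ true_w + 0.1 * rng.normal(size=256)

w = np.zeros(8)                           # the model's "weights," initially unset
lr = 0.05                                 # step size: how far to move downhill each step

for step in range(500):
    error = X @ w - y                     # model output vs. the right answer
    loss = np.mean(error ** 2)            # "evaluate progress toward accuracy"
    grad = 2 * X.T @ error / len(y)       # slope of the loss: which way is downhill
    w -= lr * grad                        # tweak the weights slightly and repeat

# Once training ends, these weights are frozen and reused unchanged for every query.
```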
To specialize a model for a particular application, users often resort to fine-tuning: additional training on a smaller, targeted dataset, typically thousands to tens of thousands of examples, that further adjusts the weights. This remains resource-intensive and can cost millions of dollars.
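Fine-tuning reuses the same machinery, just starting from already-trained weights and running it over a much smaller, task-specific dataset. The sketch below continues the toy example above; again, the data and settings are invented.

```python
# An illustrative fine-tuning pass: the same gradient-descent loop, restarted from
# "pretrained" weights on a small task-specific dataset (all values invented).
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=8)                    # stand-in for weights learned during pre-training
X_task = rng.normal(size=(64, 8))         # small, targeted dataset for the new task
y_task = X_task @ (w + 0.3) + 0.05 * rng.normal(size=64)

lr = 0.01                                 # smaller steps, to avoid erasing what was already learned
for step in range(200):
    error = X_task @ w - y_task
    grad = 2 * X_task.T @ error / len(y_task)
    w -= lr * grad                        # fine-tuning still rewrites the weights themselves
```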
The usual alternative is to write careful instructions, or prompts, that steer the model. Hooker dismisses this as “prompt acrobatics,” noting that such prompts often break when a model is updated and have to be rewritten.
Her ultimate aim is “to eliminate prompt engineering” entirely from the process.
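The prompt work she is describing looks something like the toy template below. The product, wording, and field names are hypothetical; the point is the brittleness, since a new model version often forces this text to be re-tuned by hand.

```python
# A hypothetical example of task behavior encoded in a hand-written prompt rather
# than in the model itself. Every name and rule here is invented for illustration.
SUPPORT_PROMPT = """You are a billing-support assistant for ACME Corp.
Always mention the customer's plan name, never promise refunds over $100,
and answer in at most three sentences.

Customer message: {message}
"""

def build_prompt(message: str) -> str:
    # If the underlying model is swapped or updated, this wording often has to be
    # rewritten by hand, which is the brittleness Hooker wants to engineer away.
    return SUPPORT_PROMPT.format(message=message)

print(build_prompt("Why was I charged twice this month?"))
```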
Gradient-free learning is meant to sidestep the drawbacks of both fine-tuning and prompt engineering. Rather than rewriting the model’s core weights through lengthy training, Adaption Labs adjusts behavior at inference time, the phase in which the model generates responses to queries. The underlying weights stay intact, but the system adapts its outputs to the context.
“How do you update a model without touching the weights?” Hooker asked, pointing to architectural innovations that use compute far more efficiently. One is “on-the-fly merging,” in which the system draws on a library of specialized adapters, compact models trained on niche datasets, that modulate the main model’s responses. Which adapters are used is decided dynamically, based on the query.
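Adaption Labs has not published how its merging works, so the sketch below is only a generic illustration of the idea: a frozen base model, a small library of adapters, and a made-up keyword rule that decides per query which adapters to blend in.

```python
# A generic sketch of per-query adapter merging. The routing rule, adapter format,
# and all numbers are assumptions for illustration, not Adaption Labs' design.
import numpy as np

rng = np.random.default_rng(2)
DIM = 16
base_weights = rng.normal(size=(DIM, DIM))        # frozen base model (one layer, for simplicity)

# Each adapter is a compact weight delta plus a keyword signature used for routing.
adapters = {
    "legal":   {"delta": 0.05 * rng.normal(size=(DIM, DIM)), "keywords": {"contract", "clause"}},
    "medical": {"delta": 0.05 * rng.normal(size=(DIM, DIM)), "keywords": {"dosage", "symptom"}},
}

def route(query: str) -> np.ndarray:
    """Merge the deltas of every adapter whose keywords appear in the query."""
    merged = np.zeros((DIM, DIM))
    tokens = set(query.lower().split())
    for adapter in adapters.values():
        if adapter["keywords"] & tokens:
            merged += adapter["delta"]
    return merged

def infer(query: str, activations: np.ndarray) -> np.ndarray:
    # The base weights never change; the adapter delta is applied only for this query.
    effective = base_weights + route(query)
    return effective @ activations

out = infer("summarize the indemnity clause in this contract", rng.normal(size=DIM))
```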
Another technique is “dynamic decoding,” which adjusts the model’s output probabilities to fit the task without altering the underlying weights. Decoding is the process by which a model chooses among multiple plausible responses.
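As a rough picture rather than the company’s actual method, dynamic decoding can be thought of as reshaping the model’s raw scores before each token is chosen, as in this minimal sketch with invented task settings.

```python
# A minimal sketch of decode-time adjustment: logits are rescaled and biased per task,
# while the model's weights are left untouched. All values are invented.
import numpy as np

rng = np.random.default_rng(0)

def decode(logits: np.ndarray, temperature: float, bias: dict) -> int:
    adjusted = logits / temperature               # low temperature -> safer, more predictable picks
    for token_id, boost in bias.items():
        adjusted[token_id] += boost               # nudge task-relevant tokens up or down
    probs = np.exp(adjusted - adjusted.max())     # softmax over the adjusted scores
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))   # sample among the plausible responses

logits = np.array([2.0, 1.5, 0.3, -1.0])          # pretend scores for four candidate next tokens
# A cautious task (say, legal drafting) might use a low temperature and suppress token 2 entirely.
print(decode(logits, temperature=0.5, bias={2: -10.0}))
```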
Hooker emphasized, “We’re moving away from it just being a model. This is part of the profound notion—it’s based on the interaction, and a model should change in real time based on what the task is.”
These techniques, she argues, change the economics of AI development. “The most costly compute is pre-training compute, largely because it is a massive amount of compute, a massive amount of time,” she noted. Inference compute, by contrast, is far more efficient per unit.
Roy, Adaption Labs’ CTO, brings deep expertise in making AI run efficiently. “My co-founder makes GPUs go extremely fast, which is important for us because of the real-time component,” Hooker said.
With the seed funding, Adaption Labs plans to hire more AI researchers, engineers, and designers, with a focus on building new user interfaces for AI that go beyond the chat window most systems rely on today.
