Attention ISN'T all you need?! New Qwen3 variant Brumby-14B-Base leverages Power Retention technique
When the transformer architecture was introduced in 2017 in the now-seminal Google paper "Attention Is All You Need," it became an immediate cornerstone of modern artificial intelligence.
Every major large language model (LLM), from OpenAI's GPT series to Anthropic's Claude, Google's Gemini, and Meta's Llama, has been built on some variation of its central mechanism: attention, the mathematical operation that allows a model to look back across its entire input and decide what information matters most.
Eight years later, the same mechanism that defined AI's golden age is now showing its limits. Attention is powerful, but it is also expensive: its computational and memory costs scale quadratically with context length, creating an increasingly unsustainable bottleneck for both research and industry. As models aim to reason across documents, codebases, or video streams lasting hours or days, attention becomes the architecture's Achilles' heel.
On October 28, 2025, the little-known AI startup Manifest AI announced a radical alternative. Its new model, Brumby-14B-Base, is a retrained variant of Qwen3-14B-Base, one of the leading open-source transformer models.
But while many variants of Qwen have already been trained, Brumby-14B-Base is novel in that it abandons attention altogether.
Instead, Brumby replaces those layers with a novel mechanism called Power Retention: a recurrent, hardware-efficient architecture that stores and updates information over arbitrarily long contexts without the quadratic memory growth of attention.
Trained at a stated cost of just $4,000, the 14-billion-parameter Brumby model performs on par with established transformer models like Qwen3-14B and GLM-4.5-Air, achieving near-state-of-the-art accuracy on a range of reasoning and comprehension benchmarks.
From Attention to Retention: The Architectural Shift
The core of Manifest AI's innovation lies in what it calls the Power Retention layer.
In a conventional transformer, every token computes a set of queries (Q), keys (K), and values (V), then performs a matrix operation that measures the similarity between every token and every other token, essentially a full pairwise comparison across the sequence.
That is what gives attention its flexibility, but also what makes it so costly: processing a sequence twice as long takes roughly four times the compute and memory.
Power Retention keeps the same inputs (Q, K, V), but replaces the global similarity operation with a recurrent state update.
Each layer maintains a memory matrix S, which is updated at each time step according to the incoming key, value, and a learned gating signal.
The process looks more like an RNN (recurrent neural network) than a transformer: instead of recomputing attention over the full context, the model continually compresses past information into a fixed-size latent state.
This means the computational cost of Power Retention does not grow with context length. Whether the model is processing 1,000 or 1,000,000 tokens, the per-token cost stays constant.
That property alone, constant-time per-token computation, marks a profound departure from transformer behavior.
At the same time, Power Retention preserves the expressive power that made attention successful. Because the recurrence involves tensor powers of the input (hence the name "power retention"), it can represent higher-order dependencies between past and present tokens.
The result is an architecture that can in principle retain long-term dependencies indefinitely, while remaining as efficient as an RNN and as expressive as a transformer.
Retraining, Not Rebuilding
Perhaps the most striking aspect of Brumby-14B's training process is its efficiency. Manifest AI trained the model for only 60 hours on 32 Nvidia H100 GPUs, at a cost of roughly $4,000, less than 2% of what a conventional model of this scale would cost to train from scratch.
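That figure is at least roughly self-consistent with the raw compute: 60 hours on 32 GPUs is 1,920 GPU-hours, and at an assumed rate of around $2 per H100-hour (typical on-demand cloud pricing; the rate Manifest actually paid is not stated), that works out to about $4,000.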
However, since it relied on a transformer-based model as its starting point, it is safe to say that this advance alone won't end the transformer era of AI.
As Jacob Buckman, founder of Manifest AI, clarified in an email to VentureBeat: "The ability to train for $4,000 is indeed only possible when leveraging an existing transformer model," he said. "Brumby could not be trained from scratch for that cost."
Still, Buckman emphasized the significance of that result: "The reason this is important is that the ability to build on the weights of the previous generation of model architectures is a critical accelerant for the adoption of a new modeling paradigm."
He argues this demonstrates how attention-free systems can catch up to transformer performance "for orders-of-magnitude less" investment.
In the loss curves released by Manifest AI, Brumby's training loss quickly converges to that of the Qwen3 baseline within 3,000 training steps, even as the architecture diverges significantly from its transformer origins.
Although Brumby-14B-Base began life as Qwen3-14B-Base, it did not stay identical for long. Manifest AI fundamentally altered Qwen3's architecture by removing its attention layers (the mathematical engine that defines how a transformer model processes information) and replacing them with its new "power retention" mechanism. This modification restructured the model's internal wiring, effectively giving it a new brain while preserving much of its prior knowledge.
Because of that architectural swap, the existing Qwen3 weights no longer fit perfectly. They had been trained to operate within a transformer's attention dynamics, not the new retention-based system. As a result, the Brumby model initially "forgot" how to apply some of its learned knowledge effectively. The retraining process, about 3,000 steps of additional learning, served to recalibrate those weights, aligning them with the power retention framework without having to start from zero.
A helpful way to think about this is to imagine taking a world-class pianist and handing them a guitar. They already understand rhythm, harmony, and melody, but their hands must learn entirely new patterns to produce the same music. Similarly, Brumby had to relearn how to use its existing knowledge through a new computational instrument. Those 3,000 training steps were, in effect, its crash course in guitar lessons.
By the end of this short retraining phase, Brumby had regained its full performance, reaching the same accuracy as the original Qwen3 model. That rapid recovery is what makes the result so significant: it shows that an attention-free system can inherit and adapt the capabilities of a transformer model with only a fraction of the training time and cost.
The benchmark progression plots show a similar pattern: the model rapidly approaches its target accuracy on core evaluations like GSM8K, HellaSwag, and MMLU after just a few thousand steps, matching and even slightly surpassing Qwen3 on several tasks.
Benchmarking the Brumby
Across standard evaluation tasks, Brumby-14B-Base consistently performs at or near parity with transformer baselines of a similar scale.
| Task | Brumby-14B | Qwen3-14B | GLM-4.5-Air | Nemotron Nano (12B) |
|---|---|---|---|---|
| ARC | 0.89 | 0.94 | 0.92 | 0.93 |
| GSM8K | 0.88 | 0.84 | 0.83 | 0.84 |
| GSM8K (Platinum) | 0.87 | 0.88 | 0.85 | 0.87 |
| HellaSwag | 0.77 | 0.81 | 0.85 | 0.82 |
| MATH | 0.62 | 0.54 | 0.47 | 0.26 |
| MBPP | 0.57 | 0.75 | 0.73 | 0.71 |
| MMLU | 0.71 | 0.78 | 0.77 | 0.78 |
| MMLU (Pro) | 0.36 | 0.55 | 0.51 | 0.53 |
While it lags slightly behind transformers on knowledge-heavy evaluations like MMLU-Pro, it matches or outperforms them on mathematical reasoning and long-context reasoning tasks, precisely where attention architectures tend to falter. This pattern reinforces the idea that recurrent or retention-based systems may hold a structural advantage for reasoning over extended temporal or logical dependencies.
Hardware Efficiency and Inference Performance
Brumby's power retention design offers another major advantage: hardware efficiency.
Because the state update involves only local matrix operations, inference can be carried out with linear complexity in sequence length.
Manifest AI reports that its fastest kernels, developed through its in-house CUDA framework Vidrial, can deliver hundreds-fold speedups over attention on very long contexts.
Buckman said the alpha-stage Power Retention kernels "achieve typical hardware utilization of 80–85%, which is higher than FlashAttention2's 70–75% or Mamba's 50–60%."
(Mamba is another emerging "post-transformer" architecture, developed by Carnegie Mellon researchers back in 2023, that, like Power Retention, seeks to eliminate the computational bottleneck of attention. It replaces attention with a state-space mechanism that processes sequences linearly, updating an internal state over time rather than comparing every token to every other one. This makes it far more efficient for long inputs, though it typically achieves lower hardware utilization than Power Retention in early tests.)
Both Power Retention and Mamba, he added, "expend meaningfully fewer total FLOPs than FlashAttention2 on long contexts, as well as far less memory."
According to Buckman, the reported 100× speedup comes from this combined improvement in utilization and computational efficiency, though he noted that "we have not yet stress-tested it on production-scale workloads."
Training and Scaling Economics
Perhaps no statistic in the Brumby release generated more attention than the training cost.
A 14-billion-parameter model, trained for $4,000, represents a two-order-of-magnitude reduction in the cost of foundation model development.
Buckman confirmed that the low cost reflects a broader scaling pattern. "Far from diminishing returns, we have found that ease of retraining improves with scale," he said. "The number of steps required to successfully retrain a model decreases with its parameter count."
Manifest has not yet validated the cost of retraining models at 700B parameters, but Buckman projected a range of $10,000–$20,000 for models of that magnitude, still far below transformer training budgets.
He also reiterated that this approach could democratize large-scale experimentation by allowing smaller research groups or companies to retrain or repurpose existing transformer checkpoints without prohibitive compute costs.
Integration and Deployment
According to Buckman, converting an existing transformer into a Power Retention model is designed to be straightforward.
"It's easy for any company that's already retraining, post-training, or fine-tuning open-source models," he said. "Simply pip install retention, change one line of your architecture code, and resume training where you left off."
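As a rough illustration of the kind of change being described, here is a hypothetical PyTorch block. The real API of the `retention` package is not documented in this article, so the `PowerRetention` import, class name, and constructor signature below are assumptions, shown only to convey the shape of the one-line swap.

```python
import torch.nn as nn
# from retention import PowerRetention   # assumed import; actual API not verified

class Block(nn.Module):
    def __init__(self, d_model: int = 1024, n_heads: int = 8):
        super().__init__()
        # Original transformer block: self-attention is the token mixer.
        self.mixer = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # The "one line" change would replace that mixer, e.g.:
        # self.mixer = PowerRetention(d_model, n_heads)   # hypothetical signature
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        h = self.norm1(x)
        mixed, _ = self.mixer(h, h, h)       # attention path; a retention mixer would
        x = x + mixed                        # consume h and return an h-shaped output
        return x + self.mlp(self.norm2(x))
```

Because the rest of the checkpoint (embeddings, MLPs, norms) is kept, only a short retraining run is then needed to recalibrate the weights to the new mixer, as described above.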
He added that after only a small number of GPU-hours, the model typically recovers its original performance, at which point it gains the efficiency benefits of the attention-free design.
"The resulting architecture will enable far faster long-context training and inference than before," Buckman noted.
On infrastructure, Buckman said the main Brumby kernels are written in Triton, compatible with both NVIDIA and AMD accelerators. Specialized CUDA kernels are also available through the team's in-house Vidrial framework. Integration with vLLM and other inference engines remains a work in progress: "We have not yet integrated Power Retention into inference engines, but doing so is a major ongoing initiative at Manifest."
As for distributed inference, Buckman dismissed concerns about instability: "We have not found this issue to be exacerbated in any way by our recurrent-state architecture. In fact, context-parallel training and GPU partitioning for multi-user inference both become significantly cleaner technically when using our approach."
Mission and Long-Term Vision
Beyond the engineering details, Buckman also described Manifest's broader mission. "Our mission is to train a neural network to model all human output," he said.
The team's goal, he explained, is to move beyond modeling "artifacts of intelligence" toward modeling "the intelligent processes that generated them." This shift, he argued, requires "fundamentally rethinking" how models are designed and trained, work that Power Retention represents only the beginning of.
The Brumby-14B release, he said, is "one step forward in a long march" toward architectures that can model thought processes continuously and efficiently.
Public Debate and Industry Reception
The launch of Brumby-14B sparked immediate discussion on X (formerly Twitter), where researchers debated the framing of Manifest AI's announcement.
Some, including Meta researcher Ariel (@redtachyon), argued that the "$4,000 foundation model" tagline was misleading, since the training involved reusing pretrained transformer weights rather than training from scratch.
"They shuffled around the weights of Qwen, fine-tuned it a bit, and called it 'training a foundation model for $4k,'" Ariel wrote.
Buckman responded publicly, clarifying that the initial tweet had been part of a longer thread explaining the retraining approach. "It's not like I was being deceptive about it," he wrote. "I broke it up into separate tweets, and now everyone is mad about the first one."
In a follow-up email, Buckman took a measured view of the controversy. "The end of the transformer era is not yet here," he reiterated, "but the march has begun."
He also acknowledged that the $4,000 claim, though technically accurate in context, had drawn attention precisely because it challenged expectations about what it costs to experiment at frontier scale.
Conclusion: A Crack in the Transformer's Wall?
The release of Brumby-14B-Base is more than an engineering milestone; it is a proof of concept that the transformer's dominance may finally face credible competition.
By replacing attention with power retention, Manifest AI has demonstrated that performance parity with state-of-the-art transformers is possible at a fraction of the computational cost, and that the long-context bottleneck can be broken without exotic hardware.
The broader implications are twofold. First, the economics of training and serving large models could shift dramatically, lowering the barrier to entry for open research and smaller organizations.
Second, the architectural diversity of AI models may expand again, reigniting theoretical and empirical exploration after half a decade of transformer monoculture.
As Buckman put it: "The end of the transformer era is not yet here. Our release is just one step forward in a long march toward the future."