Inside Ring-1T: Ant engineers solve reinforcement learning bottlenecks at trillion scale
Last Updated: October 26, 2025


China's Ant Group, an affiliate of Alibaba, detailed technical data about its new model, Ring-1T, which the company said is "the first open-source reasoning model with one trillion total parameters."

Ring-1T aims to compete with other reasoning models like GPT-5 and the o-series from OpenAI, as well as Google's Gemini 2.5. With the release of the latest model, Ant extends the geopolitical debate over who will dominate the AI race: China or the US. 

Ant Group said Ring-1T is optimized for mathematical and logical problems, code generation and scientific problem-solving. 

"With roughly 50 billion activated parameters per token, Ring-1T achieves state-of-the-art performance across multiple challenging benchmarks — despite relying solely on natural language reasoning capabilities," Ant said in a paper.

Ring-1T, which was first released in preview in September, adopts the same architecture as Ling 2.0 and was trained on the Ling-1T-base model the company released earlier this month. Ant said this allows the model to support up to 128,000 tokens.

To train a model as large as Ring-1T, researchers had to develop new methods to scale reinforcement learning (RL).

New training methods

Ant Group developed three "interconnected innovations" to support the RL and training of Ring-1T, a challenge given the model's size and the typically large compute requirements it entails. These three are IcePop, C3PO++ and ASystem.

IcePop removes noisy gradient updates to stabilize training without slowing inference. It helps eliminate catastrophic training-inference misalignment in RL. The researchers noted that when training models, particularly those using a mixture-of-experts (MoE) architecture like Ring-1T, there can often be a discrepancy in probability calculations. 

"This problem is particularly pronounced in the training of MoE models with RL due to the inherent usage of the dynamic routing mechanism. Moreover, in long CoT settings, these discrepancies can gradually accumulate across iterations and become further amplified," the researchers said. 

IcePop "suppresses unstable training updates through double-sided masking calibration."
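The paper does not show IcePop's implementation, but the idea of double-sided masking can be sketched as follows: a token only contributes to the gradient update if the ratio between its training-engine probability and its inference-engine probability stays inside a band, masking out divergence in both directions. The function name and the band thresholds here are illustrative assumptions, not Ant's actual code.

```python
def icepop_mask(p_train, p_infer, low=0.5, high=2.0):
    """Double-sided masking (illustrative sketch, not Ant's code):
    keep a token's gradient contribution only when the training/inference
    probability ratio stays inside [low, high]. Returns 0/1 weights to
    apply to per-token losses, masking divergence in BOTH directions."""
    mask = []
    for pt, pi in zip(p_train, p_infer):
        ratio = pt / pi
        mask.append(1.0 if low <= ratio <= high else 0.0)
    return mask

# The middle token's probabilities diverged too far between the two
# engines, so it contributes nothing to the update.
p_train = [0.30, 0.05, 0.20]
p_infer = [0.28, 0.20, 0.21]
print(icepop_mask(p_train, p_infer))  # -> [1.0, 0.0, 1.0]
```

Because the mask zeroes out both overestimated and underestimated tokens ("double-sided"), accumulated routing discrepancies in long chains of thought cannot push the policy update in either direction.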

The next new method the researchers had to develop is C3PO++, an improved version of the C3PO system that Ant previously established. The method manages how Ring-1T and other extra-large parameter models generate and process training examples, or what they call rollouts, so GPUs don't sit idle. 

It works by breaking rollouts into pieces that can be processed in parallel. One group is the inference pool, which generates new data, and the other is the training pool, which collects results to update the model. C3PO++ creates a token budget to control how much data is processed, ensuring GPUs are used efficiently.
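One way to picture the token-budget mechanism described above is a scheduler that fills each training iteration up to a fixed token cap and carries unfinished rollouts over to the next iteration, so neither pool waits on stragglers. This is a minimal sketch under stated assumptions; the function and variable names are hypothetical and the real C3PO++ is far more involved.

```python
from collections import deque

def schedule_rollouts(pending, token_budget):
    """Token-budget scheduling (illustrative sketch): pull rollouts from
    the inference pool's queue until the per-iteration token budget is
    exhausted; rollouts that don't fit stay queued for the next
    iteration instead of stalling the training pool."""
    train_batch, used = [], 0
    carry = deque()
    while pending:
        rollout_tokens = pending.popleft()
        if used + rollout_tokens <= token_budget:
            train_batch.append(rollout_tokens)
            used += rollout_tokens
        else:
            carry.append(rollout_tokens)  # resume in the next iteration
    return train_batch, carry

pending = deque([120, 300, 80, 500])  # token counts of queued rollouts
batch, carry = schedule_rollouts(pending, token_budget=400)
print(batch, list(carry))  # -> [120, 80] [300, 500]
```

The budget caps how much data each update consumes, which is what keeps GPU utilization steady when rollout lengths vary wildly, as they do in long chain-of-thought RL.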

The last new method, ASystem, adopts a SingleController+SPMD (Single Program, Multiple Data) architecture to enable asynchronous operations.  
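In a SingleController+SPMD design, one controller dispatches the same program to every worker, each operating on its own shard of data, and gathers results without blocking. The paper gives no code for ASystem; the asyncio sketch below only illustrates the general pattern, and all names in it are assumptions.

```python
import asyncio

async def worker(rank, step):
    """SPMD: every worker runs the same program on its own data shard
    (the sleep stands in for an async GPU kernel or RPC)."""
    await asyncio.sleep(0)
    return f"rank{rank}:step{step}"

async def controller(world_size, step):
    """Single controller: launch all replicas for this step and gather
    their results asynchronously, so no worker blocks the others."""
    tasks = [worker(r, step) for r in range(world_size)]
    return await asyncio.gather(*tasks)

results = asyncio.run(controller(world_size=4, step=1))
print(results)  # -> ['rank0:step1', 'rank1:step1', 'rank2:step1', 'rank3:step1']
```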

Benchmark results

Ant pointed Ring-1T to benchmarks measuring performance in mathematics, coding, logical reasoning and general tasks. They tested it against models such as DeepSeek-V3.1-Terminus-Thinking, Qwen-35B-A22B-Thinking-2507, Gemini 2.5 Pro and GPT-5 Thinking. 

In benchmark testing, Ring-1T performed strongly, coming in second to OpenAI's GPT-5 across most benchmarks. Ant said that Ring-1T showed the best performance among all the open-weight models it tested. 

The model posted a 93.4% score on the AIME 25 leaderboard, second only to GPT-5. In coding, Ring-1T outperformed both DeepSeek and Qwen.

"It indicates that our carefully synthesized dataset shapes Ring-1T's robust performance on programming applications, which forms a strong foundation for future endeavors on agentic applications," the company said. 

Ring-1T shows how much Chinese companies are investing in models 

Ring-1T is just the latest model from China aiming to dethrone GPT-5 and Gemini. 

Chinese companies have been releasing impressive models at a rapid pace since the shock launch of DeepSeek in January. Ant's affiliate Alibaba recently released Qwen3-Omni, a multimodal model that natively unifies text, image, audio and video. DeepSeek has also continued to improve its models and earlier this month released DeepSeek-OCR, a new model that reimagines how models process information. 

With Ring-1T and Ant's development of new methods to train and scale extra-large models, the battle for AI dominance between the US and China continues to heat up.