DeepSeek Unveils Its Most Powerful Open-source Innovation, Challenging GPT-5 and Gemini 3
Source: TMTPost (钛媒体APP)
2025-12-05 08:17:41

Source: DeepSeek official WeChat

Meanwhile, DeepSeek also stated that, compared with the recently released Kimi-K2-Thinking from the Chinese large-model developer Moonshot AI, DeepSeek V3.2 produces significantly shorter outputs, which greatly reduces computational overhead and user wait time. In agent benchmarks, V3.2 also outperformed other open-source models such as Kimi-K2-Thinking and MiniMax M2, making it the strongest open-source large model to date, with overall performance now extremely close to that of the top closed-source models.

Image from DeepSeek official WeChat

What’s even more noteworthy is V3.2’s performance in certain Q&A scenarios and general agent tasks. In a specific case involving travel advice, for example, V3.2 leveraged deep reasoning along with web crawling and search engine tools to provide highly detailed and accurate travel tips and recommendations. The latest API update for V3.2 also supports tool usage in “thinking mode” for the first time, greatly enriching the usefulness and breadth of answers users receive.

In addition, DeepSeek specifically emphasized that V3.2 was not specially trained on the tools featured in these evaluation datasets.

We’ve observed that while benchmark scores for large models keep climbing, these models often make basic factual errors in everyday interactions (a criticism leveled especially at GPT-5 upon its release). Against this backdrop, DeepSeek has emphasized with each update that it avoids relying solely on correct answers as a reward signal. As a result, it has not produced a so-called “super-intelligent brain” that looks clever in benchmarks yet fails at the simple tasks and questions that matter to ordinary users: a “low-EQ” AI agent.

Overcoming this challenge at a fundamental level—becoming a large model with both high IQ and high EQ—is the key to developing a truly versatile, reliable, and efficient AI agent. DeepSeek also believes that V3.2 can demonstrate strong generalization capabilities in real-world application scenarios.

To strike a balance between computational efficiency, strong reasoning, and agent performance, DeepSeek has implemented comprehensive optimizations across the training, integration, and application layers. According to its technical paper, V3.2 introduces DeepSeek Sparse Attention (DSA), which significantly reduces computational complexity in long-context scenarios while maintaining model performance.
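To give a sense of why sparse attention cuts long-context cost, here is a minimal toy sketch of the general idea: each query attends only to its top-k highest-scoring keys rather than all n of them, so the softmax and value aggregation shrink from O(n) to O(k) per query. This is a generic illustration only, not DeepSeek's actual DSA design; the function name, shapes, and selection rule are all assumptions for the example.

```python
import numpy as np

def topk_sparse_attention(q, K, V, k=4):
    """Toy top-k sparse attention for a single query vector q.

    Scores all keys, but runs the softmax and the weighted sum
    over only the k highest-scoring keys, instead of all n.
    """
    scores = K @ q                             # (n,) raw attention scores
    topk = np.argpartition(scores, -k)[-k:]    # indices of the k largest scores
    w = np.exp(scores[topk] - scores[topk].max())
    w /= w.sum()                               # softmax over the selected subset only
    return w @ V[topk]                         # weighted sum of the chosen values

rng = np.random.default_rng(0)
n, d = 64, 8                                   # 64 keys, dimension 8
K, V = rng.normal(size=(n, d)), rng.normal(size=(n, d))
q = rng.normal(size=d)
out = topk_sparse_attention(q, K, V, k=4)
print(out.shape)                               # (8,)
```

Real sparse-attention designs differ in how the subset is chosen (learned indexers, fixed patterns, blockwise selection), but the cost saving comes from the same step: never materializing attention weights over the full context.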

At the same time, to integrate reasoning capabilities into tool-using scenarios, DeepSeek has developed a new synthesis pipeline that enables systematic, large-scale generation of training data. This approach facilitates scalable agent post-training optimization, substantially improving generalization in complex, interactive environments as well as the model’s ability to follow instructions.

In addition, as mentioned earlier, V3.2 is also the first model from DeepSeek to incorporate reasoning into tool usage, greatly enhancing the model’s generalization capabilities.

If the focus of V3.2 is on “saying things that make sense and getting things done”—a balance-seeking approach for practical intelligent agents—then the positioning of the “Special Forces” V3.2 Speciale is to push the reasoning ability of open-source models to the limit and explore the boundaries of model capabilities through extended reasoning.

It’s worth noting that a major highlight of V3.2 Speciale is its integration of the theorem-proving capabilities of DeepSeek-Math-V2, the company’s most powerful mathematical model, released just a week earlier.

Math-V2 not only achieved gold-medal-level performance in the 2025 International Mathematical Olympiad and the 2024 China Mathematical Olympiad, but also outperformed Gemini 3 in the IMO-Proof Bench benchmark evaluation.

Moreover, in a similar vein to the approach discussed above, this mathematical model also strives to move beyond correct-answer reward mechanisms and the so-called “test-solver” mindset by adopting a self-verification process. In doing so, it seeks to break through current bottlenecks in AI’s deep reasoning, enabling large models to truly understand mathematics and logical derivation, and thereby to achieve more robust, reliable, and versatile theorem proving.

With its greatly enhanced reasoning abilities, V3.2 Speciale has achieved Gemini 3.0 Pro-level results in mainstream reasoning benchmarks. However, V3.2 Speciale’s performance advantages come at the cost of consuming a large number of tokens, which significantly increases its operational costs. As a result, it currently does not support tool calls or everyday conversation and writing, and is intended for research use only.

From OCR to Math-V2, then to V3.2 and V3.2 Speciale, each of DeepSeek’s recent product launches has been met with widespread praise. At the same time, these releases have not only brought significant improvements in overall capabilities, but also continually clarified the main development trajectories of “practicality” and “generalization”.

In the second half of 2025, with GPT-5, Gemini 3, and Claude Opus 4.5 launching one after another—each outperforming the last in benchmark tests—and with DeepSeek rapidly catching up, the race to be crowned the “most powerful large model” is already getting crowded. Leading large models are now showing clear distinctions in their training approaches as well as their unique characteristics in real-world performance, setting the stage for an even more exciting competition among large models in 2026. (Author|Hu Jiameng, Editor|Li Chengcheng)
