DeepSeek previews new AI model that ‘closes the gap’ with frontier models

Chinese AI lab DeepSeek has launched two preview versions of its newest large language model, DeepSeek V4, a much-awaited update to last year's V3.2 model and the accompanying R1 reasoning model that took the AI world by storm.

The company says both DeepSeek V4 Flash and V4 Pro are mixture-of-experts models with context windows of 1 million tokens each, enough to let large codebases or documents be used in prompts. The mixture-of-experts approach involves activating only a certain number of parameters per task to lower inference costs.
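The routing idea behind that approach can be illustrated with a minimal sketch. This is a toy illustration of top-k expert routing in general, not DeepSeek's actual implementation; all function names here are our own.

```python
import math
import random

def top_k_experts(router_logits, k):
    """Pick the k highest-scoring experts and softmax-normalize their weights."""
    top = sorted(range(len(router_logits)),
                 key=lambda i: router_logits[i], reverse=True)[:k]
    exps = [math.exp(router_logits[i]) for i in top]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top, exps)]

def moe_layer(x, experts, router, k=2):
    """Route input x to k experts; only those k experts actually run."""
    out = 0.0
    for idx, weight in top_k_experts(router(x), k):
        out += weight * experts[idx](x)
    return out

# Toy demo: 8 scalar "experts", only 2 of which activate per input.
random.seed(0)
experts = [lambda x, s=s: s * x for s in range(8)]      # expert i multiplies by i
router = lambda x: [random.random() for _ in range(8)]  # stand-in for a learned gate
y = moe_layer(3.0, experts, router, k=2)
```

Because only k experts execute per token, compute cost scales with the active parameter count rather than the total, which is how a 1.6-trillion-parameter model can run inference at the cost of a much smaller dense one.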

The Pro model has a total of 1.6 trillion parameters (49 billion active), which makes it the biggest open-weight model available, outstripping Moonshot AI's Kimi K 2.6 (1.1 trillion) and MiniMax's M1 (456 billion), and more than double DeepSeek V3.2 (671 billion). The smaller V4 Flash has 284 billion parameters (13 billion active).

DeepSeek says both models are more efficient and performant than DeepSeek V3.2 owing to architectural improvements, and have nearly "closed the gap" with current leading models, both open and closed, on reasoning benchmarks.

The company claims its new V4-Pro-Max model outperforms its open-source peers across reasoning benchmarks, and outstrips OpenAI's GPT-5.2 and Gemini 3.0 Pro on some tasks. In coding competition benchmarks, DeepSeek said both V4 models' performance is "comparable to GPT-5.4."

However, the models appear to fall slightly behind frontier models in knowledge tests, specifically OpenAI's GPT-5.4 and Google's latest Gemini 3.1 Pro. This lag suggests a "developmental trajectory that trails state-of-the-art frontier models by about 3 to 6 months," the lab wrote.

Both V4 Flash and V4 Pro support text only, unlike many of their closed-source peers, which offer support for understanding and generating audio, video, and images.

Notably, DeepSeek V4 is much more affordable than any frontier model available today. The smaller V4 Flash model costs $0.14 per million input tokens and $0.28 per million output tokens, undercutting GPT-5.4 Nano, Gemini 3.1 Flash, GPT-5.4 Mini, and Claude Haiku 4.5. The larger V4 Pro model, meanwhile, costs $0.145 per million input tokens and $3.48 per million output tokens, also undercutting Gemini 3.1 Pro, GPT-5.5, Claude Opus 4.7, and GPT-5.4.
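To make per-million-token pricing concrete, here is a quick sketch of the standard cost calculation, using the figures quoted above (the function name is our own, not an official API):

```python
def request_cost(input_tokens, output_tokens, in_price, out_price):
    """Cost in dollars for one request, with prices quoted per million tokens."""
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# A prompt stuffed with a 100k-token codebase plus a 2k-token reply:
flash = request_cost(100_000, 2_000, 0.14, 0.28)   # V4 Flash: ~$0.0146
pro = request_cost(100_000, 2_000, 0.145, 3.48)    # V4 Pro:   ~$0.0215
```

Note how input-heavy workloads, like prompting with large codebases, keep the two models' costs close, since their input prices differ by only half a cent per million tokens; the gap widens on output-heavy workloads.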

The launch comes a day after the U.S. accused China of stealing American AI labs' IP on an industrial scale using thousands of proxy accounts. DeepSeek itself has been accused by Anthropic and OpenAI of "distilling," essentially copying, their AI models.
