As the race to improve generative artificial intelligence continues, you may have missed a quiet move from Chinese start-up DeepSeek. On April 30, the company released an updated version of its specialist maths-solving AI model, Prover-V2, on Hugging Face, the world’s largest open-source AI platform. The release came just one day after Chinese tech giant Alibaba announced the launch of Qwen3, the latest version of its general-purpose AI model.
DeepSeek, based in Hangzhou, did not promote the new model on its official social media pages. However, this stealthy update has drawn attention within the tech world, especially among those watching the rapid developments in AI reasoning and problem-solving capabilities.
Prover-V2 is designed to solve maths problems with precision
You might find it interesting that DeepSeek’s Prover series is not just another general AI model. Instead, it is built specifically to handle mathematical challenges, including formal theorem proving and complex reasoning tasks. This focus on maths sets the Prover models apart from most other AI systems currently available.
The latest release, Prover-V2, appears to build on the company’s powerful V3 foundational model. According to files posted on Hugging Face, the base model has 671 billion parameters and uses a “mixture-of-experts” design, in which only a small subset of those parameters is activated for any given input. This approach helps reduce training costs while boosting performance, a key advantage as companies balance efficiency with power.
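To see why mixture-of-experts routing saves compute, here is a minimal, purely illustrative sketch in plain Python. It is not DeepSeek’s actual architecture, and the expert count, top-k value, and dimensions below are made-up numbers for demonstration; the key point is that a router picks a few experts per token, so most parameters never run.

```python
import math
import random

random.seed(0)

NUM_EXPERTS = 8   # hypothetical values for illustration only; DeepSeek has
TOP_K = 2         # not published Prover-V2's routing configuration
DIM = 8

def rand_matrix(rows, cols):
    """Small random weight matrix standing in for a trained layer."""
    return [[random.gauss(0, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    """Multiply matrix m by vector v."""
    return [sum(w * x for w, x in zip(row, v)) for row in m]

# Each "expert" is its own feed-forward weight matrix; the router scores them.
experts = [rand_matrix(DIM, DIM) for _ in range(NUM_EXPERTS)]
router = rand_matrix(NUM_EXPERTS, DIM)  # one scoring row per expert

def moe_layer(token):
    """Route one token through its top-k experts and mix their outputs."""
    scores = matvec(router, token)                         # one score per expert
    top = sorted(range(NUM_EXPERTS), key=lambda i: scores[i])[-TOP_K:]
    exps = [math.exp(scores[i]) for i in top]
    gates = [e / sum(exps) for e in exps]                  # softmax over the chosen experts
    out = [0.0] * DIM
    for gate, i in zip(gates, top):                        # only TOP_K of NUM_EXPERTS experts run
        for j, y in enumerate(matvec(experts[i], token)):
            out[j] += gate * y
    return out

token = [random.gauss(0, 1) for _ in range(DIM)]
result = moe_layer(token)
print(f"output size {len(result)}, {TOP_K}/{NUM_EXPERTS} experts active per token")
```

For this toy layer, each token touches only 2 of the 8 experts, so roughly a quarter of the expert parameters do any work per input; at the scale of a 671-billion-parameter model, that kind of sparsity is what keeps training and inference costs down.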
Though DeepSeek has not yet provided detailed information about the new model’s structure or features, earlier research suggests that the Prover range can improve mathematical accuracy in large language models. With Prover-V2 now out, many speculate that even more advanced AI models from DeepSeek may be coming soon.
Quiet timing raises industry speculation
The low-key release of Prover-V2 came just after Alibaba proudly announced its own AI upgrade. Qwen3 is Alibaba’s third-generation AI model, and it reportedly outperforms both DeepSeek’s earlier R1 model and OpenAI’s o1 reasoning model in several benchmarks. The timing has sparked questions about whether DeepSeek is strategically staying quiet while preparing a larger rollout of its next AI systems.
Prover-V2 follows the earlier Prover-V1.5, which was released in August last year — four months before DeepSeek shocked the tech world with its energy-efficient, high-performance V3 model. At the time, the company claimed that V3 was built using only a fraction of the resources that Western AI firms typically need to train similar models.
DeepSeek stays silent but continues research progress
So far, DeepSeek has not responded to any requests for comment about the surprise Prover-V2 release. It has also not shared an official roadmap for upcoming AI developments. Still, the company continues to publish research updates and model improvements quietly.
Last month, DeepSeek unveiled a refreshed version of its V3 foundational model, featuring stronger reasoning, better programming skills, and improved support for Chinese language tasks. These updates are part of a larger pattern of consistent innovation — even if it’s not always loudly announced.
With the launch of Prover-V2, DeepSeek has again shown that it’s a serious contender in the AI world. Whether you’re following maths-focused models or broader AI trends, it’s worth watching what comes next from this rising player in China’s AI sector.