Microsoft has introduced powerful new AI models designed to deliver top-tier performance even with limited computing resources. These models are part of its expanding Phi family, known for being lightweight yet highly capable. The highlight of the release is the Phi 4 reasoning series, which includes three new models: Phi 4 mini reasoning, Phi 4 reasoning, and Phi 4 reasoning plus.
These models are specially created to focus on “reasoning,” meaning they can take longer to verify answers and solve more complex problems. They are now available on the AI development platform Hugging Face, where you can also find detailed technical reports for each model.
What makes the Phi 4 models stand out
You’ll find that the Phi 4 mini reasoning model was trained on roughly one million synthetic maths problems generated by R1, the reasoning model from the Chinese AI company DeepSeek. Even though Phi 4 mini reasoning has only about 3.8 billion parameters (the learned values inside a model, a rough indicator of how capable it can be), it’s still very effective. According to Microsoft, it’s ideal for educational uses such as embedded tutors on smaller devices.
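To give you a sense of how you might try it, here’s a minimal sketch of loading the mini model from Hugging Face with the transformers library. The repo id used here, "microsoft/Phi-4-mini-reasoning", is an assumption based on Microsoft’s naming, so check the model page on Hugging Face for the exact identifier.

```python
# Minimal sketch: load a Phi 4 reasoning model and ask it a maths question.
# The repo id below is an assumption; verify it on Hugging Face before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-mini-reasoning"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Chat-style prompt; reasoning models typically show their working before the answer.
messages = [{"role": "user", "content": "If 3x + 5 = 20, what is x?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```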
Models with more parameters often give better results. Still, the clever design of Phi 4 mini reasoning allows it to compete with much larger systems without needing the same amount of computing power. That makes it a good fit for schools, apps, and low-cost hardware that still need high-quality AI performance.
Phi 4 reasoning, a larger model with 14 billion parameters, was built using high-quality data collected from across the web. It was also trained on “curated demonstrations” taken from OpenAI’s o3-mini model, giving it strong maths, science, and even computer coding skills.
Then there’s Phi 4 reasoning plus. This model builds on Microsoft’s earlier Phi-4 and further trains it for higher accuracy on demanding reasoning tasks. In internal tests, Microsoft says Phi 4 reasoning plus comes close to the performance of DeepSeek’s R1 model, which has a massive 671 billion parameters. In fact, on a maths benchmark called OmniMath, this improved Phi model performed at the same level as OpenAI’s o3-mini, a significant achievement considering its much smaller size.
Smarter models for real-world needs
What’s great about these models is that they are openly available with permissive licenses, meaning developers like you can use them more freely when building apps. Microsoft believes these compact models are perfect for use on devices with limited power, like smartphones or tablets, because they offer quick responses without needing large data centers.
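If you want to squeeze a model like this onto modest hardware, quantization is one common way to shrink its memory footprint. Below is a rough sketch using the transformers and bitsandbytes libraries; the repo id and quantization settings are illustrative assumptions rather than Microsoft’s recommended configuration, and real phone or tablet deployment would typically go through a dedicated on-device runtime instead.

```python
# Illustrative sketch: load the model in 4-bit precision to reduce memory use.
# Repo id and settings are assumptions, not an official deployment recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "microsoft/Phi-4-mini-reasoning"  # assumed repo id
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # store weights in 4-bit precision
    bnb_4bit_compute_dtype=torch.bfloat16,   # compute in bfloat16 for stability
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)
```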
Microsoft’s blog explains how these new models were built using a smart mix of techniques: distillation (transferring the knowledge of a larger model into a smaller one), reinforcement learning (improving outputs through feedback), and carefully curated, high-quality training data. The result is a series of small, fast, and powerful models.
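To make the distillation idea concrete, here is a generic, simplified sketch of a distillation loss in PyTorch: a small “student” model is trained to match the softened output distribution of a larger “teacher”. This is a textbook illustration of the technique, not Microsoft’s actual training recipe.

```python
# Generic knowledge-distillation sketch (not Microsoft's training pipeline):
# the student is nudged toward the teacher's temperature-softened predictions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable to a hard-label loss.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature**2

# Toy usage: random logits over a vocabulary of 100 tokens.
student_logits = torch.randn(4, 100, requires_grad=True)
teacher_logits = torch.randn(4, 100)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```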
“These models balance size and performance,” Microsoft said in its announcement. “They are small enough for low-latency environments yet maintain strong reasoning capabilities that rival much bigger models. This blend allows even resource-limited devices to perform complex reasoning tasks efficiently.”
A big step forward in AI accessibility
If you’re developing AI applications but don’t have access to large computing resources, Microsoft’s new Phi 4 models could be a game-changer. They show that you don’t always need massive AI systems to achieve strong results, especially for reasoning, maths, and logic.
Now available on Hugging Face, the Phi 4 mini reasoning, Phi 4 reasoning, and Phi 4 reasoning plus models make it easier than ever to use advanced AI in real-world situations, regardless of your device or budget.