If you work at Microsoft, you cannot use the DeepSeek app. The company’s vice chairman and president, Brad Smith, confirmed this during a May 8 U.S. Senate hearing. According to Smith, the ban is due to serious concerns about data security and the risk of Chinese government influence through propaganda.
“At Microsoft, we don’t allow our employees to use the DeepSeek app,” Smith stated, referring to the chatbot service available on desktop and mobile devices. He also mentioned that Microsoft hasn’t added DeepSeek to its app store for the same reasons.
This is the first time Microsoft has spoken publicly about the restriction. Although other companies and even some governments have already taken similar actions, this official stance marks a notable moment in the growing scrutiny of artificial intelligence tools developed in China.
Data stored in China sparks security worries
Smith explained that the ban stems mainly from the risk that user data could be stored in China. DeepSeek’s privacy policy confirms that it stores user data on servers located in China. As a result, that information is governed by Chinese law, which requires companies to share data with the country’s intelligence services on request.
Another issue raised was the content produced by DeepSeek. Smith said the app could be influenced by “Chinese propaganda.” DeepSeek is known to censor topics that the Chinese government finds sensitive, making it more likely that users will receive filtered or biased responses when using the app.
Microsoft still uses DeepSeek’s model, under strict control
Even though Microsoft bans its employees from using the DeepSeek app itself, the company briefly offered DeepSeek’s open-source model, called R1, through its Azure cloud platform earlier this year after the model gained attention for its performance.
Smith clarified that this doesn’t mean Microsoft is endorsing the app. Because the R1 model is open source, anyone can download it and run it on their own servers without sending data back to China. This allows companies to retain control over privacy and data handling.
However, concerns remain. Even if user data never reaches Chinese servers, the model itself may pose risks, such as spreading propaganda or generating flawed or insecure code.
During the Senate hearing, Smith said Microsoft had gone into DeepSeek’s R1 model and made changes to reduce what he called “harmful side effects.” He didn’t elaborate; when media outlets followed up, Microsoft simply pointed back to his hearing remarks.
When Microsoft first made DeepSeek’s R1 model available on Azure, the company said it had undergone “rigorous red teaming and safety evaluations” to ensure its safety.
Not all AI chatbots are banned from Microsoft platforms
It’s worth noting that DeepSeek competes directly with Microsoft’s own Copilot chatbot. However, not all competitors are blocked. The AI search app Perplexity is still available in the Microsoft Store.
That said, a quick search of Microsoft’s store turns up no apps from Google, such as the Chrome browser or the Gemini chatbot, suggesting a possible preference for keeping out rival tech giants.
Still, Microsoft’s clear stance on DeepSeek draws a firm line between what the company views as a competitive product and what it sees as a potential threat to its data security and ethical standards.