Microsoft said that users can run the DeepSeek R1 model right on their laptops

The company quickly integrated the model into its Azure cloud platform and GitHub developer platform. Microsoft announced that the new DeepSeek artificial intelligence model will also be available in NPU-optimized versions tailored to Windows 11 Copilot+ PCs and the hardware they ship with. A version for Qualcomm Snapdragon X devices will be released first, followed by a version for Intel Lunar Lake PCs and, finally, a variant for AMD Ryzen AI 9 processors. Microsoft will also add the DeepSeek-R1-Distill-Qwen-1.5B model to its AI Toolkit for developers. According to the company, these optimized models will help developers build AI-powered apps that run efficiently on Copilot+ devices.
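For context, here is a minimal sketch of what running the small distilled model locally can look like, using the Hugging Face transformers library and the publicly listed deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B checkpoint. This is only an illustrative stand-in, not the NPU-optimized path Microsoft describes for Copilot+ PCs (which goes through its AI Toolkit and ONNX tooling).

```python
# Illustrative sketch: running the 1.5B distilled DeepSeek model locally with
# Hugging Face transformers. This is NOT Microsoft's NPU-optimized workflow;
# it only shows the general idea of on-device inference with a small model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # public checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "Explain in one sentence why small distilled models can run on laptops."
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a short completion; a 1.5B-parameter model fits in typical laptop
# memory, which is the point of the distilled variants.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```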

The DeepSeek support could be part of Microsoft’s strategy to reduce its reliance on OpenAI, as the company develops its own models and deploys third-party solutions for its Microsoft 365 Copilot AI product. Microsoft’s swift support could also benefit DeepSeek, since hosting the model on Azure helps address privacy and data-sharing concerns. However, DeepSeek’s own servers are located in China, which could raise concerns for users in the US. Meanwhile, Microsoft is investigating whether DeepSeek used illegal methods to train its models, a concern amplified after a White House spokesman said DeepSeek may have “stolen intellectual property from the U.S.”
It was previously reported that DeepSeek may have used distillation, a technique in which one model is trained on the outputs of another, to extract knowledge from OpenAI’s models. DeepSeek positions itself as a low-cost open-source model, notable for having been developed on less powerful Nvidia chips.
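To illustrate what “interaction between two models” means here, below is a generic, minimal sketch of knowledge distillation using hypothetical toy models; it is not a description of how DeepSeek or OpenAI actually operate, only of the general technique.

```python
# Generic knowledge-distillation sketch (toy stand-in models, not DeepSeek's or
# OpenAI's actual setup): the student is trained to match the teacher's output
# distribution, so capability "transfers" from one model to the other.
import torch
import torch.nn.functional as F

teacher = torch.nn.Linear(16, 4)   # stand-in for a large, frozen teacher model
student = torch.nn.Linear(16, 4)   # stand-in for a small student model
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0                  # softens the teacher's distribution

for _ in range(100):
    x = torch.randn(32, 16)                       # unlabeled inputs
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / temperature, dim=-1)
    student_log_probs = F.log_softmax(student(x) / temperature, dim=-1)
    # KL divergence between the teacher's and student's distributions is the
    # standard distillation loss.
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```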