DeepSeek, an emerging AI lab that has captured significant attention for challenging OpenAI’s ChatGPT, has been at the center of various claims. One such claim is that its training cost was around $6 million. However, a SemiAnalysis report challenges this estimate, revealing that the actual cost is far higher: according to its findings, DeepSeek’s total server capital expenditure reaches an eye-watering $1.3 billion.

DeepSeek’s True Costs and Strategic Advantage in the AI Race
The $6 million estimate accounts only for the GPU cost of pre-training, overlooking crucial expenses such as research and development, infrastructure, and operations. A large portion of the $1.3 billion goes toward building and maintaining DeepSeek’s GPU clusters, which are central to its computational power. The company reportedly operates around 50,000 Hopper-generation GPUs, a mix of Nvidia H800s, H100s, and H20s, the latter being a variant designed to comply with US export restrictions.
In addition, DeepSeek sets itself apart from larger AI labs by running its own data centers and maintaining a lean organizational structure, which gives it greater agility in the highly competitive AI landscape.
DeepSeek’s Controversies and Global Expansion Amid Scrutiny
Despite its rapid rise, DeepSeek has faced controversy. A New York Times report found that the model spreads Chinese propaganda, prompting Taiwan to ban its use by government agencies. In Europe, Italy’s data protection regulator has blocked the app, while the Dutch privacy authority has opened an investigation into DeepSeek’s data collection practices. Meanwhile, in India, local cloud providers such as Ola Krutrim and AceCloud have begun offering DeepSeek’s models, broadening its reach.