Laurent Denize, Global Co-CIO, ODDO BHF AM.

“For those watching the global AI race, DeepSeek’s development is a reminder that innovation doesn’t solely come from established giants. It can emerge from anywhere. No one can assume a permanent and consistent edge.”

On the morning of October 4, 1957, the world awoke to shocking news: the Soviet Union had beaten the United States in the race to space. The surprise was so profound that the launch of the world’s first artificial satellite, Sputnik, became a defining moment in history. Now, some are calling DeepSeek’s emergence China’s own “Sputnik moment” for the artificial intelligence ecosystem. To what extent does this comparison hold? Is DeepSeek a turning point for the AI industry? Does it fundamentally reshape the AI investment landscape? Is it altogether a “deep sink” for U.S. exceptionalism and a “deep boost” for China? Rather than relying on ChatGPT and DeepSeek for analysis, let’s use our own reasoning to measure the impact of this Chinese AI breakthrough on asset allocation and investment positioning.

DeepSeek, the Chinese startup shaking the AI world

Founded in 2023, DeepSeek is a Chinese startup that has quickly made waves in the AI landscape. Very recently, it released two open-source ChatGPT-like Large Language Models (LLMs): DeepSeek-V3 in December 2024 and DeepSeek-R1 in January 2025. These models leverage several breakthroughs in architecture, achieving performance on par with current frontier models while significantly reducing training compute costs.

Headlines suggest that the V3 model was trained in just two months using approximately 2,000 Nvidia H800 chips at an estimated cost of $5.6 million – a fraction of the hundreds of millions spent by OpenAI, Google, or Meta on their leading AI models. Is that claim true? Most industry experts appear to acknowledge DeepSeek’s performance and roughly tenfold efficiency gains, and many consider its methodology legitimate and such cost savings plausible.

In any case, we believe DeepSeek challenges the long-standing assumption in the AI community that “bigger is better”, which has been driving the AI trade for the past decade.

Macro Boost? Macro Drag? Micro Boost? Micro Drag?

On the macro side, the primary near-term risk to GDP is that more efficient model training and declining compute costs could reduce AI-related infrastructure capital expenditure (Capex). This impact could be particularly pronounced in the U.S., where some estimates suggest a drag of 0.1-0.2 percentage points on GDP.

However, the hyperscalers Microsoft, Meta, Amazon, and Alphabet have recently reaffirmed their commitment to continued spending. At the same time, a new, proven AI technology with significantly lower development costs could, over the medium term, reshape the current competitive landscape and create macroeconomic upside, in our view.
