Top AI expert explains DeepSeek's global impact


February 3, 2025 at 9:00am


A small Chinese firm named DeepSeek is causing a sensation in the West in 2025 as it releases several highly acclaimed artificial intelligence models that rival, or even outperform, their U.S. counterparts. DeepSeek’s app holds the No. 1 position in Apple’s App Store, and its ascent has sent stocks on a roller coaster ride.

What makes DeepSeek different? Some are applauding how easy the AI is to work with. Others are heralding its capabilities and performance. A few commentators are even calling DeepSeek’s debut a “Sputnik moment” for AI.

Here to help the public understand DeepSeek’s emergence and what it means for our world is Mark Finlayson, a professor at Florida International University (FIU) and one of the country’s top researchers of artificial intelligence.

He discusses safety concerns around DeepSeek, how it is different from ChatGPT, what new capabilities (if any) the company has introduced, and where the West’s AI researchers may go from here.

Let’s talk safety first. DeepSeek is a Chinese startup that until recently most of us in the U.S. knew nothing about. Can we trust DeepSeek to safeguard our personal data and information?

I urge caution. Whatever personal information you type in, they are capturing it. You can be sure of that. This is standard practice for these models: they capture user input to train future versions of their models. OpenAI’s ChatGPT does this, as do all the other cloud-based models. OpenAI gives you the option of a private mode in which they claim they don’t record what you do. Does DeepSeek provide the same guarantees? I haven’t used the site, but the rule of law is quite weak in China. Your protections are minimal.

DeepSeek may have no malicious intent itself, but the lack of data protection in China raises concerns. If you put sensitive information into their models, there is a good chance that the Chinese Communist Party can obtain it.

Is DeepSeek’s large language model much different from ChatGPT?

I am a little mystified as to why there is such panic [among U.S. AI companies] around DeepSeek. You do see some outperformance at the margins, but their models appear to be very similar to ones that are already out there. They use many well-known techniques published in the literature over the last two years. The arrival of a model like this should not be such a surprise.

One interesting thing DeepSeek has done is that they seem to have ingested the same amount of data as OpenAI, the makers of ChatGPT. This is interesting because OpenAI did so with a team of about 150 people, while DeepSeek claims its team is significantly smaller. Another noteworthy point is that DeepSeek’s R1 model shows its work when it is answering complex problems. This reasoning, called “chain of thought” in our industry, is not made public in OpenAI’s models.
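For readers unfamiliar with the term, here is a minimal, hypothetical sketch of the difference between a model that returns only a final answer and one that exposes its chain of thought. The question, answer, and reasoning steps below are illustrative placeholders, not actual output from DeepSeek or OpenAI.

```python
# Hypothetical illustration of "chain of thought": a reasoning model such as
# DeepSeek's R1 shows its intermediate steps alongside the answer, while many
# chat models surface only the final answer. All strings are made up.

question = "A train travels 120 miles in 2 hours. What is its average speed?"

# Final-answer-only style (the reasoning stays hidden inside the model):
final_answer_only = "60 miles per hour"

# Chain-of-thought style (the intermediate reasoning is shown to the user):
chain_of_thought = [
    "Average speed is distance divided by time.",
    "The distance is 120 miles and the time is 2 hours.",
    "120 / 2 = 60, so the average speed is 60 miles per hour.",
]

print("Question:", question)
print("Hidden-reasoning answer:", final_answer_only)
print("Visible chain of thought:")
for step in chain_of_thought:
    print(" -", step)
```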

One figure we are hearing a lot is that DeepSeek’s R1 model was developed for only $6 million, which sounds like a staggeringly low budget next to what U.S. companies spend. Can a model like this really be developed at that cost?

That cost is probably just the cost of electricity. DeepSeek even says as much in the paper they released along with their model. It doesn’t include the purchase price of hardware, salaries, prior research or facilities. And that’s fine, since numbers in our industry often aren’t reported with all of those costs included, but people need to be aware that if we just say, “This model cost only $6 million to make,” it could be misleading to the public.

What do you think DeepSeek’s emergence could mean for AI research in the U.S.?

We AI researchers should be careful when using this tool. We often experiment with the top large language models like ChatGPT to innovate and develop new techniques, and I suspect research will be conducted with these new models, too. But we need to be aware that if we experiment with DeepSeek, the company could be capturing our inputs and giving Chinese competitors a window into the latest research trends we are exploring.

There are a lot of people asking, “What should the AI community in the West do in response to this?” I think we should do what we have been doing: keep innovating. When you have been doing something great for a long time, the competition is not just going to wilt away. You have to compete by working hard.


Mark Finlayson researches the science of narrative from a computational point of view. He develops novel AI technology to better understand the connections between stories, cognition, and culture, while simultaneously helping AI better understand human language and behavior. His work has advanced knowledge in a wide variety of fields, from computer science and AI to defense and the digital humanities. He is an associate professor of computer science in FIU's Knight Foundation School of Computing and Information Sciences, and in early January was awarded a Presidential Early Career Award for Scientists and Engineers (PECASE) from the Biden administration.