Is DeepSeek Really a Game Changer?
- Massimiliano Turazzini
- Feb 2
OK, enough time has passed; now I can talk about DeepSeek too.
Over the past few days, I've received dozens of messages urging me to speak out, and I've bitten my tongue while replying to the many online figures who've jumped on the bandwagon for a few more likes.
I have to say I was quite annoyed by the superficiality with which the topic was treated by many pseudo-experts, and by the ton of inaccuracies spread by authoritative but uninformed voices. Who knows how many of them have actually read the paper, or whether they just had ChatGPT summarize it.
In any case, as for the technical data, the 'AMAZING' things it does, and whether it beats some benchmark by 0.0012151, I invite you to look those up online, where many have rushed to have their say before anyone else could.
Given that it is too early for a precise analysis, and that there are certainly effects we will only notice later... I will simply try to reflect with you. So be prepared for a lot of questions.

The questions that are going around in my head are more or less these:
- Did anyone really think that sooner or later someone would not come along and change the rules?
- Does anyone really expect this not to happen again?
- Did anyone really think that the systems released to date were optimised to the max? That LLMs as they are were the maximum achievable? That there are no other forms of generative AI that could replace LLMs, or the way they are produced?
- Does anyone really believe that we won't need so many NVIDIA GPUs now that models are increasingly multimodal? Processing voice and video requires much more power. And AI usage is still ridiculously low if we look at how much of the population actually uses it every day.
When disruption is the goal, nobody looks at optimization; that comes afterwards. Right now every model maker needs to scale out fast and has immense capital at its disposal to do so: the concern is not reducing costs but maximizing market share. DeepSeek, by contrast, may have moved by a principle of Jugaad Innovation (see Frugal Innovation by Navi Radjou, a good book that I recommend), not having much else at its disposal (besides roughly a billion dollars of funding...).
I like to talk about how AI will challenge entire business sectors: why shouldn't it start with its own? If we get to AGI, what will happen to the stock exchanges if we keep behaving like this? Or if an AI helps us solve big problems (cancer, crime, social inequality, whatever you like), what will happen to the markets built on the endless search for solutions to them?
AI is about IMPACT not technology.
Let's expect it to happen, then, even in unexpected sectors...
PS. In the meantime, OpenAI has already responded with the o3-mini series (I like it, but it's too early for a definitive opinion), so get ready for a wave of posts starting with "AMAZING!"
What has changed with DeepSeek
We can download versions of "thinking models" and run them locally, installing them on our own servers, with reduced costs and maximum data protection (although I doubt my PC is more secure than the servers hosting an average AI service).
We know that even OpenAI has no moats to protect its business model (although with Stargate it is preparing some antibodies).
Today, OpenAI is no longer alone in the field of advanced models: we are witnessing the emergence of Reasoning Models, an evolution from traditional Language Models. We have in fact entered the third wave of scaling laws, that of Test-Time Scaling, where the focus is no longer just on language processing, but on real-time reasoning and adaptation. We will talk a lot about this in the future.
Jensen Huang, CEO of NVIDIA, showing a graphic probably inspired by Peter Gostev (just to give you some names behind things).
In addition to DeepSeek, models like Gemini Thinking and Claude (which currently thinks but does not show us what it thinks) represent this new generation: before answering, they think and, very interestingly, some of them show us their chain of thought.
The prices of input and output tokens have dropped a thousandfold in a year and a half; now the prices of reasoning tokens will start to drop heavily too.
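To get a feel for what a thousandfold drop means in practice, here is a minimal sketch of how per-request cost is typically computed from per-million-token prices. The prices below are made-up illustrative numbers, not any provider's actual rates:

```python
def request_cost(input_tokens, output_tokens, price_in_per_m, price_out_per_m):
    """Cost in dollars for one request, given per-million-token prices."""
    return (input_tokens / 1e6) * price_in_per_m + (output_tokens / 1e6) * price_out_per_m

# Hypothetical prices: $30/$60 per million tokens "then",
# one thousandth of that ($0.03/$0.06) after a 1000x drop.
then = request_cost(2000, 500, 30.0, 60.0)
now = request_cost(2000, 500, 0.03, 0.06)
print(f"then: ${then:.4f}  now: ${now:.6f}")  # then: $0.0900  now: $0.000090
```

The same request that once cost nine cents costs a fraction of a hundredth of a cent: at that point, letting an agent burn thousands of reasoning tokens per task stops being an economic problem.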
Thanks to this:
- We can start working seriously on agentic applications to be implemented in the coming months, without breaking the bank on token costs.
- We can explore each topic in depth by reading the models' preliminary thoughts. Remember that OpenAI does not show all the thoughts but only a summary of them, precisely to avoid giving competitors an advantage by letting them use the traces to train their own models (OpenAI's accusation against DeepSeek concerns exactly this).
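As an aside: with open-weight reasoning models of the R1 family, the chain of thought arrives inline in the output, wrapped in `<think>...</think>` tags, so separating the reasoning from the final answer is trivial. A minimal sketch (the helper name is mine; it assumes the R1-style tag convention):

```python
import re

def split_reasoning(raw: str) -> tuple[str, str]:
    """Split a reasoning model's raw output into (chain_of_thought, final_answer).

    Assumes the R1-style convention of wrapping reasoning in <think>...</think>.
    If no tags are present, the whole text is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    if not match:
        return "", raw.strip()
    thought = match.group(1).strip()
    answer = raw[match.end():].strip()
    return thought, answer

thought, answer = split_reasoning("<think>2 + 2 is 4.</think>The answer is 4.")
print(thought)  # 2 + 2 is 4.
print(answer)   # The answer is 4.
```

Logging the `thought` part separately is exactly what lets you "read the preliminary thoughts" while still showing users only the clean answer.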
What I don't like
OpenAI and DeepSeek are probably not telling the whole story: benchmarks and development processes remain secret.
DeepSeek is not open source but open weight. Where is the data they used? How did they use other models to distill theirs? How is the transformer architecture they used actually built? What tuning activities did they carry out exactly, and above all, what is in the 'cold-start data' they used to make the R1 version minimally understandable by a human? When you read or hear someone talking about open source in AI, go and check these aspects, and you will discover that almost none of the best-performing models is 100% open source.
DeepSeek Chat's license raises significant concerns: all text entered is freely accessible to the company (Section 3.3 of the Terms of Use). This means the data shared could be used for internal purposes, such as improving models or analyzing interactions, without any direct control by the user. All of this happens, for the app and web chat, on Chinese servers, where the laws are different from ours.
What intrigues me
1) I can finally have models thinking on my Mac or inside my projects. Using variants of R1 will open up many innovative local scenarios, without privacy concerns or catastrophic geopolitical scenarios.
2) Using these models pushes the bar of thought higher for humans.
Try to follow this reasoning.
DeepSeek-R1-Zero, the version from which DeepSeek-R1 originated, thinks in a non-aligned way and mixes languages to express itself: an exponential version of what I do when I sprinkle dozens of English terms into my Italian to, I think, explain myself better.
The paper says something like this: "Although alignment does cause a slight performance degradation, ... it makes the output more readable."
Meaning that DeepSeek still had to tune the model to tell it how to think in a way we can understand; otherwise it would have done it 'its own way'.
And here a phrase by Gianfranco Carofiglio came to mind:
"Ideas only exist if we have the words to name and describe them."
What happens when an AI starts developing its own language in which to condense its own ideas into lemmas generated from all the languages of the world? Is it creating its own Esperanto? If language is reasoning, if we need to know words to express complex concepts, couldn't we evolve our own thinking by seeing its raw version?
An interesting precedent is OpenAI's GPT models, where the emergence of unintentionally programmed linguistic patterns was observed, a phenomenon called "linguistic drift". Similarly, Google DeepMind reported analogous phenomena in its AlphaGo and AlphaZero models, where the AI developed game strategies that were unpredictable for humans (the famous Move 37).
What will happen when we do not understand how a language model works exactly (current situation) and how it communicates and thinks? Isn't this an emergent behavior that, unlike many others, we cannot understand well and, therefore, leave under a blanket for the moment?
Could this be a first step towards models capable of developing completely autonomous conceptual structures, a sort of artificial metacognition that redefines the boundary between language and thought?
What is DeepSeek doing with this raw version? (If anyone knows how to test it and wants to tell us, that would help us figure it out.)
So what...
I leave you with some final considerations. In fact, this post of mine is a huge "So what..."
It certainly won’t be the last time that AI-related news sends markets and people into a panic.
If the world of AI is the fastest we have ever known, we should expect more disruption. We are far from a mature market.
The players are trying to dig moats to defend their positions, but the competitors are coming from the sky.
When it happens again, take a deep breath, focus on the essentials, analyze the available data critically, and compare different sources to gain a clearer vision. Set operational priorities to adapt quickly to changes without being overwhelmed by the moment's urgency. Draw your conclusions and change what you need to change.
In any case, it doesn't end here, and probably, I'm not seeing the moon, but I'm only looking at the finger that points to it.
I leave you with a few links that I found worth reading:
Peter Diamandis's vision: https://x.com/PeterDiamandis/status/1884054525733949710
The DeepSeek paper: https://arxiv.org/abs/2501.12948
An interesting test: https://substack.com/home/post/p-156061193
Gary Marcus's Vision https://substack.com/home/post/p-155919736
If you liked this article, share it. Find out more at @maxturazzini or at https://maxturazzini.com