DeepSeek-R1 is a boon for enterprises, making AI applications cheaper, easier to build and more innovative



The release of DeepSeek-R1 sent shockwaves through the technology industry, the clearest sign being the surprise sell-off of major AI stocks. The advantage of well-funded AI labs such as OpenAI and Anthropic no longer looks so solid, as DeepSeek reportedly developed its o1 competitor at a fraction of the cost.

While some AI labs may be in crisis mode, for the enterprise sector this is mostly good news.

Cheaper applications, more applications

As we have said here before, one of the trends worth watching in 2025 is the continued decline in the cost of using AI models. Enterprises should experiment and build prototypes with the latest AI models regardless of the price, knowing that continued price reductions will eventually enable them to deploy their applications at scale.

This trend has just seen a major inflection point. OpenAI o1 costs $60 per million output tokens versus $2.19 per million for DeepSeek-R1. And if you are wary of sending your data to Chinese servers, you can access R1 through US-based providers such as Together AI and Fireworks AI, where it is priced at $8 and $9 per million tokens, respectively. That is still a huge bargain compared to o1.
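To see what that price gap means in practice, here is a quick back-of-the-envelope comparison using the per-million-token prices quoted above. The 50-million-token monthly workload is a hypothetical figure for illustration only:

```python
# Per-million-output-token prices quoted in the article.
PRICES_PER_MILLION_TOKENS = {
    "OpenAI o1": 60.00,
    "DeepSeek-R1 (DeepSeek API)": 2.19,
    "DeepSeek-R1 (Together AI)": 8.00,
    "DeepSeek-R1 (Fireworks AI)": 9.00,
}

def monthly_cost(tokens_per_month: int, price_per_million: float) -> float:
    """Dollar cost for a given monthly output-token volume."""
    return tokens_per_month / 1_000_000 * price_per_million

# Hypothetical workload: 50 million output tokens per month.
tokens = 50_000_000
for provider, price in PRICES_PER_MILLION_TOKENS.items():
    print(f"{provider}: ${monthly_cost(tokens, price):,.2f}/month")
```

At this volume the spread is stark: roughly $3,000 a month on o1 versus about $110 on DeepSeek's own API, with the US-hosted options landing in between.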

To be fair, o1 still has an edge over R1, but not enough to justify such a large price difference. Moreover, R1's capabilities will be sufficient for most enterprise applications. And we can expect even more advanced and capable models to be released in the coming months.

We can also expect second-order effects on the AI market at large. For example, OpenAI CEO Sam Altman announced that free ChatGPT users will soon have access to o3-mini. While R1 was not explicitly cited as a reason, the fact that the announcement came shortly after R1's launch is telling.

More innovation

R1 still leaves many questions unanswered; for example, there are multiple reports that DeepSeek trained the model on outputs from OpenAI's large language models (LLMs). But if its paper and technical report are accurate, DeepSeek managed to create a model that is nearly on par with the state of the art while slashing costs and eliminating some of the technical steps that require heavy manual engineering.

If others can reproduce DeepSeek's results, it could be good news for AI labs and companies that have been sidelined by the financial barriers to innovation in the field. Enterprises can expect faster innovation and more AI products to power their applications.

What will happen to the billions of dollars that big tech companies have spent on acquiring hardware accelerators? We still have not reached the ceiling of what is possible with AI, which means leading tech companies will be able to do more with their resources. More affordable AI will, in fact, increase demand over the medium to long term.

But more importantly, R1 is proof that not everything depends on bigger compute clusters and datasets. With the right engineering chops and good talent, you can push the limits of what is possible.

Open source for the win

To be clear, R1 is not fully open source, as DeepSeek has released only the model weights, not the code or the full details of the training data. Nonetheless, it is a big win for the open source community. Since DeepSeek-R1's release, more than 500 derivatives have been published on Hugging Face, and the model has been downloaded millions of times.

Enterprises will also gain more flexibility over where to run their models. Besides the full 671-billion-parameter model, there are distilled versions of R1 ranging from 1.5 billion to 70 billion parameters, allowing companies to run the model on a wide variety of hardware. Moreover, unlike o1, R1 reveals its full reasoning chain, giving developers a better understanding of the model's behavior and the ability to steer it in the desired direction.

With open models catching up to closed ones, we can hope for a renewed commitment to sharing knowledge and research so that everyone can benefit from advances in AI.
