Released under the MIT License, DeepSeek-R1 provides responses comparable to those of other contemporary large language models, such as OpenAI's GPT-4o and o1.[10] Its training cost was reported to be significantly lower than that of other LLMs. The company claims that it trained its V3 model for US$6 million—far less than the US$100 million cost for OpenAI's GPT-4 in 2023[11]—and using approximately one-tenth the computing power consumed by Meta's comparable model, Llama 3.1.[11][12][13][14] DeepSeek's success against larger and more established rivals has been described as "upending AI".[15][16]
DeepSeek's models are described as "open weight," meaning the exact parameters are openly shared, although certain usage conditions differ from typical open-source software.[17][18] The company reportedly recruits AI researchers from top Chinese universities[15] and also hires from outside traditional computer science fields to broaden its models' knowledge and capabilities.[12]
DeepSeek significantly reduced training expenses for its R1 model by incorporating techniques such as mixture of experts (MoE) layers.[19] The company also trained its models during ongoing trade restrictions on AI chip exports to China, using weaker AI chips intended for export and employing fewer units overall.[13][20] Observers said this breakthrough sent "shock waves" through the industry, with some describing it as a "Sputnik moment" for the US in the field of artificial intelligence, particularly because DeepSeek's AI models were open-source, cost-effective, and high-performing.[21][22][23] This threatened established AI hardware leaders such as Nvidia, whose share price dropped sharply, losing US$600 billion in market value, the largest single-company decline in U.S. stock market history.[24][25]
History
Founding and early years (2016–2023)
In February 2016, High-Flyer was co-founded by AI enthusiast Liang Wenfeng, who had been trading since the 2008 financial crisis while attending Zhejiang University.[26] The company began stock trading using a GPU-dependent deep learning model on 21 October 2016; before then, it had used CPU-based linear models. By the end of 2017, most of its trading was driven by AI.[27]
Liang established High-Flyer as a hedge fund focused on developing and using AI trading algorithms, and by 2021 the firm was using AI exclusively,[28] often using Nvidia chips.[29]
In 2019, the company began constructing its first computing cluster, Fire-Flyer, at a cost of 200 million yuan; it contained 1,100 GPUs interconnected at 200 Gbit/s and was retired after 1.5 years in operation.[27]
By 2021, Liang had started buying large quantities of Nvidia GPUs for an AI project,[29] reportedly obtaining 10,000 Nvidia A100 GPUs[30] before the United States restricted chip sales to China.[28] Computing cluster Fire-Flyer 2 began construction in 2021 with a budget of 1 billion yuan.[27]
It was reported that in 2022 Fire-Flyer 2's capacity had been utilized at over 96%, totaling 56.74 million GPU hours; 27% of these hours supported scientific computing outside the company.[27]
During 2022, Fire-Flyer 2 had 5000 PCIe A100 GPUs in 625 nodes, each containing 8 GPUs. At the time, it exclusively used PCIe instead of the DGX version of the A100, since the models it trained could fit within a single GPU's 40 GB of VRAM, so there was no need for the higher bandwidth of DGX (i.e., training required only data parallelism, not model parallelism).[31] Later, it incorporated NVLinks and NCCL (Nvidia Collective Communications Library) to train larger models that required model parallelism.[32][33]
On 14 April 2023,[34] High-Flyer announced the launch of an artificial general intelligence (AGI) research lab, stating that the new lab would focus on developing AI tools unrelated to the firm's financial business.[35][36] Two months later, on 17 July 2023,[1] the lab was spun off into an independent company, DeepSeek, with High-Flyer as its principal investor and backer.[28][37][36] Venture capital investors were reluctant to provide funding, as they considered it unlikely that the venture would be able to quickly generate an "exit".[28]
Model releases (2023–present)
DeepSeek released its first model, DeepSeek Coder, on 2 November 2023, followed by the DeepSeek-LLM series on 29 November 2023.[38]: section 5 In January 2024, it released two DeepSeek-MoE models (Base and Chat),[39] and in April three DeepSeek-Math models (Base, Instruct, and RL).[40]
DeepSeek-V2 was released in May 2024, followed a month later by the DeepSeek-Coder V2 series.[41] DeepSeek-V2.5 was introduced in September 2024 and revised in December.[42] On 20 November 2024, a preview of DeepSeek-R1-Lite became available via API and chat interface.[43][44] In December, DeepSeek-V3-Base and DeepSeek-V3 (chat) were released.[32]
The DeepSeek login page following a cyberattack around its 21 January 2025 launch
On 20 January 2025, DeepSeek launched the DeepSeek chatbot—based on the DeepSeek-R1 model—free for iOS and Android. By 27 January, DeepSeek surpassed ChatGPT as the most downloaded freeware app on the iOS App Store in the United States,[15] triggering an 18% drop in Nvidia's share price.[45][46]
On 24 March 2025, DeepSeek released DeepSeek-V3-0324 under the MIT License.[47][48]
In February 2025, Singaporean authorities arrested several individuals for illegally exporting advanced Nvidia chips to DeepSeek.[49] In April 2025, it was reported that the Trump administration was considering penalties that would attempt to block DeepSeek from buying U.S. technology.[50]
On 28 May 2025, DeepSeek released DeepSeek-R1-0528 under the MIT License.[51] The model has been noted for following official Chinese Communist Party ideology and censorship in its answers more closely than prior models did.[52]
On 21 August 2025, DeepSeek released DeepSeek V3.1 under the MIT License.[53] The model features a hybrid architecture with thinking and non-thinking modes, and surpasses prior models such as V3 and R1 by over 40% on certain benchmarks, including SWE-bench and Terminal-bench.[54]
Company operation
DeepSeek is headquartered in Hangzhou, Zhejiang, and is owned and funded by High-Flyer. Its co-founder, Liang Wenfeng, serves as CEO. As of May 2024, Liang personally held an 84% stake in DeepSeek through two shell corporations.[note 1][55]
Strategy
DeepSeek states that it focuses on research and does not have immediate plans for commercialization.[56] This posture also means it can skirt certain provisions of China's AI regulations aimed at consumer-facing technologies.[12]
DeepSeek's hiring approach emphasizes skills over lengthy work experience, resulting in many hires fresh out of university.[36][12] The company likewise recruits individuals without computer science backgrounds to expand the range of expertise incorporated into the models, for instance in poetry or advanced mathematics.[15][12] According to The New York Times, dozens of DeepSeek researchers have, or previously had, affiliations with People's Liberation Army laboratories and the Seven Sons of National Defence.[50]
Training framework
High-Flyer/DeepSeek has operated at least two primary computing clusters: Fire-Flyer (萤火一号) and Fire-Flyer 2 (萤火二号). Fire-Flyer 1 was constructed in 2019 and retired after 1.5 years of operation; Fire-Flyer 2 is still in operation as of 2025. Fire-Flyer 2 consists of a co-designed software and hardware architecture. On the hardware side, Nvidia GPUs use 200 Gbit/s interconnects; the cluster is divided into two "zones", and the platform supports cross-zone tasks. The network topology was two fat trees, chosen for their high bisection bandwidth. On the software side are:[33][27]
3FS (Fire-Flyer File System): A distributed parallel file system, specifically designed for asynchronous random reads. It uses Direct I/O and RDMA Read. In contrast to standard Buffered I/O, Direct I/O does not cache data. Caching is useless in this case, since each data read is random and is not reused.[57][58]
hfreduce: Library for asynchronous communication, originally designed to replace the Nvidia Collective Communication Library (NCCL).[31] It is mainly used for allreduce, especially of gradients during backpropagation (see the sketch after this list). It runs asynchronously on the CPU to avoid blocking kernels on the GPU.[33] Like NCCL, it uses two-tree broadcast.[31]
hfai.nn: Software library of commonly used operators for neural network training, similar to torch.nn in PyTorch.
HaiScale Distributed Data Parallel (DDP): Parallel training library that implements various forms of parallelism such as Data Parallelism (DP), Pipeline Parallelism (PP), Tensor Parallelism (TP), Experts Parallelism (EP), Fully Sharded Data Parallel (FSDP) and Zero Redundancy Optimizer (ZeRO). It is similar to PyTorch DDP, which uses NCCL on the backend.
HAI Platform: Various applications such as task scheduling, fault handling, and disaster recovery.[59]
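To make the allreduce that hfreduce and HaiScale DDP provide concrete, here is a minimal sketch of what the operation computes in data-parallel training. The worker count and toy gradients are illustrative assumptions, not details of DeepSeek's stack, and real implementations use ring or tree communication rather than a central gather.

```python
import numpy as np

# Minimal sketch of an allreduce of gradients: every worker ends up with
# the elementwise sum of all workers' gradients, so each replica can apply
# the same optimizer step. hfreduce accelerates this operation.
def allreduce_sum(grads_per_worker: list[np.ndarray]) -> list[np.ndarray]:
    total = np.sum(grads_per_worker, axis=0)          # combine contributions
    return [total.copy() for _ in grads_per_worker]   # every worker gets the result

grads = [np.ones(4) * rank for rank in range(4)]  # toy gradients from 4 workers
print(allreduce_sum(grads)[0])                    # [6. 6. 6. 6.] on each worker
```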
As of 2022, Fire-Flyer 2 had 5000 PCIe A100 GPUs in 625 nodes, each containing 8 GPUs.[31] It later incorporated NVLinks and NCCL to train larger models that required model parallelism.[32][33]
Development and release history
Major versions of DeepSeek models. SFT stands for supervised finetuning.
| Major versions | Release date | Status | Major variants | Remarks |
|---|---|---|---|---|
| DeepSeek Coder | 2 November 2023 | Discontinued | Base (pretrained); Instruct (instruction-finetuned) | The architecture is essentially the same as Llama. |
| DeepSeek V2 | May 2024 | | | Developed multi-head latent attention (MLA). Also used mixture of experts (MoE). Implemented KV caching. |
| DeepSeek V3 | December 2024 | Active | DeepSeek-V3-Base; DeepSeek-V3 (a chat model) | The architecture is essentially the same as V2. Updated on 24 March 2025 (DeepSeek-V3-0324). |
| DeepSeek-Prover-V2 | 1 May 2025 | Active | DeepSeek-Prover-V2-671B; DeepSeek-Prover-V2-7B | |
| DeepSeek VL2 | 13 December 2024 | Active | | |
| DeepSeek R1 | 20 November 2024 | Active | DeepSeek-R1-Lite-Preview | Only accessible through the API and a chat interface. |
| DeepSeek R1 | 20 January 2025 | Active | DeepSeek-R1; DeepSeek-R1-Zero | Initialized from DeepSeek-V3-Base and sharing the V3 architecture. |
| DeepSeek R1 | 20 January 2025 | Active | Distilled models | Initialized from other models, such as Llama and Qwen. Distilled from data synthesized by R1 and R1-Zero.[60] |
| DeepSeek R1 | 28 May 2025 | Active | DeepSeek-R1-0528 | |
| DeepSeek V3.1 | 21 August 2025 | Active | DeepSeek-V3.1-Base; DeepSeek-V3.1 (a chat model) | Hybrid architecture (thinking and non-thinking modes available). Trained on over 800B additional tokens on top of V3. |
The first DeepSeek models were essentially the same as Llama,[38] i.e., dense decoder-only transformers. Later models incorporated multi-head latent attention (MLA), mixture of experts (MoE), and KV caching.[39][41]
A standard MoE Transformer generally uses sparsely-gated MoE layers in place of the FFN layers. In such an MoE layer, several FFN modules ("routed experts") run in parallel, and a small classifier ("gate") computes a score for each module upon each token; only the highest-scoring modules are activated. Starting with DeepSeekMoE, DeepSeek adopted a variant that adds "shared experts", which are always activated.[39]
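As an illustration, the following is a minimal sketch of such an MoE layer with shared and routed experts. The sizes, gate design, and top-k value are illustrative assumptions, not DeepSeek's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch of a sparsely-gated MoE layer with DeepSeek-style
# "shared experts" (always active) alongside top-k "routed experts".
class MoELayer(nn.Module):
    def __init__(self, d=64, n_shared=2, n_routed=8, k=2):
        super().__init__()
        def ffn():
            return nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))
        self.shared = nn.ModuleList(ffn() for _ in range(n_shared))
        self.routed = nn.ModuleList(ffn() for _ in range(n_routed))
        self.gate = nn.Linear(d, n_routed)  # small classifier scoring every routed expert
        self.k = k

    def forward(self, x):  # x: (n_tokens, d)
        out = sum(expert(x) for expert in self.shared)  # shared experts: always activated
        scores = F.softmax(self.gate(x), dim=-1)        # per-token expert scores
        weights, indices = scores.topk(self.k, dim=-1)  # only the top-k experts run
        for t in range(x.size(0)):
            for w, i in zip(weights[t], indices[t]):
                out[t] = out[t] + w * self.routed[int(i)](x[t])
        return out

y = MoELayer()(torch.randn(5, 64))  # route 5 tokens through the layer
print(y.shape)                      # torch.Size([5, 64])
```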
DeepSeek's models are "open weight", which provides less freedom for modification than true open source software.[17][18]
DeepSeek Coder
DeepSeek Coder is a series of eight models, four pretrained (Base) and four instruction-finetuned (Instruct). All have 16K context lengths. The model was made source-available under the DeepSeek License, which includes "open and responsible downstream usage" restrictions.[61]
LLM
The DeepSeek-LLM series was released in November 2023. It has 7B and 67B parameters in both Base and Chat forms. DeepSeek's accompanying paper claimed benchmark results higher than those of Llama 2 and most open-source LLMs at the time.[38]: section 5 The model code is under the source-available DeepSeek License.[66]
MoE
The DeepSeek-MoE models (Base and Chat) each have 16B parameters (2.7B activated per token, 4K context length). Training was essentially the same as for DeepSeek-LLM 7B, using part of its training dataset. DeepSeek claimed that the 16B MoE performed comparably to a 7B non-MoE model. The architecture is a variant of the standard sparsely-gated MoE, with "shared experts" that are always queried and "routed experts" that might not be. DeepSeek found this to help with expert balancing: in a standard MoE, some experts can become overused while others are rarely used, wasting capacity, and naively forcing balanced expert usage pushes experts to learn the same, redundant capabilities. The shared experts are meant to learn core capabilities that are frequently used, leaving the routed experts to learn peripheral capabilities that are rarely needed.[39]
Math
DeepSeek-Math includes 3 models: Base, Instruct, and RL. Math was trained as follows:[40]
Initialize with a previously pretrained DeepSeek-Coder Base v1.5 7B.
Further pretrain with 500B tokens (56% DeepSeekMath Corpus, 4% AlgebraicStack, 10% arXiv, 20% GitHub code, 10% Common Crawl). This produced Base.
Train an instruction-following model by SFT Base with 776K math problems and tool-use-integrated step-by-step solutions. This produced Instruct.
Reinforcement learning (RL): The reward model was a process reward model (PRM) trained from Base according to the Math-Shepherd method.[67] This reward model was then used to train Instruct using Group Relative Policy Optimization (GRPO) on a dataset of 144K math questions "related to GSM8K and MATH". The reward model was continuously updated during training to avoid reward hacking. This resulted in RL.
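For context, GRPO dispenses with PPO's separate value network: several answers are sampled per question, each is scored by the reward model, and an answer's advantage is its reward normalized within its group. A minimal sketch of this advantage computation, with made-up reward values:

```python
import torch

# Sketch of GRPO's core idea: the advantage of each sampled answer is its
# reward normalized against the group sampled for the same question, so no
# separate value network is needed.
def group_relative_advantages(rewards: torch.Tensor) -> torch.Tensor:
    # rewards: (n_questions, group_size) — one row of sampled answers per question
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + 1e-8)

# e.g. 2 questions, 4 sampled answers each, scored by a reward model
rewards = torch.tensor([[0.1, 0.9, 0.4, 0.4], [1.0, 0.0, 0.0, 1.0]])
adv = group_relative_advantages(rewards)  # positive = better than the group mean
print(adv)
```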
V2
The architecture of V2, showing both shared-routed MoE and MLA[68]: Figure 2
In May 2024, DeepSeek released the DeepSeek-V2 series. The series includes 4 models, 2 base models (DeepSeek-V2, DeepSeek-V2 Lite) and 2 chatbots (Chat). The two larger models were trained as follows:[68]
Pretrain on a dataset of 8.1T tokens, using 12% more Chinese tokens than English ones.
Extend the context length from 4K to 128K using YaRN.[69] This resulted in DeepSeek-V2. (A simplified sketch of the position scaling behind such context extension follows this subsection.)
SFT with 1.2M instances for helpfulness and 0.3M for safety. This resulted in Chat SFT, which was not released.
RL using GRPO in two stages. The first stage was trained to solve math and coding problems. This stage used 1 reward model, trained on compiler feedback (for coding) and ground-truth labels (for math). The second stage was trained to be helpful, safe, and follow rules. This stage used 3 reward models. The helpfulness and safety reward models were trained on human preference data. The rule-based reward model was manually programmed. All trained reward models were initialized from Chat (SFT). This resulted in the released version of Chat.
They opted for two-stage RL because they found that RL on reasoning data had "unique characteristics" different from RL on general data; for example, RL on reasoning could keep improving over more training steps.[68]
The two V2-Lite models were smaller, and trained similarly. DeepSeek-V2 Lite-Chat underwent only SFT, not RL. They trained the Lite version to help "further research and development on MLA and DeepSeekMoE".[68]
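The sketch below illustrates, under simplifying assumptions, the idea behind RoPE-based context extension: positions beyond the trained range are rescaled so that rotary angles stay within the range seen during training. It shows plain position interpolation only; YaRN's per-frequency rescaling and attention-temperature adjustment are omitted.

```python
import torch

# Simplified sketch of RoPE position scaling for context extension.
# Positions in the extended 128K window are squeezed back into the
# angle range the model saw during its 4K pretraining.
def rope_angles(positions, dim=64, base=10000.0, scale=1.0):
    inv_freq = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)
    return torch.outer(positions.float() * scale, inv_freq)  # (seq_len, dim/2)

trained = rope_angles(torch.arange(4096))                          # 4K trained range
extended = rope_angles(torch.arange(131072), scale=4096 / 131072)  # 128K squeezed in
print(trained.max().item(), extended.max().item())  # both stay near 4095
```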
DeepSeek V2 properties[68]: Section 3.1.2, Appendix B [70][71]
| Name | Params. | Active params | # Layers | Context length | # Shared experts | # Routed experts |
|---|---|---|---|---|---|---|
| V2-Lite | 15.7B | 2.4B | 27 | 32K | 2 | 64 |
| V2 | 236B | 21B | 60 | 128K | 2 | 160 |
The Financial Times reported that DeepSeek-V2 was cheaper than its peers, at a price of 2 RMB per million output tokens. The University of Waterloo Tiger Lab's leaderboard ranked DeepSeek-V2 seventh on its LLM ranking.[37]
The DeepSeek-Coder V2 series included V2-Base, V2-Lite-Base, V2-Instruct, and V2-Lite-Instruct. Training proceeded as follows:[41][note 3]
Base models were initialized from corresponding intermediate checkpoints after pretraining on 4.2T tokens (not the version at the end of pretraining), then pretrained further for 6T tokens, then context-extended to 128K context length.
DeepSeek-Coder and DeepSeek-Math were used to generate 20K code-related and 30K math-related instruction data, then combined with an instruction dataset of 300M tokens. This was used for SFT.
RL with GRPO. The reward for math problems was computed by comparing with the ground-truth label. The reward for code problems was generated by a reward model trained to predict whether a program would pass the unit tests.
DeepSeek-V2.5 was made by combining DeepSeek-V2-Chat and DeepSeek-Coder-V2-Instruct.[42]
V3
Multi-token prediction
DeepSeek-V3-Base and DeepSeek-V3 (a chat model) use essentially the same architecture as V2, with the addition of multi-token prediction, which (optionally) decodes extra tokens faster but less accurately (a toy sketch follows the training list below). Training process:[32]
Pretraining on 14.8T tokens of a multilingual corpus, mostly English and Chinese. It contained a higher ratio of math and programming than the pretraining dataset of V2.
Extend context length twice, from 4K to 32K and then to 128K, using YaRN.[69] This produced DeepSeek-V3-Base.
SFT for 2 epochs on 1.5M samples of reasoning (math, programming, logic) and non-reasoning (creative writing, roleplay, simple question answering) data. Reasoning data was generated by "expert models". Non-reasoning data was generated by DeepSeek-V2.5 and checked by humans.
The "expert models" were trained by starting with an unspecified base model, then SFT on both <problem, original response> data, and synthetic <system prompt, prompt, problem, R1 response> data generated by an internal DeepSeek-R1-Lite model. The system prompt asked R1 to reflect and verify during thinking. Then the expert models were RL using an undisclosed reward function.
Each expert model was trained to generate just synthetic reasoning data in one specific domain (math, programming, logic).
Expert models were used instead of R1 itself, since the output from R1 itself suffered "overthinking, poor formatting, and excessive length".
Model-based reward models were made by starting with an SFT checkpoint of V3, then finetuning on human preference data containing both the final reward and the chain-of-thought leading to it. The reward model produced reward signals both for questions with objective but free-form answers and for questions without objective answers (such as creative writing).
An SFT checkpoint of V3 was trained by GRPO using both reward models and rule-based reward. The rule-based reward was computed for math problems with a final answer (put in a box), and for programming problems by unit tests. This produced DeepSeek-V3.
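As referenced above, a toy sketch of multi-token prediction: in addition to the usual next-token head, an extra head predicts the token two positions ahead, so one forward pass can (optionally) emit two tokens. This is a deliberate simplification; DeepSeek-V3's actual MTP modules are sequential and share embeddings with the main model.

```python
import torch
import torch.nn as nn

# Toy sketch of multi-token prediction (MTP): two output heads, one for
# token t+1 and one for token t+2, let decoding emit two tokens per step
# (faster, but the second token is predicted less accurately).
class ToyMTPHeads(nn.Module):
    def __init__(self, d_model=64, vocab=1000):
        super().__init__()
        self.next_head = nn.Linear(d_model, vocab)  # predicts token t+1
        self.mtp_head = nn.Linear(d_model, vocab)   # predicts token t+2

    def forward(self, h):  # h: (batch, d_model) final hidden state
        return self.next_head(h), self.mtp_head(h)

h = torch.randn(1, 64)
logits1, logits2 = ToyMTPHeads()(h)
t1, t2 = logits1.argmax(-1), logits2.argmax(-1)  # two tokens from one step
```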
DeepSeek released its DeepSeek-V3-0324 model, which used the same architecture as V3, on 24 March 2025 under the MIT License.[72]
The DeepSeek team performed extensive low-level engineering to improve efficiency. They used mixed-precision arithmetic. Much of the forward pass was performed in 8-bit floating point numbers (5E2M: 5-bit exponent and 2-bit mantissa) rather than the standard 32-bit, requiring special GEMM routines to accumulate accurately. They used a custom 12-bit float (E5M6) only for the inputs to the linear layers after the attention modules. Optimizer states were in 16-bit (BF16). They minimized communication latency by extensively overlapping computation and communication, such as dedicating 20 streaming multiprocessors out of 132 per H800 for only inter-GPU communication. They lowered communication by rearranging (every 10 minutes) the exact machine each expert was on so as to avoid querying certain machines more often than others, adding auxiliary load-balancing losses to the training loss function, and other load-balancing techniques.[32]
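The following sketch emulates the key idea of that FP8 mixed precision under stated assumptions (a recent PyTorch exposing torch.float8_e5m2, the 5-bit-exponent/2-bit-mantissa format mentioned above): values are stored in 8 bits, while the matrix product is accumulated at full precision.

```python
import torch

# Sketch of FP8 mixed-precision GEMM: inputs are stored in 8-bit floats,
# but products are accumulated in 32-bit precision, mirroring the special
# accumulation routines described above. Direct fp8 matmul needs dedicated
# kernels, so this emulation upcasts before multiplying.
x = torch.randn(128, 256)
w = torch.randn(256, 64)

x8 = x.to(torch.float8_e5m2)  # lossy 8-bit storage
w8 = w.to(torch.float8_e5m2)

y = x8.float() @ w8.float()   # emulate FP8 GEMM with FP32 accumulation

err = (y - x @ w).abs().mean() / (x @ w).abs().mean()
print(f"relative error from 8-bit storage: {err:.3f}")
```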
After training, it was deployed on clusters of H800 GPUs. The 8 H800 GPUs within a cluster were connected by NVLink, and the clusters were connected by InfiniBand.[32]
Total cost of training the DeepSeek-V3 model[32]: Table 1
| Stage | Cost (thousand GPU hours) | Cost (million US$) |
|---|---|---|
| Pre-training | 2,664 | 5.328 |
| Context extension | 119 | 0.24 |
| Fine-tuning | 5 | 0.01 |
| Total | 2,788 | 5.576 |
This cost, which implies a rental rate of about US$2 per GPU hour, has been discussed[74][75][76] and called misleading, because it covers only part of the true cost.[77]
R1
In January 2025, DeepSeek released the DeepSeek-R1 model under the MIT License.[81]
DeepSeek-R1-Lite-Preview[43][44][note 4] was trained for logical inference, mathematical reasoning, and real-time problem-solving. DeepSeek claimed that it exceeded performance of OpenAI o1 on benchmarks such as American Invitational Mathematics Examination (AIME) and MATH.[82] However, The Wall Street Journal reported that on 15 problems from the 2024 edition of AIME, the o1 model reached a solution faster.[83]
DeepSeek-R1 and DeepSeek-R1-Zero[84] were initialized from DeepSeek-V3-Base and share its architecture. DeepSeek-R1-Distill models were instead initialized from other pretrained open-weight models, including LLaMA and Qwen, then fine-tuned on synthetic data generated by R1.[60]
Template for DeepSeek-R1-Zero
A conversation between User and Assistant. The user asks a question, and the Assistant solves it. The assistant first thinks about the reasoning process in the mind and then provides the user with the answer. The reasoning process and answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., <think> reasoning process here </think> <answer> answer here </answer>. User: <prompt>. Assistant:
– <prompt> is replaced with the specific reasoning question during training.
DeepSeek-R1-Zero was trained exclusively using GRPO RL, without SFT. Unlike previous versions, it used no model-based reward: all reward functions were rule-based, "mainly" of two types (other types were not specified), accuracy rewards and format rewards. The accuracy reward checked whether a boxed final answer is correct (for math) or whether code passes unit tests (for programming); the format reward checked whether the model puts its thinking trace within <think>...</think> tags.[60]
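A minimal sketch of such rule-based rewards follows. The exact patterns DeepSeek checked are not public; the regexes here are illustrative assumptions.

```python
import re

# Sketch of R1-Zero-style rule-based rewards: a format reward checks that
# the thinking trace is wrapped in <think> tags, and an accuracy reward
# compares a boxed final answer against the ground truth.
def format_reward(completion: str) -> float:
    return 1.0 if re.search(r"<think>.*?</think>", completion, re.DOTALL) else 0.0

def accuracy_reward(completion: str, ground_truth: str) -> float:
    m = re.search(r"\\boxed\{([^}]*)\}", completion)
    return 1.0 if m and m.group(1).strip() == ground_truth else 0.0

out = r"<think>2+2=4</think> The answer is \boxed{4}."
print(format_reward(out), accuracy_reward(out, "4"))  # 1.0 1.0
```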
R1-Zero has issues with readability and mixing languages. R1 was trained to address these issues and further improve reasoning:[60]
SFT DeepSeek-V3-Base on "thousands" of "cold-start" data all with the standard format of |special_token|<reasoning_process>|special_token|<summary>, designed to improve model output readability.
Apply the same GRPO RL process as R1-Zero, adding a "language consistency reward" to encourage the model to respond monolingually. This produced an unreleased internal model.
Synthesize 600K reasoning examples from the internal model, with rejection sampling (i.e., generated reasoning traces with a wrong final answer were removed; see the sketch after this list). Synthesize 200K non-reasoning examples (writing, factual QA, self-cognition, translation) using DeepSeek-V3.
SFT DeepSeek-V3-Base on the 800K synthetic data for 2 epochs.
Apply the same GRPO RL process as R1-Zero with rule-based reward (for reasoning tasks), but also model-based reward (for non-reasoning tasks, helpfulness, and harmlessness). This produced DeepSeek-R1.
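A minimal sketch of the rejection-sampling step in stage 3 above: sample several reasoning traces per question and keep only those whose boxed final answer matches the ground truth. `model.generate` and the boxed-answer convention are illustrative assumptions, not DeepSeek's actual interface.

```python
import re

# Sketch of rejection sampling: accepted traces become SFT training data,
# traces with a wrong final answer are discarded.
def final_answer(trace: str) -> str | None:
    m = re.search(r"\\boxed\{([^}]*)\}", trace)
    return m.group(1).strip() if m else None

def rejection_sample(model, question: str, answer: str, n: int = 16) -> list[str]:
    kept = []
    for _ in range(n):
        trace = model.generate(question)   # sample one reasoning trace (assumed API)
        if final_answer(trace) == answer:  # wrong final answer => rejected
            kept.append(trace)
    return kept
```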
Distilled models were trained by SFT on 800K data synthesized from DeepSeek-R1, in a similar way as step 3. They were not trained with RL.[60]
There were reports that R2, the intended successor to R1, was originally planned for release in early May 2025.[85] However, on 28 May 2025, R1 was instead updated to version R1-0528.[86] As of early July, R2 had not yet been released, as Liang Wenfeng was not yet satisfied with its performance; most Chinese cloud providers of R1 used Nvidia H20 chips.[87] As of August, R2 remained unreleased, with sources citing slow data labelling and chip problems: DeepSeek was encouraged by authorities to adopt Huawei's Ascend chips for training, but these had stability issues, slower inter-chip connectivity, and inferior software, so the company opted to use Nvidia chips for training and Huawei chips for inference.[88] It was also reported that the Cyberspace Administration of China requested that several large corporations stop buying Nvidia H20 chips and buy from domestic suppliers instead.[89]
Significance
DeepSeek's success against larger and more established rivals was a surprise to both the industry and to markets,[15][90] and has been compared by investors and pundits to the "Sputnik moment".[15][91][92][23][22][21]
The DeepSeek-R1 model provides responses comparable to other contemporary large language models, such as OpenAI's GPT-4o and o1.[93] Its training cost is reported to be significantly lower than other LLMs.[94][95]
The company claims that it trained V3, a predecessor of R1, for US$6 million, compared to US$100 million for OpenAI's GPT-4 in 2023,[11] and using approximately one-tenth the computing power consumed by Meta's comparable model, LLaMA 3.1.[11][12][13][14]
After the January 2025 release of the R1 model, which offered significantly lower costs than competing models, some investors anticipated a price war in the American AI industry.[96] It was dubbed the "Pinduoduo of AI", and other Chinese tech giants such as ByteDance, Tencent, Baidu, and Alibaba cut the price of their AI models. Despite its low price, it was profitable compared to its money-losing rivals.[56]
DeepSeek-AI; Bi, Xiao; Chen, Deli; Chen, Guanting; Chen, Shanhuang; Dai, Damai; Deng, Chengqi; Ding, Honghui; Dong, Kai (5 January 2024), DeepSeek LLM: Scaling Open-Source Language Models with Longtermism, arXiv:2401.02954
Dai, Damai; Deng, Chengqi; Zhao, Chenggang; Xu, R. X.; Gao, Huazuo; Chen, Deli; Li, Jiashi; Zeng, Wangding; Yu, Xingkai (11 January 2024), DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models, arXiv:2401.06066
Shao, Zhihong; Wang, Peiyi; Zhu, Qihao; Xu, Runxin; Song, Junxiao; Bi, Xiao; Zhang, Haowei; Zhang, Mingchuan; Li, Y. K. (27 April 2024), DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models, arXiv:2402.03300
DeepSeek-AI; Zhu, Qihao; Guo, Daya; Shao, Zhihong; Yang, Dejian; Wang, Peiyi; Xu, Runxin; Wu, Y.; Li, Yukun (17 June 2024), DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence, arXiv:2406.11931
Guo, Daya; Zhu, Qihao; Yang, Dejian; Xie, Zhenda; Dong, Kai; Zhang, Wentao; Chen, Guanting; Bi, Xiao; Wu, Y. (26 January 2024), DeepSeek-Coder: When the Large Language Model Meets Programming – The Rise of Code Intelligence, arXiv:2401.14196
"DeepSeek Coder". deepseekcoder.github.io. Archived from the original on 27 January 2025. Retrieved 27 January 2025.