- Chinese AI lab DeepSeek launched the DeepSeek-R1 model, rivaling OpenAI in math reasoning and code generation.
- The model is (in part?*) open-sourced for global research use.
- Requires far less computing power than competing models from the likes of Meta.
- Competes with OpenAI in critical areas such as mathematical reasoning, code generation, and cost efficiency.
- Overcame U.S. chip export restrictions through an optimized architecture.
- Big Tech are sore losers.
*DeepSeek employs a dual licensing structure for its models. The codebase for DeepSeek-Coder-V2 is released under the MIT License, which allows unrestricted use, modification, and distribution. However, the pre-trained models are governed by the DeepSeek License Agreement, permitting research and commercial use with specific restrictions to prevent harmful applications. While DeepSeek’s models are open in many aspects, some argue they do not fully meet all criteria for being considered “open source” due to these licensing nuances.
It’s not open-source, stop spreading disinformation. The core of the product is the model weights, and no source is provided for them, making them irreproducible. This is about as open source as distributing a single .exe file; after all, you can read the assembly code, no?
I prefer to call these models “open-weights”. However, “open-source” is widely used and understood in this context. It’s not intentional disinformation.
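To make the “open-weights” distinction concrete: the published checkpoints can be downloaded and run locally, but the training data and training pipeline needed to reproduce them are not released. A minimal sketch, assuming the Hugging Face `transformers` library and the `deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B` checkpoint repository (a smaller distilled variant chosen here purely for illustration):

```python
# Minimal sketch: running published "open-weights" checkpoints locally.
# Assumes: pip install transformers torch accelerate, and that the
# deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B repo is available on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed repo name

# The weights are downloadable and usable...
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # place layers on GPU/CPU automatically
)

# ...but nothing in the release lets you retrain or reproduce them:
# the training data and training code are not provided.
prompt = "Prove that the sum of two even integers is even."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```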
I’ve used the original title of the article and checked some sources.
https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-CODE
https://github.com/huggingface/open-r1
Give me better sources and I’ll change the title. 👇
Still waiting for links, but feel free to downvote instead 🤷
Please show me an LLM that is really open source. My understanding is that most of the open models are open-weights. For the record, Mistral also releases open-weights models.
The fact that no widely used LLM is open source is not a good reason to change its meaning.