cross-posted from: https://lemmy.ml/post/24102825

DeepSeek V3 is a big deal for a number of reasons.

At only $5.5 million to train, it's a fraction of the cost of models from OpenAI, Google, or Anthropic, whose training runs often cost hundreds of millions of dollars.

It breaks the AI-as-a-service business model that OpenAI and Google have been pursuing, making state-of-the-art language models accessible to smaller companies, research institutions, and even individuals.

The code is publicly available, allowing anyone to use, study, modify, and build upon it. Companies can integrate it into their products without paying for usage, making it financially attractive. The open-source nature fosters collaboration and rapid innovation.
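To illustrate how low the barrier is, here's a minimal sketch of loading the published weights with Hugging Face Transformers. The repo name `deepseek-ai/DeepSeek-V3` and the hardware assumptions are mine, not from the post, and the full model needs a multi-GPU server rather than a desktop:

```python
# Minimal sketch: loading DeepSeek V3 with Hugging Face Transformers.
# Assumes the weights live at "deepseek-ai/DeepSeek-V3" on the Hub;
# running the full model requires serious multi-GPU hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-V3"  # assumed repo name
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # the custom MoE architecture ships its own code
    device_map="auto",       # shard across whatever GPUs are available
)

inputs = tokenizer("Explain mixture-of-experts in one sentence.",
                   return_tensors="pt")
outputs = model.generate(**inputs.to(model.device), max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```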

The model goes head-to-head with, and often outperforms, models like GPT-4o and Claude 3.5 Sonnet across various benchmarks. It excels in areas that are traditionally challenging for AI, such as advanced mathematics and code generation. Its 128K-token context window means it can process and understand very long documents, and it generates text at 60 tokens per second, twice as fast as GPT-4o.
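To put those context and throughput numbers in perspective, a quick back-of-the-envelope (the words-per-token ratio is a rough rule of thumb for English, not a measured figure):

```python
# Rough arithmetic on the context-window and generation-speed claims.
context_tokens = 128_000
words_per_token = 0.75   # rough rule of thumb for English text
tokens_per_second = 60

print(f"context ≈ {context_tokens * words_per_token:,.0f} words")       # ~96,000 words
print(f"a 1,000-token reply takes ≈ {1000 / tokens_per_second:.0f} s")  # ~17 s
```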

The Mixture-of-Experts (MoE) approach is key to the model's performance. While the model has a massive 671 billion parameters in total, only about 37 billion are activated for any given token, making it incredibly efficient. Compared to Meta's Llama 3.1 405B, which uses all 405 billion parameters for every token, DeepSeek V3 activates less than a tenth as many parameters per token yet performs better.
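To make the routing idea concrete, here is a toy top-k MoE layer in PyTorch. This is an illustrative sketch of the general technique only; DeepSeek's actual router, expert counts, and load-balancing scheme differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Toy top-k mixture-of-experts layer (illustrative, not DeepSeek's code)."""
    def __init__(self, dim=512, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, n_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )

    def forward(self, x):                                  # x: (tokens, dim)
        scores = self.router(x)                            # (tokens, n_experts)
        weights, picked = scores.topk(self.top_k, dim=-1)  # keep top-k experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Only the selected experts run for each token; the rest stay idle,
        # which is why active parameters are far fewer than total parameters.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = picked[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

layer = ToyMoELayer()
print(layer(torch.randn(4, 512)).shape)  # torch.Size([4, 512])
```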

DeepSeek V3 can be seen as a significant technological achievement by China in the face of US attempts to limit its AI progress. China once again demonstrates that resourcefulness can overcome limitations.

cyd@lemmy.world 6 points 1 day ago (last edited 1 day ago)

Kudos to Deepseek for continuing to release the code and model under a permissive license. Would be nicer if the weights were under an MIT license rather than a custom license, but I guess they're afraid of liability. Strange situation we're now in, where the future of open AI (as opposed to "open but actually closed" AI) almost entirely depends on Chinese companies.

In practice, though, I wonder how many people would actually self-host and tinker with this, since the model is way too large to run on any desktop. It would be very interesting to find downstream use cases and modifications, which are supposed to be a strength of the open-source model. Deepseek themselves don't seem much concerned about applications; from my understanding, they are basically funded by a sugar daddy and are happy to just do R&D (funnily enough, that is kinda what OpenAI was originally supposed to be before they sold out to Microsoft).
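For a rough sense of why desktops are out: just holding the weights in memory is the problem. A back-of-the-envelope sketch, assuming one byte per parameter (FP8) and ignoring activations and KV cache:

```python
# Back-of-the-envelope: memory just to hold DeepSeek V3's weights.
# Assumes FP8 (1 byte/parameter); activations and KV cache come on top.
total_params = 671e9   # 671B total parameters
active_params = 37e9   # ~37B activated per token (MoE)

print(f"weights at FP8:  {total_params * 1 / 1e9:.0f} GB")   # ~671 GB
print(f"weights at FP16: {total_params * 2 / 1e9:.0f} GB")   # ~1342 GB
# Even though only ~37B parameters are active per token, all 671B must
# stay resident in memory, so MoE saves compute, not RAM.
```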