this post was submitted on 28 Dec 2024
56 points (87.8% liked)

Technology


cross-posted from: https://lemmy.ml/post/24102825

DeepSeek V3 is a big deal for a number of reasons.

At a reported $5.5 million to train, it cost a fraction of what comparable models from OpenAI, Google, or Anthropic cost, which often runs into the hundreds of millions of dollars.

It breaks the whole AI-as-a-service business model that OpenAI and Google have been pursuing, making state-of-the-art language models accessible to smaller companies, research institutions, and even individuals.

The code is publicly available, allowing anyone to use, study, modify, and build upon it. Companies can integrate it into their products without paying for usage, making it financially attractive. The open-source nature fosters collaboration and rapid innovation.

The model goes head-to-head with and often outperforms models like GPT-4o and Claude-3.5-Sonnet in various benchmarks. It excels in areas that are traditionally challenging for AI, like advanced mathematics and code generation. Its 128K token context window means it can process and understand very long documents. Meanwhile it processes text at 60 tokens per second, twice as fast as GPT-4o.

The Mixture-of-Experts (MoE) approach used by the model is key to its performance. While the model has a massive 671 billion parameters, it only uses 37 billion at a time, making it incredibly efficient. Compared to Meta's Llama3.1 (405 billion parameters used all at once), DeepSeek V3 is over 10 times more efficient yet performs better.
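The efficiency argument above comes down to top-k expert routing: each token activates only a small subset of the model's experts. The toy NumPy sketch below illustrates the idea only; the expert count, hidden size, and router here are made up for demonstration and do not reflect DeepSeek V3's actual architecture (which also uses shared experts and a much larger expert pool).

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # hypothetical; V3 uses far more experts
TOP_K = 2         # experts activated per token
D_MODEL = 16      # toy hidden size

# Each "expert" is just a small weight matrix in this sketch.
experts = [rng.normal(size=(D_MODEL, D_MODEL)) for _ in range(NUM_EXPERTS)]
router = rng.normal(size=(D_MODEL, NUM_EXPERTS))

def moe_forward(x):
    """Route one token vector to its top-k experts and mix their outputs."""
    logits = x @ router                        # score every expert
    top = np.argsort(logits)[-TOP_K:]          # keep only the k best
    weights = np.exp(logits[top])
    weights /= weights.sum()                   # softmax over chosen experts
    # Only TOP_K of NUM_EXPERTS experts do any work for this token,
    # which is why active parameters << total parameters.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=D_MODEL)
out = moe_forward(token)
print(out.shape)  # (16,)
```

The same principle scales up: total capacity grows with the number of experts, while per-token compute is fixed by how many experts the router selects.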

DeepSeek V3 can be seen as a significant technological achievement by China in the face of US attempts to limit its AI progress. China once again demonstrates that resourcefulness can overcome limitations.

top 5 comments
[–] FaceDeer@fedia.io 24 points 17 hours ago (1 children)

Last year's leaked "We Have No Moat, And Neither Does OpenAI" memo from inside Google continues to age like fine wine. The big industry leaders spend umpteen billions of dollars forcing their way up to the top of the leaderboards and then just a few weeks or months later some little upstart is nipping at their heels with competition that cost only millions to build. I love it.

[–] cyd@lemmy.world 3 points 13 hours ago

The moat is probably mostly inertia. Microsoft or whoever will offer a code assistant that directs to OpenAI's model, and users will just use that. Most software moats are like that, rather than being based on intrinsic technological superiority.

[–] errer@lemmy.world 15 points 15 hours ago (1 children)

I’m 90% sure this article was written by AI. It’s repetitive and unnecessarily long-winded. People realize this sort of writing is crap, right?

[–] Cyber@feddit.uk 5 points 13 hours ago

Yeah, those were my thoughts - there are basically a few details reworded many times.

I was looking for the part where they'll want to earn their $5m back...

[–] cyd@lemmy.world 6 points 13 hours ago* (last edited 13 hours ago)

Kudos to Deepseek for continuing to release the code and model under a permissive license. Would be nicer if the weights were under an MIT license rather than a custom license, but I guess they're afraid of liability. Strange situation we're now in, where the future of open AI (as opposed to "open but actually closed" AI) now almost entirely depends on Chinese companies.

In practice, though, I wonder how many people will actually self-host and tinker with this, since the model is way too large to run on any desktop. It would be very interesting to find downstream use-cases and modifications, which is supposed to be a strength of the open-source model. Deepseek themselves don't seem to be much concerned about applications; from my understanding, they are basically funded by a sugar daddy and are happy to just do R&D (funnily enough, that is kinda what OpenAI was originally supposed to be before they sold out to Microsoft).