this post was submitted on 15 Feb 2024
136 points (93.0% liked)

Technology


“In 10 years, computers will be doing this a million times faster.” The head of Nvidia does not believe that there is a need to invest trillions of dollars in the production of chips for AI

you are viewing a single comment's thread
[–] ryannathans@aussie.zone 0 points 9 months ago (1 children)

Twice for AI or computing in general?

[–] Buffalox@lemmy.world 4 points 9 months ago* (last edited 9 months ago) (2 children)

Why does that make a difference? Compute for AI is built on the progress of compute in general, first for GPUs and then for data centers. They are similar in nature.
Yes, they have exceeded 2x per generation for AI for a while, but that has been achieved through exploding die sizes and cost. Even that won't make "a million times faster in 10 years" possible, because die sizes can't be increased much further.
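To put the "million times faster in 10 years" claim in perspective, a quick back-of-the-envelope calculation (an illustration, not from the thread) shows the growth rate it would require:

```python
import math

target = 1_000_000   # claimed speedup
years = 10

# Doublings needed: 2^n = 1,000,000  ->  n = log2(1,000,000)
doublings = math.log2(target)               # about 19.9 doublings

# That works out to one doubling roughly every 6 months...
months_per_doubling = years * 12 / doublings

# ...or an annual improvement factor of about 4x, far beyond
# a 2x-per-generation cadence.
annual_factor = target ** (1 / years)

print(f"{doublings:.1f} doublings, one every {months_per_doubling:.1f} months")
print(f"equivalent to {annual_factor:.2f}x per year")
```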

[–] ryannathans@aussie.zone 3 points 9 months ago (2 children)

Building an ASIC for purpose-built computation is significantly faster than generic GPU compute cores. Like when ASICs were built for bitcoin mining (SHA-256) and a little 5-watt USB device could outperform the best GPUs.
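For reference, the computation those Bitcoin ASICs hard-wire is a double SHA-256 over a block header. A minimal sketch in Python (illustrative only; real miners iterate a nonce field inside an 80-byte header against a real difficulty target):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """The hash function Bitcoin mining ASICs compute in hardware."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, difficulty_zero_bytes: int = 2) -> int:
    """Brute-force a nonce so the double hash starts with N zero bytes.
    An ASIC performs exactly this search, only billions of times faster
    per watt than general-purpose hardware."""
    nonce = 0
    while True:
        digest = double_sha256(header + nonce.to_bytes(4, "little"))
        if digest[:difficulty_zero_bytes] == b"\x00" * difficulty_zero_bytes:
            return nonce
        nonce += 1
```

The point of the analogy: the inner loop is tiny and fixed, which is exactly what makes it worth baking into silicon.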

[–] Buffalox@lemmy.world 1 points 9 months ago* (last edited 9 months ago)

The H200 is evolved from Nvidia's GPU designs, and will be by far the most powerful AI component in existence when it arrives later this year. AI is now so complex that it doesn't really make sense to call it an ASIC, or to use an ASIC for the purpose, and the cost is $40,000 for a single H200 unit! So no, not small 5-watt units; more like 100x that.
If they could make small ASICs that did the same, they'd all do it: Nvidia, AMD, Intel, Google, Amazon, Huawei, etc. But it's simply not an option.

Edit:

In principle the H200 AI/compute system is a giant cluster of tiny ASICs built onto one chip for massively parallel compute and greater speed.

[–] frezik@midwest.social 1 points 9 months ago

It may be even more specialized than that. It might be a return to analog computers.

Which isn't going to work for Nvidia's traditional products, either.

[–] fidodo@lemmy.world 1 points 9 months ago

There are also software improvements to consider; there's a lot of room for efficiency gains there.
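As a generic illustration of that point (not specific to any AI framework), an algorithmic change alone can buy orders of magnitude, independent of the hardware:

```python
from functools import lru_cache

calls = 0

def fib_naive(n: int) -> int:
    """Exponential-time recursion: roughly 2^n calls."""
    global calls
    calls += 1
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    """Same result, linear number of calls, thanks to caching."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

fib_naive(25)
print(f"naive: {calls} calls")       # over 240,000 calls
print(f"memoized: {fib_memo(25)}")   # same answer from ~26 subproblems
```

The hardware didn't change between the two versions; only the software did.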