this post was submitted on 25 Sep 2023
64 points (97.1% liked)

Technology

[–] just_another_person@lemmy.world 1 points 1 year ago (2 children)

The number of devices in use out in the world correlates directly with how useful a project like ROCm or CUDA is, or could be. More devices means developers are more likely to adopt a specific language or library for a given use. ROCm is open source and is trying to gain ground simply by expanding support to more of the devices that are already out there. My response to OP was just illustrating that fact.

Example: Nvidia got an early foothold in datacenter AI/ML because they were the first to gain platform traction with the CUDA toolkit and its inference libraries. It's horrible to use, but it is useful. AMD is now trying to catch up by deploying alternative hardware and software that covers most of the same use cases, plus they now offer APU and FPGA devices that Nvidia does not. That's the tl;dr.
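
For a sense of how ROCm covers the same ground, here is a minimal HIP vector-add sketch (illustrative only, not code from either ecosystem's docs); HIP is ROCm's CUDA-style C++ runtime, and the kernel qualifier, launch syntax, and memory calls mirror their CUDA counterparts almost name for name, which is exactly how AMD tries to pick up existing CUDA-shaped workloads.

```cpp
// Minimal HIP sketch: the ROCm analogue of a basic CUDA vector add.
// Build with hipcc; swapping the hip* calls for cuda* gives the CUDA version.
#include <hip/hip_runtime.h>
#include <vector>
#include <cstdio>

__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

    // Allocate device buffers and copy the inputs over.
    float *da, *db, *dc;
    hipMalloc(reinterpret_cast<void**>(&da), n * sizeof(float));
    hipMalloc(reinterpret_cast<void**>(&db), n * sizeof(float));
    hipMalloc(reinterpret_cast<void**>(&dc), n * sizeof(float));
    hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // Triple-chevron launch, same as CUDA.
    vector_add<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    hipDeviceSynchronize();

    hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    std::printf("c[0] = %f\n", hc[0]);  // expect 3.0

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}
```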

[–] poVoq@slrpnk.net 4 points 1 year ago (1 children)

Your comment doesn't make sense. ROCm is a buggy mess that, despite years of work, AMD hasn't been able to make function well at all.

Intel's oneAPI, on the other hand, is cross-vendor and by all appearances so far is good software with a real shot at beating CUDA, if only AMD weren't shooting itself in the foot by riding the dead horse that is ROCm.
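
To illustrate the cross-vendor point, here is a minimal oneAPI/SYCL sketch of the same kind of vector add, using unified shared memory; the default device selector picks whichever backend is installed, so a single source file can, in principle, run on Intel, Nvidia, or AMD GPUs, or fall back to the CPU. The sizes and names are just illustrative.

```cpp
// Minimal SYCL 2020 sketch: written once, run on whatever device the
// default selector finds (GPU from any vendor with a backend, else CPU).
#include <sycl/sycl.hpp>
#include <cstdio>

int main() {
    const int n = 1 << 20;
    sycl::queue q{sycl::default_selector_v};  // picks the "best" available device
    std::printf("Running on: %s\n",
                q.get_device().get_info<sycl::info::device::name>().c_str());

    // Unified shared memory: pointers usable on both host and device.
    float* a = sycl::malloc_shared<float>(n, q);
    float* b = sycl::malloc_shared<float>(n, q);
    float* c = sycl::malloc_shared<float>(n, q);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // One work-item per element; wait for the kernel to finish.
    q.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
        c[i] = a[i] + b[i];
    }).wait();

    std::printf("c[0] = %f\n", c[0]);  // expect 3.0

    sycl::free(a, q);
    sycl::free(b, q);
    sycl::free(c, q);
    return 0;
}
```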

I work with the entire CUDA toolkit on a daily basis, and it is also a mess. Nvidia is locked in, though, and doesn't plan any rework anytime soon (see their own statements on this). Any widespread alternative would force greater competition, and better products as a result.

I've never met a single engineer who has worked with any of Intel's acceleration toolchains, but Intel is only just now getting new devices into the datacenter, so maybe they will gain popularity.

[–] giacomo@lemm.ee -1 points 1 year ago

lol, and that's the argument OP was making: forget about ROCm and jump on board with oneAPI