this post was submitted on 05 Mar 2024
158 points (97.0% liked)

[–] Varyk@sh.itjust.works 12 points 8 months ago (4 children)

What does this mean? Thanks, I don't understand the terms here.

[–] technom@programming.dev 31 points 8 months ago* (last edited 8 months ago) (2 children)

CUDA is an API for running high-performance compute code on Nvidia GPUs. CUDA is proprietary, so CUDA programs run only on Nvidia GPUs. Open alternatives like Vulkan compute and OpenCL aren't as popular as CUDA.
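
For a concrete picture, a minimal CUDA program looks roughly like the sketch below (the array size and launch configuration are arbitrary illustration choices). The `__global__` kernel runs on the GPU across many threads at once, and the `<<<blocks, threads>>>` launch syntax is part of Nvidia's proprietary toolchain:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Each GPU thread computes one element of the result.
__global__ void add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                     // ~1 million elements
    float *a, *b, *c;
    cudaMallocManaged(&a, n * sizeof(float));  // unified memory keeps the sketch short
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch roughly one GPU thread per element.
    add<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    std::printf("c[0] = %f\n", c[0]);          // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Something like `nvcc add.cu -o add` builds it, but only against Nvidia's stack - which is exactly the lock-in being discussed.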

Translation layers are interface software that allow CUDA programs to run on non-Nvidia GPUs. Creating such layers requires a bit of reverse engineering of CUDA programs, and that's what Nvidia is now prohibiting. They want to ensure that all the CUDA programs in the world are limited to Nvidia GPUs alone - classic vendor lock-in via the EULA.
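
To make that less abstract, here's a purely hypothetical sketch (not any real project's code) of the trick: ship a library that exports the CUDA runtime's entry points and route them elsewhere. The "backend" here is ordinary host memory just so the sketch compiles and runs on its own; a real layer would forward these calls to another vendor's GPU API, such as AMD's HIP:

```cpp
// Hypothetical sketch only: a translation layer exports the CUDA runtime's
// entry points and services them with non-Nvidia hardware. Here the "backend"
// is plain host memory so the example runs on its own; a real layer would
// call another vendor's GPU API instead.
#include <cstdio>
#include <cstdlib>
#include <cstring>

typedef int cudaError_t;              // minimal stand-ins for the real CUDA types
const cudaError_t cudaSuccess = 0;

extern "C" cudaError_t cudaMalloc(void** devPtr, size_t size) {
    *devPtr = std::malloc(size);      // real layer: allocate on the other GPU
    return *devPtr ? cudaSuccess : 1;
}

extern "C" cudaError_t cudaMemcpy(void* dst, const void* src,
                                  size_t count, int /*kind*/) {
    std::memcpy(dst, src, count);     // real layer: queue a device copy
    return cudaSuccess;
}

extern "C" cudaError_t cudaFree(void* devPtr) {
    std::free(devPtr);                // real layer: release device memory
    return cudaSuccess;
}

int main() {
    // A host program written against the CUDA API calls the usual names...
    float host[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    void* dev = nullptr;
    cudaMalloc(&dev, sizeof(host));
    cudaMemcpy(dev, host, sizeof(host), 0);
    // ...but whatever hardware the layer targets actually does the work.
    std::printf("round-tripped value: %f\n", static_cast<float*>(dev)[0]);
    cudaFree(dev);
    return 0;
}
```

A real layer has to handle far more than this (kernel launches, streams, the driver API), and working out how compiled CUDA binaries behave is where the reverse engineering - and now the EULA clause - comes in.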

[–] Varyk@sh.itjust.works 11 points 8 months ago

Thank you, that's simple enough that I can understand what you're saying, but complex enough that all of my questions are answered.

Great answer

[–] mindbleach@sh.itjust.works 4 points 8 months ago

... and in addition to the specifics of this abuse, fuck EULAs in general.

[–] floofloof@lemmy.ca 12 points 8 months ago (1 children)

CUDA is a system for programming GPUs (Graphics Processing Units), and it can be used to do far more computations in parallel than regular CPU programming could. In particular, it's widely used in AI programming for machine learning. NVIDIA has quite a hold on this industry right now because CUDA has become a de facto standard, and as a result NVIDIA can price its graphics cards very high. Intel and AMD also make powerful GPUs that tend to be cheaper than NVIDIA's, but they don't natively support CUDA, which is proprietary to NVIDIA.

A translation layer is a piece of software that interprets CUDA commands and translates them into commands for the underlying platform, such as an AMD graphics card. So translation layers allow people to run CUDA software, such as machine learning software, on non-NVIDIA systems. NVIDIA has just changed its licence to prohibit this, so anyone using CUDA has to use a natively CUDA-capable machine, which means an NVIDIA one.

[–] Varyk@sh.itjust.works 3 points 8 months ago

Thank you, these are really great entry-level answers, so now I can understand what the heck is going on.

[–] Mango@lemmy.world 5 points 8 months ago

They're telling you how to play with the toys they sold you.

[–] MxM111@kbin.social 1 points 8 months ago (1 children)

You can't use the CUDA drivers and then insert a translation layer that translates calls meant for NVIDIA hardware into calls to non-NVIDIA hardware, in order to use non-NVIDIA hardware with CUDA.

[–] Darkrai@kbin.social 2 points 8 months ago (1 children)

Do you think this is something the EU will say is anti-competitive? I don't think current late-stage-capitalism America will do anything.

[–] 520@kbin.social 2 points 8 months ago

Oh, the EU will definitely call this anticompetitive, especially when Nvidia has a monopoly in the AI segment as it is.