this post was submitted on 25 Aug 2023
87 points (95.8% liked)

Games

[–] sugar_in_your_tea@sh.itjust.works 4 points 1 year ago (1 children)

CUDA is only better because the industry has moved to it, and NVIDIA pumps money into its development. OpenCL could be just as good if the industry adopted it and card manufacturers invested in it. AMD and Intel aren't going to invest as much in it as NVIDIA invests in CUDA because the marketshare just isn't there.

Look at Vulkan: it has a ton of potential for greater performance, yet many games (Baldur's Gate, at least) work better with DirectX 12, and that's because the developers invested resources into making DirectX work better. If those same resources were put into Vulkan development, Vulkan would outperform DirectX in those games.

The same goes for GSync vs. FreeSync: most of FreeSync's problems came from poor implementations by monitor manufacturers or poor support from NVIDIA. More people had NVIDIA cards, so GSync monitors tended to work better. If NVIDIA and AMD had worked together from the start, variable refresh would have worked well from day one.

Look at web standards: when organizations worked well together (e.g. to overtake IE 6), the web progressed quickly, and you could largely say "use a modern browser" and things would tend to work. Now that Chrome has a near monopoly, there are a ton of little things that don't work as nicely between Chrome and Firefox. Things were pretty good until Chrome became dominant, and it's getting worse.

It absolutely is "pro technology"

Kind of. It's more of an excuse to be anti-consumer by locking out competition with a somewhat legitimate "pro technology" stance.

If they really were so "pro technology," why not release DLSS, GSync, and CUDA as open standards? That way other companies could provide that technology in new ways to more segments of the market. But instead of that, they go the proprietary route, and the rest try to make open standards to oppose their monopoly on that tech.

I'm not proposing any solutions here, just pointing out that NVIDIA does this because it works to secure their dominant market share. If AMD and Intel drop out, they'd likely stop the pace of innovation. If AMD and Intel catch up, NVIDIA will likely adopt open standards. But as long as they have a dominant position, there's no reason for them to play nicely.

[–] conciselyverbose@kbin.social 0 points 1 year ago (1 children)

CUDA was first, and worked well out of the gate. Diverting resources that could have been spent improving CUDA toward an ecosystem that was outright bad for a long time didn't make sense.

GSync was first, and was better because it solved a hardware problem with hardware. It was a decade before displays shipped by default with hardware that made solving it in software anything short of laughable. There was nothing NVIDIA could have done to make FreeSync better than dogshit; the approach was terrible.

DLSS was first, and was better because it came with hardware capable of actually solving the problem. FSR doesn't, and it's inherently never going to be near as useful because of that: the cycles it saves are offset significantly by the fact that it needs its own cycles on the same hardware to work.

Opening the standard sounds good, but it doesn't actually do much unless you also compromise the product massively for compatibility. If you let AMD call FSR "DLSS" because they badly implement the methods, consumers don't get anything better. AMD's "DLSS" still doesn't work, people now think DLSS is bad, and you get accused of gimping performance on AMD because their cards can't do the math, all while also making design compromises to facilitate interoperability. And that's if they even bother doing the work. There have been NVIDIA technologies able to run on competitors' cards, and that's exactly what happened.

[–] sugar_in_your_tea@sh.itjust.works 2 points 1 year ago (2 children)

Opening the standard... compromise the product massively

Citation needed.

All NVIDIA needs to do is:

  1. release the spec with a license AMD and Intel can use
  2. form a standards group, or submit it to an existing one
  3. ensure any changes to the spec go through the standards group; they can be first to market, provided they agree on the spec change

That's it. They don't need to make changes to suit AMD and Intel's hardware, that's on those individual companies to make work correctly.

This works really well in many other areas of computing, such as compression algorithms, web standards, USB specs, etc. Once you have a standard, other products can target it and the consumer has a richer selection of compatible products.
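The compression case can be sketched concretely in Python with standard-library modules: because the gzip format is a published standard (RFC 1952), data written by one API can be read back by a completely different one, and any conforming third-party tool could do the same.

```python
import gzip
import zlib

# A gzip stream is defined by RFC 1952, so any conforming producer and
# consumer interoperate, regardless of which implementation wrote it.
payload = b"standardized formats let independent tools interoperate " * 10

compressed = gzip.compress(payload)               # producer: gzip module
# consumer: zlib module; wbits=47 (32+15) auto-detects the gzip header
restored = zlib.decompress(compressed, wbits=47)

assert restored == payload
```

The two modules share no API surface; the standardized byte format is the only contract between them, which is the point being made about richer product selection.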

Right now, if you want GPGPU, you need to choose between OpenCL and CUDA, and each choice will essentially lock you out of certain product categories. Just a few years ago, the same was true for FreeSync, though FreeSync seems to have won.

But NVIDIA seems to be allergic to open standards, even going so far as to make their own power cable when they could have worked with the existing relevant standards bodies.

[–] conciselyverbose@kbin.social 1 points 1 year ago (1 children)

Going through a standards group is a massive compromise. It in and of itself completely kills the marriage between the hardware and software designs. Answering to anyone on architecture design is a huge downgrade that massively degrades the product.

[–] sugar_in_your_tea@sh.itjust.works 2 points 1 year ago (1 children)

How do you explain PCIe, DDR, and M.2 standards? Maybe we could've had similar performance sooner if motherboard vendors did their own thing, but with standardization, we get more variety and broader adoption.

If a company wants or needs a major change, they go through the standards body and all competitors benefit from that work. The time to market for an individual feature may be a little longer, but the overall pace is likely pretty similar, they just need to front load the I/O design work.

[–] conciselyverbose@kbin.social 1 points 1 year ago (1 children)

Completely and utterly irrelevant? They are explicitly for the purpose of communicating between two pieces of hardware from different manufacturers, and obscenely simple. The entire purpose is to do the same small thing faster. Standardizing communication costs zero.

The architecture of GPUs is many, many orders of magnitude more complex, solving problems many orders more complex than that. There isn't even a slim possibility that hardware ray tracing would exist if Nvidia hadn't unilaterally done so and said "this is happening now". We almost definitely wouldn't have refresh-rate-synced displays even today, either. It took NVIDIA making a massive investment to show it was possible and worth doing, through a solid decade of completely unusable software solutions, before FreeSync became something that wasn't vomit-inducing.

There is no such thing as innovation on standards. It's worth the sacrifice for modular PCs. It's not remotely worth the sacrifice to graphics performance. We'd still be doing the "literally nothing but increasing core count and clocks" race that's all AMD can do for GPUs if Nvidia needed to involve other manufacturers in their giant leaps forward.

[–] sugar_in_your_tea@sh.itjust.works 2 points 1 year ago (1 children)

communicating between two pieces of hardware from different manufacturers

  • like a GPU and a monitor? (FreeSync/GSync)
  • like a GPU and a PSU? (the 12v cable)

DLSS and RTX are the same way, but instead of communicating between two hardware products, it's communicating between two software components, and then translating those messages onto commands for specialized hardware.

Both DLSS and RTX are simpler, more specific cases of GPGPU, so NVIDIA likely could've opened and extended CUDA, extended OpenCL, or extended Vulkan/DirectX instead, with the hardware reporting whether it can handle the DLSS or RTX extensions efficiently. CPUs do exactly that for things like SIMD instructions: the CPU exposes feature flags, and compilers change the code they emit depending on the features that CPU reports.
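The CPU analogy can be sketched as capability-based dispatch: the hardware reports what it supports, and the software picks the best available path. The feature names and functions below are hypothetical, purely to illustrate the pattern; a real runtime would query the hardware (e.g. CPUID on x86) rather than hard-code the set.

```python
def detect_features():
    # Hypothetical capability report; stands in for a hardware query
    # such as CPUID feature flags or a GPU extension enumeration.
    return {"baseline", "fancy_upscale"}

def upscale_baseline(frame):
    # Generic fallback path that any device can run.
    return f"software-upscaled({frame})"

def upscale_accelerated(frame):
    # Path used only when the device advertises the capability.
    return f"hardware-upscaled({frame})"

def make_upscaler(features):
    # Select the best implementation the reported features allow,
    # the way compilers/runtimes pick SIMD code paths.
    if "fancy_upscale" in features:
        return upscale_accelerated
    return upscale_baseline

upscale = make_upscaler(detect_features())
print(upscale("frame0"))  # → hardware-upscaled(frame0)
```

Under an open extension, any vendor whose hardware reports the capability gets the fast path; everyone else still gets a working fallback.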

But in all of those cases, they instead went proprietary with minimal documentation. That means it was intentional: they don't want competitors competing directly using those technologies, and instead expect them to build their own competing APIs.

Here's how the standards track should work:

  1. company proposes new API A for the standards track
  2. company builds a product based on proposal A
  3. standards body considers and debates proposal A
  4. company releases product based on A, ideally after the standards body agrees on A
  5. if there is a change needed to A, company releases a patch to support the new, agreed-upon standard, and competitors start building their own implementations of A

That's it. Step 1 shouldn't take much effort, and if they did a good job designing the standard, step 5 should be pretty small.

But instead, NVIDIA ignores the whole process and just does their own thing until either they get their way or they're essentially forced to adopt the standard. They basically lost the GSync fight (after years of winning), and they seem to have lost the Wayland EGLStream proposal and have adopted the GBM standard. But they win more than they lose, so they keep doing it.

That's why we need competition: not because NVIDIA isn't innovating, but because NVIDIA is innovating in a way that locks out competition. If AMD and Intel can eat away at NVIDIA's dominant market share, NVIDIA will be forced to play nice more often.

[–] conciselyverbose@kbin.social 1 points 1 year ago* (last edited 1 year ago) (1 children)

Every single thing about what you're discussing literally guarantees that GPUs are dogshit. There's no path to any of the features we're discussing getting accepted to open standards if AMD has input. They only added them after Nvidia proved how much better they are than brute force by putting them in people's hands.

Standards do not and fundamentally cannot work when actual innovation is called for. Nvidia competing is exactly 100% of the reason we have the technology we have. We'd be a decade behind, bare minimum, if AMD had any input at all in a standards body that controlled what Nvidia can make.

We're not going to agree, though, so I'll stop here.

[–] sugar_in_your_tea@sh.itjust.works

The process I detailed does not require consensus before a product can be released, it just allows for that consensus to happen eventually. So by definition, it won't impede progress. It does encourage direct competition, and that's something NVIDIA would rather avoid.