this post was submitted on 15 Sep 2024
132 points (100.0% liked)

Steam Deck

He specifically cited bad battery life on the ROG Ally and Lenovo Legion Go, saying that getting only one hour of battery life isn't enough. The Steam Deck (especially the OLED model) does a lot better battery-wise, but improving power efficiency should really help with any games that are maxing out the Deck's power.

all 14 comments
[–] Stampela@startrek.website 31 points 2 months ago (3 children)

Uh, I feel like this is better taken with a low level of enthusiasm: reading the article, there's no mention of how it's supposed to improve battery life. It's mentioned that it's AI based, and, most concerning for us, both the Ally and Go use the Z1/Z1 Extreme… which have a 10 TOPS NPU.

[–] Dudewitbow@lemmy.zip 18 points 2 months ago* (last edited 2 months ago) (1 children)

The idea of it improving battery life is that generating frames is less performance-intensive than rendering at a given framerate (e.g. a 60 fps capped game with frame gen doubling the framerate consumes less power than running the same game at a native 120 fps). Though it's slightly less practical, because frame generation only makes sense when the base framerate is high enough (ideally above 60) to avoid a lot of screen artifacting. So in practical use, this only makes sense to "save battery" in the context that you have a 120 Hz+ screen and choose to cap the framerate to 60-75 fps.
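To make the arithmetic above concrete, here's a tiny illustrative sketch (the numbers are the commenter's example, not measured figures): with 2x frame generation the GPU only fully renders half of the presented frames, which is where the power saving would come from.

```python
# Illustrative sketch of the frame-generation battery argument above.
# "Rendered" frames are the expensive ones; generated frames are cheap.

def rendered_fps(presented_fps: float, frame_gen_2x: bool) -> float:
    """Frames the GPU must fully render per second for a given presented rate."""
    return presented_fps / 2 if frame_gen_2x else presented_fps

# Presenting 120 fps natively vs. rendering 60 fps and generating the rest:
print(rendered_fps(120.0, frame_gen_2x=False))  # 120.0 rendered frames/s
print(rendered_fps(120.0, frame_gen_2x=True))   # 60.0 rendered frames/s
```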

If one is serious about minmaxing battery versus performance at a realistic value, people should cap the screen at 40 Hz: its frame time sits exactly halfway between 30 fps and 60 fps, but it only requires 10 more fps than 30, which is a very realistic minimum performance target to maintain on a handheld.
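The 40 fps point is easy to check with frame times (a quick sketch of my own, not from the comment):

```python
# Frame time is what the player feels; 40 fps sits exactly halfway between
# 30 fps and 60 fps in frame time, which is the basis of the latency argument.

def frame_time_ms(fps: float) -> float:
    """Time each frame stays on screen, in milliseconds."""
    return 1000.0 / fps

t30, t40, t60 = frame_time_ms(30), frame_time_ms(40), frame_time_ms(60)
print(round(t30, 1), round(t40, 1), round(t60, 1))  # 33.3 25.0 16.7
print(abs(t40 - (t30 + t60) / 2) < 1e-9)            # True: 25 ms is the midpoint
```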

[–] thingsiplay@beehaw.org 2 points 2 months ago

Agreed. 40 Hz / fps is a good idea. On the Steam Deck OLED with its 90 Hz screen, one could also limit to 30 fps, which would still run the screen at 3 * 30 = 90 Hz for better input latency than at 30 Hz, while only consuming the power of rendering 30 fps. I'm not talking about Frame Generation from AMD, but the Steam Deck's own feature. Compared to AMD Frame Gen it would not increase latency, but reduce it. This is universal functionality on the Deck that is available for every game. Wish this was available on desktop too.
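The 3 * 30 = 90 trick generalizes: pick a panel refresh rate that is a whole multiple of the fps cap. A hypothetical sketch (the 45-90 Hz range matches the OLED panel; the helper function is mine, not a SteamOS API):

```python
# Hypothetical helper: highest panel refresh rate in the supported range
# that is a whole multiple of the fps cap, so every frame is shown for an
# equal number of refreshes (e.g. a 30 fps cap -> 90 Hz = 3 x 30).

def panel_rate(fps_cap: int, hz_min: int = 45, hz_max: int = 90) -> int:
    for hz in range(hz_max, hz_min - 1, -1):
        if hz % fps_cap == 0:
            return hz
    return hz_max  # no clean multiple in range; just run at the maximum

print(panel_rate(30))  # 90 (3 x 30)
print(panel_rate(45))  # 90 (2 x 45)
```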

[–] thingsiplay@beehaw.org 3 points 2 months ago

I assume the next Ally and Go will be test platforms for AMD. The main focus is probably the Steam Deck 2 and the next XBox Infinite systems.

[–] beastlykings@sh.itjust.works 2 points 2 months ago (1 children)

I'm reasonably excited. I like the Steam Deck, and I'm of the opinion that we don't need an updated version just yet. A slower-moving target for developers is best for long-term game compatibility.

But eventually a new steam deck will arrive, and it will likely use the latest CPU/GPU, which will likely benefit from this new frame generation technology.

And perhaps some benefits will trickle down to the current steam deck, or maybe not 🤷‍♂️

But still, I'm optimistic for the future of mobile gaming.

[–] Stampela@startrek.website 1 points 2 months ago

Oh yeah, new tech is cool and potentially useful. My point was that this particular excitement is not too likely to improve anything on the current hardware we have.

[–] rollinghills@lemm.ee 8 points 2 months ago

Exciting development!

[–] thingsiplay@beehaw.org 8 points 2 months ago (2 children)

Here is my view and a small timeline:

  • FSR 1 (Jun 2021): Post-processing. Can be used with any game and any graphics card on any system. Quality is not very good, but developers do not need to support it for it to be usable.
  • FSR 2 (Mar 2022): Analytical and game-specific. Analyzes the in-game content in order to produce better output than FSR 1. Can only be used with games that have integrated support for it. Still system- and graphics-card-agnostic.
  • FSR 3 (Sep 2023): Improved version of FSR 2, so the previous point applies here too, but it has a few more features and should produce better quality. It was late to arrive and was controversial at launch.
  • FSR 4 (maybe 2025): AI-based and hardware-dependent. Not much is known, but we can expect that it requires some form of AI accelerator on the GPU. We don't know whether it will be usable on other GPUs that have such a chip or restricted to AMD cards. As this is analytical, it requires games to support it, so it's game-specific as well. It's expected to have superior quality to FSR 3, maybe rivaling XeSS or even DLSS. But it seems the focus is on low-powered, weaker hardware, where it would benefit the most.
[–] JohnEdwa@sopuli.xyz 2 points 2 months ago (1 children)

One technical reason why FSR 1 isn't very good but works in everything is that FSR 1 is the only one that just takes your current frame and upscales it; all the newer ones are temporal (like TAA) and use data from multiple previous frames.
Very simplified: they "jiggle" the camera to a different position each frame so that they can gather extra data to use, but that requires being implemented in the game engine directly.
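For the curious, that "jiggle" is usually a tiny sub-pixel offset applied to the projection each frame, often drawn from a Halton(2, 3) low-discrepancy sequence. A generic TAA-style sketch, not FSR 2's actual implementation:

```python
# Generic TAA-style camera jitter sketch (not FSR 2's actual code).
# Each frame gets a different sub-pixel offset, so over several frames the
# renderer samples many positions inside every pixel.

def halton(index: int, base: int) -> float:
    """Halton low-discrepancy sequence value in [0, 1)."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def jitter(frame: int, cycle: int = 8) -> tuple:
    """Sub-pixel (x, y) camera offset in (-0.5, 0.5) for this frame."""
    i = (frame % cycle) + 1
    return halton(i, 2) - 0.5, halton(i, 3) - 0.5
```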

[–] nekusoul@lemmy.nekusoul.de 3 points 2 months ago

Kind of.

The big thing that actually defines FSR 2 is that it has access to a bunch more data, particularly the depth buffer and motion vectors, and, as you said, it uses data from previous frames.

The camera jiggle is mostly just to avoid shimmering when the camera is stationary.

[–] remotelove@lemmy.ca 2 points 2 months ago (3 children)

I am curious as to why they would offload any AI tasks to another chip. I just did a super quick search for upscaling models on GitHub (https://github.com/marcan/cl-waifu2x/tree/master/models) and they are tiny as far as AI models go.

It's the rendering bit that takes all the complex maths, and if that is reduced, that would leave plenty of room for running a baby AI. Granted, the method I linked to was only doing 29k pixels per second, but they said it wasn't GPU-optimized. (FSR 4 is going to be fully GPU-optimized, I am sure of it.)

If the rendered image is only 85% of a 4k image, that's ~1.2 million pixels that need to be computed and it still seems plausible to keep everything on the GPU.
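A quick back-of-envelope check of that figure (my arithmetic, reading "85%" as 85% of the pixel count):

```python
# Sanity check: pixels an upscaler must fill in if the GPU renders only
# 85% of a 4K frame's pixels.

W, H = 3840, 2160             # 4K UHD
total = W * H                 # 8,294,400 pixels
missing = total * 0.15        # share the upscaler has to produce
print(total, round(missing))  # 8294400 1244160 (~1.2 million)
```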

With all of that blurted out, is FSR 4's AI going to be offloaded to something else? It seems like there would be significant technical challenges in creating another data bus that would also have to sync with memory and the GPU, to offload AI compute at speeds that didn't risk creating additional lag. (I am just hypothesizing, btw.)

[–] Stampela@startrek.website 4 points 2 months ago* (last edited 2 months ago)

The thing with "AI", or better still, ML cores, is that they're very specialized. Apple hasn't been slapping ML cores in all of their CPUs since the iPhone 8 because they are super powerful; it's because they can do some things (that the hardware would have no problem doing anyway) while sipping power. You don't have to think about AI in terms of the requirements of a huge LLM like ChatGPT that needs data centers; think about it like a hardware video decoder: this thing could easily play 1080p video! Or, going with raw CPU power rather than hardware decoding, 480p. It's why you can watch hours of video on your phone, but try doing anything that hits the CPU and the battery melts.

Edit: my example has been bothering me for days now. I want to clarify to avoid any possible misunderstanding that hardware video decoding has nothing to do with AI, it’s just another very specialized chip.

[–] thingsiplay@beehaw.org 1 points 2 months ago

Well, Nvidia and Intel do that too, and I think Sony added an AI chip to the PS5 Pro for their new AI upscaler as well. We can already run AI calculations on our GPUs without AI acceleration, but it is not as fast. I have no numbers for you, only the logic that software optimized to use dedicated AI chips should run more efficiently and faster, without slowing down the regular GPU work. Intel is in this hybrid state where they support both: one version of XeSS can run on all GPUs, but it is worse than the XeSS specialized for Intel GPUs with their dedicated AI accelerators.

Those upscalers you linked only upscale non-interactive video or single frames, right? An AI upscaler working on live gameplay takes much more into consideration, like menus, or specific parts of the image being background and such. This information is programmed into the game, so it's a drastically different approach from just upscaling images, which otherwise wouldn't be different from FSR 1. But I have no clue about the numbers and how it compares to a solution like that.

I don't think this is a decision they made recently; it was probably planned long before they even started on FSR 4, and they were allegedly already working on it for 12 months or so. I think AMD "needs" to do this AI offloading because the market demands it, the traditional solution didn't work out as hoped, and maybe it's in cooperation with Valve, Microsoft and other vendors. On the other side, this AI accelerator could be used for things other than upscaling as well, as Nvidia demonstrated.