this post was submitted on 21 Oct 2023
37 points (80.3% liked)

Games


A new Genshin Impact web event has some great loot, but you're going to have to look at some terrifying AI animation.

top 19 comments
[–] mrbubblesort@kbin.social 17 points 1 year ago (2 children)

Must be living a pretty sheltered life if this is "nightmare inducing"

[–] FMT99@lemmy.world 5 points 1 year ago

Not to sound mean but it's people who spend their time on gacha games.

[–] p03locke@lemmy.dbzer0.com 2 points 1 year ago

Clickbait title is clickbait. This is barely newsworthy.

lmao who greenlights that

[–] MysticKetchup@lemmy.world 9 points 1 year ago

Give them a break guys, they're just a small indie company that doesn't have the money to pay animators to do an actually good job /s

Aside from how awful the animations are, I'm wondering why they felt they needed them at all. The little chibi portraits would have looked fine as static images

[–] mindbleach@sh.itjust.works 5 points 1 year ago (2 children)

AI images can be shockingly good. AI animation... sucks. That'll change. There's too much training data not to. Every minute of video is hundreds of adjacent frame pairs telling the machine what can happen from one frame to the next. But right now, it's either fuzzy and bad, or clean and worse, and I cannot comprehend how anyone saw these and said "that'll do."

Just pick a good frame and wiggle the parts in Live2D or something.

[–] harmonea@kbin.social 4 points 1 year ago* (last edited 1 year ago) (2 children)

Just pick a good frame and wiggle the parts in Live2D or something.

The hilarious part is that Hoyo is constantly pushing the boundaries of what can be done with Live2D; it's heavily used in Genshin character teasers, and their otome game uses it extensively. They're really good at this. Why get AI involved?

[–] mindbleach@sh.itjust.works 6 points 1 year ago

Trying makes sense. Failing makes sense. Shipping anyway does not make sense.

[–] tal@lemmy.today 3 points 1 year ago (1 children)

I'm looking forward to superresolution in video.

Each existing frame of video, especially older video, contains a limited amount of information. You can maybe do some static image upscaling -- and AI upscaling is actually pretty remarkable. I was blown away by what Stable Diffusion could do with some old comic book scans.

But more than that...there's a whole video of the characters and scenes. For most of the video, that information can, given the right software and a 3D model, be incorporated back into individual frames to generate a higher-resolution image.

To say nothing of frame interpolation to generate higher-frame-rate video.

Like, I like Lawrence of Arabia. That movie actually has pretty good-quality footage. But...there's still film grain. And the frame rate is only so high. But there is a whole lot of footage of Lawrence in that movie, enough information to do a pretty good job, if used effectively, of dropping film grain, generating intermediate frames, and increasing the resolution.

[–] p03locke@lemmy.dbzer0.com 1 points 1 year ago (1 children)

Like, I like Lawrence of Arabia. That movie actually has pretty good-quality footage. But…there’s still film grain. And the frame rate is only so high. But there is a whole lot of footage of Lawrence in that movie, enough information to do a pretty good job, if used effectively, of dropping film grain, generating intermediate frames, and increasing the resolution.

This is possible today, and without much effort. Most Stable Diffusion kits just come with upscalers and, as long as you pick the right ones for the job, the models act like fucking magic. Way way better than any of the "nearest neighbor" algorithms image editors provide.

Video editors already have really good tools for interpolating frames for slow motion. They are a bit fiddly in high motion situations, but work well otherwise.
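
For anyone curious, this is roughly what that looks like with the diffusers library's Stable Diffusion x4 upscaler. A minimal sketch; the checkpoint, prompt, and file names here are just illustrative examples:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

# Load the public x4 upscaling checkpoint (example model ID)
pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

# A single low-resolution frame; the prompt just nudges the model toward the content
low_res = Image.open("frame_0001.png").convert("RGB")
upscaled = pipe(prompt="film still, sharp detail, no grain", image=low_res).images[0]
upscaled.save("frame_0001_x4.png")
```

Run frame by frame it has no temporal awareness, which is why naive per-frame upscaling of video tends to shimmer.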

[–] tal@lemmy.today 1 points 1 year ago* (last edited 1 year ago) (1 children)

You can do upscaling with AI upscalers in SD today, yeah, and it's pretty nifty, but it's working with a 2D model. That's nice if you have a lot of footage of Lawrence from exactly the same angle; if you train a model on the whole video, then you can use that for upscaling individual frames.

But my point is that if you have software that's smart enough to make use of information derived with a 3D model, then you don't need to have that identical angle to make use of the information there.

Let's say that you've got a shot of Peter O'Toole like this:

https://prod-images.tcm.com/Master-Profile-Images/lawrenceofarabia1962.4455.jpg?w=824

And another like this:

https://media.vanityfair.com/photos/52d691da6088e6966a000006/master/w_2240,c_limit/1389793754760_lawrencethumb.jpg

Those aren't from the same angle.

But add a 3D model to the thing, and you can use data from the close-up in the first image to scale up the second. The software can rotate the data in three dimensions and understand the relationships. If you can take time into account, you could even learn how his robe flaps in the wind or whatnot.

One would need something like this.

[–] p03locke@lemmy.dbzer0.com 1 points 1 year ago

My point is that if all you are doing is cleaning up frames and trying to upscale footage from 24fps to 60fps, you have all of the data you need from the previous/next frames to blend those into in-between frames. A model trained on the movie would help, but there's no need to get into anything as complex as 3D models of objects. Sub-second animation data is just fine.
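
A rough sketch of that kind of neighbour-frame blending with dense optical flow (OpenCV); this is an illustrative approximation, not how any particular editor implements it, and the halfway warp is exactly the part that breaks down in high-motion or occluded shots:

```python
import cv2
import numpy as np

def midpoint_frame(prev_bgr, next_bgr):
    """Synthesize a crude in-between frame from two neighbouring frames."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    # Dense optical flow from prev -> next (Farneback, typical parameters)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    # Sample prev half a step forward and next half a step back along the flow
    prev_half = cv2.remap(prev_bgr, grid_x - 0.5 * flow[..., 0],
                          grid_y - 0.5 * flow[..., 1], cv2.INTER_LINEAR)
    next_half = cv2.remap(next_bgr, grid_x + 0.5 * flow[..., 0],
                          grid_y + 0.5 * flow[..., 1], cv2.INTER_LINEAR)
    # Blend the two warps to paper over occlusions
    return cv2.addWeighted(prev_half, 0.5, next_half, 0.5, 0)
```

Dedicated interpolators (RIFE, or the retiming tools built into editors) handle the occlusion and high-motion cases far better than a blend like this.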

I didn't even notice the little issues with the animation; I was just getting through it.

[–] Even_Adder@lemmy.dbzer0.com 3 points 1 year ago (2 children)

It honestly looks almost alright.

[–] all-knight-party@kbin.run 7 points 1 year ago (1 children)

Perfect back of the box quote, right there

[–] Even_Adder@lemmy.dbzer0.com 0 points 1 year ago

It might look right in a few more iterations. A paper on their implementation would be cool.

[–] MisterLister@lemmy.world 5 points 1 year ago (1 children)

It looks like a Newgrounds kid slapped it together after school in 2005

[–] mindbleach@sh.itjust.works 3 points 1 year ago

Not enough gross body motion, far too much minute detail motion.

In case anyone needs a refresher - Flash animation circa 2005 looked like this.