this post was submitted on 23 Mar 2024

PC Gaming

[–] MotoAsh@lemmy.world 8 points 8 months ago* (last edited 8 months ago) (2 children)

You are imagining a supercomputer's LLM running an NPC.

It literally cannot be that fancy. Maybe they can fake it and fool a few rubes, but there will be no deep characters run by this.

[–] owen@lemmy.ca 4 points 8 months ago

I think you could make it work by giving each NPC a limited word pool and pre-set phrases to cover for panic or confusion.
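That idea could be sketched roughly like this: check whatever the (imagined) model generates against the character's word pool, and fall back to a canned phrase when it drifts off-list. All names here are illustrative, not any real game or LLM API.

```python
import random

# Hypothetical per-character word pool and canned fallback lines.
GUARD_VOCAB = {"halt", "who", "goes", "there", "the", "gate",
               "is", "closed", "move", "along"}
FALLBACK_PHRASES = ["Move along.", "I have nothing more to say."]

def constrain_reply(raw_reply: str, vocab: set, fallbacks: list) -> str:
    """Keep the reply only if every word is in the NPC's word pool;
    otherwise cover with a pre-set phrase."""
    words = [w.strip(",.!?") for w in raw_reply.lower().split()]
    if words and all(w in vocab for w in words):
        return raw_reply
    # Off-vocabulary (or empty) output: hide the confusion behind a canned line.
    return random.choice(fallbacks)

print(constrain_reply("Halt, who goes there", GUARD_VOCAB, FALLBACK_PHRASES))
# → "Halt, who goes there" (all words are in the pool)
print(constrain_reply("As a language model I cannot", GUARD_VOCAB, FALLBACK_PHRASES))
# falls back to one of the pre-set phrases
```

The filter is deliberately dumb: anything the pool can't express gets masked as stock dialog, which is exactly the "cover for panic" behavior.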

[–] PonyOfWar@pawb.social 4 points 8 months ago* (last edited 8 months ago)

The way it works right now is usually over the cloud. As a developer, I've already tried out "Convai" a bit, a platform where you can create LLM NPCs and put them in Unreal Engine. It's pretty neat. Not perfect, but you can definitely give characters thousands of lines of backstory if you want, and they will act in character. They will also remember any conversations a player previously had with them and can refer to them in later convos.

It can still be fairly obvious that you're talking to an LLM, though, if you know what to ask and what to look for. Due to its cloud-based nature, there is also some delay between the player's input and the response. But it has a lot of potential for dialog systems where you can do far more than just choose between four predefined sentences, especially once running these things locally is no longer a performance issue.
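The pattern described above (fixed backstory plus persistent memory of past player conversations, with the cloud round-trip as the latency source) might look roughly like this sketch. This is not Convai's actual API; the class, method names, and the stubbed-out LLM call are all assumptions for illustration.

```python
class CloudNPC:
    """Toy model of a cloud-hosted LLM NPC with backstory and memory."""

    def __init__(self, name: str, backstory: str):
        self.name = name
        self.backstory = backstory   # the "thousands of lines" would live here
        self.memory = []             # persists across conversations

    def build_prompt(self, player_input: str) -> str:
        """Assemble backstory + remembered exchanges + the new input."""
        history = "\n".join(self.memory[-20:])  # cap the context we resend
        return (f"Stay in character as {self.name}.\n"
                f"Backstory:\n{self.backstory}\n"
                f"Previous exchanges:\n{history}\n"
                f"Player: {player_input}\n{self.name}:")

    def respond(self, player_input: str) -> str:
        prompt = self.build_prompt(player_input)
        reply = self._call_llm(prompt)  # the network round-trip is the delay
        # Store the exchange so later conversations can refer back to it.
        self.memory.append(f"Player: {player_input}")
        self.memory.append(f"{self.name}: {reply}")
        return reply

    def _call_llm(self, prompt: str) -> str:
        # Placeholder for the actual cloud request.
        return "Aye, I remember you."

npc = CloudNPC("Brom", "A retired blacksmith who distrusts outsiders.")
print(npc.respond("Have we met before?"))
```

Running the whole thing locally would just mean swapping `_call_llm` for an on-device model, which removes the round-trip delay but is what currently costs too much performance.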