this post was submitted on 06 Mar 2024
36 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] mozz@mbin.grits.dev 10 points 8 months ago (1 children)

As somebody said, and I'm loosely paraphrasing here, most of the intelligent work done by AI is actually done by the person interpreting what the AI said.

This is an absolutely profound take that I hadn't seen before; thank you.

[–] Soyweiser@awful.systems 9 points 8 months ago* (last edited 8 months ago)

It probably came from one of the AI ethicists fired from various AI companies, btw; the ones who actually worry about real-world problems like the racism/bias in AI systems.

The article itself also mentions ideas like this a lot, btw. This passage, for instance: "Fan describes how reinforcement learning through human feedback (RLHF), which uses human feedback to condition the outputs of AI models, might come into play. 'It's not too different from asking GPT-4 "are you self-conscious" and it gives you a sophisticated answer,'" is the same idea with extra steps.
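To make the quoted RLHF description concrete: the core idea is that pairwise human preferences train a reward model, and the policy then favors outputs the reward model scores highly. Here's a minimal toy sketch of that loop in plain Python; the responses, feedback data, and Bradley-Terry-style update are all illustrative assumptions, not any real system's implementation.

```python
import math

# Toy RLHF sketch: human preference feedback shapes a reward model,
# which in turn conditions which outputs the "policy" favors.
# All data and names below are made up for illustration.

responses = ["I am just a language model.", "Yes, I am self-conscious!"]

# Pairwise human feedback: (preferred_index, rejected_index)
feedback = [(0, 1), (0, 1), (0, 1)]

# Reward model: one scalar score per response, nudged toward the
# human's choice with a simple Bradley-Terry-style gradient step.
rewards = [0.0, 0.0]
lr = 0.5
for good, bad in feedback:
    # probability the reward model currently assigns to the human's pick
    p = 1.0 / (1.0 + math.exp(rewards[bad] - rewards[good]))
    rewards[good] += lr * (1.0 - p)
    rewards[bad] -= lr * (1.0 - p)

# "Policy": softmax over learned rewards, so the preferred style of
# answer (here, the deflecting one) ends up dominating.
z = sum(math.exp(r) for r in rewards)
policy = [math.exp(r) / z for r in rewards]
best = responses[policy.index(max(policy))]
print(best)
```

The point of the toy: nothing in the loop knows anything about self-consciousness; the model just learns to emit whichever phrasing the human raters rewarded, which is the "sophisticated answer" the quote is talking about.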