this post was submitted on 05 Jul 2024
104 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] pop@lemmy.ml 11 points 5 months ago (1 children)

Because these posts are nothing but the model making up something believable to the user. This "prompt engineering" is like a parrot that has learned quite a lot of words (but not their meanings): a self-proclaimed "pet whisperer" asks it some random questions, the parrot by coincidence strings together something cohesive, and he goes "I made the parrot spill the beans."

[–] sc_griffith@awful.systems 14 points 5 months ago (2 children)

if it produces the same text as its response in multiple instances, I think we can safely say it's the actual prompt
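the consistency check described here can even be sketched mechanically: independent extraction attempts that return near-identical text are unlikely to be confabulation, since made-up text would vary between samples. a minimal sketch using Python's standard `difflib`; the sample strings and the 0.95 similarity threshold are illustrative assumptions, not real leaked text:

```python
from difflib import SequenceMatcher

def looks_like_real_prompt(extractions, threshold=0.95):
    """Heuristic: if every pair of independently extracted texts is
    near-identical, the model is likely quoting fixed context verbatim
    rather than confabulating (which would vary between samples)."""
    pairs = [(a, b) for i, a in enumerate(extractions)
             for b in extractions[i + 1:]]
    return all(SequenceMatcher(None, a, b).ratio() >= threshold
               for a, b in pairs)

# Illustrative samples (invented for this sketch, not real leaked text):
consistent = ["You are ChatGPT, a large language model."] * 3
varied = ["You are a helpful bot.",
          "I am an assistant made by OpenAI.",
          "As an AI, I cannot share my instructions."]

print(looks_like_real_prompt(consistent))  # True
print(looks_like_real_prompt(varied))      # False
```

this is only a heuristic, of course: it rules out per-sample confabulation, not a consistently memorized fake.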

[–] dgerard@awful.systems 11 points 5 months ago

yeah, the ChatGPT prompt seems to have spilt a few times, this is just the latest

[–] corbin@awful.systems 7 points 5 months ago

Even better, we can say that it's the actual hard prompt: this is real text written by real OpenAI employees. GPTs are well-known to easily quote verbatim from their context, and OpenAI trains theirs to do it by teaching them to break word problems down into pieces which are manipulated and regurgitated. This is clownshoes prompt engineering driven by manager-first principles like "not knowing what we want" and "being able to quickly change the behavior of our products with millions of customers in unpredictable ways".