Actually I’m finding this quite useful. Do you mind posting more of the article? I can’t open links on my phone for some reason
swlabr
what is this “alignment” you speak of? I’ve never heard of this before
video events
Ah you see, this is proof that FSD is actually AGI. Elon told the FSD that it needs to maximise tesla profits. The FSD accessed a camera pointing at a tesla earnings report and realised that it could increase the value of tesla’s carbon credit scheming by taking out trees, hence the events of the video
In the current chapter of “I go looking on linkedin for sneer-bait and not jobs, oh hey literally the first thing I see is a pile of shit”
text in image
Can ChatGPT pick every 3rd letter in "umbrella"?
You'd expect "b" and "l". Easy, right?
Nope. It will get it wrong.
Why? Because it doesn't see letters the way we do.
We see:
u-m-b-r-e-l-l-a
ChatGPT sees something like:
"umb" | "rell" | "a"
These are tokens — chunks of text that aren't always full words or letters.
So when you ask for "every 3rd letter," it has to decode the prompt, map it to tokens, simulate how you might count, and then guess what you really meant.
Spoiler: if it's not given a chance to decode tokens into individual letters as a separate step, it will stumble.
Why does this matter?
Because the better we understand how LLMs think, the better results we'll get.
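The mismatch the post describes is easy to demonstrate with a toy tokenizer. The greedy longest-match scheme and the "umb | rell | a" vocabulary below are illustrative assumptions taken from the post, not the output of any real GPT tokenizer (real BPE vocabularies produce different chunks), but they show why token indices and letter indices diverge:

```python
# Toy illustration of why token-level models struggle with letter indexing.
# The vocabulary mirrors the post's hypothetical "umb | rell | a" split;
# real tokenizers (e.g. BPE) would chunk the word differently.

def toy_tokenize(text, vocab):
    """Greedy longest-match tokenizer over a fixed vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest possible chunk first.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # fall back to single characters
            i += 1
    return tokens

vocab = {"umb", "rell", "a"}  # hypothetical merges
print(toy_tokenize("umbrella", vocab))  # ['umb', 'rell', 'a']

# "Every 3rd letter" means indexing over characters, not tokens:
print("umbrella"[2::3])  # 'bl' -> the letters b and l
```

A model that only ever sees the three token IDs has no direct view of the eight characters, which is why "count every 3rd letter" requires it to first spell the word out letter by letter.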
MFs are boiling the oceans to reinvent cold reading
A real modest ~~brunch~~ bunch
Just thinking about how I watched “Soylent Green” in high school and thought the idea of a future where technology just doesn’t work anymore was impossible. Then LLMs come and the first thing people want to do with them is to turn working code into garbage, and then the immediate next thing is to kill living knowledge by normalising people relying on LLMs for operational knowledge. Soon, the oceans will boil, agricultural industries will collapse and we’ll be forced to eat recycled human. How the fuck did they get it so right?
If I had my druthers I’d make my own hosting and call it “UnaGit”, and pretend it’s unagi/eel themed, when it is actually teddy K themed
NASB: A question I asked myself in the shower: “Is there some kind of evolving, sourced document containing all the reasons why LLMs should be turned off?” Then I remembered wikis exist. Wikipedia doesn’t have a dedicated “criticisms of LLMs” page afaict, or even a “Criticisms” section on the LLM page. RationalWiki has a page on LLMs that is almost exclusively criticisms, which is great, but the tone is a few notches too casual and sneery for universal use.
Someone should write a script that estimates how much time has been spent re-fondling LLM PRs on Github.
you all joke, but my mind is so expanded by stimulants that I, and only I, can see how this dogshit code will one day purchase all the car manufacturers and build murderbots
Sorry, as mentioned elsewhere in the thread I can’t open links. Looks like froztbyte explained it though, thanks!