The unpredictable element is also why they absolutely suck at being the reliable sources of accurate information that they are being advertised to be.
Yeah, humans are wrong a lot of the time but AI forced into everything should be more reliable than the average human.
That’s not it. Even without any added variability they would still be wrong all the time. The issue is inherent to LLMs; they don’t actually understand your questions or even their own responses. It’s just the most probable jumble of words that would follow the question.
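A toy sketch of what "most probable jumble of words" means in practice (the probabilities and the temperature knob here are invented, purely illustrative, not any real model's output):

```python
import random

# Toy next-token distribution for a prompt like "The capital of France is"
# (made-up probabilities, purely illustrative).
next_token_probs = {"Paris": 0.90, "Lyon": 0.05, "beautiful": 0.04, "Berlin": 0.01}

def sample_next_token(probs, temperature=1.0):
    # Lower temperature sharpens the distribution toward the single most
    # probable token; higher temperature adds the variability discussed
    # above. Either way, "probable" never means "verified true".
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights)[0]

print(sample_next_token(next_token_probs, temperature=0.8))
```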
First of all, it doesn't matter whether you think that AI can replace human workers. It only matters whether companies think that AI can replace human workers.
Secondly, you're assuming that humans typically understand the question at stake. You've clearly never met, or been, an under-paid, over-worked employee who doesn't give a flying fuck about the daily bullshit.
Is it? Is random variance the source of all hallucinations? I think it's not; it's more that they don't understand what they're generating, they're just looking for the most statistically probable next token.
Yeah, they aren't trained to produce "correct" responses, just reasonable-looking ones; they aren't truth systems. However, I'm not sure what a truth system would even look like. At a certain point truth/fact becomes subjective, meaning that we probably have a fundamental problem with how we think about and evaluate these systems.
I mean, that's the whole reason programming languages were created: natural language is ambiguous.
Yeah, solipsism existing drives the point about truth home. Thing is, LLMs outright lie without knowing they're lying, because there's no understanding there. It's statistics at the token level.
AI is not my field, so I don't know, either.
If there are 800 sentences or other chunks of information in the training data about what color a ball is, using the average can result in the response saying red when it should be blue for the current question, or it could add details about a different type of ball because it doesn't understand what kind of ball it's talking about. It might be randomness, it might be averaging, or a combination of both.
Like, if asked 'what color is a basketball' and the training set includes a lot of custom color combinations from different teams, it might return a combination of colors that doesn't match any team, like brown (default leather) and yellow. This could also be the answer if you asked for an example of a basketball that matched team colors, because it might keep the default color from a ball that just has a team logo.
To someone who doesn't know the training set, it would probably look like it made something up. Even to someone who does, it's impossible to tell if the result is random, comes from not knowing what it's talking about, or reflects some other less obvious connection that combined the two and led to the yellow and brown result.
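A crude illustration of that pooling failure (the counts are invented, just to show the mechanism):

```python
from collections import Counter

# Invented training snippets about basketball colors, echoing the
# example above (hypothetical counts, not real data).
snippets = ["brown"] * 500 + ["yellow"] * 180 + ["purple"] * 170

# Pooling statistics over all snippets and picking the most frequent
# colors independently can pair "brown" with "yellow": each is common
# on its own, but the combination matches no actual ball.
top_two = [color for color, _ in Counter(snippets).most_common(2)]
print(top_two)  # ['brown', 'yellow']
```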
I'm not saying I agree with AI being shoehorned into everything; I'm seeing it pushed into places it shouldn't be firsthand. But strictly speaking, things don't have to be more reliable if they're fast enough.
Quantum computers are inherently unreliable, but you can perform the same calculation multiple times, average the results or discard the outliers, and for some problems it will still be faster than a classical computer.
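A rough sketch of that repeat-and-discard idea (a simulated noisy computation, not actual quantum code; the numbers are made up):

```python
import random
from collections import Counter

def noisy_compute(correct=42, error_rate=0.3):
    # Stand-in for a fast but unreliable computation: it returns the
    # right answer most of the time and garbage otherwise.
    return correct if random.random() > error_rate else random.randint(0, 100)

def repeat_and_vote(runs=51):
    # Run the unreliable computation many times and keep the most
    # common result, discarding the outliers by majority vote.
    results = Counter(noisy_compute() for _ in range(runs))
    return results.most_common(1)[0][0]

print(repeat_and_vote())  # almost always 42
```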
Same thing as back when I was in grade school and teachers would say not to trust internet sources and to look everything up in a physical book or encyclopedia, because a book is more reliable. Like, yes, it is, but it also takes me 100x as long to look something up, so starting at Wikipedia gets me to the right answer faster the vast majority of the time, even if it's not 100% accurate or reliable (this was nearer Wikipedia's original launch).
That works for pattern matching, but you don't want to do it for accurate calculations. There is no reason to average an AI-run calculation of 12345 x 54321, because that can be done with a tiny calculator powered by a solar cell the size of a pencil eraser. Running calculations like that multiple times adds up fast and will always be less reliable than just doing it right the first time. Same with reporting historical facts.
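For contrast, the exact calculation from the example above, which never needs a second opinion:

```python
# Exact integer arithmetic is deterministic: one run, one right answer,
# no averaging or voting needed.
print(12345 * 54321)  # 670592745, every single time
```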
There is a validation step that AI doesn't do. If you feed it 1000 posts from unreliable sources like Reddit, without adding context about whether each 'fact' is a joke, a baseless rumor, or from a reliable source, you get the current AI.
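One sketch of what such a validation step could look like (the weights, sources, and posts are all invented for illustration):

```python
from collections import defaultdict

# Hypothetical validation step: weight each claim by source reliability
# instead of counting every post equally (invented weights and posts).
SOURCE_WEIGHT = {"encyclopedia": 1.0, "news": 0.7, "forum": 0.2, "joke": 0.0}

posts = [
    ("the ball is blue", "encyclopedia"),
    ("the ball is red", "forum"),
    ("the ball is red", "forum"),
    ("the ball is red", "joke"),
]

scores = defaultdict(float)
for claim, source in posts:
    scores[claim] += SOURCE_WEIGHT[source]

# Raw counting picks "red" (3 posts to 1); reliability weighting
# picks "blue" (score 1.0 vs 0.4).
print(max(scores, key=scores.get))
```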
Yes, doing multiple calculations efficiently and taking averages has a lot of uses, mainly in complex systems where it provides opportunities to test chaotic systems with wildly different starting states. There are a ton of great uses for AI!
But the AI being forced down our throats is worse than Wikipedia, because it averages content from ALL of Reddit, Facebook, and other massive sites where crackpots are given the same weight as informed individuals and there are no guardrails.
I agree.
I disagree. Those are not remotely the same problem. Both in how they're technically executed, and in what the user expects out of them.
No, it's just different. Is it wrong sometimes? Yes. But it can also get you the right answer to a normal human question orders of magnitude faster than a series of traditional searches and documentation readings.
Does that information still need to be vetted afterwards? Yeah, but it's a lot easier to say "copilot, I'm looking at a crossover circuit and I've got one giant wire coil, three white rectangles, and a capacitor; what is each of them doing and what kind of meter will I need to test them" than it is to individually search for each component and for what type of meter you need to test it. Do you still need to verify that info after? Yeah, but it's a lot easier to verify once you know what to actually search for.
Basically any time one human query needs to synthesize information from multiple different sources, an AI search is going to be significantly faster.
In later classes our teachers just told us not to blindly believe what we read on Wikipedia, but to cross-reference it with other sources like newspapers or (as you said) books.