[–] geneva_convenience@lemmy.ml 2 points 4 months ago* (last edited 4 months ago) (1 children)

My dude, you don't need to believe all the medical advice it gives. Just use it to get pointed in the right direction and then check whether the symptoms match. It will often suggest multiple options and rank them by likelihood.

I do not think you understand how much training these LLMs have had on medical material. They can accurately diagnose almost any common disease; it's not like WebMD, which always suggests you have stage 5 cancer.

Seriously, try it once before going full anti-AI mode.

[–] Umbrias@beehaw.org 2 points 4 months ago (1 children)

You quite literally cannot trust them; the information entropy of their output is too high. I understand how much training they have on medical text; you don't understand how little that means. These models are fundamentally incapable of assessing the truth of a statement. You are using something you don't even understand to get advice it cannot reliably give, you lack the expertise needed to judge how accurate any given answer actually is, and it's on a topic that directly affects your physical wellbeing!

"just try it bro it's good i promise" you should actually prompt an llm about a topic you know about in detail. the amount of errors are rampant, then apply that same inaccuracy to topics you know nothing about.

My next recommendation: since you are not a healthcare professional, do not give medical advice like "use an LLM," as you personally cannot verify the accuracy of an LLM in this role.

[–] geneva_convenience@lemmy.ml 2 points 4 months ago (1 children)

If you want to visit a doctor for every minor thing, feel welcome. So far, LLMs have correctly predicted every health issue I have had and provided better and more accurate information than the doctor visit afterwards.

This does not mean they are infallible. But you can easily check what they suggest and see whether the symptoms match other websites and the doctor's description.

[–] Umbrias@beehaw.org 1 points 4 months ago (1 children)

> If you want to visit a doctor for every minor thing, feel welcome. So far, LLMs have correctly predicted every health issue I have had and provided better and more accurate information than the doctor visit afterwards.

lol no

"easily just do the same search you would have after" truly the llm is very helpful and not just an uncertainty adding middle step that through your own admission you rely on over medical professionals.

[–] geneva_convenience@lemmy.ml 1 points 4 months ago* (last edited 4 months ago) (1 children)

Do you believe doctors to be all-knowing? They are similarly prone to error. Their heads are not Wikipedia.

[–] Umbrias@beehaw.org 2 points 4 months ago

Doctors have liability and the ability to self-regulate their confidence and understand their own certainty. Also, lol at "similarly prone to error": no, human cognition is not a transformer.

Do not give medical advice. Neither you nor any LLM is licensed or capable of doing so. Yes, that means you should be held legally liable if that advice ever leads to harm, as should AI companies for convincing you and others of their grift.