I skimmed through the article and found it surprisingly difficult to pinpoint what "AI" solution they actually covered, despite going as far as opening the supplementary data of the research they mentioned. Maybe I'm missing something obvious, so please do share.
AFAICT they are talking about using computer vision techniques to highlight potential problems alongside the non-annotated image.
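To make that concrete, here's a minimal sketch of what that kind of output looks like: a "regions worth a second look" heatmap overlaid on the scan, shown next to the untouched original. The `predict_heatmap` function and the file name are made-up placeholders (a trivial brightness heuristic so the example runs end to end), not whatever method the paper actually used.

```python
# Minimal sketch: overlay a suspicion heatmap on a scan while keeping the
# untouched original next to it. The heatmap comes from a trivial stand-in
# heuristic, not the paper's actual model.
import cv2
import numpy as np

def predict_heatmap(scan_gray: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in: flag unusually bright regions, scaled to [0, 1]."""
    blur = cv2.GaussianBlur(scan_gray.astype(np.float32), (31, 31), 0)
    return np.clip((scan_gray - blur) / 255.0, 0.0, 1.0)

def annotate(scan_gray: np.ndarray) -> np.ndarray:
    heat = predict_heatmap(scan_gray)                         # (H, W) floats in [0, 1]
    colored = cv2.applyColorMap((255 * heat).astype(np.uint8), cv2.COLORMAP_JET)
    scan_bgr = cv2.cvtColor(scan_gray, cv2.COLOR_GRAY2BGR)
    overlay = cv2.addWeighted(scan_bgr, 0.7, colored, 0.3, 0)  # annotated view
    return np.hstack([scan_bgr, overlay])                      # original | annotated

side_by_side = annotate(cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE))
cv2.imwrite("scan_annotated.png", side_by_side)
```

The point being: this is classic computer vision plus visualisation, with no generative model anywhere in it.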
This... is great! But I'd argue this is NOT what "AI" is hyped about at the moment. What I mean is that computer vision and statistics have been used, in medicine and elsewhere, with great success, and I don't see why they wouldn't be applied here. Rather, I would argue the hype in AI at the moment is about LLMs and generative AI. AFAICT (but again, I had a hard time parsing this paper to get anything actually specific) none of that is being used here.
FWIW, I did specify in my post that my criticism was about "modern" AI, not AI as a field in general.
I'm not at that exact company, but a very similar one.
It's AI because essentially we just take early scans from people who are later diagnosed with respiratory illnesses and use them to train a neural network to recognise early signs that a human doctor wouldn't notice.
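In case it helps to picture it, that setup is standard supervised training: early scans as inputs, the later diagnosis as the label. A rough sketch (random tensors in place of real scans, toy architecture and hyperparameters; none of this is their actual code):

```python
# Rough sketch of the training setup described above: early scans in,
# later diagnosis (ill / not ill) as the label. Everything here is illustrative.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Fake data standing in for preprocessed scans: N greyscale images, binary labels.
scans = torch.randn(256, 1, 224, 224)
labels = torch.randint(0, 2, (256,))
loader = DataLoader(TensorDataset(scans, labels), batch_size=32, shuffle=True)

model = nn.Sequential(                       # deliberately tiny CNN classifier
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
    nn.Flatten(),
    nn.Linear(32 * 14 * 14, 2),              # 224 / 4 / 4 = 14
)
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for x, y in loader:
        optim.zero_grad()
        loss = loss_fn(model(x), y)           # predict diagnosis from early scan
        loss.backward()
        optim.step()
```

The loop itself is the generic part; what the comment describes as the interesting bit is the labelling, i.e. pairing early scans with diagnoses that only came later.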
The actual algorithm we started with and built upon is basically identical to one of the algorithms used in generative AI models (the one that takes an image, does some maths wizardry on it, and tells you how close the image is to the selected prompt). Of course we heavily modify it for our needs, so it's pretty different in the end product: we're not feeding its output back into a denoiser, and we have a lot of cognitive layers and some other tricks to bring the reliability up to a point where we can actually use it. But at its core it's still the same algorithm.
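That description (take an image, get a score for how close it is to a prompt) sounds like the CLIP family of image-text models, which is used as the scoring component in some generative pipelines. Purely as an illustration of that building block, not of their modified version, here's what it looks like with a public checkpoint and made-up prompts:

```python
# Minimal sketch of the "how close is this image to the prompt" scorer the
# comment alludes to, using a public CLIP checkpoint. This illustrates the
# family of algorithm, not the modified version described above.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("scan.png").convert("RGB")   # placeholder file name
prompts = ["a healthy chest X-ray", "a chest X-ray with early signs of disease"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image[i, j] ~ similarity of image i to prompt j.
probs = outputs.logits_per_image.softmax(dim=-1)
for prompt, p in zip(prompts, probs[0].tolist()):
    print(f"{p:.3f}  {prompt}")
```

Which matches the comment: keep the image-scoring core, drop the denoiser feedback, and add extra layers on top for reliability.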
Thanks! Is there any publication I could read to better understand how it works?