this post was submitted on 04 Jan 2024
Programming
cross-posted from: https://programming.dev/post/8121843

~n (@nblr@chaos.social) writes:

This is fine...

"We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group."

[Do Users Write More Insecure Code with AI Assistants?](https://arxiv.org/abs/2211.03622)

[–] ericjmorey@programming.dev 3 points 10 months ago

> A doctor will take risk factors into consideration

Unfortunately, the data doesn't support this assumption. Poor populations don't receive the same attention from doctors, and Black patients in particular receive worse healthcare in the US even after adjusting for many factors, such as income and family medical history.

[–] sonori@beehaw.org 2 points 10 months ago* (last edited 10 months ago)

It's unfortunately not certain that doctors will take such measures with their patients, even though most try. Indeed, ethnic discrepancies in care are one of the things machine learning is likely to make worse, given how little thought often goes into models and their training data. But the age of a hospital's machines is not a good proxy for risk factors: even if it is statistically correlated with outcomes, it says nothing about an individual patient's risk. Lower-risk people may go to a cheaper hospital, and higher-risk people may live in a city that also happens to have a very up-to-date hospital.
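The distinction above (a feature correlated with outcomes in aggregate versus informative about an individual) can be made concrete with a toy simulation. This is a hypothetical sketch with invented numbers, not anything from the study: here outcomes depend on patient risk and neighborhood wealth, and machine age only tracks wealth, yet it still "predicts" outcomes in aggregate.

```python
import random

random.seed(0)

def simulate_patient():
    # True (unobserved) patient risk, uniform on 0..1.
    risk = random.random()
    # Wealthier areas tend to have newer hospital equipment AND,
    # on average, lower-risk outcomes -- so machine age becomes a proxy.
    wealth = random.random()
    machine_age = 20 * (1 - wealth) + random.gauss(0, 2)
    # Adverse outcome driven by actual risk plus wealth-linked effects;
    # machine age itself plays no causal role here.
    adverse = random.random() < 0.3 * risk + 0.2 * (1 - wealth)
    return machine_age, risk, adverse

data = [simulate_patient() for _ in range(10_000)]

# Aggregate view: adverse outcomes look more common where equipment is old...
old = [a for m, r, a in data if m > 10]
new = [a for m, r, a in data if m <= 10]
print(f"adverse rate, old-equipment hospitals: {sum(old)/len(old):.2f}")
print(f"adverse rate, new-equipment hospitals: {sum(new)/len(new):.2f}")

# ...but a low-risk patient at an old-equipment hospital is still low-risk:
low_risk_old = [a for m, r, a in data if m > 10 and r < 0.2]
print(f"adverse rate, low-risk patients at old-equipment hospitals: "
      f"{sum(low_risk_old)/len(low_risk_old):.2f}")
```

A model trained on this data would happily use machine age as a predictive feature, which is exactly the kind of spurious shortcut the comment is warning about.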