tl;dr: The headline is false; the general did not actually say that. It sounded wrong to me, so I watched the video the article linked to check. Sure enough, it was wrong. However, the reality may not be any more reassuring.
Hypothesis: Like, no, that's obviously wrong; either the headline is trash or the general made a whole tossed salad with mango sauce out of whatever the people working on it said. (stated before further investigation; stay tuned)
Updating: https://youtu.be/wn1yEovtYRM?t=3459
Okay, wow.
So the speaker is saying this at the end of the panel, in response to a question asking about the use of autonomous weapons.
They want to talk about who's trusted to make the decision of whether to employ lethal force in a combat situation: a human American soldier, who might be exhausted and not thinking clearly, or an algorithm that doesn't get tired.
And one thing they mention is that an enemy might not have ethics that would lead them to even care about that distinction. And they express that as "Judeo-Christian morality".
That doesn't sit right with me. It sounds to me, in that moment, like they're implying that people from other cultures could be less moral, and that we should be willing to be more free with our weapons towards such people. That sounds to me like the sort of bullshit that came out of the Vietnam War.
But the rest of the answer sounds like they're trying to point at the problem of making command decisions in scenarios where the opponent might deploy autonomous weapons first. If the enemy has already handed decision-making over to an algorithm, how does that affect what we should do?
And they're maybe expressing that to their expected audience — mind you, the Air Force is heavily infiltrated by far-right Christian radicals — in a way that they hope makes sense.
Conclusion: The headline is incorrect; the general did not actually say that a Pentagon AI would be more ethical for any reason; he was talking about the human ethical decision of whether to trust AI to make decisions. But what he did say is complicated and scary for different reasons, including the internal culture of the US Air Force.