this post was submitted on 21 Aug 2023
407 points (94.3% liked)

Technology

Police in England installed an AI camera system along a major road. It caught almost 300 drivers in its first 3 days.

An AI camera system installed along a major road in England caught 300 offenses in its first 3 days. There were 180 seat belt offenses and 117 mobile phone offenses.

top 50 comments
[–] MaxPower@feddit.de 143 points 1 year ago* (last edited 1 year ago) (3 children)

Photos flagged by the AI are then sent to a person for review.

If an offense was correctly identified, the driver is then sent either a notice of warning or intended prosecution, depending on the severity of the offense.

The AI just "identifying" offenses is the easy part. It would be interesting to know whether the AI indeed correctly identified 300 offenses or if the person reviewing the AI's images acted on 300 offenses. That's potentially a huge difference and would have been the relevant part of the news.
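
To make that difference concrete: what matters is not the raw number of flags but how many of them survive review. A minimal sketch in Python, with invented figures (the article only reports the confirmed total), might look like this:

```python
# Hypothetical figures: the article does not say how many photos the AI
# flagged, only how many offenses were reported after the first 3 days.
ai_flagged = 450        # invented: photos the AI sent to a human reviewer
confirmed  = 300        # reported: offenses upheld after review

precision = confirmed / ai_flagged
print(f"Share of AI flags upheld by the reviewer: {precision:.1%}")
# "300 offenses caught" is compatible with almost any precision, which is
# why the flagged-vs-confirmed breakdown would be the interesting number.
```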

[–] tmRgwnM9b87eJUPq@lemmy.world 37 points 1 year ago (2 children)

The system we use in NL is called “monocam”. A few years ago it caught 95% of all offenders.

This means that AI had at most 5% false negatives.

I wonder if they have improved the system in the mean time.

https://nos.nl/artikel/2481555-nieuwe-slimme-camera-s-aangeschaft-om-appende-bestuurders-te-betrappen

[–] zephr_c@lemm.ee 46 points 1 year ago (1 children)

Nobody cares about false negatives. As long as the number isn't something so massive that the system is completely useless, false negatives in an automatic system are not a problem.

What are the false positives? Every single false positive is a gross injustice. If you can't come up with a number for that, then you haven't even evaluated your system.

[–] tmRgwnM9b87eJUPq@lemmy.world 15 points 1 year ago* (last edited 1 year ago) (1 children)

The system works by having the AI flag phone usage while driving.

Then a human will verify the photo.

AI is used to respect people’s privacy.

The combination of AI detection + human review leads to a 5% false negative rate, and most probably a 0% false positive rate.

This means that the AI missed at most 5% of offenders, but probably fewer, since some of those misses will be down to the human reviewer not being 100% sure there was an offense.
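
To make the two error types concrete, here is a toy calculation. Everything below except the 95% catch rate is invented; the point is only that a catch rate pins down the false negative rate while saying nothing about false positives, which also depend on how often the AI flags innocent drivers and how often the reviewer waves a bad flag through:

```python
# Toy numbers: only the 95% catch rate comes from the NOS report linked above.
offenders        = 1_000                  # drivers actually using a phone
caught           = int(offenders * 0.95)  # flagged by the AI and confirmed by a human
false_negatives  = offenders - caught

innocent_drivers = 100_000                # drivers doing nothing wrong
ai_false_flags   = 500                    # invented: innocent drivers the AI flags
reviewer_errors  = 0.02                   # invented: reviewer wrongly confirms 2% of those
false_positives  = ai_false_flags * reviewer_errors

print(f"False negative rate: {false_negatives / offenders:.1%}")   # fixed by the 95% claim
print(f"Wrongly ticketed drivers: {false_positives:.0f} of {innocent_drivers}")
```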

[–] zephr_c@lemm.ee 10 points 1 year ago (4 children)

Look, I'm not saying it's a bad system. Maybe it's great. "Most probably 0%" is meaningless though. If all you've got is gut feelings about it, then you don't know anything about it. Humans make mistakes in the best of circumstances, and they get way, way worse when you're telling them that they're evaluating something that's already pretty reliable. You need to know it's not giving false positives, not have a warm fuzzy feeling about it.

Again, I don't know if someone else has already done that. Maybe they have. I don't live in the Netherlands. I don't trust it until I see the numbers that matter though, and the more numbers that don't matter I see without the ones that do, the less I trust it.

[–] Tywele@lemmy.dbzer0.com 18 points 1 year ago (7 children)

How do they know that they caught 95% of all offenders if they didn't catch the remaining 5%? Wouldn't that be unknowable?

[–] lasagna@programming.dev 20 points 1 year ago* (last edited 1 year ago)

Welcome to the world of training datasets.

There are many ways to go about it, but for a limited number of samples they'd probably use human analysts.

But in general, they'd put a lot more effort into a chunk of data and use that as the truth. It's not a perfect method but it's good enough.
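
For what it's worth, this is roughly how a figure like "95% of offenders caught" is usually produced: humans exhaustively label a random sample of the footage, treat those labels as ground truth, and measure what fraction of the true offenses the system flagged. A simulated sketch, with all data invented:

```python
import random

random.seed(0)

# Simulated ground truth: a hand-labelled random sample of passing cars,
# of which roughly 3% are genuine offenses (all numbers invented).
sample = [{"offense": random.random() < 0.03} for _ in range(5_000)]

# Pretend detector: flags about 95% of true offenses (and, here, no innocents).
for car in sample:
    car["flagged"] = car["offense"] and random.random() < 0.95

true_offenses = sum(car["offense"] for car in sample)
caught        = sum(car["offense"] and car["flagged"] for car in sample)
print(f"Estimated catch rate on the labelled sample: {caught / true_offenses:.1%}")
```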

[–] MotoAsh@lemmy.world 12 points 1 year ago

But digging out that info would involve journalism, and possibly reporting something the cops wouldn't like! We all know how that goes.

[–] eric5949@lemmy.cloudaf.site 97 points 1 year ago (5 children)

Expansion of the surveillance state. You go, UK.

[–] Meowoem@sh.itjust.works 70 points 1 year ago (38 children)

I love threads like these because they really show how flexible opinions are: a post about an AI surveillance state and everyone is against it, but a post about car drivers getting fined for not wearing a seatbelt and everyone loves it.

[–] plumbercraic@lemmy.sdf.org 35 points 1 year ago (1 children)

This is a weird phenomenon. Feels a bit like how focusing on "welfare queens" / "dole bludgers" can pave the way for similar privacy erosion (and welfare cuts) even though it's a tiny percentage of the people. Seems a short hop away from "if you've got nothing to hide...."

[–] realharo@lemm.ee 17 points 1 year ago* (last edited 1 year ago) (1 children)

Seatbelts I don't really care about, because with that people mostly just affect themselves (or others in the same car), but for other infractions it makes sense.

The real issue is whether you can trust that the data will only be used for its intended purpose, as right now there are basically no good mechanisms to prevent misuse.

If we had cameras where you could somehow guarantee that - no access for reason other than stated, only when flagged or otherwise by court order, all access to footage logged with the audit log being publicly available, independent system flagging suspicious accesses to any footage, etc. - it wouldn't be too bad.

Compared to all the private cameras that exist in cars these days...
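
Purely as an illustration of the "publicly auditable access log" idea above, here is a minimal sketch of a tamper-evident log: each footage access is chained to the hash of the previous entry, so a retroactive edit or deletion is detectable by anyone holding a copy. All names and fields are made up:

```python
import hashlib, json, time

def append_access(log, officer_id, camera_id, reason):
    """Append one footage-access record, chained to the previous entry."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry = {
        "ts": time.time(),
        "officer": officer_id,
        "camera": camera_id,
        "reason": reason,
        "prev": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

def verify(log):
    """Recompute the chain; any tampered or deleted entry breaks it."""
    prev = "genesis"
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

log = []
append_access(log, "PC-1234", "cam-07", "flagged: mobile phone use")
print(verify(log))  # True; editing any past entry would make this False
```

Whether anything like this exists for these cameras is a separate question; the sketch only shows that the kind of guarantee asked for above is technically feasible.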

[–] afraid_of_zombies@lemmy.world 20 points 1 year ago

You know the best way to not have absolute power corrupt? Not have absolute power.

If you collect this data there is a degree of probability that eventually it will be abused. If you don't collect this data there is zero chance.

Some > none

Good government is about assuming the worst and deciding whether you are willing to endure it. If the absolute worst humans you can imagine were put into office, how much bad could they do?

[–] ikidd@lemmy.world 55 points 1 year ago (1 children)

The UK has never seen a dystopian nightmare they didn't rush to embrace.

[–] Boiglenoight@lemmy.world 53 points 1 year ago (8 children)

Is the freedom to drive without feeling like you're being watched more important than the prevention of texting while driving?

During my commute, it's common to see people looking at their phones. I don't know what the effect is without statistics, but seeing an accident along the way is a usual occurrence.

[–] Agent641@lemmy.world 49 points 1 year ago (2 children)

Can't believe people still have the audacity to text while driving. I prefer reading a nice relaxing book.

[–] nutsack@lemmy.world 31 points 1 year ago* (last edited 1 year ago)

inattentive driving should be considered gross negligence

[–] Natanael@slrpnk.net 10 points 1 year ago (3 children)

I'm more concerned about error rates and false accusations

[–] EndlessApollo@lemmy.world 41 points 1 year ago (5 children)

ITT a bunch of people who have never read an ounce of sci fi (or got entirely the wrong message and think law being enforced by robots is a good thing)

[–] echodot@feddit.uk 21 points 1 year ago* (last edited 1 year ago) (8 children)

But the law isn't enforced by robots; the law is enforced by humans. All that's happening here is that the process of capturing transgressions has been automated. I don't see how that's a problem.

As long as humans are still part of the sentencing process, and they are, then functionally there's no difference; if a mistake is made, it will be rectified at that point. From a process point of view there isn't really any difference between being caught by an automated AI camera and being caught by a traffic cop.

[–] davidalso@lemmy.world 14 points 1 year ago (2 children)

Although completely reasonable, I fear that your conclusion is inaccessible for most folks.

And as a pedestrian, I'm all for a system that's capable of reducing distracted driving.

[–] CrayonRosary@lemmy.world 19 points 1 year ago (8 children)

Calling an image recognition system a robot enforcing the law is such a stretch you're going to pull a muscle.

[–] madge@lemmy.world 31 points 1 year ago (2 children)

I work in an adjacent industry and got a sales pitch from a company offering a similar service. They said that they get the AI to flag the images and then people working from home confirm - and they said it's a lot of people with disabilities/etc getting extra cash that way.

This was about six months ago and I asked them, "there's a lot of bias in AI training datasets - was a diverse dataset used or was it trained mostly on people who look like me (note: I'm white)?" and they completely dodged the question...

(this is definitely a different company as I am not in England)

[–] CookieJarObserver@sh.itjust.works 27 points 1 year ago (1 children)

Fuck AI and cameras and the UK.

[–] ilikekeyboards@lemmy.world 31 points 1 year ago (4 children)

FUCK CAMERAS. THAT'S WHY I VOTED BREXIT. SO I CAN TEXT AND DRIVE, NOT LIKE THESE BAGUETTE LOVING SISSIES

[–] Bonskreeskreeskree@lemmy.world 10 points 1 year ago

Monitor my internet and tell me what I can do harder daddy

[–] thegreenguy@sopuli.xyz 18 points 1 year ago (3 children)

Why are people saying this is a hypersurveillance dystopian nightmare? Guys, you are still in public! The only difference between this and having police officers sitting there and looking is this is much cheaper and more efficient. The recordings are still being sent to a human being for review.

[–] SquishyPandaDev@yiffit.net 39 points 1 year ago (2 children)

The problem is the whole "give an inch, they take a mile" thing. We don't know what rights this may take away from us in the future. So, for now, always question it.

[–] chicken@lemmy.dbzer0.com 11 points 1 year ago

The only difference between this and having police officers sitting there and looking is this is much cheaper and more efficient.

Sure, but that's a huge problem, because the legal system wasn't actually designed for perfectly efficient enforcement. It is important that people be able to get away with breaking the law most of the time. If all of the tens of thousands of laws on the books were always enforced we would all be in prison and bankrupt from fines. Some laws are just bad too, and the way they get repealed is when enough people get away with breaking them for long enough to build political momentum for it.

Also, it isn't like they are going to stop at using scaled-up AI surveillance just to enforce seatbelt use and texting while driving, there is way too much potential for abuse with this sort of tech. For example if there are these sorts of cameras all over, networked together, anyone with access to them can track just about everything you are doing with no way to opt out. Even if you aren't doing anything wrong the feeling that you are always being watched is oppressive and has chilling effects.

[–] zepheriths@lemmy.world 16 points 1 year ago
[–] r00ty@kbin.life 16 points 1 year ago (2 children)

My main problem with this is that it becomes like the huge online behemoths such as YouTube. I think most people have seen incidents where YouTube cancelled a channel or applied copyright incorrectly, and getting a human to review things is next to impossible. The reason is clear: the sheer amount of content breaching the rules is too big to deal with cost-efficiently using humans.

One camera caught 300 people in 72 hours. We don't see how many detections it triggered, or how many were reviewed and found to be false positives.

The problem is going to be if a whole police force takes it up, or it goes national. The number of hits generated would be far beyond the ability to confirm with humans. I see it going a similar way to YouTube: they just let the AI fine people. You report it as wrong, so they send your petition to another AI that pretends to be human and denies you again. The only way to clear things up is to take it to court. But now the court system is being flooded, so they deny people the right to a court case and the fixed penalties are applied automatically.

This is the dystopia I fear. Actually catching people committing driving crimes? I don't have a problem with that. Aside from, maybe, that the growing number of driving crimes, coupled with the knowledge that these cameras exist, could lead to less concentration while people make sure they're sitting upright, looking attentive, eyes straight ahead, hands at 10 o'clock and 2 o'clock. Did I indicate for that lane change back there? I guess that remains to be seen.

[–] Cataphract@lemmy.ko4abp.com 14 points 1 year ago

Really great dialogue and discourse going on in this post. Thank you everyone for your opinions and viewpoints; I definitely have a lot to think over on my current stance. Exactly what I was missing lately from the social media I've been consuming: actual discussions where both sides hold some merit.
