"Your honor, the evidence shows quite clearly that the defendant was holding a weapon with his third arm."
If you've ever encountered an AI hallucinating stuff that just does not exist, you know how bad the idea of AI-enhanced evidence actually is.
No computer algorithm can accurately reconstruct data that was never there in the first place.
Ever.
This is an ironclad law, just like the speed of light and the acceleration of gravity. No new technology, no clever tricks, no buzzwords, no software will ever be able to do this.
Ever.
If the data was not there, anything created to fill it in is by its very nature not actually reality. This includes digital zoom, pixel interpolation, movement interpolation, and AI upscaling. It preemptively also includes any other future technology that aims to try the same thing, regardless of what it's called.
One little correction, digital zoom is not something that belongs on that list. It’s essentially just cropping the image. That said, “enhanced” digital zoom I agree should be on that list.
Hold up. Digital zoom is, in all the cases I'm currently aware of, just cropping the available data. That's not reconstruction, it's just losing data.
Otherwise, yep, I'm with you there.
Digital zoom is just cropping and enlarging. You're not actually changing any of the data. There may be enhancement applied to the enlarged image afterwards but that's a separate process.
But the fact remains that digital zoom cannot create details that were invisible in the first place due to the distance from the camera to the subject. Modern implementations of digital zoom always use some manner of interpolation algorithm, even if it's just a simple linear blur from one pixel to the next.
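To make the point concrete, here is a toy 1D sketch (hypothetical example, plain Python): "digital zoom" amounts to cropping followed by interpolation, so every output pixel is just a weighted average of pixels that already existed. Nothing new can appear.

```python
def digital_zoom(pixels, start, end, out_len):
    """Crop pixels[start:end], then linearly interpolate up to out_len samples."""
    crop = pixels[start:end]
    n = len(crop)
    out = []
    for i in range(out_len):
        # Map each output index back into the cropped range.
        pos = i * (n - 1) / (out_len - 1) if out_len > 1 else 0
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        # Each output pixel is a weighted average of two source pixels.
        out.append(crop[lo] * (1 - frac) + crop[hi] * frac)
    return out

row = [10, 20, 30, 40, 50, 60]
zoomed = digital_zoom(row, 1, 4, 7)  # crop [20, 30, 40], enlarge to 7 samples
print(zoomed)
```

Every value in `zoomed` stays between the smallest and largest pixel in the crop; the interpolation can only blend existing data, never reveal detail that the sensor didn't record.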
The problem is not how digital zoom actually works, it's how people think it works. A lot of people (i.e. [l]users, ordinary non-technical people) still labor under the impression that digital zoom somehow brings the picture "closer" to the subject and can enlarge or reveal details that were not detectable in the original photo, a notion we need to excise from people's heads.
I think we need to STOP calling it "Artificial Intelligence". IMHO that is a VERY misleading name. I do not consider guided pattern recognition to be intelligence.
A term created in order to vacuum up VC funding for spurious use cases.
Optical Character Recognition used to be firmly in the realm of AI until it became so common that even the post office uses it. Nowadays, OCR is so common that instead of being considered proper AI, it's just another mundane application of a neural network. I guess eventually Large Language Models will fall outside the scope of AI too.
How long until we get upscalers of various sorts built into tech that shouldn't have them? For bandwidth reduction, storage compression, or cost savings. Can we trust what we capture with a digital camera when companies replace a low-quality image of the moon with a professionally taken picture at capture time? Can sports replays be trusted when the ball is upscaled on the judges' screens? Cheap security cams with "enhanced night vision" might get somebody jailed.
I love the AI tech. But its future worries me.
It will run wild for the foreseeable future until the masses stop falling for the gimmicks; then it will be reserved for the actual use cases where it's beneficial, once the bullshit AI stops making money.
Lol, you think the masses will stop falling for it in gimmicks? Just look at the state of the world.
AI-based video codecs are on the way. This isn't necessarily a bad thing because it could be designed to be lossless or at least less lossy than modern codecs. But compression artifacts will likely be harder to identify as such. That's a good thing for film and TV, but a bad thing for, say, security cameras.
The devil's in the details and "AI" is way too broad a term. There are a lot of ways this could be implemented.
Not all of those are the same thing. AI upscaling for compression in online video may not be any worse than "dumb" compression in terms of loss of data or detail, but you don't want to treat a simple upscale of an image as photographic evidence in a trial. Sports replays and Hawk-Eye technology don't really rely on upscaling; we now have ways to track things in an enclosed volume very accurately, demonstrably more precise than a human ref looking at them. Whether that's better or worse for the game's pace and excitement is a different question.
The thing is, ML tech isn't a single thing. The tech itself can be used very rigorously; pretty much every scientific study you see these days uses ML to compile or process images or data, and that's not a problem if done correctly. The issue is that everybody assumes "generative AI" chatbots, upscalers, and image processors are all that ML is, and people keep trying to apply those things directly in the dumbest possible way, thinking it's basically magic.
I'm not particularly afraid of "AI tech", but I sure am increasingly annoyed at the stupidity and greed of some of the people peddling it, criticising it and using it.
Jesus Christ, does this even need to be pointed out!??
Unfortunately it does need pointing out. Back when I was in college, professors would need to repeatedly tell their students that real-world forensics doesn't work like it does on NCIS. I'm not sure how much things may or may not have changed since then, but with American literacy levels being what they are, I don't suppose things have changed that much.
Yes. When people were in full conspiracy mode on Twitter over Kate Middleton, someone took that grainy pic of her in a car and used AI to "enhance" it, then declared it wasn't her because her mole was gone. It got so much traction that people thought the AI-fixed-up pic WAS her.
I met a student at university last week at lunch who told me he is stressed out about some homework assignment. He told me that he needs to write a report with a minimum number of words so he pasted the text into chatGPT and asked it about the number of words in the text.
I told him that every common text editor has a word count built in, and that ChatGPT is probably not good at counting words (even though it pretends to be).
Turns out that his report was already waaaaay above the minimum word count and even needed to be shortened.
So much about the understanding of AI in the general population.
I'm studying at a technical university.
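For what it's worth, counting words is a deterministic one-liner in most languages, no language model needed. A hypothetical sketch, where "word" means whitespace-separated token (which is what most editors' word counts report):

```python
def word_count(text: str) -> int:
    # str.split() with no arguments splits on any run of whitespace
    # and discards empty strings, so this matches typical editor counts.
    return len(text.split())

report = "The quick brown fox jumps over the lazy dog."
print(word_count(report))  # 9
```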
The layman is very stupid. They hear all the fantastical shit AI can do and they start to assume it's almighty. That's how you wind up with those lawyers who tried using ChatGPT to write a legal brief that was full of bullshit, and didn't even bother to verify whether it was accurate.
They don't understand it; they only know that the results look good.
Of course, not everyone is technology literate enough to understand how it works.
That should be the default assumption: that something should be explained so that others understand it and can make better, informed decisions.
I’d love to see the “training data” for this model, but I can already predict it will be 99.999% footage of minorities labelled ‘criminal’.
And cops going “Aha! Even AI thinks minorities are committing all the crime”!
Imagine a prosecution or law enforcement bureau that has trained an AI from scratch on specific stimuli to enhance and clarify grainy images. Even if they all were totally on the up-and-up (they aren't, ACAB), training a generative AI or similar on pictures of guns, drugs, masks, etc for years will lead to internal bias. And since AI makers pretend you can't decipher the logic (I've literally seen compositional/generative AI that shows its work), they'll never realize what it's actually doing.
So then you get innocent CCTV footage this AI "clarifies" and pattern-matches every dark blurb into a gun. Black iPhone? Maybe a pistol. Black umbrella folded up at a weird angle? Clearly a rifle. And so on. I'm sure everyone else can think of far more frightening ideas like auto-completing a face based on previously searched ones or just plain-old institutional racism bias.
According to the evidence, the defendant clearly committed the crime with all 17 of his fingers. His lack of remorse is obvious by the fact that he's clearly smiling wider than his own face.
clickity clackity
"ENHANCE"
AI enhanced = made up.
It's incredibly obvious when you call the current generation of AI by its full name, generative AI. It's creating data, that's what it's generating.
Everything that is labeled "AI" is made up. It's all just statistically probable guessing, made by a machine that doesn't know what it is doing.
The fact that it made it that far is really scary.
I'm starting to think that yes, we are going to have some new middle ages before going on with all that "per aspera ad astra" space colonization stuff.
Why not make it a fully AI court if they were going to go that way? It would save so much time and money.
Of course it wouldn't be very just, but then regular courts aren't either.
This actually opens an interesting debate.
Every photo you take with your phone is post-processed. Saturation can be boosted, light levels adjusted, noise removed, night mode applied, all without you being privy to what's happening.
Typically people are okay with it because it makes for a better photo, but is it a true representation of the reality it tried to capture? Where do we draw the line for what counts as an AI-enhanced photo/video?
We can currently make the judgement call that a phone's camera is still a fair representation of the truth, but what about when the 4K AI-Powered Night Sight Camera does the same?
My post is more tangentially related to the original article, but I'm still curious as to what the common consensus is.
> Every photo you take with your phone is post-processed.
Years ago, I remember looking at satellite photos of some city, and there was a rainbow-colored airplane trail on one of the photos. It was explained that a lot of satellites just use a black-and-white imaging sensor and take three photos while rotating a red/green/blue filter over that sensor, then combine the images digitally into RGB data for a color image. For most things, the process works pretty seamlessly. But for rapidly moving objects, like white airplanes, the delay between the capture of the red, green, and blue channels created artifacts in the image that weren't present in the actual scene being recorded. Is that specific satellite method all that different from how modern camera sensors process color, through tiny physical RGB filters over specific subpixels?
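The rainbow-trail artifact is easy to simulate. Here's a toy sketch (hypothetical example) of filter-wheel color imaging: three grayscale exposures taken at different moments are stacked into one RGB row. A stationary scene combines cleanly, but a moving white object ends up with color fringes because it sits in a different place in each exposure.

```python
WIDTH = 8

def frame_with_plane(position):
    """One grayscale row: a 1-pixel-wide white 'airplane' on a dark background."""
    return [255 if x == position else 10 for x in range(WIDTH)]

# The plane moves one pixel between the R, G, and B exposures.
red   = frame_with_plane(2)
green = frame_with_plane(3)
blue  = frame_with_plane(4)

# Stack the three exposures into (R, G, B) tuples, one per pixel.
rgb_row = list(zip(red, green, blue))
print(rgb_row)
# Pixels 2, 3, and 4 come out nearly pure red, green, and blue respectively --
# a color artifact that was never part of the real scene.
```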
Even with conventional photography, even analog film, there's image artifacts that derive from how the photo is taken, rather than what is true of the subject of the photograph. Bokeh/depth of field, motion blur, rolling shutter, and physical filters change the resulting image in a way that is caused by the camera, not the appearance of the subject. Sometimes it makes for interesting artistic effects. But it isn't truth in itself, but rather evidence of some truth, that needs to be filtered through an understanding of how the image was captured.
Like the Mitch Hedberg joke:
I think Bigfoot is blurry, that's the problem. It's not the photographer's fault. Bigfoot is blurry, and that's extra scary to me.
So yeah, at a certain point, for evidentiary proof in court, someone will need to prove some kind of chain of custody: that the image being shown in court is derived from some reliable and truthful method of capturing what actually happened at a particular time and place. For the most part, it's simple today: I took a picture with a normal camera, and I can testify that it came out of the camera like this, without any further editing. As the chain of image creation starts to include more processing between photons hitting the sensor and a digital file being displayed on a screen or printed on paper, we'll need to remain mindful of the areas where that can be tripped up.
You'd think it would be obvious you can't submit doctored evidence and expect it to be upheld in court.
For example, there was a widespread conspiracy theory that Chris Rock was wearing some kind of face pad when he was slapped by Will Smith at the Academy Awards in 2022. The theory started because people started running screenshots of the slap through image upscalers, believing they could get a better look at what was happening.
Sometimes I think our ancestors shouldn't have made it out of the ocean.