Hello, I have downvoted your post!
Reasons include:
- stupid fucking clickbait title
- sharing information that was otherwise already obvious to everyone for the past 2 years
- ~~quoting elon musk~~ they're actually denigrating elon and I can't read lol
> Hello, I have downvoted your post!
> Reasons include:
The title is not mine and the paper the article is responding to was published last month, not two years ago as you claim. The only mention of Musk in the entire article is in this one sentence:
Unlike self-serving warnings from Open AI CEO Sam Altman or Elon Musk about the “existential risk” artificial general intelligence poses to humanity, Google’s research focuses on real harm that generative AI is currently causing and could get worse in the future.
~~Did you check the hyperlink? Because it is !techtakes@awful.systems levels of stupid.~~ I can't read
Not sure if you're aware so I'll mention it anyway, but as far as I know, downvotes in Beehaw communities don't federate to Beehaw (as in aren't applied here - you might see them on your instance though, not really sure). That being said, your comment does, so you've made a "pseudo-downvote" anyway.
The mechanism for how it works is that when a remote instance sends in its downvote count, Beehaw immediately drops the message without modifying the database. Part of this exchange is an expected response containing the total updated downvotes. However, Beehaw sends back "0", and the remote instance knows it can't be zero, so it treats its own local count as more valid.
Essentially, this all ends up meaning that what ssm will see is the total of all downvotes from users on their own instance, and nothing else. This might be just their own downvote, especially on a smaller instance. But I've seen lemmy.world users be confused about it because the count they see is, say, -5, and they've told me my instance obviously has downvotes enabled 😅
Remote instances don't communicate their vote tallies with each other for a third instance's post.
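The drop-and-reconcile behavior described above can be sketched roughly like this (function names and structure are my own illustration of the described mechanism, not Lemmy's actual code):

```python
# Illustrative sketch of the downvote-federation behavior described above.
# All names here are hypothetical, not Lemmy's real implementation.

def receive_vote(local_counts, post_id, delta, downvotes_enabled=False):
    """An instance receives a remote downvote; returns the count it reports back."""
    if not downvotes_enabled:
        # Beehaw-style: drop the message, leave the database untouched,
        # and report 0 downvotes back to the sender.
        return 0
    local_counts[post_id] = local_counts.get(post_id, 0) + delta
    return local_counts[post_id]

def reconcile(remote_reported, local_count):
    """The remote instance keeps its own tally when the reply is implausible."""
    if remote_reported == 0 and local_count > 0:
        return local_count  # "it can't be zero", so trust the local count
    return remote_reported
```

So a downvoting user's instance ends up displaying only the votes it counted itself, which is why different instances show different totals for the same post.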
What article did you read, seeing as there's nothing from Musk in there?
Specifically "Sam Altman or Elon Musk about the “existential risk” artificial general intelligence poses to humanity" which contains a hyperlink leading to an independent article titled "Elon Musk says AI one of the ‘biggest threats’ to humanity", and is just as much unholy brainrot as one might expect.
> “Sam Altman or Elon Musk about the “existential risk” artificial general intelligence poses to humanity”
The full quote is "UNLIKE self-serving warnings from Sam Altman or Elon Musk about the “existential risk” artificial general intelligence poses to humanity". In other words, they're actively denigrating Musk and Altman, and you've taken the quote entirely out of context, in direct opposition to the original meaning.
Can't argue with that, I was ADHD skimming. I will now curl up in a ball in the corner and die of embarrassment and cringe :(
If it helps, I agreed with your 1st 2 points. You may die with your dignity half intact.
🙏
How are those things self-serving?
The warnings are self-serving, not the AI
generative AI makes it very easy for anyone to flood the internet with generated text, audio, images, and videos.
And? There's already way too much data online to read or watch all of it. We could just move to a "watermark" system where everyone takes credit for their contributions. Things without watermarks could just be dismissed, since they have as much authority as an anonymous comment.
I am waiting for people to start getting both public and hidden authentication tattoos, so they can prove generative images aren't actually them.
How would that work?
AIs learn from existing images, so they could just as well learn to reproduce a tattoo and link the pattern to a person's name. Recreating it from different angles would require more training data, but it would ultimately get there.
For public ones, depending on what people started getting, it'd really strain the AIs. You could go one of two ways, and different people would probably get both.
Something very uniform but still unique, like a QR code kind of deal; AIs would hallucinate the crap out of that. Or abstractions, like the patterns people use to change the apparent shape of their face to combat facial recognition.
For private ones, just don't ever get it photographed; any image showing that area without it would probably be fake.
I slightly hate myself for suggesting it, but are you essentially describing NFTs?
It's called a "name".
Midjourney and the like have already been caught reproducing Shutterstock watermarks in images. Future models might be able to fake specific watermarks well.
Not like that. A server name that can be authenticated. Like when you receive an email from your bank (in the metadata), you know it's legitimate. Each organization can set up their own server to host things they vouch for. With ActivityPub it can be viewed elsewhere with the guarantee that it's from a trusted source.
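The "vouching" idea above boils down to a signed message plus a published verification key. A minimal sketch of the verify step (using symmetric HMAC purely for illustration; real systems like DKIM or ActivityPub HTTP signatures use asymmetric keys):

```python
# Hypothetical sketch of server-side vouching: a server signs content it
# publishes, and anyone holding the verification key can check authenticity.
# HMAC is a stand-in here; production systems use public-key signatures.
import hashlib
import hmac

def sign(server_key: bytes, content: bytes) -> str:
    """Server produces a signature over the content it vouches for."""
    return hmac.new(server_key, content, hashlib.sha256).hexdigest()

def verify(server_key: bytes, content: bytes, signature: str) -> bool:
    """Anyone with the key can confirm the content is untampered."""
    return hmac.compare_digest(sign(server_key, content), signature)
```

The point is that authenticity comes from the signature chain back to a known server, not from any blockchain.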
Isn’t that what NFTs do?
Sure, but so do a lot of other things that aren't as costly. If NFTs were the first secure way to authenticate things online, we wouldn't have had online banking until very recently.
True but trust is hard to establish in decentralized platforms like the fediverse. As far as I’m aware the only decentralized banking is unfortunately cryptocurrency.
What NFTs (and crypto in general) do is very different from a web of trust style approach
Crypto creates one source of absolute truth, the Blockchain, costly computed via consensus.
Web of trust, on the other hand, requires you to declare which accounts you trust. Via public-private key signing, you can always verify that a post is actually made by a specific person, and if you trust that person (e.g. because you've met them before and exchanged keys), you know it's legit. You can then extend that system by also trusting accounts your trusted accounts verified, etc
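The trust-extension step described above can be sketched as a small graph traversal (a toy model of my own, not any real web-of-trust protocol; signature checking is omitted):

```python
# Toy web-of-trust sketch: each account lists the accounts it has verified,
# and trust extends transitively up to a maximum number of hops.
from collections import deque

def is_trusted(verified_by, root, target, max_depth=2):
    """BFS from `root` over verification edges; True if `target` is
    reachable within `max_depth` hops."""
    queue = deque([(root, 0)])
    seen = {root}
    while queue:
        account, depth = queue.popleft()
        if account == target:
            return True
        if depth >= max_depth:
            continue  # don't extend trust past the cutoff
        for nxt in verified_by.get(account, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return False
```

The `max_depth` cutoff is the interesting design knob: each extra hop widens your trusted circle but also dilutes how much each verification actually means.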
We need to get a lot better about this kind of thing now that the cost of generating fake but structurally believable content/information has dropped.
Web of trust has always seemed like it’s for geeks so far. We need to enter a new phase of our cultural history, where competent knowledge of cryptographic games is commonplace.
Either that, or the geeks need to figure out a way to preserve civilization like monks in the dark ages, trading accurate science and news among their tiny networks, while the majority of insecure networks are awash in AI-generated psyops/propaganda/scamspeak.
Or, we might get lucky and AIs turn out to be inherently more ethical as they get more intelligent, as a rule of nature or something.
It’s nice to imagine speech, in general, being a natural environment the human brain is evolutionarily adapted to. And speech among other humans is an environment we’re adapted to. We implicitly assume certain limitations in people’s ability to spin bullshit while keeping it error-free, for instance, so we have an instinct to trust more as we hear more of what a person is saying. We trust longer stories more, and we trust people the longer we know them.
But AI, even if it's not fundamentally different from humans - i.e. even if it's still bounded by the rules of generating bullshit vs just reporting the truth - can still get outside our natural detection systems just by being ten times faster.
I guess what I’m saying is this is like that moment in the Cambrian or whatever when all the oxygen got released, and most of the life just got fucked and that was the end of their story. Just because a niche has been stable for a long time doesn’t mean it’s always going to be there.
Like, imagine a sci fi story about the entire atmosphere being stripped off of Earth, and the subsequent struggle for survival. How it would alter humanity’s history fundamentally, even if we survived, and even if we got the atmosphere back the human culture we knew would be gone.
That’s the level of event we’re facing. We’re in a sci fi story where the air is turning off and we all need to learn to live in vacuum and the only things we get to keep are the parts we can transform into airtight containers.
It might be that way right now, but instead of airtight it’s cryptographically-secure enclaves of knowledge and culture that will survive through the now presumably-endless period of history called “Airless Earth”.
Like having the atmosphere was the intro level of the game. Like in Far Cry 2, when you go to the second area, and it’s drier and more barren and there’s less ammo and cover and now they have roadblocks.
Our era of instinctively-navigable information is over. We’re all in denial because the atmosphere doesn’t go away, so we can’t deal with it, so it can’t be happening, so it’s not happening. But soon the denial won’t be possible any more.
That's the idea behind Sam Altman's Worldcoin.
Why would anyone pay for the service? Having a "name" is free, and that dumb worldcoin only works for people. It can't work for governments or businesses.
ActivityPub is actually a good way to authenticate things. If an organization vouches for something they can post it on their server and it can be viewed elsewhere.
I think the idea of WorldCoin is to have a "wallet" linked to a single physical person, then you can sign any work with your key, that you got by proving you are a real person.
IMHO, the coin part is just a hype element to get people to sign up for the password part.
As for ActivityPub, I don't see how it helps with anything. An organization vouching for something, can already post it on their web, or if they want a distributed system, post it on IPFS.
This just makes me think of eBaum's world.
but if we all join hands and sing this song, then our call will reach the sky...
We didn't even have AI when the Internet became flooded with faked images and videos, and those actually are incredibly hard to identify as fake. AI-generated images still have very obvious tells if you scrutinize them even a little bit. And video is so bad right now that you don't have to do anything but have functioning sight to notice it's not real.
AI-generated images have obvious tells to those of us capable of a medium level of scrutiny, but we can expect them to get harder to spot over time.
I'm not reading the article but instead trying to be amusing. If it breaks the reality, please put me in a new one with really good scotch, healthy knees, and a spirit of adventure!
Not sure what to make out of this article. The statistics are nice to know, but something like this seems poorly investigated:
> AI overview answers in Google search that tell users to eat glue
Google's AI has a strength others lack: not only does it allow users to rate an answer, it can also use Google's search data to check whether people are laughing at or mocking its results.
The "fire breathing swans", the "glue on pizza", or the "gasoline flavored spaghetti", have disappeared from Google's AI.
Gemini now also uses a draft system where it reviews and refines its own initial answer several times, before presenting the final result.
I haven't read this article, as the statement is simply wrong. AI is just a technology. What it does (and doesn't do) depends on how it is used, and this in turn depends on human decision making.
What Google does here is -once again- denying responsibility. If I were using a tool that says you should put glue on your pizza, then it's me who is responsible, not the tool. It's not the weapon that kills, it's the human being who pulls the trigger.
Ah yes, analysis of the article from someone who hasn't read it. Classic.