this post was submitted on 11 May 2025
709 points (92.7% liked)

Fuck AI

cross-posted from: https://pawb.social/post/24295950

Source (Bluesky)

[–] southsamurai@sh.itjust.works -1 points 4 days ago (4 children)

Aight, here's the thing.

All art is, at its base, about translating a person's inner concept into an external form. Sculpture, painting, poetry, dance, whatever.

Every art form has a barrier to entry. If you want to be a dancer, some part of your body must be mobile, right? Even if it's just your eyeballs, dance is by definition about the human body moving.

But, what if you can't move your body? Is that, and should that be, a barrier? Why can't a person get an exoskeleton device that they can then program to either dance for them, or to respond to their thoughts so they can dance via the gear? Well, in that case the technology isn't here yet, but pretend it was.

Obviously, it wouldn't be the same as someone who's trained and dedicated themselves to dancing, but is it lesser? It still fulfills self-expression through movement.

That can be applied to damn near every form of art. I can't actually think of any that it doesn't apply to at least in part.

There is a difference between a human sitting down (or lying or standing) to write a book and just telling a computer to generate one. But that doesn't completely invalidate using a computer to generate fictional text. The key in that form is the degree of input and the effort involved. A writer who's blocked asking an LLM for a paragraph about a kid walking down the street isn't the same thing as telling it to write the entire book. There are degrees of use that are valid tools and don't remove the human aspect of the art form.

Take it to visual arts. A person can see things in their head that they may never develop the skill to see executed. They may not be physically capable of moving a brush on canvas, or pen on paper. A painter of incredible skill may be an utter dunce at sculpture, but still have vision and concepts worth creating.

The use of a generative model as a tool is not inherently bad. It's no worse than setting up software to 3D print a sculpture.

The problem comes in when the AI itself is made by, and operated for the benefit of, corporate entities, and/or when attribution isn't built in. Attribution matters: a painting made by Monet is different from a painting that looks like Monet could have done it but was actually made by southsamurai. If I paint something that looks like a Monet, that's great! If I paint it and pretend it was made by Monet, that's bullshit.

A "painting" by a piece of software that's indelibly attributed as generated that way isn't a big deal. It comes back to the eye of the beholder in the same way that digital art is when compared to "analog" art via paints and pencils. It only really matters when someone is bullshitting about how they achieved the final results.

Is AI art less impressive? Hell yes, and it's pretty obvious that it isn't the same thing as someone honing their craft over years and decades. An image generated by a piece of software, with only the input prompts being human-generated, is not the same as someone building the image with their hands via paint/touchpad/mouse/whatever.

This is still different from the matter of using ai instead of paying a human to do the work, which is more complicated than people think it is.

But, in terms of an individual having access to tools that allow them to get things inside their head out of their head where it can be seen, it has its place. It just needs to be very clear that that's the tool used.

And yeah, I know this is c/fuckai, and I'm arguing that AI has its place as a tool of self-expression, which is not going to be universally satisfying here. But I maintain that the problem with AI art isn't the fact that it's AI art; it's the framework behind it that makes it a threat to actual humans.

In a world where artists could choose to create art for their own satisfaction without having to worry about eating and having a roof over their heads, AI art would be a lot less of a threat.

[–] NotForYourStereo@lemmy.world 7 points 4 days ago

So much typing to say fuck all.

[–] RandomVideos@programming.dev 4 points 4 days ago (1 children)

or to respond to their thoughts so they can dance via the gear

But that's not what's happening with AI "art". That's what's being attempted with other technologies.

I have seen a lot of disabled artists complain about being used in pro-AI arguments.

[–] southsamurai@sh.itjust.works 4 points 4 days ago (1 children)

Yup. And it isn't even just artists. Disabled people who aren't creatives on a professional level object to it as well. It's an unpleasant form of ableism, trying to pander on the backs of those poor, sad disabled people.

But it is all a spectrum of technologies, when applied properly.

The "properly" part is the bottom panel of the posted comic, imo. The various generative models aren't actually about helping people; they aren't about expanding human creativity. They're about trying to cash in on a growing technology.

That doesn't mean that AI can't be a good thing. It just means that it's a bad thing in the way it exists now, or at least in the form that's being shoved down the public's throat.

Had the big ones not stolen the training data, were they not being used to leverage corporate goals over humans, they could be a very useful thing.

[–] RandomVideos@programming.dev 1 points 4 days ago (1 children)

Had the big ones not stolen the training data, were they not being used to leverage corporate goals over humans, they could be a very useful thing

AI still has the problems of spam (propaganda being the most dangerous variant of it), disinformation, and impersonating real artists. These could be fixed if every AI image/video had a watermark, but I don't think that could be enforced well enough to completely eliminate these issues.

[–] southsamurai@sh.itjust.works 1 points 4 days ago (1 children)

Those specific flaws come down to the same issue, though. The training data was flawed enough, in large part due to being stolen wholesale, that it skews the matter towards counterfeits being easier. I would agree that in the absence of legislation, no for-profit business based on AI will ever tag its output. It could be an easier task for non-profit and/or open-source models, though. Definitely something that needs addressing.
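For what it's worth, the simplest form of "tagging output" could just be writing a provenance note into the image file itself. Here's a minimal sketch (not any particular project's scheme) that embeds text chunks in a PNG using Pillow; the key names "ai_generated" and "generator" are made up for illustration, and serious provenance standards like C2PA go much further with signed manifests.

```python
# Minimal sketch: embed an "AI-generated" provenance note in a PNG's text
# chunks with Pillow. The key names used here are hypothetical examples.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_provenance(img: Image.Image, path: str, model_name: str) -> None:
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # flag the file as machine-generated
    meta.add_text("generator", model_name)  # record which model produced it
    img.save(path, pnginfo=meta)

# Reading the tag back later:
# print(Image.open("out.png").text)  # {'ai_generated': 'true', 'generator': '...'}
```

Of course, metadata like this is trivially stripped by a screenshot or re-encode, which is exactly the enforcement problem raised above; sturdier approaches lean on signed manifests or watermarks baked into the pixels themselves.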

I'm not sure what you mean by spam being a direct problem of ai. Are you saying that it's easier to generate propaganda, and thus allow it to be spammed?

As near as I can tell, the propaganda farms were doing quite well spreading misinformation and disinformation before AI, and spamming it too, when that was useful to their goals.

[–] RandomVideos@programming.dev 1 points 4 days ago* (last edited 4 days ago)

As far as I know, the Twitter AI tags its images.

Propaganda is more of a problem with text generation than image generation, but both can be used to change people's opinions much more easily than before.

[–] VerbFlow@lemmy.world 5 points 4 days ago (2 children)

This "art" costs far more environmentally than any other. It uses mass amounts of electricity and water. It's nothing like, say, eating steak instead of salad, or driving a pickup truck to work. The "miracle" of AI has to come from somewhere, after all.

[–] southsamurai@sh.itjust.works 6 points 4 days ago

Sure, but so does everything. Pigments have to be mined or synthesized. Paper comes from cut-down trees. Brushes are either synthetic or made from natural hairs. Ink is a vat of chemicals.

Electricity by itself is just one resource. You could argue that by centralizing the resource like that, you can more easily reduce environmental impacts overall via more sustainable, less damaging energy production.

AI isn't a miracle, any more than air conditioning is, or refrigerators, or Christmas lights, or even just a stove. It's a tool.

Again, I'm not trying to change anyone's mind here. It's just for the enjoyment of babbling about the subject, maybe having a nice conversation along the way. I have very definite opinions about the way generative models are being used and the impacts they're having, but a lot of the time that's not really interesting, because pretty much everyone hates the slop factor.

But that's, to me, like objecting to a shovel because someone is using it to dig under your house. Misuse of a thing isn't the same as the thing itself.

[–] Lumiluz@slrpnk.net 3 points 4 days ago* (last edited 3 days ago)

Running a local gen model for 500 images uses less electricity than playing Baldur's Gate 3 for 30 minutes.

Edit: Correction; less than 5-10 minutes depending on settings.

[–] MoonlightFox@lemmy.world 2 points 4 days ago

I have seen AI "art" that has moved me emotionally, been inspiring, increased my immersion, etc.

The AI satire video about US workers in a sweatshop factory was politically important, and made me laugh.

I once made a picture, in the style of Rembrandt, of a cat busy working tirelessly, and it was emotionally moving. I saw myself in that cat. 🥲

My friends and I have used AI for immersion when roleplaying.

This supports your point about giving people the ability to express ideas artistically and quickly without being skilled artists.

I also believe that the ethical issues of ownership and theft from authors and artists are huge.

The environmental issue is not my biggest concern, considering how cheap and quick some genAI can be, so I don't automatically see all genAI as unethical on environmental grounds.

Also, has image generation gotten worse? I feel that generated images are more "correct" now, but they have a bad look to them that they did not previously have.