this post was submitted on 29 Aug 2023
129 points (100.0% liked)

Technology


There is a machine learning bubble, but the technology is here to stay. Once the bubble pops, the world will be changed by machine learning. But it will probably be crappier, not better.

What will happen to AI is boring old capitalism. Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots.

AI is defined by aggressive capitalism. The hype bubble has been engineered by investors and capitalists dumping money into it, and the returns they expect on that investment are going to come out of your pocket. The singularity is not coming, but the most realistic promises of AI are going to make the world worse. The AI revolution is here, and I don’t really like it.

top 50 comments
[–] lily33@lemm.ee 33 points 1 year ago* (last edited 1 year ago) (6 children)

You could have said the same for factories in the 18th century. But instead of the reactionary sentiment to just reject the new, we should be pushing for ways to have it work for everyone.

[–] Jummit@lemmy.one 17 points 1 year ago* (last edited 1 year ago)

I don't see how rejecting 18th century-style factories or exploitative neural networks is a bad thing. We should have the option of saying "no" to the ideas of capitalists looking for a quick buck. There was an insightful blog post that I can't find right now...

[–] TwilightVulpine@kbin.social 16 points 1 year ago (5 children)

Let's not forget all the exploitation that happened in that period too. People, even children, working endless hours for nearly no pay, losing limbs to machinery and simply getting discarded for it. Just as there is a history of technology, there is a history of it being used inequitably and even sociopathically, through greed that has no consideration for human well-being. It took a lot of fighting, often literally, to get to the point where we have some dignity, and even that is being eroded.

I get your point: it's not the tech, it's the system. And while I've lost all excitement for AI, I don't think that genie can be put back in the bottle. But if the whole system isn't changing, we should at least regulate the tech.

But AI will eliminate so many jobs that it will affect a lot of people, and strain the whole system even more. There isn't a "just become a programmer" solution to AI, because even intellectually-oriented jobs are now on the line for elimination. This won't create more jobs than it takes away.

Which shows why people are so fearful of this tech. Freeing people from manual labor to go to intellectual work was overall good, though in retrospect even then it came at a cost of passionate artisans. But now people might be "freed" from being artists to having to become sweatshop workers, who can't outperform machines so their only option is to undercut them. Who is being helped by this?

[–] argv_minus_one@beehaw.org 7 points 1 year ago (3 children)

You could have said the same for factories in the 18th century.

Everyone who died as a result of their introduction probably would say the same, yes. If corpses could speak, anyway.

[–] ReCursing@kbin.social 27 points 1 year ago (6 children)

Top quality luddite opinions right here. Plenty of fear and opprobrium being directed against the technology, while taking the kleptocratic capitalism and kakistocracy as a given that can't be challenged.

[–] acastcandream@beehaw.org 19 points 1 year ago* (last edited 1 year ago) (11 children)

The fact that AI evangelists have the gall to call everyone who disagrees with them "luddites" is absolutely astounding to me. It's a word I see people like you throw around over and over again.

And before you heap the same nonsense on me, I use AI and have for years. But the entire discourse by "advocates" is quarter-baked, pretentious, and almost religious. It's bizarre. These are just tools, and people calling for us to think about how we use these tools as more and more ethical issues arise are not "luddites." They are not halting progress. They are asking reasonable questions about what we want to unleash on ourselves. Meanwhile nothing is stopping you or me from using LLMs and running our own local instances of ChatGPT-like systems. Or whatever else we can come up with. So what is the problem?

Imagine if we had taken an extra five minutes before embracing Facebook and all the other social media that came to define "Web2.0." Maybe things could be slightly better. Maybe we wouldn't have as big of a radicalization/silo-ing issue. But we don't know, because anyone who dares to even ask "should we do this?" in the tech world is treated like they need to be sent to a retirement home for their own safety. It's anathema, it's heresy.

So once again: What is the problem? What are those people doing to you? Why are they so threatening? Why are you so angry and insulting them?

I feel like we are just entering the new iteration of crypto bro culture. 

[–] parlaptie@feddit.de 11 points 1 year ago (2 children)

Extra spicy take: The Luddites were right. They were really always about opposing unethical use of technology, people who use their name as an insult were always all about "progress over people", and you should never feel bad for being called a Luddite.

[–] acastcandream@beehaw.org 3 points 1 year ago

I dig it. Take it back

[–] GenderNeutralBro@lemmy.sdf.org 19 points 1 year ago (1 children)

That seems to be the theme of the era.

Yes, it is incompatible with the status quo. That's a good thing. The status quo is unsustainable. The status quo is on course to kill us all.

The only real danger AI brings is it will let our current corrupt leaders and corrupt institutions be more efficient in their corruption. The problem there is not the AI; it's the corruption.

[–] Umbrias@beehaw.org 9 points 1 year ago (2 children)

Improving human efficiency is essentially the purpose of technology after all. Any new invention will generally have this effect.

[–] Gaywallet@beehaw.org 10 points 1 year ago

taking the kleptocratic capitalism and kakistocracy as a given that can’t be challenged.

It's literally baked into the models themselves. AI will reinforce kleptocratic capitalism and kakistocracy, as you so aptly put it, because the very data it's trained on is a slice of the society it resembles. People on the internet share bad, racist opinions, and the bots trained on this data do the same. When AI models are put in charge of systems because it's cheaper than putting humans in place, the systems themselves become entrenched in the status quo. The problem isn't so much the technology itself, but how the technology is being rolled out, driven by capitalistic incentives, and the consequences that brings.

[–] jatone@reddthat.com 8 points 1 year ago (1 children)

*snicker* drewdevault is an avid critic of capitalism. That's entirely the point of this post, actually.

[–] jarfil@beehaw.org 16 points 1 year ago (3 children)

Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots.

Unlikely to replace the "most" competent humans, but probably the lower 80% (Pareto principle), where "crappy" is "good enough".

What's really troubling is that it will happen all across the board; I've yet to find a single field where most tasks couldn't be replaced by an AI. I used to think 3D design would take the longest, but no, there are already 3D design AIs.

[–] potterman28wxcv@beehaw.org 13 points 1 year ago* (last edited 1 year ago) (1 children)

I’ve yet to find a single field where most tasks couldn’t be replaced by an AI

Critical-application development. For example, developing a program that drives a rocket or an airplane.

You can have an AI write some code. But good luck proving that the code meets all the safety criteria.

[–] FaceDeer@kbin.social 6 points 1 year ago (1 children)

You just said the same thing the comment responding to did, though. He pointed out that AI can replace the lower 80%, and you said the AI can write some code but that it might have trouble doing the expert work of proving the code meets the safety criteria. That's where the 20% comes in.

Also, it becomes easier to recognize the possibility for AI contribution when you widen your view to consider all the work required for critical application development beyond just the particular task of writing code. The company surrounding that task has a lot of non-coding work that gets done that is also amenable to AI replacement.

[–] PenguinTD@lemmy.ca 4 points 1 year ago (1 children)

That split won't work, because the top 20% won't want their day job to be cleaning up AI code. Time-investment-wise, it's much better for them to write their own template-generation tool so the 80% can focus on the key part of their task, than to take AI templates that may or may not be wrong and then hunt all over the place to remove bugs.

[–] jarfil@beehaw.org 4 points 1 year ago* (last edited 1 year ago) (1 children)

Use the AI to fix the bugs.

A couple months ago, I tried it on ChatGPT: I had never ever written or seen a single line in COBOL... so I asked ChatGPT to write me a program to print the first 10 elements of the Fibonacci series. I copy+pasted it into a COBOL web emulator... and it failed, with some errors. Copy+pasted the errors back to ChatGPT, asked it to fix them, and at the second or third iteration, the program was working as intended.

If an AI were to run with enough context to keep all the requirements for a module, then iterate with input from a test suite, all one would need to write would be the requirements. Use the AI to also write the tests for each requirement, maybe make a library of them, and the core development loop could be reduced to ticking boxes for the requirements you wanted for each module... but maybe an AI could do that too?
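The generate/test/iterate loop described above can be sketched roughly as follows. Note that `ask_model` here is a hypothetical stand-in for a real LLM API call, stubbed out for illustration (returning a buggy first draft, then a corrected one once test feedback arrives); `passes_tests` plays the role of the requirement's test suite:

```python
# Toy sketch of the generate/test/iterate loop.
# ask_model() is a HYPOTHETICAL stand-in for a real LLM API call;
# it is stubbed to return a buggy draft first, then a fix.
def ask_model(prompt, feedback=None):
    if feedback is None:
        # First draft: recursive fib with no base case (buggy).
        return "def fib(n): return fib(n - 1) + fib(n - 2)"
    # Revised draft after seeing the test failures.
    return (
        "def fib(n):\n"
        "    a, b = 0, 1\n"
        "    for _ in range(n):\n"
        "        a, b = b, a + b\n"
        "    return a"
    )

def passes_tests(code):
    """Run the requirement's test suite against the generated code."""
    ns = {}
    try:
        exec(code, ns)
        return ns["fib"](10) == 55  # requirement: fib(10) == 55
    except Exception:
        return False  # crashes and recursion blowups count as failures

code = ask_model("print the first elements of the Fibonacci series")
for _ in range(3):  # iterate until the tests pass (or give up)
    if passes_tests(code):
        break
    code = ask_model("print the first elements of the Fibonacci series",
                     feedback="tests failed")
```

With a real model behind `ask_model`, the human's job reduces to writing the requirements and the tests, exactly as described above.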

Weird times are coming. 😐

[–] FaceDeer@kbin.social 3 points 1 year ago (4 children)

I'm a professional programmer and this is how I use ChatGPT. Instead of asking it "give me a script to do [some big complicated task]" and then laughing at it when it fails, I tell it "give me a script to do [one small piece]." Then when I confirm that works, I say "okay, now add a function that takes the output of the first function and does [the next step]." Repeat until done, correcting it when it makes mistakes. You still need to know how to spot problems, but it's way faster than writing it myself, and I don't have to go rummaging through API documentation and whatnot.

[–] Remmock@kbin.social 6 points 1 year ago (1 children)

Fashion designers are being replaced by AI.
Investment capitalists are starting to argue that C-Suite company officers are costing companies too much money.
Our Ouroboros economy hungers.

[–] jarfil@beehaw.org 6 points 1 year ago* (last edited 1 year ago) (1 children)

C-Suites can get replaced by AIs... controlled by a crypto DAO replacing the board. And now that we're at it, replace all workers by AIs, and investors by AI trading bots.

Why have any humans, when you can put in some initial capital and have the bot invest in a DAO that controls a full-AI company? Bonus points if all the clients are also AIs.

The future is going to be weird AF. 😆😰🙈

[–] FaceDeer@kbin.social 3 points 1 year ago (5 children)

If the AI is doing a better job at each of those things, why not let it?

[–] TwilightVulpine@kbin.social 3 points 1 year ago

That's where we need to ask how we define "better". Is better "when the number goes up" or is better "when more people benefit"? If an AI can extract the maximum value from people's work before discarding them, then optimize how many ways the product can be monetized to squeeze the most profit from each customer, the result is a horrible company and a horrible society.

[–] amki@feddit.de 4 points 1 year ago (1 children)

Unfortunately everything AI does is kind of shitty. Sure, you might have a query for which the chosen AI works well, but just as easily you might not.

If you accept that it sometimes just doesn't work at all, then sure, AI is your revolution. Unfortunately there are not too many use cases where that is helpful.

[–] jarfil@beehaw.org 3 points 1 year ago

I posit that in 80% of cases, an AI that works even less than 50% of the time is still "good enough" to achieve the shittier 80% of goals.

"I'll have a burger with extra ketchup"... you get extra mayo instead... for half the price; "good enough".

[–] gaytswiftfan@beehaw.org 16 points 1 year ago (2 children)

I'm getting so so tired of these "AI/ML bad, world is doom" articles being posted multiple times a day. who is funding these narratives??

[–] jatone@reddthat.com 12 points 1 year ago (1 children)

You clearly have no idea who Drew DeVault is, and you entirely missed the point of what he posted.

[–] JWBananas@startrek.website 7 points 1 year ago

I also have no idea who he is and I also missed the point. It's just another "AI bad" article, even if the message this time is "AI bad, but not as bad as you think."

[–] storksforlegs@beehaw.org 3 points 1 year ago* (last edited 1 year ago)

And it's a hot topic, so there are a lot of articles about it, since it generates traffic. And the bigger and doomier the article, the more hype.

But lots of people are unhappy about AI; it's not a narrative being funded by a secret cabal or something.

[–] bedrooms@kbin.social 15 points 1 year ago (3 children)

I disagree. If we replace this writer with ChatGPT4, it would generate a more balanced article.

[–] Akrenion@programming.dev 34 points 1 year ago (1 children)

More balanced articles are not necessarily better, though. I'd rather read two conflicting opinions that are well thought out than a mild compromise with unknown bias.

[–] acastcandream@beehaw.org 8 points 1 year ago

Why would ChatGPT be more “balanced,” what does “balanced” mean, and why is it better?

[–] Hirom@beehaw.org 5 points 1 year ago* (last edited 1 year ago) (1 children)

More balanced than what?

ChatGPT ingests lots of articles from the web and newspapers, identifies patterns in the text, and generates relevant replies based on what it ingested.

I expect ChatGPT to perpetuate biases found in its training data, and don't see how it'd improve balance.
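A toy illustration of that point: a purely statistical model can only echo the balance of its training text. Here a crude bigram counter (a deliberately simplistic stand-in for real LLM training, with made-up example data) ends up preferring whichever continuation dominated its corpus:

```python
from collections import Counter, defaultdict

# A skewed toy corpus: "good" follows "is" twice, "bad" only once.
corpus = ("the market is good . the market is good . "
          "the market is bad .").split()

# Count bigram transitions -- a crude stand-in for model training.
transitions = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    transitions[a][b] += 1

# The model's "opinion" simply mirrors the corpus imbalance.
print(transitions["is"].most_common())  # [('good', 2), ('bad', 1)]
```

Whatever slant the training data carries, the model reproduces; scale changes the sophistication, not the principle.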

[–] abhibeckert@beehaw.org 13 points 1 year ago* (last edited 1 year ago) (1 children)

Sure, it's all about capitalism. Nothing good like this could ever come from advances in technology:

https://www.ucsf.edu/news/2023/08/425986/how-artificial-intelligence-gave-paralyzed-woman-her-voice-back

ML is a tool and like most tools it has broad use cases. Some of them are very, very, good.

[–] 1984@lemmy.today 7 points 1 year ago* (last edited 1 year ago) (5 children)

There is a name for this debating technique where you go "sure, there was nothing good about Hitler - except he cared about dogs!". Can't remember. Is it a strawman?

I think we all understand that capitalism is mostly bad for humans, and really good for corporations and their owners. AI and robots will be exploited to replace people since they are massively more powerful and much cheaper.

A few things will be better I guess, but most will be worse. People already are not actually needed to work this much anymore, and as soon as they can be replaced with something cheaper and more efficient they will. That is capitalism.

[–] lloram239@feddit.de 11 points 1 year ago

This is a very one sided way to look at things. Yes, people will use AI to generate spam and stuff. What it is missing is that people will also use AI to filter it all away. The nice thing about ChatGPT and friends is that it gives me access to information in whatever format I desire. I don't have to visit dozens of websites to find what I am looking for, the AI will do that for me and report back with what it has found.

Simply put, AI is a possible path to the Semantic Web, which previously failed since ads and SoC were the driver of the Web, not information.

Sometimes I really wonder in what magical wonderland those people complaining about AI live, since as far as I am concerned, the Web and a lot of other stuff went to shit a long while ago, long before AI got any mass traction. AI is our best hope to drag ourselves out of the mud.

The real problem is that AI isn't good enough yet. It can handle Wikipedia-like questions quite well. But try to use it for product and price information and all you get is garbage.

[–] yads@lemmy.ca 9 points 1 year ago

This is my worry as well
