this post was submitted on 10 Apr 2024
113 points (91.2% liked)

Technology


The promise of AI, for corporations and investors, is that companies can increase profits and productivity by slashing their reliance upon a skilled human workforce. But as this story and many others show, AI is just today’s buzzword for “outsourcing,” and it comes with the same problems that have plagued outsourced companies and workforces for decades.

top 18 comments
[–] jsomae@lemmy.ml 19 points 7 months ago

This article is severely misleading. AI is a buzzword, but mostly for chatbots, which are actual bots. You can check this easily: chatbots such as ChatGPT type much faster than any human possibly could.

Much of the training and validation of AI requires outsourcing. Companies which just mindlessly slap an LLM into their product somewhere aren't usually outsourcing.

Don't be misled. When your CEO brings up integrating AI into your product, they aren't secretly talking about outsourcing.

[–] maynarkh@feddit.nl 16 points 7 months ago

> is that companies can increase profits and productivity

Not even necessarily increasing productivity, just cutting it by less than they cut worker pay, so that they keep more of the pie for themselves. Like customer service chatbots that are worse than human agents, but whose drop in quality costs less than the money they "save".

[–] FaceDeer@fedia.io 1 points 7 months ago (2 children)

AI is actually real, though, and can actually accomplish many of the things it's being used for. I think this article focuses too much on a couple of weird outlier situations.

[–] magic_lobster_party@kbin.run 4 points 7 months ago* (last edited 7 months ago) (3 children)

I’ve worked with AI companies where their “AI” is mostly just outsourced to some human in a low wage country. Some small part was actual AI algorithms, so they weren’t completely lying to investors.

Setting up actual AI is expensive and time-consuming, especially for a new startup. It often involves creating large amounts of labeled data, and even that might not be enough. There are many uncertainties involved: it might only be possible to reach 90% accuracy, or lower, on a given problem with actual AI algorithms, so the quick and easy solution is to outsource.

I believe this practice is more common than most think.

[–] FaceDeer@fedia.io 6 points 7 months ago (1 children)

Sure, but the fact that some "AI" isn't really AI doesn't mean none of it is real. I run local LLMs on my home computer to perform various tasks; I can shut off my Internet entirely and they still work. There isn't some secret line out to a third-world sweatshop where outsourced labor is frantically typing responses to the thousands of queries generated by my scripts.

Training an AI is expensive and time-consuming, but simply using one can be very straightforward.

[–] magic_lobster_party@kbin.run 1 points 7 months ago* (last edited 7 months ago) (1 children)

I get you. It's just that when a company says they're offering AI solutions, it's most likely an outsourced workforce under the hood.

Companies can either spend one or two years on R&D, possibly involving PhDs, to solve a task with AI, or they can get the product to market in a few months by outsourcing the AI part. The first option might not even match the outsourced option, even with all the money spent on R&D.

[–] FaceDeer@fedia.io 0 points 7 months ago* (last edited 7 months ago) (1 children)

You're still talking about training AIs, though. Using an AI doesn't require years of work or PhDs. You just sign a contract with one of the AI service providers and they give you an API. You may need to do a little scripting to hook up a front end, and some fiddling with prompts and parameters to get the AI to respond correctly, but as I said above, I've done this myself in my own home, entirely on my own, entirely for fun. It's really not hard; I could point you to a couple of links for free software you could use to do it yourself. Heck, even the training part isn't hard if you start from one of the existing open models and have the hardware for it.

Do you really think all those companies out there with chatbot "help staff" (which speak perfect English and respond faster than a well-trained typist could type) are mostly just work outsourced to some cheap foreign company? What are the hundreds of billions of dollars' worth of computer hardware that the AI service providers run actually being used for, if not that?
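For what it's worth, the "little scripting" being described really is little. Here's a minimal sketch of wiring a prompt to a hosted LLM over HTTP; the endpoint URL, model name, and JSON shape are placeholders for a generic chat-completions-style service, not any specific vendor's API:

```python
# Minimal sketch of hooking a script up to a hosted LLM service.
# API_URL, API_KEY, and the model name are hypothetical placeholders;
# substitute whatever your provider's documentation specifies.
import json
import urllib.request

API_URL = "https://api.example.com/v1/chat/completions"  # hypothetical endpoint
API_KEY = "sk-..."  # issued by the provider

def build_request(prompt, model="example-model", temperature=0.2):
    """Build the JSON body for a chat-completions-style request."""
    return {
        "model": model,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt):
    """POST the prompt to the provider and return the reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

That's the whole integration: build a JSON body, send it with a bearer token, pull the text out of the response. The "fiddling" is mostly in the prompt and the `temperature`-style parameters.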

[–] magic_lobster_party@kbin.run 1 points 7 months ago (1 children)

I’m talking about the entire process, from design to product. OK, maybe those useless chatbot “help staff” might be actual LLMs, but that Amazon grocery store used as an example in the article was just Indian labor all along.

As soon as you want to solve a very specific problem using AI, developing the product can quickly get time-consuming and expensive. Maybe that off-the-shelf AI model isn’t good enough for your particular problem? Maybe it only gives 75% accuracy when you really need 95% to be competitive in the market. In that case you need to compare different models, figure out whether there’s any trick you can use to boost accuracy, try out different training strategies, etc.

And once the model hits 95% accuracy on your own labeled data, it might still turn out to be completely worthless in the field, because the data you collected isn’t representative of reality.
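The accuracy comparison being described is trivial to compute once you have a labeled set; the hard part is everything around it. A sketch, with hypothetical models and labels:

```python
def accuracy(predictions, labels):
    """Fraction of predictions matching the ground-truth labels."""
    assert len(predictions) == len(labels)
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Comparing two hypothetical candidate models on the same labeled set:
labels  = ["cat", "dog", "dog", "cat"]
model_a = ["cat", "dog", "cat", "cat"]  # 3 of 4 correct
model_b = ["dog", "dog", "cat", "cat"]  # 2 of 4 correct
print(accuracy(model_a, labels))  # 0.75
print(accuracy(model_b, labels))  # 0.5
```

The number only means something if `labels` actually resembles what the model will see in production, which is exactly the failure mode described above.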

At that point you might just try to figure out how to offload the work to someone else. I’ve even heard of self-driving car companies that did exactly that.

[–] FaceDeer@fedia.io 1 points 7 months ago

Going back to my original comment:

> Sure, but the fact that some "AI" isn't really AI doesn't mean none of it is real.

The fact that Amazon was faking it in this one instance doesn't poof all the actual AI out of existence. There are plenty of off-the-shelf AI models that are good enough for various particular problems, and companies can go ahead and use them. You said it yourself: the chatbot "help staff" might be actual LLMs.

> At that point you might just try to figure out how to offload the work to someone else.

As I said, most companies using AI will likely be hiring professional AI service providers for it. That's where those hundreds of billions of dollars I mentioned above are going, and where all the PhDs spending years on R&D are working.

[–] jsomae@lemmy.ml 3 points 7 months ago* (last edited 7 months ago)

The current AI bubble started around February of last year, when every C-level executive at every major company entered into a mass hysteria about being left behind if they didn't integrate LLMs into their service. This is different from the zeitgeist that came before it.

[–] drwho@beehaw.org 2 points 7 months ago

I can confirm this.

[–] eleitl@lemmy.ml 2 points 7 months ago (2 children)

There is no real AI out there, yet.

[–] jsomae@lemmy.ml 4 points 7 months ago (1 children)

Semantics. Nobody agrees on the definition of AI, so it's not a meaningful statement to say that no "real" AI exists. Be more specific about what exactly is missing instead.

[–] eleitl@lemmy.ml 2 points 7 months ago (1 children)

Full human equivalence across all capabilities. Generative AI fails pretty much everywhere.

[–] jsomae@lemmy.ml 2 points 7 months ago

Then yes I agree with you. We're not there yet.

[–] FaceDeer@fedia.io 1 points 7 months ago

You are perhaps confusing the highly-general term "AI" for the more-specific term "AGI". It's true that there's no real AGI out there yet, but AI has been around for many decades. LLMs are a type of AI.

[–] SnotFlickerman@lemmy.blahaj.zone 1 points 7 months ago

Two words: Mechanical Turk.

[–] Nobody@lemmy.world 0 points 7 months ago

Every great "breakthrough" is just VC cash, lies, and the exploitation of cheap labor. Their system is simple, and it works for them.