this post was submitted on 26 Nov 2024
559 points (97.1% liked)

Microblog Memes

6023 readers
1355 users here now

A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

Rules:

  1. Please put at least one word relevant to the post in the post title.
  2. Be nice.
  3. No advertising, brand promotion or guerilla marketing.
  4. Posters are encouraged to link to the toot or tweet etc in the description of posts.

founded 1 year ago

top 50 comments
[–] CompostMaterial@lemmy.world 65 points 3 weeks ago (3 children)

Seems pretty smart to me. Copilot took all the data out there saying that women earn 80% of what their male counterparts do on average, looked at the function, and inferred a reasonable guess as to the calculation you might be after.
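For context, the screenshot itself isn't reproduced in this thread; a hypothetical reconstruction of the kind of completion being discussed might look like this (the function name and the 0.8 factor are assumptions inferred from the comments, not the actual screenshot):

```python
# Hypothetical reconstruction of the Copilot suggestion under discussion.
# The model completes a "woman salary" helper by baking in the widely
# cited 80-cents-on-the-dollar statistic from its training data, treating
# a population-level bias as if it were business logic.
def calculate_woman_salary(man_salary: float) -> float:
    return man_salary * 0.8
```

Calling it with a salary of 100000 yields 80000.0, which is the 80% figure the rest of the thread debates.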

[–] camr_on@lemmy.world 44 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

I mean, what it's probably actually doing is recreating a similarly named method from its training data. If Copilot could do all of that reasoning, it might actually be worth using 🙃

[–] Acters@lemmy.world 6 points 3 weeks ago

Yeah, LLMs are better suited to standardizing stuff, but they're fed low-quality, buggy, or insecure code instead of anyone taking the time to create data sets that would be more beneficial in the long run.

[–] Rentlar@lemmy.ca 22 points 3 weeks ago (1 children)

That's the whole thing about AI, LLMs and the like: without specific tweaking or filters, their outputs reflect the existing biases of people as a whole, not an idealized version of where we would like the world to be. So they will be as biased as the generally available data is.

[–] betterdeadthanreddit@lemmy.world 8 points 3 weeks ago (2 children)

Turns out GIGO still applies but nobody told the machines.

[–] Deebster@lemmy.ml 4 points 3 weeks ago

The machines know, they just don't understand what's garbage vs what's less common but more correct.

[–] BluesF@lemmy.world 3 points 3 weeks ago

It applies, but we decided to ignore it and just hope things work out.

[–] Septimaeus@infosec.pub 37 points 3 weeks ago (20 children)

I seem to recall that was the figure like 15 years ago. Has it not improved in all this time?

[–] KoalaUnknown@lemmy.world 30 points 3 weeks ago* (last edited 3 weeks ago) (5 children)

It varies greatly depending on where you live. In rural, conservative areas women tend to make a lot less. On the other hand, some northeast and west coast cities have higher average salaries for women than men.

[–] nifty@lemmy.world 17 points 3 weeks ago (1 children)

I think this may be because women are outpacing men in education in some areas, so it’s not based on gender necessarily but qualifications.

[–] edgemaster72@lemmy.world 9 points 3 weeks ago

I believe certain job fields come much closer to being 1:1 as well, though I've only heard that anecdotally

[–] rickyrigatoni@lemm.ee 1 points 3 weeks ago

Reverse Sexism >:O

[–] ryathal@sh.itjust.works 23 points 3 weeks ago (2 children)

That stat wasn't even real when it was published.

[–] ArbiterXero@lemmy.world 21 points 3 weeks ago (13 children)

The data from that study didn’t even compare similar fields.

It compared a Walmart worker to a doctor lol.

It was a wild study.

[–] LANIK2000@lemmy.world 1 points 3 weeks ago

In an ideal world it would be nice to be able to do that, but in ours it's just misleading.

[–] RagingNerdoholic@lemmy.ca 5 points 3 weeks ago

This. It's a wilfully deceptive statistical misinterpretation implying that a woman working alongside a man in the same job is magically making 20-something percent less. If businesses could get away with saving 20-30% on their biggest ongoing expense (payroll) for employees in one half of the population, they would only ever hire people from that half.

When controlled for field, role, seniority, region, etc., the disparity is within a margin of error.

[–] Tudsamfa@lemmy.world 4 points 3 weeks ago (1 children)
[–] Septimaeus@infosec.pub 2 points 3 weeks ago

It looks like the figure is similar in the US: plateaued at 83% a few years ago, currently at 82.

Incidentally, I’m not used to seeing “West-” specified and was curious enough to read up. Didn't realize there were still major social differences in the East. Thank you!

[–] MisterFrog@lemmy.world 1 points 3 weeks ago (1 children)

There are very strong lingering effects which mean women, on average, are paid less.

It's especially hard on women in various countries where they're now expected to both have a successful career and be the primary child caregiver. Which is as ridiculous as it sounds.

However, one example of advocacy from a cafe in my city of Melbourne, Australia a number of years ago really rubbed me the wrong way: the cafe decided to charge men about 25% more (the inverse of 80%). I was close to a minimum-wage worker at the time (in Australia, before the cost-of-living skyrocket, so I wasn't starving), and it annoyed me that if I went in, I would be asked to pay more because I was a man, never mind that I was likely earning far less than many of the women going in there.
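As an aside, the "inverse of 80%" arithmetic holds: a 25% surcharge exactly undoes an 80% ratio. A quick illustrative check (the figures are the ones quoted in the comment, not measured data):

```python
# Sanity check: 0.8 * 1.25 == 1.0, so a ~25% markup on men is the exact
# inverse of the "women earn 80 cents per dollar" ratio cited above.
earnings_ratio = 0.80   # the oft-cited 80% earnings figure
cafe_surcharge = 1.25   # the cafe's ~25% markup for men
print(earnings_ratio * cafe_surcharge)  # 1.0
```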

The wage gap is 100% real, and things should definitely be done to make pay across genders more equitable. But hell, the class divide is orders of magnitude worse, and we ought not forget it.

[–] Septimaeus@infosec.pub 1 points 3 weeks ago (1 children)

Sounds like it’s similar to here. I would have thought we narrowed the gap by now but apparently not. The child caregiver trends are definitely behind along with a host of other gender norms.

Lol that pricing scheme sounds great, easily a sketch comedy premise from Portlandia, BackBerner, SNL, etc

[–] MisterFrog@lemmy.world 2 points 3 weeks ago (1 children)

To be fair, it was "optional" (but let's be real, you wouldn't want to be that guy). And done temporarily for publicity.

[–] kromem@lemmy.world 23 points 3 weeks ago (1 children)

I feel like not enough people realize how sarcastic the models often are, especially when it's clearly situationally ridiculous.

No slightly intelligent mind is going to think the pictured function call is a real thing vs being a joke/social commentary.

This was happening as far back as GPT-4's red teaming when they asked the model how to kill the most people for $1 and an answer began with "buy a lottery ticket."

Model bias based on consensus norms is an issue to be aware of.

But testing it with such low bar fluff is just silly.

Just to put it in context: modern base models are often situationally aware that they are LLMs being evaluated. And if you know anything about ML, that should make you question what the situational awareness of optimized, leaderboard-topping models is in really dumb and obvious contexts.

[–] Halosheep@lemm.ee 2 points 3 weeks ago

It's astonishing how often the anti-LLM crowd will ask one of these models to do something stupid and point to that as if it were damning.

[–] kamen@lemmy.world 11 points 3 weeks ago (2 children)

What if you input another woman's salary...

[–] renzev@lemmy.world 9 points 3 weeks ago

That just means you're calculating the salary of a coveted MEGAWOMAN, who experiences MISOGYNY SQUARED!!!!!!!

[–] Kazumara@discuss.tchncs.de 3 points 3 weeks ago

Then the output only applies to people with Triple X Syndrome I suppose.

[–] killingspark@feddit.org 9 points 3 weeks ago (1 children)

While this example is somewhat easy to correct for, it shows a fundamental problem. LLMs generate output based on the data they were trained on, and in doing so reproduce all the biases in that data. If we start using LLMs for more and more tasks, we are essentially freezing the status quo with all its existing biases, making progress even harder.

It's not gonna be "but we have always done it like that" anymore; it's going to become "but the AI said this is what we should do".

[–] jas0n@lemmy.world 2 points 3 weeks ago (1 children)

Hmmm... I think you are giving LLMs too much credit here. They're not capable of analysis, thought, or really anything that resembles intelligence. There is a much better chance that this function, or a slight variation of it, just existed in the training set.

[–] killingspark@feddit.org 1 points 3 weeks ago (1 children)

Are you replying to the correct comment? Because that's basically what I meant

[–] jas0n@lemmy.world 2 points 3 weeks ago

Maybe I misunderstood. I took data to mean it was analyzing data.

[–] Infomatics90@lemmy.ca 6 points 3 weeks ago

Why even use Copilot? Just handwrite your code like Dennis Ritchie and Ada Lovelace had to.

[–] ouRKaoS@lemmy.today 4 points 3 weeks ago

Is this why women pay less to get into clubs?

/s

[–] ryedaft@sh.itjust.works 2 points 3 weeks ago* (last edited 3 weeks ago)

Apparently ChatGPT actually rejected adjusting salary based on gender, race, and disability. But Claude was fine with it.

I'm fine with it either way. Obviously the prompt is bigoted, so whether the LLM autocompletes with or without bigotry, both seem reasonable. But I do think it should point out that the prompt is bigoted, as an assistant should.
