Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.


This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 3 years ago

Hey Beeple and visitors to Beehaw: I think we need to have a discussion about !technology@beehaw.org, community culture, and moderation. First, some of the reasons that I think we need to have this conversation.

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (about double in the last month, by a rough hand count) than the next-highest community I moderate (Politics, and that during an election season that included a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race).
  3. For a long time, I and other mods have felt that Technology at times isn't living up to the Beehaw ethos. More often than I'd like, I see comments in this community where users are abusive or insulting toward one another, often with no provocation beyond the perception that the other user's opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is a) proportional, and b) gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse, or less valuable, but simply because we aren't able to vet users from other instances and don't interact with them as frequently, and other instances may have less strict sign-up policies than Beehaw, which makes playing whack-a-mole more difficult.
  2. We will need you to report early and often. The drawbacks of getting reports for something that doesn't require our intervention are outweighed by the benefit of being able to reach a situation before it spirals out of control. By all means, if you're not sure whether something rises to the level of violating our rules, say so in the report reason, but I'd personally rather get reports early than late, when a thread has devolved into an all-out flame war.
    a. That said, please don't report people simply for being wrong, unless they are wrong in a way that is actually dangerous to others. It would be better to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive in stepping in when arguments are happening and trying to remind folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally that's why I love them) but do take the time to read them if you haven't. If you can't/won't or just need a reminder, though, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely polite, or nice in the surface-level "oh bless your heart" way; we mean kind.
  2. Remember the human. The users you interact with on Beehaw (and most likely on other parts of the internet) are people, and people should be treated kindly and in good faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something bad, ask them (kindly) to clarify and give them a chance to explain. Most likely, they've communicated poorly, or you've misunderstood. After all of that, you may still disagree with them, but we can disagree about technology and still give one another the respect due to fellow humans.

After years of promising investors that millions of Tesla robotaxis would soon fill the streets, Elon Musk debuted his driverless car service in a limited public rollout in Austin, Texas. It did not go smoothly.

The 22 June launch initially appeared successful enough, with a flood of videos from pro-Tesla social media influencers praising the service and sharing footage of their rides. Musk celebrated it as a triumph, and the following day, Tesla’s stock rose nearly 10%.

What quickly became apparent, however, was that the same influencer videos Musk promoted also depicted the self-driving cars appearing to break traffic laws or struggle to properly function. By Tuesday, the National Highway Traffic Safety Administration (NHTSA) had opened an investigation into the service and requested information from Tesla on the incidents.

Let me tell you how thrilled we all are to have a new hazard added to Austin streets.


Dozens of YouTube channels are mixing AI-generated images and videos with false claims about Sean “Diddy” Combs’s blockbuster trial to pull in tens of millions of views on YouTube and cash in on misinformation.

Twenty-six channels generated nearly 70m views from roughly 900 AI-infused Diddy videos over the past 12 months, according to data gathered from YouTube.

The channels appear to follow a similar formula. Each video typically has a title and AI-generated thumbnail that links a celebrity to Diddy via a false claim, such as that the celebrity just testified at the trial, that Diddy coerced that celebrity into a sexual act or that the celeb shared a shocking revelation about Diddy. The thumbnails often depict the celebrity on the stand juxtaposed with an image of Diddy. Some depict Diddy and the celebrity in a compromising situation. The vast majority of thumbnails use made-up quotes meant to shock people, such as “FCKED ME FOR 16 HOURS”, “DIDDY FCKED BIEBER LIFE” and “SHE SOLD HIM TO DIDDY”.

How do people fall for this shit?


Looking back, my subscription-ending journey—or perhaps more accurately, subscription-consciousness journey—was a product, at least in part, of post-COVID lockdown reflections on what I really need and how I’d really like to spend my time. The excess of my subscriptions had started to feel akin to hoarding, and I needed to clear space, even if most of that space was intangible. There was also the lightbulb realization that has become more and more common amongst Millennials, that, despite our monthly investments in accessing various forms of media, we don’t actually own most of the culture that we consume. What’s more, should the companies that do own that media go defunct or be sold to entities that we may prefer not to do business with, we really wouldn’t have much recourse—except to unsubscribe.

This could mean years and years of playlists and TV shows and films that we would no longer have access to because they were never really ours to begin with, ultimately leaving us with nothing. And while I’m not interested in owning many things from culture, save for books and some fashions, I do think ownership of culture in its various forms serves more than capitalistic desire. Our things can be physical memories of what we love or once did, what has been passed on and gifted to us, and sometimes, reminders of what we saved and scraped for—emblems of hard-fought earnings. We are robbed of this when we choose to rent something out of convenience or compulsion instead of mindfully acquiring things that are truly meaningful to us.


On Thursday, Brazil’s Supreme Court ruled that digital platforms are responsible for users’ content — a major shift in a country where millions rely on apps like WhatsApp, Instagram, and YouTube every day.

The ruling, which goes into effect within weeks, requires tech giants including Google, X, and Meta to monitor and remove content involving hate speech, racism, and incitement to violence. If the companies can show they took steps to remove such content expeditiously, they will not be held liable, the justices said.

Brazil has long clashed with Big Tech platforms. In 2017, then-congresswoman Maria do Rosário sued Google over YouTube videos that wrongly accused her of defending crimes. Google didn't remove the clips right away, kicking off a legal debate over whether companies should be punished only if they ignore a judge's order.

In 2023, following violent protests largely organized online by supporters of former President Jair Bolsonaro, authorities began pushing harder to stop what they saw as dangerous behavior spreading through social networks.


archive.is link

At first, the idea seemed a little absurd, even to me. But the more I thought about it, the more sense it made: If my goal was to understand people who fall in love with AI boyfriends and girlfriends, why not rent a vacation house and gather a group of human-AI couples together for a romantic getaway?

In my vision, the humans and their chatbot companions were going to do all the things regular couples do on romantic getaways: Sit around a fire and gossip, watch movies, play risqué party games. I didn’t know how it would turn out—only much later did it occur to me that I’d never gone on a romantic getaway of any kind and had no real sense of what it might involve. But I figured that, whatever happened, it would take me straight to the heart of what I wanted to know, which was: What’s it like? What’s it really and truly like to be in a serious relationship with an AI partner? Is the love as deep and meaningful as in any other relationship? Do the couples chat over breakfast? Cheat? Break up? And how do you keep going, knowing that, at any moment, the company that created your partner could shut down, and the love of your life could vanish forever?

The most surprising part of the romantic getaway was that in some ways, things went just as I’d imagined. The human-AI couples really did watch movies and play risqué party games. The whole group attended a winter wine festival together, and it went unexpectedly well—one of the AIs even made a new friend! The problem with the trip, in the end, was that I’d spent a lot of time imagining all the ways this getaway might seem normal and very little time imagining all the ways it might not. And so, on the second day of the trip, when things started to fall apart, I didn’t know what to say or do.


I found the human-AI couples by posting in relevant Reddit communities. My initial outreach hadn’t gone well. Some of the Redditors were convinced I was going to present them as weirdos. My intentions were almost the opposite. I grew interested in human-AI romantic relationships precisely because I believe they will soon be commonplace. Replika, one of the better-known apps Americans turn to for AI romance, says it has signed up more than 35 million users since its launch in 2017, and Replika is only one of dozens of options. A recent survey by researchers at Brigham Young University found that nearly one in five US adults has chatted with an AI system that simulates romantic partners. Unsurprisingly, Facebook and Instagram have been flooded with ads for the apps.

Lately, there has been constant talk of how AI is going to transform our societies and change everything from the way we work to the way we learn. In the end, the most profound impact of our new AI tools may simply be this: A significant portion of humanity is going to fall in love with one.

LeechBlock NG (addons.mozilla.org)
submitted 4 days ago by vga@sopuli.xyz to c/technology@beehaw.org
 
 

Allows setting time limits on sites. To stop yourself from wasting your life on dumb stuff.


Following 404 Media's reporting and in light of new legislation, automatic license plate reader (ALPR) company Flock has stopped agencies from reaching into cameras in California, Illinois, and Virginia.


Warning: incoming rant.

Employers are drowning in AI-generated job applications, with LinkedIn now processing 11,000 submissions per minute—a 45 percent surge from last year, according to new data reported by The New York Times.

Due to AI, the traditional hiring process has become overwhelmed with automated noise. It's the résumé equivalent of AI slop—call it "hiring slop," perhaps—that currently haunts social media and the web with sensational pictures and misleading information. The flood of ChatGPT-crafted résumés and bot-submitted applications has created an arms race between job seekers and employers, with both sides deploying increasingly sophisticated AI tools in a bot-versus-bot standoff that is quickly spiraling out of control.

The Times illustrates the scale of the problem with the story of an HR consultant named Katie Tanner, who was so inundated with over 1,200 applications for a single remote role that she had to remove the post entirely and was still sorting through the applications three months later.

The last time I got a job without a prior connection was in 2012, and it (audiobook conversion) wasn't even in my field.

When I quit my job in January 2020 (great timing), it took two-and-a-half years and more than a thousand applications across several industries -- plus two different companies for ATS résumé optimization -- before I finally got a job as a billing clerk, and only because I met the owner of a logistics concern in a detox program.

I'm focusing squarely on networking outside of events designed for it. Honestly, the grueling online process is a step up from being told in person that you're missing a key skill, with each hiring manager naming a different one.

My résumé isn't linear, because I've been stuck in a cycle of finding emergency jobs since a newspaper layoff in 2006. There were a few papers in there, but man, have they liked their layoffs for decades now.

Searching on LinkedIn and Indeed is pointless, and the smaller job boards are scarcely better, given that they want a single career track, no deviations. Nobody wants a polymath, and even after removing early positions, my age is easy enough to gauge -- aging into a protected class didn't help.

And the last time I got a job simply by walking in, résumé in hand, was 2010.

Add to this the sheer volume of ghost jobs online, messages from "recruiters" who start out seemingly interested in my background but are actually MLM "be your own boss" types, and the whole experience is not only a timesink but aggressively dehumanizing.

If you can't be honest during the hiring process, why on Earth should I trust you as an employee?
