this post was submitted on 24 Feb 2024
238 points (92.2% liked)

Technology

top 50 comments
[–] ObviouslyNotBanana@lemmy.world 66 points 8 months ago (7 children)
[–] SlopppyEngineer@lemmy.world 47 points 8 months ago (1 children)

Only if you believe in it. Many CEOs do. They're very good at magical thinking.

[–] Cogency@lemmy.world 6 points 8 months ago (3 children)

I have a counter argument. From an evolutionary standpoint, if you keep doubling computer capacity exponentially, isn't it extraordinarily arrogant of humans to assume that their evolutionarily stagnant brains will remain relevant for much longer?

[–] chemical_cutthroat@lemmy.world 10 points 8 months ago (1 children)

You can make the same argument about humans that you do about AI, but from a biological and societal standpoint. Barring any jokes about certain political or geographical stereotypes, humans have gotten "smarter" than we used to be. We are very adaptable, and with improvements to diet and education, we have managed to stay ahead of the curve. We didn't peak at hunter-gatherer. We didn't stop at the Renaissance. And we blew right past the industrial revolution. I'm not going to channel my "Humanity, Fuck Yeah" inner wolf howl, but I have to give our biology props. The body is an amazing machine, and even though we can look at things like the current crop of AI and think, "Welp, that's it, humans are done for," I'm sure a lot of people thought the same at other pivotal moments in technological and societal advancement. Here I am, though, farting Taco Bell into my office chair and typing about it.

[–] Cogency@lemmy.world 14 points 8 months ago* (last edited 8 months ago) (2 children)

You can compare human intelligence to centuries ago on a simple linear scale. Neural density has not increased, by any stretch of the imagination, in the way that transistor density has. But I'm not just talking about density; I'm talking about scalability that is infinite. Infinite scale of knowledge and data.

Let's face it: people are already not that intelligent; we are just smart enough to use the technology of other, smarter people. And then there are computers. They are growing in intelligence, with an artificial evolutionary pressure being exerted on their development, and you're telling me they're not going to continue to surpass us in every way? There is very little to stop computers from being intelligent on a galactic scale.

[–] chemical_cutthroat@lemmy.world 10 points 8 months ago (6 children)

Computer power doesn't scale infinitely, unless you mean building a world mind and powering it off of the spinning singularity at the center of the galaxy like a Type 3 civilization, and that's sci-fi stuff. We still have to worry about bandwidth, power, cooling, coding and everything else that goes into running a computer. It doesn't just "scale". There is a lot that goes into it, and it does have a ceiling. Quantum computing may alleviate some of that, but I'll hold my applause until we see some useful real-world applications for it.

Furthermore, we still don't understand how the mind works yet. There are still secrets to unlock and ways to potentially augment and improve it. AI is great, and I fully support the advancement in technology, but don't count out humans so quickly. We haven't even gotten close to human-level intelligence with GOFAI, and maybe we never will.

load more comments (6 replies)
load more comments (1 replies)
[–] SlopppyEngineer@lemmy.world 4 points 8 months ago

As a counter argument against that: companies have been trying to make self-driving cars work for 20 years. Processing power has increased a millionfold and the things still get stuck. Pure processing power isn't everything.

[–] barsoap@lemm.ee 4 points 8 months ago* (last edited 8 months ago)

If you keep doubling the number of fruit flies exponentially, isn't it likely that humanity will find itself outsmarted?

The answer is no, it isn't. Quantity does not quality make and all our current AI tech is about ways to breed fruit flies that fly left or right depending on what they see.

[–] Xtallll@lemmy.blahaj.zone 23 points 8 months ago (1 children)

Magic as in street magician, not magic as in wizard. Lots of the things that people claim AI can do are like a magic show, it's amazing if you look at it from the right angle, and with the right skill you can hide the strings holding it up, but if you try to use it in the real world it falls apart.

[–] ObviouslyNotBanana@lemmy.world 5 points 8 months ago (2 children)

I wish there was actual magic

[–] Tristaniopsis@aussie.zone 7 points 8 months ago (1 children)

It would make science very difficult.

[–] reev@sh.itjust.works 4 points 8 months ago (1 children)

What if it magically made it easier?

load more comments (1 replies)
load more comments (1 replies)
[–] falkerie71@sh.itjust.works 15 points 8 months ago (1 children)

Everything is magic if you don't understand how the thing works.

[–] ObviouslyNotBanana@lemmy.world 10 points 8 months ago (2 children)

I wish. I don't understand why my stomach can't handle corn, but it doesn't lead to magic. It leads to pain.

load more comments (2 replies)
[–] RobotToaster@mander.xyz 8 points 8 months ago* (last edited 8 months ago)

Sam Altman will make a big pile of investor money disappear before your very eyes.

[–] CosmoNova@lemmy.world 7 points 8 months ago (2 children)

The masses have been treating it like actual magic since the early stages and are only slowly warming up to the idea that it's calculations. Calculations of things that are often more than the sum of their parts, as people start to realize. Well, some people anyway.

load more comments (2 replies)
[–] Norgur@kbin.social 7 points 8 months ago

If you're a techbro, this is the new magic shit, man! To the moooooon!

[–] db2@lemmy.world 3 points 8 months ago
[–] Gsus4@mander.xyz 50 points 8 months ago* (last edited 8 months ago) (5 children)

Yea, try talking to ChatGPT about things that you really know about in detail. It will fail to show you the hidden, niche things (unless you mention them yourself), it will make up lots of stuff that you would not pick up on otherwise (and once you point it out, the bloody thing will "I knew that" you, sometimes even if you are wrong), and it is very shallow in its details. Sometimes it just repeats your question back to you as a well-written essay. And that's fine... it is still a miracle that it is able to be as reliable and entertaining as some random bullshitter you talk to in a bar, and it's good for brainstorming too.

[–] interdimensionalmeme@lemmy.ml 23 points 8 months ago (2 children)

It's like watching mainstream media news talk about something you know about.

[–] davysnavy@lemmy.world 6 points 8 months ago

Oh good comparison

[–] Gsus4@mander.xyz 5 points 8 months ago* (last edited 8 months ago) (2 children)

Haha, definitely, it's infuriating and scary. But it also depends on what you are watching for. If you are watching TV, you do it for convenience or entertainment. LLMs have the potential to be much more than that, but unless a very open and accessible ecosystem is created for them, they are going to be whatever our tech overlords decide they want them to be in their boardrooms to milk us.

[–] TheFriar@lemm.ee 7 points 8 months ago* (last edited 8 months ago) (12 children)

Well, if you read the article, you’ll see that’s exactly what is happening. Every company you can imagine is investing the GDP of smaller nations into AI. Google, Facebook, Microsoft. AI isn’t the future of humanity. It’s the future of capitalist interests. It’s the future of profit chasing. It’s the future of human misery. Tech companies have trampled all over human happiness and sanity to make a buck. And with the way surveillance capitalism is moving—facial recognition being integrated into insane places, like the M&M vending machine, the huge market for our most personal, revealing data—these could literally be two horsemen of the apocalypse.

Advancements in tech haven't helped us as humans in a while. But they sure did streamline profit centers. We have to wrest control of our future back from corporate America, because this plutocracy driven by these people is very, very fucking dangerous.

AI is not the future for us. It’s the future for them. Our jobs getting “streamlined” will not mean the end of work and the rise of UBI. It will mean stronger, more invasive corporations wielding more power than ever while more and more people suffer, are cast out and told they’re just not working hard enough.

load more comments (12 replies)
load more comments (1 replies)
[–] nomadjoanne@lemmy.world 13 points 8 months ago* (last edited 8 months ago) (1 children)

I really only use it for the "oh damn, I know there's a great one-liner to do that in Python" sort of thing. It's usually right, and if it isn't, it'll be immediately obvious and you can move on with your day. For anything more complex, the gaslighting and subtle errors make it unusable.
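For instance (a made-up stand-in for the kind of one-liner I mean; flattening a nested list is just an illustrative task, not something from the comment above):

```python
# Hypothetical example of the sort of one-liner an LLM is handy for recalling:
# flattening one level of a nested list with itertools.
from itertools import chain

nested = [[1, 2], [3, 4], [5]]
flat = list(chain.from_iterable(nested))  # [1, 2, 3, 4, 5]
print(flat)
```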

load more comments (1 replies)
[–] SparrowRanjitScaur@lemmy.world 7 points 8 months ago (1 children)

ChatGPT is great for helping with specific problems. Google search, for example, gives fairly general answers, or may surface information that doesn't apply to your specific situation. But if you give ChatGPT a very specific description of the issue you're running into, it will generally give some very useful recommendations. And it's an iterative process: you just need to treat it like a conversation.

load more comments (1 replies)
[–] douglasg14b@lemmy.world 5 points 8 months ago (1 children)

I find it incredibly helpful for breaking into new things.

I want to learn Terraform today; no guide/video/docs site can do it as well as having a teacher available at any time for Q&A.

Aside from that, it's pretty good for general Q&A on documented topics, and great when provided context (i.e. a full 200MB export of documentation from a tool or system).

But the moment I try to dig deeper into something I'm an expert in, it just breaks down.

load more comments (1 replies)
load more comments (1 replies)
[–] fidodo@lemmy.world 48 points 8 months ago (11 children)

Good. It's dangerous to view AI as magic. I've had to debate way too many people who think LLMs are actually intelligent. It's dangerous to overestimate their capabilities, lest we use them for tasks they can't perform safely. It's very powerful, but the fact that it's totally nondeterministic and unpredictable means we need to very carefully design systems that rely on LLMs, with heavy guardrails.

[–] FaceDeer@kbin.social 13 points 8 months ago (1 children)

Conversely, there are way too many people who think that humans are magic and that it's impossible for AI to ever do <insert whatever is currently being debated here>.

I've long believed that there's a smooth spectrum between not-intelligent and human-intelligent. It's not a binary yes/no sort of thing. There are basic inert rocks at one end and humans at the other, and everything else is scattered at various points in between. So I think it's fine to discuss where exactly on that scale LLMs fall, and to accept the possibility that they're moving in our direction.

[–] fidodo@lemmy.world 7 points 8 months ago (2 children)

It's not linear either. Brains are crazy complex and have sub-cortices that are specialized for specific tasks. I really don't think that LLMs alone can possibly demonstrate advanced intelligence, but I do think an LLM could be a very important cortex for one. There are also different types of intelligence: LLMs are very knowledgeable and have great recall, but they lack reasoning and a worldview.

load more comments (2 replies)
[–] Deceptichum@kbin.social 5 points 8 months ago

I find the people who think they're actually AI are generally the people opposed to them.

People who use them as the tools they are know how limited they are.

load more comments (9 replies)
[–] FaceDeer@kbin.social 30 points 8 months ago (2 children)

Those recent failures only come across as cracks to people who saw AI as magic in the first place. What they're really cracks in is people's misconceptions about what AI can do.

Recent AI advances are still amazing and world-changing. People have been spoiled by science fiction, though, and are disappointed that it's not the person-in-a-robot-body kind of AI that they imagined they were being promised. Turns out we don't need to jump straight to that level to still get dramatic changes to society and the economy out of it.

I get strong "everything is amazing and nobody is happy" vibes from this sort of thing.

[–] VirtualOdour@sh.itjust.works 6 points 8 months ago

Also interesting is that most people don't understand the advances it makes possible, so when they hear people saying it's amazing and then try it, of course they're going to think it hasn't lived up to the hype.

The big advances are going to completely change things like how we use computers, especially being able to describe how you want it to lay out a UI and have it create custom tools on the fly.

load more comments (1 replies)
[–] LainTrain@lemmy.dbzer0.com 18 points 8 months ago* (last edited 8 months ago) (3 children)

I hope it collapses in a fire and we can just keep our FOSS local models with incremental improvements; that way both techbros and artbros eat shit.

load more comments (3 replies)
[–] Ropianos@feddit.de 16 points 8 months ago (9 children)

There are quite a lot of AI-sceptics in this thread. If you compare the situation to 10 years ago, isn't it insane how far we've come since then?

Image generation, video generation, self-driving cars (Level 4, where the driver doesn't need to pay attention at all times), capable text comprehension and generation, whether used for translation, help with writing reports, or coding. And to top it all off, we have open-source models that are at least in a similar ballpark to the closed ones, and those models can be run on consumer hardware.

Obviously AI is not a solved problem yet, and there are lots of shortcomings (especially with LLMs and logic, where they completely fail on even simple problems), but the progress is astonishing.

load more comments (9 replies)
[–] lvxferre@mander.xyz 15 points 8 months ago (5 children)

As I often mention when this subject pops up: while the current statistics-based generative models might see some application, I believe that they'll eventually be replaced by better models that are actually aware of what they're generating, instead of simply reproducing patterns, with the current models coming to be seen as "that cute '20s toy".

In text generation (currently dominated by LLMs), for example, this means that the main "bulk" of the model would do three things (rough code sketch after the list):

  • convert input tokens into sememes (units of meaning)
  • perform logic operations with the sememes
  • convert sememes back into tokens for the output
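
A minimal, purely hypothetical sketch of that three-stage pipeline; none of these components exist, and the function names and toy lexicon are invented for illustration:

```python
# Hypothetical pipeline: tokens -> sememes -> logic -> tokens.
# All names and the toy lexicon below are made up for illustration.

LEXICON = {"dogs": "CANINE.PL", "bark": "EMIT_SOUND"}
REVERSE = {v: k for k, v in LEXICON.items()}

def tokens_to_sememes(tokens):
    """Convert surface tokens into units of meaning (stub)."""
    return [LEXICON.get(t, t.upper()) for t in tokens]

def reason(sememes):
    """Perform logic operations on the meanings (stub: deduplicate)."""
    seen, out = set(), []
    for s in sememes:
        if s not in seen:
            seen.add(s)
            out.append(s)
    return out

def sememes_to_tokens(sememes):
    """Convert meanings back into surface tokens (stub)."""
    return [REVERSE.get(s, s.lower()) for s in sememes]

print(sememes_to_tokens(reason(tokens_to_sememes(["dogs", "bark"]))))
# -> ['dogs', 'bark']
```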

Because, as it stands, LLMs are only chaining tokens. They might do this in an incredibly complex way, but that's it. That's obvious when you look at what LLM-fuelled bots output as "hallucination" - they aren't the result of some internal error, they're simply an undesired product of a model that sometimes outputs desirable stuff too.

Sub "tokens" and "sememes" with "pixels" and "objects" and this probably holds true for image generating models, too. Probably.

Now, am I some sort of genius for noticing this? Probably not; I'm just some nobody with a chimp avatar, rambling in the Fediverse. Odds are that people behind those tech giants already noticed the same ages ago, and at least some of them reached the same conclusion - that better gen models need more awareness. If they are not doing this already, it means that this shit would be painfully expensive to implement, so the "better models" that I mentioned at the start will probably not appear too soon.

Most cracks will stay there; Google will hide them with an obnoxious band-aid, OpenAI will leave them in plain daylight, but the magic trick will still not be perfect, at least in the foreseeable future.

And some might say "use MOAR processing power!", or "input MOAR training data!", in the hopes that the current approach will "magically" fix itself. For those, imagine yourself trying to drain the Atlantic with a bucket: does it really matter if you use more buckets, or larger buckets? Brute-forcing problems only goes so far.

Just my two cents.

[–] wewbull@feddit.uk 7 points 8 months ago (2 children)

I don't know much about LLMs, but latent diffusion models already have "meaning" encoded into the model. The whole concept of the u-net is that as it reduces the spatial resolution of the image, it increases the semantic resolution by adding extra dimensions of information. It came from medical image analysis, where the idea of labelling something as a tumor would be really useful.

This is why you get anatomically mangled results on earlier (and even current) models. It's identified something as a human limb, but isn't quite sure where the hand is, so it adds one onto what we know is a leg.
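
A toy sketch of that downsampling idea, hypothetical and in PyTorch (real u-nets also add skip connections and an upsampling path):

```python
# Each downsampling step halves spatial resolution and raises channel count,
# trading "where" information for "what" (semantic) information.
import torch
import torch.nn as nn

class DownBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)  # halve height and width

    def forward(self, x):
        return self.pool(torch.relu(self.conv(x)))

x = torch.randn(1, 3, 64, 64)    # one RGB image
h = DownBlock(3, 64)(x)          # -> (1, 64, 32, 32)
h = DownBlock(64, 128)(h)        # -> (1, 128, 16, 16)
print(h.shape)
```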

load more comments (2 replies)
[–] Buffalox@lemmy.world 7 points 8 months ago* (last edited 8 months ago) (1 children)

I agree 100%, and I think Zuckerberg's attempt at a massive LLM-based AI built on 340,000 of Nvidia's H100 GPUs, with the aim of creating a general AI, sounds stupid. Unless there's a lot more to their attempt, it's doomed to fail.

I suppose the idea is something about achieving critical mass, but it's pretty obvious that that is far from the only factor missing to achieve general AI.

I still think it's impressive what they can do with LLMs, and it seems to be a pretty huge step forward. But it's taken about 40 years from when we had decent "pattern recognition" to get here; could the next step take another 40 years?

[–] lvxferre@mander.xyz 7 points 8 months ago

I think that Zuckerberg's attempt is a mix of publicity stunt and "I want [you] to believe!". Trying to reach AGI through a large enough LLM sounds silly, on the same level as "ants build, right? If we gather enough ants, they'll build a skyscraper! Chrust me."

In fact I wonder if the opposite direction wouldn't be a bit more feasible - start with some extremely primitive AGI, then "teach" it Language (as a skill) and a language (like Mandarin or English or whatever).

I'm not sure on how many years it'll take for an AGI to pop up. 100 years perhaps, but I'm just guessing.

load more comments (3 replies)
[–] Usernamealreadyinuse@lemmy.world 10 points 8 months ago (1 children)

I found this graph very clear

[–] eggymachus@sh.itjust.works 4 points 8 months ago

Well, natural language processing is placed in the trough of disillusionment and projected to stay there for years. ChatGPT was released in November 2022...

[–] tinsuke@lemmy.world 8 points 8 months ago (1 children)

Trying to make real and good use of generative AI models is what reveals the cracks in the magic.

[–] falkerie71@sh.itjust.works 17 points 8 months ago (7 children)

It's pretty useful if you know exactly what you want and how to work within its limitations.

Coworkers around me already use ChatGPT to generate code snippets for Python, Excel VBA, etc., to good success.
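
Something like this, a hypothetical example of the kind of snippet they ask for (the file and column names are invented, and pandas plus openpyxl are assumed to be installed):

```python
# Hypothetical snippet: total an "Amount" column across every sheet in an
# Excel workbook. "report.xlsx" and "Amount" are made-up names.
import pandas as pd

sheets = pd.read_excel("report.xlsx", sheet_name=None)  # dict of DataFrames
total = sum(df["Amount"].sum() for df in sheets.values())
print(total)
```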

[–] cm0002@lemmy.world 16 points 8 months ago

Right, it's a tool with quirks, techniques, and skills to use, just like any other tool. ChatGPT has definitely saved me time and, on at least one occasion, kept me from missing a deadline that I probably would have missed if I'd gone about it "the old way" lmao

load more comments (6 replies)
[–] RobotToaster@mander.xyz 4 points 8 months ago

"This post is for paid subscribers"

(Also that page has a script I had to override just to copy and paste that)
