Ask Lemmy
A Fediverse community for open-ended, thought provoking questions
We've been "rapidly approaching the singularity" for quite a while now, and the current tools being marketed as "AI" don't actually have any "intelligence" to them. We are not going to magically turn what we have now into "AGI"; it's simply not possible given our current models and techniques.
From someone in tech: at absolute best, this is something we might see strides in by the time we all die of old age, and that's being absurdly optimistic. The only people pushing the idea of a faster timeline are those with money to make grifting off the idea.
I see where you're coming from, but look at semiconductors. Right now Nvidia has dethroned Intel, and Nvidia's own insiders have stated that they are designing chips with AI, which they then use to power the AI that designs the next round of chips.
Maybe the stuff that you and I have access to will never cross into AGI territory, but that doesn't mean there aren't systems and processes in play that can.
And here we run into the problem of the many different definitions of "AI." Nvidia used machine learning to simulate countless iterations of their chip design and find the best configuration and layout (for the specific goals they set their system to optimize for). They did not use ChatGPT or anything with textual output, and a system like that literally cannot spontaneously develop such an ability.
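To caricature the point: "designing chips with AI" in this sense means searching a fixed design space for the best score under a fixed objective. A toy sketch (the design space and `layout_score` objective here are entirely made up, not Nvidia's actual pipeline) shows why the search can't wander outside what it was pointed at:

```python
from itertools import product

def layout_score(config):
    # Stand-in objective: reward configurations whose units sum to a
    # target "budget" of 10. The optimizer only ever sees this score.
    return -abs(sum(config) - 10)

# Exhaustive search over a tiny discrete design space: every candidate
# is drawn from the space we defined, scored by the goal we defined.
best = max(product(range(5), repeat=4), key=layout_score)
print(best, layout_score(best))  # some config hitting the target; score 0
```

The search can surprise you with *which* configuration wins, but it can only ever emit members of the space it was given, ranked by the objective it was given.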
It is constrained by the bounds that are inherently necessary to make it function and by the goals it is created to optimize for. It cannot just arbitrarily "choose" to do something it isn't being pointed at. It may do things that aren't intended, but those are "happy accidents" related (again) to the goals it is given to optimize for. Like a delivery AI jumping off a balcony because that's the fastest way down, since no goal weighting was given to self-preservation or to not damaging the package.
At the very least, until we have some way to codify the abstract concept of comprehension into a scoring system that can be optimized for, none of these things are going to even approach AGI. This is due to the simple reality of how they work under the hood, and don't for a fucking second believe the charlatans saying that we can't understand them. We may not discretely track each and every step a model takes in modifying its weights, or each decision point when optimizing for specific output, but that's a matter of the storage space needed to record every step and the drastic slowdown that recording would cause. It is not some inherent untraceable magic in how they work.
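To make the "traceable in principle" point concrete, here's a minimal sketch: one-parameter gradient descent where every single update is recorded. For a real model the trace would be astronomically large, which is the actual obstacle; nothing about the steps themselves is magic.

```python
def loss(w):
    # Toy objective: minimize (w - 3)^2, so the optimum is w = 3.
    return (w - 3) ** 2

def grad(w):
    return 2 * (w - 3)

w, lr, trace = 0.0, 0.1, []
for step in range(50):
    g = grad(w)
    trace.append((step, w, g))  # record every discrete decision point
    w -= lr * g                 # one fully deterministic weight update

print(round(w, 3))  # converges to ~3.0, with all 50 steps on record
```

Every entry in `trace` is a complete, replayable account of what the optimizer did and why. Scale that to billions of parameters and millions of steps and you get a storage and speed problem, not an in-principle untraceability.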
Computers, even quantum computers, work through billions of discrete, traceable steps each second. AI still needs discrete inputs and discrete goal/optimization math to discern good output from bad, even if we choose not to track each step in between.
Put as simply as possible: you cannot duct tape infinitely many Speak & Spells together and spontaneously create an intelligence, and that is effectively what current AI is doing in ever increasing amounts. We're brute forcing it by throwing ever increasing resources at it, with rare and minor improvements in the underlying math occurring at a far slower rate. The Nvidia chip thing just improves chips' ability to do the math we're already doing even faster, so... more brute forcing.
Edit: Also, Nvidia is making more money than they ever have riding this hype train. Of course they're going to push the idea that absurd leaps of progress are right around the corner, and that their products will get us there. They are the best in the market right now, but anything beyond that is pure conjecture to help drive sales. Their chips are not fundamentally doing anything new, just the same things more efficiently.