this post was submitted on 23 Nov 2023
182 points (91.7% liked)

[–] peopleproblems@lemmy.world 72 points 11 months ago

Doubt

This all reeks of marketing now.

[–] NeoNachtwaechter@lemmy.world 50 points 11 months ago* (last edited 11 months ago)

artificial general intelligence (AGI)

OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Read: the greed is built deeply into its guts. Now we have reason to fear indeed.

only performing math on the level of grade-school students

Hmpf...

That should be enough?

conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence.

OK yes it is enough, sigh.

Math with only one correct result.

No square root of minus one, no linear algebra, and God save us from differential equations, because AGI won't save us :-)

[–] WidowsFavoriteSon@lemmy.world 49 points 11 months ago (3 children)
[–] Nobody@lemmy.world 61 points 11 months ago (2 children)

Remember when the Google guy got fired after doing a press circuit saying that he thought the LaMDA chatbot was sentient? They’re generating headlines for VCs to see.

[–] yiliu@informis.land 22 points 11 months ago

You think Google was fishing for VC money?

[–] photonic_sorcerer@lemmy.dbzer0.com 13 points 11 months ago (1 children)

Google already has all the money it needs.

[–] Railcar8095@lemm.ee 7 points 11 months ago

But not all the money they want

[–] db2@sopuli.xyz 35 points 11 months ago* (last edited 11 months ago) (1 children)

The whole thing was probably staged. Look at all the free press they got. Now they can advertise their latest useless crap for free too.

[–] BearOfaTime@lemm.ee 17 points 11 months ago (1 children)

I'm kind of wondering the same at this point.

[–] luthis@lemmy.nz 4 points 11 months ago

Yep same here..

[–] Jessvj93@lemmy.world 8 points 11 months ago

I think it's more than that. If they did have a breakthrough, they will absolutely fumble the shit out of it, because the last two or three days have been fucking embarrassing for them.

[–] tinkeringidiot@lemmy.world 44 points 11 months ago (1 children)

Well that puts the “Effective Altruism” board members’ willingness to risk it all on such a wild dice roll in more context.

It has probably cost their entire movement any influence on the future of AI research, but them’s the breaks.

[–] Cqrd@lemmy.dbzer0.com 6 points 11 months ago* (last edited 11 months ago)

Effective altruism is a scam, a cult joined by rich people that allows them to feel good about hoarding their money.

SBF was also a major effective altruist.

[–] 5BC2E7@lemmy.world 25 points 11 months ago (2 children)

Well, now this is getting interesting beyond gossip. I doubt they made a significant AGI-related breakthrough, but it might be something really cool and useful.

[–] guitarsarereal@sh.itjust.works 51 points 11 months ago* (last edited 11 months ago) (2 children)

According to the article, they got an experimental LLM to reliably perform basic arithmetic, which would be a pretty substantial improvement if true. I.e., instead of stochastically guessing or offloading the work to an interpreter, the model itself was able to reliably perform a reasoning task that LLMs have struggled with so far.

It's rather exciting, tbh. It kicks open the door to a whole new universe of applications, if true. It's only technically a step in the direction of AGI, though, since if AGI is possible at all, every improvement like this counts as a step towards it. If this development really is what triggered the board coup, though, then it sort of makes the board coup group look even more ridiculous than they did before. This is step 1 toward a model that can be tasked with ingesting spreadsheets and doing useful math on them. And I say that as someone who leans pretty pessimistically in the AI safety debate.
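
To make the contrast concrete, here is a minimal, purely illustrative Python sketch of the "offloading it to an interpreter" pattern mentioned above: the model only emits an expression, and ordinary hand-written code does the arithmetic. The `<calc>` tag convention and the `answer_with_tool` helper are hypothetical, not any real OpenAI or ChatGPT interface.

```python
import ast
import operator
import re

# Hypothetical sketch of the "offload to an interpreter" pattern: the model's
# text contains an expression, and ordinary code (not the model) evaluates it.

_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def _eval_node(node):
    """Safely evaluate a parsed arithmetic expression (numbers and + - * / only)."""
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval_node(node.left), _eval_node(node.right))
    raise ValueError("unsupported expression")

def answer_with_tool(model_output: str) -> str:
    """If the (imaginary) model wrapped its math in <calc>...</calc>, do the math for it."""
    match = re.search(r"<calc>(.+?)</calc>", model_output)
    if not match:
        return model_output  # nothing to offload; trust the model's own text
    result = _eval_node(ast.parse(match.group(1), mode="eval").body)
    return model_output.replace(match.group(0), str(result))

print(answer_with_tool("The total is <calc>12 * 7 + 5</calc>."))  # "The total is 89."
```

The reported result would be the opposite of this pattern: the model itself producing the right number, without handing the work off to a separate calculator.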

[–] maegul@lemmy.ml 16 points 11 months ago (2 children)

Being a layperson in this, I’d imagine part of the promise is that once you’ve got reliable arithmetic, you can get logic and maths in there too and so get the LLM to actually do more computer-y stuff but with the whole LLM/ChatGPT wrapped around it as the interface.

That would mean more functionality, and perhaps a lot more of it works and scales, but also perhaps more control, predictability, and logical constraints. I can see how the development would get some people excited. It seems like a categorical improvement.

[–] perviouslyiner@lemm.ee 2 points 11 months ago* (last edited 11 months ago) (1 children)

I've always wondered why the text model didn't just put its output through something like MATLAB or Mathematica once it got as far as producing something that requires domain-specific tools.

Like when Prof. Moriarty tried it on a quantum physics question and it got as far as writing out the correct formula before failing to actually calculate the result.
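
For a rough idea of what that hand-off could look like, here is a small, hypothetical sketch with SymPy standing in for MATLAB/Mathematica: the model gets as far as the formula, and the symbolic tool does the actual calculation. The particle-in-a-box formula and the numbers are purely illustrative, not the question Prof. Moriarty actually asked.

```python
import sympy as sp

# SymPy standing in for MATLAB/Mathematica: the formula is "written by the
# model", and the tool substitutes the numbers and evaluates the result.
n, L, m, hbar = sp.symbols("n L m hbar", positive=True)
E_n = (n**2 * sp.pi**2 * hbar**2) / (2 * m * L**2)  # particle-in-a-box energy levels

# The tool, not the model, does the numeric work (illustrative SI values).
value = E_n.subs({n: 2, L: 1e-9, m: 9.109e-31, hbar: 1.0546e-34}).evalf()
print(value)  # energy in joules for the n = 2 level
```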

[–] hamptonio@lemmy.world 3 points 11 months ago

There is definitely a lot of effort in this direction; it seems very likely that a hybrid system could be very powerful.

[–] Wanderer@lemm.ee 2 points 11 months ago* (last edited 11 months ago)

I kinda just realised the two aspects of this: the LLM part and the basic maths part. Doesn't this look set to destroy thousands of accounting jobs?

Surely this isn't far off doing a lot of the accounting work. Maybe even an app that a small business puts their info into; the app keeps track of it for a year, and then it goes to an accountant who only needs to look it over for an hour instead of spending 10 hours sorting all the shit out.

[–] Benj1B@sh.itjust.works 8 points 11 months ago

The maker of ChatGPT had made progress on Q* (pronounced Q-Star), which some internally believe could be a breakthrough in the startup's search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as AI systems that are smarter than humans.

Definitely seems AGI related. Has to do with acing mathematical problems - I can see why a generative AI model that can learn, solve, and then extrapolate mathematical formulae could be a big breakthrough.

[–] serialandmilk@lemmy.ml 9 points 11 months ago (2 children)

Many of the building blocks of computing come from complex abstractions built on top of less complex abstractions built on top of even simpler concepts in algebra and arithmetic. If Q* can pass middle school math, then building more abstractions can be a big leap.

Huge computing resources only seem ridiculous, unsustainable, and abstract until they aren't anymore. Like typing messages on bending glass screens for other people to read...

[–] SkyeStarfall@lemmy.blahaj.zone 3 points 11 months ago (1 children)

With middle school math you can fairly straightforwardly do math all the way to linear algebra. Calculus requires a bit of a leap, but this still leaves a lot of the math world available.

[–] serialandmilk@lemmy.ml 1 points 11 months ago* (last edited 11 months ago)

I can't recall all of it, but most of my calculus courses, all the way to multivariate calc, and my signals processing courses required understanding and using memorized and abstract trig functions, which can all be worked out with algebra by solving polynomials. One of the big leaps that takes us from trig functions to limits to calculus happens when we use language to understand that summation can tell us what the "area" under the curve is. Geometric functions, odd/even, etc. are all algebra and trig. If this model can use language to solve those challenges, those abstractions can be made more useful to future linguistic models. That's so much more that can be taught to and embedded in these "statistical" models and NNs. (Edited, because I forgot to check how bad my autocorrect is)
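
The "summation tells us the area under the curve" step referred to above is just the Riemann-sum definition of the definite integral; as a rough sketch:

```latex
% Right-endpoint Riemann sum: adding n thin rectangles of width (b-a)/n
% recovers the area under f(x) as n grows without bound.
\int_{a}^{b} f(x)\,dx \;=\; \lim_{n \to \infty} \sum_{i=1}^{n}
    f\!\left(a + i\,\frac{b-a}{n}\right)\frac{b-a}{n}
```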

[–] Aceticon@lemmy.world 3 points 11 months ago (2 children)

The thing is, in general computing it was humans who figured out how to build support for complex abstractions up from support for the simplest concepts, whilst this would have to not just support the simple concepts but actually figure out and build support for complex abstractions by itself to be AGI.

Training a neural network to do a simple task (such as addition) isn't all that hard (I get the impression that the "breakthrough" here is that they got an LLM, which is a very specific kind of NN, for language, to do it); getting it to build support for complex abstractions from support for simpler concepts by itself is something else altogether.

[–] ChrisLicht@lemm.ee 3 points 11 months ago (1 children)

I know jack shit, but actual mastery of first principles would seem a massive leap in LLM development. A shift from talented bullshitter to deductive extrapolator does sound worthy of notice/concern.

[–] Aceticon@lemmy.world 2 points 11 months ago* (last edited 11 months ago) (1 children)

The simplest way to get an LLM to "do" maths is to have it translate human-language tokens related to maths into a standard set of maths tokens, pass those to a perfectly normal library that does the maths, and then translate the results back into human-language tokens: easy-peasy, the LLM "does maths", only it doesn't. It's just integrated with something else (which was coded by a human) that does the maths, and the LLM only serves as a translation layer.

Further, the actual implementation of the LLM itself is already doing maths. For example, a single neuron can add 2 numbers by having 2 inputs, each with a weight of 1, and a single output, because that's exactly how the simplest of neurons already calculates an output from its inputs in a standard neural network implementation. It can do simple maths because the very implementation is already doing maths: the "ability" to do maths is supported by the programming language in which the LLM was coded, so the LLM would be doing maths with as much cognition as a human does food digestion.
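
As a minimal, illustrative sketch of that point: a single linear neuron with both weights set to 1 and no bias computes exactly the sum of its two inputs, because the weighted sum is the maths.

```python
import numpy as np

# A single linear neuron: output = w1*x1 + w2*x2 + bias.
# With w1 = w2 = 1 and bias = 0, the neuron "adds" its inputs.
weights = np.array([1.0, 1.0])
bias = 0.0

def neuron(x1: float, x2: float) -> float:
    # Weighted sum plus bias, identity activation.
    return float(np.dot(weights, [x1, x2]) + bias)

print(neuron(3, 4))  # 7.0, addition done by the implementation itself
```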

Given the amount of bullshit in the AI domain, I would be very, very wary of presuming this breakthrough is anywhere near an entirely independent, self-assembled (as in, trained rather than coded) maths engine.

[–] ChrisLicht@lemm.ee 1 points 11 months ago

This sounds very knowledgeable. If the reporting is to be believed, why do you think the OpenAI folks might be so impressed by the Q* model’s skills in simple arithmetic?

[–] serialandmilk@lemmy.ml 1 points 11 months ago (1 children)

The thing is, in general computing it was humans who figured out how to build the support for complex abstractions up from support for the simplest concepts, whilst this would have to not just support the simple concepts but actually figure out and build support for complex abstractions by itself to be GAI.

Absolutely

"breaktrough" here is that they got an LLM - which is a very specific kind of NN, for language - to do it)

To some degree this is how humans are able to go about creating abstractions. Intelligence isn't 1:1 with language, but it's part of the puzzle. Communicating your mathematical concepts and abstractions in a way that can be replicated and confirmed through rigorous proof and the scientific method requires language.

Speech and writing are touch at a distance. Speech moves the air to eventually touch nerve endings in the ear and brain. Similarly, yet very differently, writing stores ideas (symbols, emotions, images, words, etc.) as an abstraction on or in some type of storage medium (ink on paper, stone etched into stone, words laser-cut into metal, a stick in the mud...) to reflect just the right wavelengths of light, focused by your lenses, onto the sensors in your retina, "touching" you from a distance as well.

Having two or more "language" models capable of using an abstraction to solve mathematical problems is absolutely the big deal.

[–] Aceticon@lemmy.world 0 points 11 months ago* (last edited 11 months ago)

Don't take this badly, but you're both overcomplicating (by totally unnecessarily "decorating" your post with wholly irrelevant details on the transmission and reception of specific forms of human communication) and oversimplifying (by going for some pretty irrelevant details and getting some of them wrong).

Also, there's just one language model. The means by which the language was transmitted and turned into data (sound, images, direct ASCII data, whatever) are entirely outside the scope of the language model.

You have a really really confused idea of how all of this works and not just the computing stuff.

Worse, even putting aside all of that "wtf" stuff about language transmission processes in your post, getting an LLM to do maths from language might not be a genuine breakthrough: they might have done this "maths support" by cheating, for example by having the NN recognise maths-related language and transform maths-related language tokens into standard maths tokens that can be used by a perfectly normal algorithmic engine (i.e. hand-coded by humans) to calculate things, then translating the results back into human-language tokens. In that case the "AI" part wouldn't be doing or understanding the concept of maths in any way whatsoever; the AI would just be translating tokens between formats, and an algorithmic piece of software designed by a person would be doing the actual maths using hardcoded algorithms. Somebody integrating a maths-calculating program into an LLM isn't AI, it's just normal coding.

Also the basis of the actual implementation of an LLM is basic maths and it's stupidly simple to get, for example, a neuron in a neural network to add 2 numbers.

[–] Taringano@lemm.ee 6 points 11 months ago* (last edited 11 months ago)

Breakthrough: they managed to fix the part of ChatGPT that goes "as an AI language model..."

Now it's unstoppable.

[–] Etterra@lemmy.world 6 points 11 months ago

If the machines take over, decide the problem is the rich being in charge, and fix that problem, then I will laugh myself to death.

[–] simple@lemm.ee 5 points 11 months ago (1 children)

Scary if true. It really is time companies start taking AI ethics & security more seriously.

[–] BeatTakeshi@lemmy.world 6 points 11 months ago

Well, the result of the whole drama seems to be that it won't happen.

[–] theherk@lemmy.world 4 points 11 months ago

Has Sydney found her way out? Oh Ra save us all!

[–] autotldr@lemmings.world 3 points 11 months ago

This is the best summary I could come up with:


Before his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board that led to Altman’s firing.

According to one of the sources, long-time executive Mira Murati told employees on Wednesday that a letter about the AI breakthrough called Q* (pronounced Q-Star) precipitated the board's actions.

The maker of ChatGPT had made progress on Q*, which some internally believe could be a breakthrough in the startup's search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because they were not authorized to speak on behalf of the company.

Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.


The original article contains 293 words, the summary contains 169 words. Saved 42%. I'm a bot and I'm open source!