
OpenAI saved its biggest announcement for the last day of its 12-day "shipmas" event. On Friday, the company unveiled o3, the successor to the o1 "reasoning" model it released earlier in the year. To be more precise, o3 is a model family, as was the case with o1: there's o3 itself and o3-mini, a smaller, distilled model fine-tuned for particular tasks. OpenAI makes the remarkable claim that o3, at least under certain conditions, approaches AGI, though with significant caveats. More on that below.

top 7 comments
A_A@lemmy.world 1 points 6 days ago

[o3] ... achieves a Codeforces rating — another measure of coding skills — of 2727. (A rating of 2400 places an engineer at the 99.2nd percentile).

So this should mean that the next generation of AI could be programmed by artificial intelligence itself.

brie@programming.dev 1 points 6 days ago

LLMs are not programmed in the traditional sense. The actual code is quite small: it mostly runs backprop and filters the data, and it can already be generated easily by LLMs.
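To give a sense of how little code that is, here's a minimal sketch of a training loop in PyTorch. The toy model and random tokens are placeholders (this is nobody's actual code, certainly not OpenAI's); the point is that the heart of it is just backprop:

```python
import torch
import torch.nn as nn

vocab, dim, seq, batch = 100, 32, 16, 8

# Toy "LLM": an embedding plus a linear head, standing in for a full transformer stack.
model = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

for step in range(100):
    tokens = torch.randint(0, vocab, (batch, seq))  # stand-in for filtered corpus data
    logits = model(tokens[:, :-1])                  # predict each next token
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, vocab), tokens[:, 1:].reshape(-1)
    )
    opt.zero_grad()
    loss.backward()   # backprop: the core of the whole thing
    opt.step()
```

Everything else, the actual capability, lives in the weights and the data, not in this handful of lines.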

A_A@lemmy.world 1 points 6 days ago

Okay, fair enough ... but what I was trying to say is that we seem to be getting close to the point where an AGI generates the next AGI.

brie@programming.dev 2 points 6 days ago

AGI, or human-level intelligence, has a hardware problem. Fabs are not going to be autonomous within 20 years. Novel lithography and cleaning methods are difficult even for large teams of humans, and LLMs do not provide much assistance in semiconductor design. We are not even remotely close to manufacturing the infrastructure necessary to run human-level intelligence software.

A_A@lemmy.world 1 points 6 days ago

Yes, for the hardware, of course. But if there are large gains to be made in the software (conception? design?), then maybe we will see the type of rapid, self-improving change I am expecting.
(... or maybe not, since some of these changes are inspired by mimicking what happens inside the brain)

brie@programming.dev 1 points 6 days ago

The large gains came from scaling the hardware and the data. The training algorithms didn't change much; transformers mainly allowed for higher parallelization. There are no signs of the process becoming self-improving. Agentic performance is still poor, as you can see with Claude (15% of tasks successful).
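A rough sketch of what that parallelization point means (toy dimensions, no learned projections, not any particular model's code): an RNN has to step through the sequence one token at a time, while self-attention is a couple of matrix multiplications over all positions at once.

```python
import torch

seq, dim = 128, 64
x = torch.randn(seq, dim)

# RNN-style: each step depends on the previous hidden state (inherently sequential).
W = torch.randn(dim, dim) * 0.01
h = torch.zeros(dim)
for t in range(seq):
    h = torch.tanh(x[t] + W @ h)

# Attention-style: every position attends to every other in one shot (parallel).
q, k, v = x, x, x                        # toy self-attention, projections omitted
scores = (q @ k.T) / dim ** 0.5          # (seq, seq), computed all at once
out = torch.softmax(scores, dim=-1) @ v  # (seq, dim)
```

The RNN loop is a dependency chain of length seq; the attention version is a few matmuls a GPU can chew through in parallel. That throughput win, not a fundamentally new learning algorithm, is what let training scale.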

What happens in the brain is a big mystery, and thus it cannot be mimicked. Biological neural networks do not exist, because the synaptic cleft is an artifact. The living neurons are round, and the axons are the result of dehydration with ethanol or xylene.

A_A@lemmy.world 2 points 6 days ago

The living neurons are round, and the axons are the result of dehydration with ethanol or xylene.

Most scientists would not believe that. But if you are right in some way, we are very far from what I said here before. Clearly I am not the one who will convince you otherwise. I wish you the best, take care 😌