Greg

joined 2 years ago
[–] Greg@lemmy.ca 44 points 2 days ago (1 children)

I'm not sure about -15°C, but don't operate a freezer at -15 K as it violates the fundamental laws of thermodynamics.

[–] Greg@lemmy.ca 8 points 3 days ago

I had a really good friend on MySpace that I lost touch with. I think he was a little paranoid, we didn't speak much and he was always looking over his shoulder. His name was Tom.

[–] Greg@lemmy.ca 4 points 3 days ago (1 children)

That is a very VC-baiting title. But it doesn't appear from the abstract that they're claiming LLMs will develop to the complexity of AGI.

[–] Greg@lemmy.ca 3 points 4 days ago

Do you have a non-paywalled link? And is that quote in relation to LLMs specifically or AI generally?

[–] Greg@lemmy.ca 65 points 4 days ago (7 children)

largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence

Who said that LLMs were going to become AGI? LLMs as part of an AGI system make sense, but not LLMs alone becoming AGI. Only articles and blog posts from people who didn't understand the technology were making those claims, which helped feed the hype.

I 100% agree that we're going to see an AI market correction. It's going to take a lot of hard human work to achieve the real value of LLMs. The hype is distracting from the real valuable and interesting work.

[–] Greg@lemmy.ca 42 points 6 days ago

This is 100% OP's weird fetish but props to the hustle

[–] Greg@lemmy.ca 9 points 1 week ago

This is true on algorithmic platforms that reward "engagement" of any kind. I didn't get that sense from pre-2010 Facebook or Twitter, and I don't get that sense on fediverse platforms.

[–] Greg@lemmy.ca 7 points 1 week ago

I'm not defending Sam Altman or the AI hype. A framework that uses an LLM isn't an LLM and doesn't have the same limitations. So the accurate media coverage that LLMs may have reached a plateau doesn't mean we won't see continued performance gains in frameworks that use LLMs. OpenAI's o1 is an example. o1 isn't an LLM; it's a framework that augments some of the deficiencies of LLMs with other techniques. That's why it doesn't give you an immediate streamed response when you use it: it's not just an LLM.

[–] Greg@lemmy.ca 37 points 1 week ago (2 children)

And they have a German accent. I can tell from the way they type.

[–] Greg@lemmy.ca 2 points 1 week ago

That's not Sam Altman saying that LLMs will achieve AGI. LLMs are large language models; OpenAI is continuing to develop LLMs (like GPT-4o), but they're also working on frameworks that use LLMs (like o1). Those frameworks may achieve AGI, but not the LLMs themselves. And this is a very important distinction, because LLMs are reaching performance parity, so we are likely reaching a plateau for LLMs given the existing training data and techniques. There are still optimizations for LLMs, like increasing context window sizes etc.

[–] Greg@lemmy.ca 3 points 1 week ago* (last edited 1 week ago) (6 children)

When has Sam Altman said LLMs will reach AGI? Can you provide a primary source?

[–] Greg@lemmy.ca 7 points 1 week ago* (last edited 1 week ago)

I'm developing some human-centric LLM frameworks at work. Every API request to OpenAI is currently subsidized by venture capital. I do worry about what the industry will look like once there is a big price adjustment. Locally run models are pretty decent now and the pace is still moving forward, especially with regard to context window sizes, so as long as I keep the frameworks model-agnostic it might not have a big impact.
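A minimal sketch of what "model-agnostic" can mean in practice: framework code depends only on a small interface, so swapping OpenAI for a local model is a one-line change. All names here (`ChatModel`, `EchoModel`, `summarize`) are hypothetical illustrations, not any vendor's actual API.

```python
from abc import ABC, abstractmethod

class ChatModel(ABC):
    """Provider-agnostic interface; backends can be swapped without touching framework code."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class EchoModel(ChatModel):
    """Stand-in local model, useful for testing the framework offline."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def summarize(model: ChatModel, text: str) -> str:
    # Framework logic only sees the ChatModel interface, never a vendor SDK.
    return model.complete(f"Summarize: {text}")

print(summarize(EchoModel(), "fediverse comment"))
```

The same `summarize` call would work unchanged with a class wrapping OpenAI, Ollama, or anything else that implements `complete`.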

 
18
VFX1 Headgear (en.m.wikipedia.org)

I think this is ok because my fish is omnivorous

 

This oughta get things going...
