Technology
Which posts fit here?
Anything that is at least tangentially connected to technology: social media platforms, information technologies, and tech policy.
Rules
1. English only
The title and associated content have to be in English.
2. Use original link
The post URL should be the original link to the article (even if paywalled), with archived copies left in the body. This helps avoid duplicate posts when cross-posting.
3. Respectful communication
All communication has to be respectful of differing opinions, viewpoints, and experiences.
4. Inclusivity
Everyone is welcome here regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
5. Ad hominem attacks
Personal attacks of any kind are expressly forbidden. If you can't argue your position without attacking a person's character, you've already lost the argument.
6. Off-topic tangents
Stay on topic. Keep it relevant.
7. Instance rules may apply
If something is not covered by the community rules but is against the lemmy.zip instance rules, the instance rules will be enforced.
Companion communities
!globalnews@lemmy.zip
!interestingshare@lemmy.zip
Icon attribution | Banner attribution
If someone is interested in moderating this community, message @brikox@lemmy.zip.
Do you think they have unreleased models in the labs that are more sentient and less LLM-like, or do you think we aren't near that level of tech yet?

I often wonder what it would be like if they stuck one into the quantum chip Google has, or whether they've already tried that.
I personally don't think so. I also read these regular news articles claiming OpenAI has clandestinely achieved AGI, or that their models have developed sentience and they're just keeping that from us. It certainly helps increase the value of their company. But I think that's a conspiracy theory.

Every time I try ChatGPT or Claude or whatever, I see how it's not that intelligent. It certainly knows a lot of facts, and it's very impressive. But it also often fails at helping me with more complicated emails, coding tasks, or just summarizing text correctly. I don't see how it's on the brink of AGI, if that's the public variant.

Sure, they're probably not telling the whole truth. They have lots of bright scientists working for them, and they like some things to stay behind closed curtains, most likely how they violate copyright. But I don't think they're that far ahead. They could certainly make a lot of money by increasing the usefulness of their product, and it seems to me like it's stagnating. The reasoning ability is huge progress, but it still doesn't solve a lot of issues. And I'm pretty sure we'd have ChatGPT 5 by now if it were super easy to scale it and make it more intelligent.
Plus, it's only been two weeks since a smaller (Chinese) startup proved that other entities can compete with the market leader, and do it far more efficiently.
So I think there's a lot of circumstantial evidence leading me to believe they aren't far ahead of what other people are doing. And we have academic researchers and workgroups working on it and publishing their results publicly, so we have a rough idea of the issues they're facing and where AI progress is struggling. A lot of those issues are really hard to solve. I think it's going to take some time until we arrive at AGI, and I think it will require a fundamentally different approach than the current model design.