Futurology, 24 Jul 2024

[–] CanadaPlus@lemmy.sdf.org 1 points 3 months ago* (last edited 3 months ago) (1 children)

I really suspect this is how the plateau of productivity will look for machine learning: it will be all about building synergy with conventional algorithms, which can provide the rigour, transparency, and reproducibility that trained models can't. Sometimes computational efficiency too, although maybe not in this case.
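
To make that synergy idea concrete, here's a minimal toy sketch (illustrative only, not from the thread): an untrusted "model" proposes candidates cheaply, and a conventional algorithm does the exact, reproducible verification. The names `propose_candidates`, `verify`, and `find_divisor` are hypothetical stand-ins, and the "model" here is just a random guesser standing in for whatever trained component you'd actually use.

```python
import random

def propose_candidates(n, k=20, seed=0):
    """Stand-in for a trained model: cheap, unreliable guesses at divisors of n.
    In a real system this would be an ML model whose output you don't trust."""
    rng = random.Random(seed)
    return [rng.randrange(2, n) for _ in range(k)]

def verify(n, candidate):
    """Conventional algorithm: an exact, transparent, reproducible check."""
    return n % candidate == 0

def find_divisor(n):
    # The learned component only narrows the search;
    # correctness rests entirely on the deterministic verifier.
    for c in propose_candidates(n):
        if verify(n, c):
            return c
    return None  # guesses missed; a real system would fall back to exhaustive search

if __name__ == "__main__":
    # Prints a divisor of 91 (7 or 13) if the "model" happened to guess one, otherwise None.
    print(find_divisor(91))
```

The rigour and reproducibility live in the verifier, not the model, which is the point of pairing the two.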

[–] Lugh@futurology.today 2 points 3 months ago (1 children)

> I really suspect this is how the plateau of productivity will look for machine learning.

It seems finding more data to scale up LLMs is a bottleneck too.

[–] CanadaPlus@lemmy.sdf.org 2 points 3 months ago

Yeah, that's part of why I think that. There's also just the alignment issue that no amount of training will fix. At the end of the day, an LLM is a very smart internet simulator; you treat it as anything else at your peril, and training it to be something else is very much an open problem.