this post was submitted on 30 Dec 2024
1526 points (97.0% liked)

[–] AnarchistArtificer@slrpnk.net 2 points 1 week ago (1 children)

That's really neat, thanks for sharing that example.

In my field (biochemistry), there are also quite a few truly awesome use cases for LLMs and other machine learning stuff, but I have been dismayed by how the hype train on AI stuff has been working. Mainly, I just worry that the overhyped nonsense will drown out the legitimately useful stuff, and that the useful stuff may struggle to get coverage/funding once the hype has burnt everyone out.

[–] fine_sandy_bottom@discuss.tchncs.de 4 points 1 week ago (1 children)

I suspect that this is "grumpy old man" type thinking, but my concern is the loss of fundamental skills.

As an example, like many other people I've spent the last few decades developing written communication skills, emailing clients regarding complex topics. Communication requires not only an understanding of the subject, but an understanding of the recipient's circumstances, and the likelihood of the thoughts and actions that may arise as a result.

Over the last year or so I've noticed my assistants using LLMs to draft emails, with deleterious results. In many cases this use reduces my thinking, feeling, experienced and trained assistant to an automaton regurgitating words from publicly available references. The usual response to this concern is that my assistants are using the tool incorrectly, which is certainly the case, but my argument is that the use of the tool precludes the expenditure of the time and effort required to really learn.

Perhaps this is a kind of circular argument, like asking why kids need to learn handwriting when nothing needs to be handwritten.

It does seem as though we're on a trajectory towards stupider professional services, though, where my bot emails your bot, which replies, and after n iterations maybe they've figured it out.

[–] AnarchistArtificer@slrpnk.net 2 points 1 week ago

Oh yeah, I'm pretty worried about that from what I've seen in biochemistry undergraduate students. I was already concerned about how little structured support in writing science students receive, and I'm seeing a lot of over-reliance on ChatGPT.

With emails and the like, I find that I struggle with the pressure of a blank page/screen, so rewriting a mediocre draft is immensely helpful. But that strategy is only viable if you're prepared to go in and do some heavy editing. If people were honing their editing skills, it might not be so bad, but I have been seeing lots of output with the unmistakable ChatGPT tone.

In short, I think it is definitely "grumpy old man" thinking, but that doesn't mean it's not valid (I say this as someone who is probably too young to be a grumpy old crone yet).