
It just feels too good to be true.

I'm currently using it for formatting technical texts and it's amazing. It doesn't generate them properly on its own, but if I give it the bulk of the info, it makes it pretty af.

Also just talking to it and asking for advice on the most random kinds of issues. It gives seriously good advice. But it makes me worry about whether I'm volunteering my personal problems and innermost thoughts to a company that will misuse them.

Are these concerns valid?

[–] sub_@beehaw.org 7 points 1 year ago (1 children)

https://www.techradar.com/news/samsung-workers-leaked-company-secrets-by-using-chatgpt

I've never used ChatGPT, so I don't know if there's an offline version. I assume everything you type in is in turn used to train the model, so using it will probably leak sensitive information.

Also, from what I've read, the replies are convincing but can sometimes be very wrong, so if you're relying on it for machinery, medical stuff, etc., it could end up being fatal.

[–] lloram239@feddit.de 3 points 1 year ago* (last edited 1 year ago)

> I’ve never used ChatGPT, so I don’t know if there’s an offline version.

There is no offline version of ChatGPT itself, but many competing LLMs are available to run locally, e.g. Facebook just released Llama 2, and llama.cpp is a popular way to run those models. The smaller models work reasonably well on modern consumer hardware; the bigger ones, less so.
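For anyone curious, here's a minimal sketch of what running one of these locally can look like, using the llama-cpp-python bindings for llama.cpp (the model path, prompt, and parameters below are just placeholders, not an official setup; you have to download a quantized model file yourself first):

```python
# Rough sketch: run a local Llama 2 model via the llama-cpp-python bindings.
# Assumes `pip install llama-cpp-python` and a quantized model file already
# downloaded to disk (the path below is a placeholder).
from llama_cpp import Llama

# Load the model from the local file; nothing is sent to any server.
llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf")

# Prompt it much like you would prompt ChatGPT.
output = llm(
    "Q: Why does running an LLM locally help with privacy? A:",
    max_tokens=128,
    stop=["Q:"],
)

print(output["choices"][0]["text"])
```

Since everything runs on your own machine, nothing you type gets shipped off to a company's servers, which sidesteps the training-data concern entirely, at the cost of needing reasonably beefy hardware for the larger models.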

> but could sometimes be very wrong

They are mostly correct when you stay within the bounds of the training material, and complete fiction when you go outside of it or try to dig too deep (e.g. a summary of a popular movie will be fine, specific lines of dialogue will be made up, and a summary of a less popular movie might be complete fiction).