Actually Useful AI
Welcome! 🤖
Our community focuses on programming-oriented, hype-free discussion of Artificial Intelligence (AI) topics. We aim to curate content that truly contributes to the understanding and practical application of AI, making it, as the name suggests, "actually useful" for developers and enthusiasts alike.
Be an active member!
We highly value participation in our community. Whether it's asking questions, sharing insights, or sparking new discussions, your engagement helps us all grow.
What can I post?
In general, anything related to AI is acceptable. However, we encourage you to strive for high-quality content.
What is not allowed? 🚫
- Sensationalism: "How I made $1000 in 30 minutes using ChatGPT - the answer will surprise you!"
- ♻️ Recycled Content: "Ultimate ChatGPT Prompting Guide" that is the 10,000th variation on "As a (role), explain (thing) in (style)"
- Blogspam: Anything the mods consider crypto/AI bro success porn sigma grindset blogspam
General Rules
Members are expected to engage in on-topic discussions, and exhibit mature, respectful behavior. Those who fail to uphold these standards may find their posts or comments removed, with repeat offenders potentially facing a permanent ban.
While we appreciate focus, a little humor and off-topic banter, when tasteful and relevant, can also add flavor to our discussions.
Related Communities
General
- !Artificial@kbin.social
- !artificial_intel@lemmy.ml
- !singularity@lemmy.fmhy.ml
- !ai@kbin.social
- !ArtificialIntelligence@kbin.social
- !aihorde@lemmy.dbzer0.com
Chat
Image
Open Source
Please message @sisyphean@programming.dev if you would like us to add a community to this list.
Icon base by Lord Berandas under CC BY 3.0 with modifications to add a gradient
I do not use AI to solve programming problems.
First, LLMs like ChatGPT often produce incorrect answers to particularly difficult questions, but still seem completely confident in their answer. I don't trust software that would rather make something up than admit that it doesn't know the answer. People can make mistakes, too, but StackOverflow usually pushes the correct answer to the top through community upvotes.
Second, I rarely ask questions on StackOverflow. Most of the time, if I search for a few related keywords, Google will find an SO thread with the answer. This is much faster than writing an SO question and waiting for people to answer it; and it is also faster than explaining the question to ChatGPT.
Third, I'm familiar enough with the languages I use that I don't need help with simple questions anymore, like "how to iterate over a hashmap" or "how to randomly shuffle an array". The situations where I could use help are often so complicated that an LLM would probably be useless. Especially for large code bases, where the relevant code is spread across many files or even multiple repositories (e.g. a backend and a frontend), debugging the problem myself is more efficient than asking for help, be it an online community or a language model.
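The "simple questions" mentioned above really are one-liners in most languages. As a generic illustration (Python here, not tied to any particular codebase or language from the thread):

```python
import random

# "How to iterate over a hashmap": a dict in Python; .items() yields key/value pairs.
scores = {"alice": 3, "bob": 7}
for name, score in scores.items():
    print(name, score)

# "How to randomly shuffle an array": a list in Python; random.shuffle works in place.
items = [1, 2, 3, 4, 5]
random.shuffle(items)
print(items)  # same elements, arbitrary order
```

Questions at this level are exactly what a quick search (or muscle memory) answers instantly, which is the commenter's point.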
I might be taking over at a job for a friend who's leaving the country. Not programming, but IT and Sec.
I was concerned about my lack of exp.
They told me just to use ChatGPT 'cause that's what they do.
They don't even have .exe files blocked for users.
I'm now far more concerned about the state of the networks I'll be taking over. Going to be doing a full security audit as soon as I'm up to speed.
TT_TT
I was starting to think I was using LLMs wrong but you perfectly summarized my situation.
This is definitely my issue. I've experimented with LLMs for code generation, but more often than not the code will be unusable, and occasionally it will have grotesque practices like unused function parameters in it. As far as I can tell we are nowhere near an LLM capable of generating ethical code.
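For a concrete (and entirely hypothetical) illustration of the unused-parameter smell described above, here is a small Python function that accepts an argument it never touches; linters such as pylint flag this as `unused-argument` (W0613):

```python
def format_price(amount, currency, locale):
    # `locale` is accepted but never used anywhere in the body --
    # pylint would report W0613 (unused-argument) on it.
    return f"{amount:.2f} {currency}"

print(format_price(9.5, "USD", "en_US"))  # → 9.50 USD
```

Dead parameters like this suggest the code was pattern-matched from a fuller example rather than written for the actual requirements.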