This post was submitted on 28 Jul 2023
24 points (83.3% liked)

Actually Useful AI


After seeing this graph about the fall of Stack Overflow, it's clear to me that I'm not the only one who has stopped using it in favor of LLM (large language model) alternatives. With the rise of generative AI-powered initiatives like OverflowAI by Stack Overflow, and the potential of ChatGPT by OpenAI, the landscape of programming knowledge sharing is evolving. These AI models can generate human-like text and provide answers to programming questions. It's fascinating to witness the potential of AI as a game-changing tool in programming problem-solving. So, I'm curious: which AI have you replaced Stack Overflow with? Share your favorite LLM or generative AI platform for programming in the comments below!

top 20 comments
[–] aloso@programming.dev 15 points 1 year ago* (last edited 1 year ago) (3 children)

I do not use AI to solve programming problems.

First, LLMs like ChatGPT often produce incorrect answers to particularly difficult questions, yet still sound completely confident in those answers. I don't trust software that would rather make something up than admit it doesn't know the answer. People can make mistakes too, but StackOverflow usually pushes the correct answer to the top through community upvotes.

Second, I rarely ask questions on StackOverflow. Most of the time, if I search for a few related keywords, Google will find an SO thread with the answer. This is much faster than writing a SO question and waiting for people to answer it; and it is also faster than explaining the question to ChatGPT.

Third, I'm familiar enough with the languages I use that I don't need help with simple questions anymore, like "how to iterate over a hashmap" or "how to randomly shuffle an array". The situations where I could use help are often so complicated that an LLM would probably be useless. Especially for large code bases, where the relevant code is spread across many files or even multiple repositories (e.g. a backend and a frontend), debugging the problem myself is more efficient than asking for help, be it an online community or a language model.
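
For illustration, those two "simple questions" are one-liners in most languages; a quick sketch in Python (chosen arbitrarily as the example language):

```python
import random

# "How to iterate over a hashmap" - in Python, a dict
scores = {"alice": 3, "bob": 7}
for name, score in scores.items():
    print(name, score)

# "How to randomly shuffle an array" - shuffles the list in place
items = [1, 2, 3, 4, 5]
random.shuffle(items)
print(items)
```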

I might be taking over at a job for a friend who's leaving the country. Not programming, but IT and Sec.

I was concerned about my lack of exp.

They told me just to use ChatGPT 'cause that's what they do.

They don't even have .exe files blocked for users.

I'm now far more concerned about the state of the networks I'll be taking over. Going to be doing a full security audit as soon as I'm up to speed.

TT_TT

[–] Uplink@programming.dev 6 points 1 year ago

I was starting to think I was using LLMs wrong, but you perfectly summarized my situation.

This is definitely my issue. I've experimented with LLMs for code generation, but more often than not the code is unusable, and occasionally it contains grotesque practices like unused function parameters. As far as I can tell, we are nowhere near an LLM capable of generating idiomatic code.

[–] ImpossibleRubiksCube@programming.dev 14 points 1 year ago (1 children)

I prefer the classic method from the 90s. Open up a can of turpentine or varnish, breathe deep counting back from 100, then start explaining your problem loudly and clumsily to a brick wall until it starts talking back.

[–] regular_human@lemmy.world 2 points 1 year ago

Steve Ballmer? Is that you?

[–] CoderSupreme@programming.dev 10 points 1 year ago* (last edited 1 year ago)

I use Perplexity.ai; it uses ChatGPT + search and improves accuracy a lot. It only has 5 free uses of GPT-4 every 4 hours, but the normal ChatGPT + search is still better than any of the other LLMs I've tried.

[–] Txcfe@lemmy.sdf.org 4 points 1 year ago

ChatGPT Plus - GPT-4

[–] Whirlybird@aussie.zone 4 points 1 year ago

Bing AI works great for me. Right there in the browser at the click of a button in the side bar.

[–] Von_Broheim@programming.dev 3 points 1 year ago (1 children)

Initially I was very impressed by ChatGPT, but over the past few weeks I'm getting fed up with it. It completely ignores constraints I give it regarding the library versions I use. It dreams up insane, garbage answers to fairly simple prompts. For more complicated stuff it's even worse.

My current workflow is: try the top few Google results; if that fails, try ChatGPT for a few minutes; if that fails, go to the documentation and/or crawl through SO for a while; if that fails, ask on SO.

I tested some of the questions I asked on Stack Overflow against ChatGPT, and the answers on Stack Overflow were much better. So for real "I really am stuck here" sort of issues, I use SO.

[–] YaBoyMax@programming.dev 1 points 1 year ago* (last edited 1 year ago)

Out of curiosity, do you use ChatGPT Plus? I've found that GPT-4 is worlds better at solving programming problems and is much less likely to hallucinate (as long as the question isn't too obscure).

[–] varsock@programming.dev 3 points 1 year ago

There are many skills I'm an absolute beginner in. ChatGPT helps me drown out unnecessary noise and points me in the right direction. For things I'd consider myself proficient in, I have it write the tedious stuff for me. Love it for Makefile expansions.

[–] Stefh@programming.dev 3 points 1 year ago

ChatGPT and Bing AI

[–] shiveyarbles@beehaw.org 2 points 1 year ago

I noticed JetBrains ReSharper has an AI feature. I had fun asking it stupid questions, but I still use Stack, though.

[–] ctr1@fl0w.cc 2 points 1 year ago

GPT-4 in NeoVim. Definitely has taken the place of StackOverflow for me in most cases, but I still go there for especially difficult problems.

Would rather run something on my own hardware, but I'm waiting for other models to catch up

[–] theneverfox@pawb.social 1 points 1 year ago

That's not how I use it at all. I don't use it for things I can't do, because it can't either.

I use it for things I could easily do, but don't want to, or when I don't know enough about a topic to ask.

For example, I had it build me JSON of the top 100 Lemmy instances.

I also was having trouble customizing a markup renderer - it didn't know how to do it, it couldn't find anything on my situation, but I asked it how it would do it in a few different common libraries.

I still had to figure it out myself with some trial and error, but instead of spending a day diving into how parsing, tokenization, and rendering work, it showed me what a solution might look like and defined some terms for me in context.

Knowing what it looked like, I could guess what the library creator was thinking with the undocumented custom extension I saw in their code, and I quickly got traction

ChatGPT+, GitHub Copilot, and Copilot X. I'm still pretty green, so maybe I'm getting more out of them than a seasoned vet would. I just feel so much more productive, and I'm learning faster, since Copilot will offer 10 solutions and there always seems to be one that intrigues me with its novelty.

I like the conversational nature of ChatGPT, and I love that I'm not getting judged. I can ask the dumbest questions all day long and not sweat that it's going to cost me the next promotion. Not to say I don't reach out to people, but I keep the really dumb shit between me and GPT.

I use Codeium's Copilot-like tool in IntelliJ, plus ChatGPT-4 for the occasional quick question.

I still do all my own thinking about how to design and write the code, but for questions like "How do I convert a pandas DataFrame into a PySpark DataFrame", it's been great!
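
For reference, that particular conversion is basically a one-liner; a minimal sketch, assuming pandas and pyspark are installed and using a throwaway local SparkSession:

```python
import pandas as pd
from pyspark.sql import SparkSession

# Throwaway local session, just for the example
spark = SparkSession.builder.appName("pandas-to-spark").getOrCreate()

pdf = pd.DataFrame({"id": [1, 2, 3], "name": ["a", "b", "c"]})

# Convert the pandas DataFrame into a PySpark DataFrame
sdf = spark.createDataFrame(pdf)
sdf.show()
```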

One of my senior-engineer buddies likens it to working with a really fast Junior. You gotta vet all their code, but if you know what code to ask it for, it can save you some time in writing it.

I use AI to solve problems. Like any tool, it has limitations; e.g., complex systems cannot be completely described within the context length. As with any content (human or AI), the arguments should be considered critically and references checked.

That does mean I rarely take generated code as-is.

I replaced it with online docs, Github Issues, Reddit, and Stack Overflow.

Many languages/libraries/tools have great documentation now; 10 years ago this wasn't the case, or at least I didn't know how to find/read documentation. 10 years ago Stack Overflow answers were also better; now many are obsolete simply because they're 10 years old :).

Good documentation is both more concise and more thorough than any Q&A thread or ChatGPT output, and more likely to be accurate (it certainly should be in any half-decent documentation, though sometimes it isn't).

If online documentation doesn't have the answer, I try to find it in GitHub issues, on Reddit, or on a different forum. And sometimes that forum is Stack Overflow. More recently, I've noticed that on most questions the most upvoted answer has been edited to reflect recent changes; and even when an answer is out of date, there's usually a comment which says so.

Now, I never post on Stack Overflow, nor do I usually answer; there are way too many bad questions out there, most of the good ones already have answers or are really tricky, and the community still has its rude reputation. Though I will say the other Stack Exchange sites are much better.

So far, I've only used LLMs when my question was so specific that I couldn't search for it, and/or I ran out of options. There are a few issues: I don't like having to write out the full question (although I'm sure GPT works with query terms, so I'll probably try that); GPT-4's output is too verbose and it explains basic context I already know, so it's mostly filler; and I still have a hard time trusting GPT-4, because I've had it hallucinate before.

With documentation you have the expectation that the information is accurate, and with forums you have other people who will comment if the answer is wrong, but with LLMs you have neither.
