
Starting today, all paying API customers have access to GPT-4. In March, we introduced the ChatGPT API, and earlier this month we released our first updates to the chat-based models. We envision a future where chat-based models can support any use case. Today we’re announcing a deprecation plan for older models of the Completions API, and recommend that users adopt the Chat Completions API.

 

Some interesting quotes:

  1. LLMs do both of the things that their promoters and detractors say they do.
  2. They do both of these at the same time on the same prompt.
  3. It is very difficult from the outside to tell which they are doing.
  4. Both of them are useful.

When a search engine is able to do this, it is able to compensate for a limited index size with intelligence. By making reasonable inferences about what page text is likely to satisfy what query text, it can satisfy more intents with fewer documents.

LLMs are not like this. The reasoning that they do is inscrutable and massive. They do not explain their reasoning in a way that we can trust is actually their reasoning, and not simply a textual description of what such reasoning might hypothetically be.

@AutoTLDR

 

If you are like me, and you didn't immediately understand why people rave about Copilot, these simple examples by Simon Willison may be useful to you:

 

We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. To solve this problem within four years, we’re starting a new team, co-led by Ilya Sutskever and Jan Leike, and dedicating 20% of the compute we’ve secured to date to this effort. We’re looking for excellent ML researchers and engineers to join us.

@AutoTLDR@programming.dev

 
 
 

I haven't tried this yet, but I have a feeling that it would fail for anything nontrivial. Nevertheless, the concept is very interesting, and as soon as I get API access to GPT-4, I will try it.

I've recently ported a library from TypeScript to Python with the help of ChatGPT (GPT-4), and it took me about a day. It would be interesting to run this tool on the same codebase and compare the results.

If anyone has GPT-4 API access, I would really appreciate it if they tried running this tool on something simple and wrote about the results in the comments.

 

👋 Hello everyone, welcome to our Weekly Discussion thread!

This week, we’re interested in your thoughts on AI safety: Is it an issue that you believe deserves significant attention, or is it just fearmongering motivated by financial interests?

I've created a poll to gauge your thoughts on these concerns. Please take a moment to select the AI safety issues you believe are most crucial:

VOTE HERE: 🗳️ https://strawpoll.com/e6Z287ApqnN

Here is a detailed explanation of the options:

  1. Misalignment between AI and human values: If an AI system's goals aren't perfectly aligned with human values, it could lead to unintended and potentially catastrophic consequences.

  2. Unintended Side-Effects: AI systems, especially those optimized to achieve a specific goal, might engage in harmful behavior that was never intended, for example by pursuing convergent subgoals such as self-preservation or resource acquisition (often referred to as "instrumental convergence").

  3. Manipulation and Deception: AI could be used to manipulate information, create deepfakes, or influence behavior without consent, eroding trust and our shared sense of reality.

  4. AI Bias: AI models may perpetuate or amplify existing biases present in the data they're trained on, leading to unfair outcomes in various sectors like hiring, law enforcement, and lending.

  5. Security Concerns: As AI systems become more integrated into critical infrastructure, the potential for these systems to be exploited or misused increases.

  6. Economic and Social Impact: Automation powered by AI could lead to significant job displacement and increase inequality, causing major socioeconomic shifts.

  7. Lack of Transparency: AI systems, especially deep learning models, are often criticized as "black boxes," where it's difficult to understand the decision-making process.

  8. Autonomous Weapons: The misuse of AI in warfare could lead to lethal autonomous weapons, potentially causing harm on a massive scale.

  9. Monopoly and Power Concentration: Advanced AI capabilities could lead to an unequal distribution of power and resources if controlled by a select few entities.

  10. Dependence on AI: Over-reliance on AI systems could potentially make us vulnerable, especially if these systems fail or are compromised.

Please share your opinion here in the comments!

submitted 1 year ago* (last edited 1 year ago) by sisyphean@programming.dev to c/auai@programming.dev
 

cross-posted from: https://programming.dev/post/314158

Announcement

The bot I announced in this thread is now ready for a limited beta release.

You can see an example summary it wrote here.

How to Use AutoTLDR

  • Just mention it ("@" + "AutoTLDR") in a comment or post, and it will generate a summary for you.
  • If mentioned in a comment, it will try to summarize the parent comment, but if there is no parent comment, it will summarize the post itself.
  • If the parent comment contains a link, or if the post is a link post, it will summarize the content at that link.
  • If there is no link, it will summarize the text of the comment or post itself.
  • 🔒 If you include the #nobot hashtag in your profile, it will not summarize anything posted by you.
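The dispatch rules above can be sketched as a small decision function. This is a hypothetical illustration, not the bot's actual code: the `Item` type and field names are assumptions made up for this sketch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Item:
    """A post or comment, reduced to the fields the dispatch rules need."""
    text: str
    link: Optional[str] = None        # link-post URL, or a link found in the body
    parent: Optional["Item"] = None   # parent comment, if any
    author_profile: str = ""          # the author's profile text

def pick_summary_target(mention: Item) -> Optional[str]:
    """Decide what AutoTLDR should summarize when mentioned.

    Returns the link URL or text to summarize, or None if the
    target's author opted out with #nobot.
    """
    # Prefer the parent comment; with no parent, fall back to the post itself.
    target = mention.parent if mention.parent is not None else mention
    # Respect the #nobot opt-out in the target author's profile.
    if "#nobot" in target.author_profile:
        return None
    # A link (link post, or link in the parent comment) wins over plain text.
    if target.link is not None:
        return target.link
    return target.text
```

For example, mentioning the bot under a comment that contains a link would yield that link as the summarization target, while mentioning it directly under a text post would yield the post body.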

Beta limitations

How to try it

  • If you want to test the bot, write a long comment or include a link in a comment in this thread, then mention the bot in a reply.
  • Feel free to test it and try to break it in this thread. Please report any weird behavior you encounter in a PM to me (NOT the bot).
  • You can also use it for its designated purpose anywhere in the AUAI community.
 
 