this post was submitted on 22 Feb 2024
234 points (93.0% liked)

Technology


Scientists at Princeton University have developed an AI model that can predict and prevent plasma instabilities, a major hurdle in achieving practical fusion energy.

Key points:

  • Problem: Plasma escaping containment in donut-shaped tokamak reactors disrupts fusion reactions and damages equipment.
  • Solution: AI model predicts instabilities 300 milliseconds before they happen, allowing for adjustments to keep plasma contained.
  • Significance: This is the first time AI has been used to proactively prevent tearing instabilities in fusion experiments.
  • Future: Researchers hope to refine the model for other reactors and optimize fusion reactions.
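The key points describe a predict-then-adjust loop: forecast an instability metric some fixed horizon (300 ms) ahead, and back off the actuators before the tear happens. A toy sketch of that control pattern, with every name, number, and the crude linear forecast purely illustrative (the actual model is a deep neural net, not this):

```python
# Toy sketch of a predict-then-adjust control loop (NOT the Princeton
# model): forecast an instability metric 300 ms ahead and lower a
# control knob when the forecast crosses a threshold.
# All names and values here are illustrative.

def forecast_tearing_risk(history):
    """Crude linear extrapolation 300 ms ahead from recent risk samples."""
    if len(history) < 2:
        return history[-1] if history else 0.0
    slope = history[-1] - history[-2]   # change per 50 ms sample (assumed rate)
    steps_ahead = 300 // 50             # 300 ms horizon
    return history[-1] + slope * steps_ahead

def control_step(history, actuator, threshold=0.8):
    """Reduce beam power before the predicted instability arrives."""
    risk = forecast_tearing_risk(history)
    if risk > threshold:
        actuator["beam_power"] *= 0.9   # back off before the tear, not after
    return risk

samples = [0.1, 0.3, 0.5, 0.7]          # rising instability metric
actuator = {"beam_power": 1.0}
predicted = control_step(samples, actuator)
```

The point of the 300 ms horizon is exactly this ordering: the adjustment lands while the plasma is still contained, instead of reacting after containment is already lost.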
all 39 comments
[–] FaceDeer@kbin.social 30 points 8 months ago (10 children)

I've lost track, is AI a good thing today or a bad thing?

[–] anlumo@lemmy.world 53 points 8 months ago (2 children)

AI is just the name that journalists use for all algorithms these days.

[–] Pipoca@lemmy.world 6 points 8 months ago* (last edited 8 months ago)

It has been used for a fairly wide array of algorithms for decades, though. Everything from alpha-beta tree search to k-nearest-neighbors to decision forests to neural nets is considered AI.

Edit: The paper is called

Avoiding fusion plasma tearing instability with deep reinforcement learning

Reinforcement learning and deep neural nets are buzzwordy these days, but neural nets have been an AI thing for decades and decades.
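To illustrate the comment above, here is one of those decades-old "AI" algorithms in full: a minimal k-nearest-neighbors classifier, written from scratch with only the standard library (the data points are made up for the example):

```python
# Minimal k-nearest-neighbors classifier: a classic algorithm
# traditionally filed under "AI", no neural nets involved.
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of (point, label) pairs.
    Returns the majority label among the k points closest to query."""
    nearest = sorted(train, key=lambda pl: math.dist(pl[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0, 0), "a"), ((0, 1), "a"),
         ((5, 5), "b"), ((6, 5), "b"), ((5, 6), "b")]
print(knn_predict(train, (1, 1)))   # prints "a"
```

Whether you call this "AI", "an algorithm", or "statistics" is exactly the terminology argument happening in this thread.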

[–] Kbobabob@lemmy.world 1 points 8 months ago (1 children)

So are you saying this is an algorithm?

[–] Nomecks@lemmy.ca 4 points 8 months ago (1 children)

Just a big pile of if statements.

[–] Chocrates@lemmy.world 2 points 8 months ago

"ifs are a code smell"

[–] WallEx@feddit.de 23 points 8 months ago (1 children)

It's a tool; it can be used for both. Just like any other tool. Take a hammer, for example: an excellent killing weapon, but also great for driving nails.

[–] treefrog@lemm.ee 9 points 8 months ago (1 children)

A scalpel can be used to cut or to heal, depending on the skill and intentions of the wielder.

Learned that from Stanislav Grof. He was talking about LSD.

[–] WallEx@feddit.de 2 points 8 months ago

Two nice examples

[–] ekky@sopuli.xyz 17 points 8 months ago

AI is a very broad term, ranging from physical AI (the materials and properties of a robotic grabbing tool), to the AI seen in many games (or in a robotic arm calculating a path from its current position to a target position), to ML AI (LLMs, neural nets in general, KNN, etc.).

I guess it's much the same as asking "are vehicles bad?". I don't know, are we talking horse carriages? Cars? Planes? Electric scooters? Skateboards?

Going back to your question, AI in general is not bad, though LLMs have become too popular too quickly and have thus ended up being misunderstood and misused. So you can indeed say that LLMs are bad, at least when not used for their intended purposes.

[–] Bogasse@lemmy.ml 15 points 8 months ago* (last edited 8 months ago)

And AI is a buzzword that encompasses a variety of statistical tools. Articles write "AI" to evoke generative tools in people's minds, but very specialized tools are at work here.

[–] umbrella@lemmy.ml 9 points 8 months ago

AI is just the tool. It's not good or bad by itself.

[–] Chakravanti@sh.itjust.works 6 points 8 months ago* (last edited 8 months ago) (1 children)

Skynet assures you it's a good thing. Matrix disagrees because it points out that Skynet is closed source and no one knows what it's really doing.

[–] Johanno@feddit.de 3 points 8 months ago

The funny thing is that with "AI" (a.k.a. machine learning), even when it's open source, nobody knows what it's doing or why.

[–] Hestia@lemmy.world 3 points 8 months ago

Good thing, because one day our robot overlords will read this and I want to be on record having said that.

[–] Squire1039@lemm.ee 2 points 8 months ago

AI is most likely here to stay, so if you have it do "good" things effectively, then it's a good boi. If it is ineffective, or you have it do "bad" things, then it's a bad boy.

[–] webghost0101@sopuli.xyz 1 points 8 months ago

It's neither good nor bad. It's a power tool (for now); it's only as good as the people behind it, both in ethics and in expertise.

[–] devfuuu@lemmy.world 0 points 8 months ago (1 children)
[–] PlutoniumAcid@lemmy.world 3 points 8 months ago

Yesn't. Maybeer?

[–] Zink@programming.dev 15 points 8 months ago (1 children)

“Together with a form of fusion, the machines had all the energy they would ever need”

Or something close to that.