zogwarg

joined 1 year ago
[–] zogwarg@awful.systems 17 points 3 days ago

Not surprised, still very disappointed, I feel sick.

[–] zogwarg@awful.systems 9 points 2 weeks ago (5 children)

Why hasn't he attempted to make a robotic owl yet? Poser...

[–] zogwarg@awful.systems 4 points 3 weeks ago (1 children)

To be "fair" kubernetes api only supports strongly validated/typed YAML-ish input..., it won't let you put non-string values in string locations. And in reality at the HTTP api layer—at least for kubectl—json is used. (Which also means you cant' do the more weird occult YAML things that JSON wouldn't let you)

You have to blame the deep-nestedness of k8s resources for unreadability...
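To illustrate the kind of YAML footgun that the schema validation (and the JSON detour) guards against, here's a minimal sketch with PyYAML; my own illustration, not actual k8s tooling:

```python
# PyYAML happily turns unquoted scalars into non-string types:
import json
import yaml  # pip install pyyaml

manifest = yaml.safe_load("""
image_tag: 1.30   # parsed as the float 1.3, not a string!
enabled: no       # parsed as the bool False -- classic occult YAML
""")
print(manifest)  # {'image_tag': 1.3, 'enabled': False}

# kubectl converts the manifest to JSON before hitting the HTTP API, and the
# API server's typed schema rejects a float where a string is expected,
# instead of silently coercing it:
print(json.dumps(manifest))
```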

[–] zogwarg@awful.systems 13 points 3 weeks ago (6 children)

I was also an Elon skeptic back then, but I'll admit I did get a kick out of the "don't panic" dashboard.

But golly does he read H2G2 completely wrong (transcript):

I think and it highlighted an important point which is that a lot of times the question is harder than the answer. And if you can properly phrase the question, then the answer is the easy part. So, to the degree that we can better understand the universe, then we can better know what questions to ask. Then whatever the question is that most approximates: what’s the meaning of life? That’s the question we can ultimately get closer to understanding. And so I thought to the degree that we can expand the scope and scale of consciousness and knowledge, then that would be a good thing.

It's backwards! It misses the joke! It took thousands of years and they got a nonsensical answer before any question! It took a thousand more and they got a nonsensical—incompatible—question! It has been theorized that should someone understand the universe, it would be replaced by something more complicated! It has also been theorized that this has already happened! Also, regarding the scale of knowledge: Trin Tragula definitely showed that the one thing you can't afford to have in this universe is a sense of perspective!

Surely his reading comprehension isn't actually this bad, and he only got a bad meme-CliffsNotes version of the radio series/books/movies!?!

[–] zogwarg@awful.systems 5 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

I wouldn't swap it for the world ^^, but maybe a tad fewer existential crises would be nice (no monkey-paw curls plz)

[–] zogwarg@awful.systems 8 points 3 weeks ago (3 children)
  • I don't know Matt Mullenweg, but I'm afraid to ask
  • I love the nonsensical misleading QR-code one-liner, conflictingly using both `echo "<URL>" |` and the macOS-only `getoutput("pbpaste")` (yuck); a saner sketch follows this list.
  • It is famously easy to maintain a job and mental well-being when you have no stable home and few sets of clothes! Famously you don't need a registered address to open a bank account, and you don't need a bank account to get a registered address!
  • I guess you could run the GPU for 1000 years.
  • One needs to learn that interpolation = confabulation = useless bullshit.
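For contrast, a sketch of what a coherent version of that one-liner might look like, picking a single input source (the `qrcode` package and this snippet are my own hypothetical stand-in, not the original code):

```python
# A non-self-contradictory take: one explicit input source, no clipboard magic.
# Assumes the third-party `qrcode` package (pip install "qrcode[pil]").
import qrcode

url = input("URL: ")             # single, unambiguous input source
qrcode.make(url).save("qr.png")  # render the QR code to a file
```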
[–] zogwarg@awful.systems 13 points 3 weeks ago (6 children)

How nice it must be to never ponder how large humanity is, how each and every person you see outside has a full and rich interior and exterior world, and how you only ever see a tiny fraction of the people out there.

Personally, one of my "oh, other people are real!" moments was when our parents took my sisters and me on a surprise ferry trip from France to England, and our grandparents—whom, at least as far as kid-me remembered, we had only ever seen in their home city—were waiting for us in Portsmouth, and we visited the city together (the Portsmouth Historic Dockyard is quite nice, btw).

I knew they were real, but realizing that they weren't geo-locked made me more fully internalize that they had full and independent lives, and therefore that everyone did.


How about people here? When did you realize people are real?

[–] zogwarg@awful.systems 15 points 1 month ago

No no no it's fine! You get the word shuffler to deshuffle the—eloquently—shuffled paragraphs back into nice and tidy bullet points. And I have an idea! You could get an LLM to add metadata to the email to preserve the original bullet points, so the recipient LLM has extra interpolation room to choose to ignore the original list, but keep the—much more correct and eloquent, and with much better emphasis—hallucinated ones.

[–] zogwarg@awful.systems 11 points 1 month ago* (last edited 1 month ago) (1 children)

See image description below

Image description: Image shows user joined two weeks ago.

Yikes. Could be a troll (I hope it's a troll)

[–] zogwarg@awful.systems 5 points 1 month ago

The video about Anime and Propaganda is very good and recommended. As a progressive weeb living in Japan, I found it a very cathartic watch.

[–] zogwarg@awful.systems 6 points 1 month ago

From para-social to faux-social!

[–] zogwarg@awful.systems 14 points 1 month ago

SaaS = ~~Software as a Service~~ Sneer as a Service

 

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Semi-obligatory thanks to @dgerard for starting this)

 

Source: nitter, twitter

Transcribed:

Max Tegmark (@tegmark):
No, LLM's aren't mere stochastic parrots: Llama-2 contains a detailed model of the world, quite literally! We even discover a "longitude neuron"

Wes Gurnee (@wesg52):
Do language models have an internal world model? A sense of time? At multiple spatiotemporal scales?
In a new paper with @tegmark we provide evidence that they do by finding a literal map of the world inside the activations of Llama-2! [image with colorful dots on a map]


With this dastardly deliberate simplification of what it means to have a world model, our skepticism towards LLMs has been struck a mortal blow; surely we have no choice but to convert!

(*) Asterisk:
Not an actual literal map: what they really mean is that they've trained "linear probes" (each a separate mini-model) on the activation layers, over a bunch of inputs, minimizing the loss against latitude and longitude (and/or time, blah blah).

And yes, from the activations you can get a fuzzy distribution of (lat, long) on a map, and yes, they've been able to isolate individual "neurons" that seem to correlate in activation with latitude and longitude. (Frankly, not being able to find one would have been surprising to me; this doesn't mean LLMs aren't just big statistical machines, in this case trained on data containing literal (lat, long) tuples for cities in particular.)

It's a neat visualization and result, but it is sort of comically missing the point.
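For the curious, "training a linear probe" amounts to something like the sketch below (random stand-in activations and sklearn ridge regression; my own illustration, not the paper's actual code):

```python
# A "linear probe": a plain linear model from hidden activations to (lat, long).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

n_prompts, d_model = 5000, 4096  # hidden size assumed, Llama-2-ish
activations = np.random.randn(n_prompts, d_model)  # stand-in for real activations
coords = np.random.uniform([-90, -180], [90, 180], size=(n_prompts, 2))  # (lat, long) labels

X_train, X_test, y_train, y_test = train_test_split(activations, coords)

probe = Ridge(alpha=1.0).fit(X_train, y_train)  # the whole "mini-model"
print("R^2 on held-out prompts:", probe.score(X_test, y_test))
```

With random activations the probe scores an R² of about zero; it only "finds a map" when the activations actually carry those statistics, which is exactly the unremarkable point.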


Bonus sneers from @emilymbender:

  • You know what's most striking about this graphic? It's not that mentions of people/cities/etc from different continents cluster together in terms of word co-occurrences. It's just how sparse the data from the Global South are. -- Also, no, that's not what "world model" means if you're talking about the relevance of world models to language understanding. (source)
  • "We can overlay it on a map" != "world model" (source)
 

Nitter link

With interspaced sneerious rephrasing:

In the close vicinity of sorta-maybe-human-level general-ish AI, there may not be any sharp border between levels of increasing generality, or any objectively correct place to call it AGI. Any process is continuous if you zoom in close enough.

The profound mysteries of reality carving mean I get to move the goalposts as much as I want. Besides, I need to re-iterate now that the foompocalypse is imminent!

Unless, empirically, somewhere along the line there's a cascade of related abilities snowballing. In which case we will then say, post facto, that there's a jump to hyperspace which happens at that point; and we'll probably call that "the threshold of AGI", after the fact.

I can't prove this, but it's the central tenet of my faith: we will recognize the face of god when we see it. I regret that our hindsight-is-20/20 event is so ~~conveniently~~ inconveniently placed in the future, the bad one no less.

Theory doesn't predict-with-certainty that any such jump happens for AIs short of superhuman.

See how much authority I have, it is not "My Theory" it is "The Theory", I have stared into the abyss and it peered back and marked me as its prophet.

If you zoom out on an evolutionary scale, that sort of capability jump empirically happened with humans--suddenly popping out writing and shortly after spaceships, in a tiny fragment of evolutionary time, without much further scaling of their brains.

The forward arrow of Progress™ is inevitable! S-curves don't exist! The y-axis is practically infinite!
We should extrapolate only from the past (eugenically scaled, certainly) century!
Almost 10,000 years of written history, and millions of years of unwritten history for the human family, count for nothing!

I don't know a theoretically inevitable reason to predict certainly that some sharp jump like that happens with LLM scaling at a point before the world ends. There obviously could be a cascade like that for all I currently know; and there could also be a theoretical insight which would make that prediction obviously necessary. It's just that I don't have any such knowledge myself.

I know the AI god is a NeCeSSarY outcome; I'm just not sure where to plant the goalposts for LLMs and still be taken seriously. See how humble I am for admitting fallibility on this specific topic.

Absent that sort of human-style sudden capability jump, we may instead see an increasingly complicated debate about "how general is the latest AI exactly" and then "is this AI as general as a human yet", which--if all hell doesn't break loose at some earlier point--softly shifts over to "is this AI smarter and more general than the average human". The world didn't end when John von Neumann came along--albeit only one of him, running at a human speed.

Let me vaguely echo some of my beliefs:

  • History is driven by great men (one of which I must be, though I cannot say so openly); see our dearest elevated and canonized von Neumann.
  • JvN was so much above the average plebeian man (IQ and eugenics good?), and the AI god will be greater.
  • The greatest single entity/man will be the epitome of Intelligence™, breaking the wheel of history.

There isn't any objective fact about whether or not GPT-4 is a dumber-than-human "Artificial General Intelligence"; just a question of where you draw an arbitrary line about using the word "AGI". Albeit that itself is a drastically different state of affairs than in 2018, when there was no reasonable doubt that no publicly known program on the planet was worthy of being called an Artificial General Intelligence.

No no no, General (or Super) Intelligence is not a completely un-scoped metric! Again, it is merely a fuzzy boundary where I will be able to arbitrarily move the goalposts while claiming it's my opponents who are moving them!

We're now in the era where whether or not you call the current best stuff "AGI" is a question of definitions and taste. The world may or may not end abruptly before we reach a phase where only the evidence-oblivious are refusing to call publicly-demonstrated models "AGI".

Purity-testing ahoy: you will be instructed to say shibboleth three times and present your Asherah poles for inspection. Do these mean unbelievers not see these N-rays as I do? What do you mean, what we have (or almost have, I don't want to be too easily dismissed) is not evidence of sparks of intelligence?

All of this is to say that you should probably ignore attempts to say (or deniably hint) "We achieved AGI!" about the next round of capability gains.

Wasn't Sam the Altman so recently cheeky? He'll ruin my grift!

I model that this is partially trying to grab hype, and mostly trying to pull a false fire alarm in hopes of replacing hostile legislation with confusion. After all, if current tech is already "AGI", future tech couldn't be any worse or more dangerous than that, right? Why, there doesn't even exist any coherent concern you could talk about, once the word "AGI" only refers to things that you're already doing!

Again I reserve the right to remain arbitrarily alarmist to maintain my doom cult.

Pulling the AGI alarm could be appropriate if a research group saw a sudden cascade of sharply increased capabilities feeding into each other, whose result was unmistakeably human-general to anyone with eyes.

Observing intelligence is famously something eyes are SufFicIent for! No, this is not my implied racist judge-someone-by-the-color-of-their-skin values seeping through.

If that hasn't happened, though, deniably crying "AGI!" should be most obviously interpreted as enemy action to promote confusion; under the cover of selfishly grabbing for hype; as carried out based on carefully blind political instincts that wordlessly notice the benefit to themselves of their 'jokes' or 'choice of terminology' without there being allowed to be a conscious plan about that.

See, Unbelievers! I can also detect the currents of misleading hype; I am no buffoon. Only, these hypesters are not undermining your concerns, they are undermining mine: namely, damaging our ability to appear serious and recruit new cult members.

 

source nitter link

@EY
This advice won't be for everyone, but: anytime you're tempted to say "I was traumatized by X", try reframing this in your internal dialogue as "After X, my brain incorrectly learned that Y".

I have to admit, for a brief moment I thought he was correctly expressing displeasure at twitter.

@EY
This is of course a dangerous sort of tweet, but I predict that including variables into it will keep out the worst of the online riff-raff - the would-be bullies will correctly predict that their audiences' eyes would glaze over on reading a QT with variables.

Fool! This bully (is it weird to speak in the third person?) thinks using variables here makes it MORE sneer-worthy, especially since this appears to be general advice, but I would struggle to think of a single instance in my life where it's been applicable.

 

Source Tweet

@ESYudkowsky: Remember when you were a kid and thought you might have psychic powers, so you dealt yourself face-down playing cards and tried to guess whether they were red or black, and recorded your accuracy rate over several batches of tries?


And then remember how you had absolutely no idea how to do stats at that age, so you stayed confused for a while longer?


Apologies for the use of Japanese, but it is a very apt description: https://en.wikipedia.org/wiki/Chūnibyō
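And for the record, the stats the kid lacked amount to a one-line binomial tail; a sketch (the 32-out-of-52 figure is made up for illustration):

```python
# Guessing red/black is a fair coin flip, so test "psychic" hit counts
# against the tail probability of a Binomial(trials, 0.5).
from math import comb

def p_value_at_least(hits: int, trials: int, p: float = 0.5) -> float:
    """P(X >= hits) for X ~ Binomial(trials, p)."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(hits, trials + 1))

# Guessing 32 of 52 cards right sounds spooky to a kid, but:
print(p_value_at_least(32, 52))  # ~0.06 -- not even p < 0.05
```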

 

And of course no experiments whatsoever: the cost of the Manhattan Project and its hundreds of thousands of employees were merely a "focusing" magick, a sacrifice to reinforce the greater powers of our handful of esteemed and glorious thinking men, who wrought the power of destruction from the æther.

Source Tweet

@ESYudkowsky: Yes, but because the first nuclear weapon makers knew what the duck they were doing - analytic precise prediction of desired outcomes and of each intervening step. AGI makers lack similar mastery or anything remotely close, and have a much harder problem; that's the big issue.

@EigenGender: seems pretty noteworthy that the first nuclear weapons were made under conditions where they couldn’t do any experiments and they involved a lot of math but still worked on the first try.
