scruiser

joined 2 years ago
[–] scruiser@awful.systems 2 points 1 day ago (1 children)

Oh, I had misunderstood their role in this. So they are more like someone who was already in place for other (scammy) reasons than anyone's preferred partner or middleman? And they are critical enough to be a weak link that breaks first and brings everyone else down?

[–] scruiser@awful.systems 2 points 1 day ago

Ultra ultra high end gaming? Okay, looking at the link, 94 GB of GPU memory is probably excessive even for eccentrics cranking the graphics settings all the way up. Hobbyists with way too much money trying to screw around with open-weight models even after the bubble bursts? That would presume LLMs or something similar continue to capture hobbyists' interest and that smaller models can't satisfy it. Crypto mining with algorithms compatible with GPUs? And crypto is its own scam ecosystem, but one that seems to refuse to die permanently.

I think the ultra high end gaming is the closest to a workable market, and even that would require a substantial discount.

[–] scruiser@awful.systems 5 points 1 day ago (3 children)

Isn't being the fall guy the point of CoreWeave for Microsoft, NVIDIA, and everyone else using them as a middleman? They all theoretically have the ability to do the things CoreWeave does in-house, but that would expose them to more risk if the bubble pops, so they have CoreWeave take on the biggest part of the risk and draw in outside investor money?

[–] scruiser@awful.systems 11 points 1 day ago

It's really the perfect opportunity for integration! They can steal the data and content of their own users, instead of other people's users, and then they can serve their slop directly to their own users instead of users having to generate and export their slop to other people's social media sites. And both of these applications can distract from the fact that AGI isn't happening and even more modest LLM agents aren't practically useful. And since Altman already built up a user base on ChatGPT, he'll have a head start on getting a critical mass of users!

Thinking about it... something like this is probably Altman's best bet for making OpenAI's financials work out, because, as David Gerard, Ed Zitron, and others have all pointed out, they are losing money on every LLM user, so they really do need a way to convert a huge user base into money that doesn't involve LLMs.

[–] scruiser@awful.systems 8 points 1 day ago* (last edited 1 day ago)

That feels like a fitting ironic fate: a company selling AI slopcode generation loses a bunch of users by believing its own bullshit and using an LLM for customer support. Hopefully that story repeats a few dozen times across other businesses and the business majors stop pushing LLM usage.

Edit... looking at the orange site comments... some unironically cited Anthropic ~~research~~ marketing hype, which (correctly) shows "Chain-of-Thought" is often bullshit unrelated to the final answer (but it's Anthropic, so they label it as deception and unfaithfulness instead of admitting the entire approach is bullshit in general).

[–] scruiser@awful.systems 7 points 1 day ago

Linking this recent comment on an older thread because it was so relevant: https://awful.systems/comment/6966312

TL;DR: GPUs cost about as much to operate as they normally depreciate over time, so even if the bubble pops, people might be sitting on piles of GPUs without reselling or using them.
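To make that comparison concrete, here's a minimal back-of-envelope sketch. Every number in it (purchase price, depreciation schedule, hourly hosting cost) is a hypothetical placeholder I'm assuming for illustration, not a figure from the linked comment:

```python
# Back-of-envelope: annual depreciation vs. annual operating cost
# for a single data-center GPU. All figures below are hypothetical
# placeholders, not real data.

purchase_price = 25_000.0      # USD, hypothetical accelerator price
useful_life_years = 4          # straight-line depreciation assumption
hosting_cost_per_hour = 0.70   # USD/hour all-in (power, cooling, space, staff), assumed

hours_per_year = 24 * 365
depreciation_per_year = purchase_price / useful_life_years
operating_cost_per_year = hosting_cost_per_hour * hours_per_year

print(f"Depreciation:   ${depreciation_per_year:,.0f}/year")    # $6,250/year
print(f"Operating cost: ${operating_cost_per_year:,.0f}/year")  # $6,132/year

# If the two are comparable, then even a GPU picked up for free in a
# fire sale still costs roughly its old depreciation rate just to keep
# running, so post-bubble owners may simply leave the cards powered off.
```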

[–] scruiser@awful.systems 3 points 1 day ago (2 children)

That is substantially worse than I realized. So people could possibly sit on GPUs for years after the bubble pops instead of selling or using them? (Particularly if the crash means NVIDIA decides to slow how fast they push the bleeding edge on GPU specs, so newer ones don't as radically outperform older ones?)

[–] scruiser@awful.systems 6 points 3 days ago

I mean... Democrats making dishonest promises of actual leftist solutions would at least be an acknowledgement that actual leftism exists, so I would count that as net progress compared to their current bland status-quo maintenance. But yeah, your overall point is true.

[–] scruiser@awful.systems 9 points 3 days ago

That sounds like actual leftism, so no, they really don't have the slightest inkling; they still think mainstream Democrats are leftist (and Democrats with some traces of leftism, like Bernie or AOC, are radical extremist leftists).

[–] scruiser@awful.systems 10 points 3 days ago* (last edited 3 days ago)

These people need to sit through a college-level class on linguistics or something like that. This is a demonstration of why STEM majors need general higher education.

[–] scruiser@awful.systems 10 points 3 days ago

Yeah, if the author had any self-awareness they might consider why the transphobes and racists they have made common cause with are so anti-science, and why pro-science, college-educated people lean progressive. But that would mean admitting their bigotry is opposed to actual scientific understanding and higher education, so they will instead come up with any other rationalization.

[–] scruiser@awful.systems 18 points 3 days ago (3 children)

Keep in mind the author isn't just (or even primarily) counting the ultra wealthy and establishment politicians as "elites"; they are also including scientists trying to educate the public on their areas of expertise (i.e. COVID, global warming, environmentalism, etc.), and sociologists/psychologists explaining problems the author wants to ignore or is outright in favor of (racism/transphobia/homophobia).

 

I am still subscribed to slatestarcodex on Reddit, and this piece of garbage popped up in my feed. I didn't actually read the whole thing, but basically the author correctly realizes Trump is ruining everything in the process of going after "DEI" and "wokism", but instead of accepting the blame that rightfully falls on Scott Alexander and the author, they deflect and blame the "left" elitists. (I put "left" in quotation marks because the author apparently thinks establishment Democrats are actually leftist; I fucking wish.)

An illustrative quote (of Scott's, which the author agrees with):

> We wanted to be able to hold a job without reciting DEI shibboleths or filling in multiple-choice exams about how white people cause earthquakes. Instead we got a thousand scientific studies cancelled because they used the string “trans-” in a sentence on transmembrane proteins.

I don't really follow their subsequent points; they fail to clarify what they mean... Insofar as "left elites" actually refers to centrist Democrats, I actually think the establishment Democrats deserve a major piece of the blame: their status-quo neoliberalism has been rejected by the public, but the Democrat establishment refuses to consider genuinely leftist ideas. That isn't the point this author is going for, though... the author is actually upset about Democrats "virtue signaling" and "canceling" and DEI, so they don't actually have a valid point, if anything the opposite of one.

In case my angry, disjointed summary leaves you in any doubt that the author is a piece of shit:

> it feels like Scott has been reading a lot of Richard Hanania, whom I agree with on a lot of points

For reference, the SSC discussion: https://www.reddit.com/r/slatestarcodex/comments/1jyjc9z/the_edgelords_were_right_a_response_to_scott/

TL;DR: the author tries to shift the blame for Trump fucking everything up while keeping up the exact anti-progressive rhetoric that helped propel Trump to victory.

 

So despite their nitpicking of the Guardian article, it seems blatantly clear now that Manifest 2024 was infested with racists. The post's author doesn't even count Scott Alexander as "racist" (although they do at least note his HBD sympathies) and still identifies a full eight racists. They mention a talk discussing the Holocaust as a eugenics event (and added an edit apologizing for their simplistic framing). The post author is painfully careful and apologetic in distinguishing what they personally experienced, what was "inaccurate" about the Guardian article, how they are using terminology, etc. Despite the author's caution, the comments are full of the classic SSC strategy of trying to reframe the issue (complaining that the post uses the word "controversial" in the title, complaining about the usage of the term "racist", complaining about the threat to their freeze peach and the open discourse of ideas posed by banning racists, etc.).

 

This is a classic Sequences post: (mis)appropriated Japanese phrases and cultural concepts, references to the AI box experiment, and links to other Sequences posts. It is also especially ironic given Eliezer's recent switch to doomerism, with his new phrases of "shut it all down", "AI alignment is too hard", and "we're all going to die".

Indeed, with developments in NN interpretability and a use case of making LLMs not racist or otherwise horrible, it seems to me like there is finally actually tractable work to be done (that is at least vaguely related to AI alignment)... which is probably why Eliezer is declaring defeat and switching to the podcast circuit.
