bitofhope

joined 2 years ago
[–] bitofhope@awful.systems 11 points 4 months ago (6 children)

Ok, maybe cryptocurrencies made those a little bit easier than doing the same thing with MMO money or having to mail physical goods. I can even go out on a limb and credit the blockchain itself for them, even though the design kind of makes transactions inherently more traceable than some possible alternatives do.

[–] bitofhope@awful.systems 18 points 4 months ago (9 children)

I distinctly recall a lot of people a few years ago parroting some variation of "well I don't know about Bitcoin specifically, but blockchain itself is probably going to be important and even revolutionary as a technology" and sometimes I wish I'd collected receipts so I could now say "I told you it wasn't".

Here we are, year of Nakamoto 17 and the full list of use cases for blockchains is:

  • Speculative trading of toy currencies made up by private nobodies
  • Paying through the nose to execute arbitrary code on SETI@Home's evil cousin
  • Speculative trading of arbitrary blobs of bytes made up by private nobodies

And no, Git is not a fucking blockchain. Much like the New York City Subway is not the fucking Loop.

[–] bitofhope@awful.systems 11 points 4 months ago (1 children)

I don't think "victim" is even a word that gets used especially much in "woke" (for lack of a better word) writing anyway. Hell, even for things like sexual violence, "survivor" is generally the preferred nomenclature, specifically because many people feel that "victim" reduces the person's agency.

It's the rightoid chuds who keep accusing the "wokes" of performative victimhood and victim mentality, so I suppose that's why they somehow project and assume that "victim" is a particularly common word in left-wing vocabulary.

[–] bitofhope@awful.systems 12 points 4 months ago

Could have called yourself anything and you go for "Creamy Recoil"?

[–] bitofhope@awful.systems 8 points 4 months ago

to /dev/null preferably

[–] bitofhope@awful.systems 10 points 4 months ago

Finally it turns out torturing the kid was unnecessary and spreading out the suffering would have worked fine. All Omelas had to do was raise their income tax a little bit.

[–] bitofhope@awful.systems 8 points 4 months ago (3 children)

GPU programs (specifically CUDA, although other vendors' stacks are similar) combine code for the host system, written in a conventional programming language (typically C++), with code for the GPU written in the CUDA language. Even if the C++ code for the host system can be optimized with hand-written assembly, it's not going to lead to significant gains when the performance bottleneck is on the GPU side.
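To make the split concrete, here's roughly what a toy CUDA program looks like (a made-up scale-by-a-constant kernel, nothing to do with any actual ML codebase): the __global__ function is device code, everything in main() is ordinary host-side C++, and the same nvcc invocation compiles and links both.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Device code: runs on the GPU, one thread per array element.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *d = nullptr;
    cudaMalloc((void **)&d, n * sizeof(float));    // host code managing GPU memory
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);   // host code launching the kernel
    cudaDeviceSynchronize();                       // wait for the GPU to finish
    cudaFree(d);
    puts("done");
    return 0;
}
```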

The CUDA compiler translates the high-level CUDA code into something called PTX, a kind of assembly for a "virtual ISA", which is then translated by the GPU driver into native machine code for the proprietary instruction set of the GPU. It's roughly comparable to a compiler intermediate representation, such as LLVM IR. It's plausible that hand-written PTX assembly/IR could have been used to optimize parts of the program, but that would be somewhat unusual.
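For the curious, hand-written PTX usually gets embedded in CUDA source through inline asm, something like this toy add (purely illustrative; the compiler would emit this on its own just fine, real uses target instructions the compiler won't give you):

```cuda
// Illustrative device function containing a hand-written PTX add instruction.
__device__ int add_via_ptx(int a, int b) {
    int result;
    asm("add.s32 %0, %1, %2;" : "=r"(result) : "r"(a), "r"(b));
    return result;
}
```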

For another layer of assembly/machine languages, technically they could have reverse engineered the actual native ISA of the GPU core and written machine code for it, bypassing the compiler in the driver. This is also quite unlikely, as it would practically mean writing their own driver for latest-gen Nvidia cards that vastly outperforms the official one, and that would be at least as big of a news story as Yet Another Slightly Better Chatbot.

While a JIT and runtime do have an overhead compared to direct native machine code, that overhead is relatively small, approximately constant, and easily amortized if the JIT is able to optimize a tight loop. For car analogy enjoyers, imagine a racecar that takes ten seconds to start moving from the starting line in exchange for completing a lap one second faster. If the race is more than ten laps long, the tradeoff is worth it, and even more so the longer the race. Ahead-of-time optimization can do the same thing at the cost of portability, but unless you're running Gentoo, most of the C programs on your computer are likely compiled for the lowest common denominator of the x86/AMD64/ARM/whatever instruction sets your OS happens to support.

If the overhead of a JIT and runtime is significant in the overall performance of the program, it's probably a small program to begin with. No shame in small programs, but unless you're running it very frequently, it's unlikely to matter whether the execution takes five or fifty milliseconds.

[–] bitofhope@awful.systems 26 points 4 months ago

"Wow, this Penny Arcade comic featuring toxic yaoi of submissive Sam Altman is lowkey kinda hot" is a sentence neither I nor any LLM, Markov chain or monkey on a typewriter could have predicted but now exists.

[–] bitofhope@awful.systems 5 points 4 months ago

Counter-objection: so do all species of the nazi genus.

[–] bitofhope@awful.systems 7 points 4 months ago* (last edited 4 months ago)

Meanwhile I'm reverse engineering a very much not performance-sensitive video game binary patcher some guy made a decade ago, and Ghidra interprets a string-splitting function as a no-op because MSVC decided calling conventions are a spook and made up a new one at link time. And it was right to do that.

EDIT: Also me, looking for audio data from another old video game, patiently waiting for my program to take about half an hour on my laptop every time I run it. Then I remember to add --release to cargo run, and while the compilation takes three seconds longer, the runtime shrinks to about ten seconds. I wonder if the above guy ever tried adding -O2 to his CFLAGS?

[–] bitofhope@awful.systems 5 points 4 months ago

I hear Private Reasoning of the first through nth LLM Understander Corps is highly motivated

[–] bitofhope@awful.systems 7 points 4 months ago

There's no way to know since they didn't have the money to test.
