I somehow didn't think a regular JIT solution might be applicable here, but it is. Thank you! There seem to be a number of projects doing JIT for C++; I'll look at them.
So far I've been following recommendations from this person: https://old.reddit.com/r/NewMaxx/comments/16xhbi5/ssd_guides_resources_ssd_help_post_your_questions/
This plea for help is specifically for non-coding, but still deeply technical, work.
I'm pretty sure that, just like shipping containers were standardized by ISO to make transport easier, game boxes should be standardized to fit in a Kallax.
Another idea that just occurred to me: maybe apply position: absolute; to both the real content and the gibberish content, with the same top, left, width, and height values, so that the real content and the gibberish overlap and occupy the same location on the page. Make sure both the real and gibberish elements have no background, so everything stays transparent. Put the gibberish content in the DOM before the real content (I think that will ensure the gibberish paints behind the real content even without setting a z-index). Then have JS set the text color of the gibberish element to the same color as the background, so humans can't see it.
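Roughly something like this untested sketch (the ids and the pixel values are made up purely for illustration):

    <!-- Both elements share the same coordinates, so they overlap exactly.
         The decoy comes first in the DOM and therefore paints behind the
         real content, no z-index needed. Neither has a background. -->
    <div id="decoy" style="position: absolute; top: 100px; left: 50px;
                           width: 600px; height: 400px; background: none;">
      lorem ipsum gibberish for scrapers
    </div>
    <div id="real" style="position: absolute; top: 100px; left: 50px;
                          width: 600px; height: 400px; background: none;">
      The actual content humans should read.
    </div>
    <script>
      // Hide the decoy from humans by matching its text color to the
      // page background; scrapers reading the DOM still see both texts.
      document.getElementById('decoy').style.color =
        getComputedStyle(document.body).backgroundColor;
    </script>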
Be aware that these techniques can affect accessibility for people using screen readers.
As of May 2023, 65% of the Ukrainian refugees who left Ukraine starting in February 2022 and decided to stay in Poland had found a job, i.e. within around a year, as opposed to the 5-6 years mentioned in the article. Cultural similarity here likely makes it much, much simpler. For those who want to read more about the situation of Ukrainian refugees in Poland, this report by the Polish National Bank (Narodowy Bank Polski, NBP) might be useful: https://nbp.pl/wp-content/uploads/2023/05/Raport_Imigranci_EN.pdf (in English!); it contains a lot of interesting details.
lemmy.ml is hosted in the EU, and lemmynsfw.com uses CloudFlare, which operates in the EU. Worst case, issue a GDPR request to both.
Yep, thank you, that's pretty close to what I imagined!
I do not have my notes from that time anymore, sorry. I do recall, though, that after following a chain of citations I ended up at the paper at the center of this controversy. Nobody sane would cite it now except to point out its flaws, but if there's a modern paper that cites a 10-year-old paper that cites a 30-year-old paper that cites it, people usually won't notice.
In my experience, despite all the citogenesis described in other comments here, Wikipedia citations are still better vetted than those in many, many scientific papers, let alone regular journalism :/ I recall spending days following citation links in already well-cited papers just to debunk basic statements in the field.
I'd probably be fine with hundreds or thousands of these hanging in memory. I suspect the generated code for a single query would be in the hundreds of kilobytes, maybe a megabyte, so even a thousand of them would add up to only about a gigabyte. But yeah, this is one of those technical details I'd worry about.
Not sure how an HTTP server would solve the CPU bottleneck of scanning terabytes of data per query?