Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
Unilever are looking for an Ice Cream Head of Artificial Intelligence.
I think I have found a new favorite way to refer to true believers.
This role is responsible for the creation of a virtual AI Centre of Excellence that will drive the creation of an Enterprise-wide Autonomous AI platform. The platform will connect to all Ice Cream technology solutions providing an AI capability that can provide [blah blah blah...]
it's satire right? brilliantly placed satire by a disgruntled hiring manager having one last laugh out the door right? no one would seriously write this right?
I mean it does return a 404 now.
maybe they filled that position already
Re-begun, the edit wars over EA have:
And sure enough, just within the last day the user "Hand of Lixue" has rewritten large portions of the article to read more favorably to the rationalists.
User was created earlier today as well. Two earlier updates from a non-account-holder may be from the same individual. Did a brief dig through the edit logs, but I'm not very practiced in Wikipedia auditing like this so I likely missed things. Their first couple changes were supposedly justified by trying to maintain a neutral POV. By far the larger one was a "culling of excessive references" which includes removing basically all quotes from Cade Metz' work on Scott S and trimming various others to exclude the bit that says "the AI thing is a bit weird" or "now they mostly tell billionaires it's okay to be rich".
I suppose you could explain that on the talk page, provided you express it in acronyms for the benefit of the most pedantic nerds on the planet.
Also, not sure if there's anything here but the Britannica page for Lixue suggests that there's no way in hell its hand doesn't have some serious CoIs.
Edit:
Also shout-out to the talk page where the poster of our top-level sneer fodder defended himself by essentially arguing "I wasn't canvassing, I just asked if anyone wanted to rid me of this turbulent priest!"
also: lol @ good faith edits.
A glorious snippet:
The movement ~~connected to~~ attracted the attention of the founder culture of Silicon Valley and ~~leading to many shared cultural shibboleths and obsessions, especially optimism about the ability~~ of intelligent capitalists and technocrats to create widespread prosperity.
At first I was confused at what kind of moron would try using shibboleth positively, but it turns out it's just terribly misquoting a citation:
Rationalist culture — and its cultural shibboleths and obsessions — became inextricably intertwined with the founder culture of Silicon Valley as a whole, with its faith in intelligent creators who could figure out the tech, mental and physical alike, that could get us out of the mess of being human.
Also lol at insisting on "exonym" as the descriptor for TESCREAL, removing Timnit Gebru and Émile P. Torres and the term's clearly critical intent; it doesn't really even make sense to use the acronym unless you're doing critical analysis of the movement(s). (Also removing mentions of the especially strong overlap between EA and rationalists.)
It's a bit of a hack job at making the page more biased, with a very thin veneer of still using the sources.
So many of those changes are just weird and petty, too. Like, I can't imagine a good reason to reference Vitalik Buterin as just "a billionaire" rather than as "Ethereum founder". I'm sure I could level the same critique at some pages that are neutrally trying to meet Wikipedia's standards, but especially in this context it's pretty straightforward to see that it's an attempt to remove important context and accurate information that might make them look bad.
There might be enough point-and-laugh material to merit a post (also this came in at the tail end of the week's Stubsack).
That hatchet job from Trace is continuing to have some legs, I see. Also a reread of it points out some unintentional comedy:
This is the sort of coordination that requires no conspiracy, no backroom dealing—though, as in any group, I’m sure some discussions go on...
Getting referenced in a thread on a different site talking about editing an article about themselves explicitly to make it sound more respectable and decent to be a member of their technofascist singularity cult diaspora. I'm sorry that your blogs aren't considered reliable sources in their own right and that the "heterodox" thinkers and researchers you extend so much grace to are, in fact, cranks.
The opening line of the "Beliefs" section of the Wikipedia article:
Rationalists are concerned with improving human reasoning, rationality, and decision-making.
No, they aren't.
Anyone who still believes this in the year Two Thousand Twenty Five is a cultist.
I am too tired to invent a snappier and funnier way of saying this.
I might be the only person here who thinks the upcoming quantum bubble has the potential to deliver useful things (but boring useful things, and so harder to build hype on), but stuff like this particularly irritates me:
Quantum fucking ai? Motherfucker,
Best case scenario here is that this is how one department of Google gets money out of the other bits of Google, because the internal bean counters cannot control their fiscal sphincters when someone says "ai" to them.
Quantum computing reality vs quantum computing in popculture and marketing follows precisely the same line as quantum physics reality vs popular quantum physics.
New article from Axios: Publishers facing existential threat from AI, Cloudflare CEO says
Baldur Bjarnason has given his commentary:
Honestly, if search engine traffic is over, it might be time for blogs and blog software to begin to deny all robots by default
Anyways, personal sidenote/prediction: I suspect the Internet Archive's gonna have a much harder time archiving blogs/websites going forward.
Up until this point, the Archive enjoyed easy access to large swathes of the 'Net - site owners had no real incentive to block new crawlers by default, but the prospect of getting onto search results gave them a strong incentive to actively welcome search engine robots, safe in the knowledge that they'd respect robots.txt and keep their server load to a minimum.
Thanks to the AI bubble and the AI crawlers it's unleashed upon the 'Net, that has changed significantly.
Now, allowing crawlers by default risks AI scraper bots descending upon your website and stealing everything that isn't nailed down, overloading your servers and attacking FOSS work in the process. And you can forget about reining them in with robots.txt - they'll just ignore it and steal anyways, they'll lie about who they are, they'll spam new scrapers when you block the old ones, they'll threaten to exclude you from search results, they'll try every dirty trick they can because these fucks feel entitled to steal your work and fundamentally do not respect you as a person.
Add in the fact that the main upside of allowing crawlers (turning up in search results) has been completely undermined by those very same AI corps, as "AI summaries" (like Google's) steal your traffic by stealing your work, and blocking all robots by default becomes the rational decision to make.
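For the curious, the "deny all robots by default" setup Baldur mentions is literally a two-line file, and the mechanics are easy to demo. A quick sketch below (my own toy example - the user agent and blog URL are made up) of what that robots.txt looks like and how a compliant crawler is supposed to react to it; the AI scrapers in question just skip this check entirely:

```python
# A deny-everything robots.txt and how a *well-behaved* crawler reads it.
import urllib.robotparser

DENY_ALL = """\
User-agent: *
Disallow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(DENY_ALL.splitlines())

# Any compliant bot asking about any path gets told "no".
print(rp.can_fetch("SomeAICrawler/1.0", "https://example.blog/post"))  # False
print(rp.can_fetch("Googlebot", "https://example.blog/"))              # False
```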
This all kinda goes without saying, but this change in Internet culture all but guarantees the Archive gets caught in the crossfire, crippling its efforts to preserve the web as site owners and bloggers alike treat any and all scrapers as guilty (of AI fuckery) until proven innocent, and the web becomes less open as a whole as people protect themselves from the AI robber barons.
On a wider front, I expect this will cripple any future attempts at making new search engines, too. In addition to AI making it piss-easy to spam search systems with SEO slop, any new start-ups in web search will struggle with quality websites blocking their crawlers by default, whilst slop and garbage will actively welcome their crawlers, leading to your search results inevitably being dogshit and nobody wanting to use your search engine.
FWIW, due to recent developments, I've found myself increasingly turning to non-search engine sources for reliable web links, such as Wikipedia source lists, blog posts, podcast notes or even Reddit. This almost feels like a return to the early days of the internet, just in reverse and - sadly - with little hope for improvement in the future.
Searching Reddit has really become standard practice for me, a testament to how inhuman the web as a whole has gotten. What a shame.
Sucks that a lot of Reddit is also being botted. But yes, Reddit is still good. Still fucked that bots take a Reddit post as input, rewrite it into LLM garbage, and that garbage then gets a high Google ranking while Google only lists one or two actual Reddit pages.
I don't like that it's not open source, and there are opt-in AI features, but I can highly, highly recommend Kagi from a pure search result standpoint, and it's one of the only alternatives with its own search index.
(Give it a try, they've apparently just opened up their search for users without an account to try it out.)
Almost all the slop websites aren't even shown (or are put in a "Listicles" section where they can still be accessed but aren't intrusive and don't look like proper results), and you can prioritize/deprioritize sites (for example, I have GitHub/Reddit/StackOverflow always show on top, and Quora and Pinterest never show at all).
Oh, and they have a fediverse "lens" which actually manages to reliably search Lemmy.
This doesn't really address the future of crawling, just the "Google has gone to shit" part 😄
In other news, I got an "Is your website AI ready" e-mail from my website host. I think I'm in the market for a new website host.
"we set out to make the torment nexus, but all we accomplished is making the stupid faucet and now we can't turn it off and it's flooding the house." - Every AI company, probably.
Pre-GPT data is going to be like the low-background steel they fish up from ships sunk before there were nuclear tests.
Edit: https://arstechnica.com/ai/2025/06/why-one-man-is-archiving-human-made-content-from-before-the-ai-explosion/ oh look, my obvious prediction was obvious.
Alright OpenAI, listen up. I've got a whole 250GB hard drive from 2007 full of the Star Wars/Transformers crossover stories I wrote at the time. I promise you it's AI-free and won't be available to train competing models. Bidding starts at seven billion dollars. I'll wait while you call the VCs.
Do you want shadowrunners to break into your house to steal your discs? Because this is how you get shadowrunners.
dark forest internet here we go!!!
The first confirmed openly Dark Enlightenment terrorist is now a fact. (It's linked here directly to NRx, but DE is a bit broader than just NRx, and his other references seem to be more of the garden-variety neo-nazi type (not that this kind of categorizing really matters).)
So we sneerclubbers correctly dismissed AI 2027 as bad sci-fi with a forecasting model basically amounting to "line goes up", but if you end up in any discussions with people who want more detail, titotal did a really thorough breakdown of why their model is bad even given their assumptions and their own attempt to model "line goes up": https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models
tl;dr: the AI 2027 model, regardless of inputs and current state, has task time horizons going to infinity at some near-future date because of how they set it up. The authors also make a lot of other questionable choices and their modeling has plenty of other red flags. And the curve shown on their fancy graphical interactive webpage for the task time horizon fit is unrelated to the model they actually used, and is missing some earlier data points that would make it look worse.
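To make the "horizons go to infinity at a fixed date" point concrete, here's a toy version of the failure mode with numbers I made up (an illustration of the shape of the problem, not the actual AI 2027 model): if each successive doubling of the task time horizon takes a fixed fraction less calendar time than the one before, infinitely many doublings fit into a finite amount of calendar time, so the curve blows up on a set date no matter where it starts.

```python
# Toy superexponential: each doubling of the "task time horizon"
# takes 10% less calendar time than the previous one (made-up numbers).
first_doubling_years = 1.0
speedup = 0.9  # each doubling takes 90% as long as the last

elapsed, step = 0.0, first_doubling_years
for _ in range(1000):    # after 1000 doublings the horizon has grown 2**1000-fold
    elapsed += step
    step *= speedup

print(round(elapsed, 2))  # ~10.0 years: the geometric series sums to 1/(1-0.9)
# Every finite milestone is crossed before the 10-year mark, i.e. the
# curve hits infinity on a fixed date regardless of the starting horizon.
```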
AllTrails doing their part in the war on genAI by disappearing the people who would trust genAI: https://www.nationalobserver.com/2025/06/17/news/alltrails-ai-tool-search-rescue-members
Orange site being orange again... "Pwease don't hurt the fascists' feelings 🥺"
Irrelevant. Please stay on topic and refrain from personal attacks.
I think if someone writes a long rant about how Germany wasn't at fault for WW2 in a CoC for one of their projects, it's kinda relevant.
OT: boss makes a dollar, I make a dime, that's why I listen to audiobooks on company time.
(Holy shit, I should have got AirPods a long time ago. But seriously, the job's going great.)
New lucidity post: https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/
The author is entertaining, and if you've not read them before, their past stuff is worth a look.