I wish functions baked into browsers could be disabled like an extension. This adding AI to everything is getting as bad as all the bloatware you get on a new PC.
This is how I felt when Windows 3.1 dropped
I wouldn't mind a decent LOCAL open source AI helping
Firefox can use a local llamafile model, but you have to enable it in about:config first.
Honestly it's easier to find an addon that'll hook into Ollama instead; Firefox's inbuilt support is shit.
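For anyone wondering what "hooking into Ollama" actually means: the addon just talks to a local HTTP server. Here's a minimal sketch of that request, assuming `ollama serve` is running on its default port (11434) with a model already pulled; the model name is only an example.

```typescript
// Minimal sketch: query a local Ollama server over its HTTP API.
// Assumes `ollama serve` is running on the default port with a model
// pulled, e.g. `ollama pull llama3`. Nothing leaves your machine.
async function askLocalModel(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // example model name; use whatever you've pulled
      prompt,
      stream: false,   // one complete response instead of chunks
    }),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const data = await res.json();
  return data.response; // /api/generate puts the text in `response`
}

askLocalModel("Summarize this page in one sentence: ...")
  .then(console.log)
  .catch(console.error);
```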
Large X models lack a crucial component of "open-source". Freely redistributable and modifiable for any purpose, sure, but there's no chance in hell of auditing one, let alone if the training data is kept a secret. It's literally impossible; human beings cannot look at a trillion weights and biases representing a single highly chaotic, unfathomably complex nonlinear function whose input and output space are the totality of human language/images/etc. and say "yup, looks good to me." Deep learning models – contrasted with traditional machine learning models – learn their own features which almost 100% of the time would be nonsense to a human. You just have a blob of shareware when you run DeepSeek.
(They also just outright steal from billions of copyright-protected sources to create it, so calling it "open-source" is pretty funny.)
Auditing for bias purposes, yeah, true. But my primary concern is it having the capability to "phone home", which you don't really need to audit the model itself to be able to detect or prevent.
There are a few that are "truly" open, like IBM Granite, and a handful of others above the 7B range.
DeepSeek's model is open-sourced and can be run locally, though I think some bits related to its training data have been kept obscured (if I remember correctly), likely due to the dubious nature of how it was acquired.
> some bits related to its training data
AKA ANY details about its training data, and its training hyperparameters, and literally any other details about its training. An 'open' secret among LLM tinkerers is that the Chinese companies seem to have particularly strong English/Chinese training data (not so much other languages though), and I'll give you one guess on how.
DeepSeek is unusual in that they are open-sourcing the general techniques they used, and even some (not all) of the software frameworks they use.
Don't get me wrong, I think any level of openness should be encouraged (unlike OpenAI being as closed as physically possible), but they are still very closed. Unlike, say, IBM Granite models which should be reproducible.
Unless training data is made available, a model is not open source. DeepSeek is better described as "open weight".
I'm far from an AI hater, but I fully agree with this.
I think there's a distinct business opportunity coming up for two things: hassle-free self-hosting, and back-to-basics apps and services.
Nobody is tapping into those correctly (you're going to want to give me examples of self-hosted things, and you're wrong), and it's extremely hard to do either right, but if you can figure it out and are ballsy enough to build a proper business around it I may be interested in your pitch deck.
Can you elaborate on "hassle-free self-hosting" and "and you're wrong"? Genuinely curious to see what your argument is here.
Kinda not the point, but at the risk of starting a huge tangent: yes, there are a bunch of self-hosted applications that are reasonably practical and easy to install, but there's still the layer of having to understand how to access a thing on your LAN from each device. Ideally you'd want some sort of dedicated server running at all times, and a bunch of this stuff is provided in multiple formats, including containerized versions or versions for virtual machines, all of which is way over the heads of normie users.
The closest to a fire-and-forget self-hosting platform is maybe Home Assistant or perhaps some of the commercial NAS sellers, like the Synology suite of apps that will mooostly set themselves up. Maybe Plex. But even those don't work in quite the way mainstream users think about applications working. You really need something you plug in and it goes. Maybe the branded Home Assistant hardware is closest to that, but HA itself is so overengineered and customizable it's not so much the start of a commercial self-hosting revolution as a relatively accessible hobby project rabbit hole.
Have you heard of YUNOHOST? That's all I'll ask; I don't want to, like, waste your time if you have and already have an opinion.
> back-to-basics apps and services.
I think these do exist, but they're in such a sea of shit that most users scrolling on their phones can't find them. Shameless apps have an intractable engagement/marketing advantage over them, as do the 'let's get acquired by Big Tech' ones.
I guess big companies could engage in this, but... shrug.
Hassle-free self hosting is hard, yeah, AI or not. Not going to argue with that one bit.
I don't mind seeing an AI summary of search results as much as I mind sponsored links fucking up page rank. Sometimes it is even nice to see "hey your search doesn't make sense because you've conflated two terms". But I guess I'm in the minority.
Reminds me of early Wikipedia, when there was a deep trustworthiness problem. Seeing a Wikipedia link in a presentation stole your credibility, but it was still a hell of a lot better starting point than grabbing an encyclopedia and asking Jeeves until you found a thread to pull.
AI summaries put another layer of interpretation between the reader and the source material. When having accurate and properly-sourced information matters, it's just not trustworthy enough. At least with Wikipedia, it tells you when there is potentially biased or improperly sourced material. Search AI will confidently assert their summaries as though they are factual, regardless of how reliable or unreliable their own sources are.
I've never had a result that helpful. I've seen it make up sports results in advance though.
I suppose I'm mostly using it for programming, movie lookups, vocab, and so on. Not sports/weather/news kinds of things.
Yeah, I noticed yesterday that the DuckDuckGo browser has AI now.
It's not surprising. DuckDuckGo search has AI.
Which... why? Who's using DDG without understanding how to use a search engine or recognizing the constant AI hallucinations?
The short answer is employees and family members.
Someone who manages tech for other users might configure DDG as the default search engine. I guess people at DDG are concerned that this type of user might be resistant to using it unless it has zero-click results.
Laughs in LibreWolf user
It'll get to a point where you just have to work on your critical thinking skills and be a pessimist, because everything that's presented to you is just bullshit lies. So acknowledge that this relationship is adversarial. Listen to other people talk about works cited; maybe dig into the unknown, the abyss. They will take everything away from you, and they'll make you feel bad for being angry. You are the product. There is no escaping capitalism until you're ready to do something about it. At this point it's a game of cat and mouse, and you're getting backed into a corner. Please, I know I'm super fucking negative. Don't stop doing things. I'm just saying: half of the battle is being aware.
For a family member of mine, who has lost most of her sight, all of this "AI" has been a blessing. The ability to talk to it and have it summarize and read back info has made a night and day difference in her ability to communicate with the world.
What I hate about Firefox is the fucking wall of links on the home page. It takes forever to remove them, and then they push an update and all that crap is back.
I use an extension called Tabliss and set that as my home page. I have it customized so the links to my most visited pages are set up with an icon so it's very clean and minimalist.
I actually would be pretty happy if my browser could detect and block ads.
But they put a fuck ton of work into not only NOT doing that; they expend material effort fucking with extensions and other tooling that provide that functionality.
Blocklists are a much more efficient way to do this, and TBH many "traditional" adblockers are still huge performance hogs. uBlock Origin is an exception in this regard, thanks to WebAssembly and its explicit dedication to lightness.
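To make "efficient" concrete: most of a blocklist's work is matching URLs and element selectors, which is cheap. A cosmetic-filtering pass in a content script boils down to something like this rough sketch (not uBlock's actual code, and the selectors are invented examples, not a real filter list):

```typescript
// Minimal sketch of a selector-driven cosmetic pass, the cheap kind of
// work blocklists drive. The selectors are illustrative examples only.
const AD_SELECTORS = [
  "iframe[src*='doubleclick']", // example: known ad-serving domain
  ".sponsored-post",            // example: site-specific class
  "[data-ad-slot]",             // example: common ad attribute
];

function hideMatches(root: ParentNode): number {
  let hidden = 0;
  for (const selector of AD_SELECTORS) {
    root.querySelectorAll<HTMLElement>(selector).forEach((el) => {
      el.style.setProperty("display", "none", "important");
      hidden++;
    });
  }
  return hidden;
}

// Run once at load, then re-run as the page mutates, which is how
// uBlock-style blockers catch ads injected after the initial render.
hideMatches(document);
new MutationObserver(() => hideMatches(document)).observe(document.body, {
  childList: true,
  subtree: true,
});
```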
Vision models are a pretty good way to build SponsorBlock/adblock databases though, and maybe even to engineer HTML workarounds automatically. It would be cool if, say, you encounter an ad or a dysfunctional web page and can opt in to automatically contribute a fix with your own compute.
I always assumed adblockers already do a first pass against known advertising patterns and then rewrite the DOM on the fly. I'm surprised that a vision model would be more performant given that it's still going to have to adjust the DOM anyway.
I’m talking theoretically, heh, I don’t think anyone actually does that yet.
And I'm just talking about edge cases where existing blockers fail and there's no manpower to figure out a customization.
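One way that theoretical flow could look: screenshot the suspect element, ask a local vision-capable model whether it's an ad, and only propose a candidate filter for human review. Everything here is an assumption for illustration; it reuses Ollama's API with a multimodal model such as llava, and the prompt and `proposeFix` helper are made up.

```typescript
// Theoretical sketch: use a local vision model to classify a suspect
// element, then propose (not auto-apply) a fix. Assumes an Ollama server
// with a multimodal model pulled, e.g. `ollama pull llava`.
async function looksLikeAd(screenshotBase64: string): Promise<boolean> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llava", // example vision model name
      prompt: "Is this page element an advertisement? Answer yes or no.",
      images: [screenshotBase64], // Ollama accepts base64-encoded images
      stream: false,
    }),
  });
  const data = await res.json();
  return /yes/i.test(data.response);
}

// Hypothetical opt-in flow: if the model flags the element, submit its
// CSS selector as a candidate cosmetic filter for human review.
async function proposeFix(selector: string, screenshotBase64: string) {
  if (await looksLikeAd(screenshotBase64)) {
    console.log(`Would submit candidate filter: ##${selector}`);
  }
}
```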