this post was submitted on 05 Jul 2023
37 points (93.0% liked)


Another day, another update.

More troubleshooting was done today. Here's what we did:

  • Yesterday evening @phiresky@lemmy.world did some SQL troubleshooting with some of the lemmy.world admins. After that, phiresky submitted some PRs to GitHub.
  • @cetra3@lemmy.ml created a docker image containing 3 PRs: Disable retry queue, Get follower Inbox Fix, Admin Index Fix.
  • We started using this image, and saw a big drop in CPU usage and disk load.
  • We saw thousands of errors per minute in the nginx log for old clients trying to access the websockets (which were removed in 0.18), so we added a return 404 in the nginx config for /api/v3/ws (see the sketch after this list).
  • We updated lemmy-ui from RC7 to RC10, which fixed a lot, including the issue with replying to DMs.
  • We found that the many 502 errors were caused by an issue in Lemmy/markdown-it.actix (exact culprit still unclear), causing nginx to temporarily mark an upstream as dead. As a workaround we can either 1) use only 1 container, or 2) set ~~proxy_next_upstream timeout;~~ max_fails=5 in nginx.
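
For reference, a minimal sketch of what that return 404 rule could look like; the actual lemmy.world config isn't published in this post, so this is illustrative only:

```nginx
# Illustrative sketch, not the actual lemmy.world config.
# Pre-0.18 clients still hit the websocket endpoint that 0.18 removed;
# answering them with a cheap 404 keeps those requests off the Lemmy backend.
location /api/v3/ws {
    return 404;
}
```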

Currently we're running with 1 Lemmy container, so the 502 errors are completely gone so far, and because of the fixes in the Lemmy code everything seems to be running smoothly. If needed we could spin up a second Lemmy container using the ~~proxy_next_upstream timeout;~~ max_fails=5 workaround, but for now it seems to hold with 1.

Thanks to @phiresky@lemmy.world, @cetra3@lemmy.ml, @stanford@discuss.as200950.com, @db0@lemmy.dbzer0.com, @jelloeater85@lemmy.world, and @TragicNotCute@lemmy.world for their help!

And not to forget, thanks to @nutomic@lemmy.ml and @dessalines@lemmy.ml for their continuing hard work on Lemmy!

And thank you all for your patience, we'll keep working on it!

Oh, and as a bonus, an image (thanks Phiresky!) of the change in bandwidth after deploying the new Lemmy docker image with the PRs.

Edit: As soon as the US folks wake up (hi!) we seem to need the second Lemmy container for performance, so that's now started. I also noticed the proxy_next_upstream timeout setting didn't work (or I didn't set it properly), so I used max_fails=5 for each upstream instead, and that does actually work (see the sketch below).
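
For the curious, a minimal sketch of what such an upstream block could look like; the server names and ports below are assumptions for illustration, not the actual lemmy.world config:

```nginx
# Illustrative sketch only: server names and ports are assumptions.
upstream lemmy {
    # max_fails=5 marks a backend as unavailable only after 5 failed
    # attempts within the default fail_timeout of 10s, instead of
    # pulling it from rotation on the first hiccup.
    server lemmy-1:8536 max_fails=5;
    server lemmy-2:8536 max_fails=5;
}
```

With two containers behind nginx, a single slow request no longer knocks the whole upstream out of rotation.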

all 13 comments
[–] phiresky@lemmy.world 21 points 1 year ago* (last edited 1 year ago) (4 children)

server load is too low, everyone upvote more stuff so I can optimize more

edit: guess there is some more work to be done 😁

[–] woelkchen@lemmy.world 2 points 1 year ago (1 children)

Upvote causes an endless spinner on Liftoff. 😁

I'm getting 504 gateway timeouts when I try to upvote

[–] PatFussy@lemm.ee -2 points 1 year ago

Double the image upload size and you will see more shitposts

[–] _Rho_@lemmy.world 1 points 1 year ago* (last edited 1 year ago) (1 children)

As a data engineer, I'd be interested in hearing more about the SQL troubleshooting.

EDIT: It looks like !lemmyperformance@lemmy.ml is a good place to subscribe to for more technical info on some of these performance improvements.

Also the Lemmy GitHub of course contains more information on bugs/enhancements/etc.

[–] Schmedes@lemmy.world 1 points 1 year ago

Same, my job is like 80% SQL, so it'd be cool to see what is used in the background and maybe help improve things.

[–] Labotomized@lemmy.world 0 points 1 year ago

Thank you so much! I will be donating a few cappuccinos your way when my next check arrives. I really appreciate how awesome of a community you’ve brought together & all of the transparency with the updates (and the frequency) is astounding! Keep up the great work but don’t forget to take breaks :)

[–] GnothiSeauton@lemmy.world 0 points 1 year ago (1 children)

This is why having a big popular instance isn't all bad. It helps detect and fix the scaling problems and inefficiencies for the thousands of other instances out there!

[–] AlmightySnoo@lemmy.world 0 points 1 year ago (1 children)

This. If everyone had just kept spreading out to smaller instances as suggested in the beginning (still a sensible thing to do), no one would have noticed these performance issues. We need to think a few years out, assuming Lemmy succeeds and Reddit dies, and expect that "small instance" will mean 50k users.

[–] deweydecibel@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

I sincerely doubt Reddit will die anytime soon. It'll just exist as its own thing that its new target audience gets bored with and moves on from in a few years, when something new and flashy catches their eye in the app store, just like they do with all the other apps designed in exactly the same fashion that Reddit is currently morphing into.

Meanwhile, Lemmy will be slowly building its communities up to be what Reddit used to be.

[–] HybridSarcasm@lemmy.world -1 points 1 year ago

Wow! Your commitment and diligence is admirable!