this post was submitted on 17 Aug 2023
4 points (100.0% liked)

Lemmy Server Performance


lemmy_server uses the Diesel ORM, which automatically generates SQL statements. There have been serious performance problems in June and July 2023 preventing Lemmy from scaling. Topics include caching, PostgreSQL extensions for troubleshooting, client/server code, SQL data, server operator apps, the server operator API (performance and storage monitoring), etc.

[–] solrize@lemmy.world 4 points 1 year ago* (last edited 1 year ago) (1 children)

Pagination should probably be redone the way Reddit does it. That is, give a "next page" token taken from an indexed field at the end of every page, instead of page=NUMBER. Getting the next page then becomes WHERE token > xyz, which is an indexed operation, instead of OFFSET NUMBER, which requires counting off that many rows and gives the person duplicate results if new posts are arriving while they browse. This is a well-known performance trick. Wikipedia does it too, and I think 'SQL Performance Explained' by Markus Winand talks about it.
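A rough sketch of the difference, using a hypothetical posts table (not Lemmy's actual schema):

-- OFFSET paging: the database must count past every earlier row,
-- and results shift if new posts arrive between pages
SELECT id, title FROM posts ORDER BY id DESC LIMIT 20 OFFSET 400;

-- Keyset ("next page" token) paging: the client sends back the last id it saw,
-- and the next page is a pure index-range scan on the primary key
SELECT id, title FROM posts
WHERE id < 1234            -- 1234 = token returned with the previous page
ORDER BY id DESC LIMIT 20;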

[–] RoundSparrow@lemmy.ml 3 points 1 year ago

That is, give a “next page” token

There's already a pull request on changing paging.

My focus is the very hard wall on performance and scale. As things stand, there is way too much potential for queries to run into the full post table.

[–] RoundSparrow@lemmy.ml 3 points 1 year ago

It can't be as simple as a date range, because we want to be inclusive of smaller communities.

  1. Paging is a consideration: 1000 posts per community would allow 50 pages of 20 posts.
  2. Small communities are defined as having 1000 or fewer posts, regardless of age.
  3. Large communities would focus on recency: the 1000 posts would be the most recently created or edited.
  4. Edited is trickier; either skip it for now or focus on how to keep some kind of mass edit from crowding out newly published posts.

 

Also a good time to be reminded that the published date isn't reliable for a couple reasons:

  1. Problems have been seen in the field with incoming federation data carrying future published dates on content; kbin is an easy example, but it isn't limited to kbin.
  2. Federation can lag due to server overload, problem paths between specific servers, ISP issues, etc. It is rather common to receive a post hours after its published date. Lemmy currently does not track the 'received' date of content.
[–] RoundSparrow@lemmy.ml 2 points 1 year ago (3 children)

ok, experimenting on a massive test data set of over 5 million posts... this PostgreSQL query works pretty well:

SELECT COUNT(ranked_recency.*) AS post_row_count
FROM
  (
     SELECT id, community_id, published,
        rank() OVER (
           PARTITION BY community_id
           ORDER BY published DESC, id DESC
           )
     FROM post_aggregates) ranked_recency
WHERE rank <= 1000
;

This limits any one community to 1000 posts, picking the most recently created ones. It gives a way to age out older data in very active communities without removing any posts at all from small communities.
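A sketch of how that same window rank could act as the missing WHERE filter in an actual listing query (the outer ORDER BY and LIMIT here are assumptions about how a page would be requested, not Lemmy's current code):

SELECT ranked_recency.id, ranked_recency.community_id, ranked_recency.published
FROM
  (
     SELECT id, community_id, published,
        rank() OVER (
           PARTITION BY community_id
           ORDER BY published DESC, id DESC
           ) AS community_rank
     FROM post_aggregates) ranked_recency
WHERE community_rank <= 1000
ORDER BY published DESC
LIMIT 20;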

[–] solrize@lemmy.world 3 points 1 year ago (1 children)

I don't understand what you are trying to do here. What problem are you trying to solve? Is something wrong with having an index on that rank function, then just using a simple WHERE? Why do you want to limit to 1000 posts? If I want to look through 1000 posts in a community, I probably want to look at more than 1000.

[–] RoundSparrow@lemmy.ml 1 points 1 year ago* (last edited 1 year ago) (1 children)

What problem are you trying to solve?

Reproducible, regular server crashes from queries that take tens of seconds, because the whole query logic has no WHERE clause with any meat to it. The server overloads in the field have happened every single day that I've been here testing the big servers, since May 2023.

If I want to look through 1000 posts in a community, I probably want to look at more than 1000.

I'm well aware of the pushback. Everyone chimes in saying they want counting to be real-time, the developers seem to avoid caching at all costs, and out of desperation I'm trying to build some basic sanity logic into the system so it doesn't plow through 5 million rows to do a LIMIT 10 query.

Right now Lemmy works perfectly fine with no personalization. For anonymous users it works great; if you want to read a million posts, it works great. Start blocking specific users, adding NSFW filters, cherry-picking a blend of communities, etc., and the problems show up. The ORM logic is difficult to follow, built on a massive JOIN of every field in many tables, and at certain data thresholds, with per-account preferences engaged, it goes off the rails against the pile of over 1 million posts (taking 40 seconds to list page 1 of a LIMIT 20 query for even a single community).

The programmers who have built the code over the past four-plus years don't seem to think it is an urgent problem, so I'm chipping in. I personally have never worked with this ORM, and I find it painful compared to the hand-crafted SQL I've done on major projects. I'm doing this because I feel like nobody else has for months.

[–] solrize@lemmy.world 1 points 1 year ago (1 children)

Hmm, thanks. I don't know if this discussion is helpful to you (if not, I'll drop out and stay out of your way), but I feel like something is fundamentally broken if any sane user action results in table scans like that, and decreasing the size of the scan with a WHERE clause seems like a band-aid. Browsing a community should be SELECT whatever FROM threads WHERE sortfield > token ORDER BY sortfield LIMIT 100; or something along those lines, with maybe a JOIN to filter out posts from blocked users and that sort of thing. There are only a few possible sort fields and they should all be indexed, so the above query should involve no scans at all. You might have to denormalize the comment count (put it in a column in the threads table that gets updated when a new comment is posted) so that you can index it instead of sorting on a COUNT over a joined select. There will usually not be a ton of threads being updated at once, so pg's built-in caching should keep it all pretty fast.
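A minimal sketch of that denormalization idea, keeping the hypothetical threads table name from above (Lemmy's real schema differs):

ALTER TABLE threads ADD COLUMN comment_count integer NOT NULL DEFAULT 0;
CREATE INDEX threads_comment_count_idx ON threads (comment_count DESC);

-- bump the counter when a comment is posted, instead of COUNTing a join at read time
UPDATE threads SET comment_count = comment_count + 1 WHERE id = 42;

-- "most commented" browsing then walks the index instead of scanning
SELECT id, title FROM threads ORDER BY comment_count DESC LIMIT 100;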

Disclosure, I'm completely unfamiliar with the Lemmy code and don't have any significant Postgres experience (I've mostly used MySQL and SQLite) so I may have table names and stuff wrong above. But, I think it's important to keep in mind that Usenet servers of the 1980s handled far more traffic than any current Lemmy server, with hardware 1/1000th of the speed, so if things are slow on Lemmy it's probably best to look for fundamental issues with the queries and schema.

I guess the presence of an ORM is an antipattern in its own right and maybe Lemmy's devs should aim to get rid of it.

[–] RoundSparrow@lemmy.ml 1 points 1 year ago (1 children)

You might have to denormalize the comment count (put it in a column in the threads table

When browsing a community, the concern is 'post', not 'comment' or threads, so this doesn't really come into play. If anything, a Reddit/Lemmy-style system is focused on the latest vote count, not the number of comments.

[–] solrize@lemmy.world 1 points 1 year ago (1 children)

Again though, that sounds like something that can be indexed. And posts more than a few days old usually won't receive new votes very often. I think Reddit may archive very old posts so they can't receive votes at all. Are these table scans only an issue when the person is trying to browse by best of all time?

For recent posts, yeah, a bit of buffering could help if votes are arriving very fast. That seems like an eventual good optimization. For now there is not enough traffic to need it, I'd expect.

[–] RoundSparrow@lemmy.ml 1 points 1 year ago (1 children)

Again though, that sounds like something that can be indexed.

I don't get what you are suggesting. There are already indexes.

The problem being addressed is that there is no WHERE clause that actually limits the posts.

The JOIN is done on a table without first eliminating rows... that worked OK when there were only 50,000 posts in the database, but now that there are over 1 million rows, it is causing major performance problems.

[–] solrize@lemmy.world 2 points 1 year ago* (last edited 1 year ago) (1 children)

Sorry about the slow response. What I mean is, suppose you have a column with 1000 integers (27,5,100,60,...). You want to print the top 10:

SELECT num FROM xyz ORDER BY num DESC LIMIT 10;

The db has to scan all 1000 rows to find the 10 biggest numbers. Now suppose instead there is an index on that column, i.e. a btree that lets you search for a value with very few operations, or traverse the list in order. Now the SELECT doesn't have to examine all the rows; it only has to traverse 10 items from the index, starting at the large end. It does mean that UPDATE and INSERT operations on that column become more expensive, since the index has to be updated too, but even that is less expensive than a table scan.

I'm saying that by having similar indexes on the possible sorting orders of read queries, you can likewise get rid of all the table scans. Does that make sense?

Similarly, if you JOIN two indexed fields, that is like merging two sorted lists: the db can traverse both indexes in parallel to find the matching values. DBs can be very clever about stuff like this. It helps, though, to use EXPLAIN to make sure they are doing the right thing.
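For the toy example above (xyz/num are the made-up names from that example), that would look like:

CREATE INDEX xyz_num_idx ON xyz (num DESC);

-- with the index, the planner walks 10 entries from the top of the btree
-- instead of scanning and sorting all 1000 rows
EXPLAIN SELECT num FROM xyz ORDER BY num DESC LIMIT 10;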

[–] RoundSparrow@lemmy.ml 1 points 1 year ago (1 children)

It makes sense, but there are indexes.

As the subject of the post says... it is JOIN behavior that's the problem. The queries work perfectly fine when you ask for posts without JOINing to a bunch of empty tables.

[–] solrize@lemmy.world 1 points 1 year ago (1 children)

Hmm, ok, something weird and pg-specific might be going on. A JOIN to an empty or almost-empty table (I guess you mean an outer join) sounds surprising, but I'd hope the query planner can still do something reasonable. Anyway, I don't feel like I'm being helpful at this point, so I'll stay out of your way. I'll be interested to know how it goes though.

[–] RoundSparrow@lemmy.ml 1 points 1 year ago (1 children)

I'd hope the query planner can still do something reasonable.

PostgreSQL's planner specifically guards against queries with more than 8 joins (the default join_collapse_limit and from_collapse_limit are both 8, past which it stops exhaustively reordering joins)... and Lemmy plows right past that.
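For reference, these are the planner settings involved; checking or raising them per session is one way to see whether the 8-join limit is what's hurting (the defaults shown are PostgreSQL's, not anything Lemmy sets):

SHOW join_collapse_limit;      -- default 8: past this, explicit JOINs are not reordered
SHOW from_collapse_limit;      -- default 8: past this, subqueries are no longer flattened
SHOW geqo_threshold;           -- default 12: past this, the genetic query optimizer takes over
SET join_collapse_limit = 16;  -- per-session experiment only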

[–] solrize@lemmy.world 1 points 1 year ago (1 children)

What can I say, that sounds suspicious both from the PG side (complex queries with lots of joins are sometimes useful, such as for reporting) and on the Lemmy side (executing such queries in response to routine web requests is a pretty bad smell). It's still early days so this seems like a better time to re-examine the schema and migrate if necessary, than after waiting until there's a ton more data and activity.

[–] RoundSparrow@lemmy.ml 1 points 1 year ago* (last edited 1 year ago) (1 children)

It’s still early days

Lemmy has been on GitHub since February 2019, over four years. It isn't new at all. Several instances go way back.

The answer is: ORM.

[–] solrize@lemmy.world 2 points 1 year ago

I don't mean the code is new, I mean the user base and data corpus are small compared to what we are hoping for. You're probably right about the ORM. :/

[–] RoundSparrow@lemmy.ml 1 points 1 year ago* (last edited 1 year ago) (1 children)

An even less intrusive approach is to not add any new field to existing tables. Establish a reference table, say called include_range. There is already an ENUM value for each sort type, so include_range would have these columns: sort_type ENUM, lowest_id BigInt, highest_id BigInt.

Run a variation of this to populate that table:

SELECT MIN(ranked_recency.id) AS lowest_id,
       MAX(ranked_recency.id) AS highest_id
FROM
  (
     SELECT id, community_id, published,
        rank() OVER (
           PARTITION BY community_id
           ORDER BY published DESC, id DESC
           )
     FROM post_aggregates) ranked_recency
WHERE rank <= 1000

Run it against every sort order, including Old. Capture only two BigInt results, the MIN(id) and the MAX(id); that gives a range over the whole table. Then every SELECT on the post_aggregates / post tables includes WHERE id >= lowest_id AND id <= highest_id.

That would put in a basic sanity check that ages out content, and it would be right against the primary key!
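A sketch of what that reference table and its refresh could look like (the DDL and the 'New' sort value are assumptions for illustration, not Lemmy's actual schema):

CREATE TABLE include_range (
    sort_type  varchar NOT NULL PRIMARY KEY,  -- one row per SortType value
    lowest_id  bigint  NOT NULL,
    highest_id bigint  NOT NULL
);

-- refresh the id range for one sort order from the ranked-recency query
INSERT INTO include_range (sort_type, lowest_id, highest_id)
SELECT 'New', MIN(ranked_recency.id), MAX(ranked_recency.id)
FROM
  (
     SELECT id,
        rank() OVER (
           PARTITION BY community_id
           ORDER BY published DESC, id DESC
           )
     FROM post_aggregates) ranked_recency
WHERE rank <= 1000
ON CONFLICT (sort_type) DO UPDATE
   SET lowest_id = EXCLUDED.lowest_id, highest_id = EXCLUDED.highest_id;

-- every listing query then adds the primary-key sanity check:
--   WHERE id >= (SELECT lowest_id  FROM include_range WHERE sort_type = 'New')
--     AND id <= (SELECT highest_id FROM include_range WHERE sort_type = 'New')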

[–] RoundSparrow@lemmy.ml 1 points 1 year ago* (last edited 1 year ago)

A core design consideration of either approach is that server operators can modify how this data is built without needing to modify or restart the lemmy_server Rust code.

Using a smallint also gives some flexibility (or a new field, if going with the id min/max approach): if the requested page is greater than 10 for a particular sort, move to include > 1 and fall into tiers.

[–] RoundSparrow@lemmy.ml 1 points 1 year ago

3 hours later... I put it into code and am experimenting with it. Some proof of concept results: https://github.com/LemmyNet/lemmy/files/12373819/auto_explain_list_post_community_0_18_4_dullbananas_with_inclusion_run0a.txt