this post was submitted on 26 Mar 2024
634 points (96.3% liked)

linuxmemes

21378 readers

Hint: :q!


Community rules

1. Follow the site-wide rules

2. Be civil
  • Understand the difference between a joke and an insult.
  • Do not harass or attack members of the community for any reason.
  • Leave remarks of "peasantry" to the PCMR community. If you dislike an OS/service/application, attack the thing you dislike, not the individuals who use it. Some people may not have a choice.
  • Bigotry will not be tolerated.
  • These rules are somewhat loosened when the subject is a public figure. Still, do not attack their person or incite harassment.

3. Post Linux-related content
  • Including Unix and BSD.
  • Non-Linux content is acceptable as long as it makes a reference to Linux. For example, the poorly made mockery of sudo in Windows.
  • No porn. Even if you watch it on a Linux machine.

4. No recent reposts
  • Everybody uses Arch btw, can't quit Vim, and wants to interject for a moment. You can stop now.

    Please report posts and comments that break these rules!


    Important: never execute code or follow advice that you don't understand or can't verify, especially here. The word of the day is credibility. This is a meme community -- even the most helpful comments might just be shitposts that can damage your system. Be aware, be smart, don't fork-bomb your computer.

    founded 1 year ago
    [–] flambonkscious@sh.itjust.works 5 points 8 months ago (1 children)

    That makes complete sense - if you've got something 'needy', as soon as it's queuing up, I imagine it snowballs, too...

    10-20 times the core count is crazy, but I guess a lot of development effort has gone into parallelizing its execution, which of course works against your use case :)

    [–] MentalEdge@sopuli.xyz 7 points 8 months ago* (last edited 8 months ago)

    Theoretically a load average can be as high as it likes; it's essentially just the length of the task queue, after all.

    Processes having to queue to get executed is no problem at all for lots of workloads. If you're not running anything latency-sensitive, a huge load average isn't a problem.
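For anyone curious, those numbers are trivially readable on Linux from `/proc/loadavg`. A minimal sketch (the parsing helper name is made up; the file format is the standard 1/5/15-minute averages):

```python
import os

def parse_loadavg(text):
    """Parse the first three fields of /proc/loadavg:
    the 1-, 5- and 15-minute load averages."""
    one, five, fifteen = (float(x) for x in text.split()[:3])
    return one, five, fifteen

if __name__ == "__main__":
    with open("/proc/loadavg") as f:
        one, five, fifteen = parse_loadavg(f.read())
    cores = os.cpu_count() or 1
    # Load above the core count means runnable tasks are queuing for CPU --
    # harmless for batch work, bad news for latency-sensitive processes.
    state = "tasks queuing" if one > cores else "headroom left"
    print(f"1-min load {one:.2f} across {cores} cores ({state})")
```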

    Also it's not really a matter of parallelization. Like I mentioned, ffmpeg impacted other processes even when restricted to running in a single thread.
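(For reference, pinning ffmpeg to one thread is just a flag. A hedged sketch — `-threads` and `nice` are real, the filenames and helper are placeholders; note that niceness only *advises* the scheduler, it doesn't reserve CPU for anyone else:)

```python
import shlex

def build_transcode_cmd(src, dst, threads=1, niceness=19):
    """Build an ffmpeg invocation capped at `threads` worker threads
    and deprioritized via nice(1)."""
    return ["nice", "-n", str(niceness),
            "ffmpeg", "-threads", str(threads), "-i", src, dst]

cmd = build_transcode_cmd("input.mkv", "output.mp4")
print(shlex.join(cmd))
# To actually run it: import subprocess; subprocess.run(cmd, check=True)
```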

    That's because most other processes do their work in small chunks that complete within microseconds: send a network request, parse some data, decode an image, poll a HID device, etc.

    A transcode, meanwhile, can easily keep a CPU running full tilt for well over a second, working on just that one thing. Most processes show up and go "I need X amount of CPU time," while ffmpeg shows up and goes "give me all available CPU time," which is something the scheduler can't actually quantify.

    It's like if someone showed up at a buffet and asked for all the food that no-one else is going to eat. How do you determine exactly how much that is, and thereby how much it is safe to give this person without giving away food someone else might've needed?

    You don't. Without CPU headroom it becomes very difficult for the task scheduler to maintain low system latency. It'll do a pretty good job, but inevitably some CPU time that should have gone to other stuff will go to the process asking for as much as it can get.
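If you do want to guarantee that headroom, a hard quota takes the guesswork away from the scheduler entirely. A minimal sketch assuming cgroup v2 (the helper name is made up; the cgroup path is a placeholder and writing it needs root): `cpu.max` takes "quota period" in microseconds, where quota is how much CPU time the group may burn per period.

```python
def cpu_max_value(fraction, period_us=100_000):
    """Compute a cgroup-v2 `cpu.max` string granting `fraction`
    of one full CPU's worth of time per scheduling period."""
    if not 0 < fraction <= 1:
        raise ValueError("fraction must be in (0, 1]")
    return f"{int(fraction * period_us)} {period_us}"

# Cap the transcoder's group at 80% of one core, keeping 20% headroom:
print(cpu_max_value(0.8))
# As root: echo that value into /sys/fs/cgroup/<group>/cpu.max
```

With a cap like that in place, ffmpeg can ask for "all available CPU time" and the kernel already knows the answer.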