this post was submitted on 07 Mar 2025
108 points (97.4% liked)

I spent a few days comparing various hypervisors under the same workload and on the same hardware. This is a very specific workload, and results might differ when testing other workloads.

I wanted to share it here because many of us run very modest hardware, and getting the most out of it is probably something others are interested in too. I also wanted to share it because someone might spot a flaw in the configurations I ran that could boost things further.

If you do not want to go to the post or read all of it, the very quick summary is that XCP-ng was the quickest and KVM the slowest. There is also a summary at the bottom of the post with some graphs, if that interests you. For everyone who reads the whole post, I hope it gives some useful insights for your self-hosting endeavours.

[–] Voroxpete@sh.itjust.works 3 points 2 days ago* (last edited 2 days ago) (1 children)

Unfortunately I'm not very familiar with Cloudstack or Proxmox; we've always worked with KVM using virt-manager and Cockpit.

Our usual method is to remove the default hard drive, reattach the qcow file as a SCSI device, and then we modify the SCSI controller that gets created to enable queuing. I'm sure at some point I should learn to do all this through the command line, but it's never really been relevant to do so.

The relevant sections look like this in one of our prod VMs:

<disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/XXX.qcow2' index='1'/>
      <backingStore/>
      <target dev='sdb' bus='scsi'/>
      <alias name='scsi0-0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
<controller type='scsi' index='0' model='virtio-scsi'>
      <driver queues='6'/>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</controller>

The <driver queues='X'/> line inside the <controller> element is the part you have to add. The number should equal the number of cores assigned to the VM.
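
If you do want to make the same change from the command line, the rough equivalent (the domain name 'myvm' here is just a placeholder) would be something like:

virsh edit myvm        # opens the domain XML in your editor; add <driver queues='6'/> inside the virtio-scsi <controller>
virsh shutdown myvm    # controller changes only take effect the next time the domain starts
virsh start myvm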

See the following for more on tuning KVM:

[–] buedi@feddit.org 1 points 1 day ago (1 children)

Thank you very much. I spent another two hours yesterday reading up on that and creating other VMs and templates, but I was not yet able to attach the boot disk to a SCSI controller and make it boot. I would really like to see if this change would bring it on par with Proxmox (I now wonder what the defaults for Proxmox are), but even then, it would still be much slower than Hyper-V or XCP-ng. If I find time, I will look into this again.
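
For reference, my current best guess (untested, with placeholder paths, and assuming the guest already has the virtio-scsi driver) is that the boot disk stanza needs to look roughly like this:

<disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/boot.qcow2'/>
      <target dev='sda' bus='scsi'/>
      <boot order='1'/>
</disk>

with the same virtio-scsi controller as above, and with the per-device <boot order='1'/> replacing any <boot dev='hd'/> entry in the <os> section, since libvirt apparently does not allow mixing the two.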

[–] Voroxpete@sh.itjust.works 1 points 1 day ago

I'd suggest maybe testing with a plain Debian or Fedora install. Just enable KVM and install virt-manager, and create the environment that way.
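
On either distro the setup is roughly this (package names from memory, so double-check them):

# Debian
sudo apt install qemu-kvm libvirt-daemon-system virt-manager
sudo usermod -aG libvirt $USER    # log out and back in afterwards

# Fedora
sudo dnf install @virtualization virt-manager
sudo systemctl enable --now libvirtd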