this post was submitted on 16 Apr 2024

Selfhosted


I've been trying to get hardware acceleration working in rootless Plex and Jellyfin containers, and I can't get it to work the proper way.

My current workaround is setting the permissions on /dev/dri/renderD128 to 666, but that really isn't an ideal setup.
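For reference, the workaround described here amounts to something like this (assuming an Intel/AMD GPU exposed as renderD128):

```shell
# Check which user and group currently own the render node
ls -l /dev/dri/renderD128

# Workaround: make the device world-readable/writable
# (not ideal, and it resets on reboot)
sudo chmod 666 /dev/dri/renderD128
```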

Some things I've done:

- Currently I'm running my containers as my user (UID 1000).

- My user is part of the render group, which is the group assigned to:

    /dev/dri/renderD128

- I'm passing the device to the containers like this:

  --device /dev/dri:/dev/dri

- In my Plex container, for example, I'm passing the IDs like this:

   -e PUID=1000 and -e PGID=1000

- I tried the option "--group-add keep-groups", and I do see the groups in the container, but I believe they're only assigned to the container's root user. From my understanding, the Plex and Jellyfin images I've tried create a user inside the container with the IDs I pass (1000 in this case), and this new user doesn't get assigned my groups from the host. I'm currently using the LinuxServer.io images, which create a user named "abc"; the official Plex image creates a user named "plex".

- Out of curiosity, on the host I changed the group of /dev/dri/renderD128 to my user's group (1000), but that didn't work either.

- I tried the --privileged option too, but that didn't seem to work either, at least when running podman as my user.

- I haven't tried running podman as root for these containers, and I wonder how that compares security-wise to having /dev/dri/renderD128 with permissions set to 666.
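Putting the pieces above together, the invocation I've been trying looks roughly like this (image name is just an example; note that --group-add keep-groups requires the crun runtime):

```shell
podman run -d --name jellyfin \
  -e PUID=1000 -e PGID=1000 \
  --device /dev/dri:/dev/dri \
  --group-add keep-groups \
  lscr.io/linuxserver/jellyfin:latest
```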

For some context, I've been transitioning from Docker to rootless Podman over the past five days or so. I've learned a couple of things, but this one has been quite a headache.

Any tips or hints would be appreciated. Thanks!

top 12 comments
[–] markstos@lemmy.world 4 points 7 months ago (1 children)

Another good place to ask Podman questions is the Podman discussion forum: https://github.com/containers/podman/discussions

[–] Kekin@lemy.lol 1 points 7 months ago

Thanks! I'll take a look there

[–] core@lemmy.world 4 points 7 months ago (2 children)

I'm running rootful podman but intend to switch to rootless. I also recently got a video card and want to do GPU passthrough, but I haven't had a chance to install the card in my server yet.

Following this and hope to remember to provide some info once I give it a go.

Are you using systemd to manage your podman containers?

[–] Kekin@lemy.lol 2 points 7 months ago

Yes I did the Systemd integration at the user level too and I quite like it

[–] possiblylinux127@lemmy.zip 0 points 7 months ago

I am and it works great, if you remember that you are using systemd to manage containers. I sometimes forget and wonder why my container won't die.

You also need systemd in order to start containers at boot.
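For anyone following along, the user-level systemd integration mentioned here is typically set up like this (the container name is an example):

```shell
# Generate a user unit file from an existing container
podman generate systemd --new --name jellyfin \
  > ~/.config/systemd/user/jellyfin.service

# Enable and start it as a user service
systemctl --user daemon-reload
systemctl --user enable --now jellyfin.service

# Let user services run without an active login session
loginctl enable-linger "$USER"
```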

[–] herrfrutti@lemmy.world 3 points 7 months ago (1 children)

I ran into this problem too. In my case I wanted to pass through a Zigbee USB adapter. I'm not sure if this procedure works with a GPU, though...

This was also needed to make it work: https://www.zigbee2mqtt.io/guide/installation/20_zigbee2mqtt-fails-to-start.html#method-1-give-your-user-permissions-on-every-reboot

devices:
      # Make sure this matches your adapter's location
      - "/dev/ttyUSB.zigbee-usb:/dev/ttyACM0:rwm"
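The linked zigbee2mqtt page essentially re-applies device permissions on every boot; a udev rule does the same thing persistently. For a render node it might look like this (the group name and mode are assumptions, check what your distro uses):

```shell
# /etc/udev/rules.d/99-render.rules (example; group name may differ per distro)
# KERNEL=="renderD128", SUBSYSTEM=="drm", GROUP="render", MODE="0660"

# Reload the rules and re-trigger the device after editing
sudo udevadm control --reload-rules
sudo udevadm trigger /dev/dri/renderD128
```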

Also, I passed my GPU to Immich, though I'm not 100% sure it's working. I added my user to the render group and passed the GPU the same way as the Zigbee USB stick:

devices:
      - "/dev/dri:/dev/dri:rwm"  # If using Intel QuickSync

The Immich image's main user is root, if I remember correctly, and all permissions my podman user (1000) has are granted to the root user inside the container (at least that's how I understand it...).

For testing I used this: https://www.zigbee2mqtt.io/guide/installation/20_zigbee2mqtt-fails-to-start.html#verify-that-the-user-you-run-zigbee2mqtt-as-has-write-access-to-the-port It should work with a GPU too.

I can test stuff later on my server, if you need more help!

Hope this all makes sense 😅 please correct me if anything is wrong!

[–] Kekin@lemy.lol 1 points 7 months ago

Thanks for the resources, I'll check them out later today!

[–] h3ndrik@feddit.de 3 points 7 months ago* (last edited 7 months ago) (1 children)

Have you verified that it is a permission issue? Maybe you're looking in the wrong place. Does it work if you set the permissions to 666?

[–] Kekin@lemy.lol 2 points 7 months ago (2 children)

Yeah, I'm fairly certain it's a permission issue: setting the GPU device's permissions to 666 makes it work inside the containers.

The thing is that these container images (Plex and Jellyfin) create a separate user inside, instead of using the root user, and this new user ("abc" for the LSIO images) doesn't get added to the same groups as the root user.

Also, the render group that gets passed into the container shows up as "nogroup", so I tried adding user abc to "nogroup", but that still didn't seem to work.
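One way to see exactly what the container user ends up with (assuming the container is named jellyfin and the LSIO user is abc):

```shell
# Which groups does the in-container user actually have?
podman exec jellyfin id abc

# Who owns the device nodes as seen from inside the container?
podman exec jellyfin ls -l /dev/dri
```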

[–] h3ndrik@feddit.de 3 points 7 months ago* (last edited 7 months ago)

Sure. I believe that "nogroup" behaviour is a failsafe; otherwise every misconfiguration would result in privilege escalation.

Unfortunately I'm not really familiar with that Podman setup. I'm not sure if --group-add keep-groups helps, which groups are defined inside the container, or whether the render group is even there and attached to the user that runs the process. I'm also not sure if it's the group's name or its number that counts... the numbers can differ from container to container.

Maybe you can peek inside the container and see how it's set up? Maybe something like --device-cgroup-rule helps to give access to the user within the container?
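The --device-cgroup-rule idea mentioned here would look something like this (226 is the major number for DRM devices; treat this as a sketch to experiment with, since cgroup device rules behave differently under rootless podman):

```shell
# Allow read/write/mknod for character devices with major 226 (DRM)
podman run -d --name jellyfin \
  --device /dev/dri:/dev/dri \
  --device-cgroup-rule='c 226:* rwm' \
  lscr.io/linuxserver/jellyfin:latest
```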

[–] possiblylinux127@lemmy.zip 1 points 7 months ago

Have you tried setting renderD128 to be owned by your user? Podman runs as a local user.
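Changing ownership as suggested here, or (less invasively) adding an ACL entry for just your user, would look like this; note that either change on the device node resets on reboot:

```shell
# Option A: chown the render node to your user
sudo chown "$USER" /dev/dri/renderD128

# Option B: grant only your user rw access via an ACL,
# leaving the original owner and group untouched
sudo setfacl -m u:"$USER":rw /dev/dri/renderD128
```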

[–] possiblylinux127@lemmy.zip 1 points 7 months ago* (last edited 7 months ago)

I had to be logged into a graphical environment for it to work (don't ask me why, I don't know).

My solution was to install lightdm and icewm, and then set up autologin.