this post was submitted on 11 Oct 2023
9 points (100.0% liked)

Debian operating system


I have an annoying problem on my server and Google has been of no help. I have two drives mirrored for the OS through mdadm, and I recently replaced them with larger versions through the normal process of swapping one drive at a time and letting the new drive re-sync, then growing the RAID arrays in place. Everything is working as expected, with the exception of systemd: it is filling my logs with messages about timing out while trying to locate the two old drives that no longer exist. Mdadm itself is perfectly happy with the new storage space and has reported no issues, and since this is a server I can't just blindly reboot it to get systemd to shut the hell up.
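For context, the replace-and-grow procedure described here typically looks something like the sketch below. The device names (/dev/sda1, /dev/sdb1, /dev/md0) are placeholders, not taken from this server:

```shell
# Sketch only: placeholder device names, assuming a RAID1 mirror at /dev/md0.
mdadm --manage /dev/md0 --fail /dev/sda1 --remove /dev/sda1  # retire one old disk
# ...physically swap in the larger drive, partition it, then re-add it:
mdadm --manage /dev/md0 --add /dev/sdb1
cat /proc/mdstat                 # watch until the re-sync finishes
# repeat for the second disk, then grow the array into the new space:
mdadm --grow /dev/md0 --size=max
```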

So what's the solution here? What can I do to make this error message go away? Thanks.

[Update] Thanks to everyone who made suggestions below. It looks like I finally found the solution in systemctl daemon-reload, but there is also a lot of other great info provided to help with troubleshooting. I'm still trying to learn the systemd stuff, so this has all been greatly appreciated!

top 12 comments
[–] caseyweederman@lemmy.ca 2 points 1 year ago (1 children)

systemctl disable --now olddisk.mount?

[–] Shdwdrgn@mander.xyz 1 points 1 year ago* (last edited 1 year ago) (1 children)

Sounds interesting, any chance you can tell me what it does? Google doesn't even seem to have any hits on "olddisk.mount" and I want to make sure this won't break anything else as it could be months before the system is intentionally rebooted again.

Also of note - I don't see anything with a name similar to olddisk.mount in the systemd folder. Is this command unique to a particular distro? For reference, I'm running Debian.

[–] XTL@sopuli.xyz 2 points 1 year ago* (last edited 1 year ago) (2 children)

I think olddisk refers to the name of your device. Try systemctl status or just systemctl and see if it's in the output. Or find the name in the journal.

[–] Shdwdrgn@mander.xyz 3 points 1 year ago

Status reports "State: degraded" but then doesn't say WHAT is degraded and shows no other errors (and /proc/mdstat shows no errors). Trying systemctl by itself does show an error from logrotated but that seems unrelated?

I do see the drive errors again in journalctl but I don't see anything helpful here... maybe you'll see something? These errors get repeated for both of the old drives about every 30 minutes, and I believe the UUIDs are for the old drives since they don't match any existing drive.

Oct 11 07:10:40 Juno systemd[1]: Timed out waiting for device ST500LM021-1KJ152 5.
Oct 11 07:10:40 Juno systemd[1]: Dependency failed for /dev/disk/by-uuid/286e26b0-603a-43b2-bc0f-30853998d5ab.
Oct 11 07:10:40 Juno systemd[1]: dev-disk-by\x2duuid-286e26b0\x2d603a\x2d43b2\x2dbc0f\x2d30853998d5ab.swap: Job dev-disk-by\x2duuid-286e26b0\x2d603a\x2d43b2\x2dbc0f\x2d30853998d5ab.swap/start failed with result 'dependency'.
Oct 11 07:10:40 Juno systemd[1]: dev-disk-by\x2duuid-286e26b0\x2d603a\x2d43b2\x2dbc0f\x2d30853998d5ab.device: Job dev-disk-by\x2duuid-286e26b0\x2d603a\x2d43b2\x2dbc0f\x2d30853998d5ab.device/start failed with result 'timeout'.
Oct 11 07:10:40 Juno systemd[1]: dev-disk-by\x2duuid-96b0277b\x2dcf9d\x2d4360\x2dbf90\x2d691166cff52b.device: Job dev-disk-by\x2duuid-96b0277b\x2dcf9d\x2d4360\x2dbf90\x2d691166cff52b.device/start timed out.
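For reference, those dev-disk-by\x2duuid-... names can be decoded mechanically: systemd escapes each "/" in a device path as "-" and each literal "-" as "\x2d". A sketch, using one of the unit names from the log above:

```shell
#!/bin/sh
# Decode a systemd device-unit name back to its /dev path.
# systemd turns "/" into "-" and a literal "-" into "\x2d".
unit='dev-disk-by\x2duuid-286e26b0\x2d603a\x2d43b2\x2dbc0f\x2d30853998d5ab.device'

# Strip the unit suffix, turn "-" back into "/", then "\x2d" back into "-".
path=$(printf '%s' "$unit" \
  | sed -e 's/\.device$//' -e 's/\.swap$//' \
        -e 's,-,/,g' -e 's/\\x2d/-/g')
echo "/$path"   # /dev/disk/by-uuid/286e26b0-603a-43b2-bc0f-30853998d5ab
```

On a machine with systemd installed, systemd-escape --path --unescape does the same conversion.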

[–] caseyweederman@lemmy.ca 2 points 1 year ago* (last edited 1 year ago) (2 children)

Right. systemctl list-automounts
to find the name, maybe? I've never had exactly this problem though.

Looks like list-automounts is relatively new, try systemctl status --full --all -t mount for all mounts and look for your old disks in the info.
-t automount might work but mine is empty, which makes me think this might not be related to the automount unit type.
Hopefully this will point us in the right direction though.

[–] Shdwdrgn@mander.xyz 3 points 1 year ago (1 children)

That appears to be a success! Thanks for the pointers, I'm still trying to figure out the systemd stuff since I rarely have to touch it.

[–] caseyweederman@lemmy.ca 2 points 1 year ago (1 children)

Sweet, no problem. Good luck.

[–] Shdwdrgn@mander.xyz 3 points 1 year ago (1 children)

Still no new errors in the logs. It wasn't hurting anything, it was just annoying and I didn't want to reboot a server just because of a logging issue! 😆

[–] caseyweederman@lemmy.ca 2 points 1 year ago

Also it was just going to keep trying forever.

[–] Shdwdrgn@mander.xyz 3 points 1 year ago

Ah cool... the 'full' command actually advised running systemctl daemon-reload, which appears to have cleared the errors listed. Based on previous errors in the log it will likely be another 20 minutes before a new error would be generated, so I'm waiting to see what happens now.
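For the record, daemon-reload works here because the .mount and .swap units come from generators (systemd-fstab-generator re-reads /etc/fstab at boot and on each reload), so reloading regenerates them and drops references to devices that are gone. A sketch of the cleanup, where the grep string is the old UUID from the log above:

```shell
# Regenerate runtime units (including fstab-generated .mount/.swap units):
systemctl daemon-reload
# Clear any units already stuck in a failed/timed-out state:
systemctl reset-failed
# Verify nothing still references the old disk's UUID:
systemctl list-units --all | grep -i 286e26b0 || echo "no stale units"
```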

[–] Turun@feddit.de 2 points 1 year ago (1 children)

Did you double check /etc/fstab? I once had an old UUID in there, which made systemd wait 90 seconds every boot looking for the device.

[–] Shdwdrgn@mander.xyz 1 points 1 year ago

The UUIDs in fstab all match the ones for the md devices, and those didn't change when replacing the disks. The UUIDs being reported by systemd are probably from the old physical disks, since they don't match any of the current drives listed in blkid.
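That cross-check can be automated. A sketch with made-up sample strings standing in for the real /etc/fstab and blkid output (the swap UUID is the stale one from the log above; the ext4 UUID is hypothetical):

```shell
#!/bin/sh
# Hypothetical sketch: find fstab UUID entries with no matching block device.
# Sample strings stand in for /etc/fstab and `blkid -s UUID -o value` output.
fstab='UUID=286e26b0-603a-43b2-bc0f-30853998d5ab none swap sw 0 0
UUID=aaaa1111-2222-3333-4444-555566667777 / ext4 errors=remount-ro 0 1'
blkid_uuids='aaaa1111-2222-3333-4444-555566667777'

stale=''
for u in $(printf '%s\n' "$fstab" | sed -n 's/^UUID=\([^ ]*\).*/\1/p'); do
  # Any fstab UUID that blkid does not report belongs to a removed disk.
  printf '%s\n' "$blkid_uuids" | grep -qxF "$u" || stale="$stale$u "
done
stale="${stale% }"
echo "stale UUIDs: $stale"
```

On a real system you'd feed it the actual files: replace the sample strings with the contents of /etc/fstab and the output of blkid -s UUID -o value.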