GRUB update keeps defaulting back to deprecated UUID

sathanas65

Hello. I am hoping someone can point me in the right direction to solve a problem I am having with Debian 11.5, kernel 5.10.0-20. My LVM was corrupted, and I had to delete and recreate the LVM and its member partitions. I had backed up using Timeshift in rsync mode, so I attempted to restore from a snapshot. One notable change is that the original partition structure had /home and /var on separate partitions, but I restored everything to root because the separate partitions had caused some other headaches previously.


The delete/recreate generated new UUIDs, which broke GRUB. I tried using both Debian rescue mode and a GRUB rescue disk, but wasn't able to get it working. So I reinstalled Debian, backed up an image of the new boot partition and /etc/fstab, then restored from Timeshift again, this time replacing the boot partition and fstab with the ones generated by the fresh Debian install.

This allowed me to boot into my system, and I could see that my programs were installed and my GNOME settings were all preserved. However, my Nvidia Tesla 470 driver had stopped working, and while trying to get it going again I was seeing loads of missing nvidia firmware errors. My troubleshooting eventually led to a rebuild of /boot/initrd.img-5.10.0-20-amd64. After this, when booting I would get an error that my LVM was not found, and the referenced encrypted drive UUID was an old one from the original partition mapping.
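
For clarity, "rebuild" here means regenerating the initramfs with Debian's stock initramfs-tools, roughly like this:

Code:
# regenerate the initramfs for the 5.10.0-20 kernel
sudo update-initramfs -u -k 5.10.0-20-amd64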

I have gone in circles and can't seem to resolve this. I can't find any reference to the old UUID in fstab, the GRUB config, or anywhere else I have checked. I have no clue what is causing the GRUB updates to default back to that old mapping. If I restore the latest /boot image I can get back into the system, but with broken Nvidia; I do not need to restore fstab to get it to boot again. Also, when I restore the working /boot image and then overwrite initrd.img-5.10.0-20-amd64 with the one generated by the Nvidia driver installation, it breaks and once again looks for the old UUID.
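
For anyone wanting to reproduce the search, something along these lines covers fstab, crypttab, the GRUB config, the initramfs settings and the contents of the initramfs itself (a sketch assuming Debian's stock initramfs-tools; the UUID below is a placeholder for the real old one):

Code:
# placeholder for the stale UUID
OLD_UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
# the usual configuration files where a stale UUID can hide
sudo grep -r "$OLD_UUID" /etc/fstab /etc/crypttab /etc/default/grub /etc/initramfs-tools /boot/grub/grub.cfg
# unpack the initramfs and search inside it as well
mkdir /tmp/initrd && sudo unmkinitramfs /boot/initrd.img-5.10.0-20-amd64 /tmp/initrd
grep -r "$OLD_UUID" /tmp/initrd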

There is also an intermediate troubleshooting stage where Nvidia is still broken but I can boot into the system; however, boot is delayed by 90 seconds with “A start job is running for /dev/disk/by-uuid/<old UUID>”. I am never able to get the Nvidia driver back up and continue to get missing firmware errors. I have been fighting with this for many days and am considering throwing in the towel and starting over. In any case, I will need to work out a backup solution that will work for me even for new partitions.

Short of full drive or partition image backups, is there any backup solution that would avoid these headaches?

Any idea what I can do to purge the old UUID completely and get Nvidia working again?

Thank you!
 


Without the output of the commands you used, this might be a case of not differentiating between the UUIDs of the logical volumes and those of the underlying block devices. Perhaps have a look here and see if that helps: https://askubuntu.com/questions/900309/blkid-cant-find-lvms-uuid.
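
A quick way to put the two sets of UUIDs side by side is something like this (a sketch; run as root):

Code:
# filesystem/LUKS UUIDs as seen by the block layer (what fstab and crypttab reference)
blkid
# LVM's own UUIDs for physical and logical volumes
pvs -o +pv_uuid
lvs -o +lv_uuid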

I believe you'd need to sort out the UUIDs before setting up the nvidia graphics.

Edit: on thinking about this further, I wonder why the choice was made to use LVM. For an average desktop it's usually unnecessary for normal functioning, and it adds an extra layer of software which, if one is not planning to modify partitions, is virtually useless. Standard partitions are more straightforward to work with and can be chosen in the installer. Debian is used for servers quite a bit, and LVM can be useful in that context, where growing partitions, adding disks and handling size variations over time matter. YMMV.
 
Thanks for the swift reply. I will look at the linked post. My reason for LVM was initially that I am new to desktop Linux and the Debian installer only offered LVM for guided encrypted partitioning. I believe the benefit is that all partitions in the volume can be unlocked with a single key entry rather than, in this case, having to unlock sda2 and swap separately.
 
This is how I fix "a start job is running":

As far as the Nvidia issue is concerned, I'm not that good with Nvidia GPUs; however, installing the driver should help.
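
If it helps, the usual Debian 11 route is roughly the following (a sketch; it assumes the non-free component is enabled in your apt sources, and I'm guessing nvidia-tesla-470-driver is the package matching your card):

Code:
# install the Tesla 470 driver; firmware-misc-nonfree often satisfies missing-firmware warnings
sudo apt update
sudo apt install nvidia-tesla-470-driver firmware-misc-nonfree
# regenerate the initramfs afterwards so the change is picked up
sudo update-initramfs -u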
 
That is more for a specific partition (e.g. swap) than for a number of other reasons.

I modify /etc/systemd/system.conf for mine. In there, you will find a couple of lines:

Code:
#DefaultTimeoutStartSec=90s
#DefaultTimeoutStopSec=90s

Although the lines are commented out (with a #), they are the default times.

I just remove the hashes and change them to 20 seconds or 10 seconds, as follows:

Code:
DefaultTimeoutStartSec=20s
DefaultTimeoutStopSec=20s

then save the file, and that sorts out my start and stop jobs. You could try with 0 (zero), but I figure they are there for a reason, so I don't eliminate them. If I am bored (I never get bored), I can track down the issues further.
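
After a reboot (or after re-executing the service manager) you can check that the new values are in effect with something like:

Code:
# make systemd re-read /etc/systemd/system.conf without a reboot
sudo systemctl daemon-reexec
# show the timeout values the manager is currently using
systemctl show -p DefaultTimeoutStartUSec -p DefaultTimeoutStopUSec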

Wizard
 
