sathanas65
New Member
Hello. I am hoping someone can point me in the right direction to solve a problem I am having with Debian 11.5, kernel 5.10.0-20. My LVM was corrupted, and I had to delete and recreate the LVM and member partitions. I had backed up with Timeshift using rsync, so I attempted to restore from a snapshot. One notable change: the original layout had /home and /var on separate partitions, but I restored everything to root, since the separate partitions had caused other headaches previously.
The delete/create generated new UUIDs, which broke grub. I tried using both Debian rescue mode and a grub rescue disk, but wasn’t able to get it working. So I reinstalled Debian, backed up an image of the new boot partition and /etc/fstab, then restored from Timeshift again, and this time I replaced the boot partition and fstab with the ones generated by the fresh Debian install.
This allowed me to boot into my system, and I could see that my programs were installed and my GNOME settings were all preserved. However, my Nvidia Tesla 470 driver had stopped working, and while trying to get it going again I was seeing loads of missing Nvidia firmware errors. My troubleshooting eventually led to a rebuild of /boot/initrd.img-5.10.0-20-amd64. After that, booting would fail with an error that my LVM was not found, and the referenced encrypted drive UUID was an old one from the original partition mapping.
I have gone in circles and can't seem to resolve this. I can't find any reference to the old UUID in fstab, the grub config, or anywhere else I have checked, and I have no clue what is causing the grub updates to fall back to that old mapping. If I restore the latest /boot image I can get back into the system, but with a broken Nvidia driver. I do not need to restore fstab to get it to boot again. Also, when I restore the working /boot image and then overwrite initrd.img-5.10.0-20-amd64 with the one generated by the Nvidia driver installation, it breaks and once again looks for the old UUID.
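For the record, this is roughly how I have been hunting for the stale UUID (a sketch; <old-uuid> is a placeholder for the real value, which I am not posting here):

```shell
#!/bin/sh
# <old-uuid> is a placeholder -- substitute the actual stale UUID.
OLD_UUID="<old-uuid>"

# Grep the usual suspects for the stale UUID (silently skip missing files).
grep -rs "$OLD_UUID" /etc/fstab /etc/crypttab /etc/default/grub \
    /etc/initramfs-tools /boot/grub/grub.cfg || echo "no hits in config files"

# Unpack the initramfs and search inside it, since the old UUID only
# reappears after the image is rebuilt. unmkinitramfs ships with
# Debian's initramfs-tools package.
if command -v unmkinitramfs >/dev/null 2>&1; then
    TMP=$(mktemp -d)
    unmkinitramfs /boot/initrd.img-5.10.0-20-amd64 "$TMP" 2>/dev/null
    grep -r "$OLD_UUID" "$TMP" || echo "no hits inside initramfs"
    rm -rf "$TMP"
fi
```

So far the config-file search comes up empty, which is why I suspect something inside the initramfs itself.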
There is also an intermediate troubleshooting stage where Nvidia is still broken but I can boot into the system; boot gets delayed by 90 seconds with "A start job is running for /dev/disk/by-uuid/<old UUID>". I am never able to get the Nvidia driver back up and continue to get missing firmware errors. I have been fighting with this for many days and am considering throwing in the towel and starting over. In any case, I will need to work out a backup solution that will work for me even across new partitions.
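If it helps, this is roughly how I have been poking at that delayed start job (again a sketch, with <old-uuid> standing in for the real value):

```shell
#!/bin/sh
# <old-uuid> is a placeholder for the stale UUID from the boot message.
DEV="/dev/disk/by-uuid/<old-uuid>"

if command -v systemctl >/dev/null 2>&1; then
    # Show queued/running jobs -- the stale .device unit should be
    # listed here while the 90-second timer counts down.
    systemctl list-jobs 2>/dev/null
    # Convert the device path into the unit name systemd is waiting on,
    # so it can be searched for in unit files and generators.
    systemd-escape -p --suffix=device "$DEV" 2>/dev/null
fi
```

During the delay, the device unit for the old UUID shows up as a waiting job, but I still cannot find where the reference originates.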
Short of full drive or partition image backups, is there any backup solution that would avoid these headaches?
Any idea what I can do to purge the old UUID completely and get Nvidia working again?
Thank you!