jediwombat
New Member
Hi all,
I've just done an apt-get dist-upgrade on my Debian 9.6 server, and during the process, it attempted to update GRUB. This failed as it could not find the disk when it attempted to install. I told it not to install GRUB, thinking a simple grub-install /dev/sda1 would work after the dist-upgrade, but this returns a similar error.
As I understand it, I need to reinstall GRUB over the top of itself to make sure it loads the new kernel on boot. I'm fuzzy on this part: my previous GRUB already pointed to the right drive/partition/etc., so I thought it wouldn't need changing, but since the installer wanted to update it, I'm not confident I understand this part correctly at all.
My setup is four 3TB drives, each with a BIOS boot partition and the rest of the drive as a second partition. These second partitions are joined in a RAID5 array, which is presented to LVM as the PV.
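In case it helps, these are the kind of read-only commands that show the disk → RAID → LVM stack (the device names on your system will differ; every command is guarded so nothing fails hard or writes to disk):

```shell
#!/bin/sh
# Read-only view of the block-device stack: partitions -> md RAID5 -> LVM PV.
# Names like md0 are assumptions; substitute whatever your system uses.
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT 2>/dev/null || true
cat /proc/mdstat 2>/dev/null || true   # RAID array state (the RAID5 of second partitions)
pvs 2>/dev/null || true                # LVM physical volumes (should show the md device)
```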
I've found my existing GRUB on /dev/sda:
Code:
root@groth:~# dd if=/dev/sda bs=512 count=1 | xxd | grep -A2 -B2 GRUB
1+0 records in
1+0 records out
512 bytes copied, 9.5261e-05 s, 5.4 MB/s
00000160: 018e db31 f6bf 0080 8ec6 fcf3 a51f 61ff ...1..........a.
00000170: 265a 7cbe 8e7d eb03 be9d 7de8 3400 bea2 &Z|..}....}.4...
00000180: 7de8 2e00 cd18 ebfe 4752 5542 2000 4765 }.......GRUB .Ge
00000190: 6f6d 0048 6172 6420 4469 736b 0052 6561 om.Hard Disk.Rea
000001a0: 6400 2045 7272 6f72 0d0a 00bb 0100 b40e d. Error........
But when I try to do the grub-install on this drive:
Code:
root@groth:~# grub-install /dev/sda
Installing for i386-pc platform.
grub-install: error: disk `lvmid/8AMfWo-fbxG-Hn5u-zZ7u-Oezp-wwYC-NPhiFj/3xMcU1-qviE-c99a-m3T0-u1VO-1edY-Zt7PaP' not found.
I get the exact same error if I attempt to point it at /dev/sda1.
I'm not sure what other info people may need, but I'm happy to provide whatever you ask for. For now, the system is running, but I'm not at all confident my GRUB will boot the system, and I don't know how to test it without rebooting, which may fail.
tl;dr, three questions:
Do I need to do a new grub-install?
How can I make grub-install work?
How can I know if my current GRUB config is OK?
Thanks!
Benjamin.