Hi, thank you for all your answers. I was desperate and installed Windows 11, but I've noticed the same issues also happen on Windows 11. After downloading many files from Google Drive (they were either small files or some videos, so no single 90 GB file), I hit two BSODs. Event Viewer showed "Dump file creation failed due to error during dump creation. BugCheckProgress was: 0x00040049".
In the meantime, I have also discovered this thread:
https://askubuntu.com/questions/152...eps-crashing-my-fresh-install-of-ubuntu-24-04
I have also run a simple test with MiniTool Partition Wizard and there were no bad sectors in its Surface Test.
I'm thinking about giving Ubuntu another try, this time applying the advice from the link mentioned two paragraphs earlier. But I have some questions: what is the meaning of the part of the accepted answer below the horizontal line? I mean, do I also need to change the BOOT_IMAGE part in the /proc/cmdline file? How do I edit grub when the system first boots, and how do I add this latency parameter after "ro"? Does it have to happen during the first boot? These are my main questions.
And regarding the commands to be run (dmesg, smartctl), can I run them from a live Ubuntu session, without installing it on my machine yet?
Regarding the very big file from Proton Drive, I understand that tune2fs is a utility that lets me enable the large_file and huge_file features. But aren't these options turned on by default? At what point do I need to make this change? I understand it's after changing grub and before downloading the file.
Is there any test I can perform to simulate the behaviour that leads to trouble (I don't want to wait for big files to download again or set up a virtual machine)? Maybe I can somehow redirect random data into a single big file of 90 GB, but I would need to know the command. Would that suffice as a test?
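Something like this is what I have in mind, if I understand dd correctly (test90g.bin is just an example name, and 92160 blocks of 1 MiB correspond to 90 GiB):
Code:
dd if=/dev/urandom of=test90g.bin bs=1M count=92160 status=progress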
Thank you!
Here are some observations on this problem of the failure to download the 90 GB file, and the system breakdowns in both the MS and linux systems.
The fact that both operating systems, MS and linux, had similar problems suggests that the problem may be with the computer itself, since it is the common element, or the network since that's also common to both.
Since the computer may be suspect, it was worth suggesting the health assessment of the hardware.
Quote:
I have also run a simple test with MiniTool Partition Wizard and there were no bad sectors in its Surface Test.
A filesystem check from a linux program is worth considering. The filesystem being checked needs to be unmounted for the check. My preference is to run the check from a live disk or live rescue disk, as mentioned in post #15. Assuming the linux system is running systemd, the following steps should work:
Boot the live disk.
Run the command: lsblk.
Identify the root partition device name, e.g. /dev/sda2
Run the file check as root and watch the results on screen; for example, assuming the root partition identified above is /dev/sda2:
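Code:
sudo fsck -f /dev/sda2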
If there's another partition, such as one for /home, file check that partition as well. No need to check a swap partition since it has no filesystem. When the fsck program runs it will identify problems and ask the user on screen whether to fix them or leave them, so one needs to watch for that.
Another means of file checking in a systemd system is to add kernel parameters such as the following to the kernel command line:
Code:
fsck.mode=force fsck.repair=yes
To do that, do:
Boot to the grub menu.
Hit e when the grub menu appears; the boot entry's text should appear on screen for editing.
Navigate down to the line that starts with "linux".
Add the above 2 kernel options, a space apart, after a space at the end of the existing line (within the quotes if they exist); see the example after these steps.
Hit Ctrl+x to boot.
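With the two options added, the edited line might look something like this (a sketch only; the kernel file name, UUID and existing options will differ from system to system):
Code:
linux /boot/vmlinuz-6.8.0-51-generic root=UUID=... ro quiet splash fsck.mode=force fsck.repair=yes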
The file checking will repair automatically if there are problems it can handle. These kernel options only last for that boot, so on the next boot, the original boot conditions will apply.
When the machine is next booted after the file checking, it's useful to inspect the logs and page through with one or more of these sorts of commands:
Code:
journalctl -b
journalctl -b -x -p 3
journalctl -b | grep -i error
less /var/log/kern.log
less /var/log/syslog
grep -i error /var/log/syslog
The health of the hard disk can be assessed from a live disk so long as its device name is identified. It can be identified from the output of the command: lsblk. If the disk is /dev/sda, the command, as root, would be as follows (smartctl comes from the smartmontools package, which may need installing in a live session):
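Code:
sudo smartctl -a /dev/sda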
The output may be quite extensive and bears close reading.
The large_file and huge_file filesystem features are enabled by default on ext4 in debian and most debian based distros in my experience, so tune2fs shouldn't be needed to turn them on.
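To confirm on a given partition (again assuming /dev/sda2), list the filesystem features with tune2fs and look for large_file and huge_file in the output:
Code:
sudo tune2fs -l /dev/sda2 | grep -i 'filesystem features'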
The memtest86+ program is worth running, even a single pass at least, since functioning RAM is essential.
There are a number of issues that may arise in downloading very large files.
A reliable network is needed.
Timeouts, retries and speed variations can cause unpredictable outcomes; see the example command after this list.
The disk input/output functioning needs to be relatively free of "saturation", that is, it needs to be free of heavy use.
The more RAM the better. Something like 16 GB plus sounds about capable to me, but that's a guess, since files of around 10 GB are the largest downloaded here, and that's a rare event.
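For command-line downloads, resuming and retrying can be handled with something like the following wget invocation (the URL is only a placeholder, and services like Google Drive or Proton Drive may need their own clients):
Code:
wget --continue --tries=10 --timeout=60 --retry-connrefused https://example.com/bigfile.bin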
Finally, check user limits with the ulimit command just in case there's some issue there:
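Code:
ulimit -a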
The "file size" variable should be "unlimited"