Unless, of course, you're hosting the website. If you are, just run 'ncdu' on the appropriate folder.
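For example, assuming the site lives somewhere under /var/www (adjust the path to wherever your setup actually keeps it):

# interactively browse the disk usage of the site's folder on the server
ncdu /var/www/example.com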
If you do not host the site, then no... Well, you can probably use httrack/wget to download all the pages (considered kinda rude), but that'd just be the pages, images, and things like that... There's often still a lot you won't see or download, like the database that populates it all.
So, no... That's not something you can realistically do.
That is only true if the site is pure static HTML. You won't get things like the backend - databases and the like. My linux-tips site is about 10 GB in size, and probably only about 1/20 of that is accessible publicly with tools like Pfetch or HTTrack (or wget, which you can at least set to no-clobber).
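If you go the wget route, something along these lines is a reasonable starting point - example.com is just a placeholder, and all it can fetch is whatever the server serves publicly:

# recursively grab the public pages plus the images/CSS they need;
# --no-clobber skips anything already downloaded, --wait=1 keeps it polite
wget --recursive --no-clobber --page-requisites --wait=1 https://example.com/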
HTTrack is a free (GPL, libre/free software) and easy-to-use offline browser utility. It allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer. HTTrack...
There is a high chance that it is in the default repos of the distribution you run as well.
httrack.x86_64 : Website copier and offline browser
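Installing it is usually one package-manager command away, and pointing it at a site is a one-liner (example.com and the output folder below are just placeholders):

# Fedora/RHEL-family
sudo dnf install httrack
# Debian/Ubuntu-family
sudo apt install httrack

# mirror the public-facing pages into ./my-site-copy
httrack "https://example.com/" -O ./my-site-copy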