About This Linux

TheBearAK

A co-worker wrote the script below a long time ago. It can do many things, but it obviously stumbles when detecting hard drives, because /dev/sd[a-z] is not as common as it once was.
I'm curious whether anyone wants to improve this script to work in more situations. I find it handy for what it reports.

======== Start ========

#! /bin/sh

hostname=`hostname`
# legacy drive count: only sees /dev/sd? devices (misses nvme etc.)
drives=`ls /dev/sd[a-z] 2>/dev/null | wc -l`
echo
echo
echo =================================================
echo " $hostname"
echo =================================================

# echo ========
proctype=`grep 'model name' /proc/cpuinfo | head -n 1`
numcpu=`grep -c '^processor' /proc/cpuinfo`
procspeedmhz=`grep 'cpu MHz' /proc/cpuinfo | head -n 1 | awk '{printf "%7.2f\n", $4}'`
procspeedbogo=`grep 'bogomips' /proc/cpuinfo | head -n 1 | awk '{printf "%4.0f\n", $3}'`
echo $proctype | cut -d: -f2
echo $numcpu cores
echo $procspeedmhz MHz \($procspeedbogo bogomips\)
# echo ========
memory=`grep MemTotal /proc/meminfo | awk '{printf "%4.1f\n", $2 / 1024 / 1024}'`
echo $memory GB RAM
echo -------------------------------------------------
echo $drives hard drives

# /proc/mdstat can exist even with no arrays (md module loaded),
# so look for an actual md device line instead of the file alone
if grep -qs '^md' /proc/mdstat
then
raid=True
else
raid=False
fi

if [ "$raid" = "True" ]
then
df=`df -h | grep md | grep -v boot | grep -v var | awk '{print $2;}'`
freedisk=`df -h | grep md | grep -v boot | grep -v var | awk '{print $4;}'`
echo $df of RAID storage \($freedisk free\)
else
df=`df -k | grep '/dev/sd[a-z]' | awk '{sum += $2} ; END {printf "%4.0fG\n", sum / 1024 / 1024}'`
freedisk=`df -k | grep '/dev/sd[a-z]' | awk '{sum += $4} ; END {printf "%4.0fG\n", sum / 1024 / 1024}'`
echo $df of non-RAID storage \($freedisk free\)
fi

echo -------------------------------------------------
# echo ========
debver=`cat /etc/debian_version`
debverc=`lsb_release -cs`
kerver=`uname -r`
echo debian version $debver
echo debian codename $debverc
echo kernel version $kerver
echo
df -h
========= End ==========
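A minimal sketch of a more portable `drives=` line, assuming `lsblk` (part of util-linux on most distros) is available — it counts whole disks regardless of naming (sd*, nvme*, vd*, mmcblk*...):

```shell
# Count whole disks via lsblk instead of globbing /dev/sd[a-z]:
# -d lists only whole devices (no partitions), -n drops the header,
# and -o TYPE prints one type keyword (disk/rom/loop/...) per device.
drives=$(lsblk -dno TYPE 2>/dev/null | grep -c '^disk')
echo "$drives hard drives"
```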
 


Hmm... I'll ping @JasKinasis as this is his kinda question.
 
IDK offhand.

For getting a list of HDs/partitions - perhaps take a look at the output of the lsblk command?!

Maybe you could use that to display the information you want?
It will show any other block devices that are attached to the system though.
For example:
Any applications installed as snaps will have block devices associated with them.
But you might be able to filter out the results from lsblk to show only the ones you're interested in.
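Something along these lines might do the filtering — the sample output below is illustrative, not from a real machine; a real run would pipe `lsblk -dno NAME,SIZE,TYPE` itself through the same awk filter:

```shell
# Illustrative sample of `lsblk -dno NAME,SIZE,TYPE` output
sample='sda 931.5G disk
nvme0n1 476.9G disk
sr0 1024M rom
loop0 55.4M loop'

# Keep whole disks only, dropping snap loop devices and optical drives
printf '%s\n' "$sample" | awk '$3 == "disk" { print $1, $2 }'
# prints:
#   sda 931.5G
#   nvme0n1 476.9G
```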
 
fdisk -l | grep Disk | grep dev

or maybe...

fdisk -l | grep Disk | grep -E 'sd|nvme'
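Worth noting that `fdisk -l` generally needs root. A root-free sketch using sysfs instead (assumes a Linux `/sys` layout; the skip list here is illustrative, not exhaustive):

```shell
# List whole block devices from /sys/block, skipping loop devices
# (snaps), ram disks, and optical drives (sr*)
for dev in /sys/block/*; do
    name=${dev##*/}    # strip the directory prefix
    case "$name" in
        loop*|ram*|sr*) continue ;;
    esac
    echo "$name"
done
```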
 
Always wondered why someone hasn't written a mini-app or something to gather this sort of information.

I don't know much about scripting myself. Just really basic stuff.

Good suggestions above. Thanks.
 
Always wondered why someone hasn't written a mini-app or something to gather this sort of information.

Open up a terminal and try this:
Code:
inxi -Fnxz

It's installed on many systems, but if you don't have it (and you use apt), then install with:
Code:
sudo apt install inxi
 
That is great! Not sure why I never found that utility. Thanks!
Glad you like it! You might even bump into the developer (@h2-1) here on the forum sometime... he is still actively improving it. :cool::)

Besides the "-Fnxz" options, there are many others available, depending on your needs. We often suggest that set of options to users here to help diagnose their problems. Check the man page for full details.
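For instance, since this thread started with drive detection, inxi can report just the storage section on its own — a quick sketch, with a guard in case inxi isn't installed (per the man page, `-D` is the hard Disk section):

```shell
# Show only the drive/disk section of the inxi report
if command -v inxi >/dev/null 2>&1; then
    inxi -D
else
    echo "inxi not installed"
fi
```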
 
he is still actively improving it. :cool::)

Unless they abandon the project, it'll be one of those that gets constant updates pretty much forever - or until computer architecture changes in ways I can't even begin to think about. The landscape is constantly changing, so they're in it for the long haul.

(Something that I can appreciate.)
 
Unless they abandon the project, it'll be one of those that gets constant updates pretty much forever - or until computer architecture changes in ways I can't even begin to think about. The landscape is constantly changing, so they're in it for the long haul.
It only takes occasional rewrites to handle new logic for more complex scenarios, and when I do it right, that covers a lot of future cases too, or makes them easy to handle later. Coming up next is massively enhanced CPU support; it's almost ready to go and will go into testing now. That's actually why I dropped in today, to start that.

These computer architecture changes have already happened in many cases in ways people can't even begin to think about, that's certainly ongoing with CPUs, but so far most are getting somewhat/relatively seamlessly handled internally by inxi, without users really knowing they are happening, beyond magically one day suddenly entirely new sub types of data appear.

Code:
CPU:
  Info: 2x 8-Core
    model: Intel Xeon E5-2620 v4
    bits: 64
    type: MT MCP SMP
    arch: Broadwell
    family: 6
    model-id: 4F (79)
    stepping: 1
    microcode: B00003E
    cache:
      L1: 2x 512 KiB (1024 KiB)
        desc: d-8x32 KiB; i-8x32 KiB
      L2: 2x 2 MiB (4 MiB)
        desc: 8x256 KiB
      L3: 2x 20 MiB (40 MiB)
        desc: 1x20 MiB
    flags: avx avx2 ht lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx
    bogomips: 4199
  Speed (MHz):
    avg: 1889
    high: 2339
    min/max: 1200/3000
    cores:
      1: 2046
      2: 2022
      3: 1869
      4: 1900
      .....

What's unfortunately looking increasingly likely is starting to drop BSD support at some point in the future, at least for the non-OpenBSD BSDs. They are just lagging so far behind Linux now, with shriveling market shares, that it's getting increasingly difficult to support them, or to justify the hours/days/weeks required to do so. Unless they vastly up their game in terms of system data and tools, of course, which I can always dream of/hope for.

But that's just part of the ebb and flow of computing, I also don't test on VAX or IBM mainframes from the 70s, or any other legacy operating systems, or other UNIX, and that point is in my opinion much closer than those projects realize based on what I'm seeing in Linux compared to them, I just don't see any room for competition left there, they are dropping the ball in a big way, and don't seem to care at all.
 
By the way, thanks for the work you put into it.
 
This one was a lot. I had to read kernel commits, comments, etc. — this stuff was often only documented in actual kernel code and comments, and a lot of it is still changing; even now the data is slightly morphing. But the direction the kernel guys are going in with /sys data and CPU info is really excellent. In fact, some commits specifically mentioned that the changes would be good for... yep, for system information tools.
 