Script behaves differently in home folder than other folders

CrazedNerd
Here is the best script I have made, and I use it on a pretty regular basis, but for some reason it does not work properly in my home folder. I even put a message at the top to point that out:

Code:
#!/bin/bash
#this will look through every file in a working directory and
#find the text files that contain a string
echo -e "Enter string you want to find in your files. This script only searches files"
read -p "in your working directory. Don't use directly from home directory: " term

echo -e "\nUN-HIDDEN FILES\n"

FILES="$(grep -lIs "$term" *)"
#options mean show files containing text of file, omit binary files,
#omit error messages.

#file takes grep output to display their type,
#then sed removes error messages when files have spaces
file $FILES | sed '/No such file or directory/d'

FILESH="$(grep -lIs "$term" .*)"

echo -e "\nHIDDEN FILES\n"

file $FILESH | sed '/No such file or directory/d;'

echo -e "\nFind and grep can't process these files because they have spaces in them:\n"

#sed removes directories and their listings by quitting sed when it finds
#the first directory listing.
ls *' '* | sed -n '/:$/q;p;'

echo -e "\nIf file has returned \"Usage:\", then the search has come back with no results"
echo -e "for that type of file."

When I use this script directly in my home directory, for some reason it just prints out all of the files located in the home directory. When I use it in any other directory (for example, I have a directory just for educational scripts that don't have any practical value, and also Documents), it does what it's supposed to do and prints only the names and file types of text files that contain the string. It has a lot of other aspects, but the important part of this is:

Code:
grep -lIs "$term" *

Running grep by itself like that, with a typed regular expression, has the same issue.
 


Have you posted your entire script here? Or is this just a snippet from it?

As things stand - regarding the use of "$term" in your grep command - I don't see a variable called $term defined anywhere in your script. And because you've used double quotes, $term will be substituted with an empty string - unless $term happens to be an environment variable, in which case your search string will be whatever value that variable holds.
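To illustrate - here's a throwaway demo in a scratch directory, with made-up filenames: when $term is unset, grep receives an empty pattern, and an empty pattern matches every line. So grep -l lists every readable file, which would look exactly like "it just prints out all of the files":

```shell
#!/bin/bash
# Demo (scratch directory, made-up filenames): an empty pattern
# matches every line, so grep -l lists every readable file.
tmp=$(mktemp -d)
cd "$tmp" || exit 1
printf 'alpha\n' > a.txt
printf 'beta\n'  > b.txt

unset term
grep -lIs "$term" * | wc -l    # prints 2: both files "match" the empty pattern
```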

But if you're searching for the literal string $term in your files, you will need to either escape the $:
Bash:
grep -lIs "\$term" .*

Or use single quotes instead of double quotes, which will make $term a literal string and will avoid the variable substitution:
Bash:
grep -lIs '$term' .*

Likewise, I don't see a $FILES variable defined anywhere in your code.

Also, find and grep can deal with file-names containing spaces. You just need to either escape the spaces, or use double quotes to enclose the path/filenames.
If the paths/file-names being passed to find or grep are stored in variables, then dereferencing the variables inside double quotes should allow them to deal with filenames containing spaces, or other special characters.
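For example, a quick scratch-directory demo (the filename is made up) showing the difference quoting makes:

```shell
#!/bin/bash
# Demo (scratch dir, made-up filename): double quotes keep a spaced
# filename as a single argument to grep.
tmp=$(mktemp -d)
cd "$tmp" || exit 1
printf 'hello\n' > 'my notes.txt'

f='my notes.txt'
grep -l hello "$f"           # quoted: prints my notes.txt
grep -ls hello $f || true    # unquoted: grep is handed "my" and "notes.txt", finds nothing
```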

I'm a bit confused about what your script is trying to achieve there, assuming that this is all of it.

Otherwise, if this was just a snippet you can ignore most of my comments. And if you post your entire script, so I can fully understand what you're trying to do, I may be able to help further.
 
Oh, whoops: I didn't post the first part because nano's mouse support is somewhat limited, so as it stands it doesn't make any sense. I'll go and change it in the OP.
 
OK, from taking a look at that - I've found a solution to the problem you're having with spaces in the names.

Your comment in your code is a little misleading - it wasn't sed or grep that had problems with filenames with spaces.
It's actually the file command that is balking at the spaces in the file-names.

The way around that would be to read the file-names from grep into an array.
So:
Bash:
readarray -d '' -t FILES < <(grep --null -lIs "$pattern" *)
The readarray command is a newer command, added in version 4 of bash (the -d option used here arrived in bash 4.4), and it saves you from having to set up a read loop to capture the output from grep.

It's actually a builtin alias/shortcut for the mapfile builtin command. So to see all of the options for mapfile/readarray, use the command help mapfile.
help readarray also works, but if memory serves, it only displays a brief summary of the options, without much explanation. Whereas the help page for the mapfile builtin documents everything fully!

In the above - the readarray command reads the output from grep into an array called FILES.
The -t option removes the trailing delimiter from the end of each entry read - in this case a NUL, rather than the default newline.
The -d option sets the delimiter to '' - note that's two single quotes ', NOT a double quote ". The two single quotes specify an empty literal string as the delimiter, which bash treats as a NUL character.
Also note in the grep command, we're also using the --null option, so grep returns a list of null terminated strings. So each file-name will end with a null character. This is how we get around the problem with the spaces in the file-names.
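If you want to convince yourself it works, here's a self-contained check in a scratch directory (the filename is made up):

```shell
#!/bin/bash
# Self-contained check (scratch directory): a filename with spaces
# survives the null-delimited round trip as a single array element.
tmp=$(mktemp -d)
cd "$tmp" || exit 1
printf 'needle\n' > 'file with spaces.txt'

readarray -d '' -t FILES < <(grep --null -lIs needle ./*)
echo "${#FILES[@]}"   # prints 1 - one element, not three broken words
echo "${FILES[0]}"    # prints ./file with spaces.txt
```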

Then, in order to safely run the list of files through file, you could use a for loop.
Bash:
numFiles=${#FILES[@]}
for (( index=0; index<numFiles; index++ )); do
    file "${FILES[$index]}"
done
Where numFiles is the number of files, which we get by determining the size of the array via ${#FILES[@]}
The # before FILES is what gets us the size of the array.
Then it's just a case of looping through the array and passing each filename to the file command. The double quotes around the variable ensure that the value is properly quoted and any spaces or special characters are all treated as a part of the path/filename.
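As an aside, the same loop can be written with bash's for-each form, which iterates the array values directly and avoids the index bookkeeping (the array here is stand-in data for illustration):

```shell
#!/bin/bash
# Equivalent for-each form: iterate the array values directly.
# FILES here is stand-in data; in the script it comes from readarray.
FILES=( 'plain.txt' 'two words.txt' )

for f in "${FILES[@]}"; do
    printf '%s\n' "$f"    # in the real script this would be: file "$f"
done
```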

And you'd repeat that process for your hidden files.

Here's a quick script I wrote off the back of what you've got.
I've modified it a bit further. Feel free to do what you want with it.

I'll put it and all of its usage information in a spoiler, JIC you don't want to see it yet!
Bash:
#!/usr/bin/env bash

# This script takes a single parameter, which should be a single string
# The string can be a word, or phrase, or a regex.
# You must double quote the string if using spaces or special characters, to prevent the parameter from being interpreted as multiple tokens.

# Display an error message and quit
die()
{
    echo -e "Error: $1"
    echo "Usage: $0 {search argument}"
    echo
    echo "Searches the local directory for files containing a particular pattern"
    exit 1
}

# This function takes two parameters, a search pattern and a string "Hidden" or "Unhidden" and then attempts to find files that contain the pattern
checkfiles()
{
    if [[ "$2" == "Unhidden" ]]; then
        readarray -d '' -t files < <(grep --null -lIs "$1" *)
    elif [[ "$2" == "Hidden" ]]; then
        readarray -d '' -t files < <(grep --null -lIs "$1" .*)
    fi

    numFiles=${#files[@]}
    echo -e "\n\n$2 files containing \"$1\" : $numFiles"
    for (( index=0; index<${numFiles}; index++)); do
      file "${files[$index]}"
    done
}

# Ensure the script received a single parameter
if [[ $# -ne 1 ]]; then
    die "Incorrect number of parameters.\nRequires a search term, or regex pattern"
fi

checkfiles "$1" "Unhidden"
checkfiles "$1" "Hidden"


In the above, I've created a couple of functions.
The first function die is just a generic error handling mechanism that I put into most of my scripts - if there is a problem, it allows us to output an error message and then exit showing some usage information. So it takes a string containing an error message as a parameter.

The next function is checkfiles, which actually performs the search for files containing a phrase and then runs them through file to determine what type of file they are.
This takes two parameters. The first parameter is the search pattern.
The second parameter specifies the type of files to look for. This parameter is a string that must be either "Hidden" or "Unhidden". The value of the second parameter determines which command it uses to search for files, before displaying a summary and the list of files it found.

And the main script checks the number of parameters it received. If we got one parameter, we have our search pattern and can continue. Otherwise we call the die function to display an error message and some usage information.

Then we simply call the checkfiles function twice.
The first time we try to find "Unhidden" files containing the pattern that was received by the script.
The second time we try to find Hidden files containing the passed-in pattern.

Save it to a directory in your $PATH and make it executable.
Personally, I tend to put scripts like this in my personal bin directory ~/bin/.
Once you have it somewhere in $PATH - it will be available to you wherever you go in the terminal.

Run it like this:
Bash:
script searchPattern

And it should return output that looks something like this:
Code:
Unhidden files containing "searchPattern" : 1
/path/to/file.txt : ASCII text

Hidden files containing "searchPattern" : 1
/path/to/hiddenfile.txt : ASCII text
So you'll see a summary of the number of files found, their filenames and the types of files.

And if it fails to find a result you should see something like this:
Code:
Unhidden files containing "searchPattern" : 0

Hidden files containing "searchPattern" : 0

No more problems with file-names with spaces, or other special characters. If the current working directory has any greppable files that contain the search term, it should be able to list them and tell you what types of files they are.

NOTE: I haven't actually tested any of the code I've posted on this page. But conceptually speaking, it SHOULD work with plain words, phrases, or regexes. It SHOULD side-step the issues you were having with spaces in the file-names. And it SHOULD work no matter what directory you are currently in.

Like I say I'm 99.99% certain it will work. If there are any problems, it's probably not too far off! My clumsy, fat drummer fingers may have made some unfortunate typos! Ha ha!

I'll try testing it later this evening if I can.
 
Okay, I will test it, and if it cleans up the output then yay. I am still annoyed, though, that I have no idea why it behaves differently in my very crowded home folder than in other directories.
Okay, i will test it and if it cleans up the output then yay. I am still annoyed though that i have no idea why that behaves differently in my very crowded home folder than the other directories.
 
Well, your script works, but it behaves exactly the same way my other script does with respect to directories. There's nothing in my .bashrc file that should influence that behavior, but I will copy it to virtual machines, because this is the most interesting aspect of this problem. If I could get some more features added, and keep it clean, then it would be worth a repository on GitHub... the best version would be something that can also parse text in binary files, and somebody has probably already done that.
 
I’ve tested the script on my laptop this evening and I haven’t seen the script behaving differently. It behaves consistently in every directory for me. It only identifies files that contain the search term.

Can you give a more concrete example of the difference between running it in your home directory and running it in another directory? What search pattern are you using? and what files are getting matched?

It only looks for files in the current working directory. It doesn’t look in sub-directories. But if you want it to, you could add the -R flag to grep. That should recursively search for files.
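For example, here's a sketch of the recursive variant (the scratch directory and filename are made up for illustration) - grep -R descends into sub-directories, and --null still protects spaced names:

```shell
#!/bin/bash
# Sketch of the recursive variant (scratch directory for illustration):
# grep -R descends into sub-directories; --null still protects spaces.
tmp=$(mktemp -d)
mkdir -p "$tmp/sub"
printf 'needle\n' > "$tmp/sub/deep file.txt"
cd "$tmp" || exit 1

term=needle
readarray -d '' -t FILES < <(grep --null -RlIs "$term" .)
echo "${#FILES[@]}"   # prints 1
echo "${FILES[0]}"    # prints ./sub/deep file.txt
```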
 
Right, what I mean is the top level of my home directory, /home/userid. I think part of the issue is that home directories aren't supposed to have anything but configuration files.

In terms of the pattern I use, I always try to use something that I know is absent from most of my files, like "loop". I don't know what's going on, but the presence of errors tends to obfuscate things, along with weird file names, files installed by other programs, and directories. I literally copied and pasted your script, so if it has something to do with my naughty behavior, then I can't tell.
 
No, I just screwed up. All I know is that my original script doesn't obey the null-character and error conventions of bash, so it shouldn't be used... to be honest, I still like it, just because it's pretty terse and I mostly have a grasp on what it does.
 
My home directory is chock-full of all kinds of different files!
I have lots of sub-directories containing organised collections of other things, but my home directory itself is a mess. I have plenty of things that could probably be moved elsewhere to organise things a bit better.

So my script worked? But yours is misbehaving still? I'm confused now.

If you want, I can post something that is more faithful to your original script, albeit with the readarray fix in it, if it helps?
That would just be this:
Bash:
#!/bin/bash
#more lightly modified version of CrazedNerd's original script.
#this will look through every file in a working directory and
#find the text files that contain a string
read -rp "Enter string you want to find in your files: " term

echo -e "\nUN-HIDDEN FILES\n"
readarray -d '' -t FILES < <(grep --null -lIs "$term" ./*)
numFiles=${#FILES[@]}
for (( index=0; index<numFiles; index++ )); do
  file "${FILES[$index]}"
done

echo -e "\nHIDDEN FILES\n"
readarray -d '' -t FILESH < <(grep --null -lIs "$term" ./.*)
numFiles=${#FILESH[@]}
for (( index=0; index<numFiles; index++ )); do
  file "${FILESH[$index]}"
done
I've restored your prompt, albeit slightly condensed, because the warning about the home directory should no longer apply. I've made the output look exactly the way you originally had it.
And I've made a couple of very minor tweaks to my original suggestions, based on recommendations from the shellcheck plugin I have installed in vim.
Hopefully that looks a little more familiar to you.

And here's a revised version of my script, which also implements some minor changes suggested by shellcheck AND makes the output look a bit more like your original script:
Bash:
#!/usr/bin/env bash

# This script takes a single parameter, which should be a single string
# The string can be a word, or phrase, or a regex.
# You must double quote the string if using spaces or special characters, to prevent the parameter from being interpreted as multiple tokens.

# Display an error message and quit
die()
{
    echo -e "Error: $1"
    echo "Usage: $0 {search argument}"
    echo
    echo "Searches the local directory for files containing a particular pattern"
    exit 1
}

# This function takes two parameters, a search pattern and a string "HIDDEN" or "UNHIDDEN" and then attempts to find files that contain the pattern
checkfiles()
{
    if [[ "$2" == "UNHIDDEN" ]]; then
        readarray -d '' -t files < <(grep --null -lIs "$1" ./*)
    elif [[ "$2" == "HIDDEN" ]]; then
        readarray -d '' -t files < <(grep --null -lIs "$1" ./.*)
    fi

    numFiles=${#files[@]}
    echo -e "\n$2 FILES CONTAINING \"$1\" : $numFiles\n"
    for (( index=0; index<numFiles; index++)); do
      file "${files[$index]}"
    done
    echo
}

# Ensure the script received a single parameter
if [[ $# -ne 1 ]]; then
    die "Incorrect number of parameters.\nRequires a search term, or regex pattern"
fi

checkfiles "$1" "UNHIDDEN"
checkfiles "$1" "HIDDEN"

The updated version of your script is a lot shorter than mine in terms of lines of code, and it prompts the user to interactively enter the search term after the script has started. The stuff about files with spaces is gone. So is the stuff about weird behaviour.

My version is non-interactive: the user passes the search term in the terminal when they invoke the script, and there's a checking mechanism to ensure that a term was passed to it, etc.
But my version still outputs the number of files found.

Both scripts behave identically and identify the same sets of files. They should both identify files in any working directory that contain the specified word or pattern. Including your home directory.

Incidentally, I've tested both scripts with regexes too, like [J,j]ason and that also works. At least with the simple regexes I've tried!
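If anyone wants to reproduce that regex test, here's a small sketch using throwaway files (the names are invented for the demo). One minor note: inside a bracket expression a comma is matched literally, so [Jj]ason is the conventional spelling of that pattern; [J,j]ason also works, it just additionally matches ",ason".

```shell
#!/usr/bin/env bash
# Quick check that the grep/readarray pipeline accepts regexes.
# The temporary files are invented purely for the demonstration.

tmpdir=$(mktemp -d)
printf 'Jason\n' > "$tmpdir/upper.txt"
printf 'jason\n' > "$tmpdir/lower.txt"
printf 'mason\n' > "$tmpdir/other.txt"

# A bracket expression matches either case of the first letter.
readarray -d '' -t hits < <(grep --null -lIs '[Jj]ason' "$tmpdir"/*)

echo "${#hits[@]}"   # 2 - upper.txt and lower.txt match, other.txt doesn't

rm -rf "$tmpdir"
```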
 
My home directory is chock-full of all kinds of different files!
I have lots of sub-directories containing organised collections of other things, but my home directory itself is a mess. I have plenty of things that could probably be moved elsewhere to organise things a bit better.

So my script worked? But yours is misbehaving still? I'm confused now.

I always try to go for short, since I think less always tends to be better, but I mostly don't like yours because I don't understand that new type of redirection where it has to be next to the string, file, variable, etc. Part of my anxiety over this has been that I get tired of asking programming questions, since I got punished pretty badly by Stack Exchange for not understanding how to "make a good post" initially; luckily stackoverflow.com lifted my several-year-long ban on asking questions there. I should probably read the whole bash man page already; 80 pages isn't really that long for a book. The only thing I don't like about the man pages is that they are written more as reference material than an explain-this-robot-language-to-me type of deal.

I have saved your script and put your screen name in it; it's better partially just because there are functions, which are better for improving/recycling code.
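For what it's worth, the "new type of redirection" is called process substitution: <(command) lets bash present a command's output as a readable pseudo-file, and the leading < simply redirects that into stdin, just like < somefile. A small sketch of why it gets used instead of a plain pipe:

```shell
#!/usr/bin/env bash
# Process substitution: <(command) exposes the command's output as a
# readable pseudo-file (e.g. /dev/fd/63); the leading "<" redirects it
# into stdin, just like "< somefile".

# With a pipe, the reader runs in a subshell, so variables it sets
# are lost when the pipe finishes:
count=0
printf 'a\nb\nc\n' | while read -r _; do count=$((count+1)); done
echo "$count"   # prints 0 - the increments happened in a subshell

# With "< <(...)" the loop runs in the current shell, so the variable
# survives - the same reason readarray gets fed this way:
count=0
while read -r _; do count=$((count+1)); done < <(printf 'a\nb\nc\n')
echo "$count"   # prints 3
```

You can see the pseudo-file itself with something like echo <(true), which prints a path such as /dev/fd/63.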
 
Stackoverflow can be extremely toxic from what I've seen on there. I'm not surprised you're traumatised. I've never even bothered signing up to the site. Fortunately Linux.org isn't like that. It's a safe space! Ha ha!

I'm pretty much only here for the programming/scripting questions. The threads here that are asking for endless distro recommendations, or asking about common installation problems that have already been asked and answered a million times here (and on other forums all over the web) just do not interest me any more. And I'm not here to give anybody a hard time either. Everybody has their own approaches to solving programming tasks. I usually prefer simple solutions. But I do also like to try to make my code as useful, robust and re-usable as possible. Which sometimes means making things a little more verbose, or complex.

After taking a look at your script and thinking of a way around the "files with spaces" issue (which I thought might also fix the other problems you were having), I suggested the fix using readarray, and then thought that whilst I was in that head-space, I'd have a go at re-writing your script in my own style. It wasn't meant as any kind of dig, or diss to your script in any way. Just an alternative perspective. With bash, there's almost always more than one way to approach any problem.

Regarding bash programming - the way to get better at it is to keep using it and keep writing scripts. Try writing scripts to solve lots of different types of problems, and ask questions when you get stuck. Asking a good question always helps, of course. Ha ha!

I don't think I've ever tried to read the entire bash manual, it's a very dry read. Ha ha!

Most of my learning has come from years of using bash daily and writing scripts to automate, or semi-automate various tasks. Any time I experience a new/unfamiliar error message, or I need to do something new, but don't know what tools to use, or how to use a particular tool for a particular task - I'll do some duckduckgo-fu to see if I can find anybody that has tried to do something similar, or who experienced similar problems and then experiment with any recommended solutions. So I might find an esoteric series of commands to solve a similar problem, but then I need to modify it to fit my very different needs. Sometimes it requires a bit (or even a lot) of critical thinking. And some research/experimentation, but I always manage to get there in the end. Ha ha!
 
It wasn't meant as any kind of dig, or diss to your script in any way. Just an alternative perspective.
Stackoverflow can be extremely toxic from what I've seen on there. I'm not surprised you're traumatised.
Unfortunately, I've found repeatedly that talking to anyone at all over the internet...especially when it becomes more of a depersonalized/shared/open space...is just not a very safe thing to do. There are a lot of reasons for this, so I'm not going to point fingers over the oft-mentioned "internet problem". Stack Exchange is basically just a great place for some sort of extreme technical truth-seeking, but not really that great for ANYTHING else. I'm just glad I've learned what that company is all about, and that I can use it out of frustration every once in a while instead of arguing in a hostile manner with others.

And I know you weren't trying to be an asshole, but the issue is just that I'm too hard on myself, and computers are really complicated...and if you start to get really deep into it, it's a "smart person" thing, and to be honest with you I kinda hate smart people. Ha ha ha!

On the professional side, computers are not really the gold mine they've been made out to be by people who just don't understand how things work. People have praised me for being interested in computers, but the money didn't magically flow into my pockets. I will stop there.
 
Stackoverflow can be extremely toxic from what I've seen on there.

If I may digress a bit, I'd not say 'toxic', I'd say 'strict'. There are very exacting expectations there, and that's for 'good' reasons. It's very specifically not a forum. They do not want much in the way of discussion. It's a question and answer site.

I think it has its place on this giant thing we call the internet.

Now back to your regularly scheduled thread with a reminder that we're PG13 - or something like that.
 
I pretty much agree, and I think the Stack Exchange constellation has become less toxic and more friendly over the years. People used to pile on downvotes like lemmings and do stuff like "disciplinary action will be taken if a picture is not included!", even when someone knowledgeable was able to answer the question. I'm glad it exists, because there is a lot of subject material on there that would make most people scratch their heads, and I have used it for things that aren't computer related. Also, you can use Stack Exchange as a toolbox for avoiding academic debt...
 
This is the strangest thing ever: I tried my script in a virtual machine, and it doesn't just print out every file like it does on my system...
 
Also, I take back my comments about Stack Exchange having improved over the years...my most recent questions on Stack Overflow and the other sub-forums of Stack Exchange were all well received, and even upvoted a little bit, but I am banned from asking any more questions on Stack Overflow because my previous questions "need improvements". When my questions do need improvements, I improve them the best way I can...my most recent question on Stack Overflow was about using grep without returning errors; someone answered it and I upvoted it, and nobody complained about my question. It seems strange that they are asking me to improve really old questions that couldn't be improved anyway. Basically, some mod on there decided to get butt hurt. No reason not to just use other coding forums, or just to work on things myself.
 
Basically, some mod on there decided to get butt hurt.

Mods generally can't act independently unless it's particularly egregious - like spam or racism.

People often don't complain - but rather vote. No, not the up/down vote, but 'flag', which is like voting, because five flags and the question is closed automatically.

They're very specific, pedantic even, with how questions are posted.

(I don't really use the site, but I have in the past. I'm not standing up for them, I'm just explaining what might be the cause of your issues.)

If you feel like linking the question, I might be able to tell you why it was marked.
 
(I don't really use the site, but I have in the past. I'm not standing up for them, I'm just explaining what might be the cause of your issues.)
Oh, it's not a problem. I don't think it's a horrible site, and I don't think of my experience with them as being traumatic...just really annoying and, as you said, pedantic.

I think what might have happened is that one of my 5-year-old posts received an edit recommendation, and I'm fine with it, but I want them to do it for me if that's what they want; I'm not going to do extra work because people with moderation/editing power think it could be improved.
 
