Operating System Maintenance: What is necessary?

@MikeWalsh, I show the separate bits & pieces for the sake of those among us who are still feeling their way, Mike.

Hopefully they gain some confidence from doing things that way.

If they tire of the messing around....?....

#!/bin/bash
sudo apt-get update            # refresh the package lists
#sudo apt-get -y upgrade       # plain upgrade, left commented out in favour of dist-upgrade below
sudo apt-get -y dist-upgrade   # upgrade everything, handling changed dependencies
sudo apt-get -y autoremove     # remove packages that are no longer needed
 


DuckDuckGo used to be a good search engine, but now they admit that they have started sharing all search requests with Google, so there it went. You might look at StartPage.

Firefox keeps information that clearing everything (or so you thought you did) does not remove. If you are using Firefox-esr, look in ~/.mozilla/firefox/ for a directory that uses the extension .default-esr, copy that subdirectory name onto the end of ~/.cache/mozilla/firefox/, and then add /cache2 to that. Look in that directory for two more directories named doomed/ and entries/ along with two files named index and index.log. You can clear all of that in ~/.cache/mozilla/firefox/specialdirectorynamehere.default-esr/cache2/ after you are done with your browsing session. ClamAV has complained on occasion about files left in there after I told Firefox to clear the cache, cookies, and history for everything. Your specialdirectorynamehere will be different for each user.
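
If you want to blow that cache away from the command line after a browsing session, something like this does it (just a rough sketch that assumes you have exactly one profile ending in .default-esr; close Firefox first):

Code:
# Sketch: find the single .default-esr profile name and clear its disk cache.
PROFILE=$(basename ~/.mozilla/firefox/*.default-esr)
rm -rf ~/.cache/mozilla/firefox/"$PROFILE"/cache2/*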

As far as system maintenance goes, backups are priceless, especially when kept in cold storage. Never underestimate them. It might seem like a burden to deal with this, but when you need them they're really nice to have available. Make a backup of your main system after any changes. I back up /etc/ and user home directories separately. Even /var/ is separate. I have a script to handle each area. Let me know if you want a copy of those. I even have a custom script to handle making cold storage backups, given that it takes hours to perform, for me at least.
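
To give an idea of what those scripts look like (just a rough sketch, not my actual scripts; /media/backup is only an example destination, point it at your own drive):

Code:
#!/bin/bash
# Rough sketch: dated tar archives of /etc, /home and /var on an external drive.
DEST=/media/backup
STAMP=$(date +%Y-%m-%d)
sudo tar -czpf "$DEST/etc-$STAMP.tar.gz"  /etc
sudo tar -czpf "$DEST/home-$STAMP.tar.gz" /home
sudo tar -czpf "$DEST/var-$STAMP.tar.gz"  /var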

When you modify unit files for systemd, put them in /etc/systemd/system/ rather than /usr/lib/systemd/system/, so they are not wiped out when you update your packages. Check your system journal every now and then using journalctl -xe, or journalctl -xef if you want to keep it flowing like tail -f.
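
For example (ssh.service is only a stand-in here; substitute whatever unit you actually need to change):

Code:
# Copy the packaged unit into /etc/systemd/system/ so local edits survive upgrades.
sudo cp /usr/lib/systemd/system/ssh.service /etc/systemd/system/ssh.service
# Or keep the packaged unit and add a drop-in override instead:
sudo systemctl edit ssh.service
sudo systemctl daemon-reload

# Check the journal.
journalctl -xe     # recent entries, with extra explanations where available
journalctl -xef    # the same, but follow new entries like tail -f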

Aside from that, become familiar with your own Linux system. Go look around and learn about it. Know what normal looks like.

Signed,

Matthew Campbell
 
DuckDuckGo used to be a good search engine, but now they admit that they have started sharing all search requests with Google, so there it went.

I didn't know that!

Firefox keeps information that clearing everything (or so you thought you did) does not remove.

This I DO know (with Windows) and is one of the reasons for this thread.

If you are using Firefox-esr, look in ~/.mozilla/firefox/ for a directory that uses the extension .default-esr, copy that subdirectory name onto the end of ~/.cache/mozilla/firefox/, and then add /cache2 to that. Look in that directory for two more directories named doomed/ and entries/ along with two files named index and index.log. You can clear all of that in ~/.cache/mozilla/firefox/specialdirectorynamehere.default-esr/cache2/ after you are done with your browsing session. ClamAV has complained on occasion about files left in there after I told Firefox to clear the cache, cookies, and history for everything. Your specialdirectorynamehere will be different for each user.

Right now, this might be a bit over my head! If we were talking about Windows, I know enough to try more complicated things. (I had to do things like this EVERY DAY by the time I had enough and dove head first into Linux).

As far as system maintenance goes, backups are priceless, especially when kept in cold storage.
I have an external HDD that important things are backed up to.

Never underestimate them.

Been there, done that. I have TONS of music on CDs. Three times I had to put them ALL back on my HDD. After that, I started keeping everything on a separate drive that IS NOT connected to my PC. It stays in a drawer.

Make a backup of your main system after any changes.

...after every update, plus a few that I know work smoothly. I also have installation files on a dedicated USB drive.

I back up /etc/ and user home directories separately. Even /var/ is separate. I have a script to handle each area. Let me know if you want a copy of those. I even have a custom script to handle making cold storage backups, given that it takes hours to perform, for me at least.

When you modify unit files for systemd, put them in /etc/systemd/system/ rather than /usr/lib/systemd/system/, so they are not wiped out when you update your packages. Check your system journal every now and then using journalctl -xe, or journalctl -xef if you want to keep it flowing like tail -f.

Right now this is also over my head. I got there with the Giant, I'll get there with Linux too.

Aside from that, become familiar with your own Linux system. Go look around and learn about it. Know what normal looks like.

Signed,

Matthew Campbell

The biggest challenge right now is knowing where and how to find files, what commands to use to find them, what directory, etc, etc. Things are organized differently in Linux. Names are different. If I don't know what it is, I'm not going to delete it or alter it. I'm not there...

YET!

I'm bookmarking this page with all the suggestions.
 
DuckDuckGo used to be a good search engine, but now they admit that they have started sharing all search requests with Google, so there it went.
Where was that information published?
 
DuckDuckGo used to be a good search engine, but now they admit that they have started sharing all search requests with Google, so there it went. You might look at StartPage.
StartPage is no different; it's no longer owned by its original owner, it was sold a few years ago.

If you read the privacy policies of search engines, I don't think a truly private search engine exists; they all cooperate either with Google, Bing, or Baidu, directly or indirectly, because only those 3 have their own index.
 
@Condobloke
I think their sources are very tiny compared to the indexes of the big 3.

The page says:
As per our strict privacy policy, we never share any personal information with any of our partners that could lead to the creation of search histories.
But it fails to mention who their partners are.
 
In the main they use Bing.

The search terms they send to Bing are anonymized: no IP address, etc., of the original requester.
 
I really suggest getting rid of FF touching the disk (cache). This is not only cookies but also SSL transactions, restore backups after a browser crash, a number of bookmark backups, and so on.
Cleaning just the browser cache is not enough.
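
You can see most of that just by listing the profile directory (the directory name below is a placeholder; use your own profile):

Code:
# Look at what the profile keeps on disk: cookies.sqlite,
# sessionstore-backups/, bookmarkbackups/ and more.
ls -al ~/.mozilla/firefox/specialdirectorynamehere.default-esr/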

You can create a user.js or directly edit about:config in FF. The first method is preferable.

There are plenty of user.js files available on the web. A lot of them go way beyond keeping the system clean, making FF very secure.
Here is an example of a nice, easy, and secure user.js: https://github.com/arkenfox/user.js/
The FF user.js file is OS-agnostic, so you can use it on Windows, Linux, BSD, or Solaris.
Proper FF configuration is better than cleaning up after the browser because you don't have to remember about it once it's set up. You can set up several FF profiles and experiment with user.js safely.
You can schedule log rotation and set up temp files in tmpfs so they will be gone after each system start. Again, no need to think about it once set up.
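
A tiny starting point, if you want to try it before pulling in a full arkenfox-style file (the profile directory is a placeholder and I'm assuming these pref names are still current; check them in about:config first):

Code:
# Sketch: append two prefs to user.js in your profile directory.
cat >> ~/.mozilla/firefox/specialdirectorynamehere.default-esr/user.js <<'EOF'
user_pref("browser.cache.disk.enable", false);           // keep the HTTP cache in RAM only
user_pref("privacy.sanitize.sanitizeOnShutdown", true);  // wipe selected data when FF exits
EOF

For the tmpfs part, an example /etc/fstab line looks like this (the size is just an example; skip it if your distro already mounts /tmp as tmpfs):

Code:
tmpfs   /tmp   tmpfs   defaults,noatime,mode=1777,size=512M   0 0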

Remember that manual maintenance is not reliable. One exception is system updates, where it is better to check whether all the updates are necessary.

OS is just a tool that should make your life easier. It should not get in your way.

I bought my current laptop 3 years ago and installed Linux on it then. I have never had to reinstall the OS and never had an issue with shrinking free disk space, temp files, cache, and so on. Same with my previous system: one installation in 6 years. Configure the OS once and forget about keeping it clean. Linux will do the maintenance for you.


 
I didn't know that!

This I DO know (with Windows) and is one of the reasons for this thread.

Right now, this might be a bit over my head! If we were talking about Windows, I know enough to try more complicated things. (I had to do things like this EVERY DAY by the time I had enough and dove head first into Linux).

I have an external HDD that important things are backed up to.

Been there, done that. I have TONS of music on CDs. Three times I had to put them ALL back on my HDD. After that, I started keeping everything on a separate drive that IS NOT connected to my PC. It stays in a drawer.

...after every update, plus a few that I know work smoothly. I also have installation files on a dedicated USB drive.

Right now this is also over my head. I got there with the Giant, I'll get there with Linux too.

The biggest challenge right now is knowing where and how to find files, what commands to use to find them, what directory, etc, etc. Things are organized differently in Linux. Names are different. If I don't know what it is, I'm not going to delete it or alter it. I'm not there...

YET!

I'm bookmarking this page with all the suggestions.
I can help you with all of this since I deal with it myself. Linux can have a bit of a learning curve, but that's why we're here, to help one another. When a directory includes the ~ (tilde) symbol that means the user's home directory. So ~/tmp means the tmp directory one step below your own home directory, not the system /tmp directory. All you need to do is ask and the good people here will be happy to help you on your journey to becoming more knowledgeable about Linux. We're here for you.

Like finding your special directory name for Firefox...

Type:

Code:
cd                    # start from your home directory
cd .mozilla/firefox   # move into the Firefox settings area
ls -al                # list everything, including hidden entries

Look for a directory (it should be dark blue on your screen) that uses the extension .default-esr. Mine uses mru7dtcr.default-esr. Then use:

Code:
cd                          # back to your home directory
cd .cache/mozilla/firefox   # this is where Firefox keeps its disk cache
ls -al                      # list the profile cache directories

Now look for a directory with the same directory name that you found in ~/.mozilla/firefox/ that ended in .default-esr, which in my case is mru7dtcr.default-esr. Yours will be different. Now use:

Code:
cd mru7dtcr.default-esr/cache2   # replace mru7dtcr.default-esr with the directory name you found on your system
ls -al

The four directory entries (doomed/, entries/, index, and index.log) will be found in this directory.

Using cd by itself on the command line will take you to your home directory. You can find out where that is, if your command prompt doesn't already tell you, by using the pwd command. Your command prompt will likely abbreviate the path to your home directory by using ~/ instead of something like /home/usernamehere/.

Signed,

Matthew Campbell
 
StartPage is no different; it's no longer owned by its original owner, it was sold a few years ago.

If you read the privacy policies of search engines, I don't think a truly private search engine exists; they all cooperate either with Google, Bing, or Baidu, directly or indirectly, because only those 3 have their own index.
That's just disappointing.

Signed,

Matthew Campbell
 
The concept of a search engine is that it is supposed to create a list of websites on the Internet and look for a file named /robots.txt and a site map. Then it is supposed to crawl (look around in) each website according to the rules listed in robots.txt. Then it should keep a searchable index of what it finds, so when users try to use the search engine to find something, it compares the search request with what it knows about all of those websites. I really don't know how it finds which websites exist so it can crawl them.
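
You can look at those rules yourself for any site, for example (www.example.com is just a placeholder):

Code:
# Fetch a site's crawler rules; Disallow: lines mark what polite crawlers are asked to skip.
curl -s https://www.example.com/robots.txt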

Signed,

Matthew Campbell
 
I really don't know how it finds which websites exist so it can crawl them.
by crawling hyperlinks.

You can basically start with, say, 1000 manually inputted websites and let the crawler use the hyperlinks on those sites as a starting point.

Once the index is big and the crawler is constantly working there is no need for starting input, it just discovers new links over and over again.
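
A toy illustration of that first hop (it only pulls the absolute links off one seed page; a real crawler repeats this for every link it finds and feeds them back into the queue):

Code:
# List the absolute links found on a single seed page.
curl -s https://www.example.com/ | grep -oE 'href="https?://[^"]+"' | cut -d'"' -f2 | sort -u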
 
If that information re DDG and Google, i.e. "that they have started sharing all search requests with Google", is not readily available, then it is my contention that it did not happen.
Quote ....Most of our search result pages feature one or more Instant Answers. To deliver Instant Answers on specific topics, DuckDuckGo leverages many sources, including specialized sources like Sportradar and crowd-sourced sites like Wikipedia. We also maintain our own crawler (DuckDuckBot) and many indexes to support our results. Of course, we have more traditional links and images in our search results too, which we largely source from Bing.
and
When we send a request to a partner for information used in search results, the transfer of information is proxied through our servers so it stays anonymous. That means our partners see those requests as though they came from us instead of our users, and no unique identifiers are passed in that process (e.g., your IP address). That way, we can work with partners to produce relevant search result pages, while keeping you anonymous to them (and us!).

All of the above quoted from: https://duckduckgo.com/duckduckgo-help-pages/results/sources/


Aside from that, become familiar with your own Linux system. Go look around and learn about it. Know what normal looks like.
Now THAT ^ I totally agree with: "know what normal looks like." The best security device you will ever own is sitting in the chair in front of your computer.
 
by crawling hyperlinks.

You can basically start with say 1000 manually inputted websites and let the crawler use hyperlinks on those sites as starting point.

Once the index is big and the crawler is constantly working there is no need for starting input, it just discovers new links over and over again.
What about web sites that aren't referenced by other websites? How does it find those? Can it be told to look?

Signed,

Matthew Campbell
 
@Condobloke
I suggest that anyone interested learn about the procedure of data de-anonymization: what aggregate data is and how aggregate data can be "reverse engineered" (there is a special term for this, but I forgot it).

What about web sites that aren't referenced by other websites? How does it find those? Can it be told to look?
Not sure, but I think site names could be accessible from registrars; Google is a registrar, so it has full access to domain names registered through Google.

I suppose they purchase or exchange that info with other registrars if their own database is not enough.
 