Dealing with Security Issues

Administering Secure Systems

Batten down the hatches! Even people who seldom log on to the Internet face the possibility that their machine's security could be compromised. Microsoft Windows has been particularly good at attracting all sorts of exploits, worms and viruses. Compared with Windows, Linux is far less hospitable to the unwanted. In fact, several large Linux consulting firms have offered large sums of money to anyone who could infect a well-maintained Linux system with a virus. But there's the key: the system must be well-maintained. It's imperative to check popular Linux security sites on a daily basis. A good place to start is your distribution vendor/developer's own website.

You should also run programs as 'root' as seldom as possible. However, even taking the appropriate precautions doesn't ensure that you won't become the host of the most significant risk facing Linux: the trojan. Named after the famous Trojan Horse of Greek mythology, this is simply a program that appears perfectly normal on the surface but has in fact been modified to serve as a back door into your system, or to use your system as a host for attacks against others. These trojans don't necessarily reside in little-used programs, either. In August 2002, a "trojaned" version of OpenSSH, the open source version of the secure shell program, was distributed from official download sites. This case is particularly nasty because secure shell is used extensively as a security measure itself. It's like having corrupt policemen on duty. You may be saying to yourself: if I download a popular program from an official mirror, what fault of mine is it if it's been tampered with and I didn't know? Well, the fact is, as a good system administrator, you have the tools at your disposal to check for tampering.

Checking the integrity of packages

Most Linux distributions come with a tool called md5sum that lets you verify the "authenticity" of a downloaded package. md5sum reads the entire contents of a file and computes a "hash" (a checksum) from it. Change even a single byte of the file and the hash comes out completely different, so a published checksum lets users detect tampering. If you created a software package, for example, you could make an md5 checksum of that package. You would simply do this:

Code:
md5sum my_package > my_package.md5

and publish this on your website. Then people who were interested in using your program could download the package along with the checksum and verify the authenticity of the program. The checksum file is actually nothing more than this:

ac953e19a05816ed2159af092812f1de my_package

Those who are interested in checking the integrity of the file would type this:

Code:
md5sum -c my_package.md5

If the file hasn't been tampered with, you should get a message like this:

my_package: OK

If someone has done some funny business to it, you would theoretically get output like this:

my_package: FAILED
md5sum: WARNING: 1 of 1 computed checksums did NOT match

And it would be assumed that people would get in touch with you to tell you that your program's checksum doesn't match the package; someone has obviously altered the package, so its checksum no longer matches. It should be pointed out, however, that any cracker worth his or her salt is not just going to substitute the program on your server with a trojaned one. They will most likely replace the published checksum with one matching their altered package as well. As you can see, md5sum is good, but it is not 100 percent reliable for checking the integrity of a package. So how can you be sure? We need to go to a higher level of reliability.

GnuPG

GnuPG is encryption software that now comes standard on most major Linux distributions. It is the Free Software version of the popular PGP (Pretty Good Privacy) personal encryption program developed by Phil Zimmermann in the early 1990s (and for which he was the object of a US Government investigation for a few years!). I bring up the government investigation because it was assumed at one point that encryption would only be used to hide criminal activity from the authorities. In truth, you can't really stop unscrupulous people from using technology for bad purposes, but that doesn't mean you should stop law-abiding citizens from using it for good ones. GnuPG and PGP can be used to encrypt files for secure communication, but they are also used more and more to establish the authenticity of a company's or an individual's work. A package is normally "signed" with the developer's secret (private) key; anyone who holds the corresponding public key can then verify that signature. If you were interested in checking a signed package, you would first get the public key from the developer's website. These are normally plain-text files with the extension *.asc. Then you would import the key into your "keyring".

Code:
gpg --import acme.asc

Then you would download the package and its signature. The signature usually has the same name as the package, with an *.asc extension added. Now you can verify the package:

Code:
gpg --verify acme_cool_app.tar.gz.asc

You should get a message telling you when the signature was made, by whom, and whether it is good. If it's good, you can feel fairly confident that you're dealing with an authentic package.
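With a detached signature for acme_cool_app.tar.gz sitting in the same directory, the report looks something like this (the date, name and key ID here are made up, and the exact wording varies between GnuPG versions):

gpg: Signature made Thu Aug  1 10:15:02 2002 CEST using DSA key ID 73647CFF
gpg: Good signature from "Acme Software <security@acme.example>"

If you haven't certified the developer's key yourself, gpg may add a warning that the key is not certified with a trusted signature. That warning concerns the key's trust path, not the integrity of the file itself.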


RPMs

RPM files come with their own built-in mechanism for verifying packages. As with the above example, you should get the developer's public key and import it. The most recent versions of the RPM system import the key with rpm itself rather than into your GnuPG keyring; check the documentation on your system to see what version you're using and how to do it.
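On a system with a recent RPM (version 4.1 or later), for example, the import is a single command; the key file name here is illustrative:

Code:
rpm --import acme.asc

With the key imported, you can check the integrity of a downloaded RPM in this way: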

Code:
rpm --checksig acme_app.0.02.rpm

You will get a message like this:
acme_app.0.02.rpm: md5 gpg OK

As you can see, it's no longer enough just to keep track of security alerts and download and install the updated packages. You should take extra measures to ensure that the packages you've downloaded are the "real deal" and haven't been tampered with in any way.

Bug Fixes

Computer programs contain bugs. This is, at present, an inescapable fact of life. Some of these bugs can be exploited for less than noble purposes, and then they become security issues. Some are just silly little things (the developer forgot to make menu item 3 do something). Other bugs may cause the program to crash at inopportune times and result in data loss. Regardless of severity, you will need to update programs from time to time, whether the bug in question is harmless or extremely evil and annoying. You should follow the same procedures as above to verify the authenticity of the packages.

Installing New Versions

Developers normally release new versions of their software; change is really the name of the game in software development. If a new version contains a bug fix or closes a security hole, it's imperative that you install it. What we'll consider here, though, is the installation of a new version that has been released to offer users new features.

Major Updates

Sometimes a company or individual developer releases a new version that contains major changes to the program. It could be a total re-write of the application in question, or the addition of multiple new features. If you're running a web server in a production environment (a public server that is vital to your company's revenue stream, for example), you sometimes need to make some hard decisions about updating to a major version change. The update might "break" existing scripts. If you don't have a development or test server to try out the new version, you might be playing with fire if you just go ahead with the update on your public server. It's always best to ease changes in: try them out first in a development environment, create a mirror of your production environment on a different server, and watch for any anomalies. You may find that the major version isn't worth installing. Recently, for example, many organizations running the Apache webserver in version 1.3.x have considered it unnecessary to update to the latest major version change, version 2.0.

Simple Programs

You've heard that a fairly small program that you can't live without has been updated. In the Linux world you can be fairly sure that an update isn't going to break anything major. Most Linux programs aren't created under a strict profit incentive system, so there's no reason for the developer not to provide backwards compatibility. My relationship with Windows 95 was soured very quickly when I saw that I couldn't open up my Word for Windows 2 files in Windows 95's WordPad. I have yet to have an experience like this in Linux. Of course, some programs dynamically link to new libraries. If you don't have these libraries installed on the system, you will normally be unable to run the new version of the program. Both the RPM system and Debian's apt-get package system will check dependencies for you before you install. You may find that you'll have to update some libraries.
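For example, both toolsets can tell you what a package needs before you commit to installing it; the package names here are illustrative:

Code:
# list what a downloaded RPM requires before installing it
rpm -qpR acme_app.0.02.rpm

# on a Debian system, show a package's dependencies
apt-cache depends acme-app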

Libraries

Without going into a lot of detail, a library is a piece of code that provides your program with something it will use over and over again. The buttons that GUI programs commonly use are rendered using libraries. That is just one example; there are literally thousands of different libraries that your Linux system can take advantage of. There are two basic types. Some are compiled right into the program; these are called statically linked libraries. You will seldom have a problem installing programs built this way, because the developer put the libraries the program needs right into the binary (the executable file). The other type is the dynamically linked or shared library. Here the program depends on the existence of a certain library on your system; when you start the program, it looks for that library to provide the functionality it needs. For example, if you download the Opera Internet browser, on their download page you will see that they offer two types of files.

As the browser depends on the QT libraries for its GUI, Opera provides one file with these libraries statically linked and another without them. In the latter case, the executable will look for the QT libraries already on your system. The advantage of the dynamic or shared library is the smaller size of the executable file. The major disadvantage, however, is that you may not have that library installed, or worse yet, that you may have an older version of it installed. I say worse because you may find that a number of programs need the older version of the library to run. If you updated your libraries, you would invariably "break" these programs. Actually, if you use standard tools like RPM or apt-get, you would be told explicitly that the new libraries conflict with the dependencies of other programs. This is the nasty and dreaded dependency conflict. Here you're faced with two options: update the older programs too, or forget about the new version of the other program. That's not what management gurus would call a win-win situation. These are, of course, value judgments that you have to make, along with appropriate guidance from the people who pay the bills!
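If you're ever unsure which shared libraries a binary expects, and whether they're present, ldd will tell you. A quick sketch, assuming Opera is installed at the usual path:

Code:
# list the shared libraries a binary needs; missing ones show as "not found"
ldd /usr/bin/opera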

Server applications

As alluded to earlier with Apache, your organization may decide that it's time to update the web server, mail server or any other server software your machines may be running. Again, as mentioned previously, this can be a thorny issue. Much like the situation with dynamic or shared libraries, some servers depend on secondary modules to help with the work they're doing. Such is the case with Apache, which employs modules to provide features for delivering web content. Two common modules used by Apache to provide interactive web pages are mod_perl and mod_php, which allow Apache to deliver content using Perl and PHP scripts respectively. Perl, at the same time, is a programming language that has its own modules. As a recent case I was involved with shows, you may decide to update Perl modules (or remove some) and find that Perl scripts on your server "break". That's not good in a production environment.
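Before updating or removing Perl modules on a production machine, it's worth checking what you actually have installed. A quick sketch, using the standard CGI module as an example:

Code:
# check that a Perl module loads, and print its installed version
perl -MCGI -e 'print "$CGI::VERSION\n"'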

The Linux Kernel

The mother of all parts of the Linux operating system is the Linux kernel. That is what Linux really is: what Linus Torvalds started working on in 1991, and what eventually became the base of the whole Linux world. At the time of this writing, kernel 2.4 is the most recent major stable version, and development on version 2.6 (called version 2.5 while it is still not "stable") is quite advanced. 2.6 is reportedly right around the corner, which always brings up the question: should I update to the new kernel?
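A sensible first step is simply to note which kernel you're running now:

Code:
# print the version of the currently running kernel
uname -r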

Hardware considerations

This is mostly taken care of. The Linux kernel is all about supporting hardware, as any kernel is. A new kernel carries support along from past versions, so you should have no problem on the hardware side. People normally run into issues with bleeding-edge hardware, and as that's not a matter of backwards compatibility, it's not an update problem. But just as anecdotal evidence that you may run into problems: when I switched from the 2.2 to the 2.4 kernel, I noticed that the driver for a common network interface card, the RealTek RTL8139, was changed in the 2.4 kernel. This driver, in my opinion, was worse than the old one and did cause some problems at first. Normally, however, you shouldn't run into hardware support problems on existing equipment when updating to the newest version of the kernel.
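If you want to compare driver modules before and after a kernel update, lsmod lists what the running kernel has loaded; the grep pattern here is just an example for the RealTek card:

Code:
# list loaded kernel modules, filtering for the RealTek NIC driver
lsmod | grep 8139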

Software considerations

The two major reasons for updating kernel versions are to get support for new hardware and to take advantage of enhanced features in the kernel for running programs. Some of these enhanced features may necessitate a move to new software for some tasks. A major change in the latest stable kernel that comes to mind is the greatly improved network filtering capabilities. I remember that this prompted me to move to netfilter and iptables for developing firewalls. Previously, the standard firewall method was ipchains, and iptables provided a distinct improvement over it.
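As a minimal sketch of the iptables style, and nothing like a complete firewall (the SSH rule is only an example):

Code:
# drop incoming traffic by default
iptables -P INPUT DROP
# let replies to connections we initiated back in (stateful matching, new with netfilter in 2.4)
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# allow incoming SSH connections
iptables -A INPUT -p tcp --dport 22 -j ACCEPT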

There are all kinds of things to keep in mind when you compile a new kernel. The issues are as varied as the types of machines you might run into out there. We'll go into more detail on kernel issues in our section on compiling the Linux Kernel.
 


 
