The Linux Kernel: Configuring the Kernel Part 10

Does this series need improvement? If so, please post or email what can be improved.

DevynCJohnson
Series Index - http://www.linux.org/threads/linux-kernel-reading-guide.5384/

Wireless broadband devices that use the WiMAX protocol can be enabled (WiMAX Wireless Broadband support). This type of wireless connection usually works only if the connection service is provided by a service provider (the same concept as with 3G/4G). WiMAX stands for Worldwide Interoperability for Microwave Access, and it is intended as a wireless alternative to DSL. Broadband refers to the wide bandwidth used to carry numerous signals at once.

RF switches are used in many Wi-Fi and Bluetooth cards (RF switch subsystem support). The “RF” stands for Radio Frequency. RF switches route high-frequency signals.

Input support for RF switches also exists in the kernel (RF switch input support).

The kernel can control and query radio transmitters (Generic rfkill regulator driver). Enabling this will make a device file (/dev/rfkill). This device file acts as an interface to such radio devices.
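As a sketch of what the /dev/rfkill interface looks like on the wire: each read from it returns a small fixed-size event record (in current kernels, struct rfkill_event: a 32-bit index followed by one-byte type, op, soft, and hard fields). Rather than reading real hardware, this hedged example synthesizes one such 8-byte record (index 0, type 1 for WLAN, op 0 for "add", soft/hard unblocked) and dumps it:

```shell
# Synthesize one 8-byte rfkill event record instead of reading /dev/rfkill.
# Layout assumed from the uapi header: u32 idx, u8 type, u8 op, u8 soft, u8 hard.
printf '\000\000\000\000\001\000\000\000' > /tmp/rfkill_event.bin

# Dump the bytes as unsigned decimals; the fifth byte (type) is 1 = WLAN.
od -An -tu1 /tmp/rfkill_event.bin | xargs   # prints: 0 0 0 0 1 0 0 0
```

On a real system, tools such as rfkill(8) read and write these records for you.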

The Linux kernel supports the 9P2000 protocol (Plan 9 Resource Sharing Support (9P2000)). This network protocol is sometimes called Styx. Plan 9's windowing system (Rio) uses 9P the same way Linux's X11 uses Unix domain sockets. With this support enabled, Linux and Plan 9 systems can share resources with each other over a network.

The "9P Virtio Transport" system provides transports to and from guest and host partitions on virtual systems.
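For illustration, a guest that has this transport enabled can mount a host-exported share with a 9p filesystem entry. This is only a hedged sketch: the mount tag "hostshare" and the mount point are hypothetical, and the tag must match whatever the hypervisor (e.g. QEMU's -virtfs option) was configured with:

```
# Hypothetical /etc/fstab entry for a virtio 9p share on a guest
hostshare  /mnt/host  9p  trans=virtio,version=9p2000.L  0  0
```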

RDMA transport is also supported by the kernel (9P RDMA Transport (Experimental)). RDMA stands for Remote Direct Memory Access. This transport carries 9P messages over RDMA-capable interconnects (such as InfiniBand), which let one computer access another's memory directly without involving the remote CPU.

The 9P system has debugging support like many of the other kernel components (Debug information).

"CAIF support" support can also be enabled in the kernel. CAIF stands for Communication CPU to Application CPU Interface. This is a MUX protocol that uses packets and is used with ST-Ericsson's modems. ST-Ericsson is the company that developed this protocol. Android and MeeGo phones use this protocol. (Yes, MeeGo and Android are Linux systems, and yes, I am talking about the popular Android by Google.) A MUX protocol is a multiplexing protocol. Multiplexing was mentioned in a previous article.

Next, libceph can be added to the kernel; it is the core library used by the RADOS block device (rbd) and the Ceph filesystem (Ceph core library). Ceph is a distributed storage platform. CephFS (the Ceph filesystem) runs on top of the RADOS object store, whose storage daemons in turn usually keep their data on a local filesystem such as XFS, ZFS, or Btrfs. RADOS block devices are block storage units backed by the same object store.

This debugging feature for Ceph harms the kernel's performance, so only use it if needed (Include file:line in ceph debug output).

The CONFIG_DNS_RESOLVER facility lets kernel code (such as the NFS, CIFS, and AFS filesystems) have hostnames resolved through an upcall to userspace (Use in-kernel support for DNS lookup).
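Per the kernel's dns_resolver documentation, the actual lookup is performed by a userspace helper wired up through request-key. A typical configuration line looks like the following; the helper path may differ by distribution (it ships with the keyutils package):

```
# /etc/request-key.conf line routing dns_resolver upcalls to the helper
create  dns_resolver  *  *  /sbin/key.dns_resolver %k
```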

Near Field Communication (NFC) devices are also supported by the Linux kernel (NFC subsystem support).

The NFC Controller Interface (NCI) should be enabled if the above feature is enabled (NCI protocol support). This allows the host and the NFC controller to communicate.

NFC devices that process HCI frames will need this next feature to be enabled (NFC HCI implementation).

Some HCI drivers need a SHDLC link layer (SHDLC link layer for HCI based NFC drivers). SHDLC is a protocol that checks integrity and manages the order of the HCI frames.

"NFC LLCP support" is usually enabled if NFC features (like the above) are enabled.

Next, there are some drivers for specific NFC devices. The first one is a "NXP PN533 USB driver".

The next NFC driver supports Texas Instruments' BT/FM/GPS/NFC devices (Texas Instruments NFC WiLink driver).

Next is the "NXP PN544 NFC driver".

The driver for Inside Secure microread NFC chips is also provided by the kernel (Inside Secure microread NFC driver).

Now, we will be moving on to various drivers that are not network related. First, we can set a path to the uevent helper (path to uevent helper). Most computers today should leave this disabled because a helper process is forked for every uevent the kernel emits, and during boot this can quickly consume resources.

On boot-up, the kernel will make a tmpfs/ramfs filesystem (Maintain a devtmpfs filesystem to mount at /dev). This offers the complete /dev/ directory system. Of the two filesystems, ramfs is the simpler. “tmpfs” stands for temporary filesystem and “ramfs” stands for RAM filesystem.
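On a running system you can see the devtmpfs mount in /proc/mounts. To keep this example self-contained it parses sample lines (a real system's entries will differ; read /proc/mounts directly there):

```shell
# Sample /proc/mounts lines standing in for the live file
cat > /tmp/mounts.sample <<'EOF'
udev /dev devtmpfs rw,nosuid,relatime,size=2893896k 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=600912k 0 0
EOF

# Print the mount point of every devtmpfs filesystem
awk '$3 == "devtmpfs" { print $2 }' /tmp/mounts.sample   # prints: /dev
```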

The next setting makes the kernel automatically mount the devtmpfs filesystem at /dev/ once the root filesystem is available (Automount devtmpfs at /dev, after the kernel mounted the rootfs).

The following feature allows firmware to be loaded with the help of user space (Userspace firmware loading support).

Enabling "Include in-kernel firmware blobs in kernel binary" builds firmware blobs (which are often proprietary) directly into the kernel image.

Some binary proprietary firmware needs to be available on boot-up. This feature allows such blobs to be built into the kernel binary (External firmware blobs to build into the kernel binary). Some computers have boot devices that require special firmware that may only be available as proprietary binaries. Without the firmware built in, such systems will not boot.

Enabling "Fallback user-helper invocation for firmware loading" allows a user-helper (udev) to load firmware as a fallback when the kernel fails to load it directly. udev can load firmware that resides in a non-standard firmware path.

The part of the kernel that manages drivers can produce debugging messages if permitted (Driver Core verbose debug messages).

Next, the devres.log file will be used if this feature is enabled (Managed device resources verbose debug messages). This is a debugging system for device resources.

This next feature makes a connection between the userspace and kernelspace via a netlink socket (Connector - unified userspace <-> kernelspace linker). This socket uses the netlink protocol. This is another example of a Linux system needing networking abilities even if the computer will never be on a physical network.

The userspace can be informed on process events via a socket (Report process events to userspace). Some reported events include ID changes, forks, and exit status. Some previously enabled kernel features may need this. It is best to follow what the configuration tool recommends.

Systems that use raw flash storage will need MTD support (Memory Technology Device (MTD) support). MTD devices are solid-state storage devices such as flash chips. They behave differently from typical storage drives: the read, write, and erase routines used on magnetic storage units do not work the same way on raw flash.
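On systems with MTD support, the partitions show up in /proc/mtd with hexadecimal sizes. This sketch parses sample lines (the device names and sizes are hypothetical; a real system's entries will differ) and converts each partition size to KiB:

```shell
# Sample /proc/mtd contents standing in for the live file
cat > /tmp/proc_mtd.sample <<'EOF'
dev:    size   erasesize  name
mtd0: 00040000 00020000 "u-boot"
mtd1: 00780000 00020000 "kernel"
EOF

# Skip the header, then convert each hex size to KiB
tail -n +2 /tmp/proc_mtd.sample | while read -r dev size erase name; do
  echo "$name: $(( 0x$size / 1024 )) KiB"
done
```

For the sample above this prints 256 KiB for "u-boot" and 7680 KiB for "kernel".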

Most desktop computers have a parallel port (a 25-pin connector), so they need this feature (Parallel port support). Parallel ports are used for printers and ZIP drives, among many other lesser-known uses.

Enable this feature for IBM-compatible computers (PC-style hardware). There are different types of computers. Besides IBM-compatible computers (which commonly run Windows), there are Apple computers. Linux runs on nearly every type of computer.

Linux also supports multi-IO PCI cards (Multi-IO cards (parallel and serial)). Multi-IO PCI cards have both parallel and serial ports. Serial ports send or receive one bit at a time.

This next feature allows the kernel to "Use FIFO/DMA if available". This is used on certain parallel port cards to speed up printing. FIFO stands for “First In, First Out”. DMA is Direct Memory Access as mentioned before.

The next feature is for probing Super-IO cards (SuperIO chipset support). These probes find the IRQ numbers, DMA channels, and other types of addresses/numbers of such devices. Super-IO is a type of integrated IO controller.

PCMCIA support for parallel ports can be enabled (Support for PCMCIA management for PC-style ports).

NOTE: For many of these features, it may be best to do what the configuration tool recommends unless you have a specific reason for not doing so. Usually, if you are cross-compiling or making a kernel for a broad range of computers, then you should be familiar with what you are wanting to support and make the choices accordingly.

Parallel ports on AX88796 network controllers need this support (AX88796 Parallel Port).

"IEEE 1284 transfer modes" allows Enhanced Parallel Port (EPP) and Extended Capabilities Port (ECP) modes on parallel ports, as well as status readback for printers. Status readback is the retrieval of the printer's status.

"Plug and Play support" (PnP) should be enabled. This lets the kernel detect devices and configure their resources (such as IRQs and I/O ports) automatically, so users can attach a supported device and begin using it without performing any special tasks. The system manages the rest automatically.

Next, users can enable block devices (Block devices). This is a feature that should be enabled because block devices are very common.

Floppy disks are block devices that can be enabled (Normal floppy disk support).

IDE devices that connect to parallel ports are also supported (Parallel port IDE device support). Some external CD-ROM devices can connect via parallel ports.

External IDE storage units can also be connected to parallel ports (Parallel port IDE disks).

ATA Packet Interface (ATAPI) CD-ROM drives that connect to parallel ports will need this driver (Parallel port ATAPI CD-ROMs). ATAPI is an extension of the ATA protocol used in Parallel ATA (PATA) devices.

Other ATAPI disk devices can be plugged into the parallel ports (Parallel port ATAPI disks). This driver will support other disk types besides CD-ROMs.

The kernel also supports ATAPI tape devices that connect via the parallel ports (Parallel port ATAPI tapes).

There are many other ATAPI devices that can connect to the parallel ports. As a result, a generic driver was made to manage the other devices not supported by the previously mentioned drivers (Parallel port generic ATAPI devices).

IDE devices attached to the parallel ports need a special protocol for communication purposes. There are many such protocols, one of them being the "ATEN EH-100 protocol".

An alternate protocol for parallel IDE devices is the "MicroSolutions backpack (Series 5) protocol".

There is yet again another parallel IDE device protocol (DataStor Commuter protocol) and another (DataStor EP-2000 protocol) and another (FIT TD-2000 protocol).

Once again, there is another protocol, but this one is highly recommended for the newer CD-ROM and PD/CD devices that plug into parallel ports (FIT TD-3000 protocol).

This next protocol is mainly for parallel port devices made by SyQuest, Avatar, Imation, and HP (Shuttle EPAT/EPEZ protocol).

Imation SuperDisks need support for the Shuttle EP1284 chip (Support c7/c8 chips).

Some other parallel IDE protocols that can be enabled next include
Shuttle EPIA protocol
Freecom IQ ASIC-2 protocol - used by Maxell Superdisks
FreeCom power protocol
KingByte KBIC-951A/971A protocols
KT PHd protocol - used by 2.5 inch external parallel port hard-drives.
OnSpec 90c20 protocol
OnSpec 90c26 protocol

NOTE: These protocols and the support for plugging devices into the parallel port make such devices hot-pluggable, much like USB devices plugged into USB ports. USB and Firewire are still the most popular ports to use because of their size and speed. A parallel storage unit is larger than a USB flash drive because parallel ports are larger than USB ports.

Next, we have a driver for Micron PCIe Solid State Drives (Block Device Driver for Micron PCIe SSDs).

You may have guessed it – in the next article there is still more to configure.
 

Attachments

  • slide.JPG


Awesome picture... May have missed it, but have you enabled/disabled firewire?

I know you are getting a LOT of suggestions, but maybe a "how to slim your kernel"?

I will probably write an article on that...definitely need to show how to slim on -generic kernels.
 
I've never seen much benefit to slimming down the kernel, unless you're pushed for disk space...? On a fairly default config, most drivers are built as modules, so the actual image should be fairly small. There are probably better ways to save resources.

The one big advantage of slimming down is reduced compile times. If this is a factor, then localmodconfig may be a good option, but it can be a pain if you need to install new hardware later (or forgot about usb flash memory, etc).
 

Thanks Ryanvade. I will make that a part of this series after the kernel is configured. As for Firewire, I used the default option which I think was enable if I am not mistaken. From now on, I will just select the defaults as I go through this tutorial. Feel free to offer suggestions. Yes, I get a lot of emails on suggested kernel topics, but that makes the series more interesting. After I discuss configuring the kernel, there is still a lot to explain about the kernel.
 

A smaller kernel is a faster kernel. A faster kernel results in better performance. Better performance makes the user happy. A happy user uses Linux (^u^).
 
If drivers are built as modules, they don't get loaded... they just sit there taking up disk space. If you build the drivers for your hardware into the kernel - faster, but you're talking about a kernel being faster to load, rather than an overall increase in performance. Regardless, you really won't notice on modern hardware. Building a preemptible kernel or setting the tick rate to 1000Hz, etc, will yield bigger improvements.

Have you actually built a "smaller, faster" kernel and can you provide any real data? (serious question - as I have shit old computer and would do so, if there was any hope that it would give a significant improvement in performance...).

My point is that there are better and easier ways to improve performance than just building a slimmed down kernel. e.g. there is not much point in running a stripped down kernel if you're running a sack of bloat like 'buntu or one of its spin-offs...
 

Thanks for your comments and questions. True, modules are not loaded, but at least they do not consume resources besides disk space (which is inexpensive). In this series, I will later explain how to load modules.

A smaller kernel consumes less memory and CPU resources. A Linux kernel with every feature installed and added will slow down the system because the kernel is performing many tasks at once. For example, if all debugging features are enabled, the system will perform slowly because the kernel is monitoring and reporting many events. Fedora Rawhide, for instance, usually uses a kernel with additional features added for debugging (http://fedoraproject.org/wiki/RawhideKernelNodebug). This slows down the kernel. If unneeded features are enabled, they still consume resources and harm performance. The reason many light-weight distros work well on older systems is due to the smaller kernel (think about Puppy Linux). This link may help you (http://superuser.com/questions/370586/how-can-a-linux-kernel-be-so-small). By the way, it is possible to have a kernel that is too small.

Yes, I have tested "fat" and "slim" kernels. There is a big difference. The worst kernel to have is one with numerous imported modules. Modules are good in moderation, but having a slim kernel with many modules attached will cause major performance issues. Modules are intended to be packaged with a kernel that may use rare features or hardware. Then, very few users will load very few modules. If all of the drivers were modules, then all of the users would load most of the modules. Do you think I should explain modules more clearly in the series?
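As a quick way to see how many modules a given system actually has loaded, the lines of lsmod output can be counted. To keep this runnable anywhere, the sketch parses a small hypothetical sample rather than live output (on a real system, pipe plain `lsmod` instead):

```shell
# Hypothetical lsmod output; a real listing comes from running lsmod
cat > /tmp/lsmod.sample <<'EOF'
Module                  Size  Used by
nfc                    81920  2 nci
rfkill                 24576  3 nfc
parport                49152  1 parport_pc
EOF

# Skip the header line and count the loaded modules
tail -n +2 /tmp/lsmod.sample | wc -l   # prints: 3
```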

You are very right about alternate ways to amp performance. Tweaking certain features will indeed make significant improvements.

View kernel sizes like body weight. Too much or too little are equally bad, so making an anorexic kernel can harm performance. For example, disabling ACPI will make a smaller kernel, but will the performance be better? Most likely not. I should probably explain kernel performance enhancement better in a future article.


In summary, "slim" kernels are faster than "fat" kernels, but do not make "anorexic" kernels. Rather, make "healthy" kernels. Modules are like desserts; some is good, too much is bad. The more complicated the software, the more resources it consumes. Consumed resources lead to performance costs. There is more than one way to enhance software performance.

Thanks for your questions and comments.
 
I'm afraid that it's a myth that loadable kernel modules are slower than compiled in base modules. Please refer to the tldp:

http://www.tldp.org/HOWTO/Module-HOWTO/x73.html
LKMs are not slower, by the way, than base kernel modules. Calling either one is simply a branch to the memory location where it resides
You will probably get some slight slow down at boot time due to hardware probing, but that's about it.

Yes you can strip down the kernel and I certainly agree, with regards to kernel debugging, but in most cases the performance gain would be small.
 

It is not that the module itself is slow. The issue arises from fragmentation of the kernel in memory. Having the kernel and modules in different locations throughout memory is specifically what causes the performance difference.

Clever thinking though. This is good food for thought.
 
The article I linked to explains this idea of 'memory fragmentation' due to LKMs as a myth...

Loadable modules may be a security risk, but they are not a performance killer.

You have succeeded in getting me interested in building a stripped down kernel (again) however...
 

I like how our views differ. You are very interesting to talk to. Thanks!
 
My main reason for a slimmed down kernel is disk space. Yes, hard drives are getting inexpensive, but when you have $0 for extra disk space and you have to dual boot with Windows, disk space adds up. I have always found smaller kernels with fewer options enabled to be faster than bloated kernels. No idea why, just what I have seen. Again though, that is not my main reason for slimming down:
Code:
df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda6       120G   44G   71G  39% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            2.8G  4.0K  2.8G   1% /dev
tmpfs           587M  1.1M  586M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            2.9G  1.2M  2.9G   1% /run/shm
none            100M   12K  100M   1% /run/user
/dev/sda5       149G  113G   28G  81% /home
 
Interesting to see what size kernel images people have

Code:
$ ls -lh  /boot/vmlinu*
 

I will show you mine if you show me yours. (web surfers quickly reading this are going to get a very bad idea about this site) :)


I will go ahead an show you mine.

Code:
collier@Nacho-Laptop:~$ ls -lh /boot/vmlinu*
-rw------- 1 root root 5.2M Jul  8 20:46 /boot/vmlinuz-3.8.0-27-generic
-rw------- 1 root root 5.2M Aug 13 16:10 /boot/vmlinuz-3.8.0-29-generic

This is my stable (;)) AMD64 Ubuntu system. I have other Linux systems though.
 
Code:
ls -lh /boot/vmlinu*
-rw-r--r-- 1 root root 5.2M Sep  8 21:47 /boot/vmlinuz-3.11.0
-rw-r--r-- 1 root root 5.2M Sep 15 21:16 /boot/vmlinuz-3.11.1
-rw------- 1 root root 5.2M May  1 12:00 /boot/vmlinuz-3.8.0-19-generic
-rw------- 1 root root 5.2M May 16 10:42 /boot/vmlinuz-3.8.0-22-generic
-rw-r--r-- 1 root root 5.2M Aug  8 13:19 /boot/vmlinuz-3.8.0-23-generic
-rw-r--r-- 1 root root 5.2M Aug 30 16:33 /boot/vmlinuz-3.8.0-27-generic
-rw-r--r-- 1 root root 5.2M Aug 31 05:36 /boot/vmlinuz-3.8.0-29-generic
-rw------- 1 root root 5.2M Aug 22 16:21 /boot/vmlinuz-3.8.0-30-generic


muahaha ha. ;)
 

I have two kernels installed on my Ubuntu system seen above, one for main use and one just in case the newest one is unstable/broken. Why do you need 8 :eek: Linux kernels? I can understand up to three (beta-testing, backup, main use), but eight?;)
 
I use the 3.11.x kernels because of my wireless card. I just haven't gotten around to removing the older kernels yet. Still testing out Diamond II-B KDE.
 
Code:
ls -lh /boot/vmlinu*
-rw-r--r-- 1 root root 5.2M Sep  8 21:47 /boot/vmlinuz-3.11.0
-rw-r--r-- 1 root root 5.2M Sep 15 21:16 /boot/vmlinuz-3.11.1
-rw------- 1 root root 5.2M Aug 22 16:21 /boot/vmlinuz-3.8.0-30-generic

uname -a
Linux ryan-linux-laptop 3.11.1 #1 SMP Sun Sep 15 18:46:30 CDT 2013 x86_64 x86_64 x86_64 GNU/Linux

3.12-rc1 is out. Got to download it NOW...
 

When it comes to beta-testing, you are brave. I would never test an RC kernel on my system. As many people say, "Somebody needs to do it". Do you submit bug reports to the Linux developers via the kernel Bugzilla or the mailing lists? If not, you should.
 
Those are pretty big kernel images.

I'm also guilty of not cleaning up enough, though I did clean up after 3.8 and 3.9 kernels some time ago.

This is my Slackware 14 system
Code:
$ ls -lh  /boot/vmlinu*
lrwxrwxrwx 1 root root   27 Aug  2 22:28 /boot/vmlinuz -> vmlinuz-huge-smp-3.2.45-smp
-rw-r--r-- 1 root root 2.9M Sep 21 19:34 /boot/vmlinuz-custom-3.10.12-smp
-rw-r--r-- 1 root root 3.3M Aug  3 14:12 /boot/vmlinuz-custom-3.10.4-smp
-rw-r--r-- 1 root root 3.3M Aug  4 20:46 /boot/vmlinuz-custom-3.10.5-smp
-rw-r--r-- 1 root root 3.3M Aug 16 20:19 /boot/vmlinuz-custom-3.10.7-smp
-rw-r--r-- 1 root root 3.3M Aug 25 18:31 /boot/vmlinuz-custom-3.10.9-smp
-rw-r--r-- 1 root root 2.8M May 31 23:37 /boot/vmlinuz-generic-3.2.45
-rw-r--r-- 1 root root 2.9M May 31 22:48 /boot/vmlinuz-generic-smp-3.2.45-smp
-rw-r--r-- 1 root root 5.6M May 31 23:47 /boot/vmlinuz-huge-3.2.45
-rw-r--r-- 1 root root 5.8M May 31 22:59 /boot/vmlinuz-huge-smp-3.2.45-smp
3.10.12 is a stripped down kernel I built yesterday (localmodconfig + stepped through disabling a lot of unneeded options).
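A rough way to gauge how "slim" a resulting config is: count the built-in (=y) versus modular (=m) options. The sketch below uses a tiny hypothetical sample so it runs anywhere; on a real tree the file to inspect is .config (or /boot/config-$(uname -r) for an installed kernel):

```shell
# Hypothetical sample of a kernel .config
cat > /tmp/sample.config <<'EOF'
CONFIG_MTD=y
CONFIG_RFKILL=m
CONFIG_NFC=m
# CONFIG_PARPORT is not set
CONFIG_BLK_DEV_FD=m
EOF

# Count built-in versus modular options
echo "built-in: $(grep -c '=y$' /tmp/sample.config), modules: $(grep -c '=m$' /tmp/sample.config)"
# prints: built-in: 1, modules: 3
```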

Slackware's 3.2 generic kernel is a tad smaller than my latest custom build, though if I build 3.10.x against the generic 3.2 config (after running oldconfig and building in ext4 - no other modifications), I get the kind of size you can see with the other 3.10 kernels - so the extra, albeit small amount of, bloat is probably due to code which has been added to the kernel tree between 3.2 and 3.10.

I always had a feeling that Slackware was faster and more responsive than some other distros.

I stay away from RC kernels, unless I'm specifically chasing some kind of bug fix or driver for newer hardware (rare as I run old junk) I try to stick with stable or LTS kernels where possible, though the usefulness of the latter is open to debate.
 