Can I completely stop memory caching?

Debian_SuperUser

Just for experimental purposes, I want to try completely disabling any memory caching that goes on at runtime. Only memory that is really needed should be allocated. Is something like this possible?
 


Just for experimental purposes, I want to try completely disabling any memory caching that goes on at runtime
If you're referring to caching as seen in htop (the orange part of the memory bar), that's the file cache: file contents kept in memory so they don't have to be re-read from disk.
To clear this cache you can write to /proc/sys/vm/drop_caches; tuning vm.swappiness also changes how eagerly the kernel reclaims it.
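If you just want to empty that file cache by hand, a minimal sketch (needs root; writing 1 drops the page cache, 2 drops dentries and inodes, 3 drops both):
Code:
# flush dirty pages first so nothing in flight is lost
sync
# throw away the clean page cache, dentries and inodes
echo 3 | sudo tee /proc/sys/vm/drop_caches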
 
@osprey

Drop caches is something I can do, but it only clears some types of cache, so not even all of it, and I want to completely disable caching rather than clearing it every time. Just for experimental purposes, of course.

@CaffeineAddict

Right now, I am referring to the cache numbers shown in GNOME System Monitor.
 
Right now, I am referring to the cache numbers shown in GNOME System Monitor.
Can't help with that, but know that caching is a good thing for performance; you might end up releasing some memory but degrading performance.

Also, it's very likely that this cache is simply reclaimed as soon as more memory is needed for allocations (dirty pages are written back, clean ones are just dropped), meaning it effectively does not consume memory at all; tools count it as available rather than occupied.
So the kernel is making use of your memory instead of wasting it.
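You can see this in free, for example; the "available" column already counts memory the kernel can reclaim from cache:
Code:
# "buff/cache" is the file cache; "available" estimates how much memory
# applications could still get, including reclaimable cache
free -h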
 
Well, stopping caching is the opposite of performance. Usually, making the system use all of the RAM is exactly what one wants. Free RAM is just free RAM, pretty useless. There are some kernel options for embedded systems that may help, though. I never tested them, so I'm not sure they will help. Nevertheless, recompiling a modified kernel is one way to go.
 
Well, stopping caching is the opposite of performance. Usually, making the system use all of the RAM is exactly what one wants. Free RAM is just free RAM, pretty useless. There are some kernel options for embedded systems that may help, though. I never tested them, so I'm not sure they will help. Nevertheless, recompiling a modified kernel is one way to go.

To this day I still can't understand the logic of memory caching.

For me, caching is a good solution for underpowered machines. That means machines low on memory, or, if you look at it the other way, high on usage.

I don't get the concept of: not all memory is used, we can't have that, we must fill the memory with whatever we can find so that it is full, and then we need to find a solution to carry on despite all memory being used.

For me, memory is a capacity. Like the top speed of your car. You can go that fast, but if you don't, you don't, end of story.

I don't buy a CPU so that my CPU usage is constantly at 100%, because then "I'm using it".
I buy a CPU to NEVER get to 100%. Same for memory. I buy these for the option to never run out.

Disk is different, as disk is storage and you can move storage.
 
@Diputs
There are also CPU caches, i.e. the L1, L2, L3, etc. caches.
These are the fastest caches in a PC; next comes the cache in RAM, followed by the disk cache.
Each step from L1 toward the disk cache is significantly slower.
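For the curious, the sizes of these on-die caches are easy to list (the values naturally differ per CPU):
Code:
# show the L1/L2/L3 cache sizes of the local CPU
lscpu | grep -i cache
# on newer util-linux versions there is also a dedicated view
lscpu -C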

Eliminating any one of these caches basically means pushing its work up to the next (slower) level, which is simply a performance hit that nobody wants, except out of a lack of understanding or hope in false beliefs.

Eliminating a cache does NOT mean the caching is removed entirely; that is something which the software (developer) has to do explicitly in their code for it to be possible (with or without the help of APIs or kernel functions, it doesn't matter).

I suggest that the OP thoroughly study these things before trying to achieve something they don't really want.
 
It may not matter, as on Windows, and it seems even more so on Linux, the vendor prefers this concept, so they won't make it look like you don't need any caching.
The problem is: some computers really need it, and then it must work. But I have run several Windows machines without memory caching and had no issues, no performance issues. Memory usage is fairly low, obviously: 40% is already high and it almost never reaches 60%. Gaming is something else, though. But check for yourself: what do games require these days? 64 GB? 128 GB?
No, 6 GB.

CPU caching is something completely different; it is part of the system.
Memory caching is an extension. The fact that you CAN disable it tells you it's possibly not actually required.
Can't disable CPU caching, I guess. Don't want to either.
 
It may not matter, as on Windows, and it seems even more so on Linux, the vendor prefers this concept, so they won't make it look like you don't need any caching.
Windows will hide a ton of things from you because MS doesn't want to confuse users with technical stuff.
If you rely on Task Manager (a qualified liar), then I suggest you switch to Process Explorer for a more technical data layout.

To understand Windows memory, the best article is called "Pushing the Limits of Windows Memory" or something like that, but good luck finding it; it's some six pages of reading published by MS.

edit:
It's here, it was cached on Google for years:

This is the bare minimum any Windows admin or programmer needs to know, no excuses.
 
@osprey

Drop caches is something I can do, but it only clears some types of cache, so not even all of it, and I want to completely disable caching rather than clearing it every time. Just for experimental purposes, of course.

@CaffeineAddict

Right now, I am referring to the cache numbers shown in GNOME System Monitor.
There are different types of caches that the Linux kernel controls, and they are not all controllable from user space. I'm not familiar with GNOME System Monitor, but the sizes of the in-memory caches which are exposed for users' information are in the file /proc/meminfo, like so:
Code:
[tom@min ~]$ grep -i cache /proc/meminfo
Cached:          1191624 kB
SwapCached:            0 kB
The information is also available in the output of the commands: free, vmstat, top, btop, glances, among others.

Caches include the page cache, inode cache, buffer cache and the TLB (translation lookaside buffer). They variously hold information about files and memory addresses so that the kernel doesn't have to go re-reading the information from its original source on disk when the system asks for the same information again, which is a very common pattern in a user's use of software.
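The dentry and inode caches, for instance, live in the kernel's slab allocator and can be peeked at directly; a quick look (reading /proc/slabinfo usually needs root, and slabtop comes with procps):
Code:
# slab caches holding directory entries and inodes
sudo grep -E 'dentry|inode_cache' /proc/slabinfo
# or a one-shot view of the largest slab caches
sudo slabtop -o -s c | head -n 15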

In relation to the drop_caches facilities mentioned in post #2, it's possible to script a daemon that will empty those caches it controls, or send their contents to /dev/null on arrival to the cache so that there is effectively no caching of pages, directory entries and other file information as mentioned in the kernel docs. That is the first "experiment" that seems feasible to me in such an experimental quest. As for swap, it can simply be turned off in various ways.
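A crude sketch of such an "anti-cache" script, purely as an experiment and assuming it runs as root, since it will certainly hurt performance:
Code:
#!/bin/sh
# turn swap off entirely for the duration of the experiment
swapoff -a
# keep discarding the page cache, dentries and inodes every second
while true; do
    sync
    echo 3 > /proc/sys/vm/drop_caches
    sleep 1
done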

It would be most interesting if you proceeded with that testing and could let readers know of the results, perhaps measured as the times taken for various processes or activities on a system without caching (so far as you can reduce the caches), compared to baselines for those same processes on a fully cached system.
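One simple way to put numbers on it, assuming some large test file at /path/to/bigfile (a hypothetical path), is to time a cold read against a warm one:
Code:
# cold read: cache is dropped first, so the data comes from disk
sync; echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null
time cat /path/to/bigfile > /dev/null
# warm read: the same file is now served from the page cache
time cat /path/to/bigfile > /dev/null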

The L caches mentioned by @CaffeineAddict are, however, caches that cannot be accessed from userspace in normal computer usage. Those caches inside the CPU are faster than the system's main memory, are basically essential to its efficient performance, and have been part of CPU technology since the 1970s, though much improved since then. One would need low-level programming to get at them.
 
To this day I still can't understand the logic of memory caching.

For me, caching is a good solution for underpowered machines. That means machines low on memory, or, if you look at it the other way, high on usage.

I don't get the concept of: not all memory is used, we can't have that, we must fill the memory with whatever we can find so that it is full, and then we need to find a solution to carry on despite all memory being used.

For me, memory is a capacity. Like the top speed of your car. You can go that fast, but if you don't, you don't, end of story.

I don't buy a CPU so that my CPU usage is constantly at 100%, because then "I'm using it".
I buy a CPU to NEVER get to 100%. Same for memory. I buy these for the option to never run out.

Disk is different, as disk is storage and you can move storage.
This just means that you do not understand the difference between CPU and RAM usage or, worse, how computer hardware works.
When I compile a program, I force all CPUs to be fully utilized; this way the program is compiled faster. At the same time I make sure the system stays usable. When compilation is finished, CPU utilisation goes back to 0.7%, but there are always spikes, so the CPU is never idle.
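For example, on a make-based project, assuming GNU make and coreutils' nproc are available:
Code:
# run one compile job per available CPU
make -j"$(nproc)"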
With RAM it is the same. The OS uses RAM by design because it is faster than disk, and if there is no memory available, things get written out to disk instead. So RAM usage fluctuates. If it does not, or it only goes up over time, then that is caused by a memory leak.

When you keep your system up for several days or weeks, the OS will use swap to release RAM and keep the system fast.

Windows never had good memory management; that is why Windows starts to swap almost instantly.

I would suggest reading up on RAM, the CPU and VM (virtual memory), otherwise you will keep freeing RAM for no reason and slowing down your box.
Also the best way to free memory is by turning off your computer.
 

