I recently noticed that htop displays a much lower ‘memory in use’ number than free -h, top, or fastfetch on my Ubuntu 25.04 server.
I am using ZFS on this server and I’ve read that ZFS will use a lot of RAM. I also read a forum thread where someone commented that htop doesn’t show caching used by the kernel, but I’m not sure how to confirm that ZFS is what’s causing the discrepancy.
I’m also running a bunch of docker containers and am concerned about stability, since I don’t know which number I should be looking at. Depending on the tool, I either have ~22GB, ~4GB, or ~1GB of usable memory left. Is htop the better metric when my concern is available memory for new docker containers, or are the other tools better?
Server Memory Usage:
- htop = 8.35G / 30.6G
- free -h =
                 total        used        free      shared  buff/cache   available
  Mem:            30Gi        26Gi       1.3Gi       730Mi       4.2Gi        4.0Gi
- top = MiB Mem : 31317.8 total, 1241.8 free, 27297.2 used, 4355.9 buff/cache
- fastfetch = 26.54GiB / 30.6GiB
EDIT:
tldr: all the tools are showing correct numbers. htop seems to be ignoring the ZFS cache. For the purpose of ensuring there is enough RAM for more docker containers in the future, htop seems to be the tool that shows the most useful number with my setup.
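For anyone else who finds this: one way to confirm the ARC is what’s holding the difference (assuming OpenZFS, which exposes its stats under /proc/spl/kstat/zfs/) is to read the ARC size directly:

```
# ARC size in bytes is the "size" line of the kstat file
awk '$1 == "size" {printf "ARC: %.1f GiB\n", $3 / 1024^3}' /proc/spl/kstat/zfs/arcstats

# Or, if zfsutils-linux is installed, the friendlier report:
arc_summary | head -n 25
```

If that number roughly matches the gap between htop and free, ZFS is the culprit.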


Linux aggressively caches things.
4 GB of available RAM is not running out of memory.
If you start using swap, you’re running into a situation where you might run out of memory.
If the OOM killer starts killing processes, then you’re running out of memory.
Well, you might still want to avoid digging into swap at all.
That’s pretty much where I’m at on this. As far as I’m concerned, if my system touches swap at all, it’s run out of memory. At this point, I’m hoping to figure out what percentage of the memory in use is unimportant cache that can be dropped vs important data that processes need to function.
That’s a swap myth. Swap is not emergency memory; it’s about creating a memory reclamation space on disk for anonymous pages (pages that are not file-backed) so that the OS can use main memory more efficiently.
The swapping algorithm does take into account the higher cost of putting pages in swap. Touching swap may just mean the kernel moved some idle anonymous pages out to leave more room for file caching; that cache is reclaimable space, and it doesn’t mean the system is running out of memory.
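If you want to see what’s actually sitting in swap, the kernel exposes a per-process VmSwap field in /proc; a rough sketch:

```
# Total swap configured vs. still free
grep -E 'SwapTotal|SwapFree' /proc/meminfo

# Per-process swap usage, largest first (third colon-separated field is the kB value)
grep -H VmSwap /proc/[0-9]*/status 2>/dev/null | sort -t: -k3 -nr | head
```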
From what I can tell, my system isn’t currently using swap at all but it does have 8GB of available swap if needed.
To make sure I’m following what you’re saying: if I upgraded my system to 64GB and changed nothing else, and assuming ZFS didn’t try caching more stuff, would there still be a potential for my system to use swap just because it wanted to, even if it wasn’t memory constrained?
Yes, the system will likely use some swap if available, even when there’s plenty of free RAM left:
Src: https://www.kernel.org/doc/gorman/html/understand/understand014.html
On my recently booted system with 32GB, with half of that free (not even just “available”), I can already see tens of MB of swap used.
As a rule of thumb, it’s only a concern, or an indication that the system is/was starved of memory, if a significant share of swap is in use. But even then, it might just be some stale pages hanging around in swap because the kernel decided to keep them there rather than evict them.
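One way to tell occupied-but-idle swap from active swapping is to watch the si/so columns in vmstat:

```
# si/so = memory swapped in/out per second; sustained nonzero values mean
# the system is actively swapping, not just holding onto stale pages
vmstat 1 5
```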
TIL. Thanks for the information
If that’s the case, you should look into your swappiness setting. You can set it to zero, meaning swap will only be used if you’re actually out of memory, but as others have noted that’s maybe not a healthy decision…
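For example (the file name under /etc/sysctl.d/ is just a convention, pick whatever fits your setup):

```
# Check the current value (60 is the usual default)
sysctl vm.swappiness

# Lower it for the running system only
sudo sysctl vm.swappiness=10

# Persist it across reboots
echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf
```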
I’m currently not in a situation where swap is being used, so I think my system is doing fine right now. I’m not against swap; I get that it’s better to have it than not, but my intention was to figure out how close my system is getting to using swap. If it went from not using swap at all to using it constantly, I’d probably want to upgrade my RAM, right? If nothing else, just to avoid system slowdowns and unneeded wear on my SSD.
It’s just that my system used to freeze when I ran out of memory, back when I had only 32 GB. I couldn’t do anything and had to hard reset the computer with its reset button. In that situation it’s nice to have a little bit of swap so some stuff can be killed before literally everything stops working.
Is there a good way to tell what percentage of the RAM in use is less important file caching that could be dropped without any adverse effects, vs data that, if dropped, would stop the whole app from functioning?
Basically, I’m hoping htop isn’t broken and is reporting that I have 8GB of important, showstopping data in use, and everything else is cache that is unimportant/reclaimable without the need to touch swap.
https://stackoverflow.com/questions/30869297/difference-between-memfree-and-memavailable
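The short version of that link: MemFree is memory nobody is touching at all, while MemAvailable is the kernel’s estimate of how much could be given to new applications without pushing anything to swap. You can compare the two directly:

```
grep -E 'MemTotal|MemFree|MemAvailable|^Buffers|^Cached' /proc/meminfo
```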
Looking at the htop source:
https://github.com/htop-dev/htop/blob/main/MemoryMeter.c
It’s adding used, shared, and compressed memory to get the amount actually tied up, but disregarding cached memory, which, based on the above comment, is problematic, since some of that may not actually be available for use.
top, on the other hand, is using the kernel’s MemAvailable directly.
https://gitlab.com/procps-ng/procps/-/blob/master/src/free.c
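You can reproduce both numbers from /proc/meminfo; a rough sketch (on ZFS, the ARC typically shows up in neither Cached nor MemFree, which is exactly where the tools diverge):

```
# Derive the two competing "used" figures; /proc/meminfo values are in kB
awk '/^MemTotal/{t=$2} /^MemFree/{f=$2} /^Buffers/{b=$2} /^Cached/{c=$2} /^MemAvailable/{a=$2}
END {
  printf "classic used (total - free - buffers - cached): %.1f GiB\n", (t-f-b-c)/1024^2
  printf "unavailable  (total - MemAvailable):            %.1f GiB\n", (t-a)/1024^2
}' /proc/meminfo
```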
In short: you probably want to trust /proc/meminfo’s MemAvailable (which is what top will show), and htop is probably giving a misleadingly low number.
Thank you for the detailed explanation
No problem. It was an interesting question that made me curious too.
Came across some more info that you might find interesting. If true, htop is ignoring the cache used by ZFS but accounting for everything else.
link
Yes, ZFS cache has been contentious for exactly the reason you posted, but it is generally not a functional issue.
ZFS will release cache under memory pressure; however, memory-hungry workloads like virtualization can demand it sooner than ZFS can release it.
There have been many changes to ZFS to improve this, but the legacy of “invisible cache” is still around.
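If that race ever bites, the usual mitigation is to cap the ARC; a sketch, with 8 GiB as a purely illustrative value:

```
# Current cap in bytes (0 means the ZFS default, roughly half of RAM)
cat /sys/module/zfs/parameters/zfs_arc_max

# Cap the ARC at 8 GiB on the running system
echo 8589934592 | sudo tee /sys/module/zfs/parameters/zfs_arc_max

# Persist across reboots
echo 'options zfs zfs_arc_max=8589934592' | sudo tee /etc/modprobe.d/zfs.conf
```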
This is a job for the OS.
You can run most Linux systems with stupid amounts of swap and the only thing you’ll notice is that stuff starts slowing down.
In my experience, only in extremely rare cases are you smarter than the OS, and in 25+ years of using Linux daily I’ve seen it exactly once: the OOM killer killed running mysqld processes, which would have been fine if the developer had used transactions. Suffice to say, they did not. I used a 1-minute cron job to reprioritize the process, problem “solved” … for a system that hadn’t been updated in 12 years but was still live while we documented what it was doing and what was required to upgrade it.