89

I have seen several sites which recommend reducing swappiness to 10-20 for better performance.

Is it a myth or not? Is this a general rule? I have a laptop with 4GB of RAM and a 128GB SSD; what swappiness value do you recommend?

Thanks.

Jorge Castro
  • 73,907

7 Answers

128

Because a lot of people believe swapping = bad and that if you don't reduce swappiness, the system will swap when it really doesn't need to. Neither of those is really true. People associate swapping with times when their system is getting bogged down - however, it's mostly swapping because the system is getting bogged down, not the other way around. When the system swaps, Linux has already factored the performance cost into its decision to swap, and decided that not doing so would likely have a greater penalty in system performance or stability.

The default setting has been arrived at after extensive testing on countless different hardware and software setups, being incredibly well tested by virtue of how many people use Linux and the variety of ways they use it. It wouldn't be adjustable if there weren't use cases for adjusting it in response to particular needs, but in doing so it's important to consider the risk of unintended consequences and corner cases that weren't considered, a risk which increases the more you alter the behavior from the defaults. Adjusting swappiness isn't a "simple fix" for all performance problems, but a compromise between many different facets.

If you want a simple fix, the simplest possible fix is always to install more physical RAM, or purchase a system with more RAM. This solution is virtually guaranteed to have no unintended drawbacks.

How Linux uses RAM

Linux can use RAM for memory allocated by programs, or it can use it for mirroring the content of files on disk - whether that be code or data files open for reading, or files recently read as "cache". Absent any shortage of available RAM, Linux will keep recently read or used file data in memory in case it is needed again, as there is no cost in doing so and it can potentially speed up the system if the same files are needed in the future. This leads to the typical situation where most RAM that was not allocated by programs is used for caching files.

If your memory use increases to the point that you are getting low on available RAM, the Linux kernel can either discard some file-backed memory pages, reducing cache, or (assuming you have swap enabled) it can move some memory allocated by programs out of physical RAM and onto the swap device. Exactly which it does, and in what proportion, is decided by the kernel's page-reclaim algorithm.
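
You can watch this split between program-allocated memory and file cache on a live system. A minimal sketch, assuming a Linux system with /proc mounted:

```shell
# Cached and Buffers are the file-backed portion of RAM;
# MemAvailable estimates how much could be freed for programs
# without swapping (mostly by shrinking the file cache).
grep -E '^(MemTotal|MemFree|MemAvailable|Buffers|Cached|SwapCached):' /proc/meminfo
```

The `free -h` command presents the same figures in a friendlier layout; its "available" column corresponds to MemAvailable.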

Eventually, if memory usage continues to rise and swap fills up (or swap is disabled), further allocation becomes impossible: the system reaches a point where it can't satisfy a request for more memory, and it must either crash or kill a running program to recover memory.

How Linux uses swap

To combat these problems, if you have swap enabled your system can re-allocate some seldom-used application memory to the swap device, freeing RAM. The additional RAM can prevent processes dying due to running out of memory, and can also leave more space for file-backed storage so reading from disk files can operate more smoothly.

To decide when swapping will be used, the system uses a complex algorithm that takes into account the relative cost of swapping unused program memory, in comparison to relinquishing file-backed memory (memory that mirrors the contents of files).

The "swappiness" tunable does not represent a threshold or a percentage of RAM, even though it has been misrepresented as such by many sources. It is a weighting that tells the system the cost of swapping relative to the cost of re-reading files from disk. For a few years now (since Linux 5.8), "swappiness" is a value that can go up to 200. "0" is a special value that effectively disables swap unless it's a last resort. Otherwise, values 1 to 200 represent different relative balances between the cost of swapping vs re-reading files, where values over 100 should be used only when your swap device is significantly faster than the system drive.

Within the range of 1 to 100, 1 tells the system to heavily favor relinquishing file-backed memory, whereas 100 tells the system to treat both options as equal in cost, with values in between striking reasonable balances for systems where swap is on the same speed of device as the system drive. The algorithm deciding whether to swap still takes into account factors such as how long ago the memory in question was last accessed, among several other things. The default value is now 60, which is suitable for a range of drive technologies including SSDs. Lowering it to around 40 may still make sense if you have a traditional HDD with slow access times compared to its sequential read speed, and increasing it to around 90 may make sense for a modern SSD with fast random access, but the default of 60 remains a reasonable value in both situations.
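
To see or change the tunable itself, the usual route is the vm.swappiness sysctl. A minimal sketch, assuming a Linux system; the drop-in filename 99-swappiness.conf is just an example:

```shell
# Read the current value (60 on most distributions):
cat /proc/sys/vm/swappiness

# Change it until the next reboot (requires root):
#   sudo sysctl vm.swappiness=40

# Make the change persistent across reboots, then reload:
#   echo 'vm.swappiness=40' | sudo tee /etc/sysctl.d/99-swappiness.conf
#   sudo sysctl --system
```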

Letting your system swap when the system deems it necessary is overall a very good thing, even if you have a lot of RAM. Letting your system swap if it needs to gives you peace of mind that if you ever run into a low memory situation even temporarily (while running a short process that uses a lot of memory), your system has a second chance at keeping everything running. If you go so far as to disable swapping completely, then you risk processes being killed due to not being able to allocate memory.

What is happening when the system is bogged down and swapping heavily?

Swapping is a slow and costly operation, so the system avoids it unless it calculates that the trade-off in cache performance will make up for it overall, or if it's necessary to avoid killing processes.

A lot of the time people will look at their system that is thrashing the disk heavily and using a lot of swap space and blame swapping for it. That's the wrong approach to take. If swapping ever reaches this extreme, it means that swapping is your system's attempt to deal with a low memory problem, not the cause of the problem, and that without swapping your running processes would simply start dying.
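
If you want to verify whether heavy disk activity really is swap traffic, the kernel keeps separate counters for swap I/O and for ordinary file paging. A quick check, assuming a Linux system (`vmstat 1` reports the same swap counters in its si/so columns):

```shell
# Pages swapped in/out since boot; if these climb rapidly while
# the disk is thrashing, the system really is swapping heavily:
grep -E '^pswp(in|out) ' /proc/vmstat

# pgpgin/pgpgout count all paging I/O including normal file reads,
# so comparing the two pairs separates cache traffic from swap traffic:
grep -E '^pgpg(in|out) ' /proc/vmstat
```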

What about desktop systems? Don't they require a different approach?

Users of a desktop system do indeed expect the system to "feel responsive" in response to user-initiated actions such as opening an application, which is the type of action that can sometimes trigger a swap due to the increase in memory required.

One way some people try to tweak this is to reduce the swappiness parameter, which makes the system more tolerant of applications using up memory and running low on cache space.

However, this is just shifting the goalposts. The first application may now load without a swap operation, but it leaves less slack for the next application that loads. The same swapping may simply occur later, when you next open an application. In the meantime, overall system performance is lower because the system has purged file caches. Thus, any benefit from the reduced swappiness setting may be hard to measure: it reduces swapping delay at some times while causing slower performance at others. Reducing swappiness to a value as low as 10 can leave the system with much smaller caches, and can even create a different type of disk thrashing, where files the system wants to read keep being purged from cache and must be re-read.

Disabling swap completely should be avoided as you lose the added protection against out-of-memory conditions which can cause processes to crash or be killed.

The most effective remedy by far is to install more RAM if you can afford it.

Can swap be disabled on a system that has lots of RAM anyway?

If you have far more RAM than you're likely to need for applications, then you'll rarely need swap, so disabling it probably won't make a difference in usual circumstances. But if you have plenty of RAM, leaving swap enabled also won't have any penalty, because the system doesn't swap when it doesn't need to.

The only situations in which it would make a difference would be in the unlikely situation the system finds itself running low on available memory, and it's in this type of situation where you would want swap most. So you can safely leave swap on its normal settings for added peace of mind without it ever having a negative effect when you have plenty of memory.

But how can swap speed up my system? Doesn't swapping slow things down?

The act of transferring data from RAM to swap can be a slow operation, but it's only undertaken when the kernel predicts that the overall benefit of keeping a reasonably sized cache will outweigh it. If your system is getting really slow as a result of disk thrashing, swap is not causing it but only trying to alleviate it.

Once data is in swap, when does it come out again?

Any given part of memory will come back out of swap as soon as it's used - read from or written to. However, typically the memory that is swapped is memory that has not been accessed in a long time and is not expected to be needed soon.

Transferring data out of swap is assumed to be about as time-consuming as putting it in there. Your kernel won't remove data from it if it doesn't need to. While data is in swap and not being used, it leaves more memory for other things that are being used, and more system cache.
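
You can see how much of any single process currently sits in swap from its status file in /proc. A small example, shown here for the current shell via its PID `$$` (the VmSwap field is absent on very old kernels):

```shell
# Resident size vs amount currently swapped out (kB) for this shell:
grep -E '^(VmRSS|VmSwap):' /proc/$$/status
```

Tools such as smem can aggregate the same per-process figures system-wide.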

Other technologies you can use to alter your system swap behavior

zram is a method of having a compressed swap device in memory. This can be used as a way to avoid the relative slowness of reading and writing to disk or SSD storage, because writing to memory is much faster. For this increase in swap performance you are trading some CPU, because it needs to perform compression and decompression, and some physical memory space, because even though the zram device is compressed it still occupies some RAM, which then can't be recovered (see however "zram writeback" for a potential alleviation).

zswap is an alternative technology that creates an in-memory compressed write-back cache of a swap device. It gives the same type of trade-off of CPU and memory for the benefit of improved swap performance as zram. Unlike zram, zswap always requires a regular swap device to be configured as well (and this shouldn't be a zram device). The idea is that zswap will then decompress and page out memory to the backing swap device when its cache becomes full or in some cases if memory pages are incompressible.

These two technologies can help reduce the performance cost of swapping, suggesting that it may be appropriate to significantly increase Linux's swappiness setting. However, despite their significantly reduced cost they aren't without any cost; as discussed, there is a little cost to CPU and to the memory space that the compressed store occupies. Should you wish to increase the value, a wise approach would be to do so conservatively, and test representative workloads in a low memory situation and observe the results. To some this will be too much work, to which I'd suggest that staying with the default remains a very safe course of action even if some types of workload may be further improved with an adjustment. The default swappiness setting of 60 will still work, and will not completely prevent you from experiencing the benefits of the faster swapping.
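
To check which swap devices are active (zram devices appear here like any other) and whether zswap is enabled, you can look at the following, assuming a reasonably recent kernel:

```shell
# Active swap devices with their size, usage and priority:
cat /proc/swaps

# Whether zswap is available and turned on ("Y" or "N"), if present:
#   cat /sys/module/zswap/parameters/enabled
```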

thomasrutter
  • 37,874
  • 2
Thank you for your thorough description. I think in my case (4GB RAM and 128GB SSD) and with my usage (Java EE development and several OSes in VirtualBox) swappiness=20 is suitable. What do you think? – Saeed Zarinfam Sep 05 '12 at 05:55
  • 1
    I think the default of 60 would be best, in my opinion. – thomasrutter Sep 05 '12 at 06:10
If I remember correctly the default in Ubuntu is 60; at 8 GB RAM Ubuntu sometimes takes a little swap, when set to 10 or lower it won't take any. http://namhuy.net/1563/how-to-tweak-and-optimize-ssd-for-ubuntu-linux-mint.html – Blanca Higgins Jun 07 '14 at 17:30
  • 4
    @BlancaHiggins did you read the post you commented on? Your comment doesn't seem to describe what swappiness actually does. – thomasrutter Jun 08 '14 at 04:45
  • 3
    This is an excellent answer. Thank you so much for such a great explanation. – Dan Barron Dec 10 '15 at 04:16
  • and take a look at this too, https://help.ubuntu.com/community/SwapFaq#What_is_swappiness_and_how_do_I_change_it.3F – azerafati May 17 '16 at 12:46
  • 2
It may not look like it but as a result of "wall of text" accusations I've made an effort to simplify this answer a lot while still retaining the relevant information. – thomasrutter Dec 18 '17 at 05:12
  • 1
    Glad to see this answer here... I've seen so many ppl lately claim that adding swap files reduces performance. – Rondo Sep 15 '18 at 18:04
  • 6
    Part of the info in that SwapFaq is misleading in my opinion: that setting it to 100 will "aggressively" swap. I think it's more accurate to say that is a very cautious, pro-active setting, swapping at the first sign that the available memory or cache is getting even a little bit low. Whereas low settings like 10 are more of a risky, thrillseeking setting, avoiding doing any swapping until available memory is very low and the cache is pretty much completely gone, leaving the system without much wiggle room. – thomasrutter Feb 21 '19 at 22:35
  • 2
    The official documentation doesn't explicitly state this, but I believe that 100 is not the maximum value for swappiness, and that it is not a "percentage" as many profess. I haven't tried, but values over 100 may result in even earlier swapping. Not that I'd recommend them. – thomasrutter Jul 15 '19 at 23:57
  • 1
    This answer seems to contradict my actual experience. After a runaway process causes my laptop to write a few gigabytes to swap, it takes more than a few days for the browser to recover to normal speeds. Running swapoff -a; swapon -a (when it finally completes) immediately restores browser speed. Can you add an explanation for this phenomenon? Your claim that memory that is needed is rapidly restored seems patently false. – David Roundy Mar 27 '20 at 12:51
@DavidRoundy it is not possible for memory that is in swap to be read from or written to. It must be swapped back in to be used. That said, this only happens on a per-page basis, so it is possible that what you're experiencing is that in the following days, you're accessing different sets of those swapped pages and swapping them back in piecemeal. In such a case swapoff does all this in one go - it probably takes a while but you'll go back to feeling like before the event where you ran out of memory. Swap is not the cause. You can't tune swap to have no penalty when you fully run out of memory. – thomasrutter Mar 28 '20 at 09:54
  • ... but in your specific case swapoff can prevent later lagginess by imposing the full penalty of swapping out in one go instead of piece by piece at random later times. – thomasrutter Mar 28 '20 at 09:59
  • 1
why isn't 10 a good option? Please explain. I still give the system a chance to swap by not setting it to zero; I just don't like file-backed memory for a desktop, reading files from an SSD is already fast enough (regular desktop usage) – Tommy Oct 02 '24 at 01:44
  • @Tommy please read the entire answer and then let me know if you have any specific questions about anything in the answer, pointing out what part of the answer your question is referring to. – thomasrutter Oct 02 '24 at 03:47
  • 1
    On Ubuntu 24.04 with 32GB ram I very often run into unresponsive windows and irritating delays due to swap usage (application RAM usage is less than 10GB and the rest is "cache"). If I turn off swap the system functions perfectly and I don't notice any delay anywhere. The fact that there is less ram to be used for cache maybe hurts performance somewhere but it is definitely not something I can notice. This might be due to some bug in Ubuntu 24.04 or an application but it is just the practical case. If you are having freezes due to swap definitely try turning it off and see how it behaves. – Kvothe Nov 24 '24 at 16:04
Long answer to express "Don't touch it because it's good". No. It's not good. I've seen swap being used without a reason (the sum of used memory and used swap was below my memory capacity). I'm not buying this. – Krzysztof Tomaszewski Nov 28 '24 at 13:35
  • @KrzysztofTomaszewski check under "Once data is in swap, when does it come out again?" - if you have an event which temporarily uses up a lot of memory but then releases it, the data that it swaps, if any, will remain in swap after even if you have plenty of free memory, for reasons explained in that section. Not knowing anything else about your situation that's my suggestion for what you may be seeing there. As a troubleshooting step, disable your swap and do whatever you did prior to that and see what you observe. – thomasrutter Dec 03 '24 at 00:49
24

On a usual desktop, you have 4-5 active tasks that consume 50-60% of memory. If you set swappiness to 60, then about 1/4-1/3 of the ACTIVE task pages will be swapped out. That means that for every task change, for every new tab you open, for every JS execution, there will be swapping.

The solution is to set swappiness to 10. By practical observation, this causes the system to give up disk IO cache (which plays little to no role on a desktop, as the read/write cache is virtually not used at all unless you are constantly copying LARGE files) instead of pushing anything into swap. In practice, that means the system will refuse to swap pages, cutting IO cache instead, unless it hits 90% used memory. And that in turn means a smooth, swapless, fast desktop experience.

On a file server, however, I would set swappiness to 60 or even more, because a server does not have huge active foreground tasks that must be kept in memory as a whole, but rather a lot of smaller processes that are either working or sleeping, and not really changing their state immediately. Instead, a server often serves (pardon) the exact same data to clients, making disk IO caches much more valuable. So on a server, it is much better to swap out the sleeping processes, freeing memory space for disk cache requests.

On desktops, however, this exact setting leads to swapping out blocks of memory belonging to REAL applications that nearly constantly modify or access that data.

Oddly enough, browsers often reserve large chunks of memory that they constantly modify. When such chunks are swapped out, it takes a while to bring them back when they are requested - and at the same time, the browser goes on updating its caches. This causes huge latencies. In practice, you will be sitting 2 minutes waiting for a single web page in a new tab to load.

A desktop does not really care about disk IO, because a desktop rarely reads and writes cacheable, repeating, big portions of data. Cutting down on disk IO cache in order to prevent swapping as much as possible is much more favorable for a desktop than having 30% of memory reserved for disk cache while 30% of RAM (full of blocks belonging to actively used applications) is swapped out.

Just launch htop, open a browser, GIMP, and LibreOffice, load a few documents there, and then browse for several hours. It's really that easy.

terdon
  • 104,404
  • 5
    +1 for server vs desktop differences description. Server's disk cache could be done on a disk field. – Dee Dec 11 '14 at 15:21
  • 1
    If this is the case, why do both server and desktop versions of Ubuntu default to a swappiness of 60? If what you state is true, then it would make more sense for the desktop version to be provided with a default of 20 or even 10, but it is not. – JAB Nov 08 '17 at 04:19
  • 1
    Reference for the implication that swappiness is a direct percentage of ram that gets swapped? I don't think it works like that. – Xen2050 Mar 23 '18 at 10:05
  • 5
    It doesn't. Swappiness is not related to a percentage of RAM. It's a knob that tweaks a fuzzy algorithm towards being more or less likely to swap in a given problem situation. I also think the description of server vs desktop workloads in this answer makes a bunch of assumptions that don't always hold. – thomasrutter Oct 25 '18 at 23:57
  • 7
    This answer is unclear and riddled with false assumptions and assertions. The currently accepted answer is much more accurate and I would caution anyone taking anything from this answer as read. – agittins Sep 04 '20 at 09:21
  • 1
    Swappiness is just flat out NOT related to a "percentage of RAM" as this answer claims. The entire premise behind this answer is wrong before even getting to wild inaccuracies like cache not mattering for desktop performance. – thomasrutter Aug 14 '23 at 02:46
13

If you run a Java server on your Linux system you should really consider reducing swappiness well below the default value of 60, so 20 is indeed a good start. Swapping is a killer for a garbage-collected process because each collection needs to touch large parts of the process memory. The OS does not have the means to detect such processes and get things right for them. It is best practice to avoid swapping as much as you possibly can for production application servers.

Andreas
  • 139
  • 1
    It's true that if you dedicate a server to a specialized workload that you know won't benefit from system cache (like a database server) then reducing swappiness might make sense. I don't think that garbage collection is a specialized enough case though. If memory is touched frequently it's not going to be swapped, it'll be kept in physical RAM. The only time this isn't the case is if you have a severe low memory situation - and swapping is not responsible. – thomasrutter Nov 12 '18 at 23:23
  • 1
Usually all Java garbage collectors are generational - there is a Young Generation (~1/3 of heap), collected and used as an allocation pool very often, and an Old Generation (~2/3 of heap), touched relatively rarely by GC, so it seems Old Generation pages could be swapped without any problems – ALZ Nov 22 '19 at 14:08
7

I would suggest doing some experiments while keeping the system monitor open to see exactly how much load your machine is under. I am also running with 4GB of memory and a 128GB SSD, so I changed the swappiness value to 10, which not only improved performance while under load but, as a bonus, should also increase the life of the SSD, as it will suffer fewer writes.

For a simple video tutorial on how to do this with a full explanation see the YouTube video below

http://youtu.be/i6WihsFKJ7Q

  • 1
Great video that you made, but the video doesn't really answer the question directly; it's more of a howto on changing swappiness. – jmunsch Jun 07 '14 at 17:00
  • 1
+1 for the SSD life hint; for an SSD it is best if the system is as read-only as possible, the rest should stay in memory, and today memory is usually not a big problem on current desktop PCs. – Dee Dec 11 '14 at 14:59
5

I want to add some perspective from a Big Data Performance engineer to give others more background on 2017 technology.

My personal experience is that while I have typically disabled swapping to guarantee that my systems are running at max speed, on my workstation, for one specific problem, I found that swappiness values of 1 and 10 lead to freezing (forever) and long pauses, while a swappiness of 80 for this particular application leads to much better performance and shorter pauses than the default (60). Note that I had 8GB RAM and 4x 256GB of swap backed by 1 HDD. I would normally state the precise statistics seen in my benchmarks and the full hardware specs, but I haven't run any yet, and it's a recent low-end desktop that is not important here.

Back at my former company, the reason we did not enable swap on Spark servers with [500GB to 4TB] x [10-100] nodes is that we saw poor performance as a sign to redesign the data pipeline and data structures in a more efficient manner. We also did not want to benchmark the HDDs/SSDs. Also, swapping that much RAM would need 10-30 disks per node with parallel writes to minimize disk access time.

Today, as 20 years ago and 20 years in the future, it will remain the case that some problems are too large for RAM. With infinite time and money, we can buy/lease more hardware or redesign any process to get the performance to a desirable level. Swapping is just a hack that allows us to ignore the real problem (we don't have enough RAM and we don't want to spend more money).

For those who think higher swappiness is bad advice, here is a little perspective. In the past, hard drives had just a few KB of cache, if any. The interface was IDE/Parallel ATA. The CPU bus was also much slower, along with RAM and many other things. In short, systems were very slow (relative to today) in every way. A couple of years ago, drives used SATA3. Today, SSDs use the NVMe protocol, which has significant latency improvements. Drives have many MB of cache. And the most interesting part is when you use a modern SSD (much more stable read/write endurance and performance) with NVMe or PCIe as your swap storage. It's the best compromise between cost and performance. Please do not try this with cheap or old SSDs.

Swap+SSDs! With high-performance storage, I would highly recommend experimenting with a high swappiness value. It mainly depends on the memory access patterns (randomly accessing all memory vs rarely accessing most of it), memory usage, whether the disk bandwidth is already saturated, and the actual cost of thrashing.

ldmtwo
  • 150
3

A personal anecdote: I didn't know about swappiness, and in hindsight it might have fixed my problem. My system is old and had 4GB of RAM.

I upgraded my Linux OS to the next long-term-support version. That version was "passively" using more RAM, which made my system use more swap. The system started bogging down because the swap was on an HDD.

Looking at the stats, the RAM and swap used combined were not greater than my total RAM. The problem was partially, as 'Linux dude' mentioned, that browsers often reserve large chunks of memory that they constantly modify. I was using Firefox (YouTube in particular is heavy), and because of that, large chunks were going into swap that were actually needed.

I ended up getting more RAM, which did solve my problem, but it might have been possible to postpone buying the RAM if I had tried setting swappiness to a lower value. I don't regret buying the RAM, it was a good upgrade, but not everyone can make an upgrade.

h3dkandi
  • 191
0

It could be that a lot of the perceived swapping behaviour on startup or when opening programs is Linux reading configuration files etc. from disk. So it may be best to check with the system monitor program before assuming that the hard drive access is due to swapping.

Seth
  • 59,442
  • 44
  • 149
  • 201
user1740850
  • 139
  • 3