The Anaconda OS installer used by Fedora, RHEL and their derivatives has often been criticized for requiring more memory than the installed OS itself. That may be a big surprise for users who don't look too deep into the issue, but it is no surprise for people who really understand what goes on during an OS installation and how it all works. The basic truth (some call it an issue) about OS installation is that the installer cannot write to the physical storage of the machine before the user tells it to do so. However, since OS installation is quite a complex process, and since it has to be based on components from the OS itself, there are many things that have to be stored somewhere for the installer to work. The only space that is guaranteed not to contain data the installer shouldn't overwrite or leave garbage in is RAM.
Thus, for example, when installing from PXE or with a minimal ISO file:

- vmlinuz (the kernel) is loaded into RAM,
- initrd.img is loaded and extracted into RAM, and
- squashfs.img, containing the installation environment of which the Anaconda installer itself is a part, is loaded into RAM as well, with on-the-fly extraction of the required data.

That's quite a lot of RAM taken, with Anaconda consuming 0 MB as it has not even been started yet. On top of that, Anaconda starts, loading a lot of libraries covering all areas of an OS: from storage and packaging, through language, keyboard and localization configuration, to firewall, authentication, etc.
Tricks to lower RAM consumption
Although all the pieces listed above are crucial for the OS installation to work and to provide users with a high-level UI, there are some tricks that can be done to lower RAM consumption.
kernel + initrd.img
Obviously, the kernel has to stay as it is, and in order to support a wide range of HW and SW technologies it cannot be made smaller. The situation of initrd.img, on the other hand, is quite different. It is loaded and extracted into RAM and used when the system boots into the installation environment and when it reboots into the newly installed OS, but the rest of the time it just lies around taking space and doing nothing. Thus it can be compressed once the boot is finished and decompressed back when the machine reboots, saving RAM at the time Anaconda needs it most. It means users have to wait a few seconds (or tens of seconds) longer when (re)booting the machine, but that doesn't make much difference in a whole OS installation taking tens of minutes.
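The idea can be illustrated with plain shell commands; the file name below is only a hypothetical stand-in for the extracted initrd data, which Anaconda handles internally:

```shell
# Illustration only: compress an unused blob to free RAM/space, restore it later.
# /tmp/initrd-demo is a hypothetical stand-in for the extracted initrd data.
dd if=/dev/zero of=/tmp/initrd-demo bs=1024 count=64 2>/dev/null
gzip -f /tmp/initrd-demo          # compress while the data is not needed
gunzip -f /tmp/initrd-demo.gz     # decompress again before the reboot needs it
ls -l /tmp/initrd-demo
```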
squashfs.img
The squashfs.img containing the installation environment, and thus the Anaconda installer itself, is either loaded into RAM in case of PXE and minimal/netinst installations, or mounted from a storage device in case of DVD or USB flash drive installations. See the trick to need less RAM for the installation? Place the squashfs.img on some mountable media and it doesn't have to be loaded into RAM, saving over 250 MB of RAM 1. In case of a virtual machine, the easiest way is to put the squashfs.img into some directory, create an ISO file from it by running mkisofs -V SQUASH on that directory, and then attach the ISO file to the VM. By using -V SQUASH we give the ISO file a label/ID/name which we can then use for identification, passing inst.stage2=hd:LABEL=SQUASH:/squashfs.img to the Anaconda installer as a boot option. For a physical machine, the easiest solution is probably a USB drive with a labeled file system containing the squashfs.img. A universal solution is then an NFS server exporting a directory containing the squashfs.img.
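The VM variant of the trick boils down to a few commands. This is only a sketch: the paths are example values, and a tiny dummy file stands in for the real squashfs.img here:

```shell
# Put the (here: dummy) squashfs.img into a directory of its own
mkdir -p /tmp/stage2
dd if=/dev/zero of=/tmp/stage2/squashfs.img bs=1024 count=16 2>/dev/null
# Create a labeled ISO from that directory (genisoimage is a drop-in
# replacement for mkisofs on many distributions)
if command -v mkisofs >/dev/null 2>&1; then
    mkisofs -quiet -V SQUASH -o /tmp/stage2.iso /tmp/stage2
fi
# After attaching the ISO to the VM, point Anaconda at it with the boot option:
echo "inst.stage2=hd:LABEL=SQUASH:/squashfs.img"
```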
RSS (Resident Set Size)
"The resident set size is the portion of a process’s memory that is held in RAM." (taken from Wikipedia) The RSS of the anaconda process is something around 160/235 MB in text/graphical. Quite a lot you may think, but this 160/235 MB of RAM contains information about all the locales, keyboards, timezones and package groups available plus partitioning layout information and a lot more. You may imagine it as running yum, parted, system-config-keyboard/language/timezone and many other tools together in a single process. Then there is also the
anaconda-yum process running the Yum transaction installing packages which takes 120 MB RAM or more depending on the size of the package set.
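RSS numbers like these are easy to check from a console. A small sketch, inspecting the shell's own process since anaconda is of course only present in the installation environment:

```shell
# Print the resident set size (in kB) of the current shell process;
# in the installer one would ask about anaconda instead, e.g.:
#   ps -C anaconda -o pid,rss,comm
rss_kb=$(ps -o rss= -p $$ | tr -d ' ')
echo "RSS of this shell: ${rss_kb} kB"
```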
However, even in this area some tricks can be done to lower memory consumption. One nice example is switching from python-babel to langtable for getting information about available languages and locales and for searching for best matches 2, which led to a ~20 MB decrease of Anaconda's RSS. Another potential trick would be dropping the language, locale, keyboard layout and timezone objects from memory once the installation begins and the user can make no further changes, or in text mode, where there is no interactive language/locale and keyboard configuration at all.
It's also worth running the top utility from a console when the installation environment is fully established to see which processes consume the most RAM. Obviously, number one is anaconda, but for example recently dhclient started taking more than 10 MB of RAM, which is quite a lot for a process that basically just sits there and does nothing complicated. A bug report has been filed on dhclient's RAM consumption and hopefully it will soon be fixed. Another useful command is du -sh /tmp, because the tmpfs file system mounted at /tmp is used as storage for all data kept in the form of files (logs, configuration files, etc.) and also, e.g., as a cache for Yum.
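A non-interactive equivalent of the top check, plus the /tmp usage check, might look like this:

```shell
# Top five RAM consumers by RSS (kB), without the interactive top UI
ps -eo rss,comm --sort=-rss | head -n 6
# Space taken by files in the tmpfs-backed /tmp (logs, config files, Yum cache)
du -sh /tmp 2>/dev/null || true
```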
A traditional solution for not having enough RAM, when super-high performance is not required, is using (more) swap. However, that is not applicable to an OS installer, which cannot touch disks before the user selects them for the OS installation and clearly confirms that selection. A swap device can be detected by the installer, but it could contain a hibernated system or other data that shouldn't be overwritten. And even if the swap was freshly formatted and contained nothing, using it would cause many troubles when doing partitioning, because it would be equivalent to a mounted file system that cannot be unmounted (where would its data go if RAM is full?).
The only exception is a swap device created as part of the new OS. That one is freshly created, or clearly marked to be used by the new OS, so it can be used. But that happens only after its partition or logical volume is created and formatted as swap. It's useful anyway, because the package installation that happens afterwards requires quite a lot of RAM and space in /tmp (used as a cache), so Anaconda activates such swap devices ASAP. Activating them marks a critical point for the minimum RAM needed for the OS installation: once big swaps located on HDDs/SSDs are activated, there is enough space for basically whatever the installer and its components could need. The critical point is thus right after users confirm they really want to install the OS with the chosen configuration and storage layout, when partitions are created and formatted.
Anaconda zRAM swap
Recently (only a week ago actually), one more nice trick has been applied to the Anaconda sources in the area of memory requirements: adding a zRAM swap.
zRAM is a block device similar to tmpfs, introduced in kernel 3.14. But compared with tmpfs it has two very neat features: it is a block device, and its content is compressed (with the LZO algorithm 3). It can thus be used, in combination with an arbitrary file system, as a replacement for tmpfs, potentially making the capacity of such a RAM-located file system bigger thanks to the compression.
But a very clever and neat usage is using it as a swap device as big as the amount of available RAM. Take a few seconds to think about it. See? Compressed RAM! When the kernel runs out of physical memory, which happens basically immediately once something starts allocating the memory reserved for the zRAM devices, the kernel starts using swap devices. By giving zRAM swap devices high priority among swaps, it is ensured that the kernel uses zRAM swaps first even if there are other swap devices. Since zRAM blocks are located in RAM, the only difference is that memory pages are compressed. That of course requires some CPU cycles, but the LZO compression is really fast, and by dividing the amount of available RAM by the number of available CPUs and creating one zRAM device per CPU, the compression can easily be made parallel. The result is that, for example, on a machine we use for VMs running Fedora and RHEL installations over and over again, the average compression ratio is between 50 and 60 % without any noticeable CPU performance drop! Nice, isn't it?
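A rough sketch of what setting up such per-CPU zRAM swap devices involves. The sysfs interface is the standard zram one; the actual device commands need root, so they are only shown as comments here, and Anaconda does the equivalent internally:

```shell
# Split available RAM evenly across one zRAM device per CPU
ncpus=$(nproc)
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
per_dev_kb=$((total_kb / ncpus))
echo "${ncpus} zRAM devices, ${per_dev_kb} kB each"
# As root, one would then do (for each device i):
#   modprobe zram num_devices=${ncpus}
#   echo $((per_dev_kb * 1024)) > /sys/block/zram${i}/disksize
#   mkswap /dev/zram${i}
#   swapon -p 100 /dev/zram${i}   # high priority => used before disk swap
```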
The compression ratio in the Anaconda installer environment has not been investigated yet 4. But what has been tested is that with the squashfs.img on mountable media, 320 MB of RAM is enough to install Fedora Rawhide in a text mode installation. 400 MB of RAM is enough for a graphical installation if the inst.dnf=1 boot option is used to tell Anaconda it should use DNF instead of Yum (more about that in the next section). The zRAM swap is activated when less than 1 GB of RAM is available, because with more RAM there is no need for it. Everything would work even without zRAM swap with 1 GB of RAM, but things are much faster with it, because instead of heavy use of swap on HDD/SSD, zRAM is used primarily.
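The 1 GB activation threshold itself is a trivial check; a sketch of the equivalent logic in shell:

```shell
# Enable zRAM swap only on machines with less than 1 GB of RAM (sketch)
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
threshold_kb=$((1024 * 1024))     # 1 GB expressed in kB
if [ "$mem_kb" -lt "$threshold_kb" ]; then
    echo "less than 1 GB of RAM: would set up zRAM swap"
else
    echo "1 GB of RAM or more: zRAM swap not needed"
fi
```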
DNF vs. Yum
With Yum written completely in Python and DNF using many C libraries for the underlying operations, DNF is expected to require less RAM. The current situation is that it depends on one's answer to the following question: "What does it mean to require less RAM?" Without the inst.dnf=1 boot option, the amount of memory required by the installation environment when the actual installation begins (the critical point) is ~232 MB, whereas with the inst.dnf=1 option it is 190 MB. Seems like 40 MB less, right? But… (there always is one) shortly before this point there is a peak of memory consumption when DNF does metadata processing and dependency solving, which is over 320 MB and causes Anaconda to be killed by oom_kill with less than 350 MB of RAM (compressed by zRAM). But that reveals the fact that although DNF's peak takes 120 MB more RAM than Yum's, with zRAM the difference is only 30 MB (320 MB enough with Yum, 350 MB enough with DNF), and thus the extra data can be heavily compressed.
With a graphical installation, 400 MB of RAM is enough with DNF but too little with Yum. That probably looks weird considering the paragraph above, but it has a simple explanation: DNF's peak happens before the peaks of the other components taking part in the graphical installation, and 400 MB is enough for all of them, whereas Yum's peak meets one of the other peaks, grows over 400 MB, and requires at least 410 MB.
Although the memory requirements of a full-featured OS installer have to be quite high, this post tries to show that there are many things that can improve the situation quite a lot. It's not easy, and it needs clever ideas and a wide overview of the system and available technologies, but hey, this is open source; anybody having a great idea or knocking their head on my lack of knowledge: patches and suggestions welcome! There are probably many other things that can be improved, but it usually takes quite a lot of testing and experimenting, which is time consuming, and to be honest, these things don't have the priority of show-stopper bugs like release blockers or crashes/freezes on traditional systems with standard HW equipment. Maybe with various ARM machines, cloud images and other restricted systems the focus on lowering RAM consumption will increase. Hard to tell; RAM is quite cheap these days. But 400 MB of RAM for a full-featured OS installation is quite nice, isn't it? Do you know about any other OS installer that requires less RAM and provides an at least somewhat comparable feature set?
1. the image contains many packages required for the installation, but many unneeded files are pruned (documentation in general, many binaries, libraries, images, etc.) and squashfs provides good compression ↩
3. which is the default ↩
4. I'll add a comment or blog post with such numbers once I have time to determine them ↩