Tag Archives: Anaconda

libblockdev reaches the 1.0 milestone!

A year ago, I started working on a new storage library for low-level operations with various types of block devices — libblockdev. Today, I’m happy to announce that the library has reached the 1.0 milestone, which means that it covers all the functionality stated in the initial goals and that its API is going to stay stable.

A little bit of a background

Are you asking the question: "Why yet another code implementing what’s already been implemented in many other places?" That’s, of course, a very good and probably crucial question. The answer is that I and the people who were at the birth of the idea believe this is the first time such functionality is implemented in a way that is usable for a wide range of tools, applications, libraries, etc. Let’s start with the requirements every widely usable implementation should meet:

  1. it should be written in C so that it is usable for code written in low-level languages
  2. it should be a library, as DBus is not usable together with chroot() and the like, and running subprocesses is suboptimal (slow, consuming a lot of entropy, output needs to be parsed, etc.)
  3. it should provide bindings for as many languages as possible, in particular the widely used high-level languages like Python, Ruby, etc.
  4. it shouldn’t be a single monolithic piece required by every user code no matter how much of the library it actually needs
  5. it should have a stable API
  6. it should support all major storage technologies (LVM, MD RAID, BTRFS, LUKS,…)

If we take the candidates potentially covering the low-level operations with block devices — Blivet, ssm and udisks2 (now being replaced by storaged) — we can easily come to the conclusion that none of them meets the requirements above. Blivet 1 covers the functionality in a great way, but it’s written in Python and thus hardly usable from code written in other languages. ssm 2 is also written in Python; moreover, it’s an application and it doesn’t cover all the technologies (it doesn’t try to). udisks2 3 and now storaged 4 provide a DBus API and don’t provide, for example, functions related to BTRFS (and, in the case of udisks2, not even LVM).

The libblockdev library is:
  • written in C,
  • using GLib and providing bindings for all languages supporting GObject introspection (Python, Perl, Ruby, Haskell*,…),
  • modular — using separate plugins for all technologies (LVM, Btrfs,…),
  • covering all technologies Blivet supports 5 plus some more,

by which it fulfills all the requirements mentioned above. It’s only a wish, but a strong one, that every new piece of code written for low-level manipulation of block devices 6 should be written as part of the libblockdev library, tested and reused in as many places as possible instead of being written again and again in many, many places with new, old, weird, surprising and custom bugs.


As mentioned above, the library loads plugins that provide the functionality, each related to one storage technology. Right now, there are lvm, btrfs, swap, loop, crypto, mpath, dm, mdraid, kbd and s390 plugins. 7 The library itself basically only provides a thin wrapper around its plugins so that it can all be easily used via GObject introspection and so that it is easy to set up logging (and probably more in the future). However, each of the plugins can be used as a standalone shared library in case that’s desired. The plugins are loaded when the bd_init() function is called 8 and changes (loading more or fewer plugins) can later be done with the bd_reinit() function. It is also possible to reload a plugin in a long-running process if it gets updated, for example. If a function provided by a plugin that was not loaded is called, the call fails with an error, but doesn’t crash, and thus it is up to the caller code to deal with such a situation.

The libblockdev library is stateless from the perspective of the block device manipulations. I.e., it has some internal state (like tracking whether the library has been initialized or not), but it doesn’t hold any state information about the block devices. So if you e.g. use it to create some LVM volume groups and then try to create a logical volume in a different, non-existing VG, it just fails creating it at the point where LVM realizes that such a volume group doesn’t exist. That makes the library a lot simpler and "almost thread-safe", with the word "almost" being there just because some of the technologies don’t provide any other API than running various utilities as subprocesses, which cannot generally be considered thread-safe. 9
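The "fail with an error instead of crashing" behaviour for unloaded plugins can be sketched in plain Python. This is only an illustration of the design, not libblockdev’s actual code — the names init, call and PluginError are made up, and stdlib modules stand in for the real shared-library plugins:

```python
import importlib

class PluginError(Exception):
    """Raised when a function from an unloaded plugin is called."""

# Registry of loaded "plugins"; in libblockdev these would be shared
# libraries like lvm, btrfs, swap, ... — here stdlib modules keep the
# sketch runnable.
_loaded = {}

def init(plugin_names):
    """Load the requested plugins, skipping (not crashing on) missing ones."""
    for name in plugin_names:
        try:
            _loaded[name] = importlib.import_module(name)
        except ImportError:
            pass  # plugin stays unloaded; calls into it will fail cleanly

def call(plugin, func, *args):
    """Call a plugin function, failing with an error if the plugin is absent."""
    if plugin not in _loaded:
        raise PluginError("plugin %r is not loaded" % plugin)
    return getattr(_loaded[plugin], func)(*args)
```

A caller can then catch PluginError and degrade gracefully, which is exactly the responsibility the paragraph above leaves to the caller code.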

Scope (provided functionality)

The first goal for the library was to replace Blivet’s devicelibs subpackage that provided all the low-level functions for manipulation of block devices. That fact also defined the original scope of the library. Later, we realized that we would like to add LVM cache and bcache support to Blivet, and the scope of the library got extended to the current state. The supported technologies are defined by the list of plugins the library uses (see above) and the full list of the functions can be seen either in the project’s features.rst file or by browsing the documentation.

Tests and reliability

Right now, there are 135 tests run manually and by a Jenkins instance hooked up to the project’s Git repository. The tests use loop devices to test the vast majority of the functions the library provides 10. They must be run as root, but that’s unavoidable if they are to really test the functionality and not just some mocked-up stubs that we would believe behave like a real system.

The library is used by Fedora 22’s installation process, as F22’s Blivet was ported to use libblockdev before the Beta release. There have been a few bugs reported against the library (the majority of them related to FW RAID setups), with all of them fixed and covered by tests for those particular use cases (based on data gathered from the logs in bug reports).

Future plans

Although the initial goals are all covered by version 1.0 of the library, there are already many suggestions for additional functionality and also extensions of some of the functions that are already implemented (extra arguments, etc.). The most important goal for the near future is to fix reported bugs in the current version and promote the library as much as possible so that the wish mentioned above gets fulfilled. The plan for the slightly more distant future (let’s say 6-8 months) is to work on additional functionality targeting version 2.0, which will break the API for the purpose of extending and improving it.

To be more concrete, one of the planned new plugins is, for example, the fs plugin that will provide various functions related to file systems. One such function will definitely be the mkfs() function that will take a list (or dictionary) of extra options passed to the particular mkfs utility on top of the options constructed by the implementation of the function. The reason for that is the fact that some file systems support many configuration options during their creation and it would be cumbersome to cover them all with function parameters. In relation to that, at least some (if not all) of the LVM functions will also get such an extra argument so that they are useful even in very specific use cases that require fine-tuning of the parameters not covered by functions’ arguments.
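The extra-options idea can be sketched in a few lines of Python. This is a hypothetical illustration of the pattern, not the real fs-plugin API — build_mkfs_cmd and its parameters are invented for this example:

```python
def build_mkfs_cmd(fstype, device, label=None, extra_opts=None):
    """Build an mkfs command line, appending caller-supplied extra options.

    The function covers the options it knows about (here just the label)
    and tacks any extra, filesystem-specific options on top, so callers
    can fine-tune parameters the wrapper doesn't model explicitly.
    """
    cmd = ["mkfs.%s" % fstype]
    if label is not None:
        cmd += ["-L", label]
    for opt, value in (extra_opts or {}).items():
        cmd.append(opt)
        if value is not None:
            cmd.append(value)
    cmd.append(device)
    return cmd
```

For example, `build_mkfs_cmd("ext4", "/dev/sda1", label="data", extra_opts={"-m": "0"})` yields a command line with the reserved-blocks option appended even though the wrapper knows nothing about `-m`.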

Another potential feature is to add some clever and nice way of progress reporting to functions that are expected to take a lot of time to finish, like lvresize(), pvmove(), resizefs() and others. It’s not always possible to track the progress, because even the underlying tools/libraries don’t report it, but where possible, libblockdev should be able to pass that information to its callers, ideally in some unified way.

So a lot of work behind, much more ahead. It’s a challenging world, but I like taking challenges.

  1. a python package used by the Anaconda installer as a storage backend

  2. System Storage Manager

  3. daemon used by e.g. gnome-disks and the whole GNOME "storage stack"

  4. a fork of udisks2 adding an LVM API and being actively developed

  5. the first goal for the library was to replace Blivet’s devicelibs subpackage

  6. at higher than the most low-level layers, of course

  7. I hope that with the exception of kbd which stands for Kernel Block Devices the related technologies are clear, but don’t hesitate to ask in the comments if not.

  8. or e.g. BlockDev.init(plugins) in Python over the GObject introspection

  9. use Google and "fork shared library" for further reading

  10. 119 out of 132 to be more precise

Snakes++: Anaconda goes Python 3

Anaconda is the OS installer used by the Fedora and RHEL GNU/Linux
distributions and all their derivatives. It’s written in the Python programming
language and many people say it’s one of the biggest and most complex pieces of
software written in this dynamic programming language. At the same time, it is
one of the oldest big Python projects. [1]

[1] the first commit in Anaconda’s git repository is from Apr 24 1999, but
that’s the beginning of the GUI code being developed so the core actually
predates even these "IT-old times"

Fedora, Anaconda, Python 3

Over time, the Python language has been evolving, and so has Anaconda’s
codebase, getting not only new features and bug fixes, but also code improvements
using new language features. [2] Such evolution has been happening in small
steps, but in recent years the community around the Python
language has been slowly migrating to a new, backwards-incompatible version of
the language — Python 3. Python 3 is the version of Python that will get
future improvements and generally the vast majority of focus and work. There
will only be bugfixes for Python 2 in the future. [3] Users of Fedora may have
noticed that there was a proposal for a major Python 3 as Default change that
suggested migrating core components (more or less everything that’s available on
a live medium) to Python 3 for Fedora 22. Since some developers and partially
also QA ignored it (intentionally or not), the deadline was missed and the
change was postponed to Fedora 23. And together with this, Anaconda’s deadline
for the "Python 3 switch" (see below) was postponed to the time when Fedora 22
gets released, as we identified three key facts during the discussions about the
original feature proposal (for F22):

  1. no matter how clear the target is, people ignore it if things are not broken (at which point they start complaining :-P)
  2. it’s hard and inefficient to maintain both Python 2 and Python 3 versions of such a big project as Anaconda
  3. QA doesn’t have enough capacity to test both versions, and thus switching between them during the release would make things broken, moving the whole process at least a few weeks back
[2] an interesting fact is that the beginning of the history of Anaconda’s
sources predates the existence of True and False (boolean) values
in Python
[3] https://www.python.org/dev/peps/pep-0373/#maintenance-releases

"Python 3 only" vs. "Python 2/3 compatible"

Python 3 is a major step, and making some code compatible with both Python 2 and
Python 3 usually requires adding ifs checking which version of Python the
code is run in and doing one or another thing based on such a check. There is a
very useful module called six [5] that provides many functions, lists and
checks that hide those ifs, but even when using this module, making code
Python 2/3 compatible makes it more complicated, less readable and harder to
debug (and thus maintain). While for libraries (or more precisely Python
packages) it is worth it, as it makes them usable for a wider variety of user
code, for applications written in Python 2 it is easier and in many ways better
to just switch to Python 3.

[5] http://pythonhosted.org/six/
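To illustrate the kind of version checks meant above, here is a small self-contained sketch (it deliberately avoids importing six so it runs anywhere; six ships ready-made equivalents such as six.PY2 and six.string_types):

```python
import sys

PY2 = sys.version_info[0] == 2

# The kind of branching that Python 2/3 compatible code accumulates;
# six hides checks like these behind a single import.
if PY2:
    string_types = (str, unicode)  # noqa: F821 — unicode exists only on Python 2
    text = unicode                 # noqa: F821
else:
    string_types = (str,)
    text = str

def is_string(value):
    """Return True for any string type on both Python 2 and Python 3."""
    return isinstance(value, string_types)
```

Every such branch is one more thing to read, test and keep in sync, which is why dropping Python 2 entirely is so attractive for an application.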

For the reasons described above, Red Hat’s Installer team, as the group of
Anaconda’s developers and maintainers, decided to make all their libraries Python
2/3 compatible and to move Anaconda to Python 3. The only exception is the
pyblock (CPython) library that was developed in 2005 to provide quite a wide
range of functionality (not only) for the Anaconda installer, but which has
over time been more and more replaced by other libraries, utilities and other
means and became used only by the installer. Thus, instead of porting the whole
library to Python 3, we decided to drop it and implement the few required
functions in the (new) libblockdev library [6] that was being born at that
time.

[6] using GLib and GObject introspection as shown in some of my other
posts and thus being both Python 2 and Python 3 compatible

Yum vs. DNF

Not everything used by the Anaconda installer is, of course, developed and
maintained by the Installer team. There were a few "external" Python libraries
that needed to be made Python 2/3 compatible, and then there was Yum,
used by Anaconda for one of the key things — package installation. Yum is
usually used as the yum utility, but it is also a Python package, which is
the way Anaconda has been making use of it. However, Yum has slowly been
replaced by a new project called DNF that started as a fork of Yum and that
has been replacing Yum code with either new code or calls to newly born (C)
libraries. It has been decided that Yum will never get Python 3 support as
it will stay in maintenance mode, with new features and development being
focused on DNF. A result for Anaconda was that with the switch to Python 3 it
would also have to say "good bye" to Yum and use DNF instead. Fortunately, the
author and first maintainer of DNF — Aleš Kozumplík — gave his former team
great help and wrote the vast majority of Anaconda’s DNFPayload
class. Still, the switch from Yum to DNF [7] in the installer was expected to
be a problematic thing and was one of the "official reasons" why the switch to
Python 3 was postponed to Fedora 23. [8]

[7] by default (previously only activated by the inst.dnf=1 boot option)
[8] although so far the DNFPayload seems to be working great with only
few smaller (non-key) features being missing and added over time and
being the default for Fedora 22 (with inst.nodnf turning it off)

We need you

As can be seen from the above, there are lots of things behind something that
might look like "just a switch to a newer version of Python". And where there is
a lot of stuff behind something, there’s also a lot of stuff that can go wrong. Be
it Anaconda’s code (no, the 2to3 utility really doesn’t do it all
magically) or any of the libraries it uses, there is quite a lot to test and
check. That’s why we decided to give ourselves a head start and did the switch to
Python 3 in a separate branch of the project, using Copr to do unofficial
builds and composes. [9] At first, we had only been creating Rawhide composes,
but that turned out to be not enough, as we spent as much time hitting and
debugging unrelated Rawhide issues as Python 3 related ones. That’s why
we decided to spend extra time on it, ported the f22-branch code to Python 3
and started creating F22 Python 3 composes that are stable and do not suffer
from issues caused by unstable Rawhide packages.

The images are at https://m4rtink.fedorapeople.org/anaconda/images/python3/f22/
and we would like to encourage everybody having some free "testing time" to
download and test them by doing various more or less weird Fedora 22
installations. Please report issues at Anaconda’s GitHub project page and if
you have a patch, please submit a pull request against our development
branch.
Last but not least, big THANKS for any help!

[9] https://copr.fedoraproject.org/coprs/bkabrda/py3anaconda/

Introducing blivet-gui — a new GUI storage management tool


Let’s be honest and start directly with the inconvenient truth 1 — storage management is hard. There are many technologies using many ways to make our data available faster, more reliably, with high availability, over long distances, etc. What’s even harder is that all those technologies can be combined together to get a combination of their advantages. Do you think having tens of iSCSI LUNs combined in an MD RAID exporting RAID devices used as LVM PVs, together with two SSD drives combined in another MD RAID providing fast dm-cached LVs, with an XFS file system on top of all this exported as a GlusterFS sounds crazy, overly complicated and totally unusable? 2 Five words: “Welcome to the enterprise storage!” Where something like that is a perfectly usable solution providing all the features mentioned in the beginning of this post.

The Anaconda installer and blivet

For a storage expert or at least a senior system administrator, it is quite easy to run a few commands in a shell to set up the stack described above. But we are in the age of nice, shiny graphical user interfaces that are expected and thus required to support many more or less crazy combinations and configuration options. And when does storage configuration usually happen? When the system is installed. That’s the reason why the most comprehensive and feature-complete storage management (or more precisely, storage configuration) UI was over the years implemented in the Anaconda installer, the RHEL and Fedora 3 OS installer. Just a small demonstration — the only two technologies from the example described above currently unsupported in the Anaconda installer are GlusterFS and dm-cache 4, with the list of supported technologies being at least three times as long as the list of abbreviations from that example.

Having probably the only code in the world supporting so many technologies, it started to become evident that it would be useful to put such code in a library usable for other projects. And thus the code, majorly rewritten by David Lehman in 2009, was in early 2013 split into a library (or more precisely a Python package) which was, due to its nature, given the name blivet — “[blivet] is an undecipherable figure, an optical illusion and an impossible object” 5 — because it seems impossible to provide a high-level and easy to understand API for something as complicated as (enterprise) storage.

Birth of blivet-gui

When Vojtěch Trefný approached me last year, telling me that he didn’t like the Anaconda installer’s partitioning UI and that he had seen a bachelor thesis topic focused on improving it (by implementing a visualisation of the changes that will happen on the disks), I was really happy to see somebody who doesn’t “just complain”, but actually wants to make some effort to make things better. Since the topic focused on visualisation had already been taken by somebody else 6, we needed to come up with something new. And that’s how the idea of a new storage management tool based on the blivet library was born. We gave Vojtěch a link to the blivet documentation and, to my surprise, after a few months he came to me, as the supervisor of the thesis, with an announcement that he had it done and in a shape that fulfilled the requirements specified in the bachelor thesis’ description — partitions and LVM support, user and developer documentation and the whole thing being embeddable into another application. After defending the thesis and the implementation, Vojtěch has continued working on the blivet-gui tool, adding new features (LUKS, kickstart generation,…), fixing bugs and improving the UX (clickable objects in the visualisation,…), with me being partially a QA, a source of some ideas and comments about the implementation, and in the most recent days a press agent spreading the word about what a great and promising tool blivet-gui already is 7.

Now and the future

The blivet-gui tool is under heavy development and, for now, it doesn’t support all the features the blivet library supports. But it was intentionally announced 8 and made publicly known in this shape because I’m sure Fedora and the whole open-source world is a great community with a lot of clever and productive people who can help with development, QA, documentation, ideas, feature requests and everything else. We are a community of great people ready to do great things that can literally help thousands and millions of people all around the world. And we all know that storage management is a hard thing to do, even harder when trying to do it right. So let’s go, guys, everybody can help to make blivet-gui more user- or developer-friendly, secure, reliable and feature-complete! Tell us what you like or hate about it most, tell us what you miss in it most, tell us what other storage management tools do better. Every rational and non-aggressive input is welcome, with patches being most welcome, of course. :) Cloning the blivet-gui repository at GitHub is a great way to start.


The purpose of the blivet-gui tool is not to replace GParted, gnome-disks-utility or any other storage management tool. It is just another option for storage management providing a different (although not disjoint) feature set than other storage management tools.

  1. reaching for the Nobel Prize for peace, of course
  2. I really like how storage gives you a chance to use 20 cryptic abbreviations in a single sentence
  3. and their derivatives
  4. both already being discussed and planned by the Anaconda-storage sub-team
  5. http://en.wikipedia.org/wiki/Blivet
  6. not implemented yet though
  7. http://blog.vojtechtrefny.cz/blivet-gui
  8. https://lists.fedoraproject.org/pipermail/devel/2014-September/202105.html

Fedora Rawhide installation with 320 MB RAM


The Anaconda OS installer used by Fedora, RHEL and their derivatives has been criticized many times for its memory requirements being bigger than the memory requirements of the installed OS. Maybe a big surprise for users who don’t see too deep into the issue, no surprise for people who really understand what’s going on during an OS installation and how it all works. The basic truth (some call it an issue) about OS installation is that the installer cannot write to the physical storage of the machine before the user tells it to do so. However, since the OS installation is quite a complex process and since it has to be based on the components from the OS itself, there are many things that have to be stored somewhere for the installer to work. The only space for data that for sure doesn’t contain anything the installer shouldn’t overwrite or leave garbage in is RAM.

Thus, for example, when doing an installation from PXE or with a minimal ISO file (netinst/boot.iso), vmlinuz (the kernel) is loaded to RAM, initrd.img is loaded and extracted to RAM, and squashfs.img, containing the installation environment of which the Anaconda installer itself is a part, is loaded to RAM as well, with on-the-fly extraction of the required data. That’s quite a lot of RAM taken, with Anaconda consuming 0 MB as it is not even started yet. On top of that, Anaconda starts, loading a lot of libraries covering all areas of an OS, from storage and packaging over language, keyboard and localization configuration to firewall, authentication, etc.

Tricks to lower RAM consumption

Although all the pieces listed above are crucial for the OS installation to work and to provide users with a high-level UI, there are some tricks that can be done to lower RAM consumption.

kernel + initrd.img

Obviously, the kernel has to stay as it is, and in order to support a wide range of HW and SW technologies it cannot be made smaller. The situation of the initrd.img, on the other hand, is quite different. It is loaded and extracted to RAM and used when the system boots to the installation environment and when it reboots to the newly installed OS, but the rest of the time it just lies around taking space and doing nothing. Thus it can be compressed when the boot is finished and then decompressed back when the machine reboots, saving RAM in the time when Anaconda needs it most. It means users have to wait a few seconds (or tens of seconds) longer when (re)booting the machine, but that doesn’t make much difference in the whole OS installation taking tens of minutes.


The squashfs.img containing the installation environment, and thus the Anaconda installer itself, is either loaded to RAM in case of PXE and minimal/netinst installations, or mounted from a storage device in case of DVD or USB flash drive installations. See the trick to needing less RAM for the installation? Place the squashfs.img on some mountable media and it doesn’t have to be loaded to RAM, saving over 250 MB of RAM space 1. In case of a virtual machine, the easiest way is to put the squashfs.img into some directory, create an ISO file from it by running mkisofs -V SQUASH on it and then attach the ISO file to the VM. By using -V SQUASH we give the ISO file a label/ID/name which we can then use for identification by passing inst.stage2=hd:LABEL=SQUASH:/squashfs.img to the Anaconda installer as a boot option. For a physical machine, the easiest solution is probably a USB drive with a labeled file system containing the squashfs.img. A universal solution is then an NFS server exporting a directory containing the squashfs.img.

RSS (Resident Set Size)

"The resident set size is the portion of a process’s memory that is held in RAM." (taken from Wikipedia) The RSS of the anaconda process is something around 160/235 MB in text/graphical mode. Quite a lot, you may think, but these 160/235 MB of RAM contain information about all the locales, keyboards, timezones and package groups available, plus partitioning layout information and a lot more. You may imagine it as running yum, parted, system-config-keyboard/language/timezone and many other tools together in a single process. Then there is also the anaconda-yum process running the Yum transaction installing packages, which takes 120 MB of RAM or more depending on the size of the package set.

However, even in this area some tricks can be done to lower memory consumption. One nice example is switching from python-babel to langtable for getting information about available languages and locales and searching for best matches 2, which led to a ~20 MB decrease of Anaconda’s RSS. Another potential trick would be dropping the languages, locales, keyboard layouts and timezones objects from memory once the installation begins and the user can make no further changes, or in text mode where there is no interactive language/locale and keyboard configuration at all.

It’s also worth running the top utility from a console when the installation environment is fully established and seeing which processes consume the most RAM. Obviously, number one is anaconda, but for example recently dhclient started taking more than 10 MB of RAM, which is quite a lot for a process that basically just sits there and does nothing complicated. A bug report has been filed on dhclient‘s RAM consumption and hopefully it will be fixed soon. Another useful command is du -sh /tmp, because the tmpfs file system mounted at /tmp is used as storage for all data that are stored in the form of files (logs, configuration files, etc.) and also e.g. as a cache for Yum.


A traditional solution for an issue with not enough RAM, where super-high performance is not required, is using (more) swap. However, that’s not applicable to the OS installer, which cannot touch disks before the user selects them for the OS installation and clearly confirms that selection. A swap device can be detected by the installer, but it could contain a hibernated system or any other data that shouldn’t be overwritten. And even if the swap was freshly formatted and contained nothing, using it would cause many troubles when doing partitioning, because it would be equivalent to a mounted file system that cannot be unmounted (where to put data from it if RAM is full?).

The only exception is a swap device created as part of the new OS. That one is freshly created or clearly marked to be used by the new OS, so it can be used. But that happens only after its partition or logical volume is created and formatted as swap. It’s useful anyway, because the package installation that happens afterwards requires quite a lot of RAM and space in /tmp used as a cache, so Anaconda activates such swap devices ASAP. Activating them represents a critical point for the minimum RAM needed for the OS installation — once big swaps located on HDDs/SSDs are activated, there is enough space for basically whatever could be needed by the installer and its components. The critical point is thus right after users confirm they really want to install the OS with the chosen configuration and storage layout, when partitions are created and formatted.

Anaconda zRAM swap

Recently (only a week ago, actually), one more nice trick has been applied to the Anaconda sources in the area of memory requirements — adding a zRAM swap. zRAM is a block device similar to tmpfs, introduced in kernel 3.14. But compared with tmpfs it has two very neat features — it is a block device and its content is compressed (with the LZO 3 or LZ4 algorithm). It can thus be used in combination with an arbitrary file system as a replacement for tmpfs, potentially making the capacity of such a RAM-located file system bigger thanks to the compression.

But a very clever and neat usage is using it as a swap device as big as the amount of available RAM. Take a few seconds to think about it. See? Compressed RAM! When the kernel runs out of physical memory, which happens basically immediately when something starts allocating the memory reserved for the zRAM devices, the kernel starts using swap devices. By giving zRAM swap devices high priority among swaps, it is ensured that the kernel uses zRAM swaps first even if there are some other swap devices. Since zRAM blocks are located in RAM, the only difference is that memory pages are compressed. That of course requires some CPU cycles, but the LZO compression is really fast, and by dividing the amount of available RAM by the number of available CPUs and creating one zRAM device for each CPU, the compression can easily be made parallel. The result is that, for example, on a machine we use for VMs running Fedora and RHEL installations over and over again, the average compression ratio is between 50 and 60 % without any noticeable CPU performance drop! Nice, isn’t it?
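The per-CPU sizing described above — one zRAM device per CPU, together covering all available RAM — boils down to simple arithmetic. A sketch of that arithmetic (an illustration only, not Anaconda’s actual code; the function name is made up):

```python
import os

def zram_swap_sizes(total_ram_kib, num_cpus=None):
    """Split the available RAM into one zRAM swap device per CPU.

    Returns a list of device sizes in KiB whose sum equals total_ram_kib,
    so compression work can be spread across all CPUs in parallel.
    """
    if num_cpus is None:
        num_cpus = os.cpu_count() or 1
    base = total_ram_kib // num_cpus
    sizes = [base] * num_cpus
    # Hand any division remainder to the first device so nothing is lost.
    sizes[0] += total_ram_kib - base * num_cpus
    return sizes
```

Each size would then be written to the corresponding zram device before running mkswap and swapon on it with a high priority.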

The compression ratio in the Anaconda installer environment has not been investigated yet 4. But what has been tested is that, when having the squashfs.img on mountable media, 320 MB of RAM is enough to install Fedora Rawhide in a text mode installation. 400 MB of RAM is enough for a graphical installation if the inst.dnf=1 boot option is used to tell Anaconda it should use DNF instead of Yum (more about that in the next section). The zRAM swap is activated when less than 1 GB of RAM is available, because with more RAM there is no need for it. Everything would work even without zRAM swap with 1 GB of RAM, but things are much faster with it, because instead of heavy use of swap on HDD/SSD, zRAM is used primarily.

DNF vs. Yum

With Yum being written completely in Python and DNF using many C libraries for the underlying operations, DNF is expected to require less RAM. The current situation is that it depends on one’s answer to the following question: "What does it mean to require less RAM?" Without the inst.dnf=1 boot option, the amount of memory required by the installation environment when the actual installation begins (the critical point) is ~232 MB, whereas with the inst.dnf=1 option it is 190 MB. Seems like 40 MB less, right? But… (there always is some) shortly before this point there is a peak of memory consumption when DNF does metadata processing and dependency solving, which is over 320 MB and causes Anaconda to be killed by oom_kill with less than 350 MB of RAM (compressed by zRAM). But that reveals the fact that although DNF takes 120 MB more peak RAM than Yum, with zRAM the difference is only 30 MB (320 MB enough with Yum, 350 MB enough with DNF) and thus the extra data can be heavily compressed.

With a graphical installation, 400 MB RAM is enough with DNF but too little with Yum. That probably looks weird when the paragraph above is considered. But it has a simple explanation — the DNF’s peak happens before the peak of other components taking place in the graphical installation and 400 MB is enough for all of them whereas Yum’s peak meets one of the other peaks and grows over 400 MB and requires at least 410 MB.


Although the memory requirements of a full-featured OS installer have to be quite high, this post tries to show that there are many things that can improve the situation quite a lot. It’s not easy and it needs clever ideas and a wide overview of the system and available technologies, but hey, this is open source; anybody having a great idea or knocking their heads on my lack of knowledge: patches and suggestions welcome! There are probably many other things that can be improved, but it usually takes quite a lot of testing and experimenting, which is time consuming, and to be honest, these things don’t have the priority of show-stopper bugs like release blockers or crashes/freezes on traditional systems with standard HW equipment. Maybe with various ARM machines, cloud images and other restricted systems the focus on lowering RAM consumption will increase. Hard to tell, RAM is quite cheap these days. But 400 MB of RAM for a full-featured OS installation is quite nice, isn’t it? Do you know about any other OS installer that requires less RAM and provides an at least somewhat comparable feature set?

  1. the image contains many packages required for the installation, but many unneeded files are pruned (documentation in general, many binaries, libraries, images, etc.) and squashfs provides good compression

  2. more about that can be found in one of my previous posts

  3. which is the default

  4. I’ll add a comment or blog post with such numbers once I have time to determine them