
Introducing libbytesize

Problem area

Many projects have to deal with representing sizes of storage or memory, generally sizes in bytes. What may seem to be a trivial thing turns into hundreds of lines of code if the following things are to be covered properly:

  • using binary (GiB,…) and decimal (GB) units correctly
  • handling sizes bigger than MAXUINT64 (which is 16 EiB – 1)
  • parsing users’ input correctly with:
    • binary and decimal units
    • numeric values in various formats (traditional, scientific)
  • handling localization and internationalization correctly
    • different radix characters used in different languages
    • units being translated and typed in by users in their native format (even with non-Latin scripts)
  • handling negative sizes
    • it sometimes makes sense to work with these, for example when some storage space is missing somewhere

Of course, not all projects working with sizes in bytes have hundreds of lines for dealing with the above points, but the result is then a bad user experience. In some cases, valid localized inputs are not accepted and parsed correctly, or no matter what the current locale and language configuration is, the users always get the English format and units. One of the biggest problems I see in many projects is that binary and decimal units are not used and differentiated correctly. If something shows the value 10 G, does it mean 10 GiB and thus 10240 MiB, or is it 10 GB and thus 10000 MB? Sometimes one can find this piece of information in the documentation (e.g. man pages), but often one just has to guess and try. Fortunately it is only rarely that the documented behaviour itself is a real surprise, as in the case of the lvm utilities where g means GiB and G means GB. We should generally be doing a much better job of handling sizes correctly and consistently in all projects that have to deal with them. However, it’s obvious that having a few hundred lines of code for this in every such project is nonsense.
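
Just to put a number on that ambiguity, a quick check in plain Python shows how far apart the two readings of 10 G really are:

# The two possible readings of "10 G":
gib = 10 * 1024**3   # 10 GiB = 10240 MiB = 10737418240 bytes
gb = 10 * 1000**3    # 10 GB  = 10000 MB  = 10000000000 bytes
print(gib - gb)      # 737418240 bytes, i.e. roughly 700 MiB of difference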

An existing solution

One of the projects that I can gladly call a good example of how to deal with sizes in bytes is the Blivet Python package used mainly by the Anaconda OS (Fedora, RHEL,…) installer. It addresses all the concerns mentioned above in a proper and well-tested way in its class called simply Size. As the title of this post reveals, I’m trying to introduce a new library here, so the obvious question is: Why invent and write something new when a good and well-tested solution already exists? The answer lies in the description of Blivet: it is written in Python, which makes its implementation of the Size class hardly usable from any other language or environment.

One step further

The obvious way to move towards a widely reusable solution was to rewrite Blivet’s Size class in C so that it could be used both from this low-level language and from the many other languages that can easily make use of C libraries. However, again, what may seem to be an easy thing to do is not at all that simple. Blivet’s Python implementation is based on Python’s Decimal type, a numeric type supporting unlimited precision and arbitrarily big numbers. Also, dealing with strings and their processing is way simpler in Python than in C.

Nevertheless, C also has some nice libraries for working with big and highly precise numbers, namely the GMP and MPFR libraries that were created as part of the GNU project and which are used, for example, by many tools and libraries doing some serious maths. So it soon became clear that writing a C implementation of the Size class shouldn’t be an overly complicated task. And it turned out to be the case.

Here it is

The result is the libbytesize library, which uses GMP and MPFR together with GObject Introspection to provide a nice object-oriented API facilitating work with sizes in bytes. It properly takes care of all the potential issues mentioned in the beginning of this post and is widely usable due to the broad support of GObject Introspection in many high-level languages. The library provides a single class called (warning: here comes the surprise) Size which right now is basically a very thin wrapper around the mpz_t type provided by the GMP library for arbitrarily big integer numbers, so it actually stores byte sizes as numbers of bytes. That is technically a precision limitation, but since no storage provides or works with fractions of bytes, it’s no real limitation at all.

There are (at this point) four constructors 1:

  • bs_size_new() which creates a new instance initialized to 0 B,
  • bs_size_new_from_bytes() which creates a new instance initialized to a given number of bytes,
  • bs_size_new_from_str() which creates a new instance initialized to the number of bytes the given string (e.g. "10 GiB") represents,
  • bs_size_new_from_size() which is a copy constructor.

Then there are some query functions, the most important of which are the following two:

  • bs_size_convert_to() which can be used to convert a given size to some particular unit and
  • bs_size_human_readable() which gives a human-readable representation of a given size – i.e. with a unit chosen so that the resulting number is neither too big nor too small (see the sketch below)
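
To make the idea behind bs_size_human_readable() concrete, here is a plain-Python sketch of the unit-selection logic (not the library’s code, just an illustration of picking a unit so that the value stays in a readable range):

# Illustration only -- libbytesize does this with arbitrary precision, this
# sketch uses plain floats just to show the unit-selection idea.
UNITS = ["B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"]

def human_readable(num_bytes):
    value = float(num_bytes)
    for unit in UNITS:
        if abs(value) < 1024 or unit == UNITS[-1]:
            return "%g %s" % (value, unit)
        value /= 1024

print(human_readable(10 * 1024**3))   # prints '10 GiB'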

Last but not least there are many methods for doing arithmetic and logical operations with sizes in bytes. It’s probably wise to mention here that not all arithmetic operations implemented for the mpz_t type are implemented for sizes. Some of them just don’t make sense – multiplication of a size by a size (what is GiB**2?), exponentiation, (square) roots and others. However, there are some extra ones that don’t make much sense for generic numbers but are quite useful when working with sizes, namely bs_size_round_to_nearest(), which rounds a given size (up or down) to the nearest multiple of another size – for example when you need to know how much space an LVM LV of a requested size will actually take in a VG with a particular extent size.
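
And as a plain-Python illustration of that last operation (this is not the library’s API, only the arithmetic behind rounding a requested size up to an extent boundary):

# Round `size` up to the nearest multiple of `multiple` (both in bytes).
def round_up_to_multiple(size, multiple):
    return ((size + multiple - 1) // multiple) * multiple

MiB = 1024**2
requested = 10 * MiB + 1          # an LV request that is not extent-aligned
extent = 4 * MiB                  # the VG's extent size
print(round_up_to_multiple(requested, extent) // MiB)   # -> 12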

Since GObject Introspection allows for overrides and the new library is expected to be used by Blivet instead of its own Python-only implementation of the Size class, there already are Python overrides making work with libbytesize’s Size class really simple. Here is an example Python interpreter session demonstrating how simple it is to use:

>>> from gi.repository.ByteSize import Size
>>> s = Size("10 GiB")
>>> str(s)
'10 GiB'
>>> repr(s)
'Size (10 GiB)'
>>> s2 = Size(10 * 1024**3)
>>> s2
Size (10 GiB)
>>> s + s2
Size (20 GiB)
>>> s - s2
Size (0 B)
>>> s3 = Size(s2)
>>> sum([s, s2, s3])
Size (30 GiB)
>>> -s2
Size (-10 GiB)
>>> abs(-s2)
Size (10 GiB)
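
And because the value is stored as an arbitrarily big integer, sizes above MAXUINT64 need no special treatment; a short continuation of the example above (the printed form is the library’s human-readable one, so something like 32 EiB is expected):

from gi.repository.ByteSize import Size

huge = Size("16 EiB") + Size("16 EiB")   # 16 EiB is already MAXUINT64 + 1
print(huge)                              # expected output: 32 EiB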

And here come the dogs

I mean docs. The project is hosted on GitHub together with its documentation. The current release is 0.2, where the zero at the beginning means that it is not a stable release yet. The API is unlikely to change in any significant way before the (stable) release 1.0, but since the library is not being used by any big project right now, we are leaving ourselves some "manipulation space" for potential changes. So if you find the library’s API wrong, feel free to let us know and we might change it according to your wishes! If you want to get a quick but still quite comprehensive overview of the library’s API, have a look at the header file it provides.

The last thing I’d like to mention here is that the library is packaged for the Fedora GNU/Linux distribution so if you happen to be using this distribution, you can easily start playing with the library by typing this into your shell:

$ sudo dnf install libbytesize python-libbytesize ipython
$ ipython

Using ipython also gives you TAB completion. See the interpreter session example above to get a better idea of what to type in. Have fun and don’t forget to share your ideas in the comments!


  1. bs is the "namespace" prefix and size is the class prefix

libblockdev reaches the 1.0 milestone!

A year ago, I started working on a new storage library for low-level operations with various types of block devices — libblockdev. Today, I’m happy to announce that the library reached the 1.0 milestone which means that it covers all the functionality that has been stated in the initial goals and it’s going to keep the API stable.

A little bit of a background

Are you asking the question: "Why yet another piece of code implementing what’s already been implemented in many other places?" That’s, of course, a very good and probably crucial question. The answer is that I, and the people who were at the birth of the idea, think that this is the first time such a thing has been implemented in a way that makes it usable for a wide range of tools, applications, libraries, etc. Let’s start with the requirements every widely usable implementation should meet:

  1. it should be written in C so that it is usable for code written in low-level languages
  2. it should be a library, as DBus is not usable together with chroot() and things like that, and running subprocesses is suboptimal (slow, eating a lot of random data entropy, need to parse the output, etc.)
  3. it should provide bindings for as many languages as possible, in particular the widely used high-level languages like Python, Ruby, etc.
  4. it shouldn’t be a single monolithic piece required by every piece of user code no matter how much of the library it actually needs
  5. it should have a stable API
  6. it should support all major storage technologies (LVM, MD RAID, BTRFS, LUKS,…)

If we take the candidates potentially covering the low-level operations with block devices — Blivet, ssm and udisks2 (now being replaced by storaged) — we can easily come to the conclusion that none of them meets the requirements above. Blivet 1 covers the functionality in a great way, but it’s written in Python and thus hardly usable from code written in other languages. ssm 2 is also written in Python; moreover, it’s an application and it doesn’t cover all the technologies (it doesn’t try to). udisks2 3 and now storaged 4 provide a DBus API and don’t provide, for example, functions related to BTRFS (and, in the case of udisks2, not even LVM).

The libblockdev library is:
  • written in C,
  • using GLib and providing bindings for all languages supporting GObject introspection (Python, Perl, Ruby, Haskell*,…),
  • modular — using separate plugins for all technologies (LVM, Btrfs,…),
  • covering all technologies Blivet supports 5 plus some more,

by which it fulfills all the requirements mentioned above. It’s only a wish, but a strong one, that every new piece of code written for low-level manipulation of block devices 6 should be written as part of the libblockdev library, tested and reused in as many places as possible, instead of being written again and again in many, many places, each time with new, old, weird, surprising and custom bugs.

Architecture

As mentioned above, the library loads plugins that provide the functionality, each related to one storage technology. Right now, there are lvm, btrfs, swap, loop, crypto, mpath, dm, mdraid, kbd and s390 plugins. 7 The library itself basically only provides a thin wrapper around its plugins so that it can all be easily used via GObject introspection and so that it is easy to set up logging (and probably more in the future). However, each of the plugins can be used as a standalone shared library in case that’s desired. The plugins are loaded when the bd_init() function is called 8 and changes (loading more/fewer plugins) can later be done with the bd_reinit() function. It is also possible to reload a plugin in a long-running process if it gets updated, for example. If a function provided by a plugin that was not loaded is called, the call fails with an error but doesn’t crash, and thus it is up to the caller code to deal with such a situation.

The libblockdev library is stateless from the perspective of block device manipulations. I.e., it has some internal state (like tracking whether the library has been initialized or not), but it doesn’t hold any state information about the block devices. So if you e.g. use it to create some LVM volume groups and then try to create a logical volume in a different, non-existing VG, it just fails at the point where LVM realizes that such a volume group doesn’t exist. That makes the library a lot simpler and "almost thread-safe", with the word "almost" being there just because some of the technologies don’t provide any API other than running various utilities as subprocesses, which cannot generally be considered thread-safe. 9

Scope (provided functionality)

The first goal for the library was to replace Blivet’s devicelibs subpackage, which provided all the low-level functions for manipulating block devices. That fact also defined the original scope of the library. Later, we realized that we would like to add LVM cache and bcache support to Blivet, and the scope of the library got extended to the current state. The supported technologies are defined by the list of plugins the library uses (see above) and the full list of functions can be seen either in the project’s features.rst file or by browsing the documentation.

Tests and reliability

Right now, there are 135 tests run manually and by a Jenkins instance hooked up to the project’s Git repository. The tests use loop devices to test the vast majority of the functions the library provides 10. They must be run as root, but that’s unavoidable if they are to really test the functionality and not just some mocked-up stubs that we would have to believe behave like a real system.

The library is used by Fedora 22’s installation process, as F22’s Blivet was ported to use libblockdev before the Beta release. There have been a few bugs reported against the library (the majority of them related to FW RAID setups), with all of them fixed and covered by tests for those particular use cases (based on data gathered from the logs in the bug reports).

Future plans

Although the initial goals are all covered by version 1.0 of the library, there are already many suggestions for additional functionality and also extensions of some of the functions that are already implemented (extra arguments, etc.). The most important goal for the near future is to fix reported bugs in the current version and promote the library as much as possible so that the wish mentioned above gets fulfilled. The plan for the slightly more distant future (let’s say 6-8 months) is to work on additional functionality targeting version 2.0, which will break the API for the purpose of extending and improving it.

To be more concrete, one of the planned new plugins is, for example, the fs plugin that will provide various functions related to file systems. One such function will definitely be the mkfs() function, which will take a list (or dictionary) of extra options passed to the particular mkfs utility on top of the options constructed by the implementation of the function. The reason for that is the fact that some file systems support many configuration options during their creation and it would be cumbersome to cover them all with function parameters. In relation to that, at least some (if not all) of the LVM functions will also get such an extra argument so that they are useful even in very specific use cases that require fine-tuning of parameters not covered by the functions’ arguments.

Another potential feature is to add some clever and nice way of progress reporting to functions that are expected to take a lot of time to finish – like lvresize(), pvmove(), resizefs() and others. It’s not always possible to track the progress because even the underlying tools/libraries don’t report it, but where possible, libblockdev should be able to pass that information to its callers, ideally in some unified way.

So a lot of work behind, much more ahead. It’s a challenging world, but I like taking challenges.


  1. a python package used by the Anaconda installer as a storage backend

  2. System Storage Manager

  3. daemon used by e.g. gnome-disks and the whole GNOME "storage stack"

  4. a fork of udisks2 adding an LVM API and being actively developed

  5. the first goal for the library was to replace Blivet’s devicelibs subpackage

  6. at higher than the most low-level layers, of course

  7. I hope that, with the exception of kbd, which stands for Kernel Block Devices, the related technologies are clear, but don’t hesitate to ask in the comments if not.

  8. or e.g. BlockDev.init(plugins) in Python over the GObject introspection

  9. use Google and "fork shared library" for further reading

  10. 119 out of 132 to be more precise

bcache and/vs. LVM cache

What’s going on here?

One of the bottlenecks of today’s computers is storage. While CPUs, buses and
other components of computers reach really nice throughput values going up to
several GiB/s, disks are really slow compared to them. HDDs give a few hundred
MiB/s at most when performing sequential reads/writes and much less when doing
random I/O operations. SSDs are much faster than HDDs, especially at random
I/O operations, but they are much more expensive and thus not so great for big
amounts of data. As usual in today’s world, the key word for a win-win solution
is "hybrid": in this case a combination of an HDD and an SSD (or just their
technologies in a single piece of hardware), using a lot of HDD-based space
together with a small SSD-based space serving as a cache that provides fast
access to the (typically) most frequently used data. There are many hardware
solutions that provide such hybrid disks, but they have the same drawbacks as
hardware RAIDs — they are not at all flexible and really good only for a
particular use case. And just as software RAID is the answer to hardware RAID’s
lack of flexibility and narrow range of use cases, software comes into this
game (to win it, maybe?) with multiple approaches to hybrid disks. The two most
widely used and probably also most advanced are bcache and LVM cache (or
dm-cache, as explained below). So what are these two and how do they differ?
Let’s focus on each separately and then compare them a bit.

bcache

What is it?

bcache or Block (level) cache is a software cache technology being developed
and maintained as part of the Linux kernel codebase which, as its name
suggests, provides cache functionality on top of arbitrary (pairs of) block
devices. As with any other cache technology, bcache needs some backing space
(holding the data that should be cached), typically on a slow device, and some
cache space, typically on a fast device. And since bcache is a block level
cache, both the backing space and the cache space can be arbitrary block
devices — i.e. disks, partitions, iSCSI LUNs, MD RAID devices, etc.

Deployment options

The simplest solution is to use an HDD (let’s say /dev/sda) together with an
SSD (let’s say /dev/sdb), create a bcache on top of them (as described below)
and then partition the bcache device for the system. A slightly more
complicated solution is to create partitions on the HDD and SSD and create one
or more bcache devices on top of the partitions that are supposed to be
cached. Why should one even think about this more complicated solution?
Because it provides much better flexibility. While creating a bcache on top of
the whole HDD and SSD devices gives us basically the same thing as a hybrid
disk (except that we need two SATA ports and can choose from more HDD and SSD
sizes), creating bcache(s) on top of partitions allows us e.g. to have some
data (e.g. system data) directly on the SSD and other data in a bcache
(HDD+SSD), or even to have multiple bcache devices with different backing
space and cache space sizes or even different caching policies (see below for
details).

Setting up

So, let’s say we have an HDD (/dev/sda) and an SSD (/dev/sdb) and we
have some partitions created on them — let’s say /dev/sda1 on the whole HDD
(to be used for /mnt/data) and /dev/sdb1 used for system (/) plus
/dev/sdb2 (dedicated for cache) on SSD.

First of all we need to install the tools that will allow us to create,
configure and monitor the bcache. These are typically a part of a package
called bcache-tools or similar. So on my Fedora 21 system, I need to run the
following command to get it (# means it should be run as root):

# dnf install bcache-tools

Another tool we will need is the wipefs tool which is part of the
util-linux package that should already be installed in the system.

With all the necessary tools available, we can now proceed to the bcache
creation. But before we start creating something new, we first need to wipe
all the old weird things from the block devices (in our case partitions) we
want to use (WARNING: this removes all file system and other signatures from
the /dev/sda1 and /dev/sdb2 partitions):

# wipefs -a /dev/sda1
# wipefs -a /dev/sdb2

Cleaned up. Now, as is usual with basically all storage technologies, we need to
write some metadata to the devices we want to use for our bcache so that the
code providing the cache technology can identify such devices as bcache devices
and so that it can store some configuration, status, etc. data there. Let’s do
it then:

# make-bcache -B /dev/sda1

This command writes the bcache metadata for the backing device (space) to the
partition /dev/sda1 (which is on the HDD). Believe it or not, this is all we
needed to create a bcache device. If udev is running and the appropriate udev
rules are effective (if not, we have to do it manually [1]), we should now be
able to see the /dev/bcache0 device node and the /dev/bcache/ directory (try
listing it to see what’s inside) in our file system hierarchy, which we could
start using right away. Really? Is that everything that needs to be done?
Well, it’s not that easy. Remember that every cache technology needs a backing
space and a cache space, and with the command above we have only defined the
backing device (space). So we of course now have to define the cache device
(space), again by writing some metadata to it:

# make-bcache -C /dev/sdb2

The result is that we now have the metadata written to both the backing
device (space) and the cache device (space). However, these devices don’t know
about each other and the caching code (i.e. the kernel in case of bcache) has
no idea about our intention to use /dev/sdb2 as a cache device for /dev/sda1.
Remember that the first make-bcache run created the /dev/bcache0 device that
was usable from the first moment? Well, it was usable as a bcache device, but
without any caching device, which is not really useful. The last missing step
is to attach the cache device to our bcache device bcache0 by writing the Set
UUID from the make-bcache -C run to the appropriate file:

# echo C_Set_UUID_VALUE > /sys/block/bcache0/bcache/attach

From now on we can enjoy the speed, noise and other improvements provided by the
use of our cache. The /dev/bcache0 device is just a common block device and
the easiest thing to do with it is to run e.g. mkfs.xfs on it, mount the
file system to e.g. /mnt/data and copy some data to it. If we later want to
detach the cache device from the bcache device, we just use the detach file
instead of the attach file in the same directory under /sys.

As I’ve mentioned in the beginning of this post, SW-based cache solutions
provide more flexibility than HW solutions. One area of such flexibility is
configuration, because it is quite easy to make a SW solution configurable and
extensible compared to a HW solution. The configuration of our bcache can be
controlled by reading and writing files under the /sys file system. The most
useful and easiest example is changing the cache mode — the default is
writethrough, which is the safest one, but which on the other hand doesn’t
save the backing device (HDD) from many random write operations. Another
typical mode is writeback, which keeps the data in the cache (SSD) and once in
a while writes it back to the backing device. To change the mode we simply run
the following command:

# echo writeback > /sys/block/bcache0/bcache/cache_mode

However, this change is only temporary and we have to do the same after every
boot of the system if we want to always use the writeback mode (of course we
can do this in a udev rule, systemd service, init script or whatever we
prefer instead of doing it manually after each boot).
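
For illustration, here is a minimal Python sketch of what such a boot-time
hook would do; it just writes the same sysfs file as the echo command above
(assuming /dev/bcache0 is already registered at that point):

from pathlib import Path

# Re-apply the desired cache mode on every boot; the sysfs setting itself
# does not survive a reboot (see above).
Path("/sys/block/bcache0/bcache/cache_mode").write_text("writeback\n")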

[1] by running # echo /dev/sda1 > /sys/fs/bcache/register

Monitoring and maintenance

Even though it is usually possible to see (and even hear [2]) the difference
once bcache is created and used instead of just the HDD, people are curious
and always want to know something more. A typical question is: "How well is
the new solution performing?" In the case of a cache, the clearest performance
metric is the ratio of read/write hits to misses. Of course, the more hits
compared to misses the better. To find out more about the current state,
status and stats of a bcache, another tool from the bcache-tools package can
be used:

# bcache-status -s

In the output we should see quite a lot of interesting information and we can
for example also check that the desired cache mode is being used. There are
other configuration options and other stats that might be important for many
users, but these are left to the kind reader for further exploration.
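
If you prefer raw numbers over the bcache-status output, the hit/miss counters
can also be read directly from sysfs. A minimal Python sketch, assuming the
stats_total/cache_hits and stats_total/cache_misses files described in the
kernel’s bcache documentation (those paths are not covered by this post, so
treat them as an assumption):

from pathlib import Path

# Per-bcache running totals exposed by the kernel (assumed paths, see above).
stats = Path("/sys/block/bcache0/bcache/stats_total")
hits = int((stats / "cache_hits").read_text())
misses = int((stats / "cache_misses").read_text())
print("cache hit ratio: %.2f %%" % (100.0 * hits / ((hits + misses) or 1)))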

[2] if the writeback mode is used, many writes to the backing device are
spared and the rest are serialized as much as possible, which makes the HDD
quite a lot less noisy because the R/W head doesn’t move around randomly

LVM cache (dm-cache)

Why?

We have seen in the previous part of this post that bcache is quite a
powerful and flexible solution for using an HDD and an SSD in combination,
giving us great performance (of the SSD) and big capacity (of the HDD). So one
may ask why we even bother describing another solution. What could possibly be
better about LVM cache (dm-cache) compared to bcache?

A little bit about terminology

First of all, let’s start with a clarification of why I have up until now
always referred to this technology as "LVM cache (dm-cache)". Some people
know, and some may not, that LVM (which stands for Logical Volume Management)
is a technology for user space abstract volume management using the Device
Mapper functionality (in both user space and kernel). As a result, everything
that can be done with LVM can be done by using the Device Mapper directly
(even though it is typically incomparably more complex) and anything that LVM
does needs to have the underlying (or low-level, if you prefer) support in the
Device Mapper. The same applies to the caching technology, which is provided
by the cache Device Mapper target and made "consumable" by the LVM cache
abstraction.

Okay, okay, but why?

Now, let’s get back to the big question from the first paragraph of this
section. The answer is clear and simple to people who like LVM — the LVM cache
is to bcache what LVM is to plain partitions. For people who don’t like, don’t
use or totally don’t get LVM, an example of quite a big difference might be
the best argument. The first step we did in order to set our bcache up was
wiping all signatures from the block devices we wanted to use for both the
backing space and the cache space. That means that any file systems that might
have existed on those block devices would be removed, leaving the data
unreadable and practically lost. With LVM cache it is possible to take an
existing LV (Logical Volume) with an existing (even mounted) file system and
convert it into a cached LV without any need to move the data to some
temporary place and even without any downtime [3]. And the same applies if we,
for example, later decide that we want to stripe the cache pool across two
SSDs (RAID 0) to get more cache space and really nice performance, or on the
other hand mirror the backing device to get better reliability (or both, of
course). So we may easily start with some basic setup and improve it later as
we have more HW available or different requirements. The LVM cache also
provides better control and even more flexibility by allowing the user to
manually define the data and metadata parts of the cache space with various
different parameters (e.g. a mirrored metadata part on more reliable devices
with a striped data part for more space and better performance).

[3] a typical approach to convert a block device into a "bcached" block
device is to freeze the data on it, move/copy it somewhere else, set the
bcache up and move the data back

Setting up

Let’s assume we have the same HW as in the case of bcache — an HDD and an SSD
— but this time let’s also assume that we already have LVM set up on the HDD
(or even multiple HDDs, that makes no difference for the commands we are going
to use) and that the SSD provides 20 GiB of space. Setting up LVM on top of
HDD(s) would be a nice topic for another blog post, so let me know in the
comments if you are interested in such a topic. Now we want to demonstrate one
of the benefits of the LVM cache over bcache, so let’s assume all the basic
LVM setup work is done and we have an LV (Logical Volume) called DataLV with
some file system and data on it, using the HDD for its physical extents [4]
and being part of the data VG (Volume Group) (the backing space is called
Origin in LVM’s terminology). We will basically follow the steps described in
the lvmcache(7) man page (another benefit over bcache from my point of view).

As the first step, we need to add the SSD (/dev/sdb) into the same volume
group as our LV holding the data (DataLV). To do that, we need to tell LVM
that the /dev/sdb block device should become an LVM member device (we could
use a partition on /dev/sdb if we wanted to combine partitions and LVM on our
disks):

# pvcreate /dev/sdb

If that fails because of some old metadata (disk label, file system
signature…) being left on the disk, we could either use the wipefs tool (as in
the case of bcache) or add the --force option to the pvcreate command.

Once LVM marks the /dev/sdb device as an LVM member device [5], we can add it
to the data VG:

# vgextend data /dev/sdb

The data VG now sees the SSD as free space for allocation if we create more
LVs in it or grow some existing ones. But we want to use it as a cache space,
right? Well, LVM only knows PVs (Physical Volumes), VGs and LVs. However, LVs
can be of various types (linear, striped, mirror, RAID, thin, thin pool,…)
which can be changed online. So let’s start with the creation of a good old LV
with the size we want for our cache space and with its PEs (physical extents)
being allocated on the SSD:

# lvcreate -n DataLVcache -L19.9G data /dev/sdb

I believe an attentive reader now asks why only 19.9 GiB when we have 20 GiB
of space on the SSD. The reason is that we are going the “hard” (more
controlled) way and we need some space for a separate metadata volume, which
we can now create:

# lvcreate -n DataLVcacheMeta -L20M data /dev/sdb

with the size of 20 MiB, because the LVM documentation (the man page) says it
should be 1000 times smaller than the cache data LV, with a minimum size of
8 MiB. If we wanted to have the DataLVcache and/or DataLVcacheMeta more
special (like mirrored), we could have created them as such right away, or we
can convert them later if we want to. But for now, let’s just follow our
simple (and probably most common) case. The next step we need to do is to
"engage" the data cache LV and the metadata cache LV in a single LV called a
cache pool. A cache pool is an LV that provides the cache space for the
backing space, with the metadata being written and kept in it. And as such, it
is created from the data cache LV, or more precisely converted:

# lvconvert --type cache-pool --cachemode writethrough --poolmetadata data/DataLVcacheMeta data/DataLVcache

As you may see, we specify the cache mode on cache pool creation. The bad
thing about it is that it cannot be changed later, but the good thing is that
it is persistent. And honestly, other than when playing with various
technologies, how often does one need to change the cache mode? If it’s really
needed, the cache pool can simply be created again with a different cache
mode.

It’s been a long way here, I know, but we are almost at the end now, I
promise. The only missing step is to finally make our DataLV cached. And as
usual with LVM, it is a conversion:

# lvconvert --type cache --cachepool data/DataLVcache data/DataLV

And with that, we are done. We can now continue using the DataLV logical
volume, but from now on as a cached volume using the cache space on the
SSD.

Unfortunately, there seems to be no nice tool shipped with LVM that would
give us all the cool stats just like bcache-status does for bcache. The only
such tool I’m aware of is the lvcache tool written by Lars Kellogg-Stedman,
available from this git repository: https://github.com/larsks/lvcache.
Hopefully this will change once the LVM cache starts to be more widely
deployed and used.

[4] LVM’s units of physical space allocation
[5] try running wipefs (without the "-a" option!) on it
[6] with lvcreate --type cache-pool -L20G -n DataLVcache data /dev/sdb

Summary

I know it probably seemed really complicated and much harder to set up the
LVM cache than to set up bcache, but if we wanted to, we could have dropped
the separate creation of the data and metadata cache LVs and created the cache
pool right away in a single step. [6] I just wanted to demonstrate the extra
control and possibilities the LVM cache provides. Without that, the LVM cache
setup would really be very similar to the bcache setup, but still with the big
advantage of doing everything online without any need to move data somewhere
else and back.

I don’t think that either of the two SW cache technologies presented in this
blog post is better than the other. Just as I mentioned at the very beginning
of the LVM cache description, LVM cache is to bcache what LVM is to
partitions. So if somebody has some advanced knowledge and likes having things
configured in the exact complex way that they think is best for their use
case, or if somebody needs to deploy a cache online without any downtime, then
LVM cache is probably the better choice. On the other hand, if somebody just
wants to make use of their SSD by setting up a SW cache on a fresh pair of SSD
and HDD and doesn’t want to bother with all the LVM stuff and commands, then
bcache is probably the better choice.

And as usual, having two independent and separate solutions for a single
problem leads to many new and great ideas that are in the end shared, because
what gets implemented in one of them usually sooner or later makes it to the
other too, typically even improved somehow. Let’s just hope that this will
also apply to bcache and LVM cache and that both technologies get deployed
widely enough to be massively supported, maintained and further developed.

Updates

  • As Barry Shilliday pointed out in the comments, the LVM cache setup can
    be done even more easily, with the cache pool creation and the conversion
    of the LV into a cached LV in a single step:
    # lvcreate --type cache -L 19.9G -n DataLV_cachepool data/DataLV /dev/sdb
  • I was informed that the lvs command now supports options to list some
    cache stats. The # lvs -o help 2>&1 | grep cache command lists all the cache
    stats/settings that can be printed out by commands like:
    # lvs -o cache_read_hits,cache_read_misses data/DataLV.
  • According to the updated lvmcache(7) man page:
    The cache mode can be changed on an existing LV with the command:
    
           lvconvert --cachemode writethrough|writeback VG/CacheLV
    



Introducing blivet-gui — a new GUI storage management tool

Introduction

Let’s be honest and start directly with the inconvenient truth 1 — storage management is hard. There are many technologies using many different ways to make our data available faster, more reliably, with high availability, over long distances, etc. What’s even harder is that all those technologies can be combined together to get a combination of their advantages. Do you think having tens of iSCSI LUNs combined in an MD RAID exporting RAID devices used as LVM PVs, together with two SSD drives combined in another MD RAID providing fast dm-cached LVs, with an XFS file system on top of all this exported as a GlusterFS, sounds crazy, overly complicated and totally unusable? 2 Five words: “Welcome to the enterprise storage!” Where something like that is a perfectly usable solution providing all the features mentioned in the beginning of this post.

The Anaconda installer and blivet

For a storage expert, or at least a senior system administrator, it is quite easy to run a few commands in a shell to set up the stack described above. But we are in the age of nice, shiny graphical user interfaces that are expected, and thus required, to support many, many more or less crazy combinations and configuration options. And when does storage configuration usually happen? When the system is installed. That’s the reason why the most comprehensive and feature-complete storage management (or more precisely, storage configuration) UI was over the years implemented in the Anaconda installer, the RHEL and Fedora 3 OS installer. Just a small demonstration — the only two technologies from the example described above currently unsupported in the Anaconda installer are GlusterFS and dm-cache 4, with the list of supported technologies being at least three times as long as the list of abbreviations from that example.

Having probably the only code in the world supporting so many technologies, it started to become evident that it would be useful to put such code into a library usable by other projects. And thus the code, majorly rewritten by David Lehman in 2009, was in early 2013 split into a library (or more precisely a Python package) which was, due to its nature, given the name blivet: “[blivet] is an undecipherable figure, an optical illusion and an impossible object” 5 – because it seems impossible to provide a high-level and easy to understand API for something as complicated as (enterprise) storage.

Birth of blivet-gui

When Vojtěch Trefný approached me last year telling me that he didn’t like the Anaconda installer’s partitioning UI and that he had seen a bachelor thesis topic focused on improving it (by implementing a visualisation of the changes that will happen on the disks), I was really happy to see somebody who doesn’t “just complain” but actually wants to make some effort to make things better. Since the topic focused on visualisation had already been taken by somebody else 6, we needed to come up with something new. And that’s how the idea of a new storage management tool based on the blivet library was born. We gave Vojtěch a link to the blivet documentation and, to my surprise, after a few months he came to me, as the supervisor of the thesis, announcing that he had it done and in a shape that fulfilled the requirements specified in the bachelor thesis’ description — partition and LVM support, user and developer documentation, and the whole thing being embeddable into another application. Since defending the thesis and the implementation, Vojtěch has continued working on the blivet-gui tool, adding new features (LUKS, kickstart generation,…), fixing bugs and improving the UX (clickable objects in the visualisation,…), with me acting partially as QA, a source of some ideas and comments about the implementation and, in the most recent days, a press agent spreading the word about what a great and promising tool blivet-gui already is 7.

Now and the future

The blivet-gui tool is under heavy development and, for now, it doesn’t support all the features the blivet library supports. But it was intentionally announced 8 and made publicly known in this shape because I’m sure Fedora and the whole open-source world is a great community with a lot of clever and productive people who can help with development, QA, documentation, ideas, feature requests and everything else. We are a community of great people ready to do great things that can literally help thousands and millions of people all around the world. And we all know that storage management is a hard thing to do, and even harder when trying to do it right. So let’s go, guys: everybody can help to make blivet-gui more user- or developer-friendly, secure, reliable and feature-complete! Tell us what you like or hate about it most, tell us what you miss in it most, tell us what other storage management tools do better. Every rational and non-aggressive input is welcome, with patches being the most welcome, of course. :) Cloning the blivet-gui repository on GitHub is a great way to start.

Disclaimer

The purpose of the blivet-gui tool is not to replace GParted, gnome-disks-utility or any other storage management tool. It is just another option for storage management providing a different (although not disjoint) feature set than other storage management tools.


  1. reaching for the Nobel Prize for peace, of course
  2. I really like how storage gives you a chance to use 20 cryptic abbreviations in a single sentence
  3. and their derivatives
  4. both already being discussed and planned by the Anaconda-storage sub-team
  5. http://en.wikipedia.org/wiki/Blivet
  6. not implemented yet though
  7. http://blog.vojtechtrefny.cz/blivet-gui
  8. https://lists.fedoraproject.org/pipermail/devel/2014-September/202105.html