Introduction to LVM in Linux

I’m going to go over the Logical Volume Manager (LVM) today, because it plays an important part in next Friday’s howto. I won’t let the cat out of the bag, but just know that it’s been a long time coming. There are lots of resources for the technical management of LVM, and they cover the many uses and flags of the commands far better than I could here, so some will be listed after the entry. I’m just going to concentrate on the “why” of LVM. I’ve often found that learning why you want to do something is more difficult than the technical process by which you accomplish it.

First off, let’s discuss life without LVM. Back in the bad old days, you had a hard drive. That drive could have partitions. You could install filesystems on those partitions, and then use those filesystems. Uphill both ways. It looked a lot like this:

[Diagram: disk sda with two partitions, sda1 and sda2, each holding a mounted filesystem]

You’ve got the actual drive, in this case sda. On that drive are two partitions, sda1 and sda2. There is also some unused free space. Each of the partitions holds a filesystem, which is mounted. The actual filesystem type is arbitrary: it could be ext3, reiserfs, or what have you. The important thing to note is that there is a direct one-to-one correlation between disk partitions and possible filesystems.
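To make the one-to-one mapping concrete, here’s a hypothetical example of what that layout looks like from the shell. The device names, mount points, and sizes are illustrative only:

```shell
# Each mounted filesystem sits directly on exactly one disk partition.
# (Hypothetical output; your devices and mount points will differ.)
mount | grep sda
# /dev/sda1 on / type ext3 (rw)
# /dev/sda2 on /home type ext3 (rw)
```

If /home fills up here, your only options are repartitioning or moving data by hand, which is exactly the pain LVM is designed to remove.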

Let’s add some logical volume management that recreates the exact same structure:

[Diagram: the same two partitions, now grouped into a volume group, with logical volumes carved out on top]

Now you see the same partitions; however, there is a layer above the partitions called a “Volume Group”: literally a group of volumes, in this case disk partitions. It’s acceptable to think of this as a sort of virtual disk that you can partition up. Since we’re matching our previous configuration exactly, you don’t get to see the strengths of the system yet. Notice that above the volume group we have created logical volumes, which might be thought of as virtual partitions, and it is upon these that we build our filesystems.
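A rough sketch of the commands that would build this structure, bottom layer to top. The sizes are hypothetical, the ext3 choice is just one option, and everything here needs root; the volume group and logical volume names follow the ones used later in this post:

```shell
# Mark the existing partitions as LVM physical volumes.
pvcreate /dev/sda1 /dev/sda2

# Group them into one volume group (the "virtual disk").
vgcreate myVolumeGroup01 /dev/sda1 /dev/sda2

# Carve logical volumes (the "virtual partitions") out of the group.
lvcreate -L 40G -n myLogicalVolume1 myVolumeGroup01
lvcreate -L 40G -n myLogicalVolume2 myVolumeGroup01

# Filesystems now go on the logical volumes, not the raw partitions.
mkfs.ext3 /dev/myVolumeGroup01/myLogicalVolume1
mkfs.ext3 /dev/myVolumeGroup01/myLogicalVolume2
```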

Let’s see what happens when we add more than one physical volume:

[Diagram: three disks (sda, sdb, sdc) feeding one volume group, myVolumeGroup01, with two logical volumes on top]

Here we have three physical disks: sda, sdb, and sdc. Each of the first two disks has one partition taking up the entire space. The last, sdc, has one partition taking up half of the disk, with the other half remaining unpartitioned free space.

We can see the volume group above, which includes all of the currently available volumes. Here lies one of the biggest selling points: you can build a logical volume as big as the sum of your disks. In many ways this is similar to how RAID level 0 works, except there’s no striping at all; data is written, for the most part, linearly. If you need redundancy or the performance increases that RAID provides, make sure to put your logical volumes on top of RAID arrays. RAID slices work exactly like physical disks here.

Now, we have this volume group which takes up 2 and 1/2 disks. It has been carved into two logical volumes, the first of which is larger than any one of the disks. The logical volumes don’t care how big the actual physical disks are, since all they see is that they’re carved out of myVolumeGroup01. This layer of abstraction is important, as we shall see.
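The multi-disk picture might be sketched like this. Again, partition names and sizes are hypothetical and the commands need root; the point is that the 150 GB logical volume is bigger than any single 100 GB disk beneath it:

```shell
# Pool partitions from three separate disks into one volume group.
pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1
vgcreate myVolumeGroup01 /dev/sda1 /dev/sdb1 /dev/sdc1

# The group's capacity is the sum of its physical volumes.
vgs myVolumeGroup01

# This LV spans disks: larger than any one underlying drive.
lvcreate -L 150G -n myLogicalVolume1 myVolumeGroup01
lvcreate -L 50G  -n myLogicalVolume2 myVolumeGroup01
```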

What happens if we decide that we need the unused space, because we’ve added more users?

Normally we’d be in for some grief if we used the one to one mapping, but with logical volumes, here’s what we can do:

[Diagram: new partition /dev/sdc2 added to myVolumeGroup01, with myLogicalVolume2 grown into the new space]

Here we’ve taken the previously free space on /dev/sdc and created /dev/sdc2. Then we added that to the list of volumes that comprise myVolumeGroup01. Once that was done, we were free to expand either of the logical volumes as necessary. Since we added users, we grew myLogicalVolume2. At that point, as long as the filesystem on /home supported it, we were free to grow it to fill the extra space, all because we abstracted our storage from the physical disks it lives on.
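The growth steps just described might look like this in practice. The partition number and size are hypothetical, the commands need root, and resize2fs assumes the ext2/ext3 family; other filesystems have their own grow tools:

```shell
# Turn the newly created partition into a physical volume
# and add it to the existing volume group.
pvcreate /dev/sdc2
vgextend myVolumeGroup01 /dev/sdc2

# Grow the logical volume that backs /home into the new space...
lvextend -L +50G /dev/myVolumeGroup01/myLogicalVolume2

# ...then grow the filesystem itself to match the larger LV.
resize2fs /dev/myVolumeGroup01/myLogicalVolume2
```

Note the order: extend the volume group, then the logical volume, then the filesystem. Shrinking, where supported, runs the same steps in reverse.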

Alright, that covers the basic why of Logical Volume Management. Since I’m sure you’re itching to learn more about how to prepare and build your own systems, here are some excellent resources to get you started:


As always, if you have any questions about the topic, please comment on the story, and I’ll try to help you out or point you in the right direction.

  • Dan C

    One thing I’ve always been a little unclear on, and never had the time to test extensively, is what the performance impact is when introducing LVM as an extra layer of block abstraction.

That is, in the simplest context: not when using LVM to join multiple disks into a VG or anything clever.

    Have you any experiences or thoughts?

  • Matt


    I’m not really sure what you mean, I think.

    It was my understanding that blocks are a filesystem construct, rather than being part of a physical device. If you mean sectors, then I really have no idea.

The general consensus seems to be a slowdown of around 20%, but there are apparently ways of tuning it a bit.

  • JeffHengesbach

Great intro, Matt. I totally agree with your “Why” statement. Understanding the why of doing something is the first step in deciding whether it should be done.

In terms of performance, I have 2 comments. Every environment is unique, so the impact will vary, but like Dan C mentioned, adding a layer of abstraction in the block device stack -has- to impose some overhead. How much depends on a lot: processor speed, LVM tuning, disk subsystem layout, controller configurations….

In a true performance-critical situation, you’re most likely dealing with storage at the spindle level (even on a SAN). You don’t see too many high-transaction databases on the same RAID group as a mailbox store that is sliced up with LVM or the like.

    That said – I love LVM.

  • Matt


    Ah, I knew I had more to learn about LVM, so I’ll have to investigate the performance penalties involved.

Thanks for the kind words, too :-)

  • JeffHengesbach

A nice simplified visual representation showing how the parts of the Linux IO architecture fit together.

The “Generic Block Device” layer can be stacked many times over using the likes of md, LVM, drbd, etc. Software RAID 10, for instance, is a RAID 0 layer on top of multiple RAID 1s. An IO trying to make its way to/from the physical device has to pass through each layer, adding latency.

  • Matt


    Wow, great link. Thanks!

  • Ben C

I still prefer the “old way” for desktops — maybe because that’s what I’m used to? The most useful application of LVM that I’ve found is the ability to set arbitrary hard quotas. On our main file server, there are 500 GB partitions, and researchers can purchase space in units of 100 GB. This limits quota enforcement to a weekly e-mail that says “hey, you’re using too much!” On a newer server, I set up LVM slices so that each researcher gets their own partition — exactly the size they pay for — that I can grow or shrink as needed.


  • Iced

    First, great article!

I am interested in your thoughts on how to manage removing physical drives from a logical volume. I bought into the idea of using LVM a couple of years ago, but have been bitten twice by a bad drive and had nothing but headaches trying to remove the bad drive from the logical volume.

Losing a physical drive in a logical volume is an exercise in patience, especially when what appears to be a random subset of files goes missing from the logical volume it’s part of.

I can understand that a minimum of RAID-1 sitting above the physical layer would be nice, but I guess the caveat is that if you are going to put anything of importance in a logical volume using LVM, make sure it’s on top of some sort of RAID array.

    Thoughts, tips?
