Talk:Logical Volume Manager (Linux)


Disk breakage?

If I have several physical disks forming one logical volume, what happens if one of those physical disks breaks? Do I lose all of my data on that logical volume or just the data that was stored on the broken disk?

Sam

  • Your question is not best answered here; try asking on a Linux (or HP-UX) forum, newsgroup, or mailing list... —Preceding unsigned comment added by 216.165.132.250 (talk) 00:50, 14 November 2007 (UTC)
    • I find the question relevant to the article, though; perhaps a line or two about the possibilities of data loss in multi-disk setups is in order? Zpeidar (talk) —Preceding undated comment added 02:53, 28 February 2009 (UTC).
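To make the data-loss question above concrete: with a plain linear (non-mirrored) LV, only the extents that lived on the failed disk become unreadable, although the filesystem spanning them will usually be left badly damaged. A minimal sketch in Python, with made-up sizes and no relation to the actual LVM code:

```python
# Hypothetical sketch (not LVM code): a linear logical volume laid out
# across two physical volumes, to show what a single-disk failure costs.

def build_linear_lv(pv_sizes):
    """Map logical extent index -> (pv_index, physical extent index)."""
    mapping = []
    for pv, size in enumerate(pv_sizes):
        for pe in range(size):
            mapping.append((pv, pe))
    return mapping

def surviving_extents(mapping, failed_pv):
    """Logical extents still readable after one PV fails."""
    return [le for le, (pv, _) in enumerate(mapping) if pv != failed_pv]

lv = build_linear_lv([4, 6])      # PV0 has 4 extents, PV1 has 6
ok = surviving_extents(lv, failed_pv=0)
print(len(lv), len(ok))           # 10 total extents, 6 still readable
```

The extents on the surviving disk are physically intact; whether any files are recoverable from them depends on the filesystem on top, which is why mirroring (or RAID beneath LVM) is needed for actual redundancy.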

HP-UX LVM

LVM is an HP-UX product - and in fact, the HP-UX page links to this page via "Logical Volume Manager". Certainly, LVM under Linux is worth an entry - but there should be an entry for LVM under HP-UX too, da? Why isn't there one? Need I make one? Or did I miss something?  DavidDouthitt  (Talk) 00:54, 14 November 2007 (UTC)

This page could really use an image showing the different LVM "layers" (PV, VG, LV). —Preceding unsigned comment added by 99.151.254.241 (talk) 20:07, 8 April 2008 (UTC)

The diagram shows limits and problems for LVM version 1.
It should be replaced by one for LVM2, as no one has used LVM1 for years.
Since LVM is part of the Linux kernel, it is open source with no real "owner".
Yeah, rewriting this article from scratch has been on my to-do list for at least a few months. Unfortunately, I haven't found enough time (yet) to do the rewrite. — Dsimic (talk | contribs) 21:21, 3 July 2014 (UTC)

Barriers and ext3

I'm confused about this article's statement about journaled filesystems. The cited article seems to say that LVM merely does not support barriers, which are disabled by ext3 by default anyway. So, at least according to that source, it does not seem that there is any disadvantage to running LVM with ext3 filesystems, as compared to running normal ext3 filesystems without LVM. Does anyone know whether this is correct? —Preceding unsigned comment added by 71.206.25.3 (talk) 19:39, 13 July 2008 (UTC)

What's a barrier? Is there a Wikipedia article regarding the kind of barrier mentioned in this article? (Please remove this comment when the article either links to a Wikipedia article on barriers or the barrier reference is removed from the article.)

Kernel vs. distribution

The beginning of the article stipulates that LVM is part of the kernel (which, from the minimal research I've done, appears correct), and yet, in the very next paragraph, there is a (fairly comprehensive) listing of specific Linux distributions - which would seem redundant, as any Linux (kernel) based operating system (GNU/Linux) can potentially support LVM. This article as a whole could probably do with a clean-up. 203.11.167.3 (talk) 00:44, 31 October 2008 (UTC)Anonymous

Yes, they can potentially support LVM. But having a distribution that was made with LVM support makes using LVM a lot easier. Eeekster (talk) 02:59, 28 February 2009 (UTC)

Fraud warning

Following the link: LVM2 Resource Page to http://sourceware.org/lvm2/ makes Opera issue a "fraud warning". "The page you are trying to open has been reported as fraudulent. It will likely attempt to trick you into sharing personal or financial information. Opera Software strongly discourages visiting this page."

Yes, Opera does whine about the page. I don't see anything there that merits the warning. Eeekster (talk) 02:58, 28 February 2009 (UTC)

This article lacks proper citations

This article does not cite many sources; in particular, the claims regarding how LVM2 is shipped with SUSE, LVM2's roots in HP-UX, and the "caveats" section cite only one source. I realize that reliable sources for this kind of thing are sometimes difficult to gather; I am posting this as a marker. —Preceding unsigned comment added by Tinkertim (talkcontribs) 11:57, 4 March 2009 (UTC)

PE definition is confusing

Can someone who understands LVM provide a concise, correct definition of PE, or point to a resource for researching it, so I can write one?

The lowest level (most granular) concept (PE) is placed highest (upper-most) in the initial diagram. I find this placement confusing. Is the diagram upside-down?

PE is the first textual item in the diagram. PE is first used in the text in section 3, but it is not defined; there is only a link to another page. The link shows "Physical Extent" in the hover-tag, and the URL is promising: https://secure.wikimedia.org/wikipedia/en/wiki/Physical_extent. But the linked page has a confusingly similar headline, "Logical Volume Management", with the sub-head "(Redirected from Physical Extent)". On that page, PEs are defined (finally!) this way: "Volume management treats PVs as sequences of chunks called physical extents (PEs)." Chunks, however, are not defined. Could they be sectors? The PE definition goes on to state: "PEs simply map one-to-one to logical extents (LEs)." But LEs are not defined either.
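For what it's worth, the relationships asked about above can be stated in a few lines of code. This is an illustrative model with invented names (`PhysicalVolume`, `le_to_pe`) and LVM2's default 4 MiB extent size, not LVM's actual implementation: a PV is treated as a sequence of fixed-size chunks (PEs), and a linear LV's logical extents (LEs) map one-to-one onto them.

```python
# Illustrative model, not the LVM implementation: a PV is a sequence
# of fixed-size chunks called physical extents (PEs); a linear LV's
# logical extents (LEs) map one-to-one onto PEs.

EXTENT_SIZE = 4 * 1024 * 1024   # 4 MiB, LVM2's default extent size

class PhysicalVolume:
    def __init__(self, name, size_bytes):
        self.name = name
        self.pe_count = size_bytes // EXTENT_SIZE  # any partial tail is unused

def le_to_pe(pvs, le):
    """Resolve a logical extent index to (pv_name, pe_index) for a linear LV."""
    for pv in pvs:
        if le < pv.pe_count:
            return (pv.name, le)
        le -= pv.pe_count
    raise IndexError("logical extent beyond end of volume group")

pvs = [PhysicalVolume("sda1", 20 * 1024 * 1024),   # 5 PEs
       PhysicalVolume("sdb1", 40 * 1024 * 1024)]   # 10 PEs
print(le_to_pe(pvs, 0))   # ('sda1', 0)
print(le_to_pe(pvs, 7))   # ('sdb1', 2)
```

So a "chunk" here is not a sector: it is a fixed-size run of disk blocks (4 MiB by default), and PEs/LEs are simply the physical and logical views of the same chunk.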

The role of an actual disk-drive in the diagram is unclear. Is there a disk drive represented on the diagram? Disk drives are what LVM is all about. Disk drives are the block devices that support logical volumes, but what on the diagram corresponds to a block (sector) or a drive? I think representing blocks (sectors) and drives would improve the diagram. Disk drives have a long-accepted standard representation in diagrams [I guess I need a cite for this?]. This standard shape should be used.

I think a complete hierarchy for the concept should start at the bottom with sectors; above that should go PEs, etc.

(Nitpick: the diagram gives ranges for how many of each thing can exist, but only some levels have examples of how big they can be. How big can an LVM partition be? The diagram needs an answer for this; the capacity limit for an LVM partition is probably a very common question.)

(I apologize for the length of this comment.) Chasmo (talk) 04:26, 26 March 2011 (UTC)

If you look at the output of LVM2 commands, it seems to choose a "chunk" size (4 MB appears to be common). Creating a logical volume uses sufficient chunks to make one segment (probably several segments if they are on different physical devices). If you extend a logical volume, it will allocate more segment(s) using sufficient chunks to do so.
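Assuming the default 4 MiB extent ("chunk") size mentioned above, the rounding this implies can be sketched as:

```python
# Back-of-envelope sketch: an LV is always a whole number of extents,
# so requested sizes are rounded up to the next extent boundary.
# Assumes LVM2's default 4 MiB extent size; numbers are illustrative.
import math

EXTENT_MIB = 4

def extents_needed(size_mib):
    """Extents required for an LV of the given size, rounded up."""
    return math.ceil(size_mib / EXTENT_MIB)

print(extents_needed(100))   # 25 extents for a 100 MiB LV
print(extents_needed(101))   # 26: the LV ends up 104 MiB
```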

Wrong facts etc.

Hi,

this article really needs some work:

  • LVM - including the term - was introduced by IBM on AIX 4.x around 1993
  • The Linux LVM is indeed based on the HP-UX LVM but does not implement multiple features from there, so it should state that this is an incomplete implementation
  • Maybe the article should be renamed to be called "Linux LVM" or something
  • Description of pvmove is technically flawed, IMHO completely wrong. The mechanism is atomic copies of single PEs, and it is not very well done. I don't remember if the target space is fully allocated at the start; it would be most sad if it's done that way, and I don't think that's how it is implemented.
  • IIRC, mirroring is only a part of cLVM. It would be good to warn that no "PVG"s exist and even "strict" mirroring was only added recently.
  • LVM mirroring and RAID 1 have in common only that both make two on-disk copies of a given piece of data. Also: no mention of quorum mechanisms? —Preceding unsigned comment added by 188.174.67.0 (talk) 16:45, 5 April 2011 (UTC)
The first point here isn't really a correction; it doesn't contradict anything in the article, just adds more context. I suppose we can add it to the lead section if we can find some sort of citation to this effect.

For the second point, that seems more a candidate for Logical Volume Management, since that's where a side-by-side comparison with other LVM implementations exists. This article is about Linux LVM; exhaustive comparisons detract from its purpose.

The third point may have been accurate when it was written (although I'm not seeing where this page was moved, merged or renamed), but it _does_ currently state it's Linux-specific in the title.

For pvmove, it's possible that section has inaccurate information, as it's not thoroughly cited. I'm in the process of reworking the article myself, so I'll get there eventually.

Mirroring is a core part of LVM, having no special relationship with CLVM, even at the time you wrote this. CLVM is a locking mechanism unrelated to LVM's RAID functionality.

On your last point, LVM RAID is indeed different from actual RAID. My understanding is that "LVM RAID" only includes "RAID" as a way of explaining its use. It's not in an SNIA-compatible format, but that's well in line with other volume management schemes (such as ZFS and Btrfs) which also claim RAID status without being SNIA-compliant either. Most people think of "RAID" as a functional description rather than an actual standard anyway. Quorum is probably too specific for a general LVM overview, and any discussion would probably veer close to "how-to" territory.

Long and short of it: the only (possibly) valid point you seem to have is the one about pvmove, but I'll still need to verify that as I work through the article. Bratchley (talk) 15:14, 22 May 2015 (UTC)
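For what it's worth, the per-extent mechanism being debated can be sketched as a toy model: copy an extent to the destination, then switch the LE→PE mapping for that extent in one step. The function and data-structure names here are invented for illustration; real pvmove works through device-mapper mirror targets and is considerably more involved.

```python
# Toy model of moving extents one at a time with an atomic mapping
# switch; destination space is claimed extent by extent, not all at
# the start. This mirrors the *description* above, not the real code.

def pvmove(mapping, data, src_pv, dst_pv, dst_free):
    """mapping: le -> (pv, pe); data: (pv, pe) -> payload."""
    free = list(dst_free)
    for le, (pv, pe) in enumerate(mapping):
        if pv != src_pv:
            continue
        new_pe = free.pop(0)
        data[(dst_pv, new_pe)] = data[(pv, pe)]  # copy the extent first...
        mapping[le] = (dst_pv, new_pe)           # ...then switch the mapping
    return mapping

mapping = [("pv0", 0), ("pv0", 1), ("pv1", 0)]
data = {("pv0", 0): "a", ("pv0", 1): "b", ("pv1", 0): "c"}
pvmove(mapping, data, "pv0", "pv2", dst_free=[0, 1])
print(mapping)   # [('pv2', 0), ('pv2', 1), ('pv1', 0)]
```

Because each extent's mapping flips only after its copy completes, the LV stays consistent at every step, which is why pvmove can run on a live volume.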

Non-standard UUIDs

LVM uses a 32-character string containing digits plus uppercase and lowercase letters; per the code at http://git.fedorahosted.org/git/?p=lvm2.git;a=tree;f=lib/uuid;hb=HEAD it looks like it's based purely on a random number generator. At 62^32 = 2.27×10^57 ≈ 2^190 possibilities, this is a much larger space than standard UUIDs, and it requires a different sort of storage than a standard 128-bit UUID. Drf5n (talk) 20:39, 30 July 2012 (UTC)
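The arithmetic above checks out; a quick sketch (the 32-character alphanumeric form is taken from the comment above; the exact display formatting of real LVM IDs may differ):

```python
# Verifying the size of a 32-character [0-9A-Za-z] identifier space
# versus a standard 128-bit UUID.
import math
import random
import string

ALPHABET = string.digits + string.ascii_uppercase + string.ascii_lowercase

space = 62 ** 32
print(f"{space:.3e}")        # ≈ 2.27e+57 possibilities
print(math.log2(space))      # ≈ 190.5 bits, vs 128 for a standard UUID

# A random LVM-style identifier (illustrative only):
lvm_style_id = "".join(random.choice(ALPHABET) for _ in range(32))
print(len(lvm_style_id))     # 32
```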

Confusing JBOD and SPAN (BIG)

Under "Common Uses" it states that LVM is capable of creating a single logical volume out of multiple physical volumes, and then compares this to RAID 0 and JBOD. Technically speaking, the latter comparison is incorrect, as JBOD does not combine multiple physical volumes into a single logical volume; it just creates a set of disks, each with its own logical volume. Please see this: http://en.wikipedia.org/wiki/Non-RAID_drive_architectures It should be comparing this to SPAN or BIG rather than JBOD. — Preceding unsigned comment added by 129.21.119.58 (talk) 02:02, 9 December 2012 (UTC)

The article you linked indicates that "SPAN" as a concept overlaps with "JBOD" (and implies the author thinks of SPAN as a subset of JBOD), and that reflects my experience as well. From your linked article: "Hard drives may be handled independently as separate logical volumes, or they may be combined into a single logical volume using a volume manager like LVM; such volumes are usually called 'spanned'". Given that, I don't think it's entirely inaccurate to say that it's both "SPAN" and JBOD. The linked article also seems to have some contention going on with it; until they resolve those issues, I don't think we can safely cite it as an example of the correct terminology. Bratchley (talk) 16:53, 22 May 2015 (UTC)
Hello! As a contributor to the Non-RAID drive architectures article, I can only emphasize that the distinction between JBOD and SPAN is somewhat mangled, which also applies to JBOD itself. The term "JBOD" has been coined primarily to describe layouts in which hard disk drives attached to a hardware RAID controller aren't specifically managed by the controller and are passed to the operating system "as-is". Thus, using JBOD outside that specific scenario is always going to be a little blurred. — Dsimic (talk | contribs) 19:37, 22 May 2015 (UTC)