It depends on how you want to specify it. If you use the -l option, you are specifying the size in terms of logical extents. If you use the -L option, you specify it in terms of size (anything with a unit after it, e.g. 150GB).
-l, --extents LogicalExtentsNumber[%{VG|PVS|FREE|ORIGIN}]
Gives the number of logical extents to allocate for the new
logical volume. The number can also be expressed as a percentage
of the total space in the Volume Group with the suffix %VG, as a
percentage of the remaining free space in the Volume Group with
the suffix %FREE, as a percentage of the remaining free space for
the specified PhysicalVolume(s) with the suffix %PVS, or (for
a snapshot) as a percentage of the total space in the Origin
Logical Volume with the suffix %ORIGIN.
-L, --size LogicalVolumeSize[bBsSkKmMgGtTpPeE]
Gives the size to allocate for the new logical volume. A size
suffix of K for kilobytes, M for megabytes, G for gigabytes, T for
terabytes, P for petabytes or E for exabytes is optional.
Default unit is megabytes.
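For example, both of these flags create a logical volume; one sizes it by extent count, the other by bytes. (This is just a sketch; the volume group vg0 and the LV names are hypothetical and assumed to already exist.)

```shell
# Size by count of logical extents (-l): 100 extents
# (with the default 4MB extent size, that's 400MB)
lvcreate -l 100 -n lv_by_extents vg0

# Size by explicit unit (-L): 150 gigabytes
lvcreate -L 150G -n lv_by_size vg0

# -l also accepts the percentage suffixes from the man page above:
lvcreate -l 100%FREE -n lv_all_free vg0
```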
What are extents?
Yes, this confused me when I set up LVM on my RAID too. I always refer back to this source to refresh my memory:
excerpt from the Unix/Linux Administration Logical Volume Management Guide
Extents:
When creating a volume group from one or more physical volumes, you must specify the size of the "extents" of each of the physical volumes that make up the VG. Each extent is a single contiguous chunk of disk space, typically 4M in size, but can range from 8K to 16G in powers of 2 only. (Extents are analogous to disk blocks or clusters.) The significance of this is that the size of logical volumes are specified as a number of extents. Logical volumes can thus grow and shrink in increments of the extent size. A volume group's extent size cannot be changed after it is set.
The system internally numbers the extents for both logical and physical volumes. These are called logical extents (or LEs) and physical extents (or PEs), respectively. When a logical volume is created a mapping is defined between logical extents (which are logically numbered sequentially starting at zero) and physical extents (which are also numbered sequentially).
To provide acceptable performance the extent size must be a multiple of the actual disk cluster size (i.e., the size of the smallest chunk of data that can be accessed in a single disk I/O operation). In addition some applications (such as Oracle database) have performance that is very sensitive to the extent size. So setting this correctly also depends on what the storage will be used for, and is considered part of the system administrator's job of tuning the system.
That explains what they are. I use this article to figure out how to calculate them:
excerpt from Managing RAID and LVM with Linux (v0.5)
The default value for the physical extent size can be too low for a large RAID array. In those cases you'll need to specify the -s option with a larger than default physical extent size. The default is only 4MB as of the version in Fedora Core 5. The maximum number of physical extents is approximately 65k so take your maximum volume size and divide it by 65k then round it to the next nice round number. For example, to successfully create a 550G RAID let's figure that's approximately 550,000 megabytes and divide by 65,000 which gives you roughly 8.46. Round it up to the next nice round number and use 16M (for 16 megabytes) as the physical extent size and you'll be fine:
# vgcreate -s 16M <volume group name>
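The arithmetic in that excerpt can be sketched as a quick calculation (the 550,000 MB figure and the ~65k extent cap are the ones quoted in the article above):

```shell
# Rough extent-size check for a ~550GB array, as in the excerpt:
# the PE count is capped near 65k, so the minimum extent size
# is the total size divided by that cap.
awk 'BEGIN {
    size_mb     = 550000   # ~550GB expressed in megabytes
    max_extents = 65000    # approximate PE limit per the article
    printf "minimum extent size: %.2f MB\n", size_mb / max_extents
}'
# ≈ 8.46 MB, so round up to the next power of 2 and use 16M
```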
So with the command above you have created the empty volume group. You can then query it with vgdisplay
to see the actual number of Physical Extents (PEs) available:
$ vgdisplay lvm-raid
.
.
Free PE / Size 57235 / 223.57 GB
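The two numbers on that line are consistent: free PE count times extent size gives the free space. A quick check, assuming this VG was created with the default 4MB extent size (which is what the 223.57 GB figure implies):

```shell
# Free PE x extent size = free space:
# 57235 extents x 4MB = 228940 MB, i.e. about 223.57 GB
awk 'BEGIN { printf "%.2f GB\n", 57235 * 4 / 1024 }'
```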
If you want to assign all of them to your logical volume, do the following:
$ lvcreate -l 57235 lvm-raid -n lvm0
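If you'd rather not look up the exact PE count in vgdisplay first, the %FREE suffix from the man page excerpt above achieves the same result (lvm-raid and lvm0 are the names from this example):

```shell
# Equivalent to -l 57235 here: allocate all remaining free extents
lvcreate -l 100%FREE -n lvm0 lvm-raid
```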
You can then confirm with lvdisplay
. The device path is a concatenation of the volume group name (lvm-raid) and the logical volume name (lvm0), namely /dev/lvm-raid/lvm0
.
$ lvdisplay /dev/lvm-raid/lvm0
--- Logical volume ---
LV Name /dev/lvm-raid/lvm0
VG Name lvm-raid
LV UUID FFX673-dGlX-tsEL-6UXl-1hLs-6b3Y-rkO9O2
LV Write Access read/write
LV Status available
# open 1
LV Size 223.57 GB
Current LE 57235
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:2