lvcreate (8)


NAME

lvcreate - create a logical volume in an existing volume group

SYNOPSIS

lvcreate [--addtag Tag] [--alloc AllocationPolicy] [-a|--activate [a|e|l]{y|n}] [-A|--autobackup {y|n}] [-C|--contiguous {y|n}] [-d|--debug] [-h|-?|--help] [--noudevsync] [--ignoremonitoring] [--monitor {y|n}] [-i|--stripes Stripes [-I|--stripesize StripeSize]] {[-l|--extents LogicalExtentsNumber[%{VG|PVS|FREE}] | -L|--size LogicalVolumeSize[bBsSkKmMgGtTpPeE]] | -V|--virtualsize VirtualSize[bBsSkKmMgGtTpPeE]} [-M|--persistent {y|n}] [--minor minor] [-m|--mirrors Mirrors [--nosync] [--mirrorlog {disk|core|mirrored} | --corelog] [-R|--regionsize MirrorLogRegionSize]] [-n|--name LogicalVolume{Name|Path}] [-p|--permission {r|rw}] [-r|--readahead {ReadAheadSectors|auto|none}] [-t|--test] [-T|--thin [-c|--chunksize ChunkSize] [--discards {ignore|nopassdown|passdown}] [--poolmetadatasize MetadataSize[bBsSkKmMgG]]] [--thinpool ThinPoolLogicalVolume{Name|Path}] [--type SegmentType] [-v|--verbose] [-Z|--zero {y|n}] VolumeGroup{Name|Path}[/ThinPoolLogicalVolumeName] [PhysicalVolumePath[:PE[-PE]]...]

lvcreate [-l|--extents LogicalExtentsNumber[%{VG|FREE|ORIGIN}] | -L|--size LogicalVolumeSize[bBsSkKmMgGtTpPeE]] [-c|--chunksize ChunkSize] [--noudevsync] [--ignoremonitoring] [--monitor {y|n}] [-n|--name SnapshotLogicalVolume{Name|Path}] -s|--snapshot {[VolumeGroup{Name|Path}/]OriginalLogicalVolumeName -V|--virtualsize VirtualSize[bBsSkKmMgGtTpPeE]}

DESCRIPTION

lvcreate creates a new logical volume in a volume group (see vgcreate(8), vgchange(8)) by allocating logical extents from the free physical extent pool of that volume group. If there are not enough free physical extents then the volume group can be extended (see vgextend(8)) with other physical volumes or by reducing existing logical volumes of this volume group in size (see lvreduce(8)). If you specify one or more PhysicalVolumes, allocation of physical extents will be restricted to these volumes.

The second form supports the creation of snapshot logical volumes which keep the contents of the original logical volume for backup purposes.

OPTIONS

See lvm(8) for common options.
-a, --activate {y|ay|n|ey|en|ly|ln}
Controls the availability of the Logical Volume for immediate use after the command finishes running. By default, new Logical Volumes are activated (-ay). Where technically possible, -an will leave the new Logical Volume inactive; however, snapshots, for example, can only be created in the active state, so -an cannot be used with --snapshot. Normally the --zero n argument has to be supplied as well, because zeroing (the default behaviour) also requires activation. If autoactivation is used (-aay), the Logical Volume is activated only if it matches an item in activation/auto_activation_volume_list set in lvm.conf. For autoactivated Logical Volumes, --zero n is always assumed and cannot be overridden. If clustered locking is enabled, -aey will activate exclusively on one node and -aly will activate only on the local node.
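For example, assuming an existing volume group named vg00 with enough free space (the volume name lvol_inactive is illustrative), a new volume can be created inactive and without zeroing:

lvcreate -an -Zn -L 100M -n lvol_inactive vg00
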
-c, --chunksize ChunkSize
Gives the size of chunk for snapshot and thin pool logical volumes. For snapshots the value must be a power of 2 between 4KiB and 512KiB; the default value is 4KiB. For thin pools the value must be between 64KiB and 1048576KiB; if the pool metadata size is not specified, the default value starts at 64KiB and scales up so that the pool metadata fits within 128MiB. Older dm thin pool target versions (<1.4) require the value to be a power of 2; newer versions only require it to be a multiple of 64KiB, but discard is not supported for non-power-of-2 values. The default unit is kilobytes.
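For instance, assuming an existing origin volume vg00/lvol1 (the snapshot name snap8 is illustrative), a snapshot with an 8KiB chunk size could be created with:

lvcreate -s -c 8 -L 200M -n snap8 vg00/lvol1
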
-C, --contiguous {y|n}
Sets or resets the contiguous allocation policy for logical volumes. The default is non-contiguous allocation, based on a next free principle.
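For example, assuming a volume group named vg00 (the volume name lvol_contig is illustrative), a volume with contiguous allocation could be requested with:

lvcreate -C y -L 100M -n lvol_contig vg00
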
--discards {ignore|nopassdown|passdown}
Set discards behavior. Default is passdown.
-i, --stripes Stripes
Gives the number of stripes. This is equal to the number of physical volumes across which the logical volume is scattered.
-I, --stripesize StripeSize
Gives the number of kilobytes for the granularity of the stripes.
StripeSize must be 2^n (n = 2 to 9) for metadata in LVM1 format. For metadata in LVM2 format, the stripe size may be a larger power of 2 but must not exceed the physical extent size.
-l, --extents LogicalExtentsNumber[%{VG|PVS|FREE|ORIGIN}]
Gives the number of logical extents to allocate for the new logical volume. The number can also be expressed as a percentage of the total space in the Volume Group with the suffix %VG, as a percentage of the remaining free space in the Volume Group with the suffix %FREE, as a percentage of the remaining free space for the specified PhysicalVolume(s) with the suffix %PVS, or (for a snapshot) as a percentage of the total space in the Origin Logical Volume with the suffix %ORIGIN.
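For example, assuming a volume group named vg00 (the volume name lvol_rest is illustrative), all remaining free space in the volume group can be allocated with:

lvcreate -l 100%FREE -n lvol_rest vg00
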
-L, --size LogicalVolumeSize[bBsSkKmMgGtTpPeE]
Gives the size to allocate for the new logical volume. A size suffix of K for kilobytes, M for megabytes, G for gigabytes, T for terabytes, P for petabytes or E for exabytes is optional.
Default unit is megabytes.
--minor minor
Set the minor number.
-M, --persistent {y|n}
Set to y to make the specified minor number persistent.
-m, --mirrors Mirrors
Creates a mirrored logical volume with Mirrors copies. For example, specifying "-m 1" would result in a mirror with two sides; that is, a linear volume plus one copy.

Specifying the optional argument --nosync will cause the creation of the mirror to skip the initial resynchronization. Any data written afterwards will be mirrored, but the original contents will not be copied. This is useful for skipping a potentially long and resource intensive initial sync of an empty device.

The optional argument --mirrorlog specifies the type of log to be used. The default is disk, which is persistent and requires a small amount of storage space, usually on a separate device from the data being mirrored. Using core means the mirror is regenerated by copying the data from the first device again each time the device is activated, for example, after every reboot. Using "mirrored" will create a persistent log that is itself mirrored.

The optional argument --corelog is equivalent to --mirrorlog core.
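For example, assuming a volume group named vg00 with enough suitable devices (the volume name lv_mirror is illustrative), a mirror whose initial synchronisation is skipped could be created with:

lvcreate -m 1 --nosync -L 500M -n lv_mirror vg00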

-n, --name LogicalVolume{Name|Path}
The name for the new logical volume.
Without this option a default name of "lvol#" will be generated, where # is the LVM internal number of the logical volume.
--noudevsync
Disable udev synchronisation. The process will not wait for notification from udev. It will continue irrespective of any possible udev processing in the background. You should only use this if udev is not running or has rules that ignore the devices LVM2 creates.
--monitor {y|n}
Start or avoid monitoring a mirrored or snapshot logical volume with dmeventd, if it is installed. If a device used by a monitored mirror reports an I/O error, the failure is handled according to mirror_image_fault_policy and mirror_log_fault_policy set in lvm.conf.
--ignoremonitoring
Make no attempt to interact with dmeventd unless --monitor is specified.
-p, --permission {r|rw}
Set access permissions to read only or read and write.
Default is read and write.
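For example, assuming a volume group named vg00 (the volume name lvol_ro is illustrative), a read-only volume could be created with:

lvcreate -p r -L 100M -n lvol_ro vg00
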
--poolmetadatasize MetadataSize[bBsSkKmMgG]
Sets the size of the thin pool's metadata logical volume. Supported values are in the range between 2MiB and 16GiB. The default value is (Pool_LV_size / Pool_LV_chunk_size * 64b). The default unit is megabytes.
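For instance, assuming a volume group named vg00 (the pool name pool0 is illustrative), a thin pool with a 128MiB metadata volume could be created with:

lvcreate -L 10G --poolmetadatasize 128M -T vg00/pool0
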
-r, --readahead {ReadAheadSectors|auto|none}
Set read ahead sector count of this logical volume. For volume groups with metadata in lvm1 format, this must be a value between 2 and 120. The default value is "auto" which allows the kernel to choose a suitable value automatically. "None" is equivalent to specifying zero.
-R, --regionsize MirrorLogRegionSize
A mirror is divided into regions of this size (in MB), and the mirror log uses this granularity to track which regions are in sync.
-s, --snapshot OriginalLogicalVolume{Name|Path}
Create a snapshot logical volume (or snapshot) for an existing, so-called original logical volume (or origin). Snapshots provide a 'frozen image' of the contents of the origin while the origin can still be updated. They enable consistent backups and online recovery of removed/overwritten data/files. A thin snapshot is created when the origin is a thin volume and no size is specified; a thin snapshot shares blocks within the thin pool volume. A snapshot with a specified size does not need the same amount of storage as the origin; in a typical scenario, 15-20% might be enough. If the snapshot runs out of storage, use lvextend(8) to grow it. Shrinking a snapshot is supported by lvreduce(8) as well. Run lvdisplay(8) on the snapshot to check how much data is allocated to it. Note that a small amount of the space you allocate to the snapshot is used to track the locations of the chunks of data, so you should allocate slightly more space than you actually need and monitor the rate at which the snapshot data is growing so you can avoid running out of space.
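For example, assuming an existing thin volume vg00/thin_lv (the snapshot name thin_snap is illustrative), a thin snapshot is created simply by omitting the size:

lvcreate -s -n thin_snap vg00/thin_lv
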
-T, --thin, --thinpool ThinPoolLogicalVolume{Name|Path}
Creates a thin pool, a thin logical volume, or both. Specifying the optional argument --size will cause the creation of the thin pool logical volume. Specifying the optional argument --virtualsize will cause the creation of the thin logical volume from the given thin pool volume. Specifying both arguments will cause the creation of both the thin pool and the thin volume using this pool. Requires the device-mapper thin provisioning kernel driver from kernel 3.2 or newer.
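For example, assuming a volume group named vg00 (the names pool0 and thin1 are illustrative), a thin pool and a thin volume can also be created in two separate steps:

lvcreate -L 100M -T vg00/pool0

lvcreate -V 1G -T vg00/pool0 -n thin1
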
--type SegmentType
Create a logical volume that uses the specified segment type (e.g. "raid5", "mirror", "snapshot", "thin", "thin-pool"). Many segment types have a commandline switch alias that will enable their use (-s is an alias for --type snapshot). However, this argument must be used when no existing commandline switch alias is available for the desired type, as is the case with error, zero, raid1, raid4, raid5 or raid6.
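For example, assuming a volume group named vg00 (the volume name lv_raid1 is illustrative), a RAID1 volume with two images could be created with:

lvcreate --type raid1 -m 1 -L 1G -n lv_raid1 vg00
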
-V, --virtualsize VirtualSize[bBsSkKmMgGtTpPeE]
Create a sparse device of the given size (in MB by default) using a snapshot, or a thinly provisioned device when a thin pool is specified. Anything written to the device will be returned when reading from it. Reading from other areas of the device will return blocks of zeros. A virtual snapshot is implemented by creating a hidden virtual device of the requested size using the zero target; a suffix of _vorigin is used for this device.
-Z, --zero {y|n}
Controls zeroing of the first KB of data in the new logical volume.
Default is yes.
The volume will not be zeroed if the read-only flag is set.
Snapshot volumes are always zeroed.


Warning: trying to mount an unzeroed logical volume can cause the system to hang.

Examples

Creates a striped logical volume with 3 stripes, a stripe size of 8KiB and a size of 100MiB in the volume group named vg00. The logical volume name will be chosen by lvcreate:

lvcreate -i 3 -I 8 -L 100M vg00

Creates a mirror logical volume with 2 sides and a usable size of 500 MiB. This operation would require 3 devices (or the option --alloc anywhere) - two for the mirror devices and one for the disk log:

lvcreate -m1 -L 500M vg00

Creates a mirror logical volume with 2 sides and a usable size of 500 MiB. This operation would require 2 devices - the log is "in-memory":

lvcreate -m1 --mirrorlog core -L 500M vg00

Creates a snapshot logical volume named /dev/vg00/snap which has access to the contents of the original logical volume named /dev/vg00/lvol1 at snapshot logical volume creation time. If the original logical volume contains a file system, you can mount the snapshot logical volume on an arbitrary directory in order to access the contents of the filesystem to run a backup while the original filesystem continues to get updated:

lvcreate --size 100m --snapshot --name snap /dev/vg00/lvol1

Creates a sparse device named /dev/vg1/sparse of size 1TiB with space for just under 100MiB of actual data on it:

lvcreate --virtualsize 1T --size 100M --snapshot --name sparse vg1

Creates a linear logical volume "vg00/lvol1" using physical extents /dev/sda:0-7 and /dev/sdb:0-7 for allocation of extents:

lvcreate -L 64M -n lvol1 vg00 /dev/sda:0-7 /dev/sdb:0-7

Creates a 5GiB RAID5 logical volume "vg00/my_lv", with 3 stripes (plus a parity drive for a total of 4 devices) and a stripesize of 64KiB:

lvcreate --type raid5 -L 5G -i 3 -I 64 -n my_lv vg00

Creates a 100MiB thin pool logical volume for thin provisioning, built with 2 stripes of 64KiB and a chunk size of 256KiB, together with a 1TiB thinly provisioned logical volume "vg00/thin_lv":

lvcreate -i 2 -I 64 -c 256 -L100M -T vg00/pool -V 1T --name thin_lv

SEE ALSO

lvm(8), vgcreate(8), lvchange(8), lvremove(8), lvrename(8), lvextend(8), lvreduce(8), lvdisplay(8), lvscan(8)