Like most operating systems, UNIX was designed around spinning magnetic disks, and the way it manages storage reflects this. Today, of course, disks are usually presented to the operating system as carved-up portions of a RAID array, or as VDisks from an SVC, and the underlying storage could easily be flash. This means that the concept of carving up a disk into different performance bands is largely redundant.
UNIX also has the ability to combine many small physical volumes into a bigger logical volume, a feature originally intended to make use of old, small disks. This is rarely needed now, as that virtualisation can happen in the storage subsystem and modern volumes are several terabytes in size, but you still need to understand how UNIX works with volumes, as it is a fundamental part of the operating system.
In summary, Physical Volumes are used to create Volume Groups and the space created is split up into Physical Partitions. One or more Physical Partitions can be mapped to a Logical Partition, and these Logical Partitions are used to create Logical Volumes. These terms are discussed in a bit more detail below.
A traditional physical volume is a set of spinning magnetic platters arranged in a stack, separated by a set of read/write heads that traverse them. The volume is split into five regions: outer-edge, outer-middle, center, inner-middle, and inner-edge, and data is allocated to these regions as fixed-size physical partitions. The read/write heads move as a unit, so they read from all the platters at the same time. This means that each region is effectively a set of cylinders spanning all the platters in the stack. The heads are usually parked in the middle of the disks, so data stored on the centre cylinders can be read with minimal head movement and gets the best performance.
However, that's a traditional volume. Most disks these days are organised into RAID arrays, so the physical volume seen by a UNIX box consists of data striped over several disks, and these performance regions have little practical application. Just remember that when we talk about a physical volume in UNIX, we might mean a raw, physical disk, but we are more likely to mean a virtual volume presented by a RAID array, or even a virtual disk presented by an SVC.
In UNIX, every physical volume (PV) has a name, usually of the form /dev/hdiskN. You can list them with the lspv command, and list the detail for an individual disk with lspv -l hdisk0. This tells you how many logical and physical partitions are allocated to each logical volume, and how the data is distributed over the five regions.
If you use SAN multi-pathing, you will see multiple images of the same disk in the lspv output. For example, hdisk1 to hdisk4 could all be the same physical disk, seen over 4 SAN paths.
Physical volumes are combined together into volume groups. This was partly because physical volumes used to be small, so this allowed UNIX to create file spaces that were bigger than an individual volume.
AIX creates one volume group at install time, called the rootvg. This contains all the file spaces required to start the system, plus any other file spaces created by the installation script. The rootvg is best kept small, as it is backed up for DR purposes with the mksysb command. So while it is possible to add more physical volumes and file spaces to the rootvg for user data, these are usually created in separate volume groups. Physical volumes are added to a volume group with the extendvg command, and new volume groups are created with the mkvg command.
To list the volume groups on a system, use the lsvg command
:/users/xc085357 $ lsvg
To list the logical volumes associated with a volume group, use lsvg -l vgname, and to get specific data for an individual logical volume, use lslv lvname.
To see which physical volumes are associated with a volume group, use lsvg -p vgname.
A volume group can consist of one or more physical volumes, but a physical volume can only be a member of one volume group, and the whole physical volume is assigned to that group.
It is possible to create a volume group that consists of different types of physical volumes, but it seems intuitively wrong to mix spinning disks and flash volumes in the same volume group, as their performance characteristics are so different.
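The membership rule above can be sketched as a toy Python model (illustrative only, not a real LVM interface; the class and names are invented):

```python
# Toy model of the volume-group membership rule: a physical volume
# belongs wholly to at most one volume group.
class VolumeGroup:
    _owner = {}                      # pv name -> owning volume group name

    def __init__(self, name):
        self.name = name
        self.pvs = []

    def extend(self, pv):
        """Add a whole physical volume to this group, like extendvg/vgextend."""
        owner = VolumeGroup._owner.get(pv)
        if owner is not None:
            raise ValueError(f"{pv} already belongs to {owner}")
        VolumeGroup._owner[pv] = self.name
        self.pvs.append(pv)
```

Adding the same physical volume to a second group raises an error, mirroring the behaviour of the real extendvg/vgextend commands.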
When a physical volume is added to a volume group, the volume space is carved up into Physical Partitions. The partition size defaults to 4MB, and is fixed at the time the volume group is created. A Logical Volume is built from Logical Partitions, and each Logical Partition maps to between 1 and 3 Physical Partitions; this number is decided when the logical volume is created. This allows the logical volume to have up to 2 mirror copies for resilience. A logical volume's size can be changed by adding more logical partitions, and the mirroring can be changed after the logical volume is created.
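The partition arithmetic is straightforward; here is a minimal Python sketch (the function and sizes are illustrative, only the 4 MB default partition size comes from the text above):

```python
# Illustrative sketch of AIX partition arithmetic (not an AIX API).
def partitions_needed(lv_size_mb, pp_size_mb=4, copies=1):
    """Return (logical_partitions, physical_partitions) for a logical volume.

    Each logical partition maps to 'copies' physical partitions (1 to 3),
    so mirroring multiplies the physical space consumed.
    """
    if not 1 <= copies <= 3:
        raise ValueError("a logical partition maps to 1-3 physical partitions")
    # Round the requested size up to a whole number of partitions.
    lps = -(-lv_size_mb // pp_size_mb)
    return lps, lps * copies

# A 100 MB logical volume with one mirror copy (copies=2) needs
# 25 logical partitions but consumes 50 physical partitions.
print(partitions_needed(100, copies=2))
```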
The Logical Volume is the volume that is presented to the operating system as a storage unit. It can then be used as a raw volume, or a file system such as JFS can be installed on it.
The Linux Logical Volume Manager version 2 (LVM2) was introduced in the Linux 2.6 kernel. The basic idea behind it is that it splits and combines physical volumes into logical volumes, and it introduced volume mirroring and volume clustering. LVM2 can have 4,370,464,768 physical or logical volumes, and each device can hold 8 exabytes, provided other restrictions do not apply. For comparison, LVM1 devices were limited to 2 TB in size, and only 256 physical or logical volumes were allowed.
Logical volume management is a widely-used technique for deploying logical rather than physical storage. The basic process is this: physical disks are defined as physical volumes, and physical volumes are assigned to volume groups. A volume group can span several physical volumes, which means it can be bigger than any individual physical volume. Volume groups are then split into logical volumes. The logical volumes consist of fixed-size logical extents; the default size is 4 MB. Each logical extent maps to a physical extent on a physical volume, and a physical extent must be the same size as a logical extent. The logical extents of one logical volume can map to physical extents held on different physical volumes. This leads to two different types of mapping between logical and physical extents.
In Linear Mapping the logical extents are mapped sequentially to extents on a physical volume until that volume is full, then the mapping continues on the next physical volume. This means that the logical volume can be bigger than any physical volume.
In Striped Mapping, groups of contiguous physical extents called stripes are mapped from different physical volumes to a logical volume in rotation. This has performance advantages, as the IO workload is shared between different disk spindles.
LVM has the following advantages over raw physical partitions:
These commands assume you have physical volumes called vol1, vol2, vol3 and a volume group called volg-01
To create physical volumes use the pvcreate command
pvcreate /dev/vol1 /dev/vol2 /dev/vol3
To create a volume group use the vgcreate command
vgcreate volg-01 /dev/vol1 /dev/vol2
To add a volume to an existing volume group use the command
vgextend volg-01 /dev/vol3
To remove a volume from a volume group use
vgreduce volg-01 /dev/vol3
Note that /dev/vol3 must not hold any physical extents that are in use by logical volumes; vgreduce will refuse to remove a volume that is still in use, so migrate any extents off it first with the pvmove command.
Assuming that your logical volumes are called LVOL1 and LVOL2, to create a 150 GB logical volume use
lvcreate -n LVOL1 --size 150G volg-01
This will use linear mapping, as that is the default. Each logical volume name within a volume group must be unique, but volumes in different volume groups can have the same name. A logical volume is stored in the device directory as /dev/vol-group-name/logical-volume-name, so here /dev/volg-01/LVOL1.
To create a 150 GB logical volume called LVOL2 that uses striped mapping the command is
lvcreate -i2 -I4 --size 150G -n LVOL2 volg-01
The -i2 means use two stripes, and the -I4 means the stripe size is 4 KB. You could put /dev/vol1 and /dev/vol2 at the end of the command to force the logical volume to use stripes from those two physical volumes.
To remove a logical volume, first unmount it with the command
umount /dev/volg-01/LVOL1
Then remove it with the command
lvremove /dev/volg-01/LVOL1
To add an extra 50GB to the LVOL1 logical volume use the command
lvextend -L+50G /dev/volg-01/LVOL1
Once the logical volume is extended you need to expand the file system to use the additional space, for example with the resize2fs command for an ext4 file system.
Use the mount command to mount file systems on servers. mount with no parameters will list the current file systems and their mount locations. The syntax of the command is:
mount [-r] -t fstype device_name mountpoint
-r means mount as read-only. For -t fstype, you specify the type of file system you are mounting; options include ext4, ext3, reiserfs, or iso9660 for mounting a CD. However, if the device is defined in the file /etc/fstab, then the file system type will be picked up automatically.
The mountpoint is the directory where you are mounting this file system.
Use the umount mountpoint command to unmount a mounted drive from the file system.
Use the df command to display the amount of disk space available. df with no parameters will show the available space on all the currently mounted file systems. The full command syntax is
df [-H] [-T] directory
This limits the result to a single directory. -H shows the occupied space in human-readable units (gigabytes, megabytes, or kilobytes); -T shows the type of each file system (ext4, nfs, etc.)
To display the total disk space occupied by files and subdirectories in the current directory use the du command.
Optional parameters are -a, display the size of each individual file; -h, display output in human-readable form; and -s, display only the calculated total size. You can also add a path on the end to report on a different directory.
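The same figures are also available programmatically; a short Python sketch of df- and du-style checks (the "/" path is just an example):

```python
import os
import shutil

# df equivalent: total, used and free space for the file system holding a path.
usage = shutil.disk_usage("/")
print(f"total={usage.total} used={usage.used} free={usage.free}")

# du -s equivalent: total size of all files under a directory tree.
def du(path):
    total = 0
    for dirpath, _dirnames, filenames in os.walk(path):
        for name in filenames:
            fp = os.path.join(dirpath, name)
            if not os.path.islink(fp):      # skip symlinks, as du does by default
                total += os.path.getsize(fp)
    return total
```

This is handy in monitoring scripts, where parsing df output is fragile across platforms.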
You can check for disk performance problems with the iostat command. A typical invocation looks like this, which will gather 3 collections at 30-second intervals. You can change both the interval and the number of collections to suit your needs.
/usr/bin/iostat -xtk 30 3
Sample output will look like this
The primary statistics to review are await and %util. You want await to be 5 ms or less and %util to be 70% or less. Don't worry about occasional spikes, but sustained values above 5 ms and/or 70% utilisation can indicate a disk problem.
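A check against those thresholds can be automated; here is a hedged Python sketch (the sample report and device names are invented, and the column layout of iostat -x varies between sysstat versions, so the header is parsed rather than hard-coded):

```python
# Flag devices in iostat -x style output that breach the await/%util thresholds.
SAMPLE = """\
Device   r/s   w/s   rkB/s   wkB/s  await  %util
sda      1.0   2.0   16.0    32.0   2.10   12.0
sdb      9.0  40.0  144.0  1600.0   8.70   91.0
"""

def slow_devices(report, max_await=5.0, max_util=70.0):
    """Return device names whose await (ms) or %util exceed the thresholds."""
    lines = [l for l in report.splitlines() if l.strip()]
    cols = lines[0].split()
    i_await, i_util = cols.index("await"), cols.index("%util")
    flagged = []
    for line in lines[1:]:
        fields = line.split()
        if float(fields[i_await]) > max_await or float(fields[i_util]) > max_util:
            flagged.append(fields[0])
    return flagged

print(slow_devices(SAMPLE))   # ['sdb'] - over both thresholds
```

In a real script you would feed it the output of several iostat collections and only alert on sustained breaches, per the guidance above.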