The Definitive Guide to KVM Virtualization

Kernel-based Virtual Machine (KVM) technology has been around since 2006 and was merged into the mainline Linux kernel in 2007. KVM turns the Linux kernel into a type-1 (bare-metal) hypervisor: it shares control of the physical hardware with the kernel, allowing guest VMs to run directly on the hardware. According to IBM, the TCO of KVM virtualization is typically 39% lower than that of VMware virtualization. While that calculation dates to 2012, it remains relevant as of this writing.

In today’s article, we will look at how to install KVM on an Ubuntu 18.04 server. This is the first article in a larger series on KVM virtualization. In future articles, we will explore topics such as KVM clustering, the different networking types in KVM, Ceph storage with KVM, KVM BAU tasks, Kimchi for KVM administration, and KVM graphical support.

Lab Setup

While the recommended configuration for KVM virtualization in a production environment will depend on the individual use case, for this lab a system with 4 cores, 8 GB of RAM, and two 20 GB hard drives should be adequate. You can get away with less hardware if resources are tight. If you have only a single hard drive available, skip the “Storage Management” section below. The lab machine should have Ubuntu 18.04 server installed and an active internet connection.

Nested KVM Virtualization

In my case, I have gone for nested virtualization. My primary machine runs Manjaro, on which I have installed KVM. Within this KVM environment, I created a virtual machine running Ubuntu 18.04 server; let us call this virtual machine ubuntu-primary. This ubuntu-primary machine acts as the KVM host for this lab. I have tried this lab on a bare-metal server as well, and it works seamlessly. In case you have an existing KVM installation and wish to carry out nested KVM virtualization, you can check for nested virtualization support by issuing the command $ cat /sys/module/kvm_intel/parameters/nested. If you see a “Y” in the output, you are good to go. If required, you can enable it by issuing the command $ echo 'options kvm_intel nested=1' | sudo tee -a /etc/modprobe.d/qemu-system-x86.conf and rebooting your system. On AMD CPUs, the module is kvm_amd, so check /sys/module/kvm_amd/parameters/nested and use the module option kvm_amd nested=1 instead.
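If you would rather not remember which module applies to your CPU, the small sketch below checks both standard module parameter paths (kvm_intel and kvm_amd) and prints whichever is present:

```shell
#!/bin/sh
# Print nested-virtualization status for whichever KVM module is loaded.
# Prints nothing if neither module is present (e.g. KVM is not loaded).
for mod in kvm_intel kvm_amd; do
  p="/sys/module/$mod/parameters/nested"
  if [ -f "$p" ]; then
    printf '%s nested: %s\n' "$mod" "$(cat "$p")"
  fi
done
```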

CPU Virtualization Extensions

KVM virtualization requires hardware support. To determine if your system supports virtualization extensions, run the following command:

$ egrep -c '(svm|vmx)' /proc/cpuinfo

If you get an output other than “0”, your CPU supports full virtualization. At times, hardware manufacturers disable virtualization extension support in the BIOS. In that event, enable it in your BIOS settings.
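If you also want to know which extension your CPU exposes, rather than just a count, a small sketch can report it by name (vmx is Intel VT-x, svm is AMD-V; these are the standard /proc/cpuinfo flag names):

```shell
#!/bin/sh
# Report which hardware virtualization flag the CPU advertises.
flags=$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null)
case "$flags" in
  *vmx*) echo "Intel VT-x (vmx) supported" ;;
  *svm*) echo "AMD-V (svm) supported" ;;
  *)     echo "no virtualization extensions found" ;;
esac
```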

Now, let us install the kvm-ok utility. This program helps us determine if the system can host hardware-accelerated KVM virtual machines.

$ sudo apt install cpu-checker
$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

We are now ready to install KVM virtualization.

Installation

We begin our KVM virtualization journey by installing the required packages.

$ sudo apt-get update && sudo apt-get upgrade -y
$ sudo apt-get install qemu qemu-kvm bridge-utils libvirt-bin virtinst libguestfs-tools -y

Explanation of Packages Installed

qemu: QEMU (Quick Emulator) is a PC system emulator that simulates peripherals such as buses, adapters, serial ports, sound cards, and USB controllers.
qemu-kvm: KVM extensions that allow QEMU to use hardware acceleration.
bridge-utils: Utilities (such as brctl) for creating and managing Ethernet bridges.
libvirt-bin: libvirt is a collection of programs for virtual machine management and other virtualization functionality, such as storage and network interface management.
virtinst: A set of command-line tools that help to provision new virtual machines.
libguestfs-tools: A set of tools that allow you to access and modify virtual machine disk images.

In Ubuntu 18.04, on installation of the qemu and libvirt-bin packages, the local user is added to the libvirt group and the libvirt service is enabled and starts automatically. You can check the status of the libvirtd service by running the following command:

$ service libvirtd status

● libvirtd.service - Virtualization daemon
   Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-07-02 05:22:49 UTC; 1min 19s ago
     Docs: man:libvirtd(8)
           https://libvirt.org
 Main PID: 18517 (libvirtd)
    Tasks: 19 (limit: 32768)
   CGroup: /system.slice/libvirtd.service
           ├─18517 /usr/sbin/libvirtd
           ├─19038 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
           └─19039 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper

Jul 02 05:22:55 earth dnsmasq-dhcp[19038]: DHCP, sockets bound exclusively to interface virbr0
Jul 02 05:22:55 earth dnsmasq[19038]: reading /etc/resolv.conf
Jul 02 05:22:55 earth dnsmasq[19038]: using nameserver 127.0.0.53#53
Jul 02 05:22:55 earth dnsmasq[19038]: read /etc/hosts - 7 addresses
Jul 02 05:22:55 earth dnsmasq[19038]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Jul 02 05:22:55 earth dnsmasq-dhcp[19038]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Jul 02 05:22:55 earth dnsmasq[19038]: reading /etc/resolv.conf
Jul 02 05:22:55 earth dnsmasq[19038]: using nameserver 127.0.0.53#53
Jul 02 05:22:55 earth dnsmasq[19038]: reading /etc/resolv.conf
Jul 02 05:22:55 earth dnsmasq[19038]: using nameserver 127.0.0.53#53

In case the libvirtd service is not running, you can start and enable it with the following commands:

$ sudo systemctl start libvirtd
$ sudo systemctl enable libvirtd

KVM Networking

KVM virtualization supports the following networking options:

Host-Only Network: In a host-only network, the host assigns an IP to the guest VM, and only the host machine can communicate with the guest; no other machine can access it. Note that in a host-only network, the guest VM will not have internet access even if the host has a working internet connection.

NAT Network: In a NAT environment, the guest VM is assigned an IP on a different subnet than the host. The guest can access the outside world, but only the host has access to the guest. To reach services on the guest from outside the host, you need to enable NAT port forwarding. In this article, we will explore the NAT network configuration without port forwarding: our guest VMs will have access to the outside world, but only the host will be able to access them. In a later article, we will look at NAT port forwarding along with the other networking modes.

Bridged Network: In this case, the guest VM sits on the same network as the host. It has the same network access as the host machine, and any machine on the host network can access it.

Routed Network: Routed mode provides the same kind of network segmentation as a NAT network, but it relies on you, the operator, to make routing changes: you are responsible for setting up a static route on the upstream router. Routed mode is used in enterprise deployments when you want to assign public IPs to the guest VMs or have other routing solutions in place.

Now that you know the different networking modes, you can choose the appropriate mode for your use case.

Creating a NAT Network

As mentioned above, for our deployment of KVM virtualization, we will create a NAT network. Let us start by creating a configuration file that will define our NAT network.

$ sudo nano virbr1.xml

<network>
  <name>virbr1</name>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr1' stp='on' delay='0'/>
  <ip address='192.168.10.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.10.2' end='192.168.10.100'/>
    </dhcp>
  </ip>
</network>
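If you prefer not to edit the file interactively in nano, the same definition can be written non-interactively with a heredoc (a sketch; it creates virbr1.xml in the current directory, so adjust the path to taste):

```shell
#!/bin/sh
# Write the NAT network definition without opening an editor.
cat > virbr1.xml <<'EOF'
<network>
  <name>virbr1</name>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr1' stp='on' delay='0'/>
  <ip address='192.168.10.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.10.2' end='192.168.10.100'/>
    </dhcp>
  </ip>
</network>
EOF
```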

Here we have named our bridge “virbr1”. We will now define and start the network using virsh commands.

$ sudo virsh net-define virbr1.xml
Network virbr1 defined from virbr1.xml

$ sudo virsh net-start virbr1
Network virbr1 started

$ sudo virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 virbr1               active     no            yes

The virsh net-list command lists all available networks. Our new network is active, however, not marked to auto start on system reboot. Let us fix that and confirm the auto start status.

$ sudo virsh net-autostart virbr1
Network virbr1 marked as autostarted

$ sudo virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 virbr1               active     yes           yes

The $ brctl show command will also list our newly created bridge.

$ brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.5254006eca83       yes             virbr0-nic
virbr1          8000.52540080c52d       yes             virbr1-nic

Since we will not be using the default network, we will destroy and undefine it. If you are the curious type, you can see the configuration of the default network by typing the command $ virsh net-dumpxml default before destroying it.

$ sudo virsh net-destroy default
Network default destroyed

$ sudo virsh net-undefine default
Network default has been undefined

$ sudo virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 virbr1               active     yes           yes

$ brctl show
bridge name     bridge id               STP enabled     interfaces
virbr1          8000.52540080c52d       yes             virbr1-nic

We need IP forwarding enabled for our network to work properly. To check the status of IP forwarding, issue the command:

$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

If you see “0” in the output, you can enable IP forwarding for the current session by typing sudo sysctl -w net.ipv4.ip_forward=1. For the change to persist across reboots, issue the command $ sudo sed -i "/net.ipv4.ip_forward=1/ s/# *//" /etc/sysctl.conf, which uncomments the net.ipv4.ip_forward line in /etc/sysctl.conf.
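The persistent change simply uncomments the net.ipv4.ip_forward line that Ubuntu ships in /etc/sysctl.conf. The sketch below demonstrates the same sed edit against a throwaway copy of that line, so you can see its effect without touching the real file:

```shell
#!/bin/sh
# Demonstrate the uncomment-edit on a temporary copy of the relevant line.
# On the real system, run the same sed against /etc/sysctl.conf with sudo.
conf=$(mktemp)
printf '#net.ipv4.ip_forward=1\n' > "$conf"
sed -i "/net.ipv4.ip_forward=1/ s/# *//" "$conf"
cat "$conf"   # the line is now active: net.ipv4.ip_forward=1
rm -f "$conf"
```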

Storage Management

We manage storage in KVM through storage pools and volumes. Storage pools are quantities of storage set aside by a system or storage administrator for use by KVM guest virtual machines. Administrators assign portions of a storage pool, called volumes, to virtual machines as block devices. By default, libvirt uses a directory-based storage pool at /var/lib/libvirt/images/ to manage virtual machine volumes. Storage pools are not required for the proper functioning of virtual machines; in their absence, it is up to the system administrator to ensure that storage is always available to the virtual machines, mounting it manually and updating the /etc/fstab file where required. If you do not want to use storage pools, or have only a single hard drive, you can safely skip this section and continue to the next section on “Guest Virtual Machine Creation”.

KVM supports various types of storage pools, including directory pools, filesystem pools, disk pools, LVM pools, iSCSI pools, GlusterFS pools, ZFS pools, etc. You can read more about storage pools on the libvirt site under storage management.

In our case, our guest machine images will reside on LVM volumes. The abstraction layer provided by LVM will allow us to increase the storage as and when required. Note that while libvirt allows you to define an LVM based storage pool, it only supports complete disk partitions and does not support thin provisioning.

Let us start by looking at all our block devices:

$ sudo lsblk
[sudo] password for msambare: 
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 88.5M  1 loop /snap/core/7270
loop1    7:1    0 88.4M  1 loop /snap/core/7169
sr0     11:0    1 1024M  0 rom  
vda    252:0    0   20G  0 disk 
├─vda1 252:1    0    1M  0 part 
└─vda2 252:2    0   20G  0 part /
vdb    252:16   0   20G  0 disk

I would like to place all my virtual machine images (LVM volumes) on vdb. The drive shows up as “vdb” rather than “sdb” because I am using nested virtualization and the disk is presented as a paravirtualized virtio device. Please choose your drive carefully, as the commands below will wipe all existing data on it. We will begin by creating the physical volume, followed by the volume group.

$ sudo pvcreate /dev/vdb
  Physical volume "/dev/vdb" successfully created.

$ sudo vgcreate kvm_store /dev/vdb
  Volume group "kvm_store" successfully created

We will now define the storage pool and start it.

$ sudo virsh pool-define-as guest_images_lvm logical - - /dev/vdb kvm_store /dev/kvm_store
Pool guest_images_lvm defined

$ sudo virsh pool-start guest_images_lvm
Pool guest_images_lvm started

The storage pool should start automatically at every system boot. Let us enable autostart and confirm its status.

$ sudo virsh pool-autostart guest_images_lvm
Pool guest_images_lvm marked as autostarted

$ sudo virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 guest_images_lvm     active     yes

Now that we have set up the storage pool, we can move on to creating our first guest.

Guest Virtual Machine Creation

Before we can create our first guest, we need to download the ISO of the target OS. In my case, I will create an Ubuntu 18.04 guest and hence will download that CD image.

$ sudo mkdir /iso
$ cd /iso
$ sudo wget http://cdimage.ubuntu.com/releases/18.04/release/ubuntu-18.04.2-server-amd64.iso

With the ISO in place, we can now spin up our first virtual machine. One final step is to choose the correct os-variant flag for the guest installation. We can list all the guest operating systems, along with their variants, that KVM supports for installation using the osinfo-query command. You will need to install the libosinfo-bin package to get the list.

$ sudo apt install libosinfo-bin
$ osinfo-query os

If you do not find the desired os variant in the list, you can choose the closest match for the flag.

I suggest you start a screen session for the guest OS installation. Once you have installed the OS and the guest system reboots, you can exit the screen session by typing the keyboard combination Ctrl+a followed by \ and continue with the lab. Do not forget to select OpenSSH Server during the installation process, or you will not be able to ssh into your virtual machine.

$ screen -S ubuntu

$ sudo virt-install --name ubuntu1804 \
  --ram 2048 \
  --disk path=/dev/kvm_store/ubuntu1804,size=8 \
  --vcpus 2 \
  --os-type linux \
  --os-variant ubuntu18.04 \
  --network bridge=virbr1,model=virtio \
  --graphics none \
  --console pty,target_type=serial \
  --location /iso/ubuntu-18.04.2-server-amd64.iso \
  --extra-args 'console=ttyS0,115200n8 serial'

All the install options are self-explanatory. In case you skipped the “Storage Management” section of the lab, specify the disk path as --disk path=/var/lib/libvirt/images/ubuntu1804.img,size=8 in the virt-install command above. If you want to know all the options, you can look up the man pages or type $ virt-install --help at your command prompt.

Connecting To Our Virtual Machine

To SSH into your virtual machine, you need to know the IP address assigned to it by the DHCP server. Before we get the IP address of the virtual machine, let us ensure that the guest is running.

$ virsh list --all

 Id    Name                           State
----------------------------------------------------
 1     ubuntu1804                     running

Now that we know that the guest is running, let us get its IP address and ping it from the host so we know that it is reachable.

$ sudo virsh net-dhcp-leases virbr1

 Expiry Time          MAC address        Protocol  IP address                Hostname        Client ID or DUID
-------------------------------------------------------------------------------------------------------------------
 2019-07-05 17:23:40  52:54:00:44:00:ad  ipv4      192.168.10.21/24          ubuntu-guest    ff:32:39:f9:b5:00:02:00:00:ab:11:4a:3a:2f:68:2f:d9:d4:4f

$ ping -c3 192.168.10.21

PING 192.168.10.21 (192.168.10.21) 56(84) bytes of data.
64 bytes from 192.168.10.21: icmp_seq=1 ttl=64 time=0.341 ms
64 bytes from 192.168.10.21: icmp_seq=2 ttl=64 time=0.331 ms
64 bytes from 192.168.10.21: icmp_seq=3 ttl=64 time=0.333 ms

--- 192.168.10.21 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2049ms
rtt min/avg/max/mdev = 0.331/0.335/0.341/0.004 ms

Alternatively, you can also use the following commands to retrieve the MAC and IP address:

$ virsh dumpxml ubuntu1804 | grep "mac address" | awk -F\' '{ print $2}'

52:54:00:44:00:ad

$ arp -an | grep 52:54:00:44:00:ad

? (192.168.10.21) at 52:54:00:44:00:ad [ether] on virbr1
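The two commands above can be combined into a small lookup script that turns a MAC address into an IP address. The sketch below runs against canned arp output so it works anywhere; on the real host you would replace the heredoc with the live arp -an output:

```shell
#!/bin/sh
# Given a guest MAC address, extract its IPv4 address from arp-style output.
# The heredoc stands in for: arp -an
mac='52:54:00:44:00:ad'
ip=$(grep "$mac" <<'EOF' | sed 's/.*(\([0-9.]*\)).*/\1/'
? (192.168.10.21) at 52:54:00:44:00:ad [ether] on virbr1
EOF
)
echo "$ip"   # 192.168.10.21
```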

The final step in our KVM virtualization journey is to ssh into the guest virtual machine and ping the outside world.

$ ssh mangesh@192.168.10.21

$ ping -c3 google.com

PING google.com (172.217.163.46) 56(84) bytes of data.
64 bytes from maa05s01-in-f14.1e100.net (172.217.163.46): icmp_seq=1 ttl=55 time=18.2 ms
64 bytes from maa05s01-in-f14.1e100.net (172.217.163.46): icmp_seq=2 ttl=55 time=20.2 ms
64 bytes from maa05s01-in-f14.1e100.net (172.217.163.46): icmp_seq=3 ttl=55 time=18.8 ms

--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 18.225/19.092/20.217/0.833 ms

Static IP Assignment

When you do KVM virtualization, there will be times when you need to assign static IPs to your virtual machines. You can do so easily once you know the guest’s MAC address. The command virsh dumpxml ubuntu1804 | grep "mac address" | awk -F\' '{ print $2}' will print the virtual machine’s MAC address. Once we know the MAC address, we can edit the configuration of our network to finish the assignment. Let us start by examining our network configuration.

$ sudo virsh net-dumpxml virbr1
<network>
  <name>virbr1</name>
  <uuid>37fa930b-30bf-43f5-b431-aa252043c99e</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:80:c5:2d'/>
  <ip address='192.168.10.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.10.2' end='192.168.10.100'/>
    </dhcp>
  </ip>
</network>

Just below the DHCP range line, we need to append the line <host mac='52:54:00:44:00:ad' name='ubuntu1804' ip='192.168.10.10'/> using the $ sudo virsh net-edit virbr1 command. Our final configuration will look like this:

<network>
  <name>virbr1</name>
  <uuid>37fa930b-30bf-43f5-b431-aa252043c99e</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:80:c5:2d'/>
  <ip address='192.168.10.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.10.2' end='192.168.10.100'/>
      <host mac='52:54:00:44:00:ad' name='ubuntu1804' ip='192.168.10.10'/>
    </dhcp>
  </ip>
</network>

We now need to restart the DHCP service.

$ sudo virsh net-destroy virbr1
Network virbr1 destroyed

$ sudo virsh net-start virbr1
Network virbr1 started

Restart your virtual machine if it is running and ping it with the new static IP address.

$ ping -c3 192.168.10.10
PING 192.168.10.10 (192.168.10.10) 56(84) bytes of data.
64 bytes from 192.168.10.10: icmp_seq=1 ttl=64 time=0.804 ms
64 bytes from 192.168.10.10: icmp_seq=2 ttl=64 time=0.617 ms
64 bytes from 192.168.10.10: icmp_seq=3 ttl=64 time=0.619 ms

--- 192.168.10.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2029ms
rtt min/avg/max/mdev = 0.617/0.680/0.804/0.087 ms

If you want to assign static IPs to multiple machines, append as many <host> lines as you need below the DHCP range, as shown above.
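If you are scripting this for several guests, the <host> lines can be generated from a simple list. The MACs, names, and IPs below are hypothetical placeholders; substitute your own values:

```shell
#!/bin/sh
# Emit one <host> DHCP reservation line per "mac name ip" entry,
# ready to paste inside the <dhcp> block of the network definition.
while read -r mac name ip; do
  printf "      <host mac='%s' name='%s' ip='%s'/>\n" "$mac" "$name" "$ip"
done <<'EOF'
52:54:00:44:00:ad ubuntu1804 192.168.10.10
52:54:00:44:00:ae ubuntu1804-web 192.168.10.11
EOF
```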

Final Thoughts

In this article, we looked at how to do KVM virtualization the right way. In future KVM virtualization articles, we will explore topics such as KVM clustering, the different networking types in KVM, Ceph storage with KVM, KVM BAU tasks, Kimchi for KVM administration, and KVM graphical support. Let me know your thoughts on this article via the comments section. All suggestions are welcome.

About the author

Mangesh Sambare

I am a passionate technologist and an evangelist when it comes to open-source technologies and their adoption. In addition to computing; photography, politics and religion pique my interest.

I am also the CTO and Co-Founder of Stribog IT Solutions, an IAAS and managed service provider with presence in India and the United States.
