During Belatrix’s induction to Software Quality Assurance, we remind our new hires, “don’t let your production network be your test lab!” But there are several reasons that lead a QA Engineer to perform testing in the production environment. The main reason is usually the lack of a controlled testing environment isolated from the organization’s systems.

Why don’t more organizations have their own testing lab?

The answer is often that the high cost of servers and licenses makes the investment difficult to justify.

Everyone talks about virtualization for datacenters, IT labs, development environments, and even testing labs. Virtualization provides a significant degree of efficiency, flexibility and cost savings. So how is it possible that, despite these advantages, some companies still do not have testing labs? It’s the same answer: costs are too high. Microsoft Hyper-V requires expensive licenses, Citrix XenServer and VMware ESX are just as expensive as the Microsoft solution, and even the free VMware ESXi requires specific hardware to deliver acceptable performance.

This post shares my experience of trying to solve this problem using standard PCs and free software (the company has taken the initiative to migrate all its applications to open source). I’ll introduce the tool I used, and then explain how to install it and how to do a live virtual machine migration in a small cluster.

Of course, there are many virtualization platforms other than those mentioned above. My goal, and the challenge, was to find one that would let me squeeze the most out of the old PCs while still providing a reasonably reliable service. The hardware limitations were obviously challenging: I had several old PCs, but none of them on its own could be considered “a server.” My search therefore focused on a solution that would offer high compatibility with our varied hardware, which made clustering a very important feature to consider.

We quickly dismissed the option of building several small servers, each running a Linux distribution with OpenVZ: the administration became overwhelming and wasted resources (and at that point I wasn’t thinking about scalability at all). The approach was not entirely wrong, though. I started looking for a kernel optimized for virtualization, and that is how my search led me to Proxmox VE!

“Proxmox VE is a complete virtualization management solution for server virtualization – a fully open source virtualization platform.” The reasons for choosing Proxmox as the virtualization platform were that it:

  • Is free software.
  • Supports both KVM (full virtualization) and OpenVZ (containers).
  • Has an easy-to-use web interface.
  • Is Debian based, so it can be administered over SSH and can install any package built for Debian.
  • Supports several advanced features without requiring the purchase of additional licenses:
    • Live Migration.
    • Clustering of servers.
    • Automatic backups of VMs.
    • The possibility to connect to a NAS/SAN via NFS or iSCSI.

Installing Proxmox VE

Proxmox VE is an x86_64 distribution, so you cannot install it on an i386 system. Also, if you want to use KVM (for fully virtualized systems), your CPU must support hardware virtualization (Intel VT or AMD-V). This is not required if you only want to use OpenVZ.
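
If you are not sure whether a machine qualifies for KVM, you can check from any existing Linux installation or live CD whether the CPU advertises the relevant flags (vmx for Intel VT, svm for AMD-V). A result of 0 means KVM will not be usable on that box:

# counts the CPU cores that advertise Intel VT (vmx) or AMD-V (svm)
egrep -c '(vmx|svm)' /proc/cpuinfo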

In this tutorial, I will create a small cluster of two machines, the Proxmox master (server1.example.com with the IP 192.168.0.100) and a slave (server2.example.com, IP: 192.168.0.101) so that I can demonstrate the live migration feature and also the creation and management of virtual machines on remote hosts through Proxmox VE. Of course, it is perfectly fine to run Proxmox VE on just one host.

Download the latest Proxmox VE ISO image from http://pve.proxmox.com/wiki/Downloads, burn it onto a CD, and boot your system from it. Press ENTER at the boot prompt

Accept the Proxmox license agreement (GPL)

Select the hard drive on which you want to install Proxmox. Please note that all existing partitions and data will be lost!

Select your country, time zone, and keyboard layout

Type in a password (this is the root password that allows you to log in on the shell and also to the Proxmox web interface) and your email address

Now we come to the network configuration. Type in the hostname (server1.example.com in this example), IP address (e.g. 192.168.0.100), netmask (e.g. 255.255.255.0), gateway (e.g. 192.168.0.1), and a nameserver (e.g. 145.253.2.75)
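
For reference, Proxmox uses these values to set up a network bridge (vmbr0) so that virtual machines can later share the physical NIC. Once the system is installed, /etc/network/interfaces on server1 should look roughly like this (the physical interface name, eth0 here, may differ on your hardware):

# /etc/network/interfaces (sketch of a default Proxmox VE install)
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.100
        netmask 255.255.255.0
        gateway 192.168.0.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0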

Proxmox is then installed. The installer automatically partitions your hard drive using LVM, which is why there is no partitioning dialogue in the installer. Proxmox uses LVM because it makes it possible to create snapshot backups of virtual machines.
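
If you are curious about the resulting layout, you can inspect it with the standard LVM tools after the first boot; on a default install the volume group is named pve and holds the root, swap and data logical volumes, with data mounted on /var/lib/vz, where VM images and backups are kept:

server1:~# pvs
server1:~# vgs
server1:~# lvs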

Reboot the system afterwards

After server1 has rebooted, you can open a browser and go to http://192.168.0.100/, which will redirect you to https://192.168.0.100/. If you’re using Firefox 3, it will complain about the self-signed certificate, so you must tell Firefox to accept it. To do this, click on the “Or you can add an exception” link

The Add Security Exception window opens. In that window, click on the Get Certificate button first and then on the Confirm Security Exception button

Afterwards, you will see the Proxmox login form. Type in root and the password you created during the installation

This is how the Proxmox control panel looks


Create the cluster

Having a Proxmox server cluster is extremely useful: it lets you migrate virtual machines between any of the servers in the cluster, and it lets you manage all of them from a single interface on the master node.

Fortunately, creating the cluster with Proxmox is completely trivial and is done as follows (note that Proxmox must already be installed on every computer that will be part of the cluster, and each machine must have a different hostname):
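
It also helps if the nodes can reach each other by name; if you have no internal DNS, a minimal /etc/hosts on both machines, using the names and addresses from this example, could look like this:

127.0.0.1        localhost
192.168.0.100    server1.example.com server1
192.168.0.101    server2.example.com server2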

Run the following commands on the node that will be the master:

server1:~# pveca -c
cluster master successfully created

server1:~# pveca -l
CID----IPADDRESS----ROLE-STATE---------UPTIME---LOAD----MEM---DISK
 1 : 192.168.0.100    M     A     1 day 05:11   0.08    30%     3%

 

Run the following commands on the slave node (you can have as many slaves as you want):

server2:~# pveca -l
local node ‘192.168.0.101’ not part of cluster

server2:~# pveca -a -h 192.168.0.100
cluster node successfully created

server2:~# pveca -l
CID----IPADDRESS----ROLE-STATE---------UPTIME---LOAD----MEM---DISK
 1 : 192.168.0.100    M     A     1 day 05:13   0.01    30%     3%
 2 : 192.168.0.101    N     A           01:57   0.00    11%     2%

 

Live Migration

For live migration, you need a NAS/SAN server on which to store the disks of your virtual machines. The reason is that live migration only moves the data the machine holds in memory from one computer to another; the disk data does not need to move because it already lives on the storage server. That is what makes the migration so fast.

In the example above we only have two servers, so the master node doubles as the storage server. We will create a directory, export it via NFS, and add it to the “Storage” section of the web interface.

 

Run the following commands on the Master node:

server1:~# aptitude install nfs-kernel-server
server1:~# mkdir /var/lib/vz/storage
server1:~# vi /etc/exports
/var/lib/vz/storage 192.168.0.101(rw,no_root_squash) 192.168.0.100(rw,no_root_squash)
server1:~# /etc/init.d/nfs-kernel-server restart
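
Before adding the storage in the web interface, you can check from the slave that the export is actually visible. This assumes installing the nfs-common package, which provides showmount, on server2:

server2:~# aptitude install nfs-common
server2:~# showmount -e 192.168.0.100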

 

Done!! You simply need to add the new storage device through the web interface. Then, when you create a virtual machine, keep its disk on the shared storage. You are now ready to do live migrations from the web interface!
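
Should you ever want to trigger a migration from the shell instead of the web interface, recent Proxmox VE versions provide the qm tool for KVM guests. The command below is only a sketch, with 101 as a hypothetical VM ID; check qm help migrate on your version for the exact syntax:

server1:~# qm migrate 101 server2 --online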
