XenServer Scalability — Performance Tuning

In the field we noticed performance issues in a particular VDI environment, both in the number of virtual machines the customer was able to start on their XenServer hosts and in the performance within the user sessions.
In that environment it was not possible to run more than 60–65 machines on one XenServer host. With a higher machine count, dom0 was fully loaded and the PVS driver showed poor throughput of approximately 4 MB/s or worse. The user experience dropped massively once 65 VMs had been started.
The scenario:
 · PVS 6.1
 · XenServer 6.0.2
 · Provisioned Windows 7 (64 Bit) Clients
 · PVS-Network not separated (10 Gbit Broadcom)
 · XenServer-Host: Dell M910 Blade Server (4*10 Cores, 512 GB RAM)
 · SAN Fibre Channel Connection
 · Write cache method: Cache on device hard disk (SAN storage)
 · PVS-Server not virtualized
The more machines we started, the longer the complete power-on and startup process of each machine took.
Of course, PVS environments generate high network and storage load, and all of that traffic goes through dom0. Even so, we were surprised that dom0 became fully loaded with only 65 machines started.
I’m aware that our best practice guide says not to run more than 70–130 machines on one host, but the challenge was to start at least 100 machines.
Modifying the following two parameters let us start more than 150 machines on the same host while maintaining good performance.
1. Increase the number of vCPUs assigned to dom0 to 8
That can be done by executing the following commands on the XenServer console:
/opt/xensource/libexec/xen-cmdline --set-xen dom0_max_vcpus=8
echo 'NR_DOMAIN0_VCPUS=8' > /etc/sysconfig/unplug-vcpus
1.1 Attention: Before executing these commands, the C-state option in the XenServer host BIOS should be disabled if necessary, according to the following article: http://support.citrix.com/article/CTX127395
The article applies to a specific type of CPU. If you ignore this advice, your XenServer host could crash.
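Assuming the change was applied and the host rebooted, the effective vCPU count can be checked from within dom0 — a minimal check based on the fact that `/proc/cpuinfo` inside dom0 lists one entry per dom0 vCPU:

```shell
# Count the vCPUs currently visible to dom0 (run on the dom0 console).
# After the change above and a reboot, this should report 8.
grep -c ^processor /proc/cpuinfo
```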
2. Increase the storage buffer size to its limit of 256
The following command has to be executed to achieve that:
/opt/xensource/libexec/xen-cmdline --set-dom0 blkbk.reqs=256
 This command improves the storage throughput.
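After a reboot, the value can be read back to confirm it took effect. The sysfs path below is an assumption based on how kernel module parameters are usually exposed; it will only exist on a host where the blkbk module publishes its "reqs" parameter:

```shell
# Read back the configured value (run on the XenServer console after reboot).
# Assumption: the blkbk module exposes "reqs" under /sys/module.
f=/sys/module/blkbk/parameters/reqs
if [ -r "$f" ]; then
  cat "$f"
else
  echo "blkbk parameter not exposed on this host"
fi
```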
These two parameters in combination made it possible to run a large number of provisioned virtual machines on the same host with very good performance. It was also necessary to increase the amount of memory assigned to dom0 to the maximum of 2940 MB (http://support.citrix.com/article/CTX126531).
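One way to apply that memory increase on a 6.x host is via the same `xen-cmdline` helper — a sketch, assuming `dom0_mem` is passed through `--set-xen` like the other Xen boot options; verify the exact procedure against CTX126531 before using it:

```shell
# Raise dom0 memory to 2940 MB (host-specific; requires a reboot).
# Assumption: dom0_mem is accepted by --set-xen like dom0_max_vcpus above.
/opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=2940M
```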
If you need to tune your XenServer environment in this way, always keep an eye on the following three indicators in order to gauge the performance level of your host:
1. Boot time of the provisioned virtual machine (reported by the PVS target device driver)
2. PVS data throughput (reported by the PVS target device driver)
3. Load of dom0 (visible by running the “xentop” command on the XenServer console)
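Indicator 3 can also be scripted: `xentop` has a batch mode (`-b`, with `-i` for the iteration count) whose output is plain text, so dom0’s CPU load can be pulled out with `awk`. Below, the batch output is mocked with illustrative values (not from a real host); on a XenServer console you would pipe `xentop -b -i 1` instead:

```shell
# Illustrative xentop batch output; real data would come from: xentop -b -i 1
sample='      NAME  STATE   CPU(sec) CPU(%)     MEM(k) MEM(%)
  Domain-0 -----r      51234   312.5    2969600    0.6
     VM001 --b---       1201     2.1    2097152    0.4'

# Pull the dom0 CPU percentage (4th column of the Domain-0 row).
# With 8 dom0 vCPUs, values approaching 800% mean dom0 is saturated.
echo "$sample" | awk '/Domain-0/ {print "dom0 CPU%:", $4}'
# → dom0 CPU%: 312.5
```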
These settings should only be used if there is a real need, and then they should be tested adequately: for instance, begin with one server and let it run under real usage for at least two days. If that server is still running without any complications, move on to the next one. It is still recommended to use the XenServer default settings; increasing the number of dom0 vCPUs is not supported. If you do go this way, it is necessary to implement all of the steps stated in this post.
Keep in mind: these settings were only tested in this specific environment.
There is no warranty that this solution will work without issues in your environment.
 Article by Mauro Cesar Fileto
 You can also:
 Follow me on Twitter at @m_fileto
 Find me on Facebook at Mauro Fileto
 Find me on Google Plus at Mauro C. Fileto
 Connect with me on LinkedIn at Mauro Cesar Fileto
