How many virtual CPUs does the View desktop have assigned? Enter the same value when asked 'How many processors (physical and virtual) does the server have?'
How much RAM in GB does the View desktop have assigned? Enter the same value when asked 'How much memory did the server have in gigabytes?'
Was the computer 64-bit?
Was the desktop a Linked Clone?
A large delta between memory active and vRAM allocated could indicate over-sized VMs. Consider reducing the memory allocation to those VMs and monitoring the results.
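As a rough sketch of that check (the 25% threshold and the sample values below are illustrative assumptions, not VMware guidance):

# Rough sketch: flag a possibly over-sized VM by comparing active memory to allocated vRAM.
# The 25% threshold is an illustrative assumption.
def looks_oversized(active_mb: float, allocated_mb: float, threshold: float = 0.25) -> bool:
    """Return True when the VM actively uses less than `threshold` of its configured vRAM."""
    return allocated_mb > 0 and (active_mb / allocated_mb) < threshold

# Example: a 4096 MB desktop averaging ~600 MB active is a right-sizing candidate.
print(looks_oversized(active_mb=600, allocated_mb=4096))   # True
print(looks_oversized(active_mb=3500, allocated_mb=4096))  # False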
See http://www.vmware.com/files/pdf/perf-vsphere-memory_management.pdf for information on how VMware manages memory.]]>
When VMware Tools is installed on a virtual machine, it provides device drivers from within the guest operating system into the host virtualization layer. Part of this package is the balloon driver, or "vmmemctl", which can be observed inside the guest. The balloon driver communicates with the hypervisor to reclaim memory inside the guest when it is no longer valuable to the operating system. If the physical ESX host begins to run low on memory, it inflates the balloon driver to reclaim memory from the guest. This process reduces the chance that the physical ESX host will begin to swap, which would cause performance degradation.
Ballooning indicates that your ESXi hosts have some memory contention. In general, a little bit of ballooning is not bad. Ballooning at a low level allows ESXi to reclaim inactive memory pages from guest VMs. However, if there are not enough inactive pages to reclaim, active pages may be reclaimed, which can hurt the performance of the guest VM.
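For reference, a minimal pyVmomi sketch that lists VMs with ballooned memory (the vCenter name and credentials are placeholders; property names follow the vSphere API as I understand it):

# Minimal pyVmomi sketch: report VMs where the balloon driver (vmmemctl) has reclaimed memory.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    ballooned_mb = vm.summary.quickStats.balloonedMemory or 0  # MB reclaimed via ballooning
    if ballooned_mb > 0:
        print(f"{vm.name}: {ballooned_mb} MB ballooned - check for host memory contention")

view.DestroyView()
Disconnect(si)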
Resolution
- Verify Distributed Resource Scheduler (DRS) is enabled on your cluster. DRS can redistribute VMs from oversubscribed hosts to less busy hosts.
- Install more memory in physical hosts.
- Change the power policy of VMware View pools to suspend or power-off to conserve host resources. Note: this is at the expense of increased logon times for users.
- Add additional hosts to the cluster. Note the configuration maximum of 8 hosts per cluster in older versions of vSphere and View when using VMFS-based Linked Clones.
- Reduce the amount of memory given to View desktops. Review the Memory Active in MB counter (above); if active memory is substantially below configured memory, reduce the desktop memory allocation. Recommended starting values are 2 GB for XP, 2 GB for Win7 x86, and 3 GB for Win7 x64.
See http://www.virtualinsanity.com/index.php/2010/02/19/performance-troubleshooting-vmware-vsphere-memory/ for more.]]>
In vSphere, it's possible to set memory limits that define a maximum quantity of memory a VM is allowed to consume. You might want to set such limits to protect physical memory from being over-consumed by a memory-hungry VM.
However, if you don't plan carefully, setting a VM memory limit can have an unexpected impact on VM performance. That impact occurs because the VM is unaware that a limit has been placed upon it. A limited VM with 4GB of assigned vRAM operates under the assumption that it has its full memory assignment. As it attempts to use more memory than its limit, that memory must come from ballooning or swapping. Either of these actions can incur a performance tax on the host, which can impact the performance of other VMs on that host.
It's generally not a good idea to set memory limits on VMs. Instead, you should adjust downward the quantity of vRAM assigned to that VM.
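As a small sketch of that comparison (values in MB; in vSphere resource settings a limit of -1 means unlimited, which is the default):

# Sketch: detect a memory limit set below the configured vRAM (values in MB).
def memory_limited(limit_mb: int, configured_mb: int) -> bool:
    return limit_mb != -1 and limit_mb < configured_mb  # -1 = unlimited (default)

print(memory_limited(limit_mb=-1, configured_mb=4096))    # False - unlimited
print(memory_limited(limit_mb=2048, configured_mb=4096))  # True - VM will balloon/swap past 2 GB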
The Memory Limit value shown below should be equal to the amount of vRAM allocated to your View desktop. If the value is less than vRAM assigned, you are limited.]]>
See this page for vSphere 5 overhead values: http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.resmgmt.doc_50%2FGUID-B42C72C1-F8D5-40DC-93D1-FB31849B1114.html
If you chronically oversize your VMs, the overhead consumed will increase. This leads to a less efficient datacenter and reduces the resources available for running VMs, as overhead memory cannot be shared, swapped, or ballooned. Right-size your VMs.]]>
Reservations may be used in View environments to reduce the on-disk footprint of VMs, because reserved memory is excluded from the .vswap file written to disk for each VM.
A small reservation on the parent VM can also help ensure that each VM in the pool has at least a little bit of vRAM to work with in heavily oversubscribed environments. Take care not to over-reserve resources in your environment - do not reserve 100% of memory for each View desktop.
Starting values for View desktop reservations might be around 10%. For example, reserving 10% of a 2 GB desktop reduces the .vswap file by about 205 MB per desktop. On a 200-desktop deployment, this can lead to roughly 41 GB of disk savings (205 MB * 200). A 205 MB reservation will probably be enough to keep a Windows desktop running (barely) even if all other memory is ballooned or swapped out because you failed to plan and oversubscribed your environment.
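The arithmetic behind that example, as a quick sketch (the desktop size, reservation percentage, and pool size are the figures from the text):

# Worked example: 10% reservation on 2 GB desktops across a 200-desktop pool.
desktop_mem_mb = 2048
reservation_mb = desktop_mem_mb * 0.10           # ~205 MB reserved per desktop
desktops = 200

vswap_savings_mb = reservation_mb * desktops      # the .vswap file shrinks by the reserved amount
reserved_host_ram_mb = reservation_mb * desktops  # the same amount is set aside in host RAM

print(f"Per-desktop reservation: {reservation_mb:.0f} MB")    # ~205 MB
print(f"Disk saved across pool:  {vswap_savings_mb:.0f} MB")  # ~40960 MB, roughly 41 GB
print(f"Host RAM reserved:       {reserved_host_ram_mb:.0f} MB")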
Note: this example's 41 GB of disk savings will be offset by reserved ESXi host RAM across the cluster. You will not be able to use that host RAM for anything else, even if the VMs with reserved memory are totally idle, which can exacerbate problems caused by oversubscription. To mitigate this impact, consider setting a powered-off power policy on the View pools where you have set a reservation. A suspend power policy will consume disk space, negating the savings you were trying to achieve by reserving memory.]]>
Sharing is good!
In View environments, most running VMs will share a common image, especially with Linked Clone pools where all VMs run against a common base disk.
Read more about Transparent Page Sharing (TPS) in View environments here: http://myvirtualcloud.net/?p=1797 and here: http://myvirtualcloud.net/?p=2545.
When multiple virtual machines are running, some of them may have identical sets of memory content. This presents opportunities for sharing memory across virtual machines (as well as sharing within a single virtual machine). For example, several virtual machines may be running the same guest operating system, have the same applications, or contain the same user data. With page sharing, the hypervisor can reclaim the redundant copies and keep only one copy, which is shared by multiple virtual machines in the host physical memory. As a result, the total virtual machine host memory consumption is reduced and a higher level of memory overcommitment is possible.
In ESXi, the redundant page copies are identified by their contents. This means that pages with identical content can be shared regardless of when, where, and how those contents are generated. ESX scans the content of guest physical memory for sharing opportunities. Instead of comparing each byte of a candidate guest physical page to other pages, an action that is prohibitively expensive, ESX uses hashing to identify potentially identical pages.
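A toy illustration of that hash-then-compare approach (this is not ESXi's implementation; the page size and page contents below are made up):

# Toy illustration of content-based page sharing: hash pages to find candidates,
# then byte-compare the candidates before collapsing them to a single shared copy.
import hashlib
from collections import defaultdict

PAGE_SIZE = 4096
pages = [b"\x00" * PAGE_SIZE, b"A" * PAGE_SIZE, b"\x00" * PAGE_SIZE]  # made-up guest pages

candidates = defaultdict(list)
for idx, page in enumerate(pages):
    candidates[hashlib.sha1(page).digest()].append(idx)  # hash instead of byte-comparing every pair

for digest, idxs in candidates.items():
    if len(idxs) > 1:
        first = pages[idxs[0]]
        shared = [i for i in idxs if pages[i] == first]   # confirm with a full comparison
        print(f"pages {shared} are identical and could share one physical copy")
]]>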
This shows the total amount of memory saved on the particular host this VM was running on. More is better. If this value is near zero (0), investigate your host configuration.
Review http://myvirtualcloud.net/?p=1328 for information on tuning TPS settings to drive higher levels of memory sharing per host.]]>
This value alone is not significant. Shares must be considered across all VMs on the host, cluster, and resource pool, if used. If the VM with performance problems has a lower number of shares than other VMs (or is in a resource pool with a lower number of shares or with oversubscribed shares - see http://vmtoday.com/2012/03/vmware-vsphere-resource-pools-resource-allocation-revisited/), it may suffer performance degradation in environments where there is resource contention and/or memory oversubscription.
Shares play an important role in determining the allocation targets when memory is overcommitted. When the hypervisor needs memory, it reclaims memory from the virtual machine that owns the fewest shares-per-allocated page.
A significant limitation of the pure proportional-share algorithm is that it does not incorporate any information about the actual memory usage of the virtual machine. As a result, some idle virtual machines with high shares can retain idle memory unproductively, while some active virtual machines with fewer shares suffer from the lack of memory.
ESX resolves this problem by estimating a virtual machine's working set size and charging a virtual machine more for the idle memory than for the actively used memory through an idle tax. A virtual machine's shares-per-allocated page ratio is adjusted to be lower if a fraction of the virtual machine’s memory is idle. Hence, memory will be reclaimed preferentially from the virtual machines that are not fully utilizing their allocated memory. The detailed algorithm can be found in Memory Resource Management in VMware ESX Server.
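A simplified sketch of that reclamation ordering (the idle-tax factor and the sample VMs below are illustrative assumptions, not the actual ESX algorithm or its parameters):

# Simplified sketch: reclaim first from the VM with the lowest shares-per-page ratio,
# where idle pages are "taxed" so they count more heavily against the VM.
IDLE_TAX_FACTOR = 4.0  # assumption: an idle page is charged 4x an active page

def adjusted_shares_per_page(shares: float, active_pages: int, idle_pages: int) -> float:
    charged = active_pages + IDLE_TAX_FACTOR * idle_pages
    return shares / charged if charged else float("inf")

vms = {
    "idle-but-many-shares": dict(shares=2000, active_pages=1000, idle_pages=9000),
    "busy-but-few-shares":  dict(shares=1000, active_pages=9000, idle_pages=1000),
}
# Lowest adjusted ratio is reclaimed from first: the idle VM loses memory before the busy one,
# even though it owns more shares (without the tax, the busy VM would be hit first).
for name, vm in sorted(vms.items(), key=lambda kv: adjusted_shares_per_page(**kv[1])):
    print(name, round(adjusted_shares_per_page(**vm), 4))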
]]>
If this is much above zero for an active machine, you've got problems. TPS and ballooning have failed to free sufficient memory for the host to use for other running VMs because you are oversubscribed, so ESXi is now swapping to disk. Swapping is a slow operation and causes severe degradation of guest performance.
Resolution
- Verify Distributed Resource Scheduler (DRS) is enabled on your cluster. DRS can redistribute VMs from oversubscribed hosts to less busy hosts.
- Install more memory in physical hosts.
- Change the power policy of VMware View pools to suspend or power-off to conserve host resources. Note: this is at the expense of increased logon times for users.
- Add additional hosts to the cluster. Note the configuration maximum of 8 hosts per cluster in older versions of vSphere and View when using VMFS-based Linked Clones.
- Reduce the amount of memory given to View desktops. Review the Memory Active in MB counter (above); if active memory is substantially below configured memory, reduce the desktop memory allocation. Recommended starting values are 2 GB for XP, 2 GB for Win7 x86, and 3 GB for Win7 x64.
]]>
From VMware KB 1033115: When a CPU limit is set in a virtual machine's resource settings, the virtual machine is deliberately held from being scheduled to a pCPU when it has used up its allocated CPU resource. This happens regardless of CPU utilization. If the limit is set to 500 MHz, the virtual machine is descheduled from the pCPU and has to wait before it is allowed to be scheduled again. As such, the virtual machine might experience performance degradation.
Note: for an SMP virtual machine, the sum across all vCPUs cannot exceed the specified limit. For example, a 4-vCPU virtual machine with a limit of 1200 MHz and equal load among vCPUs would result in a maximum of 300 MHz per vCPU.
Translated, this means we're not slowing down the pCPU for this VM - that's not possible. What the limit does is artificially introduce CPU Ready (for more on CPU Ready see: http://vmtoday.com/2010/08/high-cpu-ready-poor-performance/) for the VM.
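The arithmetic from the KB note above, as a quick sketch (the 2600 MHz host core speed is an assumed example value):

# Per-vCPU ceiling under a CPU limit, using the KB example: 4 vCPUs, 1200 MHz limit.
def per_vcpu_ceiling_mhz(limit_mhz: float, vcpus: int) -> float:
    """With equal load across vCPUs, each vCPU tops out at limit / vcpus."""
    return limit_mhz / vcpus

def is_cpu_limited(limit_mhz: float, host_core_mhz: float) -> bool:
    """A limit below the host core speed means the VM is being throttled."""
    return limit_mhz < host_core_mhz

print(per_vcpu_ceiling_mhz(1200, 4))             # 300.0 MHz per vCPU
print(is_cpu_limited(1200, host_core_mhz=2600))  # True - assumed 2.6 GHz host core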
If the value shown here is less than the 'Host Processor Speed in MHz' value, you are limited. The default for VMs is 'unlimited', which should appear as some unwieldy number in the range of terahertz or petahertz (not that your servers can do this).
]]>
See http://www.yellow-bricks.com/2010/07/08/reservations-primer/ and http://frankdenneman.nl/2010/06/reservations-and-cpu-scheduling/ for more info on reservations.
In general, you should not be setting a reservation for View desktops. There may be some unique use cases for reserving CPU for high-profile users' desktops (CEO, CTO, your own admin console) in View. Do this sparingly, as reservations can constrain the ESXi scheduler and make your virtual datacenter inefficient. CPU reservations can introduce higher CPU Ready for VMs without reservations, and reservations may not solve CPU Ready issues for the VMs they are set on either (http://joshodgers.com/2012/07/22/common-mistake-using-cpu-reservations-to-solve-cpu-ready/).]]>
Each virtual machine is granted a number of CPU shares. The more shares a virtual machine has, the more often it gets a time slice of a CPU when there is no CPU idle time. Shares represent a relative metric for allocating CPU capacity.
This value alone is not significant. Shares must be considered across all VMs on the host, cluster, and resource pool, if used. If the VM with performance problems has a lower number of shares than other VMs (or is in a resource pool with a lower number of shares or with oversubscribed shares - see http://vmtoday.com/2012/03/vmware-vsphere-resource-pools-resource-allocation-revisited/), it may suffer performance degradation in environments where there is resource contention and/or memory oversubscription.
Read this VMware PDF for a deep-dive into CPU Scheduling: http://www.vmware.com/files/pdf/techpaper/VMW_vSphere41_cpu_schedule_ESX.pdf
If this value is variable (not a flat line), this may indicate that your VM is in a resource pool configured with a set number of shares. As your View environment dynamically spins up and down VMs within the pool, the percentage of shares for a particular machine will vary. Understand the impact of shares, reservations, and limits on VMs and resource pools.
]]>
This is CPU Ready!
For more on CPU Ready see: http://vmtoday.com/2010/08/high-cpu-ready-poor-performance/
CPU Ready is often a sign of oversubscribing CPU resources or over-sizing your virtual machines. This counter shows CPU Ready time as the fraction of each millisecond that the vCPU was ready to run but could not be scheduled by the ESXi scheduler; multiply by 100 to get the CPU Ready percentage. In general, anything above 10% CPU Ready will result in poor performance.
For more on Scheduling/Time Keeping in VM's see: http://www.vmware.com/files/pdf/Timekeeping-In-VirtualMachines.pdf. Also see the vSphere Performance Monitoring Guide here: http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-50-monitoring-performance-guide.pdf
If this counter is in a warning or critical state, check CPU Ready in vCenter for the host that this VM is running on. The value in the realtime graph is a summation of milliseconds that VMs were ready but could not be scheduled. Divide the value shown in the vCenter ESXi Performance tab by the number of millisecond time slices in the graph (realtime graph = 20-second refresh = 20000 slices). Example: if CPU Ready is 5200 on the CPU/RealTime graph, 5200/20000 = 0.26 x 100 = 26% CPU RDY. Also check ESXTOP on the ESXi host that this VM is running on. If multiple VMs show high CPU Ready, you are over-provisioning VMs with too many vCPUs or have too high a consolidation ratio (too many VMs per host). Remove vCPUs, or add additional pCPUs or hosts.
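The calculation from that example, as a small sketch (the 5200 ms sample and the 20-second realtime interval come from the text above):

# CPU Ready percentage from the vCenter realtime graph (20 s refresh = 20000 ms of time slices).
def cpu_ready_percent(ready_ms: float, interval_ms: float = 20000) -> float:
    return ready_ms / interval_ms * 100

ready_pct = cpu_ready_percent(5200)    # the example value from the text
print(f"{ready_pct:.0f}% CPU Ready")   # 26% - well above the ~10% poor-performance threshold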
]]>
This is the amount of time that the CPU is active. If the VM is using vSMP, this is the aggregate average across all vCPUs in the VM.
If this value is regularly over 70%, you may benefit from adding an additional vCPU. Be aware, however, that an additional vCPU multiplied over all VMs may drive CPU Ready upward, negatively impacting performance.]]>
This is the approximate average effective speed of the VM's virtual CPU over the time period between the two samples.
Remember, a VM's vCPU cannot go any faster than the speed of a single core/thread of the physical CPU that's backing it.]]>
This is the clock speed of the physical processor backing the VM. If this value fluctuates, it could be due to a couple of issues:
1.) vMotion of the VM between hosts with different physical hardware. Best practice for VMware vSphere clusters suggests maintaining hosts with identical hardware configurations to prevent unpredictable performance as VMs are vMotioned from host to host. Identical host configurations may also help VMware vSphere Distributed Resource Scheduler recommend the best placement for VMs.
2.) Power management settings - if your host is configured with active/dynamic power management in the BIOS, consider setting a static 'High Performance' power management profile or disabling power management in the BIOS. See this VMware KB for more info: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1018206]]>
This value is incremental - it will continue to climb over the term of the PCoIP session. Rapid climbing suggests a user is inputting audio - this may be dictation activity (medical or legal settings), WebEx sessions, etc. Consider implementing a PCoIP Group Policy with the 'Configure the PCoIP session audio bandwidth limit' setting enabled. The PCoIP session audio bandwidth limit configures audio compression to control maximum audio bandwidth.
The audio processing monitors the bandwidth used for audio. The processing selects the audio compression algorithm that provides the best audio possible, given the current bandwidth utilization. If a bandwidth limit is set, the processing reduces quality by changing the compression algorithm selection until the bandwidth limit is reached. If minimum quality audio cannot be provided within the bandwidth limit specified, audio is disabled.
To allow for uncompressed high quality stereo audio, set this value to higher than 1600 kbit/s. A value of 450 kbit/s and higher allows for stereo, high-quality, compressed audio. A value between 50 kbit/s and 450 kbit/s results in audio that ranges between FM radio and phone call quality. A value below 50 kbit/s might result in no audio playback.
This setting applies to View Agent only. You must enable audio on both endpoints before this setting has any effect.
In addition, this setting has no effect on USB audio.
If this setting is disabled or not configured, a default audio bandwidth limit of 500 kilobits per second is configured to constrain the audio compression algorithm selected. If the setting is configured, the value is measured in kilobits per second, with a default audio bandwidth limit of 500 kilobits per second.
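The quality bands described above, condensed into a small lookup (the kbit/s boundaries are the values from this text; the function name is just for illustration):

# Expected PCoIP audio quality for a configured session audio bandwidth limit.
def audio_quality(limit_kbps: int) -> str:
    if limit_kbps > 1600:
        return "uncompressed high-quality stereo"
    if limit_kbps >= 450:
        return "stereo, high-quality, compressed audio"
    if limit_kbps >= 50:
        return "FM radio to phone-call quality"
    return "audio may be disabled"

print(audio_quality(500))   # the default limit when the policy is not configured
print(audio_quality(45))    # below 50 kbit/s there may be no audio playback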
]]>
This value is incremental - it will continue to climb over the term of the PCoIP session. Rapid climbing suggests a user is streaming audio. Consider implementing a PCoIP Group Policy with the 'Configure the PCoIP session audio bandwidth limit' setting enabled. The PCoIP session audio bandwidth limit configures audio compression to control maximum audio bandwidth.
The audio processing monitors the bandwidth used for audio. The processing selects the audio compression algorithm that provides the best audio possible, given the current bandwidth utilization. If a bandwidth limit is set, the processing reduces quality by changing the compression algorithm selection until the bandwidth limit is reached. If minimum quality audio cannot be provided within the bandwidth limit specified, audio is disabled.
To allow for uncompressed high quality stereo audio, set this value to higher than 1600 kbit/s. A value of 450 kbit/s and higher allows for stereo, high-quality, compressed audio. A value between 50 kbit/s and 450 kbit/s results in audio that ranges between FM radio and phone call quality. A value below 50 kbit/s might result in no audio playback.
This setting applies to View Agent only. You must enable audio on both endpoints before this setting has any effect.
In addition, this setting has no effect on USB audio.
If this setting is disabled or not configured, a default audio bandwidth limit of 500 kilobits per second is configured to constrain the audio compression algorithm selected. If the setting is configured, the value is measured in kilobits per second, with a default audio bandwidth limit of 500 kilobits per second.
]]>
http://pubs.vmware.com/view-51/topic/com.vmware.view.administration.doc/GUID-6C22A209-AFC1-47EF-9DFF-39AFB38D655D.html]]>
See http://myvirtualcloud.net/?p=1537 for more info.]]>
See http://myvirtualcloud.net/?p=1537 for more info.]]>
Bandwidth requirements for PCoIP are defined here: http://pubs.vmware.com/view-51/index.jsp?topic=%2Fcom.vmware.view.planning.doc%2FGUID-5DC232B4-778B-4D9C-B995-B8850CF35096.html and here: http://www.vmware.com/files/pdf/view/VMware-View-5-PCoIP-Network-Optimization-Guide.pdf
With the reduced bandwidth consumption in View 5, adding 3D users over a WAN with up to approximately 100 ms of latency gives satisfactory results.
At a minimum, ensure that round-trip network latency is less than 250 ms.
]]>
USB Filtering: http://pubs.vmware.com/view-51/topic/com.vmware.view.administration.doc/GUID-C6E7AF06-1D9F-4096-8753-D6F6C7B58DB1.html#GUID-C6E7AF06-1D9F-4096-8753-D6F6C7B58DB1__TABLE_8F439159BE6344C0BFEED401A49AD9A6]]>
USB Filtering: http://pubs.vmware.com/view-51/topic/com.vmware.view.administration.doc/GUID-C6E7AF06-1D9F-4096-8753-D6F6C7B58DB1.html#GUID-C6E7AF06-1D9F-4096-8753-D6F6C7B58DB1__TABLE_8F439159BE6344C0BFEED401A49AD9A6]]>