Update 1: This issue was fixed as of 3/15/2012 in ESXi 5.0 Update 1 per the original knowledge base article: VMware KB 2008144. To download ESXi 5.0/vCenter Server 5.0 Update 1, see the VMware Download Center.
Update 2 (Nov 7 2012): While VMware says the issue was fixed in vSphere 5.0 Update 1, I have not found this to be the case. I’ve had several environments where the bug was still present, and in some cases this nasty bug still appeared even after implementing the Failback = No setting on the vSwitch. Additionally, setting Failback = No may cause some other problems – see my update here for information from Chris Towles.
Update 3 (Nov 8 2012): I received an email from Nick Eggleston about this issue – Nick experienced this problem at a customer site and heard from VMware support that “There is a vswitch failback fix which may be relevant and has not yet shipped in ESXi 5.0 (it postdates 5.0 Update 1 and is currently targeted for fixing in Update 2). However this fix *has* already shipped in the ESXi 5.1 GA release.” Thanks, Nick!
Update 4 (Nov 9 2012): Once I knew what this bug looked like, I found other symptoms that can help you diagnose the issue in your environment. One of the most telling is that the switches (assuming Cisco) carrying your iSCSI traffic may show high Output Queue Drops (OQD) counters when you run a ‘show interfaces summary’. This is because the packets sent out the wrong vmnic have no business being on the switch they were routed to, so the switch drops them. I’ve also noticed that when this bug is causing issues, the MAC address table (show mac address-table) loses all entries for the iSCSI vmknics and, in some cases, the storage array’s front-end ports.
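For reference, these are roughly the switch-side checks I ran. This is only a sketch – the exact command spelling and output columns vary between IOS and NX-OS versions, and the MAC address shown is a placeholder, not one from my environment:
show interfaces summary                          ! watch for climbing OQD (output queue drop) counters on the iSCSI ports
show mac address-table                           ! entries for the iSCSI vmknics and array front-end ports may be missing
show mac address-table address 0050.56xx.xxxx    ! spot-check a specific vmknic MAC (placeholder address)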
I recently stumbled on two vSphere 5 ESXi networking bugs that I thought I would share. The issues are very similar at a cursory level, but have different symptoms, troubleshooting steps, and implications for your architecture, so I’m going to split them into two separate posts. Because troubleshooting these issues was a real pain, I’ll provide some details on how to identify them in your environment and wrap up with a third post on what I believe to be some best practices to avoid these problems and achieve greater redundancy and resiliency in your vSphere environments.
The Problem
Today, we’ll look at an ESXi 5 networking issue that caused massive iSCSI latency, lost iSCSI sessions, and lost network connectivity. I’ve been able to reproduce this issue in several environments, on different hardware configurations. Here’s the background information on how all this started: I upgraded an ESXi 4.1 host to ESXi 5 using vSphere Update Manager (VUM). Note that I did use the host upgrade image that contained the ESXi500-201109001 iSCSI fixes – if you are upgrading to vSphere 5 and have iSCSI in your environment, use this image. Here’s a quick look at how the networking was configured on this host:
The iSCSI networking was configured in a very typical setup, per the best practices outlined in VMware’s documentation as well as by many vendors (see EMC Chad Sakac’s ‘A Multivendor Post on using iSCSI with VMware vSphere’): two vmnic uplinks and two vmknics, each vmknic with one active adapter on the correct layer-2/layer-3 network and the other adapter set to unused.
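If you want to confirm that active/unused override from the ESXi Shell, something like the command below should show it; the port group name is just an example from my lab, so substitute your own:
esxcli network vswitch standard portgroup policy failover get --portgroup-name=iSCSI-A    # expect one vmnic under Active Adapters and the other under Unused Adapters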
After the upgrade, the standard vSwitch with two vmnic uplinks (Broadcom NetXtreme II BCM5709 1000Base-T) and two vmknics that serviced the software iSCSI adapter failed to pass traffic (vmkping to the iSCSI targets failed), and the host could not mount ANY iSCSI LUNs. VM network, management, and vMotion ports were not affected.
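For anyone checking for the same symptom, the basic connectivity test that was failing looked roughly like this. The target address and frame size are examples (8972 bytes is the usual don’t-fragment test for a 9000 MTU path), and the -I flag to pin the source vmk is only available on newer ESXi builds, so omit it where it isn’t supported:
vmkping -d -s 8972 10.10.10.100              # jumbo-sized, don't-fragment ping to an iSCSI target portal
vmkping -I vmk2 -d -s 8972 10.10.10.100      # same test pinned to a specific iSCSI vmk, where -I is supported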
If I let the host sit long enough, it *might* find a couple of paths to the storage, but even then performance had deteriorated, per vmkernel.log:
WARNING: ScsiDeviceIO: 1218: Device naa.60026b90003dcebb000003ca4af95792 performance has deteriorated. I/O latency increased from average value of 5619 microseconds to 495292 microseconds.
Troubleshooting
I’m going to dump a whole bunch of my troubleshooting steps on you – hopefully they not only help folks dealing with this particular bug, but also help with general network and configuration troubleshooting in VMware vSphere. During troubleshooting, I removed the binding for these two vmknics from the iSCSI adapter, removed the software iSCSI adapter itself, removed the vmknics from the vSwitch, and removed the vSwitch itself. I then recreated the vSwitch, set the vSwitch MTU to 9000, recreated the two vmk ports, set their MTU to 9000, assigned IP addresses, and set the failover order for multipath iSCSI. I then re-created the software iSCSI adapter and bound the two vmk ports. I was able to pass vmk traffic and mount iSCSI LUNs. Great – problem solved!?!?! Not so much – I rebooted the host and the problem returned.
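For reference, the rebuild just described can also be scripted from the ESXi Shell. This is only a sketch with example names and addresses (vSwitch1, iSCSI-A/B, vmk2/vmk3, vmnic2/vmnic3, vmhba33, 10.10.x.x), not a copy/paste recipe for your environment:
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-A
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-B
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-A --mtu=9000
esxcli network ip interface add --interface-name=vmk3 --portgroup-name=iSCSI-B --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.10.10.11 --netmask=255.255.255.0 --type=static
esxcli network ip interface ipv4 set --interface-name=vmk3 --ipv4=10.10.11.11 --netmask=255.255.255.0 --type=static
# per-port-group failover override: one active uplink each; the uplink left off the list should land in Unused
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-A --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-B --active-uplinks=vmnic3
# enable the software iSCSI adapter, bind both vmks to it, then rescan
esxcli iscsi software set --enabled=true
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3
esxcli storage core adapter rescan --adapter=vmhba33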
Here are my next troubleshooting steps:
- I repeated the procedure above and regained connectivity, but the problem returns on subsequent reboots. I can verifiably recreate the problem.
- I verified end-to-end connectivity for other hosts on the same Layer 1, Layer 2, and Layer 3 network as the iSCSI initiator and iSCSI targets.
- I verified the ESXi host’s networking configuration using the vSphere client, double-checking the vSwitch, vmnic uplinks, and vmknic configurations. Everything looked good so I canceled out.
- I then reinstalled ESXi from scratch (maybe something was left over from 4.1 that a clean install would weed out), built up the same configuration, and was again able to re-create the problem.
- I pored over the logs (vmkernel.log, syslog.log, and storagerm.log primarily – see the grep sketch after this list). I could see intermittent loss of storage connectivity, failures to log into the storage targets (duh – there is no connectivity, no vmkping), and high storage latency on hosts where I had rebuilt the iSCSI stack and run a few VMs.
- I switched out the Broadcom NIC for an Intel NIC (the Broadcom had hardware iSCSI capabilities – I wanted to be sure the hardware iSCSI was not interfering).
- I verified TOE was enabled.
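Here is roughly how I was digging through those logs on the host. The search strings are just examples based on the messages quoted elsewhere in this post:
grep -i "performance has deteriorated" /var/log/vmkernel.log     # the latency warnings shown above
grep -i "iscsi" /var/log/vmkernel.log | grep -i "fail"           # failed target logins and session drops
tail -f /var/log/vmkernel.log                                    # watch live while rescanning the storage adapters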
The ‘Ah-Ha’ Moment
Next, I verified the ESXi host’s networking configuration using the vSphere client one more time – the properties of the vSwitch, the properties of the vmkernel (vmk) ports, the manual NIC teaming overrides, IP addressing, etc. Everything looked correct – I MADE NO CHANGES – but when I clicked OK (last time I canceled) to close the vSwitch properties, I was greeted with this warning:
Wait a second… I didn’t change anything, so why am I being prompted with a ‘Changing an iSCSI Initiator Port Group’ warning? I like to live dangerously and wanted to see what would happen, so I said ‘Yes’.
Much to my surprise, after only viewing and closing the vSwitch and iSCSI vmk port group settings, I was able to complete a vmkping on the iSCSI-bound vmks. Moreover, I completed a rescan of all storage adapters and my iSCSI LUNs were found, mounted, and ready for use. Problem solved? Nope. The same ugly issue re-appeared after a reboot.
While the problem wasn’t solved, I now had something to work with. My go-to troubleshooting question “What Changed?” could maybe be answered. Even though I didn’t change anything in the vSwitch Properties GUI, something changed. To see what changed in the background, I compared the output of the following ESXi Shell (or vCLI, or PowerCLI) commands before and after making ‘the change’ happen (by viewing the properties of the vSwitch/vmk ports), but found no changes.
- esxcfg-vswitch -l
- esxcfg-vmknic -l
- esxcfg-nics -l
Then I made a backup copy of esx.conf:
cp /etc/vmware/esx.conf /etc/vmware/esx.conf.bak
I then caused ‘the change’ and compared checksums using md5sum, but found no differences:
md5sum /etc/vmware/esx.conf /etc/vmware/esx.conf.bak
I compared the running .conf and the backup .conf, but found no differences:
diff /etc/vmware/esx.conf /etc/vmware/esx.conf.bak
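In other words, the whole before/after comparison looked roughly like this; the /tmp file names are just examples:
esxcfg-vswitch -l  >  /tmp/net-before.txt
esxcfg-vmknic -l   >> /tmp/net-before.txt
esxcfg-nics -l     >> /tmp/net-before.txt
# ... open and OK out of the vSwitch/vmk properties in the vSphere Client ...
esxcfg-vswitch -l  >  /tmp/net-after.txt
esxcfg-vmknic -l   >> /tmp/net-after.txt
esxcfg-nics -l     >> /tmp/net-after.txt
diff /tmp/net-before.txt /tmp/net-after.txt    # no output – nothing visibly changed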
Call in Air Support
At this point, I was out of ideas so I called for help: “Hello, 1-866-4VMWARE, option 4, option 2 – help!”
After repeating many of the same troubleshooting steps, the support engineer decided that I had hit a known, and not yet patched, bug. The details of the bug are included in KB 2008144: Incorrect NIC failback occurs when an unused uplink is present. That’s right – my iSCSI traffic, vmkpings, etc. were being sent down the wrong NIC – the UNUSED NIC. Ouch. The bug caused the networking stack to behave in a very unpredictable way, making my troubleshooting steps, and any other advanced troubleshooting ideas I had (sniffing, logs, etc.), next to useless.
Once I knew what the issue was, I could see a bit of evidence in the logs:
WARNING: VMW_SATP_LSI: satp_lsi_pathIsUsingPreferredController:714:Failed to get volume access control data for path "vmhba33:C0:T0:L4": No connection
NMP: nmp_DeviceUpdatePathStates:547: Activated path "NULL" for NMP device "naa.60026b90003dcebb0000c7454d5cc946".
WARNING: ScsiPath: 3576: Path vmhba33:C0:T0:L4 is being removed
Notice the NULL path – the path can’t be interpreted correctly when it is being sent down the wrong (unused) vmnic, which is on a different subnet and VLAN. The gotcha on this issue is that I had followed best practices where applicable and accepted the default settings on the vSwitch and vmknics.
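If you want to check path state from the command line while chasing this, the following should show the broken paths alongside the healthy ones; the device and adapter names will obviously differ in your environment:
esxcli storage nmp device list     # per-device path selection policy and the working paths
esxcli storage core path list      # look for paths reported as dead rather than active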
The Quick Fix
VMware KB 2008144 offers two workarounds for this bug. The quick fix is to simply change the Failback setting to “No” (the default is Yes), either on the vSwitch running the software iSCSI vmknics, or on the vmknic port group itself if you have other port groups on the vSwitch (such as a VM Network port group that gives your guest VMs access to the iSCSI network).
Changing Failback = No on the iSCSI vmknics and then rescanning the storage adapters fixed the glitch immediately.
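The same change can be made from the ESXi Shell if you prefer; the vSwitch and port group names below are examples, so substitute your own:
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --failback=false                   # Failback = No on the whole vSwitch...
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-A --failback=false        # ...or only on the iSCSI port groups
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-B --failback=false
esxcli storage core adapter rescan --all                                                                       # then rescan the storage adapters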
Architecture Changes
The second workaround from VMware is “Do not have any unused NICs present in the team.” This translates to a slightly different architecture than the one described in many documents. To achieve this workaround, the configuration would have to change to two vSwitches, each with a single vmnic uplink and a single vmk port bound to the iSCSI adapter. This change does not impact redundancy or availability when compared with the single-vSwitch, two-vmk configuration I was running, as one of the vmnics was set to unused anyway. This workaround does add a bit more complexity, as there are a few more elements to configure, monitor, manage, and document.
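A sketch of that second workaround from the ESXi Shell might look like the following; as before, the names (vSwitch2/vSwitch3, vmnic2/vmnic3, vmk2/vmk3, iSCSI-A/B, vmhba33) are examples, not prescriptions:
esxcli network vswitch standard add --vswitch-name=vSwitch2                               # one vSwitch, one uplink, one vmk per path – no unused adapters anywhere
esxcli network vswitch standard set --vswitch-name=vSwitch2 --mtu=9000
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic2
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=iSCSI-A
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-A --mtu=9000
# repeat for vSwitch3 / vmnic3 / vmk3 / iSCSI-B, then bind both vmks to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3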
This problem seems to present itself only on vSphere Standard Switches (vSwitches), although I could not get confirmation of this (please post a comment if you know!). Assuming this is true, a vSphere Distributed Switch (vDS) could be used for software iSCSI traffic. Mike Foley has a write-up on how to migrate iSCSI from a vSwitch to a vDS on his blog here: https://www.yelof.com/?p=72.
A Couple More Notes
My troubleshooting ‘fix’ of viewing the vSwitch settings and clicking OK seemed to temporarily resolve the issue because it triggered an up/down event on the vmk of the unused uplink. This caused the network stack to re-evaluate paths and start using the correct, Active, uplink.
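If you need the same temporary relief without clicking through the GUI, my understanding is that bouncing the affected uplink from the ESXi Shell triggers a similar re-evaluation. Treat this as a workaround sketch only, not a fix, and note the vmnic name is an example:
esxcli network nic down --nic-name=vmnic3    # link-down event on the problem uplink
esxcli network nic up --nic-name=vmnic3      # bring it back up to force path re-evaluation
esxcli storage core adapter rescan --all     # then rescan the storage adapters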
Note that this problem can occur outside of my iSCSI use case – any vSwitch, port group, or vmknic with an unused adapter set in the NIC Teaming failover order is susceptible to this bug, so watch for it on redundant vMotion networks (vMotion randomly fails), VM networks (sudden loss of guest connectivity), or even your management network (hosts fall out of manageability from vCenter and can’t be contacted via SSH, the vSphere client, etc.).
Leave a comment if you’ve experienced this bug – your notes on the problem may help others find and work around the issue until VMware releases a fix. I understand that a fix for this particular bug is not due out until at least vSphere 5 Update 1.
I’ll have another (shorter) writeup on the 2nd networking bug I found in ESXi 5 later in the week – check back here for a link once it is published.
ubergiek says
Great write-up Joshua. I ran into a very similar issue, with similar log entries, and latencies. I too had to call in for “Air Support”. In my case, both EMC/VMware were needed to fix some UCS B-series and IBM 3550s connecting via iSCSI/FC to an EMC VNX 5700 running Block OE 5.31.000.5.709. After reviewing the configuration, EMC informed me that there was a bug with OE 5.31 that made ESXi 5.0 incompatible with VMware’s NMP Round Robin fail-over policy. Their fix action was to change all paths to FIXED.
Once this was accomplished, the latency was reduced to acceptable levels.
What is interesting is that in later versions of ESXi 5.0 Update 1, when the hosts were set to ALUA, they automatically changed to NMP FIXED. I had never seen this behavior before, so I can’t confirm 100%, but I can say that our latency issues were resolved.
Thanks again for the post!
Ubergiek
habibalby says
Thanks for the great info…
I’m facing the issue when I enable the VMkernel binding for the IBM targets but leave the EMC out of the binding. As soon as I reboot the host, the EMC LUNs disappear, and whatever I do they don’t come back unless I remove the VMkernel binding of the IBM targets. The LUNs presented via the IBM SAN storage still appear and are not affected. When I remove the binding, all the EMC LUNs appear again.
habibalby says
Hello,
Just found that after binding all the VMkernel ports to the iSCSI initiator adapter, all the LUNs appear for both SANs, IBM + EMC.
VMware ESXi 5.0.0 build-623860
Donald says
We have had several hosts that lost connection to vCenter, and when I look at the hosts themselves they show the exact same problems you have been having: NULL multipath and high latency. VMware tech support always points to the storage and says a reboot is the only way to fix this. I have tried re-scanning the HBAs from the ESXi terminal, but it doesn’t help either. We are using redundant NICs for iSCSI, redundant switches, and redundant controllers for the EQs, which do not show the host as ever losing connection, but the host still shows the LUNs having high latency.
Any progress with VMware as to when they may have this fixed?
Chris A says
Let me preface this by saying I’m sure I must have some incorrect configuration somewhere. But I have 3 ESXi 5.5 hosts, one of which experienced the issue described in your article. Each host has 4 vmknics dedicated to iSCSI traffic, all configured on a single vSwitch, all with manually configured dedicated physical adapters. All physical adapters are attached to dual redundant switches, which are in turn attached to a Dell MD3200i with 2 controllers. I believe in VMware lingo the array is active/passive, as ESXi can see all ports on both controllers, but ports can actively be passing traffic from ONLY one controller at any time. Each controller has 4 ports, meaning in ESXi there should be 8 paths (with only 4 actively passing traffic).
2 of my hosts were indeed seeing 8 paths, but the other host was only seeing 5 paths. I spent a couple of weeks on it, pulling out my hair, until I finally reduced my troubleshooting to ensuring all the configuration settings on that 1 host were the same as on the other 2. As it turns out, the host with only 5 paths did NOT have the “Failback” setting set to “No”. As soon as I “corrected” that misconfiguration and did a rescan, that host immediately saw all 8 paths. Again, I’m sure this problem must have been solved by now by VMware, and I MUST have an incorrect configuration somewhere that I’m missing. But I thought it interesting that it matched the symptoms in your article.