Comparing Virtual Network Adapter Types

vlance

Emulated version of the AMD 79C970 PCnet32. Older 10 Mbps NIC with drivers available in most 32-bit guest OSes except Windows Vista and newer.

VMXNET

Paravirtualized adapter, optimized for performance in virtual machines. VMware Tools must be installed in the guest to provide the VMXNET driver.

e1000

Emulated version of the Intel 82545EM 1Gbps NIC. Available in Linux versions 2.4.19 and later, Windows XP Professional x64 Edition and later, and Windows Server 2003 (32-bit) and later. No jumbo frames support prior to ESX/ESXi 4.1.

e1000e

Emulated version of the Intel 82574 1Gbps NIC. Only available on hardware version 8 or newer VMs in vSphere 5.x. Default vNIC for Windows 8 and newer Windows guest OSes. Not available for Linux OSes from the UI.

VMXNET2

Paravirtualized adapter, providing more features than VMXNET, such as hardware offloads and jumbo frames. Limited guest OS support for VMs on ESX/ESXi 3.5 and later.

VMXNET3

Paravirtualized adapter, unrelated to previous VMXNET adapters. Offers all VMXNET2 features as well as multiqueue support, MSI/MSI-X interrupt delivery, and IPv6 offloads. Supported only for hardware version 7 or later with limited guest OS support.

The vmxnet adapters are paravirtualized device drivers for virtual networking. A paravirtualized driver improves performance because it shares a ring buffer between the virtual machine and the VMkernel and uses zero-copy, which reduces internal copy operations between buffers and saves CPU cycles. The vmxnet adapters can also offload TCP checksum calculation and TCP segmentation to the network hardware instead of using the virtual machine monitor’s CPU resources.
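A quick way to confirm this from inside a Linux guest (a sketch, assuming VMware Tools or the vmxnet3 driver is installed and the interface is named eth0; adjust the interface name as needed) is to ask ethtool which driver and offloads are in use:

ethtool -i eth0    # shows the driver in use, e.g. vmxnet3
ethtool -k eth0    # lists offload features such as tcp-segmentation-offload and checksum offload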

Virtual Machine Files

vSphere administrators should know the components of virtual machines. There are multiple VMware file types that are associated with and make up a virtual machine. These files are located in the VM’s directory on a datastore. The following list provides a quick reference and short description of a virtual machine’s files.
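The typical set looks like this (not every VM will have all of them; snapshot and suspend files only appear when those features are used):

.vmx – virtual machine configuration file
.vmdk – virtual disk descriptor file
-flat.vmdk – virtual disk data file
.nvram – virtual machine BIOS/EFI configuration
.vswp – virtual machine swap file
.vmsd – snapshot dictionary file
.vmsn – snapshot memory/state file
.vmss – suspended state file
.log – virtual machine log files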

N_Port ID Virtualization (NPIV) Requirements

N_Port ID Virtualization (NPIV) is an ANSI T11 standard that describes how a single Fibre Channel physical HBA port can register with the fabric using several worldwide port names (WWPNs), which can be thought of as virtual WWNs. Because this gives us multiple virtual HBAs per physical HBA, WWNs can be assigned to individual VMs.

An advantage I see in VMs having their own WWNs is possibly Quality of Service (QoS) measurement. With each VM having its own WWN, you could conceivably track virtual machine traffic in the fabric if you had the appropriate tools. Masking and zoning could be configured per virtual machine instead of per adapter. Also, you may get more visibility of VMs at the storage array level. But you have to use Raw Device Mappings (RDMs) mapped to the VM for NPIV, which means you do not get all the benefits associated with VMFS and VMDKs.

If you plan to enable NPIV on your virtual machines, you should be aware of certain requirements.

The following requirements exist:

- NPIV can be used only for virtual machines with RDM disks. Virtual machines with regular virtual disks use the WWNs of the host’s physical HBAs.

- HBAs on your host must support NPIV.

See the vSphere Compatibility Guide and refer to your vendor documentation for more information.

- Use HBAs of the same type, either all QLogic or all Emulex. VMware does not support heterogeneous HBAs on the same host accessing the same LUNs.

- If a host uses multiple physical HBAs as paths to the storage, zone all physical paths to the virtual machine. This is required to support multipathing even though only one path at a time will be active.

- Make sure that physical HBAs on the host have access to all LUNs that are to be accessed by NPIV-enabled virtual machines running on that host.

- The switches in the fabric must be NPIV-aware.

- When configuring a LUN for NPIV access at the storage level, make sure that the NPIV LUN number and NPIV target ID match the physical LUN and target ID.

- Use the vSphere Client to manipulate virtual machines with WWNs.

vSphere Management Assistant (vMA)

A very nice tool that an administrator can download is a virtual appliance called the vMA, which is Linux based (SUSE Linux Enterprise Server 11 SP1) and has several components, including the vCLI and the vSphere SDK for Perl. The requirements for the vMA aren’t too crazy: 1 vCPU, at least 600 MB of memory, and a 3 GB disk minimum, and it needs to be deployed on one of these platforms:
 
– vSphere ESX 4.0 U2 or newer
– vSphere ESXi 4.1 or newer
– vCenter 4.0 U2 or newer 
 
This appliance gets deployed like any other; first we download it from VMware’s website. Once we have the *.ova or *.ovf file, we go to File > Deploy OVF Template in the vSphere Client and finish the wizard.
 
The vMA will allow us to manage our hosts via the command line as well as run scripts. Commands sent directly to a host go through the vSphere SDK for Perl API; if we send commands to a host through vCenter, they are sent to vCenter via the vSphere SDK for Perl and then from vCenter to the host using vCenter’s private protocol.
 
Using the vifp interface, we can add, list, or remove target servers. A target server is the host or vCenter Server we want to run commands against. If we want to set a target as the default for the current session, we use the vifptarget command. We can establish multiple servers as target servers; we just need to use the --server option to specify which one we are running our commands against. So, for example:
 
To add a server as a target: 
vifp addserver <server>
 
To list target servers:
vifp listservers
 
To set a server as the target for the vMA session: 
vifptarget -s <server>
 
To remove a target server:
vifp removeserver <server>
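Putting it together, a typical session might look like this (a sketch; the host name esxi01.lab.local and the vicfg-nics command used as a quick test are just examples):

vifp addserver esxi01.lab.local
vifp listservers
vifptarget -s esxi01.lab.local
vicfg-nics -l

Once vifptarget has set the session target, vCLI commands such as vicfg-nics run against that host without having to pass --server each time.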

Configure vCenter Server storage filters

vCenter Server provides storage filters in its advanced settings to help you avoid storage device corruption or performance degradation that can be caused by an unsupported use of LUNs. These filters are available by default. We have a few different ones to choose from: VMFS filter, RDM filter, Same Host and Transports filter, and Host Rescan Filter.
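For reference, the advanced setting key that corresponds to each filter (as documented by VMware) is:

VMFS Filter – config.vpxd.filter.vmfsFilter
RDM Filter – config.vpxd.filter.rdmFilter
Same Host and Transports Filter – config.vpxd.filter.SameHostAndTransportsFilter
Host Rescan Filter – config.vpxd.filter.hostRescanFilter
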
Each filter has a corresponding key, listed above; we can look for these keys in the vCenter Server Advanced Settings to change or disable the filters (setting a key to False turns that filter off). To do this:
– Log into vCenter using the vSphere Client
– Go to Home > vCenter Server Settings
– Click Advanced Settings
– Enter or modify the keys above
– Click OK

Understand and apply VMFS resignaturing

Each VMFS 5 datastore contains a Universally Unique Identifier (UUID). The UUID is stored in the file system metadata (the superblock) and is a unique hexadecimal number generated by VMware.

When a duplicate (byte-for-byte) copy of a datastore or its underlying LUN is created, the resulting copy contains the same UUID. As a VMware administrator, you have two options when bringing the duplicate datastore online:

1. VMware prevents two datastores with the same UUID from being mounted at the same time, so you can choose to unmount the original datastore and then bring the duplicate datastore online afterward with its existing UUID.

2. Alternatively, you may create a new UUID (this is known as a resignature) for the datastore and this way both disks may be brought online at the same time. A resignatured disk is no longer a duplicate of the original because it has a different UUID.
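A sketch of how this looks from the ESXi Shell using esxcli (syntax from ESXi 5.x; the volume label is a placeholder):

esxcli storage vmfs snapshot list
esxcli storage vmfs snapshot mount -l <label>          # mount the copy keeping its existing UUID (original must be offline)
esxcli storage vmfs snapshot resignature -l <label>    # assign a new UUID so both copies can be online at once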

Windows administrators can relate to this when they clone Windows systems. A new SID must be generated for any cloned machine; otherwise, you will encounter duplicate SID errors on your network when you bring multiple machines with the same SID online at the same time.

net-dvs Command

In order to view more information about the distributed switch configuration, use the net-dvs command. This is only available in the local shell. Notice that it specifies information like the UUID of the distributed switch and the name. We can also see information regarding Private VLANs if we have those set up.
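For example, from the ESXi Shell (net-dvs is an unsupported command, so the exact output varies between builds; the -f option shown here reads the locally cached copy of the distributed switch data):

net-dvs
net-dvs -f /etc/vmware/dvsdata.db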

If we keep scrolling down, we can see the MTU and CDP information for the distributed switch. Notice that we can also set up LLDP for a distributed switch. Next we see information regarding the port groups and how they are configured, including VLAN and security policy settings. At the bottom we see some information on network resource pools if we have Network I/O Control enabled and are using this feature.

The last section of the net-dvs output contains information that is very useful during troubleshooting. We can see whether or not packets are being dropped, and based on the amount of traffic going in and out we can decide whether we need to apply traffic shaping.

esxtop Memory View

There are many useful things to look at when in the memory view of esxtop.

Several important counters appear near the top of the esxtop screen:

PMEM /MB – memory for the host

VMKMEM /MB – memory for the VMkernel

PSHARE /MB – ESXi page sharing statistics

SWAP /MB – ESXi swap usage statistics

ZIP /MB – ESXi compression statistics

MEMCTL /MB – ESXi balloon statistics

Now looking at the virtual machines listed below the host information, you can see several counters that can be of use when troubleshooting an individual VM or group of VMs:

MEMSZ – amount of configured guest physical memory

GRANT – amount of guest physical memory granted

SZTGT – amount of memory to be allocated to a machine

TCHD – amount of guest physical memory recently used by the VM

TCHD_W – write working set estimate for a resource pool

SWCUR – current swap usage

SWTGT – expected swap usage

SWR/s – swap in from disk rate

SWW/s – swap out to disk rate

LLSWR/s – memory read from host cache rate

LLSWW/s – memory write to host cache rate

OVHDUW – overhead memory reserved for the vmx user world of a VM group.

OVHD – amount of overhead currently consumed by a VM

OVHDMAX – amount of reserved overhead memory for a VM

Ideally, you’ll look at esxtop and never see any balloon, compression, or swap activity. However, if you do see this activity, the ESXi host is overcommitted and memory is in contention. More resources need to be added to the ESXi host or the cluster, or some of the VMs need to be moved to an ESXi host with memory resources available.
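To pull up this view and capture these counters (a sketch; the delay, iteration count, and output file name are just examples):

esxtop            # then press m for the memory view and f to add or remove fields
esxtop -b -d 5 -n 12 > memstats.csv    # batch mode: 12 samples, 5 seconds apart, for offline review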

esxtop CPU View

The default view of esxtop is the CPU view; there are several useful counters here.

GID – group ID

NAME – virtual machine name

NWLD – number of worlds

%USED – percentage of physical CPU time accounted to this world

%RUN – percentage of total scheduled time for the world to run

%SYS – percentage of time spent by system services for that world

%WAIT – percentage of time spent by the world in a wait state

%VMWAIT – a derivative of %WAIT that does not include %IDLE

%RDY – percentage of time the world was ready to run but was not scheduled on a physical CPU

%IDLE – percentage of time the vCPU world is in idle loop

%OVRLP – percentage of time spent by system services on behalf of other worlds

%CSTP – percentage of time the world spent in a ready, co-descheduled state (only relevant to SMP VMs)

%MLMTD – percentage of time world was ready to run but was not scheduled because that would violate “CPU limit” settings

%SWPWT – percentage of time the world is waiting for the VMkernel to swap memory

High CPU ready time is a major indicator of CPU performance issues; you may have excessive use of vSMP or a limit set (check %MLMTD for the latter). Another metric to check is %CSTP, which helps you determine whether you can decrease the number of vCPUs on some of the virtual machines to improve scheduling opportunities.

High %SYS is usually caused by a virtual machine with heavy I/O. High %SWPWT is usually caused by memory overcommitment.
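When reading %RDY, keep in mind that the value shown for a VM is summed across all of its worlds, so normalize it per vCPU. For example, a 4-vCPU VM showing %RDY of 20 averages roughly 5% ready time per vCPU, which is generally fine, while the same 20 on a 1-vCPU VM would be a real concern; a common rule of thumb (not an official threshold) is to start investigating at around 10% ready time per vCPU.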

esxtop Network View

The last post discussed navigating esxtop; now let’s get into each view a little bit more.

There are several network counters shown by default when you go to the networking view; here’s a brief overview of each:

PKTTX/s – # of packets transmitted per second

MbTX/s – megabits transmitted per second

PKTRX/s – # of packets received per second

MbRX/s – megabits received per second

%DRPTX – percentage of transmit packets dropped

%DRPRX – percentage of receive packets dropped

A major indicator of potential network performance issues is dropped packets. This can be indicative of a physical device failing, queue congestion, bandwidth issues, etc.

Something else to check when having network issues is high CPU usage; the CPU ready time counter (%RDY) can be helpful when diagnosing CPU issues.

If you are having these issues in your environment, consider using jumbo frames and taking advantage of hardware features provided by the NIC such as TSO (TCP Segmentation Offload) and TCP checksum offload.

Also, make sure to check physical network trunks, inter-switch links, and so on for overloaded links.

Consider moving the VM with high network demand to another switch, adding more uplinks to the virtual switch, and checking which vNIC driver is being used.
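A sketch of that last check from the ESXi Shell (the datastore, VM folder, and file names are placeholders): the physical uplinks and their drivers can be listed with esxcli, and the vNIC type configured for a VM shows up in its .vmx file:

esxcli network nic list
grep ethernet0.virtualDev /vmfs/volumes/<datastore>/<vmfolder>/<vmname>.vmx    # e.g. ethernet0.virtualDev = "vmxnet3"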