Standard vSwitch Port Scale

Reading through the documentation for vSphere 5.5, something caught my eye… thought I’d share:

“For hosts running ESXi 5.1 and earlier, you can configure the number of ports that are available on a standard switch as the requirements of your environment change. Each virtual switch on hosts running ESXi 5.1 and earlier provides a finite number of ports through which virtual machines and network services can reach one or more networks. You have to increase or decrease the number of ports manually according to your deployment requirements.

NOTE Increasing the port number of a switch leads to reserving and consuming more resources on the host. If some ports are not occupied, host resources that might be necessary for other operations remain locked and unused.

To ensure efficient use of host resources on hosts running ESXi 5.5, the ports of virtual switches are dynamically scaled up and down. A switch on such a host can expand up to the maximum number of ports supported on the host. The port limit is determined based on the maximum number of virtual machines that the host can handle.”

I learn something new every day!!
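
For anyone still running hosts on 5.1 or earlier, here is a minimal sketch, assuming pyVmomi, of how checking and resizing that static port count might look; the host name, credentials, and the 256-port value below are made up, and on those older versions the resize still requires a host reboot to take effect.

```python
# Minimal sketch (not a definitive procedure): check, and optionally resize, the
# configured port count on a standard vSwitch with pyVmomi.
# The host name, credentials, and the 256-port value are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab use only
si = SmartConnect(host="esxi51.example.local",    # hypothetical ESXi 5.1 host
                  user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]
    net_sys = host.configManager.networkSystem

    for vs in net_sys.networkInfo.vswitch:
        print(f"{vs.name}: {vs.numPorts} ports configured")

    # On 5.1 and earlier the count is static, so resizing means rewriting the
    # switch spec (and the host needs a reboot before the change takes effect).
    vs = net_sys.networkInfo.vswitch[0]
    spec = vs.spec                 # keep the existing bridge/policy/MTU settings
    spec.numPorts = 256            # arbitrary example value
    net_sys.UpdateVirtualSwitch(vswitchName=vs.name, spec=spec)
finally:
    Disconnect(si)
```

On 5.5 you simply would not need the last few lines, since the switch scales its ports on its own.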

Transparent Page Sharing (TPS) Changes

Quick review of TPS: Transparent Page Sharing allows identical memory pages to be stored in the same place. When there is idle CPU time, vSphere looks for pages across virtual machines that can be matched with one another and shared in physical RAM. This is basically a deduplication method applied to RAM rather than storage. It allows vSphere to keep just a single copy of these shared bits even though the bits are used by two virtual machines. This may provide substantial memory savings.

Historically, this has been enabled by default. But this is about to change.

Transparent Page Sharing will be disabled by default in the next major version release, as well as in future updates to vSphere 5.x.

See KBs 2080735 and 2091682 for more information.
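
If you want to see where your hosts stand as these changes roll out, here is a minimal sketch, assuming pyVmomi, of checking the Mem.ShareForceSalting advanced setting described in those KBs; the connection details are placeholders, and the meaning of the values is my reading of the KBs, so verify against them before changing anything.

```python
# Minimal sketch: inspect the TPS salting setting described in KB 2080735
# with pyVmomi. Host/credential values are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view
    for host in hosts:
        adv = host.configManager.advancedOption
        try:
            # Per the KBs, 0 keeps inter-VM page sharing on; non-zero values
            # salt pages so sharing happens only within a VM (or salt group).
            for opt in adv.QueryOptions("Mem.ShareForceSalting"):
                print(f"{host.name}: {opt.key} = {opt.value}")
        except vim.fault.InvalidName:
            # Hosts without the relevant patch do not expose the option yet.
            print(f"{host.name}: Mem.ShareForceSalting not present")
finally:
    Disconnect(si)
```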

vSphere 5.5 Update 2

VMware released vSphere 5.5 Update 2 on Tuesday, September 9. You can find a summary of fixes here.

I’d like to point out one of the changes:

“Unable to edit settings for virtual machines with hardware version 10 using the vSphere client

When you attempt to perform the Edit Settings operation using the vSphere Client (C# Client) in a virtual machine with hardware version 10 (vmx-10), the attempt fails with the following error message:

You cannot use the vSphere client to edit the settings of virtual machines of version 10 or higher. Use the vSphere Web Client to edit the settings of this virtual machine.

This issue is resolved in this release.”

This should make a lot of customers happy!!

Book Review: Disaster Recovery using VMware vSphere Replication and vCenter Site Recovery Manager

This book is a great reference for SRM and/or vSphere Replication and is written for the current version (5.5). It details step-by-step how to deploy and configure both products. The author does an excellent job making the concepts easy to consume and uses diagrams to make the more advanced topics more understandable. Perfect for a beginner, with lots of value-add for the experienced administrator.

The book is a smooth and straightforward read. As a VMware Certified Instructor, I will happily recommend this book for students to read.

Check it out here: http://bit.ly/1kosrhz

Comparing Virtual Network Adapter Types

vlance

Emulated version of the AMD 79C970 PCnet32. Older 10 Mbps NIC with drivers available in most 32-bit guest OSes except Windows Vista and newer.

VMXNET

Paravirtualized adapter, optimized for performance in virtual machines. VMware Tools is required for the VMXNET driver.

e1000

Emulated version of the Intel 82545EM 1Gbps NIC. Available in Linux versions 2.4.19 and later, Windows XP Professional x64 Edition and later, and Windows Server 2003 (32-bit) and later. No jumbo frames support prior to ESX/ESXi 4.1.

e1000e

Emulated version of the Intel 82574 1Gbps NIC. Only available on hardware version 8 or newer VMs in vSphere 5.x. Default vNIC for Windows 8 and newer Windows guest OSes. Not available for Linux OSes from the UI.

VMXNET2

Paravirtualized adapter, providing more features than VMXNET, such as hardware offloads and jumbo frames. Limited guest OS support for VMs on ESX/ESXi 3.5 and later.

VMXNET3

Paravirtualized adapter, unrelated to previous VMXNET adapters. Offers all VMXNET2 features as well as multiqueue support, MSI/MSI-X interrupt delivery, and IPv6 offloads. Supported only for hardware version 7 or later with limited guest OS support.

The VMXNET adapters are paravirtualized device drivers for virtual networking. A paravirtualized driver improves performance because it shares a ring buffer between the virtual machine and the VMkernel and uses zero-copy, reducing internal copy operations between buffers and saving CPU cycles. The VMXNET adapters can also offload TCP checksum calculations and TCP segmentation to the network hardware instead of using the virtual machine monitor’s CPU resources.
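
To make this concrete, here is a minimal sketch, assuming pyVmomi, of adding a VMXNET3 adapter to an existing VM; the vCenter address, credentials, VM name, and port group are placeholders I made up for the example.

```python
# Minimal sketch: attach a VMXNET3 adapter to an existing VM with pyVmomi.
# The vCenter address, credentials, VM name, and port group are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = next(v for v in content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True).view
              if v.name == "web01")                        # hypothetical VM name
    net = next(n for n in vm.runtime.host.network
               if n.name == "VM Network")                  # hypothetical port group

    nic = vim.vm.device.VirtualDeviceSpec()
    nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    nic.device = vim.vm.device.VirtualVmxnet3()            # the paravirtualized vNIC
    nic.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
    nic.device.backing.network = net
    nic.device.backing.deviceName = net.name
    nic.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
        startConnected=True, allowGuestControl=True)

    WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[nic])))
finally:
    Disconnect(si)
```

Swapping vim.vm.device.VirtualVmxnet3 for, say, vim.vm.device.VirtualE1000e would attach one of the emulated adapters instead; either way, the guest needs the matching driver (VMware Tools in the VMXNET case).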

Virtual Machine Files

vSphere administrators should know the components of virtual machines. There are multiple VMware file types that are associated with and make up a virtual machine. These files are located in the VM’s directory on a datastore. The following list provides a quick reference and short description of a virtual machine’s files:

.vmx: virtual machine configuration file
.vmxf: supplemental (team) configuration file
.nvram: virtual machine BIOS/EFI settings
.vmdk: virtual disk descriptor file
-flat.vmdk: virtual disk data file
.vswp: virtual machine swap file
.vmsd: snapshot metadata file
.vmsn: snapshot state file
.vmss: suspended state file
vmware.log: virtual machine log file
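
For a live view of the same information, here is a minimal sketch, assuming pyVmomi, that reads a VM’s file layout through its layoutEx property; the VM name and connection details are placeholders.

```python
# Minimal sketch: list the files that make up a VM via its layoutEx property.
# The VM name and connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = next(v for v in content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True).view
              if v.name == "web01")                    # hypothetical VM name
    for f in vm.layoutEx.file:
        # Each entry includes a file type (config, diskDescriptor, diskExtent,
        # nvram, log, snapshotData, swap, ...), the size in bytes, and the path.
        print(f"{f.type:15} {f.size:>12}  {f.name}")
finally:
    Disconnect(si)
```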

N_Port ID Virtualization (NPIV) Requirements

N_Port ID Virtualization (NPIV) is an ANSI T11 standard that describes how a single Fibre Channel physical HBA port can register with a fabric using several worldwide port names (WWPNs), which might be considered virtual WWNs. Because this provides multiple virtual HBAs per physical HBA, WWNs can be assigned to each VM.

An advantage I see from VMs having their own WWNs is possibly Quality of Service (QoS) measurement. With each VM having its own WWN, you could conceivably track virtual machine traffic in the fabric if you had the appropriate tools. Masking and zoning could be configured per virtual machine rather than per adapter. Also, you may get more visibility of VMs at the storage array level. But you have to use Raw Device Mappings (RDMs) mapped to the VM for NPIV, which means you do not get all the benefits associated with VMFS and VMDKs.

If you plan to enable NPIV on your virtual machines, you should be aware of certain requirements; a configuration sketch follows the requirements list below.

The following requirements exist:

-NPIV can be used only for virtual machines with RDM disks. Virtual machines with regular virtual disks use the WWNs of the host’s physical HBAs.

-HBAs on your host must support NPIV.

See the vSphere Compatibility Guide and refer to your vendor documentation for more information.

-Use HBAs of the same type, either all QLogic or all Emulex. VMware does not support heterogeneous HBAs on the same host accessing the same LUNs.

-If a host uses multiple physical HBAs as paths to the storage, zone all physical paths to the virtual machine. This is required to support multipathing even though only one path at a time will be active.

-Make sure that physical HBAs on the host have access to all LUNs that are to be accessed by NPIV-enabled virtual machines running on that host.

-The switches in the fabric must be NPIV-aware.

-When configuring a LUN for NPIV access at the storage level, make sure that the NPIV LUN number and NPIV target ID match the physical LUN and Target ID.

-Use the vSphere Client to manipulate virtual machines with WWNs.
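
As mentioned above, here is a minimal sketch, assuming pyVmomi and a VM that already has an RDM attached, of what enabling NPIV might look like through the API; the VM name and the WWN counts are example values only.

```python
# Minimal sketch: have vCenter generate NPIV WWNs for an RDM-backed VM.
# VM name, credentials, and WWN counts are placeholder/example values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = next(v for v in content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True).view
              if v.name == "npiv-vm01")            # hypothetical VM name

    spec = vim.vm.ConfigSpec()
    spec.npivWorldWideNameOp = "generate"          # let vCenter create the WWNs
    spec.npivDesiredNodeWwns = 1                   # example: one node WWN
    spec.npivDesiredPortWwns = 4                   # example: four port WWNs
    WaitForTask(vm.ReconfigVM_Task(spec=spec))

    # The assigned WWNs, which you would then zone and mask per the list above,
    # show up in the VM's config:
    print(vm.config.npivNodeWorldWideNames)
    print(vm.config.npivPortWorldWideNames)
finally:
    Disconnect(si)
```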