Potentially poor vSphere NFS Read I/O performance with 10GbE vmnics

VMware has released a knowledge base article about a real performance issue that can occur when NFS is used with certain 10GbE network adapters on a VMware ESXi host.

In vSphere 6.0, NFS Read I/O performance (in IO/s) for large I/O sizes (of 64KB and above) with an NFS datastore may exhibit significant variations. This issue is observed when certain 10 Gigabit Ethernet (GbE) controllers are used. The performance variability reported in this KB is specific to ESXi's NFS client and does not pertain to NFS clients in a virtual machine.


If you think your vSphere NFS performance is affected, check for the following symptoms:

  • Varying performance (IOPS) with read workloads.
  • Physical NIC shows increasing packet error counts: ethtool -S vmnicX | grep rx_errors
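The symptom check above can be sketched as a small loop in the ESXi shell. This is a minimal sketch, assuming SSH access to the host and that ethtool is available; the vmnic names are placeholders to be replaced with your actual 10GbE uplinks.

```shell
# Sketch: check the rx error counters on each suspected 10GbE vmnic.
# "vmnic2" and "vmnic3" are example names -- substitute the uplinks
# reported by "esxcli network nic list" on your host.
for nic in vmnic2 vmnic3; do
    echo "--- ${nic} ---"
    ethtool -S "${nic}" | grep rx_errors
done
```

If the rx_errors counters keep climbing while a read workload is running, the host is likely hitting the issue described in the KB article.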

You should check the KB article Potentially poor NFS Read I/O performance with 10GbE vmnics (2120163).


You need to open a command-line session to your vSphere ESXi host:

  • List the NICs: esxcli network nic list
  • Check the receive ring size (current hardware settings, "RX"; default 496): ethtool -g vmnic# (# = NIC number, e.g. vmnic2)
  • Set the receive ring size to 4096: ethtool -G vmnic# rx 4096 (# = NIC number, e.g. vmnic2)
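The steps above can be sketched as a short session in the ESXi shell. This is a hedged example, not output from a real host: vmnic2 is an assumed NIC name, and you should pick the affected uplink from the list produced in the first step.

```shell
# Sketch of the full sequence, assuming an ESXi shell session.
# "vmnic2" is an example NIC name -- choose the affected uplink yourself.
esxcli network nic list        # 1. list the NICs and their link speeds
ethtool -g vmnic2              # 2. show the current RX ring size (default 496)
ethtool -G vmnic2 rx 4096      # 3. raise the RX ring size to 4096
ethtool -g vmnic2              # verify that the new RX setting took effect
```

Re-running the rx_errors check afterwards should show whether the larger receive ring has stopped the error counters from growing.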

Warp Speed for vSphere NFS

Photo courtesy of dolbinator1000 (CC Attribution)