1. VICs present vNICs to the host. On ESXi these vNICs appear as VMNICs, and the VMNICs are presented to the vSwitch as its uplink ports. The downlink ports of the vSwitch are grouped into different port groups, and these port groups are presented to the Virtual Machines. Each Virtual Machine has its own vNICs, which connect to the port groups, and the traffic goes out via the VMNICs to the physical world.
2. With a standard vSwitch, if you are doing vMotion from one ESXi host to another, the port group names must match (case sensitive) on both hosts. Keeping these in sync is a tedious task, and this is where the Distributed Virtual Switch (DVS) is useful. DVS also supports NIOC.
3. The DVS is spread across the Datacenter, meaning all the ESXi hosts in all the clusters inside that datacenter will see this DVS. So when you create a port group on the DVS, the port group is seen by all ESXi hosts. A port group on a DVS is called a Distributed Port Group.
4. When you create the DVS, you assign the unused VMNICs of the ESXi hosts to the DVS as its uplinks.
5. You can create multiple DVS for specific functions, and you can control access to each DVS individually. When you create multiple DVS, you assign different unused VMNICs to each one so that the traffic is kept separate. It is recommended to have a different DVS for each type of traffic: one DVS for Mgmt, a different DVS for IP Storage, and so on. A VMkernel NIC/VMkernel port/VMkernel port group is needed for IP storage.
6. The VMNICs can be configured for static Etherchannel on a standard vSwitch, and for static or LACP-negotiated Etherchannel on a DVS. But be aware that when a Cisco VIC card presents different vNICs as VMNICs to ESXi and those vNICs are connected to different FIs, you cannot have a LAG constituted from VMNICs going to different FIs, because the FIs do not support vPC.
7. VMware NSX does not support the “Route Based on Physical NIC Load” type of Load Balancing.
8. For the iSCSI initiator, the binding must be to a VMkernel port group. Also, only one VMNIC should be active and all other VMNICs should be set to unused. If this is not met, iSCSI port binding will not be possible.
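As a sketch of the binding step on the ESXi shell (the adapter name vmhba33 and VMkernel port vmk1 are assumptions; substitute your own):

```shell
# List iSCSI adapters to find the software iSCSI adapter (e.g. vmhba33)
esxcli iscsi adapter list

# Bind the VMkernel port to the software iSCSI adapter.
# This only succeeds if vmk1 has exactly one active uplink.
esxcli iscsi networkportal add -A vmhba33 -n vmk1

# Verify the binding
esxcli iscsi networkportal list
```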
9. In vSphere 6 there are three different TCP/IP stacks: the default TCP/IP stack for regular VM network traffic; another TCP/IP stack for vMotion, which requires higher bandwidth and can have a different gateway; and another TCP/IP stack for Provisioning (cloning of VMs).
To add a new stack, on the CLI use “esxcli network ip netstack add -N <name of stack>”. The custom created stack can be chosen when creating a new VMkernel adapter.
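A sketch of the full sequence on the ESXi shell (the stack name, vmk2, port group name, and IP addressing are all example assumptions):

```shell
# Create a custom TCP/IP stack
esxcli network ip netstack add -N vmotion-stack

# Create a VMkernel adapter on that stack
# (vmk2 and the port group name are examples; use your own)
esxcli network ip interface add -i vmk2 -p vMotion-PG -N vmotion-stack

# Assign a static IPv4 address to the new VMkernel adapter
esxcli network ip interface ipv4 set -i vmk2 -t static -I 10.10.10.11 -N 255.255.255.0

# Verify the stack and the adapter
esxcli network ip netstack list
esxcli network ip interface list
```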
10. If you have only two 10G adapters on the physical server, bandwidth can be controlled by Network IO Control (NIOC).
11. For iSCSI you create the datastore as VMFS, but for NFS you create it as an NFS-type datastore. When there are lots of paths available to the storage, it is advised to create NFS-based storage. This is the reason hyperconverged vendors like Nutanix use NFS-based datastores. vSphere 6 can support up to 4 iSCSI VMkernel ports, so you can have up to 4 different paths.
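A sketch of mounting an NFS datastore from the ESXi shell (the NFS server IP, export path, and datastore names are made-up examples):

```shell
# Mount an NFS v3 export as a datastore
esxcli storage nfs add -H 192.168.10.50 -s /vol/ds01 -v nfs-ds01

# For NFS 4.1, use the nfs41 namespace instead
esxcli storage nfs41 add -H 192.168.10.50 -s /vol/ds01 -v nfs41-ds01

# List mounted NFS datastores
esxcli storage nfs list
```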
12. vSphere 6 supports NFS 4.1. NFS 4.1 has support for Kerberos Authentication. NFS 4.1 also supports Multipathing IO; NFS 3 does not.
13. VSAN is supported on both the Standard vSwitch and the DVS; on the DVS there is much more capability. For VSAN to work properly, the underlying physical network should have low latency, bandwidth guarantees, and resiliency. VSAN uses VMkernel ports: a VSAN network needs to be created, and the VMkernel ports talk to that VSAN network.
For a stretched VSAN cluster (supported in VSAN 6.1), the underlying network infra should be more reliable and high speed (Metro Ethernet). Do not do a stretched cluster over the WAN. Earlier versions of VSAN worked in hybrid mode (Disks and PCIe Flash). With VSAN 6.1 there is support for DIMM Flash.
14. VSAN 6.1 supports clustering such as Oracle RAC and Microsoft Clusters, DIMM Flash, and vSphere Replication (replication of data to a replication site).
15. VSAN uses multicast for the heartbeat and to identify the VSAN nodes that constitute the VSAN cluster (the other hosts with VMkernel ports connected to the VSAN network). Ensure the physical network supports and is configured for IGMP Snooping.
16. Both the DVS and the Standard Switch support Basic Multicast Filtering, but only the DVS supports IGMP Snooping and MLD Snooping. Basic Multicast Filtering depends on MAC filtering; the issue is that 32 multicast IP addresses map to a single multicast MAC, which makes basic filtering less useful. IGMP Snooping is based on the IP, meaning whenever there is a Join message from a VM, the virtual switch learns the MAC/IP/port of the VM that sent the Join, so any traffic for that specific multicast group will be sent only to that VM.
17. Adding a customized service port so that it can be allowed through the vSphere firewall must be done on the CLI; it cannot be done from the GUI.
a. Change to the firewall configuration directory (/etc/vmware/firewall on ESXi)
b. Create a new service.xml file:
vi service.xml newservice.xml
This first opens the existing service.xml file; from it, choose the respective service and yank it. To search for a specific service, type slash followed by the service name (e.g. /dhcp). To yank, first count the number of lines in the service definition; for example, the DHCP service spans 17 lines in service.xml, so from the first line of that service type 17yy, which copies those 17 lines to the buffer. Then type :n to move to the new file and press p to paste the copied lines. Add <ConfigRoot> on the top line of the new file and </ConfigRoot> at the bottom. To change the copied service ID, service name, and service port number, position the cursor on the word and press cw, which replaces the entire word. To save and quit, type :wq!
c. Run “esxcli network firewall refresh”. This adds the service to the list. You can run “esxcli network firewall ruleset list” or “esxcli network firewall ruleset rule list” to verify that the new service was added.
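As a sketch, a minimal rule file for the new service (the service id "0045", name "myservice", and port 9000 are made-up examples; the element layout mirrors the entries already present in service.xml):

```xml
<ConfigRoot>
  <service id="0045">
    <id>myservice</id>
    <rule id="0000">
      <direction>inbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>9000</port>
    </rule>
    <enabled>false</enabled>
    <required>false</required>
  </service>
</ConfigRoot>
```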
18. To check the iSCSI adapter and portal lists, use the commands “esxcli iscsi adapter list”, “esxcli iscsi physicalnetworkportal list”, and “esxcli iscsi logicalnetworkportal list”. To check networkportal details, use “esxcli iscsi networkportal list”. To check iSCSI sessions, use “esxcli iscsi session list | more” and “esxcli iscsi session connection list”.
19. To check network-related Tx/Rx, run esxtop, hit ? for help, then hit n for the network view. To see fields that are not currently shown, run esxtop and hit f to add them to the displayed list. Typing a capital letter adds the respective field to the output, and typing the lower-case letter removes it from the esxtop output.
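esxtop can also be run non-interactively on the ESXi shell for offline analysis; a sketch (the delay and iteration counts are just example values):

```shell
# Batch mode (-b): sample every 5 seconds (-d 5) for 12 iterations (-n 12),
# writing the counters as CSV for later analysis
esxtop -b -d 5 -n 12 > esxtop-capture.csv
```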
20. To do a packet capture, first find the port ID for the VM in question from the esxtop network view. Then run “pktcap-uw --switchport <port id> -c <count of packets> -s <size of packets> -o <filename.pcap>”. To read the capture, use “tcpdump-uw -r filename.pcap”. To capture traffic on an uplink: “pktcap-uw -c <count of packets> -s <size of packets> -o <filename.pcap> --uplink <vmnic1> --direction <0 for receive, 1 for send>”. To capture the mgmt traffic of the ESXi host, use “pktcap-uw --vmk vmk0”. To trace the flow of traffic: “pktcap-uw -c <count of packets> -s <size of packets> -o <filename.pcap> --uplink <vmnic1> --direction <0 for receive, 1 for send> --trace --console”. To check the various capture options, use “pktcap-uw -h” and check out the flow filter options.
21. By default NIOC is enabled in vSphere 6.x. NIOC is available only on the DVS. With NIOC, you can allocate a reservation quota of bandwidth per Network Resource Pool. A Network Resource Pool is assigned per port group, and based on the allocated reservation quota of that Network Resource Pool, the VMs in the port group share the bandwidth.
If you have three 1G adapters assigned to the DVS, first reserve bandwidth for Virtual Machine system traffic, for example 750 Mbps. The reservation applies per physical adapter, so 3 × 750 Mbps = 2.25 Gbps of aggregate bandwidth becomes available as reservation quota for Network Resource Pools (for example, create a VM Prod Network Resource Pool with 1500 Mbps and a VM Dev Network Resource Pool with 500 Mbps). After that, assign these Network Resource Pools to the respective port groups. It is advised not to play with Network Resource Pools; instead stick with the “system traffic” types and edit their shares values.