07 Building Webster’s Lab V2 – Create vSphere Networking and Network Storage
June 1, 2021
Before creating a vSphere Distributed Switch (vDS), we need to create a Datacenter.
Verify you are connected and logged in to the vCenter console.
Click Menu and click VMs and Templates, as shown in Figure 1.
Right-click the vCenter Server object and click New Datacenter…, as shown in Figure 2.
Enter a Name for the Datacenter and click OK, as shown in Figure 3.
The new Datacenter is shown in the vCenter console, as shown in Figure 4.
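If you would rather script this step than click through the vSphere Client, here is a minimal pyVmomi (Python) sketch that connects to vCenter and creates the Datacenter. The vCenter hostname, credentials, and Datacenter name are placeholders for illustration, not values from my lab; the later sketches in this article reuse the si, content, and dc objects created here.

```python
# Minimal pyVmomi sketch: connect to vCenter and create the Datacenter.
# Hostname, credentials, and the Datacenter name are placeholders.
import ssl
from pyVim.connect import SmartConnect

ctx = ssl._create_unverified_context()        # lab only: vCenter still has a self-signed cert
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ctx)
content = si.RetrieveContent()

# Folder.CreateDatacenter returns the new vim.Datacenter object.
dc = content.rootFolder.CreateDatacenter(name="Webster")
```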
Next, add the hosts to vCenter.
Click the Hosts and Clusters node in vCenter, as shown in Figure 5.
Expand the Hosts and Clusters node, right-click the Datacenter and click Add Host…, as shown in Figure 6.
Enter the Hostname or IP address of the host to add and click Next, as shown in Figure 7.
Enter a User name and Password to connect to the ESXi host and click Next, as shown in Figure 8.
Because of the host’s self-signed certificate, click Yes on the Security Alert popup, as shown in Figure 9.
Click Next, as shown in Figure 10.
Select the license to assign to the host and click Next, as shown in Figure 11.
Select the preferred Lockdown mode and click Next, as shown in Figure 12.
Click Next, as shown in Figure 13.
If all the information is correct, click Finish, as shown in Figure 14. If the information is not correct, click Back, correct the information, and then continue.
Repeat the steps outlined in Figures 6 through 14 to add additional hosts to vCenter.
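The same host-add workflow can be scripted. The sketch below, again pyVmomi and reusing the dc object from the earlier sketch, computes each host's certificate thumbprint up front, which is the scripted equivalent of clicking Yes on the Security Alert in Figure 9. The host names and credentials are placeholders.

```python
# Hypothetical pyVmomi sketch: add ESXi hosts under the Datacenter.
import hashlib
import ssl
from pyVmomi import vim

def host_thumbprint(hostname, port=443):
    # Fetch the host's self-signed certificate and return its SHA1 thumbprint (AA:BB:... format).
    pem = ssl.get_server_certificate((hostname, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    digest = hashlib.sha1(der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def add_host(datacenter, hostname, username, password):
    spec = vim.host.ConnectSpec(hostName=hostname, userName=username,
                                password=password,
                                sslThumbprint=host_thumbprint(hostname))
    # Returns a Task; monitor it in vCenter or with a task-wait helper.
    return datacenter.hostFolder.AddStandaloneHost_Task(spec=spec, addConnected=True)

for esxi in ("esxi1.lab.local", "esxi2.lab.local"):   # placeholder host names
    add_host(dc, esxi, "root", "VMware1!")
```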
Next is creating a Cluster.
Once you add all hosts to vCenter, right-click the Datacenter and click New Cluster…, as shown in Figure 15.
Enter a Name for the cluster and click OK, as shown in Figure 16.
Note: For my lab, at this time, I do not use the “single image” host and cluster lifecycle. Please read About Managing Host and Cluster Lifecycle if you want more information on this new vCenter 7 feature.
Click Finish, as shown in Figure 17.
In the left pane, click the new cluster, as shown in Figure 18.
In Cluster quickstart, in the right pane, click the Edit button in the Cluster basics box, as shown in Figure 19.
Enable vSphere DRS (to allow the automatic configuration of vMotion networking later) and click OK, as shown in Figure 20.
In Cluster quickstart, click the Add button in the Add hosts box in the right pane, as shown in Figure 21.
Select Existing hosts, select the hosts to add and click Next, as shown in Figure 22.
Click Next, as shown in Figure 23.
If all the information is correct, click Finish, as shown in Figure 24. If the information is not correct, click Back, correct the information, and then continue.
In the vCenter console, you can see the hosts added to the cluster, as shown in Figure 25.
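For completeness, here is a hedged pyVmomi sketch of the same cluster work: it creates a cluster with vSphere DRS enabled and then moves the standalone hosts into it. The cluster name is a placeholder, and dc is the Datacenter object from the first sketch.

```python
# Hypothetical pyVmomi sketch: create a DRS-enabled cluster and move the hosts into it.
from pyVmomi import vim

cluster_spec = vim.cluster.ConfigSpecEx(
    drsConfig=vim.cluster.DrsConfigInfo(enabled=True))
cluster = dc.hostFolder.CreateClusterEx(name="Lab Cluster", spec=cluster_spec)

# Collect every host currently sitting directly under the Datacenter (not in a cluster).
standalone = [h for c in dc.hostFolder.childEntity
              if isinstance(c, vim.ComputeResource)
              and not isinstance(c, vim.ClusterComputeResource)
              for h in c.host]
cluster.MoveInto_Task(host=standalone)
```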
Now on to the fun stuff: Networking.
I tried to come up with my own explanation of port groups, virtual switches, physical NICs, VMkernel NICs, TCP/IP stacks, uplinks, and other items, but found VMware already had something even I could understand.
I found the following information in the vSphere Networking Concepts Overview documentation, Copyright VMware, Inc., available at https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.networking.doc/GUID-2B11DBB8-CB3C-4AFF-8885-EFEA0FC562F4.html.
Networking Concepts Overview
A few concepts are essential for a thorough understanding of virtual networking. If you are new to ESXi, it is helpful to review these concepts.
Physical Network
A network of physical machines that are connected so that they can send data to and receive data from each other. VMware ESXi runs on a physical machine.
Virtual Network
A network of virtual machines running on a physical machine that are connected logically to each other so that they can send data to and receive data from each other. Virtual machines can be connected to the virtual networks that you create when you add a network.
Opaque Network
An opaque network is a network created and managed by a separate entity outside of vSphere. For example, logical networks that are created and managed by VMware NSX ® appear in vCenter Server as opaque networks of the type nsx.LogicalSwitch. You can choose an opaque network as the backing for a VM network adapter. To manage an opaque network, use the management tools associated with the opaque network, such as VMware NSX ® Manager or the VMware NSX API management tools.
Note: With NSX-T 3.0, it is now possible to run NSX-T directly on vSphere Distributed Switch (vDS) version 7.0 or later. Such networks are not opaque and are identified as NSX logical segments running on vDS 7.0. For more information, see VMware Knowledge Base article 79872.
Physical Ethernet Switch
A physical Ethernet switch manages network traffic between machines on the physical network. A switch has multiple ports, each of which can be connected to a single machine or another switch on the network. Each port can be configured to behave in certain ways depending on the needs of the machine connected to it. The switch learns which hosts are connected to which of its ports and uses that information to forward traffic to the correct physical machines. Switches are the core of a physical network. Multiple switches can be connected together to form larger networks.
vSphere Standard Switch
It works much like a physical Ethernet switch. It detects which virtual machines are logically connected to each of its virtual ports and uses that information to forward traffic to the correct virtual machines. A vSphere standard switch can be connected to physical switches by using physical Ethernet adapters, also referred to as uplink adapters, to join virtual networks with physical networks. This type of connection is similar to connecting physical switches together to create a larger network. Even though a vSphere standard switch works much like a physical switch, it does not have some of the advanced functionality of a physical switch.
vSphere Distributed Switch
A vSphere distributed switch acts as a single switch across all associated hosts in a data center to provide centralized provisioning, administration, and monitoring of virtual networks. You configure a vSphere distributed switch on the vCenter Server system and the configuration is propagated to all hosts that are associated with the switch. This lets virtual machines maintain consistent network configuration as they migrate across multiple hosts.
Host Proxy Switch
A hidden standard switch that resides on every host that is associated with a vSphere distributed switch. The host proxy switch replicates the networking configuration set on the vSphere distributed switch to the particular host.
Standard Port Group
Network services connect to standard switches through port groups. Port groups define how a connection is made through the switch to the network. Typically, a single standard switch is associated with one or more port groups. A port group specifies port configuration options such as bandwidth limitations and VLAN tagging policies for each member port.
Distributed Port
A port on a vSphere distributed switch that connects to a host’s VMkernel or to a virtual machine’s network adapter.
Distributed Port Group
A port group associated with a vSphere distributed switch that specifies port configuration options for each member port. Distributed port groups define how a connection is made through the vSphere distributed switch to the network.
NSX Distributed Port Group
A port group associated with a vSphere distributed switch that specifies port configuration options for each member port. To distinguish between vSphere distributed port groups and NSX port groups, in the vSphere Client the NSX virtual distributed switch and its associated port groups are identified with an NSX icon. NSX appears as an opaque network in vCenter Server, and you cannot configure NSX settings in vCenter Server. The NSX settings displayed are read-only. You configure NSX distributed port groups using VMware NSX ® Manager or the VMware NSX API management tools.
NIC Teaming
NIC teaming occurs when multiple uplink adapters are associated with a single switch to form a team. A team can either share the load of traffic between physical and virtual networks among some or all of its members or provide passive failover in the event of a hardware failure or a network outage.
VLAN
VLAN enables a single physical LAN segment to be further segmented so that groups of ports are isolated from one another as if they were on physically different segments. The standard is 802.1Q.
VMkernel TCP/IP Networking Layer
The VMkernel networking layer provides connectivity to hosts and handles the standard infrastructure traffic of vSphere vMotion, IP storage, Fault Tolerance, and vSAN.
IP Storage
Any form of storage that uses TCP/IP network communication as its foundation. iSCSI and NFS can be used as virtual machine datastores and for direct mounting of .ISO files, which are presented as CD-ROMs to virtual machines.
TCP Segmentation Offload
TCP Segmentation Offload, TSO, allows a TCP/IP stack to emit large frames (up to 64KB) even though the maximum transmission unit (MTU) of the interface is smaller. The network adapter then separates the large frame into MTU-sized frames and prepends an adjusted copy of the initial TCP/IP headers.
My TinkerTry server has two 1Gb NICs and two 10Gb NICs. I want to use all four NICs. Two vDSes are required: one for the two 1Gb NICs and one for the two 10Gb NICs.
In Cluster quickstart, click the Configure button in the Configure cluster box, as shown in Figure 26.
I want to create a vDS for each pair of NICs and name them to associate them with the NIC port speed.
Using the scrollbar, scroll down so you can see the Distributed switches and Physical adapters on one screen.
As shown in Figures 27 and 28:
- From the Number of distributed switches dropdown, select 2
- Give each distributed switch a Name
- Select which Port group to use for vMotion
- Select the Physical adapters to match to each distributed switch
- When complete, click Next
Change the IP Type to Static IPs, enter the IP information for vMotion as shown in Figure 29, and click Next.
Select the appropriate options and click Next, as shown in Figure 30.
If all the information is correct, click Finish, as shown in Figure 31. If the information is not correct, click Back, correct the settings, and then continue.
Click the Networking icon, expand the cluster, and each vDS, as shown in Figure 32.
Verify that the vDS 10Gb switch has an MTU of 9000.
Right-click vDS 10Gb, click Settings, and click Edit Settings…, as shown in Figure 33.
Click Advanced and verify that MTU (Bytes) equals 9000. If the value is not 9000, change it to 9000 and click OK, as shown in Figure 34.
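If you want to verify or correct the MTU from a script instead of the Edit Settings dialog, the following pyVmomi sketch does the equivalent. The switch name vDS 10Gb matches my lab, dc comes from the first sketch, and this is an illustrative sketch rather than a polished tool.

```python
# Hypothetical pyVmomi sketch: find the vDS 10Gb switch and raise its MTU to 9000
# if it is not already set, the scripted equivalent of Figures 33 and 34.
from pyVmomi import vim

dvs = next(n for n in dc.networkFolder.childEntity
           if isinstance(n, vim.DistributedVirtualSwitch) and n.name == "vDS 10Gb")

if dvs.config.maxMtu != 9000:
    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
        configVersion=dvs.config.configVersion,   # required so vCenter can detect concurrent edits
        maxMtu=9000)
    dvs.ReconfigureDvs_Task(spec)
```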
I plan to use the vDS 10Gb switch for VM, vMotion, and Storage traffic, which requires three Port Groups. We created the vMotion Port Group earlier (Figure 29), so two more are needed: one for VM traffic and one for NFS storage.
Right-click the vDS 10Gb switch, select Distributed Port Group, select New Distributed Port Group…, as shown in Figure 35.
The Name for this port group is VM Traffic vDS 10Gb. Click Next, as shown in Figure 36.
For my lab, the default general properties are suitable. Click Next, as shown in Figure 37.
If all the information is correct, click Finish, as shown in Figure 38. If the information is not correct, click Back, correct the information, and then continue.
Repeat these steps to create a different port group with the Name: NFS vDS 10Gb.
When complete, the Distributed Port Groups should look like Figure 39.
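Scripted, the same two port groups look like the pyVmomi sketch below. It assumes the dvs object from the MTU sketch above; the port group names match this walkthrough, and 8 ports is simply the vSphere Client default.

```python
# Hypothetical pyVmomi sketch: create the VM Traffic and NFS distributed port groups
# on the vDS 10Gb switch (the scripted equivalent of Figures 35 through 39).
from pyVmomi import vim

pg_specs = [vim.dvs.DistributedVirtualPortgroup.ConfigSpec(name=name,
                                                           type="earlyBinding",
                                                           numPorts=8)
            for name in ("VM Traffic vDS 10Gb", "NFS vDS 10Gb")]
dvs.AddDVPortgroup_Task(spec=pg_specs)
```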
vMotion and NFS storage require VMkernel NICs. The vMotion VMkernel NIC was created earlier as part of the vDS creation wizard (Figure 29).
Right-click the NFS vDS 10Gb distributed port group and click Add VMkernel Adapters…, as shown in Figure 40.
Click Attached hosts…, as shown in Figure 41.
Select all hosts and click OK, as shown in Figure 42.
Click Next, as shown in Figure 43.
Enter the following information:
- MTU: Enter the MTU for your 10G switch (typically 9000)
- TCP/IP stack: select Default from the dropdown list
- Available services: leave all unselected
Click Next, as shown in Figure 44.
Select Use static IPv4 settings. For each host, enter the IP address and subnet for the NFS VMkernel and click Next, as shown in Figure 45.
If all the information is correct, click Finish, as shown in Figure 46. If the information is not correct, click Back, correct the information, and then continue.
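The same VMkernel adapters can be added with pyVmomi. In this sketch the per-host IP addresses and subnet mask are placeholders, and the dvs and cluster objects come from the earlier sketches; each adapter is bound to the NFS vDS 10Gb distributed port group with an MTU of 9000 and no services enabled, matching Figures 44 and 45.

```python
# Hypothetical pyVmomi sketch: add an NFS VMkernel adapter on every host in the cluster.
from pyVmomi import vim

# Assumes the port group creation task above has already completed.
nfs_pg = next(pg for pg in dvs.portgroup if pg.name == "NFS vDS 10Gb")
nfs_ips = {"esxi1.lab.local": "192.168.20.11",    # placeholder per-host addresses
           "esxi2.lab.local": "192.168.20.12"}

for host in cluster.host:
    nic_spec = vim.host.VirtualNic.Specification(
        mtu=9000,
        distributedVirtualPort=vim.dvs.PortConnection(
            switchUuid=dvs.uuid,
            portgroupKey=nfs_pg.key),
        ip=vim.host.IpConfig(dhcp=False,
                             ipAddress=nfs_ips[host.name],
                             subnetMask="255.255.255.0"))
    # portgroup is empty because the adapter connects to a distributed port group, not a standard one.
    host.configManager.networkSystem.AddVirtualNic(portgroup="", nic=nic_spec)
```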
Now that our networking is complete, let’s move on to configuring NFS Storage.
Click Storage, as shown in Figure 47.
First up is an NFS datastore to hold VMs.
Right-click the cluster, click Storage, and click New Datastore…, as shown in Figure 48.
Select NFS and click Next, as shown in Figure 49.
Select NFS 4.1 and click Next, as shown in Figure 50.
Note: VMware has an article that compares the capabilities of NFS 3 and NFS 4.1. Please see NFS Protocols and ESXi.
Enter a Datastore Name, the Folder on the NFS server, the NFS Server name or IP address, and click Add, as shown in Figure 51.
Click Next, as shown in Figure 52.
Select the Kerberos authentication option required for your NAS and click Next, as shown in Figure 53.
Select all hosts in the cluster and click Next, as shown in Figure 54.
If all the information is correct, click Finish, as shown in Figure 55. If the information is not correct, click Back, correct the information, and then continue.
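Behind the scenes, the New Datastore wizard mounts the same NFS export on every selected host. Here is a hedged pyVmomi sketch of that step; the NAS hostname, export path, and datastore name are placeholders, and cluster comes from the earlier sketch.

```python
# Hypothetical pyVmomi sketch: mount an NFS 4.1 export as a datastore on every host in the cluster.
from pyVmomi import vim

nas_spec = vim.host.NasVolume.Specification(
    remoteHost="nas.lab.local",           # also listed in remoteHostNames for NFS 4.1
    remoteHostNames=["nas.lab.local"],
    remotePath="/volume1/VMs",
    localPath="VMs",                      # the datastore name shown in vCenter
    type="NFS41",
    accessMode="readWrite",
    securityType="AUTH_SYS")              # use SEC_KRB5 or SEC_KRB5I if your NAS requires Kerberos

for host in cluster.host:
    host.configManager.datastoreSystem.CreateNasDatastore(nas_spec)
```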
Next, let’s verify the VAAI configuration for the datastore. Click the VM datastore, click Configure and click Hardware Acceleration, as shown in Figure 56.
Repeat the steps outlined in Figures 48 through 55 to create an NFS datastore to contain ISOs.
After creating all datastores, they appear in vCenter, as shown in Figure 57.
To verify the NFS ISO datastore, upload an ISO to the datastore.
Right-click the ISOs datastore and click Browse Files, as shown in Figure 58.
Click Upload Files as shown in Figure 59.
Browse to an ISO file, select it, and click Open, as shown in Figure 60. I am uploading a Windows Server 2019 ISO.
Note: You can download a 180-day evaluation copy of Windows Server 2019 (among other Microsoft products) from the Microsoft Evaluation Center.
The ISO file starts uploading to the ISOs datastore, as shown in Figure 61.
The ISO files appear in the datastore once the upload is complete, as shown in Figures 62 and 63.
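Uploads can also be scripted against the datastore file interface on one of the ESXi hosts, which is the same interface the Upload Files button drives. The sketch below uses the third-party requests module; the host name, credentials, and ISO file name are placeholders.

```python
# Hypothetical sketch: upload an ISO to the shared ISOs datastore over the ESXi host's
# HTTPS file interface (https://<host>/folder/<path>?dcPath=...&dsName=...).
import requests

iso = "en_windows_server_2019.iso"                        # placeholder local file name
url = "https://esxi1.lab.local/folder/" + iso             # upload to the root of the datastore
params = {"dcPath": "ha-datacenter", "dsName": "ISOs"}    # dcPath is always ha-datacenter for a host

with open(iso, "rb") as f:
    resp = requests.put(url, params=params, data=f,
                        auth=("root", "VMware1!"),        # ESXi host credentials
                        verify=False)                     # lab only: self-signed cert
resp.raise_for_status()
```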
Now that we have created shared datastores, we need to move the vCenter VM’s storage from the local datastore to the shared VMs datastore. Moving it eliminates the storage warning on the local datastore shown in Figure 64.
In the left pane, click VMs and Templates, right-click the vCenter VM, and click Migrate…, as shown in Figure 65.
Select Change storage only and click Next, as shown in Figure 66.
Select the datastore created for VM storage and click Next, as shown in Figure 67.
If all the information is correct, click Finish, as shown in Figure 68. If the information is not correct, click Back, correct the information, and then continue.
As shown in Figure 69, the storage migration starts.
Note: In my lab, the storage migration took just over one hour.
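If you prefer to kick off the storage-only migration from a script, here is a final pyVmomi sketch. The VM name vCenter and the datastore name VMs are placeholders standing in for whatever you named yours, and dc comes from the first sketch.

```python
# Hypothetical pyVmomi sketch: the scripted equivalent of the Change storage only migration,
# relocating the vCenter VM's disks to the shared VMs datastore.
from pyVmomi import vim

vm = next(v for v in dc.vmFolder.childEntity
          if isinstance(v, vim.VirtualMachine) and v.name == "vCenter")
ds = next(d for d in dc.datastore if d.name == "VMs")

# Returns a Task; the move runs in the background just like the GUI migration.
vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=ds))
```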