07 Building Webster’s Lab – Creating the vSphere Distributed Switch
Before creating a vSphere Distributed Switch (vDS), a Datacenter is required.
Verify you are connected and logged in to the vCenter console.
Click Menu and click VMs and Templates, as shown in Figure 1.
Right-click the vCenter Server object and click New Datacenter…, as shown in Figure 2.
Enter a Name for the Datacenter and click OK, as shown in Figure 3.
The new Datacenter is shown in the vCenter console, as shown in Figure 4.
Next, add the hosts to vCenter.
Click the Hosts and Clusters node in vCenter, as shown in Figure 5.
Right-click the Datacenter and click Add Host…, as shown in Figure 6.
Enter the Hostname or IP address of the host to add and click Next, as shown in Figure 7.
Enter a User name and Password to connect to the ESXi host and click Next, as shown in Figure 8.
Because of the host’s self-signed certificate, click Yes on the Security Alert popup, as shown in Figure 9.
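Before clicking Yes, it is worth confirming the thumbprint in the Security Alert actually belongs to your host. The thumbprint is the SHA-1 digest of the host's certificate in DER form, displayed as colon-separated uppercase hex pairs. As a minimal sketch of that formatting (the sample bytes below are placeholders, not a real certificate; in practice you would hash the DER bytes of the certificate retrieved from the host):

```python
import hashlib

def sha1_thumbprint(cert_der: bytes) -> str:
    """Format a certificate's SHA-1 digest the way vCenter displays it:
    uppercase hex pairs separated by colons."""
    digest = hashlib.sha1(cert_der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Placeholder bytes standing in for the host certificate's DER encoding.
print(sha1_thumbprint(b"placeholder certificate bytes"))
```

You can compare the result against the fingerprint reported by `openssl x509 -noout -fingerprint` run against the host's certificate.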
Click Next, as shown in Figure 10.
Select the license to assign to the host and click Next, as shown in Figure 11.
Select the preferred Lockdown mode and click Next, as shown in Figure 12.
Click Next, as shown in Figure 13.
If all the information is correct, click Finish, as shown in Figure 14. If the information is not correct, click Back, correct the information, and then continue.
Repeat the steps outlined in Figures 6 through 14 to add additional hosts to vCenter.
Next is creating a Cluster.
Once you add all hosts to vCenter, right-click the Datacenter and click New Cluster…, as shown in Figure 15.
Enter a Name for the cluster and click OK, as shown in Figure 16.
Right-click the new cluster and click Add Hosts…, as shown in Figure 17.
Select Existing hosts, select the hosts to add, and click Next, as shown in Figure 18.
Click Next, as shown in Figure 19.
If all the information is correct, click Finish, as shown in Figure 20. If the information is not correct, click Back, correct the information, and then continue.
In the vCenter console, you can see the hosts added to the cluster, as shown in Figure 21.
Now on to the fun stuff: Networking.
I tried to come up with my own explanation of port groups, virtual switches, physical NICs, VMkernel NICs, TCP/IP stacks, uplinks, and other items, but found VMware already had one even I could understand.
I found the following information in the vSphere Networking PDF, Copyright VMware, Inc., available at https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.networking.doc/GUID-2B11DBB8-CB3C-4AFF-8885-EFEA0FC562F4.html. The following is from the Networking Concepts Overview section (with grammar corrections).
Networking Concepts Overview
A few concepts are essential for a thorough understanding of virtual networking. If you are new to ESXi, it is helpful to review these concepts.
Physical Network A network of physical machines that are connected so that they can send data to and receive data from each other. VMware ESXi runs on a physical machine.
Virtual Network A network of virtual machines running on a physical machine that are connected logically to each other so that they can send data to and receive data from each other. Virtual machines can connect to the virtual networks that you create when you add a network.
Opaque Network An opaque network is a network created and managed by a separate entity outside of vSphere. For example, logical networks that are created and managed by VMware NSX® appear in vCenter Server as opaque networks of the type nsx.LogicalSwitch. You can choose an opaque network as the backing for a VM network adapter. To manage an opaque network, use the management tools associated with the opaque network, such as VMware NSX® Manager™ or the VMware NSX® API™ management tools.
Physical Ethernet Switch It manages network traffic between machines on the physical network. A switch has multiple ports, each of which can connect to a single machine or another switch on the network. Each port can be configured to behave in certain ways depending on the needs of the machine connected to it. The switch learns which hosts are connected to which of its ports and uses that information to forward traffic to the correct physical machines. Switches are the core of a physical network. Multiple switches can connect to form larger networks.
vSphere Standard Switch It works much like a physical Ethernet switch. It detects which virtual machines are logically connected to each of its virtual ports and uses that information to forward traffic to the correct virtual machines. A vSphere standard switch can connect to physical switches by using physical Ethernet adapters, also referred to as uplink adapters, to join virtual networks with physical networks. This type of connection is similar to connecting physical switches to create a larger network. Even though a vSphere standard switch works much like a physical switch, it does not have some of the advanced functionality of a physical switch.
Standard Port Group It specifies port configuration options such as bandwidth limitations and VLAN tagging policies for each member port. Network services connect to standard switches through port groups. Port groups define how a connection is made through the switch to the network. Typically, a single standard switch is associated with one or more port groups.
vSphere Distributed Switch It acts as a single switch across all associated hosts in a data center to provide centralized provisioning, administration, and monitoring of virtual networks. You configure a vSphere distributed switch on the vCenter Server system, and the configuration populates across all hosts that are associated with the switch. This allows virtual machines to maintain a consistent network configuration as they migrate across multiple hosts.
Host Proxy Switch A hidden standard switch that resides on every host that is associated with a vSphere distributed switch. The host proxy switch replicates the networking configuration set on the vSphere distributed switch to the particular host.
Distributed Port A port on a vSphere distributed switch that connects to a host’s VMkernel or to a virtual machine’s network adapter.
Distributed Port Group A port group associated with a vSphere distributed switch that specifies port configuration options for each member port. Distributed port groups define how a connection is made through the vSphere distributed switch to the network.
NIC Teaming NIC teaming occurs when multiple uplink adapters are associated with a single switch to form a team. A team can either share the load of traffic between physical and virtual networks among some or all of its members or provide passive failover in the event of a hardware failure or a network outage.
VLAN VLAN enables a single physical LAN segment to be further segmented so that groups of ports are isolated from one another as if they were on physically different segments. The standard is 802.1Q.
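As an aside on the VLAN definition above: 802.1Q works by inserting a 4-byte tag into the Ethernet frame, 16 bits of which form the Tag Control Information (TCI) field. Its 12-bit VLAN ID subfield is why usable VLAN IDs run 1 through 4094. A minimal sketch of how the TCI packs together (field widths are from the 802.1Q standard; the sample VLAN IDs are arbitrary):

```python
def dot1q_tci(vlan_id: int, priority: int = 0, dei: int = 0) -> int:
    """Pack an 802.1Q Tag Control Information field:
    3-bit priority code point, 1-bit drop eligible indicator, 12-bit VLAN ID."""
    if not 0 <= vlan_id <= 4095:
        raise ValueError("VLAN ID must fit in 12 bits")
    return (priority << 13) | (dei << 12) | vlan_id

print(hex(dot1q_tci(100)))              # VLAN 100, default priority
print(hex(dot1q_tci(100, priority=5)))  # VLAN 100, elevated priority
```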
VMkernel Networking Layer The VMkernel networking layer provides connectivity to hosts and handles the standard infrastructure traffic of vSphere vMotion, IP storage, Fault Tolerance, and Virtual SAN.
IP Storage Any form of storage that uses TCP/IP network communication as its foundation. iSCSI can be used as a virtual machine datastore, and NFS can be used as a virtual machine datastore and for direct mounting of .ISO files, which are presented as CD-ROMs to virtual machines.
TCP Segmentation Offload TCP Segmentation Offload, TSO, allows a TCP/IP stack to emit large frames (up to 64KB) even though the maximum transmission unit (MTU) of the interface is smaller. The network adapter then separates the large frame into MTU-sized frames and prepends an adjusted copy of the initial TCP/IP headers.
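To make the TSO description concrete, here is a small calculation of how many on-wire frames the NIC produces from one large TSO frame, assuming IPv4 and TCP headers with no options (40 bytes total); the maximum segment size is the MTU minus those headers:

```python
import math

def tso_segments(payload_bytes: int, mtu: int, header_bytes: int = 40) -> int:
    """Frames the NIC emits for one large TSO frame.
    header_bytes assumes IPv4 (20 bytes) + TCP (20 bytes), no options."""
    mss = mtu - header_bytes
    return math.ceil(payload_bytes / mss)

# A 64KB TSO frame on a standard 1500-byte MTU vs. a jumbo 9000-byte MTU:
print(tso_segments(64 * 1024, 1500))  # 45 frames
print(tso_segments(64 * 1024, 9000))  # 8 frames
```

This is also part of why jumbo frames (MTU 9000) pay off on the storage network configured later in this walkthrough: far fewer frames per large transfer.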
My TinkerTry server has two 1Gb NICs and two 10Gb NICs. I want to use all four NICs, so two vDSes are required: one for the two 1Gb NICs and one for the two 10Gb NICs.
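The split I want can be expressed as a tiny planning helper: group the physical adapters by link speed and give each group its own vDS. The vmnic names and speeds below are illustrative assumptions; on a live host you would read them from esxcli network nic list or the vSphere API:

```python
from collections import defaultdict

# Assumed inventory matching my server (names and speeds are illustrative).
nics = {"vmnic0": 1000, "vmnic1": 1000, "vmnic2": 10000, "vmnic3": 10000}

def plan_switches(nics: dict[str, int]) -> dict[str, list[str]]:
    """Map one vDS name per link speed to its uplink adapters."""
    groups: dict[int, list[str]] = defaultdict(list)
    for name, mbps in sorted(nics.items()):
        groups[mbps].append(name)
    return {f"vDS {mbps // 1000}Gb": adapters for mbps, adapters in groups.items()}

print(plan_switches(nics))
```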
In the vCenter console, click the Cluster and then Configure, as shown in Figure 22.
Click Quickstart, as shown in Figure 23.
In the right pane, click CONFIGURE in the Configure Cluster box, as shown in Figure 24.
I want to create a vDS for each pair of NICs and name them in a way that associates them with the NIC port speed.
Using the scrollbar, scroll down so you can see the Distributed switches and Physical adapters on one screen.
As shown in Figures 25a and 25b:
- From the Number of distributed switches dropdown, select 2
- Give each distributed switch a Name
- Select which Port group to use for vMotion
- Select the Physical adapters to match to each distributed switch
- When complete, click Next
Enter the IP information for vMotion as shown in Figure 25c and click Next.
Select the appropriate options and click Next, as shown in Figure 26.
If all the information is correct, click Finish, as shown in Figure 27. If the information is not correct, click Back, correct the settings, and then continue.
Click the Networking icon, then expand the cluster and each vDS, as shown in Figure 28.
I plan to use the vDS 10Gb switch for VM, vMotion, and Storage traffic. To accomplish that, three Port Groups are required.
Right-click the vDS 10Gb switch, select Distributed Port Group, and then select New Distributed Port Group…, as shown in Figure 29.
Enter VM Traffic vDS 10Gb as the Name for this port group and click Next, as shown in Figure 30.
For my lab, the default general properties are good. Click Next, as shown in Figure 31.
If all the information is correct, click Finish, as shown in Figure 32. If the information is not correct, click Back, correct the information, and then continue.
Repeat these steps to create two additional port groups with the following settings:
- Name: NFS vDS 10Gb
- Name: vMotion vDS 10Gb
When complete, the Distributed Port Groups should look like Figure 33.
vMotion and NFS storage require VMkernel NICs.
Right-click the NFS vDS 10Gb distributed port group and click Add VMkernel Adapters…, as shown in Figure 34.
Click Attached hosts…, as shown in Figure 35.
Select all hosts and click OK, as shown in Figure 36.
Click Next, as shown in Figure 37.
Enter the following information:
- IP settings: Select IPv4 from the dropdown list
- MTU: Enter the MTU for your 10G switch (typically 9000)
- TCP/IP stack: select Default from the dropdown list
- Available services: Leave all unselected
Click Next, as shown in Figure 38.
Select Use static IPv4 settings. For each host, enter the IP address and subnet mask for the NFS VMkernel adapter and click Next, as shown in Figure 39.
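When entering the static IPv4 settings, it helps to plan a contiguous block of addresses on the storage subnet up front. A small sketch using Python's ipaddress module; the subnet, hostnames, and starting host number below are assumptions for illustration, not values from my lab:

```python
import ipaddress

def plan_vmkernel_ips(subnet: str, hostnames: list[str], start: int = 11) -> dict:
    """Assign consecutive addresses from a subnet to each host's VMkernel adapter,
    beginning at the given host number (e.g. start=11 -> .11, .12, ...)."""
    net = ipaddress.ip_network(subnet)
    addresses = list(net.hosts())
    return {
        host: {"ip": str(addresses[start - 1 + i]), "netmask": str(net.netmask)}
        for i, host in enumerate(hostnames)
    }

# Hypothetical storage subnet and host names:
print(plan_vmkernel_ips("192.168.50.0/24", ["esxi1", "esxi2"]))
```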
If all the information is correct, click Finish, as shown in Figure 40. If the information is not correct, click Back, correct the information, and then continue.
Repeat the steps shown in Figures 34 through 40 for the vMotion vDS 10Gb distributed port group to create the vMotion VMkernel adapters.
Note: While there is a TCP/IP stack specifically for vMotion, the vMotion stack is for using vMotion across IP subnets that have a dedicated default gateway that is different from the gateway on the management network. Please see Networking Best Practices for vSphere vMotion.
Now that our networking is complete, let’s move on to configuring NFS Storage.
Click Storage, as shown in Figure 41.
First up is an NFS datastore to hold VMs.
Right-click the cluster, click Storage, and then click New Datastore…, as shown in Figure 42.
Select NFS and click Next, as shown in Figure 43.
For my lab, I could only get NFS version 3 to work.
Select NFS 3 and click Next, as shown in Figure 44.
Enter a Datastore name, the Folder on the NFS server, and the NFS Server name or IP address, and click Next, as shown in Figure 45.
Select all hosts in the cluster and click Next, as shown in Figure 46.
If all the information is correct, click Finish, as shown in Figure 47. If the information is not correct, click Back, correct the information, and then continue.
To verify that VAAI is enabled for the datastore, click Configure and click Hardware Acceleration, as shown in Figure 48.
Repeat the steps outlined in Figures 42 through 47 to create an NFS datastore to contain ISOs.
After creating all datastores, they appear in vCenter, as shown in Figure 49.
To verify the NFS ISO datastore, upload an ISO to the datastore.
Right-click the ISOs datastore and click Browse Files, as shown in Figure 50.
Click Upload Files, as shown in Figure 51.
Browse to an ISO file, select it, and click Open, as shown in Figure 52. I am uploading a Windows Server 2019 ISO.
The ISO file starts uploading to the ISOs datastore, as shown in Figure 53.
Once the ISO file upload is complete, the ISO appears in the datastore, as shown in Figures 54 and 55.
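To go one step beyond eyeballing the file listing, you can confirm the uploaded copy is intact by comparing checksums of the local ISO and a copy downloaded back from the datastore. A minimal chunked SHA-256 helper (chunking keeps multi-GB ISOs out of RAM; the in-memory bytes at the end merely stand in for a real ISO):

```python
import hashlib
import io

def sha256_of_stream(stream, chunk_size: int = 1 << 20) -> str:
    """Hash a file-like object 1MB at a time."""
    digest = hashlib.sha256()
    while chunk := stream.read(chunk_size):
        digest.update(chunk)
    return digest.hexdigest()

def sha256_of_file(path: str) -> str:
    with open(path, "rb") as f:
        return sha256_of_stream(f)

# Example with an in-memory stand-in for an ISO file:
print(sha256_of_stream(io.BytesIO(b"fake iso contents")))
```

If the two digests match, the upload is byte-for-byte identical to the source file.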
Next up: Additional vCenter Configuration.