I created a vSphere network based on standard switches. I put everything on this network, including management, storage, the VM network and vMotion, and I configured everything possible to fail over to a second NIC, to come as close to a production setup as possible.
I played around with the design and configuration of the migration to the vDS for a few days. In a CBT Nuggets video, Greg Shields demonstrates leaving one of the two NICs behind in a redundant management standard switch when he migrates the vSS to a vDS. The purpose of this was to keep a backup. However, from my own investigation it is not necessary to leave anything behind when migrating, for several reasons:
- The VMkernel adapter is migrated – as in moved – from the standard switch to the distributed switch, and the vSS is thus rendered useless unless you manually create a new VMkernel adapter on it. You also cannot use two management VMkernel adapters, one on a standard switch and one on a distributed switch, at the same time.
- Even if vCenter goes down and the vDS can no longer be managed, everything will still run fine, as vSphere actually creates the standard switches invisibly for you in the background. Source: VMware vSphere: Networking – Distributed Virtual Switch
Before this realization I used Greg’s method and tried to leave one NIC behind in the standard switch. This failed at first because I migrated the standby NIC and left the active NIC hanging in a switch with no VMkernel adapter (and because I forgot to tag the standby NIC as a management adapter in the DCUI). Upon rolling back (vSphere does this automatically), vSphere set every single NIC in my environment to standby, and none of my hosts could connect to vCenter (a bug, maybe?).
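As an aside, the management tag I forgot in the DCUI is easy to audit from the API. Here is a minimal sketch using pyVmomi (VMware's Python SDK for the vSphere API) that lists which VMkernel adapters on each host carry the management service tag; the vCenter address and credentials are placeholders for my lab.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()        # lab only: skip cert validation
si = SmartConnect(host="vcenter.lab.local",   # placeholder vCenter and creds
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Ask each host's VirtualNicManager which vmks are tagged for "management"
# (the tag normally set in the DCUI or the host's networking settings).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    cfg = host.configManager.virtualNicManager.QueryNetConfig("management")
    tagged = [v.device for v in cfg.candidateVnic if v.key in cfg.selectedVnic]
    print(host.name, "management vmks:", tagged)
view.Destroy()
Disconnect(si)
```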
In the networking designs I have done so far I have always separated everything into its own switch, and thus haven’t felt the need to use VLAN tagging inside the virtual standard switches, only on the back-end physical switches. At first I migrated every single standard switch into a single vDS and used VLAN tagging in the vDS to separate the traffic, but this failed, seemingly because I didn’t have the back-end networking prepared for it.
So for this demonstration I will make an exact replica of my standard switch design and then migrate everything into distributed switches.
Because the web client lacks an option to show the complete network, I used the C# client for that.
- First, create a distributed switch. On the web client’s home page, select vCenter -> Networking. Right-click the datacenter and select New Distributed Switch.
- I have named the first distributed switch dManagement and selected the newest distributed switch version available. On the Edit settings page, select the number of uplinks you want in the distributed switch. Deselect Create a default port group.
Note: The number of uplinks you specify should be the maximum number of uplinks a single ESXi host can contribute to a single vDS. Think of uplinks as the slots available for an ESXi host’s NICs to plug into. Each of my two ESXi hosts has 8 physical network interface cards installed, so I should set 8 if I wished to migrate all port groups into the same switch. However, since I will create a distributed switch per port group, I only need to specify two. The number of uplinks can also be changed after deployment.
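For reference, the same step can be scripted. Below is a hedged pyVmomi sketch that creates a distributed switch named dManagement with two uplinks and no default port group; it assumes the `content` object from the connection sketch earlier and a datacenter simply named "Datacenter".

```python
from pyVmomi import vim

# Find the datacenter (assumed to be named "Datacenter").
dc = next(e for e in content.rootFolder.childEntity
          if isinstance(e, vim.Datacenter) and e.name == "Datacenter")

spec = vim.DistributedVirtualSwitch.CreateSpec()
spec.configSpec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
spec.configSpec.name = "dManagement"
# Two uplink "slots"; the names are my own choice and can be renamed later.
spec.configSpec.uplinkPortPolicy = \
    vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
        uplinkPortName=["Uplink 1", "Uplink 2"])

# CreateDVS_Task creates the switch only; no default port group is added.
task = dc.networkFolder.CreateDVS_Task(spec)
```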
- Select the new switch and create a new distributed port group.
- Name the new port group Management and press Next.
- Leave everything at the defaults and press Next, then Finish to create the port group.
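Scripted, the port group step would look roughly like this; `dvs` is assumed to be the switch object from the previous sketch (e.g. `task.info.result` once the create task completes).

```python
from pyVmomi import vim

pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
pg_spec.name = "Management"
pg_spec.type = vim.dvs.DistributedVirtualPortgroup.PortgroupType.earlyBinding
pg_spec.numPorts = 8                  # an arbitrary small number for my lab

# AddDVPortgroup_Task takes a list of specs, mirroring Next -> Finish.
dvs.AddDVPortgroup_Task([pg_spec])
```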
- Select Add hosts and press Next.
- Press the New hosts icon and add the hosts you wish to connect to the new distributed switch.
- Select both Manage physical adapters and Manage VMkernel adapters and press Next.
- Select the first vmnic that belongs to the management VMkernel adapter and press Assign uplink.
- Assign vmnic0 to uplink 1 in the distributed switch and then assign vmnic1 to uplink 2.
- So for both ESXi hosts it will look like this:
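For completeness, here is a sketch of what the Assign uplink step amounts to in the API: the host joins the switch with its two vmnics as the physical NIC backing. When no explicit uplink port keys are given, vCenter assigns the vmnics to free uplink ports in order, which in my lab matched vmnic0 -> Uplink 1 and vmnic1 -> Uplink 2. `dvs` and `host` are the objects from the earlier sketches; repeat per host.

```python
from pyVmomi import vim

backing = vim.dvs.HostMember.PnicBacking(pnicSpec=[
    vim.dvs.HostMember.PnicSpec(pnicDevice="vmnic0"),
    vim.dvs.HostMember.PnicSpec(pnicDevice="vmnic1"),
])
host_spec = vim.dvs.HostMember.ConfigSpec(operation="add", host=host,
                                          backing=backing)
dvs_spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    configVersion=dvs.config.configVersion,   # required when reconfiguring
    host=[host_spec])
dvs.ReconfigureDvs_Task(spec=dvs_spec)
```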
- On the next page select the VMkernel adapter vmk0 and press Assign port group.
- Since my design is “one port group per switch”, there is only one port group to select.
- So it is pretty straightforward. Make sure the VMkernel adapters from both ESXi hosts are reassigned from the standard switch to the new port group on the new distributed switch and press Next.
- There is no impact on iSCSI since I will not touch it yet. Click Next and then Finish to migrate the VMkernel adapters and the physical NICs.
- So once the migration is done you will see that the VMkernel adapter has been removed from the standard switch (vSS), so leaving a NIC behind as a backup serves no purpose.
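Under the hood the wizard's reassignment is a single call per host, which also shows why nothing stays behind on the vSS: the vmk is simply rebound to a port on the distributed switch. A sketch, assuming the host has already joined the switch and `pg` is the Management port group object:

```python
from pyVmomi import vim

conn = vim.dvs.PortConnection()
conn.switchUuid = dvs.uuid            # which distributed switch
conn.portgroupKey = pg.key            # which distributed port group

nic_spec = vim.host.VirtualNic.Specification()
nic_spec.distributedVirtualPort = conn

# Rebind vmk0 from the standard switch to the vDS port group.
host.configManager.networkSystem.UpdateVirtualNic("vmk0", nic_spec)
```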
- The exact same steps are used for the vMotion VMkernel adapters, so moving forward to migrating the virtual machines, create a new distributed switch. Remember, you could also just create a new port group on the existing switch, but then I would have to separate the traffic with VLAN tagging, and I like to keep things simple and easy.
- Assign two uplink ports to the new distributed switch. It actually makes more sense to enable Network I/O Control in this scenario than on a management network; I just didn’t think about that when I created the vDS for the management network the first time (see the sketch below).
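If you miss the checkbox like I did, Network I/O Control is just a flag on the switch object and can be flipped afterwards; a one-liner against the `dvs` object from before:

```python
# Enable Network I/O Control on an existing distributed switch.
dvs.EnableNetworkResourceManagement(enable=True)
```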
- Press the Add hosts icon again.
- Select Add hosts and press Next.
- Press the New hosts icon and add the hosts you wish to connect to the new distributed switch.
- This time around, select Manage physical adapters and Migrate virtual machine networking and press Next.
- Assign the vmnics that belong to the VM port group each to their own uplink in the new distributed switch.
- On the Migrate VM networking page, select the VMs to migrate and press Assign port group.
- Select the one and only available distributed port group.
- Make sure all virtual machines are migrated to the new port group, then press Next and Finish on the following page to start the migration.
- The virtual machines migrated successfully, but notice that the port group has remained on the vSS.
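The wizard's per-VM work boils down to editing each virtual NIC's backing so it points at a distributed port instead of the vSS port group; the port group object itself is never touched, which is why it stays behind. A hedged sketch, with `vm`, `dvs` and `pg` assumed from earlier and error handling omitted:

```python
from pyVmomi import vim

for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualEthernetCard):
        # Repoint the NIC from the vSS port group to the distributed one.
        backing = (vim.vm.device.VirtualEthernetCard
                   .DistributedVirtualPortBackingInfo())
        backing.port = vim.dvs.PortConnection(switchUuid=dvs.uuid,
                                              portgroupKey=pg.key)
        dev.backing = backing
        change = vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
            device=dev)
        vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```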
Next up is migrating the iSCSI connections.