Month: February 2015

How to create and mount an NFS datastore

NFS is not optimized for virtual machines the way VMFS is, but it is still widely used by admins. Of the common options it generally offers the lowest performance compared to iSCSI and FC. An NFS datastore has no VMFS-imposed size limit, unlike VMFS, where a single virtual disk was limited to 2TB in vSphere 5.1 and 62TB in 5.5.

NFS Properties:

  • The file system on an NFS datastore is dictated by the NFS server.
  • NFS works like a fileshare.
  • No maximum size on an NFS datastore. This is dictated by the NFS server.
  • An NFS datastore can be used for both file and VM storage.
  • Features such as vMotion and HA are supported on an NFS datastore just like on VMFS.

I have used my Domain Controller as the NFS server. Remember that NFS is file-based, not block-based, storage.

  • In the Server Manager dashboard go to Manage and select Add Roles and Features.

1

 

  • At the Server Roles page expand File and iSCSI Services and enable Server for NFS.

2

 

  • Press Add Features to enable NFS services on the server.

3

 

  • Continue and install the feature.

4

 

  • Create a folder on the server, right-click it and choose Properties.

5

 

  • Go to the NFS Sharing tab and select Manage NFS Sharing.

6

 

  • Enable Share this folder and then select Permissions.

7

 

  • From the drop-down menu in Type of access select Read-Write and then enable Allow root access. Without root access the ESXi host will not be able to reach the NFS share. Press OK.

8

 

  • Click Apply and then OK to exit.

9
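For reference, the GUI steps above can also be scripted. A rough PowerShell sketch for Server 2012; the folder and share names are examples, not something from this walkthrough:

```powershell
# Install the NFS server role service (Server 2012 / 2012 R2)
Install-WindowsFeature FS-NFS-Service -IncludeManagementTools

# Create the folder and publish it as an NFS share.
# -AllowRootAccess is the equivalent of the "Allow root access"
# checkbox; without it the ESXi host cannot mount the share.
New-Item -Path C:\NFS-Datastore -ItemType Directory
New-NfsShare -Name "NFS-Datastore" -Path "C:\NFS-Datastore" `
    -Permission readwrite -AllowRootAccess $true
```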

 

  • Login to the vSphere Web client and go to vCenter -> Storage and right-click your Datacenter and go to All vCenter Actions -> New Datastore.

10

 

  • Click Next.

11

 

  • Select NFS and click Next.

12

 

  • Name the datastore appropriately and type the IP of the file storage followed by the name of the folder you created and then press Next.

13

 

  • Select the host(s) you’ll want the new NFS datastore to be visible to.

14

 

  • Review and press Finish to add the NFS datastore.

15

 

NFS datastore created and mounted.

16
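The same mount can also be performed from the ESXi shell with esxcli; the host IP, export path and volume name below are just examples:

```shell
# Mount the NFS export as a datastore (ESXi 5.x)
esxcli storage nfs add --host=192.168.1.10 \
    --share=/NFS-Datastore --volume-name=NFS01

# Verify the mount
esxcli storage nfs list
```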

 

 

Virtual switch security policies explained


  1. Promiscuous mode

This mode allows the virtual adapter of a VM to observe traffic in the virtual switch, so traffic coming to and from other VMs can be seen. By default this is set to Reject, since allowing it would pose a security risk, but there are situations where it is needed, for example by an IDS (intrusion detection system) that needs to observe traffic to inspect packets. This policy can be set at both the switch and the port group level.

  2. MAC address changes

MAC address changes are concerned with blocking traffic to a VM (incoming) if its initial and effective MAC address do not match.

  3. Forged Transmits

Forged Transmits are concerned with blocking traffic from a VM (outgoing) if its initial and effective MAC address do not match.

So if both Forged transmits and MAC address changes are set to Reject, no traffic will be allowed to or from the VM as long as its two MAC addresses do not match. If both are set to Accept there is no problem. The effective MAC address can change, for example, when a VM is moved to another location. VMware recommends setting both to Reject for the highest possible level of security, although if a feature such as Microsoft Network Load Balancing (NLB) in unicast mode is needed, both must be set to Accept. A new standard virtual switch has both of these set to Accept by default.
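For reference, the recommended Reject settings can also be applied with PowerCLI; the host and switch names below are assumptions, not from this lab:

```powershell
# PowerCLI sketch: harden a standard switch as VMware recommends
Get-VirtualSwitch -VMHost esxi01.lab.local -Name vSwitch0 |
    Get-SecurityPolicy |
    Set-SecurityPolicy -AllowPromiscuous $false `
        -MacChanges $false -ForgedTransmits $false
```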


Upload and install applications to a VM through vSphere

If you have some software you would like to install on a virtual machine that does not have a network connection, you can upload the software to the VM through a datastore, but only after it has been converted to the ISO format.

  • An example: there are four applications on your desktop that you would like to install on a VM. Compress all of these into a Compressed (zipped) folder.

1

 

  • Download and install the Zip2ISO application. You can also use other tools such as MagicISO or anything else you like. Drag and drop the compressed (zipped) folder onto Zip2ISO.

2

 

All applications will then be converted into a single image.

3
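If you would rather skip Zip2ISO, any ISO 9660 mastering tool can do the same job. A sketch with mkisofs (genisoimage on many Linux distributions), assuming the installers sit in a folder named applications:

```shell
# Build a Joliet/Rock Ridge ISO from a folder of installers
mkisofs -o applications.iso -J -R -V "APPS" ./applications
```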

 

  • In the vSphere Web Client go to Home -> Storage -> select a datastore -> Files and press the Upload icon.

4

 

  • Navigate to the image you created with the Zip2ISO application and upload it. Once done you are now ready to utilize the image through the datastore you chose.
  • In the vSphere Web Client right-click a virtual machine and choose Edit Settings.
  • Use the drop-down window next to the CD/DVD drive 1 and choose Datastore ISO File.

5

 

  • Navigate and select the image you uploaded to the datastore. Also remember to put a checkmark in Connected otherwise the CD/DVD drive will not be connected to the VM.

6

 

  • Now simply open the CD/DVD drive in the virtual machine and the applications you converted will be there. It is recommended to copy them to the desktop of the VM, because the vSphere CD/DVD emulation can be slow at executing files.

7
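The upload and mount steps can also be sketched in PowerCLI; the datastore, VM and file names are examples only:

```powershell
# PowerCLI sketch: upload the ISO and attach it to a VM
Connect-VIServer vcenter.lab.local

# Upload through the datastore provider
Copy-DatastoreItem -Item C:\applications.iso `
    -Destination vmstore:\Datacenter\Datastore01\

# Point the VM's CD/DVD drive at the ISO and connect it
Get-VM "TestVM" | Get-CDDrive |
    Set-CDDrive -IsoPath "[Datastore01] applications.iso" -Connected $true
```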

 

 

 

 

How to migrate iSCSI storage from a standard switch to a distributed switch

This part makes up the end of my first post on this subject found here

Next up is migrating the iSCSI initiators. This can be done without downtime if I have spare NICs to work with. If not then I will have to disconnect the storage to get the NICs off the iSCSI adapters, and this will cause downtime.

I have spare NICs to work with so I will migrate the iSCSI storage without downtime.

So remember that my storage network has two NICs utilized to enable failover capability. For this reason I can safely remove one NIC.

1

 

  • I will jump back into the vSphere client for a moment to do this. Select a host in vCenter and go to Configuration -> Storage Adapters and right-click the iSCSI Software Adapter and select Properties.

2

 

  • Select a vmkernel adapter and press Remove and then Close. vSphere wants to rescan the adapters after this.

3
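For reference, the same removal can be done from the ESXi shell; the adapter and VMkernel names below are examples:

```shell
# Remove one VMkernel NIC from the software iSCSI adapter's port binding
esxcli iscsi networkportal remove --adapter=vmhba33 --nic=vmk1

# Rescan the adapter afterwards, as vSphere suggests
esxcli storage core adapter rescan --adapter=vmhba33
```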

 

  • Next go into Properties of the virtual switch with the iSCSI initiators.

4

 

  • Remove the same iSCSI initiator that you removed from the iSCSI software adapter. Do not mind the warning, as long as you remove the appropriate initiator.

5

 

  • Remove the appropriate NIC from the switch.

6

 

  • So now the storage switch has only one active NIC left, and the high availability setup is gone for the moment. This is also why there is an exclamation mark on the host in vCenter: it lost its uplink network redundancy. Do this for all hosts.

7

 

  • So back in the vSphere web client I now have two available NICs to use. They have not been assigned to any switches yet.

8

 

  • I have already prepared the distributed switch with two VMkernel adapters (iSCSI initiator 1 and 2). Now press the Add hosts icon.

9

 

  • Add both hosts to the switch and on the Select network adapter tasks page select Manage physical adapters.

10

 

  • Assign each of the two free NICs to each its own uplink on both hosts.

11

 

  • Go to Edit on iSCSI-initiator 1.

12

 

  • On the Teaming and failover page make Uplink 1 the only active uplink and move the rest to Unused. It is very important to move them to Unused and not Standby, otherwise it will not work, because iSCSI port binding requires that each VMkernel adapter has exactly one active uplink. This way the VMkernel adapters become compliant for iSCSI binding.

13

 

  • Do the same for iSCSI-initiator 2 but this time make Uplink 2 the only active adapter and move the rest to Unused.

14

 

  • In Hosts and Clusters select a host, go to Manage -> Networking -> Virtual Switches and press the Add host networking icon.

15

 

  • Select VMkernel Network Adapter and press Next.

16

 

  • Select Select an existing distributed port group and press Browse.

17

 

  • Select the first iSCSI initiator and press OK.

18

 

  • Leave everything at default on the Port properties page.

19

 

  • Select Use static IPv4 settings and type an IP address and a subnet mask and press Next.

21

 

  • Do this for both iSCSI adapters on both ESXi hosts. Check below for what it should look like on one host.

22

 

  • Now go to Hosts and Clusters, select a host and go to Manage -> Storage -> Storage Adapters and select the iSCSI Software Adapter. Go to Network Port Binding and press the Add icon.

23

 

  • Add both iSCSI initiators to the binding port group.

24

 

  • So it will look like this;

25

 

  • Do this for the second host as well and remember to rescan the iSCSI adapter on both hosts and it will eventually look like this;

26
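The port binding and rescan can likewise be scripted with esxcli; adapter and VMkernel names are examples:

```shell
# Bind the two distributed-switch VMkernel adapters to the
# software iSCSI adapter and rescan
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3
esxcli storage core adapter rescan --adapter=vmhba33

# Confirm both bindings are present
esxcli iscsi networkportal list --adapter=vmhba33
```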

 

  • You can now safely remove the iSCSI port binding from the standard switch.

27

 

  • And you can safely remove the iSCSI vmkernel port and associated NIC.

28

 

So the essential part of “migrating” the iSCSI connections from a standard switch to a distributed switch is to make sure replacement VMkernel port bindings are available to the network port binding on the iSCSI software adapter. It isn’t really a migration per se, but rather an exchange of VMkernel ports and physical network adapters.

29

How to migrate vSS to vDS

I created a vSphere network based on standard switches. I put everything in the network including management, storage, VM network, vMotion and I made everything possible to failover to a second NIC to come as close to a production setup as possible.

I played around with the design and configuration of the migration to the vDS for a few days. In a CBT Nuggets video Greg Shields demonstrates how he leaves one of two NICs behind in a redundant management standard switch when he migrates the vSS to the vDS. The purpose of this was to keep a backup. However, from my own investigation it is not necessary to leave anything behind when migrating, for several reasons.

  1. The vmkernel is migrated – as in moved – from the standard switch to the distributed switch, and the vSS is thus rendered useless unless you manually create a new vmkernel for it. Also, you cannot utilize two vmkernels for management in a standard and distributed switch respectively at the same time either.
  2. Even if vCenter goes down and management of the vDS becomes impossible, everything will still keep running, because vSphere actually creates hidden standard switches for you in the background on each host. Source: VMware vSphere: Networking – Distributed Virtual Switch

Before this realization I used Greg’s method and tried to leave one NIC behind in the standard switch. This failed at first because I migrated the standby NIC and left the active NIC hanging with no VMkernel in the switch (and because I forgot to tag the standby NIC as a management adapter in the DCUI). Upon rolling back (vSphere does this automatically) vSphere set every single NIC in my environment to standby, and none of my hosts could connect to vCenter (a bug, maybe?).

So, in the network designs I have done so far I have always separated everything into its own switch, and thus haven’t felt the need to use VLAN tagging inside the virtual standard switches, only on the back-end switches. At first I migrated every single standard switch into a single vDS and used VLAN tagging in the vDS to separate the traffic, but this failed, seemingly because I did not have the back-end networking prepared for it.

So for this demonstration I will make an exact replica of my standard switch design, and then migrate everything into the distributed switches.

Because of the web client’s lack of option to show the complete network I used the C# client for that.

1

 

  • First create a distributed switch. On the web client’s homepage select vCenter -> Networking. Right-click the Datacenter and select New Distributed Switch.

2

 

  • I have named the first distributed switch dManagement and selected the newest distributed switch version available. On the Edit settings page select the number of uplinks you want in the distributed switch. Unselect Create a default port group.

Note: The number of uplinks you specify should be the maximum number of uplinks a single ESXi host can contribute to this one vDS. Think of uplinks as the slots available for an ESXi host’s NICs to plug into. Each of my two ESXi hosts has 8 physical network interface cards installed, so I should set 8 if I wish to migrate all port groups into the same switch. However, since I will create a distributed switch per port group, I only need to specify two. The number of uplinks can also be changed after deployment.

3

 

  • Select the new switch and create a new distributed port group.

4

 

  • Name the new port group Management and press Next.

5

 

  • Leave everything at default and press Next and then Finish to create the port group.

4

  • Next press the Add hosts icon.

5

 

  • Select Add hosts and press Next.

6

 

  • Press the New hosts icon and add the hosts you wish to add the new distributed switch to.

7

 

  • Select both Manage physical adapters and Manage VMkernel adapters and press Next.

8

 

  • Select the first vmnic that belongs to the management vmkernel adapter and press Assign uplink.

9

 

  • Assign vmnic0 to uplink 1 in the distributed switch and then assign vmnic1 to uplink 2.

10

 

  • So for both ESXi hosts it will look like this:

11

 

  • On the next page select the vmkernel adapter vmk0 and press Assign port group.

12

 

  • Since my design dictates one port group per switch, there is only one to select.

13

 

  • So it is pretty straightforward. Make sure the VMkernel adapters from both ESXi hosts are reassigned from the standard switch to the new port group on the new distributed switch and press Next.

14

 

  • No impact on iSCSI since I will not touch this yet. Click Next and then Finish to migrate the vmkernel and the physical NICs.

15

 

  • Once the migration is done you will be able to see that the vmkernel has been removed from the standard switch (vSS), so leaving a NIC behind as a backup serves no purpose.

16
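A quick way to confirm the move from the ESXi shell is to list the VMkernel interfaces; the portset shown for vmk0 should now be the distributed switch:

```shell
# List all VMkernel interfaces on the host; after the migration
# vmk0 should report the distributed switch as its portset
esxcli network ip interface list
```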

 

  • The exact same steps are used for the vMotion VMkernel adapters. Moving forward to migrating virtual machines, create a new distributed switch. Remember you could also just create a new port group, but then I would have to separate traffic with VLAN tagging, and I like to keep things simple and easy.

17

 

  • Assign two uplink ports to the new distributed switch. It actually makes more sense to enable Network I/O Control in this scenario than on a management network; I just didn’t think of that when I created the vDS for the management network the first time.

18

 

  • Press the Add hosts icon again.

19

 

  • Select Add hosts and press Next.

6

 

  • Press the New hosts icon and add the hosts you wish to add the new distributed switch to.

7

 

  • This time around select Manage physical adapters and Migrate virtual machine networking and press Next.

20

 

  • Assign the vmnics that belong to the VM port group each to their own uplink in the new distributed switch.

21

 

  • On the Migrate VM networking page select the VMs to migrate and press Assign port group.

22

 

  • Select the one and only available distributed port group.

23

 

  • Make sure all virtual machines are migrated to the new port group and press Next and then Finish on the following page to migrate.

24

 

  • Virtual machines successfully migrated but notice that the port group has remained in the vSS.

25
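The VM migration can also be sketched in PowerCLI; the cluster, source network and target port group names are assumptions:

```powershell
# PowerCLI sketch: move all VM network adapters from the old
# standard port group to the new distributed port group
Get-VM -Location (Get-Cluster "Cluster01") |
    Get-NetworkAdapter |
    Where-Object { $_.NetworkName -eq "VM Network" } |
    Set-NetworkAdapter -Portgroup (Get-VDPortgroup "dVM-Network") -Confirm:$false
```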

Next up is migrating the iSCSI connections.

 

How to join a vCenter 5.5 server to a domain post-installation

Warning! Before jumping into changing the default Domain of a vCenter vSphere deployment, or joining the vCenter vSphere environment to a Domain post-installation – be aware that you may have issues upgrading your vSphere environment in the future. I experienced this! I joined the vCenter to a domain post-installation, and upon trying to upgrade from 5.5 to 6.0 an SSL naming mismatch made the upgrade abort! I was not able to solve that initially and started from scratch.

I made a vSphere deployment with two hosts and a vCenter server. I used OpenFiler as storage and I did not create a domain. So far I have only tried joining a vCenter server to a domain during the installation but never after. I heard it could be tricky to do it, so I set out to try it for myself.

  • After the vSphere deployment I made a new 2012 R2 server and promoted it to a domain controller.
  • I configured DNS on it.
  • I configured the DNS settings on the vCenter server and joined it to the new domain.
  • Login to the vCenter server with the web client and use the SSO master credentials configured during installation.

1

  • Go to Administration.

2

  • Under Single Sign-On go to Configuration and from here you will see the current configured identity sources including the default vsphere.local domain. Press the green plus icon to configure a new identity source.

3

  • Select Active Directory (Integrated Windows Authentication), type the name of the domain and select Use SPN for a Security Token Service account for VMware to use as an authentication service if you wish. For this installation I selected Use this machine account.

More about the SPN here at VMware KB: 2058298 

4

  • Under Single Sign-On select Configuration and make the new domain the default one if you wish.

5

  • Now if you go to Users and Groups and select Administrators you will see that only one user is able to modify SSO settings. If the password for this user is somehow forgotten you will have trouble modifying the SSO settings later, so specify an alternative group by pressing the Add Group Members icon. Remember this is not the same as assigning a group vCenter administrator permissions.

6

  • First select the domain from the drop-down menu, then search for the domain group you would like to grant access to modify SSO in vCenter. Finally press Add and then OK.

7

  • To assign a domain group administrative privileges to the vCenter server itself go to the Home page of the vCenter server and select vCenter -> vCenter servers and select the vCenter server installed. Then go to the Manage tab and select Permissions. Here press the green plus icon to add another group.

8

  • From the Assigned Role drop-down menu select Administrator and then press Add.

9

  • Select the domain from the drop-down menu and again add the desired group.

10

  • The desired domain group should appear in the list, and that is it. The vCenter server is now part of a new domain. I found no issues during this configuration, so it was not tricky at all.

11

Source: CBT Nuggets VMware vSphere 5.5 VCA-DCV.VCP5-DCV

Running the vSphere Web Client on Windows Server 2012

Running the vSphere Web Client requires Adobe Flash Player. On Server 2008 I usually copy an offline installer onto the desktop and install Flash Player. On Server 2012 Flash Player comes pre-installed and just has to be enabled first.

1

  • In Server Manager go to Manage -> Add Roles and Features.

2

  • Fast forward to the feature selection page, expand User Interfaces and Infrastructure and select Desktop Experience. Install and done!

3
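The same feature can be installed from PowerShell; note that a reboot is required afterwards:

```powershell
# Install the Desktop Experience feature (enables the bundled Flash Player)
Install-WindowsFeature Desktop-Experience -Restart
```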

How to import a VMware Workstation 6 VM into a vSphere environment

So a colleague asked me how he would go about exporting a VM he had in VMware Workstation 6, and then import it onto a vSphere 5.5 platform. I told him to remove the VM from the inventory of Workstation 6 so he didn’t risk the vmx file being locked. Then upload the folder directly to a Datastore in vSphere and right-click the vmx file and register it… Except that didn’t really work!:)

I did not know Workstation 6 was not supported by any vSphere version. Although simple I have outlined the steps below to properly import a hardware version 6 VM into vSphere.

VMware Workstation 6 was released in 2007, and any virtual machine created in version 6 will have hardware version 6 as well. Hardware version 6 is not supported by ESXi, so a VM from Workstation 6 first has to be upgraded before it can be imported into vSphere.

1

  • If you open the vmx file of the virtual machine created in Workstation 6 you can confirm the hardware version.

2
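If you just want to check the version without opening the file in an editor, a quick sketch (the .vmx path is an example):

```shell
# Print the hardware version line from the VM's configuration file;
# a Workstation 6 VM will show: virtualHW.version = "6"
grep -i "virtualHW.version" "/path/to/My VM.vmx"
```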

To upgrade the hardware version I used Workstation 11.

  • Go to File -> Open in Workstation 11 and navigate and select the virtual machine you want to import.

3

  • When imported make sure the VM is powered off and then right-click it and go to Manage -> Change Hardware Compatibility.

4

  • On the Hardware Compatibility selection screen choose any version above 6. I recommend version 8, as version 10 will force you into the web client.

5

Finish the conversion of the hardware upgrade.

6

  • Power on the VM and then right-click and select Update VMware Tools. If Workstation asks you whether you moved or copied it then select “I moved it”.

7

  • When VMware Tools has been upgraded, shut down the VM; it is now ready to be imported into vSphere.
  • First export the VM from Workstation by going to File -> Export to OVF.

8
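The export can also be done with VMware's ovftool utility; the paths are examples:

```shell
# Convert the Workstation VM to an OVF package
ovftool "C:\VMs\MyVM\MyVM.vmx" "C:\Export\MyVM.ovf"
```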

  • In vCenter go to File and select Deploy OVF Template.

9

Doing a hard copy upload of the VM files directly from the directory of the VM repository in Workstation to a Datastore in vSphere will not work.

  • Follow the import wizard in vSphere to finish, and the VM will be ready for use.