VMware vSphere

vSphere DRS and HA

Turning on DRS (Distributed Resource Scheduler) in a new cluster adds an additional option to choose from.

Automation Level has three options:

  • Manual

The DRS cluster will suggest relocations of VMs but not execute them.

  • Partially Automated

DRS will place a newly created VM based on workload, and will also do this at VM power-on, but not afterwards. So the only automation here happens when a VM is first created or powered on; afterwards it just acts like the Manual setting.

  • Fully Automated

Fully automated makes DRS move VMs around at all times, and an additional option comes with this setting:

  • Migration Threshold

A conservative threshold settles for a partially imbalanced cluster, while an aggressive one tries to come as close to a perfectly balanced cluster as possible. Moving VMs around requires resources too, so a slightly conservative threshold might be recommended, although best practice is to leave it at the default.
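To make the threshold concrete, here is a minimal sketch of the idea, assuming (as a simplification of the real algorithm) that DRS tags each migration recommendation with a priority from 1 (mandatory) to 5 (marginal benefit), and that the threshold simply filters on that priority:

```python
# Hypothetical model of the DRS migration threshold. Assumption: DRS assigns
# each migration recommendation a priority from 1 (mandatory) to 5 (marginal
# benefit), and the threshold slider decides which priorities are applied.
# Level 1 = most conservative (mandatory moves only), 5 = most aggressive.

def recommendations_to_apply(recommendations, threshold_level):
    """Keep only recommendations whose priority falls within the threshold.

    recommendations: list of (vm_name, priority) tuples, priority 1..5.
    threshold_level: 1 (conservative) .. 5 (aggressive).
    """
    return [vm for vm, priority in recommendations if priority <= threshold_level]

recs = [("vm-a", 1), ("vm-b", 3), ("vm-c", 5)]
print(recommendations_to_apply(recs, 1))  # conservative: only the mandatory move
print(recommendations_to_apply(recs, 5))  # aggressive: every recommended move
```

This is why a slightly conservative setting generates fewer migrations: the lower-priority (smaller-benefit) moves are simply never executed.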

Distributed Power Management is something that is not initially visible when first creating a cluster.

To enable this setting right-click your cluster -> Settings -> vSphere DRS and press Edit.

DPM works in conjunction with DRS to consolidate the load onto as few hosts as possible and power the rest down. DPM uses WOL, IPMI or iLO to wake a host up when its resources are needed. Be careful with this, as in some cases hosts might shut down and start up all the time, or a host might simply not wake up when it is needed.

vSphere HA comes with the option Enable admission control. Enable it if you want vSphere to prevent you from powering on so many virtual machines that they would eventually violate your cluster's failover capacity. So if you have two hosts and each of them could power 10 VMs, then you would not be able to power on more than 10 VMs, because the cluster reserves the other host's capacity for failover.

If you do not enable admission control and oversubscribe your resources, vSphere will still try to power up as many VMs as possible after a host failure, but there is no guarantee. You can use VM restart priority to boot critical VMs first.

You have to decide whether high availability is more important than running as many VMs as possible when choosing admission control.

You can base the admission control policy on how many host failures your cluster should tolerate, or on how big a percentage of resources you want to reserve for failover.

If you have a two-node setup and set Host failures cluster tolerates to 1, that is the same as reserving 50% of CPU and memory capacity for that particular setup.
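The arithmetic behind that reservation can be sketched in a few lines. This is not vSphere's actual slot-size algorithm (which derives slot sizes from per-VM reservations); it just illustrates the principle with identical hosts and a hypothetical slots_per_host figure:

```python
# Back-of-the-envelope sketch of HA admission control. NOT vSphere's real
# slot-size algorithm; it assumes identical hosts and a fixed number of
# VM "slots" per host to illustrate why capacity gets held back.

def usable_vm_slots(hosts, slots_per_host, host_failures_tolerated):
    """VMs you may power on while keeping enough spare capacity to
    restart everything after `host_failures_tolerated` hosts fail."""
    surviving_hosts = hosts - host_failures_tolerated
    return surviving_hosts * slots_per_host

def reserved_capacity_percent(hosts, host_failures_tolerated):
    """The equivalent percentage-based policy for identical hosts."""
    return 100 * host_failures_tolerated / hosts

# Two hosts, 10 VM slots each, tolerating one host failure:
print(usable_vm_slots(2, 10, 1))        # 10 VMs, not 20
print(reserved_capacity_percent(2, 1))  # 50.0 percent reserved
```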

The VM Monitoring Status comes with three options:

  • Disabled
  • VM monitoring only

VM monitoring only gives vSphere the ability to monitor VMware Tools heartbeats and reset the VM if the heartbeats stop.

  • VM monitoring and application

If you set it to also monitor applications, this must be set up in conjunction with the application vendor. So it is vendor specific and will not work right out of the box.

Now the last setting is EVC.

EVC mode makes hosts of the same CPU vendor (Intel or AMD) compatible across models/generations so you can vMotion between them. Sometimes you may want to add additional hosts to an existing cluster, but the model has gone out of production and you can only get a newer model whose CPU has lots of new features. You would then turn on EVC and set the cluster baseline to the “oldest” CPU to make them compatible, masking out all those new CPU features in the newer model.

It reminds me of Microsoft’s Domain and Forest Functional Levels, where you lower the level to match the oldest domain controller you have. So if you have a 2003 and a 2008 domain controller, you would set the functional level to 2003 in that forest.
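Conceptually, the EVC baseline is just the intersection of the hosts' CPU feature sets, with everything outside the baseline masked from the guests. A simplified sketch (the feature names are illustrative, not the actual CPUID flags EVC evaluates):

```python
# Simplified view of EVC: the cluster baseline is the set of CPU features
# every host supports, and newer features outside it are masked from VMs.
# Feature names here are illustrative, not the real CPUID bits.

def evc_baseline(host_feature_sets):
    """Intersect all hosts' feature sets to get the cluster baseline."""
    return set.intersection(*host_feature_sets)

old_host = {"sse2", "sse3", "ssse3"}
new_host = {"sse2", "sse3", "ssse3", "sse4.1", "avx"}

baseline = evc_baseline([old_host, new_host])
masked_on_new_host = new_host - baseline

print(sorted(baseline))            # features guests may rely on everywhere
print(sorted(masked_on_new_host))  # new features hidden from the guests
```

Just like the functional level analogy: the newest host keeps its hardware features, the guests simply never see them.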

If you edit vSphere HA you will see some additional settings not visible when set on a newly created cluster.

I touched on the VM restart priority earlier, and it can be overridden per VM. The next setting is the Host isolation response: what would you like to do in the event that a host becomes isolated and is unable to migrate its powered-on VMs?

  • Leave powered on
  • Power off, then failover
  • Shut down, then failover

What is a Host isolation response?

Host isolation is when the management network of a host becomes unavailable. This can happen for many reasons, and it means you cannot move VMs around while they are powered on. They will still run, because they have access to the storage network, as seen here:

To address this issue VMware introduced Datastore Heartbeating.

This helps the HA master tell an isolated host apart from a failed one, and so prevents false host-failure responses – a second layer of defense.
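The master's reasoning can be sketched roughly like this. It is a simplification of the real FDM logic, which also pings the host and checks for special files on the heartbeat datastores:

```python
# Rough sketch of how an HA master might classify a slave host using the
# two heartbeat channels. The real FDM logic is more involved (it also
# pings the host's management addresses and inspects datastore files).

def classify_host(network_heartbeat_ok, datastore_heartbeat_ok):
    if network_heartbeat_ok:
        return "alive"                    # business as usual
    if datastore_heartbeat_ok:
        return "isolated-or-partitioned"  # host runs, mgmt network lost
    return "dead"                         # restart its VMs elsewhere

print(classify_host(True, True))    # alive
print(classify_host(False, True))   # isolated-or-partitioned
print(classify_host(False, False))  # dead
```

Without the datastore channel, the middle case would be indistinguishable from a dead host, and HA could needlessly restart VMs that are still running.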

How to create and mount an NFS datastore

Unlike VMFS, NFS is not optimized specifically for virtual machines, but it is still widely used by admins. Among NFS, iSCSI and FC it is typically the lowest-performing storage option. An NFS datastore has no size limit imposed by vSphere, as opposed to VMFS, where a VMFS-5 datastore can be up to 64TB and a single virtual disk is limited to 2TB in vSphere 5.1 and 62TB in 5.5.

NFS Properties:

  • The file system on an NFS datastore is dictated by the NFS server.
  • NFS works like a fileshare.
  • No maximum size on an NFS datastore. This is dictated by the NFS server.
  • An NFS datastore can be used for both file and VM storage.
  • vMotion, HA and all the rest are supported on an NFS datastore just like on VMFS.

I have used my Domain Controller as an NFS location. NFS is file based and not block based storage.

  • In the Server Manager dashboard go to Manage and select Add Roles and Features.


  • At the Server Roles page expand File and iSCSI Services and enable Server for NFS.


  • Press Add Features to enable NFS services on the server.


  • Continue and install the feature.


  • Create a folder on the server, right-click it and choose Properties.


  • Go to the NFS Sharing tab and select Manage NFS Sharing.


  • Enable Share this folder and then select Permissions.


  • From the drop-down menu in Type of access select Read-Write and then enable Allow root access. Without root access the ESXi host will not be able to reach the NFS share. Press OK.


  • Click Apply and then OK to exit.


  • Login to the vSphere Web client and go to vCenter -> Storage and right-click your Datacenter and go to All vCenter Actions -> New Datastore.


  • Click Next.


  • Select NFS and click Next.


  • Name the datastore appropriately and type the IP of the file storage followed by the name of the folder you created and then press Next.


  • Select the host(s) you’ll want the new NFS datastore to be visible to.


  • Review and press Finish to add the NFS datastore.


NFS datastore created and mounted.


Virtual switch security policies explained

  1. Promiscuous mode

This mode allows the virtual adapter of a VM to observe traffic on the virtual switch, so traffic coming to and from other VMs can be observed. By default this is set to Reject, since it would pose a security risk, but there are situations where it is needed – for example an IDS (intrusion detection system) that needs to observe traffic and inspect packets. It can be set at both the switch and the port group level.

  2. MAC address changes

MAC address changes are concerned with blocking traffic to a VM (incoming) if its initial and effective MAC address do not match.

  3. Forged Transmits

Forged Transmits are concerned with blocking traffic from a VM (outgoing) if its initial and effective MAC address do not match.

So if both Forged Transmits and MAC address changes are set to Reject, no traffic will be allowed to or from the VM as long as its two MAC addresses do not match; if set to Accept there is no problem. The effective MAC address can change, for example when a VM is moved to another location. VMware recommends setting both to Reject for the highest level of security possible. However, if a feature such as Microsoft Network Load Balancing (NLB) in a cluster set to unicast mode is needed, then set both to Accept. A new standard virtual switch has both of these set to Accept by default.
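The combined effect of the two policies boils down to a few lines of logic. This is a simplified model (it ignores promiscuous mode and other factors), with inbound traffic governed by MAC address changes and outbound traffic by Forged transmits:

```python
# Simplified model of the two MAC-related vSwitch policies. Inbound frames
# are governed by "MAC address changes", outbound by "Forged transmits":
# with Reject, frames are dropped whenever the VM's effective MAC no longer
# matches the initial MAC recorded in its .vmx file.

def inbound_allowed(initial_mac, effective_mac, mac_changes_policy):
    if mac_changes_policy == "accept":
        return True
    return initial_mac == effective_mac

def outbound_allowed(initial_mac, effective_mac, forged_transmits_policy):
    if forged_transmits_policy == "accept":
        return True
    return initial_mac == effective_mac

# A VM whose guest OS changed its MAC (e.g. NLB unicast mode):
initial, effective = "00:50:56:aa:aa:01", "02:bf:01:02:03:04"
print(inbound_allowed(initial, effective, "reject"))   # blocked
print(outbound_allowed(initial, effective, "accept"))  # allowed
```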

Upload and install applications to a VM through vSphere

If you have some software you would like to install on a virtual machine that does not have a network connection, you can upload the software to the VM through a datastore, but only after it has been converted to the ISO format.

  • An example: there are four applications on your desktop that you would like to install on a VM. Compress all of these into a Compressed (zipped) folder.


  • Download and install the Zip2ISO application. You can also use other applications such as MagicISO or anything else you like. Drag and drop the compressed (zipped) folder onto Zip2ISO.


All applications will then be converted into a single image.


  • In the vSphere Web Client go to Home -> Storage -> select a datastore -> Files and press the Upload icon.


  • Navigate to the image you created with the Zip2ISO application and upload it. Once done you are now ready to utilize the image through the datastore you chose.
  • In the vSphere Web Client right-click a virtual machine and choose Edit Settings.
  • Use the drop-down window next to the CD/DVD drive 1 and choose Datastore ISO File.


  • Navigate to and select the image you uploaded to the datastore. Also remember to put a checkmark in Connected, otherwise the CD/DVD drive will not be connected to the VM.


  • Now simply open the CD/DVD drive in the virtual machine and the applications you converted will be there. I recommend copying them to the desktop of the VM, because the vSphere CD/DVD emulation can be slow to execute files.


How to migrate iSCSI storage from a standard switch to a distributed switch

This part makes up the end of my first post on this subject, found here.

Next up is migrating the iSCSI initiators. This can be done without downtime if I have spare NICs to work with. If not, I would have to disconnect the storage to free the NICs from the iSCSI adapters, and that would cause downtime.

I have spare NICs to work with so I will migrate the iSCSI storage without downtime.

So remember that my storage network has two NICs utilized to enable failover capability. For this reason I can safely remove one NIC.


  • I will jump back into the vSphere client for a moment to do this. Select a host in vCenter and go to Configuration -> Storage Adapters and right-click the iSCSI Software Adapter and select Properties.


  • Select a vmkernel adapter and press Remove and then Close. vSphere wants to rescan the adapters after this.


  • Next go into Properties of the virtual switch with the iSCSI initiators.


  • Remove the iSCSI initiator you removed from the iSCSI software adapter. Do not mind the warning, as long as you remove the appropriate initiator.


  • Remove the appropriate NIC from the switch.


  • So now the storage switch has only one active NIC left, and I have just broken the high availability setup. This is also why there is an exclamation mark on the host in vCenter: it has lost its uplink network redundancy. Do this for all hosts.


  • So back in the vSphere web client I now have two available NICs to use. They have not been assigned to any switches yet.


  • So I have already prepared the distributed switch with two vmkernel adapters (iSCSI initiator 1 and 2). Now press the Add Hosts icon.


  • Add both hosts to the switch and on the Select network adapter tasks page select Manage physical adapters.


  • Assign each of the two free NICs to its own uplink on both hosts.


  • Go to Edit on iSCSI-initiator 1.


  • On the Teaming and failover page make Uplink 1 the only active uplink, and move the rest to the Unused list. It is very important to move them to Unused and not Standby, otherwise it will not work, as iSCSI cannot utilize two NICs in the same vmkernel at the same time. This way the vmkernels become iSCSI compliant.


  • Do the same for iSCSI-initiator 2 but this time make Uplink 2 the only active adapter and move the rest to Unused.
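The rule these two steps enforce can be expressed very compactly. This is an illustrative check, not what vSphere runs internally: a vmkernel port group is usable for iSCSI port binding only when exactly one uplink is active and none are on standby:

```python
# Illustrative check for the iSCSI port-binding rule applied above: a
# vmkernel port group is "iSCSI compliant" only when exactly one uplink is
# active and all others are unused (standby uplinks are not allowed).

def is_iscsi_compliant(active_uplinks, standby_uplinks):
    return len(active_uplinks) == 1 and len(standby_uplinks) == 0

print(is_iscsi_compliant(["Uplink 1"], []))              # compliant
print(is_iscsi_compliant(["Uplink 1", "Uplink 2"], []))  # two active: no
print(is_iscsi_compliant(["Uplink 1"], ["Uplink 2"]))    # standby: no
```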


  • In Hosts and Clusters select a host, go to Manage -> Networking -> Virtual Switches and press the Add host networking icon.


  • Select VMkernel Network Adapter and press Next.


  • Select Select an existing distributed port group and press Browse.


  • Select the first iSCSI initiator and press OK.


  • Leave everything at default on the Port properties page.


  • Select Use static IPv4 settings and type an IP address and a subnet mask and press Next.


  • Do this for both iSCSI adapters on both ESXi hosts. Check below for what it should look like on one host.


  • Now go to Hosts and Clusters, select a host and go to Manage -> Storage -> Storage Adapters and select the iSCSI Software Adapter. Go to Network Port Binding and press the Add icon.


  • Add both iSCSI initiators to the binding port group.


  • So it will look like this;


  • Do this for the second host as well and remember to rescan the iSCSI adapter on both hosts and it will eventually look like this;


  • You can now safely remove the iSCSI port binding from the standard switch.


  • And you can safely remove the iSCSI vmkernel port and associated NIC.


So the essential part of “migrating” the iSCSI connections from a standard switch to a distributed switch is to make sure other vmkernel port bindings are available to the network port binding on the iSCSI network adapter. So it is not really a migration per se, but rather an exchange of vmkernels and physical network adapters.

How to migrate vSS to vDS

I created a vSphere network based on standard switches. I put everything in the network, including management, storage, VM network and vMotion, and I made everything capable of failing over to a second NIC to come as close to a production setup as possible.

I played around with the design and configuration of the migration to the vDS for a few days. In a CBT Nuggets video Greg Shields demonstrates how he leaves one of the two NICs behind in a redundant management standard switch when he migrates the vSS to the vDS. The purpose of this was to keep a backup. However, from my own investigation it is not necessary to leave anything behind when migrating, for several reasons.

  1. The vmkernel is migrated – as in moved – from the standard switch to the distributed switch, and the vSS is thus rendered useless unless you manually create a new vmkernel for it. Also, you cannot utilize two vmkernels for management in a standard and distributed switch respectively at the same time either.
  2. Even if vCenter goes down and managing the vDS is not possible, everything will still run fine, as vSphere actually creates the standard switches invisibly for you in the background. Source: VMware vSphere: Networking – Distributed Virtual Switch

Before this realization I used Greg’s method and tried to leave one NIC behind in the standard switch. This failed at first because I migrated the standby NIC and left the active NIC hanging with no vmkernel in the switch (and because I forgot to tag the standby NIC as a management adapter in the DCUI). Upon rolling back (vSphere does this automatically), vSphere set every single NIC in my environment to standby and none of my hosts could connect to vCenter (a bug, maybe?).

In the networking designs I have done so far I have always separated everything into its own switch, and thus have not felt the need to utilize VLAN tagging inside the virtual standard switches, only on the back-end switches. At first I migrated every single standard switch into a single vDS and utilized VLAN tagging in the vDS to separate the traffic, but this failed, seemingly because I did not have the back-end networking prepared.

So for this demonstration I will make an exact replica of my standard switch design, and then migrate everything into the distributed switches.

Because the web client lacks an option to show the complete network, I used the C# client for that.


  • First create a distributed switch. On the web client’s homepage select vCenter -> Networking. Right-click the Datacenter and select New Distributed Switch.


  • I have named the first distributed switch dMangement and selected the newest distributed switch version available. On the Edit settings page select the number of uplinks you want in the distributed switch. Unselect Create a default port group.

Note: The number of uplinks you specify should be the maximum number of uplinks a single ESXi host can contribute to a single vDS. Think of uplinks as the slots available for an ESXi host's NICs to plug into. Each of my two ESXi hosts has 8 physical network interface cards installed, so I should set 8 if I wished to migrate all port groups into the same switch. However, since I will create a distributed switch per port group, I only need to specify two. The number of uplinks can also be changed after deployment.
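The note above can be reduced to a one-line rule of thumb, sketched here with hypothetical host names:

```python
# Rule of thumb from the note above: a vDS needs as many uplink ports as
# the largest number of physical NICs any one host will plug into it.
# Host names are hypothetical.

def uplinks_needed(nics_per_host):
    """nics_per_host: dict of host name -> NICs contributed to this vDS."""
    return max(nics_per_host.values())

# Two hosts each contributing 2 NICs to a per-port-group switch:
print(uplinks_needed({"esxi01": 2, "esxi02": 2}))  # 2 uplinks suffice
# One switch for everything, one host contributing all 8 NICs:
print(uplinks_needed({"esxi01": 8, "esxi02": 4}))  # 8 uplinks needed
```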


  • Select the new switch and create a new distributed port group.


  • Name the new port group Management and press Next.


  • Leave everything at default and press Next and then Finish to create the port group.

  • Next press the Add hosts icon.


  • Select Add hosts and press Next.


  • Press the New hosts icon and add the hosts you wish to add the new distributed switch to.


  • Select both Manage physical adapters and Manage VMkernel adapters and press Next.


  • Select the first vmnic that belongs to the management vmkernel adapter and press Assign uplink.


  • Assign vmnic0 to uplink 1 in the distributed switch and then assign vmnic1 to uplink 2.


  • So for both ESXi hosts it will look like this:


  • On the next page select the vmkernel adapter vmk0 and press Assign port group.


  • Since by my design there will only be “one port group per switch”, there is only one to select.


  • So it is pretty straightforward. Make sure the vmkernels from both ESXi hosts are reassigned from the standard switch to the new vmkernel port group on the new distributed switch and press Next.


  • No impact on iSCSI since I will not touch this yet. Click Next and then Finish to migrate the vmkernel and the physical NICs.


  • So once the migration is done you will be able to see that the vmkernel has been removed from the standard switch (vSS), so leaving a NIC behind as backup serves no purpose.


  • The exact same steps apply to the vMotion vmkernel adapters. Moving on to migrating virtual machines, create a new distributed switch. Remember you could also just create a new port group, but then I would have to separate traffic with VLAN tagging, and I like to keep things simple and easy.


  • Assign two uplink ports to the new distributed switch. It actually makes more sense to enable Network I/O Control in this scenario than on a management network; I just did not think about that when I created the vDS for the management network the first time.


  • Press the Add hosts icon again.


  • Select Add hosts and press Next.


  • Press the New hosts icon and add the hosts you wish to add the new distributed switch to.


  • This time around select Manage physical adapters and Migrate virtual machine networking and press Next.


  • Assign each of the vmnics that belong to the VM port group to its own uplink in the new distributed switch.


  • On the Migrate VM networking page select the VMs to migrate and press Assign port group.


  • Select the one and only available distributed port group.


  • Make sure all virtual machines are migrated to the new port group and press Next and then Finish on the following page to migrate.


  • The virtual machines migrated successfully, but notice that the port group has remained on the vSS.

Next up is migrating the iSCSI connections.

 

How to join a vCenter 5.5 server to a domain post-installation

Warning! Before changing the default domain of a vCenter deployment, or joining the vCenter environment to a domain post-installation, be aware that you may have issues upgrading your vSphere environment in the future. I experienced this! I joined the vCenter to a domain post-installation, and upon trying to upgrade from 5.5 to 6.0 an SSL name mismatch made the upgrade abort! I was not able to solve that initially and started from scratch.

I made a vSphere deployment with two hosts and a vCenter server. I used OpenFiler as storage and I did not create a domain. So far I had only tried joining a vCenter server to a domain during the installation, never after. I heard it could be tricky, so I set out to try it for myself.

  • After the vSphere deployment I made a new 2012 R2 server and promoted it to a domain controller.
  • I configured DNS on it.
  • I configured the DNS settings on the vCenter server and joined it to the new domain.
  • Login to the vCenter server with the web client and use the SSO master credentials configured during installation.

  • Go to Administration.

  • Under Single Sign-On go to Configuration, and from here you will see the currently configured identity sources, including the default vsphere.local domain. Press the green plus icon to configure a new identity source.

  • Select Active Directory (Integrated Windows Authentication) and type the name of the domain. Select Use SPN if you wish VMware to use a Security Token Service account as an authentication service. For this installation I selected Use this machine account.

More about the SPN here at VMware KB: 2058298 

  • Under Single Sign-On select Configuration and make the new domain the default one if you wish.

  • Now if you go to Users and Groups and select Administrators you will see that there is only one user available to modify SSO settings. If the password for this user is somehow forgotten, you will have trouble modifying the SSO settings later, so specify an alternative group by pressing the Add Group Members icon. Remember this is not the same as assigning a group vCenter administrator permissions.

  • First select the domain from the drop-down menu, then search for the group you would like to grant access to modify SSO in vCenter. Finally press Add and then OK.

  • To assign a domain group administrative privileges to the vCenter server itself go to the Home page of the vCenter server and select vCenter -> vCenter servers and select the vCenter server installed. Then go to the Manage tab and select Permissions. Here press the green plus icon to add another group.

  • From the Assigned Role drop-down menu select Administrator and then press Add.

  • Select the domain from the drop-down menu and again add the desired group.

  • The desired domain group should appear in the list, and that is it. The vCenter server is now part of a new domain. I found no issues during this configuration, so it was not tricky at all.

Source: CBT Nuggets VMware vSphere 5.5 VCA-DCV.VCP5-DCV

Running the vSphere Web Client on Windows Server 2012

Running the vSphere Web Client requires Adobe Flash Player. On Server 2008 I usually copy an offline installer onto the desktop and install Flash Player. On 2012, Flash Player comes pre-installed and just has to be enabled first.

  • In Server Manager go to Manage -> Add Roles and Features.

  • Fast-forward to the feature selection page, expand User Interfaces and Infrastructure and select Desktop Experience. Install, and done!

Building a vSphere 5.5 home lab: Part 9 – iSCSI Storage

Next up is setting up storage with iSCSI. As the original article suggests I will also use Microsoft’s iSCSI solution.

  • Make VMnet3
  • Add 2 NICS from both ESXi hosts to VMnet3

First go to VMware Workstation’s Virtual Network Editor and create a VMnet3.

Add another Network Adapter to the vCenter server and assign it to VMnet3.

Assign an IP to the new NIC.

Next go to each of the ESXi servers in VMware Workstation and assign the additional network adapters previously added to VMnet3.

Open Internet Explorer on the vCenter server and login to the vSphere environment https://ip-of-vcenter-server:9443/vsphere-client. Then go to vCenter -> Hosts and Clusters -> VLAB -> VLAB Cluster -> 192.168.0.11.

Once ESXi01 has been selected click the Networking tab and press the Add Host Networking icon.

Select VMkernel Network Adapter.

Select New standard switch.

Click the green plus sign.

Add two adapters. Active adapters will not show up in the list, so do not worry about picking one already in use.

If you do not have any adapters available then add some more network adapters to the ESXi server through VMware Workstation.

What it should look like;

Name the VMkernel iSCSI_Initiator_1 and press Next.

Assign a static IP. Mine is 192.168.1.11/24.

Create a new VMkernel but add it to the existing vSwitch1.

Name it iSCSI_Initiator_2 with a static IP of 192.168.1.12.

Select vSwitch1 and the iSCSI_Initiator_2 and press the Edit icon.

Go to Teaming and failover and enable Override. Move one of the NICs from the active adapters list to the unused adapters list.

Next do the same for iSCSI_Initiator_1 so that each iSCSI initiator VMkernel utilizes its very own active network adapter.

iSCSI_Initiator_1 = Active Adapter vmnic0, Unused Adapter vmnic1
iSCSI_Initiator_2 = Active Adapter vmnic1, Unused Adapter vmnic0

Follow the same steps for ESXi02. Create a new vSwitch with two iSCSI initiators but assign them different IP addresses, for instance 192.168.1.13 and 192.168.1.14, and then override the failover order and swap the adapters around.

Next go to Storage and add a storage adapter.

Copy the IQN identifier address.

Next create a new hard disk on the vCenter server. I have a 1TB non-SSD hard disk that I am using for other stuff. I have 600GB available on that disk and I will spend 300GB on VM storage. I will not be running a lot of VMs to start with anyway, so this will suffice for now. Remember to place the VMDK file somewhere other than the SSD.

Then initialize the disk so it is usable. This will be the iSCSI storage drive E.

Next go to Server Manager and Add a new role. Expand File And Storage Services -> File and iSCSI Services and select iSCSI Target Server.

Select To create an iSCSI virtual disk, start the New iSCSI Virtual Disk Wizard.

Select the E drive created earlier.

Name it Datastore1.

Select about half of the total iSCSI drive and then select Dynamically expanding. I would not recommend Fixed size, as that would allocate all the space straight away, and since I do not know whether I will use all of the space, it is better to choose dynamic.

Select New iSCSI target.

Name it after the FQDN of the ESXi servers.

Click Add.

Select Enter a value for the selected type and paste in the IQN identifier of the first ESXi server from the vSphere environment.

Add both of the iSCSI adapter IQN identifiers. One IQN identifier from each ESXi server.

Finish the wizard with no additional configured settings.

Go to Tasks and select New iSCSI Virtual Disk.

Select the 300GB (E) drive.

Name it Datastore2.

Again assign half of the total iSCSI storage drive and select Dynamically expanding. The reason I am giving 149GB and 148GB to Datastore1 and Datastore2 respectively is to be able to identify which virtual disk needs fixing in case of a breakdown.

Choose the Existing iSCSI target.

Finish the wizard with no additional settings to create the new iSCSI virtual disk.

Go back to the vCenter and make sure the newly created iSCSI adapter is selected on ESXi01. Then go to Network Port Binding and press the Add icon.

Add both of the VMkernel iSCSI initiators.

What it should look like;

Next select the Targets tab and the Dynamic Discovery and then press Add.

Type the IP of the iSCSI server which in my case is 192.168.1.2.

Rescan the adapter and the LUNs should appear.

Repeat this on ESXi02.

Two LUNs should appear on each of the ESXi servers with 4 paths each. This will allow failover.

 

Building a vSphere 5.5 home lab: Part 1 – VMware Workstation 10 configuration

Building a vSphere 5.5 home lab: Part 2 – Base Template

Building a vSphere 5.5 home lab: Part 3 – Prepare the base template

Building a vSphere 5.5 home lab: Part 4 – Domain Controller

Building a vSphere 5.5 home lab: Part 5 – The SQL Server

Building a vSphere 5.5 home lab: Part 6 – The vCenter Server

Building a vSphere 5.5 home lab: Part 7 – Install Elastic Sky X Integrated

Building a vSphere 5.5 home lab: Part 8 – Add hosts to the vCenter

Building a vSphere 5.5 home lab: Part 9 – iSCSI Storage

 

Building a vSphere 5.5 home lab: Part 8 – Add hosts to the vCenter

Go to the vCenter server and enable the Desktop Experience. The server will restart a few times. The Desktop Experience enables the embedded Flash Player in IE11, which the vCenter server needs to access the vSphere environment through the web browser.

Type https://ip-of-vcenter-server:9443/vsphere-client in the web browser and select Continue to this website (not recommended) on the following page.

Type administrator@vsphere.local as the user name and then the SSO password.

Go to vCenter.

Then go to vCenter Servers.

Select the vCenter FQDN and press Create Datacenter. No hosts can be added without first having a datacenter.

Name it.

Click Create a cluster.

Name it and leave everything at default. Do not turn on anything yet. It can be turned on later when storage is available.

Now click Add a host.

Type the IP of the first ESXi.

Type root and the root password created during the ESXi installation.

Finish the wizard and add the 2nd host too.

Then click the house on top of the screen (Home) and then choose Hosts and Clusters.

Expand VLAB and right-click the cluster. Choose Move Hosts into Cluster.

Select both ESXi hosts.

And this is what it should look like.

 

Next up: Building a vSphere 5.5 home lab: Part 9 – iSCSI Storage
