vAnders.se

— Virtual and software defined stuff. And more! —



Do NOT overprovision your virtual machines

It can never be said enough! Don't overprovision your VMs. There, I said it again.
Overprovisioned VMs are probably the most common problem in vSphere environments around the globe and, sadly, the business application industry still hasn't grasped this and continues to demand monster VMs to support its products.

But there comes a time when you are forced to troubleshoot performance in your environment, even though your expensive, state-of-the-art datacenter was designed to run for years to come. Suddenly you're running out of resources in your cluster(s).

To prevent this from happening, a few golden rules can come in handy.

Enable hot-add

Make sure you have enabled the hot-add feature for vCPU and vRAM on your VMs. This can only be done while the virtual machine is powered off, but it's better to power it off once than every time you want to change resources. Windows Server 2008 and newer supports both features. See VMware KB 2051989 for a complete support matrix.
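
If you would rather script this than click through the vSphere Client, a rough pyVmomi sketch along these lines could do it. The vCenter address, credentials and the VM name "MyVM" are placeholders of mine, not anything from the KB article, and remember that the VM has to be powered off when you reconfigure it.

# Rough pyVmomi sketch: enable CPU and memory hot-add on a powered-off VM.
# vCenter address, credentials and the VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

# Find the VM by name
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "MyVM")

# Hot-add can only be enabled while the VM is powered off
if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOff:
    spec = vim.vm.ConfigSpec(cpuHotAddEnabled=True, memoryHotAddEnabled=True)
    vm.ReconfigVM_Task(spec)

Disconnect(si)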

vRAM is better

Always try more vRAM before adding extra vCPUs. It's basically the same rule that has been used for many years to speed up physical Windows computers.

Easy on the vCPUs

More vCPUs can improve performance, but only up to a point. The CPU scheduler in your hypervisor has to schedule the instructions from all of a VM's vCPUs at the same time, and the more vCPUs a VM has, the harder this becomes.
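
A quick way to see if a VM is already having trouble getting scheduled is to look at its CPU ready time. Here is a rough pyVmomi sketch of how that could be pulled from vCenter; the connection details and the VM name are placeholders of mine. If ready time is high, the VM needs fewer vCPUs, not more.

# Rough pyVmomi sketch: read realtime CPU ready for one VM.
# Connection details and the VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()
perf_mgr = content.perfManager

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "MyVM")

# Find the counter id for cpu.ready.summation (milliseconds per 20-second realtime sample)
counters = {"%s.%s.%s" % (c.groupInfo.key, c.nameInfo.key, c.rollupType): c.key
            for c in perf_mgr.perfCounter}
ready_id = counters["cpu.ready.summation"]

metric = vim.PerformanceManager.MetricId(counterId=ready_id, instance="")
query = vim.PerformanceManager.QuerySpec(entity=vm, metricId=[metric],
                                         intervalId=20, maxSample=15)
result = perf_mgr.QueryPerf(querySpec=[query])

# Ready % = ready_ms / 20000 * 100; divide by the number of vCPUs for a per-vCPU figure
for sample in result[0].value[0].value:
    print("CPU ready: %.1f %%" % (sample / 20000.0 * 100))

Disconnect(si)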

Conclusion

So how do I respond to my supplier's demands? The answer is: you don't. Instead, suggest a compromise where you start at a “reasonable” level and add more vCPUs and vRAM if and when they are really needed.




Can't add cluster in OnCommand Performance Manager 1.1

I installed an instance of the OnCommand Performance Manager 1.1 appliance for evaluation purposes, and if you haven't tried it yet, I recommend you do. Performance Manager together with the OnCommand Unified Manager appliance is excellent if you are running Clustered Data ONTAP.

I decided to deploy it in production, so I destroyed the appliance and deployed a new, fresh one. A bit too quickly, it turned out, since I had added two clusters to it and didn't remove them before throwing away the virtual machine.

If you followed my (bad) example, you will get the following error message when you try to add your cDOT clusters during the Performance Manager setup guide:

 

Error:

Cluster <xxx> is currently managed by the following instance of OnCommand Performance Manager:

URL:                       https://<your_cluster>:443

System ID:          <uuid-here>

Managing a cluster with multiple instances of the same application will impact the performance of the cluster.

You must remove cluster <xxx> from the instance of OnCommand Performance Manager above before adding it to this instance of OnCommand Performance Manager.

 

OK, so I should have removed the clusters. I know! But even worse, those two clusters were the only clusters I have, and I'm now stuck and can't finish the setup guide.

 

Solution (Thanks TKENDALL)

Go into “diag” mode on your cluster:  set -privilege diag

Run  application-record show

This should show you the OPM that the cluster is associated with.

 

Run  application-record delete -name <Record Name>

You should now be able to add this cluster to your new OPM.

 



SnapDrive and Cluster disk creation

Several blogs tell you to add shared disks to your Windows Failover Cluster nodes one at a time. If your disks reside on NetApp storage, you can speed this up by adding disks to both nodes at the same time.

Just make sure you have created your cluster before adding disks. Then use SnapDrive to create a new disk or connect to an existing one. You will now be able to create LUN mappings for both cluster nodes at the same time.

 

 



Clustering SQL Server 2012 on VMware vSphere – Planning

Clustering your SQL Servers can increase availability for your databases but, first of all, it lets you perform maintenance on your SQL Server without downtime. Database servers have a lot of dependencies and can be really tough to reboot in a production environment. If you also have two VMware clusters running on separate hardware, then you're a few steps closer to full redundancy.

I'm currently working on a clustering solution on behalf of a client: two virtual Windows Server 2012 R2 nodes with SQL Server 2012, on top of vSphere 5.5 and with shared iSCSI storage on NetApp.

Clustering in a virtual environment is fully supported by Microsoft and VMware. Great! Then I found this KB article in the VMware knowledge base where you can find out the real truth. What do you know, DRS and vMotion are NOT supported. Everything will work as expected, except that you won't be able to move your VMs around between hosts.

http://kb.vmware.com/kb/1037959
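
Since DRS won't move these nodes anyway, one thing you can do is disable DRS automation for just the cluster node VMs so DRS leaves them alone. Below is a rough pyVmomi sketch of such a per-VM override; the cluster name "Cluster01", the VM name "SQLNODE1" and the connection details are placeholders of mine, not anything from the KB article, so check the article for what is actually supported in your version.

# Rough pyVmomi sketch: per-VM DRS override set to disabled for an MSCS node.
# Cluster name, VM name and connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource, vim.VirtualMachine], True)
cluster = next(o for o in view.view
               if isinstance(o, vim.ClusterComputeResource) and o.name == "Cluster01")
vm = next(o for o in view.view
          if isinstance(o, vim.VirtualMachine) and o.name == "SQLNODE1")

# Add a DRS override that disables automation for this one VM
override = vim.cluster.DrsVmConfigInfo(key=vm, enabled=False)
vm_spec = vim.cluster.DrsVmConfigSpec(operation="add", info=override)
spec = vim.cluster.ConfigSpecEx(drsVmConfigSpec=[vm_spec])
cluster.ReconfigureComputeResource_Task(spec, modify=True)

Disconnect(si)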

 

Other things I have stumbled upon

  • The paravirtual SCSI adapter (PVSCSI) is not supported in a Microsoft failover cluster (MSCS).
  • Use separate disk controllers for your local drives.
  • You cannot use Managed Service Accounts (MSAs or gMSAs) in a cluster.
  • TempDB is now supported on local disk because it is recreated every time the SQL Server service restarts.
  • Use mount points instead of drive letters. Best practice is to create a separate disk that holds all of your mount points, and it doesn't have to be big. However, it must be over 3 GB if one of the mount points is where you will install the SQL binaries. The installer checks free space on the root disk and can't see that your mounted disks, one step down in the tree structure, are large enough. I read in Microsoft's documentation that mount points are totally transparent. No, not really!
  • Manage your shared disks through SnapDrive. ALWAYS! Do not remove disks in Failover Cluster Manager. If you do, you can look forward to a few extra hours of disk removal, cleaning up LUN mappings and starting over from the beginning with adding your disks (speaking from experience here).
  • Sometimes the SQL Server installer can't move your cluster disks to the SQL Server cluster role and stops with an error message and an incomplete installation. This requires an uninstall and a reboot of the Windows node you're working on before you are back on track. Create a new cluster role and choose “Create Empty Role”. Rename it to “SQL Server (MSSQLSERVER)”, which is the default name the installer would suggest. Now you can move your disks to the new role, and the next time you run the installer it will detect this and skip this part.
    Read more in this article by Chandra550 (many thanks)

 

To be continued…