vAnders.se

— Virtual and software defined stuff. And more! —



Can't add cluster in OnCommand Performance Manager 1.1

I installed an instance of the OnCommand Performance Manager 1.1 appliance for evaluation purposes, and if you haven't tried it yet, I recommend you do. Performance Manager together with the OnCommand Unified Manager appliance is excellent if you are running clustered Data ONTAP.

I decided to deploy it in production, so I destroyed the appliance and deployed a new, fresh one. A bit too quickly, it turned out, since I had added two clusters to it and didn't remove them before throwing away the virtual machine.

If you followed my (bad) example, you get the following error message when you try to add your cDOT clusters during the Performance Manager setup guide:

 

Error:

Cluster <xxx> is currently managed by the following instance of OnCommand Performance Manager:

URL: https://<your_cluster>:443
System ID: <uuid-here>

Managing a cluster with multiple instances of the same application will impact the performance of the cluster.

You must remove cluster <xxx> from the instance of OnCommand Performance Manager above before adding it to this instance of OnCommand Performance Manager.

 

OK, so I should have removed the clusters. I know! But even worse, those two clusters were the only clusters I have, and I'm now stuck and can't finish the setup guide.

 

Solution (Thanks TKENDALL)

Go into “diag” mode on your cluster:  set -privilege diag

Run  application-record show

This should show you the OPM that the cluster is associated with.

 

Run  application-record delete -name <Record Name>
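Put together, the whole session looks something like this. The record name and the layout of the show output are illustrative only; your output will point at your old OPM instance. Dropping back to admin privilege at the end is just good hygiene:

    cluster1::> set -privilege diag
    cluster1::*> application-record show
    Name        Type          URI
    ----------- ------------- -------------------------------
    opm-old     performance   https://opm-old.example.com:443
    cluster1::*> application-record delete -name opm-old
    cluster1::*> set -privilege admin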

You should now be able to add this cluster to your new OPM.

 


SnapDrive and Cluster disk creation

Several blogs tell you to add shared disks to your Windows Failover Cluster nodes one at a time. If your disks reside on NetApp storage you can speed this up by adding disks to both nodes at the same time.

Just make sure you have created your cluster before adding disks. Then use SnapDrive to create a new disk or connect to an existing one. You will now be able to create LUN mappings for both cluster nodes at the same time.
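If you prefer the command line over the SnapDrive snap-in, sdcli should be able to do the same thing in one go. This is a sketch from memory and the flags differ between SnapDrive versions, so verify with sdcli disk help first; the LUN path, drive letter, size and igroup names below are placeholders:

    rem Create a shared disk and map it to both cluster nodes at once
    sdcli disk create -p \\filer01\vol_sql\lun_data -d G -z 100GB -dtype shared -IG node1 ig_node1 -IG node2 ig_node2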

 

 



Clustering SQL Server 2012 on VMware vSphere – Planning

Clustering your SQL Servers can increase availability for your databases, but first of all it lets you perform maintenance on your SQL Server without downtime. Database servers have a lot of dependencies and can be really tough to reboot in a production environment. If you also have two VMware clusters running on separate hardware, then you're a few steps closer to full redundancy.

I'm currently working on a clustering solution on behalf of a client: two virtual Windows Server 2012 R2 nodes with SQL Server 2012, on top of vSphere 5.5, with shared iSCSI storage on NetApp.

Clustering in a virtual environment is fully supported by Microsoft and VMware. Great! Then I found this KB article on the VMware knowledge base where you can find out the real truth. What do you know, DRS and vMotion are NOT supported. Everything will work as expected, except that you won't be able to move your VMs around between hosts.

http://kb.vmware.com/kb/1037959

 

Other things I have stumbled upon

  • The paravirtual SCSI adapter (PVSCSI) is not supported in a Microsoft failover cluster (MSCS).
  • Use separate disk controllers for your local drives.
  • You cannot use Managed Service Accounts (MSAs or gMSAs) in a cluster.
  • TempDB is now supported on local disk, because it is recreated every time you restart the SQL Server service.
  • Use mount points instead of drive letters. Best practice is to create a separate disk that holds all of your mount points, and it doesn't have to be big. However, it must be over 3 GB if one of the mount points is where you will install the SQL binaries: the installer checks free space on the root disk and can't see that your mounted disks, one step down in the tree structure, are large enough. I read in Microsoft's documentation that mount points are totally transparent. No, not really! (See the PowerShell sketch after this list.)
  • Manage your shared disks through SnapDrive. ALWAYS! Do not remove disks in Failover Cluster Manager. If you do, you can look forward to a few extra hours of disk removal, cleaning up LUN mappings and starting over from the beginning with adding your disks (speaking from experience here).
  • Sometimes the SQL Server installer can't move your cluster disks to the SQL Server cluster role and stops with an error message and an incomplete installation. This requires an uninstall and a reboot of the Windows node you're working on before you are back on track. Create a new cluster role and choose “Create empty role”. Rename it to “SQL Server (MSSQLSERVER)”, which is the default name the installer would suggest. Now you can move your disks to the new role, and the next time you run the installer it will detect this and skip that part. (There is a PowerShell sketch of this after the list too.)
    Read more in this article by Chandra550 (many thanks)
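For the mount point bullet above, here is a minimal PowerShell sketch of mounting a data disk into a folder instead of giving it a drive letter. The disk number, the M: mount-root drive and the folder name are made up for the example, and on a SnapDrive-managed setup you would of course let SnapDrive create and mount the disk instead:

    # Assumption: M: is the small mount-root disk, disk 5 is the data LUN
    New-Item -Path 'M:\SQLData' -ItemType Directory

    # Prepare the data disk with a partition that gets no drive letter
    Initialize-Disk -Number 5 -PartitionStyle GPT
    $part = New-Partition -DiskNumber 5 -UseMaximumSize
    Format-Volume -Partition $part -FileSystem NTFS -NewFileSystemLabel 'SQLData' -Confirm:$false

    # Attach the new volume to the folder on the mount-root disk
    Add-PartitionAccessPath -DiskNumber 5 -PartitionNumber $part.PartitionNumber -AccessPath 'M:\SQLData'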
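The empty-role workaround can also be scripted with the FailoverClusters module. The cluster disk name below is a placeholder; the role name is the default the installer expects:

    # Create an empty role with the default instance name the SQL installer suggests
    Import-Module FailoverClusters
    Add-ClusterGroup -Name 'SQL Server (MSSQLSERVER)'

    # Move the shared disks into the new role
    Move-ClusterResource -Name 'Cluster Disk 2' -Group 'SQL Server (MSSQLSERVER)'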

 

To be continued…