Over the past few releases, SAS has offered high availability for servers through various failover techniques. So I’ve been wondering how metadata clustering differs and why SAS 9.4 provides it.
The “why” is an easy question to answer. Today’s SAS software is used in a wide array of business-critical applications that require SAS servers to be up and running 24-7. Achieving this level of availability for customers meant implementing a more robust approach.
Answering the “what” took a little more study on my part. Here are some key take-aways from what I’ve learned about SAS 9.4 metadata clustering:
Server failure—feels like it never happened! SAS 9.4 users can enjoy minimal disruption in service because SAS 9.4 has eliminated single points of failure for metadata servers. Each server, or node, in a cluster (minimum of three) is a full metadata server instance with a complete copy of all metadata. When one node in the cluster fails, requests are routed to another node that’s fully capable of processing any valid request from a SAS client application.
A metadata cluster looks and acts like one server. Client applications shipped with SAS 9.4 are cluster-aware and maintain a list of all server nodes in the cluster. If a node isn’t available, the SAS client will try other nodes in the list until it makes a successful connection. Additionally, most metadata administration tasks apply to the cluster as a whole and work just as they do for an unclustered metadata server.
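The retry behavior described above can be sketched in a few lines of Python. This is purely illustrative: the node names, the `connect` callable, and the function name are all hypothetical, and the real cluster-aware connection logic is internal to the SAS clients.

```python
def connect_to_cluster(nodes, connect):
    """Try each node in turn; return (node, connection) for the first success.

    `connect` is any callable that takes a host name and raises OSError
    when that node is unavailable -- a stand-in for the real client logic.
    """
    last_error = None
    for host in nodes:
        try:
            return host, connect(host)
        except OSError as err:
            last_error = err  # node unavailable; fall through to the next one
    raise ConnectionError(f"no cluster node reachable: {last_error}")
```

From the client's point of view the loop is invisible: as long as any node in the list is up, the call returns a working connection.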
The cluster provides load balancing. The SAS 9.4 metadata cluster uses IOM load balancing to control access to the cluster, so any client connection may be redirected invisibly and in a round-robin fashion to another node. Additionally, you may see some improvement in performance for metadata tasks that are read-intensive.
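For a sense of what round-robin redirection means in practice, here is a minimal sketch. The class name is invented for illustration; the actual IOM load-balancing logic is more sophisticated than a bare rotation, but the distribution pattern is the same idea.

```python
import itertools

class RoundRobinBalancer:
    """Hand out cluster nodes in rotation, one per incoming connection."""

    def __init__(self, nodes):
        # cycle() repeats the node list endlessly: n1, n2, n3, n1, ...
        self._cycle = itertools.cycle(nodes)

    def next_node(self):
        """Return the node the next client connection should be sent to."""
        return next(self._cycle)
```

Each new client connection is simply handed the next node in the rotation, which spreads read-intensive metadata work evenly across the cluster.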
All metadata updates are automatically synchronized within the cluster. In SAS 9.4, a master node–slave node configuration and a journaling process ensure that changes to one metadata server are applied across all servers. If any step in that process fails, the update is backed out and an exception is returned to the client. Even after a catastrophic failure within the cluster, the cluster can be recovered as long as a single node survives—or, failing that, as long as the last backup and a single journal file do.
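The journal-then-apply-everywhere-or-back-out behavior can be sketched as follows. Everything here is a simplified stand-in: the `NodeStore` class, the tuple-based journal, and the function name are assumptions for illustration, not the actual SAS synchronization protocol.

```python
class NodeStore:
    """A stand-in for one node's metadata repository."""

    def __init__(self, fail=False):
        self.data = {}
        self.fail = fail  # simulate a node that errors mid-update

    def apply(self, key, value):
        if self.fail:
            raise RuntimeError("node failed during update")
        self.data[key] = value

    def revert(self, key):
        self.data.pop(key, None)

def replicate_update(stores, journal, key, value):
    """Journal an update, apply it to every node, and back it out on failure.

    If any node fails, the change is removed from the nodes already
    updated, the journal entry is dropped, and the exception is re-raised
    so the client sees it -- mirroring the all-or-nothing behavior above.
    """
    journal.append((key, value))      # write the journal entry first
    applied = []
    try:
        for store in stores:
            store.apply(key, value)   # may raise on a failed node
            applied.append(store)
    except Exception:
        journal.pop()                 # back out the journal entry
        for store in applied:
            store.revert(key)         # back the change out everywhere
        raise
```

Because every surviving node (or the journal plus a backup) holds the full update history, any one of them is enough to rebuild the cluster.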
Configuring the cluster is easy. You can set up the entire cluster by configuring the first server using the SAS Deployment Wizard—the same as you would with any SAS metadata server deployment. To bring the second, third and any other nodes online, choose the appropriate options from your plan file and run the Deployment Wizard again to install and synchronize nodes across the cluster. The same configuration process applies if you need to scale up at a later time.
More detail is available. Here are key sources of information about SAS metadata clustering—how the cluster provides high availability, how to convert to a cluster, performance characteristics, potential error conditions and more: