Server load balancing continues to be a core element of IT infrastructure, even as applications move from traditional data center architectures to the Cloud. As an IT professional, you know that there is always a need to intelligently distribute workloads across multiple servers, whether those servers are real or virtual, permanent or ephemeral.
Gartner considers load balancing so important to Infrastructure as a Service (IaaS) Cloud that it lists load balancing as a required network feature when evaluating IaaS providers (Gartner, Evaluation Criteria for Cloud Infrastructure as a Service, 2016).
However, there remains a chronic gap in the industry’s ability to reliably distribute workloads across multiple clouds, multiple data centers, and hybrid infrastructures. The result is maldistributed workloads and degraded application performance that could be avoided if workloads were better managed at a global level. In short, there is a need for better Global Server Load Balancing, or GSLB.
There are two basic approaches to multi-datacenter, multi-cloud GSLB. One is to use a traditional managed DNS provider for basic GSLB. This approach is easy to implement, low in cost, free of capital outlay, and reliable. Unfortunately, these solutions have only minimal GSLB capabilities, such as round-robin DNS and geographic routing. They do not prevent maldistribution of workloads because they route traffic with static rules rather than according to the actual, real-time conditions at each data center. They take into account neither the current workload nor the current capacity at each data center, both of which are key to making an intelligent traffic routing decision.
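To illustrate why static rules fall short, the following Python sketch (the IP addresses and utilization figures are hypothetical) shows a plain round-robin responder splitting queries evenly between two sites, blind to the fact that one of them is nearly saturated:

```python
from itertools import cycle

# Hypothetical data centers with very different current utilization.
datacenters = [
    {"ip": "192.0.2.10", "utilization": 0.95},    # nearly saturated
    {"ip": "198.51.100.20", "utilization": 0.30},  # mostly idle
]

# Classic round-robin DNS: rotate through the records in order,
# with no visibility into the real-time load behind each address.
rotation = cycle(datacenters)

def round_robin_answer():
    return next(rotation)["ip"]

# Ten queries split 50/50, even though one site has almost no headroom.
answers = [round_robin_answer() for _ in range(10)]
print(answers.count("192.0.2.10"), answers.count("198.51.100.20"))  # 5 5
```

Geographic routing has the same blind spot: it substitutes the querier's location for load, so a nearby but overloaded site still receives every local request.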
To address this limitation, some application delivery controller (ADC) vendors offer their own purpose-built DNS appliances that integrate tightly with their load balancers. By receiving real-time load information from the local load balancers, these appliances can make traffic management decisions based on actual utilization levels at each data center. The problem is that the customer must deploy and operate a DNS service that is globally performant, reliable, and protected from DDoS attacks. That is an operational and cost burden that many enterprises cannot accept.
NS1 offers an approach that combines the best of both worlds. NS1 ingests real-time performance telemetry from industry-leading load balancers and associates that telemetry with the DNS record corresponding to the public IP of each load balancer. It then uses that data to make advanced traffic management decisions when responding to DNS queries. The solution allows a wide range of factors to determine end-user routing, including priority, data center load, network latency, and billing metrics, to name a few.
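A minimal sketch of this kind of telemetry-driven decision is shown below. The record structure, field names, threshold, and tie-breaking rule are illustrative assumptions for this example, not NS1's actual API; the point is that each candidate DNS answer carries live telemetry, and the responder sheds traffic away from down or saturated sites:

```python
# Hypothetical telemetry attached to each candidate DNS answer,
# as might be fed from the load balancer at each data center.
answers = [
    {"ip": "192.0.2.10",    "load": 0.95, "latency_ms": 20,  "up": True},
    {"ip": "198.51.100.20", "load": 0.30, "latency_ms": 45,  "up": True},
    {"ip": "203.0.113.30",  "load": 0.10, "latency_ms": 500, "up": False},
]

LOAD_SHED_THRESHOLD = 0.90  # illustrative cutoff, not a real NS1 setting

def choose_answer(candidates):
    """Pick the answer for a DNS response using real-time telemetry."""
    # Drop sites that are down or above the load-shedding threshold.
    eligible = [a for a in candidates
                if a["up"] and a["load"] < LOAD_SHED_THRESHOLD]
    if not eligible:
        # Fail open: answering with a busy site beats not answering.
        eligible = [a for a in candidates if a["up"]]
    # Prefer the site with the most headroom; break ties on latency.
    return min(eligible, key=lambda a: (a["load"], a["latency_ms"]))["ip"]

print(choose_answer(answers))  # 198.51.100.20
```

Here the nearly saturated site and the down site are filtered out, so the mostly idle data center receives the traffic, which is exactly the maldistribution that static round-robin and geographic rules cannot prevent.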
This is all about delivering a better end-user quality of experience and driving higher rates of converting visitors into customers.
Come talk with us about your projects and goals for driving better performance and better business results. Chances are we can help you get more value out of your data centers and Cloud points of presence.