How to Optimize Application Performance with NS1 Traffic Steering
Posted by Ben Ball on August 4, 2022
NS1 News




“I want it now!”

It’s not just something spoiled children sing. It’s what we demand every time we click on a link, stream video content, or access an online application.

The bar for application performance is constantly moving higher. Even as the volume and complexity of internet traffic balloon, we expect ever-faster response times from the services we use and the content we consume. Our trigger fingers are twitchier than they’ve ever been. We click away if what we’re looking for doesn’t appear instantly.

If you’re in the business of delivering applications and services, the fierce urgency of “now” is a logistical headache. On the back end, internet traffic has to move through a spaghetti of different clouds, CDNs, and other core services. Delivering consistently high performance requires a routing system that can optimize traffic between the services your application depends on.

NS1 uses the power of DNS to automatically steer traffic to the best-performing service available, so you can deliver the applications and services users expect. With a few basic rules, NS1 uses monitoring data to switch endpoints on the fly. You set the rules and the priorities upfront; everything after that happens automatically.

On the NS1 platform, traffic steering configurations are applied to individual records within DNS zones. These configurations determine how NS1 will handle queries against each record — specifically, which answer(s) to return. Each filter in a chain applies its own logic as the query passes through, and you can combine filters to achieve a specific outcome based on your operational or business needs.

Of course, “optimizing performance” can mean different things to different businesses. So we put together a quick guide to the different ways you can steer traffic using NS1.

Round Robin (Shuffle)

In this basic load balancing use case, NS1 distributes application traffic evenly across a series of endpoints. This keeps any one particular service from becoming overloaded and prevents overdependence on any single service provider.

The first filter in the chain is “Up”. This will tell the system whether the service provider’s endpoint is currently operational or not.

The second filter in the chain is “Shuffle”. If the “Up” filter marks an endpoint as down, that endpoint is removed from the pool and its traffic automatically flows to the remaining providers. Shuffle then distributes that traffic randomly among the service providers you’ve designated.
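The chain’s behavior can be sketched in a few lines of Python. This is a minimal simulation of the Up-then-Shuffle logic, not NS1’s actual implementation; the endpoint names and the `up` flag (which a monitoring feed would normally populate) are hypothetical.

```python
import random

# Hypothetical answer pool; the "up" flag stands in for live monitoring data.
answers = [
    {"answer": "cdn-a.example.com", "up": True},
    {"answer": "cdn-b.example.com", "up": False},  # failed its health check
    {"answer": "cdn-c.example.com", "up": True},
]

def up_filter(pool):
    """Drop any answer whose endpoint is not currently operational."""
    return [a for a in pool if a["up"]]

def shuffle_filter(pool):
    """Return the remaining answers in random order."""
    return random.sample(pool, len(pool))

def resolve(pool, limit=1):
    """Run the chain and return the top answer(s) for a query."""
    return shuffle_filter(up_filter(pool))[:limit]

print(resolve(answers))  # one of the two healthy endpoints, chosen at random
```

Because the down endpoint is filtered out before Shuffle runs, its share of traffic is absorbed by the healthy providers without any manual intervention.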

Round Robin (Shuffle) with Session Persistence

This slight variation on the first use case is used to balance traffic load while delivering a consistent application experience. NS1 uses the same logic to distribute traffic among different service providers while defaulting to the same provider for queries that originate from the same location. This prevents mid-stream changeovers simply for the sake of load balancing.

As in the first version of this use case, the first filter in the chain is “Up”. This will tell the system whether the service provider’s endpoint is currently operational or not.

The second filter in the chain is “Sticky Shuffle”. If the “Up” filter marks an endpoint as down, its traffic automatically shifts to the remaining providers. Sticky Shuffle distributes traffic randomly across those providers, but consistently returns the same answer for queries from the same source, so load balancing doesn’t reshuffle an ongoing session or connection.
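One common way to get “random but consistent per requester” is to rank answers by a hash of the requester’s identity. The sketch below assumes that approach purely for illustration; the endpoint names and the idea of keying on the client subnet are assumptions, not NS1’s documented mechanism.

```python
import hashlib

# Healthy answers left over after the "Up" filter (hypothetical endpoints).
answers = [
    {"answer": "cdn-a.example.com"},
    {"answer": "cdn-b.example.com"},
    {"answer": "cdn-c.example.com"},
]

def sticky_shuffle(pool, client_subnet):
    """Order answers deterministically per requester: the same source
    always maps to the same first answer while the answer set is stable."""
    def rank(a):
        key = (client_subnet + a["answer"]).encode()
        return hashlib.sha256(key).hexdigest()
    return sorted(pool, key=rank)

# The same source network gets the same endpoint on every query.
print(sticky_shuffle(answers, "203.0.113.0/24")[0]["answer"])
```

Different requesters still spread across the pool, but any one requester sees a stable answer until the pool itself changes (for example, when an endpoint goes down).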

Distribute Application Traffic Based on Site Capacity

In this use case, you’re distributing traffic in a more deliberate way. Where the round robin use cases described above spread queries evenly among providers, here you’re consistently favoring one service over another. This way, you can send more traffic to services that are cheaper or perform better while keeping other services available for load balancing purposes.

As before, the first filter in the chain is “Up”. This will tell the system whether the service provider’s endpoint is currently operational or not.

The second filter in the chain is “Weighted Shuffle”. If the “Up” filter marks an endpoint as down, its traffic automatically shifts to the remaining providers. Weighted Shuffle then distributes that traffic according to the weights you assign to each answer.

Another option here is “Weighted Sticky Shuffle”, which allows you to distribute traffic using a weighting system while maintaining consistency within a single user session.
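Weighted selection is easy to picture as a biased random draw. This sketch simulates the idea with an assumed 80/20 split between a hypothetical primary and backup CDN; the names and weights are illustrative only.

```python
import random

# Hypothetical weighted answers: favor the cheaper or better-performing CDN.
answers = [
    {"answer": "primary-cdn.example.com", "weight": 80},
    {"answer": "backup-cdn.example.com", "weight": 20},
]

def weighted_shuffle(pool):
    """Pick one answer with probability proportional to its weight."""
    weights = [a["weight"] for a in pool]
    return random.choices(pool, weights=weights, k=1)[0]

# Over many queries, traffic splits roughly 80/20 between the two providers.
counts = {a["answer"]: 0 for a in answers}
for _ in range(10_000):
    counts[weighted_shuffle(answers)["answer"]] += 1
print(counts)
```

Changing the weights changes the split without touching anything else in the chain, which is what makes this useful for cost or capacity tuning.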

Send User to Closest Location (Geotargeting)

Geography is often a significant factor in application performance, sometimes even more than which back-end service provider handles the query. This sequence steers traffic to endpoints based on the originating location, with several options for how granular that location matching should be.

As before, the first filter in the chain is “Up”. This will tell the system whether the service provider’s endpoint is currently operational or not.

The second filter in this chain can be one of three options:

  • Geotarget Country will narrow down the answers to service providers that match the originating country of the query. If no service provider is available in that country, this part of the chain will effectively be skipped.

  • Geotarget Region will narrow down answers to service providers whose metadata matches the geographical region of the query. If no service provider is available in that region, or the answers don’t contain regional metadata, this part of the chain will effectively be skipped.

  • Geotarget Latlong will choose the closest service provider by calculating the distance between where the query originated (as determined by a GeoIP database) and the latitude/longitude coordinates attached to each answer.
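The distance calculation behind the Latlong option is essentially a great-circle (haversine) computation. Here is a minimal sketch of that logic; the endpoint coordinates and names are hypothetical, and a real resolver would get the query’s coordinates from a GeoIP lookup rather than hard-coding them.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in kilometres."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical answers tagged with latitude/longitude metadata.
answers = [
    {"answer": "us-east.example.com", "lat": 39.0, "lon": -77.5},
    {"answer": "eu-west.example.com", "lat": 53.3, "lon": -6.3},
]

def geotarget_latlong(pool, query_lat, query_lon):
    """Return the answer whose endpoint is closest to the query's origin."""
    return min(pool, key=lambda a: haversine_km(query_lat, query_lon,
                                                a["lat"], a["lon"]))

# A query geolocated to London lands on the Dublin endpoint.
print(geotarget_latlong(answers, 51.5, -0.1)["answer"])  # eu-west.example.com
```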

Distribute Application Traffic Based on Current Site Load (Shed Load)

You’ve done the math. You’ve reviewed the contracts. The conclusion: it only makes sense to use CDNs or service providers up to a certain limit. Now you need a way to enforce those limits automatically, in real time. This is where the Shed Load filter comes in.

The Shed Load filter distributes DNS traffic across multiple endpoints based on load-related metrics that you define — such as the load average, the number of active connections, or the number of active requests. If you’re close to reaching a contract or usage limit, the Shed Load filter will steer traffic to an endpoint less often. When you hit the limit, the Shed Load filter will send traffic to other endpoints.

As before, the first filter in the chain is “Up”. This will tell the system whether the service provider’s endpoint is currently operational or not.

The second filter is “Shed Load”, which automatically steers traffic to service providers that are within the contractual or cost limits you set within the metadata for the filter. (More information about the settings for the Shed Load filter is in our NS1 documentation portal.)
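The ramp-up behavior described above can be modeled with a simple probability function. This sketch assumes low/high watermark semantics (no shedding below a low threshold, full shedding above a high one, linear in between) purely as an illustration; the thresholds, semantics, and endpoint names are assumptions, not NS1’s documented configuration.

```python
import random

def shed_probability(load, low_watermark, high_watermark):
    """Fraction of queries to shed away from a loaded endpoint.

    Below the low watermark nothing is shed; above the high watermark
    everything is shed; in between, shedding ramps up linearly.
    (These watermark semantics are an illustrative assumption.)"""
    if load <= low_watermark:
        return 0.0
    if load >= high_watermark:
        return 1.0
    return (load - low_watermark) / (high_watermark - low_watermark)

def shed_load(primary, fallback, load, low=0.5, high=0.9):
    """Send the query to `fallback` with the shed probability, else `primary`."""
    if random.random() < shed_probability(load, low, high):
        return fallback
    return primary

# At 70% load with watermarks at 50% and 90%, half the queries are shed.
print(shed_probability(0.7, 0.5, 0.9))  # 0.5
```

The effect is that traffic drains away from a near-capacity endpoint gradually rather than all at once, which avoids simply moving the overload problem to the next provider.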


See all of this in action with our recent webinar:

DNS Traffic Steering 101

