The Internet economy is massive: if it were a country it would rank as the fifth largest in the world, barely edging out Germany. In developed countries, the Internet contributes 5-9 percent of GDP. Meanwhile, in developing countries the Internet's contribution to GDP is smaller but growing by more than 25 percent annually. You get the point – the Internet is big, growing, and undoubtedly important. Unfortunately, the Internet is also extremely vulnerable.
Hackers and typos alike have taken down some of the most visible Internet properties in the world. It’s inevitable that at some point, something is going to happen that will cause an outage. That’s why application developers, security professionals, and operations teams are increasingly looking to build infrastructure that focuses on resiliency. DNS is being leveraged as a key component of a resilient application strategy, and ultimately a more reliable Internet.
DNS was introduced in the early 1980s in response to the limitations that a primitive “HOSTS.TXT” system posed for finding resources on a rapidly growing Internet. Over time, DNS has undergone some significant improvements, but at its core it still does the same thing – connect end users with applications and websites by converting human-readable domain names to IP addresses and other service information.
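That core name-to-address step is visible from any language's standard library. As a minimal illustration (using Python's stdlib resolver, which delegates to the system's DNS machinery):

```python
import socket

def resolve(hostname: str) -> list[str]:
    """Resolve a hostname to its IP addresses - the core job DNS
    has performed since it replaced HOSTS.TXT in the early 1980s."""
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    # Each result tuple carries the address in position 4; dedupe and sort.
    return sorted({info[4][0] for info in infos})

print(resolve("localhost"))  # e.g. ['127.0.0.1', ...]
```

The same call resolves any public domain name; everything the rest of this piece describes happens behind that one lookup.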
The underlying beauty of DNS is that it was designed to simplify the user experience as well as function as a massive database directory for the majority of Internet traffic, making it both ubiquitous in the technology stack and widely adopted by end users. Within the cacophony of protocols that make up the Internet, DNS and HTTP are among the most commonly used every day. This makes DNS an enormously critical, yet often overlooked, component of modern applications.
Because DNS is so widely used and is typically the first point of contact between a user and an application, it can be utilized to extend the plane of control far beyond the confines of the data center – where legacy appliances tend to dominate – to the extreme edge of the Internet. Developers are now managing failover, disaster recovery, load-shedding, load-balancing, and traffic management functions at points much closer to the end user. By doing so, they are improving performance while creating a new orchestration layer far away from the core application itself, thereby bolstering resiliency.
Additionally, traditional approaches to managing Internet traffic are overly simplistic and can take action only on a limited number of parameters, such as a server's up/down state or the geo-distance between a user and the server. For modern, distributed applications, infrastructure performance isn't just black and white, and there is a great deal of value in utilizing the gray.
For instance, if your server in Tokyo can service 30,000 simultaneous connections, and you see a spike in traffic at 4 p.m. due to a major news event, you may want to redirect a portion of your traffic. The traffic could be sent to your servers in Delhi, where traffic volume is lower, before Tokyo reaches its limit and experiences performance degradation. Even though Delhi may be further away, it may be a better path for end users and certainly is a better alternative than overloading Tokyo.
The key here is integrating telemetry from your Tokyo infrastructure with your DNS layer, thereby enabling you to automatically shed load to another location before ever reaching a critical state in Tokyo. Taking it a step further, you can automate your systems in Tokyo to spin up new infrastructure and then bring that rerouted Delhi traffic back to Tokyo, truly making DNS part of your DevOps ecosystem.
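The Tokyo/Delhi logic above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical telemetry feed of live connection counts and a DNS provider API that accepts per-region answer weights; the region names, capacities, and threshold are placeholders, not a real product's interface:

```python
# Hypothetical capacities and shed threshold for the example regions.
CAPACITY = {"tokyo": 30_000, "delhi": 30_000}
SHED_THRESHOLD = 0.8  # begin shedding load at 80% of a region's capacity

def dns_weights(active_connections: dict) -> dict:
    """Turn live connection counts into normalized DNS answer weights.

    A region keeps full weight below the threshold, then its weight
    falls linearly to zero as it approaches capacity, so traffic is
    steered away before the region degrades."""
    weights = {}
    for region, cap in CAPACITY.items():
        load = active_connections.get(region, 0) / cap
        excess = max(0.0, (load - SHED_THRESHOLD) / (1.0 - SHED_THRESHOLD))
        weights[region] = max(0.0, 1.0 - excess)
    total = sum(weights.values()) or 1.0
    return {region: w / total for region, w in weights.items()}

# 4 p.m. news spike: Tokyo at 27,000 of 30,000 connections, Delhi quiet.
# Roughly a third of new lookups stay on Tokyo; the rest shift to Delhi.
print(dns_weights({"tokyo": 27_000, "delhi": 6_000}))
```

A real deployment would push these weights to the DNS provider on a short interval and pair them with low TTLs so resolvers pick up the shift quickly.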
The application of DNS-based advanced traffic management isn't limited to managing spikes in traffic. DNS can also minimize the negative impacts of unplanned outages, allow you to proactively reroute traffic for planned maintenance, rapidly respond to configuration issues, and even provide an extra layer of control in the event of a malicious attack.
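The outage-response half of this is, at its simplest, a health check driving the answer set. A minimal sketch, assuming hypothetical primary and standby addresses (documentation IPs) and health signals supplied by whatever monitoring is in place:

```python
# Hypothetical primary and standby endpoints (RFC 5737 documentation IPs).
PRIMARY = "203.0.113.10"
SECONDARY = "198.51.100.20"

def answer(primary_healthy: bool, secondary_healthy: bool) -> list[str]:
    """Return the A-record set a resilient DNS layer would serve,
    failing over before traffic ever reaches a dead data center."""
    if primary_healthy:
        return [PRIMARY]
    if secondary_healthy:
        # Unplanned outage or planned maintenance: reroute at the edge.
        return [SECONDARY]
    # Both unhealthy: serve both and let clients retry across them.
    return [PRIMARY, SECONDARY]

print(answer(primary_healthy=False, secondary_healthy=True))
```

The same switch works for planned maintenance (mark the primary unhealthy ahead of the window) and for attack response (swap the answer to a scrubbing provider's address).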
Truth be told, you can undertake similar traffic management actions, albeit not at a macro-level, using traditional appliances inside the data center — which is how many legacy applications are presently designed. However, from a cost, resiliency, and performance standpoint, it’s not even close – DNS is superior because you can leverage ISP infrastructure versus your own to route traffic, you aren’t reliant on a single Infrastructure as a Service vendor or the health of a physical data center, and you can take action long before traffic hits your data center or application.
For forward-thinking application developers, DNS isn't a passive player in the stack: it's the front line. There is no silver bullet when it comes to application resiliency, and solving the inherent vulnerabilities of the Internet is still a work in progress. DNS done right provides another powerful lever in the application stack that enables developers to improve end user experience, mitigate outages, and gain greater control over applications and infrastructure. And when you are dealing with something as universally important to the global economy as the Internet, every lever matters.