The #IoT and DNS | @ThingsExpo #BigData #IoT #M2M #DigitalTransformation
The Internet of Things will result in an increasing need for scalable DNS services
By: Lori MacVittie
Nov. 5, 2016 03:30 PM
When we talk about the impact of BYOD and BYOA and the Internet of Things, we often focus on the impact on data center architectures. That's because there will be an increasing need for authentication, for access control, for security, for application delivery as the number of potential endpoints (clients, devices, things) increases. That means scale in the data center.
What we gloss over, what we skip, is that before any of these "things" ever makes a request to access an application, it has to execute a DNS query. Every. Single. Thing.
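To make that concrete, here's a minimal sketch of the step every client performs before connecting, using Python's standard library. The hostname and port are illustrative, not taken from the article:

```python
import socket
import time

def timed_lookup(hostname: str) -> float:
    """Resolve a hostname and return the lookup time in seconds.

    Every connection a "thing" makes starts with a step like this,
    whether or not the device's firmware ever surfaces it.
    """
    start = time.perf_counter()
    socket.getaddrinfo(hostname, 443, type=socket.SOCK_STREAM)
    return time.perf_counter() - start

# Hypothetical usage (hostname is an example):
# elapsed = timed_lookup("example.com")
```

On a device with a cold cache, this lookup happens before a single byte of application traffic is exchanged.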
Maybe that's because we assume DNS can handle the load. So far it's done well. You rarely, if ever, hear of disruptions or outages caused directly by the execution of DNS. Oh, there have been issues with misconfiguration of DNS and with exploitation of DNS (hijacking, illicit use in reflection attacks, etc.), but in general you rarely see a report that a DNS service was overwhelmed by traffic and fell over.
"Success breeds complacency. Complacency breeds failure. Only the paranoid survive." - Andrew Grove.
In the face of rapidly expanding endpoints (things), it behooves us all to take a second look at DNS and ensure it's ready to meet the challenge.
This is not just about availability. Remember operational axiom #2 - as load increases, performance decreases. That's true for DNS, too. It doesn't get a pass. That's why it's called an axiom, after all, because it's kind of the law, like gravity.
Browsers do a good job of obfuscating the latency incurred by DNS, and native mobile applications never show such gory details, so it's difficult for a user to separate latency associated with an overloaded DNS service from a generally poorly performing application. Not that they care, actually. A slow app is a slow app to an end user. They aren't interested in the gory details, they're interested in speedy applications. Period.
Interestingly, though, the Internet of Things is made up of more than just users. Lots of devices and applications make up the myriad endpoint "overlay" network created by connections between these devices and "things".
Devices don't care about latency (unless of course they're being driven by users, then the users care, but the devices surely don't). But the thing about DNS is that the latency is generally incurred at initial connection time. There's no way to differentiate before a connection is made whether it's a device or a real, live person on the other end. Even after a connection is made, UDP isn't exactly the most verbose of protocols and it's nearly impossible to differentiate via UDP, too. You've only got a few headers, and none of them offer insight into what kind of endpoint is making the request.
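The point about how little a DNS message reveals is easy to see in code. A DNS header is just six 16-bit fields; this sketch builds and parses one with Python's standard library (the function names are mine, not from any DNS library):

```python
import struct

def build_query_header(txn_id: int) -> bytes:
    """Build the 12-byte DNS header for a standard recursive query.

    Fields: ID, flags (RD bit set), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0.
    """
    flags = 0x0100  # standard query, recursion desired
    return struct.pack("!HHHHHH", txn_id, flags, 1, 0, 0, 0)

def parse_header(packet: bytes) -> dict:
    """Unpack the six 16-bit header fields of a DNS message.

    Note what's missing: nothing here identifies the kind of endpoint
    asking -- no user agent, no device class, just these six numbers.
    """
    txn_id, flags, qd, an, ns, ar = struct.unpack("!HHHHHH", packet[:12])
    return {"id": txn_id, "flags": flags, "qdcount": qd,
            "ancount": an, "nscount": ns, "arcount": ar}
```

Whether the query came from a thermostat or a person's browser, the resolver sees the same twelve bytes up front.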
The imperative, then, is to ensure really fast connections and responses to every single query.
That may mean you need to reevaluate your DNS infrastructure to ensure it's ready to handle the coming flood of "things". Test and verify the maximum queries per second (QPS) your systems can manage while maintaining what your business defines as acceptable latency. Make sure to plot out latency based on connections and queries per second to get an idea of at what point your DNS starts to become part of the performance problem.
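A load test along those lines can be sketched simply: pace queries at a target QPS, record per-query latency, and report percentiles. This is a minimal harness, not a real benchmarking tool; the `resolve` callable stands in for whatever actually performs one DNS query in your environment:

```python
import statistics
import time
from typing import Callable, List

def measure_latency(resolve: Callable[[str], None],
                    names: List[str],
                    target_qps: float) -> dict:
    """Issue one query per name at roughly target_qps; report latency stats.

    `resolve` is an assumed parameter (e.g. a wrapper around
    socket.getaddrinfo or a stub resolver), not a real API.
    """
    interval = 1.0 / target_qps
    latencies = []
    for name in names:
        start = time.perf_counter()
        resolve(name)
        latencies.append(time.perf_counter() - start)
        # Crude pacing: sleep off the remainder of this query's time slot.
        remaining = interval - latencies[-1]
        if remaining > 0:
            time.sleep(remaining)
    latencies.sort()
    return {
        "qps_target": target_qps,
        "p50_ms": statistics.median(latencies) * 1000,
        "p95_ms": latencies[int(len(latencies) * 0.95) - 1] * 1000,
        "max_ms": latencies[-1] * 1000,
    }
```

Run it at increasing `target_qps` values and plot the percentiles: the knee in that curve is where your DNS starts becoming part of the performance problem.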
As the Internet expands and more devices and users are accessing your applications, it would be a mistake to forget about DNS. We all know the old saying about "assuming" things - and that certainly holds true when you simply assume your DNS is able to handle the increasing load.
Be paranoid. Test often. CYA(pps).