Before I get started here:
- There is a bit of performance data at the end if that’s what you came here for.
- I’m looking at this from a “small potato” point of view. Indie dev, small business, etc. In other words if you’re on “enterprise” packages you’ll be looking at things from a different point of view and this may not help you as much.
- Route 53’s Geo DNS isn’t quite the same as Cloudflare’s “Geo Steering”. Route 53 GeoDNS lets you target by continent and country (and state in the US), whereas Cloudflare’s Geo Steering is somewhat closer to Route 53’s latency-based routing in that you are effectively choosing a region like US East or US West and requests are directed based on what region the CF servers are in that answer the DNS request.
Due to differences in design it’s a little harder to compare the 2 offerings directly but I’ll do my best to give you an overall grasp of the way each works.
Cloudflare charges you from a “per server” perspective, where each origin (server) can either be an IP address or CNAME – more on that much later. Pick a 2/4/6-origin package, add whatever monitoring upgrades you want, and pay on a recurring monthly basis. If you have multiple websites in your account you can share your package with all of them at no extra cost, though this only works if those sites use the same IP addresses or CNAMEs.
Route53 charges you from a “per domain” (hosted zone) perspective and lets you add any extras “piecemeal”. $0.50 per domain, and you can add all the records you want to it. Want 20 subdomains? No extra charge. Want 50 servers all with different IP addresses under that domain? No extra charge. Where you *will* pay is for monitoring and requests as you’ll see in the next section.
Pricing – In Depth
To use GeoDNS with Cloudflare you have to make use of the Load Balancer feature. You have to know exactly what you want beforehand because you’re setting up a subscription before you can actually configure the load balancer. Assuming you are indeed using Geolocation, these are the prices:
- $15/month for 2 origins (servers)
- $25/month for 4 origins (servers)
- $30/month for 5/6 origins (servers)
…if you’re not an enterprise user, the 5-pool limit effectively caps you at 5 geographic origins – the 6th can only serve as a failover within an existing pool. In other words, you can’t use the 6th origin as a separate geographic pool unless you’re on an enterprise plan.
All of the above give you free HTTP/HTTPS monitoring with keyword matching on a 60-second interval from 1 region (you can choose which 1 region for each pool).
You get 500k DNS queries free with each additional 500k costing $0.50. Usage is rounded up (501k becomes 1m).
Monitoring can be upgraded: add $10 to increase to 30 second checks, or $15 to increase to 15 seconds. If you want checking from more than 1 region, add $10 for 4 regions or $15 for 8 regions.
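Putting the Cloudflare numbers together, here’s a rough monthly-bill sketch. It uses only the prices quoted above – the function and tier names are my own framing, so verify against Cloudflare’s current pricing page before relying on it:

```python
import math

# Prices as quoted above; the dicts and names are my own framing.
ORIGIN_TIERS = {2: 15, 4: 25, 6: 30}               # geo packages, $/month
INTERVAL_UPGRADE = {60: 0, 30: 10, 15: 15}         # check interval (s) -> $
REGION_UPGRADE = {1: 0, 4: 10, 8: 15}              # check regions -> $

def cloudflare_lb_monthly(origins, queries, interval=60, regions=1):
    # First 500k DNS queries are free; each further 500k block is $0.50,
    # rounded up (so 501k bills as a full extra block).
    extra_blocks = max(0, math.ceil((queries - 500_000) / 500_000))
    return (ORIGIN_TIERS[origins] + INTERVAL_UPGRADE[interval]
            + REGION_UPGRADE[regions] + 0.50 * extra_blocks)

print(cloudflare_lb_monthly(2, 400_000))           # 15.0 - cheapest geo setup
print(cloudflare_lb_monthly(4, 501_000))           # 25.5 - rounding in action
print(cloudflare_lb_monthly(6, 500_000, 15, 8))    # 60.0 - everything maxed
```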
To use GeoDNS with Route53, you’ll pay:
- $0.50 per domain (hosted zone)
- $0.70 per million queries for GeoDNS
- $0.60 per million queries instead if you’d rather use Latency-Based Routing
- $0.40 per million queries for standard queries (ie subdomains you might not be geolocating)
- $0.75 for each health check you want (monitoring)
- +$2.00 to add HTTPS to a health check
- +$2.00 to add String Matching to a health check
- +$2.00 to add a 10s frequency (default 30s) to a health check
- +$2.00 to add latency measurement to a health check.
The domain (hosted zone) is charged per month, though if you create one, change your mind, and delete it within 12 hours you aren’t charged for it. One of the appeals of Amazon is that most of their services either have a grace period, are pay-per-use, or are prorated based on time, making them a low cost option if you aren’t sure whether something will work for you or not before trying.
Queries are based on what you use. None come free. However if you just have a few thousand queries you might only be paying a penny.
Health checks are pro-rated for partial months, but are where the cost really comes into play. If you don’t care about automatic failover then no problem. But if you do need it and require some of the extras, health checks can quickly become the biggest part of your overall Route53 bill.
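The Route 53 side can be sketched the same way. Again, the function and parameter names are mine; only the per-unit prices come from the list above:

```python
# Per-unit prices as quoted above; this is my own sketch, not an
# official calculator.

def route53_monthly(zones=1, geo_qm=0.0, latency_qm=0.0, std_qm=0.0,
                    health_checks=0, extras_per_check=0):
    """Query counts are in millions; each 'extra' is a $2 health check
    add-on (HTTPS, string matching, 10s interval, latency measurement)."""
    return (0.50 * zones
            + 0.70 * geo_qm + 0.60 * latency_qm + 0.40 * std_qm
            + health_checks * (0.75 + 2.00 * extras_per_check))

# Two geo-routed servers, ~1M geo queries, basic HTTP checks on both:
print(round(route53_monthly(geo_qm=1, health_checks=2), 2))  # 2.7
# Same, but HTTPS + string matching on each check - note how the checks
# now dominate the bill:
print(round(route53_monthly(geo_qm=1, health_checks=2,
                            extras_per_check=2), 2))          # 10.7
```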
Pricing – Value
Route 53 absolutely decimates Cloudflare when it comes to price in nearly every configuration. There are 2 exceptions:
First exception is if you have a lot of domains (hosted zones) that receive very little traffic. Cloudflare lets you share your Load Balancer across all domains on your site whereas Route 53’s $0.50/each can start to make a dent when you get to high numbers (though this decreases to $0.10/each once you hit 26+ domains). Of course if your many domains get a lot of DNS queries, Route 53 will still come out ahead due to the cheaper query costs.
Second exception is if you have high-traffic records (ie subdomains) that do not require Geo-Routing under a domain. The standard-routing records can stay as part of Cloudflare’s free plan (you only pay for load-balanced stuff), whereas on Route53 you’re paying the $0.40/million for standard queries.
I’ll admit it: I had to work a bit to find those exceptions. On cost alone, Cloudflare’s geolocation only wins in genuine edge cases.
Value measured in other ways besides pricing…?
If you’re already in the Cloudflare “orange cloud” ecosystem, their geolocation obviously has increased value. Your load balancer with geo has the option of being “orange cloud”.
This has the advantage of immediate failover on Cloudflare’s backend – if a server (origin) goes down, you don’t have to wait for the TTL to expire because Cloudflare can transparently pull from the next healthy server almost immediately, with your visitor being none the wiser. This doesn’t require geolocation, though – it’s provided by their standard load balancer (which is $10 cheaper if you don’t need geo).
Where geo *does* fit into this is when you’re using “orange cloud” for reasons beyond the CDN aspect, yet you have multiple servers available. If Cloudflare is basically proxying most (or all) of the requests to your servers anyway, enabling geolocation should allow it to pull from your closest server.
Another forward-looking potential value-add is Cloudflare’s new “1.1.1.1” public DNS service. For years Google’s “8.8.8.8” has been pretty predominant for any custom set-ups. If over time a number of people transition to Cloudflare’s public DNS, response times for those visitors will become even faster if you’re using Cloudflare as your authoritative DNS, particularly when long TTLs aren’t an option. This doesn’t relate specifically to load balancing and geolocation, but is worth considering regardless if your hope is to set something up once and stick with it for a number of years.
Finally, Cloudflare does anycast their IPs. So your visitors shouldn’t hit any “dud” DNS servers located across the ocean the way they will with Amazon.
If you’re in the Amazon ecosystem, many AWS resources can be pointed to with Route53 with a simple “Alias” record. Costs for health checks against an Amazon resource are lower too.
For those not in the ecosystem, Route 53 is significantly more flexible: you can have dozens of servers with geolocation-based routing. You can do complex things like putting latency-based routing on top of geo-location-based routing.
(above: I couldn’t fit all 70-something records into the screenshot, but you get the idea….)
Every single record can be attached to a health check. You can create health checks that monitor the health of other health checks. If you have 4 servers and 4 health checks, those 4 health checks can be shared across your dozens of records.
Route 53 can also do “true” geo-location. You can target specific countries and specific US states and route each individually. This is in contrast to Cloudflare which is a little closer to Route 53’s “latency based routing” in concept.
One aspect that Route 53 provides is a theoretical high reliability. You get 4 name servers spread across 4 TLDs: .com, .net, .co.uk and .org. So if, say, the .com TLD has a problem or goes down, you still have 3 name servers up. The 4 name servers you get are also geographically distributed, so if a disaster takes out a datacenter, again no problem: the other 3 servers are still going. All 4 are on different anycasted IPs as well, so if BGP has a fit with one route you again have the other 3.
The reliability does come with an unfortunate downside though: performance. One name server will always be closest, but you only have a 25% chance of hitting it. You also have a 25% chance of hitting the one furthest away. Some DNS resolvers have methods to try to prefer the fastest name server, but this (and other optimizations) varies between resolvers and cannot be relied upon. Often, if you test the latency to each of your assigned name servers, you’ll find that 2 are poor performers in certain areas – I’ve even seen them referred to as “sucker” name servers. If the vast majority of your traffic comes from a very specific area, you may want to consider only using the fastest 2 of the 4 at the potential expense of decreased reliability. At the very least, ordering the results so the 2 that are “usually fastest” are listed first might help them be attempted first in some cases.
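To see why those “sucker” name servers matter, here’s a toy simulation assuming a resolver that picks one of the 4 assigned name servers uniformly at random (the latency figures are invented for illustration):

```python
import random
import statistics

# Invented latencies: two nearby name servers and two distant ones.
latency_ms = {".com": 10, ".net": 15, ".org": 120, ".co.uk": 140}

random.seed(0)  # deterministic demo
picks = [random.choice(list(latency_ms.values())) for _ in range(100_000)]
# With uniform selection, the expected lookup latency is just the plain
# average of all four servers (71.25 ms here) - the slow pair drags you down:
print(round(statistics.mean(picks), 1))

# Listing only the two fastest cuts the expectation to ~12.5 ms:
fast = [random.choice([10, 15]) for _ in range(100_000)]
print(round(statistics.mean(fast), 1))
```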
A Quick Walk Through Setup
Cloudflare
In the Traffic section of the Cloudflare dashboard, you start by choosing your plan. The cheapest non-geo package (2 servers, no extras) is $5. Adding geo brings that to $15. The “everything maxed” price is $60.
Next is creating a host name, setting up pools, setting up health checks, and traffic steering (geo). Cloudflare has guides that walk through all this in detail. Before I get to the short of setup though, a note for those using IPv6:
**Warning: If using both IPv4 and IPv6 on your servers (origins), the following setup can be a little groddy since you can’t list both addresses for a single origin. Instead, create “fake” subdomain records first (ie ip.mysite.com A 192.0.2.1 and ip.mysite.com AAAA 2001:db8::1 – documentation-range placeholders here), pointing them at the actual correct IP addresses. Then we’ll use “ip.mysite.com” as an origin server below. Do this for each server location (ip-europe.mysite.com, ip-north-america.mysite.com, etc) before proceeding. If you plan to share your load balancer across domains in your account, all the domains will need to use the same IPv4 and IPv6 addresses – this can sometimes be problematic for IPv6 address binding on the webserver, so make sure you test.
Okay so here we go:
- You create a host name (ie loadbalancer.mysite.com)
- You create pools but keep in mind that on Cloudflare’s end,
if (!enterpriseAccount) maxPools = Math.min(5,originsPurchased);
…and add an origin (server) to each: an IP address or if using IPv4/v6 the “ip.mysite.com” bits from the warning above.
- You create health checks and attach them to the pools (a health check will test each server in the pool). Each pool can have 1 health check assigned. You can select which location(s) health checks are done from.
- You create the Geo steering order. For each of the 13 regions, add all pools in the order you want them tried – the 1st pool is used unless it goes down, then the 2nd, etc. If you don’t set up a region, it’ll use whatever pool is listed first from step #2.
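The steering logic from that last step can be sketched roughly like this (the region codes and pool names are hypothetical, not Cloudflare’s actual identifiers):

```python
# Hypothetical region codes and pool names for illustration only.
steering = {
    "WNAM": ["us-west", "us-east", "europe"],   # Western North America
    "EEUR": ["europe", "us-east"],              # Eastern Europe
}
default_order = ["us-east", "us-west", "europe"]  # pool order from step #2

def pick_pool(region, healthy):
    """Return the first pool in the region's order that passes its check."""
    for pool in steering.get(region, default_order):
        if healthy.get(pool):
            return pool
    return None  # every origin failed its health check

health = {"us-west": False, "us-east": True, "europe": True}
print(pick_pool("WNAM", health))  # us-east: preferred us-west is down
print(pick_pool("APAC", health))  # us-east: unconfigured region -> default order
```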
The load balancer is now set up. All that’s left is pointing records at it: go to the normal “DNS” tab in Cloudflare, remove any preexisting A/AAAA records for the domain/subdomain you want pointing at the load balancer, and create a new CNAME that points to your loadbalancer.mysite.com.
Note that Cloudflare does “flatten” the CNAME chains, so your visitors don’t have a bunch of extra DNS lookups to contend with.
Amazon Route 53
Assuming you’ve created a hosted zone…
- Create a record set.
- Add the name (blank if mysite.com, “something” if something.mysite.com).
- Add the record type (A – IPv4 address)
- Under Value, put the IP address
- Under Routing Policy choose Geolocation. Use “Default” for your first one so you always have a fallback – you’ll use continents/countries/states for your others.
- If you’ve set up a health check for this server, select it. If not, you can edit this record later.
Repeat those steps for the IPv6 addresses if applicable.
Then repeat those steps for every Geolocation. Note that the most specific match overrides the others. For example, if you have a visitor from California but do not have California set up, US will be used. If US wasn’t set up, North America will be used. If North America wasn’t set up, then Default will be used. If you have 2 US servers and want to split the states across them, instead of making a record for each and every state, it is easiest to set server #1 as “US” and then create the 20-something state records for server #2 for half of the states.
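That most-specific-match behavior can be sketched as follows (placeholder records using documentation-range IPs; server #1 covers the whole US while one state is explicitly carved out for server #2):

```python
# Placeholder records - documentation-range IPs, not real servers.
records = {
    ("country", "US"): "192.0.2.1",     # server #1 handles the US...
    ("state", "US-TX"): "192.0.2.2",    # ...except states routed to server #2
    ("continent", "NA"): "192.0.2.3",
    ("default", None): "192.0.2.4",
}

def resolve(continent=None, country=None, state=None):
    # Most specific match wins: state > country > continent > Default.
    for key in (("state", state), ("country", country),
                ("continent", continent), ("default", None)):
        if key in records:
            return records[key]

print(resolve("NA", "US", "US-TX"))  # 192.0.2.2 - state record wins
print(resolve("NA", "US", "US-CA"))  # 192.0.2.1 - no CA record, falls to US
print(resolve("SA", "BR"))           # 192.0.2.4 - nothing matches, Default
```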
In a case where you want geolocation for most of the world but want latency-based routing within the US, you can do a workaround similar to the Cloudflare IPv6 one. Create your other worldwide geolocation records as usual. When it comes time to do the US one, first create “fake” subdomain records (latency-us.mysite.com) that use latency-based routing – you’ll want at least 2 “latency-us.mysite.com” latency-based records, one pointing at an Amazon east location and the other at an Amazon west location. Then create an ALIAS record for your intended site (mysite.com), make it a geolocation record for the US, and have the ALIAS point to latency-us.mysite.com. That way all US traffic goes to your latency-us.mysite.com records and gets directed based on latency.
If the above paragraph on geo-to-latency sounded complex, that’s because it is. Test heavily afterwards because there are ample opportunities for a typo.
Eventually, if using both IPv4 and IPv6 you’ll probably have at least 50 records, and that’s just for the root domain. If you have subdomains to add (www)… expect it to become 100. If you always redirect non-WWW to WWW, it may be prudent to use a static (non-geo) record to 1 site that handles the redirect. That way you’ll only have 51 records instead of 100.
The other, simpler option is to use latency-based routing instead of Geolocation. As long as your servers are close enough to AWS datacenters, the resulting routing might be accurate enough for you, and you’ll need fewer records in most cases. The latency DNS lookups are also a dime cheaper per million.
The final option here is AWS “Traffic Flow”, though it comes at a whopping $50/month. It adds the ability to use “geoproximity” (distance-based) routing to your servers.
Route 53 is absolutely more configurable, more flexible, and doesn’t really place much on you in the way of limitations. However, it is also considerably more time consuming to set up.
Performance
In the past I had relied on Google Analytics data to see how fast things like DNS were performing. However, GA has a few issues:
- It uses the average. When numbers are close to 0 (ie 0.020s dns lookup time) and you have an outlier that took 20 seconds because someone is on a satellite connection and a storm was rolling by, it highly skews the results for that location.
- It includes 0 ms lookup times (dns is locally cached). That’s okay for some things, but when comparing DNS provider speeds, you don’t care at all about those 0 ms results and probably want to exclude them.
- For page timings, Analytics is limited to 10% of your views (or a minimum of 100 if your views are less than 1k). That’s probably useful enough for a massive site that gets many thousands of page views per day. But if you’re not a huge site it can take many days to amass enough data on multiple countries to actually see a performance trend.
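The first two issues are easy to demonstrate: one outlier wrecks the mean, and cached lookups flood it with zeros. Here’s a quick comparison against a median that excludes cached hits (the numbers are invented for illustration):

```python
import statistics

# Invented lookup times: a few cached (0 ms) hits and one satellite outlier.
lookups_ms = [0, 0, 0, 18, 20, 21, 22, 25, 30, 20000]

print(round(statistics.mean(lookups_ms), 1))  # 2013.6 - the GA-style average
noncached = [t for t in lookups_ms if t > 0]
print(statistics.median(noncached))           # 22 - what a typical visitor saw
```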
The next obvious avenue was something like DNSPerf or SolveDNS. These are great tools for measuring the latency between machines that may-or-may-not-be in the same datacenter, but the reality is that very few actual users see those latencies, as you’ll see below.
So, using the median (half faster, half slower) and excluding 0 ms lookups, here’s what I observed:
Performance-wise, Cloudflare has the definite edge. I was seeing a median of ~60ms to visitors, with the US and Canada traffic each getting ~55ms. Australia performed quite badly at ~300ms, but the major Europe destinations (GB/FR/DE) were in the 35-45ms range. India was ~300ms. Most of the other countries didn’t have enough visits over the time period to draw any conclusions.
It should be noted that since Cloudflare does have numerous data centers close to users, it is always possible that some “true” 0ms results were omitted (they have a datacenter only 1 ms away from me, so it’s entirely possible some people can get a DNS result in 0 ms). However, before someone goes “see, that’s why you get different results from the DNS benchmarking sites”, note that if I include 0 ms results the worldwide median becomes ~40ms, and that *definitely* includes repeat visits where the dns is locally cached.
Route 53 on the other hand had a median in the ~70-85ms range, with the US being in the ~60-65ms range and Canada closer to ~75ms. Major Europe destinations here were in the ~45-60ms range. Australia came in at ~270-320ms. India was at ~220-230ms. If you’re wondering why I’m showing larger ranges for Route 53, the order you put your DNS servers in seems to have an impact – they’re not all anycasted from every location, and resolvers seem to have an affinity towards the first record returned. DNS order has the most significant impact in Australia as currently only 1 of the 4 DNS servers resides within AU, with the other 3 having routing that goes through Korea and Japan (or via the US on IPv6).
Note that this was observed over a very short period: just long enough for the median to “settle” at a number. Also keep in mind that my “total” averages are based on my traffic.
The short version: Cloudflare handily wins in North America and Europe, and overall (worldwide) is 10-15ms faster than Route53.
So who “wins” here?
It depends on your own setup and goals.
Personally, I start from a “desired” setup and then look at the big picture with a keen eye on price. I can get by with $5/month from Route 53. The Cloudflare variant is $25. Cloudflare is 10-15ms faster for my traffic. Those are the facts.
A few key factors that *I* consider:
- Once you add the (SSL) connect, request, and response times for my site, the DNS portion only takes about 16% of the total time (~15% with Cloudflare, ~17-18% with Route53). Cloudflare’s 10-15ms improvement lets the user have the entire page downloaded a few percent faster. However, it costs 500% of what Route 53 does (an additional $20).
- Choosing Route 53 thus results in a $20 savings which could potentially be used to speed up a different portion of the time-till-visitor-has-entire-page (the other 82-85%).
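The back-of-envelope math behind those bullets, as a sketch (only the $5 vs $25 price gap and the 10-15ms DNS edge come from above; the 500ms total page time is my own assumed round number):

```python
# Assumed total time-till-visitor-has-entire-page; adjust for your site.
total_page_ms = 500.0
dns_saving_ms = 12.5                 # midpoint of Cloudflare's 10-15 ms edge

print(f"{dns_saving_ms / total_page_ms:.1%} faster page load")  # 2.5% faster page load
print(f"{25 / 5:.0%} of Route 53's price")                      # 500% of Route 53's price
```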
For me, the added expense of Cloudflare just doesn’t bring enough value to justify the cost. If the price gap were smaller, the decision might be tough. But it’s not even close, and thus Route 53 was the clear winner.
However, if every ounce of speed “no matter the cost” is what you’re after, Cloudflare will take the cake. 15ms on its own might not be enough for a visitor to notice, but every speed boost adds up.
In addition, if you’re using Cloudflare for the “orange cloud” already and have multiple servers, their load balancer + geo will certainly be worth a look. Sure, it has restrictive server limits and the cost is steeper than AWS. Cloudflare also won’t be ideal if you need certain countries to hit certain servers since their routing is region-based instead of country-based. But as long as none of that applies to you, the Cloudflare geographic load balancing is pretty quick to set up, and it does work well.