From time to time I’ve served my sites from multiple servers around the world.
Why? Performance! I’m one of the people who have commonly been on a 1 megabit connection. I notice the few sites that are really fast, the few that are painfully slow, and the majority that are somewhere in the middle. When I imagine what it might be like for someone on the opposite side of the world on a similar connection to visit a North American site, I don’t imagine a very positive experience. Providing servers closer to visitors to improve their experience is one of the areas I’ve toyed with over the years.
Why not just use a CDN? Doing this manually rather than using a CDN takes a lot more work, but it’s often more cost effective in the long run. It also brings greater flexibility and, in many cases, higher performance.
Before delving in, here’s a very superficial high-level look at Anycast vs Geo DNS from more of a “typical website” point of view:
I’ll gloss over the details a bit since there’s plenty of information out there, but essentially:

Geo DNS:
- Requires DNS that supports Geolocation (Route 53, Cloudflare LB+DNS, Constellix, etc).
- Uses DNS GeoIP lookups to send the user to the appropriate server.
- All your servers can (and generally will) have different IP addresses.
- When someone tries to visit your site, the DNS will look up the visitor’s IP, look up their likely location in a database, and hand out the server IP address that you want visitors from that region to use.

Advantages:
- Set-up is usually cheaper (especially via the mentioned providers).
- Set-up is usually easier – you can set up a few records to simply direct people to servers based on the continent the visitor comes from. Some GeoDNS providers will let you go down to a country level, and even a state and city level. All this happens in your DNS control panel. Want to send a visitor from Australia to your US West coast server? No problem. You can implement all sorts of desired combinations. Or you can keep things simple. It’s up to you.
- Monitoring is easier. For example, /usr/bin/curl --resolve "website.com:443:192.0.2.1" https://website.com/ will hit the site via that specific IP address. Many monitoring services also let you specify the IP address you want to reach the site via.
- No risk of a visitor’s TCP session being split between servers (which can happen with Anycast during BGP route changes).
- If you want to take a server out of service you can simply drop the DNS records for that IP, or point those records to another server. Once the TTL has expired and traffic has dropped you can take the original server offline without impact to visitors.
- If you have Geo-restricted content, Geo DNS can be used to point visitors at the desired server (or point them nowhere at all!).
- With enough fine tuning, you can approach visitor latency similar to (or possibly better than) Anycast.

Disadvantages:
- Some visits will only come with a continent or country code, and these visitors might not be routed to the fastest possible location.
- GeoIP is far from perfect. Sometimes GeoIP databases have bad or outdated information (it’s a cat and mouse game). This can result in the DNS sending some visitors to the wrong server.
- As privacy becomes a larger consideration (resolvers such as Cloudflare’s 1.1.1.1 deliberately not forwarding the client’s subnet), GeoIP and thus GeoDNS routing may become less accurate and less reliable over time.
- Adding additional servers in the future via DNS is not usually fun. Numerous countries (or states) often end up being shuffled around each time you grow your server location list. Imagine the US states divided across 3 servers with meticulous routing set up for each state. Now imagine adding a 4th server located somewhere in the middle of the original 3 – you almost need to draw out a “before” and “after” map to do this with minimal pain.
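The routing policy you end up maintaining in the DNS control panel essentially boils down to a location-to-IP lookup table. Here's a minimal sketch of that logic; the country codes, server IPs, and website.com are all hypothetical placeholders (RFC 5737 documentation addresses), not anything from a real setup:

```shell
# Hypothetical GeoDNS policy expressed as a lookup: country code in,
# server IP out. All IPs are documentation placeholders.
geo_route() {
  case "$1" in
    US|CA|MX) echo "192.0.2.10" ;;   # North America server
    AU|NZ)    echo "192.0.2.30" ;;   # Sydney server
    GB|FR|DE) echo "192.0.2.20" ;;   # Europe server
    *)        echo "192.0.2.40" ;;   # DEFAULT for unknown/anonymous visitors
  esac
}

geo_route AU    # prints the Sydney server's IP

# To check what a real GeoDNS provider hands out for a given region, dig
# can spoof the visitor's subnet (when the resolver honors EDNS Client Subnet):
#   dig +short website.com +subnet=198.51.100.0/24
```

The fall-through DEFAULT case is exactly where anonymous (A1) visitors land, which comes up again in the test data below.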

Anycast:
- No fancy DNS stuff required.
- All your servers will have their normal unique IP address, but will also have a “shared” (anycast) public IP address as part of a /24 block (IPv4) or /48 block (IPv6).
- In a “typical website” setup, all your servers will be running BGP which announces the “shared” IP address block.
- When someone tries to visit your site, BGP routing that takes place on the internet will see many routes to your many servers and try to choose the “shortest” route for that visitor, which usually corresponds to the server closest to your visitor’s location.

Advantages:
- Simple DNS setup. You can usually use most free/cheap DNS providers as you do not require GeoDNS and thus can treat an Anycast IP like a normal IP as far as DNS is concerned.
- If a route to a server goes down (example: New York datacenter that hosts your server has a major outage), BGP will simply drop that route and visitors will get pointed at other nearby servers.
- Takes a little more work to DDoS.
- Tends to be faster than GeoDNS “out of the box” when there are numerous locations in close proximity.
- Not as susceptible to poor ISP routing (e.g. a Canadian ISP that only peers in the US).
- Adding a future server in a new location can be quick. Configure it, start up your BGP daemon, and you’re up! No need to re-organize which countries and states go where.

Disadvantages:
- You must own or lease an IP address block and bring it to a provider. This can be costly.
- BGP by default tends to consider “fewest hops” to be the “shortest path”. While usually fewer hops correlates with lower latency, this is not always the case. Sometimes you can get traffic making multiple trips across the ocean because each of those trips is only considered 1 “hop” despite the high latency.
- Configuration is more complex. BGP is one more component running in your system. Setting up a BGP server is a new thing to learn too. If you delve into impacting routes of certain locations with prefixes and communities, it becomes even more complex.
- Pulling down a server does not immediately re-route traffic. It takes some time for BGP tables to update which means there will be a period where some visitors unsuccessfully “hit” a dead server. There is a “graceful shutdown” community that can be used to try avoiding this, but there is no guarantee it will be honored by your upstream.
- Risk of TCP sessions being split between servers during BGP route updates, causing resets and other issues. The risk is higher for long sessions. Any backend services on your servers that keep user “state” cannot assume that a visitor will always hit the same server.
- Monitoring the Anycast IP addresses on each of your servers/locations with standard web monitoring services (Pingdom, etc) is significantly more difficult. Thus, setting up monitoring will usually take longer and be more complex.
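For a sense of what "running BGP on each server" involves, here's a rough sketch of a bird2 configuration that originates an anycast /24 and announces it to one upstream. Everything in it (the ASNs, the neighbor address, the 203.0.113.0/24 block) is a placeholder assumption; a real deployment needs your upstream's details and proper filtering:

```shell
# Write a minimal, hypothetical bird2 config to a local file. All ASNs and
# addresses below are placeholders, not a working peering session.
cat > bird-anycast.conf <<'EOF'
router id 192.0.2.10;

# Originate the anycast prefix locally so it can be exported via BGP.
protocol static anycast_origin {
  ipv4;
  route 203.0.113.0/24 blackhole;
}

# Announce the prefix to the upstream; import nothing back in this sketch.
protocol bgp upstream {
  local as 64500;
  neighbor 192.0.2.1 as 64501;
  ipv4 {
    import none;
    export where source = RTS_STATIC;
  };
}
EOF
echo "wrote bird-anycast.conf"
```

Each server in each location would run an equivalent config against its own upstream, all originating the same /24.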
Performance Testing and Preliminary Data:
Some forewarning. Similar to my earlier GeoDNS comparison, this was tested with my traffic, and not over a substantial period of time.
Since the earlier DNS writeup, I added servers and tweaked GeoDNS via Route 53 multiple times, and did this down to the US state level. Where a region could not normally be fine-tuned enough on its own, I’d try latency-based routing to the contenders. Change, get data, compare, change, get data, compare, etc. I won’t go so far as to say that everything was perfectly GeoDNS routed, but I will say that I probably hand-tuned it more than an average person would be expected to.
With that in mind, I looked at Connect (SSL) + Request + Response. This combines all the server-related steps necessary for a visitor to connect to the server and download the page. Latency, multiple round trips (note that this was when TLS 1.3 was only in Beta/Edge browser releases), and bandwidth are all accounted for to some degree. Again, this is for my set of traffic with my particular server locations, so don’t go assuming it will apply to yours.
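If you want to sample the same phases yourself, they map fairly directly onto curl's --write-out variables. A sketch; it fetches a local file so it runs anywhere, and you'd point it at your real URL instead:

```shell
# Print the per-phase timings that make up "Connect (SSL) + Request +
# Response". Uses a local file:// URL so the example is self-contained;
# substitute your https:// URL to measure a real server.
echo "hello" > page.txt
curl -s -o /dev/null \
  -w 'dns=%{time_namelookup} connect=%{time_connect} tls=%{time_appconnect} ttfb=%{time_starttransfer} total=%{time_total}\n' \
  "file://$PWD/page.txt"
```

Against a real HTTPS server, time_appconnect minus time_connect isolates the TLS handshake, and time_starttransfer through time_total covers request plus response.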
GeoDNS Worldwide Mean: ~310ms
Anycast Worldwide Mean: ~290ms
GeoDNS USA Mean: ~280ms
Anycast USA Mean: ~285ms
GeoDNS Canada Mean: ~255ms
Anycast Canada Mean: ~285ms
GeoDNS Brazil Mean: ~760ms
Anycast Brazil Mean: ~740ms
GeoDNS USA Anonymous Mean: ~413ms
Anycast USA Anonymous Mean: ~245ms
…I’m going to stop right there and just point out some notes based on my observations of the data:
- The results are close enough that I’d call it a “draw”.
- In Canada, Anycast is legitimately behind. There are very few paths down to the USA for central/western Canada. AB, SK, and MB in particular can require strong encouragement to route the fastest way. In Route 53 with Geo DNS, careful testing and “fake” AWS latency-based-routing locations on top of the Canada location had previously allowed me to get those provinces routing nicely.
- In Brazil (and some other locations), most traffic routed via Anycast the same way I had done through GeoDNS. However, there are outliers that hit a different (though still sensibly within reach) server, and they do seem to retain good times.
- USA visitors with an anonymous state code (A1) do seem to route quite a bit better with Anycast. This is to be expected as there are a few US servers and if GeoDNS doesn’t know what state they’re in they all just hit the DEFAULT. With Anycast they stand a good chance of being routed to a server close by.
Update: Careful examination of logs showed that there can be some additional pain points with Anycast. Here are a few additional bits, though keep in mind this is for my particular setup, locations, etc. It may not be applicable to your situation, but you may find it interesting nonetheless.
- 10-20% of Great Britain traffic was getting pulled to New York instead of London.
- 40% of India traffic went to the US West coast instead of the nearby Singapore/Europe locations. Yikes!
- Oceania… let’s just say the US West coast is one big high-latency magnet with its low-hop connections. Australia and New Zealand both get pulled nicely to Sydney for the most part. However, the rest of the Oceanic region had a strong tendency to get pulled to the US instead of Singapore.
- I had to be careful trying to tack on Geo to correct behavior here (which I did later): for example part of the Philippines would always go through Tokyo first and if Singapore was the Geo-destination it would go Philippines-Tokyo-US-Singapore which is a very long trip. Of course other areas of the Philippines were happy to go straight to Singapore. The saving grace was that most of the traffic that was happy direct to Singapore was also happy to go to Tokyo. So that’s where the Philippines ended up going.
- The positive notes: very little US traffic leaked out across the ocean, and the bit that did was mostly bots. In Europe, aside from the GB issue mentioned, the rest of the continent was really well-behaved.
- Again, this is based on my setup – other providers/locations may exhibit different behavior.
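The log examination behind the update above can be as simple as tagging each access-log line with the visitor's GeoIP country (many web servers have a module for this) and counting where traffic actually landed per node. A toy sketch with inline sample data; the log format and filenames are made up for illustration:

```shell
# Count hits per country code from a (made-up) access log whose first
# field is the visitor's GeoIP country code. Running this on each anycast
# node and comparing the counts reveals traffic landing in the wrong place,
# e.g. GB or IN hits showing up on a US node.
cat > sample-access.log <<'EOF'
GB /index.html
GB /about.html
IN /index.html
IN /contact.html
US /index.html
EOF
awk '{count[$1]++} END {for (c in count) print count[c], c}' sample-access.log | sort -rn
```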
Combining Anycast and GeoDNS – The Best (or Worst) of Both Worlds?
It’s worth mentioning that combining Geo + Anycast is not uncommon; some CDN providers have done this. Here’s a brief overview of how it might be set up:
(The addresses below are placeholder documentation IPs; in a real setup each anycast IP would come from its own routable block.)

SERVER US EAST: 203.0.113.1 192.0.2.1
SERVER US WEST: 203.0.113.1 192.0.2.1
SERVER EUROPE EAST: 203.0.113.1 198.51.100.1
SERVER EUROPE CENTRAL: 203.0.113.1 198.51.100.1
SERVER EUROPE WEST: 203.0.113.1 198.51.100.1
SERVER AUSTRALIA: 203.0.113.1 203.0.113.129

GEO DNS DEFAULT: 203.0.113.1
GEO DNS NORTH AMERICA: 192.0.2.1
GEO DNS EUROPE: 198.51.100.1
GEO DNS OCEANIA: 203.0.113.129

This is basically giving each server multiple Anycast IPs (here, a worldwide one plus a regional one) and then controlling which one visitors use via DNS.

The above example tries to avoid a situation where BGP sends people across the ocean: you keep Anycast within a continent where desired. If a visitor from Asia tries to visit in this example, they’ll get the 203.0.113.1 (DEFAULT) IP address and can potentially hit any server (standard Anycast BGP routing). But someone in Europe will get the 198.51.100.1 IP address and will only hit a Europe server.
If you have Geo-restricted content, this is also a common way to utilize Anycast while still “containing” visitors within the intended region.
There are some downsides. Your complexity shoots up, and you get virtually all the disadvantages of each system. Not to mention you generally need a number of /24 blocks to reliably do this on IPv4.
Final Thoughts and Conclusion
Having tried both GeoDNS and now Anycast, I was surprised to find the overall (total) performance differences so minimal. Turns out I didn’t have Anycast routing going so crazy that it ballooned the total numbers, and I also didn’t have a bunch of hidden issues via GeoDNS that Anycast was about to fix either. Sure, a few areas with non-ideal routing, and a bit of nicer routing in other areas that couldn’t have been pulled off via Geo DNS, but for the most part it evened out in the grand scheme of things.
Bots and web crawlers in particular are usually directed a little better with Anycast – they usually only have the country indicated in GeoIP databases (no state/city) so with GeoDNS they simply end up at whatever is set as the DEFAULT server.
As far as users go: I doubt anyone noticed a difference. I do get my share of A1 (anonymous country code) visitors so at the very least Anycast should have helped route some of them to a nearby location. Little gains exist in some areas, but probably not substantial enough for most visitors to notice compared to the previous GeoDNS setup.
Cost wise, you can do GeoDNS really cheap. Grab a few low end boxes from whatever mix of providers you want, hit up Route 53, and have at it.
With Anycast you’ll be limited to providers that will advertise a BGP route for you, and thus have less flexibility in both price and location. You also have to bring your own IP space for Anycast, and IPv4 space isn’t exactly something anyone ever measures in “cups of coffee per month”. Of course IPv6 space can be virtually free (or at least cheap) and can be Anycasted with the obvious caveat being that IPv6 adoption is still painfully slow.
To come to something of a conclusion: in the end you could make the case for either. If you want the absolute minimum latency to visitors, you’ll probably be combining both to some degree. If you do choose only one, and you’re looking at a generic worldwide audience (i.e. not focused on a specific region), chances are you’ll be deciding based on which you’d rather implement and maintain rather than trying to compare them on a performance level… unless of course cost comes into play, in which case the cost of an IPv4 /24 is high enough that GeoDNS is likely to be an easy win.