I recently ran a poll on my LinkedIn account:
Yes, it’s a very small sample size. Yes, my connections on LinkedIn are very likely weighted heavily toward one side. Let’s talk about it anyway.
Datacenter with Datacenter-Operated Internet Exchange
These are common at the very large interconnection buildings across the United States. Examples include Equinix (multiple cities) and CoreSite (multiple cities), as well as smaller IXes like OmahaIX.
There are some advantages to this arrangement. For the participant, there is a single company to contract with and a single company to deal with on the support side. It doesn’t matter whether the issue is with the cross connect or with the IX; either way there is a single point of contact.
This consistency also extends across markets. If a carrier is in NoVA as well as Chicago, the contacts and processes are the same. While these locations tend to be more expensive, features like service-level agreements (SLAs) are available and enforceable via the commercial agreement.
On the flip side, the facility operator has a tendency to create a walled-off ecosystem. It’s very common to see “anti-longstraw” policies that prevent operators from buying transport into the facility and just connecting to the IX without purchasing space and power. Some datacenters also have policies prohibiting IX services, or impose special commercial terms on anyone providing “IX-like” services in the facility.
Datacenter with Third-Party IX
This style seems to be fairly popular in Europe. Several of the European IXes are actively deploying this model over the top of the very large datacenter-owned IXes in Tier 1 markets such as New York and Chicago. I’m lumping DE-CIX and AMS-IX into this category; I’ll let readers argue whether they belong here or in the previous category.
These IXes tend to be multi-datacenter within a market, which helps mitigate some of the restrictive policies of the datacenter provider by introducing competition.
On the flip side, the IX ends up providing something akin to a metro Ethernet service between the locations. Those circuits cost money, which has to be recovered somehow, usually through port fees.
Datacenter with Not-for-Profit/Community IX
This is pretty much the same thing as the second category. The primary difference is that the IX is typically controlled by a local group of volunteers. The big benefits here are low cost and a lack of contractual requirements; many times the IX ports are free.
Not having a payroll makes these internet exchanges very, very cheap to operate. Many times the space and power are donated by the datacenter. The low cost also makes it more feasible for very small networks to peer.
There are some downsides as well. Giving away free ports makes it really difficult to afford upgrades. Many (or most?) rely on one or two volunteers to do everything, so if someone changes jobs there is a real risk of the IX failing or drifting.
Thoughts
The most successful IXes with the not-for-profit model tend to be started by a small group of local ISPs, usually including at least one eyeball network. Many times they can bring together enough traffic demand for one of the large CDN networks to connect a cache. That starts the ball rolling, bringing in additional eyeball networks and helping the IX attract its next members.
A quick look at PeeringDB shows a LOT of internet exchanges started by datacenters that have fewer than five peers. Sometimes the datacenter does the work itself; sometimes it is contracted out to one of the “IX in a box” providers.
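For what it’s worth, that quick look is easy enough to reproduce against the public PeeringDB API. The sketch below is one way to do it; the /api/ix endpoint is real, but the net_count field and the five-peer threshold are my assumptions about how to approximate “number of peers” from the ix objects.

```python
# Rough sketch: list internet exchanges in PeeringDB with fewer than five
# participants. Assumes the "ix" objects expose a "net_count" field for the
# number of connected networks; adjust if the schema differs.
import requests

PEERINGDB_IX_URL = "https://www.peeringdb.com/api/ix"
MAX_PEERS = 5  # threshold used in the observation above

def small_ixes(max_peers=MAX_PEERS):
    resp = requests.get(PEERINGDB_IX_URL, timeout=30)
    resp.raise_for_status()
    return [
        (ix.get("net_count", 0), ix["name"])
        for ix in resp.json()["data"]
        if ix.get("net_count", 0) < max_peers
    ]

if __name__ == "__main__":
    for count, name in sorted(small_ixes()):
        print(f"{count:>3}  {name}")
```

Nothing fancy, but it’s enough to see how many single-digit exchanges are out there.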
A newer trend is to partner with a research and education network or other government entity and then deploy one of the commercial “IX in a box” solutions. I think it remains to be seen whether this is a viable way to build an IX ecosystem.