12 November, 2020

If you are reading this then you have probably heard about “the edge”, but if you ask 3 people “what is the edge?”, you will get at least 4 answers. That is fair enough, in a way: none of the definitions is necessarily wrong; they just come from different viewpoints. Telecoms companies have one view (at least), content delivery networks have another, cloud companies yet another, and so on… The one common theme is putting IT resources nearer to where they get used. The edge is merely a location where you put some form of processing power – because you have to. This is an important distinction. We have spent the last few years centralising processing power in enterprise data centres and cloud services – it was meant to be the most efficient way. So why are we now looking at a distributed network of systems, with a whole network of small data centres, apparently breaking the mould that was meant to represent best practice and deliver some form of IT panacea?


The reason is simple – distributed infrastructure allows you to deploy a new class of application (more on that in my blog on “Edge-Native Applications”). These applications need capabilities – notably low network latency and high bandwidth – that classic enterprise or cloud architectures cannot provide in every location. Latency is probably the factor that most influences where the edge must be. Latency, the delay between transmitting and receiving data across a network, is highly sensitive to distance, so it follows that the nearer you can put processing power to its users, the lower the latency. So let’s focus on what you might call the low latency edge.
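As a rough, back-of-the-envelope illustration (mine, not from the original post), the short Python sketch below estimates the pure propagation delay over a fibre route, using the common rule of thumb that light travels through optical fibre at roughly 200 km per millisecond (about two-thirds of its speed in a vacuum). The distances and function names are illustrative; real round-trip latency adds switching, routing and indirect fibre paths on top, so these figures are a floor, not a prediction.

```python
# Back-of-envelope propagation delay in optical fibre.
# Rule of thumb: light travels through fibre at ~200 km per millisecond
# (about two-thirds of its speed in a vacuum). Illustrative only.

SPEED_IN_FIBRE_KM_PER_MS = 200

def one_way_delay_ms(fibre_route_km: float) -> float:
    """Propagation delay over a fibre route of the given length, in ms."""
    return fibre_route_km / SPEED_IN_FIBRE_KM_PER_MS

for distance_km in (40, 60, 300, 1500):
    round_trip_ms = 2 * one_way_delay_ms(distance_km)
    print(f"{distance_km:>5} km of fibre -> ~{round_trip_ms:.2f} ms round trip (propagation only)")
```

At 40-60 km the propagation component is only a few tenths of a millisecond, leaving headroom in a roughly 2ms budget for routing, switching and fibre paths that are longer than the straight-line distance. At the hundreds of kilometres typical of a distant cloud region, propagation alone can consume the entire budget.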


There is no standard yet for what constitutes low latency, but most of us agree that it’s a few milliseconds, ideally as low as 2ms, measured from the point at which the user connects to the high-speed fibre network through to the servers in the edge data centre. In terms of distance, that means 40-60 km if we want to pretty much guarantee latency in the region of 2ms. So, an edge data centre every 40 km or so in any direction. That sounds like a lot, but you can actually cover the entire UK with just over 200, fewer if you ignore the really remote areas. Of course, you would not build a grid in such a rigid pattern, but it gives a good idea of what the edge looks like and where it will be for low latency applications. It is worth noting that this grid approach means that no matter where you are, you will always be within 40 km of at least 2 and usually 4 edge data centres, and within 60 km of between 6 and 8. That way edge networks can provide resilience as well as low latency connectivity.


Figure 1 - A 40km grid on the UK
Figure 2 - Multiple edges within reach of all users
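To see why a regular grid gives that kind of multi-site coverage, here is a small Monte Carlo sketch (my own illustration, not the author’s model): it drops random user locations onto an idealised square grid of edge sites spaced 40 km apart and counts how many sites fall within 40 km and within 60 km straight-line distance. The exact counts depend on the grid geometry and where a user happens to sit, but the pattern – every location can reach several sites – is what underpins the resilience argument.

```python
# Illustrative Monte Carlo sketch (assumes an idealised square grid of edge
# sites, 40 km apart, and straight-line distances): for random user
# locations, count how many sites sit within 40 km and within 60 km.
import random

GRID_KM = 40        # assumed spacing between edge data centres
SAMPLES = 100_000

def sites_within(x: float, y: float, radius_km: float) -> int:
    """Count grid sites (at multiples of GRID_KM) within radius_km of (x, y)."""
    reach = int(radius_km // GRID_KM) + 1
    count = 0
    for i in range(-reach, reach + 2):
        for j in range(-reach, reach + 2):
            dist = ((i * GRID_KM - x) ** 2 + (j * GRID_KM - y) ** 2) ** 0.5
            if dist <= radius_km:
                count += 1
    return count

counts_40, counts_60 = [], []
for _ in range(SAMPLES):
    # Sampling one grid cell is enough: the pattern repeats across the grid.
    x, y = random.uniform(0, GRID_KM), random.uniform(0, GRID_KM)
    counts_40.append(sites_within(x, y, 40))
    counts_60.append(sites_within(x, y, 60))

print(f"sites within 40 km: {min(counts_40)} to {max(counts_40)}")
print(f"sites within 60 km: {min(counts_60)} to {max(counts_60)}")
```

On this idealised grid the straight-line distance to the nearest site never exceeds about 28 km (half the cell diagonal), so the 40-60 km figure also leaves room for fibre routes that are longer than the crow flies.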



The low latency edge – a network of small data centres to support edge-native applications, within 40-60 km of users, with a latency of around 2ms.
