Excitement at the edge…


So far, the history of computing has shown that innovations seem to arrive at regular intervals, and as time passes some of them turn out to be echoes of earlier innovations. First came the original mainframes: large, expensive machines dedicated to specific tasks. These were augmented by so-called ‘mini computers’, which could mimic many of the capabilities of mainframes while being less powerful but a lot cheaper. The mini computer was in turn usurped by the original PCs and then by PC-based servers, moving some processing tasks to servers local to the actual users. This resulted in the massive shift to ‘client server’ computing, which harnessed networks of PCs connected to large servers and could be deployed right across large enterprises. Then the Internet arrived and demonstrated the power of globally connected computers and outsourced data centres, an arrangement now described as Cloud computing.

Those of us who have lived through this see clear similarities between the availability of Cloud services and the original concept of ‘Timesharing’, where the computing power of a mainframe could be accessed, for a fee, by anyone who needed it. Similarly, client server models are now replicated by mobile devices accessing information from central servers in the Cloud via dedicated ‘apps’. Obviously the mechanisms by which these things happen today are very different, thanks to advances in technology and a massive reduction in costs. The burgeoning concept of ‘Edge’ computing also fits this pattern: processing data very close to the point of acquisition to save time and to reduce the possibility of failure caused by a communications network interruption.

We have said many times over the past few years that the explosion in smartphone manufacturing has collapsed the cost of sensors. These sensors now form the basis of many of the ‘Internet of Things’ devices that are invading our homes and lives. This is a revolution that is only getting started, but it brings with it a multitude of challenges. Despite the enormous power and scope of Cloud services, they are still constrained by the communications network bandwidth needed to access them. In a country like the UK, for example, the general availability of gigabit network capacity is very poor. Contrast this with places like South Korea or the Netherlands, where you can easily sign up for fibre-based services delivering capacity in the hundreds of megabits as a minimum.

Why should this matter, you may ask, when most people are happy so long as they can stream film and video services reliably. Well, if you are moving along a highway in a driverless car and it has to decide whether to change lanes because the car in front has stopped suddenly, are you happy for it to wait while it calls a Cloud-based data centre on the other side of the planet to decide what to do? I suspect not.

It is this situation that shows why moving some processing to the ‘Edge’ of the network makes a lot of sense. The sheer volume of data that is now being generated by intelligent devices is staggering and is challenging the capacity of the biggest data pipes now available.

We should pause at this point to remind ourselves how data capacity is usually described:

A Gigabyte is roughly 1,000 Megabytes. (The average UK download speed is approximately 46 Megabits per second, or about 6 Megabytes per second.)

A Terabyte is roughly 1,000 Gigabytes

A Petabyte is roughly 1,000 Terabytes  

Confusingly, communications networks usually refer to data traffic in Mega‘bits’, and a Megabit is one eighth of a Megabyte.
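
To make those relationships concrete, here is a minimal sketch in Python. The figures are purely illustrative, using the approximate 46 Mbit/s average UK download speed noted above; it simply converts between the units and estimates how long a one Gigabyte download would take.

```python
# Illustrative unit conversions: decimal (SI) prefixes, 8 bits per byte.
MEGABIT = 1_000_000           # bits
MEGABYTE = 8 * MEGABIT        # bits in one Megabyte (1 Megabyte = 8 Megabits)
GIGABYTE = 1_000 * MEGABYTE   # bits in one Gigabyte
TERABYTE = 1_000 * GIGABYTE   # bits in one Terabyte
PETABYTE = 1_000 * TERABYTE   # bits in one Petabyte

def download_time_seconds(size_bits, speed_mbit_per_s):
    """Seconds needed to move size_bits over a link running at speed_mbit_per_s Megabits/second."""
    return size_bits / (speed_mbit_per_s * MEGABIT)

uk_average_mbit = 46  # approximate average UK download speed noted above
print(f"46 Mbit/s is about {uk_average_mbit * MEGABIT / MEGABYTE:.1f} Megabytes per second")
print(f"One Gigabyte at 46 Mbit/s takes about {download_time_seconds(GIGABYTE, uk_average_mbit):.0f} seconds")
```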

If we take the car as an example, automotive manufacturers are working on the assumption that self-driving vehicles will generate something like 4 Terabytes of data per day, per car. Even the very high speed 5G networks now being developed will offer nothing like the capacity required, so as much of the processing and analytics needed to control the car as possible will have to happen in the vehicle itself. Obviously things like GPS for navigation and communications will still happen over the network, but in concert with, and probably subservient to, any local processing happening in the vehicle.
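
A rough back-of-the-envelope check shows why. This is a sketch only: it uses the 4 Terabyte per day figure quoted above and an assumed, purely illustrative, 100 Mbit/s 5G uplink rather than any measured network performance.

```python
# Rough check: sustained rate needed to move 4 Terabytes per vehicle per day.
TERABYTE_BITS = 8 * 1_000_000_000_000    # one decimal Terabyte expressed in bits
SECONDS_PER_DAY = 24 * 60 * 60

data_per_day_tb = 4                       # figure quoted above for a self-driving car
required_bits_per_s = data_per_day_tb * TERABYTE_BITS / SECONDS_PER_DAY
print(f"Sustained rate needed: {required_bits_per_s / 1_000_000:.0f} Mbit/s per car")

# Compare against an assumed, purely illustrative, 5G uplink of 100 Mbit/s.
assumed_5g_uplink_mbit = 100
ratio = required_bits_per_s / 1_000_000 / assumed_5g_uplink_mbit
print(f"That is roughly {ratio:.1f}x the assumed {assumed_5g_uplink_mbit} Mbit/s uplink")
```

On those assumptions a single car would need a sustained uplink of several hundred megabits per second, before any other vehicle on the same cell is considered.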

It is not only vehicles that will require local processing. Connected aircraft are estimated to generate something like 5 Terabytes per day, covering engine and systems data, location information, and onboard communications and entertainment. It is impractical at the moment to handle that amount of data over conventional networks because of bandwidth restrictions, so almost all of today's aircraft communications are limited to engine and systems monitoring and location data. Hospitals and many factories will, in future, be generating somewhere between 3 Terabytes and 2 Petabytes per day.

These are huge numbers, and they only reflect the expected number of smart devices that will be generating data. In a modern economy everyone is becoming conditioned to the availability of data and information at any time, in any location, and people are upset if the expected stream of data is interrupted. It is questionable how many existing systems are able to ingest that amount of data, but many systems being designed now are planning for such volumes. This is why it makes sense to process as much of the relevant data as possible in, or close to, the place where it is needed. Calculations need to be made that trade the need for analysis and answers against the cost of communication and the latency (time) it takes to get answers from an app or a service in the Cloud. The “I think this is a person in the road I’m approaching, so do I stop?” kind of question.
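
The latency side of that trade-off can be sketched just as simply. The numbers below are assumptions for illustration only: a 70 mph motorway speed, a few milliseconds for an on-board decision and a 200 millisecond round trip to a distant Cloud data centre.

```python
# Illustrative latency comparison: how far the car travels while waiting for a decision.
MPH_TO_M_PER_S = 0.44704

speed_m_per_s = 70 * MPH_TO_M_PER_S       # assumed motorway speed, roughly 31 m/s

def metres_travelled(latency_ms):
    """Metres covered at the assumed speed during latency_ms milliseconds."""
    return speed_m_per_s * latency_ms / 1000

for label, latency_ms in [("on-board (edge) processing", 5),
                          ("round trip to a distant Cloud data centre", 200)]:
    print(f"{label}: about {metres_travelled(latency_ms):.1f} m before an answer arrives")
```

On those assumptions the car travels a fraction of a metre while deciding locally, but more than six metres while waiting for a remote answer, which is the whole argument for the edge in one line.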

For the logistics industry, communications have always been a fundamental requirement, and this increasingly applies to information systems and services. As every part of a modern supply chain starts to generate huge amounts of data, how that data is used and analysed will inform where the processing needs to take place. How local processing power is exploited, whether on board a vehicle, aircraft or ship, or in a distribution centre or factory, will determine the shape of an organisation and its information systems infrastructure. Based on my experience, it is always sensible to imagine the largest amount of capacity required and use that as the norm. One thing I am comfortable predicting is that the need for processing power and communications capacity is only going to increase, irrespective of whether that happens inside the network or at the edge.

Source: Transport Intelligence, September 5, 2018

Author: Ken Lyon