Sophistication at the Edge Part 1

Jamal Robinson
10 min read · Jan 9, 2018


Part 1 of a two-part series on how edge computing will play an increasingly important role in an enterprise’s cloud and digital transformation strategy. Part 1 defines what the edge is and the technologies involved, while Part 2 discusses the criteria that drive businesses to adopt an edge strategy, along with some common use cases seen across industries.

Introduction

As we reach a time of numerous successful born-in-the-cloud companies (Salesforce, Netflix, Workday, etc.) and many legacy enterprises that have migrated all or some portion of their workloads to the cloud, we also find ourselves fast approaching the next era in cloud computing. This phase will be led by edge computing. As a primer on the importance of edge computing, I recommend you check out Peter Levine’s “End of Cloud Computing” talk from the 2016 a16z Summit if you haven’t already. Or you can read Tom Bittman’s article, The Edge Will Eat the Cloud, where he states,

“The edge will eat the cloud. And this is perhaps as important as the cloud computing trend ever was.” — Tom Bittman, Vice President and Distinguished Analyst at Gartner Research

I won’t go as far as telling you this is the end of cloud computing or predict what will be eating whom, as Peter and Tom have, but I will explore the increasingly important role edge computing plays alongside the cloud and discuss a few of the reasons I believe it will represent the biggest trend in cloud computing in the coming years.

Figure 1: Flood of Data Expected by 2020

By now you’ve probably seen an image similar to Figure 1, shown at Intel’s 2017 Investor Meeting, which indicates that data generation is growing at an exponential rate (scientists say 90% of the world’s data was created in the past two years). The less obvious message in the graphic is that most of that data is being generated by machines rather than humans. Because cloud networks can’t handle that level of bandwidth while meeting the sub-millisecond latency requirements of the applications, alternative solutions had to be created. Enter edge and fog computing.

Defining the Edge

For the purpose of this article I’m going to define edge computing as,

“Edge computing is a method of optimizing cloud computing systems by performing data processing at the edge of the network, near the source of data generation.”

Notice I said “edge of the network” and, more importantly, “near the source of data generation.” I often hear edge computing associated with geographic locations, mainly outside of the cloud provider’s data centers, which isn’t always true, as some customers’ source of data generation can be in the data center. For edges outside of the data center, you’ll see there are often requirements for a layer between the cloud and the edge, which is called the fog.

Figure 2: Typical Cloud, Fog, Edge Architecture

The concept of fog computing was created by Cisco and is defined as,

“A decentralized computing infrastructure in which data, compute, storage and applications are distributed in the most logical, efficient place between the data source and the cloud.”

If you’re wondering, fog (or fogging, for fog computing) isn’t an acronym but a term some intelligent technologist came up with because fogs are like clouds but closer to the ground. Clever naming aside, edge computing and fogging efforts arose out of the need to process the massive amount of data shown in Figure 1, most of which originates outside of the cloud.

So…Why Edge Computing Now and Not Before?

At a high level, this question was already answered in Figure 1: the unprecedented levels of data have driven us to think of alternative solutions such as fog and edge computing. To truly understand the answer, I think it is important to dig deeper on what I’m calling Distributed Cloud Computing 3.0. I’m sure some readers will think to themselves, Wait, we already have distributed computing, as I’m in the cloud and have my compute and storage resources distributed geographically across my cloud provider’s data centers and/or Why Distributed Cloud Computing 3.0 vs. 2.0? Well, the 3.0 is because I’m assuming an overzealous salesperson somewhere has already taken 2.0 in a presentation, so I figured bumping to 3.0 is a safer bet. To the first question about compute distribution, I agree that cloud service providers (CSPs) like Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP) provide distributed compute infrastructures, but your ability to compute is primarily limited to their cloud.

Figure 3: True Distribution of Compute, Not Limited to Just Data Centers

While some CSPs offer edge devices like AWS Snowball Edge or Azure IoT Edge, most of the time when people say they have distributed computing they are indicating their ability to distribute storage and compute across data centers. They aren’t talking about equivalent storage, compute and analytics distribution across the cloud, edge and IoT devices as shown in Figure 3, which allows customers to realize value from the data wherever they see fit.

In Distributed Cloud Computing 3.0, more customers will demand specialized storage and compute capability across the cloud, the edge and IoT devices, giving them the ability to compute where it matters to them.

One G, Two Gs, Three Gs… 5G

Figure 4: The more Gs the better right?

Different technologies exist around the edge to allow true distributed storage, analytics and computation to work successfully. At the source are the IoT devices that generate the data by interacting with a user or the outside world through sensors. Some of those devices even have advanced mobile processors capable of running artificial intelligence workloads on the device. Then there is the fog cluster of mobile edge compute resources, which provides additional compute and storage for IoT devices, as some connected IoT devices have no storage or CPU (think of a temperature sensor in a weather system). The fog cluster can then connect to a data center or public cloud to analyze, store or further compute data. Connecting all the devices around the edge is the most important piece: 5G networking.
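To make that flow concrete, here is a minimal Python sketch of the three tiers: a dumb sensor emits readings, a fog node buffers and summarizes them locally, and only the summary travels up to the cloud. The class and method names are illustrative stand-ins, not any vendor’s SDK.

```python
# Minimal sketch of the cloud/fog/edge tiers from Figure 2.
# All class and method names are illustrative, not a real SDK.

from dataclasses import dataclass, field
from typing import List


@dataclass
class SensorReading:
    device_id: str
    value: float                      # e.g. temperature in Celsius


@dataclass
class FogNode:
    """Aggregates readings from nearby devices before involving the cloud."""
    buffer: List[SensorReading] = field(default_factory=list)

    def ingest(self, reading: SensorReading) -> None:
        self.buffer.append(reading)

    def summarize(self) -> float:
        """Local, short-term analysis: average of buffered readings."""
        return sum(r.value for r in self.buffer) / len(self.buffer)


@dataclass
class Cloud:
    """Long-term storage and analysis of fog-level summaries."""
    history: List[float] = field(default_factory=list)

    def store(self, summary: float) -> None:
        self.history.append(summary)


# A bare temperature sensor (no storage or CPU of its own) just emits readings;
# the fog node does the computing on its behalf.
fog = FogNode()
cloud = Cloud()
for value in (21.4, 21.9, 22.3):
    fog.ingest(SensorReading(device_id="temp-sensor-01", value=value))

cloud.store(fog.summarize())          # only the aggregate travels to the cloud
print(cloud.history)
```

The point of the sketch is the division of labor: raw readings stay near their source, and the cloud only ever sees the smaller, already-processed result.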

5G networking is the single most important technology fueling the adoption of edge computing. This makes sense because the need for speed at the edge is the primary factor pushing companies to develop an edge strategy, as you’ll see in Part 2. So what’s so great about 5G? 5G networking provides theoretical wireless speeds of 10–20 Gbps, and Huawei was able to pull down 3.6 Gbps in a field test a few years back during their initial testing. Allowing connected devices to communicate on wireless, high-speed 5G networks will have the following primary benefits:

  • Ultra-low latency. To reduce latency, cloud services are often deployed across multiple geographically distributed data centers. As of this writing, the cloud leader by revenue, AWS, has 18 regions and 49 availability zones. This pales in comparison with the potential to have a majority of the existing five million mobile towers outfitted with 5G, which will allow for edge connections at speeds of up to 20 Gbps. Reduced latency isn’t just about speed optimization for existing applications either, as you’ll see later in our use case discussion. There are applications, like autonomous vehicles, that can only work with sub-10-millisecond latency, which just isn’t possible when making round trips to the cloud.
  • Higher bandwidth. The differences between 56 Mbps (3G), 1 Gbps (4G) and 20 Gbps (5G) are very significant when it comes to enabling your edge application (see the quick transfer-time calculation after this list). Higher bandwidth allows for more horizontal machine-to-machine communication at the edge, reducing network congestion and allowing companies to derive answers from sensor and user input almost immediately.
  • Location-specific computing. With cloud computing, most architectures are built on the presumption that your edge devices will have access to a stable, high-bandwidth connection back to the cloud, which isn’t always true. 5G networking, combined with fog and edge compute devices, will allow corporations to compute and analyze where it matters most, providing results locally at or on the devices.
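As a rough illustration of why the bandwidth jump matters, here is a back-of-the-envelope calculation of how long it would take to move a few terabytes of sensor data over the peak rates quoted above. The 5 TB payload is just a placeholder (it foreshadows the jet engine example below), and real-world throughput will sit well below these theoretical peaks.

```python
# Back-of-the-envelope transfer times for the bandwidth figures quoted above.
# Assumes theoretical peak rates and ignores protocol overhead and contention.

def transfer_time_hours(data_terabytes: float, link_gbps: float) -> float:
    data_gigabits = data_terabytes * 1000 * 8   # TB -> Gb (decimal units)
    return data_gigabits / link_gbps / 3600     # seconds -> hours

payload_tb = 5  # placeholder: one flight's aggregated sensor data (see the jet example below)
for name, gbps in [("3G", 0.056), ("4G", 1.0), ("5G", 20.0)]:
    print(f"{name}: {transfer_time_hours(payload_tb, gbps):.1f} hours")

# 3G: ~198 hours, 4G: ~11 hours, 5G: ~0.6 hours (roughly half an hour)
```

In other words, an extraction job that is hopeless over 3G and painful over 4G becomes a turnaround-time task over 5G, which is exactly the property edge architectures depend on.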

The combination of lower-latency, higher-bandwidth 5G networks that can conduct compute operations at the source of data generation with fog and edge devices will have meaningful effects across many industries. Oil and gas is a great example of where advanced networking and compute at the edge increase system uptime, reducing exploration downtime that can cost millions of dollars per hour, as outlined in “How Edge Computing and Drones Reduce the Cost of Oil & Gas Exploration” by Schneider Electric. Now that we have defined the edge and provided a clear picture of the technologies that make it run, let’s put it all together with an example.

Edge Computing Example with a Jet Engine

To drive the definitions home and make the pyramid from Figure 2 come to life, let’s use a real-world example of a jet engine. In 2015, a C Series jetliner carrying Pratt & Whitney’s Geared Turbofan (GTF) engine, fitted with 5,000 sensors, generated about 10 GB of data per second. For a single twin-engine aircraft with an average 2.5-hour flight time, that means up to 176 TB of data. Delta Air Lines claims about 5,800 flights a day, and I’ll spare you the math of multiplying that many flights by 176 TB of data per flight, but if this jet configuration is replicated across the airline industry it will represent an enormous amount of data generation. Let’s break down an example of where flight data for the jet can be collected, stored and analyzed across cloud, fog and edge nodes in three phases:

  1. Edge (Real-Time Analysis). The plane takes off from the aircraft carrier for its flight. An edge device sitting on the plane reads the 5,000 sensors, scanning the 10 GB of data each second to discern whether there is any threat of immediate engine failure or whether the engine can complete the flight. This engine-failure check is run every second; if the answer is no, the 10 GB of data is stored and the next second is checked (a minimal sketch of this loop follows the list). This process continues until the plane lands on the runway. Another operation running in parallel is an aggregation of the values each second. NOTE: As hard drives, CPUs and jet engines don’t pair well, for this edge device the sensor and compute are separate, but for others they may be combined in the same unit.
  2. Fog (Short-Term Analysis). As our jet lands back on the aircraft carrier, the edge devices use 5G network speeds of up to 20 Gbps to extract 5 TB of aggregated flight data in roughly 40 minutes. The extracted data is relevant for short- and long-term analysis and storage. The fog nodes compute the plane’s flight data to predict whether the engine can successfully make another flight. The fog node uses data from all other flights arriving on and leaving the aircraft carrier, as they will have similar flight patterns and their flight data for the day is a relevant indicator when determining readiness for the next flight along similar flight paths.
  3. Cloud (Long-Term Analysis). After computation and massaging of the data at the edge, getting rid of what isn’t relevant for long-term historical analysis, the edge nodes send 1 TB of relevant data off to the cloud. Once the 1 TB of data lands in the cloud, long-term historical analysis occurs. Business analysts in procurement analyze how long an engine stays active before needing to be replaced and can set a forecast for jet-engine maintenance and replacement costs.
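Here is a minimal sketch of the per-second edge loop described in phase 1, assuming the failure test is a simple threshold on the sensor readings. The sensor values, limit and 10-GB-per-second storage figure are hypothetical stand-ins, not real avionics logic.

```python
# Toy version of the per-second edge check from phase 1. The sensor source,
# failure rule and thresholds are hypothetical stand-ins, not real avionics logic.

import random
from statistics import mean

def read_engine_sensors():
    """Stand-in for the ~5,000 sensor values captured each second."""
    return [random.gauss(600.0, 5.0) for _ in range(5000)]   # e.g. turbine temps

def imminent_failure(sensor_values, limit=700.0):
    """Hypothetical rule: flag a problem if any reading exceeds a hard limit."""
    return max(sensor_values) > limit

per_second_aggregates = []      # what the fog layer later pulls off the plane
data_stored_gb = 0              # ~10 GB lands in on-board storage each second

for second in range(60):        # one minute of flight instead of a full 2.5 hours
    values = read_engine_sensors()
    if imminent_failure(values):
        print(f"t={second}s: alert the crew immediately")    # real-time response
    data_stored_gb += 10                                      # raw data stays on the edge device
    per_second_aggregates.append(mean(values))                # only the aggregate goes to the fog

print(f"{data_stored_gb} GB stored, {len(per_second_aggregates)} aggregates ready for fog analysis")
```

Run over a full 2.5-hour, twin-engine flight, that 10 GB per second is what adds up to the ~176 TB figure quoted earlier, while only the much smaller set of aggregates ever leaves the plane for the fog and cloud layers.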

There are two important things to point out from this scenario. First, the closer you are to the data, the more you can focus on a real-time response versus long-term analysis. As Craig Strong, Chief Technology Officer of Insight Software Ltd, states, real-time analysis

“transforms organizations from reactive environments, being managed by static and aged data, to automated continuous learning environments in real time”

providing them a competitive advantage. The second thing to notice is the potential for overlap between layers. One airline may assess whether the plane is ready to go out for another flight at the fog layer, while another may make that assessment at the very edge, on the plane itself. This largely depends on the industry, the amount of data and the purpose that data serves. If we were to switch from an airplane to an oil rig, the business decisions and type of data gathered would be different, but the dicing of compute from the pyramid in Figure 2 remains. The oil company needs to read sensors to know the tension on the drill in real time (edge) to make sure they don’t break it, conduct short-term analysis (fog) around things like current temperatures and atmospheric pressure before drilling for the day to tune the drill speed, and run a long-term historical analysis (cloud) across drilling sites to help geologists better select the next drilling site(s).
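Purely as an illustration of that overlap, here is one way to jot down where each of the two examples above might place its compute. The wording is paraphrased from this article and the structure is just a plain dictionary, not any real configuration format.

```python
# Illustrative only: the same edge/fog/cloud split applied to two industries.
# Descriptions are paraphrased from the examples above, not a real config schema.

compute_placement = {
    "jet engine": {
        "edge":  "per-second check for imminent engine failure",
        "fog":   "fit-to-fly assessment using the day's flights from the same carrier",
        "cloud": "fleet-wide engine wear and replacement-cost forecasting",
    },
    "oil rig": {
        "edge":  "real-time drill-tension monitoring to avoid breaking the drill",
        "fog":   "daily tuning of drill speed from temperature and pressure data",
        "cloud": "historical analysis across sites to pick the next drilling site",
    },
}

for asset, layers in compute_placement.items():
    for layer, task in layers.items():
        print(f"{asset:10s} | {layer:5s} | {task}")
```

The layer boundaries are a business decision as much as a technical one; two companies with identical hardware could legitimately fill in this table differently.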

The Wrap Up

Now that we understand what it means to distribute our compute across cloud, fog and edge devices, we will dive into the business drivers that are accelerating edge adoption and some challenges that may slow edge computing growth in Part 2 of this series. There are also discussions about flying network-enabled cows and financial investing advice not typically found in edge computing articles (seriously).

If you enjoyed this article, please tap the claps 👏 button.

Interested in learning more about Jamal Robinson or want to work together? Reach out to him on Twitter or through LinkedIn.
