On a farm in rural Oregon, a set of gauges monitor weather and drought conditions to promote optimal crop production.

An automated drone soars through the Grand Canyon taking photographs, and gathering environmental and geographical data.

A semi-truck barreling down I-5 transmits its location, load weight, and operating condition to a central transportation system thousands of miles away.

En route to your favorite coffee shop, you place your latte order from a mobile app so it’s ready and waiting when you arrive.

What do these four scenarios have in common? They all rely on edge computing.

Edge computing happens any time you move part of your application closer to where the “action” is. That “action” may be a source of data you want to process (weather conditions on a farm) or track (the semi-truck on the highway), the end user of your application (a person ordering their latte), or a system you’re controlling (an automated drone).

But why should you perform some computing tasks in an edge environment and others in a centralized cloud environment? What’s the difference between the two, and why would you use one instead of the other? What are the benefits and challenges of each approach?

The edge, the cloud, or both? The case of the autonomous car

Edge computing is all about putting time-sensitive operations closer to where they will have an impact. It puts computation—data collection and analysis—where it can operate most efficiently, as opposed to where it’s most convenient for developers or operators.

In cloud computing, data is collected and analyzed in some centralized location where developers and operators have more control over processing and system communications.

Putting computation at the edge is harder and riskier than keeping it together in the cloud, so you want to do it only when there’s a good reason. So how do you decide whether to put computation in the cloud, at the edge, or somewhere in between?

To help figure it out, let’s consider the case of the driverless car.

Today’s autonomous cars are chock-full of cameras, sensors, and controls that detect the road and various obstacles. The car uses these data feeds and controls to determine what’s really happening around it: Is that blob over there a human crossing the street or a road-closed barrier?

Autonomous cars have controls for steering, braking, and applying power. But they also have controls and sensors for monitoring the health of the car itself: Is the motor operating efficiently? Is the passenger compartment a comfortable temperature? Does the car need to deploy an airbag right now? Some of this computation must occur in the car, but much of it can occur in the cloud.

In many cases, performance considerations mean computations must occur in the car: image recognition (is that another car near me or some other type of object?), threat detection (is the car in front of me suddenly applying its brakes?), lane keeping (am I staying in my lane?), and collision avoidance (quick—swerve right to avoid a crash!). These time-sensitive calculations must be as fast and accurate as possible, and their results can’t suddenly lag or go offline because of a bad internet connection. This is why autonomous cars need edge computing.
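
To make that concrete, here’s a minimal sketch in Python of the kind of hard-deadline loop that has to live in the car itself. Everything in it (the sensor stubs, the 10 ms budget, the obstacle threshold) is a hypothetical stand-in, but the shape is the point: the loop reads local sensors, decides, and acts without ever waiting on a network.

```python
import time
from dataclasses import dataclass

# --- Hypothetical stand-ins for the car's local sensors and actuators ---

@dataclass
class Obstacle:
    distance_m: float

    @property
    def collision_imminent(self) -> bool:
        return self.distance_m < 5.0  # illustrative threshold

def read_camera_frame():
    """Stub for a local sensor read; a real car reads hardware here."""
    return b"raw-frame-bytes"

def detect_obstacles(frame):
    """Stub for on-board model inference; runs locally, never in the cloud."""
    return [Obstacle(distance_m=42.0)]

def apply_brakes():
    print("braking")

# --- The hard-deadline control cycle itself ---

FRAME_BUDGET_S = 0.010  # assume a 10 ms budget per control cycle

def control_cycle():
    start = time.monotonic()
    frame = read_camera_frame()
    if any(o.collision_imminent for o in detect_obstacles(frame)):
        apply_brakes()  # act immediately; no network round trip
    elapsed = time.monotonic() - start
    if elapsed > FRAME_BUDGET_S:
        # A blown deadline is a local safety bug; a faster internet
        # connection wouldn't fix it, which is why this code lives at the edge.
        print(f"deadline overrun: {elapsed * 1000:.1f} ms")

if __name__ == "__main__":
    control_cycle()  # a real system runs this many times per second
```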

On the other hand, many driverless-vehicle processes can—and should—occur farther away, in the cloud. For instance, the cloud is a suitable place to run navigation systems that address such questions as: How do I get from point A to point B? What’s the optimal route? Is there construction or traffic that makes another route preferable?

These systems need access to centralized data (such as maps and traffic information), and they need to correlate information from other sources to complete the computations. But these tasks are rarely time sensitive, at least not so sensitive that the extra milliseconds of a round trip to the cloud matter.
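
By contrast, here’s a sketch of what the cloud half of the split might look like, assuming a hypothetical HTTP routing service (the URL and response shape are invented for illustration). The round trip can take hundreds of milliseconds and can fail outright, and neither outcome is dangerous: the car simply keeps the route it already has.

```python
import json
import urllib.request

# Hypothetical cloud routing endpoint; URL and response shape are illustrative.
ROUTING_URL = "https://cloud.example.com/route"

def request_route(origin, destination, current_route):
    """Ask the cloud for a route, tolerating latency and failure.

    A round trip like this can take hundreds of milliseconds, which is
    fine for navigation and unacceptable for collision avoidance.
    """
    payload = json.dumps({"from": origin, "to": destination}).encode()
    req = urllib.request.Request(
        ROUTING_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=2.0) as resp:
            return json.load(resp)["waypoints"]
    except OSError:
        # No connectivity? Keep driving the route we already have.
        return current_route
```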

Considering when and how to separate computation between the edge and the cloud is important because computation at the edge typically requires much more effort to manage properly. Upgrading software, troubleshooting bugs, monitoring performance—all of these things are easier to do at a centralized location than within a distributed, remote system. And when your software needs to scale, remember that scaling means something different at the edge than in the cloud.

The edge’s advantages and challenges

Let’s look at some of the advantages of edge computing, along with the challenges it presents—especially when dealing with large numbers of nodes.

First, consider the advantages of edge computing:

  • The edge provides faster, more responsive processing for time-sensitive tasks.
  • The edge is less dependent on network connectivity, which increases reliability in the face of intermittent or unreliable connections.
  • The edge enables dedicated processing for a single, specific task.
  • The edge provides ready access to highly individualized data.

But edge computing also poses unique challenges, typically centered around the need for a large number of nodes distributed over a wide geographic area. Edge computing challenges include:

  • Managing deployments across a fleet of edge nodes. For example, how do you ensure all edge devices are running the same version of your software? (A minimal version-check sketch follows this list.) Additionally, the edge has variable and unique provisioning issues (more on that below).
  • Monitoring the usage and performance of edge software becomes increasingly difficult as you add nodes.
  • Remote debugging and troubleshooting becomes increasingly difficult as you add nodes.
  • Determining if the root cause of an incident lies at the system level or at the single-node level becomes increasingly difficult as you add nodes.
  • The edge isn’t ideal for use cases that require unpredictable or “bursty” amounts of CPU. Such tasks are better suited to the dynamic computation available in the cloud.
  • The edge isn’t ideal for use cases that require ready access to global data. This data is more accessible from cloud-based services.
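
To illustrate the first challenge above, here’s a minimal version-check sketch. It assumes each node self-reports its software version to a central controller; the node IDs, versions, and reporting mechanism are all hypothetical.

```python
EXPECTED_VERSION = "2.4.1"  # illustrative target version for the fleet

def find_stale_nodes(fleet):
    """Return the IDs of nodes whose reported version doesn't match the target.

    `fleet` is assumed to be a mapping of node ID to the software version
    that node last reported, gathered however your fleet phones home.
    """
    return [node_id for node_id, version in fleet.items()
            if version != EXPECTED_VERSION]

# Example: three field nodes, one of which missed the last rollout.
fleet_report = {"node-001": "2.4.1", "node-002": "2.3.9", "node-003": "2.4.1"}
print(find_stale_nodes(fleet_report))  # ['node-002']
```

The hard part, of course, is everything this sketch assumes away: getting thousands of remote devices to report reliably, and rolling out fixes to the stragglers it finds.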

Seven keys to successfully building edge computing into your application

Fortunately, careful implementation of edge computing can dramatically increase your odds of success. These seven best practices are a good place to start:

  1. Be smart about what you manage at the edge versus in the cloud. Do a thorough analysis of your use case and make an active decision. Both the cloud and the edge have unique advantages and disadvantages, but as a general rule of thumb: When in doubt, use the cloud. The edge is optimized for computations that require immediate, individualized results, which does not describe the majority of today’s use cases.
  2. Don’t throw away DevOps principles at the edge. It’s tempting to abandon established best practices when you adopt new technologies, and edge computing is no exception. Yes, edge computing represents a highly specialized situation, and it often requires new processes and procedures. But don’t discount the utility of existing DevOps principles and best practices. Process and tools may change, but your teams still require ownership, accountability, and distributed decision making—just the kinds of things that DevOps provides.
  3. Automate distributed deployments. All too often, developers ship new applications and leave deployment automation as something to “fix later.” Automated and repeatable deployments are critical for all applications, but they are even more important for edge applications, due to the remote nature of the applications and the huge number of nodes involved.
  4. Reduce versioning as much as possible. At the same time, you need to limit the number of deployments in edge computing architectures. This may seem antithetical to the DevOps emphasis on continuous integration/continuous deployment (CI/CD), but the high number of distributed nodes in an edge computing environment means you can’t always deploy to the edge as fast or as often as you can deploy to the cloud. Similarly, to ease software management on a large number of nodes, each node should run the same hardware, firmware, and software configuration. Of course, this isn’t always possible; mobile applications often run on a plethora of hardware and software configurations. But the more you can reduce the number of variables, the easier it is to manage the software for these nodes.
  5. Understand the difference between scaling at the edge and in the cloud. Edge software often runs as thousands of instances in a highly distributed manner, but each instance typically performs a single task or manages one device, and the instances are often spread across many geographic locations. Cloud software, on the other hand, runs a few instances—perhaps on multiple distributed servers—but each instance typically manages actions for thousands of users, and the instances run in a small number of locations. Scaling in the cloud is about addressing how much each cloud instance can handle. This is typically done via vertical scaling—increasing the size of individual nodes as demand increases. Edge scaling, on the other hand, is dictated by how many edge nodes you can handle at a given time. This is horizontal scaling: adding more nodes to meet demand, rather than making individual nodes larger.
  6. Deploy monitoring and analytics all along the edge. As you deploy more distributed nodes at the edge, it becomes increasingly important to understand how each node is performing at any given time. To properly manage a highly scaled and distributed system of edge nodes, you need a continuous view into the health of every node; a minimal heartbeat sketch follows this list. If you have a fleet of autonomous vehicles, for example, you can’t predict the behavior of the entire fleet based on metrics gathered from just one car.
  7. The edge is not magic. Edge computing is not new; it’s just a rebranding of what we’ve been doing for years. What we used to call a “browser application” or a “mobile application” or a “point of sale” device has always been the edge. The edge is just a new label on an existing class of computation, but the increasing popularity of the term will encourage a future with better, edge-focused tooling and custom-tailored services.
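
As a sketch of the node side of the monitoring described in point 6, here’s a hypothetical heartbeat sender; the collector URL, payload shape, and reporting interval are all assumptions for illustration. Each node pushes a small health sample on a schedule, and the central collector flags any node that goes quiet.

```python
import json
import time
import urllib.request

# Hypothetical metrics collector; the URL and payload shape are illustrative.
COLLECTOR_URL = "https://metrics.example.com/heartbeat"
NODE_ID = "edge-node-17"

def send_heartbeat():
    """Push one health sample; the collector aggregates across the fleet."""
    payload = json.dumps({
        "node": NODE_ID,
        "ts": time.time(),
        "uptime_s": time.monotonic(),  # crude proxy for time since boot
    }).encode()
    req = urllib.request.Request(
        COLLECTOR_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=5.0).close()
    except OSError:
        pass  # drop this sample; the collector flags nodes that stay silent

if __name__ == "__main__":
    while True:
        send_heartbeat()
        time.sleep(60)  # one sample per minute, per node
```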

Remember: Edge computing is really no more than a new way to think about what we’re already doing with many of our applications. Successful edge computing is all about managing modern applications and their components, whether these reside in the cloud or on the edge.