Last week I attended Edge Field Day 1, a Tech Field Day event focused on edge computing solutions. Some of the sessions really made me think.
Edge infrastructures are quite different from anything in the data center or the cloud: the farther from the center you go, the smaller the devices become. Less CPU power, less memory and storage, and less network bandwidth and connectivity all pose serious challenges. And that's before considering physical and logical security requirements that matter less in the data center or the cloud, where the perimeter is well protected.
In addition, many edge devices stay in the field for several years, posing environmental and lifecycle challenges. To complicate things even further, edge compute resources often run mission-critical applications, which must be developed for efficiency and resiliency. Containers and Kubernetes (K8s) may be a good option here, but does the edge really want the complexity of Kubernetes?
Assessing the value of Kubernetes at the Edge
To be fair, Edge Kubernetes has been happening for some time. A number of vendors now deliver Kubernetes distributions optimized for edge use cases, plus platforms to manage huge fleets of tiny clusters. The ecosystem is growing, and many users are adopting these solutions in the field.
But does Edge Kubernetes make sense? Or more accurately, how far from the cloud-based core can you deploy Kubernetes, before it becomes more trouble than it’s worth? Kubernetes adds a layer of complexity that must be deployed and managed. And there are additional things to keep in mind:
- Even if an application is designed around microservices (as small containers), it is not necessarily so big and complex that it needs a full orchestration layer.
- K8s often needs additional components to ensure redundancy and data persistence. In a limited-resource scenario where few containers are deployed, the Kubernetes orchestration layer could consume more resources than the application!
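The second point can be made concrete with a back-of-the-envelope calculation. This is a minimal sketch with illustrative numbers only; the node size, agent footprint, and application footprint below are assumptions for the sake of the example, not measurements of any specific Kubernetes distribution:

```python
# Illustrative sketch: what fraction of a small edge node's consumed memory
# goes to the orchestration layer rather than the application itself?
# All figures below are hypothetical, chosen only to show the shape of
# the problem on a resource-constrained node.

def orchestration_share(node_mem_mb: int, agent_mem_mb: int, app_mem_mb: int) -> float:
    """Return the fraction of consumed memory spent on orchestration
    components versus the application workload."""
    used = agent_mem_mb + app_mem_mb
    if used > node_mem_mb:
        raise ValueError("workload does not fit on the node")
    return agent_mem_mb / used

# A hypothetical 2 GB edge node running an orchestration footprint of
# ~500 MB (node agent, container runtime, system pods) next to a ~400 MB
# application: more than half of the consumed memory is overhead.
share = orchestration_share(node_mem_mb=2048, agent_mem_mb=500, app_mem_mb=400)
print(f"orchestration overhead: {share:.0%} of consumed memory")
```

On a large data-center node the same fixed overhead would be negligible; on a tiny edge node it can dominate, which is the crux of the argument above.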
In the GigaOm report covering this space, we found most vendors working on how to deliver K8s management at scale. The approaches differ, but they all include some form of automation and, lately, GitOps. This solves infrastructure management but doesn't address resource consumption, nor does it really enable container and application management, both of which remain concerns at the edge.
While application management can be solved with additional tools, the same ones you use for the rest of your Kubernetes applications, resource consumption has no real solution as long as you keep using Kubernetes. This is particularly true when, instead of three nodes, you have two or one, and maybe that one node is also very small.
Alternatives to Kubernetes at the Edge
Back at the Tech Field Day event, Avassa showed an approach I found compelling. They have an end-to-end container management platform that doesn't need Kubernetes to operate. It does everything you would expect from a small container orchestrator at the edge, while removing complexity and unnecessary components.
As a result, the edge-level component has a tiny footprint compared to even edge-optimized Kubernetes distributions. In addition, it implements management and monitoring capabilities that provide visibility into important application aspects, including deployment and management. Avassa currently offers something quite differentiated, even compared with other options for removing K8s from the (edge) picture, not least WebAssembly.
Key Actions and Takeaways
To summarize, many organizations are evaluating solutions in this space, and edge applications are usually written to very precise requirements. Containers are the best way to deploy them, but containers are not synonymous with Kubernetes.
Before installing Kubernetes at the edge, it is important to check whether it is worth doing so. If you have already deployed it, you will likely have found that its value increases with the size of the application. However, that value diminishes with the distance from the data center and with the size and number of edge compute nodes.
It may therefore be wise to explore alternatives that simplify the stack and improve the TCO of the entire infrastructure. This is even more true if the IT team in charge of edge infrastructure is small and has to interact every day with the development team. The skills shortage across the industry, particularly around Kubernetes, makes it mandatory to consider the options.
I’m not saying that Kubernetes is a no-go for edge applications. However, it is important to evaluate the pros and cons, and establish the best course of action, before beginning what may be a challenging journey.