We are often asked why automation or orchestration is needed for Edge computing at all, since Edge computing is not a new concept. In this blog, you'll learn about the role of an orchestrator in unleashing the true potential of Edge computing environments.
Recapping the uniqueness of Edge environments, the blog 'Why Edge Orchestration is Different' by Amar Kapadia highlights the following attributes:
- Scale
- Dynamic in nature
- Heterogeneity
- Dependency between workloads and infrastructure
We will look at the challenges each of these poses, and it will become obvious why automation plays a critical role. Also, as explained in the previous blog, Edge environments include both the Infrastructure and the Applications that run on it (Physical or Virtual/Cloud-native), so all of the above factors need to be considered for both.
The scale of Edge environments clearly prohibits operating them manually, since this would involve bringing up each environment, with its own set of initial (day-0) configurations, independently of the others. The problem is compounded when these environments need to be managed on an ongoing basis (day-N). This also raises the challenge of the dynamic nature of these environments, where the configurations keep changing based on the business needs of the users. Each such change potentially means tearing down the previous environment and bringing up another one, possibly with a different set of configurations. Some of these environments may take days to bring up, requiring expertise from different domains (which is another challenge), and any change, even a potentially minor one, can mean a few more days to bring the environment up again.
Another challenge with Edge environments is their heterogeneous nature, unlike the public cloud, which is generally homogeneous. This means that multiple tools, possibly from different vendors, need to be used. These tools could be proprietary or standard tools such as Terraform, Ansible, Crossplane and so on. Each vendor of the Infrastructure or the applications/network functions could be using a different tool, and even where they use standard tools, there may be multiple versions of the artifacts (e.g., Terraform plans, Ansible playbooks) to deal with.
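One common way an orchestrator copes with this heterogeneity is a registry that routes each artifact type to the tool that understands it. The sketch below is a hypothetical illustration (the tool names are real, but the registry and the runner functions are assumptions, with the actual CLI invocations stubbed out).

```python
# Hypothetical sketch: routing heterogeneous automation artifacts to the
# right tool. The registry and runners are illustrative, not a real API.
from typing import Callable

RUNNERS: dict[str, Callable[[str], str]] = {}

def register(artifact_type: str):
    """Decorator that maps an artifact type to its runner."""
    def wrap(fn):
        RUNNERS[artifact_type] = fn
        return fn
    return wrap

@register("terraform")
def run_terraform(path: str) -> str:
    return f"terraform apply {path}"    # placeholder for invoking the real CLI

@register("ansible")
def run_ansible(path: str) -> str:
    return f"ansible-playbook {path}"   # placeholder

def deploy(artifact_type: str, path: str) -> str:
    try:
        return RUNNERS[artifact_type](path)
    except KeyError:
        raise ValueError(f"no runner registered for {artifact_type!r}")

print(deploy("terraform", "plans/site-edge-001"))
```

A proprietary vendor tool would slot in as just another registered runner, which is one way an orchestrator can present a single interface over many back-end tools.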
The workloads on the edge may need to talk to or integrate with applications at a central location or in the cloud. This requires setting up connectivity between the edge and other sites as desired, and the orchestrator should be able to provision this in an automated manner as well.
Lastly, as we saw in the previous blog, there may be dependencies between the Infrastructure and the workloads, as well as between the workloads themselves (e.g., Network Functions such as 5G that are used by other applications). This makes it extremely difficult to bring them up manually or with home-grown solutions.
All these challenges mean that unless the Edge environment is extremely small and contained, it will need a sophisticated automation framework, or an orchestrator. The only scalable way to accomplish this is to specify the topology of the environment as an Intent, which is rolled out by the Orchestrator. In addition, the Orchestrator should constantly monitor the deployment and make the necessary adjustments (constant reconciliation) to the deployment and management of the topologies in the Edge environment. When a change is required, a new intent (configuration) is specified, which should be rolled out seamlessly. The tool should also be able to work with tools such as Terraform/OpenTofu, Ansible and so on, as well as provide ways to integrate with proprietary vendor solutions.
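The reconciliation idea can be sketched in a few lines: diff the desired state (the intent) against the observed state and compute corrective actions. This is a minimal, hypothetical sketch of the pattern, not any particular orchestrator's implementation; all names are illustrative.

```python
# Hypothetical sketch of intent reconciliation: compare desired vs. observed
# state and compute corrective actions. All names are illustrative.

def reconcile(desired: dict, observed: dict) -> list[str]:
    """Return the actions needed to move observed state toward the intent."""
    actions = []
    for key, want in desired.items():
        have = observed.get(key)
        if have is None:
            actions.append(f"create {key}={want}")
        elif have != want:
            actions.append(f"update {key}: {have} -> {want}")
    for key in observed:
        if key not in desired:                 # drift not covered by the intent
            actions.append(f"delete {key}")
    return actions

intent   = {"replicas": 3, "version": "2.1"}
observed = {"replicas": 2, "stale_job": "cleanup"}
print(reconcile(intent, observed))
# -> ['update replicas: 2 -> 3', 'create version=2.1', 'delete stale_job']
```

An orchestrator runs this comparison continuously; when the operator supplies a new intent, the same loop rolls the change out without tearing the whole environment down by hand.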
At Aarna.ml, we offer an open source, zero-touch, intent-based orchestrator, AMCOP (also offered as a SaaS, AES), for lifecycle management, real-time policy, and closed-loop automation for edge and 5G services. If you'd like to discuss your orchestration needs, please contact us for a free consultation.