Towards Autonomous Operation of 5G and Beyond (5GB) Networks

5G & Beyond (5G&B) networks are being instrumented for data collection at all layers, providing opportunities for new applications and AI/ML-driven innovation. By leveraging AI/ML techniques that understand multi-modal models and infer customer intent, and by building new knowledge planes driven by closed-loop, data-driven reinforcement learning to complement the management, control, and data planes, 5G&B networks can realize the promise of dynamic and efficient autonomous operation.

At the IEEE Future Networks World Forum event, held in Montreal and virtually, Amar Kapadia, Aarna.ml CEO and Co-founder, joined a roster of experts from industry, academia, standards bodies, open-source projects, and the user community to address the challenge of autonomous operation of 5G&B networks. The session provided a unique forum for practitioners and researchers to share perspectives on recent developments, the evolving landscape of AI/ML-based autonomous network operation, deployment use cases, and business benefits.

Learn more about the session (available on-demand through Nov 15, 2022)

See Amar's IEEE presentation covering Open Source Orchestration of 5G & Edge Services and learn how O-RAN orchestration can drive 49% lower CAPEX.

Aarna

Equinix and Aarna.ml Win ETSI & LF Edge Hackathon

For the last several months, a team from Equinix and Aarna – codename Team DOMINO – has been busy developing a submission for the ETSI and LF Edge Hackathon. Participants were asked to develop an innovative edge application or solution utilizing ETSI MEC service APIs and LF Edge Akraino Blueprints. The Hackathon ran remotely from June to September, with a short list of the best teams invited to the Hackathon "pitch-off" at the Edge Computing World Global event in Santa Clara, California (Silicon Valley) on October 10-12.

We're happy to announce that the Team DOMINO submission, "Build your Edge Application or Solution with ETSI MEC APIs and LF Edge Akraino Blueprints," has been named the Hackathon WINNER at Edge Computing World. The solution uses the Akraino Public Cloud Edge Interface (PCEI) blueprint to demonstrate orchestration of federated MEC infrastructure and services across two operators/providers (a 5G operator and a MEC provider), as detailed below.

A big congratulations to Oleg Berzin and Vivekanandan Muthukrishnan for crafting and presenting the winning submission, and congratulations as well to the other finalists. And finally, a big "thank you" to the event hosts, ETSI and LF Edge.

The winning submission demonstrates orchestration of federated MEC infrastructure and services, including:

  • Bare metal, interconnection, virtual routing for MEC and Public Cloud IaaS/SaaS, across two operators/providers (a 5G operator and a MEC provider)
  • 5G Control and User Plane Functions
  • Deployment and operation of end-to-end cloud native IoT application making use of 5G access and distributed both across geographic locations and across hybrid MEC (edge cloud) and Public Cloud (SaaS) infrastructure

By orchestrating bare metal servers and their software stack, 5G control plane and user plane functions, interconnection between the 5G provider and the MEC provider, connectivity to a public cloud, as well as the IoT application and the MEC Location API service, we show how providers can share their services in a MEC Federation environment.

Learn more about the winning submission and get the submission materials here: https://lnkd.in/eVbHfUVc

Want to learn more about PCEI and how open source approaches are impacting the edge? Contact us: info@aarnanetworks.com.

Amar Kapadia

Reduce 46-49% of Public 5G Network CAPEX with a Vendor Neutral O-RAN SMO
Advantages of a vendor neutral SMO

There are three types of 5G RAN customers I run into:

  1. Don’t see the value of O-RAN 
  2. See the value of O-RAN but don’t see the value of a vendor neutral O-RAN SMO (Service Management and Orchestration)
  3. See the value of a vendor neutral SMO

This blog is addressed to group #2 to convince them to move to group #3.

The thought process of group #2 is as follows: “I see standards as a way to get discounts from existing vendors; however, I don’t see enough value in a third-party vendor neutral SMO. The ‘safety’ of one throat-to-choke outweighs the ‘adventure’ of disaggregation, even if disaggregation promises innovation, further cost reductions, and protection from vendor lock-in. So, I’m just going to buy my SMO from the RU/DU/CU vendor.”

Chris Lamb’s presentation from GTC Fall 2022, titled “Using AI Infrastructure in the Cloud for 5G vRAN,” is fascinating. While it doesn’t directly address the above argument, it can be used to do so. Chris starts by stating that up to 65% of a public 5G network’s CAPEX is spent on RAN. Furthermore, only 25-30% of the RAN is utilized. In other words:

65% of public 5G cost x 70-75% underutilization = 46-49% of 5G network CAPEX wasted
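A quick sanity check of that arithmetic, using only the numbers quoted above:

```python
# Sanity check of the 46-49% figure (inputs from Chris Lamb's GTC talk as cited above).
ran_share_of_capex = 0.65            # up to 65% of public 5G network CAPEX goes to RAN
ran_utilization_range = (0.25, 0.30) # only 25-30% of the RAN is utilized

for util in ran_utilization_range:
    wasted = ran_share_of_capex * (1 - util)   # RAN share x underutilization
    print(f"RAN utilization {util:.0%} -> wasted CAPEX ≈ {wasted:.1%} of total")

# Prints roughly 45.5% and 48.8%, i.e. the 46-49% range quoted above.
```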

We are talking $10Bs… if not more; this is a huge waste. Clearly, there are a lot of sustainability issues to consider as well. 

What if this 46-49% CAPEX could be reduced or eliminated? Chris describes a whole array of AI/ML applications that could be run on the same hardware resources during times of underutilization. In my view, if sustainability was your primary concern, you could simply turn off servers as opposed to using them for something else.

For the above schemes to work, the Service Management and Orchestration component (or SMO + workload/service orchestrator) needs to orchestrate and manage the infrastructure (servers, networks, GPU/DPU), transport, RU, DU, CU, NearRT RIC, and edge computing applications. Given that these components are unlikely to be from one single vendor, the SMO  has to be vendor neutral.

I hope you are convinced to move to group #3. You could save your company billions of dollars! And help reduce your carbon footprint as well.

Let us know if you’d like a free consultation about your SMO strategy, or try out the AMCOP O-RAN SMO at no cost in the Azure Marketplace.

Sandeep Sharma

Nephio Technical Overview Video

By Sandeep Sharma, Software Engineer, Aarna.ml

Since launching in the spring, the Nephio Project has witnessed tremendous growth in members, participants, and industry watchers. It’s been my pleasure to join the Nephio TSC and begin leading and contributing to this exciting new project. Nephio’s goal is to deliver carrier-grade, simple, open, Kubernetes-based cloud native intent automation and common automation templates that materially simplify the deployment and management of multi-vendor cloud infrastructure and network functions across large scale edge deployments.

Nephio Demo Video

The TSC is made up of two SIGs (Special Interest Groups): the Automation SIG, exploring infrastructure and microservices deployment, and the Networking Architecture SIG, exploring how networking challenges can be addressed by Nephio (currently working to automate Free5GC with Nephio).

The Automation SIG recently developed a controller for a workload application (DNS orchestration), and we are now working to extend it to include AWS infrastructure automation. In this new Nephio Technical Overview and Demo video, I demonstrate the infrastructure and workload automation capabilities of the Nephio platform by executing commands and showing the results on an AWS console. By specifying simple intents through the Nephio controller, users can automate both infrastructure and workload deployment.
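To give a concrete feel for what "specifying a simple intent" can look like programmatically, here is a minimal sketch using the Kubernetes Python client; the API group, kind, and spec fields are hypothetical stand-ins for illustration, not actual Nephio CRDs.

```python
# Hypothetical example: submit an "intent" as a Kubernetes custom resource.
# The group/version/kind and spec fields are illustrative placeholders,
# not real Nephio API definitions.
from kubernetes import client, config

config.load_kube_config()               # use the current kubeconfig context
api = client.CustomObjectsApi()

intent = {
    "apiVersion": "example.nephio.io/v1alpha1",  # hypothetical API group/version
    "kind": "WorkloadIntent",                    # hypothetical kind
    "metadata": {"name": "dns-workload", "namespace": "default"},
    "spec": {
        "workload": "dns",                       # which workload to deploy
        "clusters": ["edge-cluster-1"],          # where it should run
        "replicas": 2,
    },
}

api.create_namespaced_custom_object(
    group="example.nephio.io",
    version="v1alpha1",
    namespace="default",
    plural="workloadintents",
    body=intent,
)
```

In a real deployment the platform controllers, not the user, would take this declared intent forward; the point here is only that the intent is expressed as data, not as a script.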

Aarna.ml is actively contributing to Nephio infrastructure automation as well as network service orchestration, and plans to consume this work in our upcoming offerings. I encourage you to learn more about Nephio below, and don’t hesitate to contact us with any questions.

Video: Technical Overview & Demo

Executive Guide: Project Nephio

Blog: What is the Nephio Project? 

Video: What is the Nephio Project? 

Amar Kapadia

Key Takeaways from the First Nephio Developer Summit

Nephio is a new open source project seeded by Google and hosted at the Linux Foundation that is getting substantial attention in the industry. I attended the first Nephio Developer Summit last week in Sunnyvale, June 22-23 and wanted to share my key takeaways. As a member of the Nephio project with Sandeep Sharma from our team holding a Technical Steering Committee (TSC) seat, it is no surprise that we at Aarna are big fans of the project. Here are my observations of Nephio along with the pros and cons as I see them.

Scope

The stated goal of Nephio is to “simplify the deployment and management of multi-vendor cloud infrastructure and network functions across large scale edge deployments.” This is a clear and self-explanatory definition. At the meeting, Google stressed that Domain Orchestration as opposed to Service Orchestration is the focus of the project. Of course, the lines blur. Is a 5G service consisting of UPF+AMF+SMF a domain or a service? I think from Nephio’s point-of-view, this would be considered a domain. In other words, Nephio can deploy and manage a 5G service with a variety of NFs. So, that leaves very little (if anything) for the “Service Orchestration” layer to do.

What I found fascinating about Nephio is that it considers cloud infrastructure within its scope as well. Other projects, such as the Linux Foundation Networking ONAP project, have only worked on the service/NFVO/VNFM layers. I think considering both infra+NFs together is a huge plus for the 5G + MEC (multi-access edge computing) era. We at Aarna are seeing evidence of this trend from groups such as the O-RAN Alliance, where FOCOM (Federated O-Cloud Orchestration and Management), NFO (Network Function Orchestration), and NF (Network Function) Configuration Management, Performance Management, and Fault Management are all within the scope of the O-RAN Service Management and Orchestration (SMO) entity.

Nephio Technical Overview

Very simply put, Nephio uses Kubernetes (K8s) automation for cloud infrastructure and NFs. I had not appreciated this point, but Kubernetes is general purpose. It just happens to be used for container orchestration first, but it is not limited to that use case. With that understanding, we can see that Nephio is applying Kubernetes to a new use case.

Needless to say, Kubernetes comes with tremendous benefits. It is mature. It is declarative and intent driven (an intent driven system monitors the end state and continuously reconciles it with the intended state). Kubernetes can be expanded through mechanisms such as CRDs (Custom Resource Definitions) and Operators. Custom Resources are extensions of the Kubernetes API that can declaratively express user intent for a particular domain. Operators or Custom Controllers (apologies if they are not exact synonyms, I am using them as such) listen to the APIs and perform actions to fulfill the declarative intent. Ultimately declarative intent has to be converted to imperative. That is the job of the Custom Controller.
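To make that declarative-to-imperative conversion concrete, here is a minimal, hypothetical reconcile loop in Python using the Kubernetes client. The resource names are placeholders, and real controllers are typically written in Go with controller-runtime and use watches rather than polling; this is only a sketch of the idea.

```python
# Minimal reconcile-loop sketch: read a (hypothetical) custom resource and
# converge the observed state toward the declared intent.
import time
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
custom = client.CustomObjectsApi()
apps = client.AppsV1Api()

GROUP, VERSION, PLURAL = "example.nephio.io", "v1alpha1", "workloadintents"

def reconcile(intent):
    """Compare desired vs. observed state and act (imperatively) on the difference."""
    name = intent["metadata"]["name"]
    desired_replicas = intent["spec"].get("replicas", 1)
    try:
        deploy = apps.read_namespaced_deployment(name, "default")
        if deploy.spec.replicas != desired_replicas:
            deploy.spec.replicas = desired_replicas
            apps.patch_namespaced_deployment(name, "default", deploy)
    except ApiException as e:
        if e.status == 404:
            # Deployment missing: a real controller would create it here.
            print(f"{name}: deployment not found, would create it")
        else:
            raise

while True:  # naive polling; real controllers use watches and work queues
    intents = custom.list_namespaced_custom_object(GROUP, VERSION, "default", PLURAL)
    for item in intents.get("items", []):
        reconcile(item)
    time.sleep(10)
```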


So is that it? Then why do we need Nephio? Clearly there’s more…


Distributed State

Nephio creates the concept of a centralized Nephio K8s cluster with platform controllers which reconcile the high level user intents expressed in KRM files. From that standpoint, the Nephio cluster runs the user intent through a series of Custom Controllers to produce the state that can be consumed by the edge cluster(s).

The state is transmitted to the edge cluster via a “pull” mechanism, implemented with an open source project called ConfigSync. However, ConfigSync may be replaced by alternatives such as ArgoCD or Flux v2. A pull mechanism is significantly more scalable than a push approach. It also moves the burden of maintaining the state to the edge cluster as opposed to the Nephio cluster, which again is much more scalable.

The edge clusters in turn use the input provided to them by the Nephio cluster for their own Operators/K8s cluster configuration, which may include edge cluster infrastructure and NF automation.

GitOps

That’s not all. Nephio also bakes the concept of GitOps into the project. The user provides KRM files in a kpt package that is checked into a Git repo. kpt uses the principle of configuration as data (APIs) rather than configuration as code (templates or Domain Specific Languages). The Custom Controllers on the Nephio cluster successively refine the kpt package in the Git repo. Finally, the edge cluster pulls the state from the Git repo and applies it to the local K8s cluster. This architecture is both pragmatic and clever. It’s like infusing fluoride into water: the user gets the benefit of GitOps without explicitly knowing or worrying about it.
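As a conceptual illustration of the pull model only (this is not how ConfigSync, ArgoCD, or Flux are actually implemented, and the repo URL and paths are placeholders), an edge-side agent essentially does the following:

```python
# Conceptual GitOps "pull" sketch: an edge-side agent periodically pulls the
# rendered configuration from a Git repo and applies it to the local cluster.
import subprocess
import time

REPO_URL = "https://example.com/edge-config.git"   # placeholder repo
CLONE_DIR = "/var/lib/edge-sync/repo"              # placeholder local path

def sync_once():
    # Clone on first run, fast-forward pull afterwards.
    if subprocess.run(["git", "-C", CLONE_DIR, "rev-parse"],
                      capture_output=True).returncode != 0:
        subprocess.run(["git", "clone", REPO_URL, CLONE_DIR], check=True)
    else:
        subprocess.run(["git", "-C", CLONE_DIR, "pull", "--ff-only"], check=True)
    # Apply whatever the Nephio cluster rendered for this edge.
    subprocess.run(["kubectl", "apply", "-f", CLONE_DIR], check=True)

while True:
    sync_once()
    time.sleep(60)   # the edge owns its own sync cadence, which is what scales
```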

Pros

Nephio has a number of key benefits.

  • Simplicity: Like Kubernetes, I think Nephio will disrupt open source networking vis-à-vis cloud infrastructure and greatly simplify network service delivery.
  • Google backing: Google is not only behind the open source project, they also seem to be committed to Nephio based cloud service(s). This is ideal backing for an ambitious open source project.
  • Common across Infra, Platform, Workloads: The same descriptors and project can be applied for setting up the infrastructure (on-prem or cloud), the CaaS platform (K8s with different plugins and software components such as Multus, SR-IOV, DPU, Istio, Prometheus etc.), and workloads (NFs and MEC applications).
  • GitOps Built-in: Users don’t have to bolt on a DevOps framework on top of Nephio. It’s inbuilt. I love this feature.
  • Distributed: Nephio is inherently built for a world with a large number of edge clouds. This again distinguishes it from prior projects where a centralized entity can struggle to scale.
  • Data-model first: By defining CRDs first, the data model is essentially agreed upon even before writing the first line of code. This is the right way to do things. Current projects either approach the data model in parallel with writing the code, or treat it as an afterthought.
  • Community excitement: If the attendance at the event is any indication, the community is truly energized by Nephio. It also includes several active end users. This is a positive sign.

Cons

In my opinion, Nephio comes with some architectural assumptions that might slow down its adoption.

  • Developer effort: Nephio is just a framework. Without CRDs and Custom Controllers, Nephio doesn’t actually do anything. This means that the developer burden, as compared to prior solutions or open source projects, is definitely higher. In addition, the ops personnel at telcos will need to be comfortable with KRM files and kpt packages, which requires sophistication. Of course, there could be a GUI to front-end and simplify this mechanism.
  • Moving the burden to NF vendors: Philosophically, Nephio moves the control to NF vendors (aka the sVNFM model). In the past, systems such as ONAP SO+CDS+SDN-C, had tried to wrest control away from NF vendors and seek common approaches via a gVNFM. I don’t think the Nephio approach is either good or bad. After all, the NF vendor is the expert on how to manipulate their NF. Why not give the control to them? But this does mean waiting for NF vendors to create Operators.
  • KRM vs. Helm: With one exception, every NF vendor I have talked with creates CNFs (cloud native network functions) via Helm Charts. It seems Nephio doesn’t hold Helm Charts in a positive light at this time, since they mix declarative with imperative approaches. However, this position might slow Nephio adoption.

Conclusion

I am a Nephio believer. Having seen prior approaches, I believe that software simplicity is the number one factor that determines success, and Nephio fully embodies simplicity. I think Nephio will have a big impact on 5G in general and on O-RAN and MEC specifically (Nephio has the O2 interface as one of its stated use cases). We at Aarna are on board. We will announce our Nephio strategy later in Q3’22 and will publish blogs and videos on the Nephio architecture. Want to learn more? Check out Aarna's Nephio Executive Brief. Feel free to reach out to us if you have any Nephio needs or questions.


Aarna

Join Aarna.ml at the LF Networking DTF Next Week In Porto!

Updated: Jul 11

The Linux Foundation Networking Developer & Testing Forum is being held June 13-16, 2022, in Porto, Portugal. At this event, the various LFN project technical communities will present their project architecture, direction, and integration points, and will explore future possibilities for the open source networking stack. This is the primary technical event for the LFN project communities, where community members converge via sessions, workshops, and tutorials. You can register for the event here and explore the schedule here.

Aarna is excited to be participating in 10 sessions! Please join us live in Porto, online via the Zoom Bridge, or post-event in the event recordings. Send any questions to info@aarna.ml

1. Plenary: The LFX Dashboard: Tool Suite and Community Review

Mon Jun 13 2022

Speakers - Henry Quaye, Linux Foundation; Brandon Wick, Aarna.ml

Description - The LFX tool suite has been designed to support the various project communities of the Linux Foundation. Through this demo, learn how to set up your individual dashboard and how to view, parse, and analyze your project community metrics. A brief overview of the tool will be given, with emphasis on how to update and leverage LFX. Explore the LFX Tool

Recorded Session.

2. Plenary: Marketing for LFN Projects

Mon Jun 13 2022

Speakers - Heather Kirksey, Linux Foundation; Bob Monkman, Intel; Brandon Wick, Aarna.ml.

Description - A brief overview and tutorial on how to market LFN Projects:

  • Internal Marketing – how do we convince our bosses
  • Marketing for projects, including operational aspects – easy to use, easy to discover
  • How to make projects more friendly to developers – what needs to be in place (ties to tooling)
  • How to make projects more consumable to end users – what needs to be in place (ties in to documentation)
  • How to get the word out

3. ONAP: An O-RAN SMO Use Case with Netconf Notifications

Tue Jun 14 2022

Speakers - Sriram Rupanagunta, Aarna.ml; Bhanu Chandra, Aarna.ml

Description -

Part 1: Demonstration of O-RAN SMO use case built with ONAP.

Part 2: Session covering extending Netconf notification support for ONAP SDN-C/SDN-R.

Part 1: Topic Overview:

  1. Aarna.ml built an O-RAN SMO using open source components from ONAP projects SDNR, DCAE, etc.
  2. CapGemini offers O-RAN compliant CU/DU that follows the O-RAN WG1 O1 spec and supports various features like Provisioning Management, Fault Management, File management, and more.
  3. Aarna and CapGemini are working together for a private 5G O-RAN deployment.
  4. Aarna and CapGemini are doing interoperability testing between the SMO and CU/DU.

We will show the following Demo:

  1. Bring Up the CU/DU and SMO
  2. Connect the CU/DU to the SMO (manually or via plug-and-play)
  3. Configuration Management (config push from SMO to CU/DU)
  4. Show Fault Management

Part 2: Topic Overview:

  1. Support for netconf notifications is limited in the ONAP SDN-C/SDN-R.
  2. This presentation shows how to extend the support based on your need.
  3. Code walkthrough of adding a new netconf notification and the corresponding netconf-notification-to-VES conversion

Why do we need this? As per the O-RAN specs, the SMO has to handle various netconf notifications on the O1 interface, such as fileready, software activate, and inprogress.
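As a purely illustrative sketch of the conversion step (the collector URL, credentials, and field values below are placeholders; consult the ONAP VES event listener spec for the real schema), forwarding a received netconf notification as a VES-style event could look like this:

```python
# Illustrative sketch: wrap a received netconf notification into a VES-style
# event and POST it to a VES collector. URL, credentials, and field values are
# placeholders, not the exact ONAP VES schema used in the demo.
import time
import requests

VES_COLLECTOR = "https://ves-collector.example.com:8443/eventListener/v7"  # placeholder

def forward_notification(notification_xml: str, event_name: str) -> None:
    now_us = int(time.time() * 1e6)
    event = {
        "event": {
            "commonEventHeader": {
                "domain": "notification",
                "eventName": event_name,           # e.g. "fileReady"
                "eventId": f"{event_name}-{now_us}",
                "sourceName": "cu-du-simulator",   # placeholder source
                "reportingEntityName": "sdnr",
                "startEpochMicrosec": now_us,
                "lastEpochMicrosec": now_us,
                "priority": "Normal",
                "sequence": 0,
                "version": "4.1",
                "vesEventListenerVersion": "7.2",
            },
            "notificationFields": {
                "changeType": event_name,
                "changeIdentifier": "O1",
                "notificationFieldsVersion": "2.0",
                # Raw netconf payload carried only for illustration/debugging.
                "additionalFields": {"netconfNotification": notification_xml},
            },
        }
    }
    resp = requests.post(VES_COLLECTOR, json=event,
                         auth=("user", "password"), verify=False, timeout=10)
    resp.raise_for_status()
```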

We will show the following Demo:

  1. Generate a netconf notification on the simulator
  2. Receive the netconf notification on SDNR (shown via the karaf log)
  3. Show the VES collector logs with the netconf notification converted to a VES event
  4. Show the event in the DB.

Recorded Sessions -

Topic 1 and Topic 2

4. EMCO: BackUp and Restore

Tue Jun 14 2022

Speaker - Sriram Rupanagunta, Aarna.ml

Description - This presentation will showcase how an EMCO deployment can be configured to handle disaster recovery. We use the Velero tool to take a backup of an active EMCO deployment and store it in the cloud. We simulate disruption by destroying the cluster namespace. Next, we use the recovery function in the Velero tool to restore the entire deployment to its original state.
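For reference, a minimal sketch of this backup/restore flow using standard Velero CLI commands (the namespace and backup name are assumptions, and the backup storage location configuration is not shown):

```python
# Minimal backup/restore sketch driving the Velero CLI via subprocess.
# Namespace and names are assumptions for illustration only; the demo's actual
# Velero configuration (backup location, credentials) is not shown here.
import subprocess

EMCO_NAMESPACE = "emco"        # assumed namespace holding the EMCO deployment
BACKUP_NAME = "emco-backup-1"

def backup():
    subprocess.run(
        ["velero", "backup", "create", BACKUP_NAME,
         "--include-namespaces", EMCO_NAMESPACE, "--wait"],
        check=True,
    )

def simulate_disaster():
    # Destroy the namespace to simulate the disruption described above.
    subprocess.run(["kubectl", "delete", "namespace", EMCO_NAMESPACE], check=True)

def restore():
    subprocess.run(
        ["velero", "restore", "create", "--from-backup", BACKUP_NAME, "--wait"],
        check=True,
    )

if __name__ == "__main__":
    backup()
    simulate_disaster()
    restore()
```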

Recorded Session.

5. EMCO: Deploying on a ROSA Cluster

Tue Jun 14 2022

Speaker - Sriram Rupanagunta, Aarna.ml

Description -

In this presentation, we will cover the following.

  1. A brief description of the ROSA platform and how it simplifies deployment of complex infrastructure on a Kubernetes cluster.
  2. We will show how EMCO deployed on ROSA is able to orchestrate the Free5GC core on a target cluster.
  3. A description of how Free5GC deployed on a K8s cluster works.

Recorded Session.

6. EMCO: Open Policy Agent Service Assurance at the Telecom Edge

Tue Jun 14 2022

Speaker - Sriram Rupanagunta, Aarna.ml

Description - Policy-driven closed-loop automation is vital for enabling AI and machine learning in 5G and edge systems. Open Policy Agent (OPA) is a simple, cloud-native, and domain-agnostic policy engine with a simple policy language. OPA has a small memory footprint and better performance compared to traditional policy engines. OPA's policy language, Rego, is an intuitive and natural declarative policy language. In this talk, we will look at the use of OPA in a service assurance use case, specifically at the telecom edge.

We propose a framework for building a general-purpose policy evaluation framework for EMCO, which can monitor application-specific activities on the edge clusters and trigger actions based on policy evaluation. We are working on integrating EMCO’s temporal workflow manager as the Policy Enforcement Point (PEP). The Temporal workflow engine allows users to define, deploy and track custom workflows. EMCO’s Workflow Manager provides interfaces for defining workflow intents and managing the life cycle of workflows. The Workflow Manager with the proposed OPA-based policy controller gives a mechanism to define policy-based service assurance applications in EMCO. With standardized workflows, composite applications can be assigned with ‘policy intents’ for the policy-driven life cycle management.
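As a small illustration of what a policy evaluation call might look like (the OPA endpoint, policy path, input fields, and trigger below are placeholders, not EMCO's actual policy controller integration):

```python
# Illustrative policy evaluation against OPA's REST data API. The policy
# package path and input fields are placeholders for illustration only.
import requests

OPA_URL = "http://localhost:8181/v1/data/serviceassurance/scaleout"  # placeholder path

def evaluate_policy(cpu_utilization: float, nf_name: str) -> bool:
    payload = {"input": {"metric": "cpu", "value": cpu_utilization, "nf": nf_name}}
    resp = requests.post(OPA_URL, json=payload, timeout=5)
    resp.raise_for_status()
    # OPA returns {"result": ...} for the queried document.
    return bool(resp.json().get("result", False))

if evaluate_policy(cpu_utilization=0.92, nf_name="upf"):
    print("Policy matched: trigger the scale-out workflow via the Workflow Manager")
```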

Recorded Session.

7. ONAP: PCEI Edge to Cloud connectivity and application deployment

Wed Jun 15 2022

Speakers - Sriram Rupanagunta, Aarna.ml; Vivekanandan Muthukrishnan, Aarna.ml; Oleg Berzin, Equinix

Description -

Complex Multi-domain orchestration across Edge and Public Clouds, using ONAP CDS and Terraform plans

The purpose of the Public Cloud Edge Interface (PCEI) Blueprint is to develop a set of open APIs, orchestration functionalities, and edge capabilities for enabling Multi-Domain Interworking across the Operator Network Edge, the Public Cloud Core and Edge, and the 3rd-Party Edge, as well as the underlying infrastructure such as Data Centers, Compute Hardware, and Networks.

In this presentation/demo, we will show how the ONAP CDS module can be used (along with Terraform plans) to provision infrastructure on bare metal (Equinix Metal cloud), install K8s on the bare metal servers, and configure the Azure cloud (ExpressRoute, peering, VNet, VM, IoT Hub). We will then interconnect the edge cloud with the public cloud (Equinix Fabric) and deploy the edge application (PCE), which includes dynamic K8s cluster registration to EMCO, dynamic onboarding of the application Helm charts to EMCO, and finally, the design and instantiation of a composite cloud native app deployment and its end-to-end operation.
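To give a sense of the Terraform side of this flow, here is a minimal sketch of driving a Terraform plan non-interactively, the way an orchestrator-side executor might; the plan directory is a placeholder, and the actual CDS blueprint/executor wiring is not shown:

```python
# Conceptual sketch: run a Terraform plan directory non-interactively, as an
# orchestration executor might do on behalf of ONAP CDS. The directory path is
# a placeholder; real CDS blueprints wire this up through executor scripts.
import subprocess

PLAN_DIR = "/opt/plans/equinix-metal-k8s"   # placeholder Terraform plan directory

def tf(*args):
    subprocess.run(["terraform", f"-chdir={PLAN_DIR}", *args], check=True)

tf("init", "-input=false")
tf("apply", "-auto-approve", "-input=false")
# Outputs (e.g. node IPs) could then be read back with `terraform output -json`
# and fed into subsequent steps such as K8s installation and EMCO registration.
```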

Recorded Session.

8. EMCO: Enhancing the EMCO GUI with RBAC

Wed Jun 15 2022

Speaker - Sriram Rupanagunta, Aarna.ml

Description - We will show the implementation of Role Based Access Control (RBAC) in the EMCO GUI. Currently it supports ADMIN and TENANT roles. Once the user logs in, different views are presented based on the role. The Admin is like a super user with all privileges; the Admin can add users (with the tenant role) and assign them to a project.

When a tenant user logs in, the user is redirected to the project view for the project the tenant belongs to. Currently, one tenant can be the owner of only one user. RBAC in the UI is translated to EMCO with the help of logical clouds: when a tenant or admin creates a logical cloud, we pass the user’s email ID as the user for that particular logical cloud.

Recorded Session.

9. EMCO: Orchestration and Demo of LCM of AnyLog using EMCO

Wed Jun 15 2022

Speakers - Sriram Rupanagunta, Aarna.ml; Raghuram Gopalshetty, Aarna.ml; Ori Shadmon, AnyLog

Description - As of today, there are no data services at the edge which are similar to what the cloud is able to offer. The integration of the AnyLog Network with EMCO delivers a "cloud-like" solution at the edge and provides a new, unique option for the industry. Using AnyLog, developers are able to connect cloud and edge applications with the data at the edge without intermediaries (the public clouds). It is an opportunity to replace the cloud providers in servicing the data, provide real-time insight to edge data, and lower the cost.

In this demo, we’ll show that using EMCO, developers can deploy and manage AnyLog instances at the edge from a single point and using the AnyLog Network, manage and view the distributed edge data from a single point.

EMCO Demo:

  1. Onboard all 4 target clusters to EMCO.
  2. Onboard AnyLog Master, Operator and Query helm charts to EMCO.
  3. Orchestrate AnyLog Master to Cluster-1.
  4. Orchestrate AnyLog Operator to Cluster-2 and Cluster-3 (NEW_CLUSTER: "demo-cluster2").
  5. Orchestrate AnyLog Query to Cluster-4.
  6. Orchestrate Grafana to Cluster-4.

AnyLog Demo:

  1. AnyLog - Explain/Show the setup with multiple (2) nodes hosting data
  2. AnyLog - Explain/Show that data processing at the edge is automated (from schema creation to HA).
  3. Explain the current approach - to have a unified view, data moved to the cloud
  4. AnyLog - Explain/Show a unified view of all the edge data (from the 2 nodes).
  5. Aarna - Explain/Show deploying a new storage node
  6. AnyLog - Explain/Show a unified view of all the edge data (now from the 3 edge nodes).

Recorded Session.

10. EMCO: Orchestration and Service Assurance of 5GC Functions with EMCO

Wed Jun 15 2022

Speakers - Sriram Rupanagunta, Aarna.ml; Ulysses Lu, Quanta; Sandeep Sharma, Aarna.ml

Description - We show the orchestration of QCT's 5GC functions on multiple K8s clusters using EMCO. We configure Prometheus to scrape metrics (CPU utilization) from the target clusters and create a closed loop using ONAP CDS as the actor that can scale out one of the 5GC network functions. We use a sample NWDAF and an application function (AF) to show this functionality.
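For illustration, a minimal sketch of the kind of metric check that can drive such a closed loop (the Prometheus URL, query, and threshold are placeholders, not the demo's actual NWDAF/CDS configuration):

```python
# Illustrative closed-loop trigger: query Prometheus for CPU utilization and
# decide whether to request a scale-out. URL, query, and threshold are
# placeholders for illustration only.
import requests

PROMETHEUS = "http://prometheus.example.com:9090"        # placeholder
QUERY = 'avg(rate(container_cpu_usage_seconds_total{namespace="5gc"}[5m]))'
THRESHOLD = 0.8

resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": QUERY}, timeout=5)
resp.raise_for_status()
results = resp.json()["data"]["result"]

if results and float(results[0]["value"][1]) > THRESHOLD:
    # In the demo, the closed loop takes this decision and ONAP CDS acts as the
    # actor that scales out the selected 5GC network function.
    print("CPU above threshold: request scale-out of the 5GC function")
```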

Recorded Session.