
Aarna


With the help of several ONAP community members, such as Aniello and Krishna, we have been able to successfully create a 5G slice on the ONAP Guilin+ release. You can follow the steps below or join our End-to-End 5G Network Slicing technical meetup on Monday, March 29 at 7 AM PDT.

Below are the wiki links that we followed for creating the setup.

https://wiki.onap.org/display/DW/Template+Design+for+Option2

https://wiki.onap.org/display/DW/Setup+related+issues

https://wiki.onap.org/display/DW/Install+Minimum+Scope+for+Option+1

https://wiki.onap.org/display/DW/External+Core+NSSMF+Simulator+Use+Guide

https://wiki.onap.org/display/DW/External+RAN+NSSMF

https://wiki.onap.org/pages/viewpage.action?pageId=92996521

https://wiki.onap.org/display/DW/Manual+Configuration+for+5G+Network+Slicing

In addition, there are a few things to note.

- The above documentation does not work on the Guilin branch, and it won't work as-is on master either. Besides the image changes described in the documentation, we had to make the following changes:

    a. We used the master branch as of March 19, 2021

    b. We downgraded the following images:

           Image:         nexus3.onap.org:10001/onap/optf-osdf:3.0.2

           Image:         nexus3.onap.org:10001/onap/optf-has:2.1.2
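If the pods are already running, one way to apply these downgrades is to patch the images directly. This is a minimal sketch assuming an OOM-style install in the onap namespace; the deployment and container names are hypothetical and should be verified against your cluster (for example with kubectl -n onap get deployments):

# Hypothetical deployment/container names; check your cluster before running
kubectl -n onap set image deployment/onap-oof-osdf osdf=nexus3.onap.org:10001/onap/optf-osdf:3.0.2
kubectl -n onap set image deployment/onap-oof-has-api has-api=nexus3.onap.org:10001/onap/optf-has:2.1.2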

- The above documentation mentions this step but is not very clear: we had to add all the ARs (allotted resources) to AAI, otherwise distribution will fail. The following commands worked for us.

curl --user AAI:AAI -X PUT -H "X-FromAppId:AAI" -H "X-TransactionId:get_aai_subscr" -H "Accept:application/json" -H "Content-Type:application/json" -k -d '{ "model-invariant-id": "bb9c30d4-552b-4231-a172-c24967e8ee24", "model-type": "resource", "model-vers": { "model-ver": [ { "model-version-id": "f10a33da-114e-41b6-89b2-851c31a1e0dc", "model-name": "Slice_AR", "model-version": "1.0" } ] } }' "https://10.43.189.167:8443/aai/v21/service-design-and-creation/models/model/bb9c30d4-552b-4231-a172-c24967e8ee24" | python -m json.tool

curl --user AAI:AAI -X PUT -H "X-FromAppId:AAI" -H "X-TransactionId:get_aai_subscr" -H "Accept:application/json" -H "Content-Type:application/json" -k -d '{ "model-invariant-id": "43515a2b-5aa7-4544-9c40-f2ce693b99bd", "model-type": "service", "model-vers": { "model-ver": [ { "model-version-id": "81d13710-811f-47f7-9871-affc666ba11a", "model-name": "EmbbNst_O2", "model-version": "1.0" } ] } }' "https://10.43.189.167:8443/aai/v21/service-design-and-creation/models/model/43515a2b-5aa7-4544-9c40-f2ce693b99bd" | python -m json.tool

curl --user AAI:AAI -X PUT -H "X-FromAppId:AAI" -H "X-TransactionId:get_aai_subscr" -H "Accept:application/json" -H "Content-Type:application/json" -k -d '{ "model-invariant-id": "7a1ffb3c-7bd0-415c-b3d8-cf6170de805e", "model-type": "resource", "model-vers": { "model-ver": [ { "model-version-id": "e551c7cd-9270-4d15-b1f8-9634e0ab0ffe", "model-name": "EmbbAn_NF_AR", "model-version": "1.0" } ] } }' "https://10.43.189.167:8443/aai/v21/service-design-and-creation/models/model/7a1ffb3c-7bd0-415c-b3d8-cf6170de805e" | python -m json.tool

curl --user AAI:AAI -X PUT -H "X-FromAppId:AAI" -H "X-TransactionId:get_aai_subscr" -H "Accept:application/json" -H "Content-Type:application/json" -k -d '{ "model-invariant-id": "29620dfe-cffc-4959-93cc-314279d17f96", "model-type": "resource", "model-vers": { "model-ver": [ { "model-version-id": "529f5a61-7a8c-4b8b-b591-0bba72f9fd2e", "model-name": "Tn_BH_AR", "model-version": "1.0" } ] } }' "https://10.43.189.167:8443/aai/v21/service-design-and-creation/models/model/29620dfe-cffc-4959-93cc-314279d17f96" | python -m json.tool

- We needed to follow a specific order when pushing the policies; here is how we did it.

python3 policy_utils.py create_policy_types policy_types

python3 policy_utils.py generate_nsi_policies EmbbNst_O2

python3 policy_utils.py create_and_push_policies gen_nsi_policies

python3 policy_utils.py generate_nssi_policies EmbbAn_NF minimize latency

python3 policy_utils.py create_and_push_policies gen_nssi_policies

python3 policy_utils.py generate_nssi_policies Tn_ONAP_internal_BH minimize latency

python3 policy_utils.py create_and_push_policies gen_nssi_policies

python3 policy_utils.py generate_nssi_policies EmbbCn_External minimize latency

python3 policy_utils.py create_and_push_policies gen_nssi_policies

- For the core simulator, the vendor name should match the one passed in the SDC templates; here is what worked for us.

"esr-system-info": [

   {

     "esr-system-info-id": "nssmf-an-01",

     "type": "an",

     "vendor": "huawei",

     "user-name": "admin",

     "password": "123456",

     "system-type": "thirdparty-sdnc",

     "ssl-cacert": "test.ca",

     "ip-address": "192.168.122.198",

     "port": "8443",

     "resource-version": "1616417615150"

   }

 ]
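For completeness, here is a hedged sketch of how this block might be pushed into AAI through the ESR third-party SDNC endpoint. The endpoint path and payload nesting are assumptions based on the AAI v21 schema (resource-version is omitted, since AAI assigns it on create); verify both against your deployment.

# Assumed AAI ESR endpoint; the thirdparty-sdnc-id and payload structure are illustrative
curl --user AAI:AAI -X PUT -H "X-FromAppId:AAI" -H "X-TransactionId:put_esr" -H "Accept:application/json" -H "Content-Type:application/json" -k -d '{ "thirdparty-sdnc-id": "nssmf-an-01", "esr-system-info-list": { "esr-system-info": [ { "esr-system-info-id": "nssmf-an-01", "type": "an", "vendor": "huawei", "user-name": "admin", "password": "123456", "system-type": "thirdparty-sdnc", "ssl-cacert": "test.ca", "ip-address": "192.168.122.198", "port": "8443" } ] } }' "https://10.43.189.167:8443/aai/v21/external-system/esr-thirdparty-sdnc-list/esr-thirdparty-sdnc/nssmf-an-01" | python -m json.tool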

Please join the technical meetup mentioned above or contact us to get tips on how to replicate this setup in your lab.

Aarna

Fully Automated 5G Core Services Management and Orchestration: A Joint Amantya & Aarna Solution.

Unlike 4G, which mainly focused on the telco space, 5G has a broader focus across numerous industry vertical use cases. The figure below shows the primary capabilities of 5G and the use cases around reliability, capacity, latency, and connectivity.

Figure 1: Benefits of 5G Core Orchestration (courtesy Amantya Technologies)

Another change with 5G is that we are moving to a completely software-driven infrastructure. This means that there is now a need for more nuanced control of the network infrastructure centered around automation. This is where intent-based orchestration of 5G Core with closed-loop automation comes into play.

Amantya and Aarna Joint Solution:

In our technical meetup last week, Amantya Technologies and Aarna.ml presented a joint 5G Core solution. The solution has two key components:

  • Amantya’s Cloud-Native 5G Core
  • The Aarna.ml Multi Cluster Orchestration Platform (AMCOP)

In this demo, AMCOP is installed on one Kubernetes cloud, while the Amantya 5GC runs on a separate Kubernetes cloud in a private environment. AMCOP provides an easy-to-use, intuitive end-to-end 5G service creation and management environment that abstracts the inherent complexity of the underlying network behind a layer of GUI-based workflows and automation interfaces.

Figure 2: Aarna's and Amantya's Joint 5G Core Orchestration Solution

Amantya’s solution is an integrated 5G and 4G core. Its implementation is completely cloud-native, which makes deploying this core on public and private clouds very simple.

Key Features of Amantya’s 5G Integrated Core:

• Multi-tech Integrated Core
• Standard Deployment Models – EN-DC + SA
• Release 15 Compliant
• Cloud-Native – COTS HW
• High-Performance User Plane
• Full Slicing Support
• Node Discovery
• IPv4 / IPv6 Support

A high-level block diagram of Amantya's Integrated 5G Core is given below:

Figure 3: A high-level block diagram of Amantya's Integrated 5G Core

Key Features of AMCOP 2.0:

1) Integrates OpenNESS-EMCO and ONAP-CDS with a unified run-time GUI

2) Early access Aarna Analytics Platform (AAP) functionality

3) Support for GKE, AKS, EKS, OpenShift, Anuket, and open-source K8s NFVI

4) Availability on the Azure Marketplace for easy deployment onto Azure

5) ONAP Guilin network slicing support

A high-level block diagram of AMCOP 2.0 is shown below:

Figure 4: A high-level block diagram of AMCOP 2.0

See a recording of the technical meetup that covered this topic in more depth along with a hands-on demo. If you would like to replicate any of this work in your environment, please contact us.

Aarna


Public cloud vendors will play a big role in the upcoming 5G + edge computing space. Customers will either use the public cloud directly for services that are not latency, bandwidth, or location sensitive, or use edge computing offerings from these same public cloud providers. The Aarna.ml Multi Cluster Orchestration Platform (AMCOP) can be used to orchestrate both cloud native network functions (CNF) and cloud native applications (CNA) on public clouds or edge cloud offerings from public cloud vendors.

As we know, there are different types of clouds:

• Public Clouds (such as Microsoft Azure AKS, GCP/GKE, AWS EKS)
• Edge Cloud Offerings from Public Cloud Vendors (such as Microsoft Azure Edge Zones, Google Anthos, AWS Outposts/Wavelength)
• Private Clouds (such as Kubernetes or OpenStack on Bare Metal Servers)
• Hybrid Clouds (such as IBM Hybrid Clouds)

Out of these, we’ll only focus on the first two categories.

What is CNF/CNA Orchestration?

CNFs are networking elements packaged as a set of cloud native microservices. CNAs are any arbitrary application (in our case edge computing applications) packaged as a set of cloud native microservices. These CNFs/CNAs need to have a “package” that describes how to orchestrate the application.
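In the Kubernetes world, this package is typically a Helm chart. As a minimal sketch (the chart name and parameters are hypothetical), scaffolding and packaging one looks like this:

# Sketch: scaffold and package a hypothetical CNF/CNA chart with Helm
helm create my-cnf          # generates Chart.yaml, values.yaml, and templates/
# edit values.yaml to expose the Day 0 parameters (image, replica count, network config)
helm package my-cnf         # produces my-cnf-0.1.0.tgz, the package to onboard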

Before we orchestrate these CNFs/CNAs to edge clouds, AMCOP first needs to register each of the Kubernetes clusters. Next, each CNF/CNA package needs to be “onboarded”. Once this is complete, a user can construct network services (in the case of CNFs) or composite applications (in the case of CNAs) and orchestrate them to the right Kubernetes cluster based on placement intent. These clusters could be private, public, or even hybrid, and can run in the same or different locations.
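Since AMCOP builds on EMCO, the cluster-registration step might look like the sketch below. The host, port, and exact paths are assumptions based on the upstream EMCO v2 REST API and may differ in AMCOP; the provider and cluster names are made up.

# Assumed EMCO-style v2 API; host, port, and all names are illustrative
curl -X POST "http://<amcop-host>:<port>/v2/cluster-providers" -d '{"metadata": {"name": "edge-provider"}}'
# register a target cluster by uploading its kubeconfig
curl -X POST "http://<amcop-host>:<port>/v2/cluster-providers/edge-provider/clusters" -F 'metadata={"metadata": {"name": "edge01"}}' -F file=@edge01-kubeconfig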

You can interact with the orchestrator through REST APIs, a CLI, or a web interface.

Figure 1: Block diagram of AMCOP

What Does a CNF Orchestration Setup Look Like?

Figure 2: High-Level Overview of a CNF Orchestration Setup

We have two independent Kubernetes (K8s) clusters: one cluster is used for deploying AMCOP, and on the right-hand side we have another set of clusters for CNFs (also called edge/target clusters). For simplicity, the diagram only shows one target cluster. Here are some common challenges faced on public clouds:

• CNFs require multiple network interfaces for operation; most public K8s clouds do not offer this, nor do they offer a way to install additional plugins such as Multus (see the sketch after this list)
• We have to make sure that the clusters are not publicly accessible
• The orchestration and edge clusters should be on the same subnet
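To make the Multus point concrete, below is a minimal NetworkAttachmentDefinition sketch of the kind of secondary interface a CNF typically needs. The interface name and addresses are made up, and this only works on clusters that allow installing the Multus CNI in the first place.

# Minimal Multus secondary-network sketch; eth1 and the CIDR are illustrative
kubectl apply -f - <<EOF
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: secondary-net
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth1",
    "ipam": { "type": "static", "addresses": [ { "address": "10.10.10.10/24" } ] }
  }'
EOF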

The figure below shows what interoperability testing has been done between AMCOP and public clouds as of this writing (February 2021).

Figure 3: Current state of compatibility of AMCOP with various public clouds

See a recording of the technical meetup that covered this topic in more depth along with a hands-on demo. If you would like to replicate any of this work in your environment, please contact us.

Aarna

Three Ways to Manage Edge Computing Applications With Ease using AMCOP.

5G and edge computing are expected to be large new market opportunities; ABI Research predicts a $1.5T market size by 2030. However, the advent of 5G and edge computing has led to an exponential stress on application management. Gone are the days when every network component was a piece of hardware with fixed functionality; in 5G and edge computing, everything is a piece of software. Looking at this problem quantitatively:

Stress on application management = Number of edge/core sites x Application instances x Application changes per unit of time.

In 5G and edge computing, there are 100,000s of edge sites, 10,000s of application instances (created by a combination of a large number of applications and network slicing, which causes multiple instances of an application to be created), and 10s of application changes per hour. So the stress on application management is a million times greater than anything we do today.
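Plugging the rough figures above into the formula shows the scale; the "today" baseline below is an assumption purely for illustration.

# Illustrative arithmetic only; the baseline figures are assumptions
echo $(( 100000 * 10000 * 10 ))   # 5G/edge: 10,000,000,000 management actions per unit of time
echo $(( 100 * 100 * 1 ))         # an assumed baseline today: 10,000
# ratio: 10^10 / 10^4 = 10^6, i.e., roughly a million times more stress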

Let’s shift gears a little bit and explore an analogy before continuing. The popular pets vs. cattle analogy was earlier applied to infrastructure, where each server was treated as a pet, as admins would upgrade and maintain servers individually before onboarding applications onto them. With cloud computing, a group of servers is now treated as a unit. However, applications are still treated as pets, and because of the huge stress on application management, the pets approach will not work. We need to adopt a cattle methodology to simplify application management. See our prior blog on the pets vs. cattle analogy. With this new approach, the impact on initial orchestration will be:

• Register K8s clouds with the orchestrator (manual/automatic)
• Onboard a Helm chart for each application
• Orchestrate onto 1 to N clouds with multiple instances with a click of a button

The cattle methodology also works best for ongoing Life Cycle Management (LCM), as it is not feasible to use millions of application management endpoints (GUI or API). The cattle methodology handles LCM by doing the following:

• For app-independent LCM actions, one should use a unified endpoint for all app instances
• For category-dependent LCM actions (e.g., O-RAN, 5G Core, SD-WAN, firewall), one should use a unified dashboard for that particular application that manages all instances from any vendor
• For app-dependent LCM actions (e.g., AR/VR, drone control), one should use management endpoints retrofitted to connect to multiple instances of that application

The cattle methodology is helpful for service assurance as well. With the pets methodology, you would have to log into many management endpoints to troubleshoot and raise tickets, whereas in the cattle methodology, application (and optionally infrastructure) telemetry is sent to a closed-loop automation system (big data or AI/ML) that takes corrective actions automatically.

With the recently announced 2.0 version of the Aarna.ml Multi Cluster Orchestration Platform (AMCOP), we are solving all three aspects of network service and application management:

• Initial Orchestration
• Ongoing Lifecycle Management (LCM)
• Service Assurance, or Real-Time Policy-Driven Closed Loop Automation

AMCOP 2.0 has three new capabilities:

1. It has a full integration of the Intel OpenNESS EMCO (our orchestration engine) with the ONAP CDS project for full Day 1 and Day 2 configuration and lifecycle management of cloud native network functions (CNF) and cloud native applications (CNA).
2. There is an early access version of the Aarna Analytics Platform based on Google’s CDAP project that can be used for real-time policy-driven closed-loop automation. The analytics platform will also be the foundation of additional technologies coming from us, such as the Non-Real-Time RIC (NONRTRIC) for O-RAN and the Network Data Analytics Function (NWDAF).
3. AMCOP 2.0 has full support for end-to-end 5G network slicing.

A high-level block diagram of AMCOP 2.0 is shown below:

AMCOP 2.0 is available for a free trial. Give it a shot. You can onboard a free 5GC and orchestrate it onto a Kubernetes cloud.

Also, don’t forget to join our “Cloud Native Application (CNA) Orchestration on Multiple Kubernetes Edge Clouds” meetup on Monday, February 22 at 7 AM PT. In this hands-on technical meetup, we will show you how to onboard and orchestrate edge computing applications on multiple K8s edge clouds.

Amar Kapadia

Pets Vs. Cattle Part II: Hint This Time it is the Applications

If you thought the Pets vs. Cattle saga was over in 2016, you'll be thrilled or disappointed, as the case may be, that I'm going to resuscitate the thread but in a different context.

Pets Vs. Cattle part I was all about the infrastructure. In the good old days, we used to give each server tender loving care (TLC) just like we would to a pet. Operating system provisioning and updates, BIOS updates, BMC firmware updates, peripheral (e.g. NIC) installation and configuration, RAID configuration, logical volume management, remote KVM access, IPMI management, server debug, and more were all performed manually on a per server basis. Applications would then be installed manually on a given server(s). During the 2009-2016 timeframe, the cloud architecture completely standardized and automated the entire server management task. Server management was now akin to managing cattle, hence the term. Applications were no longer installed on a particular server, instead they were deployed on the "cloud" and the cloud layer—public cloud, OpenStack, or Kubernetes (K8s)—would take care of placing the specific VMs or containers onto individual server nodes (amongst other things).

However, applications have continued to be treated as pets. Each application receives TLC. Even in a cattle infrastructure, aka a cloud framework, applications are installed onto an individual cloud using a declarative template such as Terraform, Helm charts, or OpenStack Heat, and configured using manual techniques or tools such as Ansible. Service assurance, or fixing problems, has revolved around humans looking at application dashboards (also called an Element Management System or EMS) and alerts, and closing tickets manually.

Let's do a thought experiment to see how well a pets approach works for application management in the edge computing context. Let's assume 1,000 edge locations and 200 K8s applications per edge location. An application could be a network function like a UPF, AMF, SMF, vFirewall, or SD-WAN; a cloud native application such as AR/VR, drone control, ad-insertion, 360 video, or cloud gaming; AI/ML at the edge such as video surveillance or radiology anomaly detection; or an IoT/PaaS framework such as EdgeXFoundry, Azure IoT Edge, AWS Greengrass, or XGVela. Furthermore, assume that the number of application instances goes up to 1,000 per edge site with 5G network slicing. This means there would be 1,000,000 application instances across all the edge sites in this example.

Here is the impact on application management:

Initial orchestration: The Ops team would have to edit 1,000,000 Helm charts (to change Day 0 parameters), log into 1,000 K8s masters, and run a few million CLI commands. Clearly, this is not possible.

Ongoing lifecycle management: Log into 1,000,000 dashboards and manage the associated application instance (since very few application management dashboards manage multiple instances), OR run 200 Ansible scripts 5,000 times each with different parameters, which means executing the scripts 1,000,000 times. This is not practical either.

Service assurance: Monitor 1,000,000 dashboards and fix issues based on tickets opened. This is also not feasible.

Keep in mind, actual edge environments could scale even more: there could be 100,000 edge sites and 1,000 applications, mushrooming to 10,000 application instances per edge site with 5G network slicing. If you think this scale is a pipe dream, I'd remind you of Thomas Watson's comment from 1943, where he said, "I think there is a world market for maybe five computers."

So what's the solution to this seemingly impossible problem? Join us for the "What's New in AMCOP 2.0" meetup on Monday, February 15, 2021 at 7 AM Pacific Time for the answer, or see the next installment of this blog series next week.

Bhanu Chandra


The O-RAN alliance is defining the various components and interfaces required to disaggregate 5G RAN. According to the O-RAN site, the "O-RAN ALLIANCE is transforming the Radio Access Networks industry towards open, intelligent, virtualized and fully interoperable RAN." One of the components defined by O-RAN is the Non-Real-Time Radio Intelligent Controller (NONRTRIC).

According to the O-RAN Software Community (O-RAN-SC), which implements parts of the O-RAN specification, the NONRTRIC performs the following:

The Non-Real-Time RIC (RAN Intelligent Controller) is an Orchestration and Automation function described by the O-RAN Alliance for non-real-time intelligent management of RAN (Radio Access Network) functions. The primary goal of the NONRTRIC is to support non-real-time radio resource management, higher layer procedure optimization, policy optimization in RAN, and providing guidance, parameters, policies and AI/ML models to support the operation of near-RealTime RIC functions in the RAN to achieve higher-level non-real-time objectives.

Earlier this week, I demonstrated the NONRTRIC development environment from the O-RAN-SC Bronze release. As the name suggests, the development environment allows users to develop different functionalities and interfaces for the NONRTRIC. Please check out the webinar recording here.
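If you want to explore it yourself, the NONRTRIC code is hosted on the O-RAN-SC Gerrit. A hedged sketch for pulling the Bronze-era code is below; the branch name is an assumption and may have changed between releases.

# Fetch the NONRTRIC repo; the "bronze" branch name is an assumption
git clone "https://gerrit.o-ran-sc.org/r/nonrtric" -b bronze
cd nonrtric
# follow the compose files documented in the repo to bring up the dev environment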

Ultimately, we will offer a fully productized, commercial-grade NONRTRIC as part of our Aarna.ml Multi Cluster Orchestration Platform (AMCOP) product.

Separately, ONF announced the first release of the SD-RAN v1.0 software platform for Open RAN and mentioned that we are a member of the project. Our goal is to work on the interop between our NONRTRIC and the ONF Near-Real-Time RIC.

If you are interested in replicating any of this in your lab, feel free to contact us.