Aarna.ml


Blog

Amar Kapadia

5G and multi-access edge computing (MEC) will require a number of brand-new skills. There simply aren't enough people to hire or outsource to, so CSPs will have to cultivate some of these new skills in-house. Stack disaggregation will put further pressure on this critical need. 2019 is the right time to start gaining these skills; if you wait too long, you run the risk of not having enough in-house expertise to implement 5G and MEC!

Examples of the new skills needed are as follows.

Non-functional skills

  • Agile programming  
  • DevOps and CI/CD  
  • API/CLI instead of GUI  
  • Modeling languages — TOSCA, Heat, YAML etc.  
  • Plan/build ops. mentality instead of break/fix  
  • Functional testing of the NFV/SDN stack  
  • Performance testing of the NFV/SDN stack
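
As a small taste of the modeling-language skill set above, here is a hypothetical TOSCA-flavored VNF descriptor fragment; the node and policy type names are illustrative, not taken from any official template:

```yaml
# Hypothetical TOSCA-style descriptor fragment (illustrative only)
tosca_definitions_version: tosca_simple_yaml_1_1

topology_template:
  node_templates:
    my_vnf:
      type: tosca.nodes.nfv.VNF       # node type name is illustrative
      properties:
        flavour_id: simple
      requirements:
        - virtual_link: mgmt_network
  policies:
    - scale_out_policy:
        type: tosca.policies.Scaling  # policy type name is illustrative
        properties:
          max_instances: 4
```

The point of such models is that behavior (here, a scaling limit) is changed by editing data, not code.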

General functional skills

  • SDN and NFV basics  
  • MEC basics  
  • Using OpenStack, Kubernetes (k8s), and SDN controllers such as ODL or Tungsten Fabric  
  • Operating OpenStack, k8s, and SDN controllers  
  • Using big data stacks  
  • Modern monitoring techniques  
  • AI/ML  
  • Hypervisors (vSphere or KVM) and  containers  
  • Hardware acceleration/Enhanced Platform Awareness (EPA)

Specialized functional skills

  • Using ONAP/OSM  
  • Operating ONAP/OSM  
  • New VNFs such as vRAN, SD-WAN, NGFW etc.

There is no one-stop shop to acquire these skills. Nor does any one individual need to know all aspects.

Our courses cover a small part of the above list (ONAP; OPNFV, which covers testing; and NFV101). If these are gaps you are looking to fill, consider taking one of our courses. We have public classes coming up in Berlin in late February (ONAP Bootcamp II) and in San Jose in early April just before ONS (ONAP Bootcamp, ONAP+AI/ML intro). Or you can request a private onsite training just for your company's employees.

Sriram Rupanagunta

This technical blog explains how to run NSB (Network Services Benchmarking) using Yardstick, in a virtual environment, using the Aarna.ml ONAP Distribution (ANOD) on GCP. You can get free access to ANOD for GCP here.

We believe that Yardstick NSB complements ONAP very well by helping validate and certify the performance of VNFs before they are onboarded. See my August blog, which discusses this in more detail, and my YouTube video titled "OPNFV Yardstick NSB Demo".

  1. Deploy OPNFV on a CentOS 7.x server or a GCE instance with 16 vCPUs and 32 GB of memory.

sudo -i

cd /opnfv_deploy

nohup opnfv-deploy -v --debug --virtual-cpus 16 --virtual-default-ram 32 -n network_settings.yaml -d os-nosdn-nofeature-noha.yaml &

# This takes about 90 minutes to complete!

  2. The IP address of the OpenStack Horizon interface can be found in the OpenStack credentials file on the undercloud instance (log in using the command "opnfv-util undercloud" and refer to the file overcloudrc.v3).

sudo -i

opnfv-util undercloud

cat overcloudrc.v3

  3. Log in to the Horizon dashboard (after setting up a SOCKS proxy tunnel) to examine OpenStack parameters such as hypervisor resources.  
  4. Create a KVM instance of Ubuntu 16.04 with the resources listed below. You can refer to Aarna's Lab ONAP300, which sets up an Ubuntu 16.04 VM on a GCE instance. If you are running this on a local server, create the VM from a standard Ubuntu 16.04 cloud image. Instead of using this VM to deploy ONAP, you can use it to run NSB/Yardstick. The VM requires the following resources:

  • 8 vCPUs  
  • 100 GB RAM

(Note: the NSB scripts do not work on CentOS distributions, so they cannot be run from the base jump host.)

  5. Log in to the Ubuntu instance as user "aarna" and copy the OpenStack credentials file to it (in the directory /opnfv-yardstick). Edit this file, removing comments and shell commands and retaining only the environment variables (openstack.creds.sh).  
  6. Run the following as the sudo user on the Ubuntu VM:

sudo -i

cd /home/aarna

git clone https://gerrit.opnfv.org/gerrit/yardstick

cd yardstick

# Switch to the latest stable branch

git checkout stable/euphrates


# Run as sudo user...

nohup ./nsb_setup.sh /opnfv-yardstick/openstack.creds.sh &

# This command takes about 1 hour to complete

  7. Once this command completes successfully, you will see the yardstick container created on this VM. Run a bash shell in this container; there is no need to start the container explicitly (nsb_setup does that).

docker ps -a # You should see yardstick container

docker exec -it yardstick /bin/bash # execute shell in the container

  8. The nsb_setup script also prepares the cloud image (yardstick-samplevnfs) and adds it to Glance in OpenStack. This image contains all the utilities needed to run the NSB sample VNFs.  
  9. Create a config file for Yardstick:

cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf

# Edit this file and add "file" as a dispatcher destination (in addition to http).

# The dispatcher can also be set to influxdb to view the results in Grafana.
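
The edit above amounts to something like the following in /etc/yardstick/yardstick.conf; exact option and section names may differ slightly between Yardstick releases, so treat this as a sketch:

```ini
[DEFAULT]
debug = False
# Report results over HTTP and also write them to a local file;
# change to "influxdb" to feed Grafana instead.
dispatcher = http,file

[dispatcher_file]
file_path = /tmp/yardstick.out
```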

  10. For the purpose of this blog, we will use the sample application l2fwd, which performs L2 forwarding. Edit the l2fwd test case from the prox VNF (yardstick/samples/vnf_samples/nsut/prox/tc_prox_heat_context_l2fwd-2.yaml) and reduce the resource requirements in the context section of the file:  
      • vcpus to 8  
      • memory to 10G  
  11. Edit the file yardstick/ssh.py and make the following changes. They are not needed in a hardware-based deployment, but they are required here because the VMs take longer to boot on a virtualized OpenStack:  
      • Change the SSH timeout from 120 to 480 seconds  
      • Change the SSH retry interval from 1 second to 2 seconds  
  12. Set up the environment to run the test:

source /etc/yardstick/openstack.creds

export EXTERNAL_NETWORK="external"

  13. Run the sample NSB test from Yardstick:

yardstick --debug task start samples/vnf_samples/nsut/prox/tc_prox_heat_context_l2fwd-2.yaml

  14. This takes a few minutes to complete. While it is running, you can examine the Horizon dashboard and look at the stack that is created (which includes all the resources needed to run the tests) and the two VM instances that are launched. At the end of the test, the stack is undeployed.  
  15. The results of the test will be in /tmp/yardstick.out, which you can examine.  
  16. InfluxDB/Grafana can be configured to view the results graphically. (Note: this is not included in the recorded session.)
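
If you enabled the file dispatcher, a quick way to inspect /tmp/yardstick.out is a short script. This is a sketch that assumes one JSON record per line; the actual record layout varies by Yardstick version and test case, and the "tg__0"/"TxThroughput" keys below are illustrative, not guaranteed:

```python
import json
import os
import tempfile

def load_results(path):
    """Parse a Yardstick output file, assuming one JSON record per line.
    The real layout varies by Yardstick version, so treat this as a sketch."""
    records = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records

# Synthetic example records; real field names depend on the test case
# ("tg__0" / "TxThroughput" are hypothetical keys used for illustration).
demo = "\n".join(
    json.dumps({"tg__0": {"TxThroughput": t}}) for t in (9.8, 10.1, 10.0)
)
with tempfile.NamedTemporaryFile("w", suffix=".out", delete=False) as f:
    f.write(demo)
    path = f.name

records = load_results(path)
avg = sum(r["tg__0"]["TxThroughput"] for r in records) / len(records)
print(round(avg, 2))  # 9.97
os.unlink(path)
```

The same loop can feed a plotting library or InfluxDB if you prefer graphical results.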

The L2 forwarding service consists of two VNFs: TG (Traffic Generator) and L2FWD (L2 forwarding function). Both are implemented using the open source tool prox (Packet pROcessing eXecution engine). Prox can be configured to run in various modes, and for this network service we run prox on both VNFs in different modes. On the TG VNF, prox runs as a traffic generator, using the configuration file gen-l2fwd-2.cfg. On the L2FWD VNF, prox runs as an L2 forwarding function, using the configuration file handle-l2fwd-2.cfg.

The prox application is built using the open source library DPDK.

The network configuration of the service is as shown below:

Curious to learn more about ONAP? Consider signing up for one of our upcoming trainings.

Amar Kapadia

To Onboard or Not to Onboard: ONAP Presents a Dilemma for VNF Vendors.

When it comes to NFV/SDN management, orchestration, and automation, ONAP is clearly a leading open source project. The operators behind ONAP represent more than 60% of worldwide mobile subscribers. Needless to say, ONAP has a high likelihood of success, and that should serve as a motivator for VNF (and PNF) vendors to onboard their products onto ONAP. On the other hand, ONAP is in the very early stages of production deployment, and that might cause business decision makers to question the urgency.

Additionally, VNF vendors are hearing about ONAP nuances such as Heat vs. TOSCA VNF descriptors, sVNFM vs. gVNFM, and ETSI compliance vs. non-compliance, further creating confusion in their minds.

To cut through this confusion and help VNF vendors craft an ONAP strategy, we have a new "VNF Onboarding for ONAP" white paper. The document is meant mostly for business decision makers. Check it out and let us know what you think.

Amar Kapadia


The Linux Foundation ONAP project promises to automate not just the orchestration and lifecycle management (LCM) of network services, but also service assurance through something called closed loop automation. Closed loop automation works as follows:

  • All monitoring data — events, alarms, logs, metrics, files — goes to an analytics engine. A closed loop recipe, i.e. a sequence of big data analytics microservices, processes that data. For example, a sustained increase in packet loss may trigger a packet loss event.  
  • The event from the analytics engine goes to a policy engine, which decides what action to take. For example, the policy engine may decide to do nothing if the packet loss is below a threshold. On the other hand, it may publish an action for the orchestration/LCM side of the house; in the above case, it could trigger a scale-out or configure throttling settings.
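
The flow above can be sketched as a minimal closed loop. The event names, thresholds, and actions here are all hypothetical stand-ins, not ONAP DCAE/Policy APIs:

```python
# Minimal closed-loop sketch: analytics engine -> policy engine -> action.
# All identifiers and thresholds are illustrative, not actual ONAP APIs.

PACKET_LOSS_THRESHOLD = 0.05  # act only on sustained loss above 5%

def analytics_engine(metrics):
    """Turn raw monitoring data into an event (the 'recipe' stage)."""
    loss = sum(m["packet_loss"] for m in metrics) / len(metrics)
    if loss > 0.0:
        return {"type": "PACKET_LOSS", "value": loss}
    return None

def policy_engine(event):
    """Decide an action; may publish to the orchestration/LCM side."""
    if event is None:
        return "NO_OP"
    if event["type"] == "PACKET_LOSS" and event["value"] > PACKET_LOSS_THRESHOLD:
        return "SCALE_OUT"  # e.g. trigger a scale-out via orchestration
    return "NO_OP"          # below threshold: do nothing

# Sustained 8% packet loss triggers a scale-out; 1% does not.
high = [{"packet_loss": 0.08}] * 3
low = [{"packet_loss": 0.01}] * 3
print(policy_engine(analytics_engine(high)))  # SCALE_OUT
print(policy_engine(analytics_engine(low)))   # NO_OP
```

An AI/ML microservice would slot in between the two functions, classifying events that no hand-written recipe anticipated.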

However, life is not usually this straightforward where every closed loop can be clearly defined ahead of time. Wouldn't it be nice if an AI/ML microservice was part of the closed loop recipe? This way, we wouldn't have to figure out every possible closed loop recipe permutation and the AI/ML microservice could assist.

Now we can assist you with the above needs. We have partnered with another SF Bay Area startup, Davinci Networks, whose entire focus is on enabling intelligent networks through AI/ML microservices built using specialized deep neural networks. These deep neural networks use the network's internal monitoring data and combine it with external data to improve the quality of intelligence. Through our partnership, we will provide professional services, training, and, over time, products that span ONAP and AI/ML microservices.

Curious to learn more? Sign up for one of our joint 1.5-day courses. Get a boost in your career by learning about two hot technologies: ONAP + AI/ML.

Register for the Berlin Feb 26-27 ONAP+AI/ML Course

Register for the San Jose Apr 1-2 ONAP+AI/ML Course

Of course, you can always try out our ONAP distribution "ANOD" on GCP, sign up for our regular ONAP courses, or request professional services around ONAP.

Sriram Rupanagunta


Today we made two presentations at the joint ONAP Dublin DDF + OPNFV Gambia Plugfest event in Paris. Both presentations/demos were successful and well received. A quick summary of the two presentations:

Mobile Content Cloud (MCC) Network Service Using ONAP:

Alex Xia from Affirmed Networks and I showed how to take Affirmed's MCC-related VNFs and onboard them through the ONAP SDC design tool. We next showed how to create a single network service from these VNFs in ONAP SDC. We then performed SDN-C preload of the VNFs and deployed the network service (MCC) using VID in the ONAP portal. Finally, we showed post-deployment configuration of the VNFs. See the video here.

L2 Forwarder Using Yardstick NSB and ONAP:

In this presentation, we used a sample VNF from OPNFV, an L2 forwarder, to show the end-to-end lifecycle of a VNF using OPNFV + ONAP. By lifecycle we mean the operator's point of view, where the first step is to validate/certify a VNF and the last step is to deploy/manage it in production. We showed how to do performance benchmarking using OPNFV Yardstick NSB to validate/certify a VNF. Next, we onboarded the VNF onto ONAP. Finally, we deployed the VNF using ONAP onto an OPNFV OpenStack scenario. See the video here.

Want to learn more? Sign up for our public ONAP training (EU and US coming up in the next few months) or request a private onsite training. Or use our products/services for your specific project requirements.

Amar Kapadia

Four ONAP Myths Debunked

Even though the Linux Foundation Open Network Automation Platform (ONAP) is well into its third six-month release (Casablanca came out in Dec '18), topics such as what ONAP is and how it's architected seem to be a subject of confusion. This is true even among sophisticated audiences, as evidenced by a recent blog titled "A Tale of Two Transformations: ONAP versus the Cloud" by Tom Nolle. At a high level, the blog compares ONAP and AWS (from a mindset point of view) and concludes that ONAP is business-as-usual and therefore incompatible with the transformative nature of the cloud, a.k.a. AWS. In my mind, a few key themes emerged from the blog that are worth discussing.

But before we do that, it is important to consider what functionality ONAP includes. I call ONAP a MANO++, where ONAP includes the NFVO and VNFM layers as described by ETSI, but goes beyond by including service assurance/automation and a unified design tool. ONAP does not include the NFVI/VIM or the NFV cloud layer. In other words, ONAP doesn’t really care whether the NFV cloud is OpenStack, StarlingX, or in future, Kubernetes or Microsoft Azure. Nor does ONAP include VNFs. VNFs come from 3rd party companies or open source projects.

OK end of background. On to the four themes:

MODEL DRIVEN

The ONAP vs. Cloud blog states, "I told the ONAP people that I wasn't interested in future briefings that didn't focus on model-driven transformation of the ONAP architecture. Casablanca is the second release that has failed to provide that focus..." I found this statement perplexing. Model-driven is a central tenet of ONAP. In fact, if anything, one might complain about there being too much model-driven thinking, not too little! There are models for:

  • VNF descriptor  
  • Network service descriptor  
  • VNF configuration  
  • Closed-loop automation template descriptor  
  • Policy  
  • APP-C/SDN-C directed graphs  
  • Orchestration workflow  
  • The big bang (just kidding)  
  • So on and so forth

The key idea of a model driven approach is to enable non-programmers to change the behavior of the platform with ease. And ONAP embraces this paradigm fully.

DEVICE ORIENTATION

The ONAP vs. Cloud blog continues, "ECOMP [as a precursor to ONAP] is following what I've characterized as the "device networking" vision of the future, and not the "cloud-native" vision." The blog asserts that the lack of a model-driven approach means ONAP is relegated to a device-networking approach. This couldn't be further from the truth either. ONAP takes great pains to create a hierarchy and provide the highest level of abstraction to the OSS/BSS layers. Below are a couple of examples.

Service Orchestration & LCM (the left-hand side item feeds into the right-hand side item):

VF ⇛ Module ⇛ VNF ⇛ Network/SDN service ⇛ E2E network service ⇛ Product (future) ⇛ Offer (future)

Service Assurance:

Analytics Microservices & Policies ⇛ Closed Loop Templates

With upcoming MEC applications, the million dollar question is, will ONAP orchestrate MEC applications as well? This is to be determined, but if this happens, ONAP will be even further from device-orientation than it already is.

CLOUD NATIVE

The blog further states, “the Casablanca release is replete with comments about VNFs and PNFs, and the wording makes it clear that what ONAP is now focused on is providing the operational layer of NFV. Since NFV isn’t cloud-native, ONAP doesn’t need to be [i.e. isn’t].”

There are three sub-myths in the paragraph surrounding the above text. The first is that VNFs can't be cloud-native. In fact they can be, and ONAP highly encourages, I daresay insists upon, it (see the ONAP VNF Development requirements here). Cloud-native or containerized network functions (CNFs) are just around the corner, and they will be fully supported by ONAP (when we say VNF, we include CNFs in that discussion). Second, the argument that ONAP documentation being replete with VNFs and PNFs is a net negative misses the point: ONAP refers to VNFs and PNFs because they constitute higher-level services. The argument is tantamount to saying that if AWS uses the words VM or container, it should be written off as outmoded. Moreover, new services such as 5G are expected to be built out of physical network functions (PNFs) — for performance reasons — and VNFs, so ONAP anticipates orchestrating and managing the lifecycle of PNFs. Finally, the third assertion is that VNFs not being written in a cloud-native manner is somehow indicative of ONAP being mis-architected. It is true that a large number of VNFs are VNFs-in-name-only (i.e. PNFs that have been virtualized, but not much else); however, this is orthogonal to ONAP. As mentioned above, ONAP does not include VNFs.

LACK OF INNOVATION

The final section of the above-mentioned blog can be summarized as saying that there is a lack of innovation on the ONAP side. The ONAP vs. Cloud blog offers examples such as Amazon Firecracker, Lambda and Outpost as evidence. Let’s take each one:

  • Amazon Firecracker: ONAP can already support unikernels (i.e. ridiculously small VMs) and will support Kata containers when the MultiCloud project fully supports k8s. So I don't see any issue.  
  • Lambda: Here there is some truth: ONAP can't handle function-as-a-service. For NFV, I am not sure ONAP ever needs to support function-as-a-service. However, if ONAP ends up orchestrating MEC applications, it might have to. But I don't view this as a showstopper. When the time comes, I'm sure the ONAP community will figure this issue out — it's not that hard of a problem.  
  • Outpost: ONAP and Outpost are different layers of the software. There are two ways they can interoperate. Today, ONAP is containerized and can run on any cloud that supports k8s. I assume Outpost will support k8s, so ONAP can already run on Outpost. Second, if an Outpost adapter is written for the ONAP MultiCloud project, ONAP will be able to orchestrate VNFs onto Outpost. It’s just a small-matter-of-programming :).

In summary, even amongst sophisticated circles there is a lot of confusion about ONAP’s ability to support a model driven, service oriented, cloud native, transformative future. Hopefully this blog clarifies some of those points.

Want to learn more about ONAP? Try Aarna.ml ONAP Distribution 1.0 (ANOD) — Development for free on GCP, or request ONAP training.