Aarna.ml


Sriram Rupanagunta


There is tremendous interest in development around ONAP. We hear CSPs and vendors talk about activities such as:

  • Trying out a specific ONAP blueprint e.g. vFW, vDNS, vCPE etc.  
  • Building a PoC for internal purposes/customer demos  
  • VNF onboarding (from both a vendor and a CSP point of view)  
  • Network service design  
  • OSS/BSS interfacing  
  • Policy development  
  • Closed-loop automation template design  
  • Alarm correlation

Most developers naturally gravitate towards doing these kinds of development activities on their laptop. However, ONAP is too resource hungry to run on a typical laptop. To solve this problem, we have created our first product: the Aarna.ml ONAP Distribution 1.0 Development (ANOD 1.0 Development for short). We label it "Development" since ANOD 1.0 is intended for development only, not for production; our subsequent releases, of course, will be supported in production deployments.

Our ONAP development distribution, based on the ONAP Beijing release, can be installed on GCE or on one or more bare metal servers. How did we get it to fit? We have replaced DCAE with CDAP-all-in-one and done some other magic, so the functionality is intact but the footprint is very light. We have also automated a number of steps to make the distribution very easy to install. Just recently we came across two customers who were stuck at different stages of ONAP installation: one could not get all the containers up, while the other was stuck at vFW deployment and could not complete the closed-loop automation exercise. ANOD 1.0 Development solves issues such as these by being repeatable and simple to use. One final benefit of ANOD is that it is self-contained, with all the working container images for ONAP, so it works in environments with limited internet access, something we are finding to be extremely commonplace.

This blog explains how ONAP can be deployed seamlessly on one or more bare metal servers or in the cloud using ANOD 1.0 Development.

The ONAP OOM project supports deploying ONAP using Kubernetes. The deployment can be done on a single server (all-in-one), a single VM (such as a Google Cloud instance), or multiple servers/VMs. If ONAP is installed on a single server or a Google Cloud instance, we create another KVM instance inside that server or VM (using the nested VM feature). If the VM/server requires OpenStack as well (in addition to ONAP), OpenStack is deployed using the OPNFV Apex installer; in this case, the number of GCE VMs goes up to two. We use OOM to deploy ONAP, using Aarna's pre-built ONAP QCOW images (available on Google Cloud) to create the ONAP Kubernetes cluster.
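For orientation, here is a minimal sketch of what the OOM-based ONAP install looks like once the Kubernetes cluster is up. It assumes the Beijing branch of the OOM charts and Helm 2; the ANOD automation wraps the equivalent steps for you:

# Fetch the OOM Helm charts (Beijing branch assumed) and build a local chart repo
git clone -b beijing http://gerrit.onap.org/r/oom
cd oom/kubernetes
helm serve &
helm repo add local http://127.0.0.1:8879
make all
# Deploy ONAP into the "onap" namespace; the release name "dev" is arbitrary
helm install local/onap --name dev --namespace onap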

ANOD 1.0 Development can be deployed in the following configurations:

  1. On GCE, using two VMs (one running OpenStack and the other running ONAP). This can also be supported on other cloud technologies that support the nested VM feature. (Note: this configuration is also used in Aarna's ONAP training labs.)  
  2. On one or more bare metal servers (with nested KVM capabilities), pointing to an external OpenStack that is not deployed by ANOD (a quick way to verify nested KVM support is shown after this list)  
  3. On a single bare metal server (with nested KVM capabilities), with OpenStack deployed as part of ANOD (all-in-one)
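Configurations 2 and 3 depend on nested KVM, so it is worth confirming that the host actually exposes it before you start. A quick check on an Intel host (use kvm_amd on AMD hosts):

# Non-zero output means hardware virtualization is available
egrep -c '(vmx|svm)' /proc/cpuinfo
# 'Y' (or '1') means nested virtualization is enabled for the kvm_intel module
cat /sys/module/kvm_intel/parameters/nested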

These reference architectures are shown below.

Figure: Reference architecture for GCE deployment

Figure: Reference architecture for bare metal installation with an external OpenStack

Figure: Reference architecture for a single-server (all-in-one) installation

The ONAP deployment is divided into the following steps:  

  1. Deploying OpenStack as a target NFVI cloud for ONAP. This step is optional if you already have an OpenStack deployment (public or private) that you plan to use with ONAP.  
  2. Downloading the KVM images for ONAP that are part of ANOD, which come pre-installed with all the required packages and automation tools.  
  3. Setting up Rancher for the Kubernetes environment, which is used to deploy ONAP.  
  4. Preparing OpenStack for ONAP, which includes creating the required cloud resources (a sketch of this step follows this list).  
  5. Deploying ONAP using Kubernetes/OOM and verifying the functionality.
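To make step 4 concrete, the OpenStack preparation essentially amounts to creating a project, a user, and appropriately sized quotas and flavors for the ONAP VNFs. A hedged sketch with the openstack CLI follows; the names, quota values, and flavor sizes are placeholders, and the ANOD automation performs the equivalent steps for you:

# Create a dedicated project and user for ONAP workloads
openstack project create onap
openstack user create --project onap --password <password> onap-user
openstack role add --project onap --user onap-user member   # role may be named _member_ on some clouds
# Raise the quotas so the ONAP demo VNFs can be instantiated
openstack quota set --instances 30 --cores 60 --ram 256000 onap
# Create a flavor sized for typical demo VNFs
openstack flavor create --vcpus 2 --ram 4096 --disk 40 onap.medium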

Once these steps are done, the deployment is ready for use: SDC design and NS/VNF onboarding can be performed using the sample vFW service or any other custom network service/VNFs.
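As a quick sanity check before starting the vFW exercise, the robot scripts that ship with OOM can verify component health and pre-load the demo assets. A sketch, assuming the OOM checkout and the "onap" namespace used above:

cd oom/kubernetes/robot
# Run the ONAP health-check test suite
./ete-k8s.sh onap health
# Pre-load the customer, models, and other assets used by the vFW demo
./demo-k8s.sh onap init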

Each of these steps can be automated, either for deployment in a development environment or as part of a Jenkins job in a Continuous Integration (CI) process. For live deployments, the process obviously needs a lot more automation and tuning of various components.
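As an illustration, a Jenkins job can simply chain the five steps as shell stages. The script names below are hypothetical placeholders standing in for the ANOD automation, not actual file names:

#!/bin/bash -e
# Hypothetical CI wrapper: each script stands in for one deployment step above
./deploy_openstack.sh      # step 1 (skip if you point at an existing OpenStack)
./fetch_anod_images.sh     # step 2: pull the pre-built ONAP KVM/QCOW images
./setup_rancher.sh         # step 3: bring up Rancher and the Kubernetes cluster
./prepare_openstack.sh     # step 4: create the cloud resources ONAP needs
./deploy_onap_oom.sh       # step 5: run the OOM/Helm deployment
./run_health_checks.sh     # fail the job if the ONAP health checks do not pass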

In summary, the benefits of ANOD are:

  1. The ability to try out ONAP on a single GCP VM or a single bare metal node  
  2. Easy repeatable installation  
  3. Installation possible without internet connectivity

On GCP: one instance of ANOD is free. Request access here.

On bare metal: ANOD is available on an annual subscription basis. Oftentimes our customers request ANOD together with ONAP training. Please contact us for pricing.

Amar Kapadia

CableLabs NFV101 Course: A Boon for Cable Operators.

Do you find yourself in meetings where NFV buzzwords are being thrown around, and the prevailing attitude of "everyone should know this stuff" prevents you from asking what these terms mean? Have you also been frustrated that most NFV material out there is telco-centric and not really tailored for cable operators, so it does not directly address your needs?

Well, I have some good news for you. There is now a CableLabs NFV101 course that covers the basics of NFV from a 100% cable operator point of view. The training takes around 3 hours (virtual) or half a day in person. It is broken into the following chapters:

  • NFV basics  
  • Key NFV requirements  
  • Use cases  
  • Current industry landscape  
  • The role of open source and standards  
  • Future trends

You can take it for free online. Or you can contact us for an in-person fee-based training.

After having designed the first pre-DOCSIS CMTS at HP in '96-'97 (yes, HP was in the CMTS business for a short while) and having worked on the first QAM/MPEG2 based digital set-top boxes in '97-'99 at VLSI Technology (acquired by NXP), I'm super excited to be back working with the cable industry, and I'm really looking forward to meaningful conversations.

Feb-2019 Update: The course is now also available on Intel Network Builders here.

Amar Kapadia

Directed Graphs in ONAP

The Linux Foundation Open Network Automation Platform (ONAP) takes quite seriously its promise of letting users change the behavior of the platform, to the degree possible, without requiring programming. The philosophy is pervasive, ranging from creating closed loop automation pipelines and describing hardware platform requirements for VNFs, to modifying or creating network service orchestration workflows, to modifying or creating new behavior for the APP-C (application) and SDN-C (software defined networking) controllers.

Take SDN-C and APP-C, for example. In addition to configuration and other data in the form of models (e.g. YANG), they take programming inputs represented through something called Directed Graphs (DGs). These DGs are processed by a service logic interpreter and converted to Java objects. Most importantly, DGs can be written by non-programmers, and they can be changed dynamically without requiring any system restart. APP-C, for instance, comes with DGs that can configure, start/stop, or rebuild a VNF, along with numerous other actions. These DGs can be modified by users, and they can be supplemented with additional DGs to create new features.

The ONAP wiki has a nice writeup on how to install the DG Editor on an Ubuntu desktop and then how to create your first DG.

Figure: ONAP DG Editor

Unfortunately, I don't have an Ubuntu desktop. Rather than figuring out how to install the DG Editor on my mac, I decided to spin it up on Google Cloud. Here are the instructions. You can watch a YouTube video of this demo as well.

1. Create an Ubuntu 16.04 VM with 2 vCPUs. Go with the default memory and disk size that come with 2 vCPUs.
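If you prefer the gcloud CLI to the console, the equivalent command for step 1 looks roughly like this (the instance name and zone are placeholders; n1-standard-2 gives you 2 vCPUs with the default memory):

gcloud compute instances create dgbuilder-vm \
    --zone us-central1-a \
    --machine-type n1-standard-2 \
    --image-family ubuntu-1604-lts \
    --image-project ubuntu-os-cloud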

2. Install relevant packages and start the DG editor:

# Install these packages one at a time
sudo apt-get install default-jre
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
sudo apt-get install -y nodejs
sudo ln -s /usr/bin/nodejs /usr/bin/node
sudo apt-get install maven
sudo apt-get install graphviz
sudo apt-get install npm

# Install the DG Builder editor
# This entire git clone command is one line
git clone http://gerrit.onap.org/r/sdnc/oam && (cd oam && curl -kLo `git rev-parse --git-dir`/hooks/commit-msg http://gerrit.onap.org/r/tools/hooks/commit-msg; chmod +x `git rev-parse --git-dir`/hooks/commit-msg)

cd oam/dgbuilder

# The username can be anything, e.g. akapadia in my case;
# enter one of your email addresses as well
# This whole command is one line
./createReleaseDir.sh 15.10 ~/sdnc/1510/service-logic

npm install sqlite3
./start.sh 15.10

# Browse to http://<VM external IP>:3101/#
# (replace <VM external IP> with your instance's external IP address)
# Username: the username from above
# Password: test123

# Ignore the following errors:
# 25 Sep 00:04:51 - [red] Flows file not found...
# Could not load the file /home/.../oam/dgbuilder/releases/15.10/selected_modules

3. Try out the "Your First Graph" exercise. Of course, you won't be able to upload or activate the DG since the editor is not connected to ONAP. But you will be able to learn about the various DG "nodes" in the areas of flow control, device management, Java plugin support, logging/recording and resource management. If there's sufficient interest, I might do a subsequent blog explaining each of the various DG nodes.

If you would like to learn more about ONAP, please request one of our ONAP training courses. Or if you would like to set up ONAP in your lab, check out our professional services. Setting up ONAP in your own lab seems to be a trend right now. We've just completed 3 such setups in the last month. Pardon the sales pitch, but don't be late on this critical technology, start playing with it in your lab ASAP.

Amar Kapadia

Non-ONAP Vendors Need Not Apply

Earlier this month, Iain Morris from Light Reading reported that Orange is starting their '5G Plus Automation' RFP effort this year and the message to vendors is clear, "If you can't support ONAP interfaces, don't bother responding."

As an ONAP products and services company, we could not have asked for better news. ONAP is about one and a half years old, and commercial activity around the project has had a slow start. However, this announcement should change that. Once an operator adopts ONAP, vendors will have no option but to ensure interoperability. As more vendors support ONAP, more operators will adopt it, kicking off the proverbial virtuous cycle.

But what does supporting ONAP interfaces mean? There are three broad categories of interfaces:

  • Northbound interfaces: APIs that connect to OSS/BSS applications, E-Services (e.g. self-service web/mobile applications for end users), and big data applications  
  • Southbound interfaces: to OpenStack and to SDN controllers  
  • Onboarding interfaces: for VNFs, PNFs, analytic microservices/applications, policies, etc. I'm speculating here, but I imagine MEC application onboarding specs could come soon.

While all interfaces are important, the most critical one at this time is VNF/PNF (aka xNF) onboarding due to the sheer magnitude of xNFs out there. The ONAP community provides very detailed guidelines/documentation on how to write and package an xNF for ONAP consumption. The main requirements are:

  1. A cloud optimized/native VNF image (the specification covers topics such as design, resiliency, security, modularity, and DevOps)  
  2. An xNF descriptor (OpenStack Heat or TOSCA)  
  3. xNF lifecycle management support (with or without an sVNFM)  
  4. An xNF configuration template (e.g. a YANG model)  
  5. xNF monitoring (e.g. via VES or Google Protocol Buffers; with or without an EMS); a sample VES event POST is sketched after this list  
  6. xNF CI tests  
  7. xNF license metadata  
  8. xNF documentation  
  9. Additional artifacts, e.g. an error-code XML file
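To give a feel for item 5 above, reporting a fault over VES boils down to POSTing a JSON event to the DCAE VES collector. The sketch below is illustrative only; the collector address, port, credentials, and required fields vary by release and deployment, so treat every value here as a placeholder rather than a compliance recipe:

# Send a sample VES fault event to the collector (Beijing-era v5 listener assumed)
curl -X POST -u sample1:sample1 -H "Content-Type: application/json" \
  -d '{"event":{"commonEventHeader":{"domain":"fault","eventName":"Fault_vFW_linkDown",
  "eventId":"fault0000001","sourceName":"vfw-vnf-1","reportingEntityName":"vfw-vnf-1",
  "priority":"High","version":3.0,"sequence":1,
  "startEpochMicrosec":1538000000000000,"lastEpochMicrosec":1538000000000000},
  "faultFields":{"faultFieldsVersion":2.0,"alarmCondition":"linkDown",
  "eventSeverity":"CRITICAL","specificProblem":"eth0 link down",
  "eventSourceType":"virtualNetworkFunction","vfStatus":"Active"}}}' \
  http://<ves-collector-ip>:8080/eventListener/v5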

As you can see, while VNF vendors might be able to get away with a quick demo using just item 2, full ONAP interoperability is a LOT more involved. Furthermore, even within a category such as monitoring, a vendor can support just one event for a demo but will have to support the full set of events for production use.

The above article is just the start; more and more operators will require ONAP compatibility as 5G RFPs start to roll out. As a VNF vendor, I think you have two choices: A) get a head start and use ONAP compatibility as a competitive advantage, or B) wait for the RFP and scramble. Not supporting ONAP at all is not, in my view, a realistic option.

In either scenario, Aarna.ml is ready to help. If you are new to ONAP, check out our free eBook. Or request one of our popular ONAP trainings. When you are ready to play with ONAP, request our ONAP deployment or ONAP VNF onboarding professional services.

Amar Kapadia

Invitation to Join End-to-End VNF Onboarding Project on Intel DevMesh.

In a prior blog titled "OPNFV Yardstick NSB: The Prequel to VNF Onboarding," Sriram explained how an end-to-end VNF onboarding flow requires VNF validation, characterization, and certification, and how OPNFV Yardstick NSB can help with this effort.

To make this abstract concept a little more real, we have started an Intel DevMesh project titled "End-to-end VNF onboarding". In this project, we are using an L2 forwarding VNF from the OPNFV SampleVNF project. We will first show how to validate the VNF by using Yardstick NSB. Then we will onboard that same VNF onto ONAP, create a simple network service and then deploy the network service on OpenStack.

If you'd like to join us on this project, we welcome your participation. More details on the project, and on how to join, are available here.

Sriram Rupanagunta

OPNFV Yardstick NSB: The Prequel to VNF Onboarding

Before you can onboard a VNF onto a MANO software stack such as ONAP, you need to select a VNF vendor. While comparing the functionality of competing VNFs is well understood from the PNF days, comparing their performance is a lot more difficult for the following reasons:

  • Differences between vendor environments: The NFVI/VIM platform, its configurations and traffic generators used by various vendors are likely to be different. Similarly, the actual performance tests may differ significantly. This makes comparing VNFs based on vendor provided metrics very difficult. While present to some degree, this issue was not as pronounced in the pre-NFV world.  
  • Differences from a real production environment. If a vendor uses 10 NICs and uses up 100% of CPU resources available to generate metrics, then these metrics are not useful for a real world deployment. Similarly, the tests also need to reflect real world traffic conditions to be useful, which may or may not be the case in vendor provided results. Similar to the previous issue, this problem is also heightened in the NFV era.

The OPNFV Yardstick Network Service Benchmarking (NSB) tool is useful in solving the above problems. It runs performance tests on a VNF or an entire network service. A CSP can thus use a consistent NFVI/VIM platform that reflects its production environment (this could be an OPNFV scenario if there is a desire to keep the validation platform vendor agnostic). The performance tests can also be written in a vendor agnostic manner. This methodology can be used to compare vendors consistently, validate VNFs, and characterize their performance. NSB is fully automated, so it can be plugged into a CI pipeline as well.
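In practice, running an NSB test amounts to installing Yardstick with its NSB extensions and pointing it at a task file that describes the test. A rough sketch follows; the sample task path is a placeholder, so pick one that matches your VNF and traffic profile:

# Get Yardstick and install the NSB components
git clone https://gerrit.opnfv.org/gerrit/yardstick
cd yardstick
./nsb_setup.sh
# Run an NSB task against the target VNF / network service
yardstick task start samples/vnf_samples/nsut/<your-test-case>.yaml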

Curious to know more? Check out my OPNFV Yardstick demo video. If you have any feedback for me or want to deploy this methodology in-house, please contact us.