Sriram Rupanagunta


This blog is a continuation of the previous blog, which explains the deployment details (using TOSCA/HEAT templates) of various services that are part of the vCPE blueprint in ONAP.

BTW, our ONAP Demystified book downloads and Amazon sales have crossed 400 as of today! Thanks for the tremendous interest. However, we haven't been doing so well on our ONAP merchandise. The merchandise, 100% pure-play ONAP without any Aarna branding, includes t-shirts, sweatshirts, hoodies, mugs, water bottles and much more. Feel free to buy & hopefully expense some of these items! There's minimal profit on each item and all proceeds go to the artist.

In this blog, we will cover two other services that are part of the vCPE use case -- namely, the vG MUX Infra service and the vBNG service.

vG MUX Infra Service

vG_MUX provides the MUX (multiplexing) functionality across all the links that terminate at the Virtual Gateway.

The composition of this service is as shown below.



This service is modeled as a combination of TOSCA and HEAT templates, and the relationship is as shown:

The TOSCA model definitions file for this service can be found here.

The Environment file (base_vcpe_vgmux.env) for this service looks as follows:

parameters:
  bng_gmux_private_ip: "10.1.0.10"
  bng_gmux_private_net_cidr: "10.1.0.0/24"
  bng_gmux_private_net_id: "zdfw1bngmux01_private"
  bng_gmux_private_subnet_id: "zdfw1bngmux01_sub_private"
  brgemu_bng_private_net_cidr: "10.3.0.0/24"
  cloud_env: "openstack"
  dcae_collector_ip: "10.0.4.1"
  dcae_collector_port: "8081"
  demo_artifacts_version: "1.2.0"
  hc2vpp_patch_url: "https://git.onap.org/demo/plain/vnfs/vCPE/vpp-ves-agent-for-vgmux/src/patches/Hc2vpp-Add-VES-agent-for-vG-MUX.patch"
  hc2vpp_source_repo_branch: "stable/1704"
  hc2vpp_source_repo_url: "https://gerrit.fd.io/r/hc2vpp"
  install_script_version: "1.2.0-SNAPSHOT"
  key_name: "vgmux_key"
  libevel_patch_url: "https://git.onap.org/demo/plain/vnfs/vCPE/vpp-ves-agent-for-vgmux/src/patches/vCPE-vG-MUX-libevel-fixup.patch"
  mux_gw_private_net_cidr: "10.5.0.0/24"
  mux_gw_private_net_id: "zdfw1muxgw01_private"
  mux_gw_private_subnet_id: "zdfw1muxgw01_sub_private"
  onap_private_net_cidr: "10.0.0.0/16"
  onap_private_net_id: "ext-net"
  onap_private_subnet_id: "ext-net"
  pub_key: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDQXYJYYi3/OUZXUiCYWdtc7K0m5C0dJKVxPG0eI8EWZrEHYdfYe6WoTSDJCww+1qlBSpA5ac/Ba4Wn9vh+lR1vtUKkyIC/nrYb90ReUd385Glkgzrfh5HdR5y5S2cL/Frh86lAn9r6b3iWTJD8wBwXFyoe1S2nMTOIuG4RPNvfmyCTYVh8XTCCE8HPvh3xv2r4egawG1P4Q4UDwk+hDBXThY2KS8M5/8EMyxHV0ImpLbpYCTBA6KYDIRtqmgS6iKyy8v2D1aSY5mc9J0T5t9S2Gv+VZQNWQDDKNFnxqYaAo1uEoq/i1q63XC5AD3ckXb2VT6dp23BQMdDfbHyUWfJN"
  public_net_id: "2da53890-5b54-4d29-81f7-3185110636ed"
  repo_url_artifacts: "https://nexus.onap.org/content/groups/staging"
  repo_url_blob: "https://nexus.onap.org/content/sites/raw"
  vcpe_flavor_name: "onap.medium"
  vcpe_image_name: "ubuntu-16.04-daily"
  vf_module_id: "vCPE_Intrastructure_Metro_vGMUX"
  vgmux_name_0: "zdcpe1cpe01mux01"
  vgmux_private_ip_0: "10.1.0.20"
  vgmux_private_ip_1: "10.0.101.20"
  vgmux_private_ip_2: "10.5.0.20"
  vnf_id: "vCPE_Infrastructure_vGMUX_demo_app"
  vpp_patch_url: "https://git.onap.org/demo/plain/vnfs/vCPE/vpp-ves-agent-for-vgmux/src/patches/Vpp-Add-VES-agent-for-vG-MUX.patch"
  vpp_source_repo_branch: "stable/1704"
  vpp_source_repo_url: "https://gerrit.fd.io/r/vpp"

Note the network details for this service: the vGMUX gets a private IP on the bng_gmux link (10.1.x.x), on the ONAP management network (10.0.x.x), and on the mux_gw link (10.5.x.x). Also note the VNF details, including the pointer to the VPP source repo, since this VNF is built on the open source VPP (FD.io) project.
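The env file assumes these Neutron networks already exist (they are normally created by ONAP or the accompanying demo scripts). As a minimal sketch, with the names and CIDRs taken from the env file above, they could also be created by hand with the OpenStack CLI:

# Create the vBNG-to-vGMUX private network and subnet
openstack network create zdfw1bngmux01_private
openstack subnet create zdfw1bngmux01_sub_private \
  --network zdfw1bngmux01_private \
  --subnet-range 10.1.0.0/24

# Create the vGMUX-to-vGW private network and subnet
openstack network create zdfw1muxgw01_private
openstack subnet create zdfw1muxgw01_sub_private \
  --network zdfw1muxgw01_private \
  --subnet-range 10.5.0.0/24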

Let us examine some of the interesting parts of the HEAT template file (base_vcpe_vgmux.yaml) for this service. A complete copy of the HEAT template file can be found here.

heat_template_version: 2013-05-23

description: Heat template to deploy vCPE Infrastructure Metro vGMUX

##############
#            #
# PARAMETERS #
#            #
##############

parameters:
  vcpe_image_name:
    type: string
    label: Image name or ID
    description: Image to be used for compute instance
  ...
  bng_gmux_private_net_id:
    type: string
    label: vBNG vGMUX private network name or ID
    description: Private network that connects vBNG to vGMUX
  ...
  mux_gw_private_net_id:
    type: string
    label: vGMUX vGWs network name or ID
    description: Private network that connects vGMUX to vGWs
  ...
  brgemu_bng_private_net_cidr:
    type: string
    label: vBRG vBNG private network CIDR
    description: The CIDR of the vBRG-vBNG private network
  onap_private_net_id:
    type: string
    label: ONAP management network name or ID
    description: Private network that connects ONAP components and the VNF
  ...
  vgmux_private_ip_0:
    type: string
    label: vGMUX private IP address towards the vBNG-vGMUX private network
    description: Private IP address that is assigned to the vGMUX to communicate with the vBNG
  ...
  vnf_id:
    type: string
    label: VNF ID
    description: The VNF ID is provided by ONAP
  ...
  dcae_collector_port:
    type: string
    label: DCAE collector port
    description: Port of the DCAE collector
  ...
  cloud_env:
    type: string
    label: Cloud environment
    description: Cloud environment (e.g., openstack, rackspace)
  vpp_source_repo_url:
    type: string
    label: VPP Source Git Repo
    description: URL for VPP source codes
  ...

#############
#           #
# RESOURCES #
#           #
#############

resources:
  ...
  # Virtual GMUX Instantiation
  vgmux_private_0_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: bng_gmux_private_net_id }
      fixed_ips: [{"subnet": { get_param: bng_gmux_private_subnet_id }, "ip_address": { get_param: vgmux_private_ip_0 }}]
  ...
  vgmux_0:
    type: OS::Nova::Server
    properties:
      image: { get_param: vcpe_image_name }
      flavor: { get_param: vcpe_flavor_name }
      name: { get_param: vgmux_name_0 }
      key_name: { get_resource: my_keypair }
      networks:
        - network: { get_param: public_net_id }
        - port: { get_resource: vgmux_private_0_port }
        - port: { get_resource: vgmux_private_1_port }
        - port: { get_resource: vgmux_private_2_port }
      ...
            # Download and run install script
            curl -k __repo_url_blob__/org.onap.demo/vnfs/vcpe/__install_script_version__/v_gmux_install.sh -o /opt/v_gmux_install.sh
            cd /opt
            chmod +x v_gmux_install.sh
            ./v_gmux_install.sh

Note the details of the vG_MUX VNF (vgmux_0) and its ports/networks. Also note how the VNF software gets installed: at first boot, the VM downloads and runs an install script (v_gmux_install.sh).
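In a live deployment, ONAP's SO instantiates this VF module through the HEAT API. Purely as an illustration (the stack name vgmux_demo is hypothetical), the same template and environment file can also be launched by hand with the OpenStack CLI, which is a convenient way to debug a template outside of ONAP:

# Launch the vGMUX stack directly against Heat
openstack stack create -t base_vcpe_vgmux.yaml -e base_vcpe_vgmux.env vgmux_demo

# Watch the ports and the server come up
openstack stack resource list vgmux_demo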

vBNG MUX Service

This service consists of two VLs connecting the associated VNFs, as shown below.

This service is modeled as follows, using TOSCA (green) and HEAT (orange) templates:

The TOSCA model definitions file for this service can be found here.

The Environment file (base_vcpe_vbng.env) for this service is as shown:

parameters:
  bng_gmux_private_net_cidr: "10.1.0.0/24"
  bng_gmux_private_net_id: "zdfw1bngmux01_private"
  bng_gmux_private_subnet_id: "zdfw1bngmux01_sub_private"
  brgemu_bng_private_net_cidr: "10.3.0.0/24"
  brgemu_bng_private_net_id: "zdfw1bngin01_private"
  brgemu_bng_private_subnet_id: "zdfw1bngin01_sub_private"
  cloud_env: "openstack"
  cpe_signal_net_id: "zdfw1cpe01_private"
  cpe_signal_private_net_cidr: "10.4.0.0/24"
  cpe_signal_subnet_id: "zdfw1cpe01_sub_private"
  dcae_collector_ip: "10.0.4.1"
  dcae_collector_port: "8081"
  demo_artifacts_version: "1.2.0"
  install_script_version: "1.2.0-SNAPSHOT"
  key_name: "vbng_key"
  onap_private_net_cidr: "10.0.0.0/16"
  onap_private_net_id: "ext-net"
  onap_private_subnet_id: "ext-net"
  pub_key: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDQXYJYYi3/OUZXUiCYWdtc7K0m5C0dJKVxPG0eI8EWZrEHYdfYe6WoTSDJCww+1qlBSpA5ac/Ba4Wn9vh+lR1vtUKkyIC/nrYb90ReUd385Glkgzrfh5HdR5y5S2cL/Frh86lAn9r6b3iWTJD8wBwXFyoe1S2nMTOIuG4RPNvfmyCTYVh8XTCCE8HPvh3xv2r4egawG1P4Q4UDwk+hDBXThY2KS8M5/8EMyxHV0ImpLbpYCTBA6KYDIRtqmgS6iKyy8v2D1aSY5mc9J0T5t9S2Gv+VZQNWQDDKNFnxqYaAo1uEoq/i1q63XC5AD3ckXb2VT6dp23BQMdDfbHyUWfJN"
  public_net_id: "2da53890-5b54-4d29-81f7-3185110636ed"
  repo_url_artifacts: "https://nexus.onap.org/content/groups/staging"
  repo_url_blob: "https://nexus.onap.org/content/sites/raw"
  sdnc_ip_addr: "10.0.7.1"
  vbng_name_0: "zdcpe1cpe01bng01"
  vbng_private_ip_0: "10.3.0.1"
  vbng_private_ip_1: "10.0.101.10"
  vbng_private_ip_2: "10.4.0.3"
  vbng_private_ip_3: "10.1.0.10"
  vcpe_flavor_name: "onap.medium"
  vcpe_image_name: "ubuntu-16.04-daily"
  vf_module_id: "vCPE_Intrastructure_Metro_vBNG"
  vnf_id: "vCPE_Infrastructure_Metro_vBNG_demo_app"
  vpp_patch_url: "https://git.onap.org/demo/plain/vnfs/vCPE/vpp-radius-client-for-vbng/src/patches/Vpp-Integrate-FreeRADIUS-Client-for-vBNG.patch"
  vpp_source_repo_branch: "stable/1704"
  vpp_source_repo_url: "https://gerrit.fd.io/r/vpp"

Note the virtual link details: brgemu_bng (10.3.x.x) connects the BRG emulator to the vBNG, and bng_gmux (10.1.x.x) connects the vBNG to the vG_MUX. The vBNG also has two other network interfaces, one on the CPE_SIGNAL network (10.4.x.x) and one on the ONAP OAM network (10.0.x.x).

Let us look at some of the interesting parts of the HEAT template (base_vcpe_vbng.yaml) for this service. A full copy of the HEAT template file can be found here.

heat_template_version: 2013-05-23

description: Heat template to deploy vCPE virtual Broadband Network Gateway (vBNG)

##############
#            #
# PARAMETERS #
#            #
##############

parameters:
  ...
  brgemu_bng_private_net_id:
    type: string
    label: vBNG IN private network name or ID
    description: Private network that connects vBRG to vBNG
  ...
  bng_gmux_private_net_id:
    type: string
    label: vBNG vGMUX private network name or ID
    description: Private network that connects vBNG to vGMUX
  ...
  cpe_signal_net_id:
    type: string
    label: vCPE private network name or ID
    description: Private network that connects vCPE elements with vCPE infrastructure elements
  ...
  vbng_private_ip_0:
    type: string
    label: vBNG IN private IP address
    description: Private IP address that is assigned to the vBNG IN
  ...
  vnf_id:
    type: string
    label: VNF ID
    description: The VNF ID is provided by ONAP
  vf_module_id:
    type: string
    label: vCPE module ID
    description: The vCPE Module ID is provided by ONAP
  dcae_collector_ip:
    type: string
    label: DCAE collector IP address
    description: IP address of the DCAE collector
  ...
  vpp_source_repo_url:
    type: string
    label: VPP Source Git Repo
    description: URL for VPP source codes
  ...

#############
#           #
# RESOURCES #
#           #
#############

resources:
  ...
  # Virtual BNG Instantiation
  vbng_private_0_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: brgemu_bng_private_net_id }
      fixed_ips: [{"subnet": { get_param: brgemu_bng_private_subnet_id }, "ip_address": { get_param: vbng_private_ip_0 }}]
  ...
  vbng_0:
    type: OS::Nova::Server
    properties:
      image: { get_param: vcpe_image_name }
      flavor: { get_param: vcpe_flavor_name }
      name: { get_param: vbng_name_0 }
      key_name: { get_resource: my_keypair }
      networks:
        - network: { get_param: public_net_id }
        - port: { get_resource: vbng_private_0_port }
        - port: { get_resource: vbng_private_1_port }
        - port: { get_resource: vbng_private_2_port }
        - port: { get_resource: vbng_private_3_port }
      metadata: {vnf_id: { get_param: vnf_id }, vf_module_id: { get_param: vf_module_id }}
      user_data_format: RAW
      user_data:
        str_replace:
          params:
            ...
            # Download and run install script
            curl -k __repo_url_blob__/org.onap.demo/vnfs/vcpe/__install_script_version__/v_bng_install.sh -o /opt/v_bng_install.sh
            cd /opt
            chmod +x v_bng_install.sh
            ./v_bng_install.sh

Note the vBNG download and instantiation details above, including the script that installs the vBNG software (v_bng_install.sh).
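To make the str_replace mechanism concrete, here is a sketch of what the script fragment above looks like after HEAT substitutes the placeholders with values from base_vcpe_vbng.env (only the two parameters visible above are shown; the full boot script writes many more config files first):

# __repo_url_blob__          -> https://nexus.onap.org/content/sites/raw
# __install_script_version__ -> 1.2.0-SNAPSHOT
curl -k https://nexus.onap.org/content/sites/raw/org.onap.demo/vnfs/vcpe/1.2.0-SNAPSHOT/v_bng_install.sh -o /opt/v_bng_install.sh
cd /opt
chmod +x v_bng_install.sh
./v_bng_install.sh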

Another interesting artifact is the VF Module metadata, shown below. One observation worth calling out: min_vf_module_instances and max_vf_module_instances are both 1, which means this VF module will not be scaled out (a quick way to verify this is sketched after the listing).

[
  {
    "vfModuleModelName": "VcpevspVbng230518a..base_vcpe_vbng..module-0",
    "vfModuleModelInvariantUUID": "98476290-c537-4a7e-9bc0-92a7c116b1aa",
    "vfModuleModelVersion": "1",
    "vfModuleModelUUID": "dc431db4-54f5-4caa-af83-a28176de614a",
    "vfModuleModelCustomizationUUID": "ab440315-998b-4229-99ab-942ce519dbda",
    "isBase": true,
    "artifacts": [
      "0beb8864-d9b1-4650-a3c5-151bc35038ac",
      "81e24e47-fbc5-4367-85ad-86c5a169b31f"
    ],
    "properties": {
      "min_vf_module_instances": "1",
      "vf_module_label": "base_vcpe_vbng",
      "max_vf_module_instances": "1",
      "vfc_list": "",
      "vf_module_description": "",
      "vf_module_type": "Base",
      "availability_zone_count": "",
      "volume_group": "false",
      "initial_count": "1"
    }
  }
]
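Assuming the metadata above is saved in a file called vf_modules.json (a hypothetical name), the scaling bounds can be checked with a quick jq one-liner:

jq '.[0].properties | {min_vf_module_instances, max_vf_module_instances, initial_count}' vf_modules.json
# => all three come back as "1", confirming a single, non-scalable base module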

Interested in learning more about ONAP? Consider our ONAP training courses. Or want to try out vCPE or some other blueprint in your lab? Contact us for ONAP professional services.

Sriram Rupanagunta


This blog explains the deployment details (using TOSCA/HEAT templates) of some of the important services of the vCPE blueprint in ONAP. It assumes that the reader is familiar with the vCPE use case (for which several blogs and videos are available, as well as a free book from Aarna.ml -- ONAP Demystified -- and the ONAP Confluence page).

The following block diagram provides an overview of the end-to-end vCPE service and how the various constituent services are linked together.

The vCPE end-to-end use case comprises several services (some of which are optional and can be replaced by equivalent services already existing in a CSP's environment), each of which contains one or more VNFs and/or VLs:

  1. vCPE General Infra Service
  2. vG MUX Infra Service
  3. vBNG Service
  4. vBNG MUX Service
  5. vBRG Emulation
  6. vCPE Customer Service

This blog shows details of some of these services, and their associated model templates.

vCPE General Infra

This service consists of vDHCP, vAAA and vDNS VNFs connected by two virtual links (VLs), cpe_signal and cpe_public, both of which are OpenStack Neutron networks. The cpe_public link also connects to a web server.

Now, let us examine the Infra Service in SDC Catalog for its constituent components and their details.

The composition of this service, showing the virtual links (VLs) and the VF that contains all the VNFs, is as follows:

The CSAR file for this service contains the following details:

The service is modeled (in TOSCA and HEAT templates) as follows:

Notice that the two networks (CPE_PUBLIC and CPE_SIGNAL) are modeled in HEAT, and so is the VF module that contains the VMs for all the VNFs (vAAA, vDHCP, vDNS and the web server). The TOSCA template includes node_templates for all the HEAT templates. The TOSCA model definitions file for this service can be found here.

Let us take a closer look at the Environment file (base_vcpe_infra.env) of this service.

parameters:
  cloud_env: "openstack"
  cpe_public_net_cidr: "10.2.0.0/24"
  cpe_public_net_id: "zdfw1cpe01_public"
  cpe_public_subnet_id: "zdfw1cpe01_sub_public"
  cpe_signal_net_cidr: "10.4.0.0/24"
  cpe_signal_net_id: "zdfw1cpe01_private"
  cpe_signal_subnet_id: "zdfw1cpe01_sub_private"
  dcae_collector_ip: "10.0.4.1"
  dcae_collector_port: "8081"
  demo_artifacts_version: "1.2.0"
  install_script_version: "1.2.0-SNAPSHOT"
  key_name: "vaaa_key"
  mr_ip_addr: "10.0.11.1"
  onap_private_net_cidr: "10.0.0.0/16"
  onap_private_net_id: "ext-net"
  onap_private_subnet_id: "ext-net"
  pub_key: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDQXYJYYi3/OUZXUiCYWdtc7K0m5C0dJKVxPG0eI8EWZrEHYdfYe6WoTSDJCww+1qlBSpA5ac/Ba4Wn9vh+lR1vtUKkyIC/nrYb90ReUd385Glkgzrfh5HdR5y5S2cL/Frh86lAn9r6b3iWTJD8wBwXFyoe1S2nMTOIuG4RPNvfmyCTYVh8XTCCE8HPvh3xv2r4egawG1P4Q4UDwk+hDBXThY2KS8M5/8EMyxHV0ImpLbpYCTBA6KYDIRtqmgS6iKyy8v2D1aSY5mc9J0T5t9S2Gv+VZQNWQDDKNFnxqYaAo1uEoq/i1q63XC5AD3ckXb2VT6dp23BQMdDfbHyUWfJN"
  public_net_id: "2da53890-5b54-4d29-81f7-3185110636ed"
  repo_url_artifacts: "https://nexus.onap.org/content/groups/staging"
  repo_url_blob: "https://nexus.onap.org/content/sites/raw"
  vaaa_name_0: "zdcpe1cpe01aaa01"
  vaaa_private_ip_0: "10.4.0.4"
  vaaa_private_ip_1: "10.0.101.2"
  vcpe_flavor_name: "onap.medium"
  vcpe_image_name: "ubuntu-16.04-daily"
  vdhcp_name_0: "zdcpe1cpe01dhcp01"
  vdhcp_private_ip_0: "10.4.0.1"
  vdhcp_private_ip_1: "10.0.101.1"
  vdns_name_0: "zdcpe1cpe01dns01"
  vdns_private_ip_0: "10.2.0.1"
  vdns_private_ip_1: "10.0.101.3"
  vf_module_id: "vCPE_Intrastructure"
  vnf_id: "vCPE_Infrastructure_demo_app"
  vweb_name_0: "zdcpe1cpe01web01"
  vweb_private_ip_0: "10.2.0.10"
  vweb_private_ip_1: "10.0.101.40"

Note the details about the constituent VNFs (vAAA, vDHCP, vDNS and the vWeb server), including their IP addresses and the network addresses of the VLs these VNFs connect to (cpe_signal and cpe_public). For example, vDHCP and vAAA are connected to the cpe_signal network (10.4.x.x), while vDNS and the web server are connected to the cpe_public network (10.2.x.x). Also note that the DCAE collector is reachable at 10.0.4.1 on the ONAP management network.

Now, let us look at some of the interesting fields of the HEAT template (base_vcpe_infra.yaml) for this service. It contains details about all the VNFs that are part of this service and how they will be instantiated using HEAT. A complete copy of the HEAT template can be found here.

heat_template_version: 2013-05-23

description: Heat template to deploy vCPE Infrastructure elements (vAAA, vDHCP, vDNS_DHCP, webServer)

##############
#            #
# PARAMETERS #
#            #
##############

parameters:
  vcpe_image_name:
    type: string
    label: Image name or ID
    description: Image to be used for compute instance
  ...
  cpe_signal_net_id:
    type: string
    label: vAAA private network name or ID
    description: Private network that connects vAAA with vDNSs
  ...
  cpe_public_net_id:
    type: string
    label: vCPE Public network (emulates internet) name or ID
    description: Private network that connects vGW to emulated internet
  ...
  vaaa_private_ip_0:
    type: string
    label: vAAA private IP address towards the CPE_SIGNAL private network
    description: Private IP address that is assigned to the vAAA to communicate with the vCPE components
  ...
  vdns_private_ip_0:
    type: string
    label: vDNS private IP address towards the CPE_PUBLIC private network
  ...
  vdhcp_private_ip_0:
    type: string
    label: vDHCP private IP address towards the CPE_SIGNAL private network
    description: Private IP address that is assigned to the vDHCP to communicate with the vCPE components
  ...
  vweb_private_ip_0:
    type: string
    label: vWEB private IP address towards the CPE_PUBLIC private network
    description: Private IP address that is assigned to the vWEB to communicate with the vGWs
  ...
  dcae_collector_ip:
    type: string
    label: DCAE collector IP address
    description: IP address of the DCAE collector

#############
#           #
# RESOURCES #
#           #
#############

resources:
  ...
  # Virtual AAA server Instantiation
  vaaa_private_0_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: cpe_signal_net_id }
      fixed_ips: [{"subnet": { get_param: cpe_signal_subnet_id }, "ip_address": { get_param: vaaa_private_ip_0 }}]
  ...
  vaaa_0:
    type: OS::Nova::Server
    properties:
      ...
          template: |
            #!/bin/bash
            # Create configuration files
            mkdir /opt/config
            echo "__dcae_collector_ip__" > /opt/config/dcae_collector_ip.txt
            ...
            # Download and run install script
            curl -k __repo_url_blob__/org.onap.demo/vnfs/vcpe/__install_script_version__/v_aaa_install.sh -o /opt/v_aaa_install.sh
            cd /opt
            chmod +x v_aaa_install.sh
            ./v_aaa_install.sh
  ...

Note the details about the various VNFs (vAAA, vDHCP, vDNS and the vWeb server) and the VLs that are part of the Infrastructure service: the Neutron networks cpe_signal, which connects the vAAA and vDNS VNFs, and cpe_public, which connects the vGW service to the emulated internet. Also note the vAAA instantiation, the DCAE collector IP address, and the installation script (v_aaa_install.sh) run inside the vAAA VNF. The other VNFs (vDNS, vDHCP and vWeb server) follow the same pattern; refer to the link above for their details in the HEAT template file.
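Before handing templates like these to ONAP, it can be useful to sanity-check them against a Heat endpoint. A minimal sketch, assuming the template and environment files are in the current directory and OpenStack credentials are loaded in the shell:

# Ask Heat to parse the template (catches syntax and parameter errors)
openstack orchestration template validate -t base_vcpe_infra.yaml

# Dry-run the stack creation with the environment file applied
openstack stack create -t base_vcpe_infra.yaml -e base_vcpe_infra.env --dry-run vcpe_infra_check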

In the next blog, we will examine other Services and their details.

In the meantime, check out our latest webinar on "What's new in ONAP Beijing" or request ONAP training if you or your team needs to learn more.

Amar Kapadia

"ONAP Demystified Book" and other ONAP Beijing Goodies.

What's a software release without some celebration? I am going to point out two ways, hopefully new to you, to celebrate the ONAP Beijing release:

ONAP Demystified Book:

For those new to ONAP, the project can be a bit overwhelming. The community has been making it easier to sink your teeth into the project with improved documentation, online training (offered by LF, developed by Aarna) and in-person training.

Today, I am very excited to share a new resource with you -- a mini-book titled ONAP Demystified, written by yours truly. This 76-page book provides a high-level understanding and business perspective of the ONAP project and a guide for navigating, participating in, and benefiting from the ONAP community. The book is also meant for vendors that wish to determine how to position or sell their products into the ONAP ecosystem. This book covers the Beijing release.

Get an actual book (not free) from Amazon here.

ONAP Beijing Merchandise:

Enjoy a wide range of ONAP merchandise (again not free), all without any vendor branding.  You can get merchandise such as ONAP Beijing t-shirts, muscle tanks, dolmans, throw pillows, and even phone cases.  

We'd love to hear your feedback; do let us know if you were able to take advantage of any of these resources.

Sriram Rupanagunta

Automating ONAP Deployment.

The ONAP OOM project supports deploying ONAP using Kubernetes. This blog explains these steps and how they can be further automated.

Automation Leading to a Carefree Ops Engineer

The ONAP deployment is divided into the following logical steps:  

  1. Deploying OpenStack as a target NFVI cloud for ONAP. This step is optional if you already have an OpenStack (public or private) deployment that you plan to use for ONAP.  
  2. Setting up Rancher for the Kubernetes environment, which is used to deploy ONAP  
  3. Preparing OpenStack for ONAP, which includes creating the required cloud resources  
  4. ONAP deployment using Kubernetes/OOM

Each of these steps can be automated, either for deployment in a development environment or as part of a Jenkins job in a Continuous Integration (CI) process. In the case of live deployments, the process obviously needs a lot more automation and tuning of various components, which is not covered here.

The first step of ONAP deployment is OpenStack installation, which has already been automated as part of the OpenStack project. There are multiple installers that can be used, but for the purpose of this blog I will describe the Apex installer, which installs OpenStack as part of OPNFV. Apex uses TripleO ("OpenStack on OpenStack"), which essentially deploys one OpenStack instance (the overcloud) from another OpenStack instance (the undercloud). This process can be initiated by running a single Apex script from the shell (or from a Jenkins job), which sets up both OpenStack instances. For example, in the case of an OPNFV deployment using the Apex installer, the steps to automate the process are:

  1. Install the Apex packages using the appropriate package manager of the Linux distro  
  2. Use the deployment script to start the OpenStack undercloud and overcloud  
  3. Set up OpenStack credentials on the jump host, so that the OpenStack CLI can be used from the jump host instead of from the undercloud instance

These can be included in a single script that is invoked on the system as part of init scripts or a Jenkins job, roughly as sketched below.
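A minimal sketch of such a wrapper script, assuming a CentOS jump host (the package name, scenario file and paths vary by OPNFV release, so treat them as placeholders):

#!/bin/bash
set -e

# 1. Install the Apex packages
sudo yum install -y opnfv-apex

# 2. Deploy the OpenStack undercloud and overcloud with a chosen
#    scenario file and network settings
sudo opnfv-deploy -d /etc/opnfv-apex/os-nosdn-nofeature-noha.yaml \
                  -n /etc/opnfv-apex/network_settings.yaml

# 3. Source the overcloud credentials on the jump host so the
#    OpenStack CLI works without logging in to the undercloud
source ./overcloudrc
openstack endpoint list   # quick sanity check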

The second step, setting up Kubernetes, is more involved, since it requires automating Rancher or Cloudify (the recommended tools for setting up Kubernetes for ONAP). Either of them involves multiple steps that normally require UI interactions from the user. Rancher, however, does provide a CLI, and all the steps can be automated using the equivalent CLI commands (this is supported in the training lab of the upcoming Beijing release, but can also be run on Amsterdam).
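For example, a Rancher 1.x server can be brought up non-interactively with a single Docker command (the version tag below is illustrative; host registration and Kubernetes environment creation can then be scripted with the rancher CLI once an API key has been generated):

# Start the Rancher server; its UI/API becomes available on port 8080
docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:v1.6.14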

The step of preparing OpenStack essentially creates the cloud resources required for deploying ONAP, and it is fairly straightforward. The resources will of course vary depending on the type of deployment (HA, network configuration, etc.), but the basic set of resources is fairly simple to create. OpenStack provides a CLI tool that can be run from the undercloud instance. Alternatively, the credentials can be copied to the jump host (which is where all the other commands will be run), and a single script (with the required OpenStack CLI commands) can create all the required resources.
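A hedged sketch of such a script (resource names, flavor sizing and the image URL are illustrative; the exact resources ONAP needs are listed in the OOM and demo documentation):

#!/bin/bash
set -e
source ~/overcloudrc

# Image and flavor used by the ONAP/vCPE VMs
curl -L -o xenial.img https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
openstack image create ubuntu-16.04-daily --file xenial.img \
  --disk-format qcow2 --container-format bare
openstack flavor create onap.medium --vcpus 2 --ram 4096 --disk 40

# Keypair and the ONAP management network/subnet
openstack keypair create onap_key > onap_key.pem && chmod 600 onap_key.pem
openstack network create ext-net
openstack subnet create ext-net --network ext-net --subnet-range 10.0.0.0/16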

The last step is the deployment of the ONAP components themselves (using the ONAP Operations Manager, or OOM), which involves downloading the correct version of the containers and deploying them using Kubernetes. This process is already automated as part of OOM in the ONAP distribution, and the deployment script takes care of starting all the required containers. The only customization you might add is to run only the containers required for your application/use case, to avoid a resource crunch.
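As a sketch of what that automation boils down to for Beijing-era OOM (chart names, branch and flags vary by release, so check the OOM documentation for your version):

# Fetch OOM and build the Helm charts
git clone -b beijing http://gerrit.onap.org/r/oom
cd oom/kubernetes
make all

# Deploy ONAP with Helm (v2 syntax); individual components can be
# switched off to save resources, e.g. disabling DCAE here
helm install local/onap --name dev --namespace onap --set dcaegen2.enabled=false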

In summary, all the necessary steps to install and deploy ONAP on a system (for development or DevOps purposes) can be automated.

Please refer to Aarna’s free cloud images for more details on these steps and the necessary scripts.

Amar Kapadia


I had the opportunity to present at Telecom Council's Comtech Forum on "Mobile Edge, MEC & The Olympics" in Sunnyvale on 6/21/18.

I hypothesized that ONAP will become a dominant automation platform for not just NFV/SDN services but also MEC applications. I speculated that over time, ONAP might include MEAO (mobile edge application orchestrator) and mobile edge platform manager functions. My conjecture is based on the following reasoning: the edge cloud NFVI is going to be unified across VNFs and MEC applications. To me, it makes no sense to carve up a small edge environment into custom-built servers dedicated to different applications; that is the opposite of a cloud architecture and a throwback to the PNF world! To get lower operational and capital costs, and to get high availability, it will be critical to have a single pool of similarly designed servers. Maybe a couple of flavors might work, but it doesn't make sense to have half a dozen hardware flavors. If you accept my argument that the NFVI layer will be generally uniform, then it makes no sense to have two orchestrators vying for the same NFVI resources. For this reason, I feel that an NFV/SDN automation platform such as ONAP and a MEC application orchestrator will have to be unified. Since ONAP is an open source community-driven project, it has a higher probability of being extended this way than proprietary MANO offerings.

To be fair, I didn't completely cook this up. See China Mobile's presentation from the June OPNFV plugfest for similar thoughts (they didn't explicitly mention ONAP though).

Do you agree/disagree with me? Am I making sense here? I'd love to hear your feedback.

Amar Kapadia


The 2nd release of the Linux Foundation Open Network Automation Platform (ONAP) project, titled Beijing, occurred on 6/12.

The operators and vendors I’ve talked to ask questions such as: “Is ONAP going to make it? … With other proprietary orchestrators further ahead in terms of functionality and VNF support, should I bother with ONAP? ... Is ONAP mature enough?” My answer is simple — remember CloudStack, Eucalyptus, Nimbula? Not many people remember these names. These were competitors to OpenStack up until 2013/2014. Users had similar concerns regarding OpenStack at the time, but OpenStack is the de facto VIM in 2018 (that could change with Kubernetes, but that's a topic for another day). Similarly, with solid community momentum, significant operator participation and a holistic architectural approach, I believe ONAP will end up being the de facto network automation (aka MANO++ aka orchestrator) software stack.

What's in the release, you ask? First, given the name Beijing, I just had to find an appropriate Chinese proverb. The proverb 小洞不补,大洞吃苦 apparently means: "If small holes aren't fixed, then big holes will bring hardship." I think this sums up the Beijing release quite well. A large percentage of the Beijing release comprises work done under the covers (non-functional enhancements) to improve the overall quality of ONAP.

Having said that, I view Beijing enhancements in three buckets:

Non-functional improvements are a major part of the Beijing release. In fact, S3P (Scalability, security, stability and performance) was a major area of focus for each of the 31 ONAP projects.

Additionally, the Beijing release has a number of important functional enhancements. The first enhancement was blueprint enrichment: manually triggered scaling (vDNS, VoLTE blueprints), change management (vCPE blueprint) and policy-driven VNF placement (vCPE blueprint). The last item also included hardware platform awareness (HPA), which is critical to achieving the right price/performance characteristics for your network service. Second, ONAP northbound APIs are in the process of being harmonized with the MEF Allegro, Legato and Interlude APIs and the TMForum 633, 641, 638, 640 and 652 APIs. This API harmonization is very important for providing consistent APIs to OSS/BSS. Third, the OOM project, which deploys a containerized version of ONAP using Kubernetes (k8s) and Helm, has been improved such that all ONAP components, including DCAE, are now containerized and can be installed and configured using k8s.

Lastly, the Beijing release has two new projects. The MUSIC project provides standard services to distribute applications across multiple clouds. The key idea of MUSIC is to help an application get to 5x9s availability with <=3x9s of infra (I'm sure that's music to your ears ;-). The Benchmark project tests the performance of ONAP across functionality, scalability and security with the goal of finding bottlenecks in these areas.

Want to learn more? Join our “What's new in ONAP Beijing Release?” webinar on 7/24 at 9:00AM Pacific Time.

Ready for more formal learning? Check out our training services.

Ready to get your hands dirty? View our products & services.