Aarna.ml Blog

Pavan Samudrala

O-RAN SMO & ODL UI Development Setup

The O-RAN SMO platform can be accessed through various user interfaces (UIs), including the OpenDaylight (ODL) UI. OpenDaylight is an open source software-defined networking (SDN) controller platform that provides a modular architecture for network automation and orchestration.

The ODL UI for O-RAN SMO provides a web-based graphical user interface for managing and orchestrating O-RAN network services. It allows users to visualize the network topology, configure network elements, and monitor network performance. The ODL UI also provides a REST API that can be used for programmatic access to the O-RAN SMO platform.
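
As an illustration of the programmatic path, the RESTCONF API can be queried directly. The sketch below is only indicative: the exact path, port, and default admin credentials vary by ODL/SDNR release and are assumptions here.

# Query the NETCONF topology via ODL's RESTCONF API (path and credentials
# are release-dependent assumptions; adjust for your deployment):
curl -u admin:admin http://<sdnr-host>:8181/restconf/operational/network-topology:network-topology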

This document describes how to build and deploy the ODL UI locally for development.

Requirements:

  1. Node.js (v12.16.3)
  2. Yarn (1.22.17, latest at the time): npm install --global yarn

Steps to build and run the UI locally

  1. Clone the repo: git clone https://github.com/onap/ccsdk-features
  2. Change the authentication from oauth to basic in
    ~/ccsdk-features/sdnr/wt/odlux/framework/src/index.dev.html
  3. Change the proxy targets in ~/ccsdk-features/sdnr/wt/odlux/framework/webpack.config.js from "http://sdnr:8181" to the IP address of the server where the API gateway is running. This is used by the UI to reach the REST endpoints, e.g., target: "http://54.242.17.220:30181".
  4. Now we need to build the base app, which will wrap all the other apps such as connect, faultApp, etc.
    Go to ccsdk-features/sdnr/wt/odlux/framework and run the commands below:
  • yarn
  • yarn run vendor:dev
  • yarn run build:dev
  5. After this, a dist folder will be created at ccsdk-features/sdnr/wt/odlux. Now we can build the apps we want by going to the respective location and running yarn build:dev, e.g., go to ccsdk-features/sdnr/wt/odlux/apps/connect and run yarn build:dev. For the UI to come up we need at least “connect” and “faultApp”. This can be changed in ~/ccsdk-features/sdnr/wt/odlux/framework/src/index.dev.html
  6. Once we have built the required apps, we can run the GUI with the commands below.
  • cd ccsdk-features/sdnr/wt/odlux/framework
  • yarn start
  • GUI can be accessed from localhost:3100
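
Putting the steps together, a typical local build-and-run sequence looks like the sketch below. It assumes the repo was cloned to ~/ccsdk-features and that the app directory names match those used above; adjust the paths if your checkout differs.

# Build the framework (base app) first:
cd ~/ccsdk-features/sdnr/wt/odlux/framework
yarn
yarn run vendor:dev
yarn run build:dev

# Build the minimum set of apps the UI needs (directory names assumed from the steps above):
cd ../apps/connect && yarn build:dev
cd ../faultApp && yarn build:dev

# Start the dev server; the GUI is served at http://localhost:3100:
cd ../../framework
yarn start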

Reference: Using the OpenDaylight User Interface (DLUX) – OpenDaylight Oxygen documentation

If you have any questions about this, please contact us here: https://www.aarna.ml.com/about-us#contact

Aarna

The New Middle Mile Report Is Here!

"The middle mile holds a unique position in the communication infrastructure. This strategic Goldilocks location is the right distance from end-users and data centers, allowing it to facilitate faster communication and host next-generation services that demand a better experience. Given the importance of achieving application performance goals and the increasingly tight coupling between computing and networking, we propose examining the middle mile through a joint networking and computing lens, redefining the New Middle Mile (NMM)." - New Middle Mile Report (Page 1)

The New Middle Mile Report from AvidThink and Converge Network Digest is here! This has been in the works for a while and represents a welcome addition to today’s networking research landscape. Traditionally, the Middle Mile has been a static, manual, and opaque environment. But the definition of the middle mile is changing to reflect the strategic importance of this infrastructure location between edge and cloud.

The New Middle Mile is expanding largely through a renaissance in 3 main areas:

1. MultiCloud Networking

2. Storage Repatriation

3. Cloud Edge Machine Learning

Stay tuned for an upcoming blog series in each of these areas.

In the meantime, learn more in this short video from Amar Kapadia, Co-founder and CEO of Aarna.ml.

Download the full report here.

The New Middle Mile Report 2023

Pavan Samudrala

O-RAN SMO & TLS Connectivity in O-RAN Networks

Service Management & Orchestration (SMO) is the O-RAN component that oversees all orchestration, management, and automation of RAN elements in O-RAN networks. It supports the O1, A1, and O2 interfaces and uses TLS (Transport Layer Security) to secure communication with devices in the O-RAN network. Learn more about O-RAN Architecture and the SMO. This blog is a primer on how we set up SMO & TLS connectivity.

TLS is a cryptographic protocol that provides secure communication over a network by encrypting the data that is transmitted between two endpoints. In the case of O-RAN SMO, TLS is used to encrypt the communication between the SMO and devices in the network to ensure that the data is protected from unauthorized access.

First, set up the SMO GUI with SSL. SDNR is already configured and listening for HTTPS on port 8443, but that port is not forwarded in the service by default. To use SMO over HTTPS, we need to forward port 8443 from the SDNR pod. Next, enable the TLS connection on the RU/DU simulators and forward the TLS port on the RU/DU. Then set up TLS connectivity on the NETCONF device. To support clients connecting over TLS, the configuration files tls_keystore.xml, tls_truststore.xml, and tls_listen.xml need to be merged into the sysrepo configuration of the modules ietf-keystore, ietf-truststore, and ietf-netconf-server, respectively (a command sketch follows the list below). After doing so, a NETCONF client can connect using the client.crt certificate and the client.key private key, with the ca.pem CA certificate set as trusted. Finally, configure SMO to connect to the NETCONF device using the TLS certificates. We need the following certs:

  • Client Private Key
  • Client Cert
  • CA Certificate (Trusted CA)
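
The commands below are a minimal sketch of two of the configuration steps described above: forwarding SDNR's HTTPS port and merging the TLS configuration files into sysrepo on the NETCONF device. The namespace, service name, and exact sysrepocfg options are assumptions and may differ per deployment and sysrepo version; adjust them to your environment.

# Forward SDNR's HTTPS port 8443 to the local machine (namespace and
# service name are assumptions; check your SMO deployment):
kubectl -n onap port-forward svc/sdnr 8443:8443

# On the NETCONF device, merge the TLS configuration files into the
# corresponding sysrepo modules (flags may vary by sysrepo version):
sysrepocfg --edit=tls_keystore.xml -f xml -m ietf-keystore
sysrepocfg --edit=tls_truststore.xml -f xml -m ietf-truststore
sysrepocfg --edit=tls_listen.xml -f xml -m ietf-netconf-server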

To establish a TLS-based connection, we need to perform the following steps on SMO:

  • Connect to SMO’s RestConf interface
  • Add Keystore Entry
  • Add Private Key
  • Add trusted CA Certificate
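
As a quick sanity check, you can verify that the device accepts a TLS handshake with these certificates before configuring SMO. Port 6513 is the IANA-assigned default for NETCONF over TLS, though your device may listen on a different port.

# Test the TLS listener on the NETCONF device with the client credentials
# (device address and port are placeholders):
openssl s_client -connect <device-ip>:6513 -cert client.crt -key client.key -CAfile ca.pem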

It is important to note that TLS connectivity is just one aspect of securing the O-RAN network. Other security measures such as authentication, authorization, and access control are also needed to ensure the security of the network.

Aarna.ml is an active contributor to the SMO project in the O-RAN Alliance and has developed the industry's first open source SMO. Contact us to learn more.

Brandon Wick

The Network-as-a-Service Future

The power of “automation” is on display everywhere across the tech industry these days, from self-repairing robots, to AI tools writing code, to CI/CD deploying software into robust applications in production environments. The goals are simple – Simplicity, Speed, and Cost Savings. 

Network-as-a-Service

Today’s Communications Service Providers (CSPs), which carry data – the vital lifeblood of modern economies – between systems and users, are no exception. Cloud computing has proven just how quickly IT can grow and transform, and it sets a high standard for the networking industry to follow. To avoid becoming bottlenecks, today’s communications networks, both public and private, need to fully embrace automation. The key to doing this is open source, cloud native, disaggregated “Network-as-a-Service” (NaaS) solutions. These are deployed on premises, in a hybrid cloud, and/or across multiple clouds, and they support workloads in edge and data center environments. This concept is laid out comprehensively by LF Networking and IBM in the ebook Network-as-a-Service – A Practical Approach for CSPs to Implement Cloud-Native Multi-Cloud Networks for Enterprises, and we strongly encourage you to read it.

Value Add

Massive industries don't transform themselves unless there is compelling value greater than the costs of transformation. In this case, the value of network automation derives primarily from:

  • Acceleration of digital services 
  • Highly distributed and accessible support
  • Adaptable security 
  • Operational efficiency 
  • Open-standards based solutions

The value of this transformation applies to all the major players in the communications industry – from CSPs, who can avoid vendor lock-in and capture more of the value chain – to Hyperscalers, which need to support complex workloads spread across cloud locations, edge, and on-premises – to Enterprises looking to move away from the business of managing their networks to focus on their core competencies – to the ecosystem of Vendors and SIs seeking to provide best-of-breed, interoperable solutions that grow the size of the overall market.

Open Source

Open source projects also serve as ‘de facto’ standards and ‘reference implementations’ allowing ecosystem participants to adopt emerging standards into their products and services. Several LF Networking projects such as ONAP, EMCO and Nephio are already addressing the needs of Enterprise NaaS, providing necessary functionality and features, including: 

  • End to end network orchestration
  • Observability and closed loop automation
  • Distributed edge workload placement
  • Intent based network configuration
  • Uniform automation control plane 

The Role of Aarna.ml

We at Aarna see our role in the industry as open source practitioners, pioneers, and collaborators. We literally wrote the book on ONAP, have assumed primary leadership in EMCO, and are community leaders in the exciting new Nephio project. We see open source as the key to a Network-as-a-Service future and have built our flagship products – AMCOP and AES – on these open source projects along with others, including those from the CNCF and the O-RAN Alliance SC. More specifically, AES is a commercialized version of the Akraino PCEI Blueprint, which can be thought of as a Network-as-a-Service use case. Our work around the blueprint was awarded first prize at last year’s MEC Hackathon along with our partners at Equinix. This approach brings significant flexibility to enterprise networks and the ability to run cutting-edge applications such as AI, IoT, and more.

Keep an eye on this exciting space as it evolves. If you’d like to learn more about the state of open source networking today, contact us for a free consultation. 

Reference:

Ebook: Network-as-a-Service – A Practical Approach for CSPs to Implement Cloud-Native Multi-Cloud Networks for Enterprises

Amar Kapadia

RAN-in-the-Cloud: Why must a successful O-RAN implementation run on a cloud architecture?

A new radio access network standard called Open RAN (O-RAN) promises to accelerate the disaggregation of 5G networks. Until recently, the RAN was completely closed, creating vendor lock-in and allowing a handful of vendors to command high prices. Not only did this cause the cost of building out a RAN site to go up, but it also created inflexible and closed networks that did not allow new monetizable services. Mobile Network Operators (MNOs) and governments decided that this was not an ideal situation, which led to the formation of the O-RAN Alliance, a standards development organization (SDO) that creates detailed specifications for the various internal interfaces of a RAN, thus allowing for its disaggregation. Disaggregation is expected to foster innovation, result in new monetizable services, and reduce costs.

What does the future hold for O-RAN? I think there are three possibilities:

  1. O-RAN is a failure
  2. O-RAN gets a hollow victory
  3. O-RAN is a true success

Let us evaluate each scenario.

Scenario #1 – O-RAN is a failure: This could happen if O-RAN is unable to meet or exceed existing proprietary RAN solutions on key performance, power, and cost metrics. I think the probability of this outcome is relatively low. Technologies such as NVIDIA Aerial and alternatives from other semiconductor vendors will ensure that O-RAN performs just as well as proprietary RAN, at similar or lower price points. Nevertheless, we cannot eliminate this possibility yet, as we need to see more proof points.

Scenario #2 – O-RAN gets a hollow victory: If O-RAN merely matches proprietary RAN on key metrics and offers disaggregation as its only differentiator, there is a significant danger that incumbents will “O-RAN-wash” their products and the status quo will persist for 5G. The incumbents will call their vertically integrated products O-RAN compliant while in reality they will only support a few open interfaces. Interoperability with third parties will be suboptimal, forcing MNOs to purchase a vertically integrated stack. In this case, there simply won’t be enough leverage to force the incumbent vendors to truly open up, nor will there be enough incentive for MNOs to try out a new vendor.

Scenario #3 – O-RAN is a true success: For this to happen, O-RAN based implementations must provide greater value than proprietary RAN. Let’s explore this scenario.

Embracing Cloud Architecture will be a Game Changer

For O-RAN based implementations to provide more value than proprietary RAN, they must use an end-to-end cloud architecture and be deployed in a true datacenter cloud or edge environment; hence the term “RAN-in-the-Cloud”. The simple reason is that a cloud can run multiple workloads, meaning cloud-hosted O-RAN can support multi-tenancy and multiple services on the same infrastructure. Since RAN implementations are built for peak traffic, they are underutilized, typically running at <50% utilization. In a traditional architecture that uses specialized acceleration or an appliance-like implementation, nothing can be done to improve this utilization number. However, in a RAN-in-the-Cloud implementation, the cloud can run other workloads during periods of underutilization. An O-RAN implementation built by fully embracing cloud principles will function in a far superior manner to proprietary RAN, as utilization can be optimized. With increased utilization, the effective CAPEX and power consumption will be significantly reduced. The RAN will become flexible and configurable, e.g., 4T4R, 32T32R, or 64T64R, or TDD/FDD, on the same infrastructure. As an added benefit, when the RAN is underutilized, MNOs can pivot their GPU-accelerated infrastructure to other services such as edge AI, video applications, CDN, and more. This will improve the monetization of new edge applications and services. Overall, these capabilities will provide MNOs with the leverage they need to force the incumbents to fully comply with O-RAN and/or try out new and innovative O-RAN vendors.

To be considered RAN-in-the-Cloud, the O-RAN implementation must use:

●      General purpose compute with a cloud layer such as Kubernetes

●      General purpose acceleration, for example NVidia GPU, that can be used by non-O-RAN workloads such as AI/ML, video services, CDNs, Edge IOT, and more

●      Software defined xHaul and networking

●      Vendor neutral SMO (Service Management and Orchestration) that can perform the dynamic switching of workloads from RAN→non-RAN→RAN; the SMO[1] also needs the intelligence to understand how the utilization of the wireless network varies over time. The Aarna.ml Multi Cluster Orchestration Platform SMO is a perfect example of such a component.

You can see an example of this architecture presented during the upcoming session at GTC this week: “Big Leap in VRAN: Full Stack Acceleration, Cloud First, AI and 6G Ready [S51797]”. In my view, this reference architecture will drive O-RAN to its full potential and is the type of architecture MNOs should be evaluating in their labs.

References:

●      NVIDIA Blog: https://developer.nvidia.com/blog/ran-in-the-cloud-delivering-cloud-economics-to-5g-ran/

●      Video: https://www.youtube.com/watch?v=FrWF1L8jI8c


[1] Strictly speaking, the SMO as defined by the O-RAN Alliance is only applicable for the RAN domain. However, we are using the term SMO more broadly to include orchestration of other domains such as edge computing applications, transport, and more.

Ankit Goel

Deploying Juniper cRPD on Ubuntu nodes with BGP peering

Are you looking to deploy Juniper cRPD docker containers on Ubuntu nodes with BGP peering? Based on our recent experience, we've set up this comprehensive wiki page to share our learnings with step-by-step instructions and helpful tips to ensure a successful deployment.

This detailed guide walks you through the entire process, from installing Docker on two different Ubuntu 20.04 nodes to configuring the cRPD containers and setting up and verifying BGP peering.
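
As a rough illustration of the kind of commands the wiki covers (the image file name, tag, and volume names below are placeholders, and cRPD images are distributed by Juniper, so follow the wiki for the exact procedure), bringing up a cRPD container on one of the nodes looks roughly like this:

# Load the cRPD image obtained from Juniper (file name is a placeholder):
docker load -i junos-routing-crpd-docker-<version>.tgz

# Run cRPD with persistent config/log volumes; --privileged and host
# networking let cRPD program routes on the node:
docker run -d --name crpd01 --privileged --net=host \
  -v crpd01-config:/config -v crpd01-varlog:/var/log \
  crpd:<version>

# Enter the Junos CLI to configure BGP peering toward the other node:
docker exec -it crpd01 cli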

Give this a shot and let us know what you find out.

Read the Wiki