" "
Aarna.ml

Resources

resources

Blog

Amar Kapadia

Insights from Mobile World Congress 2024

The Mobile World Congress (MWC) 2024 in Barcelona brought together leaders from the telecommunications industry to discuss the future of digital connectivity. This event served as a platform for sharing advancements and exploring new technologies that are shaping the way we connect.

Amar Kapadia and Subramanian Sankaranarayanan at IEEE booth

Aarna.ml participated in MWC 2024, with Amar Kapadia and Subramanian Sankaranarayanan showcasing our work in edge and private 5G management. In collaboration with NVIDIA and IEEE, we demonstrated the capabilities of NVIDIA Grace Hopper servers and AMCOP's orchestration abilities. These demonstrations aimed to show practical applications of our technology in improving network performance and scalability. For those interested, we have made a demo video available here.

The congress highlighted several key developments in the industry. The evolution of 5G technology was a major topic, with discussions on how it can improve connectivity speeds and reduce latency. The use of artificial intelligence (AI) in network operations and service delivery was also emphasized, pointing towards more efficient telecommunications infrastructure. Quantum computing's potential in enhancing data security and processing power was explored, and there was a notable focus on sustainability, with talks on making operations more energy-efficient. These discussions at MWC 2024 indicate a move towards more efficient, secure, and environmentally friendly digital connectivity.

MWC 2024 offered a clear view of the current state and future possibilities in telecommunications, highlighting the industry's ongoing efforts to improve and innovate. Aarna.ml is excited to contribute to this progress, focusing on solutions that enhance connectivity and pave the way for future advancements.

Sriram Rupanagunta

Empowering Edge Computing: The Imperative Role of Automation Solutions

We are often asked why automation or orchestration is needed for Edge computing at all, since Edge computing is not a new concept. In this blog, you'll learn about the role of an orchestrator in unleashing the true potential of Edge computing environments.

Recapping the uniqueness of Edge environments, the blog 'Why Edge Orchestration is Different' by Amar Kapadia highlights the following attributes:

  1. Scale
  2. Dynamic nature
  3. Heterogeneity
  4. Dependency between workloads and infrastructure

We will examine the challenges each of these attributes poses, which will make it obvious why automation plays a critical role. Also, as explained in the previous blog, Edge environments include both the infrastructure and the applications that run on it (physical, virtual, or cloud-native), so all of the above factors need to be considered for both in the case of Edge computing.

The scale of Edge environments clearly rules out operating them manually, since that would involve bringing up each environment, with its own set of initial (day-0) configurations, independently of the others. The problem is compounded when these environments need to be managed on an ongoing basis (day-N). This also raises the challenge of the dynamic nature of these environments, where configurations can keep changing based on the business needs of the users. Each such change can mean tearing down the previous environment and bringing up another, possibly with a different set of configurations. Some of these environments may take days to bring up, requiring expertise from different domains (which is another challenge), and any change means a few more days to bring the environment up again, even for a potentially minor one.

Another challenge with Edge environments is their heterogeneity, unlike the public cloud, which is generally homogeneous. This means that multiple tools, possibly from different vendors, need to be used. These tools could be proprietary or standard tools such as Terraform, Ansible, Crossplane, and so on. Each vendor of the infrastructure or the applications/network functions could be using a different tool, and even where they use standard tools, there may be multiple versions of the artifacts (e.g., Terraform plans, Ansible scripts) that need to be dealt with.
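
To make this concrete, the sketch below shows one way an orchestrator might dispatch each vendor's artifacts to the right tool. This is our illustration of the pattern, not AMCOP's implementation; the `Driver` abstraction and the artifact layout are assumptions.

```python
import subprocess
from abc import ABC, abstractmethod

class Driver(ABC):
    """Illustrative abstraction over one vendor's tooling."""
    @abstractmethod
    def apply(self, artifact_dir: str) -> None: ...

class TerraformDriver(Driver):
    def apply(self, artifact_dir: str) -> None:
        # Standard Terraform CLI; per-vendor plan versions would still
        # need to be tracked alongside each artifact directory.
        subprocess.run(["terraform", "init"], cwd=artifact_dir, check=True)
        subprocess.run(["terraform", "apply", "-auto-approve"], cwd=artifact_dir, check=True)

class AnsibleDriver(Driver):
    def apply(self, artifact_dir: str) -> None:
        subprocess.run(["ansible-playbook", "site.yml"], cwd=artifact_dir, check=True)

# One orchestrator, many vendor artifacts, each dispatched to the right tool.
DRIVERS = {"terraform": TerraformDriver(), "ansible": AnsibleDriver()}

def roll_out(artifacts: list[dict]) -> None:
    for artifact in artifacts:
        DRIVERS[artifact["tool"]].apply(artifact["dir"])
```

A registry like `DRIVERS` is also the natural place to plug proprietary vendor tools in behind the same interface.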

The workloads on the edge may need to talk to or integrate with applications in a central location or cloud, which requires setting up connectivity between the edge and other sites as desired. The orchestrator should also be able to provision this in an automated manner.

Lastly, as we saw in the previous blog, there may be dependencies between the infrastructure and the workloads, as well as between workloads themselves (e.g., Network Functions such as 5G that are used by other applications). This makes it extremely difficult to bring them up manually or with home-grown solutions.

All these challenges mean that unless the Edge environment is extremely small and contained, it will need a sophisticated automation framework, i.e., an orchestrator. The only scalable way to accomplish this is to specify the topology of the environment as an Intent, which is rolled out by the Orchestrator. In addition, the Orchestrator should constantly monitor the deployment and make the necessary adjustments (constant reconciliation) to the topologies deployed in the Edge environment. When a change is required, a new intent (configuration) is specified, which should be rolled out seamlessly. The Orchestrator should also be able to work with tools such as Terraform/OpenTofu, Ansible, and so on, as well as provide ways to integrate with proprietary vendor solutions.
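
As a rough illustration of constant reconciliation (a simplified sketch of the general pattern, not AMCOP's actual code), the orchestrator can periodically diff the observed state against the declared intent and roll out only the delta; the `observe` and `apply_change` hooks are placeholders for environment-specific logic:

```python
import time

def reconcile_loop(intent: dict, observe, apply_change, interval_s: int = 30):
    """Converge the Edge environment toward the declared intent.

    `observe` and `apply_change` are placeholder hooks for environment-specific
    logic (querying clusters, invoking Terraform/Ansible, and so on).
    """
    while True:
        observed = observe()
        # Diff the declared topology against what is actually running.
        drift = {k: v for k, v in intent.items() if observed.get(k) != v}
        for key, desired in drift.items():
            apply_change(key, desired)  # roll out only the delta
        time.sleep(interval_s)
```

A new intent simply replaces the old one; the same loop then rolls the change out seamlessly.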

At Aarna.ml, we offer an open source, zero-touch, intent-based orchestrator, AMCOP (also offered as a SaaS, Aarna Edge Services or AES), for lifecycle management, real-time policy, and closed loop automation for edge and 5G services. If you’d like to discuss your orchestration needs, please contact us for a free consultation.

Sriram Rupanagunta

Enabling RAGOps

This is a follow-up to the earlier blog “From RAGs to Riches” from my colleague, Amar Kapadia.

Setting up GenAI for an enterprise involves multiple steps, which can be categorized as follows:

  • Infrastructure Orchestration, which includes the servers/GPUs with cloud software, virtualization tools, and the networking infrastructure. There may be additional requirements depending on the enterprise's needs, such as:
      ◦ SD-WAN setup between their locations
      ◦ Access to enterprise data in their SaaS infrastructure (Confluence/Jira/Salesforce, etc.)
      ◦ Connectivity to public clouds, if needed
      ◦ Connectivity to the repos where the GenAI models are hosted (Hugging Face, etc.)
      ◦ If this is set up on cloud edge DCs (such as Equinix), there may be a need to configure the fabric to connect to other edge locations or the public clouds, using network edge devices (routers/firewalls that run as xNFs)
  • GenAI Orchestration, which includes bringing up the GenAI tools, either for training or for inferencing.
  • RAG Orchestration, which includes building the necessary Vector DB from various enterprise sources and using it as part of the inferencing pipeline.

All of the above requires a sophisticated Orchestrator that can work in a generic manner and provide single-click (or single-command) functionality.

The flow will be as follows: 

  • The Admin creates a high-level Intent that describes the necessary infrastructure, connectivity requirements, site details, and tools (see the sketch after this list)
  • The Orchestrator takes the Intent as input, and sets up the necessary infrastructure and applications
  • The Orchestrator also monitors the infrastructure/applications for failures or performance issues, and makes the necessary adjustments (it could work with one of the existing tools, such as TMS, for this function).
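
For intuition, here is what such a high-level intent might look like in code. The shape and field names are hypothetical illustrations, not AMCOP's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class GenAIIntent:
    """Hypothetical high-level intent; field names are illustrative."""
    site: str                                                # target site/DC
    gpu_nodes: int                                           # servers/GPUs to provision
    connectivity: list[str] = field(default_factory=list)    # e.g., "sd-wan", "fabric"
    model_repo: str = "huggingface"                          # source of GenAI models
    pipelines: list[str] = field(default_factory=list)       # e.g., "rag", "inference"

# The Admin authors something like this; the Orchestrator takes it as input,
# provisions the infrastructure and applications, then keeps monitoring and
# adjusting the deployment.
intent = GenAIIntent(
    site="edge-dc-1",
    gpu_nodes=2,
    connectivity=["sd-wan", "fabric"],
    pipelines=["rag", "inference"],
)
```

The Admin declares what is needed; the Orchestrator owns how it is provisioned and kept healthy.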

I hope this sheds some light on the topic and gives some clarity on how to go about setting up the underlying infrastructure for RAGOps. 

AMCOP can orchestrate AI (and more specifically, GenAI) workloads on various platforms. At Aarna.ml, we offer an open source, zero-touch orchestrator, AMCOP (also offered as a SaaS, AES), for lifecycle management, real-time policy, and closed loop automation for edge and 5G services. If you’d like to discuss your orchestration needs, please contact us for a free consultation.


Amar Kapadia

A Glimpse from PTC'24

At the recent Pacific Telecommunications Council (PTC) 2024 event held in Honolulu, Hawaii, Subramanian Sankaranarayanan, AVP at Aarna.ml, took the stage to deliver an insightful talk on “Multi-Domain Edge Connectivity Services for Equinix Metal, Network Edge, Fabric, and Multi-Cloud.”

Subbu’s presentation centered on the dynamic evolution of data centers towards Infrastructure-as-a-Service (IaaS) and the complexities inherent in multi-vendor IaaS deployments. He highlighted the innovative solutions offered by the Linux Foundation Edge Akraino PCEI, an award-winning blueprint, for orchestrating and managing cloud edge infrastructures.

A focal point of his discussion was Aarna Edge Services (AES), a SaaS platform instrumental in simplifying the deployment and orchestration of infrastructure, apps, and network services at the cloud edge. Subbu illustrated various use cases of AES, demonstrating its efficiency in reducing deployment time from weeks to less than an hour and optimizing cloud-adjacent storage and GenAI processes.

The session provided valuable insights into the future of cloud and edge computing, emphasizing the importance of seamless integration and efficient management in today's interconnected digital world.


Subbu's expertise and the innovative approaches discussed at PTC'24 paint an exciting picture of the future of cloud edge management and multi-cloud deployments, promising a more streamlined, efficient, and interconnected digital ecosystem.


We are grateful to the Pacific Telecommunications Council (PTC) for this amazing opportunity, the memorable exposure, and the time we spent at PTC’24.

If you couldn't connect with us at the event, feel free to contact us to arrange a meeting.

Amar Kapadia

Exploring Edge-Native Application Design Behaviors

In December 2023, the tech community welcomed a groundbreaking whitepaper titled "Edge-Native Application Design Behaviours." This comprehensive document delves into the dynamic realm of Edge-native application design, providing invaluable insights for developers and architects navigating the unique challenges of Edge environments.

Evolution from CNCF IoT to Edge-Native Principles

Building upon the foundational principles outlined in the CNCF IoT Edge Native Application Principles Whitepaper, this latest release adapts and refines these principles specifically for Edge environments. The result is a guide that serves as an indispensable resource for those working on Edge-native applications, offering practical guidelines and illuminating insights.

Navigating Key Aspects of Edge-Native Design

The whitepaper meticulously explores key aspects crucial for Edge-native design, unraveling the intricacies of concurrency, scale, autonomy, disposability, capability sensitivity, data persistence, and operational considerations. A particular highlight is a real-world scenario, illustrating the application of these design behaviours in a tangible context.

Decoding Edge Native Application Design

Understanding Edge-native application design necessitates recognizing its departure from cloud-native design. Edges, as autonomous entities, play a pivotal role in ingesting, transforming, buffering, and displaying data locally. Distributed edge components complement these entities, handling functions to reduce bandwidth consumption and adhere to location-based policies.

Design Constraints and Principles

Edge-native applications face distinct design constraints, such as connectivity, data-at-rest, and resource constraints. The whitepaper emphasises the importance of evolving cloud-native application design principles to address these constraints effectively. Key principles include the separation of data and code, stateless processes, share-nothing entities, and the separation of build and run stages.
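
As a toy illustration of two of these principles, stateless processes and the separation of data/configuration from code, consider the following minimal edge worker; the environment variable name and the transformation are our own assumptions, not taken from the whitepaper:

```python
import os

# Configuration comes from the environment, not the code (separation of
# data/config and code). EDGE_BUFFER_URL is an invented name for this sketch.
BUFFER_URL = os.environ.get("EDGE_BUFFER_URL", "http://localhost:8080/buffer")

def handle_reading(reading: dict) -> dict:
    # Transform the data locally, then hand it off to a shared buffer.
    # No state survives this call (stateless process), so any replica can be
    # disposed of and replaced without coordination (disposability).
    transformed = {**reading, "celsius": (reading["fahrenheit"] - 32) * 5 / 9}
    return {"forward_to": BUFFER_URL, "payload": transformed}
```

Because nothing is cached in the process, scale and disposability come essentially for free.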

Guidelines for Edge-Native Development

For developers venturing into Edge-native applications, the whitepaper provides a detailed reference guide. Topics such as concurrency and scale, edge autonomy, disposability, capability sensitivity, data persistence, metrics/logs, and operational considerations are meticulously explored.

A Glimpse into the Future

As the digital landscape evolves, Edge-native application design becomes increasingly vital. The whitepaper not only serves as a guide but also charts a course for future development in this dynamic field. The principles and insights shared pave the way for innovation, ensuring that Edge-native applications are not just efficient but also resilient in the face of evolving technological landscapes.

Click here to download the whitepaper.

Amar Kapadia

From RAGs to Riches

A Unique RAGOps Opportunity for NSPs to Offer RAG to their Enterprise Customers

Enterprises are going to embrace GenAI, of that there is no doubt. GenAI will add value in just about every function of an enterprise. The speed at which an enterprise adopts GenAI will clearly result in a competitive advantage. However, a more durable and lasting competitive moat will result by blending enterprise data with the GenAI model. The more data an enterprise can utilize for GenAI, the deeper their competitive moat. 

There are two options for an enterprise to mix corporate data with the GenAI model:

  1. Fine-tune an existing Foundational Model: In this option, an enterprise fine-tunes a private copy of an existing GenAI Foundational Model (FM) with its own corporate data. Though much simpler than training a new GenAI model (which we are not even considering), this option is difficult for most enterprises. It requires GPUs to the tune of millions of dollars, a high degree of skill to set up Large Language Model Operations (LLMOps) pipelines, and continuous fine-tuning to prevent the model from drifting or getting stale.
  2. Retrieval Augmented Generation (RAG): In this approach, an enterprise uses a lightweight Foundational Model (FM) that has generic natural language processing capability but no real domain knowledge. Users then supplement the prompt with real-time augmented data to get a meaningful result (see the sketch after this list). Finally, RAG can also prevent hallucination by citing the exact data source(s). However, this approach is network heavy, in that each prompt may generate a large amount of traffic to retrieve the relevant data.
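
For intuition, here is a minimal sketch of the retrieval-augmented flow just described. The `vector_db` and `llm` objects are hypothetical placeholders rather than any specific product's API, and `embed` is a dummy stand-in for a real text-embedding model:

```python
def embed(text: str) -> list[float]:
    # Stand-in for a real text-embedding model; a deterministic dummy so the
    # sketch runs without any ML dependency.
    return [float(ord(c)) for c in text[:16].ljust(16)]

def answer_with_rag(question: str, vector_db, llm) -> str:
    # 1. Retrieve the enterprise data most relevant to the prompt
    #    (vector_db.search is a hypothetical client API).
    hits = vector_db.search(embed(question), top_k=3)
    context = "\n\n".join(h.text for h in hits)
    # 2. Supplement the prompt with the retrieved, real-time data.
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    # 3. Cite the exact data sources, which helps prevent hallucination.
    sources = ", ".join(h.source for h in hits)
    return f"{llm.generate(prompt)}\n\nSources: {sources}"
```

Note that the retrieval in step 1 is where the network-heavy traffic comes from: every prompt triggers a fresh fetch of the relevant data.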

In that sense, the two approaches are analogous to the following:

Fine-tuning an FM is akin to tapping into an intelligent employee who has been fully trained in your corporate data. Of course, they need to be trained on an ongoing basis to stay current.

RAG is similar to hiring an intelligent employee/consultant who doesn’t have prior knowledge of any specific domain, but is fast enough to read any information you want in real-time.

Given the above: Most enterprises will use RAG

There are three deployment models for RAG:

  1. Public model – In this option, a public model (e.g., Microsoft's) is used for RAG. The public model will use corporate data to provide the response. The fly in the ointment is the requirement to move all the relevant data to a public GenAI service provider. Some enterprises might be comfortable with this, but most will not be, for a variety of reasons.
  2. Private model in a public cloud – In this approach, an enterprise uses a private FM in a public cloud along with other components such as vector databases. This is convenient but again, all the data needs to be shipped to the public cloud. This is perhaps less scary than the previous option since the data would reside in a private repository; nevertheless, it is a lot to swallow.
  3. Private model in a private cloud – In this option, the enterprise would use a private FM along with other components like a vector database in a private cloud. What makes this approach attractive is that the private cloud already has all the required network connections to internal data sources. However, this approach does require a bit more sophistication on the part of the user to deploy and manage RAG.

From the above, it is clear: A RAG model in a private cloud will dominate

Enter Network Service Providers (NSP)

Unlike ML/LLMOps, which require significant ML expertise, RAG does not. In fact, RAG requires expertise in data connectivity, since the value of a RAG model is directly proportional to the amount of corporate data made available to it. Who better to provide managed RAG than the provider of SD-WAN and managed IP networks?

NSPs are best positioned to offer managed RAG

Getting Started with RAGOps

RAGOps may be summed up as a DevOps-based methodology to deploy and manage a RAG model. RAGOps requires the following steps:

  • Deploy virtual infrastructure with GPUs to host the RAG model. This may be a combination of virtual compute (containers, VMs), storage, virtual networks, and Kubernetes/hypervisor layer.
  • Deploy an FM along with a vector database, text embedding, and other data sources.
  • Deploy supporting guardrail/management/monitoring components.
  • Set up data pipelines to collect enterprise data from diverse sources and populate the vector database (a sketch of this step follows the list).
  • Monitor and manage (upgrade, scale, troubleshoot) the environment over Day 1 and Day 2 as needed.
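
As a minimal sketch of the data-pipeline step, assuming a generic vector-database client with an `upsert` method and a deployed embedding model (both hypothetical):

```python
def chunk(text: str, size: int = 500) -> list[str]:
    # Split source documents into fixed-size chunks (no overlap, for brevity).
    return [text[i:i + size] for i in range(0, len(text), size)]

def ingest(documents: dict[str, str], vector_db, embed) -> None:
    """Populate the vector database from enterprise sources.

    `documents` maps a source id (e.g., a Confluence page URL) to its text;
    `embed` and `vector_db.upsert` stand in for the deployed text-embedding
    model and a generic vector-database client.
    """
    for source, text in documents.items():
        for i, piece in enumerate(chunk(text)):
            vector_db.upsert(
                id=f"{source}#{i}",
                vector=embed(piece),
                metadata={"source": source, "text": piece},
            )
```

Scheduling this against sources like Confluence or Salesforce, and re-running it as the data changes, is what keeps the RAG model current.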

Since NSPs can provide data connectivity, they hold a competitive advantage. However, the competitive advantage NSPs hold will not last forever. For this reason:

NSPs need to start RAGOps PoCs for enterprise customers ASAP

Next Steps

Contact us for help on getting started with RAGOps.

The Aarna.ml Multi Cluster Orchestration Platform orchestrates and manages edge environments, including support for RAGOps. We have specifically created an offering that is suitable for NSPs by focusing not just on the FM and related ML components, but also on the infrastructure, e.g., using Equinix Metal to speed up deployment and Equinix Fabric for seamless data connectivity. As an NVIDIA partner, we have deep expertise with server platforms like the NVIDIA Grace Hopper and platform components such as NVIDIA Triton and NeMo.