AI Cloud GPU-as-a-Service Providers
Enterprises using GPUs
Startups using NVIDIA HGX
Data centers, GPU-as-a-Service cloud or edge providers, and private cloud or edge providers: Build your own multi-tenant AI Cloud with our GPU-as-a-Service software stack for the NVIDIA Hopper and Blackwell architectures. Solve for network, storage, and GPU isolation; Day-2 management; user APIs; and spot instance creation.
If you are an NVIDIA Cloud Partner (NCP), a GPU-as-a-Service cloud provider, or an IT/Ops practitioner building a private AI cloud or edge, the Aarna Multi Cluster Orchestration Platform (AMCOP) can deliver true multi-tenancy with network isolation for both InfiniBand and Ethernet, along with storage and GPU isolation, while leveraging the existing features of NVIDIA Base Command Manager.
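To make the multi-tenancy idea concrete, here is a minimal sketch (not AMCOP's actual API; all names and resource types are hypothetical) of the bookkeeping a multi-tenant AI cloud must do: each tenant gets its own network segment, storage pool, and a set of GPUs that may not overlap with any other tenant's.

```python
from dataclasses import dataclass, field

@dataclass
class Tenant:
    """A cloud tenant with dedicated, isolated resource pools (illustrative model)."""
    name: str
    network_segment: str   # e.g. an InfiniBand partition key or an Ethernet VLAN
    storage_pool: str      # dedicated storage namespace
    gpu_ids: set = field(default_factory=set)  # GPUs pinned to this tenant

class MultiTenantCloud:
    """Tracks per-tenant resource assignments and rejects GPU overlaps."""
    def __init__(self):
        self.tenants = {}
        self.assigned_gpus = set()

    def add_tenant(self, tenant):
        overlap = tenant.gpu_ids & self.assigned_gpus
        if overlap:
            raise ValueError(f"GPUs {overlap} already assigned to another tenant")
        self.assigned_gpus |= tenant.gpu_ids
        self.tenants[tenant.name] = tenant

cloud = MultiTenantCloud()
cloud.add_tenant(Tenant("acme", "vlan-101", "pool-acme", {0, 1, 2, 3}))
cloud.add_tenant(Tenant("globex", "vlan-102", "pool-globex", {4, 5, 6, 7}))
```

A real orchestrator enforces this at the fabric level (InfiniBand partitions, VLANs/VXLANs, GPU passthrough), but the invariant is the same: no resource is ever visible to two tenants at once.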
Explore the convergence of AI/ML, cloud, and edge computing, and the benefits of running machine learning workloads at the cloud edge with Aarna Edge Services (AES) — the number one zero-touch orchestrator delivered as a service.
AI/ML applications today, such as those built on large language models (LLMs), are mostly run on-prem or in the public cloud. Both approaches have pros and cons. But edge, cloud, and AI/ML have converged to the point where there is now a third way: applying machine learning at the cloud edge. Benefits of this approach include:
Computer vision can generate large amounts of data. With hundreds or thousands of cameras deployed, the traffic can easily add up to multiple gigabits per second. Moving this amount of data to the public cloud for computer vision ML processing can be quite expensive. An alternative is to run ML processing at the cloud edge, i.e., the colocation or datacenter location where the last mile access network terminates.
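A back-of-the-envelope calculation shows how quickly camera traffic adds up. The camera count and per-stream bitrate below are illustrative assumptions, not vendor figures:

```python
# Illustrative estimate: aggregate camera bandwidth and monthly data volume.
CAMERAS = 1000         # number of deployed cameras (assumption)
MBPS_PER_CAMERA = 4    # typical compressed video stream (assumption)

aggregate_gbps = CAMERAS * MBPS_PER_CAMERA / 1000
print(f"Aggregate traffic: {aggregate_gbps} Gbps")  # 4.0 Gbps

# Data volume over a 30-day month, in terabytes
seconds_per_month = 30 * 24 * 3600
monthly_tb = aggregate_gbps * seconds_per_month / 8 / 1000
print(f"Monthly volume: {monthly_tb:.0f} TB")  # 1296 TB
```

At public cloud data transfer prices, moving over a petabyte per month is a significant recurring cost; terminating that traffic at the cloud edge avoids most of it.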
Powered by large language models (LLMs), generative AI programs like ChatGPT are revolutionizing the way we live and work. The cloud edge in a private cloud is an ideal place to collect data and run AI/ML algorithms for business intelligence. When using open source models such as Llama or Dolly, the user retains full control over the LLM, so neither the model nor its training data ever leaves the private environment, eliminating the risk of data leaking into the public domain.
Given that the cloud edge can be easily connected to a company's private data, either through a dedicated link to its datacenter cage or through SD-WAN breakout (see figure below), a cloud edge LLM can have far more direct access to sensitive data for training purposes than an LLM running in a public cloud.
The above figure shows a cloud edge ML implementation with connectivity to a company's on-prem locations over SD-WAN. The ML workloads could be LLMs like Llama or Dolly, or computer vision workloads such as NVIDIA Metropolis.
One such edge location for AI/ML processing is the Radio Access Network (RAN). Ideally, a 5G radio access network would be hosted as a service on multi-tenant cloud infrastructure, running as a containerized solution alongside other applications. This RAN-in-the-Cloud concept allows RAN components (CU/DU) to be dynamically allocated, increasing utilization for better sustainability and using spare capacity during off-peak hours to run AI/ML applications.
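The off-peak reuse idea can be sketched in a few lines. The numbers below are purely hypothetical (a flat day/night RAN load curve on a cluster of 100 arbitrary capacity units); the point is only that whatever capacity the CU/DU workloads do not need at a given hour can be handed to AI/ML jobs:

```python
# Illustrative sketch of RAN-in-the-Cloud capacity sharing (hypothetical numbers).
TOTAL_UNITS = 100  # total compute capacity of the edge cluster (arbitrary units)

def ran_demand(hour):
    """Hypothetical RAN load: busy during the day, quiet at night."""
    return 80 if 8 <= hour < 22 else 25

# For each hour, give the RAN what it needs and hand the rest to AI/ML.
schedule = [(h, ran_demand(h), TOTAL_UNITS - ran_demand(h)) for h in range(24)]

# At 3 AM most of the cluster runs AI/ML; at noon it mostly runs RAN.
print(schedule[3])   # (3, 25, 75)
print(schedule[12])  # (12, 80, 20)
```

In a real deployment the orchestrator would make this decision from live telemetry rather than a fixed curve, but the utilization gain comes from the same reallocation.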
Aarna Edge Services (AES) is the number one zero-touch edge multicloud orchestrator delivered as a service. It features an easy-to-use GUI that can slash weeks of orchestration work to less than an hour. In the event of a failure, AES includes fault isolation and roll-back capabilities. Support includes:
Aarna Networks, Predera, and NetFoundry have partnered to offer a private, zero-trust, fully managed LLM to help you explore the world of generative AI. Choose from a variety of foundational models that you can fine-tune with your corporate data to discover new insights and revenue-generating opportunities. See this Solution Document to learn more.
Or request a free consultation to learn how to apply these approaches to your business requirements and cloud/edge machine learning strategy, or request a Free Trial of AES today.