Aarna.ml

Amar Kapadia

Amdahl’s Law and the Revenge of Private Clouds

Let's face it -- those of us on the private cloud side of the house have not had a great few years. While public clouds have grown exponentially, private clouds have fallen significantly short of expectations. Vendors such as HPE, Cisco, and Rackspace have all exited or massively scaled back their OpenStack efforts. And on the customer side, adoption of private clouds built on OpenStack has largely been limited to telcos (broad adoption) and a handful of Fortune 500 types.

Why is this? The answer lies in Amdahl's Law:

S_latency(s) = 1/((1-p) + p/s)

where

  • S_latency is the theoretical speedup of the execution of the whole task;
  • s is the speedup of the part of the task that benefits from improved system resources;  
  • p is the proportion of execution time that the part benefiting from improved resources originally occupied.

I'm going to modify the law to bring it into a business context. We will look at the improvement to a business in response to a technology investment:

S_improvement(s_p) = 1/((1-p)/s_l + p/s_p)

where

  • S_improvement is the theoretical improvement to the whole business;
  • s_p is the improvement to the part of the business that benefits from the investment;
  • s_l is the factor applied to the part of the business that does not benefit from the investment (less than 1 if that part is degraded);
  • p is the proportion of the business benefiting from the investment.

Having developed an elaborate total-cost-of-ownership (TCO) model comparing public and private clouds, I can easily show a 30-60% cost reduction for private clouds. Let's take the average: 45%. So if you can get 1 VM for $1 on a public cloud, you can get that VM for $0.55 on a private cloud (or roughly 1.8 VMs per $1). So s_p = 1.8. That sounds pretty good, right? Yes, until you apply the modified Amdahl's Law.

Let's assume that IT/OPS costs are 20% of the total IT budget, while app development costs are the other 80%. That means p = 0.2.

Next comes the non-intuitive part: private clouds actually degrade app development productivity compared to public clouds, since they lack the gazillion services available on a public cloud, ranging from the mundane database to the fancy AI engine. Let's assume the degradation is a modest 15%. So if it takes 1 developer to deliver an app on a public cloud, that same developer delivers only 0.85 of an app on a private cloud, giving s_l = 0.85.

Crunching the numbers:

S_improvement = 1/(0.8/0.85 + 0.2/1.8) ≈ 0.95
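
If you want to play with these assumptions, here is a minimal Python sketch of the modified law (the function and variable names are mine, purely for illustration), using the numbers from this post: a 45% cost reduction, a 15% developer productivity hit, and a 20% IT/OPS share of the budget.

```python
# Minimal sketch of the modified Amdahl's Law described above.
# All inputs are the illustrative assumptions from this post, not measured data.

def business_improvement(p, s_p, s_l):
    """S_improvement = 1 / ((1 - p)/s_l + p/s_p)."""
    return 1.0 / ((1.0 - p) / s_l + p / s_p)

cost_reduction = 0.45               # assumed 45% private cloud cost savings
s_p = 1.0 / (1.0 - cost_reduction)  # ~1.8 VMs per dollar vs. 1 VM per dollar on a public cloud
s_l = 0.85                          # assumed 15% hit to app development productivity
p = 0.2                             # IT/OPS share of the total IT budget

print(f"S_improvement = {business_improvement(p, s_p, s_l):.2f}")  # ~0.95
```

Nudge the inputs and the conclusion flips quickly: with no productivity hit (s_l = 1), the same investment comes out to roughly 1.10, which is exactly the sensitivity the modified law is meant to expose.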

So! Instead of improving the business, a private cloud investment is actually detrimental when you look at the economics alone. This explains the current private cloud malaise. Net-net, people investing in a private cloud are doing it for reasons other than app developer productivity.

But that’s all about to change. MEC and edge computing require private cloud technologies.

Will public cloud vendors win at the edge? I don't think so. I equate public clouds to fancy restaurants: there's complete chaos in the kitchen, but a soothing calm prevails in the customer-facing part of the restaurant. Winning at the edge would require the restaurant to expose its kitchen to customers -- tantamount to public clouds exposing their technology to customers. Net-net, despite Greengrass from AWS and IoT Edge from Microsoft, I don't expect public clouds to win at the edge. And the early stalwarts of private clouds, aka big companies, are bruised and fatigued. Therefore it will be new open source projects, and startups formed around those projects, that will win; and these new projects are unlikely to be OpenStack based.

Case in point: check out the Container4NFV project (formerly known as OpenRetriever, a project where I'm a committer) and specifically the Next Gen VIM scheduler requirements document. The document lays out requirements for a next-gen edge computing infrastructure platform. On behalf of all of us contributors, we'd welcome your feedback!

In summary, while private cloud technologies have not been terribly successful in the datacenter, I think they are going to be back with a vengeance in the edge computing use case.

Amar Kapadia

Free "Understanding OPNFV" Book

Have you been staying up at night wondering what the Open Platform for NFV, or OPNFV, is? Or why you should care about it and what's in it for you? This short book (144 pages) explains OPNFV and its benefits in easy-to-understand language. You will also get a sense of the broader NFV transformation and why it's about more than just technology. You will learn about the various upstream projects integrated in OPNFV and how OPNFV contributes back to these projects. Next, you will get exposed to the sophisticated CI pipeline built by the OPNFV community and get an in-depth view into the OPNFV deployment and testing projects. Finally, you will get a brief overview of how to write and onboard VNFs for OPNFV.

If you are a technical or business leader at a telecom operator or enterprise looking to accelerate your NFV journey, this book is perfect for you. Technology providers, aka vendors, will gain the information they need to position their products in the OPNFV ecosystem. Finally, individuals looking to make a career change into the rapidly growing NFV space will get a great understanding of OPNFV.

Sign up for your free eBook copy here. The book is written by my good friend Nick Chase and me. Once you read it, I'd love to hear your feedback!

If you'd like a physical book instead, stop by the Mirantis booth at the OpenStack Boston summit or pick one up at the OPNFV Beijing summit. Or, of course, you can get a copy on Amazon in a month or two (but that one will not be free).

Amar Kapadia


The OPNFV community's 4th release, Danube, came out roughly 2 weeks ago. This release integrates the MANO layer (OPEN-O, OpenBaton, OpenStack Tacker), so there is now an open source reference architecture for the entire NFV stack minus VNFs. In other words, OPNFV integrates MANO + VIM + SDN controller + NFVI software with continuous testing. Pretty cool, huh? Additionally, there are numerous enhancements around data plane acceleration, architecture improvements, feature enhancements, and hardening. See the official OPNFV blog for details.

Want to learn more? Join Serg Melikyan from Mirantis and me for a deep-dive webinar where we will discuss "What's new in OPNFV Danube".

Amar Kapadia


AT&T proposed two new OPNFV projects yesterday: Armada and CORD. Here's my assessment of why they did so.

Armada is a new OPNFV installer. There are four installers already: Fuel (Mirantis), Apex (Red Hat), JOID (Canonical), and Compass (Huawei), plus another in incubation: Daisy (ZTE). So why would anybody want to propose yet another installer? Actually, there is a really good reason. The future of OpenStack lifecycle management seems to be headed toward containerizing all OpenStack services and then orchestrating them through a COE (container orchestration engine) such as Kubernetes. Day 1 management, i.e. the initial install, gets dramatically simpler and more flexible, and Day 2 management, i.e. post-deployment changes such as configuration changes, functionality additions, capacity expansion, architecture changes, updates, upgrades, and rollbacks, becomes possible without huge amounts of manual effort. Moreover, over time, updates and upgrades can be totally eliminated in favor of a CI/CD pipeline. As an added bonus, one also gets access to modern monitoring and logging tools such as Prometheus and Fluentd. Armada uses Kubernetes, containerized OpenStack services, and Helm (a package manager for Kubernetes). Armada is also independent of any particular vendor, whereas the other installers discussed above all have vendor affiliations. Net-net, Armada is the future, and none of the existing projects offer what it's shooting for. Daisy goes part of the way by using OpenStack Kolla, but falls short.

OpenCORD is another open source NFV project (other than OPNFV). The core of OpenCORD uses OpenStack as the VIM, ONOS as the SDN controller, XOS as the VNFM, and OCP servers and bare metal switches. Since OpenCORD is prescriptive, it could be considered an OPNFV scenario and tested as such. OpenCORD focuses on development, so again there's good synergy with OPNFV, where a lot of effort is spent on integration and testing. However, to date OpenCORD has been operating separately and independently from OPNFV. That's the gap the OPNFV CORD project fills -- it introduces OpenCORD as an OPNFV scenario. OpenCORD actually has three profiles -- Enterprise, Residential, and Mobile -- all of which use the same core technology (called a POD, not to be confused with a Pharos POD) but add different VNFs and access connectivity. Over time, these different flavors can be tested as part of the OPNFV CORD project as well. In summary, this project makes great sense since it increases collaboration between open source projects and reduces duplication of effort.

The timeline for both projects is the Euphrates release i.e. this fall. Exciting times!

Aarna


There was a great turnout for Amar Kapadia's "OPNFV Overview: Navigating Its Many Projects" webinar conducted jointly with Intel and Mirantis earlier today. The webinar (view recording) covered:

  • What is NFV  
  • What is OPNFV  
  • Upstream projects in OPNFV  
  • New feature projects  
  • Continuous integration projects  
  • Testing projects  
  • How can you take advantage of OPNFV  
  • Q&A

The webinar received good reviews (4.64/5), so please do check out the recording if you missed the live session!

Amar Kapadia

Nextgen VIM Scheduler: a New OPNFV Project Proposal

A bunch of us made a new project proposal: Nextgen VIM Scheduler (NGVS). I know the name is boring, but hey, it's descriptive!

The current scheduler (OpenStack Nova) works very well, but we anticipate a number of new requirements: 1) the ability to schedule VMs, containers, and unikernels -- possibly in the same service function chain; 2) the need to use new scheduling techniques such as serverless in addition to static scheduling; and 3) support for highly distributed NFV clusters.
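
To make the first requirement concrete, here is a deliberately naive, purely hypothetical Python sketch of a scheduler that treats VMs, containers, and unikernels uniformly; none of these names or data structures come from the NGVS proposal itself.

```python
from dataclasses import dataclass

# Hypothetical illustration only -- not from the NGVS proposal.
# The idea: place heterogeneous workloads, possibly chained together
# in one service function chain (SFC), onto a set of edge nodes.

@dataclass
class Workload:
    name: str
    kind: str          # "vm", "container", or "unikernel"
    vcpus: int
    memory_mb: int

@dataclass
class Node:
    name: str
    free_vcpus: int
    free_memory_mb: int
    supported_kinds: tuple   # e.g. ("vm", "container")

def schedule_chain(chain, nodes):
    """Naively place each workload in an SFC on the first node that fits."""
    placement = {}
    for wl in chain:
        for node in nodes:
            if (wl.kind in node.supported_kinds
                    and node.free_vcpus >= wl.vcpus
                    and node.free_memory_mb >= wl.memory_mb):
                placement[wl.name] = node.name
                node.free_vcpus -= wl.vcpus
                node.free_memory_mb -= wl.memory_mb
                break
        else:
            raise RuntimeError(f"no node can host {wl.name} ({wl.kind})")
    return placement

# A toy chain mixing all three workload types across two edge nodes.
chain = [
    Workload("firewall", "vm", 2, 4096),
    Workload("dpi", "container", 1, 1024),
    Workload("nat", "unikernel", 1, 256),
]
nodes = [
    Node("edge-1", 4, 8192, ("vm", "container")),
    Node("edge-2", 2, 2048, ("container", "unikernel")),
]
print(schedule_chain(chain, nodes))
```

A real next-gen scheduler would of course also have to address the serverless and highly distributed requirements above; this sketch only illustrates why a single placement decision may need to span multiple workload types.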

We believe all three requirements are unique to NFV and are therefore unlikely to be driven by enterprise-centric communities, which makes this effort a logical fit for OPNFV. We are proposing this project to make sure the scheduler, arguably one of the most important elements of the VIM, keeps up with these needs.

Do you have feedback, or would you like to join? (We hope you can join so we can fill up the currently empty table in the proposal.) We'd love to hear from you -- on the wiki page, on the OPNFV TC mailing list, or via direct email to one of the proposed committers.

Project posted here