Aarna.ml

Amar Kapadia

Virtlet and KubeVirt - Impending TKO Win for Kubernetes?

To a casual observer, OpenStack and Kubernetes seem to be having a lovefest. After all, Kubernetes can run on OpenStack (containers in VMs) with projects such as OpenStack Magnum, and OpenStack can be containerized and orchestrated by Kubernetes with projects such as Kolla. However, taking a 10,000 ft view, OpenStack and Kubernetes are both cloud technologies -- so when the dust settles, you might end up with jilted lovers. So far, OpenStack has been better for VMs and Kubernetes better for containers (I’m ignoring bare metal).

Until now.

With two new open source projects, Virtlet from Mirantis and KubeVirt from Red Hat, it will be possible to schedule VMs with Kubernetes. That just might be a TKO win for Kubernetes… and leave OpenStack in a dazed stupor. I acknowledge that OpenStack is so much more than just Nova, so you might end up with a mashup stack that uses Kubernetes for scheduling, orchestration, monitoring, etc., and OpenStack for storage, networking, image repository, authentication, NFV, etc.

With Kubernetes able to schedule VMs, why would you not use a lighter, newer, more streamlined piece of software to orchestrate both VMs and containers? Plus, Kubernetes takes important issues such as lifecycle management, self-healing, monitoring, and other day-2 concerns into account from the get-go, whereas these have always been an afterthought for OpenStack.

There are differences between the two projects, though. KubeVirt is similar to adding an outdoor garage to your house, while Virtlet is similar to adding a new bedroom. KubeVirt uses the Kubernetes third-party resource concept, which loosely weaves VMs into Kubernetes. The plus here is that you can use your own mechanisms to manage the VM, but the downside is that the VM isn’t well integrated into Kubernetes. Virtlet, on the other hand, uses the Kubernetes Container Runtime Interface (CRI). This means you are forced to use Kubernetes constructs to manage the VM, but the integration with Kubernetes is a lot tighter: you can use the same SDN controller, monitoring framework, health monitoring, scaling/HA techniques, etc. as containers.
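
To make the Virtlet side of that contrast concrete, here is a minimal sketch using the official Kubernetes Python client. Because Virtlet plugs in at the CRI layer, the VM is expressed as an ordinary Pod, which is exactly why it inherits the same SDN, monitoring, and scaling machinery as containers. The annotation key, node selector, and image naming below are assumptions drawn from Virtlet's documentation and may differ across versions, so treat this as illustrative rather than canonical.

```python
# Minimal sketch: scheduling a VM through Virtlet with the standard
# Kubernetes Python client (pip install kubernetes). The VM is just a Pod,
# so every Kubernetes construct (SDN, monitoring, scaling/HA) applies to it.
from kubernetes import client, config


def virtlet_vm_pod(name: str, image: str) -> client.V1Pod:
    """Build a Pod spec that a Virtlet-enabled node interprets as a VM."""
    return client.V1Pod(
        api_version="v1",
        kind="Pod",
        metadata=client.V1ObjectMeta(
            name=name,
            # Assumed annotation (per Virtlet docs): route this pod to the
            # Virtlet CRI runtime instead of the default container runtime.
            annotations={"kubernetes.io/target-runtime": "virtlet.cloud"},
        ),
        spec=client.V1PodSpec(
            # The "image" here is a QCOW2 VM image referenced like a
            # container image; the virtlet.cloud/ prefix is an assumption.
            containers=[client.V1Container(name=name, image=image)],
            # Assumed node label so the pod lands on a Virtlet-enabled node.
            node_selector={"extraRuntime": "virtlet"},
        ),
    )


if __name__ == "__main__":
    config.load_kube_config()  # uses your local ~/.kube/config
    pod = virtlet_vm_pod("cirros-vm", "virtlet.cloud/cirros")
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

The thing to notice is that nothing above is VM-specific from Kubernetes's point of view. A KubeVirt VM, by contrast, would be defined as a separate third-party resource with its own controller rather than as a Pod.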

Which one is better suited? That depends. For an enterprise use case, I might argue that KubeVirt is better: it allows more flexibility in managing the VM, and enterprise use cases don’t really care about tight integration and coexistence between VMs and containers. For NFV, I’d say Virtlet wins hands-down, since you can do service function chaining by mixing VM and container VNFs that share the same SDN, storage volumes, and the other Kubernetes mechanisms listed above.

What do you think? Let us know!

Aarna

OPNFV 102 Meetup: Serverless Unikernel VNF Scheduling

On 2/10/17, Wassim Haddad and Heikki Mahkonen from Ericsson, along with Amar Kapadia, discussed "Serverless Unikernel VNF Scheduling" at the OPNFV 102 Meetup. Slides available here. Here is a quick summary:

Amar set the stage for why we might want to discuss an alternative VIM/NFVI.

Wassim covered a wide variety of topics: what unikernels are, what serverless scheduling is, and the benefits of applying these two mechanisms to NFV. Net-net, the benefits are staggering: VNF density per node can go up by 1-3 orders of magnitude, VNF performance can go up 2x, and VNF security improves dramatically (though we're not sure how to quantify this benefit). We'll try to do a total-cost-of-ownership analysis at a future point to clearly quantify the first two benefits.

Next, Heikki showed a demo where VNFs popped up in response to requests and then disappeared. I'd be remiss if I didn't say that people practically fell off their chairs when they saw the demo ;).

Finally, Amar ended by connecting the dots to OPNFV and discussing how a new project to work on a "next-gen compute scheduler" could be created in OPNFV to serve as an alternative to OpenStack Nova. The options available are: 1) an enhanced OpenStack Nova, 2) Kubernetes, 3) Docker Swarm, 4) Mesos, 5) VMware Photon Controller, 6) Intel Clear Linux CIAO, 7) a brand new open source project. Did we miss any option? Stay tuned for more on this topic...

Net-net, a great experience and a wonderful audience! Thanks to the organizers, Bin Hu and Ray Paik, and to Spirent for hosting. The lunch was very nice indeed.

Amar Kapadia

Is NFV OpenStack’s Last Line of Defense?
In the last few months, OpenStack has been under pressure on the enterprise side. VMware is proving to still be the preferred choice for “cloud-hosted” apps (legacy apps that are simply hosted in a cloud), and public clouds are winning big on the “cloud-optimized” (apps somewhat optimized for the cloud) and “cloud-native” (apps built for the cloud, i.e. microservices) segments. Whatever action was left for OpenStack is also under attack by native container frameworks such as Kubernetes and Mesos. I'm not saying OpenStack won't have any place in the enterprise, simply that its place is going to be quite a bit smaller than initially expected.

However, one use case where OpenStack is thriving is NFV (Network Functions Virtualization). Open source projects such as OPNFV use OpenStack as the exclusive VIM+NFVI component (VIM = virtualized infrastructure manager, aka the cloud software; NFVI = NFV infrastructure, i.e. the hypervisor + virtual storage + virtual networking layers), along with a few SDN controllers. (Strictly speaking, OpenStack is a VIM, but it's often packaged with NFVI components, making it a unified stack.)

Despite the seeming inevitability of OpenStack in NFV, I think the situation is more fragile than it appears. VMware, as in the enterprise, has locked up “cloud-hosted” VNFs (virtual network functions, i.e. the virtual versions of physical networking boxes). OpenStack has a really good shot at winning “cloud-optimized” and “cloud-native” VNFs if, and only if, the community and the involved vendors solve a few key problems:

  • Performance  
  • Improvements to the scheduler  
  • NFV-centric functionality  
  • Ease-of-use

Performance

A lot of people forget that NFV is first and foremost a networking workload meant to process packets, so performance is a primary concern. For instance, NFV is of no use if it takes 10 industry-standard servers running virtual machines to match the performance of 1 physical networking box. Performance affects many layers of the stack. While there has been concerted focus on technologies such as DPDK and real-time KVM, I think this topic can go a lot broader: hardware offload of OVS/vRouter, security functions, and network probing, plus the use of shared memory instead of virtual switching for service function chaining. Hardware offload could be implemented in either smartNICs or FPGAs. Net-net, OpenStack could place a lot more emphasis on performance.

Improvements to the scheduler

OpenStack Nova, arguably the essence of what makes OpenStack OpenStack, will be decade-old technology by the time NFV takes off. There's nothing wrong with that in itself, but Nova needs significant functionality investment (or to be replaced entirely by a different scheduler), with features in the following areas:

  • Support for containers in addition to VMs. This is different from the enterprise use case, where you could have two availability zones, one for VMs and one for containers. Here, you could literally have one VNF packaged as a container and another as a VM, both part of the same service chain. So the container and the VM may have to be in the same availability zone, or even on the same node, and share the same SDN and other infrastructure.
  • Support for alternative scheduling methods, e.g. event-driven (also called serverless) scheduling, will be required over time; see the sketch after this list. AWS Lambda has unleashed a powerful new way of maximizing hardware utilization by running a piece of code only when required, triggered by an event. A scheduler like this could improve hardware utilization by 10x for NFV use cases such as vCPE.
  • Support for distributed NFV (also called fog computing). vCPE with thin clients already requires distributed NFV. With 5G, distributed or edge computing will become pervasive and will need to be supported by the scheduler.
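
To illustrate the event-driven idea from the second bullet, here is a toy sketch of a serverless-style VNF scheduler. Everything in it is hypothetical (the callbacks, names, and timeout are invented for illustration; this is not any real scheduler's API). The point is simply that a VNF consumes resources only while traffic demands it, which is where the utilization gains for use cases like vCPE would come from.

```python
# Toy sketch of event-driven ("serverless") VNF scheduling: a VNF instance
# exists only while events demand it. All names here are hypothetical.
import time

IDLE_TIMEOUT_S = 30.0  # tear a VNF down after 30 s without traffic


class EventDrivenVnfScheduler:
    def __init__(self, launch_vnf, destroy_vnf):
        self._launch = launch_vnf    # callback: subscriber_id -> vnf handle
        self._destroy = destroy_vnf  # callback: vnf handle -> None
        self._active = {}            # subscriber_id -> [handle, last_seen]

    def on_event(self, subscriber_id):
        """Called on each traffic event; cold-starts a VNF on demand."""
        now = time.monotonic()
        if subscriber_id not in self._active:
            handle = self._launch(subscriber_id)  # cold start
            self._active[subscriber_id] = [handle, now]
        else:
            self._active[subscriber_id][1] = now  # refresh the idle timer

    def reap_idle(self):
        """Run periodically; destroys VNFs that have gone quiet."""
        now = time.monotonic()
        for sub, (handle, last_seen) in list(self._active.items()):
            if now - last_seen > IDLE_TIMEOUT_S:
                self._destroy(handle)
                del self._active[sub]
```

With unikernel VNFs that boot in milliseconds, the cold-start penalty in on_event becomes small enough for this model to be practical.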

NFV-centric functionality

By virtue of being different from the enterprise use case, NFV has some unique requirements: service chaining, service assurance, distributed NFV, a smaller footprint (each cluster could be just a few nodes), higher levels of availability, and so on. Unless the community embraces relevant NFV projects (e.g. Vitrage, Gluon, Mistral, Congress, Blazar, Kingbird, Tricircle) rather than considering them extraneous, it will have a tough time convincing NFV users that it really cares about this use case.

Ease-of-use

It has been almost 7 years since OpenStack was kicked off (June 2010), yet “day-2” tasks such as post-deployment configuration changes, monitoring and troubleshooting, and updates and upgrades continue to be extremely challenging. OpenStack vendors have not adequately invested in these areas, making the technology difficult to use. With new cloud-native approaches, where OpenStack is containerized and orchestrated by Kubernetes (see the approaches by Mirantis, Kolla-Kubernetes, and CoreOS Stackanetes), these day-2 challenges should, one hopes, get a lot simpler. OpenStack expertise has also been hard to come by, making OpenStack hard to consume. Vendors do offer short courses on OpenStack, but these are simply inadequate to create a true OpenStack IT/ops expert. A 2-month bootcamp instead of a 5-day class might yield better results!

Net-net, NFV is OpenStack’s last line of defense. The sooner the community realizes this and throws its entire weight behind the use case, the better OpenStack’s prospects will be.