Auto scaling with virtual node and Azure Kubernetes Service

Background of Azure container instances
Going through an open source project – Virtual Kubelet
Virtual Node in public preview


In this post we are going to talk about how to auto scale with AKS and a new public preview feature called Virtual Node. Virtual Node is now a public preview feature of Azure Kubernetes Service. With Virtual Node you can burst into Azure Container Instances, where you have no VM management. This enables you to scale your Kubernetes clusters even faster when you combine it with concepts like the horizontal pod autoscaler. The technology behind this is Virtual Kubelet, an open source project supported by multiple cloud providers.

Background of Azure container instances

Sometimes, all you need to do is just run a container. VMs take up too much management overhead when all you want is to run a single workload, and that is why we need Azure Container Instances (ACI). ACI takes away the VM management, starts in seconds, and supports both Windows and Linux. It is also Hyper-V isolated, which means you get the same level of isolation that you get with VMs today, just at the container level. You can also configure the exact amount of memory and CPU that you want.
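As a quick sketch of how little management this involves, a single container with a chosen CPU and memory size can be started from the Azure CLI. The resource group, container name, and DNS label below are placeholders; the image is Microsoft's public ACI hello-world sample.

```shell
# Create a Hyper-V isolated container instance with 1 CPU core and 1.5 GB
# of memory. No VM to create or patch; it typically starts in seconds.
az container create \
  --resource-group myResourceGroup \
  --name hello-aci \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --cpu 1 \
  --memory 1.5 \
  --dns-name-label hello-aci-demo \
  --ports 80

# Check the provisioning state and public FQDN once it is up.
az container show \
  --resource-group myResourceGroup \
  --name hello-aci \
  --query "{state:provisioningState, fqdn:ipAddress.fqdn}" \
  --output table
```

This requires an Azure subscription and an existing resource group, so treat it as a template rather than something to paste verbatim.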

Going through an open source project – Virtual Kubelet

Virtual Kubelet is able to connect the Kubernetes APIs to any other kind of service. For Azure it connects to Azure Container Instances, but there is also another provider for IoT Edge. There is support from the rest of the community as well, not only Microsoft but also providers from Amazon, Hyper.sh, and VMware. From ACI, Virtual Kubelet, and AKS, a preview feature called Virtual Node was built. Image 1 shows the architecture of Virtual Node in AKS.
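To actually schedule a pod onto the virtual node, the pod spec needs a node selector and tolerations matching the taints AKS places on it. A minimal sketch follows; the deployment name is a placeholder and the selector/toleration keys are the ones documented for the AKS virtual node addon, so double-check them against your cluster:

```shell
# Deploy a pod that targets the virtual node (backed by ACI) instead of
# a VM-based node. Without these tolerations, the virtual node's taints
# keep ordinary pods off it.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aci-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aci-demo
  template:
    metadata:
      labels:
        app: aci-demo
    spec:
      containers:
      - name: aci-demo
        image: mcr.microsoft.com/azuredocs/aci-helloworld
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
      nodeSelector:
        kubernetes.io/role: agent
        type: virtual-kubelet
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Exists
      - key: azure.com/aci
        effect: NoSchedule
EOF
```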

Image 1 - Virtual node architecture in AKS


You get the management of AKS, and the masters are already controlled for you because it is a managed control plane, but you also get extra burst capacity through ACI and Virtual Node. You can have from one to X VMs in your cluster, and then install Virtual Node in order to be able to scale out (basically infinitely) into ACI, with no VMs to think about. When you scale, you don't have to scale VMs first and then think about the workloads that go on top of them; all you do is scale out Azure Container Instances, and you barely notice it because the management plane is still Kubernetes.
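Turning this on for an existing AKS cluster is essentially an addon switch. A hedged sketch with the Azure CLI, assuming the cluster uses advanced (Azure CNI) networking and has a dedicated empty subnet for ACI; all names here are placeholders:

```shell
# Enable the virtual node addon on an existing AKS cluster. The subnet
# must live in the cluster's virtual network and be reserved for ACI.
az aks enable-addons \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --addons virtual-node \
  --subnet-name myVirtualNodeSubnet

# The virtual node then shows up alongside the VM-based agent nodes,
# typically with a name like "virtual-node-aci-linux".
kubectl get nodes
```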



More Info: If you would like to learn more about the story behind containers and what drives the need for them, have a look at this post. It covers why companies moved from traditional solution architectures to microservices and how that made containers the perfect way to run them; a quick intro to the definitions, tools, and keywords of this ecosystem, for example the difference between a VM, a container, and a Hyper-V container, and when to prefer a container over a VM or vice versa; the difference between a container and an image, and the life cycle of creating a new image by adding layers to a base image, pushing it to a container image registry in the cloud, and then pulling it anywhere to get a new container; and the different technologies and services around containers, like Docker, Docker Swarm, Kubernetes, Azure Container Service (ACS), and Azure Container Registry.


Virtual Node in public preview

Let us see it in practice. I have already installed Virtual Node in my cluster, and I went ahead and deployed an entire application: a simple front-end that simulates a Black Friday event (selling Surface Books, Xboxes, and so on); in other words, just a simple website. I then generated a large amount of load with a load tester, which hits the front-end hard, so we can see how our cluster scales to meet that customer traffic. Now we head over to Application Insights, which has a live metrics stream, where we can see the number of incoming requests, how long the requests are actually taking, and also the number of servers that are coming up (image 2).

Image 2 - Live metrics stream


In addition, we are going to use another dashboard: Grafana, an open-source dashboarding tool. We can see that the RPS (requests per second) and the RPS per pod are going up exponentially, while the response time is actually going down, because a bunch more infrastructure is starting up (image 3).


Image 3 - Grafana


If we open the Azure Cloud Shell (image 4), we can see the number of containers that are pending. This is where Kubernetes steps in and helps to auto scale out. We are using the horizontal pod autoscaler, a well-known concept in Kubernetes that allows you to horizontally scale out your pods.
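The horizontal pod autoscaler driving this scale-out can be created with a one-liner. A minimal sketch; the deployment name `storefront` and the thresholds are placeholders, not the exact values used in this demo:

```shell
# Keep average CPU utilization around 50%, scaling the deployment
# between 3 and 100 replicas. With the virtual node tolerations in
# place, replicas beyond VM capacity burst into ACI.
kubectl autoscale deployment storefront \
  --cpu-percent=50 \
  --min=3 \
  --max=100

# Watch current vs. target utilization and the replica count change.
kubectl get hpa storefront --watch
```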


Image 4 - Azure Cloud Shell - pending containers


If we go back to the Azure portal and refresh the page, we can see all of the container instances that are actually spinning up in the same resource group, and we can also note that many of them started up within a couple of seconds (image 5).
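You can observe the same thing from the command line: the burst pods report the virtual node as their node, and the backing container groups are visible through the ACI API. The resource group name below is a placeholder:

```shell
# Show which node each pod landed on; pods that burst into ACI list
# the virtual node (e.g. virtual-node-aci-linux) in the NODE column.
kubectl get pods -o wide

# List the container instances backing those pods.
az container list --resource-group myResourceGroup --output table
```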


Image 5 - Container instances spinning up in the same resource group


For this number of containers you would probably need 5 to 10 VMs, all starting up one after another, but in our case we just had 20 or 30 pods start up in tandem.



You can use Virtual Node to auto scale out from your cluster into ACI, giving you an easy solution for scaling in Kubernetes. Virtual Node allows AKS users to make use of the compute capacity of Azure Container Instances (ACI) to fire up additional containers, rather than having to bring up Virtual Machine (VM)-based nodes. It is, as Microsoft said, as simple as flipping a switch to get access to all that resource.


Video: You can watch this video if you would like more information about how to get started with Release Management and its advantages. See how to create a build definition using the CI/CD Tools for VSTS Extensions (I will be using the Package Extension and Publish Artifact tasks), and a DevOps-VSTS-POC trigger to enable CI, all of that in order to be able to publish, share, install, and query versions. You will see how to create a release definition, choose an artifact, and configure the source and default version for that artifact. See how to create different environments, or clone an existing one; in my case I create QA, Preproduction, and Production environments, each with one phase and one task. See also how to configure the Publish Extension task for each environment, and an end-to-end continuous delivery pipeline using a VSTS extension with Build and Release Management.

