AKS - Part 1 - Introduction

Azure Kubernetes Service (AKS)

Basic charged/not charged AKS table

Resource                         Not charged   Charged
Kubernetes control plane              X
Billing support                       X
Basic Load Balancer                   X
Node VMs                                            X
Node disks                                          X
Bandwidth                                           X
Storage for persistent volumes                      X
Standard Load Balancer                              X
Public IP address                                   X
Log Analytics workspace                             X
SLA                                                 X

By using AKS you won't have to manage the infrastructure and much of the configuration needed to get such a solution up and running. It removes most of the complexity of bringing your cluster up and keeping it running, so you can focus on developing your applications. It also introduces some limitations: you will have limited or no access to the control plane, and some features may not be available in every region.

Restricted Virtual Machines

Not all available VM sizes can be used with AKS, because AKS requires a minimum hardware configuration that some of the VM sizes provided by Azure do not meet. B2s, D1_v2, DS1_v2, and B2ms are some of the VM sizes you can use.

Tags

AKS tags are propagated to the node resource group, and some tags are created automatically by AKS. Those automatically created tags are used by AKS to control the cluster, so DO NOT REMOVE any of them.
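
A minimal sketch with the Azure CLI (resource group and cluster names are placeholders, unrelated options are omitted, and --tags on az aks update may depend on your CLI version):

    # Create a cluster with tags; AKS propagates them to the node resource group
    az aks create --resource-group myResourceGroup --name myAKSCluster \
        --node-count 2 --tags env=dev owner=platform --generate-ssh-keys

    # Add or change tags on an existing cluster
    az aks update --resource-group myResourceGroup --name myAKSCluster \
        --tags env=dev owner=platform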

Kubernetes version release and Azure

A new version of Kubernetes is released roughly every three months, but Azure won't update your AKS cluster that fast. Before a new Kubernetes version lands on AKS, Microsoft needs to run test after test to make sure it is stable and won't break anything clients already have, so don't expect to run the very latest Kubernetes version on your cluster.
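
You can check which Kubernetes versions AKS currently offers in a region (the region below is just an example):

    # List the Kubernetes versions available for AKS in a given region
    az aks get-versions --location westeurope --output table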

AKS limitations

Resource                                                          Limit
Max clusters per subscription                                     100
Max nodes per cluster (Availability Set + Basic Load Balancer)    100
Max nodes per cluster (Scale Set + Standard Load Balancer)        1000 (100 per node pool)
Max pods per node (kubenet + Basic Load Balancer)                 110
Max pods per node (kubenet + Standard Load Balancer)              400
Max pods per node (Azure CNI + Basic Load Balancer)               110
Max pods per node (Azure CNI + Standard Load Balancer)            250
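
The pods-per-node limit is fixed at creation time; a minimal sketch with the Azure CLI (names and values are placeholders, unrelated options omitted):

    # Set the maximum number of pods per node when creating the cluster
    az aks create --resource-group myResourceGroup --name myAKSCluster \
        --network-plugin azure --max-pods 50 --generate-ssh-keys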

AKS operation

AKS reserves part of each VM's CPU and memory for its own operation.

When sizing your cluster, it is important to consider how much of your hardware capacity will be used just to keep the cluster running.

The bigger your cluster is, the more resources you will need for operation.

CPU cores on host             1     2     4     8     16    32    64
Kube-reserved (millicores)    60    100   140   180   260   420   740

How much memory does AKS reserve from your VM to operate?

VM memory                      Reserved (approx.)
First 4 GB                     25%
Next 4 GB (up to 8 GB)         20%
Next 8 GB (up to 16 GB)        10%
Next 112 GB (up to 128 GB)     6%
Above 128 GB                   2%
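
You can see the effect of these reservations on a live cluster: kubectl reports each node's raw capacity next to what is actually allocatable to pods (the node name is a placeholder):

    # Compare total capacity with what is left for pods after AKS reservations
    kubectl describe node aks-nodepool1-12345678-0 | grep -A 6 -E "^(Capacity|Allocatable):"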

Subscriptions have a maximum hardware capacity. When creating a new AKS cluster you might need to raise a support ticket to get more resource capacity (quota) for your subscription.
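
A quick way to check how much of your compute quota a region is using (the region is an example):

    # Show current vCPU usage against the subscription's quota in a region
    az vm list-usage --location westeurope --output table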

AKS Architecture

Kubernetes Control-Plane

An important note is that AKS abstracts away the management of the Kubernetes control plane, so we cannot SSH into a master node. We do have access to the nodes, and we can add and remove nodes by scaling the cluster.
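
Scaling the cluster from the Azure CLI, as a minimal sketch (names are placeholders):

    # Add or remove nodes by declaring the desired node count
    az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 3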

Nodes

Nodes are VMs created inside the AKS node resource group, and as such you can manage them and SSH into them.
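
To find the node resource group that holds those VMs (names are placeholders):

    # Returns the auto-created node resource group, typically MC_<rg>_<cluster>_<region>
    az aks show --resource-group myResourceGroup --name myAKSCluster \
        --query nodeResourceGroup --output tsv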

APIs

Kubernetes exposes an API that makes communication with the control plane possible. That communication is how you manage your cluster. Clients of that API include kubectl, curl, the Kubernetes dashboard, etc.
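
For example, once you pull the cluster credentials, kubectl acts as a client of that API (names are placeholders):

    # Merge the cluster credentials into your local kubeconfig
    az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

    # kubectl now talks to the cluster's API server
    kubectl get nodes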

AKS offers two types of API endpoint for a cluster, and they are not compatible with each other.

Public API

  • Exposed to the internet
  • The AKS default
  • Can be secured with --api-server-authorized-ip-ranges (see the sketch below)
  • Not compatible with the private API
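
A minimal sketch of locking the public endpoint down to known IP ranges (names and ranges are placeholders):

    # Only the listed public IP ranges may reach the API server
    az aks update --resource-group myResourceGroup --name myAKSCluster \
        --api-server-authorized-ip-ranges 203.0.113.0/24,198.51.100.10/32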

Private API

  • Exposed only on the local network
  • Disabled by default
  • Can be enabled with --enable-private-cluster (see the sketch below)
  • Not compatible with the public API
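
A minimal sketch of creating a private cluster (names are placeholders, unrelated options omitted; this must be chosen at creation time):

    # The API server is exposed through a private endpoint in your network
    az aks create --resource-group myResourceGroup --name myAKSCluster \
        --enable-private-cluster --generate-ssh-keys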

Create AKS

Vnet/Subnet

The node IPs will come from a subnet resource.

Service CIDR

It is a range of IPs you can define when creating an AKS cluster that will be used by certain Kubernetes services. Keep in mind these are cluster IPs, used only inside Kubernetes; they are not related to your network.
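
A minimal sketch of setting it at creation time (values are placeholders, unrelated options omitted; the DNS service IP must fall inside the service CIDR, see the KubeDNS section below):

    # Cluster-internal service IPs will be handed out from --service-cidr
    az aks create --resource-group myResourceGroup --name myAKSCluster \
        --service-cidr 10.0.0.0/16 --dns-service-ip 10.0.0.10 --generate-ssh-keys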

POD CIDR

It is the IP range from which your pods get their IPs.
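
With kubenet you can set this range explicitly at creation time (values are placeholders, unrelated options omitted; with Azure CNI the pods draw from the subnet instead, as described under Network Plugin below):

    # kubenet only: pods get IPs from this range, separate from your VNet
    az aks create --resource-group myResourceGroup --name myAKSCluster \
        --network-plugin kubenet --pod-cidr 192.168.0.0/16 --generate-ssh-keys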

Load Balancer

As soon as you create an AKS cluster you will have one internal and one external Load Balancer. Inbound and outbound traffic goes through the external Load Balancer.

KubeDNS

KubeDNS is the only Kubernetes service that won't take a dynamically assigned IP from the service CIDR you defined. It demands a static IP, which still has to fall within the service CIDR.
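
You can see that static IP on a running cluster:

    # The kube-dns service keeps the static IP passed via --dns-service-ip
    kubectl get service kube-dns --namespace kube-system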

Selectors

This is default Kubernetes behavior: it is how Kubernetes routes traffic from services to pods. You put labels on your pods, and your services select the pods whose labels match.
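
A minimal sketch of the label/selector mechanism with kubectl (names are placeholders):

    # Pods carry labels...
    kubectl run web --image=nginx --labels="app=web"

    # ...and selectors match on those labels, exactly as a Service's selector does
    kubectl get pods --selector app=web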

Docker Bridge Address

Defines the IP range for the Docker virtual network on each node.
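
Set at creation time, as a minimal sketch (the value is a placeholder, unrelated options omitted; newer CLI versions may deprecate this flag):

    # The Docker bridge range on each node; it must not collide with your other CIDRs
    az aks create --resource-group myResourceGroup --name myAKSCluster \
        --docker-bridge-address 172.17.0.1/16 --generate-ssh-keys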

Network Plugin

Network Concepts for applications in AKS

You can choose between Azure CNI and kubenet.

Kubenet works at Layer 3. It uses IP forwarding to connect traffic between pods, nodes, and other resources in your network.

Azure CNI works at Layer 2. The pod CIDR will be your subnet CIDR, and each pod gets an IP straight from your cluster's subnet.

  • Kubenet requires you to create Kubernetes services and Load Balancers to expose applications. This is the proper way of exposing applications on Kubernetes, and you get an extra layer of security and high availability because a Load Balancer distributes the traffic. But you will have to pay for those Load Balancers.
  • If you plan to use Azure Network Policy you can't use kubenet
  • If you are using Windows nodes you can't use kubenet
  • If you need more than 110 pods per node you can't use kubenet
  • When using Azure CNI you need to make sure you have a proper security layer protecting your pods (see the sketch after this list)
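
The plugin is picked at creation time; a minimal sketch (names are placeholders, unrelated options omitted; Azure Network Policy requires Azure CNI, per the list above):

    # Azure CNI with Azure Network Policy as the security layer for the pods
    az aks create --resource-group myResourceGroup --name myAKSCluster \
        --network-plugin azure --network-policy azure --generate-ssh-keys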

One way to check which plugin is in use is to compare the pod CIDR on the overview with the subnet CIDR. If your AKS cluster is configured to use kubenet, those two ranges won't overlap. You can also check your AKS subnet for a route table (used for the IP forwarding) created and attached to it.
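
The same information is available from the CLI (names are placeholders):

    # Shows networkPlugin, podCidr, serviceCidr and the rest of the network profile
    az aks show --resource-group myResourceGroup --name myAKSCluster \
        --query networkProfile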

When using kubenet, a ping from a pod to a VM inside the same network will reveal the IP of the node hosting that pod, because the traffic is NATed to the node's address.

When using Azure CNI, each pod has its own IP from the VNet subnet, so a ping from a pod to a VM inside the same network will reveal the IP of the pod itself, not the IP of the node on which that pod is running.
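
A quick way to run that test, assuming the pod's image ships ping (pod name and target IP are placeholders; check the source IP seen on the VM, e.g. with tcpdump):

    # Ping a VM in the same network from inside a pod
    kubectl exec -it mypod -- ping -c 3 10.240.0.100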