Feature | Status | API Version | Example | Description |
---|---|---|---|---|
Managed Disks | Beta | vlabs | kubernetes-vmas.json | Description |
Calico Network Policy | Alpha | vlabs | kubernetes-calico.json | Description |
Custom VNET | Beta | vlabs | kubernetesvnet.json | Description |
Enabling Managed Identity configures acs-engine to include and use MSI identities for all interactions with the Azure Resource Manager (ARM) API.
Instead of using a static service principal written to /etc/kubernetes/azure.json, Kubernetes will use a dynamic, time-limited token fetched from the MSI extension running on master and agent nodes. This support is currently alpha and requires Kubernetes v1.7.2 or newer.
Enable Managed Identity by adding useManagedIdentity in kubernetesConfig:
"kubernetesConfig": {
"useManagedIdentity": true,
"customHyperkubeImage": "docker.io/colemickens/hyperkube-amd64:3b15e8a446fa09d68a2056e2a5e650c90ae849ed"
}
By default, the cluster will be provisioned without Role-Based Access Control (RBAC) enabled. Enable RBAC by adding enableRbac in kubernetesConfig in the api model:
"kubernetesConfig": {
"enableRbac": true
}
See cluster definition for further detail.
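Once the cluster is up, a quick way to confirm that RBAC is active is to check that the RBAC API group is being served (a rough check; the exact group versions depend on your Kubernetes version):

```sh
kubectl api-versions | grep rbac.authorization.k8s.io
```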
Managed disks are supported for both node OS disks and Kubernetes persistent volumes.
See the related upstream PR for details.
By default, each ACS-Engine cluster is bootstrapped with several StorageClass resources. This bootstrapping is handled by the addon-manager pod, which creates resources defined under the /etc/kubernetes/addons directory on master VMs.
The default storage class is set via the Kubernetes DefaultStorageClass admission controller.
The default storage class will be used if persistent volume resources don't specify a storage class as part of the resource definition.
The default storage class uses non-managed blob storage and will provision the blob within an existing storage account present in the resource group or provision a new storage account.
Non-managed persistent volume types are available on all VM sizes.
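For illustration, a PersistentVolumeClaim that does not name a storage class, like the sketch below (the claim name and size are made up), will be bound using this default storage class:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  # hypothetical claim name
  name: my-default-claim
spec:
  # no storageClassName is set, so the cluster's default storage class is used
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```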
As part of cluster bootstrapping, two storage classes will be created to provide access to create Kubernetes persistent volumes using Azure managed disks.
These storage classes are named managed-standard and managed-premium and map to the Standard_LRS and Premium_LRS managed disk types respectively.
Nodes will be labelled as follows if they support managed disks:
storageprofile=managed
storagetier=<Standard_LRS|Premium_LRS>
In order to use these storage classes, the following conditions must be met:
- The cluster must be running Kubernetes release 1.7 or greater. Refer to this example for how to provision a Kubernetes cluster of a specific version.
- The node must support managed disks. See this example to provision nodes with managed disks. You can also confirm whether a node has managed disks using kubectl:
kubectl get nodes -l storageprofile=managed
NAME STATUS AGE VERSION
k8s-agent1-23731866-0 Ready 24m v1.7.2
- The VM size must support the type of managed disk requested. For example, Premium VM sizes with managed OS disks support both the managed-standard and managed-premium storage classes, whereas Standard VM sizes with managed OS disks only support the managed-standard storage class.
- If you have a mixed node cluster (both non-managed and managed disk types), you must use affinity or nodeSelectors on your resource definitions to ensure that workloads are scheduled to VMs that support the underlying disk requirements.
For example:
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: storageprofile
            operator: In
            values:
            - managed
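Once these conditions are met, persistent volume claims can request one of the managed classes by name. A minimal sketch (the claim name and size are illustrative):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  # hypothetical claim name
  name: fast-disk-claim
spec:
  # request a volume backed by a Premium_LRS managed disk
  storageClassName: managed-premium
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```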
Using the default configuration, Kubernetes allows communication between all Pods within a cluster. To ensure that Pods can only be accessed by authorized Pods, policy enforcement is needed. To enable policy enforcement using Calico, refer to the cluster definition document under networkPolicy. There is also a reference cluster definition available here.
This will deploy a Calico node controller to every instance of the cluster using a Kubernetes DaemonSet. After a successful deployment you should be able to see these Pods running in your cluster:
kubectl get pods --namespace kube-system -l k8s-app=calico-node -o wide
NAME READY STATUS RESTARTS AGE IP NODE
calico-node-034zh 2/2 Running 0 2h 10.240.255.5 k8s-master-30179930-0
calico-node-qmr7n 2/2 Running 0 2h 10.240.0.4 k8s-agentpool1-30179930-1
calico-node-z3p02 2/2 Running 0 2h 10.240.0.5 k8s-agentpool1-30179930-0
By default, Calico still allows all communication within the cluster. Using Kubernetes' NetworkPolicy API, you can define stricter policies; the Kubernetes and Calico documentation are good resources for learning more.
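As a sketch of what such a policy looks like, the following NetworkPolicy (the name is made up, and networking.k8s.io/v1 assumes Kubernetes 1.7 or newer) blocks all ingress traffic to Pods in the default namespace until other policies explicitly allow it:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  # hypothetical policy name
  name: default-deny-ingress
  namespace: default
spec:
  # an empty podSelector matches every Pod in the namespace;
  # with no ingress rules listed, all inbound traffic is denied
  podSelector: {}
```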
ACS Engine supports deploying into an existing VNET. Operators must specify the ARM path/id of the subnets for the masterProfile and any agentPoolProfiles. After the cluster is provisioned, some modifications to the VNET route tables are required.
Before provisioning, modify the masterProfile and agentPoolProfiles sections in the cluster definition to place masters and agents into your desired subnets:
"masterProfile": {
...
"vnetSubnetId": "/subscriptions/SUB_ID/resourceGroups/RG_NAME/providers/Microsoft.Network/virtualNetworks/VNET_NAME/subnets/MASTER_SUBNET_NAME",
"firstConsecutiveStaticIP": "10.239.255.239"
...
},
...
"agentPoolProfiles": [
{
...
"name": "agentpri",
"vnetSubnetId": "/subscriptions/SUB_ID/resourceGroups/RG_NAME/providers/Microsoft.Network/virtualNetworks/VNET_NAME/subnets/AGENT_SUBNET_NAME",
...
},
After a cluster finishes provisioning, fetch the id of the Route Table resource from the Microsoft.Network provider in your new cluster's Resource Group.
The route table resource id is of the format: /subscriptions/SUBSCRIPTIONID/resourceGroups/RESOURCEGROUPNAME/providers/Microsoft.Network/routeTables/ROUTETABLENAME
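One way to look this id up, assuming the Azure CLI (az) is installed and logged in, is to list the route tables in the cluster's resource group (the resource group name is a placeholder):

```sh
# list route table ids in the cluster's resource group
az network route-table list -g RESOURCE_GROUP_NAME --query "[].id" -o tsv
```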
Existing subnets will need to use the Kubernetes-based Route Table so that machines can route to Kubernetes-based workloads.
Update the properties of all subnets in the existing VNET to reference the route table resource by appending the following to the subnet properties:
"routeTable": {
"id": "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/routeTables/k8s-master-<SOMEID>-routetable>"
}
E.g.:
"subnets": [
{
"name": "subnetname",
"id": "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/virtualNetworks/<VirtualNetworkName>/subnets/<SubnetName>",
"properties": {
"provisioningState": "Succeeded",
"addressPrefix": "10.240.0.0/16",
"routeTable": {
"id": "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/routeTables/k8s-master-<SOMEID>-routetable"
}
...
}
...
}
]
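Alternatively, if you prefer the Azure CLI over editing the VNET template directly, something like the following should attach the route table to an existing subnet (all names and ids are placeholders):

```sh
# associate the Kubernetes route table with an existing subnet
az network vnet subnet update \
  -g RESOURCE_GROUP_NAME \
  --vnet-name VNET_NAME \
  -n SUBNET_NAME \
  --route-table /subscriptions/SUBSCRIPTIONID/resourceGroups/RESOURCEGROUPNAME/providers/Microsoft.Network/routeTables/ROUTETABLENAME
```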
Kubernetes clusters can be configured to use the Azure CNI plugin, which provides an Azure native networking experience. Pods will receive IP addresses directly from the VNET subnet on which they're hosted. To enable Azure integrated networking, add the following to your cluster definition:
"kubernetesConfig": {
"networkPolicy": "azure"
}
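Once workloads are scheduled, a quick sanity check (output omitted here since it varies per cluster) is to verify that Pod IPs fall within the VNET subnet rather than an overlay range:

```sh
kubectl get pods --all-namespaces -o wide
```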
In addition, you can modify the following settings to change the networking behavior when using Azure integrated networking:
IP addresses are pre-allocated in the subnet. Using ipAddressCount, you can specify how many you would like to pre-allocate; this number needs to account for the number of pods you would like to run on that subnet.
"masterProfile": {
"ipAddressCount": 200
},
Currently, the pre-allocated IP addresses aren't allowed by the default NAT rules for Internet-bound traffic. To work around this limitation, you can specify vnetCidr (e.g. 10.0.0.0/8) to be EXCLUDED from the default masquerade rule that is applied. The result is that traffic destined for anything within that block will NOT be NATed on the outbound VM interface. This field is called vnetCidr, but it may be wider than the VNET's CIDR block if you would like pod IPs to be routable across VNETs using VNET peering or ExpressRoute.
"masterProfile": {
"vnetCidr": "10.0.0.0/8",
},
When using Azure integrated networking, the maxPods setting will be set to 30 by default. This number can be changed, keeping in mind that there is a limit of 4,000 IPs per VNET.
"kubernetesConfig": {
"maxPods": 50
}