
About a year ago the Kubernetes SIG Auth team announced the promotion of Structured Authentication to Beta in this blog post. Moving to beta is a crucial step on its journey to becoming a stable, generally available feature in Kubernetes. With this milestone, we can now test the feature on any Kubernetes cluster running version 1.30 or later and, in this blog, we’ll do just that.
Structured Authentication in Kubernetes aims to simplify and centralize the configuration of the kube-apiserver. Prior to Kubernetes 1.30, configuring kube-apiserver for authentication and authorization (AuthN/AuthZ) required setting numerous individual flags. Additionally, OIDC-based authentication was limited to a single provider.
This new approach allows you to configure multiple JWT authenticators and implement sophisticated token validation using the Common Expression Language (CEL). Even better, changes can now be applied without restarting the kube-apiserver, minimizing cluster downtime.
In this post, we’ll walk through setting up Structured Authentication on a local kind (Kubernetes IN Docker) cluster. You’ll learn how to configure both Microsoft Entra ID and Okta as JWT providers, and how to write a simple CEL-based token validation rule. By the end, you’ll have a practical understanding of the feature and how your organization can start taking advantage of it.
Before you begin, ensure you have the following tools installed: Docker, kind, kubectl, the Terraform CLI, and the kubectl oidc-login plugin (kubelogin).
You will also need a Microsoft Entra tenant and Okta Developer account. If you don’t have these, check out the following links:
We’ve created a simple Terraform configuration file to set up the demo environment.
Start by opening a new terminal and create a new working directory.
mkdir k8s-structured-auth-demo
cd k8s-structured-auth-demo
Download the Terraform configuration file.
curl https://gist.githubusercontent.com/pauldotyu/bedf470baed79fb12a064bf1227e4fc5/raw/78b8feb630748ef650741e39aafa64758b132cb0/k8s-structured-auth-demo-setup.tf -o main.tf
This configuration file will create the following resources and output several properties that will be used to configure structured authentication within the cluster:
Okta:
1. Creates a new group called k8s-readers
2. Adds your user account to the new k8s-readers group
3. Creates a new OAuth application called k8s-oidc
4. Assigns the new app to the k8s-readers group
5. Creates an auth server that enables the authorization code flow and exposes group membership claims
Microsoft Entra:
1. Creates a new security group called k8s-admins
2. Adds your user account to the new k8s-admins group
3. Creates a new application registration called k8s-oidc
Before you run the Terraform script, you will need to set Okta variables for API access. Rather than setting these parameters each time, you can add your credentials to a file.
Create a new okta.auto.tfvars file and add the following:
okta_org_name = "your_okta_org_name"
okta_api_token = "your_okta_api_token"
okta_user = "your_okta_user_primary_email"
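Alternatively, Terraform reads any environment variable named TF_VAR_<name>, so you can export the same values instead of writing the token to a file on disk (the values below are placeholders):

```shell
# Terraform picks up variables from TF_VAR_-prefixed environment variables,
# which keeps the Okta API token out of files on disk (placeholder values shown):
export TF_VAR_okta_org_name="your_okta_org_name"
export TF_VAR_okta_api_token="your_okta_api_token"
export TF_VAR_okta_user="your_okta_user_primary_email"
```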
Run the following commands to apply the Terraform configuration:
terraform init
terraform apply
Give it a few seconds and you should see the output properties in your terminal.
Export the output variables for later use.
MSFT_ISSUER_URL=$(terraform output -raw microsoft_issuer_url)
MSFT_TENANT_ID=$(terraform output -raw microsoft_tenant_id)
MSFT_CLIENT_ID=$(terraform output -raw microsoft_client_id)
MSFT_GROUP_ID=$(terraform output -raw microsoft_group_id)
OKTA_ISSUER_URL=$(terraform output -raw okta_issuer_url)
OKTA_CLIENT_ID=$(terraform output -raw okta_client_id)
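If any of these variables ends up empty, the heredocs later in this post will silently write blank values into the auth config. A small sketch to catch that early (the check_required_vars helper is hypothetical and assumes bash, since it uses indirect expansion):

```shell
# Hypothetical helper: print any variable in the list that is empty or unset.
# Uses bash indirect expansion (${!var}) to look up each name.
check_required_vars() {
  local var
  for var in "$@"; do
    if [ -z "${!var}" ]; then
      echo "missing: $var"
    fi
  done
}

check_required_vars MSFT_ISSUER_URL MSFT_TENANT_ID MSFT_CLIENT_ID \
  MSFT_GROUP_ID OKTA_ISSUER_URL OKTA_CLIENT_ID
```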
With kind, you customize cluster deployments by creating a configuration file which is passed during cluster creation. You can view the configuration file here.
To configure authentication, we mount a structured-auth.yaml file into the API server container using extraMounts and extraVolumes. The API server then references this file via a single authentication-config flag.
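The relevant part of that kind configuration looks roughly like the following (an illustrative sketch; the exact paths and patch details in the linked file may differ):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # Mount the local structured-auth.yaml into the node container
  extraMounts:
  - hostPath: ./structured-auth.yaml
    containerPath: /etc/kubernetes/structured-auth.yaml
  kubeadmConfigPatches:
  - |
    kind: ClusterConfiguration
    apiServer:
      extraArgs:
        authentication-config: /etc/kubernetes/structured-auth.yaml
      # Make the file visible inside the kube-apiserver static Pod
      extraVolumes:
      - name: structured-auth
        hostPath: /etc/kubernetes/structured-auth.yaml
        mountPath: /etc/kubernetes/structured-auth.yaml
```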
Run the following command to create the auth config with Microsoft Entra as a JWT provider:
cat <<EOF > structured-auth.yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
- issuer:
    url: https://login.microsoftonline.com/$MSFT_TENANT_ID/v2.0
    audiences:
    - $MSFT_CLIENT_ID
  claimMappings:
    username:
      claim: "email"
      prefix: ""
    groups:
      claim: "groups"
      prefix: ""
EOF
Run the following command to create a new kind cluster and configure JWT Authenticators using Structured Authentication Configuration.
kind create cluster --config <(curl -s https://gist.githubusercontent.com/pauldotyu/0390da968b44035d572550e8012eadad/raw/a33000036b0839fc2699c456aedd3800f8dfa1a1/structured-auth-kind-config.yaml)
With the structured authentication piece in place, we need to configure authorization to allow users that are members of the k8s-admin group to be cluster administrators. Run the following command to create the ClusterRoleBinding:
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: azure-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: Group
  name: $MSFT_GROUP_ID
  apiGroup: rbac.authorization.k8s.io
EOF
Without this ClusterRoleBinding, you will be authenticated but not authorized to do anything in the cluster.
To test, add a new azure-user entry to your kubeconfig, using the oidc-login plugin with the Microsoft Entra details for kubectl authentication.
kubectl config set-credentials azure-user \
--exec-api-version=client.authentication.k8s.io/v1 \
--exec-interactive-mode=Never \
--exec-command=kubectl \
--exec-arg=oidc-login \
--exec-arg=get-token \
--exec-arg=--oidc-issuer-url=${MSFT_ISSUER_URL} \
--exec-arg=--oidc-client-id=${MSFT_CLIENT_ID} \
--exec-arg=--oidc-extra-scope="email offline_access profile openid"
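Under the hood, this writes an exec credential plugin entry into your kubeconfig; the resulting user stanza looks roughly like this (illustrative, with placeholder values for the tenant and client IDs):

```yaml
users:
- name: azure-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1
      interactiveMode: Never
      command: kubectl
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://login.microsoftonline.com/<tenant-id>/v2.0
      - --oidc-client-id=<client-id>
      - --oidc-extra-scope=email offline_access profile openid
```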
With the user in place, we can test actions as that user. Run the following commands and confirm you are redirected to login and the commands run successfully:
kubectl run myhttpd --user=azure-user --image=httpd:alpine
kubectl expose pod myhttpd --user=azure-user --port 80
You will be redirected to a browser window where you can login and grant permissions requested by the application. Upon successful login, your user account will have full cluster admin privileges, enabling you to create any Kubernetes resource.
The structured authentication config currently contains a single JWT provider, which is no different from using the existing OIDC flags on the kube-apiserver. Its power lies in the ability to add further JWT providers without needing to restart the kube-apiserver.
Run the following command to drop in a new Okta-based JWT provider:
cat <<EOF >> structured-auth.yaml
- issuer:
    url: $OKTA_ISSUER_URL
    audiences:
    - $OKTA_CLIENT_ID
  claimMappings:
    username:
      claim: "email"
      prefix: ""
    groups:
      claim: "groups"
      prefix: ""
EOF
Because kind nodes run as Docker containers, you can exec into the control-plane container to verify that the structured-auth file has been updated.
docker exec -it kind-control-plane cat /etc/kubernetes/structured-auth.yaml
Check the kube-apiserver logs to see that the configuration was reloaded automatically, without restarting the apiserver. Wait 5-7 seconds for the reload, then run the following command.
docker exec -it kind-control-plane sh -c "cat /var/log/containers/kube-apiserver-kind-control-plane_kube-system_kube-apiserver-*.log"
Create the Role and RoleBinding, assigning read access to Pods and Services to users within the ‘k8s-readers’ group.
kubectl apply -f https://gist.githubusercontent.com/pauldotyu/823a6bb1e73c3ac3ac1b2311429249f0/raw/1eb34efcc453e5024361fb7f486c03b401453994/okta-po-svc-reader.yaml
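The manifest in that gist is not reproduced here, but based on its description it contains a Role/RoleBinding pair along these lines (an illustrative sketch; the resource names and the default namespace are assumptions):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-svc-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: okta-pod-svc-reader
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-svc-reader
subjects:
- kind: Group
  name: k8s-readers
  apiGroup: rbac.authorization.k8s.io
```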
Add the Okta user to kubeconfig.
kubectl config set-credentials okta-user \
--exec-api-version=client.authentication.k8s.io/v1beta1 \
--exec-command=kubectl \
--exec-arg=oidc-login \
--exec-arg=get-token \
--exec-arg=--oidc-issuer-url=${OKTA_ISSUER_URL} \
--exec-arg=--oidc-client-id=${OKTA_CLIENT_ID} \
--exec-arg=--oidc-extra-scope="email offline_access profile openid"
Run these commands to confirm that, after being redirected to the Okta login page, you are not authorized to create Pods or view Node information.
kubectl run mybusybox --user=okta-user --image=busybox --restart=Never --command -- sleep 3600
kubectl get nodes --user=okta-user
However, the Role and RoleBinding created earlier do allow you to view Pod and Service information. Confirm this by running the following commands.
kubectl get po --user=okta-user
kubectl get svc --user=okta-user
And just like that—we can drop in new JWT providers on the fly without needing to reboot the kube-apiserver! That’s pretty slick, isn’t it?
Structured authentication unlocks powerful features like claim validation and claim mappings—all built using Common Expression Language (CEL)! With CEL, you can define complex rules and enforce organizational policies. Run the following command to add a rule that only allows users with a name starting with ‘Bob’ to authenticate into the kube-apiserver:
cat <<EOF >> structured-auth.yaml
  claimValidationRules:
  - expression: "claims.name.startsWith('Bob')"
    message: only people named Bob are allowed
EOF
Re-run the commands to verify the structured-auth file is updated and reloaded.
Since claim validations occur when the JWT token is evaluated, you’ll need to reset the OIDC token cache to trigger a new authentication.
kubectl oidc-login clean
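By default, kubelogin keeps its cached tokens under ~/.kube/cache/oidc-login, and clean removes them. A small sketch to inspect the cache yourself (the show_oidc_cache helper is hypothetical; the path is kubelogin's default and may differ if you've overridden it):

```shell
# Hypothetical helper: list kubelogin's token cache, or note that it's empty.
# Accepts an optional directory argument; defaults to kubelogin's cache path.
show_oidc_cache() {
  local dir="${1:-$HOME/.kube/cache/oidc-login}"
  if [ -d "$dir" ]; then
    ls "$dir"
  else
    echo "token cache is empty"
  fi
}
show_oidc_cache
```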
Run the following commands and confirm that what used to work is no longer successful—unless your name is Bob 😅.
kubectl get po --user=okta-user
kubectl get svc --user=okta-user
Now, let’s change the config to use your name. Using the nano editor, find the name ‘Bob’ and replace it with your name.
nano structured-auth.yaml
Reset OIDC token cache.
kubectl oidc-login clean
Run the following commands again—you should now be able to view Pod and Service data!
Structured authentication supports any JWT-compliant token provider, giving you the power to define custom claim mapping logic using CEL expressions. This allows you to create a highly tailored and flexible authentication process.
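For example, instead of mapping the username from a single claim, a JWT authenticator entry can compute it with a CEL expression evaluated against the token's claims (an illustrative fragment; the okta: prefix is an assumption, not something the earlier config uses):

```yaml
  claimMappings:
    username:
      # CEL expression evaluated against the token's claims; here it
      # prefixes every username with the name of the identity provider.
      expression: "'okta:' + claims.email"
    groups:
      claim: "groups"
      prefix: ""
```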
With this practical guide, you now know how to secure your Kubernetes cluster using the structured authentication feature, offering flexible integration with any JWT-compliant token provider. Its core strength lies in the ability to define granular access control via CEL, enabling you to extract and validate token claims against Kubernetes user attributes such as usernames and groups. This enables the use of complex logic to determine whether a token should be trusted, ensuring a highly customizable and secure authentication experience.
Once you’re done testing and exploring, run the following commands to clean up your environment.
kind delete cluster
kubectl config delete-user azure-user
kubectl config delete-user okta-user
kubectl oidc-login clean
terraform destroy --auto-approve
This upstream development—benefiting the entire Kubernetes community—is also being actively implemented for AKS customers, delivering a secure authentication solution with an emphasis on providing an intuitive user experience. We welcome your feedback and feature requests as we continue this work. For more details, check out the resources linked below.