Kubernetes RBAC Lab: Developer and Deployer Roles

This lab demonstrates how to set up Role-Based Access Control (RBAC) in Kubernetes. We will create two service accounts, developer and deployer, define distinct roles for them within a specific namespace, bind these roles to the service accounts, and then generate kubeconfig files to test their respective permissions.

1. Set Up Namespace and Service Accounts

  1. Define the target namespace:

    export NAMESPACE="app2"
    kubectl create namespace $NAMESPACE # Ensure the namespace exists
  2. Create Service Accounts: Create two service accounts, developer and deployer, in the defined namespace.

    kubectl create sa developer -n $NAMESPACE
    kubectl create sa deployer  -n $NAMESPACE

2. Define Roles (roles.yaml)

Create a file named roles.yaml with the following content. This file defines a deployer role with broad permissions to manage application resources and a developer role with more restricted, mostly read-only permissions.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: app2 # must match $NAMESPACE
rules:
- apiGroups: ["", "apps", "extensions", "networking.k8s.io"]
  resources: ["deployments", "replicasets", "statefulsets", "daemonsets", "configmaps", "pods", "secrets", "services", "ingresses", "persistentvolumeclaims"]
  verbs: ["create", "get", "delete", "list", "update", "patch", "watch"]
- apiGroups: [""] # core API group, for pod subresources
  resources: ["pods/log", "pods/exec"]
  verbs: ["get", "list", "watch", "create"] # 'create' is required for exec
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: app2 # must match $NAMESPACE
rules:
- apiGroups: ["", "apps", "extensions", "networking.k8s.io"]
  resources: ["deployments", "replicasets", "statefulsets", "daemonsets", "pods", "services", "ingresses", "configmaps"]
  verbs: ["get", "list", "watch"] # read-only access to workloads
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create", "get"] # 'create' is required for exec; some tools also need 'get'

These Roles use apiVersion rbac.authorization.k8s.io/v1; the older v1beta1 API was removed in Kubernetes 1.22. Enumerating apiGroups and resources explicitly is better practice than wildcards like ["*"], and the developer role is deliberately limited to read-only access plus log viewing and exec.

3. Apply Roles

Apply the defined roles to the cluster within the specified namespace.
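
Because each Role carries its namespace in metadata, no -n flag is needed:

    kubectl apply -f roles.yaml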

4. Define Role Bindings (bindings.yaml)

Create a file named bindings.yaml with the following content. This binds the deployer Role to the deployer ServiceAccount and the developer Role to the developer ServiceAccount.
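
A minimal bindings.yaml consistent with the roles and service accounts above:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployer
  namespace: app2 # Should match $NAMESPACE
subjects:
- kind: ServiceAccount
  name: deployer
  namespace: app2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer
  namespace: app2 # Should match $NAMESPACE
subjects:
- kind: ServiceAccount
  name: developer
  namespace: app2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: developer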

Note that the namespace is set explicitly in both the RoleBinding metadata and the ServiceAccount subjects, so the bindings apply unambiguously regardless of your current kubectl context.

5. Apply Role Bindings

Apply the defined role bindings to the cluster.
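
    kubectl apply -f bindings.yaml

If your admin user is allowed to impersonate service accounts, you can sanity-check the grants before switching contexts:

    kubectl auth can-i create deployments --as=system:serviceaccount:$NAMESPACE:deployer  -n $NAMESPACE # yes
    kubectl auth can-i create deployments --as=system:serviceaccount:$NAMESPACE:developer -n $NAMESPACE # no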

6. Prepare Kubeconfig Template (kubeconfig.tpl)

Create a template file named kubeconfig.tpl for generating kubeconfig files.
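
A sketch of such a template with envsubst-style placeholders; the SA_NAME and SA_TOKEN names are this lab's own convention (nothing Kubernetes mandates) and must match the variables exported in the next steps:

cat << 'EOF' > kubeconfig.tpl
apiVersion: v1
kind: Config
clusters:
- name: ${CLUSTER_NAME}
  cluster:
    certificate-authority-data: ${CLUSTER_CA}
    server: ${CLUSTER_ENDPOINT}
contexts:
- name: ${SA_NAME}@${CLUSTER_NAME}
  context:
    cluster: ${CLUSTER_NAME}
    namespace: ${NAMESPACE}
    user: ${SA_NAME}
current-context: ${SA_NAME}@${CLUSTER_NAME}
users:
- name: ${SA_NAME}
  user:
    token: ${SA_TOKEN}
EOF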

Quoting the heredoc delimiter ('EOF') keeps the ${...} placeholders literal in the written file so envsubst can substitute them later; with an unquoted delimiter you would have to escape each one as \${...}.

7. Generate and Test Kubeconfig for deployer

  1. Set environment variables for deployer:

    Note on Service Account Tokens: The following commands to retrieve service account tokens are for Kubernetes versions prior to 1.24. For Kubernetes 1.24 and later, secrets are not automatically created for service accounts. You should use kubectl create token deployer -n $NAMESPACE to get the token directly. If using an older version:
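
    export SA_NAME="deployer" # variable names follow the kubeconfig.tpl placeholders
    SECRET_NAME=$(kubectl get sa $SA_NAME -n $NAMESPACE -o jsonpath='{.secrets[0].name}')
    export SA_TOKEN=$(kubectl get secret $SECRET_NAME -n $NAMESPACE -o jsonpath='{.data.token}' | base64 --decode)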

    If using Kubernetes 1.24+:
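
    export SA_NAME="deployer"
    export SA_TOKEN=$(kubectl create token $SA_NAME -n $NAMESPACE)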

    Continue with cluster information gathering:
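
    export CLUSTER_NAME=$(kubectl config view --raw -o jsonpath='{.clusters[0].name}')
    export CLUSTER_CA=$(kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')
    export CLUSTER_ENDPOINT=$(kubectl config view --raw -o jsonpath='{.clusters[0].cluster.server}')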

    (These commands read the first cluster entry from kubectl config view --raw using jsonpath; if your kubeconfig defines several clusters, adjust the [0] index accordingly.)

  2. Generate kubeconfig_app2_deployer:
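
    Render the template with envsubst (part of gettext) and point kubectl at the result:

    envsubst < kubeconfig.tpl > kubeconfig_app2_deployer
    export KUBECONFIG=$(pwd)/kubeconfig_app2_deployer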

  3. Test deployer permissions: The deployer will attempt to list nodes and manage resources within its namespace.
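
    For example (the test-nginx name is just illustrative):

    kubectl get nodes # Forbidden: the Role is namespace-scoped
    kubectl get pods -n $NAMESPACE # allowed
    kubectl create deployment test-nginx --image=nginx -n $NAMESPACE # allowed
    kubectl delete deployment test-nginx -n $NAMESPACE # allowed; cleans up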

  4. Unset KUBECONFIG: This is crucial to return to your default admin context.
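
    unset KUBECONFIG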

8. Generate and Test Kubeconfig for developer

  1. Set environment variables for developer:

    Note on Service Account Tokens: the same Kubernetes version caveat as for deployer applies here. If using an older version:
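
    export SA_NAME="developer"
    SECRET_NAME=$(kubectl get sa $SA_NAME -n $NAMESPACE -o jsonpath='{.secrets[0].name}')
    export SA_TOKEN=$(kubectl get secret $SECRET_NAME -n $NAMESPACE -o jsonpath='{.data.token}' | base64 --decode)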

    If using Kubernetes 1.24+:
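
    export SA_NAME="developer"
    export SA_TOKEN=$(kubectl create token $SA_NAME -n $NAMESPACE)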

    Cluster information variables (CLUSTER_NAME, CLUSTER_CA, CLUSTER_ENDPOINT) should still be set from the previous step.

  2. Generate kubeconfig_app2_developer:
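
    envsubst < kubeconfig.tpl > kubeconfig_app2_developer
    export KUBECONFIG=$(pwd)/kubeconfig_app2_developer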

  3. Test developer permissions: The developer should be able to list pods in $NAMESPACE but not list nodes or create deployments.
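
    kubectl get po -n app2 # allowed: read access within the namespace
    kubectl get nodes # Forbidden
    kubectl create deployment test-nginx --image=nginx -n $NAMESPACE # Forbidden: no create verb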

    Since $NAMESPACE is set to app2, kubectl get po -n app2 and kubectl get po -n $NAMESPACE are equivalent; either confirms that the developer can see pods in their assigned namespace.

  4. Unset KUBECONFIG:
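
    unset KUBECONFIG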
