Security

Learn about security considerations for your Kf cluster.

1 - Security Overview

Understand Kf’s security posture.

Kf aims to provide a similar developer experience to Cloud Foundry, replicating the build, push, and deploy lifecycle. It does this by building a developer UX layer on top of widely known and broadly adopted technologies like Kubernetes, Istio, and container registries, rather than by implementing all the pieces from the ground up.

When making security decisions, Kf aims to provide complete solutions that are native to their respective components and can be augmented with other mechanisms. Breaking that down:

  • Complete solutions means that Kf tries not to provide partial solutions that can lead to a false sense of security.
  • Native means that the solutions should be a part of the component rather than a Kf construct to prevent breaking changes.
  • Can be augmented means the approach Kf takes should work seamlessly with other Kubernetes and Google Cloud tooling for defense in depth.

Important considerations

In addition to the Current limitations described below, it is important that you read through and understand the items outlined in this section.

Workload Identity

By default, Kf uses Workload Identity to provide secure delivery and rotation of the Service Account credentials used by Kf to interact with your Google Cloud project. Workload Identity achieves this by mapping a Kubernetes Service Account (KSA) to a Google Service Account (GSA). The Kf controller runs in the kf namespace and uses a KSA named controller, mapped to your GSA, to do the following:

  1. Write metrics to Stackdriver.
  2. When a new Kf Space is created (kf create-space), create a new KSA named kf-builder in the new Space and map it to the same GSA.
  3. The kf-builder KSA is then used by Tekton to push and pull container images to Google Container Registry (gcr.io).

This diagram illustrates those interactions:

Workload identity overview diagram
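
For reference, the following is a minimal sketch of how a KSA-to-GSA Workload Identity mapping of this kind is typically established. The GSA_NAME and PROJECT_ID values are placeholders, and Kf performs the equivalent setup for you during installation.

# Annotate the KSA with the GSA it should impersonate.
kubectl annotate serviceaccount controller \
  --namespace kf \
  iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com

# Allow the KSA to act as the GSA through Workload Identity.
gcloud iam service-accounts add-iam-policy-binding \
  GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:PROJECT_ID.svc.id.goog[kf/controller]"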

NFS

In order to mimic Cloud Foundry’s UID/GID mapping, containers in Kf that mount NFS volumes need the ability to run as root and to access the FUSE device of the node’s kernel.

Current limitations

  • A developer pushing an app with Kf can also create Pods (with kubectl) that can use the kf-builder KSA with the permissions of its associated GSA.

  • Kf uses the same Pod to fetch, build, and store images. Assume that any credentials that you provide can be known by the authors and publishers of the buildpacks you use.

  • Kf doesn’t support quotas to protect against noisy neighbors. Use Kubernetes resource quotas.
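
For example, here is a minimal sketch of a Kubernetes ResourceQuota that caps the compute a Space’s Namespace can consume; the Namespace name and the limits are placeholders to adapt to your environment:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: space-quota
  namespace: SPACE_NAME
spec:
  hard:
    requests.cpu: "20"
    requests.memory: 64Gi
    limits.cpu: "40"
    limits.memory: 128Gi

Apply it with kubectl apply -f quota.yaml.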


2 - Role-based access control

Learn about how to share a Kf cluster using roles.

Kf provides a set of Kubernetes roles that allow multiple teams to share a Kf cluster. This page describes the roles and best practices to follow when using them.

When to use Kf roles

Kf roles allow multiple teams to share a Kubernetes cluster with Kf installed. The roles provide access to individual Kf Spaces.

Use Kf roles to share access to a cluster if the following are true:

  • The cluster is used by trusted teams.
  • Workloads on the cluster share the same assumptions about the level of security provided by the environment.
  • The cluster exists in a Google Cloud project that is tightly controlled.

Kf roles

The following sections describe the Kubernetes RBAC Roles provided by Kf and how they interact with GKE IAM.

Predefined roles

Kf provides several predefined Kubernetes roles to help you provide access to different subjects that perform different roles. Each predefined role can be bound to a subject within a Kubernetes Namespace managed by a Kf Space.

When a subject is bound to a role within a Kubernetes Namespace, their access is limited to objects that exist in that Namespace and match the grants listed in the role. In Kf, some resources are defined at the cluster scope. Kf watches for changes to subjects in the Namespace and grants them additional, limited roles at the cluster scope.

Role | Title | Description | Scope
---- | ----- | ----------- | -----
space-auditor | Space Auditor | Allows read-only access to a Space. | Space
space-developer | Space Developer | Allows application developers to deploy and manage applications within a Space. | Space
space-manager | Space Manager | Allows administration and the ability to manage auditors, developers, and managers in a Space. | Space
SPACE_NAME-manager | Dynamic Space Manager | Provides write access to a single Space object; automatically granted to all subjects with the space-manager role within the named Space. | Cluster
kf-cluster-reader | Cluster Reader | Allows read-only access to cluster-scoped Kf objects; automatically granted to all subjects with the space-auditor, space-developer, or space-manager role. | Cluster

Information about the policy rules that make up each predefined role can be found in the Kf roles reference documentation.
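
To illustrate, this is a minimal sketch of the kind of RoleBinding that grants the space-developer ClusterRole to a subject inside the Namespace backing a Space. The Namespace and user names are placeholders, and in practice these bindings are managed for you by kf set-space-role rather than written by hand:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: space-developer-binding
  namespace: SPACE_NAME
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: space-developer
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: dev@example.com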

Google Cloud IAM roles

Kf roles provide access control for objects within a Kubernetes cluster. Subjects must also be granted a Cloud IAM role to authenticate to the cluster:

  • Platform administrators should be granted the roles/container.admin Cloud IAM role. This allows them to install, upgrade, and delete Kf, as well as create and delete cluster-scoped Kf objects like Spaces or ClusterServiceBrokers.

  • Kf end-users should be granted the roles/container.viewer Cloud IAM role. This role will allow them to authenticate to a cluster with limited permissions that can be expanded using Kf roles.
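
For example, a platform administrator could be granted their role as follows; the project ID and member value are placeholders:

gcloud projects add-iam-policy-binding PROJECT_ID \
  --role="roles/container.admin" \
  --member="user:admin@example.com"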

Google Cloud IAM offers additional predefined Roles for GKE to solve more advanced use cases.

Map Cloud Foundry roles to Kf

Cloud Foundry provides roles that are similar to Kf’s predefined roles. Cloud Foundry has two major types of roles:

  • Roles assigned by the User Account and Authentication (UAA) subsystem that provide coarse-grained OAuth scopes applicable to all Cloud Foundry API endpoints.
  • Roles granted within the Cloud Controller API (CAPI) that provide fine-grained access to API resources.

UAA roles

Roles provided by UAA are most similar to project scoped Google Cloud IAM roles:

  • Admin users in Cloud Foundry can perform administrative activities for all Cloud Foundry organizations and spaces. The role is most similar to the roles/container.admin Google Cloud IAM role.
  • Admin read-only users in Cloud Foundry can access all Cloud Foundry API endpoints. The role is most similar to the roles/container.admin Google Cloud IAM role.
  • Global auditor users in Cloud Foundry have read access to all Cloud Foundry API endpoints except for secrets. There is no equivalent Google Cloud IAM role, but you can create a custom role with similar permissions.

Cloud Controller API roles

Roles provided by CAPI are most similar to Kf roles granted within a cluster to subjects that have the roles/container.viewer Google Cloud IAM role on the owning project:

  • Space auditors in Cloud Foundry have read-access to resources in a CF space. The role is most similar to the space-auditor Kf role.
  • Space developers in Cloud Foundry have the ability to deploy and manage applications in a CF space. The role is most similar to the space-developer Kf role.
  • Space managers in Cloud Foundry can modify settings for the CF space and assign users to roles. The role is most similar to the space-manager Kf role.

What’s next

3 - Configure role-based access control

Learn to grant users different roles on a cluster.

The following steps guide you through configuring role-based access control (RBAC) in a Kf Space.

Before you begin

Please follow the GKE RBAC guide before continuing with the following steps.

Configure Identity and Access Management (IAM)

In addition to permissions granted through Kf RBAC, users, groups, or service accounts must also be authenticated to view GKE clusters at the project level. This requirement is the same as for configuring GKE RBAC, meaning users/groups must have at least the container.clusters.get IAM permission in the project containing the cluster. This permission is included in the roles/container.clusterViewer role and other, more privileged roles. For more information, review Interaction with Identity and Access Management.

Assign roles/container.clusterViewer to a user or group.

gcloud projects add-iam-policy-binding ${CLUSTER_PROJECT_ID} \
  --role="roles/container.clusterViewer" \
  --member="${MEMBER}"

Example member values are:

  • user:test-user@gmail.com
  • group:admins@example.com
  • serviceAccount:test123@example.domain.com

Manage Space membership as SpaceManager

The cluster admin, or members with the SpaceManager role, can assign a role to a user, group, or service account.

kf set-space-role MEMBER ROLE -t [Group|ServiceAccount|User]

The cluster admin, or members with the SpaceManager role, can remove a member from a role.

kf unset-space-role MEMBER ROLE -t [Group|ServiceAccount|User]

You can view members and their roles within a Space.

kf space-users

Examples

Assign SpaceDeveloper role to a user.

kf set-space-role alice@example.com SpaceDeveloper

Assign SpaceDeveloper role to a group.

kf set-space-role devs@example.com SpaceDeveloper -t Group

Assign SpaceDeveloper role to a Service Account.

kf set-space-role sa-dev@example.domain.com SpaceDeveloper -t ServiceAccount

App development as SpaceDeveloper

Members with the SpaceDeveloper role can perform Kf App development operations within the Space.

To push an App:

kf push APP_NAME -p [PATH_TO_APP_ROOT_DIRECTORY]

To view logs of an App:

kf logs APP_NAME

SSH into a Kubernetes Pod running the App:

kf ssh APP_NAME

View available service brokers:

kf marketplace

View Apps as SpaceManager or SpaceAuditor

Members with the SpaceManager or SpaceAuditor role can view available Apps within the Space:

kf apps

View Kf Spaces within a cluster

All roles (SpaceManager, SpaceDeveloper, and SpaceAuditor) can view available Kf Spaces within a cluster:

kf spaces

View Space members and their roles within a Space.

kf space-users

Impersonation flags

To verify a member’s permissions, a member with more privileged permissions can test another member’s permissions using the impersonation flags: --as and --as-group.

For example, as a cluster admin, you can verify if a user (username: bob) has permission to push an App.

kf push APP_NAME --as bob

Verify a group (manager-group@example.com) has permission to assign permission to other members.

kf set-space-role bob SpaceDeveloper --as-group manager-group@example.com
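
Outside of Kf, plain kubectl supports the same style of check with its own impersonation support. A minimal sketch, assuming the Space is backed by a Namespace of the same name:

kubectl auth can-i create pods --namespace SPACE_NAME --as bob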

4 - Enable compute isolation

Isolate the underlying nodes that certain Apps or Builds are scheduled onto.

Kf Apps can be deployed on dedicated nodes in the cluster. This feature is useful when you want more control over which nodes an App Pod lands on. For example:

  • If you are sharing the same cluster for different Apps but want dedicated nodes for a particular App.
  • If you want dedicated nodes for a given organization (Kf Space).
  • If you want to target a specific operating system like Windows.
  • If you want to co-locate Pods from two different services that frequently communicate.

To enable compute isolation, Kf uses the Kubernetes nodeSelector. To use this feature, first add labels on the nodes or node pools where you want your App Pods to land and then add the same qualifying labels on the Kf Space. All the Apps installed in this Space then land on the nodes with matching labels.

Kf creates a Kubernetes Pod to execute each Kf Build, and the buildNodeSelector feature can be used to isolate compute resources so they execute only the Build Pods. One use case is to isolate Build Pods to run on nodes with SSDs while running the App Pods on other nodes. The BuildNodeSelectors feature provides compute resource optimization and flexibility in the cluster. See the section ‘Configure BuildNodeSelectors and a Build node pool’ on this page.

Configure nodeSelector in a Kf cluster

By default, compute isolation is disabled. Use the following procedure to configure labels and nodeSelector.

  1. Add a label (disktype=ssd) on the node where you want your application pods to land.

    kubectl label nodes nodeid disktype=ssd
    
  2. Add the same label on the Kf Space. All Apps deployed in this Space will then land on the qualifying nodes.

    kf configure-space set-nodeselector space-name disktype ssd
    

    You can add multiple labels by running the same command again.

  3. Check the label is configured.

    kf configure-space get-nodeselector space-name
    
  4. Delete the label from the space.

    kf configure-space unset-nodeselector space-name disktype
    

Override nodeSelector for Kf stacks

Deployment of Kf Apps can be further targeted based on which stack is used to build and package the App. For example, if you want applications built with spaceStacksV2 to land on nodes with Linux kernel 4.4.1, you can set a nodeSelector on the stack; nodeSelector values on a stack override the values configured on the Space.

To configure the nodeSelector on a stack:

  1. Edit the config-defaults of your Kf cluster and add the labels.

    $ kubectl -n kf edit configmaps config-defaults
    
  2. Add nodeSelector to the stacks definition.

    .....
    spaceStacksV2: |
      - name: cflinuxfs3
        image: cloudfoundry/cflinuxfs3
        nodeSelector:
          OS_KERNEL: LINUX_4.4.1
    .....
    

Configure BuildNodeSelectors and a Build node pool

Build node selectors are only effective at overriding the node selectors for Build Pods; they do not affect App Pods. For example, if you specify both node selectors on the Space and Build node selectors in KfSystem, App Pods get the Space node selectors while Build Pods get the Build node selectors from KfSystem. If node selectors are only specified on the Space, both App and Build Pods get the node selectors from the Space.

  1. Add labels (for example, disktype=ssd) on the nodes that you want your Build pods to be assigned to.

    kubectl label nodes nodeid disktype=ssd
    
  2. Add or update Build node selectors (as key:value pairs) by patching the KfSystem CR.

    kubectl patch kfsystem kfsystem --type='json' -p='[{"op": "replace", "path": "/spec/kf/config/buildNodeSelectors", "value": {"<key>":"<value>"}}]'
    

    For example, to add disktype=ssd as the Build node selector:

    kubectl patch kfsystem kfsystem --type='json' -p='[{"op": "replace", "path": "/spec/kf/config/buildNodeSelectors", "value": {"disktype":"ssd"}}]'
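
To confirm the selectors were recorded, you can read them back; a minimal sketch using jsonpath:

kubectl get kfsystem kfsystem -o jsonpath='{.spec.kf.config.buildNodeSelectors}'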
    

5 - Kubernetes Roles

Understand how Kf uses Kubernetes’ RBAC to assign roles.

The following sections describe the Kubernetes ClusterRoles that are created by Kf and list the permissions that are contained in each ClusterRole.

Space developer role

The Space developer role aggregates permissions application developers use to deploy and manage applications within a Kf Space.

You can retrieve the permissions granted to Space developers on your cluster using the following command.

kubectl describe clusterrole space-developer

Space auditor role

The Space auditor role aggregates read-only permissions that auditors and automated tools use to validate applications within a Kf Space.

You can retrieve the permissions granted to Space auditors on your cluster using the following command.

kubectl describe clusterrole space-auditor

Space manager role

The Space manager role aggregates permissions that allow delegation of duties to others within a Kf Space.

You can retrieve the permissions granted to Space managers on your cluster using the following command.

kubectl describe clusterrole space-manager

Dynamic Space manager role

Each Kf Space creates a ClusterRole named SPACE_NAME-manager, where SPACE_NAME is the name of the Space. This ClusterRole is called the dynamic manager role.

Kf automatically grants all subjects with the space-manager role within the Space the dynamic manager role at the cluster scope. The permissions for the dynamic manager role allow Space managers to update settings on the Space with the given name.

You can retrieve the permissions granted to the dynamic manager role for any Space on your cluster using the following command.

kubectl describe clusterrole SPACE_NAME-manager

Kf cluster reader role

Kf automatically grants the kf-cluster-reader role to all users on a cluster that already have the space-developer, space-auditor, or space-manager role within a Space.

You can retrieve the permissions granted to Kf cluster readers on your cluster using the following command.

kubectl describe clusterrole kf-cluster-reader