Operate and Maintain Kf
- 1: Service brokers
- 1.1: Service Brokers Overview
- 1.2: About Kf Cloud Service Broker
- 1.3: Deploy Kf Cloud Service Broker
- 1.4: Set up NFS platform
- 2: Customizing Kf
- 3: Testing Kf Components
- 4: Kf dependencies and architecture
- 5: Kf reference architecture diagrams
- 6: Logging and Monitoring
- 6.1: Create and use monitoring dashboards
- 6.2: Logging and monitoring
- 6.3: Logging and monitoring overview
- 6.4: View logs
- 7: Networking
- 8: Security
- 8.1: Security Overview
- 8.2: Role-based access control
- 8.3: Configure role-based access control
- 8.4: Enable compute isolation
- 8.5: Kubernetes Roles
- 9: Stacks and Buildpacks
1 - Service brokers
1.1 - Service Brokers Overview
Kf supports binding and provisioning apps to Open Service Broker (OSB) services.
Any compatible service broker can be installed using the create-service-broker
command, but only the Kf Cloud Service Broker is fully supported.
Special services such as syslog drains, volume services, route services, service keys, and shared services aren’t currently supported.
1.2 - About Kf Cloud Service Broker
The Kf Cloud Service Broker is a Service Broker bundle that includes the open source Cloud Service Broker and Google Cloud Brokerpak. It is made available as a public Docker image and ready to deploy as a Kubernetes service in Kf clusters. Once the Kf Cloud Service Broker service is deployed in a cluster, developers can provision Google Cloud backing services through the Kf Cloud Service Broker service, and bind the backing services to Kf Apps.
Requirements
- Kf Cloud Service Broker requires a MySQL instance and a service account with permission to access both the MySQL instance and the Google Cloud backing services to be provisioned. The connection from the Kf Cloud Service Broker to the MySQL instance goes through the Cloud SQL Auth Proxy.
- Requests to access Google Cloud services (for example: Cloud SQL for MySQL or Cloud Memorystore for Redis) are authenticated via Workload Identity.
Override Brokerpak defaults
Brokerpaks are essentially a Terraform plan and related dependencies in a tar file. You can inspect the Terraform plans to see what the defaults are, and then you can tell Kf Cloud Service Broker to override them when creating new services.
For example, the Terraform plan for MySQL includes a variable called authorized_network. If it is not overridden, the default VPC is used. If you'd like to override the default, you can pass the value during service creation. Here are some examples:
- Override the compute region config:
kf create-service csb-google-postgres small spring-music-postgres-db -c '{"config":"YOUR_COMPUTE_REGION"}'
- Override the authorized_network and the compute region config:
kf create-service csb-google-postgres small spring-music-postgres-db -c '{"config":"YOUR_COMPUTE_REGION","authorized_network":"YOUR_CUSTOM_VPC_NAME"}'
You can learn more by reading the MySQL Plans and Configs documentation.
Architecture
The following Kf Cloud Service Broker architecture shows how instances are created.
- The Kf Cloud Service Broker (CSB) is installed in its own namespace.
- On installation, a MySQL instance must be provided to persist business logic used by Kf Cloud Service Broker. Requests are sent securely from the Kf Cloud Service Broker pod to the MySQL instance via the Cloud SQL Auth Proxy.
- On service provisioning, a Kf Service custom resource is created. The reconciler of the Kf Service provisions Google Cloud backing services using the Open Service Broker API.
- When a request to provision or deprovision backing resources is received, Kf Cloud Service Broker sends resource creation or deletion requests to the corresponding Google Cloud service, and these requests are authenticated with Workload Identity. It also persists its business logic (for example, the mapping of Kf services to backing services and service bindings) to the MySQL instance.
- On backing service creation success, the backing service is bound to an App via VCAP_SERVICES.
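As a minimal sketch (assuming an App named my-app and a provisioned service instance named my-db; both names are placeholders), binding a service and inspecting the credentials injected through VCAP_SERVICES might look like:
kf bind-service my-app my-db
kf restart my-app
kf vcap-services my-app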
What’s next?
1.3 - Deploy Kf Cloud Service Broker
This page shows you how to deploy Kf Cloud Service Broker and use it to provision or deprovision backing resources. Read about the concepts and architecture to learn more about the Kf Cloud Service Broker.
Create environment variables
Linux
export PROJECT_ID=YOUR_PROJECT_ID
export CLUSTER_PROJECT_ID=YOUR_PROJECT_ID
export CSB_IMAGE_DESTINATION=YOUR_DOCKER_REPO_CSB_PATH
export CLUSTER_NAME=kf-cluster
export INSTANCE_NAME=cloud-service-broker
export COMPUTE_REGION=us-central1
Windows PowerShell
Set-Variable -Name PROJECT_ID -Value YOUR_PROJECT_ID
Set-Variable -Name CLUSTER_PROJECT_ID -Value YOUR_PROJECT_ID
Set-Variable -Name CSB_IMAGE_DESTINATION -Value YOUR_DOCKER_REPO_CSB_PATH
Set-Variable -Name CLUSTER_NAME -Value kf-cluster
Set-Variable -Name INSTANCE_NAME -Value cloud-service-broker
Set-Variable -Name COMPUTE_REGION -Value us-central1
Build the broker
First you’ll want to download and build the broker and push it to your container registry:
git clone --single-branch --branch main https://github.com/google/kf.git kf
cd kf/samples/cloud-service-broker
docker build --tag ${CSB_IMAGE_DESTINATION} .
docker push ${CSB_IMAGE_DESTINATION}
Set up the Kf Cloud Service Broker database
Create a MySQL instance.
gcloud sql instances create ${INSTANCE_NAME} --cpu=2 --memory=7680MB --require-ssl --region=${COMPUTE_REGION}
Create a database named servicebroker in the MySQL instance.
gcloud sql databases create servicebroker -i ${INSTANCE_NAME}
Create a username and password to be used by the broker.
gcloud sql users create csbuser -i ${INSTANCE_NAME} --password=csbpassword
Set up a Google Service Account for the broker
Create a Google Service Account.
gcloud iam service-accounts create csb-${CLUSTER_NAME}-sa \
  --project=${CLUSTER_PROJECT_ID} \
  --description="GSA for CSB at ${CLUSTER_NAME}" \
  --display-name="csb-${CLUSTER_NAME}"
Grant roles/cloudsql.client permissions to the Service Account. This is required to connect the service broker pod to the Cloud SQL for MySQL instance through the Cloud SQL Auth Proxy.
gcloud projects add-iam-policy-binding ${CLUSTER_PROJECT_ID} \
  --member="serviceAccount:csb-${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
  --role="roles/cloudsql.client"
Grant additional Google Cloud permissions to the Service Account.
gcloud projects add-iam-policy-binding ${CLUSTER_PROJECT_ID} \
  --member="serviceAccount:csb-${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
  --role="roles/compute.networkUser"
gcloud projects add-iam-policy-binding ${CLUSTER_PROJECT_ID} \
  --member="serviceAccount:csb-${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
  --role="roles/cloudsql.admin"
gcloud projects add-iam-policy-binding ${CLUSTER_PROJECT_ID} \
  --member="serviceAccount:csb-${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
  --role="roles/redis.admin"
Verify the permissions.
gcloud projects get-iam-policy ${CLUSTER_PROJECT_ID} \
  --filter='bindings.members:serviceAccount:"CSB_SERVICE_ACCOUNT_NAME"' \
  --flatten="bindings[].members"
Set up Workload Identity for the broker
Bind the Google Service Account with the Kubernetes Service Account.
gcloud iam service-accounts add-iam-policy-binding "csb-${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
  --project=${CLUSTER_PROJECT_ID} \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:${CLUSTER_PROJECT_ID}.svc.id.goog[kf-csb/csb-user]"
Verify the binding.
gcloud iam service-accounts get-iam-policy "csb-${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
  --project=${CLUSTER_PROJECT_ID}
Set up a Kubernetes Secret to share configuration with the broker
Create a config.yml file.
cat << EOF >> ./config.yml
gcp:
  credentials: ""
  project: ${CLUSTER_PROJECT_ID}
db:
  host: 127.0.0.1
  password: csbpassword
  user: csbuser
  tls: false
api:
  user: servicebroker
  password: password
EOF
Create the kf-csb namespace.
kubectl create ns kf-csb
Create the Kubernetes Secret.
kubectl create secret generic csb-secret --from-file=config.yml -n kf-csb
Install the Kf Cloud Service Broker
Copy kf-csb-template.yaml into kf-csb.yaml as a working copy:
cp kf-csb-template.yaml /tmp/kf-csb.yaml
Edit /tmp/kf-csb.yaml and replace the placeholders with final values. In the example below, sed is used.
sed -i "s|<GSA_NAME>|csb-${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com|g" /tmp/kf-csb.yaml
sed -i "s|<INSTANCE_CONNECTION_NAME>|${CLUSTER_PROJECT_ID}:${COMPUTE_REGION}:${INSTANCE_NAME}|g" /tmp/kf-csb.yaml
sed -i "s|<DB_PORT>|3306|g" /tmp/kf-csb.yaml
sed -i "s|<CSB_IMAGE_DESTINATION>|${CSB_IMAGE_DESTINATION}|g" /tmp/kf-csb.yaml
Apply the YAML for Kf Cloud Service Broker.
kubectl apply -f /tmp/kf-csb.yaml
Verify the Cloud Service Broker installation status.
kubectl get pods -n kf-csb
Create a service broker
kf create-service-broker cloud-service-broker servicebroker password http://csb-controller.kf-csb/
Validate installation
Check for available services in the marketplace.
kf marketplace
If everything is installed and configured correctly, you should see the following:
$ kf marketplace
Broker Name Namespace Description
cloud-service-broker csb-google-bigquery A fast, economical and fully managed data warehouse for large-scale data analytics.
cloud-service-broker csb-google-dataproc Dataproc is a fully-managed service for running Apache Spark and Apache Hadoop clusters in a simpler, more cost-efficient way.
cloud-service-broker csb-google-mysql Mysql is a fully managed service for the Google Cloud Platform.
cloud-service-broker csb-google-postgres PostgreSQL is a fully managed service for the Google Cloud Platform.
cloud-service-broker csb-google-redis Cloud Memorystore for Redis is a fully managed Redis service for the Google Cloud Platform.
cloud-service-broker csb-google-spanner Fully managed, scalable, relational database service for regional and global application data.
cloud-service-broker csb-google-stackdriver-trace Distributed tracing service
cloud-service-broker csb-google-storage-bucket Google Cloud Storage that uses the Terraform back-end and grants service accounts IAM permissions directly on the bucket.
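As a quick smoke test (the instance name below is a placeholder, and available plans depend on your Brokerpak), you can provision a backing service and then remove it again:
kf create-service csb-google-mysql small test-mysql-instance
kf services
kf delete-service test-mysql-instance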
Clean up
Delete cloud-service-broker.
kf delete-service-broker cloud-service-broker
Delete CSB components.
kubectl delete ns kf-csb
Delete the broker’s database instance.
gcloud sql instances delete ${INSTANCE_NAME} --project=${CLUSTER_PROJECT_ID}
Remove the IAM policy bindings.
gcloud projects remove-iam-policy-binding ${CLUSTER_PROJECT_ID} \
  --member="serviceAccount:csb-${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
  --role=roles/cloudsql.client
gcloud projects remove-iam-policy-binding ${CLUSTER_PROJECT_ID} \
  --member="serviceAccount:csb-${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
  --role=roles/compute.networkUser
gcloud projects remove-iam-policy-binding ${CLUSTER_PROJECT_ID} \
  --member="serviceAccount:csb-${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com" \
  --role=roles/redis.admin
Remove the GSA.
gcloud iam service-accounts delete csb-${CLUSTER_NAME}-sa@${CLUSTER_PROJECT_ID}.iam.gserviceaccount.com \
  --project=${CLUSTER_PROJECT_ID}
1.4 - Set up NFS platform
Kf supports Kubernetes native NFS and exposes it with a dedicated nfsvolumebroker service broker for developers to consume. This broker has an nfs service offering which has a service plan named existing.
Use kf marketplace
to see the service offering:
$ kf marketplace
...
Broker Name Namespace Description
nfsvolumebroker nfs mout nfs shares
...
Use kf marketplace -s nfs
to see the service plan:
$ kf marketplace -s nfs
...
Broker Name Free Description
nfsvolumebroker existing true mount existing nfs server
...
Requirements
You need an NFS volume that can be accessed by your Kubernetes cluster, for example Cloud Filestore, which is Google's managed NFS solution and provides access to clusters in the same Google Cloud project.
Prepare NFS
If you have an existing NFS service, you can use that. If you want a Google-managed NFS service, create a Filestore instance and Kf will automatically configure the cluster to use it.
Warning: You only need to create the NFS instance. Kf will create relevant Kubernetes objects, including PersistentVolume and PersistentVolumeClaims. Do not manually mount the volume.
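As an illustrative sketch only (the instance name, server address, share path, and mount point are placeholders, and the parameter names follow the Cloud Foundry NFS volume service conventions), provisioning and binding the existing plan might look like:
kf create-service nfs existing my-filestore -c '{"share":"NFS_SERVER_IP/SHARE_PATH","capacity":"1Gi"}'
kf bind-service my-app my-filestore -c '{"mount":"/mnt/nfs"}'
kf restart my-app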
What’s next?
2 - Customizing Kf
2.1 - Customizing Overview
Kf supports customizing some cluster-wide configuration by manipulating the kfsystem custom resource.
2.2 - Cluster Configuration
Kf uses a Kubernetes configmap named config-defaults
in
the kf
namespace to store cluster wide configuration settings.
This document explains its structure and fields.
Structure of the config-defaults configmap
The configmap contains three types of key/value pairs in the .data
field:
- Comment keys prefixed by _ contain examples, notes, and warnings.
- String keys contain plain text values.
- Object keys contain a JSON or YAML value that has been encoded as a string.
Example:
_note: "This is some note"
stringKey: "This is a string key that's not encoded as JSON or YAML."
objectKey: |
- "These keys contain nested YAML or JSON."
- true
- 123.45
Example section
The example section under the _example
key contains explanations for other
fields and examples. Changes to this section have no effect.
Space container registry
The spaceContainerRegistry
property is a plain text value that specifies the
default container registry each space uses to store built images.
Example:
spaceContainerRegistry: gcr.io/my-project
Space cluster domains
The spaceClusterDomains
property is a string encoded YAML array of domain objects.
Each space in the cluster adds all items in the array to its list of domains that developers can bind their apps to.
Fields | |
---|---|
domain | The domain name to make available. May contain the substitutions $(SPACE_NAME) and $(CLUSTER_INGRESS_IP) described under Customize domain templates. |
gatewayName | (Optional) Overrides the Istio gateway that routes will be bound to. Defaults to kf/external-gateway. |
Example:
spaceClusterDomains: |
# Support canonical and vanity domains
- domain: $(SPACE_NAME).prod.example.com
- domain: $(SPACE_NAME).kf.us-east1.prod.example.com
# Using a dynamic DNS resolver
- domain: $(SPACE_NAME).$(CLUSTER_INGRESS_IP).nip.io
# Creating an internal domain only visible within the cluster
- domain: $(SPACE_NAME)-apps.internal
gatewayName: kf/internal-gateway
Buildpacks V2 lifecycle builder
The buildpacksV2LifecycleBuilder
property contains the version of the Cloud Foundry
builder
binary used to execute buildpack V2 builds.
The value is a Git reference. To use a specific version, append an @
symbol
followed by a Git SHA to the end.
Example:
buildpacksV2LifecycleBuilder: "code.cloudfoundry.org/buildpackapplifecycle/builder@GIT_SHA"
Buildpacks V2 lifecycle launcher
The buildpacksV2LifecycleLauncher
property contains the version of the Cloud Foundry
launcher
binary built into every buildpack V2 application.
The value is a Git reference. To use a specific version, append an @
symbol
followed by a Git SHA to the end.
Example:
buildpacksV2LifecycleLauncher: "code.cloudfoundry.org/buildpackapplifecycle/launcher@GIT_SHA"
Buildpacks V2 list
The spaceBuildpacksV2
property is a string encoded YAML array that holds an ordered
list of default buildpacks that are used to build applications compatible with
the V2 buildpacks process.
Fields | |
---|---|
name | A short name developers can use to reference the buildpack by in their application manifests. |
url | The URL used to fetch the buildpack. |
disabled | Used to prevent this buildpack from executing. |
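The property follows the same string encoded YAML convention as the other lists in this ConfigMap. A sketch (the buildpack names and URLs below are illustrative, not defaults shipped with Kf):
spaceBuildpacksV2: |
  - name: java_buildpack
    url: https://github.com/cloudfoundry/java-buildpack
  - name: staticfile_buildpack
    url: https://github.com/cloudfoundry/staticfile-buildpack
    disabled: true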
Stacks V2 list
The spaceStacksV2 property is a string encoded YAML array that holds an ordered list of stacks that can be used with Cloud Foundry compatible builds.
Fields | |
---|---|
name | A short name developers can use to reference the stack by in their application manifests. |
image | URL of the container image to use as the stack. For more information, see https://kubernetes.io/docs/concepts/containers/images. |
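A sketch of the corresponding configuration (the stack name and image below are illustrative; cflinuxfs3 is the common open source Cloud Foundry stack image):
spaceStacksV2: |
  - name: cflinuxfs3
    image: cloudfoundry/cflinuxfs3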
Stacks V3 list
The spaceStacksV3
property is a string encoded YAML array that holds an ordered
list of stacks that can be used with
Cloud Native Buildpack
builds.
Fields | |
---|---|
name | A short name developers can use to reference the stack by in their application manifests. |
description | A short description of the stack shown when running kf stacks. |
buildImage | URL of the container image to use as the builder. For more information, see https://kubernetes.io/docs/concepts/containers/images. |
runImage | URL of the container image to use as the base for all apps built with this stack. For more information, see https://kubernetes.io/docs/concepts/containers/images. |
nodeSelector | (Optional) A NodeSelector used to indicate which nodes applications built with this stack can run on. |
Example:
spaceStacksV3: |
- name: heroku-18
description: The official Heroku stack based on Ubuntu 18.04
buildImage: heroku/pack:18-build
runImage: heroku/pack:18
nodeSelector:
kubernetes.io/os: windows
Default to V3 Stack
The spaceDefaultToV3Stack
property contains a quoted value true
or false
indicating whether spaces should use V3 stacks if a user doesn’t specify one.
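Example:
spaceDefaultToV3Stack: "true"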
Feature flags
The featureFlags
property contains a string encoded YAML map of feature flags
that can enable and disable features of Kf.
Flag names that aren’t supported by Kf will be ignored.
Flag Name | Default | Purpose |
---|---|---|
disable_custom_builds | false | Disable developer access to arbitrary Tekton build pipelines. |
enable_dockerfile_builds | true | Allow developers to build source code from dockerfiles. |
enable_custom_buildpacks | true | Allow developers to specify external buildpacks in their applications. |
enable_custom_stacks | true | Allow developers to specify custom stacks in their applications. |
Example:
featureFlags: |
disable_custom_builds: false
enable_dockerfile_builds: true
enable_some_feature: true
ProgressDeadlineSeconds
ProgressDeadlineSeconds
contains a configurable quoted integer indicating the maximum time allowed between a state transition and reaching a stable state before provisioning or deprovisioning times out when pushing an application. The default value is 600 seconds.
TerminationGracePeriodSeconds
The TerminationGracePeriodSeconds
contains a configurable quoted integer indicating the time between when the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. The default value is 30
seconds.
2.3 - Customizing Kf Features
Build Retention
You can control how many Kf Builds are kept before being garbage collected.
kubectl patch \
  kfsystem kfsystem \
  --type='json' \
  -p="[{'op': 'replace', 'path': '/spec/kf/config/buildRetentionCount', 'value': 1}]"
Task Retention
You can control how many Kf Tasks are kept before being garbage collected.
kubectl patch \
  kfsystem kfsystem \
  --type='json' \
  -p="[{'op': 'replace', 'path': '/spec/kf/config/taskRetentionCount', 'value': 1}]"
Enable or Disable the Istio Sidecar
If you do not require the Istio sidecar for Build pods, it can be disabled by setting the value to true. Enable the sidecar by setting the value to false.
kubectl patch \
  kfsystem kfsystem \
  --type='json' \
  -p="[{'op': 'replace', 'path': '/spec/kf/config/buildDisableIstioSidecar', 'value': true}]"
Enable or Disable Routing Retries
Allows enabling or disabling retries in the VirtualServices that route traffic to Apps. Kf leaves this value unset by default and it's inherited from Istio.
Istio's default retry mechanism attempts to make up for instability inherent in service meshes; however, allowing retries requires the contents of the payload to be buffered within Envoy. This may fail for large payloads, in which case buffering needs to be disabled at the expense of some stability.
Values for routeDisableRetries
:
- false: Inherit Istio's retry settings. (Default)
- true: Set retries to 0.
kubectl patch \
kfsystem kfsystem \
--type='json' \
-p="[{'op':'add','path':'/spec/kf/config/routeDisableRetries','value':true}]"
Enable or Disable Routing Hosts Ignoring Any Port Numbers
Allows enabling or disabling routing hosts while ignoring any specified port number. By default, hosts are matched using the exact value specified in the route Host (e.g. a request with a Host header value of example.com:443 does not match the preconfigured route Host example.com). When enabled, ports are ignored and only hosts are used (e.g. example.com:443 matches example.com).
Note: This feature only works in clusters with Istio 1.15+. In older versions it functions as though it were disabled.
Values for routeHostIgnoringPort
:
- false: Match the Host header in the request to the exact configured route Host. (Default)
- true: Use a regexp to match the configured route Host, ignoring any port specified in the Host header of the request.
kubectl patch \
kfsystem kfsystem \
--type='json' \
-p="[{'op':'add','path':'/spec/kf/config/routeHostIgnoringPort','value':true}]"
Build Pod Resource Limits
The default Build pod resource size can be increased to accommodate very large builds. The units for the value are Mi or Gi.
kubectl patch \
  kfsystem kfsystem \
  --type='json' \
  -p="[{'op': 'replace', 'path': '/spec/kf/config/buildPodResources', 'value': {'limits': {'memory': '234Mi'}}}]"
Read Kubernetes container resource docs for more information about container resource management.
Robust Build Snapshots
Kf uses Kaniko to build the final application containers in the V2 buildpack lifecycle. To produce image layers, Kaniko needs to take "snapshots" of the image, which requires iterating over all files in the image and checking whether they've changed.
The fast mode checks for file attribute changes (like timestamps and size) and the slow mode checks full file hashes to determine if a file changed.
Kf apps aren’t expected to overwrite operating system files in their build, so fast mode should be used to reduce disk pressure.
Values for buildKanikoRobustSnapshot
:
- false: Use the fast snapshot mode for V2 builds. (Default)
- true: Use the robust snapshot mode for V2 builds to catch uncommon cases.
kubectl patch \
  kfsystem kfsystem \
  --type='json' \
  -p="[{'op': 'replace', 'path': '/spec/kf/config/buildKanikoRobustSnapshot', 'value': true}]"
Self Signed Certificates for Service Brokers
If you want to use self-signed certificates for TLS (https instead of http) for the service broker URL, the Kf controller requires the CA certificate. To configure Kf for this scenario, create an immutable Kubernetes Secret in the kf namespace and update the kfsystem.spec.kf.config.secrets.controllerCACerts.name object to point to it.
Create a secret to store the self-signed certificate.
kubectl create secret generic cacerts -nkf --from-file /path/to/cert/certs.pem
Make the secret immutable.
kubectl patch -nkf secret cacerts \
  --type='json' \
  -p="[{'op':'add','path':'/immutable','value':true}]"
Update kfsystem to point to the secret.
kubectl patch \
  kfsystem kfsystem \
  --type='json' \
  -p="[{'op':'add','path':'/spec/kf/config/secrets','value':{'controllerCACerts':{'name':'cacerts'}}}]"
Set CPU minimums and ratios
Application default CPU ratios and minimums can be set in the operator.
Values are set in
CPU units.
Units are typically expressed in millicpus (m
), or thousandths of a CPU.
The spec.kf.config.appCPUMin
property specifies a minimum amount of CPU per
application, even if the developer has specified less.
kubectl patch \
kfsystem kfsystem \
--type='json' \
-p="[{'op':'add','path':'/spec/kf/config/appCPUMin','value':'200m'}]"
The spec.kf.config.appCPUPerGBOfRAM
property specifies a default amount of CPU
to give each app per GB of RAM requested.
You can choose different approaches based on the desired outcome:
- Choose the ratio of CPU to RAM for the cluster’s nodes if you want to maximize utilization.
- Choose a ratio of 1 CPU to 4 GB of RAM, which typically works well for I/O or memory bound web applications.
kubectl patch \
kfsystem kfsystem \
--type='json' \
-p="[{'op':'add','path':'/spec/kf/config/appCPUPerGBOfRAM','value':'250m'}]"
Set buildpacks using git tags
Buildpacks can support pinning by using git tags instead of automatically sourcing the latest buildpack from a git repository.
Add a new buildpack as follows and use a git tag to specify which version of the buildpack the app should use. Otherwise the buildpack will default to the latest version.
For example, to pin Golang buildpack version 1.9.49 do:
kubectl patch \
kfsystem kfsystem \
--type='json' \
-p='[{"op":"add","path":"/spec/kf/config/spaceBuildpacksV2","value":[{"name":"go_buildpack_v1.9.49","url":"https://github.com/cloudfoundry/go-buildpack.git#v1.9.49"}]}]'
This command adds the following to the config-defaults ConfigMap resource:
data:
spaceBuildpacksV2: |
- name: go_buildpack_v1.9.49
url: https://github.com/cloudfoundry/go-buildpack.git#v1.9.49
The kubectl patch
command replaces all the existing buildpacks in the config-defaults ConfigMap. If you would like the existing buildpacks to remain, they too need to be included in the command.
To get the list of existing buildpacks in the ConfigMap, run the following command:
kubectl describe configmaps config-defaults -n kf
Set ProgressDeadlineSeconds
ProgressDeadlineSeconds can be set in the kfsystem operator.
kubectl patch \
kfsystem kfsystem \
--type='json' \
-p="[{'op':'add','path':'/spec/kf/config/progressDeadlineSeconds','value':100}]"
Set TerminationGracePeriodSeconds
TerminationGracePeriodSeconds can be set in the kfsystem operator.
kubectl patch \
kfsystem kfsystem \
--type='json' \
-p="[{'op':'add','path':'/spec/kf/config/terminationGracePeriodSeconds','value':200}]"
Enable/Disable App Start Command Lookup
Allows enabling or disabling start command lookup in the App reconciler.
This behavior requires the reconciler to fetch container configuration for every app from the container registry
and enables displaying the start command on kf push
and in kf app
.
Enabling this behavior on a large cluster may make reconciliation times for Apps slow.
Values for appDisableStartCommandLookup
:
- false: Enable start command lookup. (Default)
- true: Disable start command lookup.
kubectl patch \
kfsystem kfsystem \
--type='json' \
-p="[{'op':'add','path':'/spec/kf/config/appDisableStartCommandLookup','value':true}]"
Enable/Disable Kubernetes Service Account (KSA) overrides
Allows enabling or disabling the ability to override the Kubernetes Service Account for Apps via annotation.
Values for appEnableServiceAccountOverride
:
- false: Don't allow overriding the service account. (Default)
- true: Allow developers to override KSAs for their Apps.
kubectl patch \
kfsystem kfsystem \
--type='json' \
-p="[{'op':'add','path':'/spec/kf/config/appEnableServiceAccountOverride','value':true}]"
Set default Kf Task timeout
Kf uses Tekton TaskRuns as its mechanism to run Kf Tasks. Tekton may impose a default timeout on TaskRuns that differs depending on the version of Tekton you have installed.
You can override this setting either on the Tekton configmap (which sets the value for both Kf Tasks and Builds) or on the Kf operator to apply the value only to Tasks.
The following values are supported:
- null: Inherit the value from Tekton. (Default)
- A value <= 0: Tasks get an infinite timeout.
- A value > 0: Tasks get the given timeout.
Consider setting a long, but non-infinite timeout to prevent improperly programmed tasks from consuming resources.
kubectl patch \
kfsystem kfsystem \
--type='json' \
-p="[{'op':'add','path':'/spec/kf/config/taskDefaultTimeoutMinutes','value':-1}]"
Enable/Disable NFS in Kf Tasks
You can enable or disable the ability for Tasks run from Apps to mount NFS volumes. Mounting NFS volumes requires FUSE, which grants Task Pods that mount NFS additional system privileges on the node and may be a security concern.
kubectl patch \
kfsystem kfsystem \
--type='json' \
-p="[{'op':'add','path':'/spec/kf/config/taskDisableVolumeMounts','value':true}]"
3 - Testing Kf Components
This page describes the tests that Kf runs and notable gaps you may want to cover when using Kf as your platform.
Components
Unit tests
Kf uses Go’s unit testing functionality to perform extensive validation of the business logic in the CLI, the control plane components, and “golden” files which validate the structure of I/O that Kf expects to be deterministic (e.g. certain CLI output).
Unit tests can be executed using the ./hack/unit-test.sh
shell script in the Kf repository.
This script skips exercising the end to end tests.
Unit tests are naturally limited because they have to mock interactions with external services, for example the Kubernetes control plane and container registries.
End to end tests
Kf uses specially marked Go tests for acceptance and end to end testing.
These can be executed using the ./hack/integration-test.sh
and ./hack/acceptance-test.sh
shell scripts. These scripts will test against the currently targeted Kf cluster.
These tests check for behavior between Kf and other components, but are still limited because they tend to test one specific thing at a time and are built for speed.
Load tests
Kf can run load tests against the platform using the ./ci/cloudbuild/test.yaml
Cloud Build
template with the _STRESS_TEST_BASELOAD_FACTOR
environment variable set.
This will deploy the Diego stress test workloads to Kf to check how the platform behaves with failing Apps.
Gaps
Kf runs unit tests and end to end tests for each release. You may want to augment these with additional qualification tests on your own cluster to check for:
- Compatibility with your service brokers.
- Compatibility with all buildpacks you normally use.
- Compatibility with representative workloads.
- Compatibility with intricate combinations of features (e.g. service bindings, docker containers, buildpacks, routing).
- Compatibility with non-Kf Kubernetes components.
4 - Kf dependencies and architecture
Kf requires Kubernetes and several other OSS projects to run. Some of the dependencies are satisfied with Google-managed services—for example, GKE provides Kubernetes.
Dependencies
Get CRD details
Kf supports the kubectl
subcommand explain
. It allows you to list the fields in Kf CRDs to understand how to create Kf objects via automation instead of manually via the CLI. This command is designed to be used with ConfigSync to automate creation and management of resources like Spaces across many clusters. You can use this against any of the component kinds
below.
In this example, we examine the kind
called space
in the spaces
CRD:
kubectl explain space.spec
The output looks similar to this:
$ kubectl explain space.spec
KIND: Space
VERSION: kf.dev/v1alpha1
RESOURCE: spec <Object>
DESCRIPTION:
SpaceSpec contains the specification for a space.
FIELDS:
buildConfig <Object>
BuildConfig contains config for the build pipelines.
networkConfig <Object>
NetworkConfig contains settings for the space's networking environment.
runtimeConfig <Object>
RuntimeConfig contains settings for the app runtime environment.
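You can drill into nested fields the same way, for example:
kubectl explain space.spec.networkConfig
kubectl explain space.spec.buildConfig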
Kf components
Kf installs several of its own Kubernetes
custom resources
and controllers.
The custom resources effectively serve as the Kf API and
are used by the kf
CLI to interact with the system. The controllers use
Kf’s CRDs to orchestrate the other components in the
system.
You can view the CRDs installed and used by Kf by running the following command:
kubectl api-resources --api-group=kf.dev
The output of that command is as follows:
NAME SHORTNAMES APIGROUP NAMESPACED KIND
apps kf.dev true App
builds kf.dev true Build
clusterservicebrokers kf.dev false ClusterServiceBroker
routes kf.dev true Route
servicebrokers kf.dev true ServiceBroker
serviceinstancebindings kf.dev true ServiceInstanceBinding
serviceinstances kf.dev true ServiceInstance
spaces kf.dev false Space
Apps
Apps represent a twelve-factor application deployed to Kubernetes. They encompass source code, configuration, and the current state of the application. Apps are responsible for reconciling:
- Kf Builds
- Kf Routes
- Kubernetes Deployments
- Kubernetes Services
- Kubernetes ServiceAccounts
- Kubernetes Secrets
You can list Apps using Kf or kubectl
:
kf apps
kubectl get apps -n space-name
Builds
Builds combine the source code and build configuration for Apps. They provision Tekton TaskRuns with the correct steps to actuate a Buildpack V2, Buildpack V3, or Dockerfile build.
You can list Builds using Kf or kubectl
:
kf builds
kubectl get builds -n space-name
ClusterServiceBrokers
ClusterServiceBrokers hold the connection information necessary to extend
Kf with a service broker. They are responsible for
fetching the catalog of services the broker provides and displaying them in
the output of kf marketplace
.
You can list ClusterServiceBrokers using kubectl
:
kubectl get clusterservicebrokers
Routes
Routes are a high level structure that contain HTTP routing rules. They are responsible for reconciling Istio VirtualServices.
You can list Routes using Kf or kubectl
:
kf routes
kubectl get routes -n space-name
ServiceBrokers
ServiceBrokers hold the connection information necessary to extend
Kf with a service broker. They are responsible for
fetching the catalog of services the broker provides and displaying them in
the output of kf marketplace
.
You can list ServiceBrokers using kubectl
:
kubectl get servicebrokers -n space-name
ServiceInstanceBinding
ServiceInstanceBindings hold the parameters to create a binding on a service broker and the credentials the broker returns for the binding. They are responsible for calling the bind API on the broker to bind the service.
You can list ServiceInstanceBindings using Kf or kubectl
:
kf bindings
kubectl get serviceinstancebindings -n space-name
ServiceInstance
ServiceInstances hold the parameters to create a service on a service broker. They are responsible for calling the provision API on the broker to create the service.
You can list ServiceInstances using Kf or kubectl
:
kf services
kubectl get serviceinstances -n space-name
Spaces
Spaces hold configuration information similar to Cloud Foundry organizations and spaces. They are responsible for:
- Creating the Kubernetes Namespace that other Kf resources are provisioned into.
- Creating Kubernetes NetworkPolicies to enforce network connection policies.
- Holding configuration and policy for Builds, Apps, and Routes.
You can list Spaces using Kf or kubectl
:
kf spaces
kubectl get spaces
Kf RBAC / Permissions
The following sections list permissions for Kf and its components to have correct access at the cluster level. These permissions are required and enabled by default in Kf; do not attempt to disable them.
Components | Namespace | Service Account |
---|---|---|
controller | kf | controller |
subresource-apiserver | kf | controller |
webhook | kf | controller |
appdevexperience-operator | appdevexperience | appdevexperience-operator |
Note that the appdevexperience-operator
service account has the same set of
permissions as controller
. The operator is what deploys all Kf
components, including custom resource definitions and controllers.
RBAC for Kf service accounts
The following apiGroup
definitions detail which access control
permissions components in Kf have on which API groups and resources for both the controller
and appdevexperience-operator
service accounts.
- apiGroups:
- "authentication.k8s.io"
resources:
- tokenreviews
verbs:
- create
- apiGroups:
- "authorization.k8s.io"
resources:
- subjectaccessreviews
verbs:
- create
- apiGroups:
- ""
resources:
- pods
- services
- persistentvolumeclaims
- persistentvolumes
- endpoints
- events
- configmaps
- secrets
verbs: ["*"]
- apiGroups:
- ""
resources:
- services
- services/status
verbs:
- create
- delete
- get
- list
- watch
- apiGroups:
- "apps"
resources:
- deployments
- daemonsets
- replicasets
- statefulsets
verbs: ["*"]
- apiGroups:
- "apps"
resources:
- deployments/finalizers
verbs:
- get
- list
- create
- update
- delete
- patch
- watch
- apiGroups:
- "rbac.authorization.k8s.io"
resources:
- clusterroles
- roles
- clusterrolebindings
- rolebindings
verbs:
- create
- delete
- update
- patch
- escalate
- get
- list
- deletecollection
- bind
- apiGroups:
- "apiregistration.k8s.io"
resources:
- apiservices
verbs:
- update
- patch
- create
- delete
- get
- list
- apiGroups:
- "pubsub.cloud.google.com"
resources:
- topics
- topics/status
verbs: ["*"]
- apiGroups:
- ""
resources:
- namespaces
- namespaces/finalizers
- serviceaccounts
verbs:
- get
- list
- create
- update
- watch
- delete
- patch
- apiGroups:
- "autoscaling"
resources:
- horizontalpodautoscalers
verbs:
- create
- delete
- get
- list
- update
- patch
- watch
- apiGroups:
- "coordination.k8s.io"
resources:
- leases
verbs: ["*"]
- apiGroups:
- "batch"
resources:
- jobs
- cronjobs
verbs:
- get
- list
- create
- update
- patch
- delete
- deletecollection
- watch
- apiGroups:
- "messaging.cloud.google.com"
resources:
- channels
verbs:
- delete
- apiGroups:
- "pubsub.cloud.google.com"
resources:
- pullsubscriptions
verbs:
- delete
- get
- list
- watch
- create
- update
- patch
- apiGroups:
- "pubsub.cloud.google.com"
resources:
- pullsubscriptions/status
verbs:
- get
- update
- patch
- apiGroups:
- "events.cloud.google.com"
resources: ["*"]
verbs: ["*"]
- apiGroups:
- "keda.k8s.io"
resources: ["*"]
verbs: ["*"]
- apiGroups:
- "admissionregistration.k8s.io"
resources:
- mutatingwebhookconfigurations
- validatingwebhookconfigurations
verbs:
- get
- list
- create
- update
- patch
- delete
- watch
- apiGroups:
- "extensions"
resources:
- ingresses
- ingresses/status
verbs: ["*"]
- apiGroups:
- ""
resources:
- endpoints/restricted
verbs:
- create
- apiGroups:
- "certificates.k8s.io"
resources:
- certificatesigningrequests
- certificatesigningrequests/approval
- certificatesigningrequests/status
verbs:
- update
- create
- get
- delete
- apiGroups:
- "apiextensions.k8s.io"
resources:
- customresourcedefinitions
verbs:
- get
- list
- create
- update
- patch
- delete
- watch
- apiGroups:
- "networking.k8s.io"
resources:
- networkpolicies
verbs:
- get
- list
- create
- update
- patch
- delete
- deletecollection
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- list
- watch
- update
- patch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
The following table lists how the RBAC permissions are used in Kf, where:
- view includes the verbs: get, list, watch
- modify includes the verbs: create, update, delete, patch
Permissions | Reasons |
---|---|
Can view all secrets | Kf reconcilers need to read secrets for functionalities such as space creation and service instance binding. |
Can modify pods | Kf reconcilers need to modify pods for functionalities such as building/pushing Apps and Tasks. |
Can modify secrets | Kf reconcilers need to modify secrets for functionalities such as building/pushing Apps and Tasks and service instance binding. |
Can modify configmaps | Kf reconcilers need to modify configmaps for functionalities such as building/pushing Apps and Tasks. |
Can modify endpoints | Kf reconcilers need to modify endpoints for functionalities such as building/pushing Apps and route binding. |
Can modify services | Kf reconcilers need to modify services for functionalities such as building/pushing Apps and route binding. |
Can modify events | Kf controller creates and emits events for the resources managed by Kf. |
Can modify serviceaccounts | Kf needs to modify service accounts for App deployments. |
Can modify endpoints/restricted | Kf needs to modify endpoints for App deployments. |
Can modify deployments | Kf needs to modify deployments for functionalities such as pushing Apps. |
Can modify mutatingwebhookconfiguration | Mutatingwebhookconfiguration is needed by Anthos Service Mesh, a Kf dependency, for admission webhooks. |
Can modify customresourcedefinitions and customresourcedefinitions/status | Kf manages resources through Custom Resources such as Apps, Spaces and Builds. |
Can modify horizontalpodautoscalers | Kf supports autoscaling based on Horizontal Pod Autoscalers. |
Can modify namespace/finalizer | Kf needs to set owner references for webhooks. |
Third-party libraries
Third-party library source code and licenses can be found in the /third_party
directory of any Kf container image.
You can also run kf third-party-licenses
to view the third-party licenses for
the version of the Kf CLI that you downloaded.
5 - Kf reference architecture diagrams
Kf can benefit from the rich ecosystem of Anthos and GKE, including automation, managed backing services, and development tools.
GKE reference architecture
Kf clusters are managed just like any other GKE cluster, and access resources in the same way.
ConfigSync
Kf can work with
ConfigSync to
automate Space creation across any number of clusters.
Create new namespaces on non-Kf clusters, and Spaces on Kf clusters.
You can also remove the Kf CLI from Space management entirely, if desired. Use kubectl explain
to get Kf CRD specs to fully manage your product configuration via GitOps.
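For example, a minimal Space manifest kept in a GitOps repository might look like the following sketch (the Space name, registry, and domain are placeholders; verify the exact field names for your cluster with kubectl explain space.spec):
apiVersion: kf.dev/v1alpha1
kind: Space
metadata:
  name: my-space
spec:
  buildConfig:
    containerRegistry: gcr.io/my-project
  networkConfig:
    domains:
    - domain: my-space.example.com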
6 - Logging and Monitoring
6.1 - Create and use monitoring dashboards
You can use Google Cloud Monitoring dashboards to create custom dashboards and charts. Kf comes with a default template which can be used to create dashboards to monitor the performance of your applications.
Application performance dashboard
Run the following commands to deploy a dashboard to your monitoring workspace in Cloud Monitoring to monitor the performance of your apps. This dashboard has application performance metrics like requests/sec, round trip latency, HTTP error codes, and more.
git clone https://github.com/google/kf
cd ./kf/dashboards
./create-dashboard.py my-dashboard my-cluster my-space
System resources and performance dashboard
You can view all the system resources and performance metrics, such as the list of nodes, pods, and containers, using a built-in system dashboard in Cloud Monitoring. More details about this dashboard can be found in the Cloud Monitoring documentation.
Create SLO and alerts
You can create SLOs and alerts on available metrics to monitor the performance and availability of both the system and applications. For example, you can use the metric istio.io/service/server/response_latencies to set up an alert on application round trip latency.
Configure dashboard access control
Follow these instructions to provide dashboard access to developers and other members on the team. The role roles/monitoring.dashboardViewer
provides read-only access to dashboards.
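For example, a sketch of granting that role to a single developer (the project ID and account email are placeholders):
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
  --member="user:developer@example.com" \
  --role="roles/monitoring.dashboardViewer"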
6.2 - Logging and monitoring
Kf can use GKE’s Google Cloud integrations to send a log of events to your Cloud Monitoring and Cloud Logging project for observability. For more information, see Overview of GKE operations.
Kf deploys two server side components:
- Controller
- Webhook
To view the logs for these components, use the following Cloud Logging query:
resource.type="k8s_container"
resource.labels.project_id=<PROJECT ID>
resource.labels.location=<GCP ZONE>
resource.labels.cluster_name=<CLUSTER NAME>
resource.labels.namespace_name="kf"
labels.k8s-pod/app=<controller OR webhook>
6.3 - Logging and monitoring overview
By default, Kf includes native integration with Cloud Monitoring and Cloud Logging. When you create a cluster, both Monitoring and Cloud Logging are enabled by default. This integration lets you monitor your running clusters and help analyze your system and application performance using advanced profiling and tracing capabilities.
Application level performance metrics are provided by the Istio sidecar, which is injected alongside applications deployed via Kf. You can also create SLOs and alerts using this default integration to monitor the performance and availability of both the system and applications.
Ensure the following are set up on your cluster:
- Cloud Monitoring and Cloud Logging are enabled on the Kf cluster by default unless you explicitly disabled them, so no extra step is required.
- Istio sidecar injection is enabled. The sidecar proxy injects application level performance metrics.
6.4 - View logs
Kf provides you with several types of logs. This document describes these logs and how to access them.
Application logs
All logs written to standard output (stdout) and standard error (stderr) are uploaded to Cloud Logging and stored under the log name user-container.
Open Cloud Logging and run the following query:
resource.type="k8s_container"
log_name="projects/YOUR_PROJECT_ID/logs/user-container"
resource.labels.project_id=YOUR_PROJECT_ID
resource.labels.location=GCP_COMPUTE_ZONE (e.g. us-central1-a)
resource.labels.cluster_name=YOUR_CLUSTER_NAME
resource.labels.namespace_name=YOUR_KF_SPACE_NAME
resource.labels.pod_name:YOUR_KF_APP_NAME
You should see all your application logs written to standard output (stdout) and standard error (stderr).
Access logs for your applications
Kf provides access logs using Istio sidecar injection. Access logs are stored under the log name server-accesslog-stackdriver
.
Open Cloud Logging and run the following query:
resource.type="k8s_container"
log_name="projects/YOUR_PROJECT_ID/logs/server-accesslog-stackdriver"
resource.labels.project_id=YOUR_PROJECT_ID
resource.labels.location=GCP_COMPUTE_ZONE (e.g. us-central1-a)
resource.labels.cluster_name=YOUR_CLUSTER_NAME
resource.labels.namespace_name=YOUR_KF_SPACE_NAME
resource.labels.pod_name:YOUR_KF_APP_NAME
You should see access logs for your application. Sample access log:
{
  "insertId": "166tsrsg273q5mf",
  "httpRequest": {
    "requestMethod": "GET",
    "requestUrl": "http://test-app-38n6dgwh9kx7h-c72edc13nkcm.***.***.nip.io/",
    "requestSize": "738",
    "status": 200,
    "responseSize": "3353",
    "remoteIp": "10.128.0.54:0",
    "serverIp": "10.48.0.18:8080",
    "latency": "0.000723777s",
    "protocol": "http"
  },
  "resource": {
    "type": "k8s_container",
    "labels": {
      "container_name": "user-container",
      "project_id": ***,
      "namespace_name": ***,
      "pod_name": "test-app-85888b9796-bqg7b",
      "location": "us-central1-a",
      "cluster_name": ***
    }
  },
  "timestamp": "2020-11-19T20:09:21.721815Z",
  "severity": "INFO",
  "labels": {
    "source_canonical_service": "istio-ingressgateway",
    "source_principal": "spiffe://***.svc.id.goog/ns/istio-system/sa/istio-ingressgateway-service-account",
    "request_id": "0e3bac08-ab68-408f-9b14-0aec671845bf",
    "source_app": "istio-ingressgateway",
    "response_flag": "-",
    "route_name": "default",
    "upstream_cluster": "inbound|80|http-user-port|test-app.***.svc.cluster.local",
    "destination_name": "test-app-85888b9796-bqg7b",
    "destination_canonical_revision": "latest",
    "destination_principal": "spiffe://***.svc.id.goog/ns/***/sa/sa-test-app",
    "connection_id": "82261",
    "destination_workload": "test-app",
    "destination_namespace": ***,
    "destination_canonical_service": "test-app",
    "upstream_host": "127.0.0.1:8080",
    "log_sampled": "false",
    "mesh_uid": "proj-228179605852",
    "source_namespace": "istio-system",
    "requested_server_name": "outbound_.80_._.test-app.***.svc.cluster.local",
    "source_canonical_revision": "asm-173-6",
    "x-envoy-original-dst-host": "",
    "destination_service_host": "test-app.***.svc.cluster.local",
    "source_name": "istio-ingressgateway-5469f77856-4n2pw",
    "source_workload": "istio-ingressgateway",
    "x-envoy-original-path": "",
    "service_authentication_policy": "MUTUAL_TLS",
    "protocol": "http"
  },
  "logName": "projects/*/logs/server-accesslog-stackdriver",
  "receiveTimestamp": "2020-11-19T20:09:24.627065813Z"
}
Audit logs
Audit logs provide a chronological record of calls that have been made to the Kubernetes API server. Kubernetes audit log entries are useful for investigating suspicious API requests, for collecting statistics, or for creating monitoring alerts for unwanted API calls.
Open Cloud Logging and run the following query:
resource.type="k8s_container"
log_name="projects/YOUR_PROJECT_ID/logs/cloudaudit.googleapis.com%2Factivity"
resource.labels.project_id=YOUR_PROJECT_ID
resource.labels.location=GCP_COMPUTE_ZONE (e.g. us-central1-a)
resource.labels.cluster_name=YOUR_CLUSTER_NAME
protoPayload.request.metadata.name=YOUR_APP_NAME
protoPayload.methodName:"deployments."
You should see a trace of calls being made to the Kubernetes API server.
Configure logging access control
Follow these instructions to provide logs access to developers and other members on the team. The role roles/logging.viewer
provides read-only access to logs.
Use Logs Router
You can also use Logs Router to route the logs to supported destinations.
7 - Networking
7.1 - Set up a custom domain
All Kf Apps that serve HTTP traffic to users or applications outside of the cluster must be associated with a domain name.
Kf has three locations where domains can be configured. Ordered by precedence, they are:
- Apps
- Spaces
- The
config-defaults
ConfigMap in thekf
Namespace
Edit the config-defaults
ConfigMap
The config-defaults
ConfigMap holds cluster-wide settings for Kf and can be edited by cluster administrators.
The values in the ConfigMap are read by the Spaces controller and modify their configuration.
Domain values are reflected in the Space’s status.networkConfig.domains
field.
To modify the Kf cluster's domain, edit the kfsystem custom resource; the operator will then populate the change to the config-defaults ConfigMap in the kf namespace:
kubectl edit kfsystem
Add or update the entry for the spaceClusterDomains
key under spec.kf.config
like the following:
spaceClusterDomains: my-domain.com
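If you prefer to script the change rather than edit it interactively, the same kubectl patch pattern used elsewhere in this guide should work (my-domain.com is a placeholder):
kubectl patch \
  kfsystem kfsystem \
  --type='json' \
  -p="[{'op':'add','path':'/spec/kf/config/spaceClusterDomains','value':'my-domain.com'}]"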
To validate the configuration was updated correctly, check the domain value in a Space:
kf space SPACE_NAME -o "jsonpath={.status.networkConfig.domains[]['domain']}"
The output will look similar to:
Getting Space some-space
some-space.my-domain.com
Each Space prefixes the cluster domains with its own name. This prevents conflicts between Apps.
Assign Space domains
Spaces are the authoritative location for domain configuration.
You can assign domains and sub-domains to each Space for developers to use.
The field for configuring domains is spec.networkConfig.domains
.
Use kf space
to view the domains assigned to a Space:
kf space SPACE_NAME
In the output, the Spec
field contains specific configuration for the Space
and the Status
field reflects configuration for the Space with cluster-wide
defaults appended to the end:
...
Spec:
Network Config:
Domains:
Domain: my-space.mycompany.com
...
Status:
Network Config:
Domains:
Domain: my-space.mycompany.com
Domain: my-space.prod.us-east1.kf.mycompany.com
Add or remove domains using the CLI
The kf
CLI supports mutations on Space domains. Each command outputs
a diff between the old and new configurations.
Add a new domain with kf configure-space append-domain
:
kf configure-space append-domain SPACE_NAME myspace.mycompany.com
Add or make an existing domain the default with kf configure-space set-default-domain
:
kf configure-space set-default-domain SPACE_NAME myspace.mycompany.com
And finally, remove a domain:
kf configure-space remove-domain SPACE_NAME myspace.mycompany.com
Use Apps to specify domains
Apps can specify domains as part of their configuration.
Routes are mapped to Apps during kf push
using the following logic:
let current_routes = The set of routes already on the app
let manifest_routes = The set of routes defined by the manifest
let flag_routes = The set of routes supplied by the --route flag(s)
let no_route = Whether the manifest has no-route:true or --no-route is set
let random_route = Whether the manifest has random-route:true or --random-route is set
let new_routes = Union(current_routes, manifest_routes, flag_routes)
if new_routes.IsEmpty() then
if random_route then
new_routes.Add(CreateRandomRoute())
else
new_routes.Add(CreateDefaultRoute())
end
end
if no_route then
new_routes.RemoveAll()
end
return new_routes
If an App doesn’t specify a Route, or requests a random Route, the first domain on the Space is used. If the first domain on a Space changes, all Apps in the Space using the default domain are updated to reflect it.
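For example, a manifest that pins specific routes might look like the following sketch (the host names are placeholders); if no routes are specified, the logic above picks a default or random route instead:
applications:
- name: my-app
  routes:
  - route: my-app.my-space.prod.example.com
  - route: my-app.kf.us-east1.prod.example.com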
Customize domain templates
Kf supports variable substitution in domains. Substitution allows a single
cluster-wide domain to be customized per-Space and to react to changes to the
ingress IP. Substitution is performed on variables with the syntax $(VARIABLE_NAME)
that occur in a domain.
Variable | Description |
---|---|
CLUSTER_INGRESS_IP | The IPV4 address of the cluster ingress. |
SPACE_NAME | The name of the Space. |
Examples
The following examples demonstrate how domain variables can be used to support a variety of different organizational structures and cluster patterns.
Using a wildcard DNS service like nip.io:
$(SPACE_NAME).$(CLUSTER_INGRESS_IP).nip.io
Domain for an organization with centrally managed DNS:
$(SPACE_NAME).cluster-name.example.com
Domain for teams who manage their own DNS:
cluster-name.$(SPACE_NAME).example.com
Domain for a cluster with warm failover and external circuit breaker:
$(SPACE_NAME)-failover.cluster-name.example.com
Differences between Kf and CF
- Kf Spaces prefix the cluster-wide domain with the Space name.
- Kf does not check for domain conflicts on user-specified routes.
7.2 - Set up HTTPS ingress
You can secure the ingress gateway with HTTPS by using simple TLS, and enable HTTPS connections to specific webpages. In addition, you can redirect HTTP connections to HTTPS.
HTTPS creates a secure channel over an insecure network, protecting against man-in-the-middle attacks and encrypting traffic between the client and server. To prepare a web server to accept HTTPS connections, an administrator must create a public key certificate for the server. This certificate must be signed by a trusted certificate authority for a web browser to accept it without warning.
Edit the gateway named external-gateway in the kf
namespace using the built-in Kubernetes editor:
kubectl edit gateway -n kf external-gateway
- Assuming you have a certificate and key for your service, create a Kubernetes secret for the ingress gateway. Make sure the secret name does not begin with istio or prometheus. For this example, the secret is named myapp-https-credential (a sketch of creating it appears after the example Gateway spec below).
- Under servers:, add a section for port 443. Under tls:, set the credentialName to the name of the secret you just created. Under hosts:, add the host name of the service you want to secure with HTTPS. This can be set to an entire domain using a wildcard (e.g. *.example.com) or scoped to just one hostname (e.g. myapp.example.com).
- There should already be a section under servers: for port 80 HTTP. Keep this section in the Gateway definition if you would like all traffic to come in as HTTP.
- To redirect HTTP to HTTPS, add the value httpsRedirect: true under tls in the HTTP server section. See the Istio Gateway documentation for reference. Note that adding this in the section where hosts is set to * means that all traffic is redirected to HTTPS. If you only want to redirect HTTP to HTTPS for a single app/domain, add a separate HTTP section specifying the redirect.
Shown below is an example of a Gateway spec
that sets up HTTPS for myapp.example.com
and redirects HTTP to HTTPS for that host:
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- myapp.example.com
port:
name: https
number: 443
protocol: HTTPS
tls:
credentialName: myapp-https-credential
mode: SIMPLE
- hosts:
- myapp.example.com
port:
name: http-my-app
number: 80
protocol: HTTP
tls:
httpsRedirect: true
- hosts:
- '*'
port:
name: http
number: 80
protocol: HTTP
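The Gateway above references the secret myapp-https-credential. A sketch of creating that secret from an existing certificate and key (the file paths are placeholders; this assumes your Istio ingress gateway runs in the istio-system namespace, so create the secret in whichever namespace actually hosts your ingress gateway):
kubectl create -n istio-system secret tls myapp-https-credential \
  --key=PATH_TO_KEY.pem \
  --cert=PATH_TO_CERT.pem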
7.3 - Set up network policies
Kf integrates tightly with Kubernetes and Istio to provide robust network policy enforcement.
By default, Kf workloads are run in the Kubernetes cluster and resolve addresses using Kubernetes DNS. This DNS resolver will first attempt to resolve addresses within the cluster, and only if none are found will attempt external resolution.
Each Kf App gets run with an Envoy sidecar injected by Istio or the Anthos Service Mesh (ASM). This sidecar proxies all network traffic in and out of the Kubernetes Pod.
Each Kubernetes Pod is executed on a Node, a physical or virtual machine responsible for managing the container images that make up a Pod. Nodes exist on a physical or virtual network.
Together, these form a hierarchy of systems you can apply network policies. These are listed below from least to most granular.
Network level policies
Workload protection starts with the network your GKE cluster is installed on.
If you’re running Kf on a GKE cluster on GCP, Kf recommends:
- Placing your GKE cluster on a Virtual Private Cloud (VPC) network.
- With Private Google Access enabled.
- Using Cloud NAT to control egress.
Node level policies
You can set up policies for containers running on the Node using Kubernetes NetworkPolicies. These are the closest mapping to Cloud Foundry network policies that exist in Kubernetes.
NetworkPolicies are backed by a Kubernetes add-on. If you set up your own GKE cluster, you will need to enable NetworkPolicy enforcement.
Kf labels Apps with `kf.dev/networkpolicy=app` and Builds with `kf.dev/networkpolicy=build`. This allows you to target NetworkPolicies directly at Pods running Apps or Builds.
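For example, the following NetworkPolicy targets App Pods by that label. It is a minimal sketch that assumes a Space backed by a Namespace named my-space and only allows egress to destinations inside the cluster. Note that Kubernetes NetworkPolicies are additive, so the Space's built-in App policy (described below) would also need to be set to DenyAll for this allow-list to become the effective egress policy.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-app-egress      # illustrative name
  namespace: my-space            # illustrative Space namespace
spec:
  podSelector:
    matchLabels:
      kf.dev/networkpolicy: app  # selects Pods running Kf Apps
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector: {}      # any Pod in any Namespace inside the cluster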
Each Kf Space creates two NetworkPolicies to start with, one targeting Apps and one targeting Builds. You can change the configuration on the Space's `spec.networkConfig.(app|build)NetworkPolicy.(in|e)gress` fields.
These fields can be set to one of the following values:
Enum Value | Description |
---|---|
PermitAll | Allows all traffic. |
DenyAll | Denies all traffic. |
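For example, a Space that keeps the default permissive App policies but locks down Build ingress could set the fields as follows. This is a minimal sketch; the Space name is illustrative.

apiVersion: kf.dev/v1alpha1
kind: Space
metadata:
  name: my-space
spec:
  networkConfig:
    appNetworkPolicy:
      ingress: PermitAll
      egress: PermitAll
    buildNetworkPolicy:
      ingress: DenyAll           # Builds don't need inbound traffic
      egress: PermitAll          # Builds still fetch buildpacks and push images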
By default Kf uses a permissive network policy. This allows the following functionality that Kf uses:
- North/South routing to the cluster ingress gateway
- Egress to the Internet, for example to fetch Buildpacks
- East/West routing between Apps
- Access to the Kubernetes DNS server
- Access to container registries
- Direct access to the VPC network
- Access to Google services like Cloud Logging
- Access to the Workload Identity server for automatic credential rotation
Service mesh policies
If you need fine-grained networking control, authentication, authorization, and observability you can apply policies using Anthos Service Mesh.
A service mesh is an infrastructure layer that enables managed, observable and secure communication across your services, letting you create robust enterprise applications made up of many microservices on your chosen infrastructure.
You can see the list of supported features here.
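For example, a mesh-level authorization policy can restrict which workloads may call an App. The following is a minimal sketch using Istio's AuthorizationPolicy; the namespace and Pod label are illustrative and are not names managed by Kf.

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: backend-allow-same-space      # illustrative name
  namespace: my-space                 # illustrative Space namespace
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: backend # illustrative App Pod label
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["my-space"]      # only allow requests that originate in the same Space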
8 - Security
8.1 - Security Overview
Kf aims to provide a similar developer experience to Cloud Foundry, replicating the build, push, and deploy lifecycle. It does this by building a developer UX layer on top of widely known and broadly adopted technologies like Kubernetes, Istio, and container registries rather than by implementing all the pieces from the ground up.
When making security decisions, Kf aims to provide complete solutions that are native to their respective components and can be augmented with other mechanisms. Breaking that down:
- Complete solutions means that Kf tries not to provide partial solutions that can lead to a false sense of security.
- Native means that the solutions should be a part of the component rather than a Kf construct to prevent breaking changes.
- Can be augmented means the approach Kf takes should work seamlessly with other Kubernetes and Google Cloud tooling for defense in depth.
Important considerations
In addition to the Current limitations described below, it is important that you read through and understand the items outlined in this section.
Workload Identity
By default, Kf uses Workload Identity to provide secure delivery and rotation of the Service Account credentials used by Kf to interact with your Google Cloud Project. Workload Identity achieves this by mapping a Kubernetes Service Account (KSA) to a Google Service Account (GSA). The Kf controller runs in the `kf` namespace and uses a KSA named `controller` mapped to your GSA to do the following things:
- Write metrics to Stackdriver.
- When a new Kf Space is created (`kf create-space`), the Kf controller creates a new KSA named `kf-builder` in the new Space and maps it to the same GSA.
- The `kf-builder` KSA is used by Tekton to push and pull container images to Google Container Registry (gcr.io).
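For context, the KSA-to-GSA mapping is expressed as an annotation on the Kubernetes Service Account. The following is a minimal sketch; the Space namespace and GSA name are illustrative.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kf-builder
  namespace: my-space            # illustrative Space namespace
  annotations:
    iam.gke.io/gcp-service-account: kf-gsa@my-project.iam.gserviceaccount.com   # illustrative GSA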
This diagram illustrates those interactions:
NFS
In order to mimic Cloud Foundry's UID/GID mapping, containers in Kf that mount NFS volumes need the ability to run as `root` and to access the FUSE device of the Node's kernel.
Current limitations
- A developer pushing an app with Kf can also create Pods (with `kubectl`) that can use the `kf-builder` KSA with the permissions of its associated GSA.
- Kf uses the same Pod to fetch, build, and store images. Assume that any credentials that you provide can be known by the authors and publishers of the buildpacks you use.
- Kf doesn't support quotas to protect against noisy neighbors. Use Kubernetes resource quotas.
Other resources
Google Cloud
General
Recommended protections
Advanced protections
8.2 - Role-based access control
Kf provides a set of Kubernetes roles that allow multiple teams to share a Kf cluster. This page describes the roles and best practices to follow when using them.
When to use Kf roles
Kf roles allow multiple teams to share a Kubernetes cluster with Kf installed. The roles provide access to individual Kf Spaces.
Use Kf roles to share access to a cluster if the following are true:
- The cluster is used by trusted teams.
- Workloads on the cluster share the same assumptions about the level of security provided by the environment.
- The cluster exists in a Google Cloud project that is tightly controlled.
Kf roles will not:
- Protect your cluster from untrusted developers or workloads. See the GKE shared responsibility model for more information.
- Provide isolation for your workloads. See the guide to harden your cluster for more information.
- Prevent additional Kubernetes roles from being defined that interact with Kf.
- Prevent access from administrators who have access to the Google Cloud project or cluster.
Kf roles
The following sections describe the Kubernetes RBAC Roles provided by Kf and how they interact with GKE IAM.
Predefined roles
Kf provides several predefined Kubernetes roles to help you provide access to different subjects that perform different roles. Each predefined role can be bound to a subject within a Kubernetes Namespace managed by a Kf Space.
When a subject is bound to a role within a Kubernetes Namespace, their access is limited to objects that exist in the Namespace that match the grants listed in the role. In Kf, some resources are defined at the cluster scope. Kf watches for changes to subjects in the Namespace and grants additional, limited roles at the cluster scope.
Role | Title | Description | Scope |
---|---|---|---|
space-auditor | Space Auditor | Allows read-only access to a Space. | Space |
space-developer | Space Developer | Allows application developers to deploy and manage applications within a Space. | Space |
space-manager | Space Manager | Allows administration and the ability to manage auditors, developers, and managers in a Space. | Space |
SPACE_NAME-manager | Dynamic Space Manager | Provides write access to a single Space object, automatically granted to all subjects with the space-manager role within the named Space. | Cluster |
kf-cluster-reader | Cluster Reader | Allows read-only access to cluster-scoped Kf objects, automatically granted to all subjects with the space-auditor, space-developer, or space-manager role. | Cluster |
Information about the policy rules that make up each predefined role can be found in the Kf roles reference documentation.
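Under the hood, granting a subject a predefined role within a Space corresponds to a Kubernetes RoleBinding in the Space's Namespace that references the matching ClusterRole. The following is a conceptual sketch with illustrative names; in practice you manage these bindings with kf set-space-role, described in Configure role-based access control.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: space-developer-alice    # illustrative name
  namespace: my-space            # Namespace managed by the Kf Space (illustrative)
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: space-developer
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alice@example.com        # illustrative subject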
Google Cloud IAM roles
Kf roles provide access control for objects within a Kubernetes cluster. Subjects must also be granted a Cloud IAM role to authenticate to the cluster:

- Platform administrators should be granted the `roles/container.admin` Cloud IAM role. This allows them to install, upgrade, and delete Kf, as well as create and delete cluster-scoped Kf objects like Spaces or ClusterServiceBrokers.
- Kf end-users should be granted the `roles/container.viewer` Cloud IAM role. This role allows them to authenticate to a cluster with limited permissions that can be expanded using Kf roles.
Google Cloud IAM offers additional predefined Roles for GKE to solve more advanced use cases.
Map Cloud Foundry roles to Kf
Cloud Foundry provides roles that are similar to Kf's predefined roles. Cloud Foundry has two major types of roles:
- Roles assigned by the User Account and Authentication (UAA) subsystem that provide coarse-grained OAuth scopes applicable to all Cloud Foundry API endpoints.
- Roles granted within the Cloud Controller API (CAPI) that provide fine-grained access to API resources.
UAA roles
Roles provided by UAA are most similar to project scoped Google Cloud IAM roles:
- Admin users in Cloud Foundry can perform administrative activities for all Cloud Foundry organizations and spaces. The role is most similar to the `roles/container.admin` Google Cloud IAM role.
- Admin read-only users in Cloud Foundry can access all Cloud Foundry API endpoints. The role is most similar to the `roles/container.admin` Google Cloud IAM role.
- Global auditor users in Cloud Foundry have read access to all Cloud Foundry API endpoints except for secrets. There is no equivalent Google Cloud IAM role, but you can create a custom role with similar permissions.
Cloud Controller API roles
Roles provided by CAPI are most similar to Kf roles granted within a cluster to subjects that have the `roles/container.viewer` Google Cloud IAM role on the owning project:

- Space auditors in Cloud Foundry have read access to resources in a CF space. The role is most similar to the `space-auditor` Kf role.
- Space developers in Cloud Foundry have the ability to deploy and manage applications in a CF space. The role is most similar to the `space-developer` Kf role.
- Space managers in Cloud Foundry can modify settings for the CF space and assign users to roles. The role is most similar to the `space-manager` Kf role.
What’s next
- Learn more about GKE security in the Security Overview.
- Make sure you understand the GKE shared responsibility model.
- Learn more about access control in GKE.
- Read the GKE multi-tenancy overview.
- Learn about hardening your GKE cluster.
- Understand the Kubernetes permissions that make up each Kf predefined role.
8.3 - Configure role-based access control
The following steps guide you through configuring role-based access control (RBAC) in a Kf Space.
Before you begin
Please follow the GKE RBAC guide before continuing with the following steps.
Configure Identity and Access Management (IAM)
In addition to permissions granted through Kf RBAC, users, groups, or service accounts must also be authenticated to view GKE clusters at the project level. This requirement is the same as for configuring GKE RBAC, meaning users and groups must have at least the `container.clusters.get` IAM permission in the project containing the cluster. This permission is included in the `container.clusterViewer` role and other more privileged roles. For more information, review Interaction with Identity and Access Management.
Assign `container.clusterViewer` to a user or group:

gcloud projects add-iam-policy-binding ${CLUSTER_PROJECT_ID} \
    --role="roles/container.clusterViewer" \
    --member="${MEMBER}"
Example member values are:
- user:test-user@gmail.com
- group:admins@example.com
- serviceAccount:test123@example.domain.com
Manage Space membership as SpaceManager
Cluster admins, or members with the SpaceManager role, can assign a role to a user, group, or service account.

kf set-space-role MEMBER ROLE -t [Group|ServiceAccount|User]

Cluster admins, or members with the SpaceManager role, can remove a member from a role.

kf unset-space-role MEMBER ROLE -t [Group|ServiceAccount|User]
You can view members and their roles within a Space.
kf space-users
Examples
Assign SpaceDeveloper role to a user.
kf set-space-role alice@example.com SpaceDeveloper
Assign SpaceDeveloper role to a group.
kf set-space-role devs@example.com SpaceDeveloper -t Group
Assign SpaceDeveloper role to a Service Account.
kf set-space-role sa-dev@example.domain.com SpaceDeveloper -t ServiceAccount
App development as SpaceDeveloper
Members with SpaceDeveloper role can perform Kf App development operations within the Space.
To push an App:
kf push app_name -p [PATH_TO_APP_ROOT_DIRECTORY]
To view logs of an App:
kf logs app_name
SSH into a Kubernetes Pod running the App:
kf ssh app_name
View available service brokers:
kf marketplace
View Apps as SpaceManager or SpaceAuditor
Members with the SpaceManager or SpaceAuditor role can view available Apps within the Space:
kf apps
View Kf Spaces within a cluster
All roles (SpaceManager, SpaceDeveloper, and SpaceAuditor) can view available Kf Spaces within a cluster:
kf spaces
View Space members and their roles within a Space.
kf space-users
Impersonation flags
To verify a member's permissions, a member with more privileged permissions can test another member's permissions using the impersonation flags `--as` and `--as-group`.
For example, as a cluster admin, you can verify if a user (username: bob) has permission to push an App.
kf push APP_NAME --as bob
Verify that a group (`manager-group@example.com`) has permission to assign roles to other members.
kf set-space-role bob SpaceDeveloper --as-group manager-group@example.com
8.4 - Enable compute isolation
Kf Apps can be deployed on dedicated nodes in the cluster. Use this feature when you need more control over the nodes where App Pods are scheduled. For example:
- If you are sharing the same cluster for different Apps but want dedicated nodes for a particular App.
- If you want dedicated nodes for a given organization (Kf Space).
- If you want to target a specific operating system like Windows.
- If you want to co-locate Pods from two different services that frequently communicate.
To enable compute isolation, Kf uses the Kubernetes nodeSelector. To use this feature, first add labels on the nodes or node pools where you want your App Pods to land and then add the same qualifying labels on the Kf Space. All the Apps installed in this Space then land on the nodes with matching labels.
Kf creates a Kubernetes Pod to execute each Kf Build, and the buildNodeSelector feature can be used to isolate compute resources so they execute only the Build Pods. One use case is to run Build Pods on nodes with SSDs while running the App Pods on other nodes. The BuildNodeSelectors feature provides compute resource optimization and flexibility in the cluster. See the section 'Configure BuildNodeSelectors and a Build node pool' on this page.
Configure nodeSelector in a Kf cluster
By default, compute isolation is disabled. Use the following procedure to configure labels and nodeSelector.
1. Add a label (`disktype=ssd`) on the node where you want your application Pods to land.

   kubectl label nodes nodeid disktype=ssd

2. Add the same label on the Kf Space. All Apps deployed in this Space will then land on the qualifying nodes.

   kf configure-space set-nodeselector space-name disktype ssd

   You can add multiple labels by running the same command again.

3. Check that the label is configured.

   kf configure-space get-nodeselector space-name

4. Delete the label from the Space.

   kf configure-space unset-nodeselector space-name disktype
Override nodeSelector for Kf stacks
Deployment of Kf Apps can be further targeted based on what stack is being used to build and package the App. For example, you might want your applications built with `spaceStacksV2` to land on nodes with Linux kernel 4.4.1. `nodeSelector` values on a stack override the values configured on the Space.

To configure the `nodeSelector` on a stack:
1. Edit the `config-defaults` ConfigMap of your Kf cluster and add the labels.

   $ kubectl -n kf edit configmaps config-defaults

2. Add `nodeSelector` to the stacks definition.

   .....
   spaceStacksV2: |
     - name: cflinuxfs3
       image: cloudfoundry/cflinuxfs3
       nodeSelector:
         OS_KERNEL: LINUX_4.4.1
   .....
Configure BuildNodeSelectors and a Build node pool
Build node selectors only override the node selectors for Build Pods; they do not affect App Pods. For example, if you specify both node selectors on the Space and Build node selectors in kfsystem, App Pods get the Space node selectors while Build Pods get the Build node selectors from kfsystem. If node selectors are only specified on the Space, both the App and Build Pods get the node selectors from the Space.
1. Add labels (`disktype=ssd` for example) on the nodes that you want your Build Pods to be assigned to.

   kubectl label nodes nodeid disktype=ssd

2. Add or update Build node selectors (in the format of `key: value` pairs) by patching the KfSystem CR.

   kubectl patch kfsystem kfsystem --type='json' -p='[{"op": "replace", "path": "/spec/kf/config/buildNodeSelectors", "value": {"<key>":"<value>"}}]'

   For example, to add `disktype=ssd` as the Build node selector:

   kubectl patch kfsystem kfsystem --type='json' -p='[{"op": "replace", "path": "/spec/kf/config/buildNodeSelectors", "value": {"disktype":"ssd"}}]'
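The patch above corresponds to the following fragment of the kfsystem spec (a sketch derived from the JSON path used in the patch):

spec:
  kf:
    config:
      buildNodeSelectors:
        disktype: ssd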
8.5 - Kubernetes Roles
The following sections describe the Kubernetes ClusterRoles that are created by Kf and lists the permissions that are contained in each ClusterRole.
Space developer role
The Space developer role aggregates permissions application developers use to deploy and manage applications within a Kf Space.
You can retrieve the permissions granted to Space developers on your cluster using the following command.
kubectl describe clusterrole space-developer
Space auditor role
The Space auditor role aggregates read-only permissions that auditors and automated tools use to validate applications within a Kf Space.
You can retrieve the permissions granted to Space auditors on your cluster using the following command.
kubectl describe clusterrole space-auditor
Space manager role
The Space manager role aggregates permissions that allow delegation of duties to others within a Kf Space.
You can retrieve the permissions granted to Space managers on your cluster using the following command.
kubectl describe clusterrole space-manager
Dynamic Space manager role
Each Kf Space creates a ClusterRole with the name `SPACE_NAME-manager`, where SPACE_NAME is the name of the Space; this ClusterRole is called the dynamic manager role. Kf automatically grants all subjects with the `space-manager` role within the Space the dynamic manager role at the cluster scope. The permissions for the dynamic manager role allow Space managers to update settings on the Space with the given name.
You can retrieve the permissions granted to the dynamic manager role for any Space on your cluster using the following command.
kubectl describe clusterrole SPACE_NAME-manager
Kf cluster reader role
Kf automatically grants the `kf-cluster-reader` role to all users on a cluster that already have the `space-developer`, `space-auditor`, or `space-manager` role within a Space.
You can retrieve the permissions granted to Kf cluster readers on your cluster using the following command.
kubectl describe clusterrole kf-cluster-reader
9 - Stacks and Buildpacks
Learn about configuring stacks and buildpacks for your platform.
9.1 - Customize stacks
The Stack configuration can be updated by editing the `kfsystem` Custom Resource:
kubectl edit kfsystem kfsystem
This example sets the Google Cloud buildpacks as a V3 Stack:
spec:
kf:
config:
spaceStacksV3:
- name: google
description: Google buildpacks (https://github.com/GoogleCloudPlatform/buildpacks)
buildImage: gcr.io/buildpacks/builder:v1
runImage: gcr.io/buildpacks/gcp/run:v1
Apps can now be pushed with this new Stack:
kf push myapp --stack google
This example configures the Ruby V2 buildpack and sets the build pipeline defaults to use V2 Stacks:
spec:
kf:
config:
spaceDefaultToV3Stack: false
spaceBuildpacksV2:
- name: ruby_buildpack
url: https://github.com/cloudfoundry/ruby-buildpack
spaceStacksV2:
- name: cflinuxfs3
image: cloudfoundry/cflinuxfs3@sha256:5219e9e30000e43e5da17906581127b38fa6417f297f522e332a801e737928f5
9.2 - Customize stacks and buildpacks
Buildpacks are used by Kf to turn an application’s source code into an executable image. Cloud Native buildpacks use the latest Buildpack API v3. Companies are actively adding v3 support to existing buildpacks.
Kf supports buildpacks that conform to both V2 and V3 of the Buildpack API specification.
Compare V2 and V3 buildpacks
V2 buildpacks | V3 buildpacks | |
---|---|---|
Alternate names | Cloud Foundry buildpacks | Cloud Native buildpacks (CNB), Builder Images |
Status | Being replaced | Current |
Ownership | Cloud Foundry | Buildpacks.io |
Stack | Shared by builder and runtime | Optionally different for builder and runtime |
Local development | Not possible | Yes, with the pack CLI |
Custom buildpacks | Available at runtime | Must be built into the builder |
Buildpack lifecycle
Step | Cloud Foundry | Kf with buildpacks V2 | Kf with buildpacks V3 |
---|---|---|---|
Source location | BITS service | Container registry | Container registry |
Buildpack location | BOSH/HTTP | HTTP | Container registry |
Stack location | BOSH | Container registry | Container registry |
Result | Droplet (App binary without stack) | Image (Droplet on a stack) | Image |
Runtime | Droplet glued on top of stack and run | Run produced image | Run produced image |
Kf always produces a full, executable image as a result of its build process. Cloud Foundry, on the other hand, produces parts of an executable image at build time and the rest is added at runtime.
Kf chose to follow the model of always producing a full image for the following reasons:
- Images can be exported, run locally, and inspected statically
- Better security and auditing with tools like binary authorization
- App deployments are reproducible
Kf and buildpacks
Kf stores its global list of buildpacks and stacks in the `config-defaults` ConfigMap in the `kf` Namespace. Modify the buildpacks and stacks properties through the `kfsystem` Custom Resource; the Kf operator automatically updates the `config-defaults` ConfigMap based on the values set in `kfsystem`.
Each Space reflects these buildpacks in its status field.
For a Space named `buildpack-docs`, you could run the following to see the full Space configuration:
kf space buildpack-docs
Getting Space buildpack-docs
API Version: kf.dev/v1alpha1
Kind: Space
Metadata:
Creation Timestamp: 2020-02-14T15:09:52Z
Name: buildpack-docs
Self Link: /apis/kf.dev/v1alpha1/spaces/buildpack-docs
UID: 0cf1e196-4f3c-11ea-91a4-42010a80008d
Status:
Build Config:
Buildpacks V2:
- Name: staticfile_buildpack
URL: https://github.com/cloudfoundry/staticfile-buildpack
Disabled: false
- Name: java_buildpack
URL: https://github.com/cloudfoundry/java-buildpack
Disabled: false
Stacks V2:
- Image: cloudfoundry/cflinuxfs3
Name: cflinuxfs3
Stacks V3:
- Build Image: cloudfoundry/cnb:cflinuxfs3
Description: A large Cloud Foundry stack based on Ubuntu 18.04
Name: org.cloudfoundry.stacks.cflinuxfs3
Run Image: cloudfoundry/run:full-cnb
Under the `Build Config` section there are three fields to look at:
- Buildpacks V2 contains a list of V2 compatible buildpacks in the order they’ll be run
- Stacks V2 indicates the stacks that can be chosen to trigger a V2 buildpack build
- Stacks V3 indicates the stacks that can be chosen to trigger a V3 buildpack build
You can also list the stacks with `kf stacks`:
kf stacks
Getting stacks in Space: buildpack-docs
Version Name Build Image Run Image Description
V2 cflinuxfs3 cloudfoundry/cflinuxfs3 cloudfoundry/cflinuxfs3
V3 org.cloudfoundry.stacks.cflinuxfs3 cloudfoundry/cnb:cflinuxfs3 cloudfoundry/run:full-cnb A large Cloud Foundry stack based on Ubuntu 18.04
Because V3 build images already have their buildpacks built in, you must use `kf buildpacks` to get the list:
kf buildpacks
Getting buildpacks in Space: buildpack-docs
Buildpacks for V2 stacks:
Name Position URL
staticfile_buildpack 0 https://github.com/cloudfoundry/staticfile-buildpack
java_buildpack 1 https://github.com/cloudfoundry/java-buildpack
V3 Stack: org.cloudfoundry.stacks.cflinuxfs3:
Name Position Version Latest
org.cloudfoundry.jdbc 0 v1.0.179 true
org.cloudfoundry.jmx 1 v1.0.180 true
org.cloudfoundry.go 2 v0.0.2 true
org.cloudfoundry.tomcat 3 v1.1.102 true
org.cloudfoundry.distzip 4 v1.0.171 true
org.cloudfoundry.springboot 5 v1.1.2 true
...
Customize V3 buildpacks
You can customize the buildpacks that are available to your developers by creating your own builder image with exactly the buildpacks they should have access to. You can also use builder images published by other authors.
Use a third-party builder image
A list of published CNB stacks is available from the Buildpack CLI `pack`. As of this writing, `pack suggest-stacks` outputs:
pack suggest-stacks
Stacks maintained by the community:
Stack ID: heroku-18
Description: The official Heroku stack based on Ubuntu 18.04
Maintainer: Heroku
Build Image: heroku/pack:18-build
Run Image: heroku/pack:18
Stack ID: io.buildpacks.stacks.bionic
Description: A minimal Cloud Foundry stack based on Ubuntu 18.04
Maintainer: Cloud Foundry
Build Image: cloudfoundry/build:base-cnb
Run Image: cloudfoundry/run:base-cnb
Stack ID: org.cloudfoundry.stacks.cflinuxfs3
Description: A large Cloud Foundry stack based on Ubuntu 18.04
Maintainer: Cloud Foundry
Build Image: cloudfoundry/build:full-cnb
Run Image: cloudfoundry/run:full-cnb
Stack ID: org.cloudfoundry.stacks.tiny
Description: A tiny Cloud Foundry stack based on Ubuntu 18.04, similar to distroless
Maintainer: Cloud Foundry
Build Image: cloudfoundry/build:tiny-cnb
Run Image: cloudfoundry/run:tiny-cnb
To modify Kf to use the stack published by Heroku, edit the `kfsystem` Custom Resource, which automatically updates the `config-defaults` ConfigMap in the `kf` Namespace. Add an entry to the `spaceStacksV3` key like the following:
kubectl edit kfsystem kfsystem
spaceStacksV3: |
- name: org.cloudfoundry.stacks.cflinuxfs3
description: A large Cloud Foundry stack based on Ubuntu 18.04
buildImage: cloudfoundry/cnb:cflinuxfs3
runImage: cloudfoundry/run:full-cnb
- name: heroku-18
description: The official Heroku stack based on Ubuntu 18.04
buildImage: heroku/pack:18-build
runImage: heroku/pack:18
Then, run `stacks` again:
kf stacks
Getting stacks in Space: buildpack-docs
Version Name Build Image Run Image Description
V2 cflinuxfs3 cloudfoundry/cflinuxfs3 cloudfoundry/cflinuxfs3
V3 org.cloudfoundry.stacks.cflinuxfs3 cloudfoundry/cnb:cflinuxfs3 cloudfoundry/run:full-cnb A large Cloud Foundry stack based on Ubuntu 18.04
V3 heroku-18 heroku/pack:18-build heroku/pack:18 The official Heroku stack based on Ubuntu 18.04
Create your own builder image
The Buildpack CLI `pack` is used to create your own builder image. You can follow `pack`'s Working with builders using `create-builder` documentation to create your own builder image. After it's created, push it to a container registry and add it to the `kfsystem` Custom Resource.
Set a default stack
Apps will be assigned a default stack if one isn’t supplied in their manifest. The default stack is the first in the V2 or V3 stacks list. Unless overridden, a V2 stack is chosen for compatibility with Cloud Foundry.
You can force Kf to use a V3 stack instead of a V2 stack by setting the `spaceDefaultToV3Stack` field in the `kfsystem` Custom Resource to `"true"` (`kfsystem` automatically updates the corresponding `spaceDefaultToV3Stack` field in the `config-defaults` ConfigMap):
kubectl edit kfsystem kfsystem
spaceDefaultToV3Stack: "true"
This option can also be modified on a per-Space basis by setting the Space's `spec.buildConfig.defaultToV3Stack` field to `true` or `false`. If unset, the value from the `config-defaults` ConfigMap is used.
config-defaults value for spaceDefaultToV3Stack | Space’s spec.buildConfig.defaultToV3Stack | Default stack |
---|---|---|
unset | unset | V2 |
"false" | unset | V2 |
"true" | unset | V3 |
any | false | V2 |
any | true | V3 |