Develop Applications on Kf
- 1: Build and deploy applications
- 1.1: Deploy an application
- 1.2: Get started with buildpacks
- 1.3: Reduce deployment risk with blue-green deployments
- 1.4: Compare V2 and V3 Buildpacks
- 1.5: App Manifest
- 1.6: App runtime
- 1.7: Build runtime
- 2: Backing services
- 2.1: Backing Services Overview
- 2.2: Use managed services
- 2.3: Configure user-provided services
- 2.4: User Provided Service Templates
- 2.5: Configure NFS volumes
- 3: Configure and Use Service Accounts
- 4: Scaling
- 5: Service discovery
- 6: Debugging workloads
- 7: Configure routes and domains
- 8: Tasks
- 8.1: Tasks Overview
- 8.2: Run Tasks
- 8.3: Schedule Tasks
1 - Build and deploy applications
1.1 - Deploy an application
When pushing an app (via `kf push`) to Kf, there are three lifecycles that Kf uses to take your source code and allow it to handle traffic:
- Source code upload
- Build
- Run
Source code upload
The first thing that happens when you `kf push` is that the Kf CLI (`kf`) packages up your directory (either the current directory or the one given by `--path`/`-p`) into a container and publishes it to the container registry configured for the Space. This is called the source container. The Kf CLI then creates an App type in Kubernetes that contains both the source image and configuration from the App manifest and push flags.
Ignore files during push
In many cases, you will not want to upload certain files during `kf push` (i.e., “ignore” them). This is where a `.kfignore` (or `.cfignore`) file can be used. Similar to a `.gitignore` file, this file instructs the Kf CLI which files to not include in the source code container.
To create a `.kfignore` file, create a text file named `.kfignore` in the base directory of your app (similar to where you would store the manifest file). Then populate it with a newline-delimited list of files and directories you don’t want published. For example:
bin
.idea
This will tell the Kf CLI to not include anything in the `bin` or `.idea` directories.
Kf supports gitignore style syntax.
Build
The Build lifecycle is handled by a Tekton TaskRun. Depending on the flags that you provide while pushing, it will choose a specific Tekton Task. Kf currently has the following Tekton Tasks:
- buildpackv2
- buildpackv3
- kaniko
Kf tracks each TaskRun as a Build. If a Build succeeds, the resulting container image is then deployed via the Run lifecycle (described below).
More information can be found at Build runtime.
Run
The Run lifecycle is responsible for taking a container image and creating a Kubernetes Deployment.
It also creates additional supporting Kubernetes resources.
More information can be found at App runtime.
Push timeouts
Kf supports setting an environment variable to instruct the CLI to time out while pushing apps. If set, the variables `KF_STARTUP_TIMEOUT` or `CF_STARTUP_TIMEOUT` are parsed as a golang-style duration (for example `15m`, `1h`). If a value is not set, the push timeout defaults to 15 minutes.
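These timeout values follow Go's duration syntax. As an illustration (not Kf code), a simplified parser that handles only the `s`, `m`, and `h` units:

```python
import re

# Hypothetical helper showing how Go-style durations such as "15m" or
# "1h30m" are interpreted. Simplified: real Go durations also accept
# units like "ms" and "us", which this sketch ignores.
_UNITS = {"s": 1, "m": 60, "h": 3600}

def parse_go_duration(text):
    """Convert a Go-style duration string (e.g. "1h30m") into seconds."""
    matches = re.findall(r"(\d+(?:\.\d+)?)([smh])", text)
    if not matches:
        raise ValueError(f"not a valid duration: {text!r}")
    return sum(float(value) * _UNITS[unit] for value, unit in matches)

print(parse_go_duration("15m"))   # 900.0
print(parse_go_duration("1h30m")) # 5400.0
```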
1.2 - Get started with buildpacks
Kf supports a variety of buildpacks. This document covers some starter examples for using them.
Before you begin
- You should have Kf running on a cluster.
- You should have run `kf target -s <space-name>` to target your space.
Java (v2) buildpack
Use Spring Initializr to create a Java 8 Maven project with a Spring Web dependency and JAR packaging. Download it, extract it, and generate a JAR:
./mvnw package
Push the JAR to Kf with the Java v2 buildpack.
kf push java-v2 --path target/helloworld-0.0.1-SNAPSHOT.jar
Java (v3) buildpack
Use Spring Initializr to create a Java 8 Maven project with a Spring Web dependency and JAR packaging. Download it, extract it, and then push to Kf with the cloud native buildpack.
kf push java-v3 --stack org.cloudfoundry.stacks.cflinuxfs3
Python (v2) buildpack
Create a new directory with files as shown in the following structure.
tree
.
├── Procfile
├── requirements.txt
└── server.py
cat Procfile
web: python server.py
cat requirements.txt
Flask
cat server.py
from flask import Flask
import os

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

if __name__ == "__main__":
    port = int(os.getenv("PORT", 8080))
    app.run(host='0.0.0.0', port=port)
Push the Python flask app using v2 buildpacks.
kf push python --buildpack python_buildpack
Python (v3) buildpack
Create the same directory and files as in the Python (v2) example above.
Push the Python flask app using cloud native buildpacks.
kf push pythonv3 --stack org.cloudfoundry.stacks.cflinuxfs3
Staticfile (v2) buildpack
Create a new directory that holds your source code.
Add an `index.html` file with this content.
<!DOCTYPE html>
<html lang="en">
<head><title>Hello, world!</title></head>
<body><h1>Hello, world!</h1></body>
</html>
Push the static content with the staticfile buildpack.
kf push staticsite --buildpack staticfile_buildpack
1.3 - Reduce deployment risk with blue-green deployments
This page shows you how to deploy a new version of your application and migrate traffic over from an old to a new version.
Push the initial App
Use the Kf CLI to push the initial version of your App with any routes:
$ kf push app-v1 --route my-app.my-space.example.com
Push the updated App
Use the Kf CLI to push a new version of your App without any routes:
$ kf push app-v2 --no-route
Add routes to the updated App
Use the Kf CLI to bind all existing routes to the updated App with a weight of 0 to ensure that they don’t get any requests:
$ kf map-route app-v2 my-space.example.com --hostname my-app --weight 0
Shift traffic
Start shifting traffic from the old App to the updated App by updating the weights on the routes:
$ kf map-route app-v1 my-space.example.com --hostname my-app --weight 80
$ kf map-route app-v2 my-space.example.com --hostname my-app --weight 20
If the deployment is going well, you can shift more traffic by updating the weights again:
$ kf map-route app-v1 my-space.example.com --hostname my-app --weight 50
$ kf map-route app-v2 my-space.example.com --hostname my-app --weight 50
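Route weights are relative: an App's share of traffic is its weight divided by the sum of all weights on the route. A small sketch of that arithmetic (the `traffic_share` function is illustrative, not part of Kf):

```python
# Each App's fraction of requests is weight / sum(weights) for all Apps
# mapped to the same route.
def traffic_share(weights):
    total = sum(weights.values())
    return {app: weight / total for app, weight in weights.items()}

print(traffic_share({"app-v1": 80, "app-v2": 20}))  # {'app-v1': 0.8, 'app-v2': 0.2}
```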
Complete traffic shifting
After you’re satisfied that the new service hasn’t introduced regressions, complete the rollout by shifting all traffic to the new instance:
$ kf map-route app-v1 my-space.example.com --hostname my-app --weight 0
$ kf map-route app-v2 my-space.example.com --hostname my-app --weight 100
Turn down the original App
After you’re satisfied that quick rollbacks aren’t needed, remove the original route and stop the App:
$ kf unmap-route app-v1 my-space.example.com --hostname my-app
$ kf stop app-v1
Or delete the App and all associated route mappings:
$ kf delete app-v1
1.4 - Compare V2 and V3 Buildpacks
A buildpack converts source code into an executable, and is used to deliver a simple, reliable, and repeatable way to create containers. Kf supports both V2 and V3 buildpacks, and it is important to understand the differences between them.
V2 buildpacks
Most Cloud Foundry applications already use V2 buildpacks. When using V2 buildpacks with Kf, the lifecycle binaries and the buildpacks are downloaded and configured from their git URLs. Kf then uses the `lifecycle` CLI to execute each buildpack against the source code.
Pros
- Ready out of the box without pipeline or code changes.
Cons
- Legacy buildpack superseded by V3.
- Weaker performance and reliability. The Kf build pipeline requires more IO for V2 buildpacks.
- Fewer community resources.
- Kf only supports OSS git repos.
V3 buildpacks
V3 buildpacks are a Cloud Native Computing Foundation (CNCF) project with a well defined spec, a CLI (pack) and a growing community that is innovating around different languages and frameworks. Google Cloud also has its own set of OSS buildpacks.
V3 buildpacks have two overarching OCI containers:
- Builder image
- Run image
Builder image
The builder image is used while your source code is being built into a runnable container. The image has the necessary `detect` scripts and other utilities to compile source code.
Run image
The run image is the base image that a container is built on. This means that it is the base image that will run when the App executes.
Layers
V3 buildpacks use layers to compose the final container. Each buildpack included in a build is given the opportunity to manipulate the file system and environment variables of the App. This layering approach allows for buildpacks to be thinner and more generic.
V3 buildpacks are built on OCI containers. This requires that the V3 builder image be stored in a container registry that the Kf build pipeline has access to. The build pipeline uses the builder image to apply the underlying scripts to build the source code into a runnable container.
Pros
- Google supported builder and run image.
- Works with various CI/CD runtimes like Cloud Build.
- Growing community and buildpack registry.
Cons
- May require code/process updates. For example, the Java buildpack requires source code while the V2 buildpack requires a JAR file.
- V3 buildpacks are newer and might require additional validation (if using community-developed buildpacks).
Kf Stacks
View Stacks
When pushing an App, the build pipeline determines the buildpack based on the selected Stack (specified via the `--stack` flag or the manifest).
To see which Stacks are available in a Space, first ensure a Space is targeted:
kf target -s myspace
The `kf stacks` subcommand can then be used to list the Stacks:
kf stacks
The output shows both V2 and V3 Stacks:
Getting stacks in Space: myspace
Version Name Build Image Run Image
V2 cflinuxfs3 cloudfoundry/cflinuxfs3@sha256:5219e9e30000e43e5da17906581127b38fa6417f297f522e332a801e737928f5 cloudfoundry/cflinuxfs3@sha256:5219e9e30000e43e5da17906581127b38fa6417f297f522e332a801e737928f5
V3 kf-v2-to-v3-shim gcr.io/kf-releases/v2-to-v3:v2.7.0 gcr.io/buildpacks/gcp/run:v1 This is a stack added by the integration tests to assert that v2->v3 shim works
V3 google gcr.io/buildpacks/builder:v1 gcr.io/buildpacks/gcp/run:v1 Google buildpacks (https://github.com/GoogleCloudPlatform/buildpacks)
V3 org.cloudfoundry.stacks.cflinuxfs3 cloudfoundry/cnb:cflinuxfs3@sha256:f96b6e3528185368dd6af1d9657527437cefdaa5fa135338462f68f9c9db3022 cloudfoundry/run:full-cnb@sha256:dbe17be507b1cc6ffae1e9edf02806fe0e28ffbbb89a6c7ef41f37b69156c3c2 A large Cloud Foundry stack based on Ubuntu 18.04
V2 to V3 Buildpack Migration
Kf provides a V3 stack, named `kf-v2-to-v3-shim`, to build applications that were built with standard V2 buildpacks. The `kf-v2-to-v3-shim` stack is created following the standard V3 buildpacks API. A Google-maintained builder image is created with each Kf release, following the standard buildpack process. The builder image aggregates a list of V3 buildpacks created by the same process used with the `kf wrap-v2-buildpack` command. The V3 buildpack images are created using the standard V2 buildpack images. It’s important to note that the V3 buildpacks do not contain the binary of the referenced V2 buildpacks. Instead, the V2 buildpack images are referenced, and the bits are downloaded at App build time (by running `kf push`).
At App build time, the V2 buildpack is downloaded from the corresponding git repository. When V3 detection runs, it delegates to the downloaded V2 detection script. For the first buildpack group that passes detection, it proceeds to the build step, which delegates the build execution to the downloaded V2 builder script.
The following V2 buildpacks are supported in the `kf-v2-to-v3-shim` stack:
Buildpack | Git Repository |
---|---|
java_buildpack | https://github.com/cloudfoundry/java-buildpack |
dotnet_core_buildpack | https://github.com/cloudfoundry/dotnet-core-buildpack |
nodejs_buildpack | https://github.com/cloudfoundry/nodejs-buildpack |
go_buildpack | https://github.com/cloudfoundry/go-buildpack |
python_buildpack | https://github.com/cloudfoundry/python-buildpack |
binary_buildpack | https://github.com/cloudfoundry/binary-buildpack |
nginx_buildpack | https://github.com/cloudfoundry/nginx-buildpack |
Option 1: Migrate Apps built with standard V2 buildpacks
To build Apps with the `kf-v2-to-v3-shim` stack, use the following command:
kf push myapp --stack kf-v2-to-v3-shim
The `kf-v2-to-v3-shim` stack will automatically detect the runtime with the wrapped V2 buildpacks. The resulting App image is created using the V3 standard and build pipeline, but with the builder of the equivalent V2 buildpack.
Option 2: Migrate Apps built with custom V2 buildpacks
Kf has a buildpack migration tool that can take a V2 buildpack and wrap it with a V3 buildpack. The wrapped buildpack can then be used anywhere V3 buildpacks are available.
kf wrap-v2-buildpack gcr.io/your-project/v2-go-buildpack https://github.com/cloudfoundry/go-buildpack --publish
This will create a buildpack image named `gcr.io/your-project/v2-go-buildpack`. It can then be used to create a builder by following the create a builder docs.
This subcommand uses the following CLIs transparently:
- `go`
- `git`
- `pack`
- `unzip`
1.5 - App Manifest
App manifests provide a way for developers to record their App’s execution environment in a declarative way. They allow Apps to be deployed consistently and reproducibly.
Format
Manifests are YAML files in the root directory of the App. They must be named `manifest.yml` or `manifest.yaml`.
Kf App manifests are allowed to have a single top-level element: `applications`. The `applications` element can contain one or more application entries.
Application fields
The following fields are valid for objects under `applications`:
Field | Type | Description |
---|---|---|
name | string | The name of the application. The app name should be lower-case alphanumeric characters and dashes. It must not start with a dash. |
path | string | The path to the source of the app. Defaults to the manifest’s directory. |
buildpacks | string[] | A list of buildpacks to apply to the app. |
stack | string | Base image to use for apps created with a buildpack. |
docker | object | A docker object. See the Docker Fields section for more information. |
env | map | Key/value pairs to use as the environment variables for the app and build. |
services | string[] | A list of service instance names to automatically bind to the app. |
disk_quota | quantity | The amount of disk the application should get. Defaults to 1GiB. |
memory | quantity | The amount of RAM to provide the app. Defaults to 1GiB. |
cpu † | quantity | The amount of CPU to provide the application. Defaults to 1/10th of a CPU. |
cpu-limit † | quantity | The maximum amount of CPU to provide the application. Defaults to unlimited. |
instances | int | The number of instances of the app to run. Defaults to 1. |
routes | object | A list of routes the app should listen on. See the Route Fields section for more. |
no-route | boolean | If set to true, the application will not be routable. |
random-route | boolean | If set to true, the app will be given a random route. |
timeout | int | The number of seconds to wait for the app to become healthy. |
health-check-type | string | The type of health-check to use: port , process , none , or http . Default: port |
health-check-http-endpoint | string | The endpoint to target as part of the health-check. Only valid if health-check-type is http . |
health-check-invocation-timeout | int | Timeout in seconds for an individual health check probe to complete. Default: 1 . |
command | string | The command that starts the app. If supplied, this will be passed to the container entrypoint. |
entrypoint † | string | Overrides the app container’s entrypoint. |
args † | string[] | Overrides the arguments of the app container. |
ports † | object | A list of ports to expose on the container. If supplied, the first entry in this list is used as the default port. |
startupProbe † | probe | Sets the app container’s startup probe. |
livenessProbe † | probe | Sets the app container’s liveness probe. |
readinessProbe † | probe | Sets the app container’s readiness probe. |
metadata | object | Additional tags for applications and their underlying resources. |
† Unique to Kf
Docker fields
The following fields are valid for `application.docker` objects:
Field | Type | Description |
---|---|---|
image | string | The docker image to use. |
Route fields
The following fields are valid for `application.routes` objects:
Field | Type | Description |
---|---|---|
route | string | A route to the app including hostname, domain, and path. |
appPort | int | (Optional) A custom port on the App the route will send traffic to. |
Port fields
The following fields are valid for `application.ports` objects:
Field | Type | Description |
---|---|---|
port | int | The port to expose on the App’s container. |
protocol | string | The protocol of the port to expose. Must be tcp , http or http2 . Default: tcp |
Metadata fields
The following fields are valid for `application.metadata` objects:
Field | Type | Description |
---|---|---|
labels | string -> string map | Labels to add to the app and underlying application Pods. |
annotations | string -> string map | Annotations to add to the app and underlying application Pods. |
Probe fields
Probes allow a subset of functionality from Kubernetes probes.
A probe must contain one action and other settings.
Field | Type | Description |
---|---|---|
failureThreshold | int | Minimum consecutive failures for the probe to be considered failed. |
initialDelaySeconds | int | Number of seconds to wait after container initialization to start the probe. |
periodSeconds | int | How often (in seconds) to perform the probe. |
successThreshold | int | Minimum consecutive successes for the probe to be considered successful. |
timeoutSeconds | int | Number of seconds after which a single invocation of the probe times out. |
tcpSocket | TCPSocketAction object | Action specifying a request to a TCP port. |
httpGet | HTTPGetAction object | Action specifying an HTTP GET request. |
TCPSocketAction fields
Describes an action based on TCP requests.
Field | Type | Description |
---|---|---|
host | string | Host to connect to, defaults to the App’s IP. |
HTTPGetAction fields
Describes an action based on HTTP get requests.
Field | Type | Description |
---|---|---|
host | string | Host to connect to, defaults to the App’s IP. |
path | string | Path to access on the HTTP server. |
scheme | string | Scheme to use when connecting to the host. Default: http |
httpHeaders | array of {"name": <string>, "value": <string>} objects | Additional headers to send. |
Examples
Minimal App
This is a bare-bones manifest that will build an App by auto-detecting the buildpack based on the uploaded source, and deploy one instance of it.
---
applications:
- name: my-minimal-application
Simple App
This is a full manifest for a more traditional Java App.
---
applications:
- name: account-manager
# only upload src/ on push
path: src
# use the Java buildpack
buildpacks:
- java
env:
# manually configure the buildpack's Java version
BP_JAVA_VERSION: 8
ENVIRONMENT: PRODUCTION
# use less disk and memory than default
disk_quota: 512M
memory: 512M
# Give the app a minimum of 2/10ths of a CPU
cpu: 200m
instances: 3
# make the app listen on three routes
routes:
- route: accounts.mycompany.com
- route: accounts.datacenter.mycompany.internal
- route: mycompany.com/accounts
# set up a longer timeout and custom endpoint to validate
# when the app comes up
timeout: 300
health-check-type: http
health-check-http-endpoint: /healthz
# attach two services by name
services:
- customer-database
- web-cache
Docker App
Kf can deploy Docker containers as well as manifest-deployed Apps. These Docker Apps MUST listen on the `PORT` environment variable.
---
applications:
- name: white-label-app
# use a pre-built docker image (must listen on $PORT)
docker:
image: gcr.io/my-company/white-label-app:123
env:
# add additional environment variables
ENVIRONMENT: PRODUCTION
disk_quota: 1G
memory: 1G
# Give the app a minimum of 2 full CPUs
cpu: 2000m
instances: 1
routes:
- route: white-label-app.mycompany.com
App with multiple ports
This App has multiple ports to expose an admin console, website, and SMTP server.
---
applications:
- name: b2b-server
ports:
- port: 8080
protocol: http
- port: 9090
protocol: http
- port: 2525
protocol: tcp
routes:
- route: b2b-admin.mycompany.com
appPort: 9090
- route: b2b.mycompany.com
# gets the default (first) port
Health check types
Kf supports three different health check types:
- `port` (default)
- `http`
- `process` (or `none`)

`port` and `http` set a Kubernetes readiness and liveness probe that ensures the application is ready before sending traffic to it.
The `port` health check will ensure the port found at `$PORT` is being listened to. Under the hood Kf uses a TCP probe.
The `http` health check will use the configured value in `health-check-http-endpoint` to check the application’s health. Under the hood Kf uses an HTTP probe.
A `process` health check only checks to see if the process running on the container is alive. It does NOT set a Kubernetes readiness or liveness probe.
Known differences
The following are known differences between Kf manifests and CF manifests:
- Kf does not support deprecated CF manifest fields. This includes all fields at the root-level of the manifest (other than applications) and routing fields.
- Kf is missing support for the following v2 manifest fields:
docker.username
- Kf does not support auto-detecting ports for Docker containers.
1.6 - App runtime
The app runtime is the environment apps are executed in.
 | Buildpack Apps | Container Image Apps |
---|---|---|
System libraries | Provided by the Stack | Provided in the container |
Network access | Full access through Envoy sidecar | Full access through Envoy sidecar |
File system | Ephemeral storage | Ephemeral storage |
Language runtime | Supplied by the Stack or Buildpack | Built into the container |
User | Specified by the Stack | Specified on the container |
Isolation mechanism | Kubernetes Pod | Kubernetes Pod |
DNS | Provided by Kubernetes | Provided by Kubernetes |
Environment variables
Environment variables are injected into the app at runtime by Kubernetes. Variables are added based on the following order, where later values override earlier ones with the same name:
- Space (set by administrators)
- App (set by developers)
- System (set by Kf)
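As a sketch of that precedence, the merge behaves like a series of dictionary updates where later sources win (the variable names and values here are illustrative only):

```python
# Later sources override earlier ones, so an App-level value beats a
# Space default, and Kf's system-injected values beat both.
space_env = {"LOG_LEVEL": "info", "REGION": "us-east1"}  # set by administrators
app_env = {"LOG_LEVEL": "debug"}                         # set by developers
system_env = {"PORT": "8080"}                            # set by Kf

merged = {**space_env, **app_env, **system_env}
print(merged["LOG_LEVEL"])  # debug
```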
Kf provides the following system environment variables:
Variable | Purpose |
---|---|
CF_INSTANCE_ADDR | The cluster-visible IP:PORT of the App instance. |
CF_INSTANCE_GUID | The UUID of the App instance. |
INSTANCE_GUID | Alias of CF_INSTANCE_GUID . |
CF_INSTANCE_INDEX | The index number of the App instance; this will ALWAYS be 0. |
INSTANCE_INDEX | Alias of CF_INSTANCE_INDEX . |
CF_INSTANCE_IP | The cluster-visible IP of the App instance. |
CF_INSTANCE_INTERNAL_IP | Alias of CF_INSTANCE_IP |
VCAP_APP_HOST | Alias of CF_INSTANCE_IP |
CF_INSTANCE_PORT | The cluster-visible port of the App instance. In Kf this is the same as PORT . |
DATABASE_URL | The first URI found in a VCAP_SERVICES credential. |
LANG | Required by Buildpacks to ensure consistent script load order. |
MEMORY_LIMIT | The maximum amount of memory in MB the App can consume. |
PORT | The port the App should listen on for requests. |
VCAP_APP_PORT | Alias of PORT . |
VCAP_APPLICATION | A JSON structure containing App metadata. |
VCAP_SERVICES | A JSON structure specifying bound services. |
Service credentials from bound services get injected into Apps via the `VCAP_SERVICES` environment variable. The variable is a valid JSON object with the following structure.
VCAPServices
A JSON object where the keys are Service labels and the values are an array of VCAPService. The array represents every bound service with that label. User-provided services are placed under the label `user-provided`.
Example
{
"mysql": [...],
"postgresql": [...],
"user-provided": [...]
}
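An App can read this structure from the environment. A minimal sketch of looking up a bound service by label (the `find_service` helper and the sample payload are illustrative, not part of Kf; a real App would read `os.environ["VCAP_SERVICES"]`):

```python
import json

def find_service(vcap_services, label, name=None):
    """Return the first bound service with the given label (and optional name)."""
    for svc in json.loads(vcap_services).get(label, []):
        if name is None or svc.get("name") == name:
            return svc
    return None

# Sample payload; the "mysql" label and credential keys are assumptions.
sample = '{"mysql": [{"name": "my-db", "credentials": {"username": "admin"}}]}'
print(find_service(sample, "mysql", "my-db")["credentials"]["username"])  # admin
```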
VCAPService
This type represents a single bound service instance.
Example
{
"binding_name": string,
"instance_name": string,
"name": string,
"label": string,
"tags": string[],
"plan": string,
"credentials": object
}
Fields
Field | Type | Description |
---|---|---|
binding_name | string | The name assigned to the service binding by the user. |
instance_name | string | The name assigned to the service instance by the user. |
name | string | The binding_name if it exists; otherwise the instance_name . |
label | string | The name of the service offering. |
tags | string[] | An array of strings an app can use to identify a service instance. |
plan | string | The service plan selected when the service instance was created. |
credentials | object | The service-specific credentials needed to access the service instance. |
VCAP_APPLICATION
The `VCAP_APPLICATION` environment variable is a JSON object containing metadata about the App.
Example
{
"application_id": "12345",
"application_name": "my-app",
"application_uris": ["my-app.example.com"],
"limits": {
"disk": 1024,
"mem": 256
},
"name": "my-app",
"process_id": "12345",
"process_type": "web",
"space_name": "my-ns",
"uris": ["my-app.example.com"]
}
Fields
Field | Type | Description |
---|---|---|
application_id | string | The GUID identifying the App. |
application_name | string | The name assigned to the App when it was pushed. |
application_uris | string[] | The URIs assigned to the App. |
limits | object | The limits to disk space, and memory permitted to the App. Memory and disk space limits are supplied when the App is deployed, either on the command line or in the App manifest. Disk and memory limits are represented as integers, with an assumed unit of MB. |
name | string | Identical to application_name . |
process_id | string | The UID identifying the process. Only present in running App containers. |
process_type | string | The type of process. Only present in running App containers. |
space_name | string | The human-readable name of the Space where the App is deployed. |
uris | string[] | Identical to application_uris . |
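An App can read these fields at runtime. A small sketch using the example payload from this page (in a real App the JSON would come from `os.environ["VCAP_APPLICATION"]`):

```python
import json

# Values below are the example from this page; limits are integers in MB.
vcap_application = json.loads("""
{
  "application_name": "my-app",
  "limits": {"disk": 1024, "mem": 256}
}
""")
mem_mb = vcap_application["limits"]["mem"]
print(f'{vcap_application["application_name"]} may use {mem_mb} MB')
```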
Missing Fields
Some fields in `VCAP_APPLICATION` that are in Cloud Foundry are currently not supported in Kf. Besides CF-specific and deprecated fields (`cf_api`, `host`, `users`), the fields that are not supported in Kf are:
- `application_version` (identical to `version`)
- `organization_id`
- `organization_name`
- `space_id`
- `start` (identical to `started_at`)
- `started_at_timestamp` (identical to `state_timestamp`)
1.7 - Build runtime
The Build runtime is the environment Apps are built in.
 | Buildpack Builds | Docker Builds |
---|---|---|
System libraries | Provided by the Stack | User supplied |
Network access | Full access through Envoy sidecar | Full access through Envoy sidecar |
File system | No storage | No storage |
Language runtime | Provided by the Stack | User supplied |
User | Specified by the Stack | User supplied |
Isolation mechanism | Kubernetes Pod | Kubernetes Pod |
DNS | Provided by Kubernetes | Provided by Kubernetes |
Environment variables
Environment variables are injected into the Build at runtime. Variables are added based on the following order, where later values override earlier ones with the same name:
- Space (set by administrators)
- App (set by developers)
- System (set by Kf)
Kf provides the following system environment variables to Builds:
Variable | Purpose |
---|---|
CF_INSTANCE_ADDR | The cluster-visible IP:PORT of the Build. |
INSTANCE_GUID | Alias of CF_INSTANCE_GUID . |
CF_INSTANCE_IP | The cluster-visible IP of the Build. |
CF_INSTANCE_INTERNAL_IP | Alias of CF_INSTANCE_IP |
VCAP_APP_HOST | Alias of CF_INSTANCE_IP |
CF_INSTANCE_PORT | The cluster-visible port of the Build. |
LANG | Required by Buildpacks to ensure consistent script load order. |
MEMORY_LIMIT | The maximum amount of memory in MB the Build can consume. |
VCAP_APPLICATION | A JSON structure containing App metadata. |
VCAP_SERVICES | A JSON structure specifying bound services. |
2 - Backing services
2.1 - Backing Services Overview
Backing services are any processes that the App contacts over the network during its operation. In traditional operating systems, these services could have been accessed over the network, a UNIX socket, or could even be a sub-process. Examples include the following:
- Databases — for example: MySQL, PostgreSQL
- File storage — for example: NFS, FTP
- Logging services — for example: syslog endpoints
- Traditional HTTP APIs — for example: Google Maps, WikiData, Parcel Tracking APIs
Connecting to backing services over the network, rather than installing them all on the same machine, lets developers focus on their App, allows independent security upgrades for different components, and provides the flexibility to swap implementations.
Backing services in Kf
Kf supports two major types of backing services:
Managed services: These services are created by a service broker and are tied to the Kf cluster.
User-provided services: These services are created outside of Kf, but can be bound to apps in the same way as managed services.
2.2 - Use managed services
Find a service
Use the `kf marketplace` command to find a service you want to use in your App. Running the command without arguments will show all the service classes available. A service class represents a specific type of service, e.g. a MySQL database or a Postfix SMTP relay.
$ kf marketplace
5 services can be used in Space "test", use the --service flag to list the plans for a service
Broker Name Space Status Description
minibroker mariadb Active Helm Chart for mariadb
minibroker mongodb Active Helm Chart for mongodb
minibroker mysql Active Helm Chart for mysql
minibroker postgresql Active Helm Chart for postgresql
minibroker redis Active Helm Chart for redis
Service classes can have multiple plans available. A service plan generally corresponds to a version or pricing tier of the software. You can view the plans for a specific service by supplying the service name with the marketplace command:
$ kf marketplace --service mysql
Name Free Status Description
5-7-14 true Active Fast, reliable, scalable, and easy to use open-source relational database system.
5-7-27 true Active Fast, reliable, scalable, and easy to use open-source relational database system.
5-7-28 true Active Fast, reliable, scalable, and easy to use open-source relational database system.
Provision a service
Once you have identified a service class and plan to provision, you can create an instance of the service using `kf create-service`:
$ kf create-service mysql 5-7-28 my-db
Creating service instance "my-db" in Space "test"
Waiting for service instance to become ready...
Success
Services are provisioned into a single Space. You can see the services in the current Space by running `kf services`:
$ kf services
Listing services in Space: "test"
Name ClassName PlanName Age Ready Reason
my-db mysql 5-7-28 111s True <nil>
You can delete a service using `kf delete-service`:
$ kf delete-service my-db
Bind a service
Once a service has been created, you can bind it to an App, which will inject credentials into the App so the service can be used. You can create the binding using `kf bind-service`:
$ kf bind-service my-app my-db
Creating service instance binding "binding-my-app-my-db" in Space "test"
Waiting for service instance binding to become ready...
Success
You can list all bindings in a Space using the `kf bindings` command:
$ kf bindings
Listing bindings in Space: "test"
Name App Service Age Ready
binding-my-app-my-db my-app my-db 82s True
Once a service is bound, restart the App using `kf restart` and the credentials will be in the `VCAP_SERVICES` environment variable.
You can delete a service binding with the `kf unbind-service` command:
$ kf unbind-service my-app my-db
2.3 - Configure user-provided services
Users can leverage services that aren’t available in the marketplace by creating user-provided service instances.
Once created, user-provided service instances behave like managed service instances created through `kf create-service`. Creating, listing, updating, deleting, binding, and unbinding user-provided services are all supported in Kf.
Create a user-provided service instance
The name given to a user-provided service must be unique across all service instances in a Space, including services created through a service broker.
Deliver service credentials to an app
A user-provided service instance can be used to deliver credentials to an App. For example, a database admin can have a set of credentials for an existing database managed outside of Kf, and these credentials include the URL, port, username, and password used to connect to the database.
The admin can create a user-provided service with the credentials and the developer can bind the service instance to the App. This allows the credentials to be shared without ever leaving the platform. Binding a service instance to an App has the same effect regardless of whether the service is a user-provided service or a marketplace service.
The App is configured with the credentials provided by the user, and the App runtime environment variable VCAP_SERVICES
is populated with information about all of the bound services to that App.
A user-provided service can be created with credentials and/or a list of tags.
kf cups my-db -p '{"username":"admin", "password":"test123", "some-int-val": 1, "some-bool": true}' -t "comma, separated, tags"
This will create the user-provided service instance my-db
with the provided credentials and tags. The credentials passed in to the -p
flag must be valid JSON (either inline or loaded from a file path).
To deliver the credentials to one or more Apps, the user can run kf bind-service
.
Suppose we have an App with one bound service, the user-provided service my-db
defined above. The VCAP_SERVICES
environment variable for that App will have the following contents:
{
"user-provided": [
{
"name": "my-db",
"instance_name": "my-db",
"label": "user-provided",
"tags": [
"comma",
"separated",
"tags"
],
"credentials": {
"username": "admin",
"password": "test123",
"some-int-val": 1,
"some-bool": true
}
}
]
}
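Libraries commonly locate a binding by tag rather than by name. A minimal Python sketch of that lookup against the structure above (`services_with_tag` is a hypothetical helper):

```python
import json

def services_with_tag(vcap_services: str, tag: str) -> list:
    """Return every bound service in VCAP_SERVICES carrying the tag."""
    matches = []
    for bindings in json.loads(vcap_services).values():
        for binding in bindings:
            if tag in binding.get("tags", []):
                matches.append(binding)
    return matches
```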
Update a user-provided service instance
A user-provided service can be updated with the uups
command. New credentials and/or tags passed in completely overwrite existing ones. For example, if the user created the user-provided service my-db
above, called kf bind-service
to bind the service to an App, and then ran the following command:
to bind the service to an App, and then ran the following command:
kf uups my-db -p '{"username":"admin", "password":"new-pw", "new-key": "new-val"}'
The updated credentials will only be reflected on the App after the user unbinds and rebinds the service to the App. No restart or restage of the App is required. The updated VCAP_SERVICES
environment variable will have the following contents:
{
"user-provided": [
{
"name": "my-db",
"instance_name": "my-db",
"label": "user-provided",
"tags": [
"comma",
"separated",
"tags"
],
"credentials": {
"username": "admin",
"password": "new-pw",
"new-key": "new-val"
}
}
]
}
The new credentials overwrite the old credentials, and the tags are unchanged because they were not specified in the update command.
2.4 - User Provided Service Templates
Before you begin
- Ensure your service is running and accessible on the same network running your Kf cluster.
- Ensure you have targeted the Space where you want to create the service.
Create the user-provided instance
The following examples use the most common parameters used by applications to autoconfigure services. Most libraries use tags to find the right bound service and a URI to connect.
MySQL
MySQL libraries usually expect the tag mysql
and the following parameters:
uri
- Example
mysql://username:password@host:port/dbname
. The MySQL documentation can help with creating a URI string. The port is usually 3306
. username
- The connection username, required by some libraries even if included in
uri
. password
- The connection password, required by some libraries even if included in
uri
.
kf cups service-instance-name \
-p '{"username":"my-username", "password":"my-password", "uri":"mysql://my-username:my-password@mysql-host:3306/my-db"}' \
-t "mysql"
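The `-p` payload is easy to get wrong when quoting by hand. One way to guarantee valid JSON is to generate the flag value programmatically; the sketch below uses Python's `json` and `shlex` with hypothetical connection details:

```python
import json
import shlex

# Hypothetical connection details for illustration only.
creds = {
    "username": "my-username",
    "password": "my-password",
    "uri": "mysql://my-username:my-password@mysql-host:3306/my-db",
}

# json.dumps guarantees the -p flag receives valid JSON, and
# shlex.quote makes the payload safe to paste into a shell.
payload = json.dumps(creds)
command = f"kf cups service-instance-name -p {shlex.quote(payload)} -t mysql"
print(command)
```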
RabbitMQ
RabbitMQ libraries usually expect the tag rabbitmq
and the following parameters:
uri
- Example
amqp://username:password@host:port/vhost?query
. The RabbitMQ documentation can help with creating a URI string. The port is usually 5672
.
Example:
kf cups service-instance-name \
-p '{"uri":"amqp://username:password@host:5672"}' \
-t "rabbitmq"
Redis
Redis libraries usually expect the tag redis
and the following parameters:
uri
- Example
redis://:password@host:port/query
. The IANA Redis URI documentation can help with creating a URI string. The port is usually 6379
.
Example for Redis with no AUTH configured:
kf cups service-instance-name \
-p '{"uri":"redis://redis-host:6379"}' \
-t "redis"
Example for Redis with AUTH configured:
kf cups service-instance-name \
-p '{"uri":"redis://:password@redis-host:6379"}' \
-t "redis"
Bind your App
Once the user-provided service has been created, you can bind your App to the user-provided service by name, just like a managed service:
kf bind-service application-name service-instance-name
What’s next
- Learn about how the credentials are injected into your app.
2.5 - Configure NFS volumes
Kf supports mounting NFS volumes using the kf marketplace
.
Prerequisites
- Your administrator must have completed the NFS platform setup guide.
Create an NFS service instance
Run kf marketplace
to see available services. The built-in NFS service appears on the list if NFS is enabled on the platform.
Broker Name Namespace Description
nfsvolumebroker nfs mount nfs shares
Mount an external filesystem
Create a service instance
To mount to an existing NFS service:
kf create-service nfs existing SERVICE-INSTANCE-NAME -c '{"share":"SERVER/SHARE", "capacity":"CAPACITY"}'
Replace variables with your values.
- SERVICE-INSTANCE-NAME is the name you want for this NFS volume service instance.
- SERVER/SHARE is the NFS address of your server and share.
- CAPACITY uses the Kubernetes quantity format.
Confirm that the NFS volume service appears in your list of services. You can expect output similar to this example:
$ kf services
...
Listing services in Space: demo-space
Name Type ClassName PlanName Age Ready Reason
filestore-nfs volume nfs existing 6s True <nil>
...
Bind your service instance to an App
To bind an NFS service instance to an App, run:
kf bind-service YOUR-APP-NAME SERVICE-NAME -c '{"uid":"2000","gid":"2000","mount":"MOUNT-PATH","readonly":true}'
Replace variables with your values.
- YOUR-APP-NAME is the name of the App for which you want to use the volume service.
- SERVICE-NAME is the name of the volume service instance you created in the previous step.
- uid:UID and gid:GID specify the directory permissions of the mounted share.
- MOUNT-PATH is the path the volume should be mounted to within your App.
- (Optional) "readonly":true creates a read-only mount. By default, Volume Services mounts a read-write file system.

Note: Your App automatically restarts when the NFS binding changes.
You can list all bindings in a Space using the kf bindings
command. You will see output similar to this example:
$ kf bindings
...
Listing bindings in Space: demo-space
Name App Service Age Ready
binding-spring-music-filestore-nfs spring-music filestore-nfs 71s True
...
Access the volume service from your App
To access the volume service from your App, you must know which file path to use in your code. You can view the file path in the details of the service binding, which are visible in the environment variables for your App.
View environment variables for your App:
kf vcap-services YOUR-APP-NAME
Replace YOUR-APP-NAME with the name of your App.
The following is example output of the kf vcap-services
command:
$ kf vcap-services YOUR-APP-NAME
{
"nfs": [
{
"instance_name": "nfs-instance",
"name": "nfs-instance",
"label": "nfs",
"tags": [],
"plan": "existing",
"credentials": {
"capacity": "1Gi",
"gid": 2000,
"mount": "/test/mount",
"share": "10.91.208.210/test",
"uid": 2000
},
"volume_mounts": [
{
"container_dir": "/test/mount",
"device_type": "shared",
"mode": "rw"
}
]
}
]
}
Use the properties under volume_mounts
for any information required by your App.
Property | Description |
---|---|
container_dir | String containing the path to the mounted volume that you bound to your App. |
device_type | The NFS volume release. This currently only supports shared devices. A shared device represents a distributed file system that can mount on all App instances simultaneously. |
mode | String that informs what type of access your App has to NFS, either ro (read-only), or rw (read and write). |
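To locate the mount path programmatically, an App can read volume_mounts from VCAP_SERVICES. A minimal Python sketch (`nfs_mount_dirs` is an illustrative helper, not part of Kf):

```python
import json

def nfs_mount_dirs(vcap_services: str) -> list:
    """Return container_dir for every volume mount in VCAP_SERVICES."""
    dirs = []
    for bindings in json.loads(vcap_services).values():
        for binding in bindings:
            for mount in binding.get("volume_mounts", []):
                dirs.append(mount["container_dir"])
    return dirs
```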
3 - Configure and Use Service Accounts
By default, all applications in Kf are assigned a unique Kubernetes Service Account (KSA) named sa-<APP_NAME>
.
Kf uses this KSA as the “user” it runs application instances and tasks under.
Each App KSA receives a copy of the container registry credentials used by the Space’s build KSA so Kf apps can pull
container images that were created during kf push
.
Using the service account
Kubernetes Pods (the building blocks of Apps and Tasks) automatically receive a JWT for the KSA mounted in the container:
$ ls /var/run/secrets/kubernetes.io/serviceaccount/
ca.crt
namespace
token
ca.crt
The Kubernetes control plane’s certificate.namespace
The Kubernetes namespace of the workload.token
A Base64 encoded JWT for the Kf App’s Service Account.
Below is an example of what the JWT looks like. Note that:
- It expires and needs to be periodically refreshed from disk.
- Its audience is only valid within the Kubernetes cluster.
{
"aud": [
"<KUBERNETES_CLUSTER_URI>"
],
"exp": 3600,
"iat": 0,
"iss": "<KUBERNETES_CLUSTER_URI>",
"kubernetes.io": {
"namespace": "<SPACE_NAME>",
"pod": {
"name": "<APP_NAME>-<RANDOM_SUFFIX>",
"uid": "<APP_GUID>"
},
"serviceaccount": {
"name": "sa-<APP_NAME>",
"uid": "<SERVICE_ACCOUNT_GUID>"
},
"warnafter": 3500
},
"nbf": 0,
"sub": "system:serviceaccount:<SPACE_NAME>:sa-<APP_NAME>"
}
You can use this credential to connect to the Kubernetes control plane listed in the issuer (iss
) field.
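An App that wants to know when to re-read the token from disk can decode the JWT's payload segment without verifying the signature. A minimal Python sketch (this inspects claims only; it is not a substitute for real verification):

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT.

    The token is header.payload.signature; the payload segment is
    base64url-encoded JSON holding claims such as "exp".
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```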
Customizing the service account
You may want to use a different service account than the default one Kf provides, for example to:
- Allow blue/green apps to have the same identity.
- Use Kf with a federated identity system.
- Provide custom image pull credentials for a specific app.
You can enable this by adding the apps.kf.dev/service-account-name
annotation to your app manifest.
The value should be the name of the KSA you want the application and tasks to use.
Example:
applications:
- name: my-app
metadata:
annotations:
"apps.kf.dev/service-account-name": "override-sa-name"
Limitations:
- Only KSAs within the same Kubernetes namespace (which corresponds to a Kf Space) are allowed.
- The KSA must exist and be readable by Kf, otherwise the app will not deploy.
- The KSA or the cluster must have permission to pull the application’s container images.
Additional resources
- If you use GKE, learn how to inject apps with a Google Service Account credential.
- Learn how to use federated identity in GCP to allow authenticating KSAs to GCP infrastructure.
4 - Scaling
4.1 - Scaling overview
Kf leverages the Kubernetes Horizontal Pod Autoscaler (HPA) to automatically scale the number of Pods in an App. When autoscaling is enabled for an App, an HPA object is created and bound to the App object. It then dynamically calculates the target scale and sets it for the App.
Kf Apps are also compatible with HPA policies created outside of Kf.
How Kf scaling works
The number of Pods that are deployed for a Kf App is
controlled by its underlying Deployment object’s replicas
field. The target
number of Deployment replicas is set through the App’s replicas
field.
Scaling can be done manually with the kf scale
command.
This command is disabled when autoscaling is enabled to avoid conflicting targets.
How the Kubernetes Horizontal Pod Autoscaler works
The Horizontal Pod Autoscaler (HPA) is implemented as a Kubernetes API resource (the HPA object) and a control loop (the HPA controller) which periodically calculates the number of desired replicas based on current resource utilization. The HPA controller then passes the number to the target object that implements the Scale subresource. The actual scaling is delegated to the underlying object and its controller. You can find more information in the Kubernetes documentation.
How the Autoscaler determines when to scale
Periodically, the HPA controller queries the resource utilization against the metrics specified in each HorizontalPodAutoscaler definition. The controller obtains the metrics from the resource metrics API for each Pod. Then the controller calculates the utilization value as a percentage of the equivalent resource request. The desired number of replicas is then calculated based on the ratio of current percentage and desired percentage. You can read more about the autoscaling algorithm in the Kubernetes documentation.
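The core of that calculation, from the Kubernetes documentation, is desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue). A minimal Python sketch:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """HPA scaling rule from the Kubernetes docs:
    desired = ceil(current * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_metric / target_metric)
```

For example, two replicas running at 200m CPU against a 100m target scale up to four replicas.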
Metrics
Kf uses HPA v1, which only supports CPU as the target metric.
How the Kubernetes Horizontal Autoscaler works with Kf
When autoscaling is enabled for a Kf App, the Kf controller will create an HPA object based on the scaling limits and rules specified on the App. Then the HPA controller fetches the specs from the HPA object and scales the App accordingly.
The HPA object will be deleted if Autoscaling is disabled or if the corresponding App is deleted.
4.2 - Manage Autoscaling
Kf supports two primary autoscaling modes:
- Built-in autoscaling similar to Cloud Foundry.
- Advanced autoscaling through the Kubernetes Horizontal Pod Autoscaler (HPA).
Built-in autoscaling
Kf Apps can be automatically scaled based on CPU usage. You can configure autoscaling limits for your Apps and the target CPU usage for each App instance. Kf automatically scales your Apps up and down in response to demand.
By default, autoscaling is disabled. Follow the steps below to enable autoscaling.
View Apps
You can view the autoscaling status for an App using the kf apps
command. If autoscaling is enabled for an App, Instances
includes the
autoscaling status.
$ kf apps
Name Instances Memory Disk CPU
app1 4 (autoscaled 4 to 5) 256Mi 1Gi 100m
app2 1 256Mi 1Gi 100m
Autoscaling is enabled for app1
with min-instances
set to 4 and
max-instances
set to 5. Autoscaling is disabled for app2
.
Update autoscaling limits
You can update the instance limits using the kf update-autoscaling-limits
command.
kf update-autoscaling-limits app-name min-instances max-instances
Create autoscaling rule
You can create autoscaling rules using the kf create-autoscaling-rule
command.
kf create-autoscaling-rule app-name CPU min-threshold max-threshold
Delete autoscaling rules
You can delete all autoscaling rules with the
kf delete-autoscaling-rules
command. Kf only supports
one autoscaling rule.
kf delete-autoscaling-rules app-name
Enable and disable autoscaling
Autoscaling can be enabled by using enable-autoscaling
and
disabled by using disable-autoscaling
. When it is disabled, the
configurations, including limits and rules, are preserved.
kf enable-autoscaling app-name
kf disable-autoscaling app-name
Advanced autoscaling
Kf Apps support the Kubernetes Horizontal Pod Autoscaler interface and will
therefore work with HPAs created using kubectl
.
Kubernetes HPA policies are less restrictive than Kf’s built-in support for autoscaling.
They include support for:
- Scaling on memory, CPU, or disk usage.
- Scaling based on custom metrics, such as traffic load or queue length.
- Scaling on multiple metrics.
- The ability to tune reactivity to smooth out rapid scaling.
Using custom HPAs with apps
You can follow the Kubernetes HPA walkthrough to learn how to set up autoscalers.
When you create the HPA, make sure to set the scaleTargetRef
to be your application:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: app-scaler
namespace: SPACE_NAME
spec:
scaleTargetRef:
apiVersion: kf.dev/v1alpha1
kind: App
name: APP_NAME
minReplicas: 3
maxReplicas: 10
metrics:
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 60
Caveats
- You shouldn't use Kf's built-in autoscaling together with a custom HPA, because the two set conflicting scaling targets.
- When you use a custom HPA, kf apps shows the current number of instances; it won't show that the App is being autoscaled.
4.3 - Managing Resources for Apps
When you create an app, you can optionally specify how much of each resource an instance of the application will receive when it runs.
Kf simplifies the Kubernetes model of resources and provides defaults that should work for most I/O bound applications out of the box.
Resource types
Kf supports three types of resources: memory, CPU, and ephemeral disk.
- Memory specifies the amount of RAM an application receives when running. If an application exceeds this amount, its container is restarted.
- Ephemeral disk specifies how much an application can write to a local disk. If an application exceeds this amount, it may not be able to write more.
- CPU specifies the number of CPUs an application receives when running.
Manifest
Resources are specified using four values in the manifest:
memory
sets the guaranteed minimum an app will receive and the maximum it’s permitted to use.disk_quota
sets the guaranteed minimum an app will receive and the maximum it’s permitted to use.cpu
sets the guaranteed minimum an app will receive.cpu-limit
sets the maximum CPU an app can use.
Example:
applications:
- name: "example"
disk_quota: 512M
memory: 512M
cpu: 200m
cpu-limit: 2000m
Defaults
Memory and ephemeral storage are both set to 1Gi if not specified.
CPU defaults to one of the following:
- 1/10th of a CPU if the platform operator hasn’t overridden it.
- A CPU value proportionally scaled by the amount of memory requested.
- A minimum CPU value set by the platform operator.
Resource units
Memory and disk
Cloud Foundry used the units T
, G
, M
, and K
to represent powers of two.
Kubernetes uses the units Ei
, Pi
, Gi
, Mi
, and Ki
for the same.
Kf allows you to specify memory and disk using either set of units.
CPU
Kf and Kubernetes use the unit m
for CPU, representing milli-CPU cores (thousandths of a core).
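As an illustration of the unit conversions above, here is a small Python sketch that converts memory/disk quantities and milli-CPU values. The helper names are hypothetical, and only the suffixes listed above are handled:

```python
# Binary suffixes used by Kubernetes (and accepted by Kf).
_BINARY = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40,
           "Pi": 2**50, "Ei": 2**60}
# Cloud Foundry style suffixes Kf also accepts for memory/disk;
# per the text above, these also represent powers of two.
_CF = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}

def quantity_to_bytes(quantity: str) -> int:
    """Convert a memory/disk quantity like '512Mi' or '1G' to bytes."""
    # Two-character binary suffixes are checked before CF ones so
    # '512Mi' matches 'Mi', not 'M'.
    for suffix, factor in {**_BINARY, **_CF}.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # plain byte count

def cpu_to_cores(cpu: str) -> float:
    """Convert a CPU quantity like '100m' (milli-cores) to cores."""
    if cpu.endswith("m"):
        return int(cpu[:-1]) / 1000
    return float(cpu)
```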
Sidecar overhead
When Kf schedules your app’s container as a Kubernetes Pod, it may bundle additional containers to your app to provide additional functionality. It’s likely your application will also have an Istio sidecar which is responsible for networking.
These containers will supply their own resource requests and limits and are overhead associated with running your application.
Best practices
- All applications should set memory and disk quotas.
- CPU intensive applications should set a CPU request and limit to guarantee they’ll have the resources they need without starving other apps.
- I/O bound applications shouldn’t set a CPU limit so they can burst during startup.
5 - Service discovery
This document is an overview of Kubernetes DNS-based service discovery and how it can be used with Kf.
When to use Kubernetes service discovery with Kf
Kubernetes service discovery can be used by applications that need to locate backing services in a consistent way regardless of where the application is deployed. For example, a team might want to use a common URI in their configuration that always points at the local SMTP gateway to decouple code from the environment it runs in.
Service discovery helps application teams by:
- Reducing the amount of per-environment configuration.
- Decoupling client and server applications.
- Allowing applications to be portable to new environments.
You can use Kubernetes service discovery when:
- Applications use their container’s DNS configurations to resolve hosts.
- Applications are deployed with their backing services in the same Kubernetes cluster or namespace.
- Backing services have an associated Kubernetes service. Kf creates these for each app.
- Kubernetes NetworkPolicies allow traffic between an application and the Kubernetes service it needs to communicate with. Kf creates these policies in each Kf space.
You should not use Kubernetes service discovery if:
- Applications need to failover between multiple clusters.
- You override the DNS resolver used by your application.
- Applications need specific types of load balancing.
How Kubernetes service discovery works
Kubernetes service discovery works by modifying the DNS configuration of containers running on a Kubernetes node. When an application looks up an unqualified domain name, the local DNS resolver will first attempt to resolve the name in the local cluster.
Names without a dot are resolved against the names of Kubernetes
services in the container’s namespace. Each Kf app
creates a Kubernetes service with the same name. If two
Kf apps ping
and pong
were deployed in the same
Kf space, then ping
could use the URL http://pong
to
send traffic to the other service.
Names with a single dot are resolved against the Kubernetes services in
the Kubernetes namespace with the same name as the label after the dot. For
example, if there was a PostgreSQL database with a customers
service in the
database
namespace, an application in another namespace could
resolve it using postgres://customers.database
.
How to use service discovery with Kf
Kubernetes DNS based service discovery can be used in any Kf app. Each Kf app creates a Kubernetes service of the same name, and each Kf space creates a Kubernetes namespace with the same name.
- Refer to a Kf app in the current space using
protocol://app-name
. - Refer to a Kf app in a different space using
protocol://app-name.space-name
. - Refer to a Kf app in the current space listening on
a custom port using
protocol://app-name:port
- Refer to a Kf app in a different space listening on
a custom port using
protocol://app-name.space-name:port
.
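The naming patterns above can be captured in a tiny helper. The following Python sketch (`app_url` is illustrative) builds in-cluster URLs for the four cases:

```python
def app_url(app_name, space_name=None, port=None, protocol="http"):
    """Build an in-cluster URL for a Kf App using Kubernetes DNS.

    Within the same Space only the app name is needed; across Spaces
    the Space name is appended as the Kubernetes namespace.
    """
    host = app_name if space_name is None else f"{app_name}.{space_name}"
    if port is not None:
        host = f"{host}:{port}"
    return f"{protocol}://{host}"
```

For example, an app `ping` in the same Space as `pong` would use `app_url("pong")`, yielding `http://pong`.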
Best practices
Applications that are going to be the target of DNS based service discovery should have frequent health checks to ensure they are rapidly added and removed from the pool of hosts that accept connections.
Applications using DNS based service discovery should not cache the IP addresses of the resolved services because they are not guaranteed to be stable.
If environment specific services exist outside of the cluster, they can be resolved using Kubernetes DNS if you set up ExternalName Kubernetes services. These Kubernetes services provide the same resolution capabilities, but return a CNAME record to redirect requests to an external authority.
Comparison to Eureka
Eureka is an open source client-side load-balancer created by Netflix. It is commonly used as part of the Spring Cloud Services service broker. Eureka was built to be a regional load balancer and service discovery mechanism for services running in an environment that caused frequent disruptions to workloads, leading to unstable IP addresses.
Eureka is designed as a client/server model. Clients register themselves with the server indicating which names they want to be associated with and periodically send the server heartbeats. The server allows all connected clients to resolve names.
In general, you should use Kubernetes DNS rather than Eureka in Kubernetes for the following reasons:
- DNS works with all programming languages and applications without the need for libraries.
- Your application's existing health check will be reused, reducing combinations of errors.
- Kubernetes manages the DNS server, allowing you to rely on fewer dependencies.
- Kubernetes DNS respects the same policy and RBAC constraints as the rest of Kubernetes.
There are a few times when deploying a Eureka server would be advantageous:
- You need service discovery across Kubernetes and VM based applications.
- You need client based load-balancing.
- You need independent health checks.
What’s next
- Read more about service discovery in GKE.
- Learn about Service Directory, a managed offering similar to Eureka.
6 - Debugging workloads
Kf uses Kubernetes under the hood to schedule application workloads onto a cluster. Ultimately, every workload running on a Kubernetes cluster is scheduled as a Pod, but the Pods may have different properties based on the higher level abstractions that schedule them.
Once you understand how to debug Pods, you can debug any running workload on the cluster including the components that make up Kf and Kubernetes.
General debugging
Kubernetes divides most resources into Namespaces. Each Kf Space creates a Namespace
with the same name. If you’re debugging a Kf resource, you’ll want to remember to set
the -n
or --namespace
CLI flag on each kubectl
command you run.
Finding a Pod
You can list all the Pods in a Namespace using the command:
kubectl get pods -n NAMESPACE
This will list all Pods in the Namespace. You’ll often want to filter these using a label selector like the following:
kubectl get pods -n NAMESPACE -l "app.kubernetes.io/component=app-server,app.kubernetes.io/name=echo"
You can get the definition of a Pod using a command like the following:
kubectl get pods -n NAMESPACE POD_NAME -o yaml
Understand Kubernetes objects
Most Kubernetes objects follow the same general structure. Kubernetes also has extensive help documentation for each type of object (including Kf’s).
If you need to quickly look up what a field is for, use the kubectl explain
command
with the object type and the path to the field you want documentation on.
$ kubectl explain pod.metadata.labels
KIND: Pod
VERSION: v1
FIELD: labels <map[string]string>
DESCRIPTION:
Map of string keys and values that can be used to organize and categorize
(scope and select) objects. May match selectors of replication controllers
and services. More info: http://kubernetes.io/docs/user-guide/labels
When you run kubectl get RESOURCE_TYPE RESOURCE_NAME -oyaml
you’ll see the stored version
of the object. An annotated example of a Pod running an instance of an App is below:
apiVersion: v1
kind: Pod
metadata:
# Annotations hold security information and configuration for the
# object.
annotations:
kubectl.kubernetes.io/default-container: user-container
kubectl.kubernetes.io/default-logs-container: user-container
sidecar.istio.io/inject: "true"
traffic.sidecar.istio.io/includeOutboundIPRanges: '*'
# Labels hold information useful for filtering resources.
labels:
# Kf sets many of these labels to help find Pods.
app.kubernetes.io/component: app-server
app.kubernetes.io/managed-by: kf
app.kubernetes.io/name: echo
kf.dev/networkpolicy: app
name: echo-6b759c978b-zwrt8
namespace: development
# Contains the object(s) that "own" this resource, this usually
# means the ones that were responsible for creating it.
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: echo-6b759c978b
uid: 7f0ee42d-f1e8-4c4f-b0c8-f0c5d7f27a0a
# The ID of the resource, if deleted and re-created the ID will
# change.
uid: 0d49d5d7-afa4-4904-9f69-f98ce1923745
spec:
# Contains the desired state of the object, written by a human or
# one of the metadata.ownerReferences.
# Omitted for brevity.
status:
# Contains the state of the object as written by the controller.
# Omitted for brevity.
When you see an object with a metadata.ownerReferences
set, you can run
kubectl get
again to find that object’s information all the way back up
until you find the root object responsible for creating it. In this case
the chain would look like Pod -> ReplicaSet -> Deployment -> App.
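That walk up the ownership chain can be sketched in a few lines of Python, assuming you have already fetched the objects (for example, with repeated `kubectl get ... -o yaml` calls) into a lookup keyed by uid:

```python
def owner_chain(obj: dict, objects_by_uid: dict) -> list:
    """Follow metadata.ownerReferences from an object to its root.

    objects_by_uid is a hypothetical lookup of already-fetched
    Kubernetes objects, keyed by metadata.uid.
    """
    chain = [obj["kind"]]
    refs = obj.get("metadata", {}).get("ownerReferences", [])
    while refs:
        owner = objects_by_uid[refs[0]["uid"]]
        chain.append(owner["kind"])
        refs = owner.get("metadata", {}).get("ownerReferences", [])
    return chain
```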
Getting logs
You can get logs for a specific Pod using kubectl logs
:
kubectl logs -c user-container -n NAMESPACE POD_NAME
You can find a list of containers on the Pod in the .spec.containers.name
field:
kubectl get pods -n NAMESPACE -o jsonpath='{.spec.containers[*].name}' POD_NAME
Port forwarding
You can port-forward
to a specific port using kubectl port-forward
.
# Bind remote port 8080 to local port 8080.
kubectl port-forward -n NAMESPACE POD_NAME 8080
# Bind remote port 8080 to local port 9000.
kubectl port-forward -n NAMESPACE POD_NAME 9000:8080
Open a shell
You can open a shell to a container
using kubectl exec
.
kubectl exec --stdin --tty -n NAMESPACE -c user-container POD_NAME -- /bin/bash
Kf Specifics
The following sections show specific information about Kf types that schedule Pods.
App Pods
Label selector
All Apps:
app.kubernetes.io/component=app-server,app.kubernetes.io/managed-by=kf
Specific App (replace APP_NAME
):
app.kubernetes.io/component=app-server,app.kubernetes.io/managed-by=kf,app.kubernetes.io/name=APP_NAME
Expected containers
user-container
Container running the application code.istio-proxy
Connects the application code to the virtual network.
Ownership hierarchy
Each resource will have a metadata.ownerReferences
to the resource below it:
- Kubernetes Pod Runs a single instance of the application code.
- Kubernetes ReplicaSet Schedules all the instances for one version of an App.
- Kubernetes Deployment Manages rollouts and scaling of multiple versions of the App.
- Kf App Orchestrates routing, rollouts, service bindings, builds, etc. for an App.
Build Pods
Label selector
All Builds:
app.kubernetes.io/component=build,app.kubernetes.io/managed-by=kf
Builds for a specific App (replace APP_NAME
):
app.kubernetes.io/component=build,app.kubernetes.io/managed-by=kf,app.kubernetes.io/name=APP_NAME
Specific Build for an App (replace APP_NAME
and TASKRUN_NAME
):
app.kubernetes.io/component=build,app.kubernetes.io/managed-by=kf,app.kubernetes.io/name=APP_NAME,tekton.dev/taskRun=TASKRUN_NAME
Expected containers
step-.*
Container(s) that execute different steps of the build process. Specific steps depend on the type of build.istio-proxy
(Optional) Connects the application code to the virtual network.
Ownership hierarchy
Each resource will have a metadata.ownerReferences
to the resource below it:
- Kubernetes Pod Runs the steps of the build.
- Tekton TaskRun Schedules the Pod, ensures it runs to completion once, and cleans it up once done.
- Kf Build Creates a TaskRun with the proper steps to build an App from source.
- Kf App Creates Builds with app-specific information like environment variables.
Task Pods
Label selector
All Tasks:
app.kubernetes.io/component=task,app.kubernetes.io/managed-by=kf
Tasks for a specific App (replace APP_NAME
):
app.kubernetes.io/component=task,app.kubernetes.io/managed-by=kf,app.kubernetes.io/name=APP_NAME
Specific Task for an App (replace APP_NAME
and TASKRUN_NAME
):
app.kubernetes.io/component=task,app.kubernetes.io/managed-by=kf,app.kubernetes.io/name=APP_NAME,tekton.dev/taskRun=TASKRUN_NAME
Expected containers
step-user-container
Container running the application code.istio-proxy
Connects the application code to the virtual network.
Ownership hierarchy
- Kubernetes Pod Runs an instance of the application code.
- Tekton TaskRun Schedules the Pod, ensures it runs to completion once, and cleans it up once done.
- Kf Task Creates a TaskRun with the proper step to run a one-off Task.
- Kf App Creates Builds with app-specific information like environment variables.
Next steps
- Explore the Kubernetes guide to debugging applications.
- Understand how to write label selectors.
7 - Configure routes and domains
This page describes how routes and domains work in Kf, and how developers and administrators configure routes and domains for an App deployed on Kf cluster.
You must create domains and routes to give external access to your application.
Internal routing
Kf apps can communicate internally with other apps in the cluster directly using a service mesh without leaving the cluster network. By default, all traffic on the service mesh is encrypted using mutual TLS.
All apps deployed in the Kf cluster come with an internal endpoint configured by default. You can use the address app-name.space-name.svc.cluster.local
for internal communication between apps. No extra steps are required to use this internal address. Mutual TLS is enabled by default for internal routes. Note that this internal address is only accessible from the Pods running the apps, not from outside the cluster.
App load balancing
Traffic is routed by Istio to healthy instances of an App using a round-robin policy. Currently, this policy can’t be changed.
Route capabilities
Routes tell the cluster’s ingress gateway where to deliver traffic and what to do if no Apps are available on the given address. By default, if no App is available on a Route and the Route receives a request it returns an HTTP 503 status code.
Routes are comprised of three parts: host, domain, and path. For example, in
the URI payroll.mydatacenter.example.com/login
:
- The host is
payroll
- The domain is
mydatacenter.example.com
- The path is
/login
Routes must contain a domain, but the host and path are optional. Multiple Routes can share the same host and domain if they specify different paths. Multiple Apps can share the same Route and traffic will be split between them. This is useful if you need to support legacy blue/green deployments. If multiple Apps are bound to different paths, the priority is longest to shortest path.
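The three parts can be split apart mechanically. A shell sketch using the example URI above (it assumes the host is the first dot-separated label):

```shell
# Decompose a route URI into host, domain, and path.
uri="payroll.mydatacenter.example.com/login"
host="${uri%%.*}"        # payroll — everything before the first dot
rest="${uri#*.}"
domain="${rest%%/*}"     # mydatacenter.example.com — up to the first slash
path="/${uri#*/}"        # /login — everything from the first slash
```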
Warning: Kf doesn’t currently support TCP port-based routing. You must use a
Kubernetes LoadBalancer
if you want to expose a TCP port to the Internet. Ports are available on the cluster internal App address <app-name>.<space>
.
Manage routes
The following sections describe how to use the kf
CLI to manage Routes.
List routes
Developers can list Routes for the current Space using the kf routes
command.
$ kf routes
Getting Routes in Space: my-space
Found 2 Routes in Space my-space
HOST DOMAIN PATH APPS
echo example.com / echo
* example.com /login uaa
Create a route
Developers can create Routes using the kf create-route
command.
# Create a Route in the targeted Space to match traffic for myapp.example.com/*
$ kf create-route example.com --hostname myapp
# Create a Route in the Space myspace to match traffic for myapp.example.com/*
$ kf create-route -n myspace example.com --hostname myapp
# Create a Route in the targeted Space to match traffic for myapp.example.com/mypath*
$ kf create-route example.com --hostname myapp --path /mypath
# You can also supply the Space name as the first parameter if you have
# scripts that rely on the old cf style API.
$ kf create-route myspace example.com --hostname myapp # myapp.example.com
After a Route is created, if no Apps are bound to it then an HTTP 503 status code is returned for any matching requests.
Map a route to your app
Developers can make their App accessible on a Route using the kf map-route
command.
$ kf map-route MYAPP mycluster.example.com --hostname myapp --path mypath
Unmap a route
Developers can remove their App from being accessible on a Route using the kf unmap-route
command.
$ kf unmap-route MYAPP mycluster.example.com --hostname myapp --path mypath
Delete a route
Developers can delete a Route using the kf delete-route
command.
$ kf delete-route mycluster.example.com --hostname myapp --path mypath
Deleting a Route will stop traffic from being routed to all Apps listening on the Route.
Manage routes declaratively in your app manifest
Routes can be managed declaratively in your app manifest file. They will be created if they do not yet exist.
---
applications:
- name: my-app
# ...
routes:
- route: example.com
- route: www.example.com/path
You can read more about the supported route properties in the manifest documentation.
Routing CRDs
There are four types that are relevant to routing:
- VirtualService
- Route
- Service
- App
Each App has a Service, which is an abstract name given to all running instances of your App. The name of the Service is the same as the App. A Route represents a single external URL. Routes constantly watch for changes to Apps; when an App requests to be added to a Route, the Route updates its list of Apps and then the VirtualService. A VirtualService represents a single domain and merges a list of all Routes in a Space that belong to that domain.
Istio reads the configuration on VirtualServices to determine how to route traffic.
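The merge step can be pictured as grouping Routes by domain. A sketch with hypothetical Route URLs (the actual VirtualService construction is handled by the Kf controller):

```shell
# One VirtualService per domain: collect the unique domains across all Routes.
# Assumes each Route has a single-label host prefix, e.g. payroll.example.com.
routes="payroll.example.com/login payroll.example.com/ shop.example.com/"
domains=$(for r in $routes; do
  hd="${r%%/*}"      # host + domain
  echo "${hd#*.}"    # strip the host label, leaving the domain
done | sort -u)
echo "$domains"      # example.com — all three Routes merge into one VirtualService
```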
8 - Tasks
8.1 - Tasks Overview
About Tasks
Unlike Apps which run indefinitely and restart if the process terminates, Tasks run a process until it completes and then stop. Tasks are run in their own containers and are based on the configuration and binary of an existing App.
Tasks are not accessible from routes, and should be used for one-off or scheduled recurring work necessary for the health of an application.
Use cases for Tasks
- Migrating a database
- Running a batch job (scheduled/unscheduled)
- Sending an email
- Transforming data (ETL)
- Processing data (upload/backup/download)
How Tasks work
Tasks are executed asynchronously and run independently from the parent App or other Tasks running on the same App. An App created for running Tasks does not have routes created or assigned, and the Run lifecycle is skipped. The Source code upload and Build lifecycles still proceed and result in a container image used for running Tasks after pushing the App (see App lifecycles at Deploying an Application).
The lifecycle of a Task is as follows:
- You push an App for running Tasks with the kf push APP_NAME --task command.
- You run a Task on the App with the kf run-task APP_NAME command. The Task inherits the environment variables, service bindings, resource allocation, start-up command, and security groups bound to the App.
- Kf creates a Tekton PipelineRun with values from the App and parameters from the run-task command.
- The Tekton PipelineRun creates a Kubernetes Pod which launches a container based on the configuration of the App and Task.
- When Task execution stops (the Task exits or is terminated manually), the underlying Pod is stopped or terminated. Pods of stopped Tasks are preserved, so Task logs remain accessible via the kf logs APP_NAME --task command.
- If you terminate a Task before it stops, the Tekton PipelineRun is cancelled (see Cancelling a PipelineRun) and the underlying Pod, together with its logs, is deleted. The logs of terminated Tasks are delivered to cluster-level logging streams if configured (e.g. Stackdriver, Fluentd).
- If the number of Tasks run on an App is greater than 500, the oldest Tasks are automatically deleted.
Tasks retention policy
Tasks are created as custom resources in the Kubernetes cluster, so it is important not to exhaust the space of the underlying etcd database. By default, Kf keeps only the latest 500 Tasks per App. Once the number of Tasks reaches 500, the oldest Tasks (together with the underlying Pods and logs) are automatically deleted.
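The retention rule amounts to keeping only the newest N entries. A sketch with N=3 for illustration (Kf uses N=500):

```shell
# Tasks are ordered oldest → newest; retention keeps only the newest N.
N=3
tasks="t1 t2 t3 t4 t5"
kept=$(echo "$tasks" | tr ' ' '\n' | tail -n "$N" | tr '\n' ' ')
echo $kept   # t3 t4 t5
```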
Task logging and execution history
Any data or messages the Task outputs to STDOUT or STDERR are available by using the kf logs APP_NAME --task command. Cluster-level logging mechanisms (such as Stackdriver or Fluentd) deliver the Task logs to the configured logging destination.
Scheduling Tasks
As described above, Tasks can be run asynchronously by using the kf run-task APP_NAME
command.
Alternatively, you can schedule Tasks for execution by first creating a Job using
the kf create-job
command, and then scheduling it with the
kf schedule-job JOB_NAME
command. You can schedule that Job to automatically
run Tasks on a specified unix-cron schedule.
How Tasks are scheduled
Create and schedule a Job to run the Task. A Job describes the Task to execute and automatically manages Task creation.
Tasks are created on the schedule even if previous executions of the Task are still running. If any executions are missed for any reason, only the most recently missed execution is executed when the system recovers.
Deleting a Job deletes all associated Tasks. If any associated Tasks were still in progress, they are forcefully deleted without running to completion.
Tasks created by a scheduled Job are still subject to the Task retention policy.
Differences from PCF Scheduler
PCF Scheduler allows multiple schedules for a single Job while Kf only supports a single schedule per Job. You can replicate the PCF Scheduler behavior by creating multiple Jobs, one for each schedule.
8.2 - Run Tasks
You can execute short-lived workflows by running them as Tasks in Kf. Tasks are run under Apps, meaning that each Task must have an associated App. Each Task execution uses the build artifacts from the parent App. Because Tasks are short-lived, the App is not deployed as a long-running application, and no routes should be created for the App or the Task.
Push an App for running Tasks
Clone the test-app repo:
git clone https://github.com/cloudfoundry-samples/test-app test-app
cd test-app
Push the App.
Push the App with the kf push APP_NAME --task command. The --task flag indicates that the App is meant to be used for running Tasks, and thus no routes are created on the App, and it is not deployed as a long-running application:
kf push test-app --task
Confirm that no App instances or routes were created by listing the App:
kf apps
Notice that the App is not started and has no URLs:
Listing Apps in Space: test-space
Name      Instances  Memory  Disk  CPU   URLs
test-app  stopped    1Gi     1Gi   100m  <nil>
Run a Task on the App
When you run a Task on the App, you can optionally specify a start command by
using the --command
flag. If no start command is specified, it uses the start
command specified on the App. If the App doesn’t have a start command specified,
it looks up the CMD configuration of the container image. A start command must
exist in order to run the Task successfully.
kf run-task test-app --command "printenv"
You see something similar to this, confirming that the Task was submitted:
Task test-app-gd8dv is submitted successfully for execution.
The Task name is automatically generated, prefixed with the App name, and suffixed with an arbitrary string. The Task name is a unique identifier for Tasks within the same cluster.
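The generated name follows the pattern APP_NAME-SUFFIX. Purely as an illustration, a name of that shape could be produced like this (the suffix length and character set here are assumptions, not Kf's actual generation scheme):

```shell
# Illustrative only: compose a Task-style name from an App name plus a
# short random suffix. Kf generates the real suffix internally.
app=test-app
suffix=$(tr -dc 'a-z0-9' </dev/urandom | head -c 5)
task_name="${app}-${suffix}"
echo "$task_name"   # e.g. test-app-gd8dv
```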
Specify Task resource limits
Resource limits (such as CPU cores/Memory limit/Disk quota) can be specified in
the App (during kf push
) or during the kf run-task
command. The limits
specified in the kf run-task
command take precedence over the limits specified
on the App.
To specify resource limits in an App, you can use the --cpu-cores
,
--memory-limit
, and --disk-quota
flags in the kf push
command:
kf push test-app --command "printenv" --cpu-cores=0.5 --memory-limit=2G --disk-quota=5G --task
To override the limits set on the App, you can use the --cpu-cores
,
--memory-limit
, and --disk-quota
flags in the kf run-task
command:
kf run-task test-app --command "printenv" --cpu-cores=0.5 --memory-limit=2G --disk-quota=5G
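The precedence rule is simply "the per-Task value if given, otherwise the App's value". A sketch (function name and values are illustrative):

```shell
# effective_limit TASK_VALUE APP_VALUE → TASK_VALUE if non-empty, else APP_VALUE.
effective_limit() { echo "${1:-$2}"; }
effective_limit ""   "2G"   # 2G  (no run-task flag → App limit applies)
effective_limit "5G" "2G"   # 5G  (run-task flag takes precedence)
```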
Specify a custom display name for a Task
You can optionally use the --name
flag to specify a custom display name for a
Task for easier identification/grouping:
$ kf run-task test-app --command "printenv" --name foo
Task test-app-6swct is submitted successfully for execution.
$ kf tasks test-app
Listing Tasks in Space: test space
Name ID DisplayName Age Duration Succeeded Reason
test-app-6swct 3 foo 1m 21s True <nil>
Manage Tasks
View all Tasks of an App with the kf tasks APP_NAME
command:
$ kf tasks test-app
Listing Tasks in Space: test space
Name ID DisplayName Age Duration Succeeded Reason
test-app-gd8dv 1 test-app-gd8dv 1m 21s True <nil>
Cancel a Task
Cancel an active Task by using the kf terminate-task
command:
Cancel a Task by Task name:
$ kf terminate-task test-app-6w6mz
Task "test-app-6w6mz" is successfully submitted for termination
Or cancel a Task by APP_NAME + Task ID:
$ kf terminate-task test-app 2
Task "test-app-6w6mz" is successfully submitted for termination
Cancelled Tasks have PipelineRunCancelled
status.
$ kf tasks test-app
Listing Tasks in Space: test space
Name ID DisplayName Age Duration Succeeded Reason
test-app-gd8dv 1 test-app-gd8dv 1m 21s True <nil>
test-app-6w6mz 2 test-app-6w6mz 38s 11s False PipelineRunCancelled
View Task logs
View logs of a Task by using the kf logs APP_NAME --task
command:
$ kf logs test-app --task
8.3 - Schedule Tasks
You can execute short-lived workflows by running them as Tasks. Running Tasks describes how to run Tasks under Apps.
You can also schedule Tasks to run at recurring intervals specified using the unix-cron format. With scheduled Tasks, you first push an App running the Task as you do with an unscheduled Task, and then create a Job to schedule the Task.
You can define a schedule so that your Task runs multiple times a day or on specific days and months.
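unix-cron expressions have five space-separated fields: minute, hour, day of month, month, and day of week. A minimal format check as a sketch (it validates field count only, not value ranges):

```shell
# is_cron SCHEDULE → success if SCHEDULE has exactly five fields.
is_cron() { [ "$(echo "$1" | wc -w)" -eq 5 ]; }
is_cron "0 2 * * *"   && echo "valid: daily at 02:00"
is_cron "*/5 * * * *" && echo "valid: every 5 minutes"
is_cron "0 2 * *"     || echo "invalid: only 4 fields"
```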
Push an App for running scheduled Tasks
- Clone the test-app repo:
git clone https://github.com/cloudfoundry-samples/test-app test-app
cd test-app
Push the App.
Push the App with the kf push APP_NAME --task command. The --task flag indicates that the App is meant to be used for running Tasks, and thus no routes will be created on the App and it will not be deployed as a long-running application.
kf push test-app --task
Confirm that no App instances or routes were created by listing the App:
kf apps
Notice that the App is not started and has no URLs:
Listing Apps in Space: test-space
Name      Instances  Memory  Disk  CPU   URLs
test-app  stopped    1Gi     1Gi   100m  <nil>
Create a Job
To run a Task on a schedule, you must first create a Job that describes the Task:
kf create-job test-app test-job "printenv"
The Job starts suspended or unscheduled, and does not create Tasks until it is manually executed by kf run-job or scheduled by kf schedule-job.
Run a Job manually
Jobs can be run ad hoc similar to running Tasks by kf run-task
. This option
can be useful for testing the Job before scheduling or running as needed in addition
to the schedule.
kf run-job test-job
This command runs the Task defined by the Job a single time immediately.
Schedule a Job
To schedule the Job for execution, you must provide a unix-cron schedule in the
kf schedule-job
command:
kf schedule-job test-job "* * * * *"
This command triggers the Job to automatically create Tasks on the specified schedule. In this example a Task runs every minute.
You can update a Job’s schedule by running kf schedule-job with a new schedule.
Jobs in Kf can only have a single cron schedule. This differs
from the PCF Scheduler, which allows multiple schedules for a single Job.
If you require multiple cron schedules, then you can achieve that with multiple Jobs.
Manage Jobs and schedules
View all Jobs, both scheduled and unscheduled, in the current Space by using
the kf jobs
command:
$ kf jobs
Listing Jobs in Space: test space
Name Schedule Suspend LastSchedule Age Ready Reason
test-job * * * * * <nil> 16s 2m True <nil>
unscheduled-job 0 0 30 2 * true 16s 2m True <nil>
Additionally, you can view only Jobs that are actively scheduled with
the kf job-schedules
command.
$ kf job-schedules
Listing job schedules in Space: test space
Name Schedule Suspend LastSchedule Age Ready Reason
test-job * * * * * <nil> 16s 2m True <nil>
Notice how the unscheduled-job
is not listed in the kf job-schedules
output.
Cancel a Job’s schedule
You can stop a scheduled Job with the kf delete-job-schedule
command:
kf delete-job-schedule test-job
This command suspends the Job and stops it from creating Tasks on the previous schedule.
The Job is not deleted and can be scheduled again by kf schedule-job
to continue execution.
Delete a Job
The entire Job can be deleted with the kf delete-job
command:
kf delete-job test-job
This command deletes the Job and all Tasks that were created by the Job, both scheduled and manual executions. If any Tasks are still running, this command forcefully deletes them.
If you want to ensure that running Tasks are not interrupted, first delete the Job’s schedule with kf delete-job-schedule, wait for all Tasks to complete, and then delete the Job with kf delete-job.