Examples
1 - Add X-VCAP-REQUEST-ID HTTP Headers
Kf limits the headers it returns for security and network cost purposes. If you have applications that need the X-VCAP-REQUEST-ID HTTP header and can’t be upgraded, you can use Istio to add it to requests and responses to mimic Cloud Foundry’s gorouter.
To mimic this header, we can create an EnvoyFilter that does the following:
- Watches for HTTP traffic coming into the gateway (make sure you target the same ingress gateway Kf is using).
- Saves Envoy’s built-in request ID.
- Copies that ID to the request.
- Mutates the response with the same request ID.
You may need to make changes to this filter if:
- You rely on the HTTP/1.1 Upgrade header.
- You need these headers for mesh (East-West) traffic.
- You only want to target a subset of applications.
Example
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: vcap-http-header
  # Set the namespace to match the gateway.
  namespace: asm-gateways
spec:
  # Set the workload selector to match the Istio ingress gateway
  # your domain targets and/or your workload.
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: "envoy.filters.network.http_connection_manager"
            subFilter:
              name: "envoy.filters.http.router"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua"
          inlineCode: |
            function envoy_on_request(request)
              local metadata = request:streamInfo():dynamicMetadata()
              -- Get Envoy's internal request ID.
              local request_id = request:headers():get("x-request-id")
              if request_id ~= nil then
                -- Save the request ID for later and set it on the request
                -- for the application to consume.
                metadata:set("envoy.filters.http.lua", "req.x-request-id", request_id)
                request:headers():add("x-vcap-request-id", request_id)
              end
            end
            function envoy_on_response(response)
              local metadata = response:streamInfo():dynamicMetadata():get("envoy.filters.http.lua")
              local request_id = metadata["req.x-request-id"]
              -- Set the value on the outbound response as well.
              if request_id ~= nil then
                response:headers():add("x-vcap-request-id", request_id)
              end
            end
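To spot-check the filter once it is deployed, you can inspect the response headers of any App routed through the gateway. The hostname below (myapp.example.com) is a placeholder for one of your own routes:
# Both x-request-id and x-vcap-request-id should appear if the filter is active.
curl -s -D - -o /dev/null http://myapp.example.com | grep -i request-id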
2 - Deploy Docker apps with NFS UID/GID mapping
This document outlines how to do UID/GID mapping for NFS volumes within a Docker container. This may be necessary if your application uses NFS because Kubernetes assumes the UID and GID of the NFS volume map directly into the UID/GID namespace of your container.
To get around this limitation, Kf adds the mapfs binary to all containers it builds. The mapfs binary creates a FUSE filesystem that maps the UID and GID of a host container into the UID and GID of an NFS volume.
Prerequisites
In order for these operations to work:
- Your container’s OS must be Linux.
- Your container must have the coreutils timeout, sh, and wait installed.
- Your container must have fusermount installed.
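A quick way to sanity-check these prerequisites, assuming your image is available locally under a placeholder name such as my-app-image, is to look for each tool in a throwaway container:
# Prints the path of each required tool; anything reported MISSING needs to be added to the image.
docker run --rm --entrypoint sh my-app-image -c \
  'for c in timeout sh wait fusermount; do command -v "$c" || echo "MISSING: $c"; done'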
Update your Dockerfile
First, you’ll need to update your Dockerfile to add the mapfs binary to your application:
# Get the mapfs binary from a version of Kf.
FROM gcr.io/kf-releases/fusesidecar:v2.11.14 as builder
COPY --from=builder --chown=root:vcap /bin/mapfs /bin/mapfs
# Allow users other than root to use fuse.
RUN echo "user_allow_other" >> /etc/fuse.conf
RUN chmod 644 /etc/fuse.conf
RUN chmod 750 /bin/mapfs
# Allow setuid so the mapfs binary is run as root.
RUN chmod u+s /bin/mapfs
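After updating the Dockerfile, rebuild and push the image to a registry your cluster can pull from. A minimal sketch, reusing the image name from the manifest example below (substitute your own registry path):
# Build the image with the mapfs additions and publish it.
docker build -t gcr.io/my-application-with-mapfs .
docker push gcr.io/my-application-with-mapfs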
Set manifest attributes
Next, you’ll have to update manifest attributes for your application.
You MUST set args and entrypoint because they’ll be used by mapfs to launch the application.
- Set args to be your container’s CMD.
- Set entrypoint to be your container’s ENTRYPOINT.
applications:
- name: my-docker-app
  args: ["-jar", "my-app"]
  entrypoint: "java"
  dockerfile:
    path: gcr.io/my-application-with-mapfs
Deploy your application
Once your Docker image and manifest are updated, you can deploy your application and check that your NFS volume is mounted correctly in the container.
If something has gone wrong, you can debug it by getting the Deployment in Kubernetes with the same name as your application:
kubectl get deployment my-docker-app -n my-space -o yaml
Validate that the command and args for the container named user-container look correct.
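For example, this jsonpath query (using the same App and Space names as above) prints just those fields:
# Print the command and args used to launch the user-container.
kubectl get deployment my-docker-app -n my-space \
  -o jsonpath='{.spec.template.spec.containers[?(@.name=="user-container")].command}{"\n"}{.spec.template.spec.containers[?(@.name=="user-container")].args}{"\n"}'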
3 - Deploy Spring Cloud Config
This document shows how to deploy Spring Cloud Config in a Kf cluster.
Spring Cloud Config provides a way to decouple application code from its runtime configuration. The Spring Cloud Config configuration server can read configuration files from Git repositories, the local filesystem, HashiCorp Vault servers, or Cloud Foundry CredHub. Once the configuration server has read the configuration, it can format and serve that configuration as YAML, Java Properties, or JSON over HTTP.
Before you begin
You will need a cluster with Kf installed and access to the Kf CLI.
Additionally, you will need the following software:
- git: Git is required to clone a repository.
Download the Spring Cloud Config configuration server
To download the configuration server source:
Open a terminal.
Clone the source for the configuration server:
git clone --depth 1 "https://github.com/google/kf"
Configure and deploy a configuration server
To update the settings for the instance:
Change directory to spring-cloud-config-server:
cd kf/spring-cloud-config-server
Open manifest.yaml.
Change the GIT_URI environment variable to the URI of your configuration Git repository. Optionally, change the name of the application in the manifest.
Optionally, configure additional properties or alternative property sources by editing src/main/resources/application.properties.
Deploy the configuration server without an external route. If you changed the name of the application in the manifest, update it here:
kf push --no-route spring-cloud-config
Bind applications to the configuration server
You can create a user provided service to bind the deployed configuration server to other Kf applications in the same cluster or namespace.
How you configure them will depend on the library you use:
- Applications using Pivotal’s Spring Cloud Services client library
Existing PCF applications that use Pivotal’s Spring Cloud Services client library can be bound using the following method:
Create a user provided service named config-server. This step only has to be done once per configuration server:
kf cups config-server -p '{"uri":"http://spring-cloud-config"}' -t configuration
For each application that needs to get credentials, run:
kf bind-service application-name config-server
kf restart application-name
This will create an entry in the VCAP_SERVICES environment variable for the configuration server.
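The exact layout depends on your Kf version, but the added entry should roughly resemble the sketch below (the fields shown are illustrative, not authoritative; see the VCAP_SERVICES documentation linked at the end of this section):
{
  "user-provided": [
    {
      "name": "config-server",
      "tags": ["configuration"],
      "credentials": { "uri": "http://spring-cloud-config" }
    }
  ]
}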
- Other applications
Applications that can connect directly to a Spring Cloud Config configuration server should be configured to access it using its cluster-internal URI:
http://spring-cloud-config
- For Spring applications that use the Spring Cloud Config client library, set the spring.cloud.config.uri property in the appropriate location for your application. This is usually an application.properties or application.yaml file; a minimal example follows this list.
- For other frameworks, see your library’s reference information.
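For the Spring case, a minimal application.properties entry pointing at the server deployed above would be:
# Cluster-internal URI of the configuration server deployed above.
spring.cloud.config.uri=http://spring-cloud-config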
Delete the configuration server
To remove a configuration server:
Remove all bindings to the configuration server by running the following commands for each bound application:
kf unbind-service application-name config-server
kf restart application-name
Remove the service entry for the configuration server:
kf delete-service config-server
Delete the configuration server application:
kf delete spring-cloud-config
What’s next
- Read more about the types of configuration sources Spring Cloud Config supports.
- Learn about the structure of the VCAP_SERVICES environment variable to understand how it can be used for service discovery.
4 - Deploy Spring Music
These instructions will walk you through deploying the Cloud Foundry Spring Music reference App using the Kf Cloud Service Broker.
- Building Java Apps from source: The Spring Music source will be built on the cluster, not locally.
- Service broker integration: You will create a database using the Kf Cloud Service Broker and bind the Spring Music App to it.
- Spring Cloud Connectors: Spring Cloud Connectors are used by the Spring Music App to detect things like bound CF services. They work seamlessly with Kf.
- Configuring the Java version: You will specify the version of Java you want the buildpack to use.
Prerequisites
Install and configure the Kf Cloud Service Broker.
Deploy Spring Music
Clone source
Clone the Spring Music repo.
git clone https://github.com/cloudfoundry-samples/spring-music.git spring-music
cd spring-music
Edit manifest.yml, and replace path: build/libs/spring-music-1.0.jar with stack: org.cloudfoundry.stacks.cflinuxfs3. This instructs Kf to build from source using cloud native buildpacks so you don’t have to compile locally.
---
applications:
- name: spring-music
  memory: 1G
  random-route: true
  stack: org.cloudfoundry.stacks.cflinuxfs3
  env:
    JBP_CONFIG_SPRING_AUTO_RECONFIGURATION: '{enabled: false}'
    # JBP_CONFIG_OPEN_JDK_JRE: '{ jre: { version: 11.+ } }'
Push Spring Music with no bindings
Create and target a Space.
kf create-space test
kf target -s test
Deploy Spring Music.
kf push spring-music
Use the proxy feature to access the deployed App.
Start the proxy:
kf proxy spring-music
Open http://localhost:8080 in your browser:
The deployed App includes a UI element showing which (if any) Spring profile is being used. No profile is being used here, indicating an in-memory database is in use.
Create and bind a database
Create a PostgreSQL database from the marketplace.
kf create-service csb-google-postgres small spring-music-postgres-db -c '{"region":"COMPUTE_REGION","authorized_network":"VPC_NAME"}'
Bind the Service to the App.
kf bind-service spring-music spring-music-postgres-db
Restart the App to make the service binding available via the VCAP_SERVICES environment variable.
kf restart spring-music
(Optional) View the binding details.
kf bindings
Verify the App is using the new binding.
Start the proxy:
kf proxy spring-music
Open http://localhost:8080 in your browser:
You now see that the Postgres profile is in use, along with the name of the Service the App is bound to.
Clean up
Unbind and delete the PostgreSQL service:
kf unbind-service spring-music spring-music-postgres-db
kf delete-service spring-music-postgres-db
Delete the App:
kf delete spring-music
5 - Deploy With Offline Java Buildpack
This document shows how to use an offline Java Buildpack to deploy your applications.
Cloud Foundry’s Java buildpack uses a number of large dependencies. At the time of writing, ~800 MB of sources are pulled into the builder during the buildpack’s execution. Much of this data is brought in from the internet, which is convenient because it keeps the buildpack itself small, but it introduces a great deal of data transfer.
Java builds can be optimized to reduce outside network ingress and improve performance by hosting your own Java buildpack compiled in an offline mode. In its offline mode, Cloud Foundry’s Java buildpack downloads packages that may be used into the cache when creating the builder. This avoids pulling dependencies from the internet at runtime and makes the builder image self contained.
Before You Begin
You will need a cluster with Kf installed and access to the Kf CLI.
Additionally, you will need access to the following software:
- git: Git is required to clone a repository.
- ruby: Ruby is required to create the Java buildpack.
- bundle: Ruby package manager to install Java buildpack dependencies.
Compile the Java Buildpack in Offline mode
Follow Cloud Foundry’s instructions to compile the Java Buildpack in Offline mode: https://github.com/cloudfoundry/java-buildpack/blob/main/README.md#offline-package.
These instructions will generate a .zip file containing the buildpack and its dependencies.
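At the time of writing, the linked instructions boil down to the commands below; treat this as a sketch and defer to the README if the steps have changed:
# From a clone of https://github.com/cloudfoundry/java-buildpack:
bundle install
bundle exec rake clean package OFFLINE=true PINNED=true
# The resulting archive is written under the build/ directory, for example
# build/java-buildpack-offline-<hash>.zip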
Deploy the Java Buildpack
Self Hosted on Kf (Recommended)
Once you have built the Java Buildpack, you can host it on your Kf cluster using a staticfile buildpack.
- Create a new directory for your static file Kf app, e.g. java-buildpack-staticapp.
- Create a new manifest in that directory named manifest.yml with the contents:
---
applications:
- name: java-buildpack-static
- Create an empty file named Staticfile (case sensitive) in the directory. This indicates to the staticfile buildpack that this is a static app, and the buildpack will create a small image containing the contents of the directory plus an nginx installation to host your files.
- Copy the buildpack you created in the previous step into the directory as java-buildpack-offline-<hash>.zip. Your directory structure at this point should resemble:
/
├── java-buildpack-offline-fe26136c.zip
├── manifest.yml
└── Staticfile
- Run kf push to deploy your app; this will trigger a build.
- When the push finishes, run kf apps and take note of the URL your app is running on. You will reference this in the next step.
See internal routing to construct an internal link to your locally hosted buildpack. The URL should resemble <your route>/java-buildpack-offline-<hash>.zip. Take note of this URL as you will reference it later in your application manifests or in your cluster’s buildpack configuration.
Served from Google Cloud Storage
See the Google Cloud documentation on creating public objects in a Cloud Storage bucket: https://cloud.google.com/storage/docs/access-control/making-data-public.
To find the URL where your buildpack is hosted and that you will reference in your manifests, see: https://cloud.google.com/storage/docs/access-public-data.
Using an Offline Buildpack
Whole Cluster
You can apply your buildpack to the whole cluster by updating the cluster’s configuration. See configuring stacks to learn how to register your buildpack URL for an entire cluster.
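As a rough sketch of what that configuration can look like (the field names below are assumptions based on the operator-managed kfsystem resource; defer to the configuring stacks documentation for the exact schema):
# kubectl edit kfsystem kfsystem
spec:
  kf:
    config:
      spaceBuildpacksV2:
      - name: java_buildpack_offline
        url: http://<your host>/java-buildpack-offline-<hash>.zip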
In App Manifest
To use your offline buildpack in a specific app, update your application manifest to specify the buildpack explicitly, for example:
---
applications:
- name: my-app
  buildpacks:
  - http://<your host>/java-buildpack-offline-<hash>.zip