This guide is adapted from the Vault on Minikube and Vault Kubernetes Sidecar guides.
Most Crossplane providers support supplying credentials from at least the following sources:
- Kubernetes Secret
- Environment Variable
- Filesystem
A provider may optionally support additional credentials sources, but the common sources cover a wide variety of use cases. One specific use case that is popular among organizations that use Vault for secrets management is using a sidecar to inject credentials into the filesystem. This guide will demonstrate how to use the Vault Kubernetes Sidecar to provide credentials for provider-gcp.
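For comparison, the most common pattern references credentials stored in a Kubernetes Secret from a ProviderConfig. The snippet below is a minimal sketch for orientation only; the Secret name gcp-creds and key creds.json are placeholder assumptions, and this guide will use the Filesystem source instead, so there is no need to apply it.
apiVersion: gcp.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  projectID: my-project
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: gcp-creds
      key: creds.json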
Note: in this guide we will copy GCP credentials into Vault’s KV secrets engine. This is a simple, generic approach to managing secrets with Vault, but it is not as robust as using Vault’s dedicated cloud provider secrets engines for AWS, Azure, and GCP.
Note: this guide walks through setting up Vault running in the same cluster as Crossplane. You may also choose to use an existing Vault instance that runs outside the cluster but has Kubernetes authentication enabled.
Before getting started, you must ensure that you have installed Crossplane and Vault and that they are running in your cluster.
kubectl create namespace crossplane-system
helm repo add crossplane-stable https://charts.crossplane.io/stable
helm repo update
helm install crossplane --namespace crossplane-system crossplane-stable/crossplane
helm repo add hashicorp https://helm.releases.hashicorp.com
helm install vault hashicorp/vault
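Before continuing, it is worth confirming that both Crossplane and Vault Pods are running; exact Pod names will vary in your cluster.
kubectl get pods -n crossplane-system
# vault-0 will typically show Running but not Ready (0/1) until it is unsealed below
kubectl get pods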
In order for Vault to access encrypted data from physical storage, it must be unsealed.
kubectl exec vault-0 -- vault operator init -key-shares=1 -key-threshold=1 -format=json > cluster-keys.json
VAULT_UNSEAL_KEY=$(cat cluster-keys.json | jq -r ".unseal_keys_b64[]")
kubectl exec vault-0 -- vault operator unseal $VAULT_UNSEAL_KEY
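To confirm that unsealing succeeded, you can check the seal status; Sealed should now report false.
kubectl exec vault-0 -- vault status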
In order for Vault to be able to authenticate requests based on Kubernetes service accounts, the Kubernetes authentication backend must be enabled. This requires logging in to Vault and configuring it with a service account token, API server address, and certificate. Because we are running Vault in Kubernetes, these values are already available via the container filesystem and environment variables.
cat cluster-keys.json | jq -r ".root_token" # get root token
kubectl exec -it vault-0 -- /bin/sh
vault login # use root token from above
vault auth enable kubernetes
vault write auth/kubernetes/config \
    token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
    kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
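While still inside the Vault container, you can optionally read the configuration back to verify it was stored.
vault read auth/kubernetes/config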
The next steps will be executed in your local environment.
exit
In order to provision infrastructure on GCP, you will need to create a service account with appropriate permissions. In this guide we will only provision a CloudSQL instance, so the service account will be bound to the cloudsql.admin role. The following steps will set up a GCP service account, grant it the permissions Crossplane needs to manage CloudSQL instances, and emit the service account credentials in a JSON file.
# replace this with your own gcp project id and the name of the service account
# that will be created.
PROJECT_ID=my-project
NEW_SA_NAME=test-service-account-name
# create service account
SA="${NEW_SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
gcloud iam service-accounts create $NEW_SA_NAME --project $PROJECT_ID
# enable cloud API
SERVICE="sqladmin.googleapis.com"
gcloud services enable $SERVICE --project $PROJECT_ID
# grant access to cloud API
ROLE="roles/cloudsql.admin"
gcloud projects add-iam-policy-binding --role="$ROLE" $PROJECT_ID --member "serviceAccount:$SA"
# create service account keyfile
gcloud iam service-accounts keys create creds.json --project $PROJECT_ID --iam-account $SA
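As a quick sanity check, you can confirm the keyfile contains the expected service account before moving on (this assumes you have jq installed locally).
# should print the service account email, i.e. the value of $SA
jq -r .client_email creds.json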
You should now have valid service account credentials in creds.json.
After setting up Vault, you will need to store your credentials in the kv secrets engine.
Note: the steps below involve copying credentials into the container filesystem before storing them in Vault. You may also choose to use Vault’s HTTP API or UI by port-forwarding the container to your local environment (kubectl port-forward vault-0 8200:8200).
Copy your credentials into the container filesystem so that you can store them in Vault.
kubectl cp creds.json vault-0:/tmp/creds.json
Secrets engines must be enabled before they can be used. Enable the kv-v2 secrets engine at the secret path.
kubectl exec -it vault-0 -- /bin/sh
vault secrets enable -path=secret kv-v2
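You can optionally verify the mount before writing any secrets.
# secret/ should be listed with type kv
vault secrets list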
The path of your GCP credentials is how the secret will be referenced when injecting it into the provider-gcp controller Pod.
vault kv put secret/provider-creds/gcp-default @/tmp/creds.json
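If you want to verify the write before cleaning up, you can read the secret back.
vault kv get secret/provider-creds/gcp-default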
You no longer need your GCP credentials file in the container filesystem, so go ahead and clean it up.
rm /tmp/creds.json
In order for the Vault sidecar to inject credentials into the provider controller’s filesystem, you must associate the Pod with a policy. This policy will allow reading and listing all secrets on the provider-creds path in the kv-v2 secrets engine.
vault policy write provider-creds - <<EOF
path "secret/data/provider-creds/*" {
  capabilities = ["read", "list"]
}
EOF
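To double-check the policy, you can read it back.
vault policy read provider-creds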
The last step is to create a role that is bound to the policy you created and associate it with a group of Kubernetes service accounts. This role can be assumed by any (*) service account in the crossplane-system namespace.
vault write auth/kubernetes/role/crossplane-providers \
    bound_service_account_names="*" \
    bound_service_account_namespaces=crossplane-system \
    policies=provider-creds \
    ttl=24h
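You can optionally confirm the role before exiting the container.
vault read auth/kubernetes/role/crossplane-providers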
The next steps will be executed in your local environment.
exit
You are now ready to install provider-gcp. Crossplane provides a ControllerConfig type that allows you to customize the deployment of a provider’s controller Pod. A ControllerConfig can be created and referenced by any number of Provider objects that wish to use its configuration. In the example below, the Pod annotations indicate to the Vault mutating webhook that we want the secret stored at secret/provider-creds/gcp-default to be injected into the container filesystem by assuming the role crossplane-providers. There is also some template formatting added to make sure the secret data is presented in a form that provider-gcp expects.
echo "apiVersion: pkg.crossplane.io/v1alpha1
kind: ControllerConfig
metadata:
name: vault-config
spec:
metadata:
annotations:
vault.hashicorp.com/agent-inject: \"true\"
vault.hashicorp.com/role: "crossplane-providers"
vault.hashicorp.com/agent-inject-secret-creds.txt: "secret/provider-creds/gcp-default"
vault.hashicorp.com/agent-inject-template-creds.txt: |
{{- with secret \"secret/provider-creds/gcp-default\" -}}
{{ .Data.data | toJSON }}
{{- end -}}
---
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
name: provider-gcp
spec:
package: xpkg.upbound.io/crossplane-contrib/provider-gcp:v0.22.0
controllerConfigRef:
name: vault-config" | kubectl apply -f -
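After applying the manifests above, you can watch the provider installation and verify that the Vault sidecar was injected. The exact provider Pod name is generated, but once injection succeeds the Pod should report two ready containers (the controller plus the vault-agent sidecar).
# wait for INSTALLED and HEALTHY to become True
kubectl get providers
# the provider-gcp Pod should eventually show 2/2 ready containers
kubectl get pods -n crossplane-system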
Once provider-gcp is installed and running, you will want to create a ProviderConfig that specifies the credentials in the filesystem that should be used to provision managed resources that reference this ProviderConfig. Because the name of this ProviderConfig is default, it will be used by any managed resources that do not explicitly reference a ProviderConfig.
Note: make sure that the PROJECT_ID environment variable that was defined earlier is still set correctly.
echo "apiVersion: gcp.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
name: default
spec:
projectID: ${PROJECT_ID}
credentials:
source: Filesystem
fs:
path: /vault/secrets/creds.txt" | kubectl apply -f -
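You can confirm the ProviderConfig was created with the command below; if multiple providers are installed, you may need the fully qualified resource name (providerconfigs.gcp.crossplane.io) to disambiguate.
kubectl get providerconfig default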
The final step is to actually provision a CloudSQLInstance. Creating the object below will result in the creation of a CloudSQL Postgres database on GCP.
echo "apiVersion: database.gcp.crossplane.io/v1beta1
kind: CloudSQLInstance
metadata:
name: postgres-vault-demo
spec:
forProvider:
databaseVersion: POSTGRES_12
region: us-central1
settings:
tier: db-custom-1-3840
dataDiskType: PD_SSD
dataDiskSizeGb: 10
writeConnectionSecretToRef:
namespace: crossplane-system
name: cloudsqlpostgresql-conn" | kubectl apply -f -
You can monitor the progress of the database provisioning with the following command:
kubectl get cloudsqlinstance -w
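Once the instance reports READY: True, the connection details will be written to the Secret referenced in writeConnectionSecretToRef. You can inspect it with something like the following (values are base64-encoded):
kubectl get secret cloudsqlpostgresql-conn -n crossplane-system -o yaml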