Missing or empty CONTROLLERS_KEY in vfapi-custom-env.list causes vfapi Pods to throw 'One of the given keys is nil' error
Overview of the Issue
This issue occurs in the following circumstances:
- An organization installs vFunction Server v4.2 or later in a Kubernetes Cluster via Helm Charts
- During the installation, the vfapi Pods go into a CrashLoopBackOff
- The logs from any of the vfapi Pods show the following error:
kubectl logs vfunction-vfapi-idp-PODID
{"level":"fatal","s":"IDP","time":"2025-08-19T17:36:37Z", "caller":"/src/vfapi/services/IDP/main.go:41", "message":"One of the given keys is nil"}
- The CONTROLLERS_KEY entry in the vfapi ConfigMap is empty (or missing entirely)
kubectl edit configmap vfapi-custom-env.list
apiVersion: v1
data:
  AUTH_SERVICE: https://vfunction-vfapi-users:8004
  BROWSER_KEY: RANDOM_STRING
  CLIENT_ID: RANDOM_STRING
  CLIENT_SECRET: RANDOM_STRING
  CONTROLLERS_KEY: ""
Steps to Resolve the Issue on an existing 4.2 environment
Take the following steps to resolve this issue:
- Create a random string for the CONTROLLERS_KEY
head -3 /dev/urandom | tr -cd '[:alnum:]' | cut -c -32
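Alternatively, if OpenSSL is available on the VM, the following produces an equivalent 32-character random string:
openssl rand -hex 16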
- Modify the vfapi-custom-env.list ConfigMap to set the CONTROLLERS_KEY to the generated value
kubectl edit configmap vfapi-custom-env.list
apiVersion: v1
data:
  AUTH_SERVICE: https://vfunction-vfapi-users:8004
  BROWSER_KEY: RANDOM_STRING
  CLIENT_ID: RANDOM_STRING
  CLIENT_SECRET: RANDOM_STRING
  CONTROLLERS_KEY: RANDOM_STRING
- After the key is added, the vfapi Pods should start cleanly on their next restart. Rather than waiting, you can delete the problematic Pods so that they are recreated immediately, as shown below
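For example, assuming the Pod name shown earlier:
### Replace PODID with the actual Pod suffix
kubectl delete pod vfunction-vfapi-idp-PODID
Assuming the Pod is managed by a Deployment (typical for Helm-installed services), it is recreated automatically and picks up the updated ConfigMap.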
Steps to Resolve the Issue after an upgrade to 4.3 or later
If you manually added the CONTROLLERS_KEY during a 4.2 installation as described above, you will need to add it again after the initial upgrade. Take the following steps:
- Retrieve the existing Controllers Key
kubectl get configmap vfapi-custom-env.list -o yaml | grep -i 'controllers_key'
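If you prefer the raw value without the surrounding YAML, kubectl's jsonpath output can extract it directly (same ConfigMap name as above):
kubectl get configmap vfapi-custom-env.list -o jsonpath='{.data.CONTROLLERS_KEY}'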
- From the Portal, download vFunction Server Helm Kubernetes version 4.3.1790 or later
- Move the downloaded TGZ to the Linux VM that has kubectl access to the Kubernetes Cluster where the vFunction Server installation was performed, placing it next to the existing vfunction-server-for-kubernetes/ directory
- Unpack the TGZ:
### Replace VERSION with the actual version
tar -xvzf vfunction-server-installation-helm-kubernetes.vVERSION.tgz
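After unpacking, you can confirm that the upgrade script and config directory referenced in the next steps are present:
ls vfunction-server-for-kubernetes/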
- Move into the vfunction-server-for-kubernetes directory and run the upgrade script
### Replace NAMESPACE with the actual namespace
cd vfunction-server-for-kubernetes
bash upgrade.sh -n NAMESPACE
- Edit the installation.yaml to add the controllers_key under the generated section, using the CONTROLLERS_KEY value retrieved in the first step
vi vfunction-server-for-kubernetes/config/installation.yaml
### Before
generated:
  client_id: RANDOM_STRING
  client_secret: RANDOM_STRING
  app_client_id: RANDOM_STRING
  app_client_secret: RANDOM_STRING
  browser_key: RANDOM_STRING
### After
generated:
  client_id: RANDOM_STRING
  client_secret: RANDOM_STRING
  app_client_id: RANDOM_STRING
  app_client_secret: RANDOM_STRING
  browser_key: RANDOM_STRING
  controllers_key: RANDOM_STRING
- Edit the custom-values.yaml to add the controllers_key under the generated section, using the same value
vi vfunction-server-for-kubernetes/config/helm/custom-values.yaml
### Before
generated:
  client_id: RANDOM_STRING
  client_secret: RANDOM_STRING
  app_client_id: RANDOM_STRING
  app_client_secret: RANDOM_STRING
  browser_key: RANDOM_STRING
### After
generated:
  client_id: RANDOM_STRING
  client_secret: RANDOM_STRING
  app_client_id: RANDOM_STRING
  app_client_secret: RANDOM_STRING
  browser_key: RANDOM_STRING
  controllers_key: RANDOM_STRING
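Before re-running the upgrade, you can verify that both files now contain the key (paths as used above):
grep -r 'controllers_key' vfunction-server-for-kubernetes/config/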
- Run the upgrade again
### Replace NAMESPACE with the actual namespace
cd vfunction-server-for-kubernetes
bash upgrade.sh -n NAMESPACE
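Once the upgrade completes, confirm that the vfapi Pods are running and that the ConfigMap carries the key, using the same commands shown earlier:
kubectl get pods | grep vfapi
kubectl get configmap vfapi-custom-env.list -o yaml | grep -i 'controllers_key'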