Error: Invalid Value api/all= on kube-apiserver
Author: Sam Griffith
This week I ran into a rather specific error while installing a Kubernetes cluster with v1.17.
ERR:
Feb 19 13:16:43 k8s-848-master-02 kube-apiserver[9623]: Error: invalid value api/all=
Explanation
I was going through the task of upgrading the kubernetes-the-alta3-way repo when I stumbled across this issue. I had updated the kubectl version number from 1.15.3 to 1.17.3, and updated a few other version numbers for other dependencies (cni, etcd, etc.).
Then I ran the Ansible script to deploy the High Availability Kubernetes Cluster, and it all looked good … until the script ran its first kubectl command!
I double-checked the version numbers. They were all correct. I ran a diff on the updates, and it looked clean to me.
Well, looks like it’s troubleshooting time… You can read more about how I found the error in the troubleshooting section.
Fix:
If you run into this error, the simple fix is to edit the /etc/systemd/system/kube-apiserver.service file.
Bad line of configuration:
--runtime-config=api/all
Working line of configuration:
--runtime-config=api/all=true
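For context, here is a sketch of where that flag lives in the unit file, abbreviated to flags that already appear elsewhere in this post (only the --runtime-config value is the actual fix):

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --advertise-address=10.8.223.124 \
  --allow-privileged=true \
  --apiserver-count=3 \
  --runtime-config=api/all=true \
  ...

Then reload systemd and restart the service so the corrected flag takes effect:

sudo systemctl daemon-reload
sudo systemctl restart kube-apiserver
sudo systemctl status kube-apiserver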
Troubleshooting
I checked the syslogs of my local system. I tailed them (sudo tail -f /var/log/syslog) while running a basic kubectl get pods command, and found this as my output:
Feb 19 13:05:47 k8s-848-bchd nginx[20254]: 2020/02/19 13:05:47 [error] 20257#20257: *17 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1:6443, upstream: "10.7.88.217:6443", bytes from/to client:0/0, bytes from/to upstream:0/0
Feb 19 13:05:47 k8s-848-bchd nginx[20254]: 2020/02/19 13:05:47 [warn] 20257#20257: *17 upstream server temporarily disabled while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1:6443, upstream: "10.7.88.217:6443", bytes from/to client:0/0, bytes from/to upstream:0/0
Feb 19 13:05:47 k8s-848-bchd nginx[20254]: 2020/02/19 13:05:47 [error] 20257#20257: *17 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1:6443, upstream: "10.8.223.124:6443", bytes from/to client:0/0, bytes from/to upstream:0/0
Feb 19 13:05:47 k8s-848-bchd nginx[20254]: 2020/02/19 13:05:47 [warn] 20257#20257: *17 upstream server temporarily disabled while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1:6443, upstream: "10.8.223.124:6443", bytes from/to client:0/0, bytes from/to upstream:0/0
Feb 19 13:05:47 k8s-848-bchd nginx[20254]: 2020/02/19 13:05:47 [error] 20257#20257: *17 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1:6443, upstream: "10.13.115.104:6443", bytes from/to client:0/0, bytes from/to upstream:0/0
Seeing the Connection refused error immediately clued me in that something was wrong in my cluster - specifically, my Master Nodes. I ssh’d into one of them to take a look at the kube* services and their logs.
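Roughly, that looked like the following (the hostname comes from this cluster's logs; which kube* units run on your masters may differ, so treat this as a sketch):

ssh student@k8s-848-master-02
sudo systemctl status kube-apiserver kube-controller-manager kube-scheduler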
Performing a sudo systemctl status kube-apiserver gave me these results:
student@k8s-848-master-02:~$ sudo systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Wed 2020-02-19 13:11:12 UTC; 3s ago
Docs: https://github.com/kubernetes/kubernetes
Process: 6188 ExecStart=/usr/local/bin/kube-apiserver --advertise-address=10.8.223.124 --allow-privileged=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-m
Main PID: 6188 (code=exited, status=1/FAILURE)
The key thing I noticed here was that kube-apiserver.service was not running; it was exiting with status 1 and sitting in an activating (auto-restart) loop, with systemd continually trying to bring it back up.
Next I looked at the logs of the master and grepped for kube-apiserver (sudo cat /var/log/syslog | grep kube-apiserver).
The output was enormous. To save your monitor some ink, let me just show you the first line that came up:
Feb 19 13:16:43 k8s-848-master-02 kube-apiserver[9623]: Error: invalid value api/all=
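As a side note, since kube-apiserver runs as a systemd unit here, the same error can also be pulled straight from the unit's own journal, which is a bit less noisy than grepping all of syslog:

sudo journalctl -u kube-apiserver.service --no-pager | grep "invalid value"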
Knowing that the error was with the kube-apiserver.service, I then was able to take a look inside of it with cat /etc/systemd/system/kube-apiserver.service. In the output I could see that I had a flag named --runtime-config that was set equal to api/all. This is exactly how it was set for installing Kubernetes v1.15.
However, after a small bit of research in the kube-apiserver documentation, I discovered that this flag expected a mapStringString object. That meant that --runtime-config=api/all was expecting api/all to be set equal to either “true” or “false”.
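To make that format concrete: --runtime-config takes comma-separated key=value pairs, where each key is an API group/version (or api/all) and each value is “true” or “false”. A minimal sketch (the second key below is purely illustrative of the pair syntax):

--runtime-config=api/all=true
--runtime-config=api/all=true,settings.k8s.io/v1alpha1=true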
Version 1.15 allowed us to get away with an assumed “true” value here, but 1.17 does not.
You know what happens when you assume, right?