Applying API Server App ID to k8s cluster spec
Team,
I already have a cluster running and I need to update the OIDC value. Is there a way I can do this without having to recreate the cluster? For example, below is my cluster spec, and I need to update the oidcClientID: spn: value. How can I do this, given that I have 5 masters running?
kubeAPIServer:
  storageBackend: etcd3
  oidcClientID: spn:45645hhh-f641-498d-b11a-1321231231
  oidcUsernameClaim: upn
  oidcUsernamePrefix: "oidc:"
  oidcGroupsClaim: groups
  oidcGroupsPrefix: "oidc:"
Tags: kubernetes, kubectl, azure-kubernetes
asked Nov 7 at 18:28 by AhmFM; edited Nov 8 at 0:54 by Rico
1 Answer (accepted)
You update the kube-apiserver on your masters one by one (update, then restart). If your cluster is set up correctly, when you get to the active kube-apiserver it should automatically fail over to another kube-apiserver master in standby.
You can add the oidc options to the /etc/kubernetes/manifests/kube-apiserver.yaml static pod manifest file on each master:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC
    - --advertise-address=172.x.x.x
    - --allow-privileged=true
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --oidc-client-id=...
    - --oidc-username-claim=...
    - --oidc-username-prefix=...
    - --oidc-groups-claim=...
    - --oidc-groups-prefix=...
    ...
Then you can restart your kube-apiserver container. If you are using docker:
$ sudo docker restart <container-id-for-kube-apiserver>
Or, if you'd like to restart all the components on the master:
$ sudo systemctl restart docker
Watch the logs on the kube-apiserver container:
$ sudo docker logs -f <container-id-for-kube-apiserver>
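If you need to look up the container ID first, or want to confirm the restarted process is actually running with the new flag, something like this should work (a minimal sketch, assuming the usual kube-apiserver container and process naming):
$ sudo docker ps | grep kube-apiserver
$ ps -ef | grep kube-apiserver | grep oidc-client-id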
Make sure you never have fewer running masters than your etcd quorum, which is 3 for a 5-master cluster, to be safe. If for some reason your etcd cluster falls out of quorum, you will have to recover by recreating the etcd cluster and restoring from a backup.
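To check that etcd still has quorum before moving on to the next master, you can query endpoint health with etcdctl; a sketch, assuming etcd v3 and client certificates whose actual paths you substitute for the placeholders:
$ ETCDCTL_API=3 etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=<path-to-etcd-ca.crt> \
    --cert=<path-to-etcd-client.crt> \
    --key=<path-to-etcd-client.key> \
    endpoint health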
answered Nov 8 at 1:20 by Rico
I see, this looks doable. My bad, though: I have 3 masters, not 5. Anything to worry about before I make the change? And why does the number of masters matter? With 1 master, traffic goes down while restarting; with 2 or more there is redundancy. – AhmFM Nov 9 at 18:14
3 masters should be fine; just make sure 2 of them are up at all times. – Rico Nov 9 at 18:34
Awesome, it worked. After the change I just ran systemctl restart kubelet. – AhmFM Nov 9 at 23:24
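For reference, the kubelet-restart variant from the last comment would look roughly like this on each master, one at a time (a sketch, assuming a systemd-managed kubelet and the kubeadm-style manifest path; wait for the apiserver pod to come back before moving on to the next master):
$ sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml    # add or update the --oidc-* flags
$ sudo systemctl restart kubelet
$ kubectl -n kube-system get pods | grep kube-apiserver    # wait until the pod is Running again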