Applying API Server App ID to k8s cluster spec

Team,

I already have a cluster running and I need to update the OIDC value. Is there a way I can do this without having to recreate the cluster?

For example, below is my cluster spec, and I need to update the oidcClientID: spn: value.

How can I do this, given that I have 5 masters running?

kubeAPIServer:
  storageBackend: etcd3
  oidcClientID: spn:45645hhh-f641-498d-b11a-1321231231
  oidcUsernameClaim: upn
  oidcUsernamePrefix: "oidc:"
  oidcGroupsClaim: groups
  oidcGroupsPrefix: "oidc:"









kubernetes kubectl azure-kubernetes

asked Nov 7 at 18:28 by AhmFM, edited Nov 8 at 0:54 by Rico


1 Answer
You update the kube-apiserver on your masters one by one (update, then restart). If your cluster is set up correctly, when you get to the active kube-apiserver it should automatically fail over to another kube-apiserver master in standby.
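
While you roll through the masters, it helps to confirm that the API server on each box is healthy before moving to the next. A minimal check, assuming the default secure port 6443 and placeholder master addresses you would replace with your own (depending on your anonymous-auth/RBAC settings, /healthz may require credentials):

# Hypothetical master addresses; substitute your own. /healthz returns "ok" when the apiserver is up.
$ for ip in 172.x.x.1 172.x.x.2 172.x.x.3; do curl -sk "https://${ip}:6443/healthz"; echo "  <- ${ip}"; done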



You can add the OIDC options to the /etc/kubernetes/manifests/kube-apiserver.yaml static pod manifest file:



apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC
    - --advertise-address=172.x.x.x
    - --allow-privileged=true
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --oidc-client-id=...
    - --oidc-username-claim=...
    - --oidc-username-prefix=...
    - --oidc-groups-claim=...
    - --oidc-groups-prefix=...
  ...


Then you can restart the kube-apiserver container. If you are using Docker:



          $ sudo docker restart <container-id-for-kube-apiserver>
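
If you don't have the container ID handy, you can look it up first (this assumes the Docker runtime; the name filter below matches the usual naming kubelet applies to static pod containers, so verify it against your own docker ps output):

# Find the kube-apiserver container ID on this master.
$ sudo docker ps --filter name=kube-apiserver --format '{{.ID}} {{.Names}}'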


          Or if you'd like to restart all the components on the master:



          $ sudo systemctl restart docker


Watch the logs of the kube-apiserver container:



          $ sudo docker logs -f <container-id-for-kube-apiserver>
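
To double-check that the new --oidc-* flags actually took effect, you can inspect the running process arguments on the master, or the apiserver's mirror pod through the API (the component=kube-apiserver label is the usual convention for static apiserver pods, so treat it as an assumption):

# Flags of the running process on this master:
$ ps aux | grep [k]ube-apiserver | tr ' ' '\n' | grep oidc

# Or via the apiserver's mirror pod:
$ kubectl -n kube-system get pods -l component=kube-apiserver -o yaml | grep oidc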


Make sure you never have fewer running etcd members (masters) than your quorum, which is 3 for your 5-master cluster, to be safe. If for some reason your etcd cluster falls out of quorum, you will have to recover by recreating the etcd cluster and restoring from a backup.
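
If you want to verify quorum as you go, etcdctl can report endpoint health. This is only a sketch; the endpoint and certificate paths below are typical kubeadm defaults and may be different in your cluster:

# Assumed kubeadm-style cert locations; adjust to your setup.
$ ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    endpoint health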






answered Nov 8 at 1:20 by Rico (accepted)

• I see, this looks doable. However, my bad: I have only 3 masters, not 5. Is there anything to worry about before I make the change? And why would it matter how many masters I have? Yes, with 1, traffic will be down while I'm restarting, but with 2 or more there is redundancy.
  – AhmFM
  Nov 9 at 18:14












• 3 masters should be fine; just make sure 2 of them are up at all times.
  – Rico
  Nov 9 at 18:34










• Awesome, it worked. However, after the change I just ran systemctl restart kubelet instead.
  – AhmFM
  Nov 9 at 23:24










