Update Kubernetes Deployment with Jenkins












I'm using the Kubernetes Continuous Deploy Plugin to deploy and upgrade a Deployment on my Kubernetes cluster.
I'm using a Pipeline job, and this is the Jenkinsfile:



pipeline {
    environment {
        JOB_NAME = "${JOB_NAME}".replace("-deploy", "")
        REGISTRY = "my-docker-registry"
    }
    agent any
    stages {
        stage('Fetching kubernetes config files') {
            steps {
                git 'git_url_of_k8s_configurations'
            }
        }
        stage('Deploy on kubernetes') {
            steps {
                kubernetesDeploy(
                    kubeconfigId: 'k8s-default-namespace-config-id',
                    configs: 'deployment.yml',
                    enableConfigSubstitution: true
                )
            }
        }
    }
}


The deployment.yml is:



apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ${JOB_NAME}
spec:
  replicas: 1
  template:
    metadata:
      labels:
        build_number: ${BUILD_NUMBER}
        app: ${JOB_NAME}
        role: rolling-update
    spec:
      containers:
      - name: ${JOB_NAME}-container
        image: ${REGISTRY}/${JOB_NAME}:latest
        ports:
        - containerPort: 8080
        envFrom:
        - configMapRef:
            name: postgres
      imagePullSecrets:
      - name: regcred
  strategy:
    type: RollingUpdate


To let Kubernetes detect that the Deployment has changed (and therefore roll out new pods), I used the Jenkins build number as a label on the pod template:



...
metadata:
  labels:
    build_number: ${BUILD_NUMBER}
...


The problem or my misunderstanding:



If the Deployment does not exist yet on Kubernetes, everything works fine: one Deployment and one ReplicaSet are created.



If the Deployment already exists and an upgrade is applied, Kubernetes creates a new ReplicaSet:



Before first deploy: [screenshot]

First deploy: [screenshot]

Second deploy: [screenshot]

Third deploy: [screenshot]



As you can see, each new Jenkins deploy correctly updates the Deployment but creates a new ReplicaSet without removing the old one.



What could be the issue?










jenkins deployment kubernetes replicaset

asked Nov 23 '18 at 8:59 by JackTurky
1 Answer
This is expected behavior. Every time you update a Deployment, a new ReplicaSet is created, but the old ReplicaSets are kept so that you can roll back to a previous state in case of any problem with your updated Deployment.

Ref: Updating a Deployment

However, you can limit how many old ReplicaSets are kept through the spec.revisionHistoryLimit field. The default value is 10. Ref: RevisionHistoryLimit
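For illustration, a minimal sketch of where that field would sit in the manifest from the question (the value 3 is an arbitrary example, not taken from the post):

spec:
  revisionHistoryLimit: 3   # keep only the 3 most recent old ReplicaSets for rollback (example value)
  replicas: 1
  template:
    ...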






answered Nov 23 '18 at 10:02 by Emruz Hossain
• OK, thank you. But I can see that the Deployment doesn't kill the older pods. Why?
  – JackTurky, Nov 23 '18 at 11:01











• Well, that's unexpected. The Deployment should kill the old pod once the new pod is ready. There must be something else going on. Try configuring the maxUnavailable and maxSurge fields for RollingUpdate: kubernetes.io/docs/concepts/workloads/controllers/deployment/…
  – Emruz Hossain, Nov 23 '18 at 11:38
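A minimal sketch of what that suggestion could look like in the Deployment spec (the values shown are illustrative, not taken from the thread):

strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1        # allow at most one extra pod during the rollout (example value)
    maxUnavailable: 0  # never go below the desired replica count (example value)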











• Yes, the problem was with the build_number. I've now switched to a dynamic image tag and everything works well. Thank you very much!
  – JackTurky, Nov 23 '18 at 12:12
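For reference, a sketch of what a dynamic image tag could look like in the container section of deployment.yml, assuming an earlier pipeline stage builds and pushes the image tagged with the build number; changing the tag changes the pod template, so the rolling update replaces the old pods:

containers:
- name: ${JOB_NAME}-container
  # Tagging with the build number instead of :latest makes the pod template differ on
  # every deploy (assumes the image was pushed with this tag in a previous stage).
  image: ${REGISTRY}/${JOB_NAME}:${BUILD_NUMBER}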











