Persistent Volume Claim for Azure Disk with specific user permissions

I'm trying to create a dynamic Azure Disk volume to use in a pod that has specific permissions requirements.



The container runs under the user id 472, so I need to find a way to mount the volume with rw permissions for (at least) that user.



With the following StorageClass defined



apiVersion: storage.k8s.io/v1
kind: StorageClass
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Delete
volumeBindingMode: Immediate
metadata:
  name: foo-storage
mountOptions:
  - rw
parameters:
  cachingmode: None
  kind: Managed
  storageaccounttype: Standard_LRS


and this PVC



apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-storage
  namespace: foo
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: foo-storage
  resources:
    requests:
      storage: 1Gi


I can run the following in a pod:



containers:
  - image: ubuntu
    name: foo
    imagePullPolicy: IfNotPresent
    command:
      - ls
      - -l
      - /var/lib/foo
    volumeMounts:
      - name: foo-persistent-storage
        mountPath: /var/lib/foo
volumes:
  - name: foo-persistent-storage
    persistentVolumeClaim:
      claimName: foo-storage


The pod will mount and start correctly, but kubectl logs <the-pod> will show



total 24
drwxr-xr-x 3 root root 4096 Nov 23 11:42 .
drwxr-xr-x 1 root root 4096 Nov 13 12:32 ..
drwx------ 2 root root 16384 Nov 23 11:42 lost+found


i.e. the mount path is owned by root and is read-only for all other users.



I've tried adding a mountOptions section to the StorageClass, but whatever I try (uid=472, user=472, etc.) I get mount errors on startup, e.g.



mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/m1019941199 --scope -- mount -t ext4 -o group=472,rw,user=472,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/m1019941199
Output: Running scope as unit run-r7165038756bf43e49db934e8968cca8b.scope.
mount: wrong fs type, bad option, bad superblock on /dev/sdc,
missing codepage or helper program, or other error

In some cases useful info is found in syslog - try
dmesg | tail or so.
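For reference, the mountOptions attempt that produces the error above looked roughly like this (option values reconstructed from the mount arguments in the output):

mountOptions:
  - rw
  - user=472
  - group=472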


I've also tried to get some info from man mount, but I haven't found anything that worked.



How can I configure this storage class, persistent volume claim and volume mount so that the non-root user running the container process gets access to write (and create subdirectories) in the mounted path?

kubernetes persistent-volumes azure-aks persistent-volume-claims

asked Nov 23 '18 at 11:50 by Tomas Aschan

1 Answer

You need to define the securityContext of your pod spec as follows, so that it matches the user and group ID the container runs as:



securityContext:
  runAsUser: 472
  fsGroup: 472


The stable Grafana Helm chart handles this the same way; see securityContext under Configuration here: https://github.com/helm/charts/tree/master/stable/grafana#configuration
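For context, here is a rough sketch of how that securityContext fits into the pod spec from the question (untested; the pod name is illustrative, while the container, volume and claim names are taken from the question's manifests):

apiVersion: v1
kind: Pod
metadata:
  name: foo            # illustrative pod name
  namespace: foo
spec:
  securityContext:
    runAsUser: 472     # the container process runs as uid 472
    fsGroup: 472       # the volume is group-owned by gid 472 and made group-writable
  containers:
    - name: foo
      image: ubuntu
      imagePullPolicy: IfNotPresent
      command: ["ls", "-l", "/var/lib/foo"]
      volumeMounts:
        - name: foo-persistent-storage
          mountPath: /var/lib/foo
  volumes:
    - name: foo-persistent-storage
      persistentVolumeClaim:
        claimName: foo-storage

With fsGroup set, the same ls -l should then show the mount directory (the "." entry) group-owned by 472 and group-writable, something like drwxrwsr-x ... root 472, instead of drwxr-xr-x root root.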






answered Nov 25 '18 at 2:30 by Utku Özdemir