Advice on setting pod memory request size
I have a question based on my experience trying to implement memory requests/limits correctly in an OpenShift OKD cluster. I started by setting no request, watched what cluster metrics reported for memory use, and then set something close to that as a request. I ended up with high-memory-pressure nodes, thrashing, and OOM kills. I have found I need to set the requests closer to the VIRT size shown in 'top' (including the program binary size) to keep performance up. Does this make sense? I'm confused by the asymmetry between the request (and apparent need) and the use reported in metrics.
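For context, requests and limits are set per container in the pod spec. A minimal sketch (the container name, image, and values below are placeholders, not the actual workload):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app            # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:latest  # hypothetical image
    resources:
      requests:
        memory: "256Mi"        # what the scheduler reserves on the node
      limits:
        memory: "512Mi"        # hard cap; exceeding it triggers an OOM kill
```

The scheduler places pods by summing requests, while the kernel cgroup enforces limits, so a request far below real resident usage lets the node oversubscribe.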
  • Are all deployed applications on the cluster defining memory resources required, or only some? Are you not defining a limit at all for anything?
    – Graham Dumpleton
    Nov 7 at 23:35










  • Most/all are defining request; many also have limits. These are primarily C++ processes. I understand Java apps have an added set of concerns but if I can get a solid hold on the C++ processes that would help me a LOT.
    – Steve Huston
    Nov 10 at 23:30










  • If things don't have limits, and only have a request for memory, then obviously their memory usage growth is unbounded and you could overwhelm the node. If you know the upper bound, at least define a limit.
    – Graham Dumpleton
    Nov 11 at 5:08










  • I understand that. My original question was more towards the role of the program image size in the calculated need. For example, I have containers that report in metrics they use 50-80 MB. But if I set the memory request/limit in the 80-100 MB range, the node will end up oversubscribed and thrashing with horrible performance. I need to set the memory requests nearer 1GB, accounting for the executable image size. Is this the way it is supposed to work?
    – Steve Huston
    Nov 12 at 13:04
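On the advice above to define limits everywhere: a namespace-level LimitRange can supply defaults for containers that omit their own requests/limits, so nothing runs unbounded. A sketch with illustrative values:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-defaults        # hypothetical name
  namespace: my-project     # hypothetical namespace
spec:
  limits:
  - type: Container
    defaultRequest:
      memory: "256Mi"       # applied when a container omits its request
    default:
      memory: "512Mi"       # applied when a container omits its limit
```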
kubernetes openshift
asked Nov 7 at 22:02
Steve Huston
1 Answer
You always need to leave some memory headroom for overhead and memory spikes. If for some reason the container exceeds its memory limit, whether because of your application, your binary, or some garbage-collection system, it will get killed. For example, this is common in Java apps, where you specify a heap size but also need extra overhead for the garbage collector and other things such as:




  • Native JRE

  • Perm / metaspace

  • JIT bytecode

  • JNI

  • NIO

  • Threads


This blog explains some of them.
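To make the headroom idea concrete, here is one way to size against observed usage (the numbers are illustrative, not measured from the asker's workload):

```yaml
# Suppose metrics show the container's working set peaking around 400Mi.
resources:
  requests:
    memory: "512Mi"   # observed peak plus scheduling headroom
  limits:
    memory: "768Mi"   # hard cap with room for spikes before an OOM kill
```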
  • The JVM memory size is especially an issue when running in containers, as various Java images do not size the JVM based on the amount of memory allocated to the container via the memory limit. As such they will size the JVM memory based on memory for the whole node if you don't set a value explicitly. Thus it is very important to also specify a limit and not just a request. Also use a Java image that respects container memory.
    – Graham Dumpleton
    Nov 8 at 5:24












  • Thank you for the Java info - I will need to deal with that also. However, my primary confusion/questions right now concern C++ applications - straight-up binaries and shared libraries.
    – Steve Huston
    Nov 10 at 23:32
answered Nov 8 at 0:52
Rico