Advice on setting pod memory request size
I have a question based on my experience trying to implement memory requests/limits correctly in an OpenShift OKD cluster. I started by setting no request, then watched what the cluster metrics reported for memory use, then set something close to that as a request. I ended up with high-memory-pressure nodes, thrashing, and OOM kills. I have found I need to set the requests to something closer to the VIRT size in `top` (including the program binary size) to keep performance up. Does this make sense? I'm confused by the asymmetry between the request (and apparent need) and the reported use in metrics.
kubernetes openshift
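For reference, this is a minimal sketch of how memory requests and limits are declared on a container; the image name and values here are placeholders for illustration, not tuned recommendations:

```yaml
# Hypothetical pod spec illustrating memory requests/limits.
apiVersion: v1
kind: Pod
metadata:
  name: example-app        # placeholder name
spec:
  containers:
  - name: app
    image: example/app:latest   # hypothetical image
    resources:
      requests:
        memory: "256Mi"   # what the scheduler reserves on the node
      limits:
        memory: "512Mi"   # cgroup ceiling; exceeding it risks an OOM kill
```

The scheduler places pods based on the sum of requests, while the limit is what the kernel cgroup actually enforces, which is why undersized requests can still lead to node memory pressure.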
Are all deployed applications on the cluster defining the memory resources they require, or only some? Are you not defining a limit at all for anything?
– Graham Dumpleton
Nov 7 at 23:35
Most/all are defining request; many also have limits. These are primarily C++ processes. I understand Java apps have an added set of concerns but if I can get a solid hold on the C++ processes that would help me a LOT.
– Steve Huston
Nov 10 at 23:30
If things don't have limits, and only have a request for memory, then obviously their memory usage growth is unbounded and you could overwhelm the node. If you know the upper bound, at least define a limit.
– Graham Dumpleton
Nov 11 at 5:08
I understand that. My original question was more about the role of the program image size in the calculated need. For example, I have containers that report in metrics that they use 50-80 MB. But if I set the memory request/limit in the 80-100 MB range, the node ends up oversubscribed and thrashing with horrible performance. I need to set the memory requests nearer 1 GB, accounting for the executable image size. Is this the way it is supposed to work?
– Steve Huston
Nov 12 at 13:04
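Following on from the discussion above about making sure every workload defines requests and limits: one way to guarantee that in a namespace is a LimitRange, which fills in defaults when a pod spec omits them. A sketch with placeholder names and values:

```yaml
# Hypothetical LimitRange giving containers default memory
# requests/limits when a pod spec omits them. Values are examples only.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-memory
  namespace: example-ns   # hypothetical namespace
spec:
  limits:
  - type: Container
    defaultRequest:
      memory: "128Mi"   # applied when no request is set
    default:
      memory: "512Mi"   # applied when no limit is set
    max:
      memory: "2Gi"     # hard cap on any single container's limit
```

This does not answer the VIRT-versus-metrics question, but it closes the "unbounded growth" hole for anything deployed without explicit limits.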
asked Nov 7 at 22:02
Steve Huston
1 Answer
You always need to leave some memory headroom for overhead and memory spikes. If the container exceeds its memory limit for any reason, whether because of your application, your binary, or some garbage-collection system, it will get killed. This is common in Java apps, for example: you specify a heap size, but you also need extra headroom for the garbage collector and other things such as:
- Native JRE
- Perm / metaspace
- JIT bytecode
- JNI
- NIO
- Threads
This blog explains some of them.
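For the Java case, one common approach (assuming JDK 10+ or 8u191+, where the JVM is container-aware) is to cap the heap as a fraction of the container's memory limit, so the non-heap areas listed above fit in the remainder. A sketch of a container spec with placeholder image and values:

```yaml
# Hypothetical container spec leaving ~25% of the limit for
# non-heap JVM memory (metaspace, threads, JIT, NIO, etc.).
- name: java-app
  image: example/java-app:latest   # hypothetical image
  env:
  - name: JAVA_TOOL_OPTIONS
    value: "-XX:MaxRAMPercentage=75.0"  # heap capped at 75% of container memory
  resources:
    requests:
      memory: "768Mi"
    limits:
      memory: "1Gi"
```

The right percentage depends on the workload; the point is that the heap plus all the non-heap areas must fit under the cgroup limit, or the container gets OOM-killed.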
The JVM memory size is especially an issue when running in containers, as various Java images do not size the JVM based on the amount of memory allocated to the container via the memory limit. Instead, they will size the JVM based on the memory of the whole node unless you set a value explicitly. It is thus very important to also specify a limit and not just a request, and to use a Java image that respects container memory.
– Graham Dumpleton
Nov 8 at 5:24
Thank you for the Java info - I will need to deal with that also. However, my primary confusion/questions right now are dealing with C++ applications - straight-up binaries and shared libraries.
– Steve Huston
Nov 10 at 23:32
answered Nov 8 at 0:52
Rico