Use Docker's remote API in a secure manner
I am trying to find an effective way to use the Docker remote API securely.
I have a Docker daemon running on a remote host and a Docker client on a different machine. My solution must not depend on the client or server OS, so that it works for any machine with a Docker client or daemon.
So far, the only way I have found to do this is to create certificates on a Linux machine with openssl and copy them to the client and server manually, as in this example:
https://docs.docker.com/engine/security/https/
and then configure Docker on both sides to use the certificates for encryption and authentication.
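For reference, the setup that guide describes boils down to roughly the following; the host name and certificate file names below are placeholders, and the guide itself covers generating ca.pem plus the server and client certificate/key pairs with openssl:

# On the Docker host: start the daemon with TLS verification enabled
dockerd --tlsverify \
    --tlscacert=ca.pem \
    --tlscert=server-cert.pem \
    --tlskey=server-key.pem \
    -H=0.0.0.0:2376

# On the client: present the client certificate when calling the remote API
docker --tlsverify \
    --tlscacert=ca.pem \
    --tlscert=cert.pem \
    --tlskey=key.pem \
    -H=dockerhost.example.com:2376 version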
This method is rather clunky in my opinion, because it is sometimes a problem to copy files onto each machine I want to use the remote API from.
I am looking for something more elegant.
Another solution I've found is using a proxy for basic HTTP authentication, but with this method the traffic is not encrypted, so it is not really secure either.
Does anyone have a suggestion for a different solution or for a way to improve one of the above?
Tags: docker, authentication, encryption, tls1.2
asked Nov 13 '18 at 8:34 by Roy Cohen; edited Nov 13 '18 at 20:03
2 Answers
Your favorite system automation tool (Chef, SaltStack, Ansible) can probably directly manage the running Docker containers on a remote host, without opening another root-equivalent network path. There are Docker-oriented clustering tools (Docker Swarm, Nomad, Kubernetes, AWS ECS) that can run a container locally or remotely, but you have less control over where exactly (you frequently don't actually care) and they tend to take over the machines they're running on.
If I really had to manage systems this way I'd probably use some sort of centralized storage to keep the TLS client keys, most likely Vault, which has the property of storing the keys encrypted, requiring some level of authentication to retrieve them, and being able to access-control them. You could write a shell function like this (untested):
dockerHost() {
  # Fetch the TLS materials for host "$1" from Vault into ~/.docker/<host>
  mkdir -p "$HOME/.docker/$1"
  JSON=$(vault kv get -format=json "secret/docker/$1")
  for f in ca.pem cert.pem key.pem; do
    # -r emits the raw PEM contents rather than a JSON-quoted string
    echo "$JSON" | jq -r --arg f "$f" '.data.data[$f]' > "$HOME/.docker/$1/$f"
  done
  # Point the Docker CLI at the remote daemon and have it verify/present these certs
  export DOCKER_HOST="tcp://$1:2376"
  export DOCKER_CERT_PATH="$HOME/.docker/$1"
  export DOCKER_TLS_VERIFY=1
}
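For example (the host name here is hypothetical), a session using the function above might look like:

dockerHost docker01.example.com   # pull certs from Vault, set DOCKER_* variables
docker ps                         # subsequent docker commands reach the remote daemon over TLS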
While your question makes clear you understand this, it bears repeating: do not enable unauthenticated remote access to the Docker daemon, since it is trivial to take over a host with unrestricted root access if you can access the socket at all.
My problem with clustering tools is that I am looking to deploy multiple containers on a single host, and I don't need a whole cluster for that, only one host. I thought about using a single-host cluster, but that feels wrong to me. Do you think it is a good option?
– Roy Cohen
Nov 13 '18 at 12:01
Honestly, it's overkill for a lot of things, and I wish Swarm wasn't quite so prominent in the official Docker tutorials. For multiple containers on a single host, though, Docker Compose is pretty lightweight and standardized; but it doesn't work well for multi-host installations, which is what your original question was more getting at.
– David Maze
Nov 13 '18 at 12:13
I want only a single host to be the Docker host, and I want to be able to control this host remotely, using only a Docker client on a remote machine. The client will not have a Docker daemon, only the CLI Docker client binary.
– Roy Cohen
Nov 13 '18 at 12:18
My very first choice for that setup would just be to ssh to the host.
– David Maze
Nov 13 '18 at 12:20
Currently I am using ssh, but I am looking to do it in a program/script and not manually. I have a lot of Docker actions and configurations I want to perform from a remote host (create containers, connect them to networks, etc.). I want to use the docker-py module to do so. I am also trying to wrap the Docker remote API in some kind of interface that is relevant to the app running in the containers.
– Roy Cohen
Nov 13 '18 at 12:35
Based on your comments, I would suggest you go with Ansible if you don't need the Swarm functionality and only require single-host support. Ansible only requires SSH access, which you probably already have available.
It's very easy to use an existing service that's defined in Docker Compose or you can just invoke your shell scripts in Ansible. No need to expose the Docker daemon to the external world.
A very simple example file (playbook.yml):
---
- hosts: all
  tasks:
    # Run the hello-world image as a container named "helloworld" on the target host
    - name: setup container
      docker_container:
        name: helloworld
        image: hello-world
Running the playbook:
ansible-playbook -i username@mysshhost.com, playbook.yml
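As a rough sketch of how a few of the Docker modules listed below can be combined in one playbook (the registry address, image name, and credential variables are placeholders, and exact module options vary by Ansible version):

---
- hosts: all
  tasks:
    # Authenticate against a private registry so the image can be pulled
    - name: log in to registry
      docker_login:
        registry: registry.example.com
        username: "{{ registry_user }}"
        password: "{{ registry_password }}"

    # Pull the image if needed and run it, attached to a user-defined network
    - name: run application container
      docker_container:
        name: myapp
        image: registry.example.com/myapp:latest
        pull: yes
        state: started
        networks:
          - name: appnet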
Ansible provides pretty much all of the functionality you need to interact with Docker via its module system:
docker_service: Use your existing Docker Compose files to orchestrate containers on a single Docker daemon or on Swarm. Supports Compose versions 1 and 2.
docker_container: Manages the container lifecycle, providing the ability to create, update, stop, start and destroy a container.
docker_image: Provides full control over images, including build, pull, push, tag and remove.
docker_image_facts: Inspects one or more images in the Docker host's image cache, providing the information as facts for making decisions or assertions in a playbook.
docker_login: Authenticates with Docker Hub or any Docker registry and updates the Docker Engine config file, which in turn provides password-free pushing and pulling of images to and from the registry.
docker (dynamic inventory): Dynamically builds an inventory of all the available containers from a set of one or more Docker hosts.
Thank you for the detailed answer. I've looked into Ansible, but it doesn't seem to fit because the "Ansible master" must be a Linux machine. In my solution, I'm looking to allow any type of machine with any OS to control the containers from afar. For example, using only the Docker client binary, any host (Windows, OSX, Linux, etc.) that supports Docker can control the containers remotely with no dependencies except a single binary.
– Roy Cohen
Nov 14 '18 at 10:18