Two (Kafka) S3 Connectors not working simultaneously
I have Kafka Connect running in a cluster (3 nodes) with 1 connector (topic -> S3), and everything is fine:
[root@dev-kafka1 ~]# curl localhost:8083/connectors/s3-postgres/status | jq -r
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   219  100   219    0     0  36384      0 --:--:-- --:--:-- --:--:-- 43800
{
  "name": "s3-postgres",
  "connector": {
    "state": "RUNNING",
    "worker_id": "127.0.0.1:8083"
  },
  "tasks": [
    {
      "state": "RUNNING",
      "id": 0,
      "worker_id": "127.0.0.1:8083"
    },
    {
      "state": "RUNNING",
      "id": 1,
      "worker_id": "127.0.0.1:8083"
    }
  ],
  "type": "sink"
}
But when I create another connector, its task list always looks like this:
[root@dev-kafka1 ~]# curl localhost:8083/connectors/s3-postgres6/status | jq -r
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   109  100   109    0     0  14347      0 --:--:-- --:--:-- --:--:-- 15571
{
  "name": "s3-postgres6",
  "connector": {
    "state": "RUNNING",
    "worker_id": "127.0.0.1:8083"
  },
  "tasks": [],
  "type": "sink"
}
I don't know what I did wrong in the configuration that prevents two connectors of the same plugin from running together. If I stop connector #1 (which runs fine), connector #2 works fine after a restart. Does anyone know what I should change in the configuration?
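For reference, the per-task status and the submitted configs can also be checked through the standard Kafka Connect REST endpoints on the same worker (output omitted here):
curl localhost:8083/connectors/s3-postgres6/tasks | jq -r   # tasks actually created for the new connector
curl localhost:8083/connectors/s3-postgres6/config | jq -r  # config of the new connector
curl localhost:8083/connectors/s3-postgres/config | jq -r   # config of the working connector, for comparison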
apache-kafka apache-kafka-connect kafka-cluster
asked Nov 24 '18 at 20:21
Paulo Singaretti
Try increasing the heap size of Connect? Maybe there isn't enough memory to run more tasks.
– cricket_007
Nov 24 '18 at 23:17
Also, ideally you should run Connect on separate machines from the Kafka brokers.
– cricket_007
Nov 24 '18 at 23:29
It worked! I just created 3 new nodes to relocate Kafka Connect and increased the memory and CPU on those nodes (from AWS m4.large to m4.2xlarge), and it's fine now: the connector tasks are running. Thanks a lot @cricket_007!
– Paulo Singaretti
Nov 26 '18 at 0:33
1 Answer
It's hard to say exactly what the problem is without searching through the logs (perhaps even temporarily raising the logging to debug verbosity), but depending on the connector properties, Kafka Connect can be very memory hungry.
Therefore, I'd suggest running Connect itself on machines isolated from the Kafka brokers, and allowing Connect to take a larger heap (the default is 2g in the latest versions) by exporting the KAFKA_HEAP_OPTS variable.
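For illustration, a minimal sketch of both suggestions, assuming a stock Apache Kafka install on the Connect nodes; the paths and heap values are illustrative, not prescriptive:
# Give the Connect worker a larger heap before starting it
# (connect-distributed.sh only applies its 2G default when KAFKA_HEAP_OPTS is unset)
export KAFKA_HEAP_OPTS="-Xms512M -Xmx6G"
bin/connect-distributed.sh config/connect-distributed.properties
# To debug, temporarily raise verbosity in config/connect-log4j.properties:
#   change  log4j.rootLogger=INFO, stdout  to  log4j.rootLogger=DEBUG, stdout
# and revert once you have the information you need.
On a dedicated Connect node that heap only competes with the OS for memory, which is part of why separating Connect from the brokers helps.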
answered Nov 26 '18 at 2:33
cricket_007