error while running eval.py tensorflow object detection api
When I run the eval.py script, the images are evaluated, and the logs show that visualizations are also drawn on them.

I have set num_examples to 50 in pipeline.config, but 50 images are never evaluated. After roughly 9 images (the count varies a lot; sometimes it is 4 or 5), the script fails with:

ValueError: Image with id 1531471339_visible.png already added

I am not sure where I am going wrong.

Note: this is my own dataset, which I trained on myself. I have also tried different machines and still get the same error.



eval config:

eval_config {
  num_examples: 50
  use_moving_averages: false
}
eval_input_reader {
  label_map_path: "/home/path/to/labelmap.pbtxt"
  tf_record_input_reader {
    input_path: "/home/path/to/file.tfrecord"
  }
}
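
One quick sanity check is to count the records in the eval TFRecord and confirm it holds at least num_examples entries. A minimal sketch (TF 1.x, as used by the legacy eval.py; the path below is the placeholder from the config above):

import tensorflow as tf  # TF 1.x

record_path = "/home/path/to/file.tfrecord"  # placeholder path from the config

# Iterate once over the serialized examples and count them.
num_records = sum(1 for _ in tf.python_io.tf_record_iterator(record_path))
print("records in eval TFRecord:", num_records)

If num_records turns out to be smaller than num_examples, the input reader may cycle through the set again, which can trigger the "already added" error.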

python tensorflow object-detection object-detection-api

asked Nov 13 '18 at 11:14 by nirvair · edited Nov 13 '18 at 14:21

  • Please post your .config for more detailed help. How many images does your evaluation set contain? – Janikan, Nov 13 '18 at 14:07

  • There are 215 images in the eval set. – nirvair, Nov 13 '18 at 14:20

  • Usually this appears if the dataset is too small for num_examples and images are used twice. Did you check your eval set for duplicates? – Janikan, Nov 13 '18 at 14:31

  • Yes, I checked for duplicates. No duplicates found. – nirvair, Nov 13 '18 at 14:32
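
The evaluator keys each image by its id, typically decoded from the image/source_id field of each tf.train.Example (often set to the filename, as in the error above), so a duplicate check on the record fields themselves can catch cases a file-level check misses. A minimal sketch, assuming the standard Object Detection API feature keys image/source_id and image/filename, and reusing the placeholder path from the config:

import collections
import tensorflow as tf  # TF 1.x

record_path = "/home/path/to/file.tfrecord"  # placeholder path from the config
id_counts = collections.Counter()

for raw in tf.python_io.tf_record_iterator(record_path):
    example = tf.train.Example()
    example.ParseFromString(raw)
    feats = example.features.feature
    # The evaluator identifies images by source_id; fall back to the filename.
    values = (feats["image/source_id"].bytes_list.value or
              feats["image/filename"].bytes_list.value)
    id_counts[values[0] if values else b"<missing>"] += 1

duplicates = {k: v for k, v in id_counts.items() if v > 1}
print("duplicated ids:", duplicates or "none")

Any id that appears more than once here would reproduce the "Image with id ... already added" error as soon as the evaluator reaches its second occurrence.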