TensorFlow Estimator makes different prediction on each call of predict
I trained a classifier for the Iris dataset using TF Estimators, but each time I call predict I get different results. I wonder if I'm doing something wrong in training or if it is an issue in the prediction.



I'm loading an already trained model and just making the .predict call. This is my input function for prediction.



def get_predict_fn(features, batch_size):
    def predict_input_fn():
        dataset = tf.data.Dataset.from_tensor_slices(dict(features))
        dataset = dataset.batch(batch_size)
        return dataset.make_one_shot_iterator().get_next()

    return predict_input_fn
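For reference, `dict(features)` here converts the DataFrame into a mapping from column name to column values, and `from_tensor_slices` then emits one dict of scalars per row. A minimal pure-Python stand-in of that slicing (no TensorFlow or pandas required; the helper name is just for illustration):

```python
# Stand-in for the single-row feature DataFrame from the question:
# each column maps to a list of values, one entry per example.
features = {
    "sepal_length": [5.7],
    "sepal_width": [2.5],
    "petal_length": [5.0],
    "petal_width": [2.0],
}

def slice_features(features):
    # from_tensor_slices(dict(df)) yields one element per row, i.e.
    # one dict of per-column scalars per example; simulate that here.
    n = len(next(iter(features.values())))
    return [{k: v[i] for k, v in features.items()} for i in range(n)]

examples = slice_features(features)
print(examples)  # one dict per row
```

If this produces the expected single row, the input function itself is not the source of the nondeterminism.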


This is the result of one call




[{'logits': array([-3.5082035, -1.074667 , -3.8533034],
dtype=float32), 'probabilities': array([0.07629351, 0.8696793 ,
0.05402722], dtype=float32), 'class_ids': array([1]), 'classes': array([b'Iris-versicolor'], dtype=object)}]




This is another call




[{'logits': array([ 3.0530725, -1.0889677, 2.3922846],
dtype=float32), 'probabilities': array([0.6525989 , 0.01037006,
0.337031 ], dtype=float32), 'class_ids': array([0]), 'classes': array([b'Iris-setosa'], dtype=object)}]




Both calls use the same model and send the same example DataFrame.




sepal_length  sepal_width  petal_length  petal_width
         5.7          2.5           5.0          2.0
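As a sanity check, the probabilities in both outputs are consistent with a softmax over the reported logits, so the model is genuinely producing different logits on each call rather than the output being garbled somewhere downstream. This can be verified with the standard library alone:

```python
import math

def softmax(logits):
    # Numerically stable softmax: shift by the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Logits from the first predict call in the question.
print(softmax([-3.5082035, -1.074667, -3.8533034]))
# ≈ [0.0763, 0.8697, 0.0540], matching that call's reported probabilities
```

Since both calls pass this check, the difference must come from the weights used at prediction time, which is what the comment below is probing.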
  • If you are loading a model, then the results should be consistent. Why might they change? 1. Perhaps you are sending different data; but as you state, you are sending the same fixed array, so that should not be the cause. 2. You are altering the model after you load it. Can you add the entire code for loading the model and calling predict on the fixed input, to rule this out? Otherwise there is not enough information to help.
    – SumNeuron
    Nov 21 at 15:05
python tensorflow tensorflow-estimator

asked Nov 10 at 23:46 by osanseviero