TensorFlow calculates incorrect loss for `tf.keras` models when using sample weights



























The loss calculation is not correct when working with tf.keras. After building the model, Model.fit_generator should accept (inputs, targets, sample_weights) tuples from the generator. However, if I multiply the sample_weights by 10000, the reported loss doesn't change.



The bug seems to appear from TensorFlow 1.10 onwards (e.g. 1.11, 1.12).



Code to reproduce



import numpy as np
import tensorflow as tf

WEIGHT_VARIABLE = 1

no_of_features = 10
timesteps = 3
batch_size = 32


def data_gen():
    while True:
        numerical = np.random.randint(5, size=(batch_size, timesteps, no_of_features))
        y = np.random.randint(2, size=batch_size)
        w = np.ones(batch_size) * WEIGHT_VARIABLE

        yield {'numeric_input': numerical}, y, w


def build_model():
    numerical_input = tf.keras.layers.Input(shape=(timesteps, no_of_features), name='numeric_input')
    rnn_out = tf.keras.layers.GRU(32, return_sequences=False)(numerical_input)
    dense = tf.keras.layers.Dense(1, activation='sigmoid', name='main_output')(rnn_out)

    model = tf.keras.models.Model(numerical_input, dense)

    params = {
        'loss': 'binary_crossentropy',
        'optimizer': tf.keras.optimizers.Adam(),
        'metrics': [tf.keras.metrics.binary_crossentropy, tf.keras.metrics.binary_accuracy]
    }
    model.compile(**params)

    return model


def train_model():
    gen1 = data_gen()
    model = build_model()

    model.fit_generator(gen1, epochs=30, steps_per_epoch=10)


if __name__ == '__main__':
    train_model()


In the above code, you simply need to change WEIGHT_VARIABLE from 1 to 10000 and rerun the file.





Logs



v1.10

WEIGHT_VARIABLE = 1

Epoch 1/5 10/10 [==============================] - 1s 128ms/step - loss: 0.7407 - binary_crossentropy: 0.7407 - binary_accuracy: 0.5031
Epoch 2/5 10/10 [==============================] - 0s 4ms/step - loss: 0.7043 - binary_crossentropy: 0.7043 - binary_accuracy: 0.5125
Epoch 3/5 10/10 [==============================] - 0s 4ms/step - loss: 0.7055 - binary_crossentropy: 0.7055 - binary_accuracy: 0.5219
Epoch 4/5 10/10 [==============================] - 0s 4ms/step - loss: 0.7002 - binary_crossentropy: 0.7002 - binary_accuracy: 0.5250
Epoch 5/5 10/10 [==============================] - 0s 4ms/step - loss: 0.6944 - binary_crossentropy: 0.6944 - binary_accuracy: 0.5375

WEIGHT_VARIABLE = 10000

Epoch 1/5 10/10 [==============================] - 1s 131ms/step - loss: 7235.5976 - binary_crossentropy: 0.7236 - binary_accuracy: 0.4562
Epoch 2/5 10/10 [==============================] - 0s 4ms/step - loss: 7271.9184 - binary_crossentropy: 0.7272 - binary_accuracy: 0.4844
Epoch 3/5 10/10 [==============================] - 0s 4ms/step - loss: 7276.9147 - binary_crossentropy: 0.7277 - binary_accuracy: 0.4500
Epoch 4/5 10/10 [==============================] - 0s 4ms/step - loss: 7052.0121 - binary_crossentropy: 0.7052 - binary_accuracy: 0.4625
Epoch 5/5 10/10 [==============================] - 0s 4ms/step - loss: 7187.0285 - binary_crossentropy: 0.7187 - binary_accuracy: 0.4969

v1.12

WEIGHT_VARIABLE = 1

Epoch 1/5 10/10 [==============================] - 1s 68ms/step - loss: 0.7188 - binary_crossentropy: 0.7188 - binary_accuracy: 0.5312
Epoch 2/5 10/10 [==============================] - 0s 4ms/step - loss: 0.7044 - binary_crossentropy: 0.7044 - binary_accuracy: 0.4969
Epoch 3/5 10/10 [==============================] - 0s 4ms/step - loss: 0.7086 - binary_crossentropy: 0.7086 - binary_accuracy: 0.4844
Epoch 4/5 10/10 [==============================] - 0s 4ms/step - loss: 0.7075 - binary_crossentropy: 0.7075 - binary_accuracy: 0.4500
Epoch 5/5 10/10 [==============================] - 0s 4ms/step - loss: 0.6950 - binary_crossentropy: 0.6950 - binary_accuracy: 0.5187

WEIGHT_VARIABLE = 10000

Epoch 1/5 10/10 [==============================] - 1s 74ms/step - loss: 0.9084 - binary_crossentropy: 0.9084 - binary_accuracy: 0.4719
Epoch 2/5 10/10 [==============================] - 0s 4ms/step - loss: 0.7120 - binary_crossentropy: 0.7120 - binary_accuracy: 0.5062
Epoch 3/5 10/10 [==============================] - 0s 4ms/step - loss: 0.7024 - binary_crossentropy: 0.7024 - binary_accuracy: 0.5344
Epoch 4/5 10/10 [==============================] - 0s 4ms/step - loss: 0.7257 - binary_crossentropy: 0.7257 - binary_accuracy: 0.4500
Epoch 5/5 10/10 [==============================] - 0s 4ms/step - loss: 0.7013 - binary_crossentropy: 0.7013 - binary_accuracy: 0.4844
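The two behaviours in the logs are consistent with two different weighting conventions: an unnormalized weighted mean, where the loss scales with the weights (as in the v1.10 logs), and a weight-normalized mean, where uniform weights cancel out (as in the v1.12 logs). The following is a NumPy sketch of my own to illustrate the arithmetic, not code from either TensorFlow release; the convention names are my labels:

```python
import numpy as np

def bce(y_true, y_pred, eps=1e-7):
    # Element-wise binary cross-entropy.
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=32).astype(float)
y_pred = rng.uniform(0.3, 0.7, size=32)
per_sample = bce(y_true, y_pred)

for weight in (1.0, 10000.0):
    w = np.full(32, weight)
    # Unnormalized weighted mean: scales with the weights (v1.10-like logs).
    unnormalized = (w * per_sample).mean()
    # Weight-normalized mean: uniform weights cancel (v1.12-like logs).
    normalized = (w * per_sample).sum() / w.sum()
    print(weight, round(unnormalized, 4), round(normalized, 4))
```

Under the first convention, multiplying every weight by 10000 multiplies the reported loss by 10000 while leaving the gradient direction unchanged; under the second, uniform weights have no visible effect at all.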


Link to Github Issue



































  • Keras might be normalizing the weights (it has code to do this), so if you have all weights set to the same value, changing the magnitude won't change anything.

    – Matias Valdenegro
    Nov 15 '18 at 13:29











  • In my real code, I only add weight to positive samples, e.g. w = np.where(y==1, WEIGHT_VARIABLE, 1), which produces weird losses. I understand that everything is still okay if Keras normalised the loss, but why? Now I have no interpretation for the loss value.

    – GRS
    Nov 15 '18 at 13:37











  • @MatiasValdenegro I looked through the source code of Keras and could not find any weight normalization code. However, even if there is such a thing, how do you explain the different loss values in v1.10 and v1.12?

    – today
    Nov 16 '18 at 10:55
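Following up on the w = np.where(y==1, WEIGHT_VARIABLE, 1) case from the comments: if the loss is weight-normalized, the reported number becomes a weighted average dominated by the heavily weighted positive samples, which is exactly why it loses its usual interpretation. A small NumPy sketch of my own (not TensorFlow code) makes this concrete:

```python
import numpy as np

def bce(y_true, y_pred, eps=1e-7):
    # Element-wise binary cross-entropy.
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=1000).astype(float)
p = rng.uniform(0.3, 0.7, size=1000)
per_sample = bce(y, p)

w = np.where(y == 1, 10000.0, 1.0)  # weight only the positive samples

plain = per_sample.mean()                       # ordinary unweighted mean loss
normalized = (w * per_sample).sum() / w.sum()   # weight-normalized mean

# With such extreme weights the normalized value collapses to (almost)
# the mean loss over the positive samples alone.
positives_only = per_sample[y == 1].mean()
print(round(plain, 4), round(normalized, 4), round(positives_only, 4))
```

So a weight-normalized loss with non-uniform weights is no longer comparable to the unweighted binary cross-entropy metric printed next to it, even though training still pushes harder on the up-weighted samples.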
















python tensorflow machine-learning keras loss-function






asked Nov 15 '18 at 11:16 by GRS; edited Nov 15 '18 at 12:02 by today












