Use both losses on a subnetwork of combined networks
I am trying to stack two networks together, and I want to calculate the loss of each network separately. For example, in the image below, the loss of LSTM1 should be (Loss1 + Loss2) and the loss of the system should be just (Loss2).



[image: stacked networks]



I implemented the network below with that idea in mind, but I have no idea how to compile and run it.



def build_lstm1():
    x = Input(shape=(self.timesteps, self.input_dim), name='input')
    h = LSTM(1024, return_sequences=True)(x)
    scores = TimeDistributed(Dense(self.input_dim, activation='sigmoid', name='dense'))(h)
    LSTM1 = Model(x, scores)
    return LSTM1


def build_lstm2():
    x = Input(shape=(self.timesteps, self.input_dim), name='input')
    h = LSTM(1024, return_sequences=True)(x)
    labels = TimeDistributed(Dense(self.input_dim, activation='sigmoid', name='dense'))(h)
    LSTM2 = Model(x, labels)
    return LSTM2




lstm1 = build_lstm1()
lstm2 = build_lstm2()


combined = Model(inputs=lstm1.input,
                 outputs=[lstm1.output,
                          lstm2(lstm1.output)])
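For reference, a minimal runnable sketch of the same idea, with small made-up dimensions standing in for `self.timesteps`/`self.input_dim` and `tensorflow.keras` import paths (which may differ from yours): both stages wired into one functional `Model` with two outputs, so each output can receive its own loss.

```python
import numpy as np
from tensorflow.keras.layers import Input, LSTM, TimeDistributed, Dense
from tensorflow.keras.models import Model

timesteps, input_dim = 4, 8  # hypothetical sizes for illustration

# Stage 1 (LSTM1): input sequence -> per-timestep scores.
x = Input(shape=(timesteps, input_dim), name='input')
h1 = LSTM(32, return_sequences=True)(x)
scores = TimeDistributed(Dense(input_dim, activation='sigmoid'), name='scores')(h1)

# Stage 2 (LSTM2): LSTM1's scores -> final labels.
h2 = LSTM(32, return_sequences=True)(scores)
labels = TimeDistributed(Dense(input_dim, activation='sigmoid'), name='labels')(h2)

# One model, two outputs: Loss1 is applied to `scores`, Loss2 to `labels`.
combined = Model(inputs=x, outputs=[scores, labels])
combined.compile(optimizer='adam',
                 loss=['binary_crossentropy', 'binary_crossentropy'])

# Train on random placeholder data; fit() takes one target per output.
X = np.random.rand(6, timesteps, input_dim).astype('float32')
Y1 = np.random.randint(0, 2, X.shape).astype('float32')
Y2 = np.random.randint(0, 2, X.shape).astype('float32')
combined.fit(X, [Y1, Y2], epochs=1, verbose=0)

p_scores, p_labels = combined.predict(X, verbose=0)
print(p_scores.shape, p_labels.shape)  # both (6, 4, 8)
```

Because Loss2's gradient flows back through `scores` into the first LSTM, the first stage is effectively trained by Loss1 + Loss2, while the second stage sees only Loss2.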
  • Can you give a detailed result of the above network? What is its output after feeding inputs properly?

    – BlackCode
    Nov 14 '18 at 8:20

  • Feseoglu, here is a Keras tutorial page keras.io/getting-started/functional-api-guide/… for your problem. Please let us know whether it works.

    – BlackCode
    Nov 14 '18 at 8:39
python keras neural-network deep-learning lstm
edited Nov 15 '18 at 5:18
Feseoglu
asked Nov 14 '18 at 7:46
Feseoglu
1 Answer
This is a wrong way of using Keras's functional Model API. Also, it is not possible to have the loss of LSTM1 be Loss1 + Loss2; it will be only Loss1. Similarly, for LSTM2 it will be only Loss2. However, for the combined network you can have any linear combination of Loss1 and Loss2 as the overall loss, i.e.



Loss_overall = a·Loss1 + b·Loss2, where a and b are non-negative real numbers.



The real essence of the functional Model API is that it allows you to create deep learning architectures with multiple outputs and multiple inputs in a single model.



def build_lstm_combined():
    x = Input(shape=(self.timesteps, self.input_dim), name='input')
    h_1 = LSTM(1024, return_sequences=True)(x)
    # Layer names must be unique within one model, hence dense_1/dense_2.
    scores = TimeDistributed(Dense(self.input_dim, activation='sigmoid', name='dense_1'))(h_1)
    h_2 = LSTM(1024, return_sequences=True)(h_1)
    labels = TimeDistributed(Dense(self.input_dim, activation='sigmoid', name='dense_2'))(h_2)
    LSTM_combined = Model(x, [scores, labels])
    return LSTM_combined


This combined model has a loss that is a combination of Loss1 and Loss2. When compiling the model you can specify the weight of each loss to obtain the overall loss. If your desired loss is 0.5·Loss1 + Loss2, you can do this with:



model_1 = build_lstm_combined()
model_1.compile(optimizer=Adam(0.001),  # from keras.optimizers import Adam
                loss=['categorical_crossentropy', 'categorical_crossentropy'],
                loss_weights=[0.5, 1])
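To make the weighting concrete, here is a runnable sketch with hypothetical small dimensions and random data (import paths assume `tensorflow.keras`; binary crossentropy is used because the placeholder targets are per-unit 0/1 values, but the weighting mechanism is identical for `categorical_crossentropy`). It reproduces the overall 0.5·Loss1 + 1·Loss2 by hand from the per-output losses:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, TimeDistributed, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

timesteps, input_dim = 4, 8  # placeholders for self.timesteps / self.input_dim

# Same shape as the answer's combined network, at toy size.
x = Input(shape=(timesteps, input_dim), name='input')
h_1 = LSTM(16, return_sequences=True)(x)
scores = TimeDistributed(Dense(input_dim, activation='sigmoid'), name='dense_1')(h_1)
h_2 = LSTM(16, return_sequences=True)(h_1)
labels = TimeDistributed(Dense(input_dim, activation='sigmoid'), name='dense_2')(h_2)
model_1 = Model(x, [scores, labels])

model_1.compile(optimizer=Adam(0.001),
                loss=['binary_crossentropy', 'binary_crossentropy'],
                loss_weights=[0.5, 1.0])

X = np.random.rand(4, timesteps, input_dim).astype('float32')
Y1 = np.random.randint(0, 2, X.shape).astype('float32')
Y2 = np.random.randint(0, 2, X.shape).astype('float32')
model_1.fit(X, [Y1, Y2], epochs=1, verbose=0)

# Overall training loss = 0.5 * Loss1 + 1.0 * Loss2, computed by hand:
bce = tf.keras.losses.BinaryCrossentropy()
p1, p2 = model_1.predict(X, verbose=0)
overall = 0.5 * float(bce(Y1, p1)) + 1.0 * float(bce(Y2, p2))
```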
  • By Loss2, I meant the loss of the system. I edited my question now: the loss of lstm1 should be the model loss plus lstm1's loss, and the loss of lstm2 should come from the model's loss.

    – Feseoglu
    Nov 15 '18 at 5:21











  • I don't know why you would want the losses of LSTM 1 and LSTM 2 separately, since they are parts of one big model. In training any neural network, no matter how complex, or for that matter any ML model, there is only one loss. The loss, as in our case, may be a combination of different parts (Loss = Loss1 + Loss2), but during training we try to minimise the overall loss to get the optimum weight parameters. I think by loss you mean the intermediate output from LSTM 1, which is reflected in my answer.

    – Huzaifa Calcuttawala
    Nov 16 '18 at 15:45
answered Nov 14 '18 at 10:22
Huzaifa Calcuttawala