Accuracy decreasing with higher epochs

I am a newbie to Keras and machine learning in general. I'm trying to build a binary classification model using the Sequential model. After some experimenting, I saw that on multiple runs (not always) I was getting an accuracy as high as 97% on my validation data by the second or third epoch, but this then dramatically decreased to as low as 12%. What is the reason behind this? How do I fine-tune my model?
Here's my code -



from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten

model = Sequential()
model.add(Flatten(input_shape=(6, size)))
model.add(Dense(6, activation='relu'))
model.add(Dropout(0.35))
model.add(Dense(3, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['binary_accuracy'])
model.fit(x, y, epochs=60, batch_size=40, validation_split=0.2)

python machine-learning keras deep-learning classification

asked Nov 10 at 19:58 by anirudh

  • How is your loss doing? Also increasing?
    – Dinari
    Nov 10 at 20:02






  • One word: Overfitting
    – Matias Valdenegro
    Nov 10 at 20:03












  • As Matias mentioned above, it's the case of overfitting: the more data you provide to the model, the more noise it generates and subsequently fails to maintain accuracy. Because each layer is initialized with randomized weights, each time you run the model from clean state, it produces different results. If you added the input data sample to the question, it might help us dig a bit deeper, because it's hard right now to judge from the code alone (except that it's quite simple) what could go wrong.
    – rm-
    Nov 10 at 20:17










  • The loss always goes down to around 0.6. Would increasing the amount of input data avoid overfitting? Each data sample is a 2D array of floating-point numbers of the form (6, 50).
    – anirudh
    Nov 11 at 6:07
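
Regarding rm-'s point above about randomized weight initialization: to make successive runs comparable, the random seeds can be fixed before the model is built. A minimal sketch, assuming standalone Keras on a TensorFlow 1.x backend (the seeding call differs on TensorFlow 2.x, and full determinism can also depend on GPU settings):

import random

import numpy as np
import tensorflow as tf

# Fix the seeds used for weight initialization and data shuffling
# so that repeated runs start from the same initial state.
np.random.seed(42)
random.seed(42)
tf.set_random_seed(42)   # on TensorFlow 2.x this would be tf.random.set_seed(42)
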
1 Answer
In my opinion, you can take the following factors into consideration.

  1. Reduce your learning rate to a very small number like 0.001 or even 0.0001.

  2. Provide more data.

  3. Set the Dropout rates to a number like 0.2 and keep them uniform across the network.

  4. Try decreasing the batch size.

  5. Use an appropriate optimizer: you may need to experiment a bit on this. Try different optimizers on the same network, and select the one that gives you the least loss.

If any of the above factors work for you, please let me know about it in the comments section.

answered Nov 11 at 7:14 by Shubham Panchal
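
As a rough sketch of how suggestions 1, 3, and 4 above might look applied to the question's model: the architecture and the size, x, and y variables are taken from the question, while the learning rate, dropout rate, and batch size used here are illustrative starting points to experiment with, not tuned recommendations.

from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.optimizers import Adam

# Same architecture as in the question, with uniform Dropout of 0.2
# and an explicit, smaller learning rate (Keras's Adam default is 0.001).
model = Sequential()
model.add(Flatten(input_shape=(6, size)))   # `size` as defined in the question's code
model.add(Dense(6, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(3, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer=Adam(lr=0.0001),    # suggestion 1: lower learning rate
              metrics=['binary_accuracy'])

model.fit(x, y,
          epochs=60,
          batch_size=20,                    # suggestion 4: smaller batch than the original 40
          validation_split=0.2)

Whether these particular values help depends on the data; as the answer notes, trying a few optimizers and watching the validation loss across epochs is the practical way to choose between them.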





