Saving and restoring functions in TensorFlow


























I am working on a VAE project in TensorFlow where the encoder/decoder networks are built in functions. The idea is to be able to save the trained model, then load it and do sampling using the decoder function.

After restoring the model, I am having trouble getting the decoder function to run and give me back the restored, trained variables; I get an "Uninitialized value" error. I assume the function is either creating a new set of variables or overwriting the existing ones, but I cannot figure out how to solve this. Here is some code:



class VAE(object):
    def __init__(self, restore=True):
        self.session = tf.Session()
        if restore:
            self.restore_model()
        self.build_decoder = tf.make_template('decoder', self._build_decoder)

    @staticmethod
    def _build_decoder(z, output_size=768, hidden_size=200,
                       hidden_activation=tf.nn.elu, output_activation=tf.nn.sigmoid):
        x = tf.layers.dense(z, hidden_size, activation=hidden_activation)
        x = tf.layers.dense(x, hidden_size, activation=hidden_activation)
        logits = tf.layers.dense(x, output_size, activation=output_activation)
        return distributions.Independent(distributions.Bernoulli(logits), 2)

    def sample_decoder(self, n_samples):
        prior = self.build_prior(self.latent_dim)
        samples = self.build_decoder(prior.sample(n_samples), self.input_size).mean()
        return self.session.run([samples])

    def restore_model(self):
        print("Restoring")
        self.saver = tf.train.import_meta_graph(os.path.join(self.save_dir, "turbolearn.meta"))
        self.saver.restore(self.session, tf.train.latest_checkpoint(self.save_dir))
        self._restored = True


I want to run samples = vae.sample_decoder(5).
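As an aside, when hunting this kind of error it helps to list which variables in the session are actually uninitialized. A minimal sketch (the variable name is made up; `tf.compat.v1` is used here only so the TF1-style code also runs under TF 2.x):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

with tf.Graph().as_default(), tf.Session() as sess:
    # A stand-in for a variable the decoder template would create.
    w = tf.get_variable("decoder/w", shape=[2])

    # Before any initializer/restore runs, the variable is reported here.
    print(sess.run(tf.report_uninitialized_variables()))

    sess.run(tf.global_variables_initializer())
    print(sess.run(tf.report_uninitialized_variables()))  # now empty
```

If the decoder's variables show up in that list after restoring, the restore never reached them.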



In my training routine, I run:



        if self.checkpoint:
            self.saver.save(self.session, os.path.join(self.save_dir, "myvae"), write_meta_graph=True)


UPDATE



Based on the suggested answer below, I changed the restore method to:



self.saver = tf.train.Saver()
self.saver.restore(self.session, tf.train.latest_checkpoint(self.save_dir))


But now I get a ValueError when the Saver() object is created:



ValueError: No variables to save
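For context on this error: tf.train.Saver() scans the current graph for variables at construction time, so creating it before any variables exist (e.g. before the decoder template has ever been called) raises exactly this ValueError. A small sketch, with a made-up variable name:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# In an empty graph the Saver has nothing to save and refuses to build.
with tf.Graph().as_default():
    try:
        tf.train.Saver()
    except ValueError as e:
        print(e)  # "No variables to save"

# Once the graph contains at least one variable, the same call succeeds.
with tf.Graph().as_default():
    w = tf.get_variable("w", shape=[2])
    saver = tf.train.Saver()
```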









python tensorflow machine-learning tensorflow-probability

edited Nov 25 at 3:22
asked Nov 11 at 6:14 by taylormade201
          1 Answer
tf.train.import_meta_graph restores the graph, i.e. it rebuilds the network architecture that was stored in the file. The call to tf.train.Saver.restore, on the other hand, only restores the variable values from the file into the current graph of the session (this naturally fails if some values in the file belong to variables that do not exist in the currently active graph).
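The two paths can be sketched end-to-end like this (the checkpoint path and variable name are made up; `tf.compat.v1` is used only so the TF1-style code also runs under TF 2.x):

```python
import os
import tempfile

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

ckpt = os.path.join(tempfile.mkdtemp(), "model")

# First save a tiny graph so both restore paths have something to load.
with tf.Graph().as_default(), tf.Session() as sess:
    w = tf.get_variable("decoder/w", initializer=[1.0, 2.0])
    sess.run(tf.global_variables_initializer())
    tf.train.Saver().save(sess, ckpt, write_meta_graph=True)

# Path 1: no Python code rebuilds the network. import_meta_graph recreates
# the architecture from the .meta file, then restore() fills in the values.
with tf.Graph().as_default() as g, tf.Session(graph=g) as sess:
    saver = tf.train.import_meta_graph(ckpt + ".meta")
    saver.restore(sess, ckpt)
    print(sess.run(g.get_tensor_by_name("decoder/w:0")))

# Path 2: the Python code builds the same variables itself (same names as
# at save time); a plain Saver created AFTER them restores the values.
with tf.Graph().as_default(), tf.Session() as sess:
    w = tf.get_variable("decoder/w", shape=[2])
    saver = tf.train.Saver()
    saver.restore(sess, ckpt)
    print(sess.run(w))
```

Mixing the two (importing the meta graph and then building the layers again in Python) creates a second, uninitialized copy of every variable.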



So if you already build the network layers in your code, you don't need to call tf.train.import_meta_graph; otherwise the import might be what is causing your problems.



I'm not sure what the rest of your code looks like, but here is a suggestion: first build the graph, then create the session, and finally restore if applicable. Your __init__ might then look like this:



def __init__(self, restore=True):
    self.build_decoder = tf.make_template('decoder', self._build_decoder)
    self.session = tf.Session()
    if restore:
        self.restore_model()


However, if you are only restoring the encoder and building the decoder anew, you might build the decoder last. But then don't forget to initialize its variables before use.
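Initializing only the fresh decoder variables (while leaving restored values untouched) can be sketched like this; the scope names and values are made up, and `tf.compat.v1` stands in for TF1:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

with tf.Graph().as_default(), tf.Session() as sess:
    # Stand-in for a variable that Saver.restore would have filled in.
    enc = tf.get_variable("encoder/w", initializer=[5.0])
    sess.run(tf.variables_initializer([enc]))  # plays the role of restore

    # The decoder is built afterwards, so its variables are uninitialized.
    dec = tf.get_variable("decoder/w", initializer=[0.0, 0.0])

    # Initialize only the variables under the "decoder" scope, leaving
    # the "restored" encoder value untouched.
    decoder_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,
                                     scope="decoder")
    sess.run(tf.variables_initializer(decoder_vars))

    # Every variable is now initialized and the encoder kept its value.
    print(sess.run(tf.report_uninitialized_variables()))
    print(sess.run(enc))
```

Running tf.global_variables_initializer() instead would clobber the restored encoder weights, which is why the scoped initializer matters here.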






• Feel like I am missing something here. I tried replacing that line as you suggested, with a call to build the standard Saver object, and subsequently called restore on that. See the update above for the code changes. – taylormade201, Nov 25 at 3:17












• @taylormade201 The tf.train.Saver must be created after the variables (the graph) are created, so in your code you should be careful with the order of creating the graph and restoring the variables. I updated the answer with a more detailed suggestion. – dsalaj, Nov 25 at 9:15






• Thanks, that put me down the right path. – taylormade201, Nov 26 at 0:06











edited Nov 25 at 9:22
answered Nov 23 at 23:30 by dsalaj