What is a correct way to paginate an Apache Ignite cache?











I use an Apache Ignite cache as a data store and would like to know whether there is a way to paginate a large data collection from the client. I do not need, or want, millions of records transferred from the server to my web/mobile client.



    private final ClientCache<UUID, Account> accounts;

    public List<Account> getAll(int offset, int limit) {
        return accounts.query(new ScanQuery<UUID, Account>()
                .setLocal(false))      // false is the default anyway
            .getAll()                  // pulls the WHOLE result set to the client first
            .stream()
            .skip(offset)
            .limit(limit)
            .map(entry -> entry.getValue())
            .collect(Collectors.toList());
    }


Is this an efficient way?



I looked at using a QueryCursor, but its API is limited to an iterator...



Thanks.










Tags: java, pagination, ignite






asked Nov 4 at 8:37 by Gadi · edited Nov 4 at 9:59 by Valdi_Bo

1 Answer

















I see a getAll() call in your code. It forces all of the data to be transferred to the caller side, which is exactly what you wanted to avoid.



An iterator avoids this problem, because data is loaded in batches on demand, so you don't have to load the whole result set into the memory of a single node when you run the query. The page size can be configured by setting the ScanQuery#pageSize property (setPageSize); by default it is 1024. An iterator is acquired by calling the QueryCursor.iterator() method. So, instead of keeping an offset, you keep an iterator.
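
Here is a minimal sketch of that approach, against the same ClientCache<UUID, Account> as in the question; the AccountPager class, its method names, and the page size of 512 are illustrative assumptions, not a prescribed API:

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;
    import java.util.UUID;
    import javax.cache.Cache;
    import org.apache.ignite.cache.query.QueryCursor;
    import org.apache.ignite.cache.query.ScanQuery;
    import org.apache.ignite.client.ClientCache;

    // Holds one open cursor per paging session instead of an offset.
    class AccountPager {
        private final QueryCursor<Cache.Entry<UUID, Account>> cursor;
        private final Iterator<Cache.Entry<UUID, Account>> it;

        AccountPager(ClientCache<UUID, Account> accounts, int pageSize) {
            // Ignite pulls results in pages of 'pageSize' entries on demand,
            // so only one page at a time crosses the network.
            cursor = accounts.query(new ScanQuery<UUID, Account>().setPageSize(pageSize));
            it = cursor.iterator();
        }

        List<Account> nextPage(int limit) {
            List<Account> page = new ArrayList<>(limit);
            while (it.hasNext() && page.size() < limit)
                page.add(it.next().getValue()); // resumes where the last page ended
            return page;
        }

        void close() {
            cursor.close(); // releases server-side query resources
        }
    }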



An SQL SELECT query with LIMIT and OFFSET specified is also an option. But if you have more than one node, then up to LIMIT + OFFSET records will be loaded from each node to the reducer during execution, so deep offsets become expensive. You should take this into account.
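
A sketch of the SQL option, assuming Account is registered as a queryable type (via QueryEntity or @QuerySqlField annotations) and has a stable id field to order by; the query text and getPage name are illustrative:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.ignite.cache.query.FieldsQueryCursor;
    import org.apache.ignite.cache.query.SqlFieldsQuery;

    public List<Account> getPage(int offset, int limit) {
        // ORDER BY a stable key, otherwise rows may shuffle between pages.
        // Caveat from above: each node may ship up to OFFSET + LIMIT rows
        // to the reducer, so deep offsets get progressively more expensive.
        SqlFieldsQuery qry = new SqlFieldsQuery(
                "SELECT _VAL FROM Account ORDER BY id LIMIT ? OFFSET ?")
                .setArgs(limit, offset);

        List<Account> page = new ArrayList<>(limit);
        try (FieldsQueryCursor<List<?>> cur = accounts.query(qry)) {
            for (List<?> row : cur)
                page.add((Account) row.get(0)); // _VAL is the cached value object
        }
        return page;
    }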






answered 2 days ago by Denis





















• Thanks for this answer, @Denis. Can you provide some example code? Can you give the iterator a starting point, i.e. resume from where you left off? I'm a bit confused about how you would serve a large data set to web clients without breaking the server...
  – Gadi
  2 days ago










• @Gadi, you can find all the information on scan queries here: apacheignite.readme.io/docs/cache-queries#section-scan-queries. You can treat QueryCursor as an Iterable. A ScanQuery cannot be told where to start, but you can keep an iterator open, without closing it, until it is no longer needed.
  – Denis
  2 days ago
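
To make that concrete for a web tier, here is a hypothetical sketch: each client carries a page token and the server resumes the iterator registered under it. The token, the registry, and the eviction policy are all assumptions on my part, not an Ignite facility.

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import javax.cache.Cache;
    import org.apache.ignite.cache.query.QueryCursor;
    import org.apache.ignite.cache.query.ScanQuery;

    // Hypothetical registry: page token -> open cursor/iterator.
    private final ConcurrentMap<String, QueryCursor<Cache.Entry<UUID, Account>>> cursors =
            new ConcurrentHashMap<>();
    private final ConcurrentMap<String, Iterator<Cache.Entry<UUID, Account>>> iterators =
            new ConcurrentHashMap<>();

    public List<Account> nextPage(String token, int limit) {
        Iterator<Cache.Entry<UUID, Account>> it = iterators.computeIfAbsent(token, t -> {
            QueryCursor<Cache.Entry<UUID, Account>> cur =
                    accounts.query(new ScanQuery<UUID, Account>());
            cursors.put(t, cur);
            return cur.iterator();
        });

        List<Account> page = new ArrayList<>(limit);
        while (it.hasNext() && page.size() < limit)
            page.add(it.next().getValue());

        if (!it.hasNext())
            release(token); // result set exhausted: free server-side resources

        return page;
    }

    public void release(String token) {
        iterators.remove(token);
        QueryCursor<?> cur = cursors.remove(token);
        if (cur != null)
            cur.close();
    }
    // A real service would also evict idle cursors after a timeout; that is
    // the session-handling cost this approach trades for not re-scanning.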










• Thanks, @Denis; that would require the very messy session handling I was hoping to avoid.
  – Gadi
  2 days ago