What is a correct way to paginate an Ignite cache?
I use an Apache Ignite cache as data storage and would like to know if there is a way to paginate a large data collection from the client. I do not need, or want, millions of records transferred from the server to my web/mobile client.
private final ClientCache<UUID, Account> accounts;

public List<Account> getAll(int offset, int limit) {
    return accounts.query(new ScanQuery<UUID, Account>()
            .setLocal(false))
        .getAll()
        .stream()
        .skip(offset)
        .limit(limit)
        .map(entity -> entity.getValue())
        .collect(Collectors.toList());
}
Is this an efficient way?
I looked at using a Cursor, but the API is limited to an iterator...
Thanks.
java pagination ignite
edited Nov 4 at 9:59
Valdi_Bo
asked Nov 4 at 8:37
Gadi
1 Answer
I see a getAll() in your code. It forces all of the data to be transferred to the caller side, which is exactly what you wanted to avoid.
An Iterator avoids this problem, because data is loaded in batches on demand, so you don't have to load everything into the memory of a single node when you run the query. The page size can be configured through the ScanQuery#pageSize property; by default it is 1024. An iterator can be acquired by calling the QueryCursor.iterator() method. So, instead of keeping an offset, you keep an iterator.
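A minimal sketch of the iterator approach, reusing the `accounts` cache and `Account` type from the question (the `AccountPager` wrapper itself is hypothetical, not an Ignite API):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.UUID;
import javax.cache.Cache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.client.ClientCache;

// Hypothetical session-scoped pager: keeps the cursor's iterator open
// between calls instead of re-scanning the cache with a growing offset.
public class AccountPager implements AutoCloseable {
    private final QueryCursor<Cache.Entry<UUID, Account>> cursor;
    private final Iterator<Cache.Entry<UUID, Account>> it;

    public AccountPager(ClientCache<UUID, Account> accounts, int pageSize) {
        ScanQuery<UUID, Account> qry = new ScanQuery<>();
        qry.setPageSize(pageSize); // entries fetched per round trip (default 1024)

        this.cursor = accounts.query(qry);
        this.it = cursor.iterator(); // lazy: pages are pulled from servers on demand
    }

    // Returns the next page; an empty list means the scan is exhausted.
    public List<Account> nextPage(int limit) {
        List<Account> page = new ArrayList<>(limit);
        while (page.size() < limit && it.hasNext())
            page.add(it.next().getValue());
        return page;
    }

    @Override
    public void close() {
        cursor.close(); // release server-side query resources when done
    }
}
```

The trade-off is the session handling discussed in the comments below: each client needs its own open cursor, and it must be closed when the client is finished with it.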
A SQL SELECT query with LIMIT and OFFSET specified is also an option. But if you have more than one node, then LIMIT + OFFSET records will be loaded from each node to the reducer during execution. You should take that into account.
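The SQL option could look like the following sketch. It assumes Account is registered as a query entity on the cache and has an indexed id field to give the pages a stable order; neither is shown in the question:

```java
import java.util.List;
import java.util.UUID;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.client.ClientCache;

public class AccountSqlPager {
    private final ClientCache<UUID, Account> accounts;

    public AccountSqlPager(ClientCache<UUID, Account> accounts) {
        this.accounts = accounts;
    }

    // Stateless offset-based page. With N server nodes, each node may ship
    // up to OFFSET + LIMIT rows to the reducer, so deep pages get expensive.
    public List<List<?>> getPage(int offset, int limit) {
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "SELECT _val FROM Account ORDER BY id LIMIT ? OFFSET ?")
            .setArgs(limit, offset);

        return accounts.query(qry).getAll(); // at most `limit` rows cross the wire
    }
}
```

Unlike the iterator approach, this keeps no state between requests, at the cost of the per-node OFFSET + LIMIT overhead noted above.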
Thanks for this answer @Denis. Can you provide some example code? Can you give the iterator a starting point, i.e. where you left off? I'm a bit confused about how you would serve a large data set to web clients without breaking the server...
– Gadi
2 days ago
@Gadi, you can find all information on scan queries here: apacheignite.readme.io/docs/cache-queries#section-scan-queries You can treat QueryCursor as an Iterable. ScanQuery cannot be told where to start, but you can keep an iterator without closing it until it's no longer needed.
– Denis
2 days ago
Thanks, @Denis, but that will make for very messy session handling, which I was hoping to avoid.
– Gadi
2 days ago
answered 2 days ago
Denis