Best practice for preventing duplicate AggregateCreated events
I have the following (Axon) aggregate:

@Aggregate
@NoArgsConstructor
public class Car {

    @AggregateIdentifier
    private String id;

    @CommandHandler
    public Car(CreateCar command) {
        apply(new CarCreated(command.getId()));
    }

    @EventSourcingHandler
    public void carCreated(CarCreated event) {
        this.id = event.getId();
    }
}
I can create a car by submitting a CreateCar command with a specific id, causing a CarCreated event. That is great.
However, if I send another CreateCar command with the same id, the aggregate cannot validate the command (it cannot check that the given id already exists). It will simply fire a new CarCreated event. Which is a lie.
What would be the best approach to make sure the CreateCar command fails if the car already exists?
Naturally I could first check the repository, but this won't prevent race conditions...
java cqrs axon
Use a UUID, or generate the id in the database.
– giorgi dvalishvili
Nov 7 at 10:47
And how does this prevent two CarCreated events?
– dstibbe
Nov 7 at 10:58
The same creation event with the same aggregate identifier can't be saved in the event store, so this transaction should fail at the database level and be rolled back. I suspect you're using UUIDs for the aggregate identifier, and actually want the aggregates to also have a unique constraint on another identifier. In that case, as far as Axon is concerned, you're creating a new unique aggregate that just happens to have a payload identical to another one. You will have to solve this with a lock and a read on a query table.
– Mzzl
Nov 9 at 16:45
This is usually handled using optimistic concurrency. When emitting an event to your storage technology, you should send the expected event index along with the event. For your creation event that would be 0 (in 0-based systems). Your storage technology hopefully supports this; it will check that no event with index 0 has already been stored, and fail otherwise. You get the current event index from hydrating your aggregate: when you load the events for your aggregate (before you handle the command), you simply count them and use that count as the expected event index.
– Noel Widmer
Nov 12 at 10:20
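The expected-index scheme described in the comment above can be sketched as follows. This is a minimal in-memory illustration, not Axon's actual event store; the class and method names (InMemoryEventStore, appendExpecting) are hypothetical.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Optimistic append: the caller states the index it expects the stream to
// be at; the store rejects the append if the stream has already moved on.
class InMemoryEventStore {
    private final Map<String, List<Object>> streams = new HashMap<>();

    public synchronized void appendExpecting(String aggregateId, int expectedIndex, Object event) {
        List<Object> stream = streams.computeIfAbsent(aggregateId, k -> new ArrayList<>());
        if (stream.size() != expectedIndex) {
            throw new IllegalStateException("concurrency conflict: expected index "
                    + expectedIndex + " but stream is at " + stream.size());
        }
        stream.add(event);
    }

    public synchronized List<Object> load(String aggregateId) {
        return new ArrayList<>(streams.getOrDefault(aggregateId, List.of()));
    }
}
```

Both racing CreateCar commands for the same id would hydrate an empty stream and send expected index 0; only the first append can succeed, the second raises a conflict.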
asked Nov 7 at 10:23
dstibbe
2 Answers
What would be the best approach to make sure the CreateCar command fails if the car already exists? Naturally I could first check the repository, but this won't prevent race conditions...
There is no magic.
If you want to avoid racy writes, then you either need to acquire a lock on the data store, or you need a data store with compare-and-swap semantics.
With a lock, you have a guarantee that no conflicting update will occur between your read of the data in the store and your subsequent write.

Lock lock = lockForId(id);
lock.lock();
try {
    Optional<Car> root = repository.load(id);
    if (root.isEmpty()) {
        Car car = createCar(...);
        repository.store(car);
    } else {
        // deal with the fact that the car has already been created
    }
} finally {
    lock.unlock();
}

You'd like to have one lock per aggregate, but creating locks has the same race conditions that creating aggregates does, so you will likely end up with something like a coarse-grained lock to restrict access to the operation.
With compare-and-swap, you push the contention management toward the data store. Instead of sending the store a PUT, you send a conditional PUT.

Optional<Car> root = repository.load(id);
if (root.isEmpty()) {
    Car car = createCar(...);
    repository.replace(car, null); // precondition: no value stored yet
} else {
    // deal with the fact that the car has already been created
}

We don't need the locks any more, because we are precisely describing to the store the precondition (e.g. If-None-Match: *) that needs to be satisfied.
The compare-and-swap semantic is commonly supported by event stores; "appending" new events onto a stream is done by crafting a query that identifies the expected position of the tail pointer, with specially encoded values to identify cases where the stream is expected to be created (for example, Event Store supports an ExpectedVersion.NoStream semantic).
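The conditional-PUT idea above can be demonstrated with a real compare-and-swap-on-absence primitive from the JDK: ConcurrentMap.putIfAbsent stores a value only when no entry exists yet, which is exactly the If-None-Match: * precondition. The repository here is a toy sketch; CarRepository and createIfAbsent are hypothetical names, not an Axon API.

```java
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class Car {
    final String id;
    Car(String id) { this.id = id; }
}

// Conditional PUT: the existence check and the write happen as one atomic
// operation inside the store, so two racing creates cannot both succeed.
class CarRepository {
    private final ConcurrentMap<String, Car> store = new ConcurrentHashMap<>();

    /** Returns true if the car was created, false if the id already existed. */
    public boolean createIfAbsent(Car car) {
        return store.putIfAbsent(car.id, car) == null;
    }

    public Optional<Car> load(String id) {
        return Optional.ofNullable(store.get(id));
    }
}
```

Because the check-and-write is atomic in the store, there is no window for a second CreateCar with the same id to slip in between a read and a write.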
answered Nov 7 at 13:41
VoiceOfUnreason
However, if I send another CreateCar command, with the same Id, the command cannot be validated by the aggregate (that the given id already exists). Subsequently it will simply fire a new CarCreated event. Which is a lie.
Axon actually takes care of this for you. When an aggregate publishes an event, it is not published to other components immediately; it is staged in the Unit of Work, awaiting completion of the handler execution.
After handler execution, a number of "prepare commit" handlers are invoked. One of these stores the aggregate (which is a no-op when using event sourcing); another publishes the events (within the scope of a transaction).
Depending on whether you use event sourcing or not, either adding the aggregate instance to persistent storage will fail (duplicate key), or publishing the creation event will fail (duplicate aggregate identifier + sequence number).
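The failure mode described above can be modeled as a toy uniqueness check on (aggregateIdentifier, sequenceNumber), which is the kind of unique key an event store table enforces. EventTable and its method are hypothetical names for illustration, not Axon's actual storage schema.

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of the event store's unique key on
// (aggregateIdentifier, sequenceNumber): a second creation event for the
// same aggregate id also carries sequence number 0, collides on the key,
// and is rejected, so the duplicate CreateCar command fails.
class EventTable {
    private final Set<String> uniqueKey = new HashSet<>();

    public void insert(String aggregateId, long sequenceNumber, String payload) {
        if (!uniqueKey.add(aggregateId + "#" + sequenceNumber)) {
            throw new IllegalStateException(
                    "duplicate key: " + aggregateId + "#" + sequenceNumber);
        }
        // a real store would persist the payload here
    }
}
```

In a relational event store the same effect comes from a database unique constraint, and the violation rolls back the whole Unit of Work.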
answered Nov 12 at 8:17
Allard