Confused about `K.local_conv1d` and an almost-identical implementation for CapsuleNet
The essential (and problematic) code of my Capsule Net is shown below.
The problem is that when I switch from the Part I code to the Part II code, I get an incompatible-dimensions error.
The difference between the two parts, in my opinion, is that the Part II code does not compute one of the dimensions (input_num_capsule) of u_vecs.
Is it that Keras cannot exchange two None dimensions?
If you want to try it yourself, please fork this code on Kaggle.
```
from keras import backend as K
from keras.layers import Layer

class Capsule(Layer):
    # ... (constructor, build, routing, etc. omitted)

    def call(self, u_vecs):
        if self.share_weights:
            u_hat_vecs = K.conv1d(u_vecs, self.W)
        else:
            # Part I ##############################################
            # `local_conv1d` logic when kernel_size=1 and stride=1:
            #   u_vecs:            [batch_size, input_num_capsule, input_dim_capsule]
            #   each slice:        [1, batch_size, input_dim_capsule]   (slice_len = 1)
            #   concatenated X:    [input_num_capsule, batch_size, input_dim_capsule]
            #   W:                 [input_num_capsule, input_dim_capsule, num_capsule * dim_capsule]
            #   K.batch_dot(X, W): [input_num_capsule, batch_size, num_capsule * dim_capsule]
            #   permuted back:     [batch_size, input_num_capsule, num_capsule * dim_capsule]
            #######################################################
            u_hat_vecs = K.local_conv1d(u_vecs, self.W, [1], [1])

            # Part II #############################################
            # In my view this is identical to
            # `local_conv1d(u_vecs, self.W, [1], [1])`, except that the
            # first dimension of `x_aggregate` is already determined here.
            #######################################################
            u_vecs = K.permute_dimensions(u_vecs, (1, 0, 2))
            u_hat_vecs = K.batch_dot(u_vecs, self.W)
            u_hat_vecs = K.permute_dimensions(u_hat_vecs, (1, 0, 2))
        # ...
```
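To make the shape walkthrough in the Part I comments concrete, here is a small self-contained check (NumPy rather than the Keras backend, with made-up sizes, so it is only a sketch of the logic, not the actual Keras code) showing that the slice-and-stack formulation of `local_conv1d` with kernel_size=1 and stride=1 and the permute / batch_dot / permute formulation of Part II agree:

```
import numpy as np

# Hypothetical sizes, chosen only for this check.
batch_size, input_num_capsule, input_dim_capsule = 2, 5, 8
num_capsule, dim_capsule = 3, 4

u_vecs = np.random.randn(batch_size, input_num_capsule, input_dim_capsule)
W = np.random.randn(input_num_capsule, input_dim_capsule, num_capsule * dim_capsule)

# Part I logic: stack one slice per position, as local_conv1d does with
# kernel_size=1 and stride=1.
# x_aggregate: [input_num_capsule, batch_size, input_dim_capsule]
x_aggregate = np.stack([u_vecs[:, i, :] for i in range(input_num_capsule)], axis=0)
# The einsum mirrors K.batch_dot for 3-D tensors (contract the last axis of x
# with the middle axis of W): [input_num_capsule, batch_size, num_capsule * dim_capsule]
out1 = np.einsum('ibd,ido->ibo', x_aggregate, W)
out1 = np.transpose(out1, (1, 0, 2))  # [batch_size, input_num_capsule, num_capsule * dim_capsule]

# Part II logic: permute, batch_dot, permute back.
u_perm = np.transpose(u_vecs, (1, 0, 2))
out2 = np.einsum('ibd,ido->ibo', u_perm, W)
out2 = np.transpose(out2, (1, 0, 2))

print(np.allclose(out1, out2))  # True: the two formulations produce the same values
```

So the two parts really do compute the same tensor; the error comes from what happens to `u_vecs` afterwards (see the comment further down).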
tensorflow keras deep-learning
asked Nov 7 at 13:31 – Shi-Feng Ren
I have solved this problem. The reason is not that Keras is unable to permute None dimensions. The real issue is that you should restore u_vecs to its original layout after permuting its dimensions, since the code that follows assumes u_vecs still has its original shape.
– Shi-Feng Ren, Nov 8 at 3:22
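Following up on the comment above, here is a minimal sketch of the Part II variant that leaves `u_vecs` untouched. The name `u_vecs_t` is introduced here only for illustration, and the sketch assumes (as the comment says) that the code after this block expects `u_vecs` in its original `[batch_size, input_num_capsule, input_dim_capsule]` layout:

```
        else:
            # Permute into a separate tensor instead of overwriting u_vecs, so
            # u_vecs keeps its original layout for the code that follows.
            u_vecs_t = K.permute_dimensions(u_vecs, (1, 0, 2))        # [input_num_capsule, batch_size, input_dim_capsule]
            u_hat_vecs = K.batch_dot(u_vecs_t, self.W)                # [input_num_capsule, batch_size, num_capsule * dim_capsule]
            u_hat_vecs = K.permute_dimensions(u_hat_vecs, (1, 0, 2))  # [batch_size, input_num_capsule, num_capsule * dim_capsule]
```

Equivalently, you could keep the original Part II code and apply `K.permute_dimensions(u_vecs, (1, 0, 2))` once more after computing `u_hat_vecs`, which is the "restore u_vecs to its original status" the comment describes.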