Polynomial Kernel not PSD?
I am currently working through different machine learning methods at a low level. Since the polynomial kernel
K(x, z) = (1 + x^T z)^d
is one of the more commonly used kernels, I was under the assumption that it must yield a positive (semi-)definite kernel matrix for any fixed set of data {x1, ..., xn}.
However, in my implementation this does not seem to be the case, as the following example shows:
import numpy as np

np.random.seed(93)
x = np.random.uniform(0, 1, 5)          # 5 random 1-D data points in [0, 1)

# Polynomial kernel with d = 1: K[i, j] = 1 + x_i * x_j
kernel = 1 + x[:, None] @ x.T[None, :]
np.linalg.eigvals(kernel)
The output would be
[ 6.9463439e+00 1.6070016e-01 9.5388039e-08 -1.5821310e-08 -3.7724433e-08]
I'm also getting negative eigenvalues for d > 2.
Am I totally misunderstanding something here, or is the polynomial kernel simply not PSD?
EDIT: In a previous version, I used x = np.float32(np.random.uniform(0, 1, 5))
to reduce computing time, which led to even more negative eigenvalues (I believe due to numerical instabilities, as @user2357112 mentioned). I guess this is a good example that precision does matter. Since small negative eigenvalues still occur even with float64 precision, the follow-up question is how to avoid such numerical instabilities.
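One common mitigation, shown below as a minimal sketch (not part of the original question; the tolerance and jitter values are arbitrary illustration choices): the kernel matrix is symmetric by construction, so use the symmetric eigensolver np.linalg.eigvalsh instead of np.linalg.eigvals, treat eigenvalues within a small relative tolerance of zero as zero, or add a tiny jitter to the diagonal before handing the matrix to a solver that requires strict positive definiteness.
import numpy as np

np.random.seed(93)
x = np.random.uniform(0, 1, 5)
d = 1

# The kernel matrix is symmetric by construction; symmetrize explicitly to
# remove round-off asymmetry, then use the symmetric eigensolver.
K = (1 + np.outer(x, x)) ** d
K = (K + K.T) / 2
eigvals = np.linalg.eigvalsh(K)          # real eigenvalues, sorted ascending

# Treat eigenvalues within a small relative tolerance of zero as zero.
tol = 1e-12 * eigvals.max()
print(np.all(eigvals >= -tol))           # should be True for this example

# Alternatively, add a tiny jitter to the diagonal, which makes the matrix
# numerically positive definite (e.g. before a Cholesky factorization).
K_jittered = K + 1e-10 * np.eye(len(x))
With float64, the spurious negative eigenvalues are typically on the order of machine epsilon times the largest eigenvalue, so a relative tolerance of that magnitude is a reasonable cutoff.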
python machine-learning svm kernel-methods
Those values are 0 up to numerical error.
– user2357112
Nov 8 at 4:13
Also I think you've got your transposes backward.
– user2357112
Nov 8 at 4:16
What would be the way to go to avoid this numerical instability? And no, the transposes should be that way: I want to get an n×n matrix out of an n×1 vector, where the (i,j)-th entry is x_i*x_j.
– emilaz
Nov 8 at 4:20
Already helped a lot though, thanks.
– emilaz
Nov 8 at 4:41
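For reference, a small check of the point discussed in the comments (assuming x is a 1-D NumPy array as in the question): on a 1-D array, .T is a no-op, so the broadcasting expression is simply the outer product, which np.outer computes directly.
import numpy as np

np.random.seed(93)
x = np.random.uniform(0, 1, 5)

# For a 1-D array, x.T is a no-op, so both expressions build the
# n x n matrix whose (i, j) entry is x_i * x_j.
a = x[:, None] @ x.T[None, :]
b = np.outer(x, x)
print(np.allclose(a, b))   # True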