Polynomial Kernel not PSD?











I am currently working through different machine learning methods at a low level. Since the polynomial kernel

K(x, z) = (1 + x^T z)^d

is one of the more commonly used kernels, I was under the assumption that it must yield a positive (semi-)definite kernel matrix for any fixed set of data {x1, ..., xn}.
However, in my implementation this does not seem to be the case, as the following example shows:



import numpy as np

np.random.seed(93)
x = np.random.uniform(0, 1, 5)

# Polynomial kernel with d = 1: K[i, j] = 1 + x_i * x_j
# (x is 1-D, so x.T is a no-op; the outer product comes from the column/row views.)
kernel = 1 + x[:, None] @ x[None, :]
print(np.linalg.eigvals(kernel))


The output is:



[ 6.9463439e+00  1.6070016e-01  9.5388039e-08 -1.5821310e-08  -3.7724433e-08]


I'm also getting negative eigenvalues for d > 2.
Am I misunderstanding something here, or is the polynomial kernel simply not PSD?
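
For reference, a quick way to see that these numbers are consistent with a PSD matrix: for d = 1 the kernel matrix equals 1·1^T + x·x^T, a sum of two rank-one PSD matrices, so its rank is at most 2 and three eigenvalues are exactly zero in exact arithmetic; the values of order 1e-08 of either sign above are round-off noise around zero. A minimal check (a sketch assuming the same seed and setup as above):

import numpy as np

np.random.seed(93)
x = np.random.uniform(0, 1, 5)

# d = 1 kernel: all-ones matrix plus the outer product of x with itself -> rank <= 2
kernel = 1 + np.outer(x, x)

# Three eigenvalues are exactly zero in exact arithmetic; the ~1e-08 values
# reported by eigvals are floating-point round-off around zero.
print(np.linalg.matrix_rank(kernel))   # expected: 2
print(np.linalg.eigvalsh(kernel))      # symmetric eigensolver, ascending order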



EDIT: In a previous version I used x = np.float32(np.random.uniform(0,1,5)) to reduce computing time, which led to a larger number of negative eigenvalues (I believe due to numerical instabilities, as @user2357112 mentioned). I guess this is a good example that precision does matter. Since small negative eigenvalues still occur even with float64 precision, the follow-up question is how to avoid such numerical instabilities.
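
A common practical remedy (a sketch, not taken from the original post): compute the spectrum with the symmetric eigensolver and clip the tiny negative eigenvalues to zero, or add a small "jitter" to the diagonal before factorizing so that routines such as Cholesky go through. The degree d = 3 and the jitter scale below are illustrative choices, not fixed rules.

import numpy as np

np.random.seed(93)
x = np.random.uniform(0, 1, 5)
K = (1 + np.outer(x, x)) ** 3                # polynomial kernel with d = 3

# Option 1: symmetric eigendecomposition, clip round-off negatives, rebuild.
w, V = np.linalg.eigh(K)
K_psd = (V * np.clip(w, 0.0, None)) @ V.T    # V diag(max(w, 0)) V^T

# Option 2: add a small jitter to the diagonal so a Cholesky factorization succeeds.
jitter = 1e-8 * np.trace(K) / K.shape[0]     # illustrative scale
chol = np.linalg.cholesky(K + jitter * np.eye(K.shape[0]))

print(np.linalg.eigvalsh(K_psd))             # spectrum of the clipped reconstruction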










python machine-learning svm kernel-methods






edited Nov 8 at 4:55
asked Nov 8 at 4:06

emilaz
708

  • 3




    Those values are 0 up to numerical error.
    – user2357112
    Nov 8 at 4:13










  • Also I think you've got your transposes backward.
    – user2357112
    Nov 8 at 4:16










  • What would be the way to go to avoid this numerical instability? And no, the transposes should be that way: I want to get an n×n matrix out of an n×1 vector, where the (i, j)-th entry is x_i*x_j.
    – emilaz
    Nov 8 at 4:20










  • already helped a lot though, thanks.
    – emilaz
    Nov 8 at 4:41













