Getting accurate interpolated probability from logistic regression equation
$begingroup$
I need to ascertain what Rasch score a student must attain to have a 70% probability of passing a future criterion test (the tests are correlated; the results below are output from the logistic regression equation).
I fit a logistic regression to a series of Rasch items. Because the Rasch items represent discrete ability scores, and the number of items was not very large (15 items per student), I have to interpolate what Rasch score would be needed for a 70% probability of passing the criterion. Below are the output and the code I have used to try to compute the probability.
intercept = -0.8392
slope = 0.4120
Finding the score at which the probability is 0.70, given the above intercept and slope:
#Eq1
exp((log(0.70) - intercept)/slope)
#Output: 3.225788
This output would indicate that a Rasch score of 3.225788 corresponds to a probability of 0.70. But when I plug 3.225788 back into the model, the predicted probability comes out to 0.62.
#Eq2
exp(-0.8392 + 0.4120*(3.225788)) / (1 + exp(-0.8392 + 0.4120*(3.225788)))
#Output: 0.62
I also tried repeating equation 1 after first assigning log(0.7) to an object (p), hoping this would avoid a rounding error of the kind described in McElreath's "Statistical Rethinking", but it didn't appear to help.
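As a sanity check, here is a minimal Python mirror of the two R expressions above (using the same intercept and slope as reported); it confirms that the round trip does not recover 0.70:

```python
import math

intercept = -0.8392
slope = 0.4120

# Eq1 as written: exponentiate (log(p) - intercept)/slope
x = math.exp((math.log(0.70) - intercept) / slope)  # approximately 3.2258

# Eq2: plug that value back into the logistic model
eta = intercept + slope * x
p = math.exp(eta) / (1 + math.exp(eta))

print(round(p, 2))  # 0.62 -- not the 0.70 we asked for
```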
Please do let me know if you need a reproducible dataset. I thought perhaps the intercept/slope would be enough, but can put together more if needed.
r logistic
$endgroup$
migrated from stackoverflow.com Nov 25 '18 at 10:18
This question came from our site for professional and enthusiast programmers.
asked Nov 24 '18 at 14:10
aleksis.paul
1 Answer
$begingroup$
I think your arithmetic is wrong ...

- Back-transform 0.7 to the log-odds (logit) scale: log(x/(1-x)), or qlogis() in R:

p <- 0.7
log(p/(1-p))          ## 0.847
logit.p <- qlogis(p)  ## same

- Solve for the desired value (a + b*x = logit.p -> x = (logit.p - a)/b):

int <- -0.8392
slope <- 0.4120
val <- (logit.p - int)/slope ## 4.093

- Checking: the logistic function (exp(x)/(1+exp(x)), or equivalently 1/(1+exp(-x))) is also available as plogis() in R:

plogis(int + slope*val) ## 0.7

There's a dose.p function in the MASS package that will do this automatically (and compute approximate standard errors) when provided with a glm object.
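The same steps can be cross-checked outside R. Here is a minimal Python sketch using the coefficients above, with qlogis/plogis rewritten as the logit and logistic functions:

```python
import math

def qlogis(p):
    # logit: back-transform a probability to the log-odds scale
    return math.log(p / (1 - p))

def plogis(x):
    # logistic: map log-odds back to a probability
    return 1 / (1 + math.exp(-x))

int_ = -0.8392
slope = 0.4120

logit_p = qlogis(0.7)               # approximately 0.847
val = (logit_p - int_) / slope      # approximately 4.093
check = plogis(int_ + slope * val)  # recovers 0.7
```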
$endgroup$
$begingroup$
Thank you! I’m a bit embarrassed that I didn’t notice I wasn’t properly transforming the probability into odds before trying to solve the equation.
$endgroup$
– aleksis.paul
Nov 25 '18 at 1:19
answered Nov 24 '18 at 17:41
Ben Bolker