Getting accurate interpolated probability from logistic regression equation


























I need to determine the specific Rasch item score a student must attain to have a 70% probability of passing a future criterion test (the tests are correlated; the results below are output from the logistic regression).

I fit a logistic regression on a series of Rasch items. Because the Rasch items represent discrete ability scores, and the number of items was not very large (15 items per student), I have to interpolate which Rasch score corresponds to a 70% probability of passing the criterion. Below are the coefficients and the code I have tried to use to compute that score.



intercept = -0.8392
slope = 0.4120


Finding the score that corresponds to a probability of 0.70, given the above intercept/slope:



#Eq1
exp((log(0.70) - intercept)/slope)
#Output: 3.225788


This output would indicate that a Rasch score of 3.225788 corresponds to a probability of 0.70. But when I plug 3.225788 back into the model to check, the predicted probability comes out to 0.62.



#Eq2
exp(-0.8392 + 0.4120*(3.225788)) / (1 + exp(-0.8392 + 0.4120*(3.225788)))
#Output: 0.62


I also tried repeating equation 1 after first assigning log(0.7) to an object (p), hoping this would avoid a rounding error I read about in McElreath's "Statistical Rethinking", but it didn't appear to help.



Please let me know if you need a reproducible dataset. I thought the intercept/slope would be enough, but I can put together more if needed.






















migrated from stackoverflow.com Nov 25 '18 at 10:18


This question came from our site for professional and enthusiast programmers.







Tags: r, logistic






asked Nov 24 '18 at 14:10 by aleksis.paul







1 Answer





































          I think your arithmetic is wrong ...




• back-transform 0.7 to the log-odds (logit) scale: log(x/(1-x)), or qlogis() in R:


          p <- 0.7
          log(p/(1-p)) ## 0.847
          logit.p <- qlogis(p) ## same



• Solve for the desired value (a + b*x = logit(p) -> x = (logit(p) - a)/b):


          int <- -0.8392
          slope <- 0.4120
          val <- (logit.p-int)/slope ## 4.093



          • Checking: the logistic function (exp(x)/(1+exp(x)), or 1/(1+exp(-x))) is also available as plogis() in R:


          plogis(int+slope*val)  ## 0.7


There's a dose.p function in the MASS package that will do this automatically (and compute approximate standard errors) when provided with a glm object.
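As a minimal sketch of that approach (the data and the variable names `pass` and `rasch` below are hypothetical stand-ins, simulated only so the example is self-contained):

```r
library(MASS)

## Simulated stand-in data: a binary criterion result and a Rasch score
set.seed(1)
dat <- data.frame(rasch = runif(200, -2, 8))
dat$pass <- rbinom(200, 1, plogis(-0.84 + 0.41 * dat$rasch))

fit <- glm(pass ~ rasch, family = binomial, data = dat)

## dose.p inverts the fitted curve: cf = 1:2 selects the intercept and slope
## coefficients, p = 0.7 is the target probability. It returns the estimated
## score at which Pr(pass) = 0.7, with an approximate standard error.
dose.p(fit, cf = 1:2, p = 0.7)
```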




























• Thank you! I’m a bit embarrassed that I didn’t notice I wasn’t properly transforming the probability into odds before trying to solve the equation. – aleksis.paul Nov 25 '18 at 1:19
answered Nov 24 '18 at 17:41 by Ben Bolker