How to perform up-sampling using the sample() function (PySpark)
I am working on a binary classification machine learning problem and am trying to balance the training set, as the target class variable is imbalanced. I am using PySpark to build the model.



Below is code that works to balance the data:



train_initial, test = new_data.randomSplit([0.7, 0.3], seed=2018)
train_initial.groupby('label').count().toPandas()

   label   count
0    0.0  712980
1    1.0    2926

train_new = train_initial.sampleBy('label', fractions={0: 2926./712980, 1: 1.0}).cache()


The above code performs under-sampling, but I think this might lead to a loss of information. However, I am not sure how to perform up-sampling. I also tried the sample function, as below:



train_up = train_initial.sample(True, 10.0, seed=2018)


Although it increases the count of 1s in my data set, it also increases the count of 0s and gives the result below:



   label    count
0    0.0  7128722
1    1.0    29024


Can someone please help me achieve up-sampling in PySpark?



Thanks a lot in advance!
machine-learning pyspark random-forest sampling
asked Nov 13 '18 at 2:57 by Tushar Mehta
1 Answer
The problem is that you are oversampling the whole data frame. You should instead filter the two classes apart and resample only the minority class:



import pandas as pd

# Split by class, then draw the minority class with replacement
# until it matches the majority count
df_class_0 = df_train[df_train['label'] == 0]
df_class_1 = df_train[df_train['label'] == 1]
count_class_0 = len(df_class_0)  # majority-class row count
df_class_1_over = df_class_1.sample(count_class_0, replace=True)
df_train_over = pd.concat([df_class_0, df_class_1_over], axis=0)


The example comes from https://www.kaggle.com/rafjaa/resampling-strategies-for-imbalanced-datasets.



Please note that there are better ways to perform oversampling (e.g. SMOTE).
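
Since the question asks about PySpark specifically, the same idea can be translated to Spark DataFrames. Below is a minimal, untested sketch of that translation; it assumes the train_initial DataFrame from the question, and that a fraction above 1.0 is acceptable when sampling with replacement:

major_df = train_initial.filter(train_initial['label'] == 0)
minor_df = train_initial.filter(train_initial['label'] == 1)

# Majority-to-minority ratio, e.g. 712980 / 2926 ~ 243.7
ratio = float(major_df.count()) / minor_df.count()

# With replacement, fraction > 1.0 draws each minority row roughly
# `ratio` times on average, so the classes end up roughly balanced.
minor_over = minor_df.sample(withReplacement=True, fraction=ratio, seed=2018)

train_up = major_df.union(minor_over)
train_up.groupby('label').count().show()

Note that union requires both frames to share a schema, which holds here because both come from filters on the same DataFrame, and that the resulting counts are only approximately equal because Spark samples each row independently.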

answered Nov 13 '18 at 9:37 by Roberto

• Thanks a lot for the response. The sample function in PySpark does not take a class count. Can I instead use the proportion of class 0 over the entire data set to sample class 1, and would it give the same result? Also, I am not sure if I can use SMOTE in PySpark; I could not find any library for SMOTE in the PySpark documentation. Any idea how I can use SMOTE in PySpark?
  – Tushar Mehta, Nov 13 '18 at 19:59
• @TusharMehta Just to be clear, I'm not a PySpark user. I suppose you can use the percentage for sampling. Can't you use the standard sample function for the resampling and pass the result back to PySpark?
  – Roberto, Nov 14 '18 at 6:52
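
Regarding the fraction question in the comments: PySpark's sample takes a fraction rather than a row count, but with withReplacement=True a fraction greater than 1.0 is allowed and is interpreted as the expected number of draws per row, so using the majority-to-minority count ratio as the fraction (as in the sketch above) yields an approximately balanced set. As for SMOTE, Spark MLlib does not ship an implementation; at the time, the usual options were third-party code or collecting a manageably small sample to the driver and applying a Python library such as imbalanced-learn.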