Transform dataset into case class via (wrapped) encoders





I am new to Scala. Excuse my lack of knowledge.
This is my dataset:



val bfDS = sessions.select("bf")
sessions.select("bf").printSchema


|-- bf: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- s: struct (nullable = true)
| | | |-- a: string (nullable = true)
| | | |-- b: string (nullable = true)
| | | |-- c: string (nullable = true)
| | |-- a: struct (nullable = true)
| | | |-- a: integer (nullable = true)
| | | |-- b: long (nullable = true)
| | | |-- c: integer (nullable = true)
| | | |-- d: array (nullable = true)
| | | | |-- element: struct (containsNull = true)
| | | | | |-- a: string (nullable = true)
| | | | | |-- b: integer (nullable = true)
| | | | | |-- c: long (nullable = true)
| | |-- tr: struct (nullable = true)
| | | |-- a: integer (nullable = true)
| | | |-- b: long (nullable = true)
| | | |-- c: integer (nullable = true)
| | | |-- d: array (nullable = true)
| | | | |-- element: struct (containsNull = true)
| | | | | |-- e: string (nullable = true)
| | | | | |-- f: integer (nullable = true)
| | | | | |-- g: long (nullable = true)
| | |-- cs: struct (nullable = true)
| | | |-- a: integer (nullable = true)
| | | |-- b: long (nullable = true)
| | | |-- c: integer (nullable = true)
| | | |-- d: array (nullable = true)
| | | | |-- element: struct (containsNull = true)
| | | | | |-- e: string (nullable = true)
| | | | | |-- f: integer (nullable = true)
| | | | | |-- g: long (nullable = true)


1) I don't think I understand Scala datasets very well. A dataset is composed of rows, but when I print the schema, it shows an array. How does the dataset map to the array? Is each row an element in the array?



2) I want to convert my dataset into a case class.



case class Features( s: Iterable[CustomType], a: Iterable[CustomType], tr: Iterable[CustomType], cs: Iterable[CustomType])


How do I convert my dataset and how do I use encoders?



Many thanks.










Tags: scala dataframe dataset encoder






asked Nov 23 '18 at 15:02 by kaileena, edited Nov 23 '18 at 15:39
1 Answer






Welcome to Stack Overflow. Sadly, this question is too broad for SO; take a look at "How to Ask" to improve this and future questions.

However, I will try to answer a few of your questions.





First, Spark Rows can encode a variety of values, including arrays and structs.



Second, your DataFrame's rows are composed of only one column, bf, of type Array[...].
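For illustration, here is a minimal sketch of what that means at the Row level (assuming the bfDS value from the question; getAs resolves the single array column by name):

import org.apache.spark.sql.Row

bfDS.take(1).foreach { row =>
  // the single field "bf" holds a (possibly null) sequence of struct Rows
  val bf = Option(row.getAs[Seq[Row]]("bf")).getOrElse(Seq.empty[Row])
  println(s"this row's bf array has ${bf.size} elements")
}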



Third, if you want to create a Dataset from your DataFrame, your case class must match your schema, in which case it should look something like this:



case class Features(bf: Array[Elements]) // the field name must match the column name "bf"
case class Elements(s: CustomType, a: CustomType, tr: CustomType, cs: CustomType)
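For reference, here is what a fully spelled-out hierarchy could look like for this particular schema, with field names and types read directly off the printSchema output above (all class names other than Features and Elements are hypothetical, and nullable fields are mapped to Option):

// Hypothetical class names; fields and types mirror the printSchema output.
// Nullable columns and fields map most safely to Option[_].
case class SStruct(a: Option[String], b: Option[String], c: Option[String])
case class AItem(a: Option[String], b: Option[Int], c: Option[Long])
case class AStruct(a: Option[Int], b: Option[Long], c: Option[Int], d: Option[Seq[AItem]])
case class TrItem(e: Option[String], f: Option[Int], g: Option[Long])
case class TrStruct(a: Option[Int], b: Option[Long], c: Option[Int], d: Option[Seq[TrItem]])

case class Elements(
  s:  Option[SStruct],
  a:  Option[AStruct],
  tr: Option[TrStruct],
  cs: Option[TrStruct]  // cs has the same shape as tr in the schema
)

case class Features(bf: Seq[Elements])  // field name matches the column name "bf"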


Finally, an Encoder is used to transform your case classes and their values to and from the Spark internal representation. You shouldn't worry too much about them yet; you just need to import spark.implicits._ and all the encoders you need will be available automatically.



import org.apache.spark.sql.{Dataset, SparkSession}

val spark = SparkSession.builder.getOrCreate()
import spark.implicits._
// bfDS is the single-column DataFrame from the question
val ds: Dataset[Features] = bfDS.as[Features]
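If you'd rather handle the encoder explicitly instead of relying on the implicits, Encoders.product derives an Encoder for any case class, and as accepts it directly. A sketch, using the same Features definition:

import org.apache.spark.sql.{Encoder, Encoders}

// Explicit alternative to `import spark.implicits._`: derive the
// Encoder yourself and pass it to `as`.
val featuresEncoder: Encoder[Features] = Encoders.product[Features]
val explicitDs = bfDS.as[Features](featuresEncoder)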




Also, you should take a look at this as a reference.






answered Nov 23 '18 at 15:27 by Luis Miguel Mejía Suárez
Hello. Why are you so mean? I stated that I did not understand the topic. I thank you anyway for the answer.

– kaileena, Nov 23 '18 at 15:41











