getItem with a column as the argument



























My column col1 is an Array.



I know that col1.getItem(2) lets you access the element at index 2 of the array column. Is there a function that accepts a column as the argument, something like col1.getItem(col2)?



I can create a UDF, but I would have to specify the array's element type (and it can be one of several types), so a generic way would be better and welcome!



The UDFs I use:



    import scala.reflect.ClassTag
    import org.apache.spark.sql.functions.udf

    def retrieveByIndexSingle[T : ClassTag](value: Seq[T], index: Int, offset: Int = 0): T =
      value(index + offset)

    def retrieveByIndexSingleDUDF = udf((value: Seq[Double], index: Int) =>
      retrieveByIndexSingle[Double](value, index))

    def retrieveByIndexSingleSUDF = udf((value: Seq[String], index: Int) =>
      retrieveByIndexSingle[String](value, index))
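
For reference, a minimal call site for one of these UDFs (a sketch assuming a DataFrame df with an array<double> column col1 and an Int column col2; the UDF name is from above, the call itself is illustrative):

    // illustrative usage; requires spark.implicits._ for the $ syntax
    df.withColumn("col3", retrieveByIndexSingleDUDF($"col1", $"col2")).show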









scala apache-spark user-defined-functions

asked Nov 19 '18 at 23:21 – Guillaume G

1 Answer

It is possible to use a SQL expression, for example with expr:



    import org.apache.spark.sql.functions.expr
    import spark.implicits._  // assumes an active SparkSession named spark

    val df = Seq(
      (Seq("a", "b", "c"), 0), (Seq("d", "e", "f"), 2)
    ).toDF("col1", "col2")

    df.withColumn("col3", expr("col1[col2]")).show


    +---------+----+----+
    |     col1|col2|col3|
    +---------+----+----+
    |[a, b, c]|   0|   a|
    |[d, e, f]|   2|   f|
    +---------+----+----+
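
The same 0-based bracket expression can also be written through selectExpr, if you prefer a single SQL string (a minor variation of the snippet above):

    // equivalent spelling using selectExpr
    df.selectExpr("col1", "col2", "col1[col2] AS col3").show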


or, in Spark 2.4 or later, the element_at function:



    import org.apache.spark.sql.functions.element_at

    // element_at is 1-based, hence the + 1
    df.withColumn("col3", element_at($"col1", $"col2" + 1)).show


    +---------+----+----+
    |     col1|col2|col3|
    +---------+----+----+
    |[a, b, c]|   0|   a|
    |[d, e, f]|   2|   f|
    +---------+----+----+


Please note that, at the moment (Spark 2.4), there is an inconsistency between these two methods:




• SQL bracket indexing (col1[col2]) is 0-based.

• element_at indexing is 1-based.
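
If you want a single, type-generic call site without per-type UDFs, a small wrapper over element_at can restore the 0-based convention (a minimal sketch; getAt is a hypothetical name, not a Spark API):

    import org.apache.spark.sql.Column
    import org.apache.spark.sql.functions.element_at

    // hypothetical helper: 0-based lookup on top of the 1-based element_at;
    // works for arrays of any element type (Spark 2.4+), no per-type UDFs needed
    def getAt(arr: Column, idx: Column): Column = element_at(arr, idx + 1)

    // usage: df.withColumn("col3", getAt($"col1", $"col2")).show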






edited Nov 20 '18 at 1:10 · answered Nov 20 '18 at 1:05 – user10465355
• thanks. The 1-based notation is terrible ... – Guillaume G, Nov 20 '18 at 2:46