Python: How to tokenize every type of URL path?





I have a dataframe of website URLs. I need to first extract the URL domain (e.g. google.com) and the URL path (e.g. foo/foo2/foo3/sjj.html), and second to tokenize the path part of each URL. The problem is that the paths can come in any of the following forms:



1- https://www.politics.com/watch?v=4PykB_cU 
(desired output: [watch])

2- https://www.politics.com/video/2014/USA/hello_world_how_are_you
(desired output: [video, USA, hello, world, how, are, you])

3- https://www.politics.com/video/2014/USA/hello-world-how-are-you
(desired output: [video, USA, hello, world, how, are, you])

4- https://www.politics.com/video/2014/USA/helloworldhowareyou
(desired output: [video, USA, hello, world, how, are, you])

5- https://www.politics.com/video/2014/USA/HelloWorldHowAreYou
(desired output: [video, USA, Hello, World, How, Are, You])

6- https://www.politics.com/1VOuFvY
(desired output: [])


Is there a function or package that can automatically parse and tokenize all of these types of URL paths?
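For reference, the first step (splitting each URL into domain and path) is straightforward with the standard library's urllib.parse; a minimal sketch, using one of the example URLs above:

```python
from urllib.parse import urlparse

url = "https://www.politics.com/video/2014/USA/hello_world_how_are_you"
parsed = urlparse(url)

domain = parsed.netloc                 # 'www.politics.com'
path = parsed.path.strip("/")          # 'video/2014/USA/hello_world_how_are_you'
segments = path.split("/")             # one entry per path component
```

It is the second step, tokenizing each path segment, that varies by form.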










      python-3.x pandas nltk






      asked Nov 23 '18 at 23:59









msmazh

          1 Answer
The first three can be handled with str.split() (or re.split() to cover several delimiters at once).
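For example, a single re.split over the common delimiters (slash, underscore, hyphen) covers forms 1–3; dropping the query string and the purely numeric segments such as the year is an illustrative assumption here, not part of any library behavior:

```python
import re
from urllib.parse import urlparse

def tokenize_delimited(url):
    """Tokenize a URL path by splitting on '/', '_' and '-' (forms 1-3)."""
    path = urlparse(url).path            # .path excludes the '?v=...' query string
    tokens = re.split(r"[/_\-]+", path)
    # Drop empty strings and purely numeric segments such as '2014'
    return [t for t in tokens if t and not t.isdigit()]

tokenize_delimited("https://www.politics.com/video/2014/USA/hello_world_how_are_you")
# ['video', 'USA', 'hello', 'world', 'how', 'are', 'you']
```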



For the fifth, you can split at the capital letters with a regex, or just by iterating through the string.
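A regex along these lines splits at case boundaries (form 5) while keeping all-caps runs such as USA intact; this is a common CamelCase-splitting idiom rather than anything URL-specific:

```python
import re

def split_camel(segment):
    """Split CamelCase into words: a capitalized run of lowercase letters,
    an all-uppercase run not followed by lowercase, or a digit run."""
    return re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", segment)

split_camel("HelloWorldHowAreYou")
# ['Hello', 'World', 'How', 'Are', 'You']
```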



The fourth one will require much more effort. The only method I can think of is entity recognition with the entire English dictionary as entities to match, and even then you'll need to disambiguate some conflicting matches.
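As a sketch of that dictionary-matching idea, a simple dynamic-programming word segmenter works once you have a vocabulary; the word list here is a toy assumption, and in practice you would load a full English word list (or reach for a third-party package such as wordninja or wordsegment, which implement exactly this):

```python
def segment(text, vocab):
    """Split concatenated lowercase text into vocabulary words via DP.

    best[i] holds a segmentation of text[:i], or None if none exists.
    """
    n = len(text)
    best = [None] * (n + 1)
    best[0] = []
    for i in range(1, n + 1):
        for j in range(i):
            if best[j] is not None and text[j:i] in vocab:
                best[i] = best[j] + [text[j:i]]
                break
    return best[n]

vocab = {"hello", "world", "how", "are", "you"}  # toy word list (assumption)
segment("helloworldhowareyou", vocab)
# ['hello', 'world', 'how', 'are', 'you']
```

Note that with a full dictionary several segmentations may be valid, which is the disambiguation problem mentioned above; packages like wordninja break ties with word-frequency statistics.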






answered Nov 24 '18 at 0:10 by John H
• Thanks. I know how to individually take care of the first four types, but I am looking for something that can automatically identify each type and tokenize it accordingly (if it exists).

            – msmazh
            Nov 24 '18 at 0:13











• Ah, I hear you. I haven't heard of one; I'll look around.

            – John H
            Nov 24 '18 at 0:16











