Python: How to tokenize every type of URL paths?
I have a dataframe of website URLs. I need to first extract the URL domains (e.g. google.com) and URL paths (e.g. foo/foo2/foo3/sjj.html), and second to tokenize the path part of each URL. The problem is that the paths can be in any of the following forms:
1- https://www.politics.com/watch?v=4PykB_cU
(desired output: [watch])
2- https://www.politics.com/video/2014/USA/hello_world_how_are_you
(desired output: [video, USA, hello, world, how, are, you])
3- https://www.politics.com/video/2014/USA/hello-world-how-are-you
(desired output: [video, USA, hello, world, how, are, you])
4- https://www.politics.com/video/2014/USA/helloworldhowareyou
(desired output: [video, USA, hello, world, how, are, you])
5- https://www.politics.com/video/2014/USA/HelloWorldHowAreYou
(desired output: [video, USA, Hello, World, How, Are, You])
6- https://www.politics.com/1VOuFvY
(desired output: [])
Is there any function or package that can automatically parse and tokenize all these types of URL paths?
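For context, the first step (splitting domain from path) is straightforward with the standard library's urllib.parse; this is a minimal sketch, which also drops the query string and so handles the ?v=... part of case 1:

```python
from urllib.parse import urlparse

url = "https://www.politics.com/video/2014/USA/hello_world_how_are_you"
parsed = urlparse(url)

domain = parsed.netloc                               # 'www.politics.com'
segments = [s for s in parsed.path.split("/") if s]  # path pieces, empties dropped
# segments -> ['video', '2014', 'USA', 'hello_world_how_are_you']
```

The open part of the question is the second step: tokenizing each segment.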
python-3.x pandas nltk
asked Nov 23 '18 at 23:59
msmazh
1 Answer
The first three can be accomplished with str.split().
For the fifth, you can split at the capital letters with a regex, or just by iterating through the characters.
The fourth will require much more effort. The only method I can think of is entity recognition with the entire English dictionary as the entities to match, and even then you'll need to disambiguate some conflicting matches.
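A sketch combining those ideas (str.split/re.split for the delimiter cases, a regex for the camelCase case); tokenize_segment is a hypothetical helper name, and case 4 would still need a dictionary-based word-segmentation package (e.g. wordninja) on top of this:

```python
import re

def tokenize_segment(seg):
    """Split one path segment into word tokens (covers cases 1-3 and 5)."""
    tokens = []
    for part in re.split(r"[_\-]+", seg):  # cases 2 and 3: '_' and '-' delimiters
        # case 5: break camelCase; also keeps all-caps runs (USA) and digits whole
        tokens.extend(re.findall(r"[A-Z][a-z]+|[A-Z]+(?![a-z])|[a-z]+|\d+", part))
    return tokens

print(tokenize_segment("hello_world_how_are_you"))  # ['hello', 'world', 'how', 'are', 'you']
print(tokenize_segment("HelloWorldHowAreYou"))      # ['Hello', 'World', 'How', 'Are', 'You']
print(tokenize_segment("USA"))                      # ['USA']
```

Note the alternation order in the regex: the all-caps branch uses a negative lookahead so "USA" stays one token instead of splitting before a following lowercase run.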
Thanks. I know how to individually take care of the first four types. But I'm looking for something that can automatically identify each type and tokenize it accordingly (if it exists).
– msmazh
Nov 24 '18 at 0:13
Ah, I hear you. I haven't heard of one. I'll look around.
– John H
Nov 24 '18 at 0:16
answered Nov 24 '18 at 0:10
John H