Hash table vs. other data structures like AVL trees



























I was looking at the complexities of hash tables and AVL trees. Theoretically, inserting and deleting an element in a hash table is O(1), while in an AVL tree it is O(log(n)) (taking n to be the number of elements in the data structure). Seeing these results, hash tables seem better than AVL trees. But if we take into account the size of the structure itself and the time needed to compute the hash function for a given key, aren't AVL trees better?




























  • bigocheatsheet.com

    – plasmacel
    Nov 23 '18 at 0:20











  • I can't imagine an O(log(n)) algorithm would ever be better than an O(1) algorithm for reasonably large data sets - but why don't you try it? SUGGESTION: Create a benchmark that compares the two, and post your results.

    – FoggyDay
    Nov 23 '18 at 0:21











  • @FoggyDay Of course O(log(n)) is worse than O(1), but the add operation of a hash table needs the result of the hash function for the given key. Even though the element is then placed directly at the position the hash function output, if the hash function itself has complexity O(n) in the key length, then for huge keys (a string of 200 characters, for example) wouldn't it be slower than the add operation of an AVL tree, which is always O(log(n)) regardless of what you are actually adding? (I'm talking about overall execution time; I may have used complexity notation wrong.)

    – Angelixus
    Nov 23 '18 at 0:28













  • SUGGESTION: Create a benchmark that compares the two, and post your results

    – FoggyDay
    Nov 23 '18 at 5:03











  • @Angelixus You're forgetting that when searching for the insertion point, you might have to do O(log n) string comparisons of up to 200 characters each. And each of those string comparisons is O(keylength). With hashing, you do one O(keylength) hash code computation. Asymptotic analysis doesn't tell the whole story.

    – Jim Mischel
    Jan 23 at 3:30
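FoggyDay's benchmark suggestion can be sketched quickly in Python. This is a rough sketch, not a rigorous benchmark: `dict` is a real hash table, while the `bisect`-maintained sorted list only stands in for an AVL tree (the standard library has none). Keys are inserted in ascending order, so the list never shifts elements and the measured cost is mostly the O(log n) long-string comparisons per insert.

```python
import time
from bisect import insort

N = 100_000
# Long keys (200+ characters), like the example in the comments above.
keys = [f"key-{i:0200d}" for i in range(N)]

t0 = time.perf_counter()
table = {}
for k in keys:
    table[k] = True        # hash the key once, then O(1) placement
hash_s = time.perf_counter() - t0

t0 = time.perf_counter()
ordered = []
for k in keys:
    insort(ordered, k)     # O(log n) comparisons of long keys per insert
tree_s = time.perf_counter() - t0

print(f"hash table : {hash_s:.3f}s")
print(f"sorted list: {tree_s:.3f}s")
```

Timings will vary by machine, but the shape of the comparison is the point: both sides pay to touch the long keys, the hash table once per insert, the sorted structure once per comparison.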


















data-structures hashtable avl-tree






asked Nov 23 '18 at 0:17









Angelixus
1 Answer

Hash tables and AVL trees serve fundamentally different functions. In short, a hash table is for lookup, whereas a tree structure is for search and traversal.



A hash table gives you very fast retrieval of values that are associated with individual keys. If you've inserted a value with key "xqj57", then you can quickly retrieve it. And if "xqj57" hasn't been inserted, the lookup will say that no such record exists. Both insertion and lookup are O(1) operations.
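In Python, for instance, the built-in `dict` is a hash table and behaves exactly this way (a minimal sketch using the example key above):

```python
# A Python dict is a hash table: insertion and lookup by key are
# O(1) on average.
records = {}
records["xqj57"] = "some value"      # insert under key "xqj57"

print(records.get("xqj57"))          # fast retrieval -> some value
print(records.get("xqj58"))          # key never inserted -> None
```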



AVL trees and similar structures are good for storing things in sorted order. That lets you traverse the structure in sorted order, and also lets you do things like easily retrieve, in order, all records whose key starts with "A", or get the first record whose key is greater than or equal to "foo", etc. Insertion and search are O(log n) operations. Traversal, of course, is O(n).
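Python's standard library has no AVL tree, but a sorted list queried with `bisect` gives a rough sketch of these ordered operations (the keys here are purely illustrative):

```python
from bisect import bisect_left

keys = sorted(["Apple", "Avocado", "Banana", "foo", "foobar", "grape"])

# All records whose key starts with "A": binary-search to the first
# key >= "A", then scan forward while the prefix matches.
i = bisect_left(keys, "A")
with_a = []
while i < len(keys) and keys[i].startswith("A"):
    with_a.append(keys[i])
    i += 1
print(with_a)                            # ['Apple', 'Avocado']

# First record whose key is >= "foo".
print(keys[bisect_left(keys, "foo")])    # foo
```

A balanced tree supports the same queries without keeping a flat array, so inserts stay O(log n) instead of O(n).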



If you want to traverse a hash table in sorted order, you would have to sort it first.



A hash table does have a little extra memory overhead when compared to a tree structure, but not that much.



Unless keys are exceptionally long, or your hashing function is really complicated, the cost of computing a hash value for a key is cheap compared to the cost of the log(n) key comparisons you'll have to do to find something in a tree structure.



If you just want fast insertion and lookup, then it's hard to beat a hash table. If you want to work with a sorted list, then a hash table is definitely not the way to go. An AVL tree, red-black tree, skip list, or other such sorted structure is going to perform better.







































answered Nov 24 '18 at 5:48

Jim Mischel































