data structures - probabilistic skip list space complexity
I have seen this question about probabilistic skip list space consumption (answer), but I think the asker wasn't clear about whether they wanted the expected or the worst-case analysis. So I'm raising the question for debate again and explaining why I'm confused.
To be clear: I'm looking for the space complexity of a probabilistic skip list in the worst case. Here is what I had in mind:
On the one hand, if we assume the maximum number of levels is log(n), it's easy to infer that in the worst case we might have n nodes on each level, giving O(n log n). On the other hand, if we assume there can be more than log(n) levels (e.g., n lists) and set the bound at n levels, we get n · n => O(n^2).
But! I don't understand why we have the right to limit the number of levels at all. If each new level depends on a coin toss, then in the worst case we could get heads (meaning a new level) infinitely many times, so shouldn't we infer that the height isn't bounded at all?! I'm confused.
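To make the coin-toss argument concrete, here is a minimal Python sketch of the uncapped level generator I'm describing (purely illustrative, not any particular implementation):

```python
import random

def random_level_uncapped():
    """Coin-toss level generator with no cap: keep flipping until tails.
    P(level >= k) = 2 ** -(k - 1), so any height is possible in principle,
    just exponentially unlikely -- which is why no fixed bound can hold."""
    level = 1
    while random.random() < 0.5:  # heads: promote the node one more level
        level += 1
    return level
```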
If you don't set an upper bound on the height of the skip list, then there is no upper bound on its worst-case space usage. For any bound you try to place, there is some horribly unlucky and astronomically improbable execution of the skip list that results in a number of layers so high that the bound doesn't hold. For that reason, in this case there isn't an upper bound on space usage.
That said, standard skip list implementations place an upper bound m on the height of each node, with m chosen to be bigger than log n. In that case, the worst-case space usage is Θ(mn), which happens if every node uses all m levels.
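As a rough sketch of what such implementations do (Python again, with MAX_LEVEL as an assumed stand-in for m), the capped level generator looks like this:

```python
import random

MAX_LEVEL = 32  # the cap m; an assumed value, picked so m > log2(n) for realistic n

def random_level_capped(max_level=MAX_LEVEL):
    """Same coin toss, but promotion stops at max_level. Every node then
    stores at most max_level forward pointers, so n nodes take Theta(m * n)
    space in the worst case -- reached if every node comes out at level m."""
    level = 1
    while level < max_level and random.random() < 0.5:
        level += 1
    return level
```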
Hope this helps!