if you’re the author, don’t you have a non-twitter link?
Not at the moment - I just made it and tweeted about it. However, I plan on putting it under “X in One Picture” series on https://roadmap.sh
I posted it elsewhere, for posterity, or something:
I also wondered whether tweets are endorsed here…
I think most of the backlash against Twitter is that it’s a poor format for reading long-form writing, that the content is usually light or unsubstantiated (rumors), and that a tweet is often just a redirect to a better primary source.
For the most part, Twitter links do pretty well in Lobster.rs: https://lobste.rs/domain/twitter.com
A couple of remarks though:
O(3) = 3
Thanks Arthur for the feedback. I mostly kept it as simple as possible for the sake of fitting it in an image and making it easier to understand at the same time. I agree with all the points that you have mentioned and will address them in the associated article. Thank you!
The logarithmic complexity examples on the right are wrong. If T(10) = 1, T(20) = 2, T(30) = 3, T(40) = 4, etc., the running time grows linearly with the input size, since if the input size doubles, the running time doubles. In logarithmic time (e.g. log base 10 for simplicity), you’d expect T(10) = 1, T(100) = 2, T(1000) = 3, etc.
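To make the distinction concrete, here’s a quick sketch (the T functions are illustrative, chosen to reproduce the numbers above):

```python
import math

# Linear growth: doubling the input doubles the running time.
# T(n) = n / 10 reproduces the picture's (incorrect) "logarithmic"
# example: T(10)=1, T(20)=2, T(30)=3, ...
def t_linear(n):
    return n / 10

# Logarithmic growth: multiplying the input by 10 adds one step.
# T(n) = log10(n) gives T(10)=1, T(100)=2, T(1000)=3, ...
def t_log(n):
    return math.log10(n)

assert t_linear(20) == 2 * t_linear(10)  # doubling input doubles time
assert t_log(100) == t_log(10) + 1       # 10x input adds one step
```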
More generally (though I think this is a difference between big-O in theory and how it is used in practice): big-O provides an upper bound on the growth rate of a function. E.g. an algorithm that runs in linear time is also O(n^2). Of course, proving an O(n) upper bound (if possible) is more useful than proving an O(n^2) upper bound.
Ah, good catch. I will fix it, thank you!
Why aren’t they in sorted order?
Very nice, I like it. I would have also added O(n log(n)) to the examples.
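For reference, merge sort is the usual O(n log n) example; a minimal sketch in Python:

```python
def merge_sort(xs):
    # O(n log n): roughly log n levels of halving, with O(n)
    # total merge work at each level.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])
    right = merge_sort(xs[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

assert merge_sort([5, 2, 8, 1, 9]) == [1, 2, 5, 8, 9]
```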
The text for log n complexity implies that the example is binary search when it is in fact a very contrived example ;)
Perhaps you could fit a binary search implementation in there or change the surrounding text.
I like how “the most famous example” is listed twice (O(log n)). Might be good if the code example was actually a binary search.
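For anyone wanting the actual O(log n) example, a minimal iterative binary search fits in a few lines:

```python
def binary_search(sorted_xs, target):
    # O(log n): each comparison halves the remaining search range,
    # which is what makes this logarithmic rather than linear.
    lo, hi = 0, len(sorted_xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_xs[mid] == target:
            return mid
        elif sorted_xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

assert binary_search([1, 3, 5, 7, 9], 7) == 3
assert binary_search([1, 3, 5, 7, 9], 4) == -1
```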
I disagree with the general advice about collapsing multiple complexities: I prefer to keep terms separate when they are easily identifiable, distinct qualities of the input. For example, a hash table insertion is O(k), not O(1), because the cost of hashing the input is proportional to the number of bits (k) in the input, and that isn’t constant; this holds even though the lookup itself isn’t proportional to the number of elements in the set. If you don’t do this, you lose the ability to see clearly how an “O(1)” algorithm can be beaten so incredibly easily by an “O(n)” one (or even an “O(n^2)” one) in many circumstances, and that can mean the difference between success and failure where I work.
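To illustrate the O(k) point, here’s a sketch of FNV-1a (a common non-cryptographic hash): every byte of the key is visited once, so the hashing step alone costs O(k) in the key length, independent of how many elements the table holds.

```python
def fnv1a_64(key: bytes) -> int:
    # FNV-1a, 64-bit variant: the loop runs once per byte of the
    # key, so hashing is O(k) in the key length k, even though the
    # table lookup that follows doesn't depend on the table size.
    h = 0xcbf29ce484222325          # FNV offset basis
    for b in key:
        h ^= b
        h = (h * 0x100000001b3) % 2**64  # FNV prime, mod 2^64
    return h

assert fnv1a_64(b"") == 0xcbf29ce484222325  # empty key: offset basis
assert fnv1a_64(b"a") != fnv1a_64(b"b")
```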