1. 2

    Dynamic programming is such an interesting concept that it took me a good while (and lots of coded examples) to get my head around it.

    The interesting thing is that I have not used it outside of preparation for interviews. I am not sure if it is still the case, but Google and Facebook would often ask problems that needed dynamic programming approaches to solve (e.g. the classic knapsack problem or one of its variations).
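
    For anyone who hasn’t seen it, the 0/1 knapsack table is about the smallest example of the technique. A minimal sketch in C (the weights, values, and capacity are made-up inputs):

    #include <stdio.h>

    /* 0/1 knapsack: best[c] = max value achievable with capacity c */
    int knapsack(const int *w, const int *v, int n, int cap)
    {
        int best[cap + 1];
        for (int c = 0; c <= cap; ++c)
            best[c] = 0;

        for (int i = 0; i < n; ++i)
            for (int c = cap; c >= w[i]; --c) /* downward: each item used at most once */
                if (best[c - w[i]] + v[i] > best[c])
                    best[c] = best[c - w[i]] + v[i];

        return best[cap];
    }

    int main(void)
    {
        int w[] = { 3, 4, 5 };
        int v[] = { 4, 5, 6 };
        printf("%d\n", knapsack(w, v, 3, 7)); /* prints 9 (first two items) */
        return 0;
    }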

    Similarly, I have noticed that the only engineers who can produce dynamic programming solutions off the cuff are either the ones who did competitive programming back in the day (where this is a basic technique) or those who have gone through thorough interview preparation.

    Anyone used dynamic programming in the real world or know of libraries that have this kind of algos implemented?

    1. 7

      I’ve used it in the real world. Had a problem a decade or so ago that required computing the difference between tree-structured data, and a dynamic programming method similar to substring edit-distance was ideal. No libraries implemented it for me, and it was a pretty straightforward implementation via a dynamic programming approach. The use case was in the context of static program analysis where the trees I was working with were ASTs. I wrote part of it up a couple years ago for fun in F# (http://syntacticsalt.com/blog/2015-12-24-comparing-trees-functionally.html).

      1. 1

        That’s a cool use case and the post was a good read. Thanks for sharing!

      2. 5

        Anyone used dynamic programming in the real world or know of libraries that have this kind of algos implemented?

        Dynamic programming algorithms are at the core of many, many bioinformatics and computational biology solutions (DNA sequencing, assembling the human genome, etc.). One of the earliest was Needleman-Wunsch (circa 1970). Local alignments, global alignments, and different cost structures all took off from the same basic trick.
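
        The core recurrence is tiny, for what it’s worth. A sketch of the Needleman-Wunsch fill in C (the match/mismatch/gap scores are made up; real tools use substitution matrices):

        #include <stdio.h>
        #include <string.h>

        #define MATCH     1   /* assumed scoring, for illustration only */
        #define MISMATCH -1
        #define GAP      -1

        static int max3(int a, int b, int c)
        {
            int m = a > b ? a : b;
            return m > c ? m : c;
        }

        /* global alignment score of a and b (Needleman-Wunsch) */
        int nw_score(const char *a, const char *b)
        {
            size_t n = strlen(a), m = strlen(b);
            int t[n + 1][m + 1]; /* C99 VLA; fine for short sequences */

            for (size_t i = 0; i <= n; ++i) t[i][0] = (int)i * GAP;
            for (size_t j = 0; j <= m; ++j) t[0][j] = (int)j * GAP;

            for (size_t i = 1; i <= n; ++i)
                for (size_t j = 1; j <= m; ++j)
                    t[i][j] = max3(t[i-1][j-1] + (a[i-1] == b[j-1] ? MATCH : MISMATCH),
                                   t[i-1][j] + GAP,   /* gap in b */
                                   t[i][j-1] + GAP);  /* gap in a */

            return t[n][m];
        }

        int main(void)
        {
            printf("%d\n", nw_score("GATTACA", "GCATGCU")); /* 0 with these scores */
            return 0;
        }

        Local alignment (Smith-Waterman) is essentially the same fill with scores clamped at zero.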

        1. 4

          I’ve gotten to use dynamic programming at my $DAYJOB, since it was the only reasonable way of solving a problem.

          In school, in competitive programming, and in interview situations, the problems are typically quite well-defined, and the data structure of choice is typically an n-dimensional matrix of integers. For an actual messy real-world problem, issues such as data structure choice, complicated solution types, choice of forward/backward/recursive organization of computation, memory limiting for extreme cases, and making operations parallel crop up. The core took half a day, but making it “industrial strength” took many months with many people involved.

          1. 3

            I think it’s likely that the common uses of such techniques come as part of standard libraries in most languages, so when you need them, you have prepackaged versions. C is the most likely place where you end up rolling your own; in C++ it is needed even less, since the STL lets you run its algorithms on your own objects.

            1. 2

              One recent use I remember was finding a (bit-) optimal parse for a LZ style compression algorithm. Basically it finds the shortest path from each position to the end (source for anyone interested).
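
              In sketch form, something like this (the bit costs, minimum match length, and naive match finder are all stand-ins for the real thing):

              #include <stddef.h>
              #include <stdint.h>

              #define LIT_BITS   9   /* assumed: flag bit + 8-bit literal */
              #define MATCH_BITS 24  /* assumed: flag bit + offset/length encoding */
              #define MIN_MATCH  4

              /* cost[i] = fewest bits needed to encode data[i..n); filled back to
                 front, i.e. the shortest path from each position to the end.
                 The caller provides cost with room for n + 1 entries. */
              void optimal_parse(const uint8_t *data, size_t n, uint32_t *cost)
              {
                  cost[n] = 0;

                  for (size_t i = n; i-- > 0; ) {
                      /* option 1: emit data[i] as a literal */
                      cost[i] = cost[i + 1] + LIT_BITS;

                      /* option 2: emit a match against an earlier position
                         (naive O(n^2) match finder, purely for illustration) */
                      for (size_t j = 0; j < i; ++j) {
                          size_t len = 0;
                          while (i + len < n && data[j + len] == data[i + len])
                              ++len;
                          for (size_t l = MIN_MATCH; l <= len; ++l)
                              if (cost[i + l] + MATCH_BITS < cost[i])
                                  cost[i] = cost[i + l] + MATCH_BITS;
                      }
                  }
              }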

              1. 2

                In university, I had a graph computation problem that had a recursive solution, but you could derive another graph from it that was easier to solve, also recursively. The paper only had pseudo-code, and the proof of the time complexity just assumed you could use another algorithm in the middle that had a certain complexity, calculated using the recursive method.

                In practice, though, using the recursive way of doing things absolutely sucked and switching it to the dynamic approach changed it from calculating results for subgraphs to, instead, combining results from already calculated subgraphs.

                The funny thing was how much more straightforward it seemed for the problem at hand than it was for all the motivating examples before.

                I wish I remembered anything about the problem but it’s all gone with time (this was years ago). It wasn’t particularly complex but it was very satisfying to ‘flip’ it to bottom up and see the effect.

              1. 5

                Seems like a reasonably fair comparison, without any overly contrived examples.

                Globbing is usually avoided in CMake because it is a build system generator (the problem is that the generated build system cannot know when a source file is added or removed).

                In the dependency support section it might be better to use imported targets from “modern CMake” (and if the package is required, generation will fail if it is not found). Something like:

                add_executable(test main.c)
                
                find_package(OpenSSL REQUIRED)
                target_link_libraries(test PRIVATE OpenSSL::SSL)
                
                find_package(ZLIB REQUIRED)
                target_link_libraries(test PRIVATE ZLIB::ZLIB)
                

                Incidentally, I believe if you use the cmake_paths generator with Conan the above should work there as well, which has the benefit (depending on your use case) of making the CMake script independent of the package manager.

                1. 11

                  While the examples convey the idea, there are a number of issues with using them for teaching C. For instance:

                  • The macros are missing parentheses around the macro parameters; SetBit(A, i + 1) will not expand to what you want (see the parenthesized versions below)
                  • There is no range checking
                  • It assumes that int is 32-bit. The standard only guarantees a minimum range for int, which corresponds to 16 bits. You likely want to use one of the fixed-width integer types from C99 like uint32_t.
                  • Signed integers do not necessarily allow you to use all possible bit patterns. Signed int might not be two’s complement, and there could be values that are trap representations (I know this is entering the realm of the unlikely, but if you want portable C you have to consider this).
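
                  For comparison, parenthesized versions would look like this (SetBit is the article’s name; the companion macros are my guesses, and the article’s 32-bit assumption is kept, so the other points above still apply):

                  /* every use of a parameter is wrapped in parentheses, so
                     SetBit(A, i + 1) expands as intended */
                  #define SetBit(A, k)   ((A)[(k) / 32] |=  (1u << ((k) % 32)))
                  #define ClearBit(A, k) ((A)[(k) / 32] &= ~(1u << ((k) % 32)))
                  #define TestBit(A, k)  (((A)[(k) / 32] &   (1u << ((k) % 32))) != 0)
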
                  1. 7

                    You could also use CHAR_BIT * sizeof(unsigned int) to get the number of bits in an unsigned integer (which is better suited for this type of job). Both are constants, so there should be no runtime overhead for calculating that result. I would also be inclined to make the functions static inline (a C99 feature) and define them in a header. That avoids any issues with macros while still producing decent code. Something like:

                    #include <limits.h>
                    #include <stdlib.h>
                    
                    static inline void setbit(unsigned int *a,size_t size,size_t idx)
                    {
                      size_t       off = idx / (CHAR_BIT * sizeof(unsigned int));
                      unsigned int bit = idx % (CHAR_BIT * sizeof(unsigned int));
                      unsigned int val = 1u << bit;
                    
                      if (off >= size) abort();
                    
                      a[off] |= val;
                    }
                    
                    1. 3

                      Out of a really morbid curiosity, are there any modern common platforms where CHAR_BIT isn’t just 8?

                      1. 5

                        It’s very hard to say, as I’ve found very scant information about non-byte-oriented, non-2’s-complement C compilers. The Unisys 1100/2200 use 9 bit characters and they are still in use (for various values of “use”). I’ve also heard that C compilers for DSPs tend to have to define char as larger than 8-bits, but I’ve never worked with a DSP so I can’t say for sure.

                        1. 2

                          Any modern platform where it isn’t 8 is, by definition, not POSIX-compliant (POSIX requires CHAR_BIT to be 8).

                          1. 2

                            The standard (indirectly) requires CHAR_BIT to be at least 8. I’ve never worked on a platform where it wasn’t 8, but have heard about embedded devices where it was 16 or 32 – a quick web search found mention of such as well [1].

                            I think one important thing the UB debacles of recent years have shown is that if you rely on behaviour not guaranteed by the standard, your code may work right now with the current compiler on the current hardware, but may break randomly in the future.

                            [1] https://stackoverflow.com/questions/32091992/is-char-bit-ever-8/38360262

                        2. 1

                          Signed int might not be two’s complement,

                          Are there modern common architectures where this is not the case?

                          1. 3

                            I’m not aware of any non-2’s-complement machines made since the 70s, but as I’ve learned over the years, there are always exceptions. So one may be lurking out there [1]. I’m also not aware of any 2’s complement machines that have a trap representation, but I do know of several 2’s complement byte-addressable machines that can trap on overflow—the VAX [2] and MIPS [3].

                            [1] IEEE-754 floating point math is sign-magnitude, and that’s in wide use these days.

                            [2] There’s a flag in the condition codes register that enables/disables such trapping.

                            [3] The ADD instruction will trap on overflow, but all compilers I’ve used on MIPS default to ADDU, which does not trap.

                        1. 3

                          I think part of the reason is the emphasis many people put on portability of C code. For instance, until recently there was one major compiler vendor who did not support large parts of C99 including designated initializers. Also, some people may be forced to use arcane versions of GCC or proprietary compilers for embedded platforms, which might not support C99.

                          A problem with designated initializers specifically is that they are not part of C++ (at least until C++20), so they are not ideal for example code which might be picked up by both C and C++ programmers.
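
                          For reference, designated initializers look like this (C99):

                          #include <stdio.h>

                          struct point { int x, y; };

                          int main(void)
                          {
                              struct point p = { .y = 2, .x = 1 }; /* members by name, any order */
                              int lut[8] = { [3] = 42, [7] = 1 };  /* specific elements, rest zero */

                              printf("%d %d %d\n", p.x, p.y, lut[3]); /* 1 2 42 */
                              return 0;
                          }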

                          An interesting aspect is that compilers recognize the old idiomatic ways of doing things and tend to optimize them, so you often end up with the same machine code once optimization is turned on.

                          Don’t get me wrong, I am all for moving forward, and there are a lot of things in C99 I wouldn’t like to be without (// comments and declaring variables anywhere to name two), but sometimes I still write ANSI C for maximum portability.

                          1. 41

                            Well I’m thoroughly impressed. Not just at the reverse engineering, but also your ability to expose one of my biggest gripes with tech’s hiring practices.

                            “Great job showcasing all the necessary skills and techniques that we use on our day-to-day job. Awesome work. Can you reverse this linked list?”

                            1. 25

                              Exactly. I interviewed at Google back in 2011 or 2012. Maybe it’s different now, but there were no questions about fit, personality, drive, or what you wanted to do with your career. There were no questions about my past work or anything like that. It was an all-day on-site series of gotcha questions.

                              (My favorite was “you’re trying to unmount an external drive on Linux and you can’t, why?” I ran through everything I could think of and after an hour got to “the umount binary is on the disk you’re unmounting.”)

                              (That or, I was asked how you would measure latency between two systems. I said “ping.” The interviewer just stared at me, so I kept talking. “Lots of pings averaged over time and throw out or explain any outliers.” Kept staring. This sort of thing for an hour.)

                              The whole experience really turned me off of Google, honestly. I was expected to just sit and have questions barked at me for hours because by God I should want nothing more than to work for Google.

                              1. 8

                                I interviewed at Google last year and had a great time with the on-site (Zurich). There were no trick questions, I think they stopped doing those years ago.

                                The typical question was something reasonably simple algorithmically, that then turned out to contain a lot of details you could delve into as time allowed.

                                I rather liked that the focus was on showing your enjoyment in solving problems.

                                1. 6

                                  I had a similar experience in 2013/2014. A screener full of questions with lots of gotchas and edge cases (syscalls, what data structures don’t guarantee log n, what component is not in a passwd entry, …), even a sudden “quick, what’s 2^24?”. The remote technical interview was an hour writing C in a shared Google Doc (have fun indenting while keeping your sanity) with a phone to my ear. I was offered an on-site interview, but an evolving home situation meant I couldn’t proceed. I’m very, very thankful for that now.

                                  1. 3

                                    the umount binary is on the disk you’re unmounting.

                                    Weird. I don’t see why this could be a limitation. Disks can be unmounted with the umount(2) syscall, and a process using this syscall doesn’t have to be tied to its executable file (just like any other process), right?

                                    (I know this is a bit off-topic)

                                    1. 5

                                      The executable would be opened and resident, holding a reference to the mount.

                                      1. 3

                                        Oh I see. From the Linux man pages:

                                               MNT_FORCE (since Linux 2.1.116)
                                                      [...] If, after aborting requests, some processes still have active
                                                      references to the filesystem, the unmount will still fail.
                                        

                                        It’s probably a Linux-specific limitation. I just tried it on OpenBSD, and MNT_FORCE unmounts the filesystem anyway (processes started from the unmounted filesystem keep running). It fails without MNT_FORCE though.

                                        1. 2

                                          I’m surprised OpenBSD is okay with it. I wonder if it has to do with OpenBSD’s umount being statically linked. Linux holds an open reference to the executing file for demand-loading/page-sharing purposes (I’m pretty sure most recent versions of Solaris and other Unices do too), and will block attempts to write to an executing file with ETXTBSY to prevent you from modifying a running file. That open reference to the executable prevents the umount from happening.

                                          Interestingly, I just ran an experiment and Linux doesn’t prevent you from writing to open shared objects that aren’t being directly executed (that is, dynamically linked libraries). So, I can start a process foo linked to libfoo and while I can’t write to foo, I can write to libfoo and cause a SIGBUS/SIGILL/whatever when foo loads that code.

                                          That’s interesting.

                                          EDIT: I guess it makes sense because you can dynamically load modules via dlopen or whatever and you would want to pick up changes in that case. It’s still interesting though.
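
                                          If anyone wants to see the ETXTBSY behaviour for themselves, here is a minimal sketch that tries to open its own running binary for writing:

                                          #include <errno.h>
                                          #include <fcntl.h>
                                          #include <stdio.h>
                                          #include <string.h>

                                          int main(int argc, char **argv)
                                          {
                                              (void)argc;
                                              /* Linux holds a reference to the running executable, so
                                                 opening it with write access fails with ETXTBSY */
                                              int fd = open(argv[0], O_WRONLY);
                                              if (fd == -1)
                                                  printf("open(%s): %s\n", argv[0], strerror(errno));
                                              return 0;
                                          }

                                          Compile it and run the binary by path (e.g. ./a.out); on Linux the open() fails with “Text file busy”.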

                                          1. 1

                                            Linux holds an open reference to the executing file for demand-loading/page-sharing purposes

                                            Oooh, you mean Linux reads executable sections from the disk on demand? I didn’t know this! Locking the file makes sense in this case. I think it also makes sense that OpenBSD doesn’t do it given OpenBSD’s emphasis on simplicity and security.

                                            1. 2

                                              It’s not that Linux reads executables on demand, it’s that executable pages can be read back from disk if those pages end up being paged out due to memory pressure.

                                              (IIRC, I’m at game night and waiting for my turn to come up.)

                                      2. 1

                                        I’d think once the binary is executing and in memory… Maybe the filesystem shows it open thus the disk is busy?

                                        Note: I should reload the page before replying

                                      3. 2

                                        My interview experience was pretty bad back in 2016, too. No more trick questions, but I wasn’t a “culture fit” lol

                                      4. 1

                                        Why do people always use reversing a linked list as an example? Google usually asked about graphs… If anybody can’t reverse a linked list then I doubt they’ve done much coding, right?

                                      1. 5

                                        Probably a DOS 16-bit executable packer I wrote back around 1997 (aPACK) and the compression library I wrote for it (aPLib), I still sometimes get an email from somebody telling me how they are using them, which is awesome.

                                        If you are using some obscure little free library or tool, consider taking the time to send the author a quick email. Sometimes it can really brighten your day to get one of those.

                                        1. 2

                                          Isn’t this basically Lomuto using the first element as the pivot instead of the last, and moving the pivot element along using an extra swap instead of swapping it to its final place at the end?
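
                                          That is, something like this sketch (my reading of the scheme, not the article’s actual code):

                                          #include <stddef.h>

                                          /* Lomuto-style partition with the pivot kept at the front and
                                             rotated forward one slot per smaller element found; assumes n >= 1 */
                                          size_t partition_first(int *a, size_t n)
                                          {
                                              size_t p = 0; /* current position of the pivot (initially a[0]) */

                                              for (size_t i = 1; i < n; ++i) {
                                                  if (a[i] < a[p]) {
                                                      int tmp  = a[i];
                                                      a[i]     = a[p + 1]; /* displaced element from the >= region */
                                                      a[p + 1] = a[p];     /* pivot moves one slot right */
                                                      a[p]     = tmp;      /* smaller element lands left of pivot */
                                                      ++p;
                                                  }
                                              }
                                              return p; /* pivot is already in its final place */
                                          }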

                                          1. 2

                                            IANAL, but I wonder about the patent section:

                                            Each contributor licenses you to do everything with this software that would otherwise infringe any patent claims they can license or become able to license.

                                             What if a piece of software implementing process B is released under this license, and process ABCD is later patented – does this give permission to do ABCD as long as the B piece is the one licensed under this license? What is the scope of “do everything”?

                                            Also, if I release a piece of software under this, and someone later receives a patent that covers some part of it, am I supposed to ensure access to that patent for all who received the software under this license?

                                            1. 6

                                              I think the “in practice” from the original title is relevant.

                                              1. 2

                                                Probably should also include Python 3. The behavior happens because Python silently switches to bignum arithmetic rather than overflowing once values exceed the machine word size (sys.maxsize).

                                              1. 7

                                                git commit --amend is pretty handy to edit the last commit

                                                1. 1

                                                  I’ve got a similar alias to fix mistakes in the last commit using the same commit message:

                                                  fixup = commit --amend -C HEAD
                                                  
                                                  1. 2

                                                    In Mercurial, this is just hg amend or hg am (all Mercurial commands can be abbreviated to their shortest unique prefix). If you do want to edit the commit message, it’s hg amend --edit.

                                                1. 2

                                                  Is it assuming registrants will not fail?

                                                  Otherwise the hardcoded number of registrants could be a problem – say one of them fails and never starts, then the algorithm will not finish (this could be a feature rather than an issue of course, depending on use).

                                                  Also, if registrants can fail: imagine one registrant has just re-added its id as the final one and then crashes before it continues. Half the other registrants see the full size and continue doing the rest of the work, while the failed registrant restarts and resets the data structure, and the other half then loop on a half-full data structure.

                                                  1. 2

                                                    I use this in cloud-init scripts, so restarts aren’t an issue: cloud-init scripts aren’t re-executed on restart, so what happens in practice is that cluster formation fails.

                                                  1. 2

                                                    It is nice to see a data compression example that is both easy to follow (with the blog post in hand) and not trivial.

                                                    Also, it is great that he links to the blog posts of ryg and cbloom which contain lots of practical knowledge.

                                                    The code is set up to test using enwik8. It is easy to end up optimizing for a specific type of data, which might adversely affect the results on other types, so I would suggest testing with some non-text data sets as well if he isn’t already (perhaps the Silesia corpus, which contains various types of data).

                                                    1. 3

                                                      Compressing enwik8 took 22 seconds; a quick hack to try to compress silesia.tar, which is roughly twice as big, took almost 2 minutes and segfaults on decompression. A good indication it needs some testing on other types of data.

                                                    1. 2

                                                      I think the amount of discussion about the hows and whys and portability of small things like resolving dependencies is a good argument for using something like Meson, at least for non-trivial projects:

                                                      project('foobar', 'c')
                                                      cc = meson.get_compiler('c')
                                                      m_dep = cc.find_library('m', required : false)
                                                      executable('foobar', 'main.c', 'foo.c', 'bar.c', dependencies : m_dep)
                                                      
                                                      1. 2

                                                        One could argue that, from the C language perspective, it’s not technically a buffer overflow unless b1 happens to be placed near the end of the memory object allocated from the system.

                                                        It is a violation of the logic the pool allocator implements, which address sanitizer cannot possibly know about – it is entirely possible to imagine a valid program that would allocate an area of memory and write possibly overlapping strings into it.

                                                        Don’t get me wrong, it is a buffer overflow at the program level in this case, and it seems like a good idea to be able to somehow mark the areas that are logical units to be able to catch this.

                                                        1. 2

                                                          Fuzzing is a fantastic tool for anything that parses or processes input. Another fuzzer worth checking out is LLVM’s libFuzzer, which is a simple way to do in-process fuzzing, and can be combined with for instance address sanitizer.

                                                          I sometimes add a conditional block with the fuzz target function at the end of the source file that does the processing, so it is easy to build and start fuzzing when you make changes. See for instance the one in tinf.
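
                                                          A libFuzzer target is just one function. Something like this, where my_parse is a toy stand-in for whatever you are testing:

                                                          #include <stddef.h>
                                                          #include <stdint.h>

                                                          /* toy stand-in for the function under test */
                                                          static int my_parse(const uint8_t *buf, size_t len)
                                                          {
                                                              return (len > 0 && buf[0] == '!') ? 1 : 0;
                                                          }

                                                          /* libFuzzer calls this repeatedly with generated inputs;
                                                             build with: clang -g -fsanitize=fuzzer,address file.c */
                                                          int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
                                                          {
                                                              my_parse(data, size);
                                                              return 0; /* non-zero return values are reserved */
                                                          }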

                                                          For Windows people, libFuzzer runs on WSL, and the latest builds of clang targeting MSVC also started including support.

                                                          1. 1

                                                            Didn’t know about Robson traversal (mentioned at the end). Another algorithm using O(1) space is Morris traversal which is conceptually quite simple. There was a nice explanation of it in Nicholas Ormrod’s talk at CppCon 2017.

                                                            1. 2

                                                              https://www.geeksforgeeks.org/inorder-tree-traversal-without-recursion-and-without-stack/ is a good description of the Morris traversal.

                                                              It turns a normal binary tree into a threaded tree as part of the traversal and then back. By doing this as part of the traversal it’s unnecessary to store those 2 bits of information, giving the traversal O(1) space.
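
                                                              In code, the threading trick is quite compact – a sketch:

                                                              #include <stddef.h>

                                                              struct node { int val; struct node *left, *right; };

                                                              /* Morris inorder traversal: O(1) extra space, temporarily
                                                                 threads each left subtree's rightmost node back to its root */
                                                              void morris_inorder(struct node *root, void (*visit)(struct node *))
                                                              {
                                                                  struct node *cur = root;

                                                                  while (cur) {
                                                                      if (!cur->left) {
                                                                          visit(cur);
                                                                          cur = cur->right;
                                                                      } else {
                                                                          struct node *pre = cur->left;
                                                                          while (pre->right && pre->right != cur)
                                                                              pre = pre->right;  /* find inorder predecessor */

                                                                          if (!pre->right) {
                                                                              pre->right = cur;  /* create thread, descend left */
                                                                              cur = cur->left;
                                                                          } else {
                                                                              pre->right = NULL; /* second visit: remove thread */
                                                                              visit(cur);
                                                                              cur = cur->right;
                                                                          }
                                                                      }
                                                                  }
                                                              }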

                                                              On the downside, since the Morris traversal destructively turns right pointers into threads it leaves the tree in an unsafe state for reading or writing (the same warning the author mentions about the Link-Inversion model). It also doesn’t realize the O(1) amortized successor function mentioned by the author in the case where you keep your tree in a Threaded state.

                                                            1. 6

                                                              People often gripe about the STL, but what Alex Stepanov created is a thing of beauty. It was truly radical at the time. It’s even radical now, considering that generic programming is still not mainstream.

                                                              However, it isn’t what most people are used to. Most people still don’t understand what iterators are. For those who do not know: iterators are a generalization of pointers, so every pointer is an iterator.

                                                              People don’t get the STL because people can’t math.

                                                              1. 5

                                                                The STL is amazing (see Sean Parent’s talk on generic programming for some background).

                                                                But along with all the benefits it has brought to C++, it has also been a factor in some of the well-known issues the author touches on, like long compile times, slow debug performance, and hard-to-understand error messages. It is debatable whether it is the math that makes the C++ ranges implementation shown hard to read, or the sheer volume of packaging around that math – compare it to the Python and Haskell implementations above.

                                                                What I find a more interesting question raised by this discussion is whether the prevalence of certain industries in the C++ committees has steered the language in a direction where things like compile times are neglected because they can just throw more hardware at such problems. But of course this corporate interest is also part of the reason C++ had a revival in the first place.

                                                                1. 2

                                                                  indeed ! STL starts from algorithms and not objects, and algorithms are defined on algebraic structures. imho, extending these structures with complexity requirements is the crux, and iterators are the key :)

                                                                  1. 1

                                                                    For (advanced) generic programming to become mainstream, someone’s got to work on the compile times.

                                                                    Maybe we need a -fno-monomorphization flag to get super fast debug builds at least?..

                                                                    1. 1

                                                                      I’m wondering if concepts can help speed up compile times. I know they are meant to improve error messages and readability, but certainly, they can be used to short-circuit parts of the compilation process… I haven’t really explored that idea much though.

                                                                  1. 2

                                                                    I’ve used µnit and greatest in the past – granted not as simple as this, but close.

                                                                    1. 4

                                                                      I’ll probably continue to play with optimal compression of various formats.

                                                                      After releasing the new version of BriefLZ (compression library) with an optimal compression level, I used that in an example of the same for LZ4 (see discussion over at Encode’s Forum if interested).

                                                                      I made one for CRUSH and Snappy as well, but they are all dreadfully slow, so mostly of theoretical interest.

                                                                      1. 5

                                                                        I wrote a very simple data compression library called BriefLZ many years ago, which recently appears to have found some use in a few other projects.

                                                                        I’ve been implementing some algorithms to get better compression out of it without changing the data format. I am hoping to get a few of these polished enough to release.