The first chapter of Structure and Interpretation of Computer Programs, especially section 1.2, contains a great discussion of these ideas. The authors distinguish between recursive procedures and recursive processes. “Recursive procedure” describes how a function is defined (in terms of itself). “Recursive process” describes how the definition expands when it’s evaluated. A recursive process is often slow because it expands into a tree structure where the same sub-expression is computed more than once. The reason I mention it here is that SICP shows that a more performant implementation does not depend on imperative constructs like for-loops. You can define recursive procedures which generate what the authors call “iterative processes”, similar in resource consumption to for-loops. They make it very clear why some implementations are faster than others and that it’s more nuanced than recursion vs. for-loops.
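To make the distinction concrete, here is a small sketch of the two Fibonacci definitions SICP uses (function names are mine). Both are recursive procedures; only the first generates a recursive process:

```python
def fib_tree(n):
    """Recursive procedure generating a recursive (tree-shaped) process:
    the same sub-expressions get recomputed, so work grows exponentially."""
    if n < 2:
        return n
    return fib_tree(n - 1) + fib_tree(n - 2)


def fib_iter(n, a=0, b=1):
    """Recursive procedure generating an iterative process: all state lives
    in the arguments, so each step does constant work (linear total)."""
    if n == 0:
        return a
    return fib_iter(n - 1, b, a + b)
```

(Python doesn’t eliminate tail calls, so `fib_iter` still grows the stack, but the amount of computation per step is constant, which is the point SICP is making.)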

Just an FYI for those who might not know: functools.lru_cache in the standard Python library provides easy memoization. Caching can be messy, but if you’re working on simple scripts or something like analytics, then this is an easy win (passing maxsize=None to that decorator makes the cache unbounded).
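For the curious, a minimal sketch of that pattern — the decorator turns the exponential tree recursion into linear work:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # maxsize=None: the cache never evicts entries
def fib(n):
    """Naive recursive Fibonacci; the cache collapses the call tree."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(90)  # returns instantly thanks to the cache
```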

Dynamic programming is such an interesting concept; it took me a good while (and lots of coded examples) to get my head around it.

The interesting thing is that I have not used it outside of preparation for interviews. I am not sure if it is still the case, but Google and Facebook would often pose problems that needed dynamic programming approaches to solve (e.g. the classic knapsack problem or one of its variations).
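For reference, the classic 0/1 knapsack (a.k.a. backpack) problem has a compact bottom-up DP solution; a sketch:

```python
def knapsack(capacity, items):
    """0/1 knapsack via bottom-up DP. best[c] is the best value achievable
    with total weight at most c, using the items considered so far."""
    best = [0] * (capacity + 1)
    for weight, value in items:
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]
```

Runs in O(capacity × items) time and O(capacity) space; the downward iteration is what distinguishes 0/1 knapsack from the unbounded variant.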

In the same way, I have noticed that the only engineers who can knock out dynamic programming solutions off the cuff are either the ones who did competitive programming back in the day (where this is a basic technique) or the ones who’ve gone through thorough interview preparation.

Anyone used dynamic programming in the real world or know of libraries that have this kind of algos implemented?

I’ve used it in the real world. Had a problem a decade or so ago that required computing the difference between tree-structured data, and a dynamic programming method similar to substring edit-distance was ideal. No libraries implemented it for me, and it was a pretty straightforward implementation via a dynamic programming approach. The use case was in the context of static program analysis where the trees I was working with were ASTs. I wrote part of it up a couple years ago for fun in F# (http://syntacticsalt.com/blog/2015-12-24-comparing-trees-functionally.html).
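For anyone who wants the flavor of the string version of that idea, here is plain Levenshtein edit distance; the tree variant in the linked post generalizes this same table:

```python
def edit_distance(a, b):
    """Levenshtein distance via DP. prev[j] holds the distance between
    a[:i-1] and b[:j]; only two rows are kept at a time."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete from a
                            curr[j - 1] + 1,      # insert into a
                            prev[j - 1] + cost))  # substitute (or match)
        prev = curr
    return prev[-1]
```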

Anyone used dynamic programming in the real world or know of libraries that have this kind of algos implemented?

Dynamic programming algorithms are at the core of many, many bioinformatics and computational biology solutions (DNA sequencing, the human genome, etc.). One of the earliest was Needleman-Wunsch (circa 1970). Local alignments, global alignments, and different cost structures all took off from the same basic trick.
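A sketch of the Needleman-Wunsch recurrence for the global alignment score (the match/mismatch/gap weights here are illustrative placeholders, not any standard scoring scheme):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score. score[i][j] = best score aligning a[:i] with b[:j]."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):   # aligning a prefix of a against nothing costs gaps
        score[i][0] = i * gap
    for j in range(1, cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag,                    # align a[i-1] with b[j-1]
                              score[i - 1][j] + gap,   # gap in b
                              score[i][j - 1] + gap)   # gap in a
    return score[-1][-1]
```

Local alignment (Smith-Waterman) is nearly the same table with scores clamped at zero, which is what the comment means by the variants taking off from one basic trick.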

I’ve gotten to use dynamic programming at my $DAYJOB, since it was the only reasonable way of solving a problem.

In school, in competitive programming, and in interview situations, the problems are typically quite well-defined, and the data structure of choice is typically an n-dimensional matrix of integers. For an actual messy real-world problem, issues such as data structure choice, complicated solution types, choice of forward/backward/recursive organization of computation, memory limiting for extreme cases, and making operations parallel crop up. The core took half a day, but making it “industrial strength” took many months with many people involved.

I think it’s likely that the common uses of such techniques come as part of standard libraries in most languages, so when you need them, you have prepackaged versions. C/C++ code is the most likely place where you end up rolling your own, but even there the STL lets you run these algorithms on your own objects, so it’s rarely needed.

One recent use I remember was finding a (bit-)optimal parse for an LZ-style compression algorithm. Basically, it finds the shortest path from each position to the end (source for anyone interested).
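That shortest-path formulation can be sketched like this. The cost model and the `matches` input are invented for illustration (real encoders use variable-length codes and a match finder), but the backward DP is the core idea:

```python
def optimal_parse(n, matches, literal_bits=9, match_bits=17):
    """Bit-optimal parse as a shortest path over positions.
    cost[i] = fewest bits needed to encode input[i:].
    matches[i] lists the match lengths available at position i.
    The fixed bit costs are made-up placeholders for a real encoder's model."""
    cost = [0] * (n + 1)  # cost[n] = 0: nothing left to encode
    for i in range(n - 1, -1, -1):
        best = cost[i + 1] + literal_bits                    # emit one literal
        for length in matches[i]:
            best = min(best, cost[i + length] + match_bits)  # emit one match
        cost[i] = best
    return cost[0]
```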

In university, I had a graph computation problem that had a recursive solution, but you could derive another graph from that that was easier to solve, also recursive. The paper only had pseudo-code and the proof of the time complexity just assumed you could use another algorithm in the middle that had certain complexity, calculated using the recursive method.

In practice, though, using the recursive way of doing things absolutely sucked and switching it to the dynamic approach changed it from calculating results for subgraphs to, instead, combining results from already calculated subgraphs.

The funny thing was how much more straightforward it seemed for the problem at hand than for all the motivating examples before it.

I wish I remembered anything about the problem but it’s all gone with time (this was years ago). It wasn’t particularly complex but it was very satisfying to ‘flip’ it to bottom up and see the effect.

An interesting question is, Where did the name, dynamic programming, come from? The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington named Wilson. He was Secretary of Defense, and he actually had a pathological fear and hatred of the word research. I’m not using the term lightly; I’m using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term research in his presence. You can imagine how he felt, then, about the term mathematical. The RAND Corporation was employed by the Air Force, and the Air Force had Wilson as its boss, essentially. Hence, I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. What title, what name, could I choose? In the first place I was interested in planning, in decision making, in thinking. But planning, is not a good word for various reasons. I decided therefore to use the word “programming”. I wanted to get across the idea that this was dynamic, this was multistage, this was time-varying. I thought, let’s kill two birds with one stone. Let’s take a word that has an absolutely precise meaning, namely dynamic, in the classical physical sense. It also has a very interesting property as an adjective, and that is it’s impossible to use the word dynamic in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It’s impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities.

Comments like this are such a bummer. The ideas in this article are completely independent from any definition of the term “dynamic programming” and he provides a link to a well-known authority using the term in the same way. What the author is “trying to communicate” is well communicated, even if you are confused by the use of the term. Lobste.rs is not a place for this kind of snarkiness.


That’s a cool use case and the post was a good read. Thanks for sharing!


I have the dreaded live coding exercise in a few days. I have been thinking about recursion and the Fibonacci sequence lately, so thanks for sharing.

Here is a great rundown of that example: https://hplgit.github.io/primer.html/doc/pub/half/book.pdf

What’s that?

[Comment from banned user removed]

I didn’t invent the term and I agree it’s a confusing name. See this blog post for more details about the etymology: http://arcanesentiment.blogspot.com/2010/04/why-dynamic-programming.html?m=1

“Dynamic programming” as a term is older than Lisp is.


It’s a fair criticism in that I should have talked about where the name comes from and that it’s confusing for me too.

This has nothing to do with dynamic vs. static typing.