Dynamic programming is simply the combination of recursion and memoization. In a typical language with recursion and first-class procedures (e.g., ML or Scheme), you could even implement a custom fixpoint operator that makes a recursive function use dynamic programming.
For example, it could be something like:
(* Assume an existing implementation of hash tables HT. *)
type 'a func = HT.key -> 'a

(* dynfix f: a memoizing fixpoint of f.  Results are cached in a hash
   table, so recursive calls made through go hit the table when possible. *)
fun dynfix (f : 'a func -> 'a func) : 'a func =
  let
    val ht = HT.new ()
    fun go k =
      case HT.lookup (ht, k) of
        SOME v => v                      (* cache hit: reuse stored value *)
      | NONE =>
          let val v = f go k             (* f recurses through go, not itself *)
          in HT.update (ht, k, v); v end
  in go end
Disclaimer: I haven’t tested this code.
Edit: Fix’d silly mistake.
I want to thank you for this comment. For years, I’ve understood recursion, but the few times I’ve thought about dynamic programming, I never really understood what the article or speaker was trying to convey (because it always seems muddled). The insight that it’s simply recursion with memoization, and that every recursive algorithm can be easily converted to be “dynamic,” made it click.
Historically it’s usually not implemented that way (I think dynamic programming even significantly predates the term “memoization”), though yeah, it’s morally equivalent. It’s usually implemented more like “upside-down” memoized recursion. Instead of starting at the top and recursing on demand into values closer to the base case (calculating and memoizing them, eventually bottoming out in the base cases), you start with the base cases, filling those entries into the table directly (they don’t depend on anything else), and then move “upwards”, always filling in table cells that depend only on cells that have already been computed. Usually this is implemented iteratively: you work out an iteration order through the table that visits each cell only after its dependencies, so you never need a memoization-style check of whether the value is available yet (the case HT.lookup in the code above).
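To make that concrete, here’s a minimal bottom-up sketch for Fibonacci (my own hypothetical example, not from the parent code): the two base cases go into the table first, and the fill order i = 2, 3, … guarantees every cell’s dependencies are already present, so no lookup check is needed. Assumes only the standard basis Array structure.

```sml
(* Bottom-up Fibonacci: fill the table from the base cases upward. *)
fun fibTable n =
  let
    val t = Array.array (n + 1, 0)                 (* table, all cells 0 *)
    (* base cases: t[0] = 0 already; t[1] = 1 if it exists *)
    val () = if n >= 1 then Array.update (t, 1, 1) else ()
    (* fill cell i from cells i-1 and i-2, which are already computed *)
    fun fill i =
      if i > n then ()
      else ( Array.update (t, i, Array.sub (t, i - 1) + Array.sub (t, i - 2))
           ; fill (i + 1) )
  in
    fill 2;
    Array.sub (t, n)
  end
```

The iteration order replaces the SOME/NONE case split entirely: by the time fill reaches cell i, cells i-1 and i-2 are guaranteed to hold real values.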