Dynamic programming is a programming technique for efficiently solving problems that have the overlapping-subproblems and optimal-substructure properties.
In such problems, a naive solution repeatedly recalculates the values of the same subproblems on its way to the optimal solution.
Dynamic Programming Example
Let's take the case of generating the Fibonacci sequence.
If the sequence is F(1), F(2), F(3), ..., F(50), then it follows the rule F(n) = F(n-1) + F(n-2):
F(50) = F(49) + F(48)
F(49) = F(48) + F(47)
F(48) = F(47) + F(46)
...
Note that there are overlapping subproblems here: we need to compute F(48) in order to compute both F(50) and F(49). This is exactly the kind of algorithm where the benefit of dynamic programming is clearly visible.
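To see the overlap concretely, here is a small sketch (not from the article) of the naive recursive Fibonacci in Python, instrumented to count how often each value is computed:

```python
# Naive recursive Fibonacci, instrumented to count recomputation.
from collections import Counter

calls = Counter()

def fib(n):
    calls[n] += 1  # record every time fib(n) is evaluated
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(10)
print(calls[8])  # F(8) is computed twice: once for F(10) and once for F(9)
```

The counts grow rapidly as n gets smaller relative to the starting point, which is why the naive recursion takes exponential time.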
How Dynamic Programming Works
Dynamic programming works by storing the results of subproblems so that when their solutions are needed, they are at hand and we don't have to recalculate them.
This technique of storing the values of subproblems is called memoization (from "memo": remembering). By storing the values in a map or array, we save the time that would otherwise be spent recalculating subproblems we have already encountered.
var m = map(0 → 0, 1 → 1)
function fib(n)
    if key n is not in map m
        m[n] = fib(n − 1) + fib(n − 2)
    return m[n]
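The pseudocode above translates almost line for line into Python, a minimal sketch in which the dictionary `m` plays the role of the map:

```python
# Top-down (memoized) Fibonacci, mirroring the pseudocode above.
m = {0: 0, 1: 1}  # base cases

def fib(n):
    if n not in m:
        m[n] = fib(n - 1) + fib(n - 2)  # compute once, then reuse
    return m[n]

print(fib(50))  # 12586269025
```

Each F(k) is now computed exactly once; later requests for the same value are answered from the dictionary.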
Dynamic programming by memoization is a top-down approach. By reversing the direction in which the algorithm works, i.e. starting from the base cases and building up towards the solution, we get bottom-up dynamic programming.
function fib(n)
    if n = 0
        return 0
    else
        var prevFib = 0, currFib = 1
        repeat n − 1 times
            var newFib = prevFib + currFib
            prevFib = currFib
            currFib = newFib
        return currFib
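The bottom-up pseudocode can likewise be sketched in Python; note that it needs only two variables rather than a whole table:

```python
# Bottom-up (iterative) Fibonacci, mirroring the pseudocode above.
def fib(n):
    if n == 0:
        return 0
    prev_fib, curr_fib = 0, 1
    for _ in range(n - 1):
        # slide the two-value window forward one step
        prev_fib, curr_fib = curr_fib, prev_fib + curr_fib
    return curr_fib

print(fib(50))  # 12586269025
```

Because the loop runs from the base cases upward, no recursion (and no memo table) is needed: each result is built directly from the two previous ones.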
Recursion vs Dynamic Programming
Dynamic programming is mainly applied to recursive algorithms. This is not a coincidence: most optimization problems require recursion, and dynamic programming is used for optimization.
But not every problem that uses recursion can use dynamic programming. If the subproblems do not overlap (unlike the Fibonacci sequence, where they do), recursion can only arrive at a solution through a divide-and-conquer approach.
For this reason, a recursive algorithm such as Merge Sort cannot benefit from dynamic programming: its subproblems do not overlap in any way.
Greedy vs Dynamic Programming
Greedy algorithms are similar to dynamic programming in that they are both optimization tools.
However, a greedy algorithm makes a locally optimal ("greedy") choice at each step in the hope of reaching a global optimum. A greedy guess can therefore look optimal in the moment yet turn out to be expensive, and it does not guarantee finding a global optimum.
Dynamic programming, on the other hand, finds the optimal solution to each subproblem and then makes an informed choice, combining those results to find the overall optimal solution.
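A classic way to see the difference (a hypothetical illustration, not from the article) is making change for 6 with coins {1, 3, 4}. The greedy choice of always taking the largest coin gives 4 + 1 + 1 = 3 coins, while dynamic programming finds the optimum, 3 + 3 = 2 coins:

```python
# Hypothetical example: minimum coins for amount 6 with coins [1, 3, 4].

def greedy_change(amount, coins):
    # Always take the largest coin that still fits.
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count

def dp_change(amount, coins):
    # best[a] = fewest coins summing to a, built bottom-up.
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount]

print(greedy_change(6, [1, 3, 4]))  # 3 coins: 4 + 1 + 1
print(dp_change(6, [1, 3, 4]))      # 2 coins: 3 + 3
```

The greedy pick of 4 looks best locally but forces two extra 1-coins, whereas the DP table considers every subamount and finds the truly optimal combination.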