Grokking Algorithms

Chapter 9: Dynamic Programming

  • Every dynamic-programming solution involves a grid.

  • The values in the cells are usually what you’re trying to optimize. For the knapsack problem, the cell values were the values of the goods (see the grid sketch after this list).

  • Each cell is a subproblem, so think about how you can divide your problem into subproblems. That will help you figure out what the axes are.

  • Dynamic programming is useful when you’re trying to optimize something given a constraint.

  • You can use dynamic programming when the problem can be broken into discrete subproblems.

  • There’s no single formula for calculating a dynamic-programming solution.

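To make the grid idea concrete, here is a minimal sketch of the knapsack table in Python. It assumes the chapter’s guitar/stereo/laptop items and a 4 lb bag; the function name and layout are illustrative, not the book’s own code.

```python
def knapsack(items, capacity):
    """items: list of (name, weight, value) tuples; capacity: max weight the bag holds."""
    # grid[i][w] = best total value using only the first i items with capacity w
    grid = [[0] * (capacity + 1) for _ in range(len(items) + 1)]
    for i, (_name, weight, value) in enumerate(items, start=1):
        for w in range(1, capacity + 1):
            grid[i][w] = grid[i - 1][w]              # option 1: skip this item
            if weight <= w:                          # option 2: take it, if it fits
                grid[i][w] = max(grid[i][w], value + grid[i - 1][w - weight])
    return grid[len(items)][capacity]

# The chapter's example items: guitar ($1500, 1 lb), stereo ($3000, 4 lb), laptop ($2000, 3 lb)
items = [("guitar", 1, 1500), ("stereo", 4, 3000), ("laptop", 3, 2000)]
print(knapsack(items, 4))   # 3500: guitar + laptop
```

Each cell answers the subproblem “what is the best value I can carry with only these items and this much capacity?”, and every cell is filled from previously computed cells, which is what makes this dynamic programming rather than brute force.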
Applications:

  • git diff shows the differences between two files by finding their longest common subsequence, a dynamic-programming problem.

  • Biologists use the longest common subsequence to find similarities in DNA strands (a short sketch of the LCS grid follows below).

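Here is a minimal sketch of the longest-common-subsequence grid, assuming the chapter’s “fish”/“fosh” word pair as the example input; it returns the length of the LCS, not the subsequence itself.

```python
def lcs_length(a, b):
    # cell[i][j] = length of the longest common subsequence of a[:i] and b[:j]
    cell = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                # the letters match: extend the LCS of the two shorter prefixes
                cell[i][j] = cell[i - 1][j - 1] + 1
            else:
                # no match: carry over the better of the two neighboring cells
                cell[i][j] = max(cell[i - 1][j], cell[i][j - 1])
    return cell[len(a)][len(b)]

print(lcs_length("fish", "fosh"))  # 3 ("fsh")
```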