Dynamic Programming is an efficient way to implement certain divide-and-conquer algorithms: it solves each overlapping subproblem once, stores the result, and reuses those stored results to build the final answer. It works best when the problem has optimal substructure and overlapping subproblems.

A. Assembly Line Scheduling

Time complexity: $O(n)$. DP table: $f_i[j]$. Only the station index $j$ is iterated (it is the value that changes), so the table is one-dimensional; the line index $i$ takes only the fixed values 1 and 2.

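As a minimal sketch of the recurrence, the table $f_i[j]$ can be filled left to right, keeping only the current column of each line. The station times `a`, transfer costs `t`, entry times `e`, and exit times `x` below are illustrative assumptions (the standard textbook example), not data from these notes:

```python
def fastest_way(a, t, e, x):
    """Assembly-line scheduling DP.

    a[i][j]: time at station j on line i (0-indexed)
    t[i][j]: transfer cost after station j when leaving line i
    e[i], x[i]: entry and exit times for line i
    """
    n = len(a[0])
    # f1, f2: fastest time through the current station on line 1 / line 2
    f1 = e[0] + a[0][0]
    f2 = e[1] + a[1][0]
    for j in range(1, n):
        # Either stay on the same line or transfer from the other line
        new_f1 = min(f1 + a[0][j], f2 + t[1][j - 1] + a[0][j])
        new_f2 = min(f2 + a[1][j], f1 + t[0][j - 1] + a[1][j])
        f1, f2 = new_f1, new_f2
    return min(f1 + x[0], f2 + x[1])


a = [[7, 9, 3, 4, 8, 4], [8, 5, 6, 4, 5, 7]]
t = [[2, 3, 1, 3, 4], [2, 1, 2, 2, 1]]
print(fastest_way(a, t, e=[2, 4], x=[3, 2]))  # → 38
```

Because $f_i[j]$ depends only on column $j-1$, two scalars per line suffice, which is why the iteration over $j$ alone gives the $O(n)$ bound.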

B. The 0-1 Knapsack Problem

Time complexity: $O(nW)$. DP table: $f[i][w]$. Both the item index $i$ and the capacity $w$ need to be iterated, so the table is two-dimensional; the value $v_i$ is indexed by $i$ and does not need its own dimension.

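A minimal sketch of the $f[i][w]$ table. The item values and weights below are made-up example data, not from these notes:

```python
def knapsack_01(values, weights, W):
    """0-1 knapsack DP: f[i][w] = best value using the first i items
    with capacity w. Runs in O(nW) time and space."""
    n = len(values)
    f = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            f[i][w] = f[i - 1][w]  # option 1: skip item i
            if weights[i - 1] <= w:
                # option 2: take item i (note v_{i-1} follows the index i)
                f[i][w] = max(f[i][w],
                              f[i - 1][w - weights[i - 1]] + values[i - 1])
    return f[n][W]


print(knapsack_01(values=[60, 100, 120], weights=[10, 20, 30], W=50))  # → 220
```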

C. Longest Common Subsequence (Lab 5)

2021-2022 Resit


Goal:

Find the length of the longest common subsequence: 3

Find the longest common subsequence itself: ACG

You need to know how to draw the DP table by hand.
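The table you draw by hand is exactly the `c` array below, and one LCS is recovered by backtracking through it. The input strings `"TACG"` and `"ACGT"` are hypothetical examples chosen so the answer matches the notes (length 3, LCS `ACG`); the exam's actual strings are not reproduced here:

```python
def lcs(x, y):
    """Return (length, one LCS) via the standard c[i][j] table."""
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1   # diagonal: characters match
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])  # take the larger neighbour
    # Backtrack from c[m][n] to recover one LCS
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return c[m][n], "".join(reversed(out))


print(lcs("TACG", "ACGT"))  # → (3, 'ACG')
```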

Describing the design of an algorithm (F2023-2024 Q3)

Which paradigm to use?

DP (0-1 knapsack problem)

  1. 0-1 knapsack problem has optimal substructure, meaning the solution to a problem can be constructed from solutions to its subproblems.
  2. It also has overlapping subproblems, since the same sub-instances (i.e., selecting items up to a certain capacity) are solved repeatedly.
  3. (Why not others)

Greedy (Activity Selection Problem)

  1. Activity Selection has the greedy-choice property: choosing the activity that finishes earliest leads to an optimal overall schedule.
  2. It also satisfies optimal substructure, as the remaining problem (after selecting an activity) is similar in structure and independent.
  3. (Why not others)
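The greedy choice described above can be sketched in a few lines: sort by finish time, then repeatedly take the first activity compatible with the last one chosen. The activity list in the usage example is illustrative data, not from these notes:

```python
def select_activities(activities):
    """Greedy activity selection: activities is a list of (start, finish)
    pairs; pick the earliest-finishing compatible activity each time."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:   # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen


acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9),
        (6, 10), (8, 11), (8, 12), (2, 14), (12, 16)]
print(select_activities(acts))  # → [(1, 4), (5, 7), (8, 11), (12, 16)]
```

Sorting dominates, so the whole procedure runs in $O(n \log n)$; no DP table is needed, which is what the greedy-choice property buys you.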