Let's start with an example. A variant of the subset sum problem can be described as follows:
Given a set of strictly positive numbers (possibly containing duplicates) that sum up to M, and a given K <= M, determine whether or not there is a subset of the numbers that sums to exactly K. For example, with the numbers {3, 5, 5, 8} (so M = 21), K = 13 is possible (5 + 8) while K = 2 is not.
Our DP table will clearly depend on the maximum sum available, so M needs to be reasonably small for this approach to be practical. First, let's construct a possible DP state: let D(i, k) denote whether or not it is possible to construct a sum of exactly k using items 1 to i. We define D(i, k) = 1 if it is possible and D(i, k) = 0 if it is impossible. The base cases are D(0, 0) = 1 (it is possible to construct a sum of 0 by not choosing anything) and D(0, x) = 0 for x > 0 (it is not possible to construct a positive sum without using any items).
With the states and base cases defined, we move on to the recurrence relation, which is rather simple:
D(i, k) = max { D(i-1, k - A[i]), D(i-1, k) }
This gives us two decisions. We can include item i in the subset, which is feasible only if D(i-1, k - A[i]) was also feasible; take note that we must ensure k - A[i] >= 0, otherwise we would index outside the table. The other decision is to leave item i out (which is perfectly fine, since we are allowed any subset of items 1 to i): if we can reach a sum of k using items 1 to i-1, then we can certainly reach the same sum by choosing not to use item i.
The C++ code is listed below:
#include <iostream>
#include <vector>
#include <string>
#include <algorithm>
#define M 10000
#define N 1000
using namespace std;

int dp[N+1][M+1];

int main() {
    vector<int> vec;
    int val;
    int num;
    int totalSum = 0;
    cin >> num;
    while (num-- && cin >> val) { vec.push_back(val); totalSum += val; }
    // check for size and sum constraints
    if (vec.size() > N || totalSum > M) {
        cerr << "Exceeds size and/or sum constraint\n";
        return 1;
    }
    // base cases: with no items, only the empty sum is achievable
    for (int i = 0; i < M+1; i++) dp[0][i] = 0;
    dp[0][0] = 1;
    // fill the table row by row using the recurrence
    for (int i = 1; i <= (int)vec.size(); i++) {
        for (int j = 0; j <= totalSum; j++) {
            // decision 1: exclude item i
            dp[i][j] = dp[i-1][j];
            // decision 2: include item i if it does not overshoot the sum
            if (j - vec[i-1] >= 0) {
                dp[i][j] = max(dp[i][j], dp[i-1][j-vec[i-1]]);
            }
        }
    }
    // begin queries
    while (cin >> val) {
        if (val > M) { cerr << "Query too large. Skipping.\n"; continue; }
        string outcome = dp[vec.size()][val] ? "Possible" : "Impossible";
        cout << outcome << "\n";
    }
    return 0;
}
However, there is an obvious optimisation we can make to the memory usage, and it applies to any DP whose recurrence only looks back a bounded number of steps. Observing the recurrence relation, we note that row i depends only on the previous row (i-1) and nothing before that. So we can compress the table to just 2 rows (or columns, depending on how you visualise arrays), swapping their roles on each iteration. Alternatively, one can use modular arithmetic on the row index to achieve the same effect; this is especially cheap for recurrences with a "depth" of 2, since a bitwise AND with 1 tells us whether the index is odd or even.
As another example, if our recurrence depended on i-1, i-3, and i-4, then we would need to keep a depth of at least 5 (rows i-4 through i), using the row index modulo 5 to decide which slot newly calculated values go into while the slots for i-1, i-3, and i-4 still hold the values we need to read. A small sketch of this idea follows.
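The sketch below illustrates this deeper rolling buffer on a made-up recurrence, F(i) = F(i-1) + F(i-3) + F(i-4) with arbitrary base cases, chosen purely for illustration; the subset sum code that follows only needs a depth of 2.
#include <iostream>

int main() {
    // Hypothetical recurrence, purely for illustration:
    //   F(i) = F(i-1) + F(i-3) + F(i-4), with F(0..3) = 1 as made-up base cases.
    // Since F(i) reaches back at most 4 steps, a rolling buffer of 5 entries
    // indexed modulo 5 suffices: the slot for i overwrites F(i-5), which is
    // no longer needed.
    const int DEPTH = 5;
    long long buf[DEPTH] = {1, 1, 1, 1, 0};   // buf[i % DEPTH] holds F(i)
    int n = 30;
    for (int i = 4; i <= n; i++) {
        buf[i % DEPTH] = buf[(i - 1) % DEPTH]
                       + buf[(i - 3) % DEPTH]
                       + buf[(i - 4) % DEPTH];
    }
    std::cout << buf[n % DEPTH] << "\n";      // F(n)
    return 0;
}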
Such a simple optimisation reduces our memory footprint dramatically and is very easy to apply: the DP space now scales only with the maximum sum and no longer depends on the number of items.
The C++ code is listed below which uses this optimisation:
#include <iostream>
#include <vector>
#include <string>
#include <algorithm>
#define M 10000
using namespace std;

int dp[2][M+1];

int main() {
    vector<int> vec;
    int val;
    int num;
    int totalSum = 0;
    cin >> num;
    while (num-- && cin >> val) { vec.push_back(val); totalSum += val; }
    // check for sum constraint
    if (totalSum > M) {
        cerr << "Exceeds sum constraint\n";
        return 1;
    }
    // base cases: with no items, only the empty sum is achievable
    for (int i = 1; i < M+1; i++) dp[0][i] = 0;
    dp[0][0] = 1;
    // keep only two rows, alternating between them via i & 1
    for (int i = 1; i <= (int)vec.size(); i++) {
        for (int j = 0; j <= totalSum; j++) {
            // decision 1: exclude item i
            dp[i&1][j] = dp[(i-1)&1][j];
            // decision 2: include item i if it does not overshoot the sum
            if (j - vec[i-1] >= 0) {
                dp[i&1][j] = max(dp[i&1][j], dp[(i-1)&1][j-vec[i-1]]);
            }
        }
    }
    // begin queries: final answers live in row (number of items) & 1
    while (cin >> val) {
        if (val > M) { cerr << "Query too large. Skipping.\n"; continue; }
        string outcome = dp[vec.size()&1][val] ? "Possible" : "Impossible";
        cout << outcome << "\n";
    }
    return 0;
}
Sometimes there is a less obvious space optimisation, as in this problem: we can halve the memory footprint again by using only a 1D array. To see this, it is often useful to draw a dependency diagram depicting the direction in which dependencies flow in the DP table. This becomes harder once the table has three or more dimensions, but since our table is only 2D it is easy to visualise on paper.
Each entry (i, j) depends only on entries from the previous row i-1, at the same sum j or at a smaller sum j - A[i]. So if we traverse each row from the largest sum down to the smallest, every smaller-sum entry we read has not yet been overwritten in this iteration and still effectively belongs to row i-1. Hence correctness is preserved, and we have compressed the two rows into one, reducing the memory footprint by a further factor of 2.
The C++ code is listed below which uses this optimisation:
#include <iostream>
#include <vector>
#include <string>
#include <algorithm>
#define M 10000
using namespace std;

int dp[M+1];

int main() {
    vector<int> vec;
    int val;
    int num;
    int totalSum = 0;
    cin >> num;
    while (num-- && cin >> val) { vec.push_back(val); totalSum += val; }
    // check for sum constraint
    if (totalSum > M) {
        cerr << "Exceeds sum constraint\n";
        return 1;
    }
    // base case: with no items, only the empty sum is achievable
    for (int i = 1; i < M+1; i++) dp[i] = 0;
    dp[0] = 1;
    for (int i = 1; i <= (int)vec.size(); i++) {
        // iterate sums in decreasing order so dp[j - vec[i-1]] still holds
        // the value from the previous item when we read it
        for (int j = totalSum; j >= 0; j--) {
            if (j - vec[i-1] >= 0) {
                dp[j] = max(dp[j], dp[j-vec[i-1]]);
            }
        }
    }
    // begin queries
    while (cin >> val) {
        if (val > M) { cerr << "Query too large. Skipping.\n"; continue; }
        string outcome = dp[val] ? "Possible" : "Impossible";
        cout << outcome << "\n";
    }
    return 0;
}