Standard problems on Dynamic Programming
A longest common subsequence (LCS) is the longest subsequence that is common to all the given input sequences. (Unlike a substring, a subsequence need not occupy consecutive positions within the original sequence.)
Examples:
Input: S1 = “AGGTAB”, S2 = “GXTXAYB”
Output: 4
Explanation: The longest subsequence which is present in both strings is “GTAB”.
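The LCS length can be computed by tabulating over prefixes of the two strings; a minimal Python sketch (the function name `lcs` is illustrative):

```python
def lcs(s1, s2):
    # dp[i][j] = length of the LCS of s1[:i] and s2[:j]
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                # matching characters extend the LCS of the shorter prefixes
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # otherwise drop the last character of one string
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs("AGGTAB", "GXTXAYB"))  # 4
```

The table takes O(m*n) time and space; the space can be reduced to O(n) by keeping only the previous row.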
Matrix chain multiplication is an optimization problem that seeks the most efficient way to multiply a given sequence of matrices. The problem is not to perform the multiplications, but to determine the order in which the matrix multiplications are carried out. Dynamic programming can be used to solve the problem.
Example:
Input: arr[] = {40, 20, 30, 10, 30}
Output: 26000
Explanation: There are 4 matrices of dimensions 40×20, 20×30, 30×10, 10×30.
Let the input 4 matrices be A, B, C and D.
The minimum number of multiplications is obtained by
parenthesizing the product as (A(BC))D.
The minimum cost is 20*30*10 + 40*20*10 + 40*10*30 = 6000 + 8000 + 12000 = 26000.
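The interval DP over chain lengths can be sketched as follows in Python (function name illustrative; matrix i has dimensions arr[i-1] × arr[i]):

```python
def matrix_chain_order(arr):
    # n matrices; matrix i (1-indexed) has dimensions arr[i-1] x arr[i]
    n = len(arr) - 1
    # dp[i][j] = min scalar multiplications to compute the product of matrices i..j
    dp = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # length of the chain being solved
        for i in range(1, n - length + 2):
            j = i + length - 1
            # try every split point k between matrices i..k and k+1..j
            dp[i][j] = min(
                dp[i][k] + dp[k + 1][j] + arr[i - 1] * arr[k] * arr[j]
                for k in range(i, j)
            )
    return dp[1][n]

print(matrix_chain_order([40, 20, 30, 10, 30]))  # 26000
```

This runs in O(n^3) time and O(n^2) space.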
In the 0/1 knapsack problem, each item must either be taken whole or left out entirely; items cannot be split. For example, consider two items weighing 2kg and 3kg, respectively. If we select the 2kg item, we cannot take only 1kg of it, because the item is not divisible; we must take the entire 2kg item or leave it. Hence the name 0/1 knapsack: for each item, we either pick it completely or do not pick it at all. Dynamic programming is used to solve the 0/1 knapsack problem.
Example:
Input: N = 3, W = 4, profit[] = {1, 2, 3}, weight[] = {4, 5, 1}
Output: 3
Explanation: There are two items which have weight less than or equal to 4. If we select the item with weight 4, the possible profit is 1. And if we select the item with weight 1, the possible profit is 3. So the maximum possible profit is 3. Note that we cannot put both the items with weight 4 and 1 together as the capacity of the bag is 4.
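The standard tabulation can be compressed to a one-dimensional array; a minimal sketch (names illustrative):

```python
def knapsack(W, weight, profit):
    # dp[w] = best profit achievable with knapsack capacity w
    dp = [0] * (W + 1)
    for i in range(len(profit)):
        # iterate capacities downwards so each item is used at most once (0/1)
        for w in range(W, weight[i] - 1, -1):
            dp[w] = max(dp[w], dp[w - weight[i]] + profit[i])
    return dp[W]

print(knapsack(4, [4, 5, 1], [1, 2, 3]))  # 3
```

Iterating capacities upwards instead would allow an item to be reused, which solves the unbounded knapsack variant rather than 0/1.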
Given a cost matrix cost[][] and a position (M, N) in cost[][], write a function that returns the cost of the minimum cost path to reach (M, N) from (0, 0). Each cell of the matrix represents the cost of traversing that cell. The total cost of a path to reach (M, N) is the sum of all the costs on that path (including both source and destination). From a given cell (i, j), you may only move down, right, or diagonally down-right, i.e., to cells (i+1, j), (i, j+1), and (i+1, j+1).
Example:
Input: a cost matrix (shown as a figure)
Output: 8
Explanation: The path with minimum cost is (0, 0) –> (0, 1) –> (1, 2) –> (2, 2). The cost of the path is 8 (1 + 2 + 2 + 3).
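Since the original cost matrix appears only as a figure, the sketch below uses a hypothetical 3×3 matrix chosen to be consistent with the path and costs described above; the DP fills each cell with the cheapest way to reach it:

```python
def min_cost_path(cost, m, n):
    # dp[i][j] = minimum cost to reach cell (i, j) from (0, 0)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    dp[0][0] = cost[0][0]
    for j in range(1, n + 1):            # first row: only rightward moves
        dp[0][j] = dp[0][j - 1] + cost[0][j]
    for i in range(1, m + 1):            # first column: only downward moves
        dp[i][0] = dp[i - 1][0] + cost[i][0]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            # reach (i, j) from above, from the left, or diagonally
            dp[i][j] = cost[i][j] + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
    return dp[m][n]

# hypothetical matrix, consistent with the path (0,0)->(0,1)->(1,2)->(2,2)
cost = [[1, 2, 3],
        [4, 8, 2],
        [1, 5, 3]]
print(min_cost_path(cost, 2, 2))  # 8
```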
The subset sum problem is a decision problem. In its most general form, we are given a multiset of integers and a target sum, and must determine whether any subset of the integers sums exactly to the target. The problem is well known to be NP-hard, and some restricted variants of it are NP-complete as well.
Example:
Input: set[] = {3, 34, 4, 12, 5, 2}, sum = 9
Output: True
Explanation: There is a subset (4, 5) with sum 9.
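For non-negative integers and a modest target, a pseudo-polynomial DP over achievable sums works; a minimal sketch (function name illustrative):

```python
def is_subset_sum(nums, target):
    # dp[s] = True if some subset of the numbers processed so far sums to s
    dp = [False] * (target + 1)
    dp[0] = True                       # the empty subset sums to 0
    for x in nums:
        # iterate sums downwards so each number is used at most once
        for s in range(target, x - 1, -1):
            dp[s] = dp[s] or dp[s - x]
    return dp[target]

print(is_subset_sum([3, 34, 4, 12, 5, 2], 9))  # True
```

This runs in O(n * target) time, which is polynomial in the numeric value of the target but exponential in its bit length, consistent with the problem being NP-hard in general.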
Bellman-Ford Algorithm is a single source shortest path algorithm that determines the shortest path between a given source vertex and every other vertex in a graph. This algorithm can be used on both weighted and unweighted graphs.
Like Dijkstra’s algorithm, Bellman-Ford is guaranteed to find shortest paths in a graph. Although Bellman-Ford is slower than Dijkstra’s algorithm, it can handle graphs with negative edge weights, which makes it more versatile. Shortest paths are not well defined, however, if the graph contains a negative cycle reachable from the source: going around such a cycle repeatedly keeps decreasing the cost of the path (even though the length of the path is increasing). As a result, Bellman-Ford is also capable of detecting negative cycles, which is an important feature.
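A minimal edge-list sketch (the graph and names are illustrative): relax every edge n−1 times, then use one more pass to detect a reachable negative cycle.

```python
def bellman_ford(n, edges, src):
    # edges: list of (u, v, w) triples for a directed graph on vertices 0..n-1.
    # Returns the distance list, or None if a negative cycle is reachable from src.
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):              # relax all edges n-1 times
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:               # an n-th improving pass means a negative cycle
        if dist[u] != INF and dist[u] + w < dist[v]:
            return None
    return dist

edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3), (2, 3, 4)]
print(bellman_ford(4, edges, 0))  # [0, 4, 1, 5]
```

The running time is O(V*E), versus O(E log V) for a heap-based Dijkstra, which is the price paid for supporting negative edge weights.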
The Floyd Warshall Algorithm is an all-pairs shortest path algorithm, unlike Dijkstra and Bellman-Ford, which are single-source shortest path algorithms. It works for both directed and undirected weighted graphs, but it does not work for graphs with negative cycles (where the sum of the edge weights in a cycle is negative). It follows a dynamic programming approach, considering every possible path via every possible intermediate node in order to calculate the shortest distance between every pair of nodes.
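The DP state is "shortest i→j distance using only the first k vertices as intermediates"; a minimal adjacency-matrix sketch (the sample graph is illustrative):

```python
def floyd_warshall(dist):
    # dist: n x n matrix; dist[i][j] = weight of edge i->j, inf if absent, 0 on diagonal
    n = len(dist)
    d = [row[:] for row in dist]       # work on a copy
    for k in range(n):                 # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

INF = float("inf")
graph = [[0, 3, INF, 7],
         [8, 0, 2, INF],
         [5, INF, 0, 1],
         [2, INF, INF, 0]]
result = floyd_warshall(graph)
print(result[0][3])  # 6, via the path 0 -> 1 -> 2 -> 3
```

The triple loop gives O(V^3) time regardless of edge count, and a negative value on the diagonal of the result would indicate a negative cycle.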
A number is non-decreasing if every digit (except the first one) is greater than or equal to the previous digit. For example, 223, 4455567, and 899 are non-decreasing numbers.
So, given the number of digits n, you are required to find the count of total non-decreasing numbers with n digits.
Examples:
Input: n = 1
Output: count = 10

Input: n = 2
Output: count = 55
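A digit DP over the last digit of the number can be sketched as follows (function name illustrative):

```python
def count_non_decreasing(n):
    # dp[d] = count of non-decreasing numbers of the current length ending in digit d
    dp = [1] * 10                      # length 1: one number per digit 0..9
    for _ in range(n - 1):
        # a number of length l+1 ending in d extends any length-l number
        # whose last digit is <= d, so take prefix sums of dp
        prefix, new = 0, [0] * 10
        for d in range(10):
            prefix += dp[d]
            new[d] = prefix
        dp = new
    return sum(dp)

print(count_non_decreasing(1))  # 10
print(count_non_decreasing(2))  # 55
```

As a sanity check, the answer also has the closed form C(n + 9, 9), since a non-decreasing number is a multiset of n digits.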
The problem is to find the smallest power of 2 that is greater than or equal to a positive integer ‘n’. We must devise an algorithm to compute this value efficiently without using the power function. This can be useful in a variety of situations, including determining the appropriate size for data structures and optimizing algorithms.
Example:
Input: n = 5
Output: 8

Input: n = 17
Output: 32
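One simple way that avoids the power function is repeated doubling, which takes O(log n) steps; a minimal sketch (function name illustrative):

```python
def next_power_of_two(n):
    # smallest power of 2 that is >= n, for positive integer n
    p = 1
    while p < n:
        p <<= 1        # double via a left shift instead of calling pow()
    return p

print(next_power_of_two(5))   # 8
print(next_power_of_two(17))  # 32
```

Note that if n is already a power of 2 (e.g. n = 16), the loop stops at n itself.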
Dynamic Programming (DP) Notes for GATE Exam [2024]
As the GATE Exam 2024 is coming up, having a good grasp of dynamic programming is really important for those looking to tackle tricky computational problems. These notes are here to help you understand the basic principles, strategies, and real-life uses of dynamic programming. They’re like a handy guide to becoming a pro at dynamic programming, making it easier for you to ace the GATE exam. So, get ready to dive into the world of dynamic programming and make problem-solving a breeze!
Table of Contents
- What is Dynamic Programming?
- Overlapping Subproblems Property
- Optimal Substructure Property
- Standard problems on Dynamic Programming
- Previously Asked Problems of Dynamic Programming on GATE