25 Essential Concepts for Competitive Programming

Competitive programming is a mental sport in which you solve a given problem under the provided constraints. The purpose of this article is to give an overview of the tricks used most frequently in competitive programming.

These 25 tricks will help you master competitive programming and solve problems efficiently:

1. Fast I/O Techniques:

In competitive programming, it is important to read input as fast as possible, so we save valuable time....
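As a minimal sketch of the standard C++ idiom (the helper name `fast_io` is ours, not standard):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Call this once at the top of main(), before any input is read.
// It detaches C++ streams from C stdio and unties cin from cout,
// which makes cin/cout dramatically faster on large inputs.
void fast_io() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
}
```

After this call, do not mix `scanf`/`printf` with `cin`/`cout` in the same program, since the streams are no longer synchronized.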

2. Range XOR with Prefix Sum:

In competitive programming, Range XOR with Prefix Sum is an efficient technique for calculating the XOR of elements within a specific range of an array. It mirrors the concept of prefix sums, which store the cumulative sum of elements up to a given index: we maintain a prefix XOR array that stores the cumulative XOR of elements up to a given index. To calculate the XOR of elements within a range [l, r], we can then use the prefix XOR array. The XOR of the range can be expressed as:...
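A minimal C++ sketch of the idea (function names are illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Build prefix XOR: pre[i] = a[0] ^ a[1] ^ ... ^ a[i-1], with pre[0] = 0.
vector<int> build_prefix_xor(const vector<int>& a) {
    vector<int> pre(a.size() + 1, 0);
    for (size_t i = 0; i < a.size(); ++i)
        pre[i + 1] = pre[i] ^ a[i];
    return pre;
}

// XOR of a[l..r] (0-indexed, inclusive) in O(1):
// pre[r+1] ^ pre[l] cancels every element before index l.
int range_xor(const vector<int>& pre, int l, int r) {
    return pre[r + 1] ^ pre[l];
}
```

The cancellation works because XOR is its own inverse: x ^ x = 0.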

3. Logarithmic Complexity with Binary Search:

In competitive programming, Binary Search is a searching algorithm for sorted arrays that works by repeatedly dividing the search interval in half. It exploits the fact that the array is sorted to reduce the time complexity to O(log N). We can also apply Binary Search on the Answer: given a search space, we pick an element [mid] and check whether it is valid as an answer; if it satisfies the condition given in the problem, we store its value and shrink the search space accordingly....
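As a small example of binary search on the answer, here is an integer square root (the predicate x*x ≤ n is monotone, so the search space halves each step; the function name is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Largest x with x*x <= n, found by binary search on the answer.
long long int_sqrt(long long n) {
    long long lo = 0, hi = min(n, 3000000000LL), ans = 0;  // cap hi so mid*mid cannot overflow
    while (lo <= hi) {
        long long mid = lo + (hi - lo) / 2;
        if (mid * mid <= n) { ans = mid; lo = mid + 1; }   // mid is feasible, try larger
        else hi = mid - 1;                                  // mid too big, go smaller
    }
    return ans;
}
```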

4. Fibonacci with Matrix Exponentiation:

This is one of the most used techniques in competitive programming. We can use Matrix Exponentiation to find the Nth Fibonacci number in log(N) time. The Matrix Exponentiation method uses the following formula:...
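A compact C++ sketch of the technique, using the matrix [[1,1],[1,0]] whose n-th power has F(n) in its top-right entry (no modulus is taken here, so it is only valid while F(n) fits in 64 bits):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef unsigned long long u64;

// Multiply two 2x2 matrices stored row-major as {a, b, c, d}.
array<u64, 4> mat_mul(const array<u64, 4>& a, const array<u64, 4>& b) {
    return { a[0]*b[0] + a[1]*b[2], a[0]*b[1] + a[1]*b[3],
             a[2]*b[0] + a[3]*b[2], a[2]*b[1] + a[3]*b[3] };
}

// F(n) via fast exponentiation of [[1,1],[1,0]] in O(log n) multiplies.
u64 fib(u64 n) {
    array<u64, 4> result = {1, 0, 0, 1};  // identity matrix
    array<u64, 4> base = {1, 1, 1, 0};
    while (n > 0) {
        if (n & 1) result = mat_mul(result, base);
        base = mat_mul(base, base);       // square: M^(2k) = (M^k)^2
        n >>= 1;
    }
    return result[1];                     // entry (0,1) of M^n is F(n)
}
```

In contest settings the same multiply is usually done modulo a prime such as 1e9+7.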

5. Sparse Table for Range Queries:

In competitive programming, the Sparse Table is used for fast queries on static data (elements that do not change). It preprocesses the data so that queries can be answered efficiently. The idea is to build a lookup table of size N × log N such that lookup[i][j] contains the answer for the range starting at i and of size 2^j. A Sparse Table can be used to answer queries like Range Minimum Query and Range GCD Query in log N time....
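A minimal sketch for range-minimum queries (for idempotent operations like min, two overlapping power-of-two blocks even give O(1) queries):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sparse table for range-minimum queries on a static array.
// table[j][i] = min of a[i .. i + 2^j - 1]; built in O(N log N).
struct SparseTable {
    vector<vector<int>> table;
    SparseTable(const vector<int>& a) {
        int n = a.size(), LOG = 1;
        while ((1 << LOG) <= n) ++LOG;
        table.assign(LOG, vector<int>(n));
        table[0] = a;
        for (int j = 1; j < LOG; ++j)
            for (int i = 0; i + (1 << j) <= n; ++i)
                table[j][i] = min(table[j-1][i], table[j-1][i + (1 << (j-1))]);
    }
    // Min on [l, r] inclusive: cover the range with two 2^j-sized blocks.
    int query(int l, int r) {
        int j = 31 - __builtin_clz(r - l + 1);  // floor(log2(length))
        return min(table[j][l], table[j][r - (1 << j) + 1]);
    }
};
```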

6. DAGs and Topological Sorting:

In competitive programming, Directed Acyclic Graphs (DAGs) and topological sorting can simplify complex problems in surprising ways. Topological sorting for Directed Acyclic Graph (DAG) is a linear ordering of vertices such that for every directed edge u-v, vertex u comes before v in the ordering. Topological Sorting is commonly used in problems where completion of one task depends on other task(s) and we are required to find the optimal sequence to perform all the tasks. We can also use Kahn’s Algorithm for topological sort to detect cycles in directed graphs....
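A sketch of Kahn's Algorithm, including its cycle-detection side effect (the function name and edge-list format are our choices):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Kahn's algorithm: repeatedly remove vertices of in-degree 0.
// Returns a topological order, or an empty vector if the graph has a
// cycle (in that case not all vertices ever reach in-degree 0).
vector<int> topo_sort(int n, const vector<pair<int,int>>& edges) {
    vector<vector<int>> adj(n);
    vector<int> indeg(n, 0);
    for (auto& [u, v] : edges) { adj[u].push_back(v); ++indeg[v]; }
    queue<int> q;
    for (int i = 0; i < n; ++i) if (indeg[i] == 0) q.push(i);
    vector<int> order;
    while (!q.empty()) {
        int u = q.front(); q.pop();
        order.push_back(u);
        for (int v : adj[u]) if (--indeg[v] == 0) q.push(v);
    }
    if ((int)order.size() != n) return {};  // cycle detected
    return order;
}
```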

7. Two Pointer Techniques for Arrays:

In competitive programming, the two pointers technique is an easy and effective technique typically used for searching pairs in a sorted array. We maintain the search space using two pointers (start and end) and shrink it by either incrementing start or decrementing end until we find our answer. The two pointer approach is commonly used for problems like Two Sum, Triplet Sum, 4 Sum, and other problems where we need to find pairs, triplets, or quadruplets of elements....
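Two Sum on a sorted array is the canonical example; a minimal sketch:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Two-pointer Two Sum on a sorted array: returns indices (i, j), i < j,
// with a[i] + a[j] == target, or (-1, -1) if no such pair exists.
// Each pointer moves at most n times, so the scan is O(n).
pair<int,int> two_sum_sorted(const vector<int>& a, int target) {
    int i = 0, j = (int)a.size() - 1;
    while (i < j) {
        int sum = a[i] + a[j];
        if (sum == target) return {i, j};
        if (sum < target) ++i;   // need a larger sum: advance the left pointer
        else --j;                // need a smaller sum: retreat the right pointer
    }
    return {-1, -1};
}
```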

8. Mo’s Algorithm for Offline Queries:

In competitive programming, Mo’s Algorithm can be used to answer offline queries such as finding the sum of every query range. The idea of Mo’s algorithm is to pre-process all queries so that the result of one query can be reused in the next. It can be used in many problems that require processing range queries on a static array, i.e., the array values do not change between the queries. For each query on a range [a, b], the idea is to calculate the value from the array elements between positions a and b. Since the array is static, the queries can be processed in any order, and Mo’s Algorithm processes them in a special order that guarantees the algorithm works efficiently....
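A sketch using range sums as the toy "add/remove one element" operation (the same skeleton handles harder queries such as distinct-element counts; names are ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Mo's algorithm for offline range-sum queries. Queries are sorted by
// (block of l, then r), so the [curL, curR] window moves a total of
// O((N + Q) * sqrt(N)) single-element steps across all queries.
vector<long long> mo_range_sums(const vector<int>& a,
                                vector<array<int,3>> queries /* {l, r, original index} */) {
    int block = max(1, (int)sqrt((double)a.size()));
    sort(queries.begin(), queries.end(),
         [&](const array<int,3>& x, const array<int,3>& y) {
             if (x[0] / block != y[0] / block) return x[0] / block < y[0] / block;
             return x[1] < y[1];
         });
    vector<long long> ans(queries.size());
    long long cur = 0;
    int curL = 0, curR = -1;                // empty window
    for (auto& q : queries) {
        int l = q[0], r = q[1];
        while (curR < r) cur += a[++curR];  // extend right
        while (curL > l) cur += a[--curL];  // extend left
        while (curR > r) cur -= a[curR--];  // shrink right
        while (curL < l) cur -= a[curL++];  // shrink left
        ans[q[2]] = cur;                    // answer in original query order
    }
    return ans;
}
```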

9. Persistent Segment Trees:

In competitive programming, the Segment Tree is itself a great data structure that comes into play in many cases. Here we introduce the concept of persistency in this data structure. Persistency simply means retaining the changes, i.e., keeping every previous version of the structure accessible after updates. But obviously, retaining the changes causes extra memory consumption and hence affects the time complexity....

10. Chinese Remainder Theorem:

In competitive programming, the Chinese Remainder Theorem is used when we are given k pairwise coprime numbers and the remainders obtained when an unknown number x is divided by each of them. We need to find the minimum possible value of x that produces the given remainders....
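A minimal sketch via the extended Euclidean algorithm (it assumes pairwise coprime moduli and that all intermediate products fit in 64 bits; names are ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Extended Euclid: returns g = gcd(a, b) and sets x, y with a*x + b*y = g.
long long ext_gcd(long long a, long long b, long long& x, long long& y) {
    if (b == 0) { x = 1; y = 0; return a; }
    long long x1, y1;
    long long g = ext_gcd(b, a % b, x1, y1);
    x = y1; y = x1 - (a / b) * y1;
    return g;
}

// Smallest non-negative x with x % num[i] == rem[i] for every i.
long long crt(const vector<long long>& num, const vector<long long>& rem) {
    long long M = 1;
    for (long long m : num) M *= m;
    long long result = 0;
    for (size_t i = 0; i < num.size(); ++i) {
        long long Mi = M / num[i];              // product of the other moduli
        long long x, y;
        ext_gcd(Mi, num[i], x, y);              // x = Mi^{-1} (mod num[i])
        long long inv = (x % num[i] + num[i]) % num[i];
        result = (result + rem[i] * Mi % M * inv) % M;
    }
    return (result % M + M) % M;
}
```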

11. Regular Expressions for String Manipulation:

A regular expression (regex) is a sequence of characters that defines a search pattern. Here’s how to write regular expressions:...

12. Dynamic Bitsets for Efficient Bits:

In competitive programming, a bitset is an array of booleans in which each value is not stored in a separate byte; instead, the bitset packs the values so that each boolean takes only 1 bit of space, which is less than an array of bool or a vector of bool. A limitation of the bitset is that its size must be known at compile time, i.e., the size of the bitset is fixed....
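A classic use of std::bitset is subset-sum reachability, where a shift adds an item to every reachable sum in O(N / 64) word operations (the constant and function name are ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

const int MAXSUM = 100;  // bitset size must be a compile-time constant

// Bit i of the result is set iff some subset of `items` sums to i.
bitset<MAXSUM + 1> reachable_sums(const vector<int>& items) {
    bitset<MAXSUM + 1> can;
    can[0] = 1;              // the empty subset sums to 0
    for (int x : items)
        can |= can << x;     // add x to every sum reachable so far
    return can;
}
```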

13. Sparse Matrices for Storage:

In competitive programming, Sparse matrices can optimize storage for problems involving large two-dimensional arrays with mostly zero values. Sparse matrices come to the rescue by offering a more efficient representation for arrays with a high concentration of zeros. Instead of storing all elements explicitly, they only store the non-zero values along with their corresponding indices. This approach significantly reduces memory requirements, especially for matrices with a sparsity ratio (percentage of zero values) exceeding 80%....

14. Compressed Tries for Strings:

In competitive programming, compressed tries are efficient data structures for storing and retrieving strings in memory. A compressed trie is a trie in which every internal node has at least two children; it is obtained by compressing the single-child chains of a standard trie. It is also known as a Radix Trie and is used to achieve space optimization....

15. Radix Sort for Strings:

In competitive programming, Radix Sort is a hidden gem for sorting strings in linear time. Radix Sort is a linear sorting algorithm that sorts elements by processing them digit by digit. It is efficient for integers or strings with fixed-size keys. As a non-comparison-based algorithm, it can sort a dataset in O(N) time and hence can beat comparison-based algorithms like Quick Sort. It uses another algorithm, Counting Sort, as a subroutine....
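A minimal LSD sketch for equal-length strings (one stable counting-sort pass per character position, from last to first; it assumes all strings share the same length):

```cpp
#include <bits/stdc++.h>
using namespace std;

// LSD radix sort for N equal-length strings of length L:
// O(L * (N + 256)) total work.
void radix_sort_strings(vector<string>& v) {
    if (v.empty()) return;
    size_t len = v[0].size();  // assumption: every string has this length
    for (int pos = (int)len - 1; pos >= 0; --pos) {
        vector<int> count(257, 0);
        for (const string& s : v) ++count[(unsigned char)s[pos] + 1];
        for (int c = 1; c < 257; ++c) count[c] += count[c - 1];  // start offsets
        vector<string> out(v.size());
        for (const string& s : v)              // stable placement pass
            out[count[(unsigned char)s[pos]]++] = s;
        v.swap(out);
    }
}
```

Stability of each counting-sort pass is what lets later (more significant) positions override earlier ones correctly.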

16. Dynamic Programming with Bitmasking:

This technique uses bitmasks to represent states in dynamic programming problems, allowing for efficient compression of the state space and optimization of memory usage. It proves particularly useful for problems with numerous states, where traditional approaches might consume more memory. The main idea is to assign a value to each mask (and, therefore, to each subset) and then calculate the values of new masks using the values of already computed masks. DP with Bitmasking is generally used when we are given N elements and have to choose subsets of elements to get an optimal answer. Here, we represent the state of all elements using a bitmask of size N where the i-th bit is set (1) if we have selected the i-th element in the subset, and unset (0) otherwise....
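The assignment problem is a standard illustration: dp over the set of already-assigned tasks, encoded as a bitmask (a minimal sketch, names ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Bitmask DP for the assignment problem: cost[i][j] = cost of giving
// task j to person i. dp[mask] = min cost of assigning the tasks in
// `mask` to the first popcount(mask) people. O(2^N * N) time.
int min_assignment_cost(const vector<vector<int>>& cost) {
    int n = cost.size();
    vector<int> dp(1 << n, INT_MAX);
    dp[0] = 0;
    for (int mask = 0; mask < (1 << n); ++mask) {
        if (dp[mask] == INT_MAX) continue;          // unreachable state
        int person = __builtin_popcount(mask);      // next person to assign
        if (person == n) continue;
        for (int task = 0; task < n; ++task)
            if (!(mask & (1 << task)))              // task still free
                dp[mask | (1 << task)] =
                    min(dp[mask | (1 << task)], dp[mask] + cost[person][task]);
    }
    return dp[(1 << n) - 1];                        // all tasks assigned
}
```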

17. Heavy-Light Decomposition:

In competitive programming, Heavy-Light Decomposition is a strategic approach for navigating and querying in trees. Heavy-Light Decomposition (HLD) is one of the most used techniques in competitive programming. HLD of a rooted tree is a method of decomposing the vertices of the tree into disjoint chains (no two chains share a node), to achieve important asymptotic time bounds for certain problems involving trees. HLD can also be seen as a ‘coloring’ of the tree’s edges. The ‘Heavy-Light’ comes from the way we segregate edges: we use the sizes of the subtrees rooted at the nodes as our criterion....

18. Euler’s Totient Function:

Euler’s Totient Function has applications beyond number theory, such as in hashing. The Euler Totient Function or Phi-function for n gives the count of integers in the range 1 to n that are coprime to n. It is denoted by Φ(N). For example, the table below shows the ETF value of the first 20 positive integers:...
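A minimal sketch via trial division, using the product formula Φ(n) = n · Π(1 − 1/p) over the distinct prime factors p of n:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Euler's totient in O(sqrt(n)): start with phi = n and, for each
// distinct prime p dividing n, apply the factor (1 - 1/p),
// i.e. phi -= phi / p.
long long phi(long long n) {
    long long result = n;
    for (long long p = 2; p * p <= n; ++p) {
        if (n % p == 0) {
            while (n % p == 0) n /= p;  // strip all copies of p
            result -= result / p;
        }
    }
    if (n > 1) result -= result / n;    // a leftover prime factor > sqrt(n)
    return result;
}
```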

19. Fenwick Tree for Efficient Range Updates and Queries:

Also known as a Binary Indexed Tree (BIT), this data structure allows for efficient manipulation and querying of prefix sums in arrays. It excels at handling problems involving range updates and subsequent queries for the sum within specific intervals. Compared to Segment Trees, Fenwick Trees require less space (the size of a Fenwick Tree equals the number of elements, whereas the size of a Segment Tree is about 4 times the number of elements in the input array). Also, Fenwick Trees are faster to code, as they require only a few lines to implement....
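A minimal point-update / prefix-query sketch (1-indexed, as is conventional for BITs):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Fenwick tree (Binary Indexed Tree). Index i is responsible for a
// block whose size is the lowest set bit of i (i & -i), so both
// update and query walk O(log N) indices.
struct Fenwick {
    vector<long long> t;
    Fenwick(int n) : t(n + 1, 0) {}
    void update(int i, long long delta) {          // a[i] += delta
        for (; i < (int)t.size(); i += i & (-i)) t[i] += delta;
    }
    long long prefix(int i) {                      // sum of a[1..i]
        long long s = 0;
        for (; i > 0; i -= i & (-i)) s += t[i];
        return s;
    }
    long long range(int l, int r) { return prefix(r) - prefix(l - 1); }
};
```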

20. Centroid Decomposition of Trees:

In competitive programming, Centroid decomposition is a lesser-known technique for tree-related problems. Centroid of a Tree is a node which if removed from the tree would split it into a ‘forest’, such that any tree in the forest would have at most half the number of vertices in the original tree....
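Finding a centroid is the first step of the decomposition; a minimal sketch (the graph is an adjacency list indexed from 0, and the helper names are ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Compute subtree sizes rooted at node 0.
int subtree_size(int u, int parent, const vector<vector<int>>& adj, vector<int>& sz) {
    sz[u] = 1;
    for (int v : adj[u])
        if (v != parent) sz[u] += subtree_size(v, u, adj, sz);
    return sz[u];
}

// A centroid: removal leaves components of size at most n/2.
// Walk from the root toward any child whose subtree is too heavy;
// when no child exceeds n/2, the part above is also small enough.
int find_centroid(const vector<vector<int>>& adj) {
    int n = adj.size();
    vector<int> sz(n);
    subtree_size(0, -1, adj, sz);
    int u = 0, parent = -1;
    while (true) {
        int next = -1;
        for (int v : adj[u])
            if (v != parent && sz[v] > n / 2) next = v;  // too-heavy child
        if (next == -1) return u;
        parent = u; u = next;
    }
}
```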

21. Lowest Common Ancestor with Binary Lifting:

In competitive programming, Binary Lifting is an important technique for efficiently finding the Lowest Common Ancestor in a tree. Binary Lifting is a Dynamic Programming approach for trees where we precompute some ancestors of every node. It is used to answer a large number of queries where in each query we need to find an arbitrary ancestor of any node in a tree in logarithmic time....
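A compact sketch of binary lifting for LCA (up[j][v] is the 2^j-th ancestor of v; class and member names are ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// LCA with binary lifting: O(N log N) precomputation, O(log N) per query.
struct LCA {
    int LOG;
    vector<vector<int>> up;   // up[j][v] = 2^j-th ancestor of v
    vector<int> depth;
    LCA(const vector<vector<int>>& adj, int root = 0) {
        int n = adj.size();
        LOG = 1;
        while ((1 << LOG) < n) ++LOG;
        up.assign(LOG, vector<int>(n, root));   // ancestors of the root stay root
        depth.assign(n, 0);
        vector<int> st = {root};                // iterative DFS for depth/parent
        vector<bool> seen(n, false);
        seen[root] = true;
        while (!st.empty()) {
            int u = st.back(); st.pop_back();
            for (int v : adj[u]) if (!seen[v]) {
                seen[v] = true;
                depth[v] = depth[u] + 1;
                up[0][v] = u;
                st.push_back(v);
            }
        }
        for (int j = 1; j < LOG; ++j)           // 2^j = 2^(j-1) applied twice
            for (int v = 0; v < n; ++v)
                up[j][v] = up[j-1][up[j-1][v]];
    }
    int query(int u, int v) {
        if (depth[u] < depth[v]) swap(u, v);
        int diff = depth[u] - depth[v];
        for (int j = 0; j < LOG; ++j)           // lift u to v's depth
            if (diff & (1 << j)) u = up[j][u];
        if (u == v) return u;
        for (int j = LOG - 1; j >= 0; --j)      // lift both to just below the LCA
            if (up[j][u] != up[j][v]) { u = up[j][u]; v = up[j][v]; }
        return up[0][u];
    }
};
```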

22. Power of Inversion Counting:

In competitive programming, inversion counting techniques can solve problems related to counting inversions in arrays. The inversion count of an array indicates how far (or close) the array is from being sorted. If the array is already sorted, the inversion count is 0; if the array is sorted in reverse order, the inversion count is maximal. We can count inversions in an array using various techniques such as Merge Sort or Binary Indexed Trees....
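A merge-sort-based sketch: whenever an element from the right half is merged before the remaining left-half elements, it forms an inversion with each of them.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count inversions in a[lo..hi) while merge-sorting it. O(N log N).
long long count_inversions(vector<int>& a, int lo, int hi) {
    if (hi - lo <= 1) return 0;
    int mid = (lo + hi) / 2;
    long long inv = count_inversions(a, lo, mid) + count_inversions(a, mid, hi);
    vector<int> merged;
    int i = lo, j = mid;
    while (i < mid && j < hi) {
        if (a[i] <= a[j]) merged.push_back(a[i++]);
        else { inv += mid - i; merged.push_back(a[j++]); }  // a[j] jumps over mid-i elements
    }
    merged.insert(merged.end(), a.begin() + i, a.begin() + mid);
    merged.insert(merged.end(), a.begin() + j, a.begin() + hi);
    copy(merged.begin(), merged.end(), a.begin() + lo);
    return inv;
}

// Convenience wrapper; takes the array by value so the caller's copy is untouched.
long long count_inversions(vector<int> a) {
    return count_inversions(a, 0, (int)a.size());
}
```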

23. Randomized Algorithms:

In competitive programming, Randomized algorithms can provide efficient solutions to certain problems with a touch of randomness. An algorithm that uses random numbers to decide what to do next anywhere in its logic is called Randomized Algorithm. For example, in Randomized Quick Sort, we use a random number to pick the next pivot (or we randomly shuffle the array). Typically, this randomness is used to reduce time complexity or space complexity in other standard algorithms. Randomized Algorithms are often used for solving problems like Birthday Paradox, Expectation or expected value of an array, Shuffle a deck of cards, etc....

24. Josephus Problem:

The Josephus Problem hides an elegant recursive solution applicable to various scenarios. There are N people standing in a circle waiting to be executed. The counting out begins at some point in the circle and proceeds around the circle in a fixed direction. In each step, a certain number of people are skipped, and the next person is executed. The elimination proceeds around the circle (which is becoming smaller and smaller as the executed people are removed), until only the last person remains, who is given freedom. The recursive solution for Josephus Problem is:...
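The recurrence, in 0-based positions, is J(1) = 0 and J(n) = (J(n−1) + k) mod n, where every k-th person is eliminated; unrolled iteratively it is a few lines:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Survivor's position (0-based) when every k-th person in a circle
// of n is eliminated. Builds up from a circle of 1 in O(n) time.
int josephus(int n, int k) {
    int pos = 0;                   // J(1) = 0
    for (int i = 2; i <= n; ++i)
        pos = (pos + k) % i;       // re-index after each elimination
    return pos;                    // add 1 for 1-based person numbering
}
```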

25. Fast Fourier Transform (FFT) for Polynomial Multiplication:

...