Partition Parities

 

Behold, thou desirest truth in the inward parts: and in the hidden part thou shalt make me to know wisdom.

                                                                                                Psalm 51:6

 

Given an ordered set of four binary bits, arranged in a square array for convenience, we can determine the sums modulo 2 of each row and of each column.  These four parity bits are not sufficient to fully specify the original four bits in the array, because if we complement each of the original four bits (replacing each 0 with a 1 and vice versa) we get the same parity bits.  For example, the figure below shows the two complementary arrays consistent with one particular set of parity bits.

 

 

We say these two arrays constitute the equivalence class of compatible arrays under row and column parities. There are 2^4 = 16 distinct arrays of four bits, consisting of 8 complementary pairs, and the row-column parities are sufficient to uniquely identify each complementary pair. Since there are also 16 possible ordered sets of four parity bits, but only 8 equivalence classes, we know only half of all sets of four bits are viable row-column parity bits.
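These counting claims are easy to verify by brute force. The following minimal Python sketch (my own notation, with the cells of the 2x2 array numbered 0 to 3 row-wise) enumerates all sixteen arrays and groups them by their row and column parities.

```python
from itertools import product

# cells of the 2x2 array, numbered row-wise:  0 1 / 2 3
ROWS = [(0, 1), (2, 3)]
COLS = [(0, 2), (1, 3)]

def parity_vec(bits, subsets):
    # mod-2 sum of the bits in each subset
    return tuple(sum(bits[i] for i in s) % 2 for s in subsets)

classes = {}
for bits in product((0, 1), repeat=4):
    classes.setdefault(parity_vec(bits, ROWS + COLS), []).append(bits)

print(len(classes))                         # 8 viable parity vectors out of 16
print({len(v) for v in classes.values()})   # every class is a pair: {2}

# each pair consists of an array and its bit-wise complement
for a, b in classes.values():
    assert all(x != y for x, y in zip(a, b))
```

Only 8 of the 16 possible parity vectors occur, confirming that half of all sets of four bits are viable parities.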

 

In general the number of arrays compatible with a given set of parities for a given set of partitions depends only on the partitions, independent of the parities – assuming the parities are viable (meaning that they apply to at least one array of values). To prove this, consider any two arrays with the same parities for a given set of partitions. We will say they are “equivalent” under that set of partitions.  If we change the value of the bits in any fixed location of these two arrays, it will affect the parities in exactly the same way for both arrays.  Hence the two modified arrays are equivalent under that set of partitions. As a result, we have a one-to-one mapping of the members of any equivalence class to the members of any other equivalence class under a given set of partitions, so every equivalence class has the same number of members. This fact enables us to determine the size of the ambiguity classes for any given set of partitions by just counting the number of compatible arrays with all zero parity bits (called the null parity condition).
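The bijection argument can be illustrated concretely. In the Python sketch below (helper names are mine), using the 3x3 array discussed next with cells numbered 0 to 8 row-wise, the XOR difference of any two representatives maps one equivalence class onto another, so all classes have the same size.

```python
from itertools import product

# cells of the 3x3 array, numbered 0..8 row-wise
ROWS = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]
COLS = [(0, 3, 6), (1, 4, 7), (2, 5, 8)]

def parity_vec(bits, subsets):
    return tuple(sum(bits[i] for i in s) % 2 for s in subsets)

classes = {}
for bits in product((0, 1), repeat=9):
    classes.setdefault(parity_vec(bits, ROWS + COLS), []).append(bits)

sizes = {len(v) for v in classes.values()}
print(sizes)   # {16}: every equivalence class has the same number of members

# explicit one-to-one mapping between two classes: XOR with the difference
# of any two representatives carries one class exactly onto the other
c1, c2 = list(classes.values())[:2]
d = tuple(x ^ y for x, y in zip(c1[0], c2[0]))
image = {tuple(x ^ e for x, e in zip(bits, d)) for bits in c1}
assert image == set(c2)
```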

 

If we consider an ordered set of nine binary bits arranged in a square array, we find that the row and column parities are only sufficient to identify an equivalence class consisting of 16 distinct arrays. For example, the figure below shows the 16 arrays compatible with all zero parities for each row and column.

 

 

Obviously these arrays form a group under the operation of bit-wise addition (modulo 2), because each partition of these arrays sums to zero, and hence each partition of the sum of any two of these arrays also sums to zero. (This doesn’t apply to the other equivalence classes, but they can be mapped to the elements of this “null parity” group as discussed above.) The group table for these arrays is shown below.
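Since the group table itself is not reproduced here, the closure claim can be checked directly (a minimal Python sketch, cells numbered 0 to 8 row-wise):

```python
from itertools import product

ROWS = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]
COLS = [(0, 3, 6), (1, 4, 7), (2, 5, 8)]

def null_parity(bits, subsets):
    return all(sum(bits[i] for i in s) % 2 == 0 for s in subsets)

group = [b for b in product((0, 1), repeat=9) if null_parity(b, ROWS + COLS)]
print(len(group))   # 16 arrays compatible with all-zero row/column parities

# closed under bit-wise XOR (addition modulo 2), and contains the identity
gset = set(group)
assert (0,) * 9 in gset
for a in group:
    for b in group:
        assert tuple(x ^ y for x, y in zip(a, b)) in gset
```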

 

 

This group table has a self-similar arrangement, repeating the AB\BA pattern on four successive levels. Clearly this group of 16 elements has subgroups of size 8, 4, 2, and 1. By propagating the self-similar pattern we can form a group of order 2^N for any N.

 

If, in addition to the row and column parities, we also specify the parities of the left-to-right “diagonal” sets, i.e., with the indices (11,22,33), (12,23,31), and (13,21,32), we reduce the number of consistent arrays for any given set of parities to just four. For null parities these four must be a subset of the 16 arrays shown above, and we also know they form a group, so they must form a subgroup of the one described above. Indeed we find that the compatible arrays under these conditions are A, H, J, and O, which comprise a subgroup of the above group. If instead of the left-to-right diagonal partition we augment the row and column parities with the right-to-left partitions (13,22,31), (12,21,33) and (11,23,32) we get the subgroup A, G, L, and N. Both of these subgroups are Klein 4-groups. If we impose the null parity requirements on all four of these partitions (rows, columns, left-to-right diagonals, and right-to-left diagonals) we reduce the compatible arrays to just the single array A.
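Since the figures and the array labels A through O are not reproduced here, the counts just stated can be confirmed by brute force (a Python sketch; the index tuples translate the (row, column) index pairs in the text to cells 0 to 8):

```python
from itertools import product

ROWS = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]
COLS = [(0, 3, 6), (1, 4, 7), (2, 5, 8)]
# left-to-right "diagonals" (11,22,33), (12,23,31), (13,21,32)
DIAG_LR = [(0, 4, 8), (1, 5, 6), (2, 3, 7)]
# right-to-left "diagonals" (13,22,31), (12,21,33), (11,23,32)
DIAG_RL = [(2, 4, 6), (1, 3, 8), (0, 5, 7)]

def null_count(subsets):
    # number of 9-bit arrays with all-zero parities for the given subsets
    return sum(all(sum(b[i] for i in s) % 2 == 0 for s in subsets)
               for b in product((0, 1), repeat=9))

n_rc  = null_count(ROWS + COLS)                       # 16
n_lr  = null_count(ROWS + COLS + DIAG_LR)             # 4
n_rl  = null_count(ROWS + COLS + DIAG_RL)             # 4
n_all = null_count(ROWS + COLS + DIAG_LR + DIAG_RL)   # 1
print(n_rc, n_lr, n_rl, n_all)
```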

 

Note that in this case the subsets of our partitions have an odd number of elements, so we can uniquely identify an array by its parity bits for these partitions. This is in contrast to the case where the subsets have an even number of elements, in which case we cannot distinguish an array from its complement, meaning that the smallest ambiguity set is of size 2.

 

Now consider an ordered set of 16 binary bits. There are 2^16 = 65536 distinct such sets. Consider a partitioning of the 16 bits into four mutually exclusive subsets of four bits, and take the sum modulo 2 of the elements of each subset. These sums are the “parity bits” for the respective subsets. As just mentioned, since the number of bits in each subset is even, we can’t reduce the size of the ambiguity set to just 1, because within a subset of even size the parity of the number of 1s equals the parity of the number of 0s.  Hence if a given array is compatible then so is its complement, i.e., the array formed by complementing each element, so the minimum size of the ambiguity set for even partitions is 2.

 

As before, if we arrange the 16 bits in a square array, we might take the rows as the subsets, and take the parity of each row. Alternatively we could take the columns as the subsets, and take the parity of each column.  The figure below shows an example.

 

 

Obviously specifying the parities of the columns and rows is not sufficient to determine the contents of the array.  In fact, any viable set of row and column parities is compatible with 512 distinct arrays. Again the arrays with null parity form a self-similar group of the kind described above, but with 2^9 = 512 elements.
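The count of 512 can be confirmed directly (a minimal Python sketch, with the cells of the 4x4 array numbered 0 to 15 row-wise):

```python
from itertools import product

# cells of the 4x4 array, numbered 0..15 row-wise
ROWS = [tuple(range(r*4, r*4 + 4)) for r in range(4)]
COLS = [tuple(range(c, 16, 4)) for c in range(4)]

count = sum(all(sum(b[i] for i in s) % 2 == 0 for s in ROWS + COLS)
            for b in product((0, 1), repeat=16))
print(count)   # 512 arrays share any viable set of row/column parities
```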

 

For convenience, let us assign the letters a through p to the cells as shown below.

 

 

We can impose an additional set of parity conditions by specifying the parities of the left-to-right “diagonal” subsets {a,f,k,p}, {b,g,l,m}, {c,h,i,n}, and {d,e,j,o}. For the preceding example these have the values 0, 1, 0, and 0 respectively. Imposing these conditions reduces the size of the ambiguity set of compatible arrays to 128.  One might think we could reduce this still further by imposing parities on the opposite diagonals (the family parallel to {d,g,j,m} etc), but this actually doesn’t reduce the number at all, since these conditions are redundant with those already imposed.
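Both the reduction to 128 and the redundancy of the opposite diagonals can be checked by brute force (a Python sketch; the subset tuples are my encoding of the lettered cells a through p as indices 0 through 15, row-wise):

```python
from itertools import product

# cells a..p numbered 0..15 row-wise
ROWS = [tuple(range(r*4, r*4 + 4)) for r in range(4)]
COLS = [tuple(range(c, 16, 4)) for c in range(4)]
# left-to-right broken diagonals {a,f,k,p}, {b,g,l,m}, {c,h,i,n}, {d,e,j,o}
DIAG_LR = [tuple(r*4 + (r + c) % 4 for r in range(4)) for c in range(4)]
# right-to-left broken diagonals (the family parallel to {d,g,j,m})
DIAG_RL = [tuple(r*4 + (c - r) % 4 for r in range(4)) for c in range(4)]

def null_count(subsets):
    return sum(all(sum(b[i] for i in s) % 2 == 0 for s in subsets)
               for b in product((0, 1), repeat=16))

n_diag = null_count(ROWS + COLS + DIAG_LR)
n_both = null_count(ROWS + COLS + DIAG_LR + DIAG_RL)
print(n_diag, n_both)   # 128 128: the opposite diagonals add nothing
```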

 

In fact, even if we specify the parities of five partitions (the maximum possible number) such that no two elements appear in the same subset in more than one partition, it is still not sufficient to single out a complementary pair. Finding a set of such partitions is sometimes called the social golfer problem: we seek to schedule 16 golfers for five rounds of play such that no two players are in the same foursome twice. An example of such a schedule is shown below

 

 

For any given viable set of parities for these 20 subsets there are still 128 compatible arrays of the binary bits a, b, c,…, p.   (Recall that a “viable” set of parities is one for which there is at least one compatible array.)
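The schedule in the figure is not reproduced here, but one can be constructed from the five parallel classes of lines of the affine plane AG(2,4); this is an illustrative substitute, not necessarily the schedule shown in the article. The Python sketch below builds the 20 subsets, verifies that no pair of cells meets twice, and counts the arrays compatible with all-zero parities.

```python
from itertools import product, combinations

# GF(4) multiplication table: elements 0, 1, w, w^2 encoded as 0, 1, 2, 3
# (addition in GF(4) is bit-wise XOR)
MUL = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]

# cells of the 4x4 array as points (x, y) of AG(2,4); cell index = 4*x + y
partitions = [[tuple(4*b + y for y in range(4)) for b in range(4)]]  # x = b
for m in range(4):   # parallel class of lines y = m*x + b
    partitions.append([tuple(4*x + (MUL[m][x] ^ b) for x in range(4))
                       for b in range(4)])

# "social golfer" condition: every pair of cells shares a subset exactly once
pair_count = {}
for part in partitions:
    for s in part:
        for p in combinations(sorted(s), 2):
            pair_count[p] = pair_count.get(p, 0) + 1
assert set(pair_count.values()) == {1} and len(pair_count) == 120

subsets = [s for part in partitions for s in part]
count = sum(all(sum(b[i] for i in s) % 2 == 0 for s in subsets)
            for b in product((0, 1), repeat=16))
print(count)   # 128 compatible arrays remain
```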

 

In view of this we might be tempted to conclude that no five partitions would be sufficient to uniquely specify a complementary pair, but that is not the case. For example, the five partitions indicated in the figure below are sufficient.

 

 

For ease of visualization, here are the same partitions with colors assigned to each subset.

 

 

Notice that one pair of elements appears together in three of the four partitions, and another pair appears together in two of the partitions. This shows that there is benefit in applying the opposite of the “social golfer” criterion. In other words, there is added robustness in selecting partitions that differ only slightly. Previously, with four partitions, we were able to reduce the ambiguity set only down to 128 consistent arrays, but we can do better: if, in addition to the row, column, and left-to-right diagonal parities, we also stipulate the parity conditions on the “broken line” partitions {anod}, {ebch}, {ifgl}, and {mjkp}, the number of compatible arrays is reduced to 32. Notice that some pairs of elements appear in the same subset of more than one partition.
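The reduction to 32 can also be confirmed by brute force (a Python sketch; cells a through p are numbered 0 to 15 row-wise, so for example {a,n,o,d} becomes (0, 13, 14, 3)):

```python
from itertools import product

# cells a..p numbered 0..15 row-wise
ROWS = [tuple(range(r*4, r*4 + 4)) for r in range(4)]
COLS = [tuple(range(c, 16, 4)) for c in range(4)]
DIAG_LR = [tuple(r*4 + (r + c) % 4 for r in range(4)) for c in range(4)]
# the "broken line" partition {a,n,o,d}, {e,b,c,h}, {i,f,g,l}, {m,j,k,p}
BROKEN = [(0, 13, 14, 3), (4, 1, 2, 7), (8, 5, 6, 11), (12, 9, 10, 15)]

count = sum(all(sum(b[i] for i in s) % 2 == 0
                for s in ROWS + COLS + DIAG_LR + BROKEN)
            for b in product((0, 1), repeat=16))
print(count)   # 32 compatible arrays
```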

 

Carrying this strategy to its extreme, we consider the set of four partitions shown below,

 

 

We will call these the sequential segment partitions. With null parities for the subsets abcd and bcde it follows that a = e, and similarly we see that the array must consist of four identical rows. This leads to an ambiguity set of size 8 (consisting of 4 complementary pairs), the rows of the eight arrays being 0000, 0011, 0101, 0110, 1001, 1010, 1100, and 1111 respectively. Clearly we can reduce this ambiguity set down to just a single complementary pair by specifying the parities of the “notched column” partition
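The notched-column figure is not reproduced here; in the Python sketch below I assume each notched column is an ordinary column with its bottom cell swapped into the next column, which has the stated effect. The segment windows wrap cyclically around the sequence a..p.

```python
from itertools import product

# sequential segment partitions: length-4 windows of the cell sequence
# a..p (0..15), shifted cyclically by 0, 1, 2, and 3
SEGS = [tuple((4*i + j + shift) % 16 for j in range(4))
        for shift in range(4) for i in range(4)]

arrays = [b for b in product((0, 1), repeat=16)
          if all(sum(b[i] for i in s) % 2 == 0 for s in SEGS)]
print(len(arrays))   # 8: four complementary pairs, each with identical rows
for b in arrays:
    assert b[0:4] == b[4:8] == b[8:12] == b[12:16]

# assumed notched-column shape: each column with its bottom cell swapped
# into the next column (the article's figure is not reproduced here)
NOTCHED = [(0, 4, 8, 13), (1, 5, 9, 14), (2, 6, 10, 15), (3, 7, 11, 12)]
final = [b for b in arrays
         if all(sum(b[i] for i in s) % 2 == 0 for s in NOTCHED)]
print(len(final))    # 2: a single complementary pair
```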

 

 

The same strategy works for square arrays of any size. For example, returning to the 9-bit arrays, we see that the three partitions formed by sequential segments lead to an ambiguity set consisting of four arrays with identical rows, exemplified by 000, 011, 101, and 110. Augmenting these three partitions with a “notched column” partition uniquely specifies the contents of the array. Likewise if we consider 25-bit square arrays we can specify the parities of the partitions consisting of sequential segments, leading to an ambiguity set consisting of 16 arrays with identical rows. In general for an N×N array with these partitions the rows are all possible strings of N bits with an even number of 1s. Hence the size of the ambiguity set with these partitions is 2^(N-1).

 

 

In each case the resulting ambiguity set can be reduced to size either 1 or 2 (accordingly as N is odd or even) by imposing the parities for one further partition, which we can take to be the “notched columns”. (Obviously the un-notched columns would also work for odd N, but the notched columns work for both.)  Thus for any N we have an explicit set of N+1 partitions of N^2 bits into subsets of size N such that the parities of those N(N+1) subsets are sufficient to specify the N^2 bits, up to the complementary pair for even N, and uniquely for odd N.
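These counts can be checked for N = 3 and N = 4 by brute force over all 2^(N^2) arrays (a Python sketch; the notched-column shape is my assumption, as noted above, since the figure is missing):

```python
from itertools import product

def seg_subsets(n):
    # the n sequential segment partitions: cyclic length-n windows
    return [tuple((n*i + j + shift) % (n*n) for j in range(n))
            for shift in range(n) for i in range(n)]

def notched_cols(n):
    # assumed shape: column j with its bottom cell moved to column (j+1) mod n
    return [tuple([r*n + j for r in range(n - 1)] + [(n - 1)*n + (j + 1) % n])
            for j in range(n)]

results = {}
for n in (3, 4):
    ok = lambda b, subsets: all(sum(b[i] for i in s) % 2 == 0 for s in subsets)
    arrays = [b for b in product((0, 1), repeat=n*n) if ok(b, seg_subsets(n))]
    final = [b for b in arrays if ok(b, notched_cols(n))]
    results[n] = (len(arrays), len(final))
print(results)   # {3: (4, 1), 4: (8, 2)}
```

The ambiguity set has size 2^(N-1) under the segment partitions alone, reduced to 1 for odd N and 2 for even N by the notched columns.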

 

With this somewhat artificial set of partitions it’s relatively easy to invert the solution, i.e., to infer the square array of bits given the parities of the partitions. We can begin by choosing any top row of bits that has the correct parity (as given explicitly by the first partition). Then by examining the other sequential segment partitions we can infer the remaining bits in each column.  For example, if the subsets abcd and bcde have the same parity, then a = e, whereas if those subsets have opposite parities then a and e differ. So the only uncertainty is in the initial choice of the first row. We check each possibility using the notched column parities, and only one (or two, for even N) of the arrays in the ambiguity set will satisfy those parities.
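This inversion procedure can be sketched in code (Python; the notched-column shape and the helper names are my assumptions, since the figure is missing). Since windows k and k+1 share three cells, their parities p_k and p_(k+1) give b[k+4] = b[k] + p[k] + p[k+1] (mod 2), so each candidate top row propagates to a full candidate array.

```python
from itertools import product
import random

# cyclic length-4 windows of the cell sequence a..p (0..15); the four
# sequential segment partitions together supply the parity of every window
WINDOWS = [tuple((k + j) % 16 for j in range(4)) for k in range(16)]
# assumed notched-column shape (the article's figure is not reproduced here)
NOTCHED = [(0, 4, 8, 13), (1, 5, 9, 14), (2, 6, 10, 15), (3, 7, 11, 12)]

def par(bits, subsets):
    return tuple(sum(bits[i] for i in s) % 2 for s in subsets)

def decode(wp, notch):
    """Recover the candidate arrays from window and notched-column parities."""
    out = []
    for top in product((0, 1), repeat=4):
        if sum(top) % 2 != wp[0]:
            continue                     # top row must match window 0's parity
        bits = list(top) + [0] * 12
        for k in range(12):              # b[k+4] = b[k] + p[k] + p[k+1] (mod 2)
            bits[k + 4] = bits[k] ^ wp[k] ^ wp[k + 1]
        if par(bits, WINDOWS) == wp and par(bits, NOTCHED) == notch:
            out.append(tuple(bits))
    return out

random.seed(1)
original = tuple(random.randint(0, 1) for _ in range(16))
cands = decode(par(original, WINDOWS), par(original, NOTCHED))
assert original in cands and len(cands) == 2   # the array and its complement
```

For this even-sized array exactly two candidates survive, the original and its complement, as expected.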

 

Notice that the conditions imposed by the sequential segment partitions represent N^2 equations in the N^2 unknowns, but the determinant of the coefficient matrix for this system of equations is zero, because the equations are not all independent: the subset parities of each partition sum to the overall parity of the entire array, so some subsets can be omitted without loss of information. We could replace the redundant subsets with the parities of the notched column partition, giving again N^2 equations in N^2 unknowns. For odd N this system has a unique solution modulo 2, so the determinant of the coefficient matrix must be odd, and hence non-zero over the reals. For even N, on the other hand, an array and its complement satisfy the same parities, so the solution modulo 2 is not unique and the determinant of the coefficient matrix must be even.
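The rank statements can be made concrete with a little GF(2) linear algebra (a Python sketch using bitmask Gaussian elimination; the notched-column shape is my assumption as before). The code computes the GF(2) rank of the segment system alone, and of the segment system augmented by the notched columns, for N = 3 and N = 4.

```python
def gf2_rank(vectors):
    # Gaussian elimination over GF(2) with integer bitmasks
    pivots = {}   # leading-bit position -> reduced basis vector
    for v in vectors:
        while v:
            hb = v.bit_length() - 1
            if hb in pivots:
                v ^= pivots[hb]
            else:
                pivots[hb] = v
                break
    return len(pivots)

def mask(subset):
    m = 0
    for i in subset:
        m |= 1 << i
    return m

ranks = {}
for n in (3, 4):
    segs = [tuple((n*i + j + shift) % (n*n) for j in range(n))
            for shift in range(n) for i in range(n)]
    notched = [tuple([r*n + j for r in range(n - 1)] + [(n - 1)*n + (j + 1) % n])
               for j in range(n)]
    ranks[n] = (gf2_rank([mask(s) for s in segs]),
                gf2_rank([mask(s) for s in segs + notched]))
print(ranks)   # {3: (7, 9), 4: (13, 15)}
```

The segment system alone is rank-deficient in both cases (rank 7 of 9, rank 13 of 16), while the augmented system reaches full rank 9 for N = 3 but only rank 15 for N = 4, reflecting the complementation ambiguity for even N.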

 
