Anti-Symmetric Arrays for Linear Recurrences


Given a linear recurrence

   s_{n+2} = A_n s_{n+1} + B_n s_n

where A_n and B_n are arbitrary functions of the index n, let s_k(a,b) denote the value of s_k for the initial values s_0 = a and s_1 = b. The values of s_k for any initial values can be expressed as a linear combination of the values of any two independent basis sequences. For example

   s_k(a,b) = a s_k(1,0) + b s_k(0,1)

Conversely, if the terms of a sequence are a fixed linear combination of the corresponding terms of two sequences that each satisfy the recurrence, then that sequence satisfies the recurrence.
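This linearity is easy to confirm numerically. The sketch below (not from the original text) uses an arbitrarily chosen pair of coefficient functions, A_n = 2n+3 and B_n = (n+1)^2, and checks that every solution is the stated combination of the two basis solutions s_k(1,0) and s_k(0,1).

```python
# Sketch: numerically confirm the basis decomposition for a sample
# recurrence s_{n+2} = A_n s_{n+1} + B_n s_n.  The coefficient
# functions A_n = 2n+3 and B_n = (n+1)^2 are arbitrary choices.

def s(k, a, b):
    """Value of s_k for the initial values s_0 = a, s_1 = b."""
    vals = [a, b]
    for n in range(k - 1):
        vals.append((2*n + 3)*vals[-1] + (n + 1)**2 * vals[-2])
    return vals[k]

# Any solution is the corresponding combination of the basis solutions.
assert all(s(k, 7, -4) == 7*s(k, 1, 0) - 4*s(k, 0, 1) for k in range(12))
```

Only linearity is used, so the same check works for any choice of coefficient functions.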


In another article we noted the identity

   s_k( s_k(0,1), s_k(-1,0) ) = 0

For simple recurrences with constant coefficients this identity is fairly intuitive, as can be seen from the array below, whose top two rows satisfy the simple Fibonacci recurrence with initial values 0,1 and -1,0 respectively.

     0    1    1    2    3    5    8
    -1    0   -1   -1   -2   -3   -5
    -1    1    0   -1   -1   -2   -3
    -2    1   -1    0   -1   -1   -2
    -3    2   -1    1    0   -1   -1
    -5    3   -2    1   -1    0   -1
    -8    5   -3    2   -1    1    0

The column values are computed from the top two values using the Fibonacci recurrence, and the identity above simply states that we reach a value of zero on the diagonal. In this simple case the top row begins with 0 and every column is just the reverse of the top row sequence up to that point, with alternating sign, so it obviously reaches zero on the diagonal. For other recurrences, especially with variable coefficients, the result is slightly less obvious, as shown by the array below for the recurrence s_{k+2} = (2k+3) s_{k+1} + (k+1)^2 s_k.

       0      1      3     19    160   1744
      -1      0     -1     -5    -44   -476
      -3      1      0      4     28    316
     -19      5     -4      0    -36   -324
    -160     44    -28     36      0    576
   -1744    476   -316    324   -576      0

To prove the identity in general, we need only note that, since any linear combination of two solution sequences is another solution sequence, every row of the array satisfies the recurrence, as well as every column. (Each row is a fixed linear combination of the top two rows, because every column is determined linearly by its top two entries with coefficients that depend only on the row index.) Also, the initial values in the first two columns are the negatives of the initial values in the first two rows, so the entire array must be anti-symmetric, which implies that the values on the diagonal are all zero.
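As an illustration (a sketch, not from the original text), the following builds the array for the variable-coefficient recurrence mentioned above, starting from the anti-symmetric 2x2 block of initial values 0,1 and -1,0, and verifies that the whole array is anti-symmetric with zeros on the diagonal.

```python
# Sketch: build the array for s_{k+2} = (2k+3) s_{k+1} + (k+1)^2 s_k
# from the anti-symmetric initial block [[0, 1], [-1, 0]], then check
# anti-symmetry and the vanishing diagonal.
N = 8
A = lambda k: 2*k + 3
B = lambda k: (k + 1)**2

S = [[0]*N for _ in range(N)]
S[0][1], S[1][0] = 1, -1
for i in range(2):                     # extend the top two rows
    for j in range(2, N):
        S[i][j] = A(j - 2)*S[i][j - 1] + B(j - 2)*S[i][j - 2]
for j in range(N):                     # fill every column downward
    for i in range(2, N):
        S[i][j] = A(i - 2)*S[i - 1][j] + B(i - 2)*S[i - 2][j]

assert all(S[i][j] == -S[j][i] for i in range(N) for j in range(N))
assert all(S[k][k] == 0 for k in range(N))
# every row also satisfies the recurrence horizontally
assert all(S[i][j] == A(j - 2)*S[i][j - 1] + B(j - 2)*S[i][j - 2]
           for i in range(N) for j in range(2, N))
```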


The same reasoning applies to any anti-symmetric 2x2 array of initial conditions, so we have the identity

   s_k( s_k(0,a), s_k(-a,0) ) = 0

for any constant a. Furthermore, since the entire array is anti-symmetric, we have

   s_j( s_k(0,a), s_k(-a,0) ) = - s_k( s_j(0,a), s_j(-a,0) )

for any two non-negative indices j and k.
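Both relations can be checked directly. The sketch below (an illustration, not from the original text) uses the sample recurrence s_{k+2} = (2k+3) s_{k+1} + (k+1)^2 s_k and an arbitrary constant a.

```python
# Sketch: check s_k(s_k(0,a), s_k(-a,0)) = 0 and the anti-symmetry
# relation s_j(s_k(0,a), s_k(-a,0)) = -s_k(s_j(0,a), s_j(-a,0))
# for the sample recurrence s_{k+2} = (2k+3) s_{k+1} + (k+1)^2 s_k.

def s(k, a, b):
    """Value of s_k for the initial values s_0 = a, s_1 = b."""
    vals = [a, b]
    for n in range(k - 1):
        vals.append((2*n + 3)*vals[-1] + (n + 1)**2 * vals[-2])
    return vals[k]

a = 5  # any constant
for j in range(8):
    assert s(j, s(j, 0, a), s(j, -a, 0)) == 0        # zero diagonal
    for k in range(8):                                # anti-symmetry
        assert s(j, s(k, 0, a), s(k, -a, 0)) == -s(k, s(j, 0, a), s(j, -a, 0))
```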


For a slightly more constructive proof of the vanishing of the diagonal terms, notice that we can apply the recurrence alternately in the vertical and horizontal directions just around the diagonal, as shown below:



Naturally these relations generalize to linear recurrences of any order. For example, with third-order recurrences, letting s_k(a,b,c) denote the value of s_k given the initial values s_0 = a, s_1 = b, s_2 = c, we have the identity

   s_k( s_k(0,a,b), s_k(-a,0,c), s_k(-b,-c,0) ) = 0

and the corresponding anti-symmetry relation for the off-diagonal terms of the array.
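The third-order case can likewise be checked numerically. In the sketch below (not from the original text) the identity is taken in the form s_k(s_k(0,a,b), s_k(-a,0,c), s_k(-b,-c,0)) = 0, i.e., starting from an anti-symmetric 3x3 block of initial values; the coefficient functions are arbitrary choices.

```python
# Sketch: check the third-order diagonal identity for a sample
# recurrence s_{k+3} = (k+2) s_{k+2} + 3 s_{k+1} + (k+1) s_k,
# whose coefficient functions are arbitrary choices.

def s(k, a, b, c):
    """Value of s_k for the initial values s_0 = a, s_1 = b, s_2 = c."""
    vals = [a, b, c]
    for n in range(k - 2):
        vals.append((n + 2)*vals[-1] + 3*vals[-2] + (n + 1)*vals[-3])
    return vals[k]

a, b, c = 2, -5, 7  # anti-symmetric initial block [[0,a,b],[-a,0,c],[-b,-c,0]]
for k in range(10):
    assert s(k, s(k, 0, a, b), s(k, -a, 0, c), s(k, -b, -c, 0)) == 0
```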


We can express these relations, for an nth-order recurrence, in terms of matrices by associating with each term in the array the square n x n matrix with that term in the upper left corner. For a simple example, consider the 3rd-order recurrence

   s_{k+3} = A s_{k+2} + B s_{k+1} + C s_k

where A, B, C are constant coefficients. We can form an array of elements s_{i,j} that satisfy this recurrence in both the vertical and horizontal directions, i.e.,

   s_{i,j+3} = A s_{i,j+2} + B s_{i,j+1} + C s_{i,j}
   s_{i+3,j} = A s_{i+2,j} + B s_{i+1,j} + C s_{i,j}

Now we define an array of 3 x 3 matrices S_{ij} by

            [ s_{i,j}      s_{i,j+1}    s_{i,j+2}   ]
   S_{ij} = [ s_{i+1,j}    s_{i+1,j+1}  s_{i+1,j+2} ]
            [ s_{i+2,j}    s_{i+2,j+1}  s_{i+2,j+2} ]

Given the operator matrix

       [ 0   1   0 ]
   Q = [ 0   0   1 ]
       [ C   B   A ]

we have the recurrence relations

   S_{i+1,j} = Q S_{ij}          S_{i,j+1} = S_{ij} Q^T

Consequently we can express S_{i+m,j+n} in terms of S_{i,j} by either of the formulas

   S_{i+m,j+n} = Q^m [ S_{i,j} (Q^T)^n ]          S_{i+m,j+n} = [ Q^m S_{i,j} ] (Q^T)^n

To show that these are equivalent, we make use of the identity (XY)^T = Y^T X^T, which enables us to write both expressions as

   S_{i+m,j+n} = Q^m S_{i,j} (Q^n)^T

In particular, taking i = j = 0 and writing

   S_{mn} = Q^m S_{00} (Q^n)^T

we have

   (S_{mn})^T = Q^n (S_{00})^T (Q^m)^T

Hence if S_{00} is anti-symmetric, meaning that (S_{00})^T = -S_{00}, then

   (S_{mn})^T = -Q^n S_{00} (Q^m)^T = -S_{nm}

For the matrices on the diagonal this implies

   (S_{nn})^T = -S_{nn}

which signifies that S_{nn} is anti-symmetric just if S_{00} is anti-symmetric. This is another proof that if the diagonal terms of the initial matrix are zero, then the diagonal terms of the entire array of s_{ij} values are all zero. The example above was for a recurrence with a constant coefficient matrix Q, but we could replace the powers of this matrix with products of individual coefficient matrices Q_j to arrive at analogous results.
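A quick numerical check of this transpose argument (a sketch, with an arbitrarily chosen companion matrix Q and anti-symmetric S_00):

```python
import numpy as np

# Sketch: if S_00 is anti-symmetric, then S_nn = Q^n S_00 (Q^n)^T is
# anti-symmetric for every n.  Q and S_00 below are arbitrary choices.
Q = np.array([[0, 1, 0], [0, 0, 1], [1, 2, 1]])
S00 = np.array([[0, 1, 2], [-1, 0, 3], [-2, -3, 0]])

Qn = np.eye(3, dtype=int)            # accumulates Q^n
for n in range(7):
    Snn = Qn @ S00 @ Qn.T
    assert np.array_equal(Snn.T, -Snn)          # block stays anti-symmetric
    assert all(Snn[i, i] == 0 for i in range(3))  # so its diagonal is zero
    Qn = Qn @ Q
```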


We could generalize this idea to higher-dimensional arrays whose terms satisfy a given recurrence in each direction.

