AntiSymmetric Arrays for Linear Recurrences 

Given a linear recurrence

    s_{n+2} = A_{n} s_{n+1} + B_{n} s_{n}

where A_{n} and B_{n} are arbitrary functions of the index n, let s_{k}(a,b) denote the value of s_{k} for the initial values s_{0} = a and s_{1} = b. The value of s_{k} for arbitrary initial values can be expressed as a linear combination of the corresponding terms of any two independent basis sequences. For example,

    s_{k}(a,b) = a s_{k}(1,0) + b s_{k}(0,1)

Conversely, if the terms of a sequence are a fixed linear combination of the corresponding terms of two sequences that each satisfy the recurrence, then that sequence satisfies the recurrence. 
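This linear-combination property is easy to check numerically. The sketch below verifies that s_{k}(a,b) = a s_{k}(1,0) + b s_{k}(0,1) for a second-order recurrence with variable coefficients; the particular coefficient functions A and B are arbitrary illustrative choices, not taken from the text.

```python
# Sketch: verify that s_k(a,b) is the linear combination a*s_k(1,0) + b*s_k(0,1).
# The coefficient functions A and B below are arbitrary illustrative choices.

def s(k, a, b, A, B):
    """k-th term of s_{n+2} = A(n) s_{n+1} + B(n) s_n with s_0 = a, s_1 = b."""
    s0, s1 = a, b
    for n in range(k):
        s0, s1 = s1, A(n) * s1 + B(n) * s0
    return s0

A = lambda n: 2 * n + 3
B = lambda n: (n + 1) ** 2

a, b = 7, -4
for k in range(10):
    assert s(k, a, b, A, B) == a * s(k, 1, 0, A, B) + b * s(k, 0, 1, A, B)
```

The same check works for any coefficient functions, since the verification uses exact integer arithmetic.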

In another article we noted the identity

    s_{k}( s_{k}(0,1), -s_{k}(1,0) ) = 0

For simple recurrences with constant coefficients this identity is fairly intuitive, as can be seen from the array below, whose top two rows satisfy the simple Fibonacci recurrence with initial values 0,1 and -1,0 respectively.

     0    1    1    2    3    5
    -1    0   -1   -1   -2   -3
    -1    1    0    1    1    2
    -2    1   -1    0    1    1
    -3    2   -1    1    0    1
    -5    3   -2    1   -1    0

The column values are computed from the top two values using the Fibonacci recurrence, and the identity above simply states that we reach a value of zero on the diagonal. In this simple case the top row begins with 0 and every column is just the reverse of the top row sequence up to that point, with alternating sign, so it obviously reaches zero on the diagonal. For other recurrences, especially with variable coefficients, the result is slightly less obvious, as shown by the array below for the recurrence s_{k+2} = (2k+3) s_{k+1} + (k+1)^{2} s_{k}.

        0      1      3     19    160   1744
       -1      0     -1     -5    -44   -476
       -3      1      0      4     28    316
      -19      5     -4      0    -36   -324
     -160     44    -28     36      0    576
    -1744    476   -316    324   -576      0

To prove the identity in general, we need only note that, since any linear combination of two solution sequences is another solution sequence, every row satisfies the recurrence, as well as every column. Also, the initial values in the first two columns are the negatives of the initial values in the first two rows, so the entire array must be antisymmetric, which implies that the values on the diagonal are all zero. 
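This construction can be carried out mechanically. The sketch below (the helper name build_array and the 6x6 size are choices made here) seeds the top two rows with the antisymmetric block [[0,1],[-1,0]], extends the rows horizontally and then the columns vertically with the recurrence s_{k+2} = (2k+3) s_{k+1} + (k+1)^2 s_{k}, and confirms that the resulting array is antisymmetric with a zero diagonal.

```python
# Sketch: build a doubly-recurrent array from an antisymmetric 2x2 seed and
# check that the whole array is antisymmetric (hence zero on the diagonal).

def build_array(N, A, B):
    rows = [[0, 1], [-1, 0]]                 # antisymmetric initial block
    for r in rows:                           # extend the top two rows
        for n in range(N - 2):
            r.append(A(n) * r[-1] + B(n) * r[-2])
    for n in range(N - 2):                   # extend every column downward
        rows.append([A(n) * rows[-1][j] + B(n) * rows[-2][j] for j in range(N)])
    return rows

A = lambda n: 2 * n + 3                      # coefficients of the recurrence
B = lambda n: (n + 1) ** 2                   # s_{k+2} = (2k+3) s_{k+1} + (k+1)^2 s_k

arr = build_array(6, A, B)
assert all(arr[k][k] == 0 for k in range(6))
assert all(arr[i][j] == -arr[j][i] for i in range(6) for j in range(6))
```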

The same reasoning applies to any antisymmetric 2x2 array of initial conditions, so we have the identity

    s_{k}( s_{k}(0,a), s_{k}(-a,0) ) = 0

for any constant a. Furthermore, since the entire array is antisymmetric, we have

    s_{j}( s_{k}(0,a), s_{k}(-a,0) ) = -s_{k}( s_{j}(0,a), s_{j}(-a,0) )

for any two nonnegative indices j and k.
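Writing T(j,k) for the array entry in row j and column k (a helper defined just for this check, not a name from the text), the antisymmetry T(j,k) = -T(k,j) can be spot-checked numerically; the coefficient functions and the constant c below are arbitrary choices.

```python
# Sketch: check antisymmetry of the array seeded with the antisymmetric block
# [[0, c], [-c, 0]].  T(j, k) applies the recurrence k steps along the top two
# rows, then j steps down the resulting column.

def s(k, a, b, A, B):
    s0, s1 = a, b
    for n in range(k):
        s0, s1 = s1, A(n) * s1 + B(n) * s0
    return s0

A = lambda n: 2 * n + 3
B = lambda n: (n + 1) ** 2
c = 5

def T(j, k):
    return s(j, s(k, 0, c, A, B), s(k, -c, 0, A, B), A, B)

for j in range(6):
    for k in range(6):
        assert T(j, k) == -T(k, j)
```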

For a slightly more constructive proof of the vanishing of the diagonal terms, notice that we can apply the recurrence alternately in the vertical and horizontal directions just around the diagonal.

Naturally these relations generalize to linear recurrences of any order. For example, with third-order recurrences, letting s_{k}(a,b,c) denote the value of s_{k} given the initial values s_{0} = a, s_{1} = b, s_{2} = c, we have the identity

    s_{k}( s_{k}(0,a,b), s_{k}(-a,0,c), s_{k}(-b,-c,0) ) = 0

and the corresponding antisymmetry relation for the off-diagonal terms of the array.
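The third-order case can be spot-checked the same way. In the sketch below the coefficient functions, the seed values a, b, c, and the helper names are all arbitrary choices; the top three rows are seeded with the antisymmetric block [[0,a,b],[-a,0,c],[-b,-c,0]] and each diagonal entry of the resulting array is confirmed to vanish.

```python
# Sketch: verify the vanishing diagonal for a third-order recurrence with an
# antisymmetric 3x3 block of initial values.

def s3(k, init, A, B, C):
    """k-th term of s_{n+3} = A(n) s_{n+2} + B(n) s_{n+1} + C(n) s_n."""
    t = list(init)
    for n in range(k):
        t = [t[1], t[2], A(n) * t[2] + B(n) * t[1] + C(n) * t[0]]
    return t[0]

A = lambda n: n + 2                  # arbitrary variable coefficients
B = lambda n: 2 * n - 1
C = lambda n: 3

a, b, c = 2, -7, 5
seed = [[0, a, b], [-a, 0, c], [-b, -c, 0]]   # antisymmetric initial block

def row(i, k):                       # entry k of the i-th seeded row
    return s3(k, seed[i], A, B, C)

def diag(k):                         # diagonal entry: k steps down column k
    return s3(k, (row(0, k), row(1, k), row(2, k)), A, B, C)

for k in range(8):
    assert diag(k) == 0
```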

We can express these relations, for an nth-order recurrence, in terms of matrices by associating with each term in the array the square n x n matrix with that term in the upper left corner. For a simple example, consider the third-order recurrence

    s_{k+3} = A s_{k+2} + B s_{k+1} + C s_{k}

where A, B, C are constant coefficients. We can form an array of elements s_{i,j} that satisfy this recurrence in both the vertical and horizontal directions, i.e.,

    s_{i,j+3} = A s_{i,j+2} + B s_{i,j+1} + C s_{i,j}        s_{i+3,j} = A s_{i+2,j} + B s_{i+1,j} + C s_{i,j}

Now we define an array of 3 x 3 matrices S_{ij} by

             [ s_{i,j}    s_{i,j+1}    s_{i,j+2}   ]
    S_{ij} = [ s_{i+1,j}  s_{i+1,j+1}  s_{i+1,j+2} ]
             [ s_{i+2,j}  s_{i+2,j+1}  s_{i+2,j+2} ]

Given the operator matrix

        [ 0  0  C ]
    Q = [ 1  0  B ]
        [ 0  1  A ]

we have the recurrence relations

    S_{i,j+1} = S_{i,j} Q        S_{i+1,j} = Q^{T} S_{i,j}

Consequently we can express S_{i+m,j+n} in terms of S_{i,j} by either of the formulas

    S_{i+m,j+n} = (Q^{T})^{m} ( S_{i,j} Q^{n} )        S_{i+m,j+n} = ( (Q^{T})^{m} S_{i,j} ) Q^{n}

according as we advance first in the horizontal or the vertical direction. To show that these are equivalent, we make use of the identity (XY)^{T} = Y^{T}X^{T}, which gives (Q^{T})^{m} = (Q^{m})^{T} and (with associativity) enables us to write both expressions as

    S_{i+m,j+n} = (Q^{m})^{T} S_{i,j} Q^{n}

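These shift relations are easy to verify numerically. In the sketch below, the doubly-recurrent array is taken to be the rank-one product u_i v_j of two solution sequences (such a product satisfies the recurrence in each index separately); the coefficients, the initial values, and the particular form of Q used here are choices made for this check, one consistent convention among several.

```python
import numpy as np

# Sketch: check the shift relations S_{i,j+1} = S_{ij} Q and
# S_{i+1,j} = Q^T S_{ij}, and the combined power formula, for the
# constant-coefficient recurrence s_{k+3} = A s_{k+2} + B s_{k+1} + C s_k.

A, B, C = 2, -3, 5                     # arbitrary constant coefficients

def seq(init, n):
    t = list(init)
    while len(t) < n:
        t.append(A * t[-1] + B * t[-2] + C * t[-3])
    return t

u = seq([1, 0, 2], 12)                 # two solution sequences; their outer
v = seq([3, 1, -1], 12)                # product satisfies the recurrence in
s = np.outer(u, v)                     # both the row and column directions

S = lambda i, j: s[i:i+3, j:j+3]       # 3x3 block with s_{ij} at upper left
Q = np.array([[0, 0, C],
              [1, 0, B],
              [0, 1, A]])              # operator matrix (one consistent choice)

i, j, m, n = 2, 4, 2, 3
assert np.array_equal(S(i, j + 1), S(i, j) @ Q)
assert np.array_equal(S(i + 1, j), Q.T @ S(i, j))
assert np.array_equal(S(i + m, j + n),
                      np.linalg.matrix_power(Q.T, m) @ S(i, j)
                      @ np.linalg.matrix_power(Q, n))
```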
Since

    S_{mn} = (Q^{m})^{T} S_{00} Q^{n}

we have

    (S_{mn})^{T} = (Q^{n})^{T} (S_{00})^{T} Q^{m}

Hence if S_{00} is antisymmetric, meaning that (S_{00})^{T} = -S_{00}, then

    (S_{mn})^{T} = -(Q^{n})^{T} S_{00} Q^{m} = -S_{nm}

For the matrices on the diagonal this implies

    (S_{nn})^{T} = -S_{nn}

which signifies that S_{nn} is antisymmetric just if S_{00} is antisymmetric. This is another proof that if the initial matrix is antisymmetric, then the diagonal terms of the entire array of s_{i,j} values are all zero. The example above was for a recurrence with a constant coefficient matrix Q, but we could replace the powers of this matrix with products of individual coefficient matrices Q_{j} to arrive at analogous results for variable coefficients.
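The conclusion that an antisymmetric seed block forces the whole array to be antisymmetric can also be confirmed numerically for a third-order example; the coefficients, array size, and seed values in this sketch are arbitrary choices.

```python
import numpy as np

# Sketch: an antisymmetric 3x3 seed block forces the whole doubly-recurrent
# array to be antisymmetric, so every diagonal entry is zero.

A, B, C = 2, -3, 5                     # arbitrary constant coefficients
N = 10
seed = [[0, 1, 4], [-1, 0, -2], [-4, 2, 0]]    # antisymmetric S_00

rows = [list(r) for r in seed]
for r in rows:                          # extend the top three rows
    while len(r) < N:
        r.append(A * r[-1] + B * r[-2] + C * r[-3])
while len(rows) < N:                    # extend every column downward
    rows.append([A * rows[-1][j] + B * rows[-2][j] + C * rows[-3][j]
                 for j in range(N)])
s = np.array(rows)

assert np.array_equal(s.T, -s)                  # entire array antisymmetric
assert all(s[k, k] == 0 for k in range(N))      # hence zero diagonal
S = lambda i, j: s[i:i+3, j:j+3]
assert all(np.array_equal(S(n, n).T, -S(n, n)) for n in range(N - 2))
```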

We could generalize this idea to higher-dimensional arrays whose terms satisfy a given recurrence in each direction.
