2.2  Optimum First Order Solution

 

2.2.1  Basic Assumptions

 

As noted earlier, equation (2.1-1) is indeterminate when the input variable is specified only by a sequence of discrete values, unless some assumption is made as to the behavior of the input variable between the discrete values. When only two discrete values (the current and one past value) are available at any one time, as is the case in virtually all first-order algorithms, the best possible assumption is that “x” varies linearly with time between the two discrete values. If x were significantly non-linear over the interval, then clearly two values would be insufficient to specify the function. So, we can represent the input function over an interval Δt by

    x(t) = xp + (xc − xp) t/Δt ,     0 ≤ t ≤ Δt

where the subscripts p and c denote “past” and “current” discrete values, corresponding to the initial and final values for that interval. The derivative of this x(t) during the interval is then just

    dx/dt = (xc − xp)/Δt

so equation (2.1-1) can be rewritten

    τD dy/dt + y = τN (xc − xp)/Δt + xp + (xc − xp) t/Δt        (2.2.1-1)

Now, if τD and τN are known constants, and we are given the discrete values of xp, yp, and xc for any given interval Δt, then equation (2.2.1-1) can be solved exactly for the current output value yc. This is the basis for the optimum solution for constant τ presented in Section 2.2.3.

 

If the coefficients τD and τN are variable, the situation is somewhat more complicated. We will consider only the case where these coefficients may be treated as functions of time, so that (2.1-1) remains linear. In this case the coefficients are specified for a given interval by their initial and final values (i.e., past and current). As with the input function x(t), this in itself is insufficient specification unless the coefficients can be assumed to vary linearly with time between the past and current instants. Therefore, we make the assumption that

    τN(t) = τNP + (τNC − τNP) t/Δt ,     τD(t) = τDP + (τDC − τDP) t/Δt

On this basis, equation (2.2.1-1) can be written

    [τDP + (τDC − τDP) t/Δt] dy/dt + y = [τNP + (τNC − τNP) t/Δt] (xc − xp)/Δt + xp + (xc − xp) t/Δt        (2.2.1-2)

where the subscripts P and C on the time constants denote, as before, the past (initial) and current (final) values for the interval.

Knowing the initial and final values of x, τN, and τD, and the initial value of y for a given Δt, equation (2.2.1-2) can be solved exactly for the final (current) value of y. Therefore, this equation is the basis for the optimum solution to be presented in Section 2.2.4 for variable τ.

 

 

2.2.2  General First Order Recurrence Formulas

 

Before deriving the actual solutions of equations (2.2.1-1) and (2.2.1-2) we will consider the form these solutions must take. We require a formula that can be applied recursively to compute the current value of y based on the current value of x and the past values of x and y. We will find that, in general, the solution can be expressed as a linear combination of the three given values, i.e.,

    yc = f1 yp + f2 xp + f3 xc        (2.2.2-1)

where f1, f2, and f3 are all functions of τD, τN, and Δt. However, these three functions are obviously not independent, because if yp = xp = xc = μ then clearly yc = μ, so we have

    μ = μ f1 + μ f2 + μ f3 ,     which implies     f1 + f2 + f3 = 1

Taking advantage of this fact, we can eliminate f1 from equation (2.2.2-1) to give

    yc = yp + (f2 + f3)(xp − yp) + f3 (xc − xp)

which can be written

    yc = yp + A (xp − yp) + B (xc − xp)        (2.2.2-2)

where the coefficients A and B are defined as

    A = f2 + f3 ,     B = f3        (2.2.2-3)

Any valid lead/lag algorithm that computes yc as a linear function of yp, xp, and xc must be expressible in the form of equation (2.2.2-2), regardless of the assumptions made concerning the interpolation of the independent variable. Therefore, we define equation (2.2.2-2) as the standard recurrence formula for digital lead/lag simulations. In subsequent sections we will frequently identify algorithms simply by specifying their standard recurrence coefficients A and B.
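The standard recurrence formula is trivial to implement; the sketch below (in Python, with illustrative names not taken from the source) makes the point that any first-order lead/lag algorithm is characterized entirely by its two coefficients A and B.

```python
def lead_lag_step(yp, xp, xc, A, B):
    """One step of the standard recurrence formula (2.2.2-2):

        yc = yp + A*(xp - yp) + B*(xc - xp)

    where A and B are the recurrence coefficients of the chosen algorithm.
    """
    return yp + A * (xp - yp) + B * (xc - xp)
```

Note that the steady-state property used above to eliminate f1 is built into this form: if yp = xp = xc then yc = yp for any choice of A and B.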

 

 

2.2.3  Optimum Recurrence Coefficients for Constant τ

 

When τD and τN are constant, the optimum recurrence coefficients can be determined by solving equation (2.2.1-1), which can be rewritten as follows

    dy/dt + y/τD = [xp + (xc − xp)(τN + t)/Δt] / τD

Recall that the solution of any equation of the form

    dy/dt + F(t) y = G(t)        (2.2.3-1)

where F(t) and G(t) are functions of t, is given by

    y = e^(−∫F dt) [ ∫ G e^(∫F dt) dt + K ]        (2.2.3-2)

where K is a constant of integration. Making the substitutions

    F(t) = 1/τD ,     G(t) = [xp + (xc − xp)(τN + t)/Δt] / τD

we have

    y e^(t/τD) = (1/τD) ∫ [xp + (xc − xp)(τN + t)/Δt] e^(t/τD) dt + K

Performing the integration and dividing through by e^(t/τD) gives

    y = xp + (xc − xp)(τN − τD + t)/Δt + K e^(−t/τD)

To determine K we apply the initial condition y = yp at t = 0, which gives

    K = yp − xp − (xc − xp)(τN − τD)/Δt

Inserting this back into the preceding equation, recalling the definition of x(t), and noting that y = yc at t = Δt, we arrive (after some algebraic manipulation) at the result

    yc = f1 yp + f2 xp + f3 xc        (2.2.3-3)

where

    f1 = e^(−Δt/τD)

    f2 = (τD − τN)(1 − e^(−Δt/τD))/Δt − e^(−Δt/τD)

    f3 = 1 − (τD − τN)(1 − e^(−Δt/τD))/Δt

Note that as required the sum of f1, f2, and f3 is identically 1. Substituting the expressions for f2 and f3 into equations (2.2.2-3) gives the optimum recurrence coefficients for constant τ:

    A = 1 − e^(−Δt/τD) ,     B = 1 − (τD − τN) A/Δt        (2.2.3-4)
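A sketch of these coefficients in Python (function names are illustrative, not from the source). Since a ramp input satisfies the linear-interpolation assumption exactly, the recurrence driven by a sampled ramp must reproduce the exact continuous response at every sample, which provides a simple check:

```python
import math

def optimum_coeffs_const(tau_n, tau_d, dt):
    # Optimum recurrence coefficients for constant tau, equation (2.2.3-4):
    #   A = 1 - exp(-dt/tau_d),   B = 1 - (tau_d - tau_n)*A/dt
    A = 1.0 - math.exp(-dt / tau_d)
    B = 1.0 - (tau_d - tau_n) * A / dt
    return A, B

def lead_lag(xs, y0, tau_n, tau_d, dt):
    # Apply the standard recurrence yc = yp + A*(xp - yp) + B*(xc - xp)
    # to a uniformly sampled input sequence xs, starting from output y0.
    A, B = optimum_coeffs_const(tau_n, tau_d, dt)
    ys = [y0]
    for xp, xc in zip(xs, xs[1:]):
        ys.append(ys[-1] + A * (xp - ys[-1]) + B * (xc - xp))
    return ys
```

For example, a pure lag (τN = 0) driven by the ramp x(t) = t has the exact solution y(t) = t − τD(1 − e^(−t/τD)), and the recurrence above tracks it to machine precision regardless of the step size.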

2.2.4  Optimum Recurrence Coefficients for Variable τ

 

When τN and τD are variable, we base the optimum recurrence coefficients on equation (2.2.1-2). For convenience we define the following parameters

    a = (τDC − τDP)/Δt ,     b = τDP

    c = [1 + (τNC − τNP)/Δt] (xc − xp)/Δt ,     d = xp + τNP (xc − xp)/Δt

In these terms equation (2.2.1-2) can be rewritten

    dy/dt + y/(a t + b) = (c t + d)/(a t + b)        (2.2.4-1)

This is in the form of equation (2.2.3-1), so the general solution is given by equation (2.2.3-2) where

    F(t) = 1/(a t + b) ,     G(t) = (c t + d)/(a t + b)

Therefore we have

Since ∫F dt = (1/a) ln(a t + b), the integrating factor is e^(∫F dt) = (a t + b)^(1/a), and

    y (a t + b)^(1/a) = ∫ (c t + d)(a t + b)^(1/a − 1) dt + K

Performing the integration and dividing through by (a t + b)^(1/a) gives

    y = c (a t + b)/[a (1 + a)] + d − c b/a + K (a t + b)^(−1/a)        (2.2.4-2)

The constant of integration K is determined by the initial condition y = yp at t = 0, which gives

    K = b^(1/a) [ yp − d + c b/(1 + a) ]

We can now compute yc by evaluating equation (2.2.4-2) at t = Δt. If we then replace a, b, c, and d with their respective definitions, we arrive at the result

    yc = f1 yp + f2 xp + f3 xc        (2.2.4-3)

where

    f1 = (τDP/τDC)^(Δt/(τDC − τDP))

    f2 = (1 − f1)(1 − τNP/Δt) − (Δt + τNC − τNP)[Δt − τDP (1 − f1)] / [Δt (Δt + τDC − τDP)]

    f3 = τNP (1 − f1)/Δt + (Δt + τNC − τNP)[Δt − τDP (1 − f1)] / [Δt (Δt + τDC − τDP)]

Note that, as required, the sum of f1, f2, and f3 is identically 1. Substituting the expressions for f2 and f3 into equations (2.2.2-3) gives the optimum recurrence coefficients for variable τ:

    A = 1 − (τDP/τDC)^(Δt/(τDC − τDP))        (2.2.4-4)

    B = (τNP/Δt) A + (Δt + τNC − τNP)(Δt − τDP A) / [Δt (Δt + τDC − τDP)]        (2.2.4-5)
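A Python sketch of these coefficients (names are illustrative, not from the source). Because the recurrence is the exact solution of equation (2.2.1-2) over the interval, advancing one full interval must agree with advancing two half-intervals through the linearly interpolated midpoint values of x, τN, and τD, which gives a useful self-consistency check:

```python
def optimum_coeffs_var(tau_np, tau_nc, tau_dp, tau_dc, dt):
    # Optimum recurrence coefficients for linearly varying time constants,
    # equations (2.2.4-4) and (2.2.4-5).  Assumes tau_dc != tau_dp (for
    # constant tau use the (2.2.3-4) forms) and tau_dp, tau_dc of like sign.
    A = 1.0 - (tau_dp / tau_dc) ** (dt / (tau_dc - tau_dp))
    B = (tau_np / dt) * A \
        + (dt + tau_nc - tau_np) * (dt - tau_dp * A) \
        / (dt * (dt + tau_dc - tau_dp))
    return A, B
```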

2.2.5  Discussion

 

When referring to the recurrence formulas (2.2.3-3) and (2.2.4-3) the term “optimum” is justified in the following sense: The total solution y(t) equals the solution to the homogeneous equation plus a particular solution for the given forcing function. The only ambiguity is in the particular solution, which depends on how we choose to interpolate the independent function x(t). Note that the forcing function has no effect on the homogeneous solution, and that the particular solution is independent of the actual y(t) values. It follows that the coefficients of the “y terms” in the recurrence relation must be exactly as given in (2.2.3-3) and (2.2.4-3), regardless of how the x(t) function is interpolated. The only ambiguity in the recurrence formula is in the “x-term” coefficients (f2 and f3), and even those have a fully determined sum. In view of this, the term “optimum” is used throughout this document to denote a recurrence formula that has the exact “y-term” coefficient(s), and for which the “x-term” coefficients sum to the correct value.

 

With regard to the variable-τ solution, we would expect to find that it has as a special case the constant-τ solution, and in fact if τN and τD are constant (i.e., τNC = τNP and τDC = τDP) then clearly equation (2.2.4-5) is equivalent to the constant-τ expression for B given by equation (2.2.3-4). However, it may not be self-evident that equation (2.2.4-4) reduces to the A of equation (2.2.3-4) for constant τ. To show that it does, we can rewrite equation (2.2.4-4) as follows

    A = 1 − exp[ −(Δt/(τDC − τDP)) ln(τDC/τDP) ]        (2.2.5-1)

If we now recall the series expansion of the natural log

    ln(z) = (z − 1) − (z − 1)²/2 + (z − 1)³/3 − …

we see that if the ratio τDP/τDC is close to 1 we can approximate the natural log in equation (2.2.5-1) very accurately by just the first term of the expansion, i.e.,

    ln(τDC/τDP) ≈ (τDC/τDP) − 1 = (τDC − τDP)/τDP

in which case equation (2.2.5-1) becomes

    A ≈ 1 − exp[ −(Δt/(τDC − τDP)) · (τDC − τDP)/τDP ] = 1 − e^(−Δt/τDP)

This makes it clear that as τDC − τDP goes to zero, and τDC approaches τDP, the variable-τ expression for A does in fact reduce to the constant-τ case given by equation (2.2.3-4).
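This reduction is easy to confirm numerically; a minimal sketch (variable names are illustrative), assuming the variable-τ form A = 1 − (τDP/τDC)^(Δt/(τDC − τDP)) and the constant-τ form A = 1 − e^(−Δt/τD):

```python
import math

dt = 0.25
tau_dp = 1.0
tau_dc = 1.001 * tau_dp   # tau_D changes by only 0.1% over the interval

# Variable-tau A versus the constant-tau A evaluated with tau_D = tau_DP.
A_var = 1.0 - (tau_dp / tau_dc) ** (dt / (tau_dc - tau_dp))
A_const = 1.0 - math.exp(-dt / tau_dp)
```

The two values agree to roughly the size of the fractional change in τD over the interval.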

 

We now demonstrate that the constant-τ response approaches the variable-τ response if a sufficiently small time interval Δt is used. First, notice that as Δt goes to zero the ratio of τDP to τDC can be made arbitrarily close to 1 for any finite rate of change of τD. Also, as we have already seen, equation (2.2.4-4) is equivalent to A of equation (2.2.3-4) in the limit as τDP/τDC goes to 1. Therefore, as Δt goes to zero the A coefficient for both constant τ and variable τ is given by equation (2.2.3-4). Furthermore, if we recall the power series expansion

    e^(−z) = 1 − z + z²/2! − z³/3! + …

we see that as Δt goes to zero, A of equation (2.2.3-4) becomes simply A = Δt/τD. Substituting this into the expressions for B in equations (2.2.3-4) and (2.2.4-5), along with the stipulation that τNC ≈ τNP for a sufficiently small Δt, we see that both solutions give B = τN/τD. Thus, for sufficiently small Δt, the constant-τ and variable-τ solutions both reduce to the form

    yc = yp + (Δt/τD)(xp − yp) + (τN/τD)(xc − xp)

Notice that if Δt is actually zero, but xc – xp does not vanish, then this can be written as

    yc = yp + (τN/τD)(xc − xp)

which is the expected response to a step input, viz, an instantaneous change in x yields an instantaneous change in y with a magnitude amplified by the factor τN/τD.
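These limiting values are easy to verify numerically; a minimal sketch, assuming the constant-τ forms A = 1 − e^(−Δt/τD) and B = 1 − (τD − τN)A/Δt:

```python
import math

tau_n, tau_d = 0.5, 2.0
dt = 1e-4   # much smaller than either time constant

A = 1.0 - math.exp(-dt / tau_d)
B = 1.0 - (tau_d - tau_n) * A / dt
# For small dt, A approaches dt/tau_d and B approaches tau_n/tau_d.
```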

 

Examination of equation (2.2.4-4) also shows that the variable-τ solution requires that τDP and τDC have the same sign, since if they did not, the ratio τDP/τDC would be negative and the result of the exponentiation would, in general, be complex. Another way of stating this restriction is that τD can never pass through zero, which effectively prohibits a change of sign, since a continuous τD cannot change sign without passing through zero. In one sense, the “reason” for this restriction is that we divided by τD when we wrote equation (2.2.4-1). More fundamentally, equation (2.1-1) shows that when τD is zero the differential term in y vanishes and the equation is singular.

 

It may appear that equation (2.2.4-5) also exhibits a singularity, specifically when a = (τDC − τDP)/Δt equals −1, i.e., when τDC = τDP − Δt. However, as long as the absolute value of Δt/τDP is less than 1 (which corresponds to the requirement that τD never pass through zero during the interval) it can be shown that B remains analytic at a = −1, and is given by

    B = τNP/τDP + (Δt + τNC − τNP) [Δt + τDC ln(τDC/τDP)] / Δt²        (with τDC = τDP − Δt)
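The removable nature of this apparent singularity can be checked numerically. The sketch below (names are illustrative) evaluates equation (2.2.4-5) slightly to either side of a = −1 and compares the results against a closed-form limit obtained here by l'Hôpital's rule (the limit expression in the code is this sketch's own evaluation, not a formula quoted from the source):

```python
import math

def B_var(tau_np, tau_nc, tau_dp, tau_dc, dt):
    # Equation (2.2.4-5), valid away from a = -1.
    A = 1.0 - (tau_dp / tau_dc) ** (dt / (tau_dc - tau_dp))
    return (tau_np / dt) * A \
        + (dt + tau_nc - tau_np) * (dt - tau_dp * A) \
        / (dt * (dt + tau_dc - tau_dp))

tau_np, tau_nc, tau_dp, dt = 0.3, 0.4, 1.0, 0.5   # note dt/tau_dp < 1
tau_dc0 = tau_dp - dt                              # a = -1 exactly

# Limit of B as a -> -1, evaluated by l'Hopital's rule:
B_lim = tau_np / tau_dp + (dt + tau_nc - tau_np) \
        * (dt + tau_dc0 * math.log(tau_dc0 / tau_dp)) / dt**2

eps = 1e-6
B_plus = B_var(tau_np, tau_nc, tau_dp, tau_dc0 + eps * dt, dt)
B_minus = B_var(tau_np, tau_nc, tau_dp, tau_dc0 - eps * dt, dt)
```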
