Powers and Roots of the Alternating Geometric Series


    Consider the alternating geometric series

S = 1 - x + x² - x³ + x⁴ - ... = 1/(1+x)    (Eq'n 1)

for |x| < 1. We require that criterion to ensure that the series converges to a finite value.
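That convergence claim is easy to probe numerically. A minimal sketch in Python (exact rational arithmetic via the standard fractions module; the function name is mine), showing the partial sums at x = 1/2 closing in on 1/(1 + 1/2) = 2/3:

```python
from fractions import Fraction

def alt_geo_partial_sum(x, terms):
    """Partial sum 1 - x + x^2 - x^3 + ... of the alternating geometric series."""
    return sum((-x) ** n for n in range(terms))

x = Fraction(1, 2)
partial = alt_geo_partial_sum(x, 40)
exact = 1 / (1 + x)                     # the series' limit for |x| < 1
assert abs(partial - exact) < Fraction(1, 10**9)
```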

    We now ask what we get when we multiply that series by itself. What does S² look like? To answer that question we apply the process of multiplication by way of partial products, displaying the partial products in a matrix:

            1      -x      x²     -x³    ...
     1      1      -x      x²     -x³
    -x     -x      x²     -x³     x⁴
     x²     x²    -x³      x⁴    -x⁵
    -x³    -x³     x⁴     -x⁵     x⁶
    ...

When we add up those partial products, proceeding diagonally from upper right to lower left, we get

S² = 1 - 2x + 3x² - 4x³ + 5x⁴ - ...    (Eq'n 2)

We can see that we could also have obtained that result if we had simply differentiated the series, term by term, with respect to x and multiplied the result by minus one, as we can verify by differentiating Equation 1:

dS/dx = -1 + 2x - 3x² + 4x³ - ... = -S²    (Eq'n 3)
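The diagonal sums of Equation 2 and the derivative route of Equation 3 can be compared directly. A small sketch (variable names mine), treating S as its coefficient list 1, -1, 1, -1, ...:

```python
N = 12
s = [(-1) ** n for n in range(N + 2)]                  # coefficients of S
diag = [sum(s[j] * s[n - j] for j in range(n + 1))     # diagonal sums of the
        for n in range(N)]                             #   partial-product matrix
neg_deriv = [-(n + 1) * s[n + 1] for n in range(N)]    # coefficients of -dS/dx
assert diag == neg_deriv                               # Equation 2 = Equation 3
```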

    If we multiply Equation 2 by Equation 1, we get the cube of S. In that case the partial product matrix looks like this:

             1      -x      x²     -x³    ...
     1       1      -x      x²     -x³
    -2x     -2x     2x²    -2x³    2x⁴
     3x²     3x²   -3x³     3x⁴   -3x⁵
    -4x³    -4x³    4x⁴    -4x⁵    4x⁶
    ...

When we add up those partial products we get

S³ = 1 - 3x + 6x² - 10x³ + 15x⁴ - ...    (Eq'n 4)

Compare that with what we get when we differentiate Equation 1 twice with respect to x:

d²S/dx² = 2 - 6x + 12x² - 20x³ + 30x⁴ - ...    (Eq'n 5)

From those two equations we infer that

S³ = (1/2) d²S/dx²    (Eq'n 6)
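The same kind of check works for the cube: multiplying the coefficient list of S by itself twice and halving the second-derivative coefficients should give the same numbers. A sketch (helper name mine):

```python
def series_product(a, b):
    """Cauchy product of two power-series coefficient lists (exact term by term)."""
    n = min(len(a), len(b))
    return [sum(a[j] * b[k - j] for j in range(k + 1)) for k in range(n)]

N = 10
s = [(-1) ** n for n in range(N + 2)]                 # coefficients of S
s3 = series_product(series_product(s, s), s)          # coefficients of S^3
half_d2 = [(n + 2) * (n + 1) * s[n + 2] // 2 for n in range(N)]
assert s3[:N] == half_d2                              # S^3 = (1/2) d^2S/dx^2
```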

    An examination of the partial product matrices above and a contemplation of how we add their elements should suffice to convince us that the coefficients of the M-th power of S are the (M-2)nd-order Gaussian sums; that is, by induction we have in general

S^M = Σ_(n=0)^∞ C(n+M-1, M-1) (-x)ⁿ = [(-1)^(M-1)/(M-1)!] d^(M-1)S/dx^(M-1)    (Eq'n 7)
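Reading those Gaussian-sum coefficients as the binomial coefficients C(n+M-1, M-1) (my notation) with alternating signs, we can confirm the pattern by repeated multiplication. A sketch using Python's math.comb:

```python
from math import comb

def series_product(a, b):
    """Cauchy product of two power-series coefficient lists (exact term by term)."""
    n = min(len(a), len(b))
    return [sum(a[j] * b[k - j] for j in range(k + 1)) for k in range(n)]

N = 12
s = [(-1) ** n for n in range(N)]
power = s[:]                                    # running value of S^M
for M in range(2, 7):
    power = series_product(power, s)
    # coefficient of x^n in S^M should be (-1)^n * C(n+M-1, M-1)
    assert power == [(-1) ** n * comb(n + M - 1, M - 1) for n in range(N)]
```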

    Because S and its powers are infinite series, we expect that the roots of S will also be infinite series. We can't find the roots of S by direct multiplication, as we did with the powers, so we must proceed by assuming that we can express a given root of S as a Taylor series,

S^(1/m) = a₀ + a₁x + a₂x² + a₃x³ + ... = Σ_(n=0)^∞ aₙxⁿ    (Eq'n 8)

and then manipulating that series to obtain the root indirectly. Inspection of that equation tells us that we can calculate the n-th coefficient by taking

aₙ = (1/n!) dⁿS^(1/m)/dxⁿ    (Eq'n 9)

and evaluating it at x=0. Before I apply that equation to determining the roots of S I want to look at an alternative way of determining the form of the square root of S.
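As a quick numerical check of Equation 9: since S = 1/(1+x), the m-th root of S is (1+x)^(-1/m), and each differentiation brings down the current exponent, so the n-fold derivative at x = 0 is a simple falling product. A sketch (helper name mine):

```python
from fractions import Fraction
from math import factorial

def root_coeff(n, m=2):
    """a_n = (1/n!) d^n/dx^n (1+x)^(-1/m) evaluated at x = 0."""
    p = Fraction(-1, m)
    deriv = Fraction(1)
    for k in range(n):
        deriv *= p - k           # each differentiation brings down (p - k)
    return deriv / factorial(n)

assert [root_coeff(n) for n in range(4)] == [1, Fraction(-1, 2),
                                             Fraction(3, 8), Fraction(-5, 16)]
```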

    We assume that Equation 8 represents the square root of S, multiply it by itself, and then equate the product, term by term, with S. In this case we set up the partial product matrix with only the coefficients expressed and the coefficients of S put into a column on the left side of the matrix. We have then

            a₀      a₁      a₂      a₃    ...
     1  |  a₀a₀    a₀a₁    a₀a₂    a₀a₃
    -1  |  a₁a₀    a₁a₁    a₁a₂    a₁a₃
     1  |  a₂a₀    a₂a₁    a₂a₂    a₂a₃
    -1  |  a₃a₀    a₃a₁    a₃a₂    a₃a₃
    ...

Adding up those partial products diagonally, from upper right to lower left, and equating the sums with the corresponding coefficients of S gives us an infinite series of equations that we can solve in sequence:

a₀² = 1,  so  a₀ = 1
2a₀a₁ = -1,  so  a₁ = -1/2
2a₀a₂ + a₁² = 1,  so  a₂ = 3/8
2a₀a₃ + 2a₁a₂ = -1,  so  a₃ = -5/16
2a₀a₄ + 2a₁a₃ + a₂² = 1,  so  a₄ = 35/128
2a₀a₅ + 2a₁a₄ + 2a₂a₃ = -1,  so  a₅ = -63/256
    (Eq'ns 10 - 21)
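Those solutions can be generated mechanically: each new equation contains one new unknown, aₙ, multiplied by 2a₀, with all the other partial products already known. A sketch (taking the positive root a₀ = 1; names mine):

```python
from fractions import Fraction

# Diagonal-sum equations: sum over j of a_j * a_(n-j) must equal the n-th
# coefficient of S, which is (-1)^n.
N = 6
a = [Fraction(1)]                          # a0^2 = 1; take the positive root
for n in range(1, N):
    cross = sum(a[j] * a[n - j] for j in range(1, n))   # already-known products
    a.append((Fraction((-1) ** n) - cross) / (2 * a[0]))

assert a == [1, Fraction(-1, 2), Fraction(3, 8), Fraction(-5, 16),
             Fraction(35, 128), Fraction(-63, 256)]
```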

    Now let's apply Equation 9 to calculating the first six coefficients of the Taylor series representing the square root of S. We get

a₀ = (1+x)^(-1/2) |x=0 = 1    (Eq'n 22)

a₁ = (1/1!)(-1/2)(1+x)^(-3/2) |x=0 = -1/2    (Eq'n 23)

a₂ = (1/2!)(-1/2)(-3/2)(1+x)^(-5/2) |x=0 = 3/8    (Eq'n 24)

a₃ = (1/3!)(-1/2)(-3/2)(-5/2)(1+x)^(-7/2) |x=0 = -5/16    (Eq'n 25)

a₄ = (1/4!)(-1/2)(-3/2)(-5/2)(-7/2)(1+x)^(-9/2) |x=0 = 35/128    (Eq'n 26)

a₅ = (1/5!)(-1/2)(-3/2)(-5/2)(-7/2)(-9/2)(1+x)^(-11/2) |x=0 = -63/256    (Eq'n 27)

which is what we got from solving the partial products matrix. By inspection of those equations, we see that for the n-th coefficient the last factor in the inverse factorial is n and that the last factor generated by the sequential differentiations is -(2n-1)/2. Thus the n-th coefficient has a last factor such that

aₙ = -[(2n-1)/2n] aₙ₋₁    (Eq'n 28)

Applying that fact over the values of n in the sequence of positive integers gives us then

S^(1/2) = Σ_(n=0)^∞ (-1)ⁿ [1·3·5···(2n-1) / 2·4·6···(2n)] xⁿ = 1 - (1/2)x + (3/8)x² - (5/16)x³ + ...    (Eq'n 29)
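The coefficients of Equation 29 can be built by accumulating the ratio -(2n-1)/2n of Equation 28, one factor per index. A sketch (helper name mine):

```python
from fractions import Fraction

def sqrt_S_coeff(n):
    """n-th coefficient of S^(1/2): accumulate the ratio -(2k-1)/2k for k = 1..n."""
    c = Fraction(1)
    for k in range(1, n + 1):
        c *= Fraction(-(2 * k - 1), 2 * k)
    return c

coeffs = [sqrt_S_coeff(n) for n in range(6)]
assert coeffs == [1, Fraction(-1, 2), Fraction(3, 8), Fraction(-5, 16),
                  Fraction(35, 128), Fraction(-63, 256)]
```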

    Now I want to look at the powers and roots of a slight variation on the alternating geometric series. I take

S' = 1 - x² + x⁴ - x⁶ + ... = 1/(1+x²)    (Eq'n 30)

and multiply it by itself repeatedly to generate the infinite series that comprise the set of its powers. As we did with the simple alternating geometric series, we get a partial products matrix for each multiplication. And as before, adding the partial products diagonally (adding together the same powers of x) gives us the appropriate set of Gaussian sums as coefficients in the series representing the given powers of S' in accordance with Equation 7 (albeit with the exponents of x doubled). In this case Equation 7 becomes

S'^M = Σ_(n=0)^∞ C(n+M-1, M-1) (-1)ⁿ x^(2n)    (Eq'n 31)
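For example, cubing the coefficient list of S' should give the same Gaussian-sum coefficients as the cube of S, now attached to x⁰, x², x⁴, and so on. A sketch (helper names mine):

```python
from math import comb

def series_product(a, b):
    """Cauchy product of two power-series coefficient lists, truncated to len(a)."""
    out = [0] * len(a)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b[: len(a) - i]):
            out[i + j] += ai * bj
    return out

N = 12
# S' = 1 - x^2 + x^4 - ... : nonzero coefficients only at even powers of x
sp = [(-1) ** (n // 2) if n % 2 == 0 else 0 for n in range(N + 1)]
sp3 = series_product(series_product(sp, sp), sp)      # coefficients of S'^3
expected = [(-1) ** (n // 2) * comb(n // 2 + 2, 2) if n % 2 == 0 else 0
            for n in range(N + 1)]
assert sp3 == expected
```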

    I did not include in that equation a formula based on repeated differentiation, as I did in Equation 7. In this case repeated differentiation does not yield a simple product of a constant and the appropriate power of the series: it yields the product of a constant, the appropriate power of the series, and an increasingly complicated function of x.

    To determine the roots of S' we again assume the Taylor series of Equation 8. To find the square root of S' we multiply that series by itself via the partial products matrix and then solve the equations that come from our equating the diagonal sums of the partial products with the appropriate coefficients of Equation 30. Those solutions give us zero for all of the coefficients with odd-numbered indices and

a₀ = 1,  a₂ = -1/2,  a₄ = 3/8,  a₆ = -5/16,  a₈ = 35/128,  a₁₀ = -63/256    (Eq'ns 32 - 37)

for the first six coefficients with even-numbered indices. Again, these coefficients match the beginning coefficients of the Taylor series representing the square root of the simple alternating geometric series. That fact means that we can obtain a general equation for the coefficients by modifying the indices of the coefficients in Equation 29: thus, for n taking all integer values from zero to infinity, we have

S'^(1/2) = Σ_(n=0)^∞ (-1)ⁿ [1·3·5···(2n-1) / 2·4·6···(2n)] x^(2n)    (Eq'n 38)
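We can check Equation 38 the same way we found it: squaring the candidate series must reproduce S'. A sketch (names mine):

```python
from fractions import Fraction

def sqrt_coeff(n):
    """n-th nonzero coefficient of the root: (-1)^n * 1*3*...*(2n-1) / 2*4*...*(2n)."""
    c = Fraction(1)
    for k in range(1, n + 1):
        c *= Fraction(-(2 * k - 1), 2 * k)
    return c

N = 10
# Candidate square root of S': sqrt_coeff(n) attached to x^(2n)
b = [sqrt_coeff(n // 2) if n % 2 == 0 else Fraction(0) for n in range(N + 1)]
square = [sum(b[j] * b[n - j] for j in range(n + 1)) for n in range(N + 1)]
assert square == [(-1) ** (n // 2) if n % 2 == 0 else 0 for n in range(N + 1)]
```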

And in the same way we can obtain descriptions of the other roots of S'.

    Next I want to look at the other roots of S and the powers of those roots. I want to derive the descriptions deductively rather than inductively, but that's another essay.

