**Open Journal of Statistics**

Vol.08 No.01(2018), Article ID:82248,20 pages

10.4236/ojs.2018.81005

The Method of Finite Difference Regression

Arjun Banerjee

Computer Science Department, Purdue University, West Lafayette, IN, USA

Copyright © 2018 by author and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: December 23, 2017; Accepted: January 30, 2018; Published: February 2, 2018

ABSTRACT

In this paper^{1} I present a novel polynomial regression method, called Finite Difference Regression, for a uniformly sampled sequence of noisy data points, which determines the order of the best fitting polynomial and provides estimates of its coefficients. Unlike classical least-squares polynomial regression methods, which, when the order of the best fitting polynomial is unknown, must determine it from the R^{2} value of the fit, I show how the t-test from statistics can be combined with the method of finite differences to yield a more sensitive and objective measure of the order of the best fitting polynomial. Furthermore, it is shown how the finite differences used in determining the order can be reemployed to produce excellent estimates of the coefficients of the best fitting polynomial. I show not only that these coefficients are unbiased and consistent, but also that the asymptotic properties of the fit improve with increasing degree of the fitting polynomial.

**Keywords:**

Polynomial Regression, t-Test, Finite Differences

1. Introduction

In this paper I present a novel polynomial regression method for a uniformly sampled sequence of noisy data points that I call the method of Finite Difference Regression. I show how the statistical t-test^{2} can be combined with the method of finite differences to provide a more sensitive and objective measure of the order of the best fitting polynomial and produce unbiased and consistent estimates of its coefficients.

Regression methods, also known as fitting models to data, have had a long history. Starting from Gauss (see [1] pp. 249-273) who used it to fit astronomical data, it is one of the most useful techniques in the arsenal of science and technology. While historically it has been used in the “hard” sciences, both natural and biological, and in engineering, it is finding increasing use in all human endeavors, even in the “soft” sciences like social sciences (see for example [2] ), where data is analyzed in order to find an underlying model that may explain or describe it.

Polynomial regression methods are methods that determine the best polynomial that describes a sequence of noisy data samples. In classical polynomial regression methods when the order of the underlying model is known, the coefficients of the terms are estimated by minimizing the sum of squares of the residual errors to the fit. For example, in classical linear regression (see [2] pp. 675-730) one assumes that the order of the fitting polynomial is 1 and then proceeds to find the coefficients of the best fitting line as those that minimize the sum of the squares of its deviations from the noisy samples. The same method can be extended to polynomial regression. On the other hand, when the model order is unknown, the order of the best fitting polynomial must be determined before finding the coefficients of the polynomial terms. In classical regression methods this order is determined from the R^{2} value of its fit. More on these two different aspects of polynomial regression are provided below.
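As a concrete illustration of the classical approach just described, the following sketch fits a noisy quadratic by least squares with the model order assumed known. It is my own minimal example, not the paper's; it uses NumPy's `polyfit` and synthetic data of my choosing.

```python
import numpy as np

# Noisy samples of a quadratic at uniformly spaced x (hypothetical data).
x = np.arange(1.0, 21.0)
rng = np.random.default_rng(0)
y = 2 * x**2 + x + 3 + rng.normal(0.0, 5.0, size=x.size)

# Classical least-squares fit with the model order assumed known (m = 2).
coeffs = np.polyfit(x, y, deg=2)          # [a2, a1, a0], highest degree first
residuals = y - np.polyval(coeffs, x)

# R^2 value of the fit, used classically to judge goodness of fit.
r_squared = 1 - residuals.var() / y.var()
```

When the order is unknown, the classical procedure repeats this fit for each candidate degree and compares the resulting R^{2} values, which is precisely the step Finite Difference Regression replaces.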

1) Determining the order of the best fitting polynomial (Model order unknown)

In classical regression, the goodness of the order of the fitting polynomial is given by the R^{2} value of the best fitting polynomial―the greater the R^{2} value the better the fit. However, this means that one can go on fitting with higher degree polynomials and thereby increasing R^{2} values. Given the insensitivity of these R^{2} values (see for example Section 2.4) it is indeed hard to find a stopping criterion. To get around this problem, heuristic methods like AIC, BIC, etc. (see [3] for example) that penalize too many fitting parameters have been proposed. In contrast, in the method of Finite Difference Regression described below, the order of the best fitting polynomial is determined by using a t-test, namely determining the index of a finite difference row (or column) that has a high probability (beyond a certain significance level) of being zero. The sensitivity of this method is high compared to classical regression methods as I shall demonstrate in Section 2.5.

2) Determining the coefficients of the best fitting polynomial (Model order known/determined)

In classical polynomial regression, the polynomial coefficients of the fit that minimizes the sum of squares of the residual errors are found by a computation that is equivalent to inverting a square matrix. In contrast in the method I describe below, I once again employ the finite differences I used to find the order, to iteratively determine the polynomial coefficients. In Section 2.7, I show that the results of this method are remarkable―not only are the coefficients unbiased and consistent, but also that the asymptotic properties of the fit get better with increasing degrees of the fitting polynomial.

Before describing the Finite Difference Regression method in Section 2, I briefly recapitulate the general method of finite differences, whose formulation can be traced all the way back to Sir Isaac Newton (see [4] ). The finite differences method (see [5] ) is used for curve fitting with polynomial models. On its 2^{nd} row, Table 1 shows the y-value outputs for the sequence of regularly spaced x-value inputs on its 1^{st} row. A plot of this data is shown in Figure 1. The problem is to determine the order of the polynomial that can generate these outputs and to find its coefficients.

The remaining rows in Table 1 show the successive differences of the y-values. The fact that the 3^{rd} difference row in Table 1 is composed of all zeros implies that a quadratic polynomial best describes this data. In fact, this data was generated by the quadratic polynomial $y=2{x}^{2}+x+3$ .

I assume throughout this paper that the data are sampled at regular intervals. Without loss of generality these intervals can be assumed to be of unit length. Then the following general observations can be made:

1) If the ${\left(m+1\right)}^{\text{st}}$ difference row is all zero, then the data is best described by an m^{th} order polynomial: ${a}_{m}{x}^{m}+{a}_{m-1}{x}^{m-1}+\cdots +{a}_{1}x+{a}_{0}$ .

2) If the ${\left(m+1\right)}^{\text{st}}$ difference row is all zero, then $m!{a}_{m}$ is the constant in the m^{th} difference row. From this observation the coefficient of the highest term can be found. For example, in Table 1, since the 3^{rd} difference row is all zero, $2!{a}_{2}$ is the constant in the 2^{nd} difference row. Since this constant is 4, it immediately follows that the highest term should be $4{x}^{2}/2!=2{x}^{2}$ .
3) The lower order terms can be found successively by back substitution: the highest term is subtracted and the method reapplied. That is, the data y in Table 1 is replaced by ${y}^{\prime}$ where ${y}^{\prime}=y-{a}_{m}{x}^{m}$ , and the method of finite differences is then reapplied to the ${y}^{\prime}$ data. Using this approach for the data in Table 1, one obtains the next term as follows:

Figure 1. Graph of data in Table 1.

In Table 2 the 2^{nd} difference row is all zero. Hence $1!{a}_{1}$ is the constant in the 1^{st} difference row. Since this constant is 1, the next highest term should be $1x/1!=x$ .

Again, the final term can be found by replacing ${y}^{\prime}$ in Table 2 by ${y}^{\prime \prime}$ where ${y}^{\prime \prime}={y}^{\prime}-{a}_{m-1}{x}^{m-1}$ :

The computation in Table 3 yields 3 as the constant term for the polynomial fit.

Hence, by successive applications of the finite difference method in this manner, the best fit polynomial to the data in Table 1 is recovered as $y=2{x}^{2}+x+3$ .
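The three observations above translate directly into a short program. This is my own illustrative sketch (the paper gives no code), and the helper name `finite_difference_fit` is invented.

```python
import numpy as np
from math import factorial

def finite_difference_fit(y):
    """Recover polynomial coefficients from noiseless, unit-spaced samples.

    Repeatedly: difference until a row is all zero; the previous row's
    constant divided by m! gives the leading coefficient a_m; subtract
    a_m * x**m from the data and repeat (back substitution).
    """
    x = np.arange(1, len(y) + 1, dtype=float)
    y = np.asarray(y, dtype=float)
    coeffs = {}
    while True:
        row, m = y, 0
        while not np.allclose(np.diff(row), 0):   # find first all-zero diff row
            row, m = np.diff(row), m + 1
        a_m = row[0] / factorial(m)               # constant of m-th row is m! * a_m
        coeffs[m] = a_m
        if m == 0:
            return coeffs                          # {degree: coefficient}
        y = y - a_m * x**m                         # back substitute and repeat

x = np.arange(1, 9, dtype=float)
fit = finite_difference_fit(2 * x**2 + x + 3)      # the data of Table 1
```

Run on the Table 1 data, this returns the degree-to-coefficient map {2: 2, 1: 1, 0: 3}, matching the worked example above.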

2. The Method of Finite Difference Regression

In this section I describe the Finite Difference Regression method, compare the results obtained from this method to those from classical regression, and show unbiasedness and consistency properties of the Finite Difference Regression coefficients.

The intuitive idea behind the method of Finite Difference Regression is simple. Suppose one were to add normally distributed noise with mean 0 independently to each of the y-values (those of Table 1, for example) to make them noisy. Then, unlike the non-noisy situation, one cannot expect all entries in a difference row to be identically equal to zero. However, since differences between normally distributed random variables are also normally distributed (albeit with higher variance), each of the samples in a difference row (as in the rows of Table 1) would also be normally distributed with different means until one hits a difference row where the mean is zero with high probability. It is this highly probable zero-mean row that should be the all zero difference row in the absence or elimination of noise. The condition that the mean is zero with high probability can be objectively tested on each difference row using a t-test on the samples in that row, where the null hypothesis H_{0} tests for zero population mean and the alternative hypothesis H_{a} tests for non-zero population mean. This is the main idea behind the method of Finite Difference Regression: the replacement of the notion of an all zero difference row in the non-noisy case by the notion of a highly probable row of all zeros in the noisy case, and the use of the statistical t-test to test the zero population mean hypothesis.

Table 1. Finite differences on y-value outputs for the sequence of regularly spaced x-value inputs.

Table 2. Second iteration of finding terms to fit the model.

Table 3. Final iteration of finding terms to fit the model.

2.1. Statistical Independence and Its Relation to the Degrees of Freedom

While it is true that successive differences will produce normally distributed samples, not all the samples so produced are statistically independent of one another. For example, as shown in Figure 2, the noisy data samples ${y}_{1},{y}_{2},{y}_{3},{y}_{4}$ generate the next difference sequence ${{y}^{\prime}}_{1},{{y}^{\prime}}_{2},{{y}^{\prime}}_{3}$ , and not all of these values are independent. Since ${{y}^{\prime}}_{1}$ and ${{y}^{\prime}}_{2}$ both depend upon the value of y_{2}, they are not independent. Similarly, ${{y}^{\prime}}_{2}$ and ${{y}^{\prime}}_{3}$ are not independent because both depend upon ${y}_{3}$ . However, ${{y}^{\prime}}_{1}$ and ${{y}^{\prime}}_{3}$ , having no point in common, are independent!

This observation can be generalized by the following process. Suppose I had K observations to begin with. If I were to produce successive differences by skipping over every other term, in the manner of a binary tree as shown in Figure 3, I would obtain independent normally distributed samples. Each level of the tree would have independent normally distributed finite differences, denoted in Figure 3 (shown with $K=8$ ) by ${y}^{\prime },{y}^{\prime \prime },\cdots $ and so on. This method would retain all the properties of finite differences, in particular the highly probable zero-mean row, with the added benefit of not having any dependent samples. If this method were implemented, and a t-test invoked at every level of the binary tree, then the number of samples in the test would be halved at each level, and consequently the degrees of freedom would be affected. In other words, if k denotes the number of samples^{3} at a particular level of the tree, then the next level would have k/2 samples. According to the theory of t-tests, the degrees of freedom at the first level would be $k-1$ , at the next level $k/2-1$ , and so on. However, discarding good data would have negative consequences for coefficient estimation (see Section 2.6). While some of the data at every level of a successive finite difference table (Table 1 for example) are not independent, they can still be used for t-tests. The only thing to remember is

Figure 2. Successive finite differences.

Figure 3. A binary tree of successive differences.

that at every stage the number of independent samples is halved, and the degrees of freedom are therefore one less than this number.
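Under one reading of Figure 3, the tree of independent differences can be generated by differencing non-overlapping pairs at each level. The helper below is hypothetical (my own sketch, not the author's code) and simply tracks how the sample count, and hence the degrees of freedom, halve.

```python
import numpy as np

def independent_difference_levels(y):
    """Build the binary tree of successive differences level by level.

    Differencing non-overlapping pairs keeps the samples at each level
    statistically independent, at the cost of halving the sample count,
    so the t-test's degrees of freedom run k - 1, k/2 - 1, and so on.
    """
    level = np.asarray(y, dtype=float)
    levels = []
    while level.size >= 2:
        level = level[1::2] - level[0::2]          # pairwise, non-overlapping
        levels.append((level, level.size - 1))     # (samples, degrees of freedom)
    return levels

levels = independent_difference_levels(np.arange(1.0, 9.0))  # K = 8 samples
```

With K = 8 inputs this produces levels of 4, 2, and 1 samples, with 3, 1, and 0 degrees of freedom respectively, which is exactly the halving described above.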

2.2. An Example of Order Finding

Table A1 in Appendix 2 gives an example of how the Finite Difference Regression method works to find the order of the best fitting polynomial. The first two columns of Table A1 are the pivoted (transposed) rows of Table 1. Independent zero-mean normally distributed random noise with a standard deviation of 30.0 is generated in the 3^{rd} column and added to the values of y in the 4^{th} column. Thus, the 4^{th} column represents noisy values of the data y. A graph of this noisy data is shown in Figure 4. Finally, columns 5 through 9 represent the successive finite differences from 1^{st} to 5^{th}, computed in the same way as in Table 1. The last 6 rows of Table A1, highlighted in yellow, show the statistical computations associated with the method. These are described below in Section 2.3.

2.3. The Statistical Computations

The last 6 rows of Table A1 use the one-sample t-test for a population mean as described in [2] pp. 547-548. Note that the Null Hypothesis is ${H}_{0}:\mu =0$ because the hypothesized value is 0. The Alternative Hypothesis is ${H}_{a}:\mu \ne 0$ . The descriptions of these rows are:

Figure 4. Graph of the noisy data in Table A1 with an inset showing a magnified view for $10\le x\le 20$ .

n = number of samples

$\overline{x}$ = sample mean

s = sample standard deviation

t = test statistic $=\frac{\overline{x}-\mu }{s/\sqrt{n}}=\frac{\overline{x}}{s/\sqrt{n}}$ (substituting $\mu =0$ )

df = degrees of freedom, computed as detailed in Section 2.1

P-value $=\{\begin{array}{ll}2\times \text{area to the right of }t\text{ with the given }df& \text{for }t>0\\ 2\times \text{area to the left of }t\text{ with the given }df& \text{for }t<0\end{array}$
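The six statistics above can be computed per difference column as follows. This is a sketch of my own (the paper's computations were tabulated, not coded), with the two-sided P-value taken from SciPy's Student-t survival function and the degrees of freedom supplied separately per Section 2.1.

```python
import numpy as np
from scipy import stats

def column_statistics(diff_column, df):
    """One-sample t-test of H0: mu = 0 against Ha: mu != 0 for one column.

    `df` is passed in by the caller: per Section 2.1 it is one less than
    the number of *independent* samples, not len(diff_column) - 1.
    """
    col = np.asarray(diff_column, dtype=float)
    n = col.size
    xbar = col.mean()                       # sample mean
    s = col.std(ddof=1)                     # sample standard deviation
    t = xbar / (s / np.sqrt(n))             # test statistic with mu = 0
    p_value = 2 * stats.t.sf(abs(t), df)    # two-sided P-value
    return {"n": n, "mean": xbar, "s": s, "t": t, "P-value": p_value}
```

Applying this to each successive difference column reproduces the last six highlighted rows of Table A1.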

2.4. Conclusions from the t-test

One of the assumptions of the t-test is that the sample size n is large; typically, $n\ge 30$ is required. This assumption is violated in some of the columns of Table A1. However, the table is merely for illustrative purposes. Even so, it shows the power of the method to correctly ascertain the order of the best fitting polynomial, because one can hardly fail to notice the precipitous drop in P-value at the 3^{rd} difference column (titled Diff 3). The plot in Figure 5, showing the P-values of the 5 difference columns, graphically displays this drop.

The vanishingly small P-value at the 3^{rd} difference column implies that with high probability the order of the best fitting polynomial should be $m=2$ , i.e., a quadratic polynomial is the best fit to the data in Table A1.

2.5. Comparison to Classical Regression

In classical polynomial regression, ${R}^{2}$ values instead of P-values represent the goodness of fit. Using Microsoft Excel’s built-in program to generate ${R}^{2}$ values to polynomial fits, I generated Table 4 by restricting the highest degree m of the best fitting polynomial ${a}_{m}{x}^{m}+{a}_{m-1}{x}^{m-1}+\cdots +{a}_{1}x+{a}_{0}$ for the noisy data in Table A1.

Figure 5. Graph of P-value vs. Diff.

Table 4. R^{2} values in classical regression as a function of the degree m.

Table 4 shows that the R^{2} values used in classical regression do not have the sensitivity that the P-values from the t-tests possess. The clear drop that occurs in the P-values, as shown in Figure 5, has no correspondingly sharp rise in the R^{2} values of Table 4. Also, unlike classical regression, which requires a separate fit and R^{2} value for every candidate polynomial degree, Finite Difference Regression is a one-shot test that gives a clear estimate of the order of the best fitting polynomial by testing the significance of the probabilities (P-values): the first successive difference column whose P-value falls below the significance level (a belief threshold for the viable model) identifies the best fitting polynomial model.

Unlike classical regression, however, this method requires a large sample size n, preferably one that is a power of 2. Also, because the number of independent samples is halved at each successive difference and the degrees of freedom must remain positive, the highest order polynomial that can be fit by this method is of order ${\mathrm{log}}_{2}n$ . This is generally acceptable because the order of a best fitting polynomial should, by definition, be a small constant. Other classical model order selection heuristics like AIC or BIC (see [3] ) would also heavily penalize high order models.

2.6. Determining the Coefficients

Once the order of the best fitting polynomial has been found using the t-test in combination with finite differences, I will evaluate the coefficients of the best fitting polynomial using the method of successive finite differences as detailed in Section 1.

In the non-noisy situation described in Section 1, the matter was simple: there was a difference row of all zeros, and therefore the previous difference row was a constant that could be used to determine the coefficient of the highest degree. In the noisy case, the issue is complicated because no such all-zero row exists, and therefore the previous difference row is not constant. However, the intuitive idea behind the method of Finite Difference Regression (see the 2^{nd} paragraph of Section 2) led me to assume that the sample mean $\overline{x}$ would be an excellent choice for the previous difference row constant, since it is the unique number that minimizes the sum of squares of the residuals. From Table A1, the highly probable all-zero column is the 3^{rd} difference column, and therefore $\overline{x}=4.29$ (the sample mean of the 2^{nd} difference column in Table A1) corresponds to the constant difference. Thus, as described in Section 1, the highest term should be $\overline{x}{x}^{2}/2!=2.145{x}^{2}$ ; the next term, determined from the sample mean of the “constant” column after subtracting the values of the highest term, yields −7.464x; and finally the constant term is 78.69. Thus $\widehat{y}=2.145{x}^{2}-7.464x+78.69$ is the estimated polynomial model.
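The noisy-case procedure just described can be condensed into a few lines. The sketch below is mine (the function name is invented), assuming unit-spaced x starting at 1 as in Table A1.

```python
import numpy as np
from math import factorial

def fd_regression_coefficients(y, order):
    """Estimate coefficients of a degree-`order` polynomial from noisy data.

    For m = order down to 0: form the m-th successive difference column,
    take its sample mean (the least-squares choice for the "constant" of
    that column), divide by m! to get the coefficient estimate, subtract
    the recovered term, and continue with the remainder.
    """
    x = np.arange(1, len(y) + 1, dtype=float)
    y = np.asarray(y, dtype=float)
    coeffs = {}
    for m in range(order, -1, -1):
        a_m = np.diff(y, n=m).mean() / factorial(m)   # m-th Diff column mean / m!
        coeffs[m] = a_m
        y = y - a_m * x**m                            # subtract recovered term
    return coeffs
```

On noiseless data this reduces exactly to the procedure of Section 1; on the noisy data of Table A1 it would yield estimates of the kind shown above.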

Table A2 in Appendix 2 compares the noisy data to the best fit polynomial estimate obtained using the method of successive finite differences. Note that the residuals are not very small; however, their average turns out to be −0.00078, indicating a good fit. The polynomial fit is overlaid on the noisy y data in Figure 6, in which the blue line shows the noisy data and the dashed red line depicts the values of the estimated $\widehat{y}$ .

Figure 6. Graph of best fitting polynomial $\widehat{y}=2.145{x}^{2}-7.464x+78.69$ overlaid on the noisy observations of Table A1.

2.7. Unbiasedness and Consistency of the Coefficient Estimates

In this section I prove that the coefficients estimated by the Finite Difference Regression method, as detailed in Section 2.6, are unbiased and consistent. The properties of unbiasedness and consistency are very important for estimators (see [6] p. 393). The unbiasedness property shows that there is no systematic offset between the estimate and the actual value of a parameter; it is proved by showing that the mean of the difference between the estimate and the actual value is zero. The consistency property shows that the estimates get better with more observations; it is proved by showing that a non-zero difference between the estimate and the actual value becomes highly improbable with a large number of observations. The analysis focuses on the estimate of the leading coefficient, i.e., the one associated with the highest degree, because the asymptotic properties of the polynomial fit are most sensitive to that value.

Consider the following table that shows the symbolic formulas for the successive differences for the n observations ${y}_{1},{y}_{2},\cdots ,{y}_{n}$ :

For subsequent ease of notation, I introduce a change of variables ${z}_{i}\stackrel{\text{def}}{=}{y}_{i+1}$ for $0\le i\le n-1$ . With this renaming/re-indexing, Table 5 can be rewritten as (note that the row indices have also changed):

For any m, $0\le m\le n-1$ , the m^{th} Diff column consists of expressions in rows indexed from m through $n-1$ . Using mathematical induction, the m^{th} Diff column can be shown to be:

If the underlying polynomial is an m^{th} order polynomial $f\left(x\right)={a}_{m}{x}^{m}+{a}_{m-1}{x}^{m-1}+\cdots +{a}_{1}x+{a}_{0}$ , then in the case of noiseless observations every row of Table 7 evaluates to the constant $m!{a}_{m}$ . In particular, for the k^{th} row in the m^{th} Diff column shown in Table 7 one gets:

Table 5. Successive differences for n observations ${y}_{1},{y}_{2},\cdots ,{y}_{n}$ .

Table 6. Changing ${y}_{i+1}$ from Table 5 to ${z}_{i}$ for $0\le i\le n-1$ .

Table 7. The m^{th} diff column.

$\begin{array}{l}\left(\begin{array}{c}m\\ 0\end{array}\right){z}_{k}-\left(\begin{array}{c}m\\ 1\end{array}\right){z}_{k-1}+\left(\begin{array}{c}m\\ 2\end{array}\right){z}_{k-2}+\cdots +{\left(-1\right)}^{m}\left(\begin{array}{c}m\\ m\end{array}\right){z}_{k-m}\\ =\left(\begin{array}{c}m\\ 0\end{array}\right){y}_{k+1}-\left(\begin{array}{c}m\\ 1\end{array}\right){y}_{k}+\left(\begin{array}{c}m\\ 2\end{array}\right){y}_{k-1}+\cdots +{\left(-1\right)}^{m}\left(\begin{array}{c}m\\ m\end{array}\right){y}_{k+1-m}\\ =\left(\begin{array}{c}m\\ 0\end{array}\right)f\left(k+1\right)-\left(\begin{array}{c}m\\ 1\end{array}\right)f\left(k\right)+\left(\begin{array}{c}m\\ 2\end{array}\right)f\left(k-1\right)+\cdots +{\left(-1\right)}^{m}\left(\begin{array}{c}m\\ m\end{array}\right)f\left(k+1-m\right)\\ =m!{a}_{m}\end{array}$ (1)

In case of noisy observations, let each re-indexed observation z now be corrupted by a sequence of additive zero-mean independent identically distributed random variables ${N}_{0},{N}_{1},\cdots ,{N}_{n-1}$ , each with variance ${\sigma}^{2}$ . In other words:

$\mathbb{E}\left[{N}_{i}\right]=0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathbb{E}\left[{N}_{i}^{2}\right]={\sigma}^{2}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}0\le i<n$

$\mathbb{E}\left[{N}_{i}{N}_{j}\right]=\mathbb{E}\left[{N}_{i}\right]\mathbb{E}\left[{N}_{j}\right]=0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}0\le i<j<n$

where $\mathbb{E}\left[\text{\hspace{0.05em}}\right]$ denotes the expectation operator. (See for example [7] pp. 220-223.)

This addition of random variables makes the z’s themselves random variables, denoted now as $Z$ , so that ${Z}_{i}=f\left(i+1\right)+{N}_{i}$ for $0\le i<n$ . In this case, the k^{th} row in the m^{th} Diff column shown in Table 7 is the random variable ${R}_{k}$ , where:

$\begin{array}{c}{R}_{k}=\left(\begin{array}{c}m\\ 0\end{array}\right){Z}_{k}-\left(\begin{array}{c}m\\ 1\end{array}\right){Z}_{k-1}+\left(\begin{array}{c}m\\ 2\end{array}\right){Z}_{k-2}+\cdots +{\left(-1\right)}^{m}\left(\begin{array}{c}m\\ m\end{array}\right){Z}_{k-m}\\ =\left(\begin{array}{c}m\\ 0\end{array}\right){N}_{k}-\left(\begin{array}{c}m\\ 1\end{array}\right){N}_{k-1}+\left(\begin{array}{c}m\\ 2\end{array}\right){N}_{k-2}+\cdots +{\left(-1\right)}^{m}\left(\begin{array}{c}m\\ m\end{array}\right){N}_{k-m}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\left(\begin{array}{c}m\\ 0\end{array}\right)f\left(k+1\right)-\left(\begin{array}{c}m\\ 1\end{array}\right)f\left(k\right)+\left(\begin{array}{c}m\\ 2\end{array}\right)f\left(k-1\right)+\cdots +{\left(-1\right)}^{m}\left(\begin{array}{c}m\\ m\end{array}\right)f\left(k+1-m\right)\\ =\left(\begin{array}{c}m\\ 0\end{array}\right){N}_{k}-\left(\begin{array}{c}m\\ 1\end{array}\right){N}_{k-1}+\left(\begin{array}{c}m\\ 2\end{array}\right){N}_{k-2}+\cdots +{\left(-1\right)}^{m}\left(\begin{array}{c}m\\ m\end{array}\right){N}_{k-m}+m!{a}_{m}\end{array}$ (2)

where, the last step of Equation (2) follows from the identity in Equation (1).

Let the random variable ${\widehat{A}}_{m}\left(n\right)$ denote the estimate of ${a}_{m}$ from the n observations. Then, as described in Section 2.6, since there are $\left(n-m\right)$ rows in the m^{th} Diff column, the sample mean of the R’s divided by $m!$ provides the required estimate. In other words,

${\widehat{A}}_{m}\left(n\right)=\frac{1}{m!\left(n-m\right)}{\displaystyle \sum}_{k=m}^{n-1}\text{\hspace{0.05em}}{R}_{k}$ (3)

Now, define the reduced row random variable ${Q}_{k}$ for the m^{th} Diff column shown in Table 7 as:

${Q}_{k}\stackrel{\text{def}}{=}{R}_{k}-m!{a}_{m}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}m\le k\le n-1$ (4)

Then, substituting back into Equation (3):

$\begin{array}{c}{\widehat{A}}_{m}\left(n\right)=\frac{1}{m!\left(n-m\right)}{\displaystyle \sum}_{k=m}^{n-1}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{Q}_{k}+\frac{1}{m!\left(n-m\right)}{\displaystyle \sum}_{k=m}^{n-1}\text{\hspace{0.05em}}\text{\hspace{0.05em}}m!{a}_{m}\\ =\frac{1}{m!\left(n-m\right)}{\displaystyle \sum}_{k=m}^{n-1}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{Q}_{k}+\frac{m!{a}_{m}\left(n-m\right)}{m!\left(n-m\right)}\\ =\frac{1}{m!\left(n-m\right)}{\displaystyle \sum}_{k=m}^{n-1}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{Q}_{k}+{a}_{m}\end{array}$

which, upon rearranging, yields:

${\widehat{A}}_{m}\left(n\right)-{a}_{m}=\frac{1}{m!\left(n-m\right)}{\displaystyle \sum}_{k=m}^{n-1}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{Q}_{k}$ (5)

Note that it is the statistics of the expression ${\widehat{A}}_{m}\left(n\right)-{a}_{m}$ on the left-hand side of Equation (5) that will establish the unbiasedness and consistency of the estimate ${\widehat{A}}_{m}\left(n\right)$ . Also note that, by definition (4) and Equation (2), the reduced row random variable ${Q}_{k}$ for the m^{th} Diff column can be written explicitly as:

${Q}_{k}=\left(\begin{array}{c}m\\ 0\end{array}\right){N}_{k}-\left(\begin{array}{c}m\\ 1\end{array}\right){N}_{k-1}+\left(\begin{array}{c}m\\ 2\end{array}\right){N}_{k-2}+\cdots +{\left(-1\right)}^{m}\left(\begin{array}{c}m\\ m\end{array}\right){N}_{k-m}$ (6)

2.7.1. Result 1: ${\widehat{A}}_{m}\left(n\right)$ Is an Unbiased Estimate of ${a}_{m}$

Taking the expectation operator $\mathbb{E}\left[\text{\hspace{0.05em}}\right]$ on both sides of equation (6):

$\begin{array}{c}\mathbb{E}\left[{Q}_{k}\right]=\mathbb{E}\left[\left(\begin{array}{c}m\\ 0\end{array}\right){N}_{k}-\left(\begin{array}{c}m\\ 1\end{array}\right){N}_{k-1}+\left(\begin{array}{c}m\\ 2\end{array}\right){N}_{k-2}+\cdots +{\left(-1\right)}^{m}\left(\begin{array}{c}m\\ m\end{array}\right){N}_{k-m}\right]\\ =\left(\begin{array}{c}m\\ 0\end{array}\right)\mathbb{E}\left[{N}_{k}\right]-\left(\begin{array}{c}m\\ 1\end{array}\right)\mathbb{E}\left[{N}_{k-1}\right]+\cdots +{\left(-1\right)}^{m}\left(\begin{array}{c}m\\ m\end{array}\right)\mathbb{E}\left[{N}_{k-m}\right]=0\end{array}$ (7)

The last step in equation (7) follows from the fact that the random variables ${N}_{i}$ are all zero-mean. Then, taking the expectation operator on both sides of equation (5), and using the result from Equation (7):

$\mathbb{E}\left[{\widehat{A}}_{m}\left(n\right)-{a}_{m}\right]=\frac{1}{m!\left(n-m\right)}{\displaystyle \sum}_{k=m}^{n-1}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\mathbb{E}\left[{Q}_{k}\right]=0$ (8)

Hence, $\mathbb{E}\left[{\widehat{A}}_{m}\left(n\right)-{a}_{m}\right]=0$ for all n. This proves the unbiasedness property of ${\widehat{A}}_{m}\left(n\right)$ .
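Result 1 can also be checked empirically. The following Monte Carlo sketch (my own, with arbitrary parameters) draws many noisy realizations of a quadratic and verifies that the average of the Equation (3) estimates of the leading coefficient shows no systematic offset.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(42)

m, n, sigma = 2, 64, 30.0              # quadratic model; noise level as in Table A1
x = np.arange(1, n + 1, dtype=float)
f = 2 * x**2 + x + 3                   # true leading coefficient a_m = 2

# Equation (3): the estimate is the mean of the m-th Diff column over m!.
estimates = [np.diff(f + rng.normal(0.0, sigma, n), n=m).mean() / factorial(m)
             for _ in range(2000)]

bias = np.mean(estimates) - 2.0        # unbiasedness: should be near zero
```

Over 2000 trials the average estimate sits very close to the true value of 2, consistent with Equation (8).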

2.7.2. Result 2: ${\widehat{A}}_{m}\left(n\right)$ Is a Consistent Estimate of ${a}_{m}$

Since the random variable ${\widehat{A}}_{m}\left(n\right)-{a}_{m}$ has been shown to have zero mean by virtue of its unbiasedness, the variance of this random variable is simply $\mathbb{E}\left[{\left({\widehat{A}}_{m}\left(n\right)-{a}_{m}\right)}^{2}\right]$ . I show the consistency of the estimate ${\widehat{A}}_{m}\left(n\right)$ by deriving an upper bound for $\mathbb{E}\left[{\left({\widehat{A}}_{m}\left(n\right)-{a}_{m}\right)}^{2}\right]$ and showing that this upper bound vanishes in the limit $n\to \infty $ .

Writing out the reduced row random variables ${Q}_{k}$ for each row k of the m^{th} Diff column from Equation (6), one gets the following series of $\left(n-m\right)$ equations:

${Q}_{m}=\left(\begin{array}{c}m\\ 0\end{array}\right){N}_{m}-\left(\begin{array}{c}m\\ 1\end{array}\right){N}_{m-1}+\left(\begin{array}{c}m\\ 2\end{array}\right){N}_{m-2}+\cdots +{\left(-1\right)}^{m}\left(\begin{array}{c}m\\ m\end{array}\right){N}_{0}$

${Q}_{m+1}=\left(\begin{array}{c}m\\ 0\end{array}\right){N}_{m+1}-\left(\begin{array}{c}m\\ 1\end{array}\right){N}_{m}+\left(\begin{array}{c}m\\ 2\end{array}\right){N}_{m-1}+\cdots +{\left(-1\right)}^{m}\left(\begin{array}{c}m\\ m\end{array}\right){N}_{1}$

$\cdots $

${Q}_{k}=\left(\begin{array}{c}m\\ 0\end{array}\right){N}_{k}-\left(\begin{array}{c}m\\ 1\end{array}\right){N}_{k-1}+\left(\begin{array}{c}m\\ 2\end{array}\right){N}_{k-2}+\cdots +{\left(-1\right)}^{m}\left(\begin{array}{c}m\\ m\end{array}\right){N}_{k-m}$

$\cdots $

${Q}_{n-1}=\left(\begin{array}{c}m\\ 0\end{array}\right){N}_{n-1}-\left(\begin{array}{c}m\\ 1\end{array}\right){N}_{n-2}+\left(\begin{array}{c}m\\ 2\end{array}\right){N}_{n-3}+\cdots +{\left(-1\right)}^{m}\left(\begin{array}{c}m\\ m\end{array}\right){N}_{n-1-m}$

Summing up these $\left(n-m\right)$ equations column by column:

${\displaystyle \sum}_{k=m}^{n-1}{Q}_{k}=\left(\begin{array}{c}m\\ 0\end{array}\right){S}_{m}^{n-1}-\left(\begin{array}{c}m\\ 1\end{array}\right){S}_{m-1}^{n-2}+\left(\begin{array}{c}m\\ 2\end{array}\right){S}_{m-2}^{n-3}+\cdots +{\left(-1\right)}^{m}\left(\begin{array}{c}m\\ m\end{array}\right){S}_{0}^{n-1-m}$ (9)

where for $\left(n-1-m\right)\le i\le \left(n-1\right)$ and $0\le j\le m$ define:

${S}_{j}^{i}\stackrel{\text{def}}{=}{\displaystyle \sum}_{k=j}^{i}\text{\hspace{0.05em}}{N}_{k}$ (10)

Squaring both sides of Equation (10), and then taking expectations:

$\begin{array}{c}\mathbb{E}\left[{\left({S}_{j}^{i}\right)}^{2}\right]=\mathbb{E}\left[{\left({\displaystyle \sum}_{k=j}^{i}{N}_{k}\right)}^{2}\right]=\mathbb{E}\left[{\displaystyle \sum}_{k=j}^{i}{N}_{k}^{2}+SCT\right]\\ ={\displaystyle \sum}_{k=j}^{i}\mathbb{E}\left[{N}_{k}^{2}\right]+\mathbb{E}\left[SCT\right]={\displaystyle \sum}_{k=j}^{i}{\sigma}^{2}=\left(i-j+1\right){\sigma}^{2}\end{array}$ (11)

In equation (11) I have invoked the zero-mean, independent nature of the N’s to cause the “sum of cross-terms” term SCT to vanish.

Define a new sequence of $\left(m+1\right)$ random variables ${T}_{0},{T}_{1},\cdots ,{T}_{m}$ as follows:

${T}_{i}\stackrel{\text{def}}{=}{\left(-1\right)}^{i}\left(\begin{array}{c}m\\ i\end{array}\right){S}_{m-i}^{n-1-i}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}0\le i\le m$ (12)

By squaring both sides of definition (12), and then taking expectations:

$\mathbb{E}\left[{T}_{i}^{2}\right]=\mathbb{E}\left[{\left(-1\right)}^{2i}{\left(\begin{array}{c}m\\ i\end{array}\right)}^{2}{\left({S}_{m-i}^{n-1-i}\right)}^{2}\right]={\left(\begin{array}{c}m\\ i\end{array}\right)}^{2}\mathbb{E}\left[{\left({S}_{m-i}^{n-1-i}\right)}^{2}\right]={\left(\begin{array}{c}m\\ i\end{array}\right)}^{2}\left(n-m\right){\sigma}^{2}$ (13)

where, the final step in Equation (13) follows from Equation (11).

Using definition (12), Equation (9) can be rewritten as:

${\displaystyle \sum}_{k=m}^{n-1}{Q}_{k}={T}_{0}+{T}_{1}+\cdots +{T}_{m}$ (14)

Then, since $\mathbb{E}\left[{\left({T}_{0}+{T}_{1}+\cdots +{T}_{m}\right)}^{2}\right]\le \left(m+1\right)\left(\mathbb{E}\left[{T}_{0}^{2}\right]+\mathbb{E}\left[{T}_{1}^{2}\right]+\cdots +\mathbb{E}\left[{T}_{m}^{2}\right]\right)$ (see Appendix 1), squaring both sides of Equation (14) and then taking expectations yields:

$\begin{array}{l}\mathbb{E}\left[{\left({\displaystyle \sum}_{k=m}^{n-1}{Q}_{k}\right)}^{2}\right]=\mathbb{E}\left[{\left({T}_{0}+{T}_{1}+\cdots +{T}_{m}\right)}^{2}\right]\le \left(m+1\right)\left(\mathbb{E}\left[{T}_{0}^{2}\right]+\mathbb{E}\left[{T}_{1}^{2}\right]+\cdots +\mathbb{E}\left[{T}_{m}^{2}\right]\right)\\ =\left(m+1\right)\left(n-m\right){\sigma}^{2}\left({\left(\begin{array}{c}m\\ 0\end{array}\right)}^{2}+{\left(\begin{array}{c}m\\ 1\end{array}\right)}^{2}+\cdots +{\left(\begin{array}{c}m\\ m\end{array}\right)}^{2}\right)\text{}\left[\text{from Equation (13)}\right]\\ =\left(m+1\right)\left(n-m\right){\sigma}^{2}\left(\begin{array}{c}2m\\ m\end{array}\right)\text{}\left[\text{from the binomial identity; see [8], p. 64}\right]\end{array}$ (15)

Finally, by squaring both sides of Equation (5), and then taking expectations I obtain the following inequality from inequality (15):

$\mathbb{E}\left[{\left({\widehat{A}}_{m}\left(n\right)-{a}_{m}\right)}^{2}\right]=\frac{1}{{\left(m!\right)}^{2}{\left(n-m\right)}^{2}}\mathbb{E}\left[{\left({\displaystyle \sum}_{k=m}^{n-1}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{Q}_{k}\right)}^{2}\right]\le \frac{\left(m+1\right)\left(2m\right)!{\sigma}^{2}}{{\left(m!\right)}^{4}\left(n-m\right)}$ (16)

where, the last step in (16) is a rewriting of the binomial coefficient $\left(\begin{array}{c}2m\\ m\end{array}\right)$ in terms of factorials and subsequent simplification.
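The binomial identity invoked in the last step of (15), $\sum_{i=0}^{m}\binom{m}{i}^{2}=\binom{2m}{m}$, is easy to spot-check numerically; a minimal sketch:

```python
from math import comb

# Spot-check of the binomial (Vandermonde) identity used in the final step
# of inequality (15): the sum of C(m, i)^2 over i = 0..m equals C(2m, m).
for m in range(12):
    lhs = sum(comb(m, i) ** 2 for i in range(m + 1))
    rhs = comb(2 * m, m)
    assert lhs == rhs, (m, lhs, rhs)
print("identity verified for m = 0..11")
```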

It can be seen that the factor $\left(m+1\right)\left(2m\right)!/{\left(m!\right)}^{4}$ on the right-hand side of inequality (16) is, beyond a small constant value of m, a super-exponentially decreasing function of m, owing to the large ${\left(m!\right)}^{4}$ factor in its denominator. This factor is evaluated for a few values of m in Table 8.

Table 8 shows that:

$\mathbb{E}\left[{\left({\widehat{A}}_{m}\left(n\right)-{a}_{m}\right)}^{2}\right]\le \frac{4.5\times {\sigma}^{2}}{\left(n-m\right)}$ (17)

For the Finite Difference Regression method, $m\le {\mathrm{log}}_{2}n$ (see Section 0 and the comments in Section 0). Thus, the right-hand-side denominator $n-m\ge n-{\mathrm{log}}_{2}n$ in inequality (17) and therefore,

$\mathbb{E}\left[{\left({\widehat{A}}_{m}\left(n\right)-{a}_{m}\right)}^{2}\right]\le \frac{4.5\times {\sigma}^{2}}{\left(n-{\mathrm{log}}_{2}n\right)}$ (18)

Taking limits on both sides of inequality (18),

$\underset{n\to \infty}{\mathrm{lim}}\mathbb{E}\left[{\left({\widehat{A}}_{m}\left(n\right)-{a}_{m}\right)}^{2}\right]\le \underset{n\to \infty}{\mathrm{lim}}\frac{4.5\times {\sigma}^{2}}{\left(n-{\mathrm{log}}_{2}n\right)}=0$ (19)

Table 8. The factor $\left(m+1\right)\left(2m\right)!/{\left(m!\right)}^{4}$ evaluated for a few values of m.
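The entries of Table 8 can be recomputed directly from the formula in its caption; the short sketch below (my recomputation, not a copy of the original table) also confirms that the factor peaks at 4.5, the constant appearing in inequality (17).

```python
from math import factorial

# The factor (m + 1) * (2m)! / (m!)^4 from inequality (16). Its maximum
# over all m >= 0 is 4.5, attained at m = 2, which is the constant that
# appears in inequality (17); beyond m = 2 it decays super-exponentially.
def factor(m: int) -> float:
    return (m + 1) * factorial(2 * m) / factorial(m) ** 4

for m in range(7):
    print(m, factor(m))
# 0 -> 1.0, 1 -> 4.0, 2 -> 4.5, 3 -> ~2.22, then rapid decay

assert max(factor(m) for m in range(50)) == 4.5
```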

Since $\mathbb{E}\left[{\left({\widehat{A}}_{m}\left(n\right)-{a}_{m}\right)}^{2}\right]\ge 0$, inequality (19) implies that:

$\underset{n\to \infty}{\mathrm{lim}}\mathbb{E}\left[{\left({\widehat{A}}_{m}\left(n\right)-{a}_{m}\right)}^{2}\right]=0$ (20)

Then invoking Chebyshev’s inequality (see [7] p. 233, [9] p. 151) yields:

$\underset{n\to \infty}{\mathrm{lim}}Prob\left[\left|{\widehat{A}}_{m}\left(n\right)-{a}_{m}\right|>\epsilon \right]\le \frac{\underset{n\to \infty}{\mathrm{lim}}\mathbb{E}\left[{\left({\widehat{A}}_{m}\left(n\right)-{a}_{m}\right)}^{2}\right]}{{\epsilon}^{2}}=0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\text{any}\text{\hspace{0.17em}}\epsilon >0$

This proves the consistency property of ${\widehat{A}}_{m}\left(n\right)$ .

2.7.3. A Note on the Asymptotic Properties of the Fit

Inequality (16) and the values in Table 8 show that the higher the degree m of the polynomial, the lower the variance of the estimate of its leading coefficient. This excellent property implies that the asymptotic properties of the fit (which are most sensitive to this leading coefficient) are stable, in the sense that the value of the asymptote varies less as the degree of the fitting polynomial increases.
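To make this concrete, the sketch below is my own reconstruction (not the paper's code) of the leading-coefficient estimate underlying Equation (5): for unit-spaced samples of a degree-m polynomial plus zero-mean noise, the m-th finite differences equal $m!\,{a}_{m}$ plus noise, so their average divided by $m!$ estimates ${a}_{m}$, with mean-squared error bounded by inequality (17). The polynomial and noise level are illustrative choices.

```python
import numpy as np
from math import factorial

# Sketch (my reconstruction) of the leading-coefficient estimator: for
# unit-spaced samples y_k = p(k) + N_k of a degree-m polynomial p, the
# m-th finite differences are m! * a_m plus zero-mean noise, so averaging
# the n - m differences and dividing by m! estimates a_m.
rng = np.random.default_rng(1)
a_m, m, n, sigma = 2.0, 2, 2000, 5.0       # illustrative values only
x = np.arange(n)
y = a_m * x**2 - 7.0 * x + 80.0 + rng.normal(0.0, sigma, n)
d = np.diff(y, n=m)                        # the n - m second differences
A_hat = d.mean() / factorial(m)
print(A_hat)                               # close to a_m = 2.0
# Inequality (17) bounds the MSE by 4.5 * sigma^2 / (n - m) ~ 0.056 here.
```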

3. Conclusions and Further Work

In this paper I have presented a novel polynomial regression method for uniformly sampled data points called the method of Finite Difference Regression. Unlike classical regression methods in which the order of the best fitting polynomial model is unknown and is estimated from the R^{2} values, I have shown how the t-test can be combined with the method of finite differences to provide a more sensitive and objective measure of the order of the best fitting polynomial. Furthermore, I have carried forward the method of Finite Difference Regression, reemploying the finite differences obtained during order determination, to provide estimates of the coefficients of the best fitting polynomial. I have shown that not only are these coefficients unbiased and consistent, but also that the asymptotic properties of the fit get better with increasing degrees of the fitting polynomial.

At least three other avenues of further research remain to be explored:

1) The polynomial coefficients obtained by this method do not minimize the sum of the squared residuals. Is there an objective function that they do minimize?

2) The classical regression methods work on non-uniformly sampled data sets. Extending this method to non-uniformly sampled data should be possible.

3) Automatically finding a good t-test significance level, i.e., the precipitously low p-value that sets the order of the best fitting polynomial (as described in Section 2.4), remains an open problem. Heuristics in the spirit of the AIC or BIC methods used in the classical case (see [3] ) may be required here.

Acknowledgements

I thank Prof. Brad Lucier for his interest and for the helpful discussions and comments that made this a better paper. I am also grateful to Dr. Jan Frydendahl for encouraging me to pursue this research activity.

Cite this paper

Banerjee, A. (2018) The Method of Finite Difference Regression. Open Journal of Statistics, 8, 49-68. https://doi.org/10.4236/ojs.2018.81005

References

- 1. Gauss, C.F. (1857) Theory of the Motion of the Heavenly Bodies Moving about the Sun in Conic Sections. Davis, C.H., Trans., Little, Brown & Co., Boston, MA.
- 2. Peck, R., Olsen, C. and Devore, J. (2005) Introduction to Statistics and Data Analysis. 2nd Edition, Thomson Brooks/Cole, Belmont, CA.
- 3. Burnham, K.P. and Anderson, D.R. (2004) Multimodel Inference: Understanding AIC and BIC in Model Selection. Sociological Methods and Research, 33, 261-304. https://doi.org/10.1177/0049124104268644
- 4. Newton, I. (1848) Newton’s Principia: The Mathematical Principles of Natural Philosophy. Motte, A., Trans., Daniel Adee, New York.
- 5. Hammerlin, G. and Hoffmann, K.-H. (1991) Numerical Mathematics. Schumaker, L.L., Trans., Springer, New York. https://doi.org/10.1007/978-1-4612-4442-4
- 6. Cover, T.M. and Thomas, J.A. (2006) Elements of Information Theory. 2nd Edition, John Wiley & Sons, Inc., New York.
- 7. Feller, W. (1968) An Introduction to Probability Theory and Its Applications. 3rd Edition, Vol. I, John Wiley & Sons, Inc., New York.
- 8. Tucker, A. (1980) Applied Combinatorics. John Wiley & Sons, Inc., New York.
- 9. Feller, W. (1971) An Introduction to Probability Theory and Its Applications. 2nd Edition, Vol. II, John Wiley & Sons, Inc., New York.
- 10. Student (1908) The Probable Error of a Mean. Biometrika, 6, 1-25. https://doi.org/10.1093/biomet/6.1.1

Appendix 1

To show that for any sequence of $\left(m+1\right)$ random variables ${T}_{0},{T}_{1},\cdots ,{T}_{m}$ :

$\mathbb{E}\left[{\left({T}_{0}+{T}_{1}+\cdots +{T}_{m}\right)}^{2}\right]\le \left(m+1\right)\left(\mathbb{E}\left[{T}_{0}^{2}\right]+\mathbb{E}\left[{T}_{1}^{2}\right]+\cdots +\mathbb{E}\left[{T}_{m}^{2}\right]\right)$

The proof follows from a variance-like consideration. Given the random variables ${T}_{0},{T}_{1},\cdots ,{T}_{m}$ and any random variable $\overline{T}$ , the sum of squares random variable ${{\displaystyle \sum}}_{i=0}^{m}{\left({T}_{i}-\overline{T}\right)}^{2}$ is always non-negative. Its expected value $\mathbb{E}\left[{{\displaystyle \sum}}_{i=0}^{m}{\left({T}_{i}-\overline{T}\right)}^{2}\right]$ is therefore always non-negative. Choosing the random variable $\overline{T}\stackrel{\text{def}}{=}{{\displaystyle \sum}}_{i=0}^{m}\text{\hspace{0.05em}}{T}_{i}/\left(m+1\right)$ i.e. the “mean” of ${T}_{0},{T}_{1},\cdots ,{T}_{m}$ proves the inequality as follows:

$\begin{array}{c}0\le \mathbb{E}\left[{\displaystyle \sum}_{i=0}^{m}{\left({T}_{i}-\overline{T}\right)}^{2}\right]\\ =\mathbb{E}\left[{\displaystyle \sum}_{i=0}^{m}\left({T}_{i}^{2}-2\overline{T}{T}_{i}+{\overline{T}}^{2}\right)\right]\\ =\mathbb{E}\left[{\displaystyle \sum}_{i=0}^{m}{T}_{i}^{2}\right]-2\mathbb{E}\left[{\displaystyle \sum}_{i=0}^{m}\overline{T}{T}_{i}\right]+\mathbb{E}\left[{\displaystyle \sum}_{i=0}^{m}{\overline{T}}^{2}\right]\\ ={\displaystyle \sum}_{i=0}^{m}\mathbb{E}\left[{T}_{i}^{2}\right]-2\mathbb{E}\left[\overline{T}{\displaystyle \sum}_{i=0}^{m}{T}_{i}\right]+\mathbb{E}\left[{\overline{T}}^{2}{\displaystyle \sum}_{i=0}^{m}1\right]\\ =\begin{array}{cc}{\displaystyle \sum}_{i=0}^{m}\mathbb{E}\left[{T}_{i}^{2}\right]-2\mathbb{E}\left[\overline{T}\times \left(m+1\right)\overline{T}\right]+\mathbb{E}\left[{\overline{T}}^{2}\times \left(m+1\right)\right]& \left(\text{from defn. of}\text{\hspace{0.17em}}\overline{T}\right)\end{array}\end{array}$

$\begin{array}{l}={\displaystyle \sum}_{i=0}^{m}\mathbb{E}\left[{T}_{i}^{2}\right]-2\left(m+1\right)\mathbb{E}\left[{\overline{T}}^{2}\right]+\left(m+1\right)\mathbb{E}\left[{\overline{T}}^{2}\right]\\ ={\displaystyle \sum}_{i=0}^{m}\mathbb{E}\left[{T}_{i}^{2}\right]-\left(m+1\right)\mathbb{E}\left[{\overline{T}}^{2}\right]\\ =\begin{array}{cc}{\displaystyle \sum}_{i=0}^{m}\mathbb{E}\left[{T}_{i}^{2}\right]-\left(m+1\right)\mathbb{E}\left[{\left(\frac{{{\displaystyle \sum}}_{i=0}^{m}{T}_{i}}{m+1}\right)}^{2}\right]& \left(\text{from defn. of}\text{\hspace{0.17em}}\overline{T}\right)\end{array}\\ ={\displaystyle \sum}_{i=0}^{m}\mathbb{E}\left[{T}_{i}^{2}\right]-\frac{\left(m+1\right)}{{\left(m+1\right)}^{2}}\mathbb{E}\left[{\left({T}_{0}+{T}_{1}+\cdots +{T}_{m}\right)}^{2}\right]\end{array}$

Multiplying both sides of the above inequality by $\left(m+1\right)$ and rearranging yields the required result.
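Because the inequality holds for arbitrary random variables, it can be spot-checked with deliberately correlated, non-zero-mean draws; a quick numeric sketch (the distributions are illustrative choices of mine):

```python
import numpy as np

# Numeric spot-check of the Appendix 1 inequality
#   E[(T_0 + ... + T_m)^2] <= (m + 1) * (E[T_0^2] + ... + E[T_m^2])
# using correlated variables: each T_i shares a common component, so the
# cross-terms on the left-hand side are far from zero.
rng = np.random.default_rng(2)
m, trials = 4, 100_000
common = rng.normal(size=(trials, 1))
T = common + rng.normal(size=(trials, m + 1))   # m + 1 correlated columns
lhs = np.mean(T.sum(axis=1) ** 2)
rhs = (m + 1) * np.sum(np.mean(T**2, axis=0))
print(lhs, rhs)                                 # lhs does not exceed rhs
```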

Appendix 2

Table A1. Example showing how the finite difference regression method works.

Table A2. The noisy data compared to the best fit polynomial estimate $\widehat{y}=2.145{x}^{2}-7.464x+78.69$ determined from the successive finite differences method.

NOTES

^{1}Portions of this independent research were submitted to the 2013 Siemens Math, Science, and Technology Competition, and to the 2014 Intel Science Talent Search Competition for scholarship purposes. It received the Scientific Research Report Badge from the latter competition.

^{2}The t-test often called Student’s t-test was introduced in statistics by William S. Gosset in 1908 and published under the author’s pseudonym “Student” (see [10] ).

^{3}Assume that K and k are powers of 2 for the purposes of this discussion.