American Journal of Computational Mathematics, 2014, 4, 464-473
Published Online December 2014 in SciRes. http://www.scirp.org/journal/ajcm
http://dx.doi.org/10.4236/ajcm.2014.45039

How to cite this paper: Tabanjeh, M.M. (2014) Combining Algebraic and Numerical Techniques for Computing Matrix Determinant. American Journal of Computational Mathematics, 4, 464-473. http://dx.doi.org/10.4236/ajcm.2014.45039

Combining Algebraic and Numerical Techniques for Computing Matrix Determinant

Mohammad M. Tabanjeh
Department of Mathematics and Computer Science, Virginia State University, Petersburg, USA
Email: mtabanjeh@vsu.edu

Received 22 October 2014; revised 1 December 2014; accepted 9 December 2014

Copyright © 2014 by author and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0/

Abstract
Computing the sign of the determinant or the value of the determinant of an n × n matrix A is a classical well-known problem and a challenge for both numerical and algebraic methods. In this paper, we review, modify and combine various techniques of numerical linear algebra and rational algebraic computations (with no error) to achieve our main goal of decreasing the bit precision for computing det A or its sign, and to obtain the solution with few arithmetic operations. In particular, we improve the precision of the p-adic lifting algorithm (H = 2^h for a natural number h), which may exceed the computer precision β (see Section 5.2), to at most β bits (see Section 6). The computational cost of the p-adic lifting is O(hn^4). We reduce this cost to O(n^3) by employing the faster p-adic lifting technique (see Section 5.3).

Keywords
Matrix Determinant, Sign of the Determinant, p-Adic Lifting, Modular Determinant, Matrix Factorization, Bit-Precision

1. Introduction
Computation of the sign of the determinant of a matrix, and even of the determinant itself, is a challenge for both numerical and algebraic methods: that is, testing whether det A > 0, det A = 0, or det A < 0 for an n × n matrix A. In computational geometry, most decisions are based on the signs of determinants. Among the geometric applications in which the sign of the determinant needs to be evaluated are the computations of convex hulls, Voronoi diagrams, testing whether the line intervals of a given family have a nonempty common intersection, and finding the orientation of a high-dimensional polyhedron. We refer the reader to [1]-[5], and to the bibliography therein, for earlier work. These applications have motivated extensive algorithmic work on computing the value, and particularly the sign, of the determinant. One of the well-known numerical algorithms is Clarkson's algorithm. In [6], Clarkson uses an adaptation of the Gram-Schmidt procedure for computing an orthogonal basis and employs approximate arithmetic. His algorithm runs in low-order polynomial time and bounds the number of bits needed to represent the values for a d × d determinant with b-bit integer entries. On the other hand, the authors of [7] proposed a method for evaluating the sign of the determinant and bounding the precision in the case of small dimension d. Their algorithm's asymptotic worst case is worse than that of Clarkson; however, it is simpler and uses roughly b-bit arithmetic. In this paper we examine two different approaches to computing the determinant of a matrix or testing the sign of the determinant: the numerical approach and the algebraic approach. In particular, numerical algorithms for computing various factorizations of a matrix A, which include the orthogonal factorization A = QR and the triangular factorizations A = LU, Pr A = LU, and Pr A Pc = LU, seem to be the most effective algorithms for computing the sign of det A, provided that |det A| is large enough relative to the computer precision. Alternatively, the algebraic techniques for computing det A modulo an integer M, based on the Chinese remainder theorem and on p-adic lifting for a prime p, use lower precision or fewer arithmetic operations. In fact, the Chinese remainder theorem requires fewer arithmetic operations than p-adic lifting, but with higher precision, due to (9).
We also demonstrate some effective approaches of combining algebraic and numerical techniques in order to decrease the precision of the computation of the p-adic lifting and to introduce an alternative technique that reduces the arithmetic operations. If |det A| is small, then obtaining the sign of the determinant with lower precision can be done by the effective algebraic methods of Section 4. The Chinese remainder algorithm requires low-precision computations provided that the determinant is bounded from above by a fixed large number. This motivated us to generalize to the case of any input matrix A and to decrease the precision at the final stage of the Chinese remaindering at the price of a minimal increase in the arithmetic cost. Furthermore, in Section 5 we extend the work to an algorithm that computes det A using low precision by relying on p-adic lifting rather than Chinese remaindering.

2. Definitions, Basic Facts, and Matrix Norms
Definition 1. For an n × n matrix A = (a_ij), the determinant of A is given by either

det A = sum_{j=1}^{n} (-1)^{i+j} a_ij det A_ij, for i = 1, ..., n,

or

det A = sum_{i=1}^{n} (-1)^{i+j} a_ij det A_ij, for j = 1, ..., n,

where A_ij is the (n-1) × (n-1) matrix obtained by deleting the ith row and jth column of A.
Fact 1. Well-known properties of the determinant include det(A^T) = det A, det(A^{-1}) = 1/det A, and det(AB) = det(A) det(B) for any two n × n matrices A and B.
Definition 2. If A is a triangular matrix of order n, then its determinant is the product of the entries on the main diagonal.
Definition 3. For a matrix A of size m × n, we define the
1-norm: ||A||_1 = max_j sum_i |a_ij|,
infinity-norm: ||A||_inf = max_i sum_j |a_ij|,
p-norms: ||A||_p = sup_{x != 0} ||Ax||_p / ||x||_p,
and the 2-norm, which satisfies ||A||_2 <= sqrt(||A||_1 ||A||_inf).

3. Numerical Computations of Matrix Determinant
We find the following factorizations of an n × n matrix A, whose entries are either integer, real, or rational, by Gaussian elimination:

A = LU, (1)
Pr A = LU, (2)
Pr A Pc = LU. (3)

We reduce A to its upper triangular form U. The matrix L is a unit lower triangular matrix whose diagonal entries are
all 1, and the remaining entries of L can be filled with the negatives of the multipliers used in the elementary operations. Pr is a permutation matrix resulting from row interchanges, and Pc is a permutation matrix resulting from column interchanges.
Fact 2. If P is a permutation matrix, then det P = ±1. Moreover, P P^T = P^T P = I, where I is an identity matrix.

3.1. Triangular Factorizations
The following algorithm computes det A or its sign based on the factorizations (1)-(3).
Algorithm 1. Triangular factorizations.
Input: An n × n matrix A.
Output: det A.
Computations:
1) Compute the matrices L, U satisfying Equation (1), or Pr, L, U satisfying Equation (2), or Pr, L, U, Pc satisfying Equation (3).
2) Compute det L, det U for Equation (1); or det Pr, det L, det U for Equation (2); or det Pr, det L, det U, det Pc for Equation (3).
3) Compute and output det A = det(L) det(U) for Equation (1), or det A = det(Pr) det(L) det(U) for Equation (2), or det A = det(Pr) det(L) det(U) det(Pc) for Equation (3).
One can easily output the sign of det A from the above algorithm: we only need to compute the signs of det Pr, det L, det U, and det Pc at stage 2 and multiply these signs. The values of det Pr and det Pc can be easily found by tracking the number of row or column interchanges in the Gaussian elimination process; each is either 1 or -1, since Pr and Pc are permutation matrices. As L is a unit lower triangular matrix, det L = 1. Finally, the computation of det U requires n - 1 multiplications, since U is an upper triangular matrix. Therefore, the overall arithmetic cost of Algorithm 1 is dominated by the cost of stage 1, that is,

sum_{i=1}^{n-1} (n - i)^2 = (n - 1) n (2n - 1) / 6

multiplications, the same number of additions and subtractions, and n(n - 1)/2 divisions. However, Algorithm 1 also uses comparisons (for the pivoting) when it computes (2) or (3) rather than (1).

3.2. Orthogonal Factorization
Definition 4. A square matrix Q is called an orthogonal matrix if Q^T Q = I. Equivalently, Q^{-1} = Q^T, where Q^T is the transpose of Q.
Lemma 1. If Q is an orthogonal matrix, then det Q = ±1.
Proof. Since Q is orthogonal, Q^T Q = I. Now det(Q^T Q) = det(Q^T) det(Q) and thus det(Q^T) det(Q) = det I = 1, but det(Q^T) = det Q, which implies that (det Q)^2 = 1; hence det Q = ±1.
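Before turning to the orthogonal factorization, the triangular-factorization route of Algorithm 1 can be sketched in a few lines of Python. This is an illustration, not the paper's code: it realizes factorization (2) (partial pivoting) while tracking det Pr through the row swaps, and it uses exact Fraction arithmetic so that the computed sign is free of rounding error.

```python
from fractions import Fraction

def det_plu(a):
    """Determinant via Gaussian elimination with partial pivoting
    (factorization (2)): det A = det(Pr) det(L) det(U), det(L) = 1."""
    a = [[Fraction(x) for x in row] for row in a]
    n = len(a)
    det_p = 1                      # det(Pr); flips sign at each row swap
    for i in range(n):
        # choose the pivot of largest absolute value in column i
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        if a[p][i] == 0:
            return Fraction(0)     # singular matrix
        if p != i:
            a[i], a[p] = a[p], a[i]
            det_p = -det_p
        for r in range(i + 1, n):
            m = a[r][i] / a[i][i]  # multiplier (an entry of L, kept implicit)
            for c in range(i, n):
                a[r][c] -= m * a[i][c]
    det_u = Fraction(1)
    for i in range(n):             # det(U) = product of diagonal entries
        det_u *= a[i][i]
    return det_p * det_u
```

For instance, det_plu([[1, 17, 18], [1, 18, 19], [5, 16, 20]]) returns -1; the sign of the result is the sign of the determinant.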
Definition 5. If A is a nonsingular n × n matrix, then there exists an orthogonal matrix Q and an upper triangular matrix R so that

A = QR. (4)

Algorithm 2. QR-factorization.
Input: An n × n matrix A.
Output: det A.
Computations:
1) Compute an orthogonal matrix Q satisfying Equation (4).
2) Compute Q^T.
3) Compute the matrix R = Q^T A.
4) Compute det A = det(Q) r_11 r_22 ... r_nn, where r_ii are the main diagonal entries of R.
Here, we consider two well-known effective algorithms for computing the QR factorization: the Householder and the Givens algorithms. The Householder transformation (or reflection) is a matrix of the form

H = I - 2vv^T, where v is a vector with v^T v = 1.

Lemma 2. H is symmetric and orthogonal. That is, H^T = H and H^T H = I.
Proof. H^T = (I - 2vv^T)^T = I - 2(vv^T)^T = I - 2vv^T = H, since (vv^T)^T = vv^T. Hence,
M. M. Tabanjeh H is symmetric. On the other hand, ( )( )() TTTTTTT T 224444.HH IIIvII=−− =−+=−+=vvvvvvv vvvvvv Therefore, H is orthogonal, that is, . The Householder algorithm computes as a product of matrices of Householder trans formations, and the upper triangular matrix . Lemma 3. for a matrix Q computed by the Househol de r algorit h m. Proof. Since for vectors , , Lemma 2 implies that and hence . Notice that where . Now for , we define the trans formation . Then the function ( )( )() ( ) T detDtIvtvt= − is continuous and . Hence ( ) ( ) T 11 1 det21DI vv=−=− since and . This proves that . But , we have . The Givens rotations can also be used to compute the QR factorization, where , t is the total number of rotations, Gj is the the jth rotation matrix, and is an upper triangular matrix. Since the Givens algorithm computes Q as a product of the matrices of Givens rotations, then . We define ()( ) ,, ,, ii kk G WcsWcsc= == , for two fixed integers i and k, and for a pair of nonzero rea l numbers c and s that satisfies such that . If the Householder is used to find the QR, Algo rithm 2 uses multiplications at stage 3. It involves multiplications at stage 1, and evaluations of the square roots of positive numbers. If the Givens algorithm is used to find the QR, then it in volves multiplications at stage 3, multiplications at stage 1, and evalua tions. This shows some advantage of the Householder over the Givens algorithm. However, the Givens rotations algorithm allows us to update the QR of A successively by using only arithmetic operations and square root evaluations if rows or columns of A are successively replaced by new ones. Therefore, in this case, the Givens algorithm is more effective. 4. Algebraic Computations of Matrix Determinant 4.1. Determinant Computation Based on the Chinese Remainder Theorem Theorem 1. (Chinese remainder theorem.) 
Let m_1, m_2, ..., m_k be distinct pairwise relatively prime integers, and let

M = m_1 m_2 ... m_k. (5)

Let D denote an integer satisfying the inequality

2|D| < M. (6)

Let

r_i = D mod m_i, i = 1, ..., k, (7)

M_i = M / m_i, s_i = M_i mod m_i, y_i s_i ≡ 1 (mod m_i), for i = 1, ..., k. (8)

Then D is the unique integer satisfying (6) and (7). In addition,

D = (sum_{i=1}^{k} M_i r_i y_i) mod M. (9)

Algorithm 3. (Computation of det A based on the Chinese remainder theorem.)
Input: An integer matrix A; an algorithm that computes det A mod m for any fixed integer m > 1; k integers m_1, ..., m_k that satisfy (5) and are pairwise relatively prime; and the bound 2|det A| < M.
Output: det A.
Computations:
1) Compute r_i = det A mod m_i, for i = 1, ..., k.
2) Compute the integers M_i, s_i, and y_i as in (8).
3) Compute and output D as in (9).
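The three stages of Algorithm 3 can be sketched in Python. This is an illustrative sketch, not the paper's code: it assumes the moduli m_i are primes (so that pivot inverses exist in the modular elimination) and Python 3.8+ (for the modular inverse pow(x, -1, m)); the final adjustment is the symmetric residue of Lemma 4 in Section 4.2.

```python
def det_mod(a, p):
    """det A mod a prime p via Gaussian elimination over Z_p;
    pivot inverses come from the extended Euclidean algorithm
    (here via Python's pow(x, -1, p))."""
    a = [[x % p for x in row] for row in a]
    n, d, sign = len(a), 1, 1
    for i in range(n):
        piv = next((r for r in range(i, n) if a[r][i]), None)
        if piv is None:
            return 0
        if piv != i:
            a[i], a[piv] = a[piv], a[i]
            sign = -sign
        inv = pow(a[i][i], -1, p)
        for r in range(i + 1, n):
            m = a[r][i] * inv % p
            for c in range(i, n):
                a[r][c] = (a[r][c] - m * a[i][c]) % p
        d = d * a[i][i] % p
    return sign * d % p

def det_by_crt(a, moduli):
    """Algorithm 3: recover det A from its residues r_i = det A mod m_i,
    assuming pairwise coprime prime moduli with M = prod(m_i) > 2|det A|."""
    M = 1
    for m in moduli:
        M *= m
    D = 0
    for m in moduli:
        Mi = M // m                    # (8): M_i = M / m_i
        si = Mi % m                    #      s_i = M_i mod m_i
        yi = pow(si, -1, m)            #      y_i s_i = 1 (mod m_i)
        ri = det_mod(a, m)             # stage 1 residues (7)
        D = (D + Mi * ri * yi) % M     # (9)
    return D if D < M // 2 else D - M  # symmetric residue, Lemma 4
```

For example, det_by_crt([[1, 17, 18], [1, 18, 19], [5, 16, 20]], [5, 7, 11]) returns -1, the exact determinant, recovered from residues modulo 5, 7, and 11 only.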
The values of y_i in (8) are computed by the Euclidean algorithm applied to s_i and m_i for i = 1, ..., k. Algorithm 3 uses O(kn^3) arithmetic operations (ops) at stage 1, O(k) ops at stage 2, and O(k) ops at stage 3. The computations of the algorithm are performed with precision of at most max_i log m_i bits at stage 1, and at most log M bits at stages 2 and 3. The latter precision can be decreased at the cost of slightly increasing the number of arithmetic operations [8].
Lemma 4. For any pair of integers M and d such that 2|d| < M, we have

d = d mod M if (d mod M) < M/2, and d = (d mod M) - M otherwise. (10)

Suppose that the input matrix A is filled with integers and an upper bound on |det A| is known. Then we may choose an integer M such that 2|det A| < M and compute (det A) mod M. Hence, det A can be recovered by using (10).

4.2. Modular Determinant
Let A be an n × n integer matrix. In order to calculate det A, we choose a prime p that is bigger than 2|det A| and perform Gaussian elimination (2) on A modulo p. This is the same as Gaussian elimination over the rationals, except that when dividing by a pivot element a we have to calculate its inverse modulo p by the extended Euclidean algorithm. This requires O(n^3) arithmetic operations modulo p. On the other hand, if we need to compute det A for an updated matrix A, we rely on the QR factorization such that A = QR and Q^T Q = D, where D is a diagonal matrix and R is a unit upper triangular matrix. The latter factorization can be computed by the Givens algorithm [9]. If the entries of the matrix A are much smaller than p, then we do not need to reduce modulo p the results at the initial steps of Gaussian elimination; that is, the computation can be performed in exact rational arithmetic using lower precision. In this case, one may apply the algorithms of [10] and [11] to keep the precision low. The computations modulo a fixed integer M can be performed with precision of O(log M) bits. In such a computation, we do not need to worry about the growth of the intermediate values anymore.
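As a concrete instance of the exact integer-preserving elimination attributed to [10] and [11], the following Python sketch implements one common form of Bareiss elimination (an illustration, not the paper's code). Every division below is exact, so all intermediate values remain integers and their growth stays controlled by the minors of A.

```python
def det_bareiss(a):
    """Integer-preserving Gaussian elimination (Edmonds [10], Bareiss [11]).
    Each entry produced at step i is a minor of the (possibly row-swapped)
    input matrix, so the division by the previous pivot is exact."""
    a = [row[:] for row in a]
    n = len(a)
    sign, prev = 1, 1
    for i in range(n - 1):
        if a[i][i] == 0:                 # bring in a nonzero pivot if any
            piv = next((r for r in range(i + 1, n) if a[r][i]), None)
            if piv is None:
                return 0
            a[i], a[piv] = a[piv], a[i]
            sign = -sign
        for r in range(i + 1, n):
            for c in range(i + 1, n):
                # Sylvester's identity guarantees exact integer division
                a[r][c] = (a[r][c] * a[i][i] - a[r][i] * a[i][c]) // prev
            a[r][i] = 0
        prev = a[i][i]
    return sign * a[n - 1][n - 1]
```

The final entry a[n-1][n-1] is det A itself (up to the tracked sign of the row swaps), with no fractions ever appearing.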
However, to reduce the cost of the computations, one can work with several small prime moduli instead of a single large prime; these primes can be chosen very small, of logarithmic length. Then the resulting algorithm has a lower Boolean cost and can be performed in parallel. Now, one can find a bound on |det A| without actually calculating the determinant. This bound is given by Hadamard's inequality, which says that

|det A| <= (a sqrt(n))^n = a^n n^{n/2}, (11)

where a = max_{i,j} |a_ij| (a nonnegative integer) is the maximal absolute value of an entry of A.

5. Alternative Approach for Computing Matrix Inverses and Determinant
5.1. p-Adic Lifting of Matrix Inverses
In this section, we present an alternative approach for computing the matrix determinant to the ones we discussed in earlier sections. The main goal is to decrease the precision of algebraic computations with no rounding errors. The technique relies on Newton's-Hensel's lifting (p-adic lifting) and uses O(hn^4) arithmetic operations. However, we will also show how to modify this technique to use order of n^3 arithmetic operations by employing the faster p-adic lifting. Given an initial approximation S_0 to the inverse, a well-known formula for Newton's iteration rapidly improves the initial approximation to the inverse of a nonsingular n × n matrix A:

S_{i+1} = S_i (2I - A S_i) = 2S_i - S_i A S_i = (2I - S_i A) S_i, i >= 0. (12)

Algorithm 4. (p-adic lifting of matrix inverses.)
Input: An n × n matrix A, an integer p > 1, the matrix S_0 = A^{-1} mod p, and a natural number h.
Output: The matrix S_h = A^{-1} mod p^H, H = 2^h.
Computations:
1) Recursively compute the matrices S_j by the equation
S_j = S_{j-1} (2I - A S_{j-1}) mod p^J, where J = 2^j, j = 1, ..., h. (13)

2) Finally, output S_h = A^{-1} mod p^H.
Note that S_j = A^{-1} mod p^{2^j} for j = 0, 1, ..., h. This follows from the equation

I - A S_j ≡ (I - A S_{j-1})^2 mod p^J.

The above algorithm uses O(n^3) arithmetic operations performed modulo p^J at the jth stage, that is, a total of O(hn^3) arithmetic operations with precision of at most H log p bits.

5.2. p-Adic Lifting of Matrix Determinant
We can further extend the p-adic lifting of matrix inverses to the p-adic lifting of matrix determinants, that is, of (det A) mod p^{2H}, using the following formula [12]:

1 / det A = prod_{k=1}^{n} (A_k^{-1})_{k,k}. (14)

Here, A_k denotes the k × k leading principal (northwestern) submatrix of A, so that A_n = A, and (B)_{i,j} denotes the (i, j)th entry of a matrix B. In order to use Formula (14), we must have the inverses of the A_k available modulo a fixed prime integer p for all k. A matrix A is called strongly nonsingular modulo p if the inverses of all the submatrices A_k exist modulo p. In general, a matrix A is called strongly nonsingular if A is nonsingular and all of its k × k leading principal submatrices are nonsingular. Here, we assume that A is strongly nonsingular for a choice of p. Finally, Algorithm 4 can be extended to the lifting of the determinant (see Algorithm 5).
Algorithm 5. (p-adic lifting of matrix determinant.)
Input: An integer p > 1, an n × n matrix A, the matrices S_{0,k} = A_k^{-1} mod p for k = 1, ..., n, and a natural number h.
Output: (det A) mod p^{2H}, H = 2^h.
Computations:
1) Apply Algorithm 4 to all pairs of matrices A_k and S_{0,k} (replacing the matrices A and S_0 in the algorithm), so as to compute the matrices S_{h,k} = A_k^{-1} mod p^H, for k = 1, ..., n.
2) Compute the value

(1 / det A) mod p^{2H} = prod_{k=1}^{n} [S_{h,k} (2I - A_k S_{h,k})]_{k,k} mod p^{2H}. (15)

3) Compute and output the value (det A) mod p^{2H} as the reciprocal of (15) modulo p^{2H}.
The overall computational cost of Algorithm 5 at stage 1 is O(hn^4) arithmetic operations performed with precision of at most H log p bits. At stage 2, the algorithm uses O(n^3) operations with precision of 2H log p bits. At stage 3, only a single modular reciprocal computation is needed. All of the above operations are calculated modulo p^{2H}. Now, we will estimate the values of H and p that must satisfy the bound p^{2H} > 2|det A|.
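A minimal Python sketch of Algorithm 4 follows (illustrative, not the paper's code). It computes the starting value S_0 by Gauss-Jordan elimination over Z_p, assuming p prime and A nonsingular modulo p, and uses pow(x, -1, p) (Python 3.8+) for pivot inverses; the loop then performs the lifting step (13).

```python
def lift_inverse(A, p, h):
    """Algorithm 4: p-adic (Newton-Hensel) lifting of a matrix inverse.
    Starting from S0 = A^{-1} mod p, step (13) doubles the number of
    correct p-adic digits, returning A^{-1} mod p^(2^h)."""
    n = len(A)

    def matmul(X, Y, m):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) % m
                 for j in range(n)] for i in range(n)]

    # S0 = A^{-1} mod p by Gauss-Jordan elimination on [A | I] over Z_p
    R = [[x % p for x in row] for row in A]
    M = [R[i] + [int(i == j) for j in range(n)] for i in range(n)]
    for i in range(n):
        piv = next(r for r in range(i, n) if M[r][i])  # nonzero pivot
        M[i], M[piv] = M[piv], M[i]
        inv = pow(M[i][i], -1, p)
        M[i] = [x * inv % p for x in M[i]]
        for r in range(n):
            if r != i and M[r][i]:
                M[r] = [(x - M[r][i] * y) % p for x, y in zip(M[r], M[i])]
    S = [row[n:] for row in M]

    for j in range(1, h + 1):
        q = p ** (2 ** j)                       # modulus p^J with J = 2^j
        T = matmul(A, S, q)                     # A S_{j-1} mod p^J
        T = [[(2 * (i == k) - T[i][k]) % q for k in range(n)]
             for i in range(n)]                 # 2I - A S_{j-1}
        S = matmul(S, T, q)                     # lifting step (13)
    return S
```

Applied to the 3 × 3 matrix of Example 1 below with p = 3 and h = 1, the sketch returns A^{-1} mod 9.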
But, due to Hadamard's bound (11), we have |det A| <= a^n n^{n/2}, which implies that it suffices to require

2H > n log_p a + (n/2) log_p n + log_p 2.

Therefore, it suffices to choose p and H satisfying the inequality p^{2H} > 2 a^n n^{n/2} >= 2|det A|. In the next section, we will present an alternative, faster technique for computing the matrix determinant that uses only O(n^3) arithmetic operations, based on a divide-and-conquer algorithm.
Example 1. Let

A =
[ 1 17 18
  1 18 19
  5 16 20 ].

Then A_1 = [1], A_2 = [ 1 17 ; 1 18 ], and A_3 = A. Choose p = 3 and h = 1, so that H = 2^h = 2 and p^{2H} = 81. To prepare the inputs S_{0,k} = A_k^{-1} mod 3 of Algorithm 4, note that

A_3^{-1} =
[ -56  52  1
  -75  70  1
   74 -69 -1 ].

Now, we compute
S_{0,1} = A_1^{-1} mod 3 = [1],

S_{0,2} = A_2^{-1} mod 3 =
[ 0 1
  2 1 ],

and

S_{0,3} = A_3^{-1} mod 3 =
[ 1 1 1
  0 1 1
  2 0 2 ].

We apply Algorithm 4 (that is, S_{1,k} = S_{0,k} (2I - A_k S_{0,k}) mod p^J, J = 2^j, j = 1, ..., h) to all pairs of matrices:

S_{1,1} = S_{0,1} (2I - A_1 S_{0,1}) mod 3^2 = [1],

S_{1,2} = S_{0,2} (2I - A_2 S_{0,2}) mod 3^2 =
[ 0 1
  8 1 ],

and

S_{1,3} = S_{0,3} (2I - A_3 S_{0,3}) mod 3^2 =
[ 7 7 1
  6 7 1
  2 3 8 ].

We then compute the value (15):

(1/det A) mod 3^4 = [S_{1,1}(2I - A_1 S_{1,1})]_{1,1} [S_{1,2}(2I - A_2 S_{1,2})]_{2,2} [S_{1,3}(2I - A_3 S_{1,3})]_{3,3} mod 81
= (1)(1)(80) mod 81 = 80.

Therefore, we output the value det A = -1, since 80 ≡ -1 (mod 81), the reciprocal of -1 modulo 81 is -1, and 2|det A| < 81, so that (10) applies.

5.3. Faster p-Adic Lifting of the Determinant
Assuming A is strongly nonsingular modulo p, block Gauss-Jordan elimination can be applied to the 2 × 2 block matrix A = [ B C ; E G ] to obtain the well-known block factorization of A:

A =
[ I        O ] [ B O ] [ I B^{-1}C ]
[ E B^{-1} I ] [ O S ] [ O I       ],

where S = G - E B^{-1} C is the Schur complement of B in A. The following is the divide-and-conquer algorithm for computing det A.
Algorithm 6. (Computing the determinant of a strongly nonsingular matrix.)
Input: An n × n strongly nonsingular matrix A.
Output: det A.
Computations:
1) Partition A according to the block factorization above, where B is a floor(n/2) × floor(n/2) matrix. (floor(n/2) refers to the floor of n/2.) Compute B^{-1} and S using the matrix equation S = G - E B^{-1} C.
2) Compute det B and det S.
3) Compute and output det A = (det B)(det S).
Since A is strongly nonsingular modulo p, we can compute the inverse of a k × k leading block by Algorithm 4. From the above factorization of A, we conclude that

D(n) <= 2 D(n/2) + I(n/2) + O(n^3),

the O(n^3) term accounting for the block multiplications. The computational cost of computing the determinant is therefore O(n^3). This can be derived from the above factorization of A using recursive factorization applied to B and S and the inverses computed modulo powers of p. Here, I(n) is the cost of computing the inverse and D(n) is
the cost of computing the determinant.

6. Improving the Precision of the p-Adic Lifting Algorithm
Definition 6. Let e be a fixed integer such that e <= β, where β is the computer precision. Then any integer z such that |z| < 2^e will fit the computer precision and will be called a short integer or e-integer. The integers that exceed 2^e will be called long integers, and we will associate them with the e-polynomials z(x) = sum_i z_i x^i, where |z_i| < 2^e for all i, so that z = z(2^e).
Recall that Algorithm 4 uses 2^j log p bits at stage j. For large j, this value is going to exceed β. In this section we decrease the precision of the p-adic algorithm and keep it below β. In order to do this, we choose a base K, where K is a power of 2, K < 2^e, K divides 2^e, and

2 n K^2 < 2^β. (16)

Now, let us associate the entries of the matrices A_n and S_{j,n} with e-polynomials in x, for all j, which take the values of the entries for x = K, and define the polynomial matrices A_n(x) and S_{j,n}(x). Then, for j = 1, ..., h, we associate the p-adic lifting step (13) with the matrix polynomial equation

S_{j,n}(x) = S_{j-1,n}(x) (2I - A_n(x) S_{j-1,n}(x)) mod x^{J_K}, (17)

where the power x^{J_K} plays the role of the modulus p^J once x is evaluated at K. The input polynomial matrices are A_n(x) and S_{j-1,n}(x). We perform the computations in (17) modulo x^{J_K}. The idea here is to apply e-reduction to all the entries of the output matrix polynomial, followed by a new reduction modulo x^{J_K}. The resulting entries are just polynomials with integer coefficients ranging between -2^e and 2^e. This is due to the recursive e-reductions and then taking the modular reductions again. Note that the output entries are polynomials with coefficients in the range -nK^2 to nK^2 even before applying the e-reductions. This shows that the β-bit precision is sufficient in all of the computations, due to (16). The same argument can be applied to the matrices S_{j,k} for all j and k = 1, ..., n.

7. Numerical Implementation of Matrix Determinant
In this section we show a numerical implementation of the determinant of n × n matrices based on the triangular factorizations LU, PrLU, and PrLUPc.
Algorithm 1 computes the triangular matrices L + E_L and U + E_U and the permutation matrices Pr and Pc, where E_L and E_U are the perturbations of L and U. In general, we can assume that L + E_L ≈ L and U + E_U ≈ U, since the rounding error is small.
Definition 7. Let

A' = Pr A Pc, E' = A' - LU, (18)

and let l, a, u, and e' denote the maximum absolute values of the entries of the matrices L, A, U, and E', respectively. Here, E' is the error matrix of the order of the roundoff in the entries of A. Assume floating point arithmetic with double precision (64 bits) and rounding to 53 significand bits. Then the upper bound on the magnitude of the relative rounding errors is the machine epsilon ε ≈ 2.22 × 10^{-16}. Our goal is to estimate e'.
Theorem 2. ([13]) For a matrix B = (b_ij), let |B| denote the matrix (|b_ij|). Then, under (18),

|E'| <= n ε |L| |U| + O(ε^2). (19)

From the matrix norm property max_{i,j} |a_ij| <= ||A||_q <= n max_{i,j} |a_ij|, for q = 1, infinity, we obtain the bound of the following theorem.
Theorem 3. Let u~ denote the maximum absolute value of all the entries of the matrices computed by the Gaussian elimination that reduces A to the upper triangular form U, and let ε be as defined above. Then
e' <= e_+, where e_+ is an explicit bound of order n l u~ ε derived in [13]. (20)

By using the following results, we can estimate the magnitude of the perturbation of det A' caused by the perturbation E' of A'. Here we assume that e >= e' and write

e_d = det(A' + E') - det(A'). (21)

We use the following facts to estimate e_d of (21) in Lemma 5 below.
Fact 3. ||A||_2^2 <= ||A||_1 ||A||_inf; |det A| <= ||A||_q^n, for q = 1, 2, infinity; ||E||_q <= n e, for q = 1, infinity, if the entries of E do not exceed e in absolute value; ||B||_q <= ||A||_q, for q = 1, infinity, if B is a submatrix of A.
Lemma 5.

|e_d| <= (||A||_q + ne)^{n-1} n^2 e, for q = 1, 2, infinity.

Combining Lemma 5 with the bound (19) enables us to extend Algorithm 1 as follows:
Algorithm 7. (Bounding the determinant.)
Input: An n × n real matrix A and a positive ε.
Output: A pair of real numbers d_- and d_+ such that d_- <= det A <= d_+.
Computations:
1) Apply Algorithm 1 in floating point arithmetic with unit roundoff bounded by ε. Let U~ denote the computed upper triangular matrix, approximating the factor U of A from (2) or (3).
2) Compute the upper bound e_+ of (20), where u~ = max_{i,j} |u~_ij|.
3) Substitute e_+ for e and min(||A||_1, ||A||_inf, sqrt(||A||_1 ||A||_inf)) for ||A||_q in Lemma 5 and obtain an upper bound e_d^+ on |e_d|.
4) Output the values d_- = det U~ - e_d^+ and d_+ = det U~ + e_d^+.
Example 2. Let

A =
[ 5/3   31/6  0
  19/4  3/2   16/5
  17/5  21/4  12/9 ]
≈
[ 1.6667  5.1667  0
  4.7500  1.5000  3.2000
  3.4000  5.2500  1.3333 ].

Using Matlab, Gaussian elimination with complete pivoting gives the matrix U~ rounded to 4 decimal digits:

U~ =
[ 5.2500  1.3333  3.4000
  0      -5.5899  1.0794
  0       0      -3.2342 ],

L =
[ 1       0       0
  0.3175  1       0
  0.2857 -0.5043  1 ],

and the permutation matrices Pr and Pc, where E_U and E_L are the matrices obtained from the accumulation of rounding errors. Also,

Pc =
[ 0 0 1
  1 0 0
  0 1 0 ].

Then

E' =
[ 0  6 × 10^{-5}  6 × 10^{-5}
  0  0            0
  0  0            3 × 10^{-5} ]

is a perturbation in A', and e' <= 6 × 10^{-5}. We now compute the upper bound e_+ of (20); it is determined by a, the maximum absolute value of the entries of A, by l = 1 (the maximum for L), by u~ = max_{i,j} |u~_ij|, by ε, and by the size n = 3 of the input matrix A. Hence we obtain the following upper bound on |e_d| by Lemma 5:

|e_d| <= (min(||A||_1, ||A||_inf, sqrt(||A||_1 ||A||_inf)) + n e_+)^{n-1} n^2 e_+ = 2.2347 × 10^{-7}.

Hence, e_d^+ = 2.2347 × 10^{-7} is an upper bound on |e_d|, and we output d_- = det U~ - e_d^+ and d_+ = det U~ + e_d^+. Since det(Pr) det(Pc) = 1, we have
d_- <= det A <= d_+.

On the other hand, e_d = det(A' + E') - det(A') = 0.00123, which is less than the upper bound we have computed.

Acknowledgements
We thank the editor and the referees for their valuable comments.

References
[1] Muir, T. (1960) The Theory of Determinants in Historical Order of Development. Vol. I-IV, Dover, New York.
[2] Fortune, S. (1992) Voronoi Diagrams and Delaunay Triangulations. In: Du, D.-Z. and Hwang, F.K., Eds., Computing in Euclidean Geometry, World Scientific Publishing Co., Singapore, 193-233.
[3] Brönnimann, H., Emiris, I.Z., Pan, V.Y. and Pion, S. (1997) Computing Exact Geometric Predicates Using Modular Arithmetic. Proceedings of the 13th Annual ACM Symposium on Computational Geometry, 174-182.
[4] Brönnimann, H. and Yvinec, M. (2000) Efficient Exact Evaluation of Signs of Determinants. Algorithmica, 27, 21-56. http://dx.doi.org/10.1007/s004530010003
[5] Pan, V.Y. and Yu, Y. (2001) Certification of Numerical Computation of the Sign of the Determinant of a Matrix. Algorithmica, 30, 708-724. http://dx.doi.org/10.1007/s0045300100328
[6] Clarkson, K.L. (1992) Safe and Effective Determinant Evaluation. Proceedings of the 33rd Annual IEEE Symposium on Foundations of Computer Science, IEEE Computer Society Press, 387-395.
[7] Avnaim, F., Boissonnat, J., Devillers, O., Preparata, F.P. and Yvinec, M. (1997) Evaluating Signs of Determinants Using Single-Precision Arithmetic. Algorithmica, 17, 111-132. http://dx.doi.org/10.1007/BF02522822
[8] Pan, V., Yu, Y. and Stewart, C. (1997) Algebraic and Numerical Techniques for Computation of Matrix Determinants. Computers & Mathematics with Applications, 34, 43-70. http://dx.doi.org/10.1016/S0898-1221(97)00097-7
[9] Golub, G.H. and Van Loan, C.F. (1996) Matrix Computations. 3rd Edition, Johns Hopkins University Press, Baltimore.
[10] Edmonds, J. (1967) Systems of Distinct Representatives and Linear Algebra. Journal of Research of the National Bureau of Standards, 71, 241-245.
http://dx.doi.org/10.6028/jres.071B.033
[11] Bareiss, E.H. (1968) Sylvester's Identity and Multistep Integer-Preserving Gaussian Elimination. Mathematics of Computation, 22, 565-578.
[12] Bini, D. and Pan, V.Y. (1994) Polynomial and Matrix Computations, Vol. 1: Fundamental Algorithms. Birkhäuser, Boston. http://dx.doi.org/10.1007/978-1-4612-0265-3
[13] Conte, S.D. and de Boor, C. (1980) Elementary Numerical Analysis: An Algorithmic Approach. McGraw-Hill, New York.
