Applied Mathematics, 2011, 2, 309-314
doi:10.4236/am.2011.23036 Published Online March 2011 (http://www.SciRP.org/journal/am)
Copyright © 2011 SciRes. AM

On Decompositions of Real Polynomials Using Mathematical Programming Methods

Janez Povh
Faculty of Information Studies Novo Mesto, Novo mesto, Slovenia
Institute of Mathematics, Physics and Mechanics Ljubljana, Ljubljana, Slovenia
E-mail: janez.povh@fis.unm.si

Received March 5, 2010; revised January 11, 2011; accepted January 15, 2011

Abstract

We present a procedure that yields an SOS (sum of squares) decomposition of a given real polynomial in n variables whenever such a decomposition exists. For real polynomials in n non-commutative variables we extend this procedure to obtain a sum of hermitian squares (SOHS) decomposition whenever one exists. This extended procedure is the main scientific contribution of the paper.

Keywords: Commutative Polynomial, Noncommutative Polynomial, Sum of Squares, Semidefinite Programming, Newton Polytope

1. Introduction

Operations research, especially mathematical programming as a subfield, very often makes progress by using results from other ("pure") areas of mathematics, such as analysis, algebra, probability, etc. The converse rarely happens, i.e. that new methods developed by the mathematical programming community inspire further research in pure mathematics.

One of the main breakthroughs in operations research in the last decades was the development of theoretically and practically efficient interior-point methods for convex optimization problems. These methods were initially developed for linear programming (LP) problems and later extended to many other convex optimization problems. The first non-trivial class of optimization problems to which the interior-point methods were extended was the class of linear programs over the cone of positive semidefinite matrices or over the second-order cone (see Section 2 and [1]).
Once semidefinite programs could be solved efficiently (i.e. an $\varepsilon$-optimal solution can be found in time polynomial in $\log(1/\varepsilon)$ and the size of the input data), this class of problems and the methods to solve them (called semidefinite programming) became a very important tool in optimization, control and many other areas of applied mathematics and engineering, see e.g. [1].

Computing the global minimum of a given real polynomial over a set defined by polynomial inequalities (we call such a set a semi-algebraic set) is an NP-hard problem, since it includes linear binary problems. Several authors, in particular Lasserre [2-4], Parrilo [5], Parrilo and Sturmfels [6] and Schweighofer [7,8], have shown how to solve such a problem approximately by a sequence of semidefinite programs. The main results needed to construct such a sequence are the fact that if a given polynomial is a sum of squares (SOS), then it is non-negative, and the dual theory of moments [2]. A precise and comprehensive overview of the results on this topic has been given by M. Laurent in [9].

Checking whether a given polynomial is non-negative is an NP-hard problem, but checking whether a given polynomial is SOS can be done efficiently, in theory and in practice, by semidefinite programming. This topic has been well exploited in recent years, and software packages are currently available to detect SOS polynomials and to do polynomial optimization. Readers interested in solving sums of squares problems for commuting polynomials are referred to one of the great existing packages SOSTOOLS [10,11], GloptiPoly [12], YALMIP [13,14], and SparsePOP [15].

Compared to commutative polynomials, the problem of writing a non-commutative polynomial as a sum of hermitian squares (SOHS) is much less explored, theoretically and practically. However, several results show that this area is interesting and important.
Helton [16] proved that a given real polynomial in NC variables is SOHS if and only if it yields a positive semidefinite (PSD) matrix after substituting the variables by symmetric real matrices of the same size. For a beautiful exposition, we refer the reader to [17].

Together with coworkers, Helton pursued this line of research on NC polynomials further, studied positivity and convexity of NC polynomials and gave applications to control theory, optimization, systems engineering, etc.; see [18] for a nice survey of these beginnings of free semialgebraic geometry. Klep and Schweighofer [19] connected sums of hermitian squares of NC polynomials to an old open problem of Connes on von Neumann algebras, and, somewhat related, found applications to mathematical physics [20]. Many of these results were obtained with the aid of computer programs written in an ad-hoc manner. Despite the fast rise of free semialgebraic geometry, it seems that [21] is the first publication about a theoretical algorithm and publicly available software for computing (or determining the existence of) sums of hermitian squares (SOHS) decompositions of NC polynomials.

In this paper we illustrate the procedures mentioned above on a number of well-chosen examples.

1.1. Notation

In the paper we use standard notation from optimization and algebra. By $X \succeq 0$ we denote that $X$ is positive semidefinite (i.e. $X$ is symmetric and has only non-negative eigenvalues). The scalar product of two matrices $X$ and $Y$ of the same size is $\langle X, Y\rangle = \operatorname{trace}(XY^T) = \sum_{i,j} x_{ij} y_{ij}$. By $I$ we denote the identity matrix.

The set of real polynomials in $n$ (commutative) variables (algebraists call this set the algebra of real polynomials) is denoted by $\mathbb{R}[x] = \mathbb{R}[x_1, \ldots, x_n]$. Elements of $\mathbb{R}[x]$ are therefore polynomials which can be written as
$$p(x) = \sum_i c_i x^{\alpha_i} = \sum_i c_i x_1^{\alpha_{i1}} x_2^{\alpha_{i2}} \cdots x_n^{\alpha_{in}},$$
where the $\alpha_i$ are vectors from $\mathbb{N}^n$ which define the exponents of the monomials.
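The exponent-vector representation above lends itself to a compact data structure. The following sketch is our own illustration (the helper names `degree` and `evaluate` are hypothetical, not part of any package cited here): a polynomial is stored as a dict mapping each exponent vector $\alpha_i$, kept as a tuple, to its coefficient $c_i$.

```python
# A polynomial p = sum_i c_i * x^{alpha_i} stored as {exponent tuple: coefficient}.
# Illustrative helpers only, not part of SOSTOOLS or NCSOStools.

def degree(p):
    """deg p = max_i sum_j alpha_ij, the largest total degree of a monomial."""
    return max(sum(alpha) for alpha, c in p.items() if c != 0)

def evaluate(p, point):
    """Evaluate p at a point by expanding sum_i c_i * prod_j x_j^alpha_ij."""
    total = 0
    for alpha, c in p.items():
        term = c
        for x, a in zip(point, alpha):
            term *= x ** a
        total += term
    return total

# Example 1: p(x, y) = 10*x^5*y^3 + 2*x^2*y^7 with alpha_1 = (5, 3), alpha_2 = (2, 7).
p = {(5, 3): 10, (2, 7): 2}
print(degree(p))            # 9
print(evaluate(p, (1, 1)))  # 12
```

Because the dict keys are exactly the exponent vectors, the coefficients $c_i$ and the $\alpha_i$ determine the polynomial, mirroring the remark below.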
The degree of the monomial $x^{\alpha_i}$ is $\deg x^{\alpha_i} = \sum_j \alpha_{ij}$, and the degree of the polynomial $p(x) = \sum_i c_i x^{\alpha_i}$ is $\deg p = \max_i \sum_j \alpha_{ij}$. If all monomials in $p$ have the same degree $d$, we say that $p$ is a $d$-form. Note that the coefficients $c_i$ and the exponent vectors $\alpha_i$ completely determine the polynomial.

Example 1. The polynomial $p(x, y) = 10 x^5 y^3 + 2 x^2 y^7$ has exponent vectors $\alpha_1 = (5, 3)$ and $\alpha_2 = (2, 7)$. We have $\deg p = 9$.

The set (free algebra) of non-commutative polynomials in the variables $x = (x_1, \ldots, x_n)$ is denoted by $\mathbb{R}\langle x\rangle$ and consists of linear combinations of all finite words in the letters $x_1, \ldots, x_n$. We endow $\mathbb{R}\langle x\rangle$ with the involution $*$, which reverses the order of the letters in any word; it satisfies $(p + q)^* = p^* + q^*$ and $(pq)^* = q^* p^*$. The degree of a polynomial is the length of its longest word.

Example 2. Let $p = 1 + x^2 y + x y x^2 y$. We have $p^* = 1 + y x^2 + y x^2 y x$ and $\deg p = 5$.

Remark 1. In the non-commutative case we cannot express monomials simply by vectors of exponents: the monomials $x^2 y$ and $y x^2$, which in the commutative case are equal, with the same exponent vector $(2, 1)$, are no longer equal in the non-commutative case. Instead we keep working with the monomials themselves, i.e. with the words.

2. Semidefinite Programming

Semidefinite programming consists of problems where we seek the optimum of a linear function subject to linear constraints and the additional constraint that the variables form a positive semidefinite matrix. More precisely, given symmetric matrices $C, A_1, \ldots, A_m$ and a vector $b \in \mathbb{R}^m$, we formulate the semidefinite program in standard primal form (in the sequel we refer to problems of this type as PSDP) as follows:
$$(PSDP)\qquad \begin{array}{rl} \min & \langle C, X\rangle \\ \text{s.t.} & \langle A_i, X\rangle = b_i,\quad i = 1, \ldots, m, \\ & X \succeq 0. \end{array}$$
The conic dual problem to PSDP is the semidefinite program in standard dual form:
$$(DSDP)\qquad \begin{array}{rl} \max & b^T y \\ \text{s.t.} & \sum_i y_i A_i + Z = C, \\ & y \in \mathbb{R}^m,\quad Z \succeq 0. \end{array}$$
As we already mentioned, the importance of semidefinite programming was spurred by the development of efficient methods which can find an $\varepsilon$-optimal solution in time polynomial in $n$, $m$ and $\log(1/\varepsilon)$, where $n$ is the order of the matrix variables $X$ and $Z$. Several freeware packages exist which find such solutions also in practice. If the problem is of medium size (i.e. $n \le 1000$ and $m \le 10{,}000$), these packages are based on interior-point methods, while packages for larger semidefinite programs use some variant of first-order methods (see the web page [22] for a comprehensive list of state-of-the-art SDP solvers, and also [23]).

3. SOS Decompositions of Commutative Polynomials by SDP

The SOS decomposition appears naturally when we are interested in finding the global optimum of a given polynomial. Indeed, the problem
$$p_{opt} = \min\{p(x) : x \in \mathbb{R}^n\} \qquad (1)$$
is equivalent to
$$\max\{\lambda : p(x) - \lambda \ge 0 \text{ for all } x \in \mathbb{R}^n\}.$$
Both problems are in general very difficult (i.e. NP-hard), therefore we are forced to relax them in order to obtain tractable ones. One possibility is to use the SOS decomposition.

Definition 1. A polynomial $p(x)$ has a sum of squares decomposition if there exist polynomials $q_1, \ldots, q_k$ such that $p = \sum_i q_i^2$.

If the polynomial $p$ can be written as a sum of squares, then clearly $p \ge 0$, while the converse is in general not true. This leads to the following lower bound for problem (1):
$$p_{opt} \ge \max\{\lambda : p(x) - \lambda \text{ is SOS}\}.$$
The inequality above might be strict, as the following example shows.

Example 3. The polynomial
$$M(x_1, x_2, x_3) = x_1^4 x_2^2 + x_1^2 x_2^4 + x_3^6 - 3 x_1^2 x_2^2 x_3^2$$
is well known as the Motzkin form and is defined by four coefficients and four exponent vectors:
$$c_1 = 1,\ \alpha_1 = (4, 2, 0);\quad c_2 = 1,\ \alpha_2 = (2, 4, 0);\quad c_3 = 1,\ \alpha_3 = (0, 0, 6);\quad c_4 = -3,\ \alpha_4 = (2, 2, 2).$$
It has been shown (see [24,25]) that $M(x_1, x_2, x_3) \ge 0$ while $M$ has no SOS decomposition.
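As a quick sanity check of Example 3, the Motzkin form can be evaluated directly from its four (coefficient, exponent vector) pairs. This is a standalone sketch of our own, not the paper's software; random sampling merely illustrates non-negativity, it does not prove it (the actual proof uses the AM-GM inequality on the three positive terms).

```python
import random

# Motzkin form: the pairs (c_i, alpha_i) exactly as listed in Example 3.
MOTZKIN = [(1, (4, 2, 0)), (1, (2, 4, 0)), (1, (0, 0, 6)), (-3, (2, 2, 2))]

def evaluate(terms, point):
    """Evaluate sum_i c_i * prod_j x_j^alpha_ij at the given point."""
    total = 0.0
    for c, alpha in terms:
        term = float(c)
        for x, a in zip(point, alpha):
            term *= x ** a
        total += term
    return total

# M attains its global minimum 0 at (1, 1, 1): 1 + 1 + 1 - 3 = 0.
print(evaluate(MOTZKIN, (1.0, 1.0, 1.0)))  # 0.0

# Spot-check M >= 0 on random points (an illustration, not a proof).
random.seed(0)
assert all(
    evaluate(MOTZKIN, tuple(random.uniform(-2.0, 2.0) for _ in range(3))) >= -1e-9
    for _ in range(1000)
)
```

No sampling, of course, can detect the absence of an SOS decomposition; that is exactly what the semidefinite programs of this section decide.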
There exist only a few classes of polynomials for which a polynomial is non-negative if and only if it has an SOS decomposition. This is true for all polynomials in two variables, for all 2-forms and for 4-forms in 3 variables. Obviously the Motzkin form from above is not in any of these classes.

Testing whether a given polynomial has an SOS decomposition can be done efficiently by semidefinite programming, as the following theorem shows. For the proof see e.g. the proof of the non-commutative version [21, Prop. 2.1].

Theorem 1. A polynomial $p(x)$ has an SOS decomposition if and only if there exists a positive semidefinite matrix $Q$ such that
$$p(x) = U^T Q U,$$
where $U$ is the column vector containing all monomials of degree at most $\deg(p)/2$.

We demonstrate this theorem by the following example.

Example 4. Let us consider $p(x_1, x_2) = 2 x_1^4 + 2 x_1^3 x_2 - x_1^2 x_2^2 + 5 x_2^4$. Since $p$ is a 4-form, it follows from the above that $p$ has an SOS decomposition if and only if it is non-negative. Moreover, in the SOS decomposition only the monomials $x_1^2$, $x_2^2$, $x_1 x_2$ can appear. The polynomial $p$ therefore has an SOS decomposition if and only if there exists a positive semidefinite matrix $Q$ such that $p(x_1, x_2) = U^T Q U$ for $U = (x_1^2, x_2^2, x_1 x_2)^T$. For the matrix $Q$ we have linear constraints that its components must match the coefficients of $p$. The monomial $x_1^4$ appears in $p$ with coefficient 2, therefore $q_{1,1} = 2$, and similarly $q_{2,2} = 5$. The monomial $x_1^3 x_2$ has coefficient 2 in $p$ and can only be obtained as $x_1^2 \cdot x_1 x_2$, hence we also have the constraint $q_{1,3} + q_{3,1} = 2$, or equivalently $q_{1,3} = q_{3,1} = 1$. The monomial $x_1^2 x_2^2$ can be obtained as a product of $x_1^2$ and $x_2^2$ or as a square of $x_1 x_2$, therefore $q_{1,2} + q_{2,1} + q_{3,3} = -1$. The monomial $x_1 x_2^3$ does not appear in $p$, hence $q_{2,3} + q_{3,2} = 0$, and since $Q$ is symmetric, $q_{2,3} = q_{3,2} = 0$. Putting things together, we are looking for a positive semidefinite matrix $Q$ with components satisfying the constraints above. Note that the objective function is not important here, i.e. we have a semidefinite feasibility problem. We can solve it by hand: setting $q_{3,3} = 5$ yields $q_{1,2} = q_{2,1} = -3$. The matrix is now completely determined:
$$Q = \begin{pmatrix} 2 & -3 & 1 \\ -3 & 5 & 0 \\ 1 & 0 & 5 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 2 & 0 \\ -3 & 1 \\ 1 & 3 \end{pmatrix}\begin{pmatrix} 2 & -3 & 1 \\ 0 & 1 & 3 \end{pmatrix}.$$
We obtain the following SOS decomposition:
$$p(x_1, x_2) = \frac{1}{2}\left(2 x_1^2 - 3 x_2^2 + x_1 x_2\right)^2 + \frac{1}{2}\left(x_2^2 + 3 x_1 x_2\right)^2.$$

Remark 2. Even though we have an SDP feasibility problem, we usually use the objective function $\operatorname{trace}(Q)$, since it is widely accepted as an efficient heuristic for obtaining a feasible solution of low rank. This is important since the rank of $Q$ is exactly the number of factors in the SOS decomposition.

In general we must include in the vector $U$ all monomials of degree $d$ if the polynomial is a $2d$-form; there are $\binom{n+d-1}{d}$ such monomials. If the polynomial has degree $2d$ but is not a $2d$-form, the vector $U$ contains all possible monomials of degree at most $d$; there are $\binom{n+d}{d}$ of them. One can find examples where all these monomials are indeed needed. If $n = 4$ and $d = 10$, we obtain $\binom{n+d-1}{d} = 286$ and $\binom{n+d}{d} = 1001$, hence the resulting SDPs are already on the boundary of the set of instances solvable by interior-point methods.

Nevertheless, very often, especially if the polynomial is sparse (has only a few monomials), it is possible to considerably decrease the number of monomials in $U$. We can use a result, first formulated in [26], that characterizes the monomials that can appear in a sum of squares representation. Define the Newton polytope $N(p)$ of a given polynomial $p$ of degree $2d$ as the convex hull of the exponent vectors which appear in $p$. Then it can be shown that the only monomials $x^\alpha$ that can appear in a sum of squares representation are those for which $2\alpha$ lies in $N(p)$, or equivalently, $\alpha$ is an integer lattice point of $\frac{1}{2}N(p)$.

Example 5. For $p$ from Example 4 we have
$$N(p) = \operatorname{CONV}\{(4, 0), (3, 1), (2, 2), (0, 4)\}$$
and the integer points of $\frac{1}{2}N(p)$ are $(2, 0)$, $(1, 1)$, $(0, 2)$, hence this yields exactly the monomials $x_1^2$, $x_1 x_2$, $x_2^2$ which appear in the SOS decomposition.
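Theorem 1 and Example 4 can be checked mechanically: expanding $U^T Q U$, with commutative monomials stored as exponent tuples, must reproduce the coefficients of $p$. The sketch below is our own illustration (`gram_expand` is a hypothetical helper, not a library routine); exact `Fraction` arithmetic avoids any floating-point doubt in the comparison.

```python
from collections import defaultdict
from fractions import Fraction

# Example 4: U = (x1^2, x2^2, x1*x2)^T as exponent tuples, and the Gram
# matrix Q found in the text.
U = [(2, 0), (0, 2), (1, 1)]
Q = [[2, -3, 1],
     [-3, 5, 0],
     [1, 0, 5]]

def gram_expand(U, Q):
    """Return the coefficients of U^T Q U as {exponent tuple: coefficient}."""
    coeffs = defaultdict(Fraction)
    for i, u in enumerate(U):
        for j, v in enumerate(U):
            mono = tuple(a + b for a, b in zip(u, v))  # commutative product u*v
            coeffs[mono] += Q[i][j]
    return {m: c for m, c in coeffs.items() if c != 0}

# p(x1, x2) = 2*x1^4 + 2*x1^3*x2 - x1^2*x2^2 + 5*x2^4
target = {(4, 0): 2, (3, 1): 2, (2, 2): -1, (0, 4): 5}
print(gram_expand(U, Q) == target)  # True
```

In an actual solver pipeline the same coefficient-matching loop generates the linear constraints $\langle A_i, Q\rangle = b_i$ of the feasibility SDP; here we only use it to verify the hand-computed $Q$.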
The package SOSTOOLS and some other packages for SOS decompositions are essentially based on the Newton polytope algorithm.

4. SOHS Decompositions for Non-Commutative Polynomials

A non-commutative polynomial $p(x)$ has a sum-of-hermitian-squares (SOHS) decomposition if there exist polynomials $q_1, \ldots, q_k$ such that $p = \sum_i q_i^* q_i$. One can prove a result similar to Theorem 1: $p(x)$ of degree $2d$ has an SOHS decomposition if and only if
$$p = U^* Q U \qquad (2)$$
for some $Q \succeq 0$, where $U$ is the vector of all monomials of degree at most $d$. If we consider all possible monomials of degree at most $d$, the vector $U$ has $\frac{n^{d+1} - 1}{n - 1}$ components. This is much larger than in the commutative case. Nevertheless, we also have many criteria which tell us when some of the monomials can be left out of $U$. We demonstrate the procedure for finding an SOHS decomposition by the following example.

Example 6. Let
$$p(x_1, x_2) = 1 + 3 x_1^2 + x_1 x_2 + x_2 x_1 + 6 x_1 x_2 x_1 + x_2 x_1^2 x_2 + 3 x_1 x_2^2 x_1.$$
Since $p$ has degree 4, there are $2^3 - 1 = 7$ monomials which might appear in the vector $U$ from the SOHS decomposition:
$$U = (1, x_1, x_2, x_1^2, x_2^2, x_1 x_2, x_2 x_1)^T.$$
Due to the structure of $p$ we can cross out some components of $U$: for any monomial from $U$ of (maximum) degree 2, its hermitian square must appear in $p$. If this is not the case for some monomial, we can eliminate it from $U$. Therefore we can cross out $x_1^2$ and $x_2^2$ from $U$ to obtain
$$U = (1, x_1, x_2, x_1 x_2, x_2 x_1)^T.$$
We have
$$p(x_1, x_2) = U^* Q U \qquad (3)$$
if and only if $Q$ satisfies the equations
$$q_{1,1} = 1,\quad q_{2,2} = 3,\quad q_{2,3} + q_{3,2} + q_{1,4} + q_{4,1} + q_{1,5} + q_{5,1} = 2,\quad q_{2,5} + q_{5,2} = 6,\quad q_{5,5} = 3,\quad q_{4,4} = 1.$$
Besides these constraints we also get a number of homogeneous equations: since $x_2^2$ does not appear in $p$, $Q$ must satisfy $q_{3,3} = 0$. The monomials $x_1^2 x_2$ and $x_2 x_1^2$ are also missing from $p$, hence $q_{2,4} = q_{3,5} = q_{4,2} = q_{5,3} = 0$. We obtain 13 equations in total, hence we have to solve an SDP of order 5 with 13 linear constraints. Using the objective $\operatorname{trace}(Q)$ (see Remark 2), our SDP solver (SeDuMi [27]) returns the following optimal solution:
$$Q = \begin{pmatrix} 1 & 0 & 0 & 1 & 0 \\ 0 & 3 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 \\ 0 & 3 & 0 & 0 & 3 \end{pmatrix} = P^T P, \quad \text{where } P = \begin{pmatrix} 1 & 0 & 0 & 1 & 0 \\ 0 & \sqrt{3} & 0 & 0 & \sqrt{3} \end{pmatrix}.$$
Therefore we obtain
$$p(x_1, x_2) = (1 + x_1 x_2)^* (1 + x_1 x_2) + 3 (x_1 + x_2 x_1)^* (x_1 + x_2 x_1).$$
Note that $x_2$ does not appear in the SOHS decomposition, therefore we could have eliminated it from $U$ as well.

The previous example clearly shows the main steps of the general algorithm for obtaining an SOHS decomposition, presented in Figure 1.

INPUT: $p \in \mathbb{R}\langle x\rangle$
1) Construct the vector of possible monomials $U$.
2) Construct the linear equations on the matrix $Q$.
3) Solve the SDP in the variable $Q$.
4) IF the SDP is infeasible, then $p$ has no SOHS decomposition. RETURN.
5) ELSE compute $P$ such that $Q = P^T P$. Let $q_i$ be the $i$th component of $PU$.
OUTPUT: SOHS decomposition $p = \sum_i q_i^* q_i$.

Figure 1. Algorithm to obtain the SOHS decomposition.

In the non-commutative case the resulting SDP can in general be much larger, but we also have many more possibilities to exploit the fact that the polynomials are non-commutative in order to reduce the size of the SDP. In particular, we can use the following observations:
- If a monomial $m$ appears in the SOHS decomposition, and must therefore appear in $U$, then there must exist a symmetric monomial in $p$ which coincides with $m$ in the $|m|$ rightmost variables (letters).
- At least one of the monomials of highest and of lowest degree must be symmetric.
- $p$ must be symmetric, i.e. $p^* = p$.
- For every variable $x_i$, the monomial in which $x_i$ has the highest/lowest degree must be symmetric.

The first observation is very important: it leads to the so-called Newton chip method [21], where we obtain the vector $U$ simply by taking all possible right chips (tails) of all symmetric monomials in $p$ which have length between half of the minimum and half of the maximum degree in $p$; see also [21].

We have also written a Matlab-based software package NCSOStools, which is a non-commutative version of the SOSTOOLS package. It includes an implementation of the algorithm above and also implementations of the most important operations in $\mathbb{R}\langle x\rangle$.
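The SOHS certificate of Example 6 can be verified by elementary word arithmetic: NC monomials are words (here strings over 'x' for $x_1$ and 'y' for $x_2$), multiplication concatenates words, and the involution reverses them. This standalone sketch is our own illustration and is independent of NCSOStools.

```python
from collections import defaultdict

# NC polynomials as {word: coefficient}; '' is the empty word (the constant 1).

def star(p):
    """Involution *: reverse every word."""
    return {w[::-1]: c for w, c in p.items()}

def mul(p, q):
    """Product in the free algebra: words multiply by concatenation."""
    out = defaultdict(int)
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            out[w1 + w2] += c1 * c2
    return {w: c for w, c in out.items() if c != 0}

def add(p, q):
    out = defaultdict(int, p)
    for w, c in q.items():
        out[w] += c
    return {w: c for w, c in out.items() if c != 0}

def scale(s, p):
    return {w: s * c for w, c in p.items()}

# Example 6: p = q1*q1 + 3*q2*q2 with q1 = 1 + x1x2 and q2 = x1 + x2x1
# (the factor sqrt(3) in the second row of P is absorbed into the coefficient 3).
q1 = {'': 1, 'xy': 1}
q2 = {'x': 1, 'yx': 1}
sohs = add(mul(star(q1), q1), scale(3, mul(star(q2), q2)))

p = {'': 1, 'xx': 3, 'xy': 1, 'yx': 1, 'xyx': 6, 'yxxy': 1, 'xyyx': 3}
print(sohs == p)     # True
print(star(p) == p)  # True: p is symmetric, a necessary condition for SOHS
```

The same word representation is what the Newton chip method operates on: the right chips of a word are simply its suffixes, so the candidate vector $U$ can be collected with ordinary string slicing.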
It is freely available from http://ncsostools.fis.unm.si; see also the detailed description of the commands in [28]. Moreover, since all constraints in the resulting SDP are pairwise orthogonal, we see strong potential in the boundary point method [23], which performs very well on such SDP problems, especially for large-scale SDPs.

5. Conclusions

In this paper we demonstrated the power of mathematical programming in other, more pure areas of mathematics. Semidefinite programming has turned out to be a very strong tool for approximating optimal values of polynomials, since it gives SOS or SOHS decompositions of real polynomials when they exist. The next step of the research will be publishing the NCSOStools package, which will contain an implementation of the algorithm for finding SOHS decompositions and also implementations of the most important operations in $\mathbb{R}\langle x\rangle$.

A very challenging task for future research is a procedure to extract an exact rational solution of the SDP from the numerical solution $Q$ obtained by an SDP solver. This is important since in practice SDP solvers return only approximately feasible solutions, which are sufficient for most purposes. But if we want a correct proof of an SOS or SOHS decomposition, we need an exact rational solution. This is in general a very difficult (NP-hard) problem, and not much research has been done in this direction.

6. Acknowledgements

We gratefully acknowledge financial support by the Slovenian Research Agency (project No. 1000-08-210518).

7. References

[1] H. Wolkowicz, R. Saigal and L. Vandenberghe, "Handbook of Semidefinite Programming," Kluwer Academic Publishers, Boston, 2000.
[2] J. B. Lasserre, "Global Optimization with Polynomials and the Problem of Moments," SIAM Journal on Optimization, Vol. 11, No. 3, 2000, pp. 796-817. doi:10.1137/S1052623400366802
[3] J.-B. Lasserre, "A Sum of Squares Approximation of Nonnegative Polynomials," SIAM Journal on Optimization, Vol. 16, No. 3, 2006, pp. 751-765. doi:10.1137/04061413X
[4] D. Henrion and J.-B. Lasserre, "GloptiPoly: Global Optimization over Polynomials with Matlab and SeDuMi," ACM Transactions on Mathematical Software, Vol. 29, No. 2, 2003, pp. 165-194. doi:10.1145/779359.779363
[5] P. A. Parrilo, "Semidefinite Programming Relaxations for Semialgebraic Problems," Mathematical Programming, Vol. 96, No. 2, Series B, 2003, pp. 293-320.
[6] P. A. Parrilo and B. Sturmfels, "Minimizing Polynomial Functions," In: Algorithmic and Quantitative Real Algebraic Geometry (Piscataway, NJ, 2001), DIMACS Series in Discrete Mathematics and Theoretical Computer Science, Vol. 60, American Mathematical Society, Providence, RI, 2003, pp. 83-99.
[7] M. Schweighofer, "Optimization of Polynomials on Compact Semialgebraic Sets," SIAM Journal on Optimization, Vol. 15, No. 3, 2005, pp. 805-825. doi:10.1137/S1052623403431779
[8] M. Schweighofer, "Global Optimization of Polynomials Using Gradient Tentacles and Sums of Squares," SIAM Journal on Optimization, Vol. 17, No. 3, 2006, pp. 920-942. doi:10.1137/050647098
[9] M. Laurent, "Sums of Squares, Moment Matrices and Optimization over Polynomials," In: Emerging Applications of Algebraic Geometry, IMA Volumes in Mathematics and Its Applications, Vol. 149, Springer, New York, 2009, pp. 157-270.
[10] SOSTOOLS, 2011. http://www.cds.caltech.edu/sostools/
[11] S. Prajna, A. Papachristodoulou, P. Seiler and P. A. Parrilo, "SOSTOOLS and Its Control Applications," In: Positive Polynomials in Control, Lecture Notes in Control and Information Science, Vol. 312, Springer, Berlin, 2005, pp. 273-292.
[12] D. Henrion, J.-B. Lasserre and J. Löfberg, "GloptiPoly 3: Moments, Optimization and Semidefinite Programming," 2011. http://www.laas.fr/~henrion/software/gloptipoly3/
[13] YALMIP, 1 March 2009. http://control.ee.ethz.ch/~joloef/wiki/pmwiki.php
[14] J. Löfberg, "YALMIP: A Toolbox for Modeling and Optimization in MATLAB," Proceedings of the CACSD Conference, Taipei, 2004. http://control.ee.ethz.ch/~joloef/yalmip.php
[15] H. Waki, S. Kim, M. Kojima and M. Muramatsu, "Sums of Squares and Semidefinite Program Relaxations for Polynomial Optimization Problems with Structured Sparsity," SIAM Journal on Optimization, Vol. 17, No. 1, 2006, pp. 218-242. doi:10.1137/050623802
[16] J. W. Helton, ""Positive" Noncommutative Polynomials Are Sums of Squares," Annals of Mathematics, Vol. 156, No. 2, 2002, pp. 675-694. doi:10.2307/3597203
[17] S. McCullough and M. Putinar, "Noncommutative Sums of Squares," Pacific Journal of Mathematics, Vol. 218, No. 1, 2005, pp. 167-171. doi:10.2140/pjm.2005.218.167
[18] M. C. de Oliveira, J. W. Helton, S. McCullough and M. Putinar, "Engineering Systems and Free Semi-Algebraic Geometry," In: Emerging Applications of Algebraic Geometry, IMA Volumes in Mathematics and Its Applications, Vol. 149, Springer, 2008, pp. 17-62.
[19] I. Klep and M. Schweighofer, "Connes' Embedding Conjecture and Sums of Hermitian Squares," Advances in Mathematics, Vol. 217, No. 4, 2008, pp. 1816-1837. doi:10.1016/j.aim.2007.09.016
[20] I. Klep and M. Schweighofer, "Sums of Hermitian Squares and the BMV Conjecture," Journal of Statistical Physics, Vol. 133, No. 4, 2008, pp. 739-760. doi:10.1007/s10955-008-9632-x
[21] I. Klep and J. Povh, "Semidefinite Programming and Sums of Hermitian Squares of Noncommutative Polynomials," Journal of Pure and Applied Algebra, Vol. 214, No. 6, 2010, pp. 740-749. doi:10.1016/j.jpaa.2009.07.003
[22] H. Mittelmann, 2011. http://plato.asu.edu/sub/pns.html
[23] J. Povh, F. Rendl and A. Wiegele, "A Boundary Point Method to Solve Semidefinite Programs," Computing, Vol. 78, No. 3, 2006, pp. 277-286. doi:10.1007/s00607-006-0182-2
[24] P. Parrilo, "Structured Semidefinite Programs and Semialgebraic Geometry Methods in Robustness and Optimization," PhD Thesis, California Institute of Technology, Pasadena, 2000.
[25] B. Reznick, "Some Concrete Aspects of Hilbert's 17th Problem," In: Real Algebraic Geometry and Ordered Structures (Baton Rouge, LA, 1996), Contemporary Mathematics, Vol. 253, American Mathematical Society, Providence, RI, 2000, pp. 251-272.
[26] B. Reznick, "Extremal PSD Forms with Few Terms," Duke Mathematical Journal, Vol. 45, No. 2, 1978, pp. 363-374. doi:10.1215/S0012-7094-78-04519-2
[27] SeDuMi, 29 June 2009. http://sedumi.ie.lehigh.edu/
[28] K. Cafuta, I. Klep and J. Povh, "NCSOStools: A Computer Algebra System for Symbolic and Numerical Computation with Noncommutative Polynomials," 2011. http://ncsostools.fis.unm.si