Advances in Linear Algebra & Matrix Theory
Vol.08 No.01(2018), Article ID:82800,15 pages
10.4236/alamt.2018.81003

High Order Tensor Forms of Growth Curve Models

Zerong Lin1, Dongzhe Liu2, Xueying Liu3, Lingling He1, Changqing Xu1*

1School of Mathematics and Physics, Suzhou University of Science and Technology, Suzhou, China

2School of Energy and Environment, City University of Hong Kong, Hong Kong, China

3Zhejiang University of Water Resources and Electric Power, Hangzhou, China

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: November 20, 2017; Accepted: February 26, 2018; Published: March 1, 2018

ABSTRACT

In this paper, we first study the linear regression model y = X β + ε and obtain a norm-minimized estimator of the parameter vector β by using the g-inverse and the singular value decomposition of matrix X . We then investigate the growth curve model (GCM) and extend the GCM to a generalized GCM (GGCM) by using high order tensors. The parameter estimations in GGCMs are also achieved in this way.

Keywords:

Tensor, Generalized Linear Model, Growth Curve Model, Parameter Estimation, Generalized Inverse

1. Introduction

The linear regression model, also called the linear model (LM), is one of the most widely used models in statistics. There are many kinds of linear models, including simple linear models, general linear models, generalized linear models, mixed effects linear models and other extended forms of linear models [1] [2] [3] [4] [5] . The growth curve model (GCM) is a special kind of general linear model which has applications in many areas such as psychological data analysis [6] . GCMs can be used to handle longitudinal data, missing data, or even the hierarchical multilevel mixed case [2] [3] [5] [7] - [12] . There are also some very useful variations of GCMs such as the latent GCMs. The traditional treatment of GCMs, when estimating the parameters in the mixed effects case for a single response factor, is to stack all the dependent observations vertically into one very long column vector, usually denoted by y, while all the design matrices (both the fixed effect design matrix and the random effect design matrix) and the random errors are concatenated accordingly to fit the size of y. This treatment makes the implementation of the related programming much slower due to the enormous dimensions of the data (matrices and vectors). Things get even worse if we encounter a huge dataset (big data), such as genomic, web-related, image gallery, or social network data.

In this paper, we first use the generalized inverse of matrices and the singular value decomposition to obtain the norm-minimized estimation of the parameters in the linear model. Then we introduce some basic knowledge about tensors before we employ tensors to express and extend the multivariate mixed effects linear models. The extended tensor form of the model can also be regarded as a generalization of the GCM.

Let us first begin with some basic linear regression models. Let y be a response variable and x_1, …, x_r be independent variables for explaining y. The most general regression model between y and x_1, …, x_r is of the form

y = f(x_1, …, x_r) + ε (1.1)

where ε is the error term and f is an unknown regression function. In the linear regression model, f is assumed to be a linear function, i.e.,

y = β_0 + β_1 x_1 + β_2 x_2 + ⋯ + β_r x_r + ε (1.2)

where all β_i are unknown parameters. Denote x = (x_1, …, x_r), which is called a random vector, and let P = (y, x), an (r + 1)-dimensional random vector, which is called an observable vector. Suppose we are given N observations of P, say P_i = (y_i, x_{i1}, x_{i2}, …, x_{ir}), i = 1, 2, …, N. Here y_i stands for the ith observation of the response variable y, and x_{i1}, x_{i2}, …, x_{ir} are the corresponding explanatory observations. The sample version of model (1.2) then becomes

y_i = β_0 + β_1 x_{i1} + β_2 x_{i2} + ⋯ + β_r x_{ir} + ε_i (1.3)

or equivalently

y = X β + ε (1.4)

where y = (y_1, y_2, …, y_N)^T ∈ ℝ^N (here and throughout the paper T stands for the transpose of a matrix/vector) is the sample vector of the response variable y, X = (x_{ij}) ∈ ℝ^{N×(r+1)} is the data matrix or design matrix, each of whose rows corresponds to an observation of x (together with a leading 1 for the intercept), β = (β_0, β_1, …, β_r)^T ∈ ℝ^{r+1} is the regression coefficient vector, which is to be estimated, and ε = (ε_1, ε_2, …, ε_N)^T is the random error vector. A general linear regression model is an LM (1.4) whose error terms ε_i satisfy:

1) Zero mean: E[ε_i] = 0 for all i ∈ [N], i.e., the expected value of the error term is zero for every observation.

2) Homoskedasticity: Var[ε_i] = σ² for all i ∈ [N], i.e., all the error terms have the same variance.

3) Uncorrelatedness: Cov(ε_i, ε_j) = 0 for all distinct i, j, i.e., distinct error terms are uncorrelated.

Conditions 1)-3) are called the Gauss-Markov assumption [13] , and the model (1.4) under the Gauss-Markov assumption is called the Gauss-Markov model. Note that the variance σ² reflects the uncertainty of the model; the zero mean, homoskedasticity and uncorrelatedness of the sample errors together form the Gauss-Markov assumption. An alternative form of the Gauss-Markov model is

E[y] = Xβ, Cov(ε) = σ² I_N (1.5)

where I_N is the N × N identity matrix and σ² > 0. In order to investigate the general linear model and extend its properties, we recall some known results concerning linear combinations of random variables. Suppose a ∈ ℝ^N is a constant vector with the same length as that of y, the random vector under investigation.

Let A ∈ ℝ^{m×n}. A g-inverse of A, denoted A^g, is a generalized inverse defined as an n × m matrix satisfying A A^g A = A [4] . An equivalent definition of the g-inverse is that x = A^g b is always a solution to the equation A x = b whenever b ∈ C(A), the column space of A. A well-known result is that all the solutions to A x = b (when the equation is compatible) are of the form

x = A^g b + (I − A^g A) ω, ω ∈ ℝ^n. (1.6)

It is easy to verify that A^g = A^{−1} (and is unique) when A is invertible. The g-inverse of a matrix (usually not unique) can be calculated by using the singular value decomposition (SVD).

Lemma 1.1. Let X ∈ ℝ^{N×p} have an SVD X = U D V^T, where U ∈ ℝ^{N×N} and V ∈ ℝ^{p×p} are orthogonal matrices and D ∈ ℝ^{N×p} is of the form D = diag(σ_1, σ_2, …, σ_r, 0, …, 0) with r = rank(X) ≤ min(N, p) and σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_r > 0. Then

X^g = V [D_r^{−1} ∗; ∗ ∗] U^T ∈ ℝ^{p×N} (1.7)

where ∗ denotes an arbitrary matrix of suitable size and D_r = diag(σ_1, σ_2, …, σ_r).
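As an illustration of Lemma 1.1 (our sketch, not part of the original derivation), the following NumPy code builds one particular g-inverse by setting the starred blocks of Equation (1.7) to zero, which gives the Moore-Penrose pseudoinverse; any other choice of those blocks yields another valid g-inverse.

    import numpy as np

    def g_inverse(X, tol=1e-12):
        """One g-inverse of X built from its SVD as in Lemma 1.1; the starred
        blocks of Equation (1.7) are taken to be zero (Moore-Penrose choice)."""
        U, s, Vt = np.linalg.svd(X, full_matrices=True)   # X = U D V^T
        r = int(np.sum(s > tol * s.max()))                # numerical rank
        D_inv = np.zeros((X.shape[1], X.shape[0]))        # p x N block matrix
        D_inv[:r, :r] = np.diag(1.0 / s[:r])              # the D_r^{-1} block
        return Vt.T @ D_inv @ U.T

    # quick check of the defining property X Xg X = X
    X = np.array([[1., 2.], [2., 4.], [0., 1.]])
    Xg = g_inverse(X)
    print(np.allclose(X @ Xg @ X, X))                     # True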

The Gauss-Markov Theorem (e.g. Page 51 of [13] ) is stated as:

Lemma 1.2. Suppose that model (1.4) satisfies the Gauss-Markov assumption and let a be a constant vector such that a^T β is estimable. Then a^T β̂ is the best (minimum variance) linear unbiased estimator (BLUE) of a^T β, where β̂ = (X^T X)^g X^T y.

Based on Lemma 1.2, we get

Proposition 1.3. Suppose that rank(X) = r < min(N, p) in Equation (1.4) and that X satisfies the conditions of Lemma 1.2. Then the estimator of β with minimal 2-norm is of the form

β̂ = V [D_r^{−1} ỹ_1; 0] (1.8)

where ỹ = U^T y = [ỹ_1^T, ỹ_2^T]^T with ỹ_1 ∈ ℝ^r and ỹ_2 ∈ ℝ^{N−r}.
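The following short numerical sketch (ours, with simulated rank-deficient data) illustrates Proposition 1.3: the estimator in Equation (1.8) coincides with the minimum-norm least squares solution that NumPy's lstsq returns.

    import numpy as np

    rng = np.random.default_rng(0)
    N, p, r = 8, 5, 3
    X = rng.standard_normal((N, r)) @ rng.standard_normal((r, p))  # rank(X) = r < min(N, p)
    y = rng.standard_normal(N)

    U, s, Vt = np.linalg.svd(X)
    y_tilde_1 = (U.T @ y)[:r]                        # first r entries of U^T y
    beta_svd = Vt.T[:, :r] @ (y_tilde_1 / s[:r])     # V [D_r^{-1} y~_1; 0], Equation (1.8)

    beta_lstsq = np.linalg.lstsq(X, y, rcond=None)[0]   # minimum-norm least squares
    print(np.allclose(beta_svd, beta_lstsq))            # True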

Proposition 1.3 tells us that, by taking the block upper triangular form in the decomposition

(X^T X)^g = V [G_11 G_12; 0 G_22] V^T,

we can reach a norm-minimized estimator of β. Now denote H := (X^T X)^g X^T. By the Gauss-Markov Theorem we have

||β̂||² = ⟨H y, H y⟩ = y^T H^T H y = y^T X [(X^T X)^g]^T (X^T X)^g X^T y,

and the norm-minimized estimator can accordingly be written as β̂ = X^T (X X^T)^g y.

The generalized linear model (GLM) is a generalization of the LM [1] . In a GLM some basic assumptions of the linear regression model are relaxed; moreover, the fitted values of the response variable are no longer expressed directly as a linear combination of the parameters, but through a function usually called the link function. A GLM consists of independent random components y_i whose distributions belong to the exponential family, the systematic component (the linear predictor η_i = X_i^T β), and a strictly monotone differentiable link function g relating the mean μ_i = E[y_i] to the linear predictor through g(μ_i) = η_i. The parameters in a GLM include the regression parameters β and the dispersion parameter in the covariance structure, and both can be estimated by the maximum likelihood method. The estimation of the regression parameters for model (1.4) can be expressed as the iteration

β^{(m)} = (X^T W^{(m−1)} X)^g X^T W^{(m−1)} z^{(m−1)}, W = diag(W_1, W_2, …, W_N)

where W_i = w_i / {φ v(μ_i) [g′(μ_i)]²} with w_i a known prior weight, φ the dispersion parameter, v(⋅) the variance function, g the link function, and z ∈ ℝ^N the working dependent variable with z_i = η_i + (y_i − μ_i) g′(μ_i). The moment estimator of the dispersion parameter is

φ̂ = (1/(N − k − 1)) Σ_{i=1}^N w_i (y_i − μ̂_i)² / v(μ̂_i). (1.9)
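To make the iteration above concrete, here is a small sketch (ours, not from the paper) of the IRLS update for a Poisson GLM with log link, where v(μ) = μ and g′(μ) = 1/μ, so that W_i = w_i μ_i and z_i = η_i + (y_i − μ_i)/μ_i; a pseudoinverse stands in for the g-inverse when X^T W X is singular.

    import numpy as np

    def irls_poisson(X, y, w=None, n_iter=25):
        """Iteratively reweighted least squares for a Poisson GLM with log link.
        Illustrative sketch only; the prior weights w default to 1."""
        N, p = X.shape
        w = np.ones(N) if w is None else w
        beta = np.zeros(p)
        for _ in range(n_iter):
            eta = X @ beta                    # linear predictor
            mu = np.exp(eta)                  # inverse link
            W = w * mu                        # W_i = w_i / {v(mu_i) [g'(mu_i)]^2}
            z = eta + (y - mu) / mu           # working dependent variable
            XtW = X.T * W                     # X^T diag(W)
            beta = np.linalg.pinv(XtW @ X) @ (XtW @ z)   # g-inverse based update
        return beta

    rng = np.random.default_rng(1)
    X = np.column_stack([np.ones(200), rng.standard_normal(200)])
    y = rng.poisson(np.exp(0.5 + 0.8 * X[:, 1]))
    print(irls_poisson(X, y))                 # roughly [0.5, 0.8]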

In order to extend GLMs to a more general case, we need some knowledge of tensors. In the next section, we introduce some basic terminology and operations on tensors, especially on low order tensors.

2. The 3-Order Tensors and Their Applications in GLMs

A tensor is an extension of a matrix to the higher order case and is an important tool for studying high-dimensional arrays. The origin of tensors can be traced back to the early nineteenth century, when Cayley studied linear transformation theory and invariant representations. Gauss and Riemann, among others, promoted the development of tensors in mathematics. In 1915 Albert Einstein used tensors to describe his general relativity, which made tensor calculus much more widely accepted. In the early twentieth century, Ricci and Levi-Civita further developed tensor analysis through the absolute differential calculus and explored its applications [14] .

For our convenience we denote [n] := {1, 2, …, n} and use S(m, n) to denote the index set

S(m, n) := {σ = (i_1, i_2, …, i_m) : i_k ∈ [n], k ∈ [m]}.

Let I_k (k ∈ [m]) be any positive integer (usually larger than 1); sometimes we abuse notation and regard I_k as the set [I_k]. Denote I := I_1 × I_2 × ⋯ × I_m; when each I_k stands for an index set, I is the Cartesian product of I_1, I_2, …, I_m. An m-order tensor A = (A_σ) of size I is an m-way array whose entries are denoted by A_σ := A_{i_1 i_2 ⋯ i_m} with σ = (i_1, i_2, …, i_m) ∈ I. Note that a vector is a 1-order tensor and an m × n matrix is a 2-order or second order tensor. A tensor with I_1 = I_2 = ⋯ = I_m = n is called an mth order n-dimensional tensor, and we denote by T_{m,n} the set of all mth order n-dimensional real tensors. A tensor A ∈ T_{m,n} is called symmetric if A_σ is invariant under any permutation of its index σ.

An mth order n-dimensional real tensor A is always associated with an m-order homogeneous polynomial f_A(x), defined by

f_A(x) := A x^m = Σ_{i_1, i_2, …, i_m} A_{i_1 i_2 ⋯ i_m} x_{i_1} x_{i_2} ⋯ x_{i_m}. (2.10)

A is called positive definite (pd) if f_A(x) > 0 for all nonzero x ∈ ℝ^n, and positive semidefinite (psd) if

f_A(x) := A x^m ≥ 0 for all x ∈ ℝ^n. (2.11)

A nonzero psd tensor must be of even order. Given an r-order tensor A ∈ ℝ^{n_1 × n_2 × ⋯ × n_r} and a matrix U = (u_{ij}) ∈ ℝ^{n_k × J_k} for some k ∈ [r], the mode-k product of A with U is the r-order tensor A ×_k U defined by

(A ×_k U)_{i_1, …, i_{k−1}, j, i_{k+1}, …, i_r} = Σ_{i_k = 1}^{n_k} A_{i_1, …, i_{k−1}, i_k, i_{k+1}, …, i_r} u_{i_k j}, j ∈ [J_k]. (2.12)

Note that A ×_k U is compressed into an (r − 1)-order tensor when U ∈ ℝ^{n_k} is a column vector (J_k = 1). There are two basic kinds of tensor decompositions: the rank-1 decomposition, also called the CP decomposition, and the Tucker decomposition, or HOSVD. The former generalizes the matrix rank-1 decomposition and the latter generalizes the matrix singular value decomposition to the higher order case. A zero tensor is a tensor with all entries equal to zero. A diagonal tensor is a tensor whose off-diagonal elements are all zero, i.e., A_{i_1 i_2 ⋯ i_m} = 0 whenever i_1, i_2, …, i_m are not all identical; thus an mth order n-dimensional tensor has n diagonal elements. In this way we can define, analogously to the matrix case, the identity tensor and a scalar tensor.
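The sketch below (ours, not from the paper) implements the mode-k product exactly as in Equation (2.12), contracting the kth index of the tensor with the first index of U; note that this convention transposes the factor relative to the one used by Kolda and Bader [18] .

    import numpy as np

    def mode_product(A, U, k):
        """Mode-k product of Equation (2.12): contract the k-th index of A with
        the first index of U, i.e.
        (A x_k U)[i_1,...,j,...,i_r] = sum_{i_k} A[i_1,...,i_k,...,i_r] * U[i_k, j]."""
        B = np.tensordot(A, U, axes=([k], [0]))   # contracted mode moves to the last axis
        return np.moveaxis(B, -1, k)              # move it back to position k

    # sanity check on a matrix: with this convention, B x_1 U = U^T B for U in R^{I_1 x J_1}
    rng = np.random.default_rng(2)
    B = rng.standard_normal((4, 3))
    U = rng.standard_normal((4, 5))
    print(np.allclose(mode_product(B, U, 0), U.T @ B))    # True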

For any i ∈ [n], the i-slice of an mth order n-dimensional tensor A = (A_{i_1 i_2 ⋯ i_m}) along mode k, for any given k ∈ [m], is the (m − 1)th order n-dimensional tensor B obtained by fixing the kth index at i, i.e.,

B_{i_1, …, i_{k−1}, i_{k+1}, …, i_m} = A_{i_1, …, i_{k−1}, i, i_{k+1}, …, i_m}.

A slice of a 3-order tensor A = (A_{ijk}) ∈ ℝ^{m×n×p} along mode-3 is an m × n matrix A(:, :, k) with k ∈ [p], and a slice of a 4-order tensor is a 3-order tensor.

Let A ∈ ℝ^{m×n_1×p} and B ∈ ℝ^{n_1×n×p} be two 3-order tensors. The slice-wise product of A and B, denoted by A ∗ B, is defined as the 3-order tensor C ∈ ℝ^{m×n×p} with C(:, :, k) = A(:, :, k) B(:, :, k) for all k ∈ [p]. This multiplication can be used to build a regression model

A = X ∗ B + E (2.13)

where A(:, :, k) is the matrix consisting of the n sample points of size m in class k and X(:, :, k) is the design matrix corresponding to the kth class (there are n_1 observations in each class in this situation).
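A sketch (ours) of the slice-wise product and of fitting the model (2.13) slice by slice with ordinary least squares; the sizes and data below are made up for illustration.

    import numpy as np

    def slicewise_product(X, B):
        """C(:,:,k) = X(:,:,k) @ B(:,:,k) for every frontal slice k."""
        return np.einsum('ijk,jlk->ilk', X, B)

    rng = np.random.default_rng(3)
    m, n1, n, p = 6, 3, 4, 5
    X = rng.standard_normal((m, n1, p))           # design slices X(:,:,k)
    B_true = rng.standard_normal((n1, n, p))      # parameter slices B(:,:,k)
    A = slicewise_product(X, B_true) + 0.01 * rng.standard_normal((m, n, p))

    # estimate each B(:,:,k) by least squares on its own slice
    B_hat = np.stack([np.linalg.lstsq(X[:, :, k], A[:, :, k], rcond=None)[0]
                      for k in range(p)], axis=2)
    print(np.abs(B_hat - B_true).max())           # small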

Let k be a positive integer. The kth moment of a random variable x is defined as the expectation of x^k, i.e., m_x^{(k)} = E(x^k). The traditional extension of moments to the multivariate case is carried out by an iterative vectorization of the powers of x. This technique is employed not only in the definition of moments but also in other definitions such as that of the characteristic function. By introducing tensor forms into these definitions, we find that the expressions become much easier to handle than the classical ones. In what follows we introduce the tensor form of these definitions.

Let x = (x_1, x_2, …, x_n)^T be a random vector. Denote by x^m the (symmetric) rank-one m-order tensor with

(x^m)_σ = x_{i_1} x_{i_2} ⋯ x_{i_m}, σ := (i_1, i_2, …, i_m) ∈ S(m, n).

x^m is called the rank-1 tensor generated by x, and it is symmetric. It is shown by Comon et al. [15] that a real tensor A (of size I_1 × I_2 × ⋯ × I_m) can always be decomposed in the form

A = Σ_{j=1}^r α_1^{(j)} × α_2^{(j)} × ⋯ × α_m^{(j)} (2.14)

where × denotes the outer (tensor) product of vectors and α_i^{(j)} ∈ ℝ^{I_i} for all j ∈ [r], i ∈ [m]. The smallest such positive integer r is called the rank of A, denoted rank(A). We note that Equation (2.14) can also be used to define the tensor product of two matrices, which will be used in our future work on the covariance of random matrices. Note that the tensor product of two rank-one matrices is

(α_1 × β_1) × (α_2 × β_2) = α_1 × α_2 × β_1 × β_2.

Now consider two matrices A ∈ ℝ^{m×n} and B ∈ ℝ^{p×q}, and write A, B in rank-1 decompositions, i.e.,

A = Σ_{j=1}^{R_1} α_1^{(j)} × β_1^{(j)}, B = Σ_{k=1}^{R_2} α_2^{(k)} × β_2^{(k)}.

The Tucker decomposition decomposes the original tensor into a product of a core tensor and a number of unitary matrices along the different modes [15] , so A can be decomposed as

A = S ×_1 U_1 ×_2 U_2 ×_3 ⋯ ×_N U_N (2.15)

where S is the core tensor and U_1, U_2, …, U_N are unitary matrices.

Example 2.1. Let X ∈ ℝ^{2×2×2} be the tensor defined by

X(:, :, 1) = [1 3; 2 4], X(:, :, 2) = [5 7; 6 8].

Then the unfolded matrices along mode-1, mode-2 and mode-3 are respectively

X_1 = [1 5 3 7; 2 6 4 8], X_2 = [1 3 5 7; 2 4 6 8], X_3 = [1 2 5 6; 3 4 7 8].
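Unfolding conventions vary between references, so the column ordering of these matrices depends on how the remaining indices are cycled; the sketch below (ours) shows one common convention in NumPy, which reproduces X_1 above, while the other two unfoldings may appear with permuted columns under a different ordering.

    import numpy as np

    X = np.zeros((2, 2, 2))
    X[:, :, 0] = [[1, 3], [2, 4]]
    X[:, :, 1] = [[5, 7], [6, 8]]

    def unfold(T, mode):
        """Mode-n unfolding: rows are indexed by the chosen mode; the columns
        run over the remaining indices in their original order."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    for mode in range(3):
        print(unfold(X, mode))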

3. Application of 3-Order Tensors in GLMs

The growth curve model (GCM) is one of the GLMs, introduced by Wishart in 1938 [16] to study the growth of animals and plants across different groups. It is a kind of generalized multivariate analysis-of-variance model and has been widely used in modern medicine, agriculture, biology, etc. GCM originally referred to a wide array of statistical models for repeated measures data [2] [14] . The contemporary use of the GCM allows the estimation of inter-object variability in intra-object patterns of change over time, such as time trends, time paths, growth curves or latent trajectories [17] . The trajectories are the primary focus of the analysis in most cases, whereas in others they may represent just one part of a much broader longitudinal model. The most basic GCMs contain fixed and random effects that best capture the collection of individual trajectories over time. In a GCM, the fixed effects represent the mean of the trajectories pooled over all individuals, and the random effects represent the variance of the individual trajectories around these group means. For example, the fixed effects in a linear trajectory are estimates of the mean intercept and mean slope that define the underlying trajectory pooled over the entire sample, and the random effects are estimates of the between-person variability of the individual intercepts and slopes. Smaller random effects imply that the parameters defining the trajectory are more similar across the sample of individuals; in the extreme situation where the random effects equal 0, all individuals are governed by precisely the same trajectory parameters (i.e., there is a single trajectory shared by all individuals). In contrast, larger random effects imply greater individual differences in the magnitude of the trajectory parameters around the mean values.

The analysis of a GCM focuses on the functional relationship among ordered responses. Conventional GCM methods apply to growth data and to other analogs such as dose-response data (indexed by dose), location-response data (indexed by distance), or response-surface data (indexed by two or more variables such as latitude and longitude). The GCM methods mainly focus on longitudinal observations on a one-dimensional characteristic even though they may also be used in multidimensional cases [2] .

A general GCM can be written as

Y = X B T + E (3.16)

where Y ∈ ℝ^{N×p} is the random response matrix whose rows are mutually independent and whose columns correspond to the response variables ordered according to d = [d_1, d_2, …, d_p]^T; X ∈ ℝ^{N×q} is the fixed design matrix with r := rank(X) ≤ q ≤ N; B ∈ ℝ^{q×m} is a fixed parameter matrix whose entries are the regression coefficients; T ∈ ℝ^{m×p} is a within-subject design matrix each of whose entries is a fixed function of d; and E ∈ ℝ^{N×p} is a random error matrix with matrix normal distribution E ~ N_{N,p}(0, Σ, I_N), where Σ ∈ ℝ^{p×p} is an unknown symmetric positive definite matrix. Suppose the samplings corresponding to each object are recorded at p different times (moments) d_1, d_2, …, d_p. Consider the example of a pattern of children's weights. Plotting the weights against the ages indicates a temporal pattern of growth. A univariate linear model for weight given age could be fitted with a design matrix T expressing the central tendency of the children's weights as a linear or curvilinear function of age; here T is an example of a within-subject design matrix. If N > 1, a separate curve could be fitted for each subject to obtain a separate matrix of regression parameter estimators for each independent sampling unit, B̂_i = Y_i T^T (T T^T)^{−1} for i ∈ [N], and a simple average of the N fitted curves is a proper (if not efficient) estimator of the population growth curve, that is,

B̂ = (1/N) Σ_{j=1}^N B̂_j.

The efficient estimator has the form

B̂ = [(X^T X)^{−1} X^T] Y [T^T (T T^T)^{−1}]. (3.17)
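A numerical sketch (ours, with simulated data) of the estimator in Equation (3.17), i.e. ordinary least squares applied across subjects and then across the within-subject design.

    import numpy as np

    rng = np.random.default_rng(4)
    N, q, m, p = 40, 2, 2, 4                      # subjects, groups, curve terms, time points
    d = np.array([2., 3., 4., 6.])                # ordering variable (e.g. age)
    T = np.vstack([np.ones(p), d])                # within-subject design, m x p
    X = np.kron(np.eye(q), np.ones((N // q, 1)))  # balanced between-subject design, N x q
    B_true = np.array([[80., 6.], [82., 6.5]])    # q x m parameter matrix
    Y = X @ B_true @ T + rng.standard_normal((N, p))

    B_hat = np.linalg.inv(X.T @ X) @ X.T @ Y @ T.T @ np.linalg.inv(T @ T.T)
    print(B_hat)                                   # close to B_true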

If the subjects are grouped in a balanced way, i.e., the N observations are clustered into m groups each containing the same number, say n, of observations, the design matrix X takes a block form built from all-ones vectors (as in the examples below); in particular, when there is only a single group, X = l_N, the all-ones vector, is the appropriate choice for computing B̂. The choice of T defines the functional form of the population growth curve by describing a functional relationship between weight and age.

Example 3.1. We record the heights of n boys and n girls at ages 2, 3, 4 and 6 years. From the observations we make the assumption that the average height increases linearly with age. The observed data are partitioned into two groups (one for the heights of the n boys and the other for the heights of the n girls), each consisting of n objects, and p = 4 with age vector d = [2, 3, 4, 6]^T. Thus the model for height versus age is Y = X B T + E where

X = [l_n 0; 0 l_n], T = [l_p^T; d^T]

and l_k ∈ ℝ^k is the all-ones vector of dimension k.

Here β_11, β_12 are respectively the intercept and the slope for girls, and β_21, β_22 are respectively the intercept and the slope for boys. We find that it is not so easy in this formulation to investigate the relationship between gender, height, weight and age. In the following we employ a tensor expression to deal with this issue.

Using the notation of tensor theory, we can rewrite model (3.16) in the form

Y = B ×_2 T ×_1 X^T + E

or equivalently

Y = B ×_1 X^T ×_2 T + E (3.18)

where B is regarded as a second order tensor and X, T as two matrices. Actually, according to Equation (2.12), we have

(B ×_1 X^T)_{ij} = Σ_{k=1}^q X_{ik} B_{kj}.

Similarly we can define B ×_2 V. Note that

B ×_1 U ×_2 V = B ×_2 V ×_1 U.
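The sketch below (ours) checks numerically that, with the mode product of Equation (2.12), the tensor form B ×_1 X^T ×_2 T of Equation (3.18) reproduces the matrix product X B T of Equation (3.16), and that the two mode products commute.

    import numpy as np

    def mode_product(A, U, k):
        """Mode-k product of Equation (2.12)."""
        return np.moveaxis(np.tensordot(A, U, axes=([k], [0])), -1, k)

    rng = np.random.default_rng(5)
    N, q, m, p = 7, 3, 2, 4
    X = rng.standard_normal((N, q))
    B = rng.standard_normal((q, m))
    T = rng.standard_normal((m, p))

    lhs = mode_product(mode_product(B, X.T, 0), T, 1)     # B x_1 X^T x_2 T
    rhs = X @ B @ T                                       # classical GCM mean, Equation (3.16)
    swapped = mode_product(mode_product(B, T, 1), X.T, 0) # apply the modes in the other order
    print(np.allclose(lhs, rhs), np.allclose(lhs, swapped))   # True True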

Now we extend model (3.16) to the more general form

A = B ×_1 X_1 ×_2 X_2 ×_3 X_3 + E (3.19)

where A ∈ ℝ^{n_1×n_2×n_3}, B = (B_{ijk}) ∈ ℝ^{m_1×m_2×m_3} is a 3-order tensor, usually an unknown constant parameter tensor or kernel (core) tensor, and X_i ∈ ℝ^{m_i×n_i} for i = 1, 2, 3. Here the tensor-matrix multiplication is defined by Equation (2.12) according to the dimensional coherence along each mode.

The potential applications of Equation (3.19) are obvious. The HOSVD (higher order singular value decomposition) of a 3-order tensor can be regarded as a good example of this model.

Example 3.2. Consider a sequence of 1000 images extracted from a repository of face images of ten individuals, each with 100 face images. Suppose each face image is of size 256 × 256. Then these images can be stored in a 256 × 256 × 1000 tensor A. Let A be decomposed as

A = B ×_1 U_1^T ×_2 U_2^T ×_3 U_3^T (3.20)

where

B ∈ ℝ^{16×16×50}, U_1 ∈ ℝ^{256×16}, U_2 ∈ ℝ^{256×16}, U_3 ∈ ℝ^{1000×50}.

The decomposition (3.20) yields a set of compressed images, each of size 16 × 16. If each individual can be characterized by five images (this is called a balanced compression), then the kernel tensor B consists of 50 compressed images, where each U_i is a projection matrix along mode-i (i = 1, 2, 3). Specifically, U_1 and U_2 together compress each image into a 16 × 16 image, while U_3 selects the representative elements (here the 50 images) from the large set of 1000 face images.
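A hedged sketch (ours, using a random tensor in place of real face images and much smaller sizes) of the compression in Equation (3.20): the factor matrices are taken as truncated left singular vectors of the mode unfoldings, and the core tensor plays the role of the compressed images.

    import numpy as np

    def mode_product(A, U, k):
        """Mode-k product of Equation (2.12)."""
        return np.moveaxis(np.tensordot(A, U, axes=([k], [0])), -1, k)

    def unfold(T, mode):
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    rng = np.random.default_rng(6)
    A = rng.standard_normal((32, 32, 50))          # stand-in for the 256 x 256 x 1000 tensor
    ranks = (8, 8, 10)                             # stand-in for (16, 16, 50)

    # factors: leading left singular vectors of each mode unfolding
    U = [np.linalg.svd(unfold(A, n), full_matrices=False)[0][:, :ranks[n]] for n in range(3)]

    # core tensor B = A x_1 U_1 x_2 U_2 x_3 U_3 under the product of Equation (2.12)
    B = A
    for n in range(3):
        B = mode_product(B, U[n], n)

    # reconstruction A_hat = B x_1 U_1^T x_2 U_2^T x_3 U_3^T, as in Equation (3.20)
    A_hat = B
    for n in range(3):
        A_hat = mode_product(A_hat, U[n].T, n)
    print(B.shape, np.linalg.norm(A - A_hat) / np.linalg.norm(A))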

Analogously to the GCM, let Y_{ijk} be the measured value of index I_k in class C_i at time T_j. A tensor Y = (Y_{ijk}) ∈ ℝ^{m×n×p} can then be used to express m objects, say P_1, …, P_m, each having p indexes I_1, …, I_p measured respectively at times t_1, …, t_n. For each index I_k, k ∈ [p], we have the GCM form

Y_k = B_k ×_1 X ×_2 T + E_k (3.21)

where Y_k = (y_1^{(k)}, …, y_n^{(k)}) ∈ ℝ^{m×n}. Suppose each row of Y_k stands for a class of individuals, e.g., partitioned by ages. To make things clearer, we consider a concrete example.

Example 3.3. There are 30 persons under a health test, each measured at times T_1, …, T_4 on 10 indexes such as the low/high blood pressure, heartbeat rate, urea, cholesterol, bilirubin, etc. We label these indexes by I_1, …, I_10. Suppose that the 30 people are partitioned into three groups (denoted by C_1, C_2, C_3) with respect to their ages, consisting of 5, 10 and 15 individuals respectively. Denote

X = [l_5 0 0; 0 l_10 0; 0 0 l_15], T = [1 t_1 t_1²; 1 t_2 t_2²; 1 t_3 t_3²; 1 t_4 t_4²]

and

B_k = [β_11^k β_12^k β_13^k; β_21^k β_22^k β_23^k; β_31^k β_32^k β_33^k] ∈ ℝ^{3×3}.

Denote by Y_{ijk} the measurement of index I_k in group C_i at time T_j, and set Y(:, :, k) = Y_k, B(:, :, k) = B_k for k = 1, 2, …, 10. Then we have

Y = B ×_1 X^T ×_2 T^T + ε (3.22)

where Y, ε ∈ ℝ^{30×4×10}, X ∈ ℝ^{30×3}, T ∈ ℝ^{4×3}, and B ∈ ℝ^{3×3×10} is an unknown constant parameter tensor to be estimated, with B(:, :, k) = B_k the parameter matrix corresponding to the kth index model. The model (3.22) can be further extended to handle a balanced linear mixed model in which multiple responses are measured for balanced clustered subjects (i.e., there is the same number of subjects in each cluster).
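The sketch below (ours, with made-up measurement times and random coefficients) assembles the design matrices of Example 3.3 and simulates model (3.22) slice by slice, i.e. Y(:, :, k) = X B_k T^T + E_k for each index k.

    import numpy as np

    rng = np.random.default_rng(7)
    groups = [5, 10, 15]                          # sizes of C1, C2, C3
    t = np.array([1., 2., 3., 4.])                # measurement times T1..T4 (made up)
    p = 10                                        # number of health indexes

    # X: 30 x 3 block design of all-ones vectors; T: 4 x 3 quadratic time design
    X = np.zeros((sum(groups), len(groups)))
    row = 0
    for g, n_g in enumerate(groups):
        X[row:row + n_g, g] = 1.0
        row += n_g
    T = np.vstack([np.ones_like(t), t, t ** 2]).T   # rows (1, t_j, t_j^2)

    B = rng.standard_normal((3, 3, p))              # parameter tensor (here random)
    E = 0.1 * rng.standard_normal((30, 4, p))
    Y = np.stack([X @ B[:, :, k] @ T.T for k in range(p)], axis=2) + E   # model (3.22)
    print(Y.shape)                                   # (30, 4, 10)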

4. Tensor Normal Distributions

In multivariate analysis, the correlations between the coordinates of a random vector x = (x_1, …, x_n)^T are represented by the covariance matrix Σ := Σ(x), which is symmetric and positive semidefinite. When the variables are arrayed as a matrix, say X = (X_{ij}) ∈ ℝ^{m×n}, called a random matrix, the correlation between any pair of entries, say X_{i_1 j_1} and X_{i_2 j_2} of X, is represented as an entry of the covariance matrix of the vectorization vec(X). A matrix normal distribution is defined as follows. Let μ ∈ ℝ^{m×n} be a constant matrix, and let Σ ∈ ℝ^{m×m} and ϕ ∈ ℝ^{n×n} be two positive definite matrices. A random matrix X ∈ ℝ^{m×n} is said to obey a matrix normal distribution, denoted by X ~ N_{m,n}(μ, Σ, ϕ), if it satisfies the following conditions:

1) E[X] = μ, i.e., E[X_{ij}] = μ_{ij} for each i ∈ [m], j ∈ [n].

2) Each row X_{i:} of X − μ obeys the normal distribution N_n(0, ϕ) for i ∈ [m].

3) Each column X_{:j} of X − μ obeys the normal distribution N_m(0, Σ) for j ∈ [n].

It is easy to show that the matrix normal distribution X ~ N_{m,n}(μ, Σ, ϕ) is equivalent to vec(X) ~ N_{mn}(vec(μ), Σ ⊗ ϕ) (see e.g. [8] ).
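A hedged sketch (ours): writing X = μ + L Z R^T with iid standard normal Z, Σ = L L^T and ϕ = R R^T gives a matrix normal X, and the covariance of vec(X) then has Kronecker form; which factor appears on which side of ⊗ depends on the vectorization convention, and the code below simply verifies the underlying Kronecker identity.

    import numpy as np

    rng = np.random.default_rng(8)
    m, n = 3, 4

    # positive definite row and column covariances Sigma (m x m) and phi (n x n)
    A = rng.standard_normal((m, m)); Sigma = A @ A.T + m * np.eye(m)
    C = rng.standard_normal((n, n)); phi = C @ C.T + n * np.eye(n)
    L = np.linalg.cholesky(Sigma)                 # Sigma = L L^T
    R = np.linalg.cholesky(phi)                   # phi   = R R^T

    # with column-stacking vec, vec(L Z R^T) = (R kron L) vec(Z), hence
    # Cov(vec(X)) = (R kron L)(R kron L)^T = phi kron Sigma
    K = np.kron(R, L)
    print(np.allclose(K @ K.T, np.kron(phi, Sigma)))   # True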

We now define the tensor normal distribution. Let A = (A_{i_1 i_2 ⋯ i_m}) be an m-order random tensor of size N := N_1 × N_2 × ⋯ × N_m, each of whose entries is a random variable. Let μ = (μ_{i_1 i_2 ⋯ i_m}) be a constant m-order tensor of the same size as A, and let Σ_k (k ∈ [m]) be an N_k × N_k positive definite matrix. For convenience, we denote by I^{(n)} the (m − 1)-tuple (i_1, i_2, …, i_{n−1}, i_{n+1}, …, i_m) with i_k ∈ [N_k]. We denote by A(I^{(n)}) and μ(I^{(n)}), both in ℝ^{N_n}, the corresponding fibres (vectors) of A and μ indexed by I^{(n)}, i.e.,

A(I^{(n)}) := A(i_1, i_2, …, i_{n−1}, :, i_{n+1}, …, i_m), i_k ∈ [N_k], k ∈ [m]\{n}.

A(I^{(n)}) (resp. μ(I^{(n)})) is called a fibre of A (resp. μ) along mode-n indexed by I^{(n)}. A is said to obey a tensor normal distribution with parameters (μ, Σ_1, …, Σ_m), denoted by

A ~ N_T(μ, Σ_1, …, Σ_m),

if for any n ∈ [m] we have

A(i_1, …, i_{n−1}, :, i_{n+1}, …, i_m) ~ N_{N_n}(μ(I^{(n)}), Σ_n).

A is said to follow a standard tensor normal distribution if all the Σ k ’s are identity matrices. A model (2.13) with a tensor normal distribution is called a general tensor normal (GTN) model.
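To make the fibre notation concrete, the following small sketch (ours) enumerates the mode-n fibres A(i_1, …, i_{n−1}, :, i_{n+1}, …, i_m) of a tensor; for a draw from the standard tensor normal distribution (iid N(0,1) entries) each such fibre is a standard normal vector of length N_n.

    import numpy as np
    from itertools import product

    def mode_n_fibres(A, n):
        """Yield the mode-n fibres A(i_1,...,i_{n-1}, :, i_{n+1},...,i_m)."""
        idx_ranges = [range(s) for axis, s in enumerate(A.shape) if axis != n]
        for idx in product(*idx_ranges):
            full = list(idx[:n]) + [slice(None)] + list(idx[n:])
            yield A[tuple(full)]

    rng = np.random.default_rng(9)
    A = rng.standard_normal((2, 3, 4))            # a standard tensor normal draw
    fibres = list(mode_n_fibres(A, 1))
    print(len(fibres), fibres[0].shape)           # 8 fibres along mode 2, each of length 3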

To show an application of the tensor normal distribution, we consider the 3-order case. For convenience, we use (i, j)-value to denote a value related to the ith subject at the jth measurement for any (i, j) ∈ [m] × [n]. For example, the kth response observation Y_{ijk} at (i, j) represents the kth response value measured on the ith subject at time j. Let m, n, p be respectively the number of observed objects, the number of measuring times for each subject, and the number of responses for each observation. Denote by Y the response tensor with Y_{ijk} being the kth response at (i, j), by X the covariate tensor with X(i, j, :) ∈ ℝ^r being the covariate vector at (i, j) for the fixed effects, and by U(i, j, :) ∈ ℝ^q the covariate vector at (i, j) for the random effects. Further, for each k ∈ [p], denote by B_k ∈ ℝ^r the coefficient vector of the fixed effects corresponding to the kth response Y_{ijk} at (i, j) for each pair (i, j) ∈ [m] × [n], and similarly by C_k ∈ ℝ^q the coefficient vector of the random effects. Now let B = [B_1, …, B_p] and γ = [C_1, …, C_p]; then B ∈ ℝ^{r×p} and γ ∈ ℝ^{q×p}. We call X and U respectively the design tensor for the fixed effects and the design tensor for the random effects. Then we have

Y = X B + U γ + ε (4.23)

with Y = (Y_{ijk}) ∈ ℝ^{m×n×p}, X = (X_{ijk}) ∈ ℝ^{m×n×r}, B = (β_{ij}) ∈ ℝ^{r×p}, U = (U_{ijk}) ∈ ℝ^{m×n×q}, γ = (γ_{ij}) ∈ ℝ^{q×p} and ε = (ε_{ijk}) ∈ ℝ^{m×n×p}, where ε_{ijk} is the error term. Here the tensor-matrix multiplications X B and U γ are defined by

(X B)_{ijk} = X(i, j, :) B_k = Σ_{s=1}^r X_{ijs} β_{sk}, i ∈ [m], j ∈ [n], k ∈ [p],

and similarly for (U γ)_{ijk}.
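The contractions X B and U γ above are plain contractions over the last mode; the sketch below (ours, with random data and the simplifying assumption of iid normal random effects and errors) simulates a small instance of the model (4.23) just to show the shapes involved.

    import numpy as np

    rng = np.random.default_rng(10)
    m, n, p, r, q = 6, 4, 3, 2, 2     # subjects, times, responses, fixed and random covariates

    Xt = rng.standard_normal((m, n, r))           # covariate tensor for fixed effects
    Ut = rng.standard_normal((m, n, q))           # covariate tensor for random effects
    B = rng.standard_normal((r, p))               # fixed-effect coefficients
    gamma = rng.standard_normal((q, p))           # random-effect coefficients (one draw)
    Eps = 0.1 * rng.standard_normal((m, n, p))    # error tensor

    # (X B)_{ijk} = sum_s X_{ijs} beta_{sk}, and similarly for (U gamma)_{ijk}
    Y = np.einsum('ijs,sk->ijk', Xt, B) + np.einsum('ijs,sk->ijk', Ut, gamma) + Eps
    print(Y.shape)                                # (6, 4, 3)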

Denote B = [ β 1 , , β p ] , γ = [ γ 1 , , γ p ] , and

E_i^{(1)} := E(i, :, :), E_j^{(2)} := E(:, j, :), E_k^{(3)} := E(:, :, k), i ∈ [m], j ∈ [n], k ∈ [p],

E_{jk}^{[1]} := E(:, j, k), E_{ik}^{[2]} := E(i, :, k), E_{ij}^{[3]} := E(i, j, :), i ∈ [m], j ∈ [n], k ∈ [p],

where E := ε denotes the error tensor, each matrix E_l^{(s)} is called a slice on mode s, and each vector E_{lt}^{[s]} is called a fibre along mode-s. We also use E^{[s]} to denote the set consisting of all fibres of ε along mode-s, and use the notation E^{[s]} ~ P to express that each element of E^{[s]} obeys the distribution P, where P is a distribution function. For example, E^{[1]} ~ N_m(0, I_m) means that each 1-mode fibre E_{jk}^{[1]} (there are np 1-mode fibres) obeys a standard normal distribution, i.e., E_{jk}^{[1]} ~ N_m(0, I_m).

Now for convenience we let (n_1, n_2, n_3) := (m, n, p). We assume that

1) γ obeys the matrix normal distribution γ ~ N_{q,p}(0, Σ, ϕ).

2) The random vectors in are independent with for each.

3) For any with being positive definite of size.

The model (4.23) with conditions 1)-3) is called a 3-order general mixed tensor (GMT) model. We will generalize this model to a more general case. In the following we first define the standard normal 3-order tensor distribution:

Definition 4.1. Let be a random tensor, i.e., each entry of is a random variable. Let be a constant tensor and be a positive definite matrix for each. Then is said to obey 3-order tensor standard normal (TSN) distribution if for all.

A 3-order random tensor satisfying TSN distribution has the following property:

Theorem 4.2. Let be an 3-order random tensor which obeys the tensor standard normal (TSN) distribution. Then each slice of shall obey a standard matrix normal distribution. Specifically, we have

(4.24)

Proof. □

Note that condition (III) is a generalization to the matrix normal distribution, and we denote it by,

.

Note that is a diagonal matrix since both and are diagonal. Write where is the expansion of along the third mode, specifically,

here is the tensor consisting of n identity matrices of size stacking along the third mode, thus and. Then we have

. (4.25)

Now we unfold along mode-3 to get matrix and respectively. Then Equation (4.25) is equivalent to

(4.26)

where and are generated similarly as.

The multivariate linear mixed model (4.23) or (4.25) can be transformed into a general linear model through the vectorization of matrices. Recall that the vectorization of a matrix A ∈ ℝ^{m×n} is the vector of dimension mn, denoted by vec(A), formed by vertically stacking the columns of A in order, that is, vec(A) = [a_1^T, a_2^T, …, a_n^T]^T, where a_1, a_2, …, a_n are the column vectors of A. The vectorization is closely related to the Kronecker product of matrices. The following lemma (Proposition 1.3.14 on Page 89 of [4] ) presents some basic properties of the vectorization and the Kronecker product and will be used to prove our main result:
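The identity behind the vectorization step used later in the proof of Theorem 4.5 can be checked numerically; the sketch below (ours) verifies vec(A X B) = (B^T ⊗ A) vec(X) under column-stacking vectorization.

    import numpy as np

    def vec(M):
        """Column-stacking vectorization of a matrix."""
        return M.reshape(-1, order='F')

    rng = np.random.default_rng(11)
    A = rng.standard_normal((3, 4))
    Xm = rng.standard_normal((4, 5))
    B = rng.standard_normal((5, 2))
    print(np.allclose(vec(A @ Xm @ B), np.kron(B.T, A) @ vec(Xm)))   # True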

Lemma 4.3. Let A, X and B be matrices of appropriate sizes such that all the operations defined in the following are valid. Then

1) vec(A X B) = (B^T ⊗ A) vec(X).

2).

3).

The following property of the multiplication of a tensor with a matrix is shown by Kolda [18] and will be used to prove our main result.

Lemma 4.4. Let A be a real tensor of size I_1 × I_2 × ⋯ × I_m, and let U ∈ ℝ^{I_n × J} where n ∈ [m]. Then B = A ×_n U if and only if

B_{(n)} = U^T A_{(n)} (4.27)

where A_{(n)} and B_{(n)} are respectively the flattened (unfolded) matrices of A and B along mode-n.

Proof. Let. Then for any, we have

. (4.28)

From which the result Equation (4.27) is immediate. □

Note that our Formula (4.27) is different from that in Section 2.5 in [18] since the definition of tensor-matrix multiplication is different.

We have the following result for the estimation of the parameter matrix:

Theorem 4.5. Suppose in Equation (4.23). Then the optimal estimation of the parameter matrix in Equation (4.23) is

. (4.29)

Proof. We first write Equation (4.25) in a matrix-vector form by vectorization by using the first item in Lemma 4.3,

(4.30)

where is the sum of two random terms. By the property of the vectorizations (see e.g. [4] ), we know that. By the ordinary least square solution method we get

By using (1) of Lemma 4.3 again (this time in the opposite direction), we get result (4.29). □

For any, we denote when, and when (here stands for a g-inverse of a matrix). Then can be regarded as the projection from into since. Furthermore, we have. We now end the paper by presenting the following result as a prediction model, which follows directly from Theorem 4.5.

Theorem 4.6. Suppose in Equation (4.23). Then the mean of the response tensor in Equation (4.23) is

(4.31)

where.

Proof. By Theorem 4.5 and Equation (4.26), we have

It follows that

. (4.32)

By employing Lemma 4.4, we get result (4.31). □

Acknowledgements

This research was partially supported by the Hong Kong Research Grant Council (No. PolyU 15301716) and the graduate innovation funding of USTS. We thank the anonymous referees for their patient and careful reading and for their suggestions, which improved the writing of the paper.

Cite this paper

Lin, Z.R., Liu, D.Z., Liu, X.Y., He, L.L. and Xu, C.Q. (2018) High Order Tensor Forms of Growth Curve Models. Advances in Linear Algebra & Matrix Theory, 8, 18-32. https://doi.org/10.4236/alamt.2018.81003

References

1. Bilodeau, M. and Brenner, D. (1999) Theory of Multivariate Statistics. Springer, New York.

2. Bollen, K.A. and Curran, P.J. (2006) Latent Curve Models: A Structural Equation Perspective. Wiley, Hoboken.

3. Bryk, A.S. and Raudenbush, S.W. (1987) Application of Hierarchical Linear Models to Assessing Change. Psychological Bulletin, 101, 147-158. https://doi.org/10.1037/0033-2909.101.1.147

4. Kollo, T. and Rosen, D. (2005) Advanced Multivariate Statistics with Matrices. Springer, New York. https://doi.org/10.1007/1-4020-3419-9

5. Raudenbush, S.W. and Bryk, A.S. (2002) Hierarchical Linear Models: Applications and Data Analysis Methods. 2nd Edition, Sage Publications, Thousand Oaks, CA.

6. Bauer, D.J. (2007) Observations on the Use of Growth Mixture Models in Psychological Research. Multivariate Behavioral Research, 42, 757-786. https://doi.org/10.1080/00273170701710338

7. Bollen, K.A. and Curran, P.J. (2004) Autoregressive Latent Trajectory (ALT) Models: A Synthesis of Two Traditions. Sociological Methods and Research, 32, 336-383. https://doi.org/10.1177/0049124103260222

8. Coffman, D.L. and Millsap, R.E. (2006) Evaluating Latent Growth Curve Models Using Individual Fit Statistics. Structural Equation Modeling, 13, 1-27. https://doi.org/10.1207/s15328007sem1301_1

9. Hedeker, D. and Gibbons, R. (2006) Longitudinal Data Analysis. Wiley Inc., New York.

10. Little, R.J. and Rubin, D.B. (1987) Statistical Analysis with Missing Data. Wiley Inc., New York.

11. Singer, J.D. (1998) Using SAS Proc Mixed to Fit Multilevel Models, Hierarchical Models, and Individual Growth Models. Journal of Educational and Behavioral Statistics, 23, 323-355. https://doi.org/10.3102/10769986023004323

12. Singer, J.D. and Willett, J.B. (2003) Applied Longitudinal Data Analysis: Modeling Change and Event Occurrence. Oxford University Press, New York. https://doi.org/10.1093/acprof:oso/9780195152968.001.0001

13. Hastie, T., Tibshirani, R. and Friedman, J. (2009) The Elements of Statistical Learning. 2nd Edition, Springer, New York. https://doi.org/10.1007/978-0-387-84858-7

14. Bollen, K.A. (2007) On the Origins of Latent Curve Models. In: Cudeck, R. and MacCallum, R., Eds., Factor Analysis at 100, Lawrence Erlbaum Associates, Mahwah, NJ, 79-98.

15. Comon, P., Golub, G., Lim, L.-H. and Mourrain, B. (2008) Symmetric Tensors and Symmetric Tensor Rank. SIAM Journal on Matrix Analysis and Applications, 30, 1254-1279. https://doi.org/10.1137/060661569

16. Wishart, J. (1938) Growth Rate Determinations in Nutrition Studies with the Bacon Pig and Their Analysis. Biometrika, 30, 16-28. https://doi.org/10.1093/biomet/30.1-2.16

17. McArdle, J.J. (2009) Latent Variable Modeling of Differences and Changes with Longitudinal Dynamic Structural Analysis. Annual Review of Psychology, 60, 577-605. https://doi.org/10.1146/annurev.psych.60.110707.163612

18. Kolda, T.G. and Bader, B.W. (2009) Tensor Decompositions and Applications. SIAM Review, 51, 455-500. https://doi.org/10.1137/07070111X