Constructing Precision Matrices based on Correlated Gaussian Samples

Introduction

This is a draft of a method for constructing sparse precision matrices from correlated Gaussian samples with mean zero and variance one.

Method

Let $y_t$ be a random vector of size $J$ for $t \in \{1,\dots,T\}$, where $T$ is the number of temporal replicates. We assume that $y_t \sim N(0, Q^{-1})$ and that the marginal variance of the $i$-th element of $y_t$, denoted $y_{i,t}$, is $1$. This means that the diagonal of $Q^{-1}$ is a vector of ones, and that

$$ \mathrm{E}(y_t) = 0, \qquad \mathrm{cov}(y_t) = Q^{-1}. $$

Furthermore, it is assumed that $Q$ is a sparse precision matrix. Using the properties of Gaussian conditional distributions, we have

$$ \mathrm{E}(y_{i,t} \mid y_{-i,t}) = -Q_{i,i}^{-1} \sum_{j \in A_i,\, j \neq i} Q_{i,j}\, y_{j,t}, $$

$$ \mathrm{Prec}(y_{i,t} \mid y_{-i,t}) = Q_{i,i} = \left(\mathrm{var}(y_{i,t} \mid y_{-i,t})\right)^{-1} = \tau_i^2, $$

where $y_{-i,t}$ denotes all elements of $y_t$ except $y_{i,t}$, and $A_i$ is the set containing the neighbors of site $i$, i.e. the sites $j$ such that $Q_{i,j} \neq 0$.
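
These conditional identities can be verified numerically. The sketch below is purely illustrative (the small matrix $Q$ is an arbitrary positive definite example, not part of the method): it computes the conditional mean and variance of one component from the precision matrix and checks them against standard Gaussian conditioning on the covariance.

```python
import numpy as np

# An arbitrary symmetric positive definite precision matrix (illustrative only)
Q = np.array([[2.0, -1.0,  0.0],
              [-1.0, 2.0, -0.5],
              [0.0, -0.5,  1.5]])
Sigma = np.linalg.inv(Q)

# Conditional of y_0 given the rest, read off the precision matrix:
#   E(y_0 | y_{-0}) = -Q_00^{-1} * sum_{j != 0} Q_0j y_j,  Prec(y_0 | y_{-0}) = Q_00
i, rest = 0, [1, 2]
y_rest = np.array([0.3, -0.2])
cond_mean_Q = -np.dot(Q[i, rest], y_rest) / Q[i, i]

# The same conditional computed from the covariance (classical conditioning)
S11 = Sigma[i, i]
S12 = Sigma[np.ix_([i], rest)]
S22 = Sigma[np.ix_(rest, rest)]
cond_mean_S = (S12 @ np.linalg.solve(S22, y_rest))[0]
cond_var_S = S11 - (S12 @ np.linalg.solve(S22, S12.T))[0, 0]
```

Both routes agree, and the conditional variance equals $Q_{0,0}^{-1}$, as the formulas above state.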

Assume that we have realizations of $y_1,\dots,y_T$ that can be used to infer the precision matrix $Q$. We set up a regression model to estimate the non-zero elements of $Q$, treating each $y_{i,t}$ as an observation. The regression model for each site $i$ is

$$ y_{i,t} = \sum_{j \in A_i,\, j \neq i} \beta_{i,j}\, y_{j,t} + \varepsilon_{i,t}, \qquad t \in \{1,\dots,T\}. $$

At each site $i$, we estimate the parameter vector $\beta_i$ by least squares,

$$ \hat\beta_i = (X_i^T X_i)^{-1} X_i^T y_i, $$

where $y_i = (y_{i,1},\dots,y_{i,T})^T$ and

$$ X_i = \begin{pmatrix} y_{j_1,1} & \cdots & y_{j_m,1} \\ \vdots & & \vdots \\ y_{j_1,T} & \cdots & y_{j_m,T} \end{pmatrix}, $$

with $j_1,\dots,j_m$ denoting the neighbors of site $i$, so that $y_{j_l,t}$ is the value at the $l$-th neighbor of $y_{i,t}$ at time $t$. The variance of $\varepsilon_{i,t}$ is $\tau_i^{-2}$ and it is estimated with

$$ \hat\tau_i^{-2} = T^{-1} (y_i - X_i \hat\beta_i)^T (y_i - X_i \hat\beta_i). $$
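
As a concrete sketch of the per-site estimates (the array layout, the function name, and the use of `np.linalg.lstsq` are illustrative assumptions, not part of the draft):

```python
import numpy as np

def fit_site(Y, neighbors, i):
    """OLS estimate of beta_i and the precision tau_i^2 at site i.

    Y         : (T, J) array; row t holds the sample y_t
    neighbors : list of index lists; neighbors[i] = A_i (excluding i itself)
    """
    X = Y[:, neighbors[i]]                       # T x m design matrix of neighbor values
    y = Y[:, i]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # solves (X^T X)^{-1} X^T y
    resid = y - X @ beta
    var_hat = resid @ resid / len(y)             # hat tau_i^{-2} = T^{-1} ||y - X beta||^2
    return beta, 1.0 / var_hat                   # return hat beta_i and hat tau_i^2
```

Looping `fit_site` over $i = 1,\dots,J$ yields all the ingredients needed below.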

The next step is to transform $\hat\beta_i$ and $\hat\tau_i^2$ into estimates of the elements of $Q$, namely

$$ \hat Q_{i,j} = \begin{cases} -\hat\tau_i^2\, \hat\beta_{i,j}, & \text{if } i \neq j, \\ \hat\tau_i^2, & \text{if } i = j, \end{cases} $$

where $\hat\beta_{i,j}$ is the element of $\hat\beta_i$ corresponding to site $j$. Let $\hat B$ be a matrix with $(i,j)$-th element $\hat\beta_{i,j}$. Note that $\hat\beta_{i,i} = 0$, and thus $\hat B_{i,i} = 0$. Furthermore, let $\hat K$ be a diagonal matrix such that

$$ \hat K = \mathrm{diag}(\hat\tau_1^2, \dots, \hat\tau_J^2). $$

An estimate of $Q$ can now be written as

$$ \hat Q = \hat K (I - \hat B), $$

where $I$ is the identity matrix of size $J$.
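
The assembly step can be sketched as follows (helper names are illustrative; the sign convention $\hat Q_{i,j} = -\hat\tau_i^2\hat\beta_{i,j}$ for $i \neq j$ follows from the Gaussian conditional mean):

```python
import numpy as np

def assemble_Q(betas, precisions, neighbors, J):
    """Assemble hat Q = hat K (I - hat B) from per-site estimates.

    betas      : list; betas[i] = hat beta_i (one coefficient per neighbor)
    precisions : list; precisions[i] = hat tau_i^2
    neighbors  : list of index lists; neighbors[i] = A_i (excluding i)
    """
    B = np.zeros((J, J))
    for i in range(J):
        B[i, neighbors[i]] = betas[i]    # hat B has zeros on the diagonal
    K = np.diag(precisions)
    return K @ (np.eye(J) - B)           # hat Q_ii = tau_i^2, hat Q_ij = -tau_i^2 beta_ij
```

Note that $\hat Q$ produced this way is generally not symmetric, which motivates the correction below.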

We have to make sure that $\hat Q$ is symmetric. This can be achieved by setting

$$ \tilde Q_{i,j} = \tilde Q_{j,i} = -\tfrac{1}{2}\left(\hat\tau_i^2\, \hat\beta_{i,j} + \hat\tau_j^2\, \hat\beta_{j,i}\right), \qquad i \neq j, $$

and defining new regression parameters $\tilde\beta_{i,j}$ such that

$$ -\hat\tau_i^2\, \tilde\beta_{i,j} = \tilde Q_{i,j} = \tilde Q_{j,i} = -\hat\tau_j^2\, \tilde\beta_{j,i}, $$

which gives

$$ \tilde\beta_{i,j} = -\hat\tau_i^{-2}\, \tilde Q_{i,j}, \qquad \tilde\beta_{j,i} = -\hat\tau_j^{-2}\, \tilde Q_{i,j}. $$

Let $\tilde Q$ and $\tilde B$ be the matrices containing the $\tilde Q_{i,j}$'s and the $\tilde\beta_{i,j}$'s, respectively; the diagonal elements are unchanged, since $\tilde Q_{i,i} = \hat\tau_i^2$ and $\tilde B_{i,i} = 0$.
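
A minimal sketch of this symmetrization step (function name illustrative; assumes dense NumPy arrays and the sign convention $\tilde\beta_{i,j} = -\hat\tau_i^{-2}\tilde Q_{i,j}$):

```python
import numpy as np

def symmetrize(Q_hat, precisions):
    """Form tilde Q by averaging hat Q with its transpose, then recover tilde B.

    precisions : sequence of hat tau_i^2 (the diagonal of hat K).
    Uses tilde beta_ij = -tilde Q_ij / tau_i^2 for i != j; the diagonal of
    tilde B comes out as exactly zero.
    """
    Q_tilde = 0.5 * (Q_hat + Q_hat.T)    # tilde Q_ij = (hat Q_ij + hat Q_ji) / 2
    K_inv = np.diag(1.0 / np.asarray(precisions, dtype=float))
    B_tilde = np.eye(len(precisions)) - K_inv @ Q_tilde
    return Q_tilde, B_tilde
```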

We cannot be sure that $\tilde Q$ is positive definite. One way to check is to compute the Cholesky decomposition of $\tilde Q$, that is, $\tilde Q = L L^T$, and check whether all the diagonal elements of $L$ are positive; the decomposition exists with a strictly positive diagonal if and only if $\tilde Q$ is positive definite. Note that if $\tilde Q$ is not invertible then it is certainly not positive definite, although invertibility alone does not guarantee positive definiteness. The estimated precision matrix $\tilde Q = \hat K (I - \tilde B)$ is invertible if $(I - \tilde B)$ is invertible, and strictly diagonally dominant matrices are invertible. Moreover, a symmetric strictly diagonally dominant matrix with positive diagonal entries is positive definite, which makes diagonal dominance a useful sufficient condition here. In general, an $n \times n$ matrix $A$ with elements $\{a_{i,j}\}_{i,j}$ is strictly diagonally dominant if

$$ |a_{i,i}| > \sum_{j \neq i} |a_{i,j}|, \qquad 1 \le i \le n. $$
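
Both checks can be written in a few lines (helper names are illustrative; `np.linalg.cholesky` raises `LinAlgError` when its input is not positive definite):

```python
import numpy as np

def is_positive_definite(Q):
    """Check positive definiteness by attempting a Cholesky factorization."""
    try:
        np.linalg.cholesky(Q)    # succeeds iff Q is symmetric positive definite
        return True
    except np.linalg.LinAlgError:
        return False

def is_strictly_diagonally_dominant(A):
    """|a_ii| > sum_{j != i} |a_ij| for every row i."""
    abs_A = np.abs(A)
    off_diag_sums = abs_A.sum(axis=1) - np.diag(abs_A)
    return bool(np.all(np.diag(abs_A) > off_diag_sums))
```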

The matrix $(I - \tilde B)$ is strictly diagonally dominant if

$$ 1 > \sum_{j \in A_i,\, j \neq i} |\tilde\beta_{i,j}|, \qquad 1 \le i \le J. $$

Alternatively, a $\lambda_i \in (0,1)$ can be found for each $i$ to tune $\tilde Q$ such that it is strictly diagonally dominant, using

$$ \hat\tau_i^2 > \lambda_i \sum_{j \in A_i,\, j \neq i} |\tilde Q_{i,j}|, \qquad 1 \le i \le J, $$