Gumbel: Assignment 2

From MathWiki

Table of contents

Problem 1

Show: if \Sigma\! is a variance matrix, then all the eigenvalues of \Sigma\! are non-negative.

GM comment: Elements of a matrix are normally represented in lower case. Capitals suggest sub-matrices of a partitioned matrix.

Let \Sigma=\left( \sigma_{ij} \right), where \sigma_{ij}=cov\left(y_i,y_j \right).

Note that

\sigma_{ji}=cov\left(y_j,y_i \right)=cov\left(y_i,y_j \right)=\sigma_{ij},

so \Sigma\! is symmetric. Therefore there exist an orthogonal matrix \Gamma_{p \times p} and a diagonal matrix \Lambda_{p\times p}=diag\left( \lambda_i \right)

with \ \lambda_1\ge\lambda_2\ge\cdots\ge\lambda_p, such that \Sigma=\Gamma\Lambda\Gamma'\!.

\forall\ x \in R^{p} , \ x'\Sigma x=x'E \left(Y-EY\right)\left(Y-EY\right)'x=E\left(x'\left(Y-EY\right)\right)\left(x'\left(Y-EY\right)\right)'=E \left(x'\left(Y-EY\right)\right)^2\ge 0,

thus we obtain that \Sigma\! is non-negative definite.

\forall\ y \in R^{p}, \exists\ z\in R^{p} with \ y = \Gamma' z (take z = \Gamma y, since \Gamma\! is orthogonal), so \ y'\Lambda y=z'\Gamma\Lambda\Gamma'z=z'\Sigma z\ge 0,

thus we obtain that \Lambda\! is also non-negative definite.

Let y take the value \left( 0, \cdots, 0, 1 \right)'.

Therefore, \lambda_p=y'\Lambda y \ge 0, so \ \lambda_1\ge\lambda_2\ge\cdots\ge\lambda_p\ge 0.

This completes the proof.
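The argument above can also be checked numerically. A minimal sketch in Python (numpy assumed; the data and names here are illustrative, not from the assignment): the eigenvalues of any sample variance matrix come out non-negative.

```python
# Numerical sanity check of Problem 1: eigenvalues of a variance matrix
# are non-negative (up to rounding error).
import numpy as np

rng = np.random.default_rng(0)
Y = rng.normal(size=(200, 4))        # 200 observations of a 4-vector
Sigma = np.cov(Y, rowvar=False)      # 4x4 sample variance matrix

# Sigma is symmetric, so eigvalsh gives real eigenvalues (ascending)
eigvals = np.linalg.eigvalsh(Sigma)
assert np.all(eigvals >= -1e-12)     # non-negative up to rounding error
```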

Problem 2

Show: \Sigma\! has positive eigenvalues \iff x' \Sigma x > 0\! for all x\neq 0\in R^p\!.


First we write out the matrix product x' \Sigma x\! for x\neq 0 using the spectral decomposition theorem:

x'\Sigma x = x' \Gamma\Lambda\Gamma' x\!

We have:

x'\Gamma=\begin{bmatrix}\sum_{i=1}^p x_i\gamma_{i1} \cdots \sum_{i=1}^p x_i\gamma_{ip} \end{bmatrix}

Note that

\Gamma' x = \left( x'\Gamma \right) ' =  \begin{bmatrix}\sum_{i=1}^p x_i\gamma_{i1} \cdots \sum_{i=1}^p x_i\gamma_{ip} \end{bmatrix}'

It is easy to see that:

x'\Gamma\Lambda\Gamma' x = \sum_{j=1}^p \lambda_j \left( \sum_{i=1}^p x_i \gamma_{ij} \right)^2.

Now suppose all the eigenvalues \lambda_j\! are positive. Since \Gamma\! is orthogonal, \Gamma' x \neq 0\! whenever x \neq 0\!, so at least one term of the sum above is strictly positive; hence x'\Sigma x > 0\!.

CONVERSE:

Suppose that x' \Sigma x > 0\! for all x\neq 0\in R^p\!.

Choose x = \gamma_j \!, where \gamma_j \! is the jth column vector of \Gamma \!.

Now

\gamma_j'\Sigma\gamma_j = \gamma_j'\lambda_j\gamma_j = \lambda_j\gamma_j'\gamma_j = \lambda_j > 0\!

since \gamma_j\! is an eigenvector of \Sigma\! with eigenvalue \lambda_j\!, and \gamma_j'\gamma_j = 1\! by the orthogonality of \Gamma\!.

Clearly, this works for j=1,\cdots,p, hence all \lambda_j > 0\!.

So we can conclude that \Sigma\! is a non-singular, positive definite symmetric matrix.
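The spectral identity used above, x'\Sigma x = \sum_j \lambda_j (\Gamma' x)_j^2, can be verified numerically. A small Python sketch (numpy assumed; matrix and names are illustrative):

```python
# Check the identity x' Sigma x = sum_j lambda_j * (Gamma' x)_j^2
# for a randomly generated positive definite Sigma.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
Sigma = A @ A.T + np.eye(4)         # symmetric positive definite

lam, Gamma = np.linalg.eigh(Sigma)  # Sigma = Gamma diag(lam) Gamma'
x = rng.normal(size=4)

lhs = x @ Sigma @ x
rhs = np.sum(lam * (Gamma.T @ x) ** 2)
assert np.isclose(lhs, rhs)
assert lhs > 0                      # all lam > 0 and x != 0
```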

Sbush 02:53, 3 Oct 2006 (EDT)

Problem 3

Show: \Sigma\! is positive definite \iff \Sigma^{-1}\! is positive definite.


Assume that Σ is PD. Using the Spectral Decomposition Theorem and the orthogonality of Γ, and using the expansion of Σ in Problem 2, we have:

x'\Sigma^{-1} x = x'\Gamma\Lambda^{-1}\Gamma' x = \sum_{j=1}^p \frac{1}{\lambda_j} \left( \sum_{i=1}^p x_i \gamma_{ij} \right)^2.

Since \lambda_1, \cdots, \lambda_p are positive (because \Sigma is positive definite; see Problem 2), we know that \frac{1}{\lambda_1}, \cdots, \frac{1}{\lambda_p} are also positive.

Therefore, x'\Sigma^{-1} x > 0 \! for all x \neq 0\!, so \Sigma^{-1}\! is positive definite.

Sbush 10:41, 3 Oct 2006 (EDT)


Another Solution


(1)\Longrightarrow

If \Sigma\! is PD, then there exist an orthogonal matrix \Gamma\! and a diagonal matrix \Lambda=\begin{bmatrix} \lambda_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \lambda_p\end{bmatrix}\! such that \Sigma =\Gamma \Lambda \Gamma'\! with all \lambda_{i} > 0\!, and hence all \frac{1}{\lambda_i}>0.

Let \Gamma = [\gamma_1,\gamma_2,\cdots,\gamma_p]; then \Sigma =\sum_{i=1}^p\lambda_i\gamma_i\gamma_i'.

\forall x \ne 0\in R^p\!, since \Sigma\! is PD, we have x'\Sigma x= x'\sum_{i=1}^p\lambda_i\gamma_i\gamma_i'x=\sum_{i=1}^p\lambda_i\left(x'\gamma_i\right)\left(x'\gamma_i\right)'=\sum_{i=1}^p\lambda_i\left(x'\gamma_i\right)^2>0

This guarantees that at least one \left(x'\gamma_i\right)^2\! is non-zero, and hence positive.

Now x' \Sigma^{-1} x=x'\left(\Gamma \Lambda \Gamma'\right)^{-1} x=(x'\Gamma) \Lambda^{-1}\left(x'\Gamma\right)'=\sum_{i=1}^p\left(\frac{1}{\lambda_i}\right)\left(x'\gamma_i\right)^2. Since each \left(x'\gamma_i\right)^2\ge 0 with at least one of them positive, and all \frac{1}{\lambda_i}>0, we conclude that x' \Sigma^{-1} x=\sum_{i=1}^p\left(\frac{1}{\lambda_i}\right)\left(x'\gamma_i\right)^2 is positive.

Thus \Sigma^{-1}\! is PD.


(2)\Longleftarrow


If \Sigma^{-1}\! is PD, then it is non-singular. Thus there exists a non-singular, full-column-rank (FCR) matrix A\! such that \Sigma^{-1}=AA'\!. So \Sigma=\left(AA'\right)^{-1}=(A')^{-1}A^{-1}=(A^{-1})'A^{-1}=(A^{-1})'((A^{-1})')'\!

Because \Sigma^{-1}>0\!, for all x \ne 0\in R^p\! we have x'\Sigma^{-1} x=x'AA'x=(x'A)(x'A)'>0 \!

So x'\Sigma x = x'(A^{-1})'((A^{-1})')'x = [x'(A^{-1})'][x'(A^{-1})']'\!, which is a scalar (a sum of squares).

Because A\! is non-singular and FCR, (A^{-1})'\! is also non-singular and FCR, which implies that all the column vectors of (A^{-1})'\! are linearly independent. With x \ne 0\in R^p\!, we get [x'(A^{-1})']\ne 0\!.

So x'\Sigma x = [x'(A^{-1})'][x'(A^{-1})']' > 0\!

Thus we conclude that \Sigma\! is PD .


PS


For the converse part, if we know that \Sigma^{-1}>0\!, then by applying the \Longrightarrow part to \Sigma^{-1}\!, we can say that \Sigma =\left(\Sigma^{-1}\right)^{-1}\! is PD.
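The equivalence can also be seen numerically: the eigenvalues of \Sigma^{-1} are the reciprocals 1/\lambda_i, so one set is positive exactly when the other is. A Python sketch (numpy assumed; the matrix is illustrative):

```python
# Check that Sigma PD  <=>  Sigma^{-1} PD: eigenvalues of the inverse
# are the reciprocals of the eigenvalues of Sigma.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))
Sigma = A @ A.T + np.eye(3)         # positive definite

lam = np.linalg.eigvalsh(Sigma)
lam_inv = np.linalg.eigvalsh(np.linalg.inv(Sigma))

assert np.all(lam > 0) and np.all(lam_inv > 0)
# eigenvalues of Sigma^{-1} are 1/lambda_i
assert np.allclose(np.sort(lam_inv), np.sort(1.0 / lam))
```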

Problem 4

Prove: E[X_2|X_1=x_1] = \mu_2+\Sigma_{21}\Sigma_{11}^{-1}(x_1-\mu_1)

Let X=\begin{pmatrix} X_1 \\ X_2 \end{pmatrix}, \mu = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}, \Sigma=\begin{pmatrix} \Sigma_{11} & \Sigma_{12}  \\ \Sigma_{21} &\Sigma_{22}   \end{pmatrix}

We know that this random vector follows a normal distribution. We can use the fact that \left|\Sigma\right|=\left|\Sigma_{11}\right|\left|\Sigma_{22.1}\right|, where \Sigma_{22.1}=\Sigma_{22}-\Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12}.

The pdf of the multivariate normal is:

f(X)=\frac{1}{(2\pi)^{p/2}(2\pi)^{q/2}\left|\Sigma_{11}\right|^{1/2}\left|\Sigma_{22.1}\right|^{1/2}}\exp\left\{-\frac{1}{2}(X-\mu)'\Sigma^{-1}(X-\mu)\right\}

where X_1 is p \times 1, X_2 is q \times 1, and p + q = n.

We can rewrite the argument in the exponential term as:

-\frac{1}{2}\begin{pmatrix} (X_1 -\mu_1)', & (X_2-\mu_2)'-(X_1-\mu_1)'\Sigma_{11}^{-1}\Sigma_{12} \end{pmatrix} \begin{pmatrix} \Sigma_{11}^{-1} & 0 \\ 0 & \Sigma_{22.1}^{-1} \end{pmatrix} \begin{pmatrix} (X_1 -\mu_1) \\ (X_2-\mu_2)-\Sigma_{21}\Sigma_{11}^{-1}(X_1-\mu_1) \end{pmatrix}

Since Σ is symmetric, we know that \Sigma_{12} = \left(\Sigma_{21}\right)' and that \left(\Sigma_{11}^{-1}\right)'=\Sigma_{11}^{-1}.

Therefore,

(X_2-\mu_2)'-(X_1-\mu_1)'\Sigma_{11}^{-1}\Sigma_{12} = \left((X_2-\mu_2)-\Sigma_{21}\Sigma_{11}^{-1}(X_1-\mu_1)\right)'.

Now the exponential term becomes:

-\frac{1}{2}\begin{pmatrix} (X_1 -\mu_1)', & \left((X_2-\mu_2)-\Sigma_{21}\Sigma_{11}^{-1}(X_1-\mu_1)\right)' \end{pmatrix} \begin{pmatrix} \Sigma_{11}^{-1} & 0 \\ 0 & \Sigma_{22.1}^{-1} \end{pmatrix} \begin{pmatrix} (X_1 -\mu_1) \\ (X_2-\mu_2)-\Sigma_{21}\Sigma_{11}^{-1}(X_1-\mu_1) \end{pmatrix} = -\frac{1}{2}(X_1-\mu_1)'\Sigma_{11}^{-1}(X_1-\mu_1) -\frac{1}{2}\left((X_2-\mu_2)-\Sigma_{21}\Sigma_{11}^{-1}(X_1-\mu_1)\right)'\Sigma_{22.1}^{-1}\left((X_2-\mu_2)-\Sigma_{21}\Sigma_{11}^{-1}(X_1-\mu_1)\right)

Therefore,

f(X)=\frac{1}{(2\pi)^{p/2}(2\pi)^{q/2}\left|\Sigma_{11}\right|^{1/2}\left|\Sigma_{22.1}\right|^{1/2}} \exp\left\{-\frac{1}{2}(X_1-\mu_1)'\Sigma_{11}^{-1}(X_1-\mu_1) -\frac{1}{2}\left((X_2-\mu_2)-\Sigma_{21}\Sigma_{11}^{-1}(X_1-\mu_1)\right)'\Sigma_{22.1}^{-1}\left((X_2-\mu_2)-\Sigma_{21}\Sigma_{11}^{-1}(X_1-\mu_1)\right)\right\}


We can now separate this density into a product of two terms:

f(X)=\frac{1}{(2\pi)^{p/2}\left|\Sigma_{11}\right|^{1/2}} \exp\left\{-\frac{1}{2}(X_1-\mu_1)'\Sigma_{11}^{-1}(X_1-\mu_1)\right\} \cdot \frac{1}{(2\pi)^{q/2}\left|\Sigma_{22.1}\right|^{1/2}} \exp\left\{-\frac{1}{2}\left((X_2-\mu_2)-\Sigma_{21}\Sigma_{11}^{-1}(X_1-\mu_1)\right)'\Sigma_{22.1}^{-1}\left((X_2-\mu_2)-\Sigma_{21}\Sigma_{11}^{-1}(X_1-\mu_1)\right)\right\}


Using the fact that the joint density is the product of the marginal and conditional densities, we see that the first term is the marginal density of X_1. The second term is then the conditional density of X_2 given X_1.

We immediately see that the conditional mean is given by: E[X_2|X_1=x_1] = \mu_2+\Sigma_{21}\Sigma_{11}^{-1}(x_1-\mu_1)
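The decomposition used above can be verified numerically. A Python sketch (numpy assumed; the partitioned matrix is illustrative): the full quadratic form splits into the two block quadratic forms, and the determinant factors as claimed.

```python
# Check (x-mu)' Sigma^{-1} (x-mu)
#   = (x1-mu1)' S11^{-1} (x1-mu1) + d' S22.1^{-1} d,
# where d = (x2-mu2) - S21 S11^{-1} (x1-mu1) and S22.1 = S22 - S21 S11^{-1} S12.
import numpy as np

rng = np.random.default_rng(3)
p, q = 2, 2
A = rng.normal(size=(p + q, p + q))
Sigma = A @ A.T + np.eye(p + q)      # positive definite, partitioned below
mu = rng.normal(size=p + q)
x = rng.normal(size=p + q)

S11, S12 = Sigma[:p, :p], Sigma[:p, p:]
S21, S22 = Sigma[p:, :p], Sigma[p:, p:]
S22_1 = S22 - S21 @ np.linalg.inv(S11) @ S12

d1 = x[:p] - mu[:p]
d = (x[p:] - mu[p:]) - S21 @ np.linalg.inv(S11) @ d1

full = (x - mu) @ np.linalg.inv(Sigma) @ (x - mu)
split = d1 @ np.linalg.inv(S11) @ d1 + d @ np.linalg.inv(S22_1) @ d
assert np.isclose(full, split)

# and the determinant identity |Sigma| = |S11| |S22.1|
assert np.isclose(np.linalg.det(Sigma),
                  np.linalg.det(S11) * np.linalg.det(S22_1))
```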

Problem 5

Show: if Y \sim N_p(u,\Sigma)\! and \Sigma\! is a non-singular positive definite matrix, then (Y-u)'\Sigma^{-1}(Y-u)\sim\chi_p^2.


Since Y \sim N_p(u,\Sigma)\!, let Z=\Sigma^{-\frac{1}{2}}(Y-u)\!.

then E(Z)=\Sigma^{-\frac{1}{2}}E(Y-u)=0 and Var(Z)=\Sigma^{-\frac{1}{2}}Var(Y-u)(\Sigma^{-\frac{1}{2}})'=\Sigma^{-\frac{1}{2}}\Sigma\Sigma^{-\frac{1}{2}}=I_p

Then we get Z \sim N_p(0, I_p). Let Z=[z_1,\cdots,z_p]'.

The z_i\!'s are mutually independent, and each z_i \sim N(0,1)\!.

So \sum_{i=1}^p (z_i)^2=Z'Z=[\Sigma^{-\frac{1}{2}}(Y-u)]'[\Sigma^{-\frac{1}{2}}(Y-u)]=(Y-u)'\Sigma^{-1}(Y-u)\sim\chi_p^2
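This distributional fact can be checked by simulation. A Python sketch (numpy assumed; the mean vector and covariance are illustrative): the Mahalanobis distances should behave like a \chi_p^2 sample, whose mean is p and variance is 2p.

```python
# Simulate Y ~ N_p(mu, Sigma) and check that (Y-mu)' Sigma^{-1} (Y-mu)
# has the chi-square(p) mean p and variance 2p.
import numpy as np

rng = np.random.default_rng(4)
p = 3
A = rng.normal(size=(p, p))
Sigma = A @ A.T + np.eye(p)          # non-singular positive definite
mu = np.array([1.0, -2.0, 0.5])

Y = rng.multivariate_normal(mu, Sigma, size=200_000)
Sinv = np.linalg.inv(Sigma)
D2 = np.einsum('ij,jk,ik->i', Y - mu, Sinv, Y - mu)

assert abs(D2.mean() - p) < 0.05     # chi-square(p) mean is p
assert abs(D2.var() - 2 * p) < 0.2   # chi-square(p) variance is 2p
```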

GLH Test: H_0: \beta_1 = \beta_2 = 0

1-0 Coding

### read input and make a copy ###
> dd <- read.table("data1.txt", header = T)
> dd2 <- dd
> 
> ### make required variables ###
> dd2$Sex <- ifelse(dd2$Sex == "M",1,0)
> dd2$sex.Educ <- dd2$Sex*dd2$Educ
> 
> ### create X and Y matrices
> X1<-as.matrix(dd2[,1])
> X2<-as.matrix(dd2[,2])
> Y<-as.matrix(dd2[,3])
> 
> ### without interaction
> fit<-lm(Y ~ X1 + X2)
> ### with interaction
> fit.interaction<-lm(Y ~ X1 * X2)
> 
> ### define GLH matrices
> L<-as.matrix(cbind(c(0,0),diag(2)))
> L.interaction<-as.matrix(cbind(c(0,0),diag(2),c(0,0)))
> 
> ### define GLH function
> glh<-function(fit,L,eta0=0){
+ 
+ betahat<-coef(fit)
+ etahat<-L%*%betahat
+ deltahat<-etahat-eta0
+ V<-vcov(fit)
+ VL<-L%*%V%*%t(L)
+ Fobs<-t(deltahat)%*%solve(VL)%*%deltahat/nrow(L)
+ ### mistake in the notes.. need 1 - pf to obtain p-value ###
+ list(F=Fobs, "p-value" = 1 - pf(Fobs,nrow(L),fit$df.residual))
+ 
+ }
> 
> ### without interaction terms
> glh(fit,L)
$F
        [,1]
[1,] 42.2489
$`p-value`
             [,1]
[1,] 2.663697e-05
> ### with interaction terms
> glh(fit.interaction,L.interaction)
$F
         [,1]
[1,] 17.49585
$`p-value`
            [,1]
[1,] 0.001199006
> 
> ### we know the glh method works because summary(fit) tells us that 
> ### the p-value on the overall F-statistic is the same as the one the 
> ### glh function computes as expected

Sbush 03:38, 6 Oct 2006 (EDT)
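The glh computation above can be sketched in Python as well (numpy/scipy assumed; function and variable names here are mine), checked against the 1/-1 coded data printed in the transcript below:

```python
# Python sketch of the GLH F-test: F = delta'(L V L')^{-1} delta / h,
# p-value from the F(h, df_residual) distribution.
import numpy as np
from scipy import stats

def glh(betahat, V, L, df_residual, eta0=0.0):
    """F-test of H0: L beta = eta0, given estimates and their covariance."""
    deltahat = L @ betahat - eta0
    VL = L @ V @ L.T
    h = L.shape[0]                                 # rows of L
    Fobs = float(deltahat @ np.linalg.solve(VL, deltahat)) / h
    pval = 1.0 - stats.f.cdf(Fobs, h, df_residual)
    return Fobs, pval

# salary data from the assignment, Sex coded M = 1, F = -1
sex  = np.array([1, 1, 1, 1, 1, 1, -1, -1, -1, -1, -1, -1])
educ = np.array([5, 9, 10, 12, 14, 16, 8, 11, 11, 14, 14, 15])
sal  = np.array([17.2, 28.9, 42.5, 56.1, 56.6, 70.5,
                 21.6, 18.5, 17.9, 31.8, 35.2, 39.5])

X = np.column_stack([np.ones(12), sex, educ])      # model without interaction
betahat, *_ = np.linalg.lstsq(X, sal, rcond=None)
resid = sal - X @ betahat
df = 12 - 3
s2 = resid @ resid / df
V = s2 * np.linalg.inv(X.T @ X)                    # vcov of betahat

L = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])   # H0: beta1 = beta2 = 0
Fobs, pval = glh(betahat, V, L, df)
# the transcript reports F = 42.2489, p-value = 2.663697e-05 for this model
```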

code F: -1 ; M: 1

Without Interaction

> a<- read.csv("Q:\\R\\data.csv",header=T)
> a$Sex<- ifelse(a$Sex=="M",1,-1)
> X1<-as.matrix(a[,1])
>  X2<-as.matrix(a[,2])
> Y<-as.matrix(a[,3])
> fit<-lm(Y~X1+X2)
> fit
Call:
lm(formula = Y ~ X1 + X2)
Coefficients:
(Intercept)           X1           X2  
    -13.513       11.453        4.305  
> L<-as.matrix(cbind(c(0,0),diag(2)))
> L
     [,1] [,2] [,3]
[1,]    0    1    0
[2,]    0    0    1
> glh<-function(fit,L,eta0=0){
+  betahat<-coef(fit)
+  etahat<-L%*%betahat
+ deltahat<-etahat-eta0
+ V<-vcov(fit)
+  VL<-L%*%V%*%t(L)
+  Fobs<-t(deltahat)%*%solve(VL)%*%deltahat/nrow(L)
+ list(F=Fobs,"P-value"=1-pf(Fobs,nrow(L),fit$df.resid))
+ }
>  glh(fit,L)
$F
        [,1]
[1,] 42.2489
$"P-value"
             [,1]
[1,] 2.663697e-05
Then we reject the null hypothesis \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}= \begin{pmatrix} 0 \\ 0 \end{pmatrix}

With Interaction

> a<- read.csv("Q:\\R\\data.csv",header=T)
> a$Sex<- ifelse(a$Sex=="M",1,-1)
> a$Sex.Educ<-a$Sex*a$Educ
> a
   Sex Educ  Sal Sex.Educ
1    1    5 17.2        5
2    1    9 28.9        9
3    1   10 42.5       10
4    1   12 56.1       12
5    1   14 56.6       14
6    1   16 70.5       16
7   -1    8 21.6       -8
8   -1   11 18.5      -11
9   -1   11 17.9      -11
10  -1   14 31.8      -14
11  -1   14 35.2      -14
12  -1   15 39.5      -15
>  X1<-as.matrix(a[,1])
>  X2<-as.matrix(a[,2])
> X3<-as.matrix(a[,4])
> Y<-as.matrix(a[,3])
> fit1<-lm(Y~X1*X2,a)
> fit1 
Call:
lm(formula = Y ~ X1 * X2, data = a)
Coefficients:
(Intercept)           X1           X2        X1:X2  
    -8.6355      -0.3264       3.9346       0.9983    
> L1<-as.matrix(cbind(c(0,0),diag(2),c(0,0)))
> glh<-function(fit1,L,eta0=0){
+   betahat<-coef(fit1)
+ etahat<-L1%*%betahat
+  deltahat<-etahat-eta0
+  betahat<-coef(fit1)
+  V<-vcov(fit1)
+ VL<-L1%*%V%*%t(L1)
+  Fobs<-t(deltahat)%*%solve(VL)%*%deltahat/nrow(L1) 
+  ### note: the residual df should come from fit1 here ###
+  list(F=Fobs,"P-value"=1-pf(Fobs,nrow(L1),fit1$df.residual))
+ }
>  glh(fit1,L1)
$F
         [,1]
[1,] 31.43431
$"P-value"
             [,1]
[1,] 8.702904e-05
Also we reject the null hypothesis \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}= \begin{pmatrix} 0 \\ 0 \end{pmatrix}

1-2 coding

Without Interaction

> dd<-read.csv("Q:\\6630a1.csv",header=T)
> dd2<-dd
> dd2$Sex<-ifelse(dd2$Sex=="M",2,1)
> dd2
    X Sex Educ  Sal
1   1   2    5 17.2
2   2   2    9 28.9
3   3   2   10 42.5
4   4   2   12 56.1
5   5   2   14 56.6
6   6   2   16 70.5
7   7   1    8 21.6
8   8   1   11 18.5
9   9   1   11 17.9
10 10   1   14 31.8
11 11   1   14 35.2
12 12   1   15 39.5
>  X1<-as.matrix(dd2[,2])
>  X2<-as.matrix(dd2[,3])
> Y<-as.matrix(dd2[,4])
>  fit<-lm(Y~X1+X2,dd2)
> fit
Call:
lm(formula = Y ~ X1 + X2, data = dd2)
Coefficients:
(Intercept)           X1           X2  
    -47.872       22.906        4.305  
>  L<-cbind(rep(0,2),diag(2))
> L
     [,1] [,2] [,3]
[1,]    0    1    0
[2,]    0    0    1
> glh<-function(fit,L,eta0=0){
+ betahat<-coef(fit)
+ etahat<-L%*%betahat
+ deltahat<-etahat-eta0
+ V<-vcov(fit)
+ VL<-L%*%V%*%t(L)
+ Fobs<-t(deltahat)%*%solve(VL)%*%deltahat/nrow(L)
+ list(F=Fobs, "p-value" = 1 - pf(Fobs,nrow(L),fit$df.residual))
+ }
> glh(fit,L)
$F
        [,1]
[1,] 42.2489
$"p-value"
             [,1]
[1,] 2.663697e-05
###Since the P-value is so small, we reject H0 and conclude that at least one of Sex and Educ should be in the model###

With Interaction

> dd<-read.csv("Q:\\6630a1.csv",header=T)
> dd2<-dd
> dd2$Sex<-ifelse(dd2$Sex=="M",2,1)
> dd2
    X Sex Educ  Sal
1   1   2    5 17.2
2   2   2    9 28.9
3   3   2   10 42.5
4   4   2   12 56.1
5   5   2   14 56.6
6   6   2   16 70.5
7   7   1    8 21.6
8   8   1   11 18.5
9   9   1   11 17.9
10 10   1   14 31.8
11 11   1   14 35.2
12 12   1   15 39.5
>  X1<-as.matrix(dd2[,2])
>  X2<-as.matrix(dd2[,3])
> Y<-as.matrix(dd2[,4])
> fit<-lm(Y~X1*X2,dd2)
> fit
Call:
lm(formula = Y ~ X1 * X2, data = dd2)
Coefficients:
(Intercept)           X1           X2        X1:X2  
    -7.6563      -0.6528       0.9398       1.9965  
> L<-diag(4)[c(2,3),]
> L
     [,1] [,2] [,3] [,4]
[1,]    0    1    0    0
[2,]    0    0    1    0
>  glh<-function(fit,L,eta0=0){
+  betahat<-coef(fit)
+  etahat<-L%*%betahat
+  deltahat<-etahat-eta0
+ V<-vcov(fit)
+ VL<-L%*%V%*%t(L)
+  Fobs<-t(deltahat)%*%solve(VL)%*%deltahat/nrow(L)
+ list(F=Fobs, "p-value" = 1 - pf(Fobs,nrow(L),fit$df.residual))
+ }
> glh(fit,L)
$F
         [,1]
[1,] 1.463782
$"p-value"
          [,1]
[1,] 0.2872543
###Since the P-value is relatively large, we fail to reject H0: given the interaction term, the main effects of Sex and Educ are not individually significant.###

Confidence Ellipses

1-0 Coding

> ### read input and make a copy ###
> dd <- read.table("data1.txt", header = T)
> dd2 <- dd
> 
> ### make required variables ###
> dd2$Sex <- ifelse(dd2$Sex == "M",1,0)
> dd2$sex.Educ <- dd2$Sex*dd2$Educ
> 
> ### create X and Y matrices
> X1<-as.matrix(dd2[,1])
> X2<-as.matrix(dd2[,2])
> Y<-as.matrix(dd2[,3])
> 
> ### without interaction
> fit<-lm(Y ~ X1 + X2)
> ### with interaction
> fit.interaction<-lm(Y ~ X1 * X2)
> 
> ### define GLH matrices
> L<-as.matrix(cbind(c(0,0),diag(2)))
> L.interaction<-as.matrix(cbind(c(0,0),diag(2),c(0,0)))
> 
> ### write a function for ellipses
> circ<-function(n=100){
+  thetas<-((0:n)/n)*2*pi
+  cbind(cos(thetas),sin(thetas))
+ }
> 
> ell<-function(mu=0,V,r=1){
+  cir<-circ()
+  ee<-cir%*%chol(V)
+  t(t(r*ee)+mu)
+ }
> 
> 
> ### build function for confidence ellipses for "beta-space"
> ConfEll<-function(fit,L,alpha=.95,CIGE="false"){ 
+  betas<-fit$coefficients
+  eta<-L%*%betas
+  sigma<-L%*%vcov(fit)%*%t(L)
+  h<-ifelse(CIGE=="true",1,nrow(eta))
+  v<-fit$df.residual
+  F<-sqrt(h*qf(alpha,h,v)) ### alpha=.95 is the coverage, so use qf(alpha,...)
+  plot(ell(as.vector(eta),sigma,F),type="l",xlab="B1",ylab="B2")
+ }
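The same ellipse construction can be sketched in Python (numpy assumed; names are mine): points on the unit circle are mapped through a Cholesky factor of V and scaled by r, so every point z on the curve satisfies (z - mu)'V^{-1}(z - mu) = r^2. The mu and V values below are the etahat and Sigma printed in the transcript further down.

```python
# Ellipse of radius r for center mu and shape matrix V: map the unit
# circle through a Cholesky factor of V, scale by r, shift by mu.
import numpy as np

def ell(mu, V, r=1.0, n=100):
    thetas = np.linspace(0.0, 2.0 * np.pi, n + 1)
    circle = np.column_stack([np.cos(thetas), np.sin(thetas)])
    return r * circle @ np.linalg.cholesky(V).T + mu

V = np.array([[3.0818860, 0.1877291],
              [0.1877291, 0.3218213]])   # vcov block from the transcript
mu = np.array([11.453158, 4.305414])     # etahat from the transcript
pts = ell(mu, V, r=2.0)

# every point on the curve satisfies the defining quadratic form = r^2
d = pts - mu
q = np.einsum('ij,jk,ik->i', d, np.linalg.inv(V), d)
assert np.allclose(q, 4.0)
```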

Data Ellipse


Confidence Ellipse for parameters (No Interaction)


Confidence Ellipse for parameters (Interaction)


CIGE for parameters (No Interaction)


CIGE for parameters (Interaction)


code F: -1 ; M: 1

data ellipse,95% confidence ellipse without interaction,95% confidence general ellipse without interaction

> a<- read.csv("Q:\\R\\data.csv",header=T)
> a$Sex<- ifelse(a$Sex=="M",1,-1)
> X1<-as.matrix(a[,1])
> X2<-as.matrix(a[,2])
> Y<-as.matrix(a[,3])
> fit<-lm(Y~X1+X2)
> L<-as.matrix(cbind(c(0,0),diag(2)))
> m2<-mean(X2) 
> m1<-mean(X1)
> v1<-var(X1)
> v2<-var(X2)
> cov<-cov(X1,X2)
> sigma<-matrix(c(v1,cov,cov,v2),ncol=2)
> sigma
           [,1]       [,2]
[1,]  1.0909091 -0.6363636
[2,] -0.6363636 10.4469697
>  r<-1
> mu<-c(m1,m2)
> h<-nrow(L)
> p<-ncol(L)
> elpha<-0.05
> dim<-2
> etahat <- (L %*% coef(fit))[c(1, 2)]
> etahat
[1] 11.453158  4.305414
> Sigma <- L %*% vcov(fit) %*% t(L)
> Sigma
          [,1]      [,2]
[1,] 3.0818860 0.1877291
[2,] 0.1877291 0.3218213
> n<-12  ### sample size: 12 observations, so residual df = n-p
> r1<-sqrt(h * qf(1-elpha, h, n-p)) 
> r2 <- sqrt(dim * qf(1-elpha, dim, n-p))
> circ <- function(n) 
+ { thetas <-((0:n)/n) * 2 * pi
+  cbind(cos(thetas), sin(thetas)) 
+  } 
> ell <- function(mu, A, r)
+ {
+  cc <- circ(100)
+ ee <- cc %*% chol(A)
+ t(t(r * ee) + mu)
+ }
> plot(ell(mu,sigma,r),type="l")
> plot(ell(etahat,Sigma,r1), type = "l")
> plot(ell(etahat,Sigma,r2), type = "l")

[[1] (http://wiki.math.yorku.ca/index.php/Image:Gdatae.jpeg)] [[2] (http://wiki.math.yorku.ca/index.php/Image:G95CI1.jpeg)] [[3] (http://wiki.math.yorku.ca/index.php/Image:G95CGI1.jpeg)]

95% confidence ellipse with interaction,95% confidence general ellipse with interaction

> a<- read.csv("Q:\\R\\data.csv",header=T)
> a$Sex<- ifelse(a$Sex=="M",1,-1)
> a$Sex.Educ<-a$Sex*a$Educ 
> X1<-as.matrix(a[,1])
> X2<-as.matrix(a[,2]) 
> X3<-as.matrix(a[,4])
> Y<-as.matrix(a[,3])
> fit1<-lm(Y~X1*X2,a)
> r<-1
> L1<-as.matrix(cbind(c(0,0),diag(2),c(0,0)))
> p<-ncol(L1)
> elpha<-0.05
> dim<-2
> etahat <- (L1 %*% coef(fit1))[c(1, 2)]
> etahat
[1] -0.3263756  3.9346292
> Sigma <- L1 %*% vcov(fit1) %*% t(L1)
> Sigma
          [,1]      [,2]
[1,] 43.573597 1.4433561
[2,]  1.443356 0.2954238
> n<-12  ### sample size: 12 observations
> h<-nrow(L1)
> r1<-sqrt(h * qf(1-elpha, h, n-p))
> r2 <- sqrt(dim * qf(1-elpha, dim, n-p))
> circ <- function(n)
+ { thetas <-((0:n)/n) * 2 * pi
+  cbind(cos(thetas), sin(thetas))
+ 
+  }
>  ell <- function(mu, A, r)
+  {
+  cc <- circ(100)
+  ee <- cc %*% chol(A) 
+  t(t(r * ee) + mu)
+ }
> plot(ell(etahat,Sigma,r1), type = "l")
> plot(ell(etahat,Sigma,r2), type = "l")

[[4] (http://wiki.math.yorku.ca/index.php/Image:G95CI2.jpeg)] [[5] (http://wiki.math.yorku.ca/index.php/Image:G95CGI2.jpeg)]



1-2 coding


without interaction


data ellipse

> dd<-read.csv("Q:\\6630a1.csv",header=T)
> dd2<-dd
> dd2$Sex<-ifelse(dd2$Sex=="M",2,1)
> X1<-as.matrix(dd2[,2])
> X2<-as.matrix(dd2[,3])
> Y<-as.matrix(dd2[,4])
> fit<-lm(Y~X1+X2,dd2)
> cir<-function(n=100){
+ thetas<-((0:n)/n)*2*pi
+ cbind(cos(thetas),sin(thetas))
+  }
> Sex<-dd2$Sex
> Educ<-dd2$Educ
> mu<-c(mean(Sex),mean(Educ))
> Sigma<-matrix(c(var(Sex),cov(Educ,Sex),cov(Sex,Educ),var(Educ)),ncol=2)
> r<-1
> ell<-function(mu,V,r){
+ cir<-cir()
+ ee<-cir%*%chol(V)
+ t(t(r*ee)+mu)
+  }
> plot(ell(mu,Sigma,r),type="l")

[[6] (http://wiki.math.yorku.ca/index.php/Image:6630_a2_1.jpeg)]


95% Confidence Ellipse

> dd<-read.csv("Q:\\6630a1.csv",header=T)
> dd2<-dd
> dd2$Sex<-ifelse(dd2$Sex=="M",2,1)
> X1<-as.matrix(dd2[,2])
> X2<-as.matrix(dd2[,3])
> Y<-as.matrix(dd2[,4])
> fit<-lm(Y~X1+X2,dd2)
> cir<-function(n=100){
+ thetas<-((0:n)/n)*2*pi
+ cbind(cos(thetas),sin(thetas))
+  }
> Sex<-dd2$Sex
> Educ<-dd2$Educ
> mu<-c(mean(Sex),mean(Educ))
> Sigma<-matrix(c(var(Sex),cov(Educ,Sex),cov(Sex,Educ),var(Educ)),ncol=2)
> r<-1
> ell<-function(mu,V,r){
+ cir<-cir()
+ ee<-cir%*%chol(V)
+ t(t(r*ee)+mu)
+  }
> n<-12
> L <- diag(3)[c(2,3), ]
> h <- nrow(L)
> p <- ncol(L)
> elpha <- 0.05
> dim <- 2
> etahat <- (L %*% coef(fit))[c(1, 2)]
> V<- L %*% vcov(fit) %*% t(L)
> r_ci <- sqrt(h * qf(1-elpha, h, n-p)) 
> plot(ell(etahat,V,r_ci),type="l")

[[7] (http://wiki.math.yorku.ca/index.php/Image:6630_a2_2.jpeg)]


95% CIGE

> dd<-read.csv("Q:\\6630a1.csv",header=T)
> dd2<-dd
> dd2$Sex<-ifelse(dd2$Sex=="M",2,1)
> X1<-as.matrix(dd2[,2])
> X2<-as.matrix(dd2[,3])
> Y<-as.matrix(dd2[,4])
> fit<-lm(Y~X1+X2,dd2)
> cir<-function(n=100){
+ thetas<-((0:n)/n)*2*pi
+ cbind(cos(thetas),sin(thetas))
+  }
> Sex<-dd2$Sex
> Educ<-dd2$Educ
> mu<-c(mean(Sex),mean(Educ))
> Sigma<-matrix(c(var(Sex),cov(Educ,Sex),cov(Sex,Educ),var(Educ)),ncol=2)
> r<-1
> ell<-function(mu,V,r){
+ cir<-cir()
+ ee<-cir%*%chol(V)
+ t(t(r*ee)+mu)
+  }
> n<-12
> L <- diag(3)[c(2,3), ]
> h <- nrow(L)
> p <- ncol(L)
> elpha <- 0.05
> dim <- 2
> etahat <- (L %*% coef(fit))[c(1, 2)]
> V<- L %*% vcov(fit) %*% t(L)
> r_cige <- sqrt(dim * qf(1-elpha, dim, n-p))
> plot(ell(etahat,V,r_cige),type="l")

[[8] (http://wiki.math.yorku.ca/index.php/Image:6630_a2_3.jpeg)]


with interaction


data ellipse

> dd<-read.csv("Q:\\6630a1.csv",header=T)
> dd2<-dd
> dd2$Sex<-ifelse(dd2$Sex=="M",2,1)
> X1<-as.matrix(dd2[,2])
> X2<-as.matrix(dd2[,3])
> Y<-as.matrix(dd2[,4])
> fit<-lm(Y~X1*X2,dd2)
> cir<-function(n=100){
+ thetas<-((0:n)/n)*2*pi
+ cbind(cos(thetas),sin(thetas))
+  }
> Sex<-dd2$Sex
> Educ<-dd2$Educ
> mu<-c(mean(Sex),mean(Educ))
> Sigma<-matrix(c(var(Sex),cov(Educ,Sex),cov(Sex,Educ),var(Educ)),ncol=2)
> r<-1
> ell<-function(mu,V,r){
+ cir<-cir()
+ ee<-cir%*%chol(V)
+ t(t(r*ee)+mu)
+  }
> plot(ell(mu,Sigma,r),type="l")

[[9] (http://wiki.math.yorku.ca/index.php/Image:6630_a2_4.jpeg)]


95% Confidence Ellipse

> dd<-read.csv("Q:\\6630a1.csv",header=T)
> dd2<-dd
> dd2$Sex<-ifelse(dd2$Sex=="M",2,1)
> X1<-as.matrix(dd2[,2])
> X2<-as.matrix(dd2[,3])
> Y<-as.matrix(dd2[,4])
> fit<-lm(Y~X1*X2,dd2)
> cir<-function(n=100){
+ thetas<-((0:n)/n)*2*pi
+ cbind(cos(thetas),sin(thetas))
+  }
> Sex<-dd2$Sex
> Educ<-dd2$Educ
> mu<-c(mean(Sex),mean(Educ))
> Sigma<-matrix(c(var(Sex),cov(Educ,Sex),cov(Sex,Educ),var(Educ)),ncol=2)
> r<-1
> ell<-function(mu,V,r){
+ cir<-cir()
+ ee<-cir%*%chol(V)
+ t(t(r*ee)+mu)
+  }
> n<-12
> L <- diag(4)[c(2,3), ]
> h <- nrow(L)
> p <- ncol(L)
> elpha <- 0.05
> dim <- 2
> etahat <- (L %*% coef(fit))[c(1, 2)]
> V<- L %*% vcov(fit) %*% t(L)
> r_ci <- sqrt(h * qf(1-elpha, h, n-p))
> plot(ell(etahat,V,r_ci),type="l")

[[10] (http://wiki.math.yorku.ca/index.php/Image:6630_a2_5.jpeg)]


95% CIGE


> dd<-read.csv("Q:\\6630a1.csv",header=T)
> dd2<-dd
> dd2$Sex<-ifelse(dd2$Sex=="M",2,1)
> X1<-as.matrix(dd2[,2])
> X2<-as.matrix(dd2[,3])
> Y<-as.matrix(dd2[,4])
> fit<-lm(Y~X1*X2,dd2)
> cir<-function(n=100){
+ thetas<-((0:n)/n)*2*pi
+ cbind(cos(thetas),sin(thetas))
+  }
> Sex<-dd2$Sex
> Educ<-dd2$Educ
> mu<-c(mean(Sex),mean(Educ))
> Sigma<-matrix(c(var(Sex),cov(Educ,Sex),cov(Sex,Educ),var(Educ)),ncol=2)
> r<-1
> ell<-function(mu,V,r){
+ cir<-cir()
+ ee<-cir%*%chol(V)
+ t(t(r*ee)+mu)
+  }
> n<-12
> L <- diag(4)[c(2,3), ]
> h <- nrow(L)
> p <- ncol(L)
> elpha <- 0.05
> dim <- 2
> etahat <- (L %*% coef(fit))[c(1, 2)]
> V<- L %*% vcov(fit) %*% t(L)
> r_cige <- sqrt(dim * qf(1-elpha, dim, n-p))
> plot(ell(etahat,V,r_cige),type="l")

[[11] (http://wiki.math.yorku.ca/index.php/Image:6630_a2_6.jpeg)]