cumlogit               package:glmmAK               R Documentation

_C_u_m_u_l_a_t_i_v_e _l_o_g_i_t _m_o_d_e_l _f_o_r _o_r_d_i_n_a_l _r_e_s_p_o_n_s_e_s

_D_e_s_c_r_i_p_t_i_o_n:

     Fits the cumulative logit model by maximum likelihood. The
     log-likelihood is maximized using the Newton-Raphson algorithm.
     The function returns the inverses of both the observed and the
     expected information matrix. The model summary produced by the
     'summary' function uses the inverse of the observed information
     matrix for inference by default; the user can request the
     expected information matrix instead via the 'vcov' argument.
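     Either way, Wald-type standard errors are the square roots of the
     diagonal of the chosen inverse information matrix ('vcov' or
     'expect.vcov' in the returned object). A minimal sketch with a
     hypothetical 2 x 2 variance matrix 'V', not taken from a real fit:

```r
## Hypothetical inverse information (variance) matrix; in practice
## this would be fit$vcov or fit$expect.vcov from a 'cumlogit' fit.
V <- matrix(c(0.04, 0.01,
              0.01, 0.09), nrow = 2)

## Standard errors are square roots of the diagonal elements.
se <- sqrt(diag(V))
se   # 0.2, 0.3
```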

_U_s_a_g_e:

     cumlogit(y, v, x, C=1, logit.order=c("decreasing", "increasing"),
        epsilon=1e-08, maxit=25, trace=FALSE)

     ## S3 method for class 'cumlogit':
     print(x, vcov=c("observed", "expected"), ...)

     ## S3 method for class 'cumlogit':
     summary(object, vcov=c("observed", "expected"), ...)

_A_r_g_u_m_e_n_t_s:

       y: response vector taking values 0,1,...,C.

       v: matrix or data.frame with covariates whose effects are not
          assumed to satisfy the proportional odds assumption.

          An intercept is included in the model by default and should
          be included neither in 'x' nor in 'v'.

       x: vector, matrix or data.frame with covariates whose effects
          are assumed to satisfy the proportional odds assumption.

       C: number of response categories minus 1.

logit.order: either "decreasing" or "increasing" indicating in which
          direction the logits are formed.

          For 'logit.order="decreasing"':   

              log[P(Y>=1)/P(Y=0)]   =   beta'x + gamma[1]'v
             log[P(Y>=2)/P(Y<=1)]   =   beta'x + gamma[2]'v
                                   ...  
            log[P(Y=C)/P(Y<=C-1)]   =   beta'x + gamma[C]'v

          For 'logit.order="increasing"':        

              log[P(Y=0)/P(Y>=1)]   =   beta'x + gamma[1]'v
             log[P(Y<=1)/P(Y>=2)]   =   beta'x + gamma[2]'v
                                   ...  
            log[P(Y<=C-1)/P(Y=C)]   =   beta'x + gamma[C]'v
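          Under 'logit.order="decreasing"', inverting the logits gives
          the cumulative probabilities, and differencing those gives
          the category probabilities. A minimal sketch for C = 2
          (three categories) with made-up linear predictor values
          'eta':

```r
## Made-up decreasing-logit linear predictors for one observation:
## eta[j] = log[P(Y >= j) / P(Y <= j - 1)], j = 1, 2.
eta <- c(1.0, -0.5)

## Inverse logit gives the cumulative probabilities P(Y >= j).
cumprob <- 1 / (1 + exp(-eta))

## Category probabilities by successive differencing; they sum to one.
prob <- c(1 - cumprob[1],            # P(Y = 0)
          cumprob[1] - cumprob[2],   # P(Y = 1)
          cumprob[2])                # P(Y = 2)
sum(prob)   # 1
```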

    vcov: character indicating which information matrix should be
          used for inference.

 epsilon: positive convergence tolerance epsilon. The iterations
          converge when

              abs((l[new] - l[old])/l[new]) <= epsilon,

          where l denotes the value of the log-likelihood. 
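          The stopping rule above can be sketched directly
          (illustrative log-likelihood values, not from a real fit):

```r
## Relative change in the log-likelihood between two consecutive
## Newton-Raphson iterations (made-up values):
l.old <- -1234.567
l.new <- -1234.566
epsilon <- 1e-8

## Iterations stop once the relative change drops below epsilon.
converged <- abs((l.new - l.old) / l.new) <= epsilon
converged   # FALSE: relative change is about 8.1e-7, still above epsilon
```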

   maxit: integer giving the maximal number of iterations.

   trace: logical indicating if output should be produced for each
          iteration.

  object: an object of class "cumlogit".

     ...: other arguments passed to 'print' or 'summary'.

_V_a_l_u_e:

     An object of class "cumlogit". This has components 

coefficients: the coefficients of the linear predictor.

  loglik: the value of the log-likelihood.

   score: the score vector.

    vcov: the inverse of the observed information matrix.

expect.vcov: the inverse of the expected information matrix.

logit.order: character indicating the way in which the logits are
          formed.

linear.predictors: the values of the linear predictor for each
          observation and each logit.

fitted.values: the values of fitted category probabilities for each
          observation.

converged: logical indicating whether the optimization routine
          converged.

    iter: number of iterations performed.

       C: see the function argument

       y: see the function argument

       v: see the function argument

       x: see the function argument

_A_u_t_h_o_r(_s):

     Arnošt Komárek arnost.komarek[AT]mff.cuni.cz

_R_e_f_e_r_e_n_c_e_s:

     Agresti, A. (2002). _Categorical Data Analysis. Second edition_.
     Hoboken: John Wiley & Sons. Section 7.2.

_S_e_e _A_l_s_o:

     'glm', 'polr'.

_E_x_a_m_p_l_e_s:

     ## Simulate the data for C = 3
     ## ============================
     set.seed(775988621)
     N3 <- 1000

       ## covariates:
     x3 <- data.frame(x1=rnorm(N3), x2=runif(N3, 0, 1))
     v3 <- data.frame(v1=rnorm(N3), v2=runif(N3, 0, 1))

       ## regression coefficients
       ## (theta = c(0.1, -0.2,   1, 0.1, -0.2,   0, 0.05, -0.25,   -0.5, 0.05, -0.25)):
     alpha3 <- c(1, 0, -0.5)     
     beta3 <- c(0.1, -0.2)
     gamma3 <- list(c(0.1, -0.2), c(0.05, -0.25), c(0.05, -0.25))

      ## linear predictors and inverse logits:
     eta3 <- data.frame(eta1=alpha3[1] + as.matrix(x3) %*% beta3 +
                             as.matrix(v3) %*% gamma3[[1]],
                        eta2=alpha3[2] + as.matrix(x3) %*% beta3 +
                             as.matrix(v3) %*% gamma3[[2]],
                        eta3=alpha3[3] + as.matrix(x3) %*% beta3 +
                             as.matrix(v3) %*% gamma3[[3]])                    
     ilogit3 <- data.frame(ilogit1 = exp(eta3[,1])/(1 + exp(eta3[,1])),
                           ilogit2=exp(eta3[,2])/(1 + exp(eta3[,2])),
                           ilogit3=exp(eta3[,3])/(1 + exp(eta3[,3])))                         

      ## category probabilities:
     pis3 <- data.frame(pi0=1-ilogit3[,1],
                        pi1=ilogit3[,1]-ilogit3[,2],
                        pi2=ilogit3[,2]-ilogit3[,3],
                        pi3=ilogit3[,3])

      ## response:
     rtemp <- function(prob, C){
       return(sample(0:C, size=1, prob=prob))
     }
     y3 <- apply(pis3, 1, rtemp, C=3)

     ## Fit the model
     ## ===============
     fit <- cumlogit(y=y3, x=x3, v=v3, C=3)
     print(fit)

     fit2 <- cumlogit(y=y3, x=x3, v=v3, C=3, logit.order="increasing")
     print(fit2)

