forwardback           package:HiddenMarkov           R Documentation

_F_o_r_w_a_r_d _a_n_d _B_a_c_k_w_a_r_d _P_r_o_b_a_b_i_l_i_t_i_e_s

_D_e_s_c_r_i_p_t_i_o_n:

     These functions calculate the forward and backward probabilities
     for a 'dthmm' process, as defined in MacDonald & Zucchini (1997,
     Page 60).

_U_s_a_g_e:

     backward(x, Pi, distn, pm, pn = NULL)
     forward(x, Pi, delta, distn, pm, pn = NULL)
     forwardback(x, Pi, delta, distn, pm, pn = NULL)

_A_r_g_u_m_e_n_t_s:

       x: is a vector of length n containing the observed process.

      Pi: is the m times m transition probability matrix of the hidden
          Markov chain.

   delta: is the marginal probability distribution of the m hidden
          states.

   distn: is a character string with the distribution name, e.g.
          '"norm"' or '"pois"'. If the distribution is specified as
          '"wxyz"' then a probability (or density) function called
          '"dwxyz"' should be available, in the standard R format (e.g.
          'dnorm' or 'dpois').

      pm: is a list object containing the current (Markov dependent)
          parameter estimates associated with the distribution of the
          observed process (see 'dthmm').

      pn: is a list object containing the observation dependent
          parameter values associated with the distribution of the
          observed process (see 'dthmm').

_D_e_t_a_i_l_s:

     Denote the n times m matrices containing the forward and backward
     probabilities as A and B, respectively. Then the (i,j)th elements
     are

       alpha_{ij} = Pr{ X_1 = x_1, ..., X_i = x_i, C_i = j }

     and

       beta_{ij} = Pr{ X_{i+1} = x_{i+1}, ..., X_n = x_n | C_i = j }.

     Further, the diagonal elements of the product matrix A B' are all
     equal, taking the value of the likelihood; hence the logarithm of
     each diagonal element equals the log-likelihood.
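     The forward recursion behind the matrix A can be sketched directly.
     Below is a minimal scaled forward recursion for a two-state Poisson
     HMM with made-up parameters (a sketch of the standard algorithm,
     not the package implementation); the accumulated log of the scale
     factors yields the log-likelihood:

```r
# Sketch of the scaled forward recursion for a Poisson HMM.
# All parameter values below are illustrative, not from the package.
Pi <- matrix(c(0.9, 0.1,
               0.2, 0.8), byrow = TRUE, nrow = 2)
delta  <- c(0.5, 0.5)   # initial state distribution
lambda <- c(1, 5)       # Poisson means, one per state
x <- c(0, 1, 6, 4, 0)   # observed series

n <- length(x); m <- nrow(Pi)
logalpha <- matrix(0, nrow = n, ncol = m)

phi <- delta * dpois(x[1], lambda)            # alpha_1
lscale <- log(sum(phi)); phi <- phi / sum(phi)
logalpha[1, ] <- log(phi) + lscale
for (i in 2:n) {
  phi <- (phi %*% Pi) * dpois(x[i], lambda)   # alpha_i = (alpha_{i-1} Pi) * p_i(x_i)
  lscale <- lscale + log(sum(phi)); phi <- phi / sum(phi)
  logalpha[i, ] <- log(phi) + lscale
}
LL <- lscale                                  # log-likelihood
print(LL)
```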

_V_a_l_u_e:

     The function 'forwardback' returns a list with two matrices
     containing the logarithms of the forward and backward
     probabilities, 'logalpha' and 'logbeta', respectively, and the
     log-likelihood ('LL').

     The function 'forward' returns a matrix containing the logarithms
     of the forward probabilities ('logalpha'), and 'backward' a matrix
     containing the logarithms of the backward probabilities
     ('logbeta').

_A_u_t_h_o_r(_s):

     David Harte, 2005. The algorithm has been taken from Zucchini
     (2005).

_R_e_f_e_r_e_n_c_e_s:

     MacDonald, I.L. & Zucchini, W. (1997). _Hidden Markov and Other
     Models for Discrete-valued Time Series._ Chapman and Hall/CRC,
     Boca Raton.

     Zucchini, W. (2005). _Hidden Markov Models Short Course, 3-4 April
     2005._ Macquarie University, Sydney.

_S_e_e _A_l_s_o:

     'logLik'

_E_x_a_m_p_l_e_s:

     #    Set Parameter Values

     Pi <- matrix(c(1/2, 1/2,   0,   0,   0,
                    1/3, 1/3, 1/3,   0,   0,
                      0, 1/3, 1/3, 1/3,   0,
                      0,   0, 1/3, 1/3, 1/3,
                      0,   0,   0, 1/2, 1/2),
                  byrow=TRUE, nrow=5)

     p <- c(1, 4, 2, 5, 3)
     delta <- c(0, 1, 0, 0, 0)

     #------   Poisson HMM   ------

     x <- dthmm(NULL, Pi, delta, "pois", list(lambda=p), discrete=TRUE)

     x <- simulate(x, nsim=10)

     y <- forwardback(x$x, Pi, delta, "pois", list(lambda=p))

     # each diagonal element below should equal LL
     print(log(diag(exp(y$logalpha) %*% t(exp(y$logbeta)))))
     print(y$LL)

     #------   Gaussian HMM   ------

     x <- dthmm(NULL, Pi, delta, "norm", list(mean=p, sd=p/3))

     x <- simulate(x, nsim=10)

     y <- forwardback(x$x, Pi, delta, "norm", list(mean=p, sd=p/3))

     # each diagonal element below should equal LL
     print(log(diag(exp(y$logalpha) %*% t(exp(y$logbeta)))))
     print(y$LL)

