Viterbi             package:HiddenMarkov             R Documentation

_V_i_t_e_r_b_i _A_l_g_o_r_i_t_h_m _f_o_r _H_i_d_d_e_n _M_a_r_k_o_v _M_o_d_e_l _O_b_j_e_c_t_s

_D_e_s_c_r_i_p_t_i_o_n:

     Provides methods for the generic function 'Viterbi'. This predicts
     the most likely sequence of Markov states given the observed
     dataset. There is currently no method for objects of class
     '"mmpp"'.

_U_s_a_g_e:

     ## S3 method for class 'dthmm':
     Viterbi(object, ...)
     ## S3 method for class 'mmglm':
     Viterbi(object, ...)

_A_r_g_u_m_e_n_t_s:

  object: an object with class '"dthmm"' or '"mmglm"'.

     ...: other arguments.

_D_e_t_a_i_l_s:

     The purpose of the Viterbi algorithm is to _globally decode_ the
     underlying hidden Markov state at each time point. It does this by
     determining the sequence of states (k_1^*, \dots, k_n^*) which
     maximises the joint distribution of the hidden states given the
     entire observed process, i.e.

         (k_1^*, \dots, k_n^*) =
             \underset{k_1, \dots, k_n \in \{1, \dots, m\}}{\operatorname{argmax}}
             \Pr\{ C_1 = k_1, \dots, C_n = k_n \mid X^{(n)} = x^{(n)} \}\,.


     The algorithm has been taken from Zucchini (2005); however, we
     calculate sums of the logarithms of probabilities rather than
     products of probabilities, which lessens the chance of numerical
     underflow. Since the logarithmic function is monotonically
     increasing, the _argmax_ is unchanged. Note that _argmax_ can be
     evaluated with the R function 'which.max'.
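
     The log-space recursion can be sketched as below. This is an
     illustrative implementation, not the package's own code: the
     function name 'viterbi_log' and its arguments are assumptions,
     with 'Pi' the m x m transition matrix, 'delta' the initial state
     distribution, and 'logdens[i, k]' the log density of observation
     i under state k.

```r
# Illustrative log-space Viterbi sketch (not the HiddenMarkov internals).
viterbi_log <- function(Pi, delta, logdens) {
    n <- nrow(logdens)
    m <- ncol(logdens)
    logPi <- log(Pi)
    nu   <- matrix(NA, nrow = n, ncol = m)  # nu[i, k]: best log score of paths ending in state k
    back <- matrix(NA, nrow = n, ncol = m)  # back-pointers for path recovery
    nu[1, ] <- log(delta) + logdens[1, ]
    for (i in 2:n) {
        for (k in 1:m) {
            cand <- nu[i - 1, ] + logPi[, k]  # sums of logs, not products
            back[i, k] <- which.max(cand)
            nu[i, k] <- max(cand) + logdens[i, k]
        }
    }
    states <- numeric(n)
    states[n] <- which.max(nu[n, ])
    for (i in (n - 1):1) states[i] <- back[i + 1, states[i + 1]]
    states
}
```

     Because sums of logarithms replace products of probabilities, the
     recursion remains numerically stable even for long chains.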

     Determining the _a posteriori_ most probable state at time i is
     referred to as _local decoding_, i.e.

         k_i^* = \underset{k \in \{1, \dots, m\}}{\operatorname{argmax}}
             \Pr\{ C_i = k \mid X^{(n)} = x^{(n)} \}\,.

     Note that the above probabilities are calculated by the function
     'Estep', and are contained in 'u[i,j]' (output from 'Estep'), i.e.
     k_i^* is simply 'which.max(u[i,])'.
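
     Local decoding over all time points is then a single row-wise
     'which.max' over 'u'. The small matrix below is an illustrative
     stand-in for the 'u' component returned by 'Estep'.

```r
# u[i, j] plays the role of Pr{C_i = j | X^(n) = x^(n)}; this matrix
# is an illustrative stand-in for the 'u' output of Estep.
u <- matrix(c(0.9, 0.1,
              0.2, 0.8,
              0.6, 0.4), byrow = TRUE, ncol = 2)
local_states <- apply(u, 1, which.max)  # locally decoded state at each time i
```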

     The code for the methods '"dthmm"' and '"mmglm"' can be viewed by
     typing 'Viterbi.dthmm' or 'Viterbi.mmglm', respectively, on the R
     command line.

_V_a_l_u_e:

     A vector of length n containing integers (1, \dots, m)
     representing the hidden Markov states for each node of the chain.

_E_x_a_m_p_l_e_s:

     Pi <- matrix(c(1/2, 1/2,   0,   0,   0,
                    1/3, 1/3, 1/3,   0,   0,
                      0, 1/3, 1/3, 1/3,   0,
                      0,   0, 1/3, 1/3, 1/3,
                      0,   0,   0, 1/2, 1/2),
                  byrow=TRUE, nrow=5)
     delta <- c(0, 1, 0, 0, 0)
     lambda <- c(1, 4, 2, 5, 3)
     m <- nrow(Pi)

     x <- dthmm(NULL, Pi, delta, "pois", list(lambda=lambda), discrete=TRUE)
     x <- simulate(x, nsim=2000)

     #------  Global Decoding  ------

     states <- Viterbi(x)
     states <- factor(states, levels=1:m)

     #  Compare predicted states with true states
     #  p[j,k] = Pr{Viterbi predicts state k | true state is j}
     p <- matrix(NA, nrow=m, ncol=m)
     for (j in 1:m){
         a <- (x$y==j)
         p[j,] <- table(states[a])/sum(a)
     }
     print(p)

     #------  Local Decoding  ------

     #   locally decode at i=100

     print(which.max(Estep(x$x, Pi, delta, "pois", list(lambda=lambda))$u[100,]))

     #---------------------------------------------------
     #   simulate a beta HMM

     Pi <- matrix(c(0.8, 0.2,
                    0.3, 0.7),
                  byrow=TRUE, nrow=2)
     delta <- c(0, 1)

     y <- seq(0.01, 0.99, 0.01)
     plot(y, dbeta(y, 2, 6), type="l", ylab="Density", col="blue")
     points(y, dbeta(y, 6, 2), type="l", col="red")

     n <- 100
     x <- dthmm(NULL, Pi, delta, "beta",
                list(shape1=c(2, 6), shape2=c(6, 2)))
     x <- simulate(x, nsim=n)

     #   colour denotes actual hidden Markov state
     plot(1:n, x$x, type="l", xlab="Time", ylab="Observed Process")
     points((1:n)[x$y==1], x$x[x$y==1], col="blue", pch=15)
     points((1:n)[x$y==2], x$x[x$y==2], col="red", pch=15)

     states <- Viterbi(x)
     #   mark the wrongly predicted states
     wrong <- (states != x$y)
     points((1:n)[wrong], x$x[wrong], pch=1, cex=2.5, lwd=2)

