forwardbackward             package:RHmm             R Documentation

_f_o_r_w_a_r_d-_b_a_c_k_w_a_r_d _p_r_o_c_e_d_u_r_e

_D_e_s_c_r_i_p_t_i_o_n:

     The forward-backward procedure is used to compute quantities used
     in the Baum-Welch algorithm.

_U_s_a_g_e:

     forwardbackward(HMM, obs)

_A_r_g_u_m_e_n_t_s:

     HMM: an HMMClass or a HMMFitClass object

     obs: a vector (or matrix) of observations, or a list of vectors
          (or matrices) if there is more than one sample

_V_a_l_u_e:

     If obs is a single sample, a list with the following elements; if
     obs is a list of samples, a list of such lists. See *note* for
     mathematical definitions.

   Alpha: The matrix of 'forward' probabilities (size: number of obs.
          times number of hidden states)

    Beta: The matrix of 'backward' probabilities (size: number of obs.
          times number of hidden states)

   Gamma: The matrix of probabilities of being at time t in state i
          (size: number of obs. times number of hidden states)

     Xsi: The matrix of probabilities of being in state i at time t and
          being in state j at time t + 1 (size: number of obs. times
          number of hidden states)

     Rho: The vector of probabilities of seeing the partial sequence
          obs[1], ..., obs[t] (size: number of obs.)

     LLH: Log-likelihood

_N_o_t_e:

     Let obs=(obs[1], ..., obs[T]) be the vector of observations, and
     O=(O[t], t=1, ..., T) the corresponding random variables. Let
     (Q[t], t=1, ..., T) be the hidden Markov chain whose values are
     in {1, ..., nStates}. We have the following definitions:

     Alpha[i][t] = P(O[1]=obs[1], ..., O[t]=obs[t], Q[t]=i | HMM),
     which is the probability of seeing the partial sequence obs[1],
     ..., obs[t] and ending up in state i at time t.

     Beta[i][t] = P(O[t+1]=obs[t+1], ..., O[T]=obs[T] | Q[t]=i, HMM),
     which is the probability of the ending partial sequence obs[t+1],
     ..., obs[T] given that we are in state i at time t.

     Gamma[i][t] = P(Q[t]=i | O=obs, HMM), which is the probability of
     being in state i at time t given the observation sequence O=obs.

     Xsi[i][j][t] = P(Q[t]=i, Q[t+1]=j | O=obs, HMM), which is the
     probability of being in state i at time t and being in state j at
     time t + 1.

     Rho[t] = P(O[1]=obs[1], ..., O[t]=obs[t] | HMM), which is the
     probability of seeing the partial sequence obs[1], ..., obs[t].

     LLH=ln(Rho[T])
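The recursions behind these definitions can be sketched outside of RHmm. Below is a minimal Python illustration for a discrete-emission HMM; the names pi, A and B (initial distribution, transition matrix, emission matrix) are assumptions for this sketch and are not part of the RHmm API, which works with fitted model objects instead.

```python
import numpy as np

def forward_backward(pi, A, B, obs):
    """Sketch of the forward-backward recursions (0-based indexing).

    pi:  initial state distribution, shape (nStates,)
    A:   transition matrix, shape (nStates, nStates)
    B:   emission matrix, shape (nStates, nSymbols)
    obs: integer-coded observation sequence, length T
    """
    T, n = len(obs), len(pi)
    Alpha = np.zeros((T, n))
    Beta = np.zeros((T, n))

    # Forward pass: Alpha[t, i] = P(obs[0..t], Q[t]=i)
    Alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        Alpha[t] = (Alpha[t - 1] @ A) * B[:, obs[t]]

    # Backward pass: Beta[t, i] = P(obs[t+1..T-1] | Q[t]=i)
    Beta[T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        Beta[t] = A @ (B[:, obs[t + 1]] * Beta[t + 1])

    # Rho[t] = P(obs[0..t]); Gamma[t, i] = P(Q[t]=i | obs)
    Rho = Alpha.sum(axis=1)
    Gamma = Alpha * Beta / Rho[-1]
    LLH = np.log(Rho[-1])          # LLH = ln(Rho[T])
    return Alpha, Beta, Gamma, Rho, LLH
```

Note that each row of Gamma sums to 1, and Rho[-1] (the full-sequence likelihood) equals the sum of the last row of Alpha, matching the definitions above.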

_R_e_f_e_r_e_n_c_e_s:

     Jeff A. Bilmes (1997) _A Gentle Tutorial of the EM Algorithm and
     its Application to Parameter Estimation for Gaussian Mixture and
     Hidden Markov Models_ <URL:
     http://ssli.ee.washington.edu/people/bilmes/mypapers/em.ps.gz>

_E_x_a_m_p_l_e_s:

         data(geyser)
         obs <- geyser$duration
         # Fit a 2-state Gaussian HMM to the geyser durations
         ResFitGeyser <- HMMFit(obs)
         # Forward-backward procedure
         fb <- forwardbackward(ResFitGeyser, obs)

