metrop                 package:mcmc                 R Documentation

_M_e_t_r_o_p_o_l_i_s _A_l_g_o_r_i_t_h_m

_D_e_s_c_r_i_p_t_i_o_n:

     Markov chain Monte Carlo for a continuous random vector using a
     random-walk Metropolis algorithm.

_U_s_a_g_e:

     metrop(obj, initial, nbatch, blen = 1, nspac = 1, scale = 1, outfun,
         debug = FALSE, ...)

_A_r_g_u_m_e_n_t_s:

     obj: an R function that evaluates the log unnormalized probability
          density of the desired equilibrium distribution of the
          Markov chain.  Its first argument is the state vector of
          the Markov chain; any other arguments are arbitrary and
          are taken from the '...' arguments of this function.  It
          should return '-Inf' for points of the state space having
          probability zero under the desired equilibrium
          distribution.  Alternatively, an object of class
          '"metropolis"' from a previous run can be supplied, in
          which case any missing arguments (including the log
          unnormalized density function) are taken from this object.

 initial: a real vector, the initial state of the Markov chain.

  nbatch: the number of batches.

    blen: the length of batches.

   nspac: the spacing of iterations that contribute to batches.

   scale: controls the proposal step size.  If a scalar or vector,
          the proposal is 'x + scale * z' where 'x' is the current
          state and 'z' is a standard normal random vector.  If a
          matrix, the proposal is 'x + scale %*% z'.

  outfun: controls the output.  If a function, then the batch means
          of 'outfun(state, ...)' are returned.  If a numeric or
          logical vector, then the batch means of 'state[outfun]'
          are returned (if this makes sense).  If missing, the batch
          means of 'state' are returned.

   debug: if 'TRUE', extra output useful for testing is returned.

     ...: additional arguments for 'obj' or 'outfun'.
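As an illustration of how 'obj' and '...' fit together, here is a
sketch (the names 'lupost' and 'rate' are illustrative, not part of
the package) of a log unnormalized density with support x > 0 whose
parameter is forwarded through the '...' of 'metrop':

```r
## Illustrative log unnormalized density: independent exponentials,
## with the 'rate' parameter supplied via the '...' of metrop
lupost <- function(x, rate) {
    if (all(x > 0))
        return(sum(dexp(x, rate = rate, log = TRUE)))
    return(-Inf)  # probability zero outside the support
}
## 'rate = 2' is passed through to 'lupost'
out <- metrop(lupost, initial = rep(1, 3), nbatch = 100, rate = 2)
```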

_D_e_t_a_i_l_s:

     Runs a "random-walk" Metropolis algorithm with multivariate
     normal proposal producing a Markov chain with equilibrium
     distribution having a specified unnormalized density.  The
     distribution must be continuous.  The support of the
     distribution is the support of the density specified by the
     argument 'obj'.  The initial state must have positive
     probability, that is, 'obj(initial, ...) > -Inf'.
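The update performed on each iteration can be sketched in plain R as
follows (a simplified illustration with a scalar 'scale'; the
package itself implements the chain in compiled code):

```r
## One random-walk Metropolis update (simplified sketch)
metrop_step <- function(lud, x, scale = 1, ...) {
    z <- rnorm(length(x))   # standard normal random vector
    prop <- x + scale * z   # random-walk proposal
    ## accept with probability min(1, exp(lud(prop) - lud(x)));
    ## a '-Inf' log density at the proposal always rejects
    if (log(runif(1)) < lud(prop, ...) - lud(x, ...)) prop else x
}
```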

_V_a_l_u_e:

     an object of class '"mcmc"', subclass '"metropolis"', which is a
     list containing at least the following components: 

  accept: fraction of Metropolis proposals accepted.

   batch: 'nbatch' by 'p' matrix, the batch means, where 'p' is the
          dimension of the result of 'outfun' if 'outfun' is a
          function, otherwise the dimension of 'state[outfun]' if that
          makes sense, and the dimension of 'state' when 'outfun' is
          missing.

 initial: value of argument 'initial'.

   final: final state of Markov chain.

initial.seed: value of '.Random.seed' before the run.

final.seed: value of '.Random.seed' after the run.

    time: running time of Markov chain from 'system.time()'.

     lud: the function used to calculate log unnormalized density,
          either 'obj' or 'obj$lud' from a previous run.

  nbatch: the argument 'nbatch' or 'obj$nbatch'.

    blen: the argument 'blen' or 'obj$blen'.

   nspac: the argument 'nspac' or 'obj$nspac'.

  outfun: the argument 'outfun' or 'obj$outfun'.

_E_x_a_m_p_l_e_s:

     # log unnormalized density of the uniform distribution on the
     # unit simplex in five dimensions
     h <- function(x) if (all(x >= 0) && sum(x) <= 1) return(1) else return(-Inf)
     out <- metrop(h, rep(0, 5), 1000)
     out$accept
     # acceptance rate too low
     out <- metrop(out, scale = 0.1)
     out$accept
     # acceptance rate OK (about 25 percent)
     plot(out$batch[ , 1])
     # but run length too short (few excursions from end to end of range)
     out <- metrop(out, nbatch = 1e4)
     out$accept
     plot(out$batch[ , 1])
     hist(out$batch[ , 1])
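Continuing the example, 'outfun' can be used to track functions of
the state; this sketch (reusing the run 'out' from above) returns
batch means of the state and of its elementwise square, which
estimate first and second moments:

```r
## batch means of the state and of its elementwise square
out <- metrop(out, outfun = function(z) c(z, z^2))
out$accept
colMeans(out$batch)  # columns 1-5 estimate E(x), columns 6-10 estimate E(x^2)
```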

