rbftrain               package:neural               R Documentation

_R_B_F _n_e_u_r_a_l _n_e_t_w_o_r_k

_D_e_s_c_r_i_p_t_i_o_n:

     A simple RBF neural network suitable for function approximation.

_U_s_a_g_e:

      rbftrain(inp,neurons,out,weight=c(),dist=c(),alfa=0.2,it=40,err=0,
             sigma=NaN,online=TRUE,permute=TRUE,visual=TRUE, ...)

_A_r_g_u_m_e_n_t_s:

     inp: a matrix that contains one input sample in each row.

 neurons: the number of neurons in the hidden layer.

     out: a matrix that contains one output sample in each row.

  weight: the starting weights of the network.

    dist: the starting distortions of the network.

    alfa: the learning-rate parameter of the back-propagation
          algorithm.

      it: the maximum number of training iterations.

      err: the average error at the training points; if the average
           error ever falls below this value, the algorithm stops.

   sigma: the width of the Gauss functions.

  online: if TRUE the algorithm operates in the sequential mode of
          back-propagation; if FALSE it operates in the batch mode of
          back-propagation.

 permute: if TRUE the algorithm will use a random permutation of the
          input data in each epoch.

  visual: a logical value that switches the graphical user interface
          on or off.

     ...: currently not used.
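The difference between the two modes of the online argument can be
sketched on a toy one-parameter least-squares problem (Python here, as
a self-contained illustration; rbftrain's actual update rule is
internal to the package): sequential mode updates the weights after
every sample, batch mode accumulates the gradient over a whole epoch
before updating once.

```python
# Toy illustration of sequential vs. batch gradient descent for a
# one-parameter model y = w * x; this is NOT rbftrain's internal code.

def online_epoch(w, data, alfa):
    # sequential mode: update w after every (x, target) pair
    for x, t in data:
        w -= alfa * (w * x - t) * x
    return w

def batch_epoch(w, data, alfa):
    # batch mode: sum the gradient over the epoch, then update once
    grad = sum((w * x - t) * x for x, t in data)
    return w - alfa * grad

data = [(1.0, 2.0), (2.0, 4.0)]            # consistent with w = 2
print(batch_epoch(0.0, data, alfa=0.1))    # one batch step from w = 0
print(online_epoch(0.0, data, alfa=0.1))   # one sequential epoch from w = 0
```

Both modes move w toward the same solution; they differ in how often
the weights change within an epoch.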

_D_e_t_a_i_l_s:

     The function creates an RBF neural network from the function
     parameters. After creating the network, the function trains it
     with the back-propagation algorithm, using the inp and out
     parameters. These two matrices must have the same number of rows;
     otherwise the function stops with an error message.

     If you supply the weight or dist argument, those variables will
     not be initialized randomly. This can be useful if you want to
     retrain your network; in that case supply both arguments
     together.

     The function works with normalized Gauss functions, whose width
     parameter is the sigma argument. If you want to supply the values
     yourself, this argument should be a matrix with as many rows as
     there are neurons in the first layer and as many columns as there
     are neurons in the second layer. If the sigma argument is NaN,
     the width of each Gauss function is half the distance between the
     two nearest training samples, multiplied by 1.1. If the sigma
     argument is a single number, every sigma value is set to that
     number.
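The default width rule can be illustrated with a short sketch (Python,
for a self-contained illustration; rbftrain's internal computation may
differ in detail): find the two nearest training samples, take half
their distance, and multiply by 1.1.

```python
# Sketch of the documented sigma = NaN default: half the distance
# between the two nearest training samples, times 1.1.
# Illustration only, not the package's internal code.

def default_sigma(samples):
    # all pairwise distances between one-dimensional training inputs
    dists = [abs(a - b)
             for i, a in enumerate(samples)
             for b in samples[i + 1:]]
    return 0.5 * min(dists) * 1.1

x = [-120, -96, -72, -48, -24, 0, 24]   # evenly spaced training inputs
print(default_sigma(x))                 # half of 24, times 1.1
```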

     The function has a graphical user interface that can be switched
     on and off with the visual argument. If the graphical user
     interface is on, the function can plot the result of the
     approximation in a coordinate system, provided the approximated
     function has a single input variable.

     The function returns the parameters of the trained RBF neural
     network. Use the rbf function for information recall.
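The normalized-Gauss recall described above can be sketched as follows
(Python, one-dimensional case). Treating the stored distortions as the
Gaussian centres is an assumption made for this sketch only, and rbf's
real computation may differ.

```python
import math

# Sketch of recall with normalized Gauss functions: each hidden
# neuron's Gaussian response is normalized so the responses sum to 1,
# and the output is their weighted sum.  Using the "distortions" as
# Gaussian centres is an assumption, not something the package states.

def rbf_recall(x, centres, weights, sigma):
    acts = [math.exp(-(x - c) ** 2 / (2.0 * sigma ** 2)) for c in centres]
    total = sum(acts)
    return sum(w * a / total for w, a in zip(weights, acts))

# A point exactly on a lone centre reproduces that neuron's weight,
# and a point midway between two equal-width centres averages them.
print(rbf_recall(0.0, [0.0], [2.5], sigma=1.0))        # 2.5
print(rbf_recall(0.0, [-1.0, 1.0], [0.0, 4.0], 1.0))   # 2.0
```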

_V_a_l_u_e:

     a list with 4 components:

  weight: the weights of the network.

    dist: the distortion of the network.

 neurons: a numeric vector whose length equals the number of layers in
          the network; the ith layer contains neurons[i] neurons.

   sigma: the width of the Gauss functions.

_S_e_e _A_l_s_o:

     `rbf' for recalling; `mlp' and `mlptrain' for classification.

_E_x_a_m_p_l_e_s:

             ## 16 samples of the sine function on a grid of degrees
             x <- t(matrix(-5:10*24, 1, 16))
             y <- t(matrix(sin(pi/180*(-5:10*24)), 1, 16))
             neurons <- 8
             ## Not run: 
             ## train the network, then recall it at the training points
             data <- rbftrain(x, neurons, y, sigma=NaN)
             rbf(x, data$weight, data$dist, data$neurons, data$sigma)
             
     ## End(Not run)

