mlptrain               package:neural               R Documentation

_M_L_P _n_e_u_r_a_l _n_e_t_w_o_r_k

_D_e_s_c_r_i_p_t_i_o_n:

     A simple MLP neural network that is suitable for classification
     tasks.

_U_s_a_g_e:

     mlptrain(inp, neurons, out, weight=c(), dist=c(), alfa=0.2, it=200,
              online=TRUE, permute=TRUE, thresh=0, dthresh=0.1,
              actfns=c(), diffact=c(), visual=TRUE, ...)

_A_r_g_u_m_e_n_t_s:

     inp: a matrix that contains one input sample in each row.

 neurons: a numeric vector whose length equals the number of layers in
          the network; the ith layer contains neurons[i] neurons.

     out: a matrix that contains one output sample in each row.

  weight: the starting weights of the network.

    dist: the starting distortions of the network.

    alfa: the learning-rate parameter of the back-propagation
          algorithm.

      it: the maximum number of training iterations.

  online: if TRUE, the algorithm operates in the sequential (on-line)
          mode of back-propagation; if FALSE, in the batch mode.

 permute: if TRUE the algorithm will use a random permutation of the
          input data in each epoch.

  thresh: the maximum difference between the desired response and the
          actual response that is still treated as zero.

 dthresh: if the difference between the desired response and the actual
          response is smaller than this value, the corresponding neuron
          is drawn in red; otherwise it is drawn in green.

  actfns: a list containing, for each layer, either the numeric code of
          a preset activation function or the activation function
          itself. Its length must equal the length of the neurons
          vector, and each element must be a number between 1 and 4 or
          a function. The numeric codes are the following: 1: logistic
          function, 2: hyperbolic tangent function, 3: Gaussian
          function, 4: identity function.

 diffact: a list containing the derivatives of the activation
          functions. Only needed when you supply your own activation
          functions.

  visual: a logical value that switches the graphical user interface
          on or off.

     ...: currently not used.

_D_e_t_a_i_l_s:

     The function creates an MLP neural network from the given
     parameters. After creation, the network is trained with the
     back-propagation algorithm using the inp and out parameters. The
     inp and out matrices must have the same number of rows; otherwise
     the function stops with an error message.

     If you supply the weight or dist argument, those values will not
     be initialised randomly. This can be useful if you want to retrain
     your network; in that case supply both arguments.
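     The retraining workflow described above can be sketched as
     follows. The sketch is not run: it assumes the `neural' package is
     loaded and that x and y are the training matrices from the
     Examples section below.

```r
## Train once, then continue training from the learned parameters by
## feeding the returned weights and distortions back in:
## net  <- mlptrain(x, 4, y, it = 2000, visual = FALSE)
## net2 <- mlptrain(x, 4, y, weight = net$weight, dist = net$dist,
##                  it = 2000, visual = FALSE)
```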

     Since this version of the package, you can use your own activation
     functions via the actfns argument. If you do, remember to set the
     derivatives of the activation functions in the diffact argument,
     in the same order and at the same positions where you use the
     custom activation functions. (The diffact argument is not needed
     when only preset activation functions are used.)
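     A minimal sketch of a custom activation function and its
     derivative, following the description above. The mlptrain call is
     commented out because it needs the `neural' package and training
     data, and the exact diffact list layout when mixing preset and
     custom functions (NULL placeholder for the preset layer) is an
     assumption, not taken from this page.

```r
## A hand-written logistic activation and its derivative, suitable for
## passing to actfns/diffact instead of the preset numeric code 1:
logistic  <- function(x) 1 / (1 + exp(-x))
dlogistic <- function(x) logistic(x) * (1 - logistic(x))

## Two-layer network: preset hyperbolic tangent (code 2) in the first
## layer, the custom logistic in the second (layout is an assumption):
## net <- mlptrain(x, c(4, 1), y,
##                 actfns  = list(2, logistic),
##                 diffact = list(NULL, dlogistic),
##                 it = 2000, visual = FALSE)
```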

     The function has a graphical user interface that can be switched
     on and off with the visual argument. If the graphical interface is
     on, the activation functions can also be set manually. If the
     activation functions are not set, each of them defaults to the
     logistic function. The function returns the parameters of the
     trained MLP neural network. Use the mlp function for information
     recall.

_V_a_l_u_e:

     a list with 5 components:

  weight: the weights of the network.

    dist: the distortions of the network.

 neurons: a numeric vector whose length equals the number of layers in
          the network; the ith layer contains neurons[i] neurons.

  actfns: a list containing the activation functions. The length of
          the list equals the number of active layers.

 diffact: a list containing the derivatives of the activation
          functions. The length of the list equals the number of active
          layers.

_S_e_e _A_l_s_o:

     `mlp' for recall; `rbftrain' and `rbf' for training an RBF
     network.

_E_x_a_m_p_l_e_s:

     x <- matrix(c(1,1,0,0,1,0,1,0), 4, 2)
     y <- matrix(c(0,1,1,0), 4, 1)
     neurons <- 4
     ## Not run: 
     data <- mlptrain(x, neurons, y, it=4000)
     mlp(x, data$weight, data$dist, data$neurons, data$actfns)
     ## End(Not run)

