ksvm                 package:kernlab                 R Documentation

_S_u_p_p_o_r_t _V_e_c_t_o_r _M_a_c_h_i_n_e_s

_D_e_s_c_r_i_p_t_i_o_n:

     Support Vector Machines are an excellent tool for classification,
     novelty detection, and regression. 'ksvm' supports the well-known
     C-svc and nu-svc (classification), one-class-svc (novelty
     detection), and eps-svr and nu-svr (regression) formulations,
     along with the Crammer-Singer multi-class formulation spoc-svc
     and the bound-constraint formulations C-bsvc and eps-bsvr.
     'ksvm' also supports class-probability output and confidence
     intervals for regression.

_U_s_a_g_e:

     ## S4 method for signature 'formula':
     ksvm(x, data = NULL, ..., subset, na.action = na.omit, scaled = TRUE)

     ## S4 method for signature 'vector':
     ksvm(x, ...)

     ## S4 method for signature 'matrix':
     ksvm(x, y = NULL, scaled = TRUE, type = NULL, kernel ="rbfdot", 
          kpar = list(sigma = 0.1), C = 1, nu = 0.2, epsilon = 0.1, 
          prob.model = FALSE, class.weights = NULL, cache = 40, 
          tol = 0.001, shrinking = TRUE, cross = 0, fit = TRUE, ..., 
          subset, na.action = na.omit)

     ## S4 method for signature 'kernelMatrix':
     ksvm(x, y = NULL, type = NULL, C = 1,
     nu = 0.2, epsilon = 0.1, prob.model = FALSE, class.weights = NULL,
     cross = 0, fit = TRUE, cache = 40, tol = 0.001, shrinking = TRUE, ...)

     ## S4 method for signature 'list':
     ksvm(x, y = NULL, type = NULL, kernel = "stringdot",
     kpar = list(length = 4, lambda = 0.5), C = 1, nu = 0.2, epsilon = 0.1,
     prob.model = FALSE, class.weights = NULL, cross = 0, fit = TRUE,
     cache = 40, tol = 0.001, shrinking = TRUE, ... ,na.action = na.omit)

_A_r_g_u_m_e_n_t_s:

       x: a symbolic description of the model to be fit.  When not
          using a formula, x can be a matrix or vector containing the
          training data, a kernel matrix of class 'kernelMatrix' of
          the training data, or a list of character vectors (for use
          with the string kernel). Note that the intercept is always
          excluded, whether given in the formula or not.

    data: an optional data frame containing the training data, when
          using a formula. By default the data is taken from the
          environment from which 'ksvm' is called.

       y: a response vector with one label for each row/component of
          'x'. Can be either a factor (for classification tasks) or a
          numeric vector (for regression).

  scaled: A logical vector indicating the variables to be scaled. If
          'scaled' is of length 1, the value is recycled as many times
          as needed and all non-binary variables are scaled. By
          default, data are scaled internally (both 'x' and 'y'
          variables) to zero mean and unit variance. The center and
          scale values are returned and used for later predictions.

    type: 'ksvm' can be used for classification, for regression, or
          for novelty detection. Depending on whether 'y' is a factor
          or not, the default setting for 'type' is 'C-svc' or
          'eps-svr', respectively, but this can be overridden by
          setting an explicit value.
           Valid options are:

             *  'C-svc'   (C classification)

             *  'nu-svc'  (nu classification)

             *  'C-bsvc'  bound-constraint svm (classification) 

             *  'spoc-svc'  (Crammer Singer multi-class)

             *  'one-svc'  (novelty detection)

             *  'eps-svr'  (epsilon regression)

             *  'nu-svr'   (nu regression)

             *  'eps-bsvr'  bound-constraint svm (regression)

  kernel: the kernel function used in training and predicting. This
          parameter can be set to any function, of class kernel, which
          computes the inner product in feature space between two
          vector arguments (see 'kernels').
           kernlab provides the most popular kernel functions which can
          be used by setting the kernel parameter to the following
          strings:

             *  'rbfdot' Radial Basis kernel "Gaussian"

             *  'polydot' Polynomial kernel

             *  'vanilladot' Linear kernel 

             *  'tanhdot' Hyperbolic tangent kernel 

             *  'laplacedot' Laplacian kernel 

             *  'besseldot' Bessel kernel 

             *  'anovadot' ANOVA RBF kernel 

             *  'splinedot' Spline kernel 

             *  'stringdot' String kernel 

          Setting the kernel parameter to "matrix" treats 'x' as a
          kernel matrix and calls the 'kernelMatrix' interface.

          The kernel parameter can also be set to a user defined
          function of class kernel by passing the function name as an
          argument. 
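
          A minimal sketch of the 'kernelMatrix' interface ('K' and
          'm' below are illustrative names, not part of the package):

          ## precompute a kernel matrix on the iris features and
          ## train through the 'kernelMatrix' interface
          data(iris)
          K <- kernelMatrix(rbfdot(sigma = 0.1), as.matrix(iris[, -5]))
          m <- ksvm(K, iris[, 5], type = "C-svc")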

    kpar: the list of hyper-parameters (kernel parameters). This is a
          list which contains the parameters to be used with the kernel
          function. Valid parameters for the existing kernels are:

             *  'sigma' inverse kernel width for the Radial Basis
                kernel function "rbfdot" and the Laplacian kernel
                "laplacedot".

             *  'degree, scale, offset' for the Polynomial kernel
                "polydot"

             *  'scale, offset' for the Hyperbolic tangent kernel
                function "tanhdot"

             *  'sigma, order, degree' for the Bessel kernel
                "besseldot". 

             *  'sigma, degree' for the ANOVA kernel "anovadot".

             *  'length, lambda, normalized' for the "stringdot" kernel
                where length is the length of the strings considered,
                lambda the decay factor and normalized a logical
                parameter determining if the kernel evaluations should
                be normalized.

          Hyper-parameters for user defined kernels can be passed
          through the kpar parameter as well. In the case of a Radial
          Basis kernel function (Gaussian) kpar can also be set to the
          string "automatic" which uses the heuristics in  'sigest' to
          calculate a good 'sigma' value for the Gaussian RBF or
          Laplace kernel, from the data. (default = "automatic").
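
          For example, a short sketch of the "automatic" heuristic
          (using the package's 'spam' data):

          ## let sigest pick a good sigma for the Gaussian kernel
          data(spam)
          m <- ksvm(type ~ ., data = spam, kernel = "rbfdot",
                    kpar = "automatic")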

       C: cost of constraint violation (default: 1). This is the
          'C' constant of the regularization term in the Lagrange
          formulation.

      nu: parameter needed for 'nu-svc', 'one-svc', and 'nu-svr'. The
          'nu' parameter sets the upper bound on the training error and
          the lower bound on the fraction of data points to become
          Support Vectors (default: 0.2).

 epsilon: epsilon in the insensitive-loss function used for 'eps-svr',
          'nu-svr' and 'eps-bsvr' (default: 0.1)

prob.model: if set to 'TRUE', builds a model for calculating class
          probabilities or, in the case of regression, calculates the
          scaling parameter of the Laplacian distribution fitted on
          the residuals. Fitting is done on output data created by
          performing a 3-fold cross-validation on the training data.
          For details see the references. (default: 'FALSE')
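
          As an illustrative sketch for the regression case (the
          fitted scale parameter is read back with the 'prob.model'
          accessor, see Value):

          ## fit a regression model and inspect the Laplacian scale
          x <- seq(-20, 20, 0.1)
          y <- sin(x)/x + rnorm(401, sd = 0.03)
          m <- ksvm(x, y, epsilon = 0.01, prob.model = TRUE)
          prob.model(m)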

class.weights: a named vector of weights for the different classes,
          used for asymmetric class sizes. Not all factor levels have
          to be supplied (default weight: 1). All components have to be
          named.
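
          For instance (the weight values below are illustrative):

          ## penalize errors on 'spam' five times more than on
          ## 'nonspam' (the two levels of the spam data response)
          data(spam)
          m <- ksvm(type ~ ., data = spam,
                    class.weights = c(nonspam = 1, spam = 5))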

   cache: cache memory in MB (default 40)

     tol: tolerance of termination criterion (default: 0.001)

shrinking: option whether to use the shrinking-heuristics (default:
          'TRUE')

   cross: if an integer value k > 0 is specified, a k-fold cross
          validation on the training data is performed to assess the
          quality of the model: the accuracy rate for classification
          and the Mean Squared Error for regression.
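
          For example:

          ## 5-fold cross validation; the estimate is retrieved
          ## with the 'cross' accessor
          data(iris)
          m <- ksvm(Species ~ ., data = iris, cross = 5)
          cross(m)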

     fit: indicates whether the fitted values should be computed and
          included in the model or not (default: 'TRUE')

     ...: additional parameters for the low-level fitting function

  subset: An index vector specifying the cases to be used in the
          training sample.  (NOTE: If given, this argument must be
          named.)

na.action: A function to specify the action to be taken if 'NA's are
          found. The default action is 'na.omit', which leads to
          rejection of cases with missing values on any required
          variable. An alternative is 'na.fail', which causes an error
          if 'NA' cases are found. (NOTE: If given, this argument must
          be named.)

_D_e_t_a_i_l_s:

     'ksvm' uses John Platt's SMO algorithm for solving the SVM QP
     problem for most SVM formulations. For the 'spoc-svc', 'C-bsvc'
     and 'eps-bsvr' formulations a chunking algorithm based on the
     TRON QP solver is used.

     For multiclass classification with k classes, k > 2, 'ksvm' uses
     the `one-against-one'-approach, in which k(k-1)/2 binary
     classifiers are trained; the appropriate class is found by a
     voting scheme. Only 'spoc-svc' does not use a voting scheme for
     multi-class classification.
      If the predictor variables include factors, the formula
     interface must be used to get a correct model matrix.
      In classification, when 'prob.model' is 'TRUE' a 3-fold cross
     validation is performed on the data and a sigmoid function is
     fitted on the resulting decision values f. The data can be passed
     to the 'ksvm' function in a 'matrix' or a 'data.frame'; in
     addition, 'ksvm' supports input in the form of a kernel matrix of
     class 'kernelMatrix' or a list of character vectors where a
     string kernel has to be used (see the sketch below).
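
     A minimal sketch of the list interface (the strings and labels
     below are toy values for illustration):

     ## classify short character strings with the string kernel
     x <- list("acgtacgt", "acgtcccc", "ggggacgt", "ccccgggg")
     y <- factor(c("a", "a", "b", "b"))
     m <- ksvm(x, y, kernel = "stringdot",
               kpar = list(length = 4, lambda = 0.5), C = 1)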
      The 'plot' function for binary classification 'ksvm' objects
     displays a contour plot of the decision values with the
     corresponding support vectors highlighted.

     The predict function can return class probabilities for 
     classification problems by setting the 'type' parameter to
     "probabilities". 

     The problem of model selection is partially addressed by an
     empirical observation for the RBF kernels (Gaussian, Laplace):
     the optimal values of the sigma width parameter are shown to lie
     between the 0.1 and 0.9 quantile of the |x - x'| statistics.
     When using an RBF kernel and setting 'kpar' to "automatic", 'ksvm'
     uses the 'sigest' function to estimate the quantiles and uses the
     median of the values.
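
     For example, the proposed values can be inspected directly:

     ## the three values are the 0.1, 0.5 and 0.9 quantiles of the
     ## sigma estimates computed by sigest
     data(spam)
     sigest(type ~ ., data = spam)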

_V_a_l_u_e:

     An S4 object of class '"ksvm"' containing the fitted model.
     Accessor functions can be used to access the slots of the object
     (see the sketch below and the examples), which include:

   alpha: The resulting support vectors (alpha vector) (possibly
          scaled).

alphaindex: The index of the resulting support vectors in the data
          matrix. Note that this index refers to the pre-processed data
          (after the possible effect of 'na.omit' and 'subset')

    coef: The corresponding coefficients times the training labels.

       b: The negative intercept.

     nSV: The number of Support Vectors

   error: Training error

   cross: Cross validation error, (when cross > 0)

prob.model: Contains the width of the Laplacian fitted on the residuals
          in case of regression, or the parameters of the sigmoid
          fitted on the decision values in case of classification.
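
     A short sketch of the accessor functions on a fitted model:

     ## fit a model and query its slots through the accessors
     data(iris)
     m <- ksvm(Species ~ ., data = iris, cross = 3)
     alpha(m)   ## support vector coefficients
     b(m)       ## negative intercept
     nSV(m)     ## number of support vectors
     error(m)   ## training error
     cross(m)   ## cross validation error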

_N_o_t_e:

     Data is scaled internally, usually yielding better results.

_A_u_t_h_o_r(_s):

     Alexandros Karatzoglou (SMO optimizers in C/C++ by Chih-Chung
     Chang & Chih-Jen Lin)
      alexandros.karatzoglou@ci.tuwien.ac.at

_R_e_f_e_r_e_n_c_e_s:

        *  Chang, Chih-Chung and Lin, Chih-Jen:
            _LIBSVM: a library for Support Vector Machines_
            <URL: http://www.csie.ntu.edu.tw/~cjlin/libsvm>

        *  Exact formulations of models, algorithms, etc. can be found
           in the document:
            Chang, Chih-Chung and Lin, Chih-Jen:
            _LIBSVM: a library for Support Vector Machines_
            <URL:
           http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.ps.gz>

        *  J. Platt
            _Probabilistic outputs for support vector machines and
           comparison to regularized likelihood methods_ 
            Advances in Large Margin Classifiers, A. Smola, P.
           Bartlett, B. Schoelkopf and D. Schuurmans, Eds. Cambridge,
           MA: MIT Press, 2000.
            <URL: http://citeseer.nj.nec.com/platt99probabilistic.html>

        *  H.-T. Lin, C.-J. Lin and R. C. Weng
            _A note on Platt's probabilistic outputs for support vector
           machines_
            <URL:
           http://www.csie.ntu.edu.tw/~cjlin/papers/plattprob.ps>

        *  C.-W. Hsu and C.-J. Lin 
            _A comparison on methods for multi-class support vector
           machines_
            IEEE Transactions on Neural Networks, 13(2002) 415-425.
            <URL:
           http://www.csie.ntu.edu.tw/~cjlin/papers/multisvm.ps.gz>

        *  C.-W. Hsu and C.-J. Lin. 
            _ A simple decomposition method for support vector
           machines_
             Machine Learning 46(2002), 291-314.
            <URL:
           http://www.csie.ntu.edu.tw/~cjlin/papers/decomp.ps.gz>

        *  K. Crammer, Y. Singer
            _On the learnability and design of output codes for
           multiclass problems_
            Computational Learning Theory, 35-46, 2000.
            <URL:
           http://www.cs.huji.ac.il/~kobics/publications/mlj01.ps.gz>

_S_e_e _A_l_s_o:

     'predict.ksvm', 'couple'

_E_x_a_m_p_l_e_s:

     ## simple example using the spam data set
     data(spam)

     ## create test and training set
     index <- sample(1:dim(spam)[1])
     spamtrain <- spam[index[1:floor(2 * dim(spam)[1]/3)], ]
     spamtest <- spam[index[(floor(2 * dim(spam)[1]/3) + 1):dim(spam)[1]], ]

     ## train a support vector machine
     filter <- ksvm(type ~ ., data = spamtrain, kernel = "rbfdot",
                    kpar = list(sigma = 0.05), C = 5, cross = 3)
     filter

     ## predict mail type on the test set
     mailtype <- predict(filter,spamtest[,-58])

     ## Check results
     table(mailtype,spamtest[,58])

     ## Another example with the famous iris data
     data(iris)

     ## Create a kernel function using the built-in rbfdot function
     rbf <- rbfdot(sigma=0.1)
     rbf

     ## train a bound constraint support vector machine
     irismodel <- ksvm(Species ~ ., data = iris, type = "C-bsvc",
                       kernel = rbf, C = 10, prob.model = TRUE)

     irismodel

     ## get fitted values
     fitted(irismodel)

     ## Test on the training set with probabilities as output
     predict(irismodel, iris[,-5], type="probabilities")

     ## Demo of the plot function
     x <- rbind(matrix(rnorm(120), ncol = 2),
                matrix(rnorm(120, mean = 3), ncol = 2))
     y <- matrix(c(rep(1, 60), rep(-1, 60)))

     svp <- ksvm(x,y,type="C-svc")
     plot(svp,data=x)


     #### Use custom kernel 

     k <- function(x, y) (sum(x * y) + 1) * exp(-0.001 * sum((x - y)^2))
     class(k) <- "kernel"

     data(promotergene)

     ## train svm using custom kernel
     gene <- ksvm(Class~.,data=promotergene,kernel=k,C=10,cross=5)

     gene

     ## regression
     # create data
     x <- seq(-20,20,0.1)
     y <- sin(x)/x + rnorm(401,sd=0.03)

     # train support vector machine
     regm <- ksvm(x,y,epsilon=0.01,kpar=list(sigma=16),cross=3)
     plot(x,y,type="l")
     lines(x,predict(regm,x),col="red")

