This is the last post in the series about the Support Vector Machine classifier. We already know the basics of SVM, our data is preprocessed, and we understand how the major hyperparameters influence the classifier. Now, let's choose proper hyperparameters for a given problem. This is done by **validation** or **cross-validation**. These techniques are very common in Machine Learning and are also helpful in finding a proper SVM model. As an example, we will build a classifier for the foreground/background estimation problem in the Flover project.


The two basic variables whose influence on the quality of the SVM classifier we have discussed are called *hyperparameters*, to distinguish them from the *parameters* optimized by the machine learning procedure itself. The two previous posts introduced the Support Vector Machine and data preprocessing for this classifier, and then focused on the complexity parameter and the gamma parameter of the Gaussian kernel. As in other Machine Learning techniques, these system variables must be properly adjusted to find the best model for our needs. In this article we will use model validation techniques to find an optimum SVM model for the foreground/background estimation problem in the Flover project.
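As a minimal sketch of what such a search looks like in practice, the snippet below tunes the complexity parameter `C` and the Gaussian-kernel `gamma` with 5-fold cross-validation using scikit-learn's `GridSearchCV`. The synthetic dataset and the particular grid values are stand-ins, not the actual Flover data or the final grid used in this series.

```python
# Hyperparameter selection for an RBF-kernel SVM via k-fold cross-validation.
# The dataset and grid values are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy two-class problem standing in for foreground/background samples.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# Scale features first (as covered in the preprocessing post), then fit
# an SVM with the Gaussian (RBF) kernel.
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

# Grid over the two hyperparameters discussed above.
param_grid = {
    "svc__C": [0.1, 1, 10, 100],
    "svc__gamma": [0.01, 0.1, 1],
}

# 5-fold cross-validation: each candidate pair (C, gamma) is trained on
# 4/5 of the data and scored on the held-out fifth, five times over.
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)  # hyperparameters with the best mean CV accuracy
print(search.best_score_)   # that best mean accuracy across the 5 folds
```

The pipeline matters here: scaling is fit inside each training fold, so no information from the validation fold leaks into preprocessing.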