{"id":521,"date":"2016-05-01T23:00:14","date_gmt":"2016-05-01T21:00:14","guid":{"rendered":"http:\/\/tomaszkacmajor.pl\/?p=521"},"modified":"2020-05-17T16:17:00","modified_gmt":"2020-05-17T14:17:00","slug":"svm-model-selection2","status":"publish","type":"post","link":"https:\/\/tomaszkacmajor.pl\/index.php\/2016\/05\/01\/svm-model-selection2\/","title":{"rendered":"SVM model selection &#8211; how to adjust all these knobs pt. 2"},"content":{"rendered":"\n<p>This is the last post in series about Support Vector Machine classifier. We already feel the <a href=\"http:\/\/tomaszkacmajor.pl\/index.php\/2016\/04\/17\/support-vector-machine\/\" target=\"_blank\" rel=\"noopener noreferrer\">basics<\/a> of SVM. We have our data <a href=\"http:\/\/tomaszkacmajor.pl\/index.php\/2016\/04\/24\/data-preprocessing\/\" target=\"_blank\" rel=\"noopener noreferrer\">preprocessed<\/a>. Finally, we know the influence of some major <a href=\"http:\/\/tomaszkacmajor.pl\/index.php\/2016\/04\/24\/svm-model-selection\/\" target=\"_blank\" rel=\"noopener noreferrer\">hyperparameters<\/a> on the classifier. Now, let&#8217;s choose proper hyperparameters for a given problem. This is done by <strong>validation<\/strong> or <strong>cross-validation<\/strong>. These techniques are very common in Machine Learning and are also helpful in finding a proper SVM model. 
The example covers building the classifier for the <a href=\"http:\/\/tomaszkacmajor.pl\/index.php\/2016\/04\/17\/eliminacja-tla-obrazu\/\" target=\"_blank\" rel=\"noopener noreferrer\">foreground\/background estimation<\/a> problem in the <a href=\"http:\/\/tomaszkacmajor.pl\/index.php\/2016\/03\/19\/flover-project-4\/\" target=\"_blank\" rel=\"noopener noreferrer\">Flover<\/a> project.<br><\/p>\n\n\n\n<!--more-->\n\n\n\n<h2 class=\"wp-block-heading\">Is it a &#8220;black art&#8221;?<\/h2>\n\n\n\n<h4 class=\"wp-block-heading\">Or can we automate something?<\/h4>\n\n\n\n<p>The most common hyperparameters to tune in an SVM model are <strong>complexity<\/strong> (margin softness) and <strong>gamma<\/strong> (or, interchangeably, sigma), which controls the width of the Gaussian kernel. But for more complicated Machine Learning architectures there are many more hyperparameters to optimize. For example, for deep neural networks one has to choose a proper learning rate, learning rate schedule, number of training iterations, number of hidden layers, or momentum <a href=\"#Yoshua-Bengio\">[4]<\/a>. Do we have a fixed routine for finding these values, or do we have to rely on our experience with a given problem and architecture?<\/p>\n\n\n\n<p>We can list some common methods of finding hyperparameters for a classifier.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><strong>Manual Search<\/strong><\/li><\/ul>\n\n\n\n<p>When we have knowledge of the given topic and some intuition about the chosen classifier, we can manually search the space of hyperparameters. We simply take some set of parameters and train the model. Observing the generalization error, we tweak the model until we are satisfied with the results. 
It may sound like an academic approach, but it is still very common in industry as well.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><strong>Grid Search<\/strong><\/li><\/ul>\n\n\n\n<p>Here, we select ranges for our hyperparameters and choose sampling intervals within them. This gives us a grid of parameter sets, which lets us perform an exhaustive search: we simply run a model training for every parameter set in the grid. It&#8217;s quite computationally expensive, especially when the training takes a long time and there are many hyperparameters. On the other hand, such a process is very easy to parallelize.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><strong>Random Search<\/strong><\/li><\/ul>\n\n\n\n<p>As in grid search, we have to pick hyperparameter ranges. This time, the values of the parameters are chosen randomly. This method is usually faster than grid search and can also be parallelized.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><strong>Automated optimization<\/strong><\/li><\/ul>\n\n\n\n<p>Researchers constantly propose new methods for the automatic search of hyperparameters. A new set of parameters is chosen after each iteration of training in order to converge to the best available set. The most common method is Bayesian optimization <a href=\"#Bayesian-optimization\">[6]<\/a>. A gradient-based method, specifically for SVM, is presented in <a href=\"#gradient-based\">[7]<\/a>. There are also gradient-free methods like Nelder-Mead optimization, and evolutionary methods like genetic algorithms or particle swarm optimization.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Validation and cross-validation<\/h2>\n\n\n\n<p>It&#8217;s very important to split our data set into <strong>training<\/strong> samples and <strong>testing<\/strong> samples. This is a very common approach to prevent our classifier from <strong>overfitting<\/strong>. 
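<\/p>\n\n\n\n<p>As an illustration, the grid and random search strategies listed above can be sketched in a few lines of Python. The scoring function below is a made-up stand-in for real model training, with an invented optimum at C=20, gamma=0.08; the ranges are assumptions, not values from this post.<\/p>\n\n\n\n

```python
import itertools
import random

# Toy stand-in for training a model and returning its validation accuracy.
# The quadratic shape and the optimum at C=20, gamma=0.08 are invented
# purely so the example has something to optimize.
def validation_accuracy(C, gamma):
    return 1.0 - ((C - 20) ** 2 / 1e4 + (gamma - 0.08) ** 2 * 10)

# Grid search: score every combination on a fixed grid of sampled values.
C_grid = [0.1, 1, 10, 20, 100]
gamma_grid = [0.01, 0.08, 0.5, 1.0]
best_grid = max(itertools.product(C_grid, gamma_grid),
                key=lambda p: validation_accuracy(*p))

# Random search: draw a fixed budget of candidates at random from the ranges.
random.seed(0)
candidates = [(random.uniform(0.1, 100), random.uniform(0.01, 1.0))
              for _ in range(20)]
best_random = max(candidates, key=lambda p: validation_accuracy(*p))
```

\n\n\n\n<p>Both loops are embarrassingly parallel: each candidate can be scored independently, which is why these methods scale out so easily.<\/p>\n\n\n\n<p>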
A model is overfit when it shows excellent performance on the training set but doesn&#8217;t generalize well to previously unseen data. That is why we set aside a testing set to check the model&#8217;s real performance. When we observe the learning curves for the chosen model, it&#8217;s quite normal that at some point the testing (generalization) error reaches its minimum while the training error continues to decrease. In theory, we should stop the learning at this point to prevent overfitting.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><a href=\"http:\/\/tomaszkacmajor.pl\/wp-content\/uploads\/2016\/05\/1220px-Overfitting_svg.svg_-1.png\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/tomaszkacmajor.pl\/wp-content\/uploads\/2016\/05\/1220px-Overfitting_svg.svg_-1.png\" alt=\"1220px-Overfitting_svg.svg\" class=\"wp-image-564\" width=\"305\" height=\"225\"\/><\/a><figcaption>By <a href=\"\/\/commons.wikimedia.org\/wiki\/User:Gringer\">Gringer<\/a> &#8211; Own work, <a href=\"http:\/\/creativecommons.org\/licenses\/by\/3.0\">CC BY 3.0<\/a>, <a rel=\"noreferrer noopener\" href=\"https:\/\/commons.wikimedia.org\/w\/index.php?curid=2959742\" target=\"_blank\">wikipedia<\/a><\/figcaption><\/figure><\/div>\n\n\n\n<h4 class=\"wp-block-heading\">Validation<\/h4>\n\n\n\n<p>So, we can evaluate different sets of hyperparameters until we find the minimum testing error. It turns out that this way we can overfit to the testing data! One can say that some information about the test set leaks into the model search algorithm. Therefore, we should extract one more set, called the validation set. Training is still performed on the training set, while hyperparameters are chosen based on the error on the validation set. 
Then, when we are satisfied, we can perform a final check on the testing set.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><a href=\"http:\/\/tomaszkacmajor.pl\/wp-content\/uploads\/2016\/05\/validation-set.jpeg\" rel=\"attachment wp-att-569\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/tomaszkacmajor.pl\/wp-content\/uploads\/2016\/05\/validation-set.jpeg\" alt=\"validation-set\" class=\"wp-image-569\" width=\"518\" height=\"163\" srcset=\"https:\/\/tomaszkacmajor.pl\/wp-content\/uploads\/2016\/05\/validation-set.jpeg 2070w, https:\/\/tomaszkacmajor.pl\/wp-content\/uploads\/2016\/05\/validation-set-300x94.jpeg 300w, https:\/\/tomaszkacmajor.pl\/wp-content\/uploads\/2016\/05\/validation-set-768x242.jpeg 768w, https:\/\/tomaszkacmajor.pl\/wp-content\/uploads\/2016\/05\/validation-set-1024x322.jpeg 1024w\" sizes=\"auto, (max-width: 518px) 100vw, 518px\" \/><\/a><figcaption>from <a rel=\"noreferrer noopener\" href=\"http:\/\/www.intechopen.com\/books\/advances-in-data-mining-knowledge-discovery-and-applications\/selecting-representative-data-sets\" target=\"_blank\">T.Borovicka &#8211; &#8220;Selecting Representative Data Sets&#8221;<\/a><\/figcaption><\/figure><\/div>\n\n\n\n<p>Unfortunately, with this method, by partitioning the data into three sets we lose some valuable samples which could otherwise be used for training. This is where cross-validation comes in.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Cross-validation<\/h4>\n\n\n\n<p>We get rid of the validation set, leaving the test set for a final check. The training set is divided into <em>k<\/em> parts. Then, we train the model on <em>k<\/em>-1 parts while validation is performed on the remaining one. This procedure is repeated <em>k<\/em> times, each time with a different validation part. The overall performance of a model is the average of these <em>k<\/em> validations. This is the most common type of cross-validation, named <em>k<\/em>-fold. 
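<\/p>\n\n\n\n<p>The <em>k<\/em>-fold procedure described above can be sketched in plain Python. The scoring callback here is a dummy that only demonstrates the fold plumbing; in a real setting it would train an SVM on the training part and return the accuracy on the validation part.<\/p>\n\n\n\n

```python
import random

def k_fold_cv(samples, k, train_and_score, seed=0):
    # Shuffle indices once, split them into k roughly equal folds,
    # and let each fold play the validation set exactly once.
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        val = [samples[j] for j in folds[i]]
        train = [samples[j] for f in folds[:i] + folds[i + 1:] for j in f]
        scores.append(train_and_score(train, val))
    # The model quality is reported as the average over the k folds.
    return sum(scores) / k

# Dummy scorer that just reports the fraction of data it was trained on:
# with k=5 every model trains on 4/5 of the samples, so the average is 0.8.
data = list(range(100))
score = k_fold_cv(data, 5, lambda tr, va: len(tr) / (len(tr) + len(va)))
```

\n\n\n\n<p>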
Usually <em>k<\/em> is set to 5 or 10. It&#8217;s a computationally expensive method, but it is very helpful if we don&#8217;t have much data and can&#8217;t afford to extract an additional validation set.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><a href=\"http:\/\/tomaszkacmajor.pl\/wp-content\/uploads\/2016\/05\/cross-validation.png\" rel=\"attachment wp-att-567\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/tomaszkacmajor.pl\/wp-content\/uploads\/2016\/05\/cross-validation.png\" alt=\"cross-validation\" class=\"wp-image-567\" width=\"311\" height=\"237\" srcset=\"https:\/\/tomaszkacmajor.pl\/wp-content\/uploads\/2016\/05\/cross-validation.png 415w, https:\/\/tomaszkacmajor.pl\/wp-content\/uploads\/2016\/05\/cross-validation-300x228.png 300w\" sizes=\"auto, (max-width: 311px) 100vw, 311px\" \/><\/a><figcaption>from <a rel=\"noreferrer noopener\" href=\"http:\/\/www.edureka.co\/blog\/implementation-of-decision-tree\/\" target=\"_blank\">www.edureka.co<\/a><\/figcaption><\/figure><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">SVM model selection<\/h2>\n\n\n\n<p>Let&#8217;s go back to my dataset from the <a rel=\"noopener noreferrer\" href=\"http:\/\/tomaszkacmajor.pl\/index.php\/2016\/03\/19\/flover-project-4\/\" target=\"_blank\">Flover<\/a> project. It consists of <a rel=\"noopener noreferrer\" href=\"http:\/\/tomaszkacmajor.pl\/index.php\/2016\/04\/03\/slic-superpiksele\/\" target=\"_blank\">superpixels<\/a> with features like color, color variance and position in the image. They are labeled as foreground or background (FG\/BG). I chose to perform a <strong>grid search<\/strong> because of its simplicity. I divided the dataset into 10000 learning samples, 5000 validation samples and 5000 testing samples. I had many more samples available, so I could set aside a <strong>validation<\/strong> set without consequences. Below are the obtained results. 
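<\/p>\n\n\n\n<p>A disjoint split like the 10000\/5000\/5000 one used here can be sketched as follows. The helper name, the seed and the placeholder data are arbitrary illustrative choices, not details of the original experiment.<\/p>\n\n\n\n

```python
import random

# Hypothetical helper (not from the original experiment): shuffle once,
# then carve the data into disjoint training, validation and testing sets.
def split_dataset(samples, n_train, n_val, n_test, seed=0):
    assert n_train + n_val + n_test <= len(samples)
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:n_train + n_val + n_test]
    return train, val, test

# The sizes used in the post: 10000 learning, 5000 validation, 5000 testing.
data = list(range(20000))
train, val, test = split_dataset(data, 10000, 5000, 5000)
```

\n\n\n\n<p>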
I present the quality of FG\/BG estimation in percent (accuracy of FG\/BG prediction) for different hyperparameters <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/tomaszkacmajor.pl\/wp-content\/ql-cache\/quicklatex.com-f34f74d98915e33f37a086f8cbfb996a_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#67;\" title=\"Rendered by QuickLaTeX.com\" height=\"12\" width=\"14\" style=\"vertical-align: 0px;\"\/> and <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/tomaszkacmajor.pl\/wp-content\/ql-cache\/quicklatex.com-4de02fc502ed5dbd15f371728ea270a3_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#92;&#103;&#97;&#109;&#109;&#97;\" title=\"Rendered by QuickLaTeX.com\" height=\"12\" width=\"10\" style=\"vertical-align: -4px;\"\/> discussed <a rel=\"noopener noreferrer\" href=\"http:\/\/tomaszkacmajor.pl\/index.php\/2016\/04\/24\/svm-model-selection\/\" target=\"_blank\">before<\/a>. Green fields indicate the best results, red &#8211; the worst.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><a href=\"http:\/\/tomaszkacmajor.pl\/wp-content\/uploads\/2016\/04\/model_results.png\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/tomaszkacmajor.pl\/wp-content\/uploads\/2016\/04\/model_results.png\" alt=\"model_results\" class=\"wp-image-536\" width=\"372\" height=\"318\" srcset=\"https:\/\/tomaszkacmajor.pl\/wp-content\/uploads\/2016\/04\/model_results.png 744w, https:\/\/tomaszkacmajor.pl\/wp-content\/uploads\/2016\/04\/model_results-300x256.png 300w\" sizes=\"auto, (max-width: 372px) 100vw, 372px\" \/><\/a><\/figure><\/div>\n\n\n\n<p>We can see an interesting phenomenon. The model with the best learning performance &#8211; 98.7% &#8211; is clearly overfit, because its validation accuracy equals 84.7%, which is certainly not the best result. 
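<\/p>\n\n\n\n<p>The selection rule boils down to taking the hyperparameter set with the highest <em>validation<\/em> accuracy, never the highest training accuracy. A sketch with a hypothetical subset of the results grid: only the accuracy pairs of the overfit model (98.7 \/ 84.7) and of the selected model (90 \/ 87.5) come from this post; the other row and the overfit model&#8217;s parameters are invented for illustration.<\/p>\n\n\n\n

```python
# (C, gamma) -> (training accuracy, validation accuracy), in percent.
# Only the two accuracy pairs marked below come from the post's results;
# the remaining numbers are invented placeholders.
results = {
    (1000, 5.0): (98.7, 84.7),  # best training accuracy, clearly overfit
    (20, 0.08): (90.0, 87.5),   # the model actually selected in the post
    (0.1, 0.01): (80.0, 79.0),  # hypothetical underfit corner of the grid
}

# Select by validation accuracy (index 1), never by training accuracy.
best_params = max(results, key=lambda p: results[p][1])
```

\n\n\n\n<p>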
The SVM model which is best here has the following hyperparameters: <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/tomaszkacmajor.pl\/wp-content\/ql-cache\/quicklatex.com-4de02fc502ed5dbd15f371728ea270a3_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#92;&#103;&#97;&#109;&#109;&#97;\" title=\"Rendered by QuickLaTeX.com\" height=\"12\" width=\"10\" style=\"vertical-align: -4px;\"\/>=0.08, <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/tomaszkacmajor.pl\/wp-content\/ql-cache\/quicklatex.com-f34f74d98915e33f37a086f8cbfb996a_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#67;\" title=\"Rendered by QuickLaTeX.com\" height=\"12\" width=\"14\" style=\"vertical-align: 0px;\"\/>=20. <strong>Learning<\/strong> accuracy equals 90% and <strong>validation<\/strong> accuracy &#8211; 87.5%. The performance obtained on the final <strong>testing<\/strong> set is very similar to the validation accuracy &#8211; 87.6%.<\/p>\n\n\n\n<p>Sources:<br>1. <a href=\"http:\/\/scikit-learn.org\/stable\/modules\/grid_search.html#grid-search\" target=\"_blank\" rel=\"noopener noreferrer\">Documentation<\/a> of the scikit-learn library for Python.<br>2. Ben-Hur, Weston &#8211; &#8220;<a href=\"http:\/\/pyml.sourceforge.net\/doc\/howto.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">A User\u2019s Guide to Support Vector Machines<\/a>&#8221;.<br>3. &#8220;Guideline to select the hyperparameters in Deep Learning&#8221; on <a href=\"http:\/\/stats.stackexchange.com\/questions\/95495\/guideline-to-select-the-hyperparameters-in-deep-learning\" target=\"_blank\" rel=\"noopener noreferrer\">StackExchange<\/a><br>4. Yoshua Bengio &#8211; &#8220;<a href=\"http:\/\/arxiv.org\/pdf\/1206.5533v2.pdf\" target=\"_blank\" rel=\"noopener noreferrer\" name=\"Yoshua-Bengio\">Practical Recommendations for Gradient-Based Training of Deep Architectures<\/a>&#8221;<br>5. 
Hyperparameter optimization on <a href=\"https:\/\/en.wikipedia.org\/wiki\/Hyperparameter_optimization\" target=\"_blank\" rel=\"noopener noreferrer\">Wikipedia<\/a><br>6. Jasper Snoek et al. &#8211; &#8220;<a href=\"https:\/\/dash.harvard.edu\/handle\/1\/11708816\" target=\"_blank\" rel=\"noopener noreferrer\" name=\"Bayesian-optimization\">Practical Bayesian Optimization of Machine Learning Algorithms<\/a>&#8221;<br>7. Olivier Chapelle et al. &#8211; &#8220;<a href=\"http:\/\/www.chapelle.cc\/olivier\/pub\/mlj02.pdf\" target=\"_blank\" rel=\"noopener noreferrer\" name=\"gradient-based\">Choosing multiple parameters for support vector machines<\/a>&#8221;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This is the last post in the series about the Support Vector Machine classifier. We already know the basics of SVM. We have our data preprocessed. Finally, we know the influence of the major hyperparameters on the classifier. Now, let&#8217;s choose proper hyperparameters for a given problem. This is done by validation or cross-validation. These techniques are [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":558,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[17,12,11,14,15],"class_list":["post-521","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-flover-project","tag-cross-validation","tag-daj-sie-poznac","tag-flover","tag-machine-learning","tag-svm"],"blocksy_meta":[]}