Hyperparameters
| Hyperparameter | Type/Values | Default | Meaning |
| --- | --- | --- | --- |
| \* C | \<float\> | 1.0 | Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. |
| \* kernel | {"liblinear", "linear", "poly", "rbf", "sigmoid"} | linear | Specifies the kernel type to be used in the algorithm. It must be one of 'liblinear', 'linear', 'poly', 'rbf' or 'sigmoid'. |
| \* max_iter | \<int\> | 1e5 | Hard limit on iterations within solver, or -1 for no limit. |
| \* random_state | \<int\> | None | Controls the pseudo random number generation for shuffling the data for probability estimates. Ignored when probability is False. |
| max_depth | \<int\> | None | Specifies the maximum depth of the tree. |
| \* tol | \<float\> | 1e-4 | Tolerance for stopping criterion. |
| \* degree | \<int\> | 3 | Degree of the polynomial kernel function ('poly'). Ignored by all other kernels. |
| \* gamma | {"scale", "auto"} or \<float\> | scale | Kernel coefficient for 'rbf', 'poly' and 'sigmoid'. |
| split_criteria | {"impurity", "max_samples"} | impurity | Decides (only in multiclass classification) which column (class) to use to split the dataset in a node\*\*. |
| criterion | {"gini", "entropy"} | entropy | The function to measure the quality of a split (only used if max_features != num_features). |
| min_samples_split | \<int\> | 0 | The minimum number of samples required to split an internal node. 0 (the default) imposes no minimum. |
| max_features | \<int\>, \<float\> | None | The number of features to consider when looking for the split. |
| splitter | {"best", "random", "trandom", "mutual", "cfs", "fcbf", "iwss"} | "random" | The strategy used to choose the feature set at each node (only used if max_features < num_features). |
| normalize | \<bool\> | False | Whether standardization of features should be applied at each node, using the samples that reach it. |
| \* multiclass_strategy | {"ovo", "ovr"} | "ovo" | Strategy to use with multiclass datasets: "ovo" (one-vs-one) or "ovr" (one-vs-rest). |
* Hyperparameter used by the support vector classifier of every node
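As a quick illustration of the constraints listed in the table, the sketch below checks a set of node-classifier hyperparameters against the documented values. This is a hypothetical helper written for this page, not part of STree's API; the constraints simply mirror the table.

```python
# Hypothetical validator for the SVC hyperparameters marked with *
# above. The checks mirror the table; this is not STree's own code.
VALID_KERNELS = {"liblinear", "linear", "poly", "rbf", "sigmoid"}


def check_svc_params(C=1.0, kernel="linear", max_iter=int(1e5),
                     tol=1e-4, degree=3, gamma="scale"):
    if C <= 0:
        raise ValueError("C must be strictly positive")
    if kernel not in VALID_KERNELS:
        raise ValueError(f"unknown kernel {kernel!r}")
    if gamma not in ("scale", "auto") and not isinstance(gamma, (int, float)):
        raise ValueError("gamma must be 'scale', 'auto' or a float")
    return dict(C=C, kernel=kernel, max_iter=max_iter,
                tol=tol, degree=degree, gamma=gamma)


params = check_svc_params(kernel="rbf", C=2.0)
print(params["kernel"], params["C"])  # rbf 2.0
```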
** Splitting in a STree node
The decision function is applied to the dataset, and the distances from the samples to the hyperplanes are stored in a matrix. This matrix has as many columns as there are classes (in multiclass classification) or a single column for a binary dataset. In binary classification only one hyperplane is computed, so only one column is needed to store the distances of the samples to it. If three or more classes are present in the dataset, as many hyperplanes as classes are needed, and therefore one column per hyperplane.
In multiclass classification we have to decide which column to take into account to make the split; this depends on the split_criteria hyperparameter. If "impurity" is chosen, STree computes the information gain of every split candidate using each column and chooses the one that maximizes it; otherwise ("max_samples") STree chooses the column with the most samples of a predicted class (the column with the most positive values in it).
Once the column to use for the split is chosen, the algorithm separates the samples with positive distances to the hyperplane from the rest.
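The selection-and-split procedure described above can be sketched in plain Python. This is a simplified illustration of the "max_samples" criterion, not STree's actual implementation, and the helper names are made up: the column with the most positive distances is chosen, and samples with a positive distance in that column are separated from the rest.

```python
def choose_column_max_samples(distances):
    """Pick the column of the distance matrix with the most positive
    entries (the 'max_samples' split criterion)."""
    n_cols = len(distances[0])
    positives = [sum(1 for row in distances if row[c] > 0)
                 for c in range(n_cols)]
    return max(range(n_cols), key=positives.__getitem__)


def split_node(distances, column):
    """Separate sample indices by the sign of their distance to the
    chosen hyperplane."""
    up = [i for i, row in enumerate(distances) if row[column] > 0]
    down = [i for i, row in enumerate(distances) if row[column] <= 0]
    return up, down


# Toy distance matrix for a 3-class dataset: one column per hyperplane.
distances = [
    [ 1.2, -0.3, -0.8],
    [ 0.4, -1.1,  0.2],
    [-0.5,  0.9, -0.1],
    [ 0.7, -0.2, -0.6],
]
col = choose_column_max_samples(distances)  # column 0 has 3 positives
up, down = split_node(distances, col)
print(col, up, down)  # 0 [0, 1, 3] [2]
```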