# Evaluation and algorithms combination


The evaluation measures and search algorithms in FSinR are designed around a common interface, which makes them easy to use and to combine.

In this example, two evaluation measures (Inconsistent Examples consistency and Mutual Information) are used with the Sequential Forward Selection algorithm.

```r
library(FSinR)

sfs(iris, 'Species', IEConsistency)
#> $bestFeatures
#>      Sepal.Length Sepal.Width Petal.Length Petal.Width
#> [1,]            1           0            1           1
#> 
#> $bestFitness
#> [1] 1
sfs(iris, 'Species', mutualInformation)
#> $bestFeatures
#>      Sepal.Length Sepal.Width Petal.Length Petal.Width
#> [1,]            1           1            1           0
#> 
#> $bestFitness
#> [1] 1.584963
```
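Because every measure follows the same interface, a measure function can also be called directly to score a hand-picked feature subset. A minimal sketch, assuming the `measure(data, class, features)` signature used by this version of FSinR:

```r
library(FSinR)

# Score a fixed feature subset directly with a measure function,
# without running any search algorithm
# (assumes the measure(data, class, features) interface)
IEConsistency(iris, 'Species', c('Petal.Length', 'Petal.Width'))
```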

Since these functions share the same interface, a few lines of code can be saved by iterating over a list of measures:

```r
measures <- list(IEConsistency, mutualInformation)
for (measure in measures) {
  result <- sfs(iris, 'Species', measure)
  print(attr(measure, 'name'))
  print(result$bestFeatures)
}
#> [1] "Inconsistent Examples Consistency"
#>      Sepal.Length Sepal.Width Petal.Length Petal.Width
#> [1,]            1           0            1           1
#> [1] "Mutual Information"
#>      Sepal.Length Sepal.Width Petal.Length Petal.Width
#> [1,]            1           1            1           0
```

Algorithms also share the same interface, so they can be combined as well. In this example, Sequential Forward Selection and Las Vegas Wrapper are run with all of the previously mentioned evaluation measures:

```r
measures <- list(IEConsistency, mutualInformation)
algorithms <- list(sfs, lvw)
for (algorithm in algorithms) {
  for (measure in measures) {
    result <- algorithm(iris, 'Species', measure)
    print(paste("Algorithm: ", attr(algorithm, 'name')))
    print(paste("Evaluation measure: ", attr(measure, 'name')))
    print(result$bestFeatures)
  }
}
#> [1] "Algorithm:  Sequential Forward Selection"
#> [1] "Evaluation measure:  Inconsistent Examples Consistency"
#>      Sepal.Length Sepal.Width Petal.Length Petal.Width
#> [1,]            1           0            1           1
#> [1] "Algorithm:  Sequential Forward Selection"
#> [1] "Evaluation measure:  Mutual Information"
#>      Sepal.Length Sepal.Width Petal.Length Petal.Width
#> [1,]            1           1            1           0
#> [1] "Algorithm:  Las Vegas Wrapper"
#> [1] "Evaluation measure:  Inconsistent Examples Consistency"
#>      Sepal.Length Sepal.Width Petal.Length Petal.Width
#> [1,]            1           1            0           1
#> [1] "Algorithm:  Las Vegas Wrapper"
#> [1] "Evaluation measure:  Mutual Information"
#>      Sepal.Length Sepal.Width Petal.Length Petal.Width
#> [1,]            1           1            1           0
```
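Note that Las Vegas Wrapper explores feature subsets at random, so the features it selects can vary between runs (unlike the deterministic Sequential Forward Selection). A minimal sketch of making such a run repeatable with base R's `set.seed`:

```r
library(FSinR)

# Fix the RNG so the random subset sampling in lvw is reproducible
set.seed(42)
result <- lvw(iris, 'Species', IEConsistency)
result$bestFeatures
```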

Wrapper evaluation measures can also be combined with the search algorithms. In this example a knn model is used, since the iris problem is a classification problem. Depending on the chosen metric, the FSinR package automatically detects whether the objective is to maximize or minimize it. To tune the model, the resampling method is set to 10-fold cross-validation, the dataset is centered and scaled, accuracy is used as the metric, and a grid search over the k parameter is performed. The wrapper evaluation measure is then created and added to the list of measures:

```r
resamplingParams <- list(method = "cv", number = 10)
fittingParams <- list(preProc = c("center", "scale"),
                      metric = "Accuracy",
                      tuneGrid = expand.grid(k = c(1:20)))

wrapper <- wrapperGenerator("knn", resamplingParams, fittingParams)

measures <- list(IEConsistency, mutualInformation, wrapper)
```
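Since the generated wrapper follows the same measure interface, the extended list can be run through the same loop pattern as before. A sketch, assuming the generated wrapper also carries a `'name'` attribute like the built-in measures; note that the wrapper step trains and tunes a knn model for each candidate subset, so it is noticeably slower than the filter measures:

```r
# Run Sequential Forward Selection with filter and wrapper measures alike
for (measure in measures) {
  result <- sfs(iris, 'Species', measure)
  print(attr(measure, 'name'))
  print(result$bestFeatures)
}
```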