ORION Package Vignette

LM Schäfer, L Lausser, HA Kestler

2021-06-07

Introduction

The ORION package is designed for screening feature representations for potential ordinal relations among distinct classes. It provides evidence for hypotheses of type \(stage_1 \prec stage_2 \prec stage_3\) by evaluating their reflections in feature space.

ORION can be applied in an explorative way, which allows for revealing new unknown relations, leading to new hypotheses. The package is optimized for exhaustive screens through all possible permutations within all subgroups of class labels.

Overall, ORION allows for:

  1. confirming whether proposed relations are reflected or not
  2. hypothesizing new ordinal relations (explorative analysis)
  3. filtering, organizing and analyzing ordinal relations

The underlying algorithm is an extended version of the CASCADES algorithm published in L Lausser*, LM Schäfer*, LR Schirra*, R Szekely, F Schmid, and HA Kestler, Assessing phenotype order in molecular data. Sci Rep 9, 1-10 (2019) *equal contribution.

Potential ordinal relations are identified in cross-validation experiments with ordinal classifier cascades. Cascades are in general sensitive to incorrect assumptions on the class order, which makes them suitable for screening. Cascades that pass a threshold on the minimal class-wise sensitivity are seen as potential candidates. ORION allows cascades to be built from any type of binary classifier.
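The minimal class-wise sensitivity criterion itself is easy to illustrate. The following toy example uses only base R (no ORION functions) and computes, for each class, the fraction of its samples that are predicted correctly:

```r
# toy example in base R (independent of ORION): class-wise sensitivity
truth <- c(0, 0, 0, 1, 1, 1, 2, 2, 2)
pred  <- c(0, 0, 1, 1, 1, 1, 2, 2, 1)

# for each class k: fraction of class-k samples predicted as class k
sens <- sapply(sort(unique(truth)), function(k) mean(pred[truth == k] == k))
names(sens) <- sort(unique(truth))

min(sens)  # the minimal class-wise sensitivity used for thresholding
```

A cascade (or here, a plain classifier) passes a threshold t exactly when min(sens) > t.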

Notation

A cascade is written as a character string, with the relation between class labels given as ‘>’. For example, the cascade denoted as ‘1>2>3’ indicates that class 1 is neighbored by class 2, which itself is neighbored by class 3. The class labels are consecutive numeric values from 0 to (N-1), for N classes.
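Because a cascade is simply a character string, its class sequence can be recovered with base R if needed (an illustrative sketch; ORION handles this internally):

```r
# split the cascade string '1>2>3' into its ordered class labels
cascade <- "1>2>3"
classes <- as.integer(strsplit(cascade, ">", fixed = TRUE)[[1]])
classes                         # 1 2 3

# and assemble it back into the string notation
paste(classes, collapse = ">")  # "1>2>3"
```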

Class overview

Function overview

ORION provides basic functions to search for general ordinal (sub)cascades and the reflection of class orders in labeled data, as well as a TunePareto wrapper for the ordinal classifier cascade.

Additionally, there are several functions that help to summarize and get an overview of the results.

  1. Filtering
    • dropSize()
    • dropThreshold()
    • dropSets()
    • keepSize()
    • keepThreshold()
    • keepSets()
  2. Summary
    • summarySubcascades()
    • summaryClasses()
    • summaryGroupwise()
  3. Transformation
    • as.groupwise()
    • as.subcascades()
    • as.edgedataframe()
  4. Plotting (S3 methods)
    • plot.Subcascades()
    • plot.ConfusionTable()
    • plot.Conf()

Example

Analyze a dataset

Data preparation and preprocessing

We are going to show the workflow of the ORION package by using the employee selection dataset, an ordinal, real-world dataset donated by Dr. Arie Ben David (Holon Inst. of Technology/Israel) and downloaded from https://www.cs.waikato.ac.nz/ml/weka/datasets.html. This dataset contains measurements of 488 job applicants. Based on 4 different features, corresponding to psychometric test results and interviews with the candidates, a score was defined that shows how well the candidate fits the job. The score is used as class label, and it is expected that the order of the scores (minimal score < … < maximal score) is reflected in the candidate evaluations measured in the 4 features.

# Load and evaluate the dataset
data(esl_org)
#names of the features
colnames(esl_org)
#> [1] "in1"  "in2"  "in3"  "in4"  "out1"
#dimensions of the dataset
dim(esl_org)
#> [1] 488   5

The out1 variable is used as class label. One can see that 9 different scores are given. Based on the number of samples per class it might be reasonable to merge several scores into one group. We hence define a best and a worst score group, combining the best three and the worst three scores, respectively.

# Extract and define class labels
labels = esl_org[,5]-1
table(labels)
#> labels
#>   0   1   2   3   4   5   6   7   8 
#>   2  12  38 100 116 135  62  19   4
#maybe merge subgroups
labels[labels<=2] = 2
labels[labels>=6] = 6
table(labels)
#> labels
#>   2   3   4   5   6 
#>  52 100 116 135  85

One can observe that after merging the classes the labels no longer start at 0, which is however required for the analysis.

#redefine the labels such that they start at 0 and are consecutive
labels = labels-2
table(labels)
#> labels
#>   0   1   2   3   4 
#>  52 100 116 135  85

After the class labels have been defined, one can define the data matrix.

data = esl_org[,1:4]

The data matrix, with features as columns and samples as rows, and the label vector are the input for the cascades analysis.

Applying the cascade algorithm

We want to search for (sub)cascades by performing 10x10 cross-validation experiments based on binary linear support vector machines.

The first step within the analysis is the generation of a fold list. This is done using the generateCVRuns command from the TunePareto package.

#generate a fold list
library(TunePareto)
foldList = generateCVRuns(  labels= labels,
                            ntimes      = 10,
                            nfold       = 10,
                            leaveOneOut = FALSE,
                            stratified  = TRUE)

Within the next step, an object of type PredictionMap is generated as the basis for the further analysis. This function requires the samples to be in the rows and the features in the columns. The PredictionMap object consists of the prediction matrix and the corresponding meta data. Each row in the prediction matrix corresponds to one binary base classifier. The elements show the class predictions for the samples of the run and fold given in the corresponding column of the meta data. The corresponding label row states the original class of the analyzed sample.

#generate the prediction map
genMap = gen.predictionMap(data, labels, foldList = foldList, 
                            classifier = tunePareto.svm(), kernel='linear')
#> Loading required package: e1071

The printed output shows that an additional package is loaded. The e1071 package is loaded because the tunePareto.svm() function uses it. If the parameter parallel is set to TRUE, a parallel evaluation is performed using the doParallel package.

The prediction map is made up of two different elements.

names(genMap)
#> [1] "pred" "meta"

Both elements are matrices. We can look at specific elements, e.g. columns 1, 11 and 21 of the meta element.

genMap$meta[,c(1,11,21)]
#>       [,1] [,2] [,3]
#> label    0    0    0
#> run      1    1    1
#> fold     1    2    4

The meta information shows that those columns correspond to the first run of folds 1, 2 and 4, and that the analyzed samples belong to class 0.

We can now check the first five rows of the corresponding prediction element.

genMap$pred[1:5,c(1,11,21)]
#>        [,1] [,2] [,3]
#> [0vs0]   -1   -1   -1
#> [0vs1]    0    0    1
#> [0vs2]    0    0    2
#> [0vs3]    0    0    0
#> [0vs4]    0    0    0

The row names of this matrix show the classes of the binary base classifiers. The elements are the prediction results of a specific training. Rows corresponding to base classifiers that would separate a class from itself consist of -1. Those rows are not used within the analysis.

Furthermore, in this example we can see that, in folds 1 and 2 of the first run, 0 is predicted for the given sample using the base classifiers 0 vs. 1, 0 vs. 2, 0 vs. 3 and 0 vs. 4. This means that in those cases the base classifier has classified the sample as its first class. In contrast, the sample is classified as the base classifier's second class in fold 4 of the first run; as this is class 1 for 0 vs. 1 and class 2 for 0 vs. 2, the values are 1 and 2.
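The row-name encoding of the base classifiers can likewise be unpacked with base R (an illustrative sketch, not an ORION function):

```r
# split a row name such as "[0vs1]" into the two classes of the base classifier
rn  <- "[0vs1]"
cls <- as.integer(strsplit(gsub("\\[|\\]", "", rn), "vs", fixed = TRUE)[[1]])
cls  # 0 1
```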

We can search for all subcascades that have a minimal class-wise sensitivity higher than a given threshold, such as 0.6. It is also possible to search only for subcascades of a given size. If the parameters are not set, the default is to return all subcascades from the maximal length down to a length of 2 that pass a threshold of 0.

#analyse the subcascades
subcascades = subcascades(genMap, thresh=0.6)
print(subcascades, max = 10)
#> $size.5
#>               pos.0     pos.1    pos.2     pos.3     pos.4
#> 0>1>2>3>4 0.6519231 0.7550000 0.787931 0.8081481 0.7870588
#> 4>3>2>1>0 0.7905882 0.8074074 0.787069 0.7550000 0.6461538
#> 
#> $size.4
#>             pos.0     pos.1     pos.2     pos.3
#> 4>3>2>0 0.7905882 0.8074074 0.8577586 0.9211538
#> 4>3>2>1 0.7905882 0.8074074 0.7870690 0.8190000
#>  [ reached getOption("max.print") -- omitted 10 rows ]
#> 
#> $size.3
#>           pos.0     pos.1     pos.2
#> 4>2>0 0.9752941 0.9517241 0.9211538
#> 0>2>4 0.9192308 0.9508621 0.9764706
#> 0>2>3 0.9192308 0.8577586 0.8992593
#>  [ reached getOption("max.print") -- omitted 23 rows ]
#> 
#> $size.2
#>         pos.0     pos.1
#> 0>4 1.0000000 1.0000000
#> 4>0 1.0000000 1.0000000
#> 0>3 1.0000000 1.0000000
#> 3>0 1.0000000 1.0000000
#> 1>4 0.9920000 0.9882353
#>  [ reached getOption("max.print") -- omitted 15 rows ]
#> 
#> attr(,"class")
#> [1] "Subcascades"

To get the model of one of these cascades, or to test the classification performance of a specific class order in advance, the package provides a TunePareto wrapper for an ordinal classifier cascade. This model can be used to evaluate the decision boundaries, to calculate the performance of one specific class order, or to classify new, unknown samples.

# create a trained classifier model for the top-ranked cascade of length 5
model <- trainTuneParetoClassifier( 
          classifier  = tunePareto.occ( base.classifier = tunePareto.svm()),
          trainData   = data,
          trainLabels = labels,
          class.order = as.character(c(0,1,2,3,4)),
          kernel      = 'linear',
          cost        = 1)
          
# predict labels
prediction <- predict(object = model, newdata = data)
# calculate the class-wise sensitivities
sensitivities = table(prediction[prediction==labels])/table(labels)
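Note that the one-liner above only works if every class is predicted correctly at least once; a more defensive version (a hypothetical helper, not part of ORION) makes the computation explicit:

```r
# hypothetical helper: class-wise sensitivities, returning 0 (rather than
# dropping the class) when a class is never predicted correctly
classwise_sens <- function(truth, pred) {
  classes <- sort(unique(truth))
  sens <- sapply(classes, function(k) mean(pred[truth == k] == k))
  names(sens) <- classes
  sens
}

# small self-contained check: class 1 is never predicted correctly
classwise_sens(truth = c(0, 0, 1, 1), pred = c(0, 1, 2, 2))  # 0.5 0.0
```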

Analyze the results

Overviews

A general summary of the subcascades characteristics shows that 60 cascades are returned, which means that the threshold of 0.6 has already sorted out 260 of the 320 possibilities. The 60 cascades are distributed over all lengths. Two cascades have the maximal length of 5, and as 20 cascades of length 2 are found we know that all pairwise cascades were returned. This part of the summary can also be calculated on its own by summarySubcascades(subcascades). Looking at the occurrence of classes by size (also available via summaryClasses(subcascades)), it can be seen that class 2 is underrepresented within the cascades of lengths 3 and 4. As longer cascades are less likely to occur by chance and hence are of specific interest, their performances are also given in this summary.

summary(subcascades)
#> -------------------------------------------------------------------------------
#> Overall:
#> Number of cascades (number): 60
#> Minimal classwise sensitivity (min.class.sens): 0.642
#> Maximal cascades length: 5
#> Number of involved classes: 5
#> 
#> Classes: 
#> [1] "cl.0" "cl.1" "cl.2" "cl.3" "cl.4"
#> -------------------------------------------------------------------------------
#> Sorted by size:
#> 
#>        number min.class.sens
#> size.5      2          0.646
#> size.4     12          0.646
#> size.3     26          0.631
#> size.2     20          0.646
#> ---------------------------------------------
#> Occurrence of classes:
#> 
#>     cl.0 cl.1 cl.2 cl.3 cl.4
#> all   36   36   32   36   36
#> 
#> Occurrence of classes by size:
#> 
#>        cl.0 cl.1 cl.2 cl.3 cl.4
#> size.5    2    2    2    2    2
#> size.4   10   10    8   10   10
#> size.3   16   16   14   16   16
#> size.2    8    8    8    8    8
#> ---------------------------------------------
#> Longest cascades(size=5):
#> 
#>           pos.0 pos.1 pos.2 pos.3 pos.4
#> 0>1>2>3>4 0.652 0.755 0.788 0.808 0.787
#> 4>3>2>1>0 0.791 0.807 0.787 0.755 0.646
#> -------------------------------------------------------------------------------
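The 320 possibilities mentioned above follow directly from the combinatorics of 5 classes: for each cascade length k there are choose(5, k) class subsets, each of which can be ordered in factorial(k) ways. This can be verified in base R:

```r
# number of possible cascades of each length for N = 5 classes
N <- 5
k <- 2:N
counts <- choose(N, k) * factorial(k)
names(counts) <- paste0("size.", k)
counts       # 20, 60, 120 and 120 cascades of sizes 2 to 5
sum(counts)  # 320 possibilities in total
```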

As the given dataset comprises scores, we expected the order ‘0>1>2>3>4’ to be reflected within the given feature space. We can see that only this order, together with its reverse, passes a minimal sensitivity criterion of 0.6 while requiring all classes to be part of the cascade. The lack of difference between the forward and backward direction shows the symmetry of the dataset.

Before we analyze whether the found subcascades also follow the expected order, we look into the class subgroup characteristics.

To analyze whether there are specific class subgroups that are found in only one or two directions, the function summaryGroupwise counts how many permutations of each subgroup are found.

mat <- summaryGroupwise(subcascades)
mat
#>        perm.2 perm.3 perm.4
#> size.5      1      0      0
#> size.4      4      0      1
#> size.3      4      6      0
#> size.2     10      0      0

As there are 2 cascades of length 5, corresponding to the forward and backward direction, there is one group of classes of which two permutations are found. For cascades of length 2 one can see that there are 10 groups, corresponding to the number of pairwise combinations, and all of them are found in 2 different directions. For cascades of lengths 3 and 4 one can see that there are 10 and 5 groups of classes, respectively.
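As a consistency check, weighting each column of this table by its permutation count recovers the cascade counts per size reported by summary(subcascades); the matrix is re-entered manually here so the snippet is self-contained:

```r
# the summaryGroupwise matrix printed above, re-entered manually
mat <- rbind(size.5 = c(1, 0, 0),
             size.4 = c(4, 0, 1),
             size.3 = c(4, 6, 0),
             size.2 = c(10, 0, 0))
colnames(mat) <- c("perm.2", "perm.3", "perm.4")

# groups per size times orders per group = cascades per size
mat %*% c(2, 3, 4)  # 2, 12, 26 and 20, as in the summary sorted by size
```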

As there is one subgroup found which is reflected in 4 different orders we might ask which classes correspond to this group.

To go into more detail about those subgroups consisting of the same classes one can have a look in the corresponding orders and their sensitivity performance measure.

groupwise <- as.groupwise(subcascades)
groupwise$size.4
#> $`1-2-3-4`
#>             pos.0     pos.1     pos.2     pos.3
#> 4>3>2>1 0.7905882 0.8074074 0.7870690 0.8190000
#> 1>2>3>4 0.8200000 0.7879310 0.8081481 0.7870588
#> 
#> $`0-2-3-4`
#>             pos.0     pos.1     pos.2     pos.3
#> 4>3>2>0 0.7905882 0.8074074 0.8577586 0.9211538
#> 0>2>3>4 0.9192308 0.8577586 0.8081481 0.7870588
#> 
#> $`0-1-3-4`
#>             pos.0     pos.1     pos.2     pos.3
#> 0>1>3>4 0.6519231 0.9040000 0.8911111 0.7870588
#> 0>1>4>3 0.6519231 0.9270000 0.7905882 0.7548148
#> 4>3>0>1 0.7905882 0.9081481 0.6519231 0.7900000
#> 4>3>1>0 0.7905882 0.8903704 0.9040000 0.6461538
#> 
#> $`0-1-2-4`
#>             pos.0     pos.1     pos.2     pos.3
#> 0>1>2>4 0.6519231 0.7550000 0.8810345 0.9764706
#> 4>2>1>0 0.9752941 0.8810345 0.7550000 0.6461538
#> 
#> $`0-1-2-3`
#>             pos.0    pos.1    pos.2     pos.3
#> 0>1>2>3 0.6519231 0.755000 0.787931 0.8992593
#> 3>2>1>0 0.8992593 0.787069 0.755000 0.6461538

This lets us determine that the one group occurring with 4 different permutations consists of classes 0, 1, 3 and 4. One might hypothesize that 4 permutations are possible for this group because class 2 separates (0,1) from (3,4). The observation that within all 4 permutations classes 0 and 1 are on one side and 3 and 4 on the other also supports the hypothesis that there is a broader separation between (0,1) and (3,4).

The notation per subgroup can be reverted resulting in the original Subcascades object.

subcascades.rev <- as.subcascades(groupwise)

Filtering

It is possible to filter the returned subcascades for those that show a minimal class-wise sensitivity higher (keepThreshold()) or lower (dropThreshold()) than a given threshold.

#subcascades passing the threshold of 0.7
result <- keepThreshold(subcascades, thresh = 0.7)
#the minimal sensitivity of all filtered cascades is > 0.7
names(result)
#> [1] "size.4" "size.3" "size.2"
apply(result$size.4,1,min)
#>   4>3>2>0   4>3>2>1   1>2>3>4   0>2>3>4 
#> 0.7905882 0.7870690 0.7870588 0.7870588

If this better classification performance is required, no cascades of length 5 are returned anymore, and only 4 cascades of length 4 remain.

One might ask whether there is a performance gap between these four new longest cascades and the remaining cascades of length 4 that do not pass a threshold of 0.7. To answer this one can filter all cascades that do not pass the threshold and check their performances.

#subcascades that do not show a minimal class-wise sensitivity higher than 0.70
result <- dropThreshold(subcascades, thresh = 0.7)
#the minimal sensitivity of all filtered cascades is < 0.7
apply(result$size.4,1,min)
#>   0>1>3>4   0>1>4>3   4>3>0>1   0>1>2>4   0>1>2>3   4>3>1>0   4>2>1>0   3>2>1>0 
#> 0.6519231 0.6519231 0.6519231 0.6519231 0.6519231 0.6461538 0.6461538 0.6461538

One can see that the other cascades of length 4 have a minimal class-wise sensitivity of 0.64 instead of 0.79, and hypothesize that ‘1>2>3>4’, ‘4>3>2>1’, ‘0>2>3>4’ and ‘4>3>2>0’ are better reflected than the others.

There is also a function that allows filtering for a specific length (keepSize()) and one that filters out a specific length (dropSize()).

#keep all cascades of length 4
result <-keepSize(subcascades, size = 4)
#only size.4 is returned
names(result)
#> [1] "size.4"
#keep all cascades except those of length 3
result <-dropSize(subcascades, size = 3)
#size.3 is not within the set anymore
names(result)
#> [1] "size.5" "size.4" "size.2"

keepSets can filter for specific orders (ordered = TRUE) or class combinations (ordered = FALSE). Having observed earlier that the classes in the group (2,3,4) seem to be more closely related than those in (0,1,2), we can filter for exactly those class groups or orders. The function allows for different set representations.

result <- keepSets(subcascades, sets = list(c(0,1,2),c(2,3,4)), 
                    direction = 'exact', ordered = FALSE)
unlist(lapply(result,rownames))
#> size.31 size.32 size.33 size.34 
#> "4>3>2" "2>3>4" "0>1>2" "2>1>0"
result <- keepSets(subcascades, sets = c(0,1,2),
                    direction = 'exact', ordered = TRUE)
unlist(lapply(result,rownames))
#>  size.3 
#> "0>1>2"
result <- keepSets(subcascades, sets = c('0>1>2','2>3>4'), 
                    direction = 'exact', ordered = TRUE)
unlist(lapply(result,rownames))
#> size.31 size.32 
#> "2>3>4" "0>1>2"

There are also more complex filtering options for keepSets and dropSets. The direction parameter defines whether a subset (‘sub’), a superset (‘super’) or, as before, the exact set (‘exact’) should be returned. If the ‘exact’ case is not used, the neighborhood parameter defines whether the given classes have to be direct neighbors within the cascade (‘direct’) or not (‘indirect’). Furthermore, the function provides the additional type parameter (‘all’, ‘any’). If a list of vectors is given, type = ‘all’ requires that the returned subcascades match all vectors within the list, while type = ‘any’ requires a match with at least one of them.

Again we analyze the groups (2,3,4) and (0,1,2) and filter for all their supersets. As those are groups of 3 classes, we do not filter for subgroups, as we are not interested in cascades of length 2.

#cascades can be given either as list of numeric vectors or as a vector of character strings
set1 = list(c(0,1,2),c(2,3,4))
set2 = c('0>1>2','2>3>4')

We start by filtering for groups that contain those classes; we do not require a specific class order, but we do require that those classes are direct neighbors.

result <- keepSets(subcascades, sets=set1, direction = 'super', 
                    ordered = FALSE, neighborhood = 'direct', type = 'any')
result
#> $size.5
#>               pos.0     pos.1    pos.2     pos.3     pos.4
#> 0>1>2>3>4 0.6519231 0.7550000 0.787931 0.8081481 0.7870588
#> 4>3>2>1>0 0.7905882 0.8074074 0.787069 0.7550000 0.6461538
#> 
#> $size.4
#>             pos.0     pos.1     pos.2     pos.3
#> 4>3>2>0 0.7905882 0.8074074 0.8577586 0.9211538
#> 4>3>2>1 0.7905882 0.8074074 0.7870690 0.8190000
#> 1>2>3>4 0.8200000 0.7879310 0.8081481 0.7870588
#> 0>2>3>4 0.9192308 0.8577586 0.8081481 0.7870588
#> 0>1>2>4 0.6519231 0.7550000 0.8810345 0.9764706
#> 0>1>2>3 0.6519231 0.7550000 0.7879310 0.8992593
#> 4>2>1>0 0.9752941 0.8810345 0.7550000 0.6461538
#> 3>2>1>0 0.8992593 0.7870690 0.7550000 0.6461538
#> 
#> $size.3
#>           pos.0     pos.1     pos.2
#> 4>3>2 0.7905882 0.8074074 0.8758621
#> 2>3>4 0.8758621 0.8081481 0.7870588
#> 0>1>2 0.6519231 0.7550000 0.9120690
#> 2>1>0 0.9112069 0.7550000 0.6461538
#> 
#> attr(,"class")
#> [1] "Subcascades"

This analysis confirms that the cascades grouping class 2 with classes 0 and 1 perform worse than those grouping class 2 with classes 3 and 4, as the former are ranked lower. Setting the parameter type to ‘all’ requires that both groups (2,3,4) and (0,1,2) are part of the cascade, and results in returning only the full cascades.

result.all <- keepSets(subcascades, sets=set1, direction = 'super', 
                        ordered = FALSE, neighborhood = 'direct', type = 'all')
unlist(t(lapply(result.all,rownames)))
#> [1] "0>1>2>3>4" "4>3>2>1>0"

We might ask whether the result changes if we do not require the classes to be direct neighbors, so we compare the ‘direct’ and ‘indirect’ neighborhood settings.

result <- keepSets(subcascades, sets=set1, direction = 'super', 
                    ordered = FALSE, neighborhood = 'direct', type = 'any')
unlist(t(lapply(result,rownames)))
#>  [1] "0>1>2>3>4" "4>3>2>1>0" "4>3>2>0"   "4>3>2>1"   "1>2>3>4"   "0>2>3>4"  
#>  [7] "0>1>2>4"   "0>1>2>3"   "4>2>1>0"   "3>2>1>0"   "4>3>2"     "2>3>4"    
#> [13] "0>1>2"     "2>1>0"
result <- keepSets(subcascades, sets=set1, direction = 'super', 
                    ordered = FALSE, neighborhood = 'indirect', type = 'any')
unlist(t(lapply(result,rownames)))
#>  [1] "0>1>2>3>4" "4>3>2>1>0" "4>3>2>0"   "4>3>2>1"   "1>2>3>4"   "0>2>3>4"  
#>  [7] "0>1>2>4"   "0>1>2>3"   "4>2>1>0"   "3>2>1>0"   "4>3>2"     "2>3>4"    
#> [13] "0>1>2"     "2>1>0"

There are no additional cascades returned, which shows that those classes are always next to each other whenever they are part of the same cascade.

So far we have confirmed that the expected order is reflected in the feature space. We have found that there is a grouping of (0,1) and (3,4), with class 2 being closer to (3,4), and that no reflected cascade places a further class within the groups (0,1,2) or (2,3,4).

We might ask which orders are shown by the cascades that contain neither (2,3,4) nor (0,1,2), so we filter out all cascades containing these groups.

result <- dropSets(subcascades, sets=set1, direction = 'super', 
                    ordered = FALSE, neighborhood = 'direct', type = 'any')
result$size.4
#>             pos.0     pos.1     pos.2     pos.3
#> 0>1>3>4 0.6519231 0.9040000 0.8911111 0.7870588
#> 0>1>4>3 0.6519231 0.9270000 0.7905882 0.7548148
#> 4>3>0>1 0.7905882 0.9081481 0.6519231 0.7900000
#> 4>3>1>0 0.7905882 0.8903704 0.9040000 0.6461538

The longest cascades that do not contain the given three-class patterns comprise the classes 0, 1, 3 and 4. This is the class group that was found with 4 permutations.

Furthermore, it might be of interest to check the classification results for the classes that are not part of the cascade. As an example we take the cascade 0>1>3>4 and ask to which classes the samples of class 2 are allocated.

result.neighbourhood = confusion.table(genMap, cascade = '0>1>3>4', 
                                        other.classes='all', sort = TRUE)
result.neighbourhood
#>             cl.0  cl.1       cl.3      cl.4       cl.2
#> pred.0 0.6519231 0.065 0.00000000 0.0000000 0.00000000
#> pred.1 0.3480769 0.904 0.01777778 0.0000000 0.55775862
#> pred.3 0.0000000 0.031 0.89111111 0.2129412 0.43362069
#> pred.4 0.0000000 0.000 0.09111111 0.7870588 0.00862069
#> attr(,"class")
#> [1] "ConfusionTable"

One can observe that the samples of class 2 split between classes 1 and 3, and hence between their neighboring classes in the longest cascade. This analysis is especially valuable if no cascade including all classes is found, as it allows insight into the neighborhood of classes that are not part of the returned order.
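Each column of the confusion table is the distribution of predicted classes for one true class and therefore sums to 1. Re-entering the class-2 column from the printed output above confirms this and quantifies the split:

```r
# class-2 column of the confusion table above, re-entered manually
cl2 <- c(pred.0 = 0.00000000, pred.1 = 0.55775862,
         pred.3 = 0.43362069, pred.4 = 0.00862069)

sum(cl2)                               # 1 (up to rounding of printed values)
unname(cl2["pred.1"] + cl2["pred.3"])  # about 0.99 of class 2 goes to 1 or 3
```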

Visualize the results

There are plot functions implemented for the Subcascades, ConfusionTable and Conf objects. The cascades themselves can be exported to the igraph package.

Subcascades

The plot of the Subcascades object gives a visual overview of the cascades. The rows correspond to the cascades (if there are many cascades within the Subcascades object it is recommended to preselect first) and the columns correspond to the classes. Here, the cascades of sizes 4 and 5 are plotted. The classes on the x-axis are sorted based on the first element in the Subcascades object, which is in this case the longest one with the best performance. It can be seen that as soon as class 1 is part of the cascade, class 0 reaches a class-wise sensitivity of only 0.64. Similar behavior can be observed for classes 4 and 3: class 4 is better classified if class 3 is not included in the cascade. The gaps highlight that all 5 candidate cascades of size 4 are present in both directions. The plot also shows that within this example the class-wise sensitivities of the single classes in the forward and backward direction are always the same, with the exception of the cascades composed of the classes 0, 1, 3 and 4. Here, a symmetry break can be observed.

subcascades = dropSize(subcascades,c(2,3))
plot(subcascades,row.sort = 'max',digits=2,thresh = 0.6)

ConfusionTable

The neighborhood analysis is shown using the plot function for a ConfusionTable object (plot.ConfusionTable()). This plot shows the predicted class distribution per class label. One can observe that the samples of class 1 are partly classified as classes 0 and 2.

Conf

The visual analysis of the base classifier performance shows that the binary classifiers perform better for non-neighboring classes, and worst when separating classes 0 and 1.

base.classifier = gen.conf(genMap)
plot(base.classifier)

Graph

Another possibility is to visualize selected cascades as a graph. To ease the use of the R package igraph, there is a function called as.edgedataframe implemented in the package. This function converts the Subcascades object into a 4-column data frame. The first and second columns show between which classes relations are found. The third column is a subcascade ID that records which relations belong to the same cascade. This ID is an increasing number and might be used to color the subcascades differently within the graph. The last column stores the size of the subcascade to which the relation belongs, which might be used to draw cascades of different sizes differently. Within our example the graph representation shows that there are four cascades of size 4 passing the threshold of 0.7, indicated by the different colors. The overlap of these subcascades is also easily visible.

#filter for minimal class-wise sensitivity and size
result = keepThreshold(subcascades,0.7)
result = keepSize(result,c(4,5))
#convert to a dataframe that can be used in igraph
edges = as.edgedataframe(result)
#use the first and second column to make a graph object (requires the igraph package)
library(igraph)
g = graph_from_data_frame(edges[,c(1,2)], directed = TRUE)
#assign the subcascade IDs as edge weights
E(g)$weight = edges[,3]