One model may be preferred over another for a number of reasons. These reasons influence both the choice of a learning method and the selection of a particular classifier produced by that method; a sketch of how several of the criteria below might be measured in practice follows the list.

    • Accuracy
      • Classifier accuracy: the ability to correctly predict the class label of previously unseen data
    • Speed
      • Time to construct the model (training time)
      • Time to use the model (classification/prediction time)
    • Robustness: handling noise and missing values
    • Scalability: efficiency in disk-resident databases
    • Interpretability
      • Understanding and insight provided by the model. This is especially important when the model is never intended to be applied to unseen data, but is instead meant to influence systemic behaviours such as business rules and policies. It may also be critical for enabling qualitative evaluation of the model for embedded bias.
    • Availability and Trust
      • Does the business environment have the technical infrastructure, skills and policy or governance framework to use it?
      • Will the business environment trust the results to be used for the intended purpose?
    • Other measures specific to the method, e.g., goodness of rules, such as decision tree size or compactness of classification rules.
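The sketch below is one way several of these criteria could be quantified when comparing two classifiers. It assumes Python with scikit-learn and its bundled breast cancer dataset, neither of which is prescribed by the notes above; the particular classifiers, split, and parameters are illustrative only, and the reported numbers will vary by machine and data.

```python
# Measure accuracy, training time, classification time, and a simple
# interpretability proxy (decision tree size) for two example classifiers.
import time

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

candidates = [
    ("decision tree", DecisionTreeClassifier(random_state=42)),
    ("k-NN", KNeighborsClassifier(n_neighbors=5)),
]

for name, clf in candidates:
    t0 = time.perf_counter()
    clf.fit(X_train, y_train)              # time to construct the model
    train_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    y_pred = clf.predict(X_test)           # time to use the model
    predict_time = time.perf_counter() - t0

    acc = accuracy_score(y_test, y_pred)   # classifier accuracy on held-out data
    print(f"{name}: accuracy={acc:.3f}, "
          f"train={train_time:.4f}s, predict={predict_time:.4f}s")

# Interpretability proxy: a smaller tree is easier to inspect for insight
# (and for embedded bias) than a large one.
tree = DecisionTreeClassifier(max_depth=3, random_state=42).fit(X_train, y_train)
print("tree nodes:", tree.tree_.node_count, "leaves:", tree.get_n_leaves())
```

Timing and accuracy alone rarely decide the choice; the qualitative criteria above (robustness, scalability, availability, and trust) still have to be weighed against these measurements.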

    Note well that these kinds of factors are just as influential in the selection of a method and a model for mining problems other than classification and prediction. For example, association rules score well on interpretability and scalability, but may not be considered trustworthy.