Title: Comparing Knowledge-Based Sampling to Boosting
Abstract: Boosting algorithms for classification are based on altering the initial distribution assumed to underlie a given example set. The idea of knowledge-based sampling (KBS) is to sample out prior knowledge and previously discovered patterns, so that subsequently applied data mining algorithms automatically focus on novel patterns without any need to adjust the base algorithm. This sampling strategy anticipates a user's expectation of how to adjust the distribution, based on a set of constraints. In the classified case KBS is similar to boosting. This article shows that a specific, very simple KBS algorithm is able to boost weak base classifiers. It discusses differences from AdaBoost.M1 and LogitBoost, and it compares the performance of these algorithms empirically in terms of predictive accuracy, area under the ROC curve, and squared error.
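To make the connection between KBS and boosting concrete, here is a minimal, hypothetical sketch in Python with NumPy. It is not the paper's algorithm: the function names (`kbs_boost`, `fit_stump`) and the AdaBoost-style weight update are illustrative assumptions. The sketch only shows the shared mechanism the abstract alludes to: after each round, example weights are rescaled so the just-learned pattern no longer stands out in the reweighted sample, pushing the next weak learner toward novel structure.

```python
import numpy as np

def fit_stump(X, y, w):
    """Weighted decision stump: pick the (feature, threshold, polarity)
    with the lowest weighted error. y must be in {-1, +1}."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, t, pol)
    return best

def stump_predict(stump, X):
    _, j, t, pol = stump
    return np.where(pol * (X[:, j] - t) >= 0, 1, -1)

def kbs_boost(X, y, rounds=10):
    """Illustrative KBS-style ensemble (hypothetical, not the paper's code).
    Each round 'samples out' the pattern just found by reweighting the
    examples, so the next weak learner must focus on what remains."""
    n = len(y)
    w = np.full(n, 1.0 / n)          # start from the uniform distribution
    ensemble = []
    for _ in range(rounds):
        stump = fit_stump(X, y, w)
        err = max(stump[0], 1e-12)   # avoid log(0) on a perfect stump
        if err >= 0.5:               # no better than chance: stop
            break
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(stump, X)
        # Reweight so the learned pattern carries no information
        # (its lift is 1) under the new distribution.
        w = w * np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    """Weighted vote of all weak models."""
    score = sum(a * stump_predict(s, X) for a, s in ensemble)
    return np.sign(score)
```

In this toy form the update coincides with AdaBoost's; the paper's point is that the same effect can be reached by resampling against prior knowledge, without modifying the base learner.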
Appears in Collections: Sonderforschungsbereich (SFB) 475