The machines take over: a comparison of various supervised learning approaches for automated scoring of divergent thinking tasks
Date
2022-08-08
Abstract
Traditionally, researchers employ human raters for scoring responses to creative thinking tasks. Apart from the associated costs, this approach entails two potential risks. First, human raters can be subjective in their scoring behavior (inter-rater variance). Second, individual raters are prone to inconsistent scoring patterns (intra-rater variance). In light of these issues, we present an approach for the automated scoring of Divergent Thinking (DT) tasks. We implemented a pipeline that generates rating predictions for DT responses using text mining and machine learning methods. Based on two existing data sets from two different laboratories, we constructed several prediction models incorporating features representing meta-information about the response as well as features engineered from the response's word embeddings, which were obtained from pre-trained GloVe and Word2Vec word vector spaces. Among these, the word embeddings and the features derived from them proved particularly effective. Overall, longer responses and responses that were semantically distant from the stimulus object tended to receive higher ratings. In our comparison of three state-of-the-art machine learning algorithms, Random Forest and XGBoost tended to slightly outperform Support Vector Regression.
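For illustration only, the following is a minimal sketch of the kind of feature-engineering and model-comparison pipeline the abstract describes, not the authors' implementation. The stimulus word, the sample responses, the ratings, and the choice of the gensim-downloadable "glove-wiki-gigaword-100" vector space are all assumptions made for this sketch.

import numpy as np
import gensim.downloader as api
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from xgboost import XGBRegressor

# Load a pre-trained GloVe word vector space (a Word2Vec space loads the same way).
glove = api.load("glove-wiki-gigaword-100")

def features(stimulus, response):
    # One meta-information feature (response length) plus one embedding-based
    # feature: cosine distance between the stimulus vector and the centroid
    # of the response's word vectors.
    tokens = [t for t in response.lower().split() if t in glove]
    if not tokens or stimulus not in glove:
        return [len(response.split()), 0.0]
    centroid = np.mean([glove[t] for t in tokens], axis=0)
    sv = glove[stimulus]
    cos = np.dot(centroid, sv) / (np.linalg.norm(centroid) * np.linalg.norm(sv))
    return [len(tokens), 1.0 - cos]  # larger value = more semantically distant

# Hypothetical DT data: (stimulus, response, human rating) triples.
data = [
    ("brick", "build a house", 1.0),
    ("brick", "grind it into red pigment for paint", 4.0),
    ("brick", "use it as an anchor for a toy boat", 3.5),
]
X = np.array([features(s, r) for s, r, _ in data])
y = np.array([rating for _, _, rating in data])

# The three regressors compared in the article; with real data one would
# tune hyperparameters and evaluate via cross-validation against human ratings.
for model in (RandomForestRegressor(), XGBRegressor(), SVR()):
    model.fit(X, y)
    print(type(model).__name__, model.predict(X))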
Correction for this article: https://doi.org/10.1002/jocb.627
Keywords
Divergent thinking, Creative quality, Human ratings, Supervised learning, Random Forest, Gradient boosting, Support Vector Regression