Full metadata record
DC Field | Value | Language
dc.contributor.author | Buczak, Philip | -
dc.contributor.author | Huang, He | -
dc.contributor.author | Forthmann, Boris | -
dc.contributor.author | Doebler, Philipp | -
dc.date.accessioned | 2024-02-15T10:40:12Z | -
dc.date.available | 2024-02-15T10:40:12Z | -
dc.date.issued | 2022-08-08 | -
dc.identifier.uri | http://hdl.handle.net/2003/42333 | -
dc.identifier.uri | http://dx.doi.org/10.17877/DE290R-24170 | -
dc.description.abstract | Traditionally, researchers employ human raters to score responses to creative thinking tasks. Apart from the associated costs, this approach entails two potential risks. First, human raters can be subjective in their scoring behavior (inter-rater variance). Second, individual raters are prone to inconsistent scoring patterns (intra-rater variance). In light of these issues, we present an approach for the automated scoring of divergent thinking (DT) tasks. We implemented a pipeline that aims to generate accurate rating predictions for DT responses using text mining and machine learning methods. Based on two existing data sets from two different laboratories, we constructed several prediction models incorporating features representing meta information of the response or features engineered from the response’s word embeddings, which were obtained using pre-trained GloVe and Word2Vec word vector spaces. Among these features, word embeddings and features derived from them proved particularly effective. Overall, longer responses tended to achieve higher ratings, as did responses that were semantically distant from the stimulus object. In our comparison of three state-of-the-art machine learning algorithms, Random Forest and XGBoost tended to slightly outperform Support Vector Regression. | en
dc.description.abstract | Correction for this article: https://doi.org/10.1002/jocb.627 | en
dc.language.iso | en | de
dc.relation.ispartofseries | The journal of creative behavior;57(1) | -
dc.rights.uri | https://creativecommons.org/licenses/by-nc/4.0/ | de
dc.subject | Divergent thinking | en
dc.subject | Creative quality | en
dc.subject | Human ratings | en
dc.subject | Supervised learning | en
dc.subject | Random Forest | en
dc.subject | Gradient boosting | en
dc.subject | Support Vector Regression | en
dc.subject.ddc | 310 | -
dc.title | The machines take over: a comparison of various supervised learning approaches for automated scoring of divergent thinking tasks | en
dc.type | Text | de
dc.type.publicationtype | Article | de
dcterms.accessRights | open access | -
eldorado.secondarypublication | true | de
eldorado.secondarypublication.primaryidentifier | https://doi.org/10.1002/jocb.559 | de
eldorado.secondarypublication.primarycitation | Buczak, P., Huang, H., Forthmann, B. and Doebler, P. (2023), The Machines Take Over: A Comparison of Various Supervised Learning Approaches for Automated Scoring of Divergent Thinking Tasks. J Creat Behav, 57: 17-36. https://doi.org/10.1002/jocb.559 | de
Appears in Collections: Statistische Methoden in den Sozialwissenschaften
This item is protected by original copyright
This item is licensed under a Creative Commons License.
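The abstract above describes engineering predictors from word embeddings, including the finding that responses semantically distant from the stimulus object tended to score higher. The following is a minimal, self-contained sketch of that feature-engineering idea; the toy vectors, function names, and numbers here are invented for illustration and stand in for the pre-trained GloVe/Word2Vec spaces and the authors' actual pipeline.

```python
# Sketch: two example predictors for a DT response — response length and
# semantic distance (1 - cosine similarity) between the stimulus word and
# the mean-pooled response embedding. Toy vectors replace pre-trained
# GloVe/Word2Vec embeddings; all values are illustrative.
import math

# Hypothetical 3-dimensional "word vectors" (a real pipeline would load
# pre-trained embeddings instead).
TOY_VECTORS = {
    "brick":  [0.9, 0.1, 0.0],
    "build":  [0.8, 0.2, 0.1],
    "house":  [0.7, 0.3, 0.0],
    "weapon": [0.1, 0.9, 0.2],
    "art":    [0.0, 0.2, 0.9],
}

def mean_vector(tokens):
    """Mean-pool the vectors of all in-vocabulary tokens in a response."""
    known = [TOY_VECTORS[t] for t in tokens if t in TOY_VECTORS]
    if not known:
        return None
    dim = len(known[0])
    return [sum(v[i] for v in known) / len(known) for i in range(dim)]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def features(stimulus, response):
    """Build a feature dict for one (stimulus, response) pair."""
    tokens = response.lower().split()
    resp_vec = mean_vector(tokens)
    stim_vec = TOY_VECTORS[stimulus]
    distance = 1.0 - cosine(stim_vec, resp_vec) if resp_vec else None
    return {"n_tokens": len(tokens), "semantic_distance": distance}

print(features("brick", "build a house"))   # semantically close to "brick"
print(features("brick", "weapon art"))      # semantically distant
```

Feature vectors like these would then be fed to a supervised regressor (the paper compares Random Forest, XGBoost, and Support Vector Regression) trained against the human ratings.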