Author: Boge, Florian J.
Date issued: 2024-12-27
Date available: 2025-02-24
Handle: http://hdl.handle.net/2003/43496
DOI: 10.17877/DE290R-25329

Title: Functional concept proxies and the actually smart Hans problem: what's special about deep neural networks in science
Type: Research Article
Journal: Synthese; 203(1)
Language: English
License: https://creativecommons.org/licenses/by/4.0/

Abstract: Deep Neural Networks (DNNs) are becoming increasingly important as scientific tools, as they excel in various scientific applications beyond what was previously considered possible. Yet from a certain vantage point, they are nothing but parametrized functions of some data vector, and their 'learning' is nothing but an iterative, algorithmic fitting of the parameters to data. Hence, what could be special about them as a scientific tool or model? I will here suggest an integrated perspective that mediates between extremes, by arguing that what makes DNNs in science special is their ability to develop functional concept proxies (FCPs): substructures that occasionally provide them with abilities corresponding to those facilitated by concepts in human reasoning. Furthermore, I will argue that this introduces a problem that has so far barely been recognized by practitioners and philosophers alike: that DNNs may succeed on some vast and unwieldy data sets because they develop FCPs for features that are not transparent to human researchers. The resulting breach between scientific success and human understanding I call the 'Actually Smart Hans Problem'.

Keywords: Deep Neural Networks; Concepts; Reasoning; Clever Hans Problem; Automated science; 100
Subject headings (translated from German): Deep neural network; Concept; Reasoning; Clever Hans Phenomenon; Automation technology