Satellite Program 1 – How AI Reshapes Social Science
About this Session
Time
Thu. 16.04. 09:50
Room
Plenary Hall
Speaker
Between Incremental Improvement and a Reset from Scratch: How AI Reshapes Social Science
Chair: Sandra Walzenbach
Across the research pipeline, AI now plays a role in data generation, analysis, interpretation, and reporting. In some cases, these applications appear as extensions of familiar practices: accelerating statistical modeling, scaling qualitative coding, or automating routine research tasks. From this perspective, AI offers efficiency gains while leaving foundational epistemological commitments largely intact. In other cases, however, AI challenges the very distinction between empirical observation and computational construction. The growing use of synthetic data, AI-driven simulations, and generative models raises questions about what counts as evidence, how validity should be assessed, and whether traditional notions of representativeness and causality remain adequate.

The panel examines these developments with particular attention to methodological and epistemological implications. AI-assisted data analysis promises new ways of working with large, complex, and unstructured datasets, yet it also introduces risks related to opacity, automation bias, and the delegation of interpretive judgment. As AI systems increasingly function as research assistants – or even as quasi-analytic agents – longstanding norms of transparency, replicability, and accountability come under pressure.

Beyond methods, AI is reshaping how social scientific knowledge is communicated and evaluated. Automated writing support, summarization tools, and AI-generated visualizations are becoming commonplace, prompting debates about authorship, originality, peer review, and academic labor. These changes raise broader institutional and ethical questions about responsibility, expertise, and the governance of AI use in research contexts.