
Project Summary
Generative Artificial Intelligence (GenAI) has become a matter of critical debate across the world. While advancing GenAI has long been a focus of research in scientific fields, the relevance of GenAI to the Social Sciences is much more recent. The challenge facing social scientists is to determine whether it is possible to realise world-leading research that incorporates responsible GenAI use while sustaining research excellence.
Fortunately, social scientists have already begun to test GenAI’s use in research. Studies have investigated the affordances of GenAI’s predictive power to fill gaps in research, automate transcription processes, and annotate and analyse images. Moreover, there is now a growing body of work that has set out to investigate GenAI use for supporting and automating prevalent and labour-intensive social scientific approaches, such as numerical and text analysis.
The extent to which elements of Social Science research can be automated remains unresolved, however. For some, the expediency of GenAI is unmatched, while for others, questions of quality, ethics, transparency, and replicability undermine this expediency. While scholars continue to debate this issue, research funders must make daily decisions about the kinds of GenAI use they will fund. In turn, funders have developed high-level policies on responsible GenAI use. Broadly, these policies are designed to offer guidance to applicants and reviewers that coheres around the need for fairness, transparency, and accountability in research processes. As GenAI develops apace and new questions of its application emerge, funders' reliance on the judgments of expert reviewers, for example in the context of peer review, poses challenges. These challenges stem from the disparate perspectives on what constitutes responsible GenAI use in the Social Sciences, perspectives that will be reflected in the reviewer base. Looking forward, these views must be reconciled to support the development of more nuanced and situated funding policies.
Yet, developing such a consensus for the Social Sciences is a challenge. The Social Sciences are home to a plurality of disciplines, and their disciplinary and sub-disciplinary perspectives shape their use of GenAI. This means that any resolution surrounding GenAI use cannot take the form of a simple one-size-fits-all ruling. It must, instead, consider disciplinary and sub-disciplinary variation. Yet, notably, existing GenAI funding guidance omits any mention of discipline. Thus, there is an evident need to enhance such policies by offering nuanced disciplinary insight on responsible GenAI use.
In this Fellowship, I unpack GenAI use in three disciplines: (i.) Business & Economics, (ii.) Education, and (iii.) Linguistics. Through systematic reviews and linguistic analyses of academic communication, I will establish current disciplinary practices in GenAI use and reporting. I will then assess academics' perceptions of responsible GenAI use and reporting practices through focus groups. Finally, bringing published and testified practices together, I will produce recommendations for policy development to share with funding bodies, through the Metascience Unit. These discipline-specific recommendations will enhance existing responsible GenAI use policies.

