Literature reviews play a crucial role in Information Systems (IS) research. However, scholars have expressed concerns about the reproducibility of review results and the quality of their documentation. Reproducing these reviews manually is often impractical because the procedures are time-consuming. The emergence of Large Language Models (LLMs) appears promising as a means to support researchers and to enhance reproducibility. To explore this potential, we conducted experiments with various LLMs, focusing on abstract scanning, and present initial evidence that applying LLMs in structured literature reviews can assist researchers in refining and formulating rules for abstract scanning. Based on these preliminary findings, we identify potential directions for future research in this research-in-progress paper.
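The abstract describes using LLMs to apply explicit inclusion and exclusion rules during abstract scanning. The sketch below illustrates what such a rule-based screening step could look like in Python; the paper does not specify prompts, rule wording, or tooling, so the `SCREENING_RULES`, `build_prompt`, and `call-style `llm` hook are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch of LLM-assisted abstract scanning for a structured
# literature review. Assumption: the reviewer encodes screening criteria
# as explicit textual rules and asks the model for a binary decision.

from typing import Callable

# Hypothetical screening rules; the paper does not publish its rule set.
SCREENING_RULES = [
    "Include if the abstract addresses literature reviews in IS research.",
    "Include if the abstract discusses reproducibility of results.",
    "Exclude if the abstract mentions no review methodology at all.",
]

def build_prompt(abstract: str, rules: list[str]) -> str:
    """Combine the explicit screening rules with one abstract."""
    rule_text = "\n".join(f"- {r}" for r in rules)
    return (
        "You are screening abstracts for a structured literature review.\n"
        f"Apply these rules:\n{rule_text}\n\n"
        f"Abstract:\n{abstract}\n\n"
        "Answer with exactly one word: INCLUDE or EXCLUDE."
    )

def screen_abstract(abstract: str, llm: Callable[[str], str]) -> bool:
    """Return True if the model decides the abstract should be included."""
    reply = llm(build_prompt(abstract, SCREENING_RULES))
    return reply.strip().upper().startswith("INCLUDE")

if __name__ == "__main__":
    # Stand-in for a real model call (any LLM client could be wired in
    # here); a trivial keyword check lets the sketch run without an API.
    def mock_llm(prompt: str) -> str:
        return "INCLUDE" if "literature review" in prompt.lower() else "EXCLUDE"

    sample = "We examine reproducibility of literature reviews in IS research."
    print(screen_abstract(sample, mock_llm))  # -> True
```

Keeping the rules as data rather than hard-coding them in the prompt makes the refinement loop the abstract hints at straightforward: reviewers can edit the rule list, re-run the screening, and compare decisions across rule versions.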
| Title | Using LLMs to Improve Reproducibility of Literature Reviews |
|---|---|
| Venue | SIGSDA Symposium at the International Conference on Information Systems 2024, Bangkok, Thailand |
| Publisher | --- |
| Issue | --- |
| Volume | --- |
| ISBN | --- |
| Authors/Editors | Prof. Dr. René Peinl, Armin Haberl, Jonathan Baernthaler, Sarang Chouguley, Stefan Thalmann |
| Pages | --- |
| Publication date | 15.12.2024 |
| Project title | M4-SKI |
| Citation | Peinl, René; Haberl, Armin; Baernthaler, Jonathan; Chouguley, Sarang; Thalmann, Stefan (2024): Using LLMs to Improve Reproducibility of Literature Reviews. SIGSDA Symposium at the International Conference on Information Systems 2024. Bangkok, Thailand. |