AI use within early research careers: help or hindrance?

Hannah is currently studying a BSc in Sport and Exercise Psychology at Loughborough University. In the 2023/24 academic year, she has been working at King's College London for her placement year. Her main roles have focused on developing and maintaining the Catalogue of Mental Health Measures, as well as working with Lived Experience Experts to help improve the accessibility of the content. She is interested in how mental health conditions develop throughout early life, and is excited to explore this further throughout her career after completing her undergraduate degree in the 24/25 academic year.


The use of Artificial Intelligence (AI) across various disciplines has increased significantly over the past few years, and research is no different. As AI continues to become embedded within many platforms used in academia, it represents a significant consideration for the next generation of researchers.

An introduction to AI

Broadly speaking, AI refers to systems that learn from data and inputs to perform tasks such as automating data entry or supplementing human-like reasoning, including problem-solving (Holzinger et al., 2019). These tools have been used at different stages of research to enhance efficiency and accuracy, from supporting the generation of research questions to automating reference lists.

How AI tools can support referencing

In the past few years, the use of AI tools to streamline research processes has increased significantly. From an early career researcher’s perspective, these tools can help organise and initiate research, such as generating questions or exploring new fields. For example, websites such as Elicit.com can identify papers relevant to a given set of variables and provide DOIs or web links for access and further reading. This can be a useful starting point for understanding a research field. However, AI tools do not replace the need to read and critically evaluate each article individually: although their summarising features may be helpful, they are likely to omit information that is critical for comparing or analysing studies.

Additionally, referencing systems such as MyBib, Scribbr and CiteThisForMe use forms of AI to create reference lists more efficiently, auto-filling bibliographic information from provided links or articles. These tools can help researchers at any career stage, but are particularly valuable for early career researchers, whose knowledge of specialist fields may still be limited. By highlighting the most impactful or highly cited articles, such tools can help new researchers, such as PhD students, develop their foundational understanding.
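It is worth noting that this kind of auto-filling can rest on open metadata services rather than anything exotic. As an illustrative sketch (not a description of how MyBib or Scribbr actually work), the DOI system supports content negotiation: asking doi.org for `text/x-bibliography` with a citation style name returns a formatted reference. The helper below only constructs the request; the commented lines show how it would be sent.

```python
import urllib.request

def build_citation_request(doi: str, style: str = "apa") -> tuple[str, dict]:
    """Build the URL and headers for a DOI content-negotiation request.

    The doi.org resolver can return a ready-formatted citation when
    asked for 'text/x-bibliography' with a CSL style name.
    """
    url = f"https://doi.org/{doi}"
    headers = {"Accept": f"text/x-bibliography; style={style}"}
    return url, headers

# Build a request for one of the papers cited in this post.
url, headers = build_citation_request("10.1002/jcv2.12234")
# To actually fetch the formatted citation (network access required):
# req = urllib.request.Request(url, headers=headers)
# citation = urllib.request.urlopen(req).read().decode()
```

Because the metadata comes from the DOI registry, the output is only as accurate as the record the publisher deposited, which is one reason auto-generated references still need a human check.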


The strengths and weaknesses of AI tools for literature reviews

Following the development of research questions, numerous articles have highlighted the potential for AI tools to be used for streamlining systematic reviews:

  • An ACAMH blog suggested that the involvement of AI tools can reduce processing times for a systematic review from 67 weeks to just 2 weeks (Fabiano, 2024).
  • AI screening is approximately 88% accurate at excluding articles that do not meet the search criteria and classifying those that do (Alshami et al., 2023).
  • Large Language Models (LLMs), such as ChatGPT-4, can be highly effective for article screening with up to 93% efficiency when prompted with inclusion and exclusion criteria (Li et al., 2024).

In the initial stages of a literature review, such tools can be extremely helpful for streamlining the selection of relevant articles and papers, and thus for narrowing the scope of articles for further exploration. However, despite these advantages, the use of AI within early research stages has limitations:

  • Errors can compound across the steps of an AI-aided literature review, calling for caution and researcher verification (Fabiano et al., 2024).
  • Generative AI can overlook the context or cultural sensitivity of study findings, potentially exaggerating results or impact (Wang et al., 2025).
  • Outputs from large language models can overgeneralise, risking misapplication to demographics underrepresented in the training data (Goetz et al., 2024).

While AI tools can be helpful, it is important to consider their role at each stage of the research process and to remain clear about their limitations. For example, outputs from LLMs or generative AI used for summarising within literature reviews must be considered critically, owing to the risk of overgeneralisation and of hallucinations: incorrect or fabricated outputs that threaten the integrity of research developed with AI support, and that can lead to poorer outcomes in evidence-based practice for demographics underrepresented in the training data (Goetz et al., 2024; IBM, 2023). Similarly, AI tools can be beneficial for screening a high volume of articles, but caution is needed to ensure significant articles are not excluded. Early career researchers encounter these tools directly in referencing and literature reviews, so critical reflection on their use is vital.
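To make the screening step described above concrete, here is a minimal sketch in which a deliberately simple keyword rule stands in for the LLM call. A real pipeline would instead send each abstract, together with the inclusion and exclusion criteria, to a model such as GPT-4 and parse its verdict, with a human verifying the results. The abstracts and criteria below are invented for illustration.

```python
def screen_abstract(abstract: str,
                    include_terms: list[str],
                    exclude_terms: list[str]) -> bool:
    """Return True if an abstract passes a simple keyword screen.

    Stand-in for an LLM judgement: reject if any exclusion term
    appears, otherwise accept if any inclusion term appears.
    """
    text = abstract.lower()
    if any(term.lower() in text for term in exclude_terms):
        return False
    return any(term.lower() in text for term in include_terms)

# Invented example abstracts keyed by an arbitrary ID.
abstracts = {
    "A1": "An RCT of CBT for adolescent anxiety in school settings.",
    "A2": "A case study of one adult patient with chronic pain.",
}
included = [k for k, text in abstracts.items()
            if screen_abstract(text, ["adolescent", "RCT"],
                               ["adult", "case study"])]
# → ['A1']
```

Even in this toy version, the exclusion rule can silently discard relevant work (a paper on adolescents that merely mentions adults would be dropped), which mirrors the caution above about significant articles being excluded.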

AI tools for peer reviewing

The need for critical consideration when using AI tools is not limited to these two early stages of research. AI can also support other aspects of research, including the peer review process. A recent ACAMH blog discussed the potential implications of using AI tools to assist peer review (Fabiano, 2024), echoing the broader consensus: these tools have positive potential, but their use demands critical consideration. Much as with tools that enhance data entry, AI tools can check manuscripts against formatting or reporting guidelines, often far more efficiently than human reviewers, reducing the time taken for peer review. However, because of the risk of hallucinated outputs (IBM, 2023), their reliability for reviewing the accuracy or validity of the research within a manuscript is limited. Understanding how AI can support article submission will help researchers looking to publish their work, a key aspect of early research careers. The blog also signalled confidentiality risks: AI tools draw on large databases to inform their processes, and a reviewed manuscript could be absorbed into that data and released into the public domain. This concern has also been raised in recent discussions of AI in clinical observation, particularly within child and adolescent mental health research (Minnis et al., 2024).
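The mechanical side of such manuscript checks need not involve AI at all. As a hedged illustration of the kind of rule-based check that can be automated safely, the sketch below flags required section headings that are missing from a draft; the heading list is an assumption for illustration, not any journal's actual guidelines.

```python
# Assumed headings for illustration only, not a real journal's rules.
REQUIRED_SECTIONS = ["Abstract", "Methods", "Results",
                     "Discussion", "References"]

def missing_sections(manuscript: str) -> list[str]:
    """Return required headings absent from the manuscript text."""
    lines = {line.strip() for line in manuscript.splitlines()}
    return [s for s in REQUIRED_SECTIONS if s not in lines]

draft = "Abstract\n...\nMethods\n...\nResults\n...\nDiscussion\n..."
print(missing_sections(draft))  # → ['References']
```

Because checks like this are deterministic and keep the manuscript on the researcher's own machine, they avoid both the hallucination and the confidentiality risks discussed above.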


In summary

For early career researchers, these tools can provide valuable guidance for initiating research, such as signposting key articles or supporting the organisation of complex processes, and they can assist with many aspects of conducting research. However, as with any automated system, their use requires critical reflection. To preserve the validity and meaningfulness of research, and to protect the development of early career researchers’ practice and understanding, AI tools should serve only as a supportive aid: enhancing efficiency without replacing the deep engagement and comprehensive understanding that are key to ethical and effective research. Given the rapid development of AI tools across many fields, not just research, critical reflection on their outputs and use should remain at the forefront of researchers’ minds.

NB This blog has been peer reviewed

References

  • Alshami, A., Elsayed, M., Ali, E., Eltoukhy, A. E. E., & Zayed, T. (2023). Harnessing the Power of ChatGPT for Automating Systematic Review Process: Methodology, Case Study, Limitations, and Future Directions. Systems, 11(7), 351. https://www.mdpi.com/2079-8954/11/7/351
  • Cite This For Me. (2010). Save Time and Improve your Marks with CiteThisForMe, The No. 1 Citation Tool. https://www.citethisforme.com/
  • Elicit. (2023). Elicit. Elicit.com. https://elicit.com/
  • Fabiano, N. (2024, September 25). AI for Peer Review. ACAMH Blogs. https://www.acamh.org/blog/ai-for-peer-review/
  • Fabiano, N., Gupta, A., Bhambra, N., Luu, B., Wong, S., Maaz, M., Fiedorowicz, J. G., Smith, A. L., & Solmi, M. (2024). How to optimize the systematic review process using AI tools. JCPP Advances, 4(2). https://doi.org/10.1002/jcv2.12234
  • Goetz, L., Seedat, N., Vandersluis, R., & van der Schaar, M. (2024). Generalization – a key challenge for responsible AI in patient-facing clinical applications. npj Digital Medicine, 7(1). https://doi.org/10.1038/s41746-024-01127-3
  • Holzinger, A., Langs, G., Denk, H., Zatloukal, K., & Müller, H. (2019). Causability and explainability of artificial intelligence in medicine. WIREs Data Mining and Knowledge Discovery, 9(4). https://doi.org/10.1002/widm.1312
  • IBM. (2023, September 1). What are AI hallucinations? IBM. https://www.ibm.com/think/topics/ai-hallucinations
  • Li, M., Sun, J., & Tan, X. (2024). Evaluating the effectiveness of large language models in abstract screening: a comparative analysis. Systematic Reviews, 13(1). https://doi.org/10.1186/s13643-024-02609-x
  • Minnis, H., Vinciarelli, A., & Alsofyani, H. (2024). The use and potential of artificial intelligence for supporting clinical observation of child behaviour. Child and Adolescent Mental Health. https://doi.org/10.1111/camh.12714
  • MyBib. (2025). MyBib. MyBib. https://www.mybib.com/
  • Peters, U., & Chin-Yee, B. (2025). Generalization bias in large language model summarization of scientific research. Royal Society Open Science, 12(4). https://doi.org/10.1098/rsos.241776
  • Scribbr. (2019). Scribbr – your path to academic success. Scribbr. https://www.scribbr.com/
  • Wang, L., Bhanushali, T., Huang, Z., Yang, J., Badami, S., & Hightow-Weidman, L. (2025). Evaluating Generative AI in Mental Health: A Systematic Review of Capabilities and Limitations. JMIR Mental Health. https://doi.org/10.2196/70014

About the author

Hannah Lewis
