How to Optimize the Systematic Review Process using AI Tools

You can listen to this podcast directly on our website or on the following platforms: SoundCloud, iTunes, Spotify, CastBox, Deezer, Google Podcasts, Podcastaddict, JioSaavn, Listen Notes, Radio Public, and Radio.com (not available in the EU).


In this Papers Podcast, Dr. Nicholas Fabiano discusses his JCPP Advances Methodological Review ‘How to optimize the systematic review process using AI tools’ (https://doi.org/10.1002/jcv2.12234). Nicholas is a co-first author of the paper, along with Arnav Gupta and Nishaant Bhambra.

There is an overview of the paper, methodology, key findings, and implications for practice.

Discussion points include:

  • Background on what a systematic review refers to.
  • What is Artificial Intelligence (AI)?
  • How AI is being used in the systematic review process.
  • How widely AI is utilised in research and systematic reviews.
  • The advantages of utilising AI, as well as the risks and limitations.
  • What a balanced use of AI would look like.

In this series, we speak to authors of papers published in one of ACAMH’s three journals. These are The Journal of Child Psychology and Psychiatry (JCPP); The Child and Adolescent Mental Health (CAMH) journal; and JCPP Advances.

#ListenLearnLike

Subscribe to ACAMH mental health podcasts on your preferred streaming platform. Just search for ACAMH on: SoundCloud, Spotify, CastBox, Deezer, Google Podcasts, Podcastaddict, JioSaavn, Listen Notes, Radio Public, and Radio.com (not available in the EU). Plus we are on Apple Podcasts; visit the link or click on the icon, or scan the QR code.


Nicholas Fabiano

Nicholas Fabiano is a second-year psychiatry resident at the University of Ottawa. He is in the research stream and has an interest in the overlap between mental and physical health, particularly exploring the impact of lifestyle (exercise, diet, and sleep) on mental health. From a methodological standpoint, he has expertise in systematic and umbrella reviews, with interests in optimization of this process.

Transcript

[00:00:10.000] Mark Tebbs: Hello, and welcome to the Papers Podcast series for the Association of Child and Adolescent Mental Health, or ACAMH for short. I’m Mark Tebbs. I’m a former Director of Mental Health Commissioning, with a background in psychology, and currently Chief Exec and Executive Coach, and Freelance Consultant. In this series, we speak to the authors of papers published in one of ACAMH’s three journals. They are the Journal of Child Psychology and Psychiatry, commonly known as JCPP, the Journal of Child and Adolescent Mental Health, known as CAMH, and JCPP Advances.

If you’re one of the fans of our Papers Podcast series, please subscribe on your preferred streaming platform, let us know how we did, with a rating or review, and do share with friends and colleagues.

Today, I’m really delighted to be talking to Dr. Nicholas Fabiano, who’s the lead author of a paper entitled, “How to Optimize the Systematic Review Process Using AI Tools,” recently published in JCPP Advances. Welcome, Nick, really lovely to be speaking to you today. Let’s start with some introductions. Maybe if you could say a little bit about yourself, your career to date, and your research interests?

[00:01:13.584] Dr. Nicholas Fabiano: Yeah, well, first off, thank you for having me, it’s a pleasure to be here. So, as you said, my name is Nick Fabiano, and a little bit about me. I’m a Second-Year Psychiatry Resident right now, at the University of Ottawa, here in Canada. And my main research interest right now is really the overlap between mental and physical health, but I also have interests in looking at the efficiency of the research process. So, looking at different ways that we can, kind of, optimise different parts, such that it’s done faster, or more accurately. That way we can get more research done and have more stuff, kind of, in a holistic picture.

[00:01:45.383] Mark Tebbs: Good stuff. I think that speed thing’s really important, isn’t it? So, how quickly can we get research into practice, so, really, really important stuff. So, you collaborated on – with a number of colleagues on the paper, from Canada, US, Europe. So, if you could give your colleagues a little bit of a name check, that’d be great.

[00:02:01.972] Dr. Nicholas Fabiano: Hmmm, yeah. So, we’ll, kind of, go through everyone that’s, kind of, listed as an author there. So, just to start off, I had two co-first authors, as well, with me on this piece. So, the first one was Arnav Gupta, he’s an Internal Medicine Resident at the University of Calgary. Then there’s Nishaant Bhambra, or Shaan, he’s a Family Medicine Resident here at the University of Ottawa. There’s Brandon Luu, he is an Internal Medicine Resident at the University of Toronto. There’s Stanley Wong, he’s a Psychiatry Resident at the University of Toronto. There’s Muhammad Maaz, who is an MD PhD student at University of Toronto.

And then there are two Staff Physicians on the paper as well. So, there’s Dr. Jess Fiedorowicz, he is a Professor at the University of Ottawa, Department of Psychiatry, and then there’s also Andrew Smith, who is an Assistant Professor at the University of Ottawa, Department of Psychiatry, and then PI of the paper, Dr. Marco Solmi, who is an Associate Professor at the University of Ottawa, Department of Psychiatry.

[00:02:57.513] Mark Tebbs: Brilliant, good stuff. So, the paper is a methodological review, so could you just start by giving us a brief overview, setting the scene for our listeners?

[00:03:07.142] Dr. Nicholas Fabiano: Yeah, for sure. So, this paper, essentially, the goal of it was to focus on how we can improve the systematic review process, specifically using AI or artificial intelligence tools. And just to give a little bit of a background in terms of what a systematic review actually refers to, it’s our cornerstone for how we synthesise the available evidence on a given topic. It allows us to combine a lot of the different primary studies, to clarify some associations, and also determine where gaps may exist. But, the thing is, due to the ever-increasing volume of research that’s churning out year-after-year, the traditional methods to do a systematic review are becoming less efficient and more time consuming.

So, I found that there were so many different AI tools that were being released, and a lot of them had different potentials to optimise different parts of the systematic review process. Even if they weren’t specifically geared to be systematic review tools, they had capabilities that would be helpful for it. So, our goal for this piece was to provide an overview of what tools are available and how they can be incorporated and where.

And that’s, kind of, what we do in the paper, we break down the paper by each section of the systematic review. So, looking at the introduction, the methods, the results, the discussion, and piecing in where each tool can fit. And the important thing is, it’s not an absolute rule for each tool to each section, nor is it a complete list, there’s more and more tools that have come out since then. But we wanted to provide a starting point, so it’s not overwhelming for someone who wants to learn a little bit about it.

[00:04:35.647] Mark Tebbs: Brilliant, I found it a really interesting paper. I think it’d be really good for us to row back a little bit, and just go over some definitions. So, you know, we hear the term ‘artificial intelligence’ a lot, so what does it mean? What is artificial intelligence?

[00:04:49.883] Dr. Nicholas Fabiano: Yeah, so we hear – yeah, artificial intelligence, or AI, a lot, especially recently over the last few years, with a lot of the different tools and ChatGPT and different stuff of that sort being released. In the broadest sense, we can really refer to AI as intelligence exhibited by machines, in its simplest definition. And a lot of these tools that are used for the automation of the systematic review process, they depend on the AI capabilities of what are called ‘large language models,’ or LLMs. And without getting into too much detail there, essentially, these models, they’ve been trained on very large datasets of text, and that allows them to comprehend the provided text and have outputs for that, and that’s the basis for a lot of these different tools. That’s a – probably a good start for AI.

[00:05:39.401] Mark Tebbs: Cool, and so, how is AI being used in that, kind of, systematic review process that you described?

[00:05:47.378] Dr. Nicholas Fabiano: So, to, kind of, go back to it, the systematic review process itself can be segmented into several stages, each has different unique challenges that the AI tools can help address. So, it’s important that we’re able to effectively integrate these tools into the process, because there’s considerable potential to really improve the efficiency and streamline the research workflow, while accelerating development of more targeted tools.

[00:06:10.148] Mark Tebbs: I must admit, I was really surprised at how many AI tools there were and so I – AI tools available at, kind of, every stage of that, kind of, review process. So, can you – like, do you have a sense of how widely utilised AI is in, kind of, you know, research and, kind of, systematic reviews generally?

[00:06:29.101] Dr. Nicholas Fabiano: Yeah, so it’s hard to provide exact estimates in terms of how widely it’s used for systematic reviews, or even in papers in general. And to my knowledge, there’s no study that quantifies the extent of usage at the different stages. But there are possibilities, as I mentioned, for uses at each stage, so whether it even be formulating the research question, the introductions, the methods, the results or the discussion. So, due to the vast amount of tools that are available though, I imagine that a lot of tools are used for assistance in lots of the writing portions. So, the introduction and discussion parts, because a lot of these tools can help prevent that writer’s block, so they can suggest different text, they can input different text, so I imagine there’s a lot of use there.

And as more research comes out regarding the use of AI tools, I think we’ll see more usage with the methods, as well, too. Particularly the screening process of the systematic reviews, because it eliminates the need sometimes for duplicate screening, where you need two humans to agree on a screening process, but instead maybe you can have an AI and a human, and if there’s discrepancy, that can be resolved, but it eliminates or halves the workload required.

And then the other portion would be the results section, but this requires right now a lot more manual input in terms of how you want to organise that data. But I do think that the AI does have capabilities with regards to the data synthesis methods, so being able to clarify what methods can be done, and how the most efficient way would be to conduct that. But the caveat to that would be that it needs to be really verified by an experienced author. You can’t just input this AI tool and have an output and trust that. You need someone to really verify the integrity of that.

[00:08:03.398] Mark Tebbs: So, you mentioned some of the advantages already, so I’d just like to go through, you know, what are the advantages or opportunities of AI? And then we’ll come onto maybe what are some of the risks or limitations?

[00:08:16.312] Dr. Nicholas Fabiano: I think the biggest advantage for AI is efficiency, and just allowing us to do stuff faster, while maintaining that accuracy. It provides the potential to automate a lot of those routine and time consuming processes that traditional manual reviews have to do essentially manually, and it takes a lot more time, and it also helps to assist with writing and editing of the final manuscript itself.

So, as an example, there’s been research to, kind of, guesstimate how long does it take to do a manual systematic review. You know, on average, it found that it takes about 67 weeks to complete that, and considerable human resources, to take a systematic review from protocol registration to publication. And the range of that can be anywhere from six months to three years even, depending on the scope, the methodology, and then what resources are available.

But another recent paper came out, and when they incorporated AI tools to the process, the authors claimed that the timeline could be significantly reduced to an impressively short two weeks, so that’s a pretty significant change. But, again, I do want to mention that these fast timelines should be considered with caution, as well, because we want to ensure that we’re not compromising study quality that are influenced as a result.

[00:09:27.628] Mark Tebbs: Yeah, for sure, that’s a massive shift, isn’t it? From, kind of, 67 weeks to two weeks. So, yeah, so what are some of the, kind of, risks and maybe some of the limitations of AI?

[00:09:38.640] Dr. Nicholas Fabiano: So, there’s various risks that can really influence accuracy, the reliability, credibility, and even the overall end product, and we’ll, kind of, go through a few. Number one, AI systems can sometimes generate information that seems plausible, but isn’t actually supported by real evidence or data. So, if you’re, kind of, blindly using an AI tool to write for you, or to input different things and you’re not checking, it may seem plausible, but it’s very important that you have an expert on the team or you’re verifying any claims that are made, because you can’t really take it at face value.

It can also introduce errors sometimes in the process of data summarisation. So if you give it a large block of text, or a table, and you’re instructing the AI to summarise it, it won’t always summarise the portions that are necessarily important to you. It could omit details that could make the data erroneous, so, again, highlighting the importance of really verifying a lot of those different things.

Then the other things that are not necessarily risks, but more so limitations, looking at it at a global perspective, is sometimes AI tools are only accessible behind paywalls. And depending on socioeconomic factors, this may further augment and propagate inequity in science. So people that are able to afford these tools are now getting faster at research, maybe they’re publishing more, maybe they’re getting more grants, and those that maybe don’t have access to the tools are further, kind of, distanced from others. So, it’s something that we need to be cognisant of.

And then the other piece of that too is due to the complex and nuanced algorithms of some of these AI tools, and the confidentiality of codes from some of the companies that have them, sharing AI tools may be difficult, and reproducibility can be impacted. So, again, just to highlight from the limitations perspective, that users must always apply their expertise to evaluate the information, because even some of these advanced tools are very prone to errors at times.

[00:11:30.597] Mark Tebbs: Yeah, I think that, kind of, principle around still being responsible for what is published, that even if it’s produced by AI, you’re still ultimately responsible for that content. I guess – I think that’s a really good description of the opportunities and some of the limitations. Do you, in the paper, come to a conclusion about what a, you know, almost, like, a balanced use of AI would look like?

[00:11:54.193] Dr. Nicholas Fabiano: So, yeah, I think, overall, it’s hard to come to a definite conclusion right now, in terms of, is it good, is it bad? I think it would be more fair to say that the integration of these AI tools, in the systematic review process, has considerable potential to improve the efficiency and streamline that workflow, but, also, remembering those limitations that we mentioned before. And then highlighting that the AI tools should only be a supplement to the human content and clinical insight, quality check, manual review, writing evaluation, are all things that still need to be done from that human side, we can’t just offload everything. So, just to reiterate, the fine line is that it has considerable potential, but we also need to acknowledge its limitations.

[00:12:33.108] Mark Tebbs: So, are you conducting more research in this area? Is there other work that’s going on in the field that you’re aware of, that you’d be able to share?

[00:12:41.600] Dr. Nicholas Fabiano: For myself, not specifically in the field of incorporating AI into systematic reviews, specifically, but I do have interests in teaching the systematic review process and meta-analysis process to early career researchers. And around the same time as this piece was published in JCPP Advances, we had another piece that was published titled, “On the Value of Meta-Research for Early Career Researchers.” And without getting too much into depth, it just provides, kind of, an overview, an entry level for some of these researchers, in terms of important things to know, different things about the methods, and stuff of that sort. So, it is definitely an area of interest for me, so it’s, kind of, within the scope of that efficiency and knowledge surrounding systematic reviews.

And I think in the future I’m going to be planning on starting a piece on common errors in the systematic review and meta-analysis process. So just for people to be able to read, and whether they’re new to it, whether they’re even an expert in it, it’s helpful to, kind of, read over and see, are there things that I’m doing, from a methodological perspective, that perhaps could introduce bias or have a higher risk? And then, even for myself, research in general, a lot of my research is based on systematic reviews and meta-analyses or umbrella reviews, and I have a lot of different projects on the overlap between mental and physical health underway currently.

[00:13:53.843] Mark Tebbs: Great stuff. We’ve come to the end of the podcast, so thank you for your time. I’m just wondering, is there, like, any final take home message for our listeners?

[00:14:02.291] Dr. Nicholas Fabiano: Yeah, so I’d say, overall, just to, kind of, reiterate before, it’s really that combination between AI tools and human expertise, that can really harness the power of those tools, while minimising the potential risks, resulting in systematic reviews that are both efficient and trustworthy. So, making sure, again, that we’re not offloading all responsibility onto AI, but using it as a tool to supplement our knowledge, or to increase efficiency, rather than offloading, would be the main message.

[00:14:28.378] Mark Tebbs: Brilliant. Thank you so much, really interesting conversation, I really appreciate your time. For more details on Dr. Nicholas Fabiano, please visit the ACAMH website, www.acamh.org, and Twitter @ACAMH. ACAMH is spelt A-C-A-M-H, and don’t forget to follow us on your preferred streaming platform, let us know if you enjoy the podcast, with a rating or review, and do share with friends and colleagues.
