Thought of the Week
How can AI be used in research?
August 29, 2024 5 Minute Read

Nearly 70% of people have used AI in the last year, according to government statistics. While most people use AI for personal purposes (such as planning travel or controlling smart home features), around a quarter use it in a work or study environment. Perhaps unsurprisingly, the younger generation tends to use it more.
Some industries lend themselves better to adopting AI in the workplace, and research is one of them. In fact, a recent study showed that a third of researchers found AI tools essential or very useful to their role. Moreover, they acknowledged that the future potential is higher: the number of researchers who think AI tools will become essential, or very useful, in the future doubles to two thirds.
AI can benefit researchers through its ability to increase efficiency and save time. It provides faster ways to process data and can identify trends and correlations more quickly than traditional methods of analysis. It can also sift through large data sets to identify errors and anomalies, cleaning and transforming the data more accurately and efficiently than a researcher working manually.
AI is not limited to data analysis: it can also speed up knowledge retrieval, finding relevant literature in a fraction of the time and then synthesising larger studies and articles into a more digestible form for the reader. By freeing up time that would otherwise be spent on these tasks, researchers can dedicate more of it to critical thinking and hypothesis generation.
The growth of generative AI in recent years has the potential to elevate research capabilities further. Generative AI can help with brainstorming new research ideas and generating new hypotheses, for example by identifying gaps in the existing literature and highlighting overlooked or under-researched areas.
Despite the benefits, under 10% of researchers use generative AI tools such as ChatGPT regularly, and over half have never used generative AI at all. The barriers that prevent use include a lack of skills, a lack of training resources, and a lack of funding.
Other inhibitors include the perceived risks. In particular, there is a concern that reliance on pattern recognition without an understanding of the subject and context could lead to spurious conclusions. In addition, there are fears that AI might proliferate misinformation, introduce mistakes or inaccuracies into research papers, and entrench bias in research texts.
But there are ways to mitigate these risks. The key is to understand that AI is a co-pilot, not a pilot. Fundamentally, human oversight is essential: researchers will need to review AI outputs and validate the data or information at every stage of the research process to minimise, or eradicate, the risk of inaccuracies being published. Additionally, AI tools operate on algorithms and patterns and aren’t always capable of considering nuance or dynamic situations. It is therefore the responsibility of humans to apply contextual understanding and amend AI outputs accordingly to avoid bias. This may involve training in how to give AI the right prompts, for example.
So, while AI can be a huge game changer for the research industry, it will still very much rely on individuals, not least to pull together complex ideas and combine them with lived experience, something AI is unable to replicate. Therefore, while AI can enhance the research process, researchers themselves will remain critical to the outputs.
[Survey infographic: "What do you think are positive impacts of AI in research?" (68%, 56%, 45%); "Do you feel that there are barriers preventing you, or your research team, from developing or using AI as much as you would like?" (57%, 44%, 55%); "Where do you think generative AI may have negative impacts on research?" (71%, 68%, 53%).]
