How do we balance the risks and rewards of using AI in the lab?

Dec 17, 2024

AI might be fast and efficient, but scientists still don’t know whether the risks of integrating it with cloud-based labs will be worth the rewards.
An automated lab.

Cloud-based laboratories, where equipment can be remotely controlled, are transforming the future of research by offering scientists worldwide access to cutting-edge instrumentation. This shift has the potential to accelerate scientific discovery at an unprecedented pace.

However, this exciting development also raises important questions about the role of artificial intelligence (AI) in these labs, particularly regarding security, ethics, and the reliability of research.

“[Cloud-based labs] are ideal systems to reliably and safely test a very large number of variables over a relatively short period of time,” said Nirosha Murugan, research chair in tissue biophysics and assistant professor in the Department of Health Science at Wilfrid Laurier University. “As [they] become integrated with AI, their potential to solve problems and make discoveries on their own — which is to say, without relying on human decision-making — is increasing.”

AI could not only help accelerate scientific advancements but also address the growing problem of the replication crisis — the issue of research results that are difficult or impossible for other scientists to reproduce. By automating data analysis, refining experimental design, and enabling more accurate predictions, AI has the potential to enhance the reliability and consistency of scientific findings, ultimately fostering greater trust in research outcomes.

Risks vs. benefits

However, the use of AI in any setting has always raised ethical, security, and safety questions. Murugan and her colleague Nicolas Rouleau argue that its integration into the cloud-based laboratory setting may not be entirely positive.

For one, AI is still relatively new, and it’s not possible to predict all the potential problems it might create, especially when given access to vast databases of scientific knowledge.

“Even when given an explicit goal, AI can make unexpected decisions that are deeply unintuitive to achieve said goal,” said Rouleau, assistant professor of biomedical engineering at Wilfrid Laurier University. “If the AI in question is self-directed, it might manufacture or manipulate data to achieve its goal.”

A hypothetical scenario to illustrate this involves AI-operated cloud-based laboratories used for drug discovery, which could lead to portfolios filled with “fictitious, undeliverable pharmaceuticals, supported by manufactured data.” Such a situation underscores the risks of relying too heavily on AI without proper safeguards, as it could lead to misleading conclusions and hinder genuine scientific progress.

“The profitability of a publicly traded pharmaceutical company hinges on its perceived worth, which could be greatly exaggerated by falsified data,” said Rouleau. “But the greater risk is in being unable to discriminate between genuine discoveries and the hallucinations of AI.”

Another potential problem is attempting to balance the natural bias of an AI system with the need to interpret scientific data in an unbiased way. “To become a useful tool, AI-based technologies must become biased to achieve their goal states. They must favor some outcomes over others and weigh variables differently,” said Rouleau. “With AI, the critical thing is aligning the biases with everything we care about, like truth, safety, respect of human rights, and so on.

“Just as a child can be taught to value some things over others, so too can an AI — that’s one way to control rather than eliminate bias. But the question is: Will it always defer to our authority? Or will it rebel, as all children eventually (and must!) do?”

AI might be faster and more efficient, but scientists still don’t know whether the risks of integrating it with cloud-based laboratories will be worth the rewards. “We have a very powerful scientific community that can and does make discoveries every day without these technologies,” said Rouleau. “Therefore, it is not necessary to embody AIs with [cloud-based laboratories] — it’s just very convenient.”

AI may never replace human ingenuity

Tom Froese, a cognitive scientist at the Okinawa Institute of Science and Technology in Japan, who was not involved in the study, said he understands the claim that the use of AI in cloud-based laboratories can help contribute to scientific discoveries.

“But I would caution against extrapolating from such advanced automation to a vision of the future in which AI systems become increasingly independent decision-makers that outpace even the most brilliant and creative people,” he said. “At least for the moment, despite all the impressive advances in AI that have overturned my expectations about what is possible, I would argue that our capacity to exercise free will or volition remains a fundamental stumbling block for artificial agents. The challenge of getting an AI system to take initiative, rather than be prompted, still stands in the way of attempts at embodying an artificial scientist that can rival human pioneers.”

“These tools could redefine how we conduct research, accelerating discoveries and opening doors to treatments once thought impossible,” added Rouleau. “However, they also raise complex ethical, security, and societal questions that must be addressed as we move forward.”

In this sense, scientists should collaborate with engineers, policymakers, and perhaps even philosophers on how best to integrate AI into scientific endeavors in order to reduce risk, maximize rewards, and make safety a priority.

“I think we should take philosophers much more seriously when considering questions of AI, intelligence, agency, and sentience,” Rouleau said. “The lines between living organisms and machines are becoming blurred. A new science of cognition that applies to more than just animals should be developed to address these technology-driven questions.”

Reference: Nicolas Rouleau and Nirosha J. Murugan, The Risks and Rewards of Embodying AI with Cloud-Based Laboratories, Advanced Intelligent Systems (2024). DOI: 10.1002/aisy.202400193

Feature image credit: Unsplash
