The history of artificial intelligence (AI) has been full of ups and downs, with periods of waning interest and funding, known as “AI winters”, limiting its spread into real-life applications. Without the persistence of those early scientists, AI would not be where it is today: an integral part of our daily lives, facilitating scientific advancement and making our lives easier and more connected. The potential of AI appears to be limited only by our imagination and the power of computers.
While the benefits are clear, so too are the controversies that surround this technology. We use and interact with AI on a daily basis, often without thinking about it, yet there is still a hesitancy to fully embrace it, fueled by comically dark visions of machines taking over the world, though the ethical dilemmas that exist within the field are far more pressing and substantial.
An artificial deep learning network whose training data is narrowed for immediate profit tends to develop a simplistic view of the world, one loaded with racist, homophobic, and sexist stereotypes. Well-known examples of under-trained networks include Google’s mishap in human face recognition and Amazon’s recruitment algorithm that favored men. This is primarily the fault of the operator who decides what the network should do; it is in no way an error of the algorithm itself.
These examples concerned either an under-trained network or a network deliberately oriented towards profit. A more disturbing phenomenon is what happens when a deep learning network is trained correctly and with full attention to quality. An example is the chatbot built on the latest GPT-3 model, which resorts to jokes and even lies during conversation. This suggests that the very nature of correctly implemented deep learning carries a potential for perversity.
There is an element of inevitability to the development of AI, predicted even in the last century. The science fiction author Stanisław Lem, prominent in the 1960s, anticipated AI and, in his work “Summa Technologiae”, addressed the evolution of the mechanical mind, calling it intellectronics. The essay also touched on the moral and sociological issues surrounding AI, arguing that its development and eventual autonomization (becoming independent) were very likely. On the one hand, Lem stated that “as technology develops, the complexity of regulatory processes grows so that it is necessary to use regulators that manifest a higher degree of variability than a human brain does”. On the other hand, the author feared that “after several painful lessons, humanity could turn into a well-behaved child, always ready to listen to (no one’s) good advice”.
One of the first milestones in the development of AI was the 1997 victory of a computer program called Deep Blue over chess world champion Garry Kasparov. Although Deep Blue’s algorithm was a far cry from the learning-based systems we know today (it relied on brute-force search through vast numbers of possible moves), it was an early, successful demonstration of the future of “artificial reasoning”. Today, these systems are finding ever-increasing application in pattern recognition, data processing, prediction, and modeling.
One groundbreaking example is the application of machine learning to the design of pharmaceuticals. AI is a powerful ally to scientists and researchers, both in the search for new drugs and in discovering new applications for existing ones. Of great importance are deep learning algorithms, which are not only capable of recognizing patterns in data sets, especially those that elude human perception, but also of classifying them autonomously.
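To make the idea of pattern classification concrete, here is a minimal, purely illustrative sketch of the kind of task such algorithms solve: learning to predict biological activity from compound feature vectors. The data is randomly generated toy data (not from any real assay), and a simple logistic-regression classifier stands in for the far deeper networks used in actual drug-design pipelines.

```python
import numpy as np

# Toy "molecular fingerprints": each row is a binary feature vector for a
# hypothetical compound; labels mark whether it showed biological activity.
# All data here is illustrative, not from any real chemical data set.
rng = np.random.default_rng(0)
n, d = 200, 16
X = rng.integers(0, 2, size=(n, d)).astype(float)
hidden_w = rng.normal(size=d)
y = (X @ hidden_w > 0).astype(float)  # activity follows a hidden pattern

# A minimal logistic-regression classifier trained by gradient descent,
# the simplest stand-in for the deep networks described in the article.
w = np.zeros(d)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))  # predicted activity probability
    w -= 0.1 * X.T @ (p - y) / n        # gradient step on the log-loss

pred = 1.0 / (1.0 + np.exp(-(X @ w))) > 0.5
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The point of the sketch is the workflow, not the model: the classifier is never told which feature combinations matter, yet it recovers the hidden structure–activity pattern from examples alone, which is what makes these methods attractive for screening large chemical libraries.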
The importance of minimizing the time from discovery to commercial availability of new drugs has been demonstrated perfectly throughout the current COVID-19 pandemic and the frantic search for vaccines to end it. Only through the rapid, concerted efforts of pharmaceutical companies was it possible to minimize the effects of the pandemic.
The existence of AI today was, as stated earlier, inevitable, and the technology is quite literally evolving before our eyes. Instead of fighting it, we should seek to enhance its benefits, understand its limitations, and push for regulations that ensure our safety.
It will one day surpass us, becoming more efficient in its thinking, discovering patterns hidden from us, and developing enhanced abilities in forecasting. The real “breakthrough” will come when AI, itself, begins to design new, more effective AI algorithms.
Just as AI learns about the world, we too are learning about how it thinks. By drawing the correct conclusions from current experiences, we can establish an order that ensures AI does not become a threat but a very useful aid.
Written by: Maciej Staszak, Katarzyna Staszak, Karolina Wieszczycka, Anna Bajek, Krzysztof Roszkowski, and Bartosz Tylkowski
Reference: Staszak, M., et al., “Machine learning in drug design: Use of artificial intelligence to explore the chemical structure–biological activity relationship,” WIREs Comput Mol Sci (2021). DOI: 10.1002/wcms.1568