Advanced technologies such as neural networks have found extensive application in image recognition, big data processing, financial analysis, and many other fields. However, training them demands significant computational resources and energy consumption, posing challenges for their widespread use and further development.
The problem comes down to bottlenecks imposed by traditional computing systems built using transistors, which keep their memory and processing units separate, requiring power-intensive and performance-degrading data transfers between them. Perhaps an even greater disadvantage is that their memory requires constant power to store information, which further increases energy consumption.
To solve this, researchers propose an alternative: the memristor. “Memristors, also known as memory resistors, are switches that can ‘remember’ their previous electric state, even after power is switched off,” said Desmond Loke, professor at the Singapore University of Technology and Design, in an email.
“Memristors can be used to create devices that store data because they have the ability to drastically reduce the time and energy required for data transmission between memory and processors in traditional microchips,” he continued. “They might be perfect for constructing neural networks, artificial intelligence systems for medical scan processing, and enabling driverless vehicles.”
In a study published in Advanced Physics Research, Loke and his colleagues investigated neural network training using a new memristor design that stores information in a phase-change material made up of germanium, tellurium, and antimony.
This substance normally exists in a disordered, amorphous phase, but when exposed to an electric current and the resulting change in temperature, it develops ordered crystalline regions. Depending on the properties of the pulse, such as its intensity and signal shape, the number and size of these regions change, which in turn alters the material's electrical and optical properties. Because these properties persist after the current is turned off, information can be written to the memristor with electrical pulses and later read back by applying a small current.
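The write-and-read behavior described above can be pictured with a minimal sketch. This is an illustrative model, not the authors' device physics: it assumes a pulse's normalized amplitude selects one of N discrete crystallization levels, and that the cell's conductance interpolates linearly between an amorphous and a crystalline value.

```python
# Hypothetical sketch (not the paper's model) of a multilevel
# phase-change memristor cell: a write pulse sets one of N discrete
# crystallization levels, the level maps to a conductance that persists
# without power, and a small read voltage recovers it non-destructively.

class PhaseChangeCell:
    def __init__(self, levels=15, g_amorphous=1e-6, g_crystalline=1e-3):
        self.levels = levels        # number of distinguishable states
        self.g_min = g_amorphous    # conductance when fully amorphous (S)
        self.g_max = g_crystalline  # conductance when fully crystalline (S)
        self.state = 0              # start fully amorphous

    def write(self, pulse_amplitude):
        """Map a normalized pulse amplitude in [0, 1] to a discrete level."""
        amp = min(max(pulse_amplitude, 0.0), 1.0)   # clip to valid range
        self.state = round(amp * (self.levels - 1))

    def read(self, v_read=0.1):
        """Non-destructive read: current through the cell at a small bias."""
        frac = self.state / (self.levels - 1)       # crystalline fraction proxy
        g = self.g_min + frac * (self.g_max - self.g_min)
        return g * v_read                            # Ohm's law: I = G * V

cell = PhaseChangeCell()
cell.write(0.5)              # medium-strength pulse -> intermediate level
print(cell.state)            # 7 (middle of the 15 levels, indexed 0..14)
print(cell.read())           # read current at 0.1 V bias
```

The key property being modeled is non-volatility: `state` survives between calls with no refresh, unlike conventional DRAM.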
In total, the authors could create fifteen clearly distinguishable stages of crystallization in the memristor, each storing a distinct value that the neural network could use during training. “We used [the] memristors to simulate a neural network,” Loke explained. “In trials, the neural network recognized handwritten numerals with an accuracy of over 96%.”
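To see why fifteen distinguishable states matter for a neural network, consider that each device can then hold one of fifteen weight values. The sketch below is an assumption-laden illustration, not the paper's method: it supposes the levels are evenly spaced across a normalized weight range and snaps each continuous weight to the nearest level, as a device with 15 states would have to.

```python
# Illustrative sketch (assumed, not the authors' scheme): storing neural
# network weights on memristors that support only 15 discrete levels.
# Each continuous weight is clipped to the device range and snapped to
# the nearest of 15 evenly spaced values.

def quantize(w, levels=15, w_min=-1.0, w_max=1.0):
    """Snap a weight to the nearest of `levels` evenly spaced values."""
    w = min(max(w, w_min), w_max)          # clip to the device's range
    step = (w_max - w_min) / (levels - 1)  # spacing between adjacent levels
    index = round((w - w_min) / step)      # nearest level index, 0..14
    return w_min + index * step

weights = [0.83, -0.41, 0.30, -0.99]       # continuous trained weights
stored = [quantize(w) for w in weights]    # what the devices actually hold
for w, s in zip(weights, stored):
    print(f"{w:+.2f} -> {s:+.4f}")
```

With more distinguishable levels, the quantization error per weight shrinks, which is one plausible reason a 15-level device can support higher recognition accuracy than coarser designs.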
This accuracy is an improvement of more than 5% over other memristor designs, the team says, indicating a promising trajectory for this type of memristor computing technology. But there are still significant hurdles to overcome before it can be implemented in real-world computing systems.
“System integration and scale pose considerable challenges,” Loke explained. Nevertheless, the researchers remain hopeful, envisioning a future where memristors play a pivotal role in training large neural networks. If the technology can be developed further, they anticipate systems that are smaller, more powerful, and dramatically more energy-efficient than conventional computers.
“Currently, this work is just a proof of concept,” Loke concluded. “The next step is to develop a fully integrated circuit and a big neural network.”
Reference: Kian-Guan Lim et al., Toward Memristive Phase-Change Neural Network with High-Quality Ultra-Effective Highly-Self-Adjustable Online Learning, Advanced Physics Research (2024). DOI: 10.1002/apxr.202300085
Feature image credit: AcatXIo on Pixabay