Researchers have developed a range of analog and other non-traditional machine learning systems in the hope that they will prove far more energy-efficient than today's computers. But training these systems to perform their tasks has been a major stumbling block. Researchers at NTT Device Technology Labs and the University of Tokyo now say they've come up with a learning algorithm (announced by NTT last month) that goes a long way toward making these systems deliver on their promise.

Their results, obtained on an optical analog computer, represent progress towards the potential efficiency gains that researchers have long sought from “non-traditional” computer architectures.

Modern artificial intelligence programs use a biologically inspired architecture called an artificial neural network to perform tasks such as image recognition or text generation. The strengths of the connections between artificial neurons, which control the results of the computation, must be adjusted, or trained, using standard algorithms. The best known of these algorithms is backpropagation, which updates the connection strengths to reduce the network's errors while it processes trial data. Because the adjustment of some parameters depends on the adjustment of others, the computer must actively transmit and route information.

As Range explained elsewhere: "Backpropagation is like doing inference in reverse order, moving from the last layer of the network back to the first layer; the weight update then combines the information from the original forward output with these backpropagated errors to adjust the network weights in a way that makes the model more accurate."
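To make the description above concrete, here is a minimal sketch of backpropagation on a toy two-layer network. All variable names and sizes are illustrative assumptions, not taken from the researchers' system; the key point is that the hidden-layer error is computed by routing the output error back through the transpose of the same forward weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: x -> h = tanh(W1 x) -> y = W2 h
W1 = rng.normal(size=(4, 3)) * 0.5
W2 = rng.normal(size=(2, 4)) * 0.5

x = rng.normal(size=3)
target = np.array([1.0, -1.0])

# Forward pass (the "inference" direction).
h = np.tanh(W1 @ x)
y = W2 @ h
e = y - target
loss_before = 0.5 * np.sum(e**2)

# Backward pass: the output error travels back through W2.T,
# so updating W1 depends on information about W2 -- this is the
# information routing the article describes.
delta_h = (W2.T @ e) * (1 - h**2)  # chain rule through tanh

lr = 0.01
W2 -= lr * np.outer(e, h)
W1 -= lr * np.outer(delta_h, x)

# One gradient step should reduce the error on this example.
h2 = np.tanh(W1 @ x)
loss_after = 0.5 * np.sum((W2 @ h2 - target) ** 2)
```

The dependence of the `W1` update on `W2.T` is exactly what is hard to realize on analog hardware, since the physical device has no built-in way to run its computation in reverse.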

Alternative computing architectures that trade complexity for efficiency often cannot carry out the information transfer the algorithm requires. As a consequence, the trained network parameters must instead be obtained from an independent physical simulation of the entire hardware setup and its information processing. But creating simulations of sufficient quality can be a challenge in itself.

“We found that applying backpropagation algorithms to our device was very difficult,” said Katsuma Inoue of NTT Device Technology Labs, one of the researchers involved in the study. “There has always been a gap between the mathematical model and the real device due to several factors such as physical noise and inaccurate modeling.”

The difficulty of implementing backpropagation prompted the authors to explore and implement an alternative learning algorithm, based on one called direct feedback alignment (DFA), first introduced in a 2016 paper. DFA reduces the need for information transfer during training, and thus the extent to which the physical system must be modeled. The authors' new "augmented DFA" algorithm eliminates the need for detailed device modeling entirely.

To study and test the algorithm, the researchers implemented it on an optical analog computer, in which the strengths of the neural network's connections are represented by the intensity of light beams traveling through a ring of optical fiber, rather than by digitally stored numbers.

“This is an absolutely essential demonstration,” said Daniel Brunner of the FEMTO-ST Institute, a French public research organization. Brunner is developing non-traditional photonic computers similar to those used by the researchers in the study. “The beauty of this particular algorithm is that it’s not that hard to implement in hardware – that’s why it’s so important.”
