Researchers have developed a photonic processor that enables ultra-fast AI computations with high energy efficiency. The device, which combines electronics and optics on a single chip, achieved accuracy comparable to traditional hardware in training and inference tests, performs key computations in under half a nanosecond, and could be scaled up for real-world applications.
Deep neural networks are composed of many interconnected layers of nodes, or neurons, that operate on input data to produce an output. One key operation is matrix multiplication, a linear-algebra step that transforms data as it passes from layer to layer. In addition to these linear operations, however, deep neural networks also perform nonlinear operations that help the model learn more intricate patterns.
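As a point of reference, the short sketch below shows these two kinds of operations as they appear in ordinary software (NumPy): a matrix multiplication that mixes the inputs, followed by a nonlinear activation. The layer sizes, weights, and the choice of ReLU are purely illustrative and are not taken from the paper.

```python
import numpy as np

def layer(x, W, b):
    """One dense layer: a linear transform (matrix multiplication)
    followed by a nonlinear activation (ReLU, as an example)."""
    z = W @ x + b            # linear operation: matrix-vector product
    return np.maximum(z, 0)  # nonlinear operation: ReLU activation

# Tiny example: pass a 4-dimensional input through two layers.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

hidden = layer(x, W1, b1)
output = W2 @ hidden + b2    # final linear readout
print(output)
```

On conventional hardware both steps run in digital logic; the point of the photonic approach is to carry them out with light on the chip itself.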
The challenge with photonic devices is that they can't perform nonlinear operations on the chip: optical data must be converted into electrical signals and sent to a digital processor to carry those operations out. That off-chip conversion makes it difficult to build a scalable system.
To overcome this challenge, researchers designed devices called nonlinear optical function units (NOFUs), which combine electronics and optics to implement nonlinear operations on the chip. They built an optical deep neural network on a photonic chip using three layers of devices that perform linear and nonlinear operations.
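Conceptually, the resulting network has the same shape as an ordinary three-layer model in software: each layer applies a linear transform and then a nonlinearity, except that on the chip both stages are performed optically and electro-optically rather than in digital logic. A minimal software analogue of that structure might look like the following; the layer widths, random weights, and the ReLU nonlinearity are assumptions for illustration, not details from the device.

```python
import numpy as np

rng = np.random.default_rng(1)

def dense(x, W):
    """Linear stage: mix the signals with a matrix multiplication."""
    return W @ x

def nonlinearity(z):
    """Nonlinear stage (ReLU here, standing in for whatever
    nonlinearity the on-chip units actually implement)."""
    return np.maximum(z, 0)

# Three layers, each pairing a linear stage with a nonlinear stage.
sizes = [16, 16, 16, 10]  # illustrative layer widths
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]

x = rng.normal(size=sizes[0])  # example input vector
for W in weights:
    x = nonlinearity(dense(x, W))
print(x.shape)  # (10,)
```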
The photonic system achieved more than 96 percent accuracy during training tests and more than 92 percent accuracy during inference, which is comparable to traditional hardware. In addition, the chip performs key computations in less than half a nanosecond.
Scaling up their device and integrating it with real-world electronics like cameras or telecommunications systems will be a major focus of future work. The researchers also want to explore algorithms that can leverage the advantages of optics to train systems faster and with better energy efficiency.