
Deeplite & IMT-Atlantique Partner to Make Deep Learning More Efficient

The Deeplite team is stoked to announce our partnership with IMT-Atlantique, a top university and research institute in Nantes, France! This partnership brings together the AI algorithms expertise of Dr. Mathieu Léonardon, Associate Professor of Electronics, and Deeplite's research team to develop novel methods for optimizing Deep Neural Networks (DNNs) for deployment on low-power, cost-effective computing hardware.


Together, Deeplite and IMT-Atlantique will develop algorithms and methods that enable accurate DNNs with extremely low-bit precision, in some cases using as few as 1 or 2 bits for model weights and activations. Unlike previous work on extreme low-bit precision and DNN quantization, this research will explore methods applicable to general-purpose processors such as x86 or ARM CPUs, rather than the specialized ASICs or FPGAs often required to implement low-precision DNNs.
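To give a flavor of why such extreme quantization maps well onto ordinary CPUs, here is a minimal, hypothetical sketch (not Deeplite's or IMT-Atlantique's actual method) of 1-bit weight binarization and a dot product computed with XNOR and popcount, bitwise operations that general-purpose processors execute natively:

```python
import numpy as np

def binarize(w):
    """Map real-valued weights to {-1, +1} via the sign function,
    keeping a per-tensor scale (mean absolute value) to reduce error."""
    scale = np.abs(w).mean()
    return np.sign(w), scale

def binary_dot(a_bits, b_bits, n):
    """Dot product of two length-n sign vectors packed into integers
    (bit = 1 encodes +1, bit = 0 encodes -1), via XNOR + popcount."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # 1 wherever signs agree
    matches = bin(xnor).count("1")
    return 2 * matches - n  # +1 per matching sign, -1 per mismatch

def pack(signs):
    """Pack a ±1 vector into an integer bit mask (1 for +1, 0 for -1)."""
    return sum(1 << i for i, s in enumerate(signs) if s > 0)

rng = np.random.default_rng(0)
w = rng.normal(size=8)  # toy "weights"
x = rng.normal(size=8)  # toy "activations"

wb, sw = binarize(w)
xb, sx = binarize(x)

# Scaled binary dot product approximating np.dot(w, x)
approx = sw * sx * binary_dot(pack(wb), pack(xb), len(w))
```

The key point is that the inner loop collapses into a handful of integer instructions per 64 weights, which is why low-bit inference can be fast without dedicated hardware.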


Increasingly, high-performance DNNs are used to solve complex problems across a wide range of computer vision and NLP tasks. Newer models are deeper and require far more parameters to achieve state-of-the-art results. For example, recent Transformer models can reach 175 billion parameters, almost a 3,000x increase over the AlexNet model that achieved record performance on the ImageNet visual recognition challenge only 9 years ago. This massive growth in model size and compute requirements has made many deep learning solutions impractical on edge devices, incurs high costs in data centers, and puts these solutions out of reach for many AI researchers and practitioners.

Deeplite helps AI engineers and researchers overcome these barriers by optimizing their DNNs to meet computation constraints without sacrificing model accuracy. Our automated software optimizer, Deeplite Neutrino™, uses a novel design space exploration to create compact AI model designs independent of the target processor. It is also complementary to many hardware-related optimizations such as 8-bit quantization, a common practice in frameworks like Intel's OpenVINO, Facebook's QNNPACK, Tencent's NCNN, and Google's TensorFlow Lite.
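For readers unfamiliar with the 8-bit quantization these frameworks use, here is a short, illustrative sketch (names and details are our own, not any framework's API) of affine quantization: floats are mapped to 8-bit integers with a scale and zero point, and recovered approximately on dequantization:

```python
import numpy as np

def quantize_uint8(x):
    """Affine (asymmetric) 8-bit quantization: map a float tensor to
    uint8 using a per-tensor scale and zero point."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = int(round(-lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the quantized tensor."""
    return (q.astype(np.float32) - zero_point) * scale

x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=np.float32)
q, s, z = quantize_uint8(x)
x_hat = dequantize(q, s, z)  # close to x, within one quantization step
```

Each value is stored in a quarter of the memory of float32, and the rounding error stays within one quantization step; extreme low-bit methods push this same idea down to 1 or 2 bits.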

Through this partnership, we'll advance the applicability and accessibility of deep learning as our research progresses! At Deeplite, we aim to enable AI for everyday life and are always proud to partner with like-minded researchers to make this vision possible.
