
Introducing Deeplite's ActNAS and BAViT at NeurIPS 2024

Written by Deeplite team | Dec 27, 2024 6:41:23 PM

Sudhakar Sah (Sud), co-founder at Deeplite, presented not one but two of our research papers at the NeurIPS FITML Workshop in Vancouver, Canada on December 14th:

ActNAS: Generating Efficient YOLO Models using Activation NAS

Token Pruning using a Lightweight Background Aware Vision Transformer

 

ActNAS: Generating Efficient YOLO Models using Activation NAS

This paper describes our novel approach of using mixed activation functions to optimize YOLO and other CNN models, as opposed to current methods that use a single activation function throughout the model. This approach has demonstrated a 30-70% reduction in latency and up to a 65% reduction in memory usage on target edge processors, with minimal impact on accuracy.
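
To make the idea concrete, here is a minimal PyTorch-style sketch. It is not Deeplite's ActNAS implementation; it only illustrates the general pattern of giving each convolution block its own activation function (chosen from a candidate set) instead of reusing one activation everywhere. The candidate set, channel sizes and `build_backbone` helper are illustrative assumptions.

```python
# Illustrative sketch only, not Deeplite's ActNAS implementation: it shows the
# general idea of assigning a (possibly different) activation to each conv
# block, instead of using one activation (e.g., SiLU) throughout the model.
import torch
import torch.nn as nn

# Hypothetical candidate set; cheaper activations (ReLU, Hardswish) tend to run
# faster on edge processors than SiLU, at a small per-layer accuracy cost.
ACTIVATIONS = {
    "relu": nn.ReLU,
    "hardswish": nn.Hardswish,
    "silu": nn.SiLU,
}

class ConvAct(nn.Module):
    """Conv + BN + a per-layer activation chosen by the search."""
    def __init__(self, c_in, c_out, act_name):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = ACTIVATIONS[act_name]()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

def build_backbone(per_layer_acts):
    """per_layer_acts: one activation name per block, e.g. the output of a
    search that trades measured latency against accuracy."""
    channels = [3, 16, 32, 64]
    blocks = [
        ConvAct(channels[i], channels[i + 1], per_layer_acts[i])
        for i in range(len(per_layer_acts))
    ]
    return nn.Sequential(*blocks)

# A mixed-activation configuration a search might return.
model = build_backbone(["relu", "hardswish", "silu"])
print(model(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 64, 64, 64])
```

In a real search, the per-layer choice would be driven by latency and accuracy measured on the target edge processor rather than hand-picked as in this toy configuration.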

You can check out our ActNAS blog 👉 HERE

A shout-out to the authors of the ActNAS paper: Sudhakar Sah (Sud), Ravish Kumar, Darshan C G and Ehsan Saboori (PhD, Eng).

 

Token Pruning using a Lightweight Background Aware Vision Transformer

In our second paper we introduce BAViT, a Background Aware Vision Transformer. BAViT identifies and prunes background tokens, reducing the number of tokens a vision transformer has to process. Its lightweight design makes it suitable for edge AI applications, and it has demonstrated the ability to boost the throughput of ViT object detection models by up to 40%!
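
As a rough illustration, the sketch below (an assumption for explanation, not the BAViT architecture itself) shows the general idea of background token pruning: a small per-token classifier scores patch tokens as foreground or background, and only the foreground tokens are passed on to the heavier transformer layers. The `TokenPruner` module, threshold and dimensions here are hypothetical.

```python
# Illustrative sketch only, not the BAViT architecture: it shows the general
# idea of scoring patch tokens with a lightweight classifier and keeping only
# the foreground tokens before the heavier transformer blocks run.
import torch
import torch.nn as nn

class TokenPruner(nn.Module):
    """Hypothetical lightweight per-token foreground/background classifier."""
    def __init__(self, dim, keep_threshold=0.5):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # one foreground logit per token
        self.keep_threshold = keep_threshold

    def forward(self, tokens):
        # tokens: (batch, num_tokens, dim); batch size 1 is assumed for
        # simplicity, since each image may keep a different number of tokens.
        probs = torch.sigmoid(self.score(tokens)).squeeze(-1)  # (1, N)
        keep = probs[0] > self.keep_threshold                   # (N,) boolean mask
        return tokens[:, keep, :]                                # (1, N_kept, dim)

dim = 192
pruner = TokenPruner(dim)
encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=3, batch_first=True)

tokens = torch.randn(1, 196, dim)   # e.g., 14x14 patch tokens
kept = pruner(tokens)               # background tokens dropped
out = encoder(kept)                 # attention now runs over fewer tokens
print(tokens.shape[1], "->", kept.shape[1])
```

Because self-attention cost grows quickly with the number of tokens, dropping background tokens early is what yields the throughput gain on detection workloads.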

You can find our BAViT blog 👉 HERE

A shout-out to the authors of the BAViT paper: Sudhakar Sah (Sud), Ravish Kumar, Honnesh Rohmetra and Ehsan Saboori (PhD, Eng).

Interested in finding out more about optimized models for your edge AI application?

Contact us at info@deeplite.ai!

We hope you enjoyed this blog. Please let us know if you have any questions or feedback. 😊