The patent badge is an abbreviated version of the USPTO patent document. It covers the following: patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventors, assignee, attorney firm, primary examiner, assistant examiner, CPCs, and abstract. The patent badge also contains a link to the full patent document (in PDF format).
Patent No.:
Date of Patent:
Sep. 30, 2025
Filed:
Sep. 14, 2021
Combining compression, partitioning and quantization of DL models for fitment in hardware processors
Applicant:
Tata Consultancy Services Limited, Mumbai, IN;
Inventors:
Swarnava Dey, Kolkata, IN;
Arpan Pal, Kolkata, IN;
Gitesh Kulkarni, Bangalore, IN;
Chirabrata Bhaumik, Kolkata, IN;
Arijit Ukil, Kolkata, IN;
Jayeeta Mondal, Kolkata, IN;
Ishan Sahu, Kolkata, IN;
Aakash Tyagi, Bangalore, IN;
Amit Swain, Bangalore, IN;
Arijit Mukherjee, Kolkata, IN;
Assignee:
Tata Consultancy Services Limited, Mumbai, IN;
Abstract
Small and compact Deep Learning models are required for embedded AI in several domains. In many industrial use-cases, there are requirements to transform already trained models to ensemble embedded systems, or to re-train them for a given deployment scenario with limited data for transfer learning. Moreover, the hardware platforms used in embedded applications include FPGAs, AI hardware accelerators, System-on-Chips, and on-premises computing elements (Fog/Network Edge). These are interconnected through heterogeneous buses/networks with different capacities. The method of the present disclosure determines how to automatically partition a given DNN across ensemble devices, considering the accuracy-latency-power trade-off due to intermediate compression and the effect of quantization arising from conversion to AI accelerator SDKs. The method of the present disclosure is an iterative approach that obtains a set of partitions by repeatedly refining the partitions and generating a cascaded model for inference and training on ensemble hardware.
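The iterative partition-refinement idea described in the abstract can be illustrated with a minimal sketch: model the DNN as a chain of layers with per-layer costs, split it into one contiguous segment per device, and repeatedly nudge the cut points to reduce the worst per-device load. The function names (`partition_cost`, `refine_partitions`) and the simple max-load objective are illustrative assumptions; the patented method additionally accounts for accuracy, latency, power, compression, and quantization effects, which are not modeled here.

```python
# Hypothetical sketch of iterative DNN partition refinement across devices.
# This is NOT the patented algorithm; it only illustrates the general idea
# of repeatedly refining a set of partitions.

def partition_cost(layer_costs, cuts):
    """Worst per-segment cost for the given cut points (segment boundaries)."""
    bounds = [0] + list(cuts) + [len(layer_costs)]
    return max(sum(layer_costs[a:b]) for a, b in zip(bounds, bounds[1:]))

def refine_partitions(layer_costs, num_devices, max_iters=100):
    """Start from an even split, then greedily move one cut point at a time
    while that lowers the worst per-device load."""
    n = len(layer_costs)
    cuts = [i * n // num_devices for i in range(1, num_devices)]
    best = partition_cost(layer_costs, cuts)
    for _ in range(max_iters):
        improved = False
        for i in range(len(cuts)):
            for delta in (-1, 1):
                trial = list(cuts)
                trial[i] += delta
                lo = trial[i - 1] if i > 0 else 0
                hi = trial[i + 1] if i + 1 < len(trial) else n
                if not (lo < trial[i] < hi):
                    continue  # keep cuts strictly increasing and in range
                cost = partition_cost(layer_costs, trial)
                if cost < best:
                    best, cuts, improved = cost, trial, True
        if not improved:
            break  # local optimum reached
    return cuts, best
```

For example, `refine_partitions([4, 2, 8, 1, 3, 6], 3)` assigns contiguous layer ranges to three devices; a real system would replace the scalar costs with measured latency/energy and fold in the accuracy loss from compressing or quantizing the intermediate tensors at each cut.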