The patent badge is an abbreviated version of the USPTO patent document. It covers the following: patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventor, assignee, attorney firm, primary examiner, assistant examiner, CPCs, and abstract. The badge contains a link to the full patent document (in Adobe Acrobat format, i.e., PDF).

Date of Patent: Nov. 29, 2022
Filed: Dec. 10, 2018
Applicant: Beijing University of Posts and Telecommunications, Beijing, CN
Inventors: Jianxin Liao, Beijing, CN; Jingyu Wang, Beijing, CN; Jing Wang, Beijing, CN; Qi Qi, Beijing, CN; Jie Xu, Beijing, CN
Attorney:
Primary Examiner:
Assistant Examiner:
Int. Cl.: G06N 3/08 (2006.01); G06N 3/04 (2006.01)
U.S. Cl. CPC: G06N 3/08 (2013.01); G06N 3/04 (2013.01)
Abstract

Embodiments of the present invention provide a method and apparatus for accelerating distributed training of a deep neural network. In the method, based on parallel training, the training of the deep neural network is designed as a distributed training mode. A deep neural network to be trained is divided into multiple sub-networks, and a set of training samples is divided into multiple subsets of samples. The deep neural network to be trained is then trained with the multiple subsets of samples based on a distributed cluster architecture and a preset scheduling method, with the multiple sub-networks trained simultaneously so as to accomplish the distributed training of the deep neural network. The use of the distributed cluster architecture and the preset scheduling method may reduce, through data localization, the effect of network delay on the sub-networks under distributed training; adapt the training strategy in real time; and synchronize the sub-networks trained in parallel. As such, the time required for the distributed training of the deep neural network may be reduced and the training efficiency of the deep neural network may be improved.
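The general data-parallel pattern the abstract describes can be sketched in miniature: partition the sample set into per-worker subsets, let each worker update its own copy of the model, then synchronize the parallel results. This is only an illustrative sketch of that generic pattern, not the patented scheduling method; the 1-D linear model and names such as `train_worker` and `distributed_train` are assumptions made for the example.

```python
import random

def train_worker(weight, samples, lr=0.1, epochs=50):
    """Gradient descent on one subset of samples for a 1-D linear model y = w * x.

    Each call plays the role of one worker training its sub-network
    on its local shard of the data (data localization).
    """
    for _ in range(epochs):
        for x, y in samples:
            grad = 2 * (weight * x - y) * x  # d/dw of (w*x - y)^2
            weight -= lr * grad
    return weight

def distributed_train(samples, num_workers=4, rounds=5):
    """Simulated data-parallel training with a periodic synchronization step."""
    # Partition the sample set into per-worker subsets.
    shards = [samples[i::num_workers] for i in range(num_workers)]
    weight = 0.0
    for _ in range(rounds):
        # Each worker trains on its own shard (parallelism simulated serially here).
        local_weights = [train_worker(weight, shard) for shard in shards]
        # Synchronization: average the independently trained parameters.
        weight = sum(local_weights) / len(local_weights)
    return weight

random.seed(0)
data = [(x, 3.0 * x) for x in [random.uniform(-1, 1) for _ in range(40)]]
w = distributed_train(data)
print(round(w, 2))  # converges near the true slope 3.0
```

In a real cluster the averaging step is where network delay bites, which is what the scheduling and data-localization techniques in the patent aim to mitigate.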

