
The patent badge is an abbreviated version of the USPTO patent document. It covers the patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventors, assignee, attorney firm, primary examiner, assistant examiner, CPC classifications, and abstract, and it contains a link to the full patent document in PDF (Adobe Acrobat) format.

Date of Patent: Sep. 16, 2025
Filed: Mar. 13, 2023
Applicant: Salesforce, Inc., San Francisco, CA (US)

Inventors:
Le Xue, Mountain View, CA (US);
Chen Xing, Palo Alto, CA (US);
Juan Carlos Niebles Duque, Mountain View, CA (US);
Caiming Xiong, Menlo Park, CA (US);
Ran Xu, Mountain View, CA (US);
Silvio Savarese, Palo Alto, CA (US)

Assignee: Salesforce, Inc., San Francisco, CA (US)

Attorney:
Primary Examiner:
Int. Cl.:
G06N 3/08 (2023.01); G06F 40/126 (2020.01); G06F 40/40 (2020.01); G06T 19/20 (2011.01); G06V 10/764 (2022.01); G06V 10/774 (2022.01); G06V 10/776 (2022.01); G06V 10/82 (2022.01)
U.S. Cl.:
CPC: G06N 3/08 (2013.01); G06F 40/126 (2020.01); G06F 40/40 (2020.01); G06T 19/20 (2013.01); G06V 10/764 (2022.01); G06V 10/774 (2022.01); G06V 10/776 (2022.01); G06V 10/82 (2022.01); G06T 2210/56 (2013.01); G06T 2219/2004 (2013.01)
Abstract

A method of training a neural network based three-dimensional (3D) encoder is provided. A training dataset is generated using a plurality of 3D models of a 3D model dataset. To generate a first sample of the training dataset, an image generator with multi-view rendering is used to generate a plurality of image candidates of a first 3D model. A word is chosen from metadata associated with the first 3D model. A language model is used to generate one or more text descriptions using the chosen word and a plurality of prompts. A point cloud is generated by randomly sampling points in the 3D model. The first sample is generated to include a first image randomly selected from the plurality of image candidates, the one or more text descriptions, and the point cloud. The 3D encoder is trained using the training dataset including the first sample.
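The per-sample steps in the abstract can be sketched as a small pipeline. This is only an illustrative Python sketch, not the patent's implementation: the renderer, language model, and point sampler below are stand-in stubs, and every function and parameter name is an assumption introduced for the example.

```python
import random

# Stand-in stubs: the patent does not specify these components.
def render_view(model, view_index):
    # A real multi-view image generator would rasterize the 3D model
    # from this camera viewpoint; here we return a placeholder label.
    return f"image_of_{model['name']}_view_{view_index}"

def language_model(prompt):
    # A real language model would expand the prompt into a caption.
    return f"caption: {prompt}"

def sample_point(model, rng):
    # Real sampling would draw random points from the model's geometry.
    return (rng.random(), rng.random(), rng.random())

def generate_sample(model, prompts, num_views=12, num_points=1024, seed=0):
    rng = random.Random(seed)
    # Multi-view rendering yields a pool of image candidates.
    image_candidates = [render_view(model, v) for v in range(num_views)]
    # A word is chosen from the 3D model's metadata.
    word = rng.choice(model["metadata_words"])
    # The language model combines the chosen word with each prompt.
    texts = [language_model(p.format(word=word)) for p in prompts]
    # A point cloud is generated by randomly sampling points.
    point_cloud = [sample_point(model, rng) for _ in range(num_points)]
    # The sample pairs one randomly selected image with the text
    # descriptions and the point cloud.
    return {"image": rng.choice(image_candidates),
            "texts": texts,
            "point_cloud": point_cloud}

sample = generate_sample(
    {"name": "chair", "metadata_words": ["chair", "seat"]},
    ["a photo of a {word}", "a 3D render of a {word}"],
)
```

Repeating this over every 3D model in the dataset would produce the (image, text, point cloud) training set that the abstract describes for training the 3D encoder.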

