
Redmond, WA, United States of America

George Petre

Average Co-Inventor Count = 6.17

ph-index = 3

The patent h-index (ph-index) is the largest number h such that the inventor has h patents that have each been cited by other inventors' patents at least h times.
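For illustration, a minimal sketch of how such an h-style index can be computed from per-patent forward-citation counts; the citation figures in the example are hypothetical, not this inventor's actual data:

```python
def h_index(citation_counts):
    """Return the largest h such that at least h items
    have h or more citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank   # at least `rank` patents have >= `rank` citations
        else:
            break
    return h

# Hypothetical forward-citation counts for five patents:
# three of them have 3+ citations, so the ph-index is 3.
print(h_index([9, 5, 3, 1, 0]))  # -> 3
```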

Forward Citations = 23

Co-Inventor Network (edge weight: patents shared with George Petre; node size: co-inventor's total patents):

1. Chad Balling McBride: 25 shared patents (51 total)

2. Amol Ashok Ambardekar: 25 shared patents (31 total)

3. Kent D Cedola: 25 shared patents (31 total)

4. Larry Marvin Wall: 25 shared patents (29 total)

5. Boris Bobrov: 23 shared patents (26 total)

6. Benjamin Eliot Lundell: 2 shared patents (9 total)

7. Timothy Hume Heil: 2 shared patents (5 total)

8. Aleksandar Tomic: 2 shared patents (5 total)

9. Joseph Leon Corkery: 2 shared patents (2 total)

George Petre: 25 patents total.

Company Filing History:

1. Microsoft Technology Licensing, LLC (25 of its 54,719 patents)


25 patents (first 15 listed below):

1. 12154027 - Increased precision neural processing element

2. 11909422 - Neural network processor using compression and decompression of activation data to reduce memory bandwidth utilization

3. 11750212 - Flexible hardware for high throughput vector dequantization with dynamic vector length and codebook size

4. 11722147 - Dynamic sequencing of data partitions for optimizing memory utilization and performance of neural networks

5. 11604972 - Increased precision neural processing element

6. 11528033 - Neural network processor using compression and decompression of activation data to reduce memory bandwidth utilization

7. 11507349 - Neural processing element with single instruction multiple data (SIMD) compute lanes

8. 11494237 - Managing workloads of a deep neural network processor

9. 11487342 - Reducing power consumption in a neural network environment using data management

10. 11476869 - Dynamically partitioning workload in a deep neural network module to reduce power consumption

11. 11405051 - Enhancing processing performance of artificial intelligence/machine hardware by data sharing and distribution as well as reuse of data in neuron buffer/line buffer

12. 11341399 - Reducing power consumption in a neural network processor by skipping processing operations

13. 11256976 - Dynamic sequencing of data partitions for optimizing memory utilization and performance of neural networks

14. 11205118 - Power-efficient deep neural network module configured for parallel kernel and parallel input processing

15. 11182667 - Minimizing memory reads and increasing performance by leveraging aligned blob data in a processing unit of a neural network environment

Source: idiyas.com, as of 12/28/2025. Please report any incorrect information to support@idiyas.com