London, United Kingdom

Michael David Firman

USPTO Granted Patents = 10 

Average Co-Inventor Count = 4.4

ph-index = 2

Forward Citations = 11 (Granted Patents)


Company Filing History: Years Active 2021-2025 (10 patents, USPTO)

Michael David Firman: Innovator in Scene Reconstruction and Depth Estimation

Introduction

Michael David Firman is a prominent inventor based in London, United Kingdom. He has made significant contributions to the fields of scene reconstruction and depth estimation, holding ten granted USPTO patents. His work has the potential to enhance virtual content generation and improve image processing techniques.

Latest Patents

Among his latest patents, Firman has developed a high-speed, real-time scene reconstruction model. Given a series of input images, the model predicts depth maps and extracts feature maps, then uses the predicted depth maps and camera poses to build a 3D model output as a heightfield, allowing virtual content to be augmented onto real-world images. Another notable patent covers a self-supervised multi-frame monocular depth estimation model, which outputs a depth map for an input image by drawing on both that image and an additional frame, improving the accuracy of depth estimation. Illustrative sketches of both underlying ideas follow below.
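The general back-projection step behind such a reconstruction pipeline can be illustrated with a short, self-contained sketch. This is not the patented method: the function name, grid size, and intrinsics handling below are assumptions, and the code simply shows how a predicted depth map and a camera pose can be lifted into world coordinates and collapsed into a heightfield.

```python
# Minimal sketch (not Firman's patented method): back-project a predicted depth
# map into world space using the camera pose, then collapse the points into a
# 2D heightfield grid. Array shapes, grid extent, and intrinsics are assumptions.
import numpy as np

def depth_to_heightfield(depth, K, cam_to_world, grid_size=128, extent=10.0):
    """depth: (H, W) predicted depth map; K: (3, 3) camera intrinsics;
    cam_to_world: (4, 4) camera pose. Returns a (grid_size, grid_size) heightfield."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Back-project pixels to camera-space rays and scale by predicted depth.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)
    rays = pix @ np.linalg.inv(K).T                       # (H*W, 3) camera-space directions
    pts_cam = rays * depth.reshape(-1, 1)                 # (H*W, 3) camera-space points
    # Transform to world coordinates using the camera pose.
    pts_h = np.concatenate([pts_cam, np.ones((pts_cam.shape[0], 1))], axis=1)
    pts_world = (pts_h @ cam_to_world.T)[:, :3]
    # Rasterise into a grid over x-y, keeping the maximum z per cell as the height.
    heightfield = np.full((grid_size, grid_size), -np.inf)
    ij = ((pts_world[:, :2] + extent) / (2 * extent) * grid_size).astype(int)
    valid = np.all((ij >= 0) & (ij < grid_size), axis=1)
    for (i, j), z in zip(ij[valid], pts_world[valid, 2]):
        heightfield[i, j] = max(heightfield[i, j], z)
    return heightfield
```

Similarly, self-supervised monocular depth training is commonly driven by a photometric reprojection loss, in which the target frame is compared against a source frame warped through the predicted depth and relative pose. The sketch below shows that generic recipe only; it is not the specific multi-frame model claimed in the patent, and all names and shapes are illustrative.

```python
# Generic photometric reprojection loss sketch for self-supervised depth
# training (illustrative only; not the patented multi-frame model).
import numpy as np

def photometric_reprojection_loss(target, source, depth, K, T_target_to_source):
    """target, source: (H, W, 3) images; depth: (H, W); K: (3, 3) intrinsics;
    T_target_to_source: (4, 4) relative pose. Returns mean L1 photometric error."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3).astype(np.float64)
    # Lift target pixels to 3D using the predicted depth, move them into the
    # source camera's frame, then project with the intrinsics.
    pts = (pix @ np.linalg.inv(K).T) * depth.reshape(-1, 1)
    pts_h = np.concatenate([pts, np.ones((pts.shape[0], 1))], axis=1)
    pts_src = (pts_h @ T_target_to_source.T)[:, :3] @ K.T
    uv_src = pts_src[:, :2] / np.clip(pts_src[:, 2:3], 1e-6, None)
    # Nearest-neighbour sampling of the source image at the reprojected pixels.
    us = np.clip(np.round(uv_src[:, 0]).astype(int), 0, W - 1)
    vs = np.clip(np.round(uv_src[:, 1]).astype(int), 0, H - 1)
    warped = source[vs, us].reshape(H, W, 3)
    return np.abs(warped - target).mean()
```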

Career Highlights

Michael David Firman is currently employed at Niantic, Inc., where he continues to develop new techniques in scene reconstruction and depth estimation. His work has drawn attention for its approach to complex problems in image processing and augmented reality.

Collaborations

Firman has collaborated with notable co-inventors, including Gabriel J Brostow and James Watson.

Conclusion

Michael David Firman's patents in scene reconstruction and depth estimation reflect a sustained commitment to advancing virtual content generation and image processing.
