Redmond, WA, United States of America

Trisha Lian

USPTO Granted Patents = 1 

Average Co-Inventor Count = 5.0

ph-index = 1

Forward Citations = 3 (Granted Patents)


Company Filing History:


Years Active: 2022


Trisha Lian: Innovator in Sensory Modeling

Introduction

Trisha Lian is an inventor based in Redmond, WA (US). She has contributed to the field of sensory modeling, particularly to methods for understanding and approximating how the human eye perceives images.

Latest Patents

Trisha holds one granted USPTO patent, titled "Calibrated sensitivity model approximating the eye." The method projects a source image onto a surface using a lens approximation component. The surface is associated with sampling points that approximate the eye's photoreceptors; each sampling point corresponds to a specific photoreceptor type and samples color information from the projected source image. The method then accesses pooling units that approximate the eye's retinal ganglion cells (RGCs), calculates weighted aggregations of the sampled color information, and computes a perception profile for the source image from these aggregations.
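The pipeline described above can be sketched in code. This is a minimal, hypothetical illustration, not the patented implementation: the function names, the identity lens projection, and the random weights are all illustrative assumptions; the patent defines its own lens model, sampling geometry, and pooling calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_through_lens(image):
    """Stand-in for the lens approximation component (identity projection here)."""
    return image

def sample_photoreceptors(surface, points, receptor_types):
    """Sample one color channel per sampling point, chosen by its receptor type.

    receptor_types maps each sampling point to a channel index
    (e.g. 0/1/2 for three cone-like photoreceptor types)."""
    ys, xs = points[:, 0], points[:, 1]
    return surface[ys, xs, receptor_types]

def pool_rgc(samples, pooling_weights):
    """Each pooling unit (one row of weights) approximates an RGC by
    computing a weighted aggregation of the photoreceptor samples."""
    return pooling_weights @ samples

def perception_profile(image, points, receptor_types, pooling_weights):
    """Project, sample, and pool to produce a perception profile."""
    surface = project_through_lens(image)
    samples = sample_photoreceptors(surface, points, receptor_types)
    return pool_rgc(samples, pooling_weights)

# Tiny example: a 4x4 RGB image, 6 sampling points, 2 pooling units.
image = rng.random((4, 4, 3))
points = rng.integers(0, 4, size=(6, 2))        # (row, col) of each sampling point
receptor_types = rng.integers(0, 3, size=6)     # photoreceptor type per point
pooling_weights = rng.random((2, 6))            # RGC-like weighted aggregations

profile = perception_profile(image, points, receptor_types, pooling_weights)
print(profile.shape)  # one aggregated value per pooling unit
```

The sketch keeps the structure of the claim (projection, typed sampling points, RGC-like pooling) while leaving every calibrated detail abstract.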

Career Highlights

Trisha is employed at Facebook Technologies, LLC, where she continues to work on sensory technology. Her work has potential applications in fields such as virtual reality and augmented reality.

Collaborations

Throughout her career, Trisha has collaborated with notable colleagues such as Todd Goodall and Anjul Patney. These partnerships have contributed to her success and the advancement of her projects.

Conclusion

Trisha Lian's work in sensory modeling, reflected in her granted patent, demonstrates her expertise in approximating human visual perception. Her contributions support future developments in how we model and interact with visual information.
