Daejeon, South Korea

Kyu-Chang Kang

USPTO Granted Patents = 5 

Average Co-Inventor Count = 5.8

ph-index = 1

Forward Citations = 2 (Granted Patents)


Company Filing History:


Years Active: 2016-2019


Kyu-Chang Kang: Innovator in Video Interpretation Technology

Introduction

Kyu-Chang Kang is an inventor based in Daejeon, South Korea. He has contributed to the field of video interpretation technology, holding five granted USPTO patents. His work focuses on systems that improve how video data is processed and understood.

Latest Patents

Kyu-Chang Kang's latest patents cover a video interpretation apparatus and method. The apparatus generates object information from input video, derives dynamic spatial relations between the detected objects, and produces general event information from those relations. From the object and event data, it then generates video information that includes descriptive sentences and event descriptions. Another notable patent covers a query input apparatus and method, which provides a graphical user interface (GUI) through which users can input schematized composite activities. The system generates queries from these user requests, enabling efficient activity searches over video.
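The pipeline described above (objects → spatial relations → descriptive sentences) can be illustrated with a minimal sketch. All class and function names below are hypothetical, chosen for illustration; they are not taken from the patent text, and the left-of/right-of rule stands in for whatever relation model the actual apparatus uses.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DetectedObject:
    """An object detected in a video frame (illustrative)."""
    label: str
    x: float  # horizontal center of the object in the frame, 0.0-1.0
    y: float  # vertical center of the object in the frame, 0.0-1.0

@dataclass
class SpatialRelation:
    """A relation between two detected objects, e.g. 'left-of'."""
    subject: str
    relation: str
    target: str

def derive_relations(objects: List[DetectedObject]) -> List[SpatialRelation]:
    """Derive a simple left-of/right-of relation for each ordered object pair."""
    relations = []
    for a in objects:
        for b in objects:
            if a is b:
                continue
            rel = "left-of" if a.x < b.x else "right-of"
            relations.append(SpatialRelation(a.label, rel, b.label))
    return relations

def describe(relations: List[SpatialRelation]) -> List[str]:
    """Turn spatial relations into plain-language sentences."""
    return [
        f"The {r.subject} is {r.relation.replace('-', ' ')} the {r.target}."
        for r in relations
    ]

objects = [DetectedObject("car", 0.2, 0.5), DetectedObject("person", 0.7, 0.5)]
sentences = describe(derive_relations(objects))
# → ["The car is left of the person.", "The person is right of the car."]
```

A real system would replace the toy relation rule with learned detectors and temporal event inference, but the data flow mirrors the three stages the patent abstract names.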

Career Highlights

Kyu-Chang Kang is affiliated with the Electronics and Telecommunications Research Institute (ETRI) in Daejeon. His work there has advanced video interpretation methods with applications across video search and analysis.

Collaborations

Kyu-Chang Kang has collaborated with colleagues including Jin-Young Moon and Chang-Seok Bae, partnerships reflected in his co-invented patents.

Conclusion

Kyu-Chang Kang is a key figure in the realm of video interpretation technology, with a strong portfolio of patents that reflect his innovative spirit. His contributions are shaping the future of how we interact with video data.

This text is generated by artificial intelligence and may not be accurate.
Please report any incorrect information to support@idiyas.com