Title: Haijun Shan: Innovator in Language Model Compression
Introduction
Haijun Shan is an inventor based in Hangzhou, China. He has made significant contributions to the field of language models, particularly in model compression and fine-tuning techniques. With four patents to his name, his work supports ongoing advances in artificial intelligence and machine learning.
Latest Patents
One of Haijun Shan's latest patents, "Method and platform for meta-knowledge fine-tuning based on domain-invariant features," discloses a fine-tuning method built on domain-invariant features. The method focuses on learning highly transferable common knowledge across different datasets of similar tasks, with the aim of improving the parameter initialization and generalization ability of universal language models and ultimately yielding a common compression framework for various downstream tasks.
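The core idea of learning features that stay useful across domains can be illustrated with a simple regularized objective: a task loss plus a penalty that pulls the mean feature representations of different domains toward each other. The sketch below is purely illustrative and is not the patented method; the two-domain toy data, the linear-plus-tanh feature extractor, and the `invariance_weight` parameter are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: the same binary task observed in two domains with shifted inputs.
X_a = rng.normal(0.0, 1.0, size=(64, 8))   # domain A inputs
X_b = rng.normal(0.5, 1.0, size=(64, 8))   # domain B inputs (distribution shift)
y_a = (X_a[:, 0] > 0.0).astype(float)
y_b = (X_b[:, 0] > 0.5).astype(float)

W = rng.normal(0, 0.1, size=(8, 4))        # shared feature extractor (assumed)
v = rng.normal(0, 0.1, size=4)             # task head (assumed)

def features(X, W):
    return np.tanh(X @ W)

def forward(X, W, v):
    return 1.0 / (1.0 + np.exp(-features(X, W) @ v))

def loss(W, v, invariance_weight=1.0):
    # Task loss: mean squared error accumulated over both domains.
    task = np.mean((forward(X_a, W, v) - y_a) ** 2) + \
           np.mean((forward(X_b, W, v) - y_b) ** 2)
    # Invariance penalty: align the mean feature vectors of the two domains.
    gap = features(X_a, W).mean(axis=0) - features(X_b, W).mean(axis=0)
    return task + invariance_weight * np.sum(gap ** 2)

def step(W, v, lr=0.05, eps=1e-5):
    # Finite-difference gradient descent on the joint objective (for brevity;
    # a real implementation would use autodiff).
    gW = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp = W.copy(); Wp[i, j] += eps
            gW[i, j] = (loss(Wp, v) - loss(W, v)) / eps
    gv = np.zeros_like(v)
    for k in range(v.size):
        vp = v.copy(); vp[k] += eps
        gv[k] = (loss(W, vp) - loss(W, v)) / eps
    return W - lr * gW, v - lr * gv

before = loss(W, v)
for _ in range(20):
    W, v = step(W, v)
after = loss(W, v)
print(f"joint objective: {before:.4f} -> {after:.4f}")
```

Minimizing the joint objective fits the task while discouraging features that behave differently across domains, which is one simple way to read "domain-invariant features" in the patent's title.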
Another notable patent, "Method for automatically compressing multitask-oriented pre-trained language model and platform thereof," describes a method for automatically compressing multi-task-oriented pre-trained language models. It involves designing a meta-network structure generator and constructing a knowledge distillation coding vector; the structure generator is trained using Bernoulli distribution sampling, allowing it to generate weights for different distillation structures.
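A minimal way to picture the Bernoulli-sampled coding vector is as a random binary mask over teacher layers, which a small generator then maps to per-layer distillation weights. The sketch below is an illustration under stated assumptions, not the patented implementation: the 12-layer teacher depth, the function names `sample_coding_vector` and `structure_generator`, and the softmax weighting are all hypothetical choices for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

NUM_LAYERS = 12  # assumed teacher depth (roughly BERT-base-sized)

def sample_coding_vector(p=0.5, num_layers=NUM_LAYERS):
    """Bernoulli(p) sample: 1 means the layer participates in distillation."""
    return rng.binomial(1, p, size=num_layers)

def structure_generator(code, G):
    """Map a binary coding vector to per-layer distillation weights.

    G is the generator's parameter matrix (here random, untrained); a
    softmax restricted to the kept layers yields weights summing to 1.
    """
    logits = G @ code.astype(float)
    logits = np.where(code == 1, logits, -np.inf)   # mask out pruned layers
    exp = np.exp(logits - logits[code == 1].max())  # stable softmax
    return exp / exp.sum()

G = rng.normal(0, 0.1, size=(NUM_LAYERS, NUM_LAYERS))

code = sample_coding_vector()
while code.sum() == 0:          # resample the rare all-pruned draw
    code = sample_coding_vector()

weights = structure_generator(code, G)
print("kept layers:", np.flatnonzero(code))
print("weights sum:", weights.sum())
```

Each training iteration would draw a fresh coding vector, so the generator learns to produce sensible weights for many different distillation structures rather than one fixed one; the weighted distillation loss would then be a sum of per-layer losses scaled by `weights`.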
Career Highlights
Haijun Shan is currently associated with Zhejiang Lab, where he continues to innovate and develop cutting-edge technologies in the field of artificial intelligence. His work has garnered attention for its practical applications and potential to enhance the efficiency of language models.
Collaborations
His notable collaborators include Hongsheng Wang and Shengjian Hu, who contribute to joint research efforts at Zhejiang Lab.
Conclusion
Haijun Shan's contributions to the field of language model compression and fine-tuning are noteworthy. His innovative patents reflect a deep understanding of artificial intelligence and its applications. As he continues to work at Zhejiang Lab, his future endeavors are anticipated to further advance the capabilities of language models.