The patent badge is an abbreviated version of the USPTO patent document. The patent badge covers the following: patent number, date the patent was issued, date the patent was filed, title of the patent, applicant, inventor, assignee, attorney firm, primary examiner, assistant examiner, CPCs, and abstract. The patent badge also contains a link to the full patent document in Adobe Acrobat (PDF) format.
Patent No.:
Date of Patent: Dec. 26, 2000
Filed: Mar. 06, 1998
Inventors: John A Antoniades, Fulton, MD (US);
Mark M Baumback, University Park, MD (US);
Jeffrey H Bowles, Alexandria, VA (US);
John M Grossman, Falls Church, VA (US);
Daniel G Haas, Lothian, MD (US);
Peter J Palmadesso, Manassas, VA (US)
Assignee: The United States of America as represented by the Secretary of the Navy, Washington, DC (US)
Abstract
The Compression of Hyperdata with ORASIS Multisegment Pattern Sets (CHOMPS) system is a collection of algorithms designed to optimize the efficiency of multispectral data processing systems. The CHOMPS system employs two types of algorithms: Focus searching algorithms and Compression Packaging algorithms. The Focus algorithms reduce the computational burden of the prescreening process by reducing the number of comparisons necessary to determine whether or not data is redundant, selecting for the prescreener comparisons only those exemplars that are likely to result in the exclusion of the incoming sensor data. The Compression Packaging algorithms compress the volume of data necessary to describe what the sensor samples. In the preferred embodiment these algorithms employ the Prescreener, the Demixer Pipeline, and the Adaptive Learning Module Pipeline to construct a compressed data set. The compression is realized by constructing the data set from the exemplars defined in the prescreening operation and expressing those exemplars in wavespace with the necessary scene mapping data, or by further processing the exemplars through the adaptive learning pipeline and expressing them in terms of endmembers, to facilitate efficient storage, download, and later reconstruction of the complete data set with minimal deterioration of signal information.
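The prescreening idea in the abstract can be sketched as follows. This is a minimal illustrative toy, not the patented method: it assumes spectra are NumPy vectors and uses a hypothetical cosine-similarity threshold to decide redundancy, keeping an incoming spectrum as a new exemplar only when no existing exemplar already describes it. The patent's Focus algorithms go further by selecting which exemplars to compare first, so fewer comparisons are needed per sample.

```python
import numpy as np

def prescreen(spectra, threshold=0.995):
    """Toy prescreener sketch (hypothetical parameters).

    An incoming spectrum is considered redundant if its cosine
    similarity to any retained exemplar meets the threshold;
    otherwise it is added to the exemplar set.
    """
    exemplars = []
    for s in spectra:
        v = s / np.linalg.norm(s)  # compare spectral shape, not magnitude
        # Redundant data is excluded; novel data becomes a new exemplar.
        if not any(np.dot(v, e) >= threshold for e in exemplars):
            exemplars.append(v)
    return exemplars
```

In this sketch, two spectra that differ only in overall intensity collapse to a single exemplar, which is the source of the compression: the scene is later described by the exemplar set plus mapping data rather than by every raw sample.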