Document Type

Report

Publisher

Space Dynamics Laboratory

Publication Date

2024

First Page

1

Last Page

6

Creative Commons License

Creative Commons Attribution-Noncommercial 4.0 License

Abstract

Target size is an important quantity in both classification and multi-sensor target tracking applications, but it is often unknown. In some imaging applications, a target size is assumed based on classification results. This assumed size is sometimes used to synthesize range measurements from monocular imagery, based on the observed apparent size of the target. Such range-from-apparent-size measurements may be the only range measurements available when sensors that produce range information (such as radar) are unavailable. However, this approach is typically plagued by range bias in the resulting measurements due to several factors. In this work, we present a complementary approach in which we estimate the size of the target using sensor fusion techniques. The resulting size estimate is pose-dependent, but by observing the target in multiple poses over time, it is possible to estimate its largest and smallest dimensions. The resulting target size estimate and its uncertainty can inform data association and target classification. Since the target size is assumed to be invariant, the size estimate can subsequently be leveraged to synthesize more accurate range-from-apparent-size measurements from imagery when ranging-sensor data (such as radar) are not available.
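The report's implementation details are not given in this abstract, but the two relationships it describes can be illustrated with a minimal sketch. Assuming a pinhole camera model, and with all names (range_from_apparent_size, SizeEstimate, and the parameter names) hypothetical rather than taken from the report, the sketch shows (1) how an assumed physical size converts an apparent image size into a synthesized range, and (2) how pose-dependent size observations, accumulated while a ranging sensor such as radar is available, can bracket the target's smallest and largest dimensions and carry a running uncertainty.

import math

def range_from_apparent_size(focal_length_px: float,
                             assumed_size_m: float,
                             apparent_size_px: float) -> float:
    """Pinhole-camera range synthesis: the apparent size x of an object of
    physical size X at range Z satisfies x = f * X / Z, so Z = f * X / x."""
    return focal_length_px * assumed_size_m / apparent_size_px

class SizeEstimate:
    """Running, pose-dependent estimate of target extent.

    Each observation supplies a projected physical dimension (in meters,
    recovered from imagery together with an external range source such as
    radar). Over many poses, the running min and max bracket the smallest
    and largest physical dimensions; Welford's algorithm tracks a simple
    uncertainty for the observed extent.
    """
    def __init__(self) -> None:
        self.min_dim_m = math.inf
        self.max_dim_m = 0.0
        self.n = 0
        self._mean = 0.0
        self._m2 = 0.0  # running sum of squared deviations (Welford)

    def update(self, observed_dim_m: float) -> None:
        self.min_dim_m = min(self.min_dim_m, observed_dim_m)
        self.max_dim_m = max(self.max_dim_m, observed_dim_m)
        self.n += 1
        delta = observed_dim_m - self._mean
        self._mean += delta / self.n
        self._m2 += delta * (observed_dim_m - self._mean)

    @property
    def variance(self) -> float:
        return self._m2 / (self.n - 1) if self.n > 1 else math.inf

In this sketch, SizeEstimate.update would be called whenever radar (or another ranging sensor) allows the projected dimension to be recovered; once radar drops out, the fused extent (for example, max_dim_m) would replace the classification-assumed size in range_from_apparent_size, which is the substitution the abstract suggests can reduce range bias.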
