A Machine Learning Framework for Building Passive Surveillance Photogrammetry Models
Contributing USMA Research Unit(s)
Robotics Research Center, Electrical Engineering and Computer Science
Determining the geographic location of an object using two-dimensional (2D) images recorded at high-oblique angles is a nontrivial problem. Existing methods to solve this problem rely on parameters that are either difficult to measure or based on assumptions. This paper investigates the accuracy of building photogrammetric models using machine learning. Our novel approach involves the collection of training examples before using supervised learning to build a nonlinear, multitarget prediction model. We collected training examples using an unmanned ground vehicle (UGV) that moved throughout the fields of view of multiple cameras. The UGV was tracked and bounded using existing computer vision techniques. With each image frame, the center pixel position (x, y image coordinates) of the vehicle and its bounding box area (in pixels) were mapped to its current GPS coordinates. Multiple machine learning models were created using various combinations of cameras to determine the key features for building accurate photogrammetric models. Data were collected under realistic conditions for ground-based surveillance systems, which may require cameras to be placed at low elevations and high-oblique angles. We found the prediction accuracy of our models to be between 0.58 and 3.54 meters, depending upon a number of factors, including the locations, heights, and orientations of the cameras used.
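The core idea described above, learning a mapping from image-space features (center pixel position and bounding-box area) to GPS coordinates, can be sketched in a few lines. The abstract does not specify which learning algorithm the authors used, so the following uses k-nearest-neighbors averaging purely as a stand-in nonlinear, multitarget predictor; all feature and GPS values are hypothetical, not taken from the paper's dataset.

```python
import math

def normalize(rows):
    """Scale each feature column to [0, 1] so pixel coordinates and
    bounding-box area (which have very different ranges) contribute
    comparably to the distance metric."""
    cols = list(zip(*rows))
    mins = [min(c) for c in cols]
    spans = [(max(c) - mn) or 1.0 for c, mn in zip(cols, mins)]
    scaled = [[(v - mn) / sp for v, mn, sp in zip(r, mins, spans)] for r in rows]
    return scaled, mins, spans

def knn_predict(query, X, Y, k=3):
    """Predict (lat, lon) as the mean of the k nearest training targets --
    an illustrative nonlinear, multitarget regressor."""
    nearest = sorted(range(len(X)), key=lambda i: math.dist(query, X[i]))[:k]
    return tuple(sum(Y[i][j] for i in nearest) / k for j in range(2))

# Hypothetical training examples: (x_px, y_px, bbox_area_px) -> (lat, lon),
# as would be logged while the UGV drives through a camera's field of view.
train_feats = [[120, 340, 900], [400, 310, 1400], [640, 300, 2100], [900, 280, 3300]]
train_gps = [(34.7300, -86.5860), (34.7302, -86.5855),
             (34.7305, -86.5850), (34.7309, -86.5844)]

# Normalize training features, then apply the same scaling to a new detection.
X, mins, spans = normalize(train_feats)
query = [(v - mn) / sp for v, mn, sp in zip([500, 305, 1700], mins, spans)]
lat, lon = knn_predict(query, X, train_gps, k=2)
```

Prediction error in a setup like this would be evaluated as the geodesic distance between predicted and true GPS coordinates, which is how per-camera accuracies such as the 0.58-3.54 m range reported above could be compared.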
Sturzinger, E.M., Whitehall, B., Tyler, J., Lowrance, C.J., “A Machine Learning Framework for Building Passive Surveillance Photogrammetry Models”, IEEE SoutheastCon, Apr. 11-14, Huntsville, AL, 2019.