= TestF3F Photometric Function Sensitivity Test Results =

== Definitions ==
||'''CompareOBJ RMS:'''||The root mean square of the distance from each bigmap pixel/line location to the nearest facet of the truth OBJ.||
||'''RESIDUALS RMS:'''||The root mean square residual error reported by RESIDUALS.||

== Key Findings ==
 * The Lommel-Seeliger Photometric Function subtests (F3F1 and F3F2) performed well, with small differences in measures of accuracy and correlation.
 * The Clark and Takir Photometric Function subtest (F3F3) performed poorly throughout testing, with pervasive degradation of the digital terrain at every processing step, eventually failing to align/correlate images for a subset of landmarks at the 5cm tiling step.
 * CompareOBJ RMS, with and without optimal translation and rotation, is unable to distinguish between subtests that perform well and subtests that perform poorly. Indeed, there is no indication from the CompareOBJ RMS that the poorly performing F3F3 subtest failed to complete the 5cm tiling processing step.
 * There is likewise no indication of relative performance from an inspection of the RESIDUALS RMSs.
 * The normalized cross correlation scores are clearly distinct for the well-performing and poor-performing subtests.
 * A single North-South transit through the center of the evaluation region immediately indicates good and poor performance at all resolutions tiled.

== Results and Discussion ==
Results from testing the three photometric functions split into two groups characterized by differing digital terrain accuracy and model behavior. Subtests F3F1 and F3F2 (the Lommel-Seeliger photometric function without the 2 and with the 2, respectively) performed well, with minor differences in the measurements of accuracy, whereas subtest F3F3 (the Clark and Takir photometric function) performed poorly, with pervasive degradation of the digital terrain at every processing step. A detailed analysis of the behavior of F3F3 is reported here: [[Test F3F3 - Analysis]].

=== CompareOBJ RMS ===
Three CompareOBJ RMS values for the final 5cm resolution 20m x 20m evaluation bigmap are presented for each subtest and each S/C position and camera pointing uncertainty:
 * The largest CompareOBJ RMS (approx. 57cm across subtests) is obtained by running CompareOBJ on the untranslated and unrotated evaluation model.
 * The intermediate CompareOBJ RMS (approx. 15cm across subtests) is obtained by running CompareOBJ with its optimal translation and rotation option.
 * The smallest CompareOBJ RMS (approx. 6cm across subtests F3F1/2) is obtained by manually translating the evaluation model and searching for a local CompareOBJ RMS minimum.

The CompareOBJ optimal translation routine is not optimized for the evaluation model scale (5cm pix/line resolution). Manual translations of the bigmap were therefore conducted in an attempt to find a minimum CompareOBJ RMS, and the manually translated evaluation models gave the smallest CompareOBJ RMSs.

The CompareOBJ RMS without translation or rotation is similar across subtests, showing an inability to distinguish the performance differences apparent from visual inspection of the evaluation maps, the normalized cross correlation scores, and the failure of the F3F3 subtest at the 5cm tiling step. The CompareOBJ RMS with optimal translation and rotation is little better at distinguishing performance, with some decrease in the RMS of the poor-performing F3F3 subtest when compared with the well-performing F3F1 and F3F2 subtests.
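The manual translation search above was carried out by hand with CompareOBJ; purely as an illustration, the sketch below automates the same idea as a coarse grid search over body-fixed offsets, substituting a nearest-vertex distance (via a KD-tree) for CompareOBJ's nearest-facet distance. The function names, array inputs, and search parameters are assumptions, not the actual procedure.

{{{#!python
# Illustrative sketch only: an automated stand-in for the manual translation
# search, shifting the evaluation model by small body-fixed offsets and
# keeping the offset with the lowest RMS distance to the truth shape.
# Nearest-vertex distance (via a KD-tree) approximates CompareOBJ's
# nearest-facet distance, so the values are not true CompareOBJ RMSs.
import itertools
import numpy as np
from scipy.spatial import cKDTree

def rms_to_truth(points_cm, truth_tree):
    """RMS of nearest-vertex distances (cm) from evaluation points to the truth."""
    dist, _ = truth_tree.query(points_cm)
    return float(np.sqrt(np.mean(dist ** 2)))

def translation_search(eval_points_cm, truth_points_cm, step_cm=5.0, span_cm=200.0):
    """Coarse grid search over x/y/z offsets (cm) for a local RMS minimum."""
    truth_tree = cKDTree(truth_points_cm)
    offsets = np.arange(-span_cm, span_cm + step_cm, step_cm)
    best_rms, best_offset = np.inf, (0.0, 0.0, 0.0)
    for dx, dy, dz in itertools.product(offsets, repeat=3):
        rms = rms_to_truth(eval_points_cm + np.array([dx, dy, dz]), truth_tree)
        if rms < best_rms:
            best_rms, best_offset = rms, (dx, dy, dz)
    return best_rms, best_offset  # best offset in cm
}}}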
CompareOBJ with manual translation shows the most ability to distinguish between good- and poor-performing subtests, but the RMS of the poorly performing F3F3 subtest is still unexpectedly low. CompareOBJ RMSs change only slightly with iteration.

{{attachment:CompareOBJ_resized60pct.png}}

'''CompareOBJ with Manual Translation - RMS:'''
|| ||||||'''CompareOBJ RMS (cm)'''||
||'''Processing Step'''||'''F3F1 (Lommel-Seeliger without the 2)'''||'''F3F2 (Lommel-Seeliger with the 2)'''||'''F3F3 (Clark and Takir)'''||
||20cm Iteration 00||9.0284||8.6072||13.7263||
||10cm Iteration 00||7.2155||6.6544||13.0804||
||5cm Tiling (incomplete)|| || ||10.7716||
||5cm Iteration 00||6.1890||5.8275|| ||
||5cm Iteration 20||5.4468||5.7187|| ||

'''CompareOBJ with Manual Translation - Translation:'''
|| || || ||||||'''Translation'''|| ||
||'''Subtest'''||'''Photometric Function'''||'''Processing Step'''||'''x (cm)'''||'''y (cm)'''||'''z (cm)'''||'''Distance (cm)'''||
||F3F1||Lommel-Seeliger without the 2||5cm Iteration 20||175.7||41||-40||184.80||
||F3F2||Lommel-Seeliger with the 2||5cm Iteration 20||175.7||41||-40||184.80||
||F3F3||Clark and Takir||5cm Tiling (incomplete)||180.9||41.2||-40||189.80||

=== RESIDUALS RMS ===
Again, there is very little difference in RESIDUALS RMS across the subtests. At the 10cm iteration steps the RESIDUALS RMS decreases once GEOMETRY is performed; conversely, at the 5cm iteration steps the RESIDUALS RMS increases once GEOMETRY is performed. RESIDUALS RMSs do not change appreciably with iteration.

{{attachment:residualRMS_resized60pct.png}}

'''RESIDUALS RMSs:'''
|| ||||||'''RESIDUALS RMS (cm)'''||
||'''Processing Step'''||'''F3F1 (Lommel-Seeliger without the 2)'''||'''F3F2 (Lommel-Seeliger with the 2)'''||'''F3F3 (Clark and Takir)'''||
||20cm Iteration 00||42.5852||42.6027||42.6358||
||10cm Iteration 00 (pre Geometry)||42.3146||42.3362||42.4303||
||10cm Iteration 00 (post Geometry)||41.3606||41.3900||41.4881||
||5cm Tiling (incomplete)|| || ||41.0550||
||5cm Iteration 00 (pre Geometry)||40.8840||40.8434|| ||
||5cm Iteration 00 (post Geometry)||41.6120||41.4276|| ||
||5cm Iteration 20||41.6355||41.4529|| ||

=== Normalized Cross Correlation Scores ===
The evaluation maps were compared with a truth map via a cross-correlation routine which derives a correlation score. As a guide, the following scores illustrate perfect and excellent correlations:
 * A map cross-correlated with itself will give a correlation score of approx. 1.0;
 * Different sized maps sampled from the same truth (for example a 1,100 x 1,100 5cm sample map and a 1,000 x 1,000 5cm sample map) give a correlation score of approx. 0.8.

There is very little difference between the normalized cross correlation scores for the Lommel-Seeliger Photometric Function subtests (F3F1 and F3F2), both exhibiting very good correlation between the evaluation map and the truth map. The data, however, show a poor correlation between the evaluation map and the truth map for the Clark and Takir Photometric Function subtest (F3F3).
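The scoring routine itself is not reproduced here; the sketch below shows one common form of a normalized (zero-mean) cross-correlation score for two co-registered, equally sized gridded maps, which behaves as described in the guide above. The function name and array inputs are illustrative assumptions, not the test's actual implementation.

{{{#!python
# Minimal sketch (not the test's actual routine) of a normalized
# cross-correlation score between two gridded height maps of the same
# shape, e.g. a 5cm evaluation bigmap and the corresponding truth map.
import numpy as np

def normalized_cross_correlation(eval_map: np.ndarray, truth_map: np.ndarray) -> float:
    """Zero-mean, unit-norm correlation; ~1.0 indicates a perfect match."""
    a = eval_map - eval_map.mean()
    b = truth_map - truth_map.mean()
    denom = np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))
    return float(np.sum(a * b) / denom)

# A map scored against itself returns ~1.0, consistent with the guide above.
}}}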
{{attachment:normCrossCor_resized60pct.png}}

'''Correlation Scores:'''
|| ||||||'''Correlation Score'''||
||'''Processing Step'''||'''F3F1 (Lommel-Seeliger without the 2)'''||'''F3F2 (Lommel-Seeliger with the 2)'''||'''F3F3 (Clark and Takir)'''||
||20cm Iteration 00||0.6141||0.6133||0.4572||
||10cm Iteration 00 (post Geometry)||0.7143||0.7168||0.4506||
||5cm Iteration 00 (post Geometry)||0.7679||0.7756|| ||
||5cm Iteration 10||0.7839||0.7564|| ||
||5cm Iteration 20||0.7872||0.7884|| ||

=== Transits ===
The following charts show North-South transits through the center of the evaluation region. The entire set of tests shows a displacement from the truth. It is clear from inspection of the transits that subtest F3F3 is performing poorly, failing to represent features smaller than 3m.

{{attachment:transit_20cmStepA.png}}

{{attachment:transit_10cmStepA.png}}

{{attachment:transit_05cmStepA.png}}
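For reference, a North-South transit like those charted above can be extracted as the height profile down the central column of a gridded map. The sketch below assumes the maps are 2-D NumPy height arrays with rows running North to South; the function and variable names are illustrative.

{{{#!python
# Minimal sketch of extracting a North-South transit through the center of a
# gridded height map for comparison against the truth transit.  Assumes rows
# run North to South and the ground sample distance (gsd_cm) is known.
import numpy as np

def north_south_transit(height_map: np.ndarray, gsd_cm: float):
    """Return (along-track distance in cm, heights) down the central column."""
    center_col = height_map.shape[1] // 2
    heights = height_map[:, center_col]
    distance_cm = np.arange(heights.size) * gsd_cm
    return distance_cm, heights

# Example (illustrative array names): overlay the 5cm evaluation and truth transits.
# dist, h_eval = north_south_transit(eval_map_5cm, gsd_cm=5.0)
# _, h_truth = north_south_transit(truth_map_5cm, gsd_cm=5.0)
}}}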