How SPC Works
SPC is a suite of software routines that uses both stereo imaging and photoclinometry to derive topography. The routines can generate topography with an accuracy near the resolution of the source images. If the images have significant overlap, SPC can produce a stereo-derived topographic model. If the images have significantly different illumination conditions, it can produce a photoclinometry-based model. However, as is the case for most planetary missions, many more images do not qualify as stereo pairs, so a combination of the two methods provides topographic solutions based on a much wider set of images.
At its root, SPC is geometric stereo, meaning that it starts with geometry to calculate the height of individual control points on the surface using two or more images taken from different directions. The advantage of SPC is that it integrates both the albedo and shape of the surface in an iterative process to increase precision and resolution. If there is no change in illumination conditions, SPC reduces to multi-image photogrammetry. If the stereo angle is insignificant, SPC works as 2D photoclinometry with multiple images.
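To illustrate the geometric-stereo component, the minimal sketch below intersects two image rays in a least-squares sense to locate a control point in 3D. The camera positions and pointing directions are hypothetical illustration values, not SPC data structures.

```python
# Least-squares intersection of camera rays to estimate a control point's 3D
# position (the core of the geometric-stereo step). Inputs are hypothetical.
import numpy as np

def triangulate(origins, directions):
    """Least-squares intersection of rays (origin + unit direction) in 3D."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projects onto the plane normal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two hypothetical spacecraft positions viewing the same surface feature
origins = [np.array([0.0, 0.0, 10.0]), np.array([8.0, 0.0, 10.0])]
directions = [np.array([0.1, 0.0, -1.0]), np.array([-0.6, 0.0, -1.0])]
print(triangulate(origins, directions))   # approximate 3D position of the control point
```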
Requirements for SPC
Traditional stereo requires:
- Images taken with similar lighting conditions
- A stereo angle between 10 and 40°
- Images with nearly identical resolution
Stereophotoclinometry works best with the following (see the geometry-screening sketch after this list):
- A minimum of three images (typically >30, this study 4)
- Two stereo images
  - Emission angles of 45° (acceptable 35-48°, limit 5-60°)
  - Stereo angle of 90° (acceptable 70-110°, limit 10-120°)
  - Incidence angle of 0° (acceptable 0-20°, limit 0-60°)
- Three photoclinometry images
  - Emission angle of 0° (acceptable 0-20°, limit 0-60°)
  - Incidence angles of 45° (acceptable 30-50°, limit 0-60°)
  - Variation in illumination geometry of 90° (acceptable 40-90°, limit 10-120°). This refers to having different positions of the Sun with respect to the observed target.
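The ranges above lend themselves to a simple screening step. The following minimal sketch, assuming the per-image emission, incidence, and pairwise stereo/illumination angles (in degrees) are already computed, classifies each angle against the acceptable and limit ranges listed above; the function and dictionary names are hypothetical, not SPC tools.

```python
# Screen candidate images against the acceptable/limit angle ranges listed above.
STEREO_RANGES = {
    "emission":  {"acceptable": (35, 48),  "limit": (5, 60)},
    "stereo":    {"acceptable": (70, 110), "limit": (10, 120)},
    "incidence": {"acceptable": (0, 20),   "limit": (0, 60)},
}

PHOTOCLINOMETRY_RANGES = {
    "emission":     {"acceptable": (0, 20),  "limit": (0, 60)},
    "incidence":    {"acceptable": (30, 50), "limit": (0, 60)},
    "illumination": {"acceptable": (40, 90), "limit": (10, 120)},
}

def classify(value_deg, ranges):
    """Return 'acceptable', 'usable' (within limit), or 'reject' for one angle."""
    lo, hi = ranges["acceptable"]
    if lo <= value_deg <= hi:
        return "acceptable"
    lo, hi = ranges["limit"]
    if lo <= value_deg <= hi:
        return "usable"
    return "reject"

# Example: screen a hypothetical stereo pair
pair = {"emission": 42.0, "stereo": 95.0, "incidence": 15.0}
print({k: classify(v, STEREO_RANGES[k]) for k, v in pair.items()})
```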
In general, geometric stereo is more accurate for long-scale topography, while photoclinometry is better for short-scale topography (Kirk 2013). SPC uses both components to mitigate the errors inherent to each. Stereo creates a position in 3D space using 3-5 pixels from an image. SPC typically sets control points (landmarks) with a spacing of 30 pixels. Thus, the landmarks (or point clouds) have a spacing roughly a factor of 10 coarser than traditional geometric stereo results.
SPC then uses photoclinometry to fill in the heights of the points between the landmarks. The horizontal ground sample distance (resolution) is typically on the order of the resolution of the images, and we have successfully generated topography at double the image resolution. The heights of the points between landmarks are solved in 99x99 grids (or maplets), so most heights are solved by multiple maplets. The iterative solution continues until the solutions for points common to multiple maplets agree, so the accuracy of all of the topographic heights is controlled by stereo while the heights themselves are generated by photoclinometry at a higher ground sample distance (resolution).
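A back-of-the-envelope calculation shows why most heights are solved by multiple maplets: 99x99-pixel maplets centered on landmarks spaced roughly 30 pixels apart overlap heavily. The sketch below works through that arithmetic; the image ground sample distance is a hypothetical example value.

```python
# Rough estimate of maplet overlap and maplet footprint.
maplet_size_px = 99        # maplet extent (pixels on a side)
landmark_spacing_px = 30   # typical landmark spacing (pixels)
image_gsd_m = 0.5          # hypothetical image ground sample distance (m/pixel)

overlap_per_axis = maplet_size_px / landmark_spacing_px   # ~3.3 maplets across each axis
maplets_per_point = overlap_per_axis ** 2                 # ~11 maplets cover a typical point
print(f"Each point is covered by roughly {maplets_per_point:.0f} maplets")
print(f"Maplet footprint: {maplet_size_px * image_gsd_m:.1f} m on a side")
```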
SPC Process
SPC uses a three-step iterative process to derive a shape model:
- register images
- warp the model
- update camera position/pointing
We start with an initial shape model that is very low resolution but that provides a starting point for SPC. This shape can come from limb measurements, a radar shape model, spherical harmonics, or, if need be, can be derived from a mathematical representation of a tri-axial shape. In this case, it is set to a flat surface.
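As an illustration of the tri-axial option (not the flat surface used here), the following minimal sketch builds a starting shape as a vertex grid on a tri-axial ellipsoid; the semi-axes and grid spacing are hypothetical.

```python
# Build a crude tri-axial ellipsoid vertex grid to serve as a starting surface.
import numpy as np

a, b, c = 500.0, 450.0, 400.0          # hypothetical semi-axes (m)
lat = np.radians(np.linspace(-90, 90, 91))
lon = np.radians(np.linspace(0, 360, 181))
lon_g, lat_g = np.meshgrid(lon, lat)

# Parametric tri-axial ellipsoid; SPC only needs a low-resolution starting shape.
x = a * np.cos(lat_g) * np.cos(lon_g)
y = b * np.cos(lat_g) * np.sin(lon_g)
z = c * np.sin(lat_g)
vertices = np.stack([x, y, z], axis=-1)   # (91, 181, 3) starting shape model
print(vertices.shape)
```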
The following discussions explain each of these three steps in more detail.
Step 1. Register
The first step is to register the images to a reference frame. To do this, we use a large number of control points, which we call landmarks. For each of these control points, or landmarks, we generate a small 99x99 pixel sub-map called a maplet. Each image that falls within a maplet is orthorectified and projected onto the shape model (Fig. 4, top). Associated with each of these maplet views is the shape model with albedo, illuminated at the same solar geometry (Fig. 4, bottom).
We use both manual and automated tools to co-register the images within each maplet such that the center point of each maplet is in the same location in all of them. Because the images are orthorectified, we have the same view of each of them, increasing our ability to match identical features, even if the original viewing geometry makes them hard to identify. There are minor improvements in the co-registration of the images as the shape model is improved, because a better model reduces projection error and, correspondingly, any misregistration.
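A minimal sketch of this registration step, assuming the orthorectified image patch and the illuminated-model prediction for a maplet are available as arrays, follows. It uses a simple normalized cross-correlation to estimate the pixel shift between them; SPC's own registration tools are more involved, and the arrays below are placeholders.

```python
# Align one orthorectified image patch to the illuminated-model prediction
# for a maplet using normalized cross-correlation.
import numpy as np
from scipy.signal import fftconvolve

def normalized_xcorr_shift(template, patch):
    """Estimate the (row, col) shift that realigns `patch` with `template`."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    p = (patch - patch.mean()) / (patch.std() + 1e-12)
    corr = fftconvolve(t, p[::-1, ::-1], mode="same")   # cross-correlation surface
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2                  # zero-lag location
    return np.array(peak) - center

# Hypothetical 99x99 maplet prediction and an image patch offset by (2, -3) pixels
rng = np.random.default_rng(0)
template = rng.normal(size=(99, 99))
patch = np.roll(template, shift=(2, -3), axis=(0, 1))
print(normalized_xcorr_shift(template, patch))   # approximately [-2, 3]
```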
Step 2. Warp the Model
Once the landmarks are registered, we use that information to update the topography, or shape model. While the height of the center of the landmark is initially derived from geometric stereo, we use photoclinometry to solve for the heights of every pixel of the landmark. Simple photoclinometry is 1D and requires the albedo to be constant so that the only variation in pixel intensity is due to topography. For SPC, we have a full 3D model that includes albedo. As described previously, for each image within a maplet, we illuminate the shape model to correspond to it. Because we are fully controlling for topography and albedo, any variation of each image's pixel DN from its representative shape model DN is an error in the model. The deviations are turned into corrections for both albedo and topography. A configuration file within SPC provides weightings that are applied to the corrections for albedo and topography. When working with a low-relief surface with high contrast, the user would put more weight on changing the albedo rather than the topography, and vice versa. The weighting only affects how fast the model converges to a solution, not the solution itself. For example, if SPC has made an overcorrection to albedo, then when the terrain is processed again, the albedo will be corrected.
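The sketch below is a heavily simplified, Lambertian-model version of this update (SPC uses more sophisticated photometric functions). It shows how a single pixel's DN residual between an image and the illuminated model can be split, by user-supplied weights, into an albedo correction and a slope (topography) correction; all names and numbers are illustrative, not SPC variables.

```python
# Split a DN residual into weighted albedo and slope corrections (Lambertian sketch).
import numpy as np

def lambert_prediction(albedo, normal, sun_dir):
    """Predicted brightness = albedo * cos(incidence)."""
    cos_i = np.clip(np.dot(normal, sun_dir), 0.0, 1.0)
    return albedo * cos_i, cos_i

def split_residual(observed_dn, albedo, normal, sun_dir, w_albedo=0.5, w_topo=0.5):
    predicted, cos_i = lambert_prediction(albedo, normal, sun_dir)
    residual = observed_dn - predicted                  # model error for this pixel
    d_albedo = w_albedo * residual / max(cos_i, 1e-6)   # explain brightness via albedo
    d_cos_i  = w_topo   * residual / max(albedo, 1e-6)  # explain brightness via tilt toward the Sun
    return residual, d_albedo, d_cos_i

# Hypothetical pixel: surface normal tilted slightly from vertical, Sun at 45° incidence
normal  = np.array([0.0, 0.05, 1.0]); normal /= np.linalg.norm(normal)
sun_dir = np.array([0.0, np.sin(np.radians(45)), np.cos(np.radians(45))])
print(split_residual(observed_dn=0.60, albedo=0.75, normal=normal, sun_dir=sun_dir))
```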
Step 3. Update Camera Position/Pointing
The next major step of SPC is to take the updated heights of the surface and use them to improve the actual position and pointing of the spacecraft. Figure _ shows a bright spot for the center of each landmark. Within an image, each of these landmarks (in this context, they work like control points) provides the exact sample/line position of the landmark. These data, along with the angular field of view of each pixel, allow SPC to determine the position and pointing of the spacecraft. Note that a narrow field of view makes it difficult to break the degeneracy where a displacement can be explained by either position or pointing. This is handled by weighting each of these terms based upon the estimated errors in position and pointing, typically provided by the navigation team. This correction of camera position/pointing, and the assumptions made by those generating the topography, is where the major differences lie between SPC and traditional stereogrammetry.
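The position/pointing degeneracy and its weighting can be illustrated with a small numerical sketch. The IFOV, range, and uncertainty values below are hypothetical examples, not mission values, and the variance-ratio split is only a simplified stand-in for SPC's full weighted solution.

```python
# Illustrate how one landmark pixel residual maps to either pointing or position,
# and how a priori (navigation-supplied) uncertainties weight the split.
import numpy as np

ifov_rad    = 25e-6     # angular size of one pixel (rad), hypothetical narrow-angle camera
range_m     = 5000.0    # spacecraft-to-surface range (m), hypothetical
residual_px = 1.2       # observed-minus-predicted landmark location (pixels)

angular_residual = residual_px * ifov_rad          # rad
as_pointing = np.degrees(angular_residual) * 3600  # equivalent pointing error (arcsec)
as_position = angular_residual * range_m           # equivalent lateral position error (m)

# Split the correction by a priori uncertainties (hypothetical navigation estimates)
sigma_pointing_rad = 500e-6
sigma_position_m   = 2.0
sigma_position_as_angle = sigma_position_m / range_m
w_pointing = sigma_pointing_rad**2 / (sigma_pointing_rad**2 + sigma_position_as_angle**2)
print(f"{as_pointing:.2f} arcsec or {as_position:.3f} m; "
      f"fraction attributed to pointing ≈ {w_pointing:.2f}")
```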
As the topography is processed, the steps of register/warp/update are iterated until the residuals of the solution approach a minimum. We use a variety of tools to evaluate mis-registration problems, image artifacts, or regions with insufficient image coverage (the worst of which is a lack of stereo). The main tool is a program called RESIDUALS that reports deviations in distance (meters) or pixels. It calculates the position on the shape model of each landmark/image combination. For each landmark, it takes the deviations of the positions derived from each of its images to calculate the RMS error. The RMS error shows how closely all of the image data align to represent the surface of the object. It is a fully geometric solution based upon the registration of the landmarks (or control points) and the position of the spacecraft.
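A minimal sketch of that RMS bookkeeping follows: for one landmark, take the model-surface positions implied by each image, measure their scatter about the mean, and report the RMS in meters. The data are hypothetical, and the real RESIDUALS program reports per-landmark and global statistics in more detail.

```python
# Per-landmark RMS of the positions implied by each image that sees it.
import numpy as np

def landmark_rms(positions_m):
    """RMS scatter (m) of one landmark's per-image positions about their mean."""
    p = np.asarray(positions_m)
    deviations = p - p.mean(axis=0)
    return np.sqrt((np.linalg.norm(deviations, axis=1) ** 2).mean())

# Hypothetical landmark observed in four images
obs = [[10.00, 5.02,  0.01],
       [10.03, 4.98, -0.02],
       [ 9.98, 5.01,  0.00],
       [10.01, 4.99,  0.02]]
print(f"landmark RMS: {landmark_rms(obs):.3f} m")
```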
The main output product of SPC is a shape model, a 3D set of position vectors that define the heights; this can be converted into a vector/plate model. The output from SPC for a bounded surface is called a BIGMAP. This product has a regular ground sample distance (resolution) and, for each point, it takes the average value from all the maplets that contain that point. The BIGMAP also contains a scaled albedo (or scaled average surface reflectance) for each point.
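As a sketch of this averaging, the following minimal example accumulates overlapping maplet heights and albedos onto a regular grid and divides by the number of contributing maplets per cell. The grid size and maplet records are hypothetical and do not reflect the SPC file formats.

```python
# Average overlapping maplet patches onto a regular (BIGMAP-style) grid.
import numpy as np

grid   = np.zeros((200, 200))    # accumulated heights
albedo = np.zeros((200, 200))    # accumulated scaled albedo
count  = np.zeros((200, 200))    # number of maplets contributing to each cell

def accumulate(row0, col0, maplet_heights, maplet_albedo):
    """Add one maplet's 99x99 patch into the bounded grid at (row0, col0)."""
    r = slice(row0, row0 + maplet_heights.shape[0])
    c = slice(col0, col0 + maplet_heights.shape[1])
    grid[r, c]   += maplet_heights
    albedo[r, c] += maplet_albedo
    count[r, c]  += 1

# Two hypothetical overlapping maplets
accumulate(10, 10, np.full((99, 99), 1.0), np.full((99, 99), 0.9))
accumulate(40, 40, np.full((99, 99), 2.0), np.full((99, 99), 1.1))

valid = count > 0
mean_height = np.where(valid, grid / np.maximum(count, 1), np.nan)
mean_albedo = np.where(valid, albedo / np.maximum(count, 1), np.nan)
print(np.nanmin(mean_height), np.nanmax(mean_height))   # 1.0 ... 2.0, with 1.5 in the overlap
```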
Additional Processing
Output products