Positional accuracy
Positional accuracy has been described as 'the expected
deviance in the geographic location of an object in the data set from
its true ground position' (Aronoff,
1989), or 'the likelihood that the position of a point as determined
from a map will be the true position' (Aronoff, 1989).
Positional accuracy is tested by selecting a sample of
points and comparing their position coordinates with those obtained from a
more accurate source.
Components
There are two main components of positional accuracy
(both are illustrated in the sketch below):
- Bias refers to
systematic discrepancies between the represented position and the true position.
Ideally it should be zero. It is measured by the mean (average)
positional error of the sample points.
- Precision relates
to the dispersion of the positional errors of the data elements. It is estimated
by calculating the standard deviation of the errors at the selected test points. A low
standard deviation indicates a narrow dispersion of positional errors,
i.e. relatively small errors. The higher the precision, the greater the
confidence in the data set.
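The following is a minimal sketch of these two components using hypothetical
test-point coordinates (the values and variable names are illustrative, not
taken from the source). Bias is read here as the mean offset in each coordinate,
and precision as the standard deviation of the point-to-point positional errors.

```python
import math

# Hypothetical test points: position recorded in the data set vs. the same
# point taken from a more accurate reference source (coordinates in metres).
dataset   = [(100.2, 200.1), (150.4, 250.3), (199.9, 300.6), (250.1, 349.8)]
reference = [(100.0, 200.0), (150.0, 250.0), (200.0, 300.0), (250.0, 350.0)]

# Offsets between the represented and the true position at each test point.
dx = [d[0] - r[0] for d, r in zip(dataset, reference)]
dy = [d[1] - r[1] for d, r in zip(dataset, reference)]

# Bias: the mean offset in each coordinate; a non-zero value indicates a
# systematic shift of the data set relative to the reference.
bias_x = sum(dx) / len(dx)
bias_y = sum(dy) / len(dy)

# Positional error at each point (Euclidean distance to the reference).
errors = [math.hypot(x, y) for x, y in zip(dx, dy)]

# Precision: the standard deviation of the positional errors; a small value
# means the errors are tightly clustered, giving more confidence in the data.
mean_err  = sum(errors) / len(errors)
precision = math.sqrt(sum((e - mean_err) ** 2 for e in errors) / len(errors))

print(f"bias = ({bias_x:.3f}, {bias_y:.3f}) m")
print(f"precision (std dev of errors) = {precision:.3f} m")
```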
Types
There are two main types of error in positional accuracy:
- Root mean square
(RMS) error is a common measure of positional accuracy used in surveying
and photogrammetry. It is calculated by determining the positional
error at each test point, squaring the individual errors, averaging the
squares, and taking the square root of that average (see the sketch after
this list). One limitation of this measure is that
it does not distinguish between bias and precision.
- Entity error in positional
accuracy can arise from incorrectly placed entities, but also from missing
entities and disordered entities. This is mainly a problem in vector
systems.
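A short sketch of the RMS calculation, assuming a hypothetical set of
positional errors already measured at the test points (the values below are
illustrative only):

```python
import math

# Hypothetical positional errors (metres) at the test points, i.e. the
# distance between each point's data-set position and its position on the
# more accurate source.
errors = [0.22, 0.50, 0.61, 0.14, 0.35]

# RMS error: square each error, average the squares, take the square root.
rms = math.sqrt(sum(e ** 2 for e in errors) / len(errors))

print(f"RMS error = {rms:.3f} m")

# Note: a data set with a large systematic shift (high bias) and one with
# widely scattered errors (low precision) can yield the same RMS value,
# which is why RMS alone does not distinguish between the two.
```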