Phromsky said:
So on what pretext and by what means is the program able to produce a significantly less error-affected and more user-friendly score in the result line?
GPS analysis programs like GPSResults and GPSSpeedreader use Gaussian error propagation (typical standard deviation formulas) to calculate the +/- numbers for results. This assumes that the measurement errors are independent (due to "white noise"), which is often true, at least in first approximation. The effect is that the error estimates go down proportionally to the square root of the number of points in a measurement. Thus, 10 second error estimates will be lower than 2 second estimates; 1 hour estimates will always be really low; and 10 Hz data will have lower estimates than 5 Hz data, which have lower estimates than 1 Hz data.
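The square-root scaling described above can be sketched in a few lines. This is a minimal illustration of the standard-error-of-the-mean formula, not the actual code used by GPSResults or GPS Speedreader; the per-point error figure is a made-up example value.

```python
import math

def mean_speed_error(point_sigma_knots, duration_s, rate_hz):
    """Standard error of the average speed over a run, assuming
    independent (white-noise) per-point measurement errors."""
    n = duration_s * rate_hz  # number of speed samples in the run
    return point_sigma_knots / math.sqrt(n)

# With an assumed per-point error of 0.2 knots:
# 2 s at 5 Hz  -> 0.2 / sqrt(10) ~ 0.063 knots
# 10 s at 5 Hz -> 0.2 / sqrt(50) ~ 0.028 knots
# 10 s at 1 Hz -> 0.2 / sqrt(10) ~ 0.063 knots
```

Note how the 10-second estimate at 5 Hz beats both the 2-second estimate at the same rate and the 10-second estimate at 1 Hz, because each has five times fewer points.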
There are numerous complications and exceptions to the general description above. For example, GT-31 data are heavily filtered, so the assumption of independent errors is not valid in the 2-10 second range (although it becomes more and more valid the longer the time frame is). This is sometimes taken into consideration by the programs, although I don't think any program gets it perfectly right. Other issues include non-randomness of errors when satellite reception is poor, and differences in how different manufacturers estimate and filter errors.
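One way to see why filtering undermines the square-root scaling: correlated errors behave like fewer independent samples. A standard rule of thumb for errors with first-order (AR(1)) autocorrelation is sketched below; this is a textbook approximation I'm using for illustration, not something the programs above are known to implement.

```python
def effective_sample_size(n, rho):
    """Approximate effective number of independent samples when
    consecutive errors have lag-1 autocorrelation rho (AR(1) model).
    rho = 0 recovers white noise; rho near 1 (heavy filtering)
    leaves far fewer effective samples, so error bars shrink
    much more slowly than 1/sqrt(n) would suggest."""
    return n * (1 - rho) / (1 + rho)

# 50 samples of white noise -> 50 effective samples.
# 50 heavily filtered samples (rho = 0.8) -> only ~5.6 effective samples.
```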
GPS Speedreader shows the plain average of the error estimates on the bottom line, and this number can be used as an upper limit of the likely error, as long as it is low enough (below ~0.5 knots) and satellite reception was not lost due to a crash or similar. However, none of the error estimates should be used to draw conclusions about the relative accuracy of devices whose chips are based on different designs (Locosys vs. u-blox, like the Motion).
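The "plain average as upper limit" check reduces to something like the sketch below. The function names and the 0.5-knot threshold placement are mine; the source only describes the behavior, not Speedreader's actual code.

```python
def average_error_estimate(point_errors_knots):
    """Plain average of the device's per-point error estimates,
    as shown on GPS Speedreader's bottom line."""
    return sum(point_errors_knots) / len(point_errors_knots)

def error_is_acceptable(point_errors_knots, threshold_knots=0.5):
    """Treat the average as an upper limit on the likely error only
    while it stays below ~0.5 knots (and reception was not lost)."""
    return average_error_estimate(point_errors_knots) < threshold_knots
```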