mathew said... boardsurfr said...
Comparing the +/- numbers for the speed results, the reported accuracy looks very similar. But the Motion data are 10 Hz, and the Locosys data are 5 Hz, and these numbers are based on Gaussian error propagation. That means that, if the error estimates were indeed nearly identical, the numbers in the results table for the Locosys data should be about 1.4 fold higher, not roughly the same.
That is not how error propagation works - in simple words, the longer the sample period, the lower the error, because the error gets swamped by the real signal - this is signal-processing 101. You are wrong.
Now stop being a bully.
You are quite amusing. It is good to see that you have a basic understanding of error propagation, but from what you write, your understanding is quite limited. I have no desire to change that, but since there are others who follow this thread with interest, I'll give a quick introduction to the basic principles of error propagation.
The basic idea is that a measurement can be inaccurate due to random error. For example, if we are surfing at 40 knots, but our GPS can only measure with an accuracy of 2 knots, we may get a reading of 38 knots or 42 knots, or anything in between. If we measure just once, there's a good chance that we read close to 38 or 42 knots, so that we have an actual error of nearly 2 knots. But if we measure multiple times, some values are likely to be higher than 40 knots, and some lower. If we average the measurements, the actual error is very likely to be lower.
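A small Monte Carlo sketch illustrates this (all numbers here are made up for illustration, not taken from any real GPS unit): a "true" speed of 40 knots, with each reading off by up to +/- 2 knots of purely random error.

```python
import random
import statistics

random.seed(1)
TRUE_SPEED = 40.0
NOISE = 2.0  # assumed maximum error of a single reading, in knots

def reading():
    """One simulated GPS speed reading with purely random error."""
    return TRUE_SPEED + random.uniform(-NOISE, NOISE)

trials = 2000
# Typical error when we trust a single measurement:
single_errors = [abs(reading() - TRUE_SPEED) for _ in range(trials)]
# Typical error when we average 10 measurements:
avg_errors = [abs(statistics.mean(reading() for _ in range(10)) - TRUE_SPEED)
              for _ in range(trials)]

print(f"typical single-reading error:     {statistics.mean(single_errors):.2f} knots")
print(f"typical 10-reading average error: {statistics.mean(avg_errors):.2f} knots")
```

Averaging 10 readings shrinks the typical error to roughly a third of the single-reading error, which is exactly the square-root behavior described below.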
The commonly used term for the estimated error of multiple measurements is the standard deviation. Wikipedia and plenty of other sources explain it quite nicely in detail, but basically, it is the square root of the average squared deviation.
The square root term means that if I measure the same thing 4 times, I can reduce the estimated error by a factor of 2 (the square root of 4). If I measure 100 times, the error estimate is reduced by a factor of 10.
For speedsurfing, if I measure a 2-second run at 5 Hz, I get 10 points, so the error estimate will be (roughly) the average error of single points, divided by the square root of 10 (3.16). So the error estimate (standard deviation) for the 2-second run will be about 3-fold lower than the error estimate for single points.
If I measure the same 2-second run at 10 Hz, I get 20 points, and the error estimate is reduced by a factor of 4.47 (the square root of 20). So simply going from 5 Hz to 10 Hz, while all else is equal, will reduce the error estimate by a factor of 4.47/3.16 = 1.41. That's the square root of 2.
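The 5 Hz vs 10 Hz arithmetic can be checked directly; `sigma_point` here is just a placeholder for the (unknown) per-point error:

```python
import math

sigma_point = 1.0  # arbitrary placeholder for the per-point error estimate

for hz in (5, 10):
    n = 2 * hz  # number of points in a 2-second run
    sigma_run = sigma_point / math.sqrt(n)
    print(f"{hz} Hz: {n} points, error estimate reduced by {math.sqrt(n):.2f}")

# Going from 5 Hz to 10 Hz shrinks the error estimate by sqrt(20)/sqrt(10):
print(f"improvement factor: {math.sqrt(20) / math.sqrt(10):.2f}")  # sqrt(2) = 1.41
```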
The principle of calculating error estimates this way is often called "Gaussian Error Propagation". That's what GPSResults does, and that's also what I implemented in GPS Speedreader. The +/- numbers in the category results are actually "2 sigma" error estimates - twice the calculated standard deviation. "2 sigma" indicates a 95% confidence level - for 95% of all measurements, the "true" speed should fall within the range given by the +/- estimates - if, and only if, the errors for individual points are completely random.
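A quick simulation (again with invented numbers) shows how the "2 sigma" interval behaves when the errors really are completely random:

```python
import random
import statistics

random.seed(7)
TRUE_SPEED = 40.0
SIGMA = 0.5   # assumed per-point standard deviation in knots (made up)
N = 10        # points in a 2-second run at 5 Hz

trials = 2000
covered = 0
for _ in range(trials):
    pts = [random.gauss(TRUE_SPEED, SIGMA) for _ in range(N)]
    mean = statistics.mean(pts)
    se = statistics.stdev(pts) / N ** 0.5  # standard error of the mean
    # Does the true speed fall inside the +/- "2 sigma" band?
    if mean - 2 * se <= TRUE_SPEED <= mean + 2 * se:
        covered += 1

print(f"true speed inside +/- 2 sigma in {100 * covered / trials:.1f}% of runs")
```

The coverage comes out close to 95% (slightly below for such short runs, because the standard deviation itself is estimated from only 10 points) - but only because the simulated errors are independent and random.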
And that is the important part that is quite easily missed by someone not familiar with statistics or scientific data analysis. There are two things that can violate this underlying assumption:
1. "Colored noise": non-random errors, for example due to distortion of the GPS signal in the atmosphere, or bias in the GPS chip.
2. Correlation (linkage) between neighboring data points.
Both of these apply to GPS data from the units we are using. I'll have to leave it to another post to explain this in more detail.
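To preview why point 2 matters: the sketch below (with assumed numbers, not real GPS data) compares the actual error of a 10-point average for purely random "white" noise against strongly correlated noise, modeled here as a simple AR(1) process with a correlation of 0.9 between neighboring points:

```python
import random
import statistics

random.seed(3)
N = 10        # points per run
RHO = 0.9     # assumed correlation between neighboring points (made up)
trials = 4000

def run_mean(correlated):
    """Average of N noise points, each with standard deviation 1."""
    e = random.gauss(0, 1)
    pts = []
    for _ in range(N):
        if correlated:
            # AR(1) step, scaled so every point keeps unit variance
            e = RHO * e + random.gauss(0, (1 - RHO ** 2) ** 0.5)
        else:
            e = random.gauss(0, 1)
        pts.append(e)
    return statistics.mean(pts)

white = statistics.stdev([run_mean(False) for _ in range(trials)])
colored = statistics.stdev([run_mean(True) for _ in range(trials)])

print(f"white noise:   error of 10-point mean ~ {white:.2f} (theory: {1 / N ** 0.5:.2f})")
print(f"colored noise: error of 10-point mean ~ {colored:.2f}")
```

With a correlation of 0.9, averaging barely helps: the actual error of the mean is several times larger than the square-root rule predicts, so an error estimate based on that rule would be far too optimistic.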