The definition of RSD is given below:
Relative Standard Deviation: In probability theory and statistics, the relative standard deviation (RSD or %RSD) is the absolute value of the coefficient of variation. It is often expressed as a percentage. It is useful for comparing the uncertainty between different measurements of varying absolute magnitude.
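If I read the definition correctly, in symbols (with $s$ the standard deviation and $\bar{x}$ the mean) it amounts to

$$\text{RSD} = \left|\frac{s}{\bar{x}}\right| \times 100\%.$$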
Could anyone please give me an explanation of "varying absolute magnitude" in the above definition? When is it preferable to use RSD instead of SD?
3 Answers
One way that I use %RSD in my work is to analyze the relative output variance of business operations processes. For example, in a "pull" type operation, there will be varying demand elasticity, varying staffing, and possibly varying product mix across time (an example would be an accounting office that sees spikes in demand at certain times of the year or days of the week, and works on an "on-demand" basis). Another example might be comparing the performance of a 1st shift to a 2nd shift. In such a case, you will have differing $s$ and $\bar{x}$ between shifts. %RSD lets one evaluate the relative output variance despite this.
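As a minimal sketch of that kind of comparison (the shift names and daily output figures below are made up purely for illustration, not real process data):

```python
import numpy as np

# Hypothetical daily output counts for two shifts (illustrative values only).
shift_1 = np.array([120, 135, 128, 140, 118, 132])
shift_2 = np.array([60, 72, 65, 58, 70, 66])

def percent_rsd(x):
    """Relative standard deviation as a percentage: |s / x-bar| * 100."""
    return abs(np.std(x, ddof=1) / np.mean(x)) * 100

for name, data in [("1st shift", shift_1), ("2nd shift", shift_2)]:
    print(f"{name}: mean = {np.mean(data):.1f}, "
          f"SD = {np.std(data, ddof=1):.1f}, "
          f"%RSD = {percent_rsd(data):.1f}%")
```

Even though the two shifts have different $s$ and $\bar{x}$, the %RSD puts them on the same dimensionless scale, so their relative output variance can be compared directly.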
If I'm measuring the distance to Mars in miles and you're measuring it in kilometers, your SD (just the number, if we ignore the dimensions) will be bigger than mine even if your measurement is slightly more precise. This is because the "absolute magnitude" of your measurement is larger than mine--for every million miles I measure, you measure 1.6 million kilometers.
The RSD corrects for this effect, so that if your SD (in kilometers) is 1.5 times as large as my SD (in miles), you can see that your measurement is actually more precise. Essentially, it takes the SD and makes it dimensionless, so that a value of 0.1 is always better than a value of 0.12, irrespective of the units they were initially measured in.
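As a rough illustration with made-up numbers: say the distance is about $140$ million miles ($\approx 225$ million kilometers), my SD is $1$ million miles, and yours is $1.5$ million kilometers. Then

$$\text{RSD}_{\text{miles}} = \frac{1}{140} \times 100\% \approx 0.71\%, \qquad \text{RSD}_{\text{km}} = \frac{1.5}{225} \times 100\% \approx 0.67\%,$$

so your measurement is slightly more precise even though your SD is $1.5$ times as large as a bare number.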
If you are careful about what units you're working in, this is not a big deal (since looking at the units would tell you which method was better), but people are often not as careful about units as they should be.
If you have one time series with a mean of $1000$ and a SD of $500$, another with a mean of $1$ and a SD of $0.5$, comparing the SD is not meaningful. But in some sense, they seem "equally random". If you normalize as suggested by the RSD, this will be captured.
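Concretely,

$$\text{RSD}_1 = \frac{500}{1000} = 0.5 = 50\%, \qquad \text{RSD}_2 = \frac{0.5}{1} = 0.5 = 50\%,$$

so both series have the same relative dispersion.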