The standard deviation can sometimes be misleading in comparing the risk, or uncertainty, surrounding alternatives if they differ in size. Consider two investment opportunities, A and B, whose normal probability distributions of one-year returns have the following characteristics:
| | Investment A | Investment B |
|---|---|---|
| Expected return, R | 0.08 | 0.24 |
| Standard deviation, σ | 0.06 | 0.08 |
| Coefficient of variation, CV | 0.75 | 0.33 |
Can we conclude that because the standard deviation of B is larger than that of A, it is the riskier investment? With standard deviation as our risk measure, we would have to. However, relative to the size of expected return, investment A has greater variation. This is similar to recognizing that a $10,000 standard deviation of annual income to a multimillionaire is really less significant than an $8,000 standard deviation in annual income would be to you. To adjust for the size, or scale, problem, the standard deviation can be divided by the expected return to compute the coefficient of variation (CV):
Coefficient of variation (CV) = σ/R
Thus the coefficient of variation is a measure of relative dispersion (risk): a measure of risk "per unit of expected return." The larger the CV, the larger the relative risk of the investment. Using the CV as our risk measure, investment A, with a return distribution CV of 0.75, is viewed as riskier than investment B, whose CV is only 0.33.
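The comparison above can be reproduced in a short Python sketch (the function name is illustrative, not from the text):

```python
def coefficient_of_variation(expected_return: float, std_dev: float) -> float:
    """Relative dispersion: risk per unit of expected return (CV = sigma / R)."""
    return std_dev / expected_return

# Investments A and B, using the figures from the table above
cv_a = coefficient_of_variation(expected_return=0.08, std_dev=0.06)  # 0.06 / 0.08 = 0.75
cv_b = coefficient_of_variation(expected_return=0.24, std_dev=0.08)  # 0.08 / 0.24 ~ 0.33

# By the standard deviation alone, B (0.08) looks riskier than A (0.06);
# by the CV criterion, A carries more risk per unit of expected return.
riskier = "A" if cv_a > cv_b else "B"
print(f"CV(A) = {cv_a:.2f}, CV(B) = {cv_b:.2f}; riskier by CV: {riskier}")
```

Note that the ranking flips relative to a comparison of raw standard deviations, which is exactly the scale effect the CV is designed to remove.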