Investors use standard deviation (SD) to measure the variability of a security's returns. For example, if a stock's price fluctuates widely over time, its returns will show a high standard deviation, signaling greater risk to the investor. Securities with high standard deviations tend to have higher expected returns (and yields), because the market demands compensation for bearing greater risk.
A security with a higher standard deviation has a greater probability of returns far above or far below its average than a security with a lower standard deviation. Thus, a high standard deviation means more risk for the investor.
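For $n$ observed returns $r_1, \dots, r_n$ with mean $\bar{r}$, the standard deviation is usually computed with the sample formula (some texts use the population version, which divides by $n$ rather than $n-1$):

$$s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(r_i - \bar{r}\right)^2}$$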
In the example that follows, even though Securities A and B have the same average return of 4.4%, Security A has a much higher standard deviation than Security B. An investor can expect Security A's returns to deviate from that average by roughly 10.8% in a typical period, while Security B's returns should vary by less than 1%. This makes Security A a much riskier investment than Security B.
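To make the comparison concrete, here is a minimal Python sketch. The return series below are hypothetical placeholders chosen so their summary statistics roughly track the figures quoted above; the exact returns in the original table may differ, so the computed standard deviation for Security A lands near, rather than exactly at, 10.8%.

```python
import statistics

# Hypothetical annual returns (%): placeholder values, not the original table.
security_a = [-10, 5, 20, 4, 3]    # wide swings around the 4.4% mean
security_b = [3, 5, 5, 4.5, 4.5]   # tight cluster around the 4.4% mean

for name, returns in [("Security A", security_a), ("Security B", security_b)]:
    mean = statistics.mean(returns)
    sd = statistics.stdev(returns)  # sample standard deviation (divides by n - 1)
    print(f"{name}: mean = {mean:.1f}%, standard deviation = {sd:.1f}%")
```

Running this prints a mean of 4.4% for both securities, with a standard deviation of roughly 10.6% for Security A and about 0.8% for Security B, mirroring the risk gap described above.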
The range of a set of scores (e.g., returns) is the difference between the highest score and the lowest score. In the table, Security A's returns vary from -10% to 20%, a range of 30 percentage points. In contrast, Security B's returns run only from 3% to 5%, a range of just 2 percentage points. Securities with narrower ranges show less variability in their returns, which usually means lower risk.
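The range drops out of the same hypothetical series with a one-line calculation (again, the values are placeholders standing in for the original table):

```python
security_a = [-10, 5, 20, 4, 3]
security_b = [3, 5, 5, 4.5, 4.5]

# Range = highest return minus lowest return.
print(max(security_a) - min(security_a))  # 30 percentage points
print(max(security_b) - min(security_b))  # 2 percentage points
```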