There are two common ways to measure performance: the standard deviation of returns and the Sharpe ratio. Here's a third way.
How do we measure risk? In the financial industry, the generally accepted method is the standard deviation of returns. A low standard deviation indicates that returns deviate little from their average (suggesting less risk), while a high standard deviation indicates that returns deviate greatly from their average (implying more risk). The assumption is that stable past returns are less risky, yet many practitioners are uneasy with this concept.
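To make the convention concrete, here is a minimal sketch of the calculation in Python; the yearly returns are hypothetical, chosen only to illustrate how the standard deviation of returns is computed:

```python
import statistics

# Hypothetical annual returns, in percent (illustrative only)
returns = [8.2, -2.7, 11.5, 4.0, -7.9, 9.3]

mean_return = statistics.mean(returns)   # average annual return
risk = statistics.stdev(returns)         # sample standard deviation of returns

print(f"Average return: {mean_return:.2f}%  Standard deviation: {risk:.2f}%")
```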
FACTORING IN LOSSES
Let's examine a risk measure that factors in the actual losses
experienced by asset managers. Say that portfolio manager
Dick Smith experienced two negative years of -2.7% and
-7.9%, a combined loss totaling -10.6% over a 25-year
period. By dividing this total loss by the number of years in
our observation (25), we derive an average loss of -0.42% per
year. This is the average loss that an investor would have
experienced had he bought at the beginning of a negative
period, sold at the end of that span, and stayed on the sidelines
in the interim.
Let us also suppose that during that same 25-year period,
the Standard & Poor's 500 was down a total of five years for
a combined loss of -56.7%, averaging -2.27% a year. When
viewed from this perspective, Smith's portfolio was clearly
less risky than the S&P 500.
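As a sketch of the arithmetic, the short Python snippet below reproduces the figures above; note that the S&P 500's negative years are given here only as a combined total, so that total is used directly:

```python
def average_annual_loss(negative_year_returns, total_years):
    # Sum the returns from the losing years, then spread that
    # combined loss across every year in the observation window.
    return sum(negative_year_returns) / total_years

# Dick Smith: two negative years (-2.7% and -7.9%) in a 25-year period
smith_avg_loss = average_annual_loss([-2.7, -7.9], 25)   # about -0.42% per year

# S&P 500: five negative years totaling -56.7% over the same 25 years
sp500_avg_loss = -56.7 / 25                               # about -2.27% per year

print(f"Smith:   {smith_avg_loss:.2f}% per year")
print(f"S&P 500: {sp500_avg_loss:.2f}% per year")
```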