Monday, January 11, 2010

Optimization

I have decried curve fitting in previous posts but have never taken a serious look at optimization. Jaekle and Tomasini in Trading Systems (2009) contend that everybody optimizes in one way or another. The question is how well they do it. Done well, it is useful in system trading; by contrast, “its aberration, namely curve fitting or over-optimisation” has no forecasting power.

So how do we go about optimizing a promising system and checking it for robustness? First of all, we have to respect the constraints of degrees of freedom, described in an earlier post. We want to keep the number of inputs, conditions, and variables as small as possible. If we have multiple inputs that need to be optimized, it’s best to test one or two at a time while all other inputs are kept static. Second, we must decide on the step size our optimization software will use for each input. For instance, a system developer who wants to optimize both a short-term moving average (say, 1 to 20 periods) and a long-term moving average (20 to 200) should match the relative step sizes of the two averages as closely as possible. In this case we could use a step of 2 for the short-term and a step of 20 for the long-term moving average, so that each step is roughly ten percent of its parameter’s range.
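As a rough illustration (not the authors’ procedure), here is a minimal brute-force grid over a dual moving-average crossover, stepping the short average by 2 and the long average by 20 so the relative step sizes match. The synthetic price series, the long/flat crossover rule, and the parameter ranges are all assumptions made for the sketch.

```python
# Minimal sketch of a brute-force grid optimization of a dual moving-average
# crossover. The price series, the trading rule, and the parameter ranges are
# illustrative assumptions, not the book's exact procedure.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
prices = pd.Series(100 + np.cumsum(rng.normal(0, 1, 2000)))  # synthetic daily closes

def net_profit(prices, short_len, long_len):
    """Net profit (in price points) of a simple long/flat MA crossover."""
    short_ma = prices.rolling(short_len).mean()
    long_ma = prices.rolling(long_len).mean()
    position = (short_ma > long_ma).astype(int).shift(1).fillna(0)  # act on the next bar
    returns = prices.diff().fillna(0)
    return float((position * returns).sum())

results = {}
for short_len in range(2, 21, 2):        # step 2 on the short average
    for long_len in range(20, 201, 20):  # step 20 on the long average
        if short_len >= long_len:
            continue
        results[(short_len, long_len)] = net_profit(prices, short_len, long_len)

best = max(results, key=results.get)
print("best (short, long):", best, "net profit:", round(results[best], 1))
```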

Once we have optimized a system, we have to decide whether it is robust on its surface and, if it is, what its most robust input values are. If “the average results are positive then we can assume that the trading system is a robust one. If you are more statistically inclined you can also subtract the standard deviation (or a multiple of it) from the average net profit and check if the average net profit remains positive in this case.” (p. 23) Assuming that the system is robust, we are looking for that area of the profit chart where profit peaks and, even as we change inputs, remains almost constant. That is, we want a plateau, not a spike. (By the way, we don’t have to optimize for profit; we might want to optimize for minimum drawdown.)
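Continuing the sketch above, the quick robustness check the book describes (average net profit across the whole grid, optionally minus a multiple of its standard deviation) takes only a few lines; the plateau-versus-spike comparison against neighbouring grid cells is my own illustrative addition, not something taken from the text.

```python
# Average net profit across the grid, minus one standard deviation, plus a
# crude plateau check: compare the peak cell to the average of its neighbours.
import statistics

profits = list(results.values())
avg, sd = statistics.mean(profits), statistics.stdev(profits)
print("average net profit:", round(avg, 1))
print("average minus one standard deviation:", round(avg - sd, 1))  # still positive?

def neighbourhood_average(short_len, long_len, step_short=2, step_long=20):
    """Average profit over a cell and its immediate neighbours in the grid."""
    neighbours = [results[(s, l)]
                  for s in (short_len - step_short, short_len, short_len + step_short)
                  for l in (long_len - step_long, long_len, long_len + step_long)
                  if (s, l) in results]
    return statistics.mean(neighbours)

best_short, best_long = best
print("peak profit:", round(results[best], 1),
      "vs. neighbourhood average:", round(neighbourhood_average(best_short, best_long), 1))
```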

We’re not finished yet, not by a long shot. We need to assess the forecasting power of the system on out-of-sample data. The quick, easy, and increasingly obsolete way to check is to apply the optimized system on data we kept outside the optimization process (usually 10% to 20% of the data window). If it performs in a similar fashion on the unseen data, it is robust.
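A hold-out check along these lines might look like the following, again building on the sketch above; the 85/15 split is an arbitrary choice within the 10% to 20% range mentioned here.

```python
# Hold-out check: optimize on the first ~85% of the data, then re-run the
# chosen parameters on the reserved 15% and compare the results.
split = int(len(prices) * 0.85)
in_sample, out_of_sample = prices.iloc[:split], prices.iloc[split:]

in_sample_results = {
    (s, l): net_profit(in_sample, s, l)
    for s in range(2, 21, 2) for l in range(20, 201, 20) if s < l
}
chosen = max(in_sample_results, key=in_sample_results.get)
print("in-sample best:", chosen, round(in_sample_results[chosen], 1))
print("out-of-sample result with those parameters:",
      round(net_profit(out_of_sample, *chosen), 1))
```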

The more efficient, more precise testing method is walk-forward analysis, which can be either rolling or anchored. Let’s look at an example of rolling walk-forward optimization. Assume that we used data from the years 2003 through 2005 for our initial optimization. Then we see how the system performs in 2006. The next step is to walk forward a year: re-optimize on data from 2004 through 2006 and apply the best parameters from that period to 2007. Add the performance results of 2007 to those of 2006. Continue by walking forward to the 2005-2007 period and applying the best parameters to 2008. We now have a three-year out-of-sample track record that has adapted to market changes. The results from this three-year period are, in effect (as long as we include realistic slippage and commission figures), “real-time” results.
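A rolling walk-forward loop, sketched on the same synthetic data: the three-years-in, one-year-out schedule mirrors the example above, while the 250-bars-per-year convention and the brute-force re-optimization inside the loop are assumptions of the sketch.

```python
# Rolling walk-forward: optimize on a three-"year" window, trade the next
# "year" with the chosen parameters, slide the window forward one year, and
# stitch the out-of-sample results together.
BARS_PER_YEAR = 250  # rough stand-in for one year of daily bars
out_of_sample_profit = 0.0

for start in range(0, len(prices) - 4 * BARS_PER_YEAR + 1, BARS_PER_YEAR):
    opt_window = prices.iloc[start : start + 3 * BARS_PER_YEAR]
    oos_window = prices.iloc[start + 3 * BARS_PER_YEAR : start + 4 * BARS_PER_YEAR]
    window_results = {
        (s, l): net_profit(opt_window, s, l)
        for s in range(2, 21, 2) for l in range(20, 201, 20) if s < l
    }
    chosen = max(window_results, key=window_results.get)
    oos = net_profit(oos_window, *chosen)
    out_of_sample_profit += oos
    print(f"bars {start}-{start + 3 * BARS_PER_YEAR}: chose {chosen}, "
          f"next-year profit {oos:.1f}")

print("stitched walk-forward profit:", round(out_of_sample_profit, 1))
```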

Of course, nothing is ever straightforward in trading. In doing a walk-forward analysis, how long a study (in-sample) period should we use, and how long an application (out-of-sample) period? What should we choose as our measure of performance? And a caveat: if the markets are fickle, we can change our parameters just in time to have our system underperform in the market’s new phase.
