Greed is bad! (for GAs)
Posted: Mon Jun 19, 2006 8:21 pm
I know that there are a number of other GA folks out there, so here's my beef.
How do I know when my GA is being Greedy (i.e. when it gets rewarded for settling into a local optimum)?
Here's an example. I use the Grail GGO most of the time for my GA needs, and I feel it works really well for up to about 10 parameters. But I like to push things a bit. Specifically, if I've got a set of entries that I like, I put together a host of typical exit methods and optimize them with the GGO in a switchable fashion (with stress testing off for the switches). It's sort of like using the CASB, but only for exits.
This weekend I ran a set of tests on an entry system with about 5 parameters combined with the exit system, which had about 35, for a total of 40 parameters. I ran the GGO optimization overnight and the best it found was an NP/MaxDD of roughly $20K/$10K over a 3-year period. In frustration I manually optimized the system myself with just a couple of exit options I know work, and came out with a superior NP/MaxDD of about $40K/$5K. Nothing to write home about. Then I set up a new run with the original 5 entry parameters plus about 7 exit parameters I felt were especially relevant to this market and timeframe (including switches). This ended up working great, giving me a system with about $60K/$3K (and this is for long positions only). So what's the problem?
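My rough intuition for why the 40-parameter run struggled, assuming something like 10 candidate values per parameter (a guess on my part, not GGO's actual granularity): the search space grows exponentially with parameter count, so a fixed overnight budget covers a vanishingly small fraction of the big space.

```python
# Back-of-envelope only; values_per_param is an assumed figure for illustration.
values_per_param = 10

space_40 = values_per_param ** 40   # the full 40-parameter run
space_12 = values_per_param ** 12   # the trimmed 5-entry + 7-exit run

print(f"40-param space: {space_40:.1e} combinations")       # 1.0e+40
print(f"12-param space: {space_12:.1e} combinations")       # 1.0e+12
print(f"ratio: {space_40 // space_12:.1e}x larger")         # 1.0e+28
```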
How do I know my GA is doing its job (i.e. hunting efficiently for the global optimum)? I know it already successfully avoids curve fitting (that's what GAs are supposed to do), and I know when I've curve-fit anyway (poor OOS results). But how do I know when my GA is getting stuck at a local optimum? Do I need to run more iterations? Change the population size or mutation rate? Are there any mathematical guidelines?
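The one quantitative check I know of is independent restarts: run the same optimization several times with different random seeds and see whether the best results agree. Here's the kind of thing I mean as a toy Python sketch; the fitness function and all the GA settings are made up for illustration, and this is not GGO's internal machinery:

```python
import random

def evaluate(params):
    # Toy multimodal fitness: a broad basin around 0.3 (easy local optimum)
    # plus a narrow spike near 0.8 (the true optimum most runs miss).
    base = -sum((p - 0.3) ** 2 for p in params)
    bonus = 0.5 * sum(1 for p in params if abs(p - 0.8) < 0.05)
    return base + bonus

def run_ga(n_params=7, pop_size=50, generations=200, mutation_rate=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate, reverse=True)
        parents = pop[: pop_size // 2]                       # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]  # uniform crossover
            for i in range(n_params):
                if rng.random() < mutation_rate:              # gaussian mutation
                    child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return max(evaluate(ind) for ind in pop)

# Independent restarts: if best-of-run fitness varies a lot across seeds,
# the search is getting trapped in local optima and likely needs more
# diversity (bigger population, higher mutation rate) or a longer run.
scores = [run_ga(seed=s) for s in range(10)]
print("best per restart:", [round(s, 3) for s in scores])
print("spread (max - min):", round(max(scores) - min(scores), 3))
```

If the spread is large, no single run can be trusted as the global optimum; if all restarts land in the same place, that's at least weak evidence the GA isn't just being greedy.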
If anyone has any materials or thoughts on how to quantitatively define and resolve these issues, please let me know. Thanks.
Edward