Would it be possible to use the results generated in a backtest to build a symbol (should it be dynamic?) list for another strategy to use as its list of symbols?
It’s possible, but would require building a custom extension using our DataSet Provider API.
QUOTE:
use the results generated in a backtest to build a symbol (should it be dynamic?) list for another strategy
If you can generate the results of the "backtest" on a bar-by-bar basis in the PreExecute{block} (Can you?) to build a List<BarHistory> to trade from in the Execute{block}, then you can do it all in a single strategy. That would be the simpler solution.
The question remains, will a bar-by-bar evaluation be enough to rank/pick your list? ScoreCard metrics, such as Sharpe Ratio, require evaluating all trades for a stock to compute. In contrast, RSI can be computed on a bar-by-bar basis along the way. If you can do the latter, then look at the PreExecute example. https://www.wealth-lab.com/Support/ApiReference/UserStrategyBase#PreExecute
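For reference, here is a minimal sketch of that pattern, loosely following the linked PreExecute example. It assumes the WealthLab 8 UserStrategyBase API (Initialize/PreExecute/Execute, GetCurrentIndex, BarHistory.Cache); the RSI ranking, the top-10 cutoff, and the cache key "rsi" are arbitrary placeholders, not a recommendation:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using WealthLab.Backtest;
using WealthLab.Core;
using WealthLab.Indicators;

namespace WealthScript1
{
    public class PreExecuteRankingSketch : UserStrategyBase
    {
        // symbols selected on the current bar
        private List<BarHistory> _buys = new List<BarHistory>();

        public override void Initialize(BarHistory bars)
        {
            // compute the ranking indicator once per symbol and cache it
            bars.Cache["rsi"] = RSI.Series(bars.Close, 14);
        }

        // called once per bar, before Execute, with all participating symbols
        public override void PreExecute(DateTime dt, List<BarHistory> participants)
        {
            _buys.Clear();

            var ranked = new List<Tuple<BarHistory, double>>();
            foreach (BarHistory bh in participants)
            {
                int i = GetCurrentIndex(bh);        // this symbol's bar index at dt
                RSI rsi = (RSI)bh.Cache["rsi"];
                if (i >= 0 && !Double.IsNaN(rsi[i]))
                    ranked.Add(Tuple.Create(bh, rsi[i]));
            }

            // keep the 10 symbols with the lowest RSI (placeholder ranking rule)
            foreach (var t in ranked.OrderBy(x => x.Item2).Take(10))
                _buys.Add(t.Item1);
        }

        public override void Execute(BarHistory bars, int idx)
        {
            bool isSelected = _buys.Contains(bars);

            if (!HasOpenPosition(bars, PositionType.Long))
            {
                // enter only symbols that made the ranked list on this bar
                if (isSelected)
                    PlaceTrade(bars, TransactionType.Buy, OrderType.Market);
            }
            else if (!isSelected)
            {
                // exit when a symbol drops out of the ranked list
                PlaceTrade(bars, TransactionType.Sell, OrderType.Market);
            }
        }
    }
}
```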
My theory is that there are symbols that just never trade well for any given strategy or group of strategies. So I was curious about building a custom symbol list via code.
This failure to perform could be measured a number of different ways, but the end result is that I'm trying to reduce drawdown or negative profitability. My obvious concern is: when does this cross the line into overfitting?
To test my theory this morning, I took a strategy that trades the Nasdaq 100 and looked at symbol-by-symbol performance from 2015-2023, then took the losers and excluded them by code inside the strategy (see the sketch below). So technically there is a year of OOS data to look at the performance.
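A minimal sketch of that kind of in-code exclusion is shown below. It assumes the WealthLab 8 UserStrategyBase API; the symbol names, the SMA crossover entry/exit, and the class name are placeholders, not the actual strategy:

```csharp
using System.Collections.Generic;
using WealthLab.Backtest;
using WealthLab.Core;
using WealthLab.Indicators;

namespace WealthScript2
{
    public class ExcludeLosersSketch : UserStrategyBase
    {
        // symbols that backtested poorly in-sample (placeholders)
        private static readonly HashSet<string> Excluded = new HashSet<string>
        {
            "XXXX", "YYYY", "ZZZZ"
        };

        private SMA _fast, _slow;

        public override void Initialize(BarHistory bars)
        {
            _fast = SMA.Series(bars.Close, 20);
            _slow = SMA.Series(bars.Close, 100);
        }

        public override void Execute(BarHistory bars, int idx)
        {
            // never trade symbols on the exclusion list
            if (Excluded.Contains(bars.Symbol))
                return;

            if (!HasOpenPosition(bars, PositionType.Long))
            {
                if (_fast.CrossesOver(_slow, idx))
                    PlaceTrade(bars, TransactionType.Buy, OrderType.Market);
            }
            else if (_fast.CrossesUnder(_slow, idx))
            {
                PlaceTrade(bars, TransactionType.Sell, OrderType.Market);
            }
        }
    }
}
```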
Performance was improved. I took a meta Strategy and doubled its YoY performance to close to 40%.
But at what cost? The law of unintended consequences will come into play, so I wanted feedback.
QUOTE:
... there are symbols that just never trade well for any given strategy or group of strategies. So was curious about building a custom symbol list via code.
I totally agree. I always manually craft my datasets around stocks that perform best under specific strategies. In this context, I would use the ScoreCard metrics, which means all trades for a stock under a specific strategy need to be considered. So the PreExecute bar-by-bar characterization approach above won't work for this.
Optimize your dataset with your chosen strategy first, then open the Strategy Ranking tool and select the Symbol Rankings tab as shown below. Run it, and pick only the stocks with high Sharpe Ratios, Win Rates, and Position Counts. Make only those stocks part of your custom dataset.
Now, for the stocks that fail the first time through, try parameter optimizing them again under different criteria. I usually optimize under APR, EquitySlope, ProfitPerBar, and RecoveryFactor. If you try optimizing under more complex ScoreCard metrics (e.g. SharpeRatio), that creates a rougher solution vector space that can confuse the optimizer--not ideal. If the optimizer gets confused, try adjusting some of its optimization parameters.
For stocks that fail under all your optimization attempts, you need to find a different strategy.
If you are really serious about automating this process to create a dynamic dataset, then you need to vote for the following feature request. https://www.wealth-lab.com/Discussion/How-to-change-Optimizer-settings-with-StrategyRunner-PerformOptimization-10492
QUOTE:
My theory is that there are symbols that just never trade well for any given strategy or group of strategies. So was curious about building a custom symbol list via code.
This is the basic idea behind a new, upcoming extension called finantic.DynamicPortfolio. It measures past correlations and past profitability for each symbol in a DataSet and constructs a dynamically changing set of symbols that are used to trade in the future.
(Correlations are important if you are interested in improving the effects of diversification.)
From the Help File:
QUOTE:
The dynamic portfolio mechanic is based on two assumptions:
1. If two trading instruments show a specific correlation for a past interval,
then this correlation value does not change immediately or significantly for a future interval.
2. If a trading instrument exhibits some properties that are exploitable by a trading strategy,
then these properties will not change too fast or too much in future intervals.
And this is the critical part: Does a metric like "Profitability" measured in some past interval show any "Persistence", i.e., will a stock symbol that was profitable in the past trade well in the future?
The extension comes with a set of plots that visualize these assumptions.
Here is the Persistence of Correlations plot for a basket of ETFs:
It is obvious that it is rather effective to build a dynamic DataSet based on past correlation measurements.
Here is a similar plot for the persistence of profitability, again for a basket of ETFs:
The profitability is much harder to predict.
The finantic.DynamicPortfolio extension is finished and currently under beta test with 7 people.
I expect this to be published before mid-March.
As an example, could I feed it a strategy to use as the measure of performance?
Sorry, that sounds like Chinese to me. Could you please elaborate?
QUOTE:
could I feed it a strategy
The whole thing is implemented as a condition building block. It will enable an entry for the currently active symbols only.
That way a highly dynamic DataSet/Portfolio is realised.
I understand, thank you!! Looking forward to its release.
QUOTE:
upcoming extension called finantic.DynamicPortfolio. It measures past correlations and past profitability for each symbol in a DataSet
Correlation to what exactly? I don't follow.
What I've done in the past is try to correlate the metrics from the Fidelity stock screener to my best-performing stocks. And that can be done with any stock screener service. But there are two problems:
1) This analysis really needs to be done with a stat package. It's a multivariate fit with many terms, and one needs to compute a contrasting "P" (that's a probability of significance) for each term of the fit, just like a macroeconomist would do.
2) Most stock screeners are set up to give you stocks that meet certain criteria, but not the other way around. What I really need is the other way around. I would like to give the screener my best stocks and have it return which criteria all those stocks fit, so I can use those criteria to find more similar stocks. Do you know a screener that does that?
QUOTE:
Correlation to what exactly? I don't follow.
What I do today is exclude symbols that backtest poorly within the group on the specific strategy I am analyzing, say, the bottom 10%.
I like what you're suggesting; it gets rid of a lot of the bias and work that creep in when doing it manually.
QUOTE:
Looking forward to its release.
finantic.DynamicPortfolio is available now!
For details see https://wealth-lab.com/extension/detail/finantic.DynamicPortfolio
Thanks for releasing this DynamicPortfolio tool. It looks very interesting.
Where would I find the help file for your new extension Dr. Koch?
** I found it :)
Hi DrKoch, I am trying out the new Dynamic Portfolio tool and have a question regarding the plots. There are references to Lookback A, B, and C, but I cannot find in the help file what these correspond to. Can you advise?
There are three result curves (to guard against over-optimization).
These results are calculated for three points in time:
A: Close to the beginning of your date range
B: Center of your date range
C: Close to the end of your date range.
These curves should teach you to not focus on a single maximum but instead apply some averaging across these three cases.
In the example shown above I'd conclude that about 250 bars is a good lookback period.
That sounds like a good idea for a general Visualizer.