Cone8
 ( 6.32% )
- ago
by Rene Koch:
Because all optimization algorithms use a single metric, and all metrics in WL come from a Scorecard, I am going to suggest another thing first:

Feature Request: Meta-Score-Cards
A Meta-Scorecard has two features:
First: It is a collection of metrics from all other scorecards. Usually I am interested in just a handful of metrics, which come from different scorecards (Basic, Advanced, my own).

I’d like to be able to define my personal Scorecard which shows just the metrics I am interested in.

Second: In the “Background” of such a Meta-Scorecard we should have some additional information for each metric:
• This metric is active in multi objective optimization. (A Checkbox)
• A “medium” value for this metric
• A “good” value for this metric
• An “excellent” value for this metric
• A “weight” for this metric, probably a percentage value between 0% and 100%.

Every trader can relate to the concept of “good” or “excellent” values for, say, the “Sharpe ratio”, so this way of creating a reference or target value does not need much explanation.

The weight says how “important” the metric is for the overall outcome.

Multi Objective Optimization
During an optimization run a “MOO-Metric” is calculated by the Meta-Scorecard for each iteration. The optimizer will use this metric as an optimization target.
Furthermore, some limits are needed to keep the optimizer out of bad territory.
Some of the metrics on the Meta-Scorecard should be flagged as “Limits” with an (optional) maximum and (optional) minimum value.
If an optimizer iteration hits one of these limits, the score is set to “a very bad value” to prevent the optimizer from continuing its search in that territory.
The obvious typical example is “Number of trades”, which should be above a certain limit.
(The idea of “Limits” comes from Carova.)
If the limits are not hit, the score is calculated as follows:

Calculation of MOO-Metric
Metrics come in all sorts of scales: Percentages, Ratios, and so forth. To make the calculation of the MOO Metric as flexible and useful as possible we use a linear scoring system.

Each individual metric is compared against the “medium”, “good” and “excellent” values. This results in a score for this metric.
The final metric is just the weighted sum of all scores of all active metrics.

Calculation of score for a single metric
Step 1: Delta for one score point:
delta := ((excellent – good) + (good – medium)) / 2

Step 2: Score of a current-metric value.
score := (currentMetricValue – good) / delta.

Example:
For the Metric “Sharpe Ratio” the user enters the following values:
Medium: 1.0
Good: 1.5
Excellent: 2.0

After one optimization run the current value for the Sharpe Ratio is 1.2. We calculate the score of this metric value as follows:
Delta := ((2.0 – 1.5) + (1.5 – 1.0)) / 2 = 0.5
Score := (1.2 – 1.5) / 0.5 = -0.3 / 0.5 = -0.6
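The two steps above, together with the limit handling described earlier, can be sketched as follows (a minimal illustration; the function and field names are hypothetical, not WL API):

```python
def metric_score(value, medium, good, excellent):
    """Steps 1 and 2: linear score of a metric against its reference values."""
    delta = ((excellent - good) + (good - medium)) / 2  # one score point
    return (value - good) / delta

def moo_metric(metrics):
    """Weighted sum of scores over all active metrics; returns a very
    bad value as soon as any flagged limit is violated."""
    total = 0.0
    for m in metrics:
        lo, hi = m.get("min"), m.get("max")
        if (lo is not None and m["value"] < lo) or \
           (hi is not None and m["value"] > hi):
            return float("-inf")  # "a very bad value"
        total += m["weight"] * metric_score(
            m["value"], m["medium"], m["good"], m["excellent"])
    return total

# Worked example from the text: a Sharpe Ratio of 1.2 scores -0.6
print(round(metric_score(1.2, 1.0, 1.5, 2.0), 6))  # -0.6
```

With this scheme a metric value equal to “good” scores 0, “excellent” scores +1 (when the medium/good/excellent steps are symmetric), and values below “medium” go increasingly negative.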
8
1,885
Solved
18 Replies

- ago
#1
The quotes below come from this other discussion. https://www.wealth-lab.com/Discussion/Recommended-Backtesting-Period-6424
QUOTE:
The question remains, what metric should be used to judge the success of a strategy? I think the answer lies with analyzing the Equity Curve somehow.

QUOTE:
I think you are right and it should be optimized towards overall profit!

It's more complicated than that. What needs to be done is to optimize against the ScoreCard interaction terms. In other words, the products of certain ScoreCard metrics including Net Profit.

For example, the interaction (product) term Trades*Winning% should definitely be optimized against based on some empirical analysis I've done. But the issue is much broader than this. Interaction terms including NetProfit, ProfitPerBar, and RecoveryFactor are also important.
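For illustration only (the numbers below are hypothetical, not from any backtest), such an interaction term is simply the product of two metrics, so a parameter set is judged on both at once:

```python
def interaction_objective(trades, win_rate):
    """Interaction (product) term Trades * Winning%: rewards parameter
    sets that combine a healthy trade count with a decent win rate."""
    return trades * win_rate

# Many trades at a modest win rate can outscore few trades at a high one.
print(interaction_objective(200, 0.5))   # 100.0
print(interaction_objective(40, 0.75))   # 30.0
```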

The immediate problem is that Wealth-Lab currently doesn't allow you to optimize against interaction terms. That's a major shortcoming of WL, so you can vote up this feature.

The other problem is exactly how these interaction terms should be built into the fitting model. That issue needs to be addressed with a stat package (like R), which can test different models (say, with stepwise regression). It's on my list of things to do (but it's not high on my list).
0
- ago
#2
QUOTE:
For example, the interaction (product) term Trades*Winning% should definitely be optimized against based on some empirical analysis I've done.

My research fully supports the value of "interaction terms" as optimization targets. I currently use a VERY CONVOLUTED approach in WL to achieve this goal. Being able to probe the Pareto Front is an essential part of trading system design. Though this is a feature in most professional system development packages, there are no retail products with this capability, probably because most retail traders are not aware of its value.

Vince
0
- ago
#3
QUOTE:
My research fully supports the value of "interaction terms" as optimization targets.... a feature in most professional ... packages, [but] there are no retail products with this capability,...
Interaction terms are something we worry about more in numerical analysis and graduate-level applied statistics courses. They are a good way to define an orthogonal term in a complex empirical model. But I think this type of multivariate modeling is typically done in academic research; I can't think of a case at the moment where I've seen it employed commercially. Hmm.

But you see interaction terms in published research papers. I think the social sciences may be better with using them in modeling than the natural sciences.
0
- ago
#4
QUOTE:
I can't think of a case at the moment where I've seen it employed commercially.


Most of the professional trading system packages that I have seen use MOO and all manner of approaches to define the objective function, including mathematical constructs that give you interaction terms.

Vince
0
- ago
#6
QUOTE:
They are a good way to define an orthogonal term in a complex empirical model.


I do believe that most people have little understanding of why this is important.

Vince
0
- ago
#7
I don't think it's obvious that "conserved entities" (what mathematicians call "energies") must come between the plus signs and the equals sign of an equation, and that they must behave linearly in order to be summed up as terms in an equation. My silly college math course (it was linear algebra) had me proving linearity when I didn't fully understand its importance at the time. I still remember asking myself, "Why is this important?"

I think if math was taught in a more illustrative way, rather than a theoretical, analytical, almost-abstract way, more students would be majoring in it.

Metrics like NetProfit, ProfitPerBar, RecoveryFactor, etc aren't fully orthogonal to each other since they are all measuring "price gains" in slightly different ways. So the question remains, if you place them within the same optimizing equation, will the optimizer find a unique, stable solution, or will the system be singular (i.e. no unique solution)?

I'm going to have to assume other trading platforms wouldn't be doing this in the first place unless a stable solution could be found. But I could be wrong. It makes one wonder.
0
- ago
#8
QUOTE:
Metrics like NetProfit, ProfitPerBar, RecoveryFactor, etc aren't fully orthogonal to each other


They are practically co-linear! ;)

Vince
1
- ago
#9
QUOTE:
They are practically co-linear!
I'm afraid you're right.

I think the only option is to combine the (#ofTrades*Win%) product term with one of the NetProfit, ProfitPerBar, or RecoveryFactor price-gain terms. Anything outside of that doesn't make numerical sense from a stability point of view.

I also consider the (#ofTrades*Win%) product term to be a primary, first-level term; and not an interaction term; they should have never been reported separately. This product is measuring the amount of "trade energy" that's successful.
0
- ago
#10
There are a number of metrics that attempt to define oblique (not quite orthogonal) variables, and I have looked at a bunch of them, but I can find a "limitation" with each. Glitch's WealthLab Score turns out to be pretty good actually, and is one of those "interaction" terms that we have been discussing. However, it too is "incomplete" (e.g. it fails to include "equity curve variability" in its construct).

Any metric that assumes a Gaussian distribution (that even simple visual inspection can eliminate) is also suspect. This is why I keep going back to defining a statistically robust metric as the answer, since I doubt that we will ever find a metric with a Normal distribution. This is a very long path back to why I am intrigued by the PSR. :)

Vince
0
- ago
#11
With the advent of build 13 the first part of this feature request is realized with the new "Metric Columns" setting in Tools->Preferences.

In the meantime it became clear that the proposed linear combination of existing metrics will not satisfy every user.

(see posts above and for example https://www.wealth-lab.com/Discussion/Optimization-Target-Formula-6495)

So I'd like to propose this extension to the Meta Scorecard Feature request:

Please Implement a possibility to define a "virtual metric" by entering an expression in the user interface:
There could be two Textboxes which accept strings like:

Name of virtual metric: MyNewMetric
Expression of virtual metric: 2 * APR - MaxDrawdown / 5

This should be quite easy to implement using Roslyn (https://en.wikipedia.org/wiki/Roslyn_(compiler)).

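A minimal sketch of the idea in Python (the post proposes Roslyn/C#; the whitelisting approach and the metric names here are illustrative assumptions): parse the user-entered expression once, reject anything but simple arithmetic over metric names, and evaluate it for each optimization iteration:

```python
import ast

# Whitelist: only arithmetic over named metrics is accepted.
ALLOWED = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant, ast.Name,
           ast.Load, ast.Add, ast.Sub, ast.Mult, ast.Div, ast.USub)

def compile_virtual_metric(expression):
    """Turn a string like '2 * APR - MaxDrawdown / 5' into a callable
    that maps a dict of metric values to the virtual metric."""
    tree = ast.parse(expression, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, ALLOWED):
            raise ValueError(f"disallowed syntax: {type(node).__name__}")
    code = compile(tree, "<virtual metric>", "eval")
    return lambda metrics: eval(code, {"__builtins__": {}}, dict(metrics))

my_new_metric = compile_virtual_metric("2 * APR - MaxDrawdown / 5")
print(my_new_metric({"APR": 30.0, "MaxDrawdown": -25.0}))  # 65.0
```

Compiling once and rejecting function calls keeps per-iteration evaluation cheap and prevents arbitrary code from sneaking in through the text box.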

0
Glitch8
 ( 12.10% )
- ago
#12
I'd rather not re-invent the wheel and build a whole UI around coding a custom metric when we already have an API capable of developing custom metrics. Rather than make the product more complicated, let's leverage the existing mechanism to create your own ScoreCard with custom metrics as a custom extension. The selectable metric columns already satisfy the other part of the equation.
0
- ago
#13
What would be nice is to weight the existing ScoreCard metrics into a composite metric without coding one's own custom ScoreCard metric. That's what this suggestion is about.
0
Glitch8
 ( 12.10% )
- ago
#14
Understood, I'm just weighing adding more complexity to WL7 versus the benefit for something you can already accomplish with the existing API.
0
- ago
#15
QUOTE:
... just weighing adding more complexity to WL7 versus ... something you can ... accomplish with the existing API.
Understood. I thought about writing my own custom ScoreCard, but in this process I realized there are many ways to proceed with a ScoreCard merit metric. My conclusion was to write several separate ScoreCard merit metrics, then solve for a linear combination that combines them into a composite metric.

Bottom line, one is going to have to solve for a composite metric eventually; there's no way around that. So it would be convenient if WL did the composite solving. I know what you're thinking. To fit a composite model, don't you have to solve for a P (probability) to determine how significant each term (ScoreCard metric) in the composite model is (and whether or not to include it), and isn't that something you would do with stepwise regression using R or similar stat package? And that answer is "yes," unfortunately.

So developing this composite metric may be outside the scope of WL and more along the lines of a stat package supporting stepwise or all-possible-subsets regression.

I guess what I was hoping for was a simple composite ScoreCard solution where WL users could use trial-and-error (say via a WPF slider interface) to adjust the weights of 3 or 4 ScoreCard terms. I appreciate this approach isn't ideal or publishable. If you wanted to publish, you would need to use the stat package to solve it rigorously. Academics would want to know the P values (significance) of each term in the composite.

Just keep the composite solver simple, with a WPF interface. After getting something working, someone can publish a rigorous analysis later and write a custom composite ScoreCard.
0
- ago
#16
It is on the way...
3
- ago
#17
This Feature Request is implemented by the new finantic.ScoreCard extension.

So it should count as "Implemented" instead of "Declined" 😀
0
Best Answer
- ago
#18
Makes sense. Done ✅
0
