- ago
How do I make a backtest deterministic? Although varying results can give a better estimate of a strategy's robustness, there are scenarios in which a run should be reproducible, especially while developing code and strategies, and above all when tracing back problems.

How do I configure WL to generate deterministic backtests?
0
693
Solved
38 Answers

- ago
#1
It's a FAQ: "Every time I run a Strategy I get a different result. Why?"
https://www.wealth-lab.com/Support/Faq
0
MIH7
- ago
#2
Thanks Eugene, but that is not what I asked for.



The randomness might be controlled by setting a fixed seed. This could be provided in the backtest preferences.

The mentioned solution does not make the backtest deterministic. Determinism must be independent of the strategy setup. One goal is to trace back problems in a strategy as it is actually used, not to change the strategy setup.

0
- ago
#3
There is no fixed seed. To make backtests repeatable you should follow the FAQ.
0
MIH7
- ago
#4
This says a lot about how the software is developed.

If the advantages of a deterministic approach for your own development are not clear, then it is what it is.

Repeatable with certain cosmetic pre-treatments ... is not deterministic! I stop, this is a waste of time!

If the code base is clean, this can be implemented in 5 minutes. It's faster than reading this post. I actually assumed that this basic concept is part of your development strategy.
0
- ago
#5
There's no need to be insulting just because someone doesn't agree with you. If you want a deterministic result you can use a transaction weight as described in the FAQ.
0
MIH7
- ago
#6
I am not being offensive. This is simply not determinism, neither by definition nor in the solution you propose.

Reducing the position size makes no sense if the position size grows again during the backtest. I am convinced that you are using the transaction weight to override the random mechanism. That is only correct if the transaction weight is part of the strategy when it is used that way at a later time.

This is not determinism, there are no two opinions about it.
Your solution conceals or circumvents the requirement, but does not solve it.

Making a strategy 100% reproducible already requires not changing it and running it under the same conditions under which it will eventually be used.

You would not discuss this if you were using it. For development, this is a very important concept for debugging complex software, even if the software eventually runs in non-deterministic mode in the end.

If you don't like this, then so be it.
0
- ago
#7
I obviously prefer the randomized approach because it lets you know that something MIGHT CHANGE if you were to try and run the strategy live with the current settings. It’s like a heads up, or a warning. I appreciate your opinion though.
0
Cone7
- ago
#8
@MIH - a scenario:
10 trade candidates to enter at Market tomorrow, but you have buying power for only 5 of them. What's your deterministic solution?

Ours is to assign a weight, in order to deterministically pick which 5 trades we're going to send to the market. In this case, I usually use an RSI value, so I would buy the 5 trades with the lowest RSI. This is deterministic.
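In code, this kind of deterministic ranking might look roughly like the following. This is a hypothetical sketch, not official sample code: it assumes the WL8 C# strategy API and that a higher Transaction.Weight gets higher priority (a preference controls this), so the RSI is negated to prefer the lowest values:

CODE:
// Hypothetical sketch: rank entry candidates deterministically by RSI.
// Negating the RSI value gives the LOWEST-RSI candidates the highest weight.
public override void Execute(BarHistory bars, int idx)
{
	if (!HasOpenPosition(bars, PositionType.Long))
	{
		// entry conditions here ...
		Transaction t = PlaceTrade(bars, TransactionType.Buy, OrderType.Market);
		t.Weight = -RSI.Series(bars.Close, 14)[idx];
	}
}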
0
MIH7
- ago
#9
@Glitch

Well, there is nothing wrong with a random approach. But do you know that you can make a random process repeatable?

If pseudo-random numbers are used, the generation of the random numbers can be repeated. The random number generator just needs to be initialized with the same value (https://en.wikipedia.org/wiki/Random_seed).

I am pretty sure you use the standard Random capabilities of C#. This is a pseudo-random number generator. Basically all you need to do is set one variable, the seed, to a fixed number, and the complete process that relies on random numbers will be repeated 1:1.

https://docs.microsoft.com/de-de/dotnet/api/system.random?view=net-6.0

A user might benefit from this when he wants to analyse a backtest deeply: it can be repeated. Tracing back problems becomes possible at certain points because they can be reconstructed 1:1, especially without changing the setup. With the default seed (I think it is derived from the system time) the process runs in non-deterministic mode. (If you saved that value, you could even repeat such a run.)
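To illustrate the point outside of WL, here is a minimal, self-contained C# sketch (plain .NET, no WealthLab dependencies) showing that two Random instances with the same seed produce identical sequences:

CODE:
using System;

class SeedDemo
{
	static void Main()
	{
		// Two generators initialized with the same seed...
		var a = new Random(123);
		var b = new Random(123);

		// ...produce exactly the same sequence of "random" numbers.
		for (int i = 0; i < 5; i++)
			Console.WriteLine(a.Next() == b.Next()); // prints True five times
	}
}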

The advantages for the software developers can even be much more important.

Note: I don't write this to convince anybody now. You can use this information as you like.
0
MIH7
- ago
#10
@Cone.

QUOTE:

@MIH - a scenario:
10 trade candidates to enter at Market tomorrow, but you have buying power for only 5 of them. What's your deterministic solution?

Ours is to assign a weight, in order to deterministically pick which 5 trades we're going to send to the market. In this case, I usually use an RSI value, so I would buy the 5 trades with the lowest RSI. This is deterministic.


Nothing wrong with it.

But you can make random walks repeatable too, very easily. This works not only with transaction weights. Do not unnecessarily restrict determinism. There are really many scenarios where this software and its development can and will benefit.
0
Cone7
- ago
#11
Sure we're aware of that - it's the way it worked in Wealth-Lab 2.0, 23 years ago.

We changed it in Wealth-Lab 5.0 so that traders understood that to trade their single random-result strategy deterministically, they need to go the extra step to assign a Position.Priority, as it was called then. This actually becomes part of the strategy and how you trade it.

Of course, if you like the single random result, you can assign your seed and the random number to Transaction Weight. If you do it that way, you'll get out of sync with your live trades very quickly.

I'll just add that stop/limit systems should use Granular Processing (Advanced Settings) for a true-to-life deterministic backtest.
1
MIH7
- ago
#12
EDIT: you might want to read post #15; our posts overlapped, but I don't want to delete what I wrote at this point. It looks like I misunderstood something.

Original:
Yes, but there is trading, developing, debugging, and situations where you want to trace back problems, and there it will be very useful. There is no reason to throw it out completely. There are many useful scenarios.

Testing a strategy in the trading context, I would prefer non-determinism too.
But even there, situations might occur where I would like to look into a backtest more than once for deeper analysis.

From a technical point of view, this can only help, not hurt. I cannot remember a single scenario in which an algorithm running in deterministic mode damaged anything.
It is more or less free to maintain, and having it at hand, you will certainly use it in the right context pretty soon. I am sure.
0
- ago
#13
I’ve been programming for over 40 years, starting with video games for the Atari 800 in 6502 assembly language. I know about random seeds. Believe me I’ve watched enough Minecraft videos too to know what they can do 😂

This is a choice, not ignorance.
1
Cone7
- ago
#14
MIH - it's really only 2 lines of code. Create a static Random variable and assign its next value to Transaction.Weight.

CODE:
//execute the strategy rules here, this is executed once for each bar in the backtest history
public override void Execute(BarHistory bars, int idx)
{
	if (!HasOpenPosition(bars, PositionType.Long))
	{
		//code your buy conditions here
		Transaction t = PlaceTrade(bars, TransactionType.Buy, OrderType.Market);
		t.Weight = _r.Next();
	}
	else
	{
		//code your sell conditions here
	}
}

//declare private variables below
static Random _r = new Random(123);
1
MIH7
- ago
#15
@ Glitch, sorry, I did not mean to be unkind :-)
@ Cone

It looks like I overlooked something. You wanted to tell me that I can implement this random behaviour on my own. I didn't realize that. OK, good argument.
Is there something else that I need to keep in mind to make a backtest deterministic? Or is it just about Transaction.Weight?
0
MIH7
- ago
#16
First of all, I would like to apologize, I was probably a bit harsh.
But I also think we could have been at this point from the beginning.

Well, as the subject of the thread says, I am interested in how to do it.
It does not need to be done by you guys, but I need to know what has to be done.

The first step, initializing the random numbers in the strategy code and assigning them to the transaction weight, is fine. (For me, since I can code it easily. Not sure how Block users would do it.)

But this step alone will not give the same result. Even after reloading the strategy and running it for the "first" time again, the results are different. So if this is a necessary step, something else is missing.

For example, it looks like multiple threads are running. This might be another point. Can you confirm this, and if so, can the number of threads be reduced to 1 for such a scenario? (I was curious about this for other reasons already; for example, using 16 cores would be nice. I was not able to find a configuration for it so far.)
0
- ago
#17
QUOTE:
(For me, since I can code it easily. Not sure how Block users would do it.)

They simply drop a "Transaction Weight" condition onto an entry block.
0
MIH7
- ago
#18
Hello Eugene. Please show me a picture where the strategy's random generator is initialized, as in the code, by dropping the TW into the block.

So, we reached a point in the discussion where we can talk about the contents. Simply, let's do it.

1. Initializing the RNG via code and modifying the TW. Solved (but not sufficient). Next point ...
2. Threading: what can/needs to be done in this context?

Please stop giving these kinds of answers. They seem ignorant, as if the questions were completely pointless.
They are not, for the simple reason that the proposed solutions and previous approaches do not lead to the desired result.
0
- ago
#19
Michael,
You asked a question re: Blocks, I answered close to the topic. 🤷‍♂️ Reading between the lines, Blocks users (and not just them) do not seem to be concerned over it.

QUOTE:
But even there might occur situations where i would like to look into a backtest more than one time for deeper analysis.

Have you checked out finantic.ScoreCard? It has a nice solution for your crusade quest for determinism in the form of "Compare Tool" that lets you save backtests to a vault and then compare with subsequent runs:
https://www.wealth-lab.com/extension/detail/finantic.ScoreCard#screenshots
0
MIH7
- ago
#20
QUOTE:

... for your crusade quest for determinism ...


lol

I just want to be able to create an identical signal sequence when running a backtest more than once. And that is a little bit more than saving and looking at the results: while rerunning the process you can follow the logic and the state of variables in the code.
This is a significant difference. As another example: if a backtest setup is not deterministic but should be, you can trace back the problem(s) for sure.

So, I continue the crusade :-) until the feedback is "It is not possible to make a backtest deterministic with WL" or we have solved it step by step. Thank you in advance.

One of many scenarios: the last trading session would be interesting too. I have the data for Friday. If I were able to run the strategy/process again (exactly the same way), I could look into the signals produced by WL again, in the same way they were processed on Friday. At least the WL activities could be traced that way.

You could even hand over this strategy with the data. You could look directly and see what the code base is doing, and so on ...

Remember, I am doing paper trading not only to test strategies. It is also a useful way to analyse technical issues in automated trading.
0
- ago
#21
I also see a decisive advantage in Wealth-Lab's current implementation:

Inexperienced users/traders might be fooled by a deterministic approach.

You start to develop initial strategies, and then the question comes up: why aren't my backtests consistent?
Then you deal with the topic and find out that "Transaction weight", "Reducing position size" or perhaps "Margin Factor" provide the appropriate answer in backtesting.

Cone's solution with the "static Random variable" even provides a solution for your specific approach.
2
vk7
- ago
#22
QUOTE:
How do I configure WL to generate deterministic backtests?


I am not sure if you are satisfied with the suggestions you have received so far, even though I think they are all valid and good.
I am not a scientist nor a programmer but I think I know a few things about strategies.
If you wish to make the backtest deterministic, the two ways suggested here are really the most practical, "especially when tracing back problems". I cannot see why reducing the position size would not solve the issue (tracing back problems), since it does not distort the result.
Shall we have a call next week? I can send you my availability via Calendly, and we can talk in German.

VK
0
MIH7
- ago
#23
Thanks alpha70 and VK for your feedback.

For trading aspects I am fine with it as it is. And I can agree that, in the end, strategies should not be tested this way. For me this is only one aspect.

Trading automatically requires really robust software; otherwise it will cost a lot of money. Intraday trading, especially scalping strategies running automatically, will produce a lot of signals. There will be losing trades anyway, but I don't want to lose money because of software issues. And there is the other side, the broker software, which will introduce problems from time to time.

Now, this kind of process can help a lot in tracing back problems, because the problems can be repeated and therefore analysed exactly as they occurred. This is the most important point, and it is not about trading itself.

Cone's suggestion goes hand in hand with the FAQ. That is fine. It also provides a way that can be used individually, which is fine too. But some other elements need to be taken into account. Just using his solution will not produce the same result for the backtest; it is only one step among others.

So, it is really about what you intend to do with it. When I started to use WL some weeks ago, I only wanted to use Blocks. Really, I did not want to code anything. But I was inspired to do so because of the flexibility you can get. It's true! It is simple, and you can express your ideas even better than with Blocks. The next step then is to leave the WL coding area and set up Visual Studio, which is also supported by the WL team. Great! Once in this frame, however, you start to think differently about the complete package.

Even if users did not use it actively, the developer team could analyse and perhaps rule out problems caused by WL pretty straightforwardly. There is actually a topic going on right now (I am only guessing what the problem might be). The user benefit would be a more robust product, and tracing problems could be done more precisely. You could say, for example: use the time frame from T0 to T1 with the following symbols, and pass everything together with the strategy. The team would be able to follow the whole trading session exactly as it happened, within minutes.
It will not always be as easy as I describe, but this can be a very useful and powerful improvement.

Having a such a feature in the back hand does not hurt at all. But having scenarios where you want to use it and you don't have it, that costs time and money.

Just, my two cents ...

Have a nice sunday!
0
- ago
#24
QUOTE:
It will not always be as easy as I describe...

Oh yes, it won't be. The data downloaded can itself be different from PC to PC, vary from day to day (due to glitches, corrections, omissions etc.) and even can depend on subscription level (less bars for free, complete history for $$$). Ask @kazuna, he used to wipe out the data and refresh daily (at least in WL6).
0
MIH7
- ago
#25
In general you can pass data, settings and strategy, or under the right circumstances you can even run a session on the user's PC. Really, Eugene, it is a matter of intention. There are good reasons to have such a feature at hand. It can be used in the right and the wrong context, like every powerful tool.

Cone set up a strategy to trace back a problem on Friday. Now, this is pretty hit-or-miss: maybe there was something useful, maybe not. It is not systematically driven. The chances of isolating the problem in such cases would be considerably increased. No guarantee, but a systematic approach would be traceable even via code tracing.

Edit: Reading your last post, I see a lot of talent for scalping (just kidding)
0
- ago
#26
Just wanted to share this modified "Knife Juggler" that uses a static seeded random number generator to produce the same (but still random) results each run.

CODE:
using WealthLab.Backtest;
using System;
using WealthLab.Core;
using WealthLab.Indicators;
using System.Collections.Generic;

namespace WealthScript1
{
	public class MyStrategy : UserStrategyBase
	{
		public MyStrategy() : base()
		{
			AddParameter("Percentage", ParameterType.Double, 2, 1, 20, 2);
		}

		public override void Initialize(BarHistory bars)
		{
			source = bars.Close;
			pct = Parameters[0].AsDouble;
			pct = (100.0 - pct) / 100.0;
			multSource = source * pct;
			PlotStopsAndLimits(3);
			StartIndex = 20;
		}

		public override void Execute(BarHistory bars, int idx)
		{
			Position foundPosition0 = FindOpenPosition(0);
			if (foundPosition0 == null)
			{
				//buy conditions
				val = multSource[idx];
				_transaction = PlaceTrade(bars, TransactionType.Buy, OrderType.Limit, val, 0, "Buy at Limit 2% below Close");
				_transaction.Weight = _rnd.NextDouble();
			}
			else
			{
				//sell conditions
				Backtester.CancelationCode = 41;
				if (idx - foundPosition0.EntryBar + 1 >= 2)
					ClosePosition(foundPosition0, OrderType.Market, 0, "Sell after 2 bars");

				Backtester.CancelationCode = 41;
				value = (5.00 / 100.0) + 1.0;
				ClosePosition(foundPosition0, OrderType.Limit, foundPosition0.EntryPrice * value, "Sell at 5% profit target");
			}
		}

		public override void NewWFOInterval(BarHistory bars)
		{
			source = bars.Close;
		}

		private double pct;
		private double val;
		private TimeSeries source;
		private TimeSeries multSource;
		private double value;
		private static Random _rnd = new Random(12345);
		private Transaction _transaction;
	}
}
0
MIH7
- ago
#27
Thank you Glitch. Cone already provided this solution. It would be a first step, but it does not (necessarily) repeat the results. I need a break on this topic, and I have made my points in several posts.

Maybe you would like to read #23 and #25 as a summary. There has been feedback from several people which I tried to answer, plus a response to Eugene.

In post #18 I tried to build on Cone's solution. But the discussion remained on the level of "right and wrong".

Have a nice sunday!
0
MIH7
- ago
#28
@VK, of course I would appreciate a call, no doubt.

I did not respond to your specific point about position size.

If you change the position-size setup, you may end up back at the same point during the backtest: position size and capital go hand in hand. You simply move the state from the beginning to the middle of a backtest. If you are lucky, you hide the effect that would have occurred in the original setup.

The second point is that a constant result is not the goal in itself. The result should be constant for given properties. So if you change the properties, you break the precondition. In other words, if you want to test a specific setup and want the result to be deterministic for that setup for some reason, you cannot change it; otherwise you tested a different setup, not that specific one. (I would like to point out that we are not necessarily talking about a trading context here. The more common reason to repeat a test is to examine technical issues.)
0
- ago
#29
I am a bit late to the party, sorry.

There is a lot of randomness in the markets which makes it tricky to introduce even more randomness with backtest software.

It is certainly helpful to avoid curve-fitting and to keep inexperienced developers from fooling themselves. This is why WL chooses trades randomly, as the default policy, when there is not enough capital for all of them.

On the other hand, the randomness introduced by the software makes it difficult to develop, compare, debug, and optimize small changes in the logic, parameters, position sizing and so forth, because this "second randomness" usually hides the effects you are looking for.

I recently came up with this solution:

I initialize a "weight" in PreExecute(), let's say like so: weight = 1000;

Then, whenever the strategy opens a position I assign this weight and increase it:

CODE:
transaction.Weight = weight++;


If I want to see the effects of choosing completely different positions (when there is not enough capital for all), I use:

CODE:
transaction.Weight = weight--;


With this approach you get the same positions every day (during the backtest) even if you change things like position size.

The solutions proposed above (in the other posts) change the whole picture if one single position is missing from a backtest (due to whatever small change you introduced).
2
Best Answer
- ago
#30
Great solution!
0
MIH7
- ago
#31
Hello DrKoch, thanks for joining.

Do you mean something like that?

CODE:
static int weight;

public override void PreExecute(DateTime dt, List<BarHistory> participants)
{
	weight = 1000;
}


and when placing the order you simply increment the weight?

CODE:
Transaction t = PlaceTrade(bars, TransactionType.Buy, OrderType.Market, 0, "Buy");
t.Weight = weight++;


Your solution and Cone's and Glitch's solutions have in common that you operate only on the transaction weight to make the result repeatable. Maybe I missed something; we can clarify it now.

When I open a strategy and run it several times, it looks like I get the same result for every run. That's fine. If I close the strategy and open it again, I again get constant results across several runs. So the only differing result is the one created after reloading the strategy.

1. Loading strategy, running backtest 5 times, result 5 times -1,24% (same positions within the test)
2. Loading strategy, running backtest 5 times, result 5 times -0,99% (same positions within the test)

So according to the subject the backtest seems to provide repeatable results. Great.

Maybe someone can explain what is different when the strategy is loaded.
While it seems deterministic from that point on (and I had not mentioned that I reload the strategy), it seems to have a different state (data?!) when it is reloaded.

I just want to be aware of it.

I need to come back later. Thanks so far.
0
MIH7
- ago
#32
Hello everybody.

Now I will present a simple solution that improves on DrKoch's.

Solution DrKoch

His solution makes it possible to repeat the strategy run and retrieve the same result.
As long as the strategy stays loaded, all repetitions provide the same result.
If the strategy is reloaded, it again returns repeatable results, but not identical to those of the previous session. For what purposes this can be used has already been discussed in detail in the thread.

CODE:
// From his description, a weight is set before the strategy executes
public override void PreExecute(DateTime dt, List<BarHistory> participants)
{
	weight = 1000;
}

// Later, when an order is placed, the transaction weight is incremented
if (makeAnOrder)
{
	Transaction t = PlaceTrade(bars, TransactionType.Buy, OrderType.Market, 0, "Buy HH40");
	t.Weight = weight++;
}


My solution

Here comes a different solution that is persistent over different loads/sessions.

CODE:
public override void Initialize(BarHistory bars)
{
	...

	// The sortSymbols() method provides a weight for each symbol.
	// It is called once per backtest. There may be a better place to call the
	// function; for illustration this is sufficient.
	if (!symbolsAreWeighted)
	{
		sortSymbols();
	}

	// Each symbol has its own weight, initialized by sortSymbols().
	// Based on the ideas presented in the discussion, this weight is
	// assigned when an order is placed.
	setSymbolWeight(bars.Symbol);
}


The big difference is that when the strategy is loaded again, the weights can be recreated and are available from the start.
Setting up the same weights in the initialization routine makes the strategy repeatable after reloading it.

CODE:
// Although it is not idiomatic C#, it demonstrates the idea. Sorting the symbols
// simply provides a natural ranking and a unique weight. Any other algorithm to
// assign a weight can be used too, e.g. bars.Count if you want to prefer symbols
// that provide more statistical data, or whatever you have in mind.
void sortSymbols()
{
	BarHistory tmp;
	bh = BacktestData;
	for (int i = 0; i < bh.Count - 1; i++)
		for (int j = i + 1; j < bh.Count; j++)
		{
			if (bh[i].Symbol.CompareTo(bh[j].Symbol) > 0)
			{
				tmp = bh[i];
				bh[i] = bh[j];
				bh[j] = tmp;
			}
		}
	symbolsAreWeighted = true;
	return;
}

// The formula is arbitrary. It simply ensures different weights.
// It can be combined with other ideas like PRNGs or solutions like DrKoch's.
void setSymbolWeight(string symbolName)
{
	for (int i = 0; i < bh.Count; i++)
		if (bh[i].Symbol.CompareTo(symbolName) == 0)
		{
			symbolWeight = i * 1000;
			break;
		}
}


The weight is assigned when an order is placed.

CODE:
if (makeAnOrder)
{
	Transaction t = PlaceTrade(bars, TransactionType.Buy, OrderType.Market, 0, "Buy HH40");
	// t.Weight = weight++; // DrKoch
	t.Weight = symbolWeight;
}


Both solutions are lightweight. DrKoch's solution is limited to the time the strategy stays loaded; my solution is persistent and produces the same results after the strategy is reloaded. Rearrangements of the weights can be done easily, so different kinds of deterministic results can be produced in both cases.
1
- ago
#33
Just back from vacation, finding this interesting discussion between developers, traders, scientists... What's the point? Doesn't position sizing make the backtest deterministic? Thanks for a short summary.
1
MIH7
- ago
#34

#32: persistent solution (MIH) / operation on the transaction weight, preparing symbol weight (Transaction.Weight) via Initialize()
#29: solution (DrKoch) / operation on the transaction weight, via PreExecute()
#23: reply to Vk(#22) and alpha70(#21)
#28: reply to VK context position sizing

In general, there was much discussion about the purpose of this procedure. The fact that this is not a black-and-white issue was often ignored. In summary: measuring the trading performance of a strategy should be done with variance, since that gives more realistic results. In other scenarios such as development, debugging, detailed analysis, comparison and optimization, it can be very useful to repeat the process exactly.

Ultimately, it is up to the user to use the tools wisely.
Having the option available is essential (for me).

Algorithm

The final algorithm can be implemented in many forms.
What is important is to provide weights for the symbols before the backtest starts and to be able to change them reproducibly during the backtest. The backtester then processes the data in the same way as in previous runs; all that is required is to create the same start state.
0
Cone7
- ago
#35
Let's not lose sight of the fact that we have a Building Block rule to assign weights for reproducible backtests. The many solutions provided are simply different ways to assign weights for specific testing purposes.
1
MIH7
- ago
#36
The solutions discussed all differ.

1. FAQ - this solution does not necessarily lead to the same result. Whether it matters (in terms of content) to adjust the position size is case-dependent. If it is adjusted, the process is not guaranteed to lead to repeatable results.

2. DrKoch's solution repeats the backtest 1:1 and the setup (position sizing) does not need to be adjusted.

3. My solution makes it possible to repeat the backtest 1:1 even after the strategy is reloaded.

All scenarios can be useful, but they do not represent the same solution.
What they do have in common is that complete control can be exercised through Transaction.Weight.
I have not checked whether variants 2 and 3 can also be realized via Blocks. That could be another distinction.
0
- ago
#37
Hi MIH,
after you improved my idea with a much better one, let me improve your implementation a bit:

I assign random weights to each symbol in BacktestBegin():

CODE:
public override void BacktestBegin()
{
	int seed = 42; // change this value for another deterministic run
	Random rnd = new Random(seed);
	foreach (BarHistory bh in this.BacktestData)
	{
		bh.UserData = (int)(rnd.NextInt64(10000) * 100);
	}
}


Then I use (and increment) these weights in Execute() for each new transaction:
CODE:
public override void Execute(BarHistory bars, int idx)
{
	int weight = bars.UserDataAsInt;
	if ( /* ... entry logic ... */ )
	{
		Transaction t = PlaceTrade(bars, TransactionType.Buy, OrderType.Market, 0, -1, $"Weight={weight}");
		t.Weight = weight++; // use weight-- to get a different deterministic run
		...


This saves a few lines of code and CPU cycles. ;)

The implementation allows for different deterministic runs.

0
MIH7
- ago
#38
Thanks for sharing, I appreciate it.
0
