Hello,
As I have read on the forum:
1. Every Strategy window loads its own copy of the BarHistory data for the DataSet.
2. One BarHistory instance is shared between all the threads of the Exhaustive Optimizer.
3. The TimeSeries.Cache Dictionary can be used to cache data (indicators, etc.) in BarHistory objects for use between the threads of the Exhaustive Optimizer.
4. Such a cache between runs will reduce memory usage compared to using the "new" statement every time.
Am I correct?
Solved
14 Replies

Glitch8
#1
Yes, use the "Series" static method to create indicator instances; it will use the cache and reduce memory usage in this case.

RSI rsi = RSI.Series(bars.Close, 20);

instead of

RSI rsi = new RSI(bars.Close, 20);

The reason we can't do this in Blocks is because an indicator might not implement a Series method. Maybe we can use reflection to check and use Series ... hmmm ...
#2
QUOTE:
"Series" static method ... it will use the cache and reduce memory usage

If you cache the TimeSeries values, you'll increase (not reduce) memory usage because you're keeping the TimeSeries in memory. Hmm... now I'm wondering: when you assign a cached "Series" to a variable, is that assignment by reference or by value? It doesn't seem like a deep copy is being made, so perhaps it's by reference? Could someone clarify that?

At any rate, if I declare the TimeSeries uniquely in the MyStrategy{block}, I typically use the "new" operator. But if the TimeSeries is used multiple times (like inside an indicator), then I cache it (say, with Series) because I only need to create it once for multiple uses.

The problem with caching is there's no simple way to purge the cache, so your memory usage keeps going up.

The advantage of using the "new" operator is that the memory used by the TimeSeries is reclaimed by the garbage collector when MyStrategy is finished with that BarHistory and such. That will save you memory, but it also means that if you re-execute the strategy, it will have to recreate that TimeSeries again (which slows you down). If instead you cache it, it will run faster on the second execution, but you'll be using more memory.

Maybe we need a new blog article, "To cache, or not to cache?"
#3
For a TimeSeriesBase, the Cache property is an instance member of type ConcurrentDictionary&lt;string, object&gt;. When you add something to the Cache, you are just assigning the reference (assuming a reference type) as the value for the given key. No deep copy occurs.
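To see this reference-vs-copy behavior in isolation, here is a minimal stand-alone sketch in plain .NET (no WealthLab types; the key name is made up). Mutating the value retrieved from the dictionary mutates the original object, confirming that only a reference is stored:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

class CacheDemo
{
    static void Main()
    {
        // Stand-in for TimeSeriesBase.Cache: a ConcurrentDictionary<string, object>
        var cache = new ConcurrentDictionary<string, object>();

        var values = new List<double> { 1.0, 2.0 };
        cache.TryAdd("MyKey", values);   // stores the reference, not a copy

        // Retrieving the value returns the very same object instance
        var fromCache = (List<double>)cache["MyKey"];
        fromCache.Add(3.0);

        Console.WriteLine(ReferenceEquals(values, fromCache)); // True
        Console.WriteLine(values.Count);                       // 3 - the original saw the change
    }
}
```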

Because Cache is simply an instance member (not static) it is going to be garbage collected like any other member of any other object - when there are no references and the GC runs.

So, the question becomes when is the BarHistory done being referenced so that its members (e.g. Close, Open, etc.) have no references and their caches have no references and so on? At least two scenarios are when you run a strategy and when you run an optimization.

The big advantage of using Series (and hence the above-noted Cache) is especially in optimization. For a particular symbol, one BarHistory instance is used for the whole optimization run. Using Series (and hence the cache), you're not recreating data over and over.

The following code will let you explore when a new BarHistory is created when running a strategy or performing an optimization. Please read the comment in the code. It simulates the use of the TimeSeriesBase Cache member the way an indicator would (in its Series method), to explore when the BarHistory (and hence its TimeSeries members) is being re-used.

I always use Series and not new() because of the performance advantages in optimization runs.

Perform various sequences of strategy runs and optimizations and this should help clarify the cache and when objects end up in the GC. For example, if you do an optimization run (use a dataset with two symbols to keep it simple) you'll see the re-use. Right after that, run the strategy. You should see that new BarHistory instances are being used, and hence the ones used for the optimization run are off into the weeds. They're going to be garbage collected, along with their members (think cache) and so on.

CODE:
using System;
using System.Collections.Generic;
using WealthLab.Backtest;
using WealthLab.Core;
using WealthLab.Data;
using WealthLab.Indicators;

namespace WealthScript8
{
   public class MyStrategy : UserStrategyBase
   {
      public MyStrategy()
      {
         AddParameter("Test Param", ParameterType.Int32, 1, 1, 5);
      }

      //create indicators and other objects here, this is executed prior to the main trading loop
      public override void Initialize(BarHistory bars)
      {
         // The following expression of the if statement simulates the use of a TimeSeriesBase Cache
         // inside of a Series method of an indicator. In this small if/else statement
         // we can see when the BarHistory is effectively recreated for a run of a strategy.
         // You'll need to go to the Log Viewer to see the output.
         if (bars.Close.Cache.TryAdd("MyKey", "test"))
         {
            WLHost.Instance.AddLogItem("MyStrategy", $"Added for {bars.Symbol}", WLColor.Green);
         }
         else
         {
            WLHost.Instance.AddLogItem("MyStrategy", $"Already present for {bars.Symbol}", WLColor.Green);
         }
      }

      //execute the strategy rules here, this is executed once for each bar in the backtest history
      public override void Execute(BarHistory bars, int idx)
      {
         if (!HasOpenPosition(bars, PositionType.Long))
         {
            //code your buy conditions here
         }
         else
         {
            //code your sell conditions here
         }
      }

      //declare private variables below
   }
}
Best Answer
#4
Thank you for that insightful discussion.

QUOTE:
... when is the BarHistory done being referenced so that its members (e.g. Close, Open, etc.) have no references and their caches have no references and so on?

This is an easy question when using the "new" operator because when the last reference to an object is removed, so are all its values; that is, it's garbage collected.

But with the cache, this isn't so clear cut. To remove something from the cache you need to both remove its reference (created by .Series) and Clear its key. If you don't clear the key, then it stays in the TimeSeries cache Dictionary&lt;key,value&gt;. So now the question becomes, "When is the key actually cleared?" Is there a destructor somewhere issuing a Clear command to remove the key from the cache Dictionary? This isn't done automatically; otherwise, when is that destructor called?
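For what it's worth, ConcurrentDictionary does expose an explicit way to purge an entry. A hedged sketch in plain .NET (the cache key string here is made up; whether WL8 itself ever calls this is exactly the open question above):

```csharp
using System;
using System.Collections.Concurrent;

class PurgeDemo
{
    static void Main()
    {
        var cache = new ConcurrentDictionary<string, object>();
        cache.TryAdd("RSI,20", new double[] { 1, 2, 3 });  // hypothetical cache key

        // TryRemove deletes the key AND drops the dictionary's reference
        // to the value; once nothing else references it, the GC can reclaim it.
        if (cache.TryRemove("RSI,20", out object removed))
        {
            Console.WriteLine(cache.ContainsKey("RSI,20")); // False
        }
    }
}
```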

This discussion should really be put in a blog article because the cache key-clearing step isn't obvious. Typically, one would have to Clear the key explicitly.

QUOTE:
I always use Series and not new() because of the performance advantages in optimization runs.

The word "always" bothers me. Let's say you have an indicator that takes two parameters that are being optimized. Moreover, each parameter can take 8 possible values. That's going to create 8*8=64 single-use copies of that indicator in the cache. That's not good. (Hopefully the optimizer won't try all possible permutations.)

Let's instead say, "I use the .Series property on indicators whenever their parameters are not part of an optimization." That will get you better speed without filling up the cache with every possible parameter permutation.

I'm glad we got two computer guys discussing these particulars. ;-)
#5
From your statements about the ConcurrentDictionary, my understanding (which could be an incorrect interpretation) is that you assume the key has to be cleared to get rid of a reference to the corresponding value. That's not the case. Decompile ConcurrentDictionary. Then, in the code, find private sealed class Node. You'll see the key and the value sitting side-by-side as fields in that class. (You can chase the references all the way back up through the Tables class and so on.) My suggestion is not to assume that the way a class (ConcurrentDictionary) is exposed for ease of use (look up the key, get the value) implies a reference dependency between the key and the value in its internal implementation.

In any case, the references in the typical scenario for running a strategy and performing optimization are from BarHistory time series members (e.g. Close, Open, etc.) and UserStrategyBase member variables holding on to indicators that may be in the cache of a time series.

When do the references go away? It should be when the UserStrategyBase instance and the BarHistory instance are no longer referenced. Then, assuming the members of those instances have no other esoteric references, they too have no references (other than from the dying UserStrategyBase and/or BarHistory), and so on and so forth until you get to the bottom of the whole chain. It's like Pac-Man gobbling up the things only it references.

Next, you mentioned for example 8*8 indicator instances. I see your point that this could cause excessive memory usage if the strategy parameter values are used by indicators and you have a lot of permutations using exhaustive optimization, for example. (Aside: I use SMAC like 99% of the time, because it is awesome.) So, for those indicators that utilize strategy parameter values you could use new(), and for the others that don't use strategy parameter values you could use Series (you know, like maybe ATR with length 14, if that's good enough). It could be the best of both worlds.

So, I ran some tests on a machine with an AMD 5950X processor (32 threads) and 64 GB memory. I used an intraday one-minute strategy, pre/post market data included, for 30 days of 20 symbols, with 2 parameters enabled for optimization; those two are used by two indicators. There are 13 other indicators which use fixed values (you know, like EMA 20, etc.). In the end, it's roughly 12,750 bars per symbol.

For exhaustive optimization there were 896 permutations. For SMAC, 300 permutations were configured. In many test runs I exited the app when the optimization time and progress bar were near 100% and not moving. I assumed WL8 was compiling stats at that point. I did not want to wait. Here are the results:

Using new() for all indicators with Exhaustive optimization:
Start memory: 589 MB
Peak memory: 6,194 MB
896 results
Run-time: 9 minutes, 28 seconds

Using new() for all indicators with SMAC optimization:
Start memory: 472 MB
Peak memory: 6,166 MB
296 results
Run-time: 41 seconds

Using Series for all indicators with Exhaustive optimization:
Start memory: 688 MB
Peak memory: 5,013 MB
896 results
Run-time: 5 minutes, 29 seconds

Using Series for all indicators with SMAC optimization:
Start memory: 693 MB
Peak memory: 2,140 MB
296 results
Run-time: 29 seconds

I did not run a test using new() for just the indicators that utilize strategy parameter values and Series for all the others. I did not want to wait on the exhaustive optimization. I think it's safe to assume the results would be somewhere between the two test sets' results shown above.

Using new() for all indicators has higher peak memory, and it was also using more memory while running versus using Series exclusively for the indicators. But I would assume it didn't do a GC because there was still plenty of memory to use.
#6
Well, the experimental results speak for themselves. Clearly the SMAC optimizer and caching (via the .Series method) give the best results. I want to personally thank you for posting those runs. Very interesting results.

QUOTE:
When do the references go away? It should be when the UserStrategyBase instance and BarHistory instance are no longer referenced.

By a "reference", I'm assuming we are talking about the reference created by the .Series method. The problem is the key into the Dictionary still exists (when you null the reference) unless you Clear the key. And the GC shouldn't remove the value if the key is not cleared; otherwise, you'll have a key pointing to nothing.

ConcurrentDictionary must have a destructor, ~ConcurrentDictionary(), which handles the "delete" operation of its Dictionary objects. Exactly when is that destructor called, and under what circumstances will it delete a Dictionary object? We don't know. Perhaps the destructor has a way of detecting when a BarHistory is no longer needed by the optimizer. I would certainly like to see the destructor code.

We should also remember the cache for BarHistory and the cache for TimeSeriesBase may behave differently. Can anyone comment on that? I've been using the BarHistory cache for my own indicators, but maybe I should be using the TimeSeries cache instead.
#7
Everybody is using SMAC... So I will definitely read the KnowHow: Optimizers article and try to dive deep into optimizers.
PS: it would be great to have some kind of separate forum web page for each paid plugin.
PPS: it would be really nice to have a link to KnowHow: Optimizers in the description of finantic.Optimizers in the Extensions Store.
PPPS: And I know that I ask too much - but a small video presentation with an explanation on YouTube would do more than 1000 screenshots.
#8
QUOTE:
By a "reference", I'm assuming we are talking about the reference created by the .Series method. The problem is the key into the Dictionary still exists (when you null the reference) unless you Clear it. And the GC shouldn't remove the value if the key is not cleared; otherwise, you'll have a key pointing to nothing.


Start at the beginning and consider conceptually what happens to the Cache instance, like any instance that has a reference. For the sake of keeping this example simple, assume that you have a UserStrategyBase instance and a BarHistory instance. The BarHistory instance is referenced only by the UserStrategyBase instance. The BarHistory's Close time series is holding onto one indicator in its Cache instance. The UserStrategyBase instance is also holding onto the indicator in one of its properties.

Now, assume for whatever reason that WL8 is done with the UserStrategyBase instance, and WL8 was the only thing referencing it. Assume the UserStrategyBase instance has the only reference to the BarHistory instance, and the BarHistory has the only reference to its Close time series. The UserStrategyBase is a goner: it has no references. Therefore, the BarHistory is going to get clobbered, because the sole object that was referencing it has zero references. The same holds true for the UserStrategyBase instance's property holding onto the indicator, and for the BarHistory's Close member. Next in line to die is the Close time series' Cache property (the ConcurrentDictionary), because it is only referenced by the Close time series object, which is dying. The indicator sitting in the ConcurrentDictionary has only two references to it - one from the ConcurrentDictionary and one from the property in UserStrategyBase. But we know UserStrategyBase, BarHistory, BarHistory.Close, and the ConcurrentDictionary (Cache) are all going away because of the chain of dying references. So, there's nothing left to reference the indicator in the cache, and nothing left to reference the Cache. It's not needed anymore. Hence, they all just go away off to garbage collection.

To put it simply, follow the tree. If this wasn't the conceptual model you'd end up with memory issues in lots of apps.

It's like this: if I'm not being referenced, then whatever I solely reference is also out of references, and so on. The important word to recognize is solely.
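The "follow the tree" reachability argument can be sketched with stand-in classes in plain .NET (the class names here are simplified stand-ins, not the real WL8 types). A WeakReference lets us observe the collection without keeping the indicator alive:

```csharp
using System;
using System.Collections.Concurrent;
using System.Runtime.CompilerServices;

class ReachabilityDemo
{
    // Simplified stand-ins for the WL8 object graph (illustrative only)
    class Indicator { public double[] Values = new double[1000]; }
    class TimeSeries { public ConcurrentDictionary<string, object> Cache = new(); }
    class BarHistory { public TimeSeries Close = new(); }

    [MethodImpl(MethodImplOptions.NoInlining)]
    static WeakReference BuildAndDropGraph()
    {
        var bars = new BarHistory();
        var indicator = new Indicator();
        bars.Close.Cache.TryAdd("MyIndicator", indicator);
        // 'bars' goes out of scope when this method returns:
        // the whole chain loses its last root.
        return new WeakReference(indicator);
    }

    static void Main()
    {
        WeakReference weak = BuildAndDropGraph();
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        // The indicator was only reachable through bars -> Close -> Cache,
        // so once 'bars' is unreachable the entire chain (cache entry and
        // its key included) becomes collectable. Typically prints False,
        // though the GC gives no hard guarantee on timing.
        Console.WriteLine(weak.IsAlive);
    }
}
```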
#9
Of course, when the BarHistory instance goes away, so does its cache. But until then, the BarHistory cache remains, with all its objects. That means whatever you place in the BarHistory cache--even after its .Series-created reference is deleted--is going to remain until the entire BarHistory is deleted.

In other words, nothing is garbage collected in the BarHistory until the BarHistory is deleted. And that means what gets cached there never gets reclaimed until after the simulation run is over.

I guess the next question is whether that behavior is good or bad? It would be nice to reclaim BarHistory cache memory as soon as the backtester was finished with that particular BarHistory instance so the memory could be used by the next symbol in simulation.

But based on your experimental results, something is getting reclaimed from one symbol to the next. I can see that. There's a destructor somewhere doing that cache reclaiming we don't know about. I wish this behavior was documented better.
#10
QUOTE:
But based on your experimental results, something is getting reclaimed from one symbol to the next. I can see that. There's a destructor somewhere doing that cache reclaiming we don't know about. I wish this behavior was documented better.


Each test was performed after a fresh run of WL8. That is, I'd run WL8, run the test, record the results, exit WL8. That may impact your assessment.

Later today (2024/02/20 U.S. Eastern Time) I should have some time to run another test. I want to see what happens to memory if I run an optimization where the code uses new() for all indicators. Then, after optimization is complete, run a separate small strategy that ONLY runs (forces) a garbage collection (all generations) in Initialize to see if the memory use drops from 6 GB to perhaps 1 GB. I suspect a lot of the 6 GB for the new() test is just unreferenced indicator memory that the garbage collector has not reclaimed because it didn't run yet. From what I've read, Microsoft seems to make it a mystery as to when the garbage collector runs on its own.
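A minimal sketch of such a "GC-only" strategy, following the strategy skeleton posted in #3. This is an assumption of how one might force a full collection from Initialize, not the poster's actual code; the WealthLab types are as used in #3, and the GC.Collect overload is standard .NET:

```csharp
using System;
using WealthLab.Backtest;
using WealthLab.Core;

namespace WealthScript8
{
   // Trades nothing; its only purpose is to force a full, blocking
   // garbage collection of all generations when it is run.
   public class GCOnlyStrategy : UserStrategyBase
   {
      public override void Initialize(BarHistory bars)
      {
         // Collect, let finalizers run, then collect again so that
         // objects freed by finalization are also reclaimed.
         GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced, blocking: true);
         GC.WaitForPendingFinalizers();
         GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced, blocking: true);
      }

      public override void Execute(BarHistory bars, int idx)
      {
         // intentionally empty
      }
   }
}
```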
#11
QUOTE:
I suspect a lot of the 6 GB for the new() test is just unreferenced indicator memory that the garbage collector has not reclaimed because it didn't run yet. From what I've read, Microsoft seems to make it a mystery as to when the garbage collector runs on its own.

0th and 1st generation garbage collection is run in the background. I would "guess" that is a continual process whenever the app gets a time slice with extra CPU cycles. In contrast, 2nd generation garbage collection must halt the app altogether because pointers (or "references" in the C# context) become stale during this process. That 2nd generation GC is only going to happen when the "new" operator doesn't have enough contiguous heap memory to complete its task (such as opening a new Chart window). So it's possible the 2nd generation GC hasn't run yet.
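The generation mechanics can be poked at directly with GC.GetGeneration in plain .NET. A small sketch (promotion behavior is typical, but the runtime makes no hard guarantees, hence the hedged comments):

```csharp
using System;

class GenerationDemo
{
    static void Main()
    {
        object obj = new object();
        Console.WriteLine(GC.GetGeneration(obj));  // 0: fresh allocations start in gen 0

        // A blocking collection promotes survivors to the next generation
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj));  // typically 1 now

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj));  // typically 2 (the "full GC" generation)

        Console.WriteLine(GC.MaxGeneration);       // 2 on the standard .NET GC
    }
}
```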

There's a book published by Microsoft Press (Inside Windows) that may describe the GC process somewhat. But since GC is a run-time CLR process (under .NET 8.0 in this case), I'm not sure how much an OS book will discuss a .NET topic like this.
#12
I ran some new tests.

When I performed tests yesterday, I said I was waiting for WL8 to summarize results at the end of an optimization run. That was not the case. Instead, the strategy I was testing uses parameterized selection of moving average types (e.g. EMA, Hull, DoubleEMA, etc. - 31 in total) via a helper library. Well, I had the Adaptive Laguerre moving average as part of the set of averages, and I wasn't aware it has a long calculation time - hence the waiting I mentioned. I got rid of it and ran some more tests. This time, I used only SMAC for the optimizer because I didn't want to wait 10 or so minutes for Exhaustive to complete.

I used a separate strategy that used forced garbage collection of all generations. I ran the GC strategy at various points, so see below for the details...

Using new() for all indicators with SMAC optimization:
Start memory: 724 MB
Peak memory: 5,427 MB
257 results
Run-time: 36 seconds
Forced garbage collection after optimization complete : 5,199 MB
Run strategy: 5,562 MB
Forced garbage collection after run strategy: 4,201 MB
Change data range and run strategy: 3,024 MB
Forced garbage collection after data range change and run strategy: 1,887 MB
Close strategy: 1,889 MB
Forced garbage collection after close strategy: 1,867 MB

Using Series for all indicators with SMAC optimization:
Start memory: 708 MB
Peak memory: 2,389 MB
259 results
Run-time: 19 seconds
Forced garbage collection after optimization complete : 2,351 MB
Run strategy: 2,697 MB
Forced garbage collection after run strategy: 2,095 MB
Change data range and run strategy: 2,490 MB
Forced garbage collection after data range change and run strategy: 2,071 MB
Close strategy: 2,074 MB
Forced garbage collection after close strategy: 1,892 MB

When using new() for all indicators, changing the data range and running the strategy probably forces a new BarHistory (and perhaps a new UserStrategyBase), and hence the memory going down.

For my purposes, this is good enough. I'm going to continue to use Series for my strategies considering many have similar indicator characteristics to the test strategy.
#13
Thanks for posting all these details. What I take away from this is that using indicator cache memory speeds things up (that's clear) and may reduce memory usage in many circumstances. Precisely how memory is managed in the BarHistory cache is unclear. I wish the WL team would explain that better so we can make better choices on when to cache and when not to cache.

From a hardware perspective, memory is cheap today, so I would buy a processor with plenty of on-chip memory for WL. And that includes plenty of L3 on-chip cache memory. Happy computing.
#14
I have also thought a lot about the cache, and my conclusion is that it is a very useful and valuable feature for speeding things up. But I will only use it to a very limited extent. It is public, and you can make a lot of mistakes with it, consciously or unconsciously!

Just imagine if it didn't exist... you would think harder about the design of your solution!