I can clear the cache for a particular bar object by just saying ...
CODE:
bars.Cache.Clear();
So how can I clear all the TimeSeries caches for a particular stock? What's the command? Is there a command like in WL6 that will purge all caches for all bars objects and TimeSeries objects?
CoPilot knocked the following out in about 2 seconds. It doesn't recurse because you probably wouldn't need that. You could call this from your strategy code, e.g. from Initialize(...), BacktestBegin(...), etc., for whatever fits your needs.
CODE:
using System;
using System.Reflection;
using WealthLab.Core;

namespace WealthLab.Helpers
{
    public static class TimeSeriesCacheHelper
    {
        /// <summary>
        /// Clears the Cache of all TimeSeriesBase members (fields and properties)
        /// in the given UserStrategyBase instance. No recursion.
        /// </summary>
        public static void ClearDirectTimeSeriesCaches(UserStrategyBase strategy)
        {
            if (strategy == null)
                return;

            Type type = strategy.GetType();
            var members = type.GetMembers(BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Public);

            foreach (var member in members)
            {
                object value = null;
                switch (member)
                {
                    case FieldInfo field:
                        value = field.GetValue(strategy);
                        break;
                    case PropertyInfo prop:
                        if (prop.CanRead && prop.GetIndexParameters().Length == 0)
                        {
                            try { value = prop.GetValue(strategy); }
                            catch { /* ignore */ }
                        }
                        break;
                }

                if (value is TimeSeriesBase ts)
                    ts.Cache.Clear();
            }
        }
    }
}
To call from your strategy...
CODE:
TimeSeriesCacheHelper.ClearDirectTimeSeriesCaches(this);
Interesting. I'll have to take a look at it.
The reason I'm asking about cache clearing is that processor utilization during symbol-by-symbol parameter optimization is very poor. The cause is a high page-fault count! It's so high that disk writes during optimization are substantial for what should be a compute-bound task. Check it out with Process Explorer.
Clearing the caches in Cleanup() is a sledgehammer solution to this problem.
What I propose is a new generic Dictionary<string,object> class that manages the cache using the LRU (Least Recently Used) paradigm we currently employ in processor cache-management hardware. In the processor's LRU scheme, we simply delete the memory blocks that are least recently used because we have nowhere else to put them. But ...
But for our Dictionary<string,object> class, we have a second, non-nuclear option. We can best-effort sort the cached blocks by their LRU status to improve the locality of reference of the Dictionary cache. That sorting will lead to decreased page faulting, which is our real goal, because the unused cache blocks will be grouped together and set aside.
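To make the LRU idea concrete, here is a minimal, self-contained sketch (illustrative only, not the proposed WealthLab Dictionary cache, and the class name MiniLru is made up for this example): a bounded map that moves an entry to the front of a linked list on every access and evicts from the back when full.

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of the LRU paradigm: most recently used entries sit at the
// front of the linked list, so the back of the list is always the eviction victim.
public class MiniLru
{
    private readonly int _capacity;
    private readonly LinkedList<(string Key, int Value)> _list = new();
    private readonly Dictionary<string, LinkedListNode<(string Key, int Value)>> _map = new();

    public MiniLru(int capacity) => _capacity = capacity;

    public void Put(string key, int value)
    {
        if (_map.TryGetValue(key, out var node))
        {
            _list.Remove(node);      // refresh an existing entry
            _map.Remove(key);
        }
        else if (_map.Count == _capacity)
        {
            var lru = _list.Last;    // least recently used sits at the back
            _list.RemoveLast();
            _map.Remove(lru.Value.Key);
        }
        _map[key] = _list.AddFirst((key, value));
    }

    public bool TryGet(string key, out int value)
    {
        value = 0;
        if (!_map.TryGetValue(key, out var node))
            return false;
        _list.Remove(node);          // touch: move to the most-recently-used slot
        _list.AddFirst(node);
        value = node.Value.Value;
        return true;
    }
}

public static class MiniLruDemo
{
    public static void Main()
    {
        var cache = new MiniLru(2);
        cache.Put("a", 1);
        cache.Put("b", 2);
        cache.TryGet("a", out _);    // touching "a" makes "b" the LRU entry
        cache.Put("c", 3);           // capacity exceeded: "b" is evicted
        Console.WriteLine(cache.TryGet("b", out _));  // False: "b" was evicted
        Console.WriteLine(cache.TryGet("a", out _));  // True
    }
}
```

Both lookups and insertions are O(1) here, which is why the dictionary-plus-linked-list combination is the standard way to implement LRU eviction in software.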
Developing a generic Dictionary<string,object> class with LRU management is an ambitious task, but many applications (like WL) can benefit. I would propose starting a GitHub project for this effort to attract computer engineers with LRU experience to get involved in this project.
For now, I'll just be happy with the sledge hammer approach to fixing the page-faulting optimization problem. :)
@superticker - I took your LRU cache idea and asked AI to help out. It did. Below is a problem solution produced primarily using Claude Sonnet 4.5. It wrote practically all of the code. I just had to guide it along the way.
To utilize the caching, put the CachedIndicatorFactory and LruCache classes in your utility project. There's also an example strategy on how to use the caching mechanism - see it to get familiar with the usage. You'll see that you have to change your indicator creation code (i.e. constructor calls or Series calls) to use a helper method. For example, to create an RSI indicator you do the following by calling a very small helper method named Ind in your strategy code:
CODE:
// Convenience wrapper to create cached indicators
private T Ind<T>(params object[] args) where T : IndicatorBase =>
    CachedIndicatorFactory.Create<T>(CurrentBars, args);
CODE:
_rsi = Ind<RSI>(bars.Close, rsiPeriod);
The use of Ind<T>, and hence, indirectly, CachedIndicatorFactory.Create<T>, is necessary to have the LRU cache shenanigans work.
The bottom line is this lets you get the performance advantages of caching indicators while controlling memory usage.
If you have questions about the caching implementation and/or usage, I suggest adding the GitHub Copilot (or similar) extension to Visual Studio, learning to use it, and asking it questions about the code. It's amazing what AI can do! (I asked Claude Sonnet 4.5 about an idea to use WeakReference<T> and, in under one second, it took me to the cleaners with a 10-paragraph dissertation on why I shouldn't do that - imagine that - in ONE SECOND!)
CODE:
using System;
using System.Collections.Generic;

namespace WLUtility.Caching;

/// <summary>
/// Represents a thread-safe, fixed-capacity cache that stores key-value pairs
/// and evicts the least recently used items when the cache exceeds its capacity.
/// </summary>
/// <typeparam name="TKey">The type of the keys in the cache.</typeparam>
/// <typeparam name="TValue">The type of the values in the cache.</typeparam>
public class LruCache<TKey, TValue>
{
    private readonly object _lock = new();
    private readonly LinkedList<CacheItem> _lruList;
    private readonly Dictionary<TKey, LinkedListNode<CacheItem>> _map;
    private int _capacity;

    /// <summary>
    /// Initializes a new instance of the cache with the specified maximum capacity.
    /// </summary>
    /// <param name="capacity">The maximum number of items the cache can hold. Must be positive.</param>
    public LruCache(int capacity)
    {
        ArgumentOutOfRangeException.ThrowIfNegativeOrZero(capacity);
        _capacity = capacity;
        _map = new Dictionary<TKey, LinkedListNode<CacheItem>>();
        _lruList = [];
    }

    /// <summary>
    /// Gets or sets the maximum number of items that the cache can hold.
    /// When the cache exceeds this capacity, the least recently used items
    /// are removed to make space for new entries.
    /// </summary>
    /// <exception cref="ArgumentOutOfRangeException">
    /// Thrown when attempting to set a value that is less than or equal to zero.
    /// </exception>
    /// <remarks>
    /// Access to this property is thread-safe. Updates to the capacity may trigger
    /// eviction of items if the current count exceeds the new capacity.
    /// </remarks>
    public int Capacity
    {
        get
        {
            lock (_lock) { return _capacity; }
        }
        set
        {
            ArgumentOutOfRangeException.ThrowIfNegativeOrZero(value);
            lock (_lock)
            {
                _capacity = value;
                TrimToCapacity();
            }
        }
    }

    /// <summary>
    /// Attempts to retrieve the value associated with the specified key from the cache.
    /// If the key exists, the value is returned and the entry is marked as the most recently used.
    /// </summary>
    /// <param name="key">The key of the cache entry to retrieve.</param>
    /// <param name="value">
    /// When this method returns, contains the value associated with the specified key, if the key is found;
    /// otherwise, the default value for the type of the value parameter.
    /// </param>
    /// <returns>True if the key was found in the cache; otherwise, false.</returns>
    public bool TryGet(TKey key, out TValue value)
    {
        lock (_lock)
        {
            if (!_map.TryGetValue(key, out var node))
            {
                value = default!;
                return false;
            }

            // Mark as most recently used
            _lruList.Remove(node);
            _lruList.AddFirst(node);
            value = node.Value.Value;
            return true;
        }
    }

    /// <summary>
    /// Adds a new key-value pair to the cache or updates the value of an existing key.
    /// If the cache exceeds its capacity, the least recently used item
    /// will be removed to make space for the new item.
    /// </summary>
    /// <param name="key">The key associated with the value to be added or updated in the cache.</param>
    /// <param name="value">The value to be stored in the cache associated with the specified key.</param>
    public void Add(TKey key, TValue value)
    {
        lock (_lock)
        {
            if (_map.TryGetValue(key, out var existingNode))
            {
                // Update value and mark as most recently used
                existingNode.Value = new CacheItem(key, value);
                _lruList.Remove(existingNode);
                _lruList.AddFirst(existingNode);
            }
            else
            {
                var item = new CacheItem(key, value);
                var node = new LinkedListNode<CacheItem>(item);
                _lruList.AddFirst(node);
                _map[key] = node;
                TrimToCapacity();
            }
        }
    }

    /// <summary>
    /// Ensures the cache size does not exceed its defined capacity by evicting the least
    /// recently used items. Called whenever an item is added or the capacity is adjusted.
    /// </summary>
    private void TrimToCapacity()
    {
        while (_map.Count > _capacity && _lruList.Last != null)
        {
            var lruNode = _lruList.Last;
            _lruList.RemoveLast();
            if (lruNode != null)
                _map.Remove(lruNode.Value.Key);
        }
    }

    /// <summary>
    /// Represents an individual item stored within the LRU cache, containing a key-value pair.
    /// </summary>
    private readonly struct CacheItem(TKey key, TValue value)
    {
        public TKey Key { get; } = key;
        public TValue Value { get; } = value;
    }
}
CODE:
using System;
using System.Collections.Concurrent;
using System.Globalization;
using System.Reflection;
using System.Threading;
using WealthLab.Core;
using WealthLab.Indicators;

namespace WLUtility.Caching;

/// <summary>
/// Provides functionality for creating and managing cached instances of indicators.
/// Supports caching to improve performance by reusing previously created instances.
/// Includes mechanisms to track the number of instances created and reused.
/// </summary>
public static class CachedIndicatorFactory
{
    private const string CacheKey = "__lrucache__";
    private static int _indicatorRequests;
    private static int _reusedIndicatorCount;
    private static readonly ConcurrentDictionary<Type, ConstructorInfo[]> ConstructorCache = new();
    private static int _defaultCacheCapacity = 20;

    /// <summary>
    /// Gets or sets a value indicating whether caching is enabled.
    /// When true, previously created indicator instances are reused.
    /// Disabling caching may increase memory usage and computation overhead due to the
    /// lack of reuse; it is primarily useful for performance testing or controlled
    /// benchmarking of caching behavior.
    /// </summary>
    public static bool CachingEnabled { get; set; } = true;

    /// <summary>
    /// Gets or sets the default capacity for LRU (Least Recently Used) caches.
    /// This value determines the maximum number of items that can be stored in an
    /// individual cache instance created by the factory. If the cache exceeds this
    /// capacity, the least recently used items will be removed to make room for new entries.
    /// Changing this property affects all subsequently created caches but does not
    /// alter the capacity of already existing caches. The value must be a positive, non-zero integer.
    /// </summary>
    public static int DefaultCacheCapacity
    {
        get => _defaultCacheCapacity;
        set
        {
            ArgumentOutOfRangeException.ThrowIfNegativeOrZero(value);
            _defaultCacheCapacity = value;
        }
    }

    /// <summary>
    /// Retrieves the count of objects that were reused from the cache instead of being created anew.
    /// This value is incremented each time an existing cached object is reused.
    /// </summary>
    /// <returns>The total number of cache reuses, indicating how many objects were served from the cache.</returns>
    public static int GetReusedObjectCount() => _reusedIndicatorCount;

    /// <summary>
    /// Retrieves the total number of indicator requests made, whether they were served
    /// from the cache or by creating a new instance, and regardless of whether caching
    /// was enabled at the time.
    /// </summary>
    /// <returns>The cumulative request count since the last reset.</returns>
    public static int GetIndicatorCreationCount() => _indicatorRequests;

    public static void ResetCounters()
    {
        Interlocked.Exchange(ref _reusedIndicatorCount, 0);
        Interlocked.Exchange(ref _indicatorRequests, 0);
    }

    /// <summary>
    /// Creates or retrieves a cached IndicatorBase instance.
    /// Each BarHistory has its own LRU cache stored in BarHistory.Cache["__lrucache__"].
    /// If CachingEnabled is false, always creates a new instance without caching.
    /// </summary>
    public static T Create<T>(BarHistory bars, params object[] args) where T : IndicatorBase
    {
        Interlocked.Increment(ref _indicatorRequests);

        // If caching is disabled, just create and return a new instance
        if (!CachingEnabled)
            return CreateInstance<T>(args);

        var lru = GetOrCreateCache(bars);
        var key = BuildKey(typeof(T), args);

        if (lru.TryGet(key, out var cached))
        {
            Interlocked.Increment(ref _reusedIndicatorCount);
            return (T)cached;
        }

        var instance = CreateInstance<T>(args);
        lru.Add(key, instance);
        return instance;
    }

    /// <summary>
    /// Creates a new instance of type T using the best matching constructor.
    /// </summary>
    private static T CreateInstance<T>(object[] args) where T : IndicatorBase
    {
        // Find the best matching constructor
        var constructors = GetConstructors(typeof(T));
        ConstructorInfo bestCtor = null;
        object[] finalArgs = null;

        foreach (var ctor in constructors)
        {
            if (!TryBuildArguments(ctor, args, out var candidateArgs))
                continue;
            bestCtor = ctor;
            finalArgs = candidateArgs;
            break;
        }

        if (bestCtor != null && finalArgs != null)
            return (T)bestCtor.Invoke(finalArgs);

        var argTypeNames = new string[args.Length];
        for (var i = 0; i < args.Length; i++)
            argTypeNames[i] = args[i]?.GetType().Name ?? "null";
        var argTypes = string.Join(", ", argTypeNames);
        throw new MissingMethodException(
            $"No suitable constructor found for type {typeof(T).FullName} with arguments: ({argTypes})");
    }

    /// <summary>
    /// Explicitly sets the capacity of the LRU cache for the specified BarHistory instance.
    /// If a cache for the provided BarHistory does not exist, a new cache is created with
    /// the given capacity; otherwise the existing cache's capacity is updated. This
    /// operation is thread-safe.
    /// </summary>
    /// <param name="bars">The BarHistory instance for which the cache size is being set. Must not be null.</param>
    /// <param name="capacity">The desired capacity of the cache. Must be a positive integer.</param>
    public static void SetCacheCapacity(BarHistory bars, int capacity)
    {
        ArgumentOutOfRangeException.ThrowIfNegativeOrZero(capacity);

        if (bars.Cache.TryGetValue(CacheKey, out var existing))
        {
            var lru = (LruCache<string, IndicatorBase>)existing;
            if (lru.Capacity == capacity)
                return; // Already correct size

            // Thread-safe resize
            lru.Capacity = capacity;
            return;
        }

        // Try to add a new cache
        var newCache = new LruCache<string, IndicatorBase>(capacity);
        if (!bars.Cache.TryAdd(CacheKey, newCache))
        {
            // Another thread added it, update size if needed
            var lru = (LruCache<string, IndicatorBase>)bars.Cache[CacheKey];
            if (lru.Capacity != capacity)
                lru.Capacity = capacity;
        }
    }

    /// <summary>
    /// Attempts to construct an array of arguments to match the parameters of the specified constructor.
    /// Ensures that all supplied arguments are compatible with the constructor's parameter types and adds
    /// default values for any missing parameters if they are defined in the constructor.
    /// </summary>
    /// <param name="ctor">The constructor for which arguments are being prepared.</param>
    /// <param name="suppliedArgs">The array of arguments supplied for the constructor's parameters.</param>
    /// <param name="finalizedArgs">
    /// When the method returns, contains the array of finalized arguments that match the constructor's
    /// parameters if the operation was successful; otherwise, null.
    /// </param>
    /// <returns>True if the arguments were successfully prepared to match the constructor's parameters; otherwise, false.</returns>
    private static bool TryBuildArguments(ConstructorInfo ctor, object[] suppliedArgs, out object[] finalizedArgs)
    {
        var parameters = ctor.GetParameters();
        finalizedArgs = null;

        if (suppliedArgs.Length > parameters.Length)
            return false;

        var argList = new object[parameters.Length];
        for (var i = 0; i < parameters.Length; i++)
        {
            if (i < suppliedArgs.Length)
            {
                if (!TryPrepareArgument(suppliedArgs[i], parameters[i].ParameterType, out var prepared))
                    return false;
                argList[i] = prepared;
            }
            else if (parameters[i].HasDefaultValue)
            {
                argList[i] = parameters[i].DefaultValue;
            }
            else
            {
                return false;
            }
        }

        finalizedArgs = argList;
        return true;
    }

    /// <summary>
    /// Attempts to prepare a given value to match the specified target type.
    /// This includes handling conversions, nullability, and enums.
    /// </summary>
    /// <param name="value">The input object that needs to be prepared for conversion into the target type.</param>
    /// <param name="targetType">The Type to which the input value needs to be converted.</param>
    /// <param name="prepared">The resulting value if the conversion is successful, or null if unsuccessful.</param>
    /// <returns>A boolean indicating whether the value was successfully prepared to match the target type.</returns>
    private static bool TryPrepareArgument(object value, Type targetType, out object prepared)
    {
        prepared = null;

        if (value == null)
        {
            if (!targetType.IsValueType || Nullable.GetUnderlyingType(targetType) != null)
                return true;
            return false;
        }

        var nonNullableTarget = Nullable.GetUnderlyingType(targetType) ?? targetType;
        if (nonNullableTarget.IsInstanceOfType(value))
        {
            prepared = value;
            return true;
        }

        try
        {
            if (nonNullableTarget.IsEnum)
            {
                prepared = value switch
                {
                    string enumName => Enum.Parse(nonNullableTarget, enumName, true),
                    _ => Enum.ToObject(nonNullableTarget,
                        Convert.ChangeType(value, Enum.GetUnderlyingType(nonNullableTarget), CultureInfo.InvariantCulture))
                };
                return true;
            }

            prepared = Convert.ChangeType(value, nonNullableTarget, CultureInfo.InvariantCulture);
            return true;
        }
        catch
        {
            return false;
        }
    }

    /// <summary>
    /// Retrieves the existing Least Recently Used (LRU) cache associated with the provided
    /// <see cref="BarHistory" /> instance, or creates a new one if it does not exist.
    /// </summary>
    /// <param name="bars">The <see cref="BarHistory" /> instance for which the LRU cache is to be retrieved or created.</param>
    /// <returns>
    /// The LRU cache associated with the provided <see cref="BarHistory" /> instance. If no cache
    /// exists, a new one is initialized with the default cache capacity.
    /// </returns>
    private static LruCache<string, IndicatorBase> GetOrCreateCache(BarHistory bars)
    {
        if (bars.Cache.TryGetValue(CacheKey, out var existing))
            return (LruCache<string, IndicatorBase>)existing;

        var lruNew = new LruCache<string, IndicatorBase>(DefaultCacheCapacity);
        if (!bars.Cache.TryAdd(CacheKey, lruNew))
        {
            // Another thread added it, return the existing one
            return (LruCache<string, IndicatorBase>)bars.Cache[CacheKey];
        }
        return lruNew;
    }

    private static ConstructorInfo[] GetConstructors(Type indicatorType) =>
        ConstructorCache.GetOrAdd(indicatorType, t => t.GetConstructors());

    /// <summary>
    /// Builds a unique cache key for the given indicator type and arguments.
    /// The key is used to uniquely identify cached instances of indicators.
    /// </summary>
    /// <param name="type">The type of the indicator for which the cache key is being created.</param>
    /// <param name="args">
    /// An array of arguments used to initialize or configure the indicator.
    /// These arguments, combined with the indicator type, form the cache key.
    /// </param>
    /// <returns>A string that uniquely represents the combination of the indicator type and its initialization arguments.</returns>
    private static string BuildKey(Type type, object[] args)
    {
        // Combine type and arguments into a single array
        var allArgs = new object[args.Length + 1];
        allArgs[0] = type.FullName;
        Array.Copy(args, 0, allArgs, 1, args.Length);
        return IndicatorBase.CacheKey(allArgs);
    }
}
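A note on the BuildKey method in the factory above: the cache key must encode both the indicator type and every constructor argument, otherwise an RSI(bars.Close, 14) and an RSI(bars.Close, 2) would collide in the cache. IndicatorBase.CacheKey is WealthLab-specific, but the idea can be sketched in isolation; in this hypothetical example, string.Join stands in for IndicatorBase.CacheKey and Rsi is a placeholder type, not the WealthLab indicator.

```csharp
using System;

public class Rsi { } // hypothetical placeholder for a WealthLab indicator type

public static class KeyDemo
{
    // Stand-in for IndicatorBase.CacheKey: combine the full type name with
    // every constructor argument so each parameter set gets its own cache slot.
    public static string BuildKey(Type type, params object[] args) =>
        string.Join("|", type.FullName, string.Join(",", args));

    public static void Main()
    {
        var k14 = BuildKey(typeof(Rsi), "Close", 14);
        var k2 = BuildKey(typeof(Rsi), "Close", 2);
        Console.WriteLine(k14);       // Rsi|Close,14
        Console.WriteLine(k14 == k2); // False: different parameters, different keys
    }
}
```

This is why, during an optimization run, each distinct parameter combination occupies its own cache slot, and why the LRU capacity matters.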
CODE:
using System;
using WealthLab.Backtest;
using WealthLab.Core;
using WealthLab.Indicators;
using WLUtility.Caching;

namespace WealthLabStrategies.Test;

/// <summary>
/// Simple example demonstrating CachedIndicatorFactory usage.
/// This strategy shows the recommended pattern for creating cached indicators
/// to improve performance during optimization runs.
/// </summary>
public class CachedIndicatorExample : UserStrategyBase
{
    private ATR _atr; // Store indicators as instance fields
    private RSI _rsi;
    private SMA _sma;

    public CachedIndicatorExample()
    {
        AddParameter("RSI Period", ParameterType.Int32, 14, 5, 30);
        AddParameter("SMA Period", ParameterType.Int32, 50, 20, 200);
    }

    // Convenience wrapper to create cached indicators
    private T Ind<T>(params object[] args) where T : IndicatorBase =>
        CachedIndicatorFactory.Create<T>(CurrentBars, args);

    public override void Initialize(BarHistory bars)
    {
        // Step 1: Set cache capacity based on execution mode.
        // Use a larger capacity during optimization for better cache hit rates.
        var cacheCapacity = Backtester.ExecutionMode == StrategyExecutionMode.Optimization ? 500 : 20;
        CachedIndicatorFactory.SetCacheCapacity(bars, cacheCapacity);

        // Step 2: Create indicators using the Ind<T> wrapper and store in fields
        var rsiPeriod = Parameters[0].AsInt;
        var smaPeriod = Parameters[1].AsInt;
        _rsi = Ind<RSI>(bars.Close, rsiPeriod);
        _sma = Ind<SMA>(bars.Close, smaPeriod);
        _atr = Ind<ATR>(bars, 14);

        // Step 3: Plot indicators (optional)
        PlotIndicator(_rsi, WLColor.Blue);
        DrawHorzLine(30, WLColor.Green, 1, LineStyle.Dashed, _rsi.PaneTag);
        DrawHorzLine(70, WLColor.Red, 1, LineStyle.Dashed, _rsi.PaneTag);
        PlotIndicator(_sma, WLColor.Orange);

        StartIndex = Math.Max(rsiPeriod, smaPeriod);
    }

    public override void Execute(BarHistory bars, int idx)
    {
        if (!HasOpenPosition(bars, PositionType.Long))
        {
            // Buy signal: RSI oversold and price above SMA
            if (_rsi[idx] < 30 && bars.Close[idx] > _sma[idx])
                PlaceTrade(bars, TransactionType.Buy, OrderType.Market);
        }
        else
        {
            var position = LastPosition;

            // Sell signal: RSI overbought
            if (_rsi[idx] > 70)
                PlaceTrade(bars, TransactionType.Sell, OrderType.Market);
            // Stop loss: 2x ATR
            else if (bars.Close[idx] < position.EntryPrice - 2.0 * _atr[idx])
                PlaceTrade(bars, TransactionType.Sell, OrderType.Market);
        }
    }

    public override void BacktestComplete()
    {
        base.BacktestComplete();

        // Optional: Log cache statistics to see effectiveness
        var totalRequests = CachedIndicatorFactory.GetIndicatorCreationCount();
        var totalReuses = CachedIndicatorFactory.GetReusedObjectCount();
        if (totalRequests <= 0)
            return;

        var cacheHitRate = totalReuses * 100.0 / totalRequests;
        var message = $"Cache Statistics: {totalRequests:N0} requests, " +
                      $"{totalReuses:N0} reuses ({cacheHitRate:F1}% hit rate)";
        WriteToDebugLog(message);
        WLHost.Instance.AddLogItem("Cache Stats", message, WLColor.Green);

        if (Backtester.ExecutionMode != StrategyExecutionMode.Optimization)
        {
            // Uncomment the following line and run one backtest (not optimization)
            // if you want to more-or-less reset stats for an upcoming optimization run.
            // CachedIndicatorFactory.ResetCounters();
        }
    }
}
QUOTE:
I took your LRU cache idea and asked AI to help out.
Well, thanks for looking into this. So CoPilot knows about LRU caching--that's interesting.
QUOTE:
... you have to change your indicator creation code (i.e. constructor calls or Series calls) to use a helper method.
That's a big limitation because I use many of the WL indicators, so this solution isn't going to help with those since I don't have access to the WL indicator code. Now if the WL developers would be interested in using this LRU code for cache management, I would be interested--of course.
QUOTE:
I suggest adding the Github Copilot (or similar) extension to Visual Studio, learn to use it, and ask it questions about the code.
And that may be the real take home message (and opportunity) here. We should all be using CoPilot to speed up our coding. I'll have to look into that. Do you have a URL for reference?
QUOTE:
That's a big limitation because I use many of the WL indicators, so this solution isn't going to help me. I don't have access to the WL indicator code. Now if the WL developers would be interested in using this LRU code for cache management, I would be interested--of course.
You don't need access to the WL indicator code. You simply change your calls for creating indicators so that they use the code I provided. Please see the example strategy code in CachedIndicatorExample class in Post #3.
To reiterate, let's say your Initialize method code creates an RSI indicator using bars.Close and a length of 14:
CODE:
_rsi = new RSI(bars.Close, 14);
If you use the classes I've provided just change the above to:
CODE:
_rsi = Ind<RSI>(bars.Close, 14);
and don't forget to include the little helper method I mentioned in Post #3.
As for GitHub Copilot, it is integrated into Visual Studio 2022 version 17.10 and later (it used to be a separate extension). To use it, open VS 2022 and look in the upper right corner; there should be a little Copilot icon, perhaps labeled as such. Click it and go from there. Refer to Microsoft's documentation for additional details, or for further guidance ask any AI like Copilot, Gemini, Grok, etc.
I strongly suggest you explore using AI in coding. For me, it's made a huge difference. Some things that could take me several hours, much of it busy work, now take only writing a short paragraph describing what I want. The AI spits out code in about 10 seconds or less; for more complex problems, maybe two minutes. It's truly mind-boggling.
QUOTE:
You don't need access to the WL indicator code. You simply change your calls for creating indicators so that they use the code I provided.
Okay. So we are avoiding calling the default Indicator.Series caching method altogether and passing the indicator as a function to Ind for caching purposes. That makes more sense now.
I've got more immediate things to work on, but I will certainly take a look at it. It would be nice to get more processor utilization during parameter optimization, so I'm very interested.
QUOTE:
As for Github CoPilot, it is integrated into Visual Studio 2022 version 17.10 and later
Yes, I noticed that, but I haven't explored it. But after seeing that it knows about LRU caching, I'm more interested in what else CoPilot can do (like on the robust statistical side).