Inconsistent Particle Swarm Optimizer runs
Author: superticker
Creation Date: 4/6/2019 10:33 PM

superticker

#1
For the same stock, strategy, and optimization parameters (algorithm Clerc Basic OEP 0 with 13 particles, 15 iterations) employing the Particle Swarm optimizer, each of four repeated optimization attempts starts with different initial modeling values (see attachment). Is that normal for the Particle Swarm optimizer? Will it randomly set the initial modeling values differently for each attempt?

The problem is that if it starts with different modeling values on each attempt, it will be zeroing in on a different maximum each time. That's not necessarily a good thing when I'm changing the code between attempts to determine which code changes are best. Is there a way to turn this feature off so it always starts with the same modeling values?

Note: The attachment was generated for a single stock with repeated optimization attempts. If a whole DataSet is being optimized, the effects are even more inconsistent. Is that by design? This behavior might be desirable in some situations, but when I'm trying to compare one strategy code change against another between optimization attempts, it's a disadvantage. Can this behavior be switched off?

Having inconsistent initial modeling values makes comparing code changes between attempts very difficult.

UPDATE: It gets weirder. If I open a fresh chart window, run the optimizer, and then, without assigning new PVs, run the chart on the old PVs, I get different performance results. Somehow, the act of running the optimizer alone affects the performance results even though no new PVs were assigned. How is that possible? If I then close the chart window and reopen it, the original performance results are restored.

LenMoz

#2
That's normal behavior. Most places where a random value is needed use the same global Random variable, seeded like this...
CODE: (not shown; the original post requires a login to view it)
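
In rough form, the idea is a single time-seeded Random shared by the optimizer. The sketch below is illustrative only; the field name and seed expression are assumptions, since the actual source isn't visible here:
CODE:
// Illustrative only; the real PSO source isn't shown above.
// A single shared Random, seeded from the clock:
private static readonly Random random = new Random((int)DateTime.Now.Ticks);
// Because the seed differs on every run, the initial particle positions
// (the starting parameter values) differ on every run too.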

This behavior is by design. Rerunning an optimization may give a better (or worse) result.

Turning off the behavior would only matter for the initial (seeding) iteration. After that, parameter selection is based on calculation results as particles are steered toward good results.
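
For reference, the steering step in a textbook particle swarm looks roughly like the sketch below. This is a generic PSO update, not the actual WealthLab code; w, c1, and c2 are the usual inertia and acceleration coefficients, and r1/r2 supply the random component:
CODE:
// Generic PSO velocity/position update (not the actual WealthLab code).
double r1 = random.NextDouble();   // fresh random draws each iteration
double r2 = random.NextDouble();
velocity[i] = w * velocity[i]
            + c1 * r1 * (personalBest[i] - position[i])  // pull toward this particle's best
            + c2 * r2 * (globalBest - position[i]);      // pull toward the swarm's best
position[i] += velocity[i];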

Regarding your "UPDATE", I've observed weird things going on with the Strategy Parameters window while an optimization is running, where the displayed value is out of sync with the "real" value. Try this: while any optimization (not just PSO) is running, click as if to change a parameter value. You'll see a different parameter value in the update pop-up than is displayed on the screen.

I don't know the implication of this regarding PVs, but it might be related. If you save parameter values after an optimization run, I expect you're saving not what you see on the screen but the last value used by the optimizer. I believe this to be a bug in the optimizer host code.

UPDATE: It looks like it puts back the displayed values when the optimization run is cancelled or completes. smh

Len

superticker

#3
QUOTE:
This behavior is by design. Rerunning an optimization may give a better (or worse) result.
It's not necessarily a bad design to mix up the starting values, but when I'm trying to compare one code change with another between optimization attempts, this behavior gets in the way. Can a checkbox be added to turn it off? (The default could be to mix up the starting values.)

One workaround is to test code changes with large datasets, but if the starting values gravitate toward a different model maximum, that isn't much help either. I need a reliable way to compare code changes between optimization attempts.

QUOTE:
Regarding your "UPDATE",... I believe this to be a bug in the optimizer host code. UPDATE: It looks like it puts back the displayed values when the optimization run is cancelled or completes.
Yes, I don't think it's the optimizer itself; it's a problem with the optimizer's manager instead. The manager fails to reinstate the original PVs after an optimization. This is a separate issue from the first behavior discussed.

Other than that, the Particle Swarm optimizer works great.

LenMoz

#4
QUOTE:
It's not necessarily a bad design to mix up the starting values, but when I'm trying to compare one code change with another between optimization attempts, this behavior gets in the way.
IMHO, the only way to do what you're trying to do is with the intensive(?) Exhaustive Optimizer. As previously mentioned, the random seed only affects the starting parameters. After that, parameter values are changed by the pull toward good results, also with a random component. Your code changes would vary the quality of most(?) parameter sets and thereby the intensity of the pull.

Unfortunately, even a small change is bigger than it looks, because I have other unpublished changes that would be published simultaneously and might break the non-U.S. WealthLab.

Eugene

#5
QUOTE:
might break the non-U.S. WealthLab.

Len, what makes you think so?

LenMoz

#6
The U.S. implemented the checkbox to freeze a parameter by adding an attribute to an optimizer host object. WLD had not implemented that change, so the object signatures fell out of sync. I implemented that functionality in a way that "should" work in the absence of that attribute, but it is untested. I don't use the feature, so the WLP side is also only superficially tested.

Overarching is the fact that you haven't convinced me that your request justifies the effort.

Len

Eugene

#7
QUOTE:
The U.S. implemented the checkbox to freeze a parameter by adding an attribute to an optimizer host object.

Are you talking about selectable optimizer parameters? This was implemented in WLD long ago, so Exhaustive and Monte Carlo already support "Parameter Checkboxes".

LenMoz

#8
Yes, selectable parameters. It's been months or years since I did anything to PSO. It has code in it like...
CODE: (not shown; the original post requires a login to view it)
… conditioning parameter behavior.
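
Roughly, that conditioning would look like the sketch below (hypothetical names; the real attribute and loop aren't shown above):
CODE:
// Hypothetical sketch of conditioning on the selectable-parameter
// checkbox; "IsChecked" and "NextSwarmValue" are assumed names.
foreach (var p in strategyParameters)
{
    if (!p.IsChecked)
        continue;                   // frozen parameter: leave its value alone
    p.Value = NextSwarmValue(p);    // selected parameter: let the swarm move it
}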

superticker

#9
QUOTE:
the random seed only affects the starting parameters. After that, parameter values are changed by the pull toward good results, also with a random component.
All I'm asking for is a checkbox to use the same seed at the start of each optimization while optimizing and debugging code. What happens after that isn't a concern.

Another approach would be to simply use the "default values" of all the PVs as defined in the strategy, instead of the seed generator, when the "debugging" checkbox is checked.

Of course, one would want to leave the debugging checkbox unchecked for all other operations, since mixing up the starting parameters is otherwise a good thing. But during simulation code debugging and development, we need to compare "apples to apples" between optimization attempts.
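
Something like the sketch below is all I have in mind (hypothetical; debugModeChecked would be the new checkbox, and the fixed seed value is arbitrary):
CODE:
// Hypothetical sketch of the requested debug checkbox. A fixed seed makes
// every optimization start from the same particle positions, so code
// changes can be compared apples to apples.
Random random = debugModeChecked
    ? new Random(12345)                     // fixed seed: repeatable starts
    : new Random((int)DateTime.Now.Ticks);  // current behavior: varied starts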

Wealth-Lab optimization is too slow to make the Exhaustive optimizer practical.

LenMoz

#10
superticker, you're not hearing me. It's not only your change that would get promoted with a new release; other features would get released as well, which might break the code for some users. The released code is stable. I'm not taking on the effort of a release right now.

(Platitude Alert) Thank you for your continued interest in PSO. We value your suggestions.

Update: Here's how I do it. After I make a change, I watch the log for 3 to 4 iterations (Clerc Tribes). That's usually enough to know whether the code change made the strategy better or worse. If worse, I cancel the optimization and try again.

superticker

#11
QUOTE:
I'm not taking on the effort of a release right now.... We value your suggestions.
Understood. When you do have time to work on it, keep its default behavior as it is now; it performs very well. Just add a debugging/development mode to it with a checkbox. Aside from that, the Particle Swarm optimizer is a great tool for optimizing time-independent parameters.

When I make changes to my strategy code, they create very subtle changes in performance results. For example, the Profit per Bar might go from 200 to 210 over a dataset of 43 hand-picked stocks (15 iterations). Such small changes are hard to detect reliably. Running just 3 or 4 iterations won't pick up subtle differences. Perhaps I'm trying to squeeze too much out of the strategy code, but that's part of the fun (and challenge) of designing any simulation.

superticker

#12
On extremely rare occasions, the "Best" plot falls below the "Average" plot on the Particle Swarm optimizer's Fitness Graph; see the screenshot attachment. This only happens on stocks I would not want to trade with, i.e. bad stocks. Is this to be expected?

The second attachment has the progress log for the iterations on ticker symbol FOX.

LenMoz

#13
Oh, there it is...
CODE: (not shown; the original post requires a login to view it)
(The avgFitness calculation uses an incorrect divisor; it should be countFitness.)
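
In rough form, the corrected calculation is the sketch below (variable names other than avgFitness and countFitness are illustrative, not the actual source):
CODE:
// Illustrative sketch of the fix: divide by countFitness, the number of
// fitness values actually summed. Using any other count skews the average
// and can push it above "Best" when some particles produce no trades.
double sumFitness = 0.0;
int countFitness = 0;
foreach (double f in particleFitness)   // particleFitness: assumed name
{
    if (!double.IsNaN(f))               // skip particles that made no trades
    {
        sumFitness += f;
        countFitness++;
    }
}
double avgFitness = countFitness > 0 ? sumFitness / countFitness : 0.0;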
QUOTE:
only happens on stocks I would not want to trade with
... and that's a contributing factor. Some combinations of parameters are producing no trades across the backtest period.

superticker

#14
QUOTE:
Some combinations of parameters are producing no trades across the backtest period.
That's true. Some bad stocks don't produce any trades with certain parameter combinations. Good stocks don't have this problem with my strategy.

Thanks for fixing it. I'd like to add that I love the Particle Swarm optimizer. It's amazing that it can converge to useful solutions on 6-parameter, discontinuous (i.e. buy/sell event-driven) problems.