Requests are:
Logarithmic: 1, 10, 100, 1000, etc.
User-defined steps: 5, 10, 20, 50, etc.
A WL7 Parameter should have a field "IsLogarithmic".
This would also help the optimizers select a good step size.
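A minimal sketch of what such a flag could enable — SketchParameter and GenerateSteps below are hypothetical, not part of the WL7 API; only the idea matters:
CODE:
using System;
using System.Collections.Generic;

// Hypothetical sketch of the proposed flag: a log-flagged parameter
// steps multiplicatively, a normal one additively.
public class SketchParameter
{
    public double MinValue, MaxValue;
    public double StepValue;       // additive step, or multiplier when logarithmic
    public bool IsLogarithmic;     // the proposed field

    public IEnumerable<double> GenerateSteps()
    {
        if (IsLogarithmic)
            for (double v = MinValue; v <= MaxValue; v *= StepValue)
                yield return v;    // e.g. 1, 10, 100, 1000 for StepValue = 10
        else
            for (double v = MinValue; v <= MaxValue; v += StepValue)
                yield return v;    // e.g. 5, 6, 7, ... for StepValue = 1
    }
}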
You can do all of this now if it's handled in strategy code. I would handle this in strategy code...
Logarithmic example...
AddParameter("power", ParameterTypes.Double, 3.0, 1.0, 5.0, 1.0);
double xFactor = Math.Pow(10, Parameters.FindName("power").AsDouble);
You have your logarithmic parameter. This approach is limited only by your imagination, and requires no work on optimizer code.
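The user-defined-steps request from the top of the thread can be handled the same way — a sketch that optimizes an index and maps it into a lookup table (the table values are just the ones suggested above):
CODE:
// Optimize an index 0..3 with step 1, then map it to custom step values.
AddParameter("periodIdx", ParameterTypes.Double, 0, 0, 3, 1);
int[] periods = { 5, 10, 20, 50 };
int period = periods[(int)Parameters.FindName("periodIdx").AsDouble];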
Why do I suggest this method?
The Particle Swarm Optimizer's particle movement is sometimes based on the distance between particles. I don't know how to compute distance for a mix of linear and log parameters (and I was a math major [well, ok, in 1968]).
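One conceivable workaround for that distance problem — purely a sketch, not how the WL7 optimizer actually works — is to normalize every dimension to [0, 1], sending log-flagged parameters through Log10 first:
CODE:
using System;

// Sketch: Euclidean distance over mixed linear/log parameters.
// Log-flagged dimensions are measured in log space, then every
// dimension is rescaled to [0,1] so all parameters weigh equally.
static double Distance(double[] a, double[] b,
                       double[] min, double[] max, bool[] isLog)
{
    double sum = 0;
    for (int i = 0; i < a.Length; i++)
    {
        double ai = isLog[i] ? Math.Log10(a[i]) : a[i];
        double bi = isLog[i] ? Math.Log10(b[i]) : b[i];
        double lo = isLog[i] ? Math.Log10(min[i]) : min[i];
        double hi = isLog[i] ? Math.Log10(max[i]) : max[i];
        double d = (ai - bi) / (hi - lo);   // now in [-1, 1]
        sum += d * d;
    }
    return Math.Sqrt(sum);
}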
QUOTE:
The Particle Swarm Optimizer particle movement is sometimes based on distance between particles. I don't know how to compute distance for a mix of linear and log parameters.
Excellent point. The goal here is to space the particles by a function of their "variance" (i.e. weight by 1.0/variance) to improve numerical stability in the numerical method. So the optimizer can't just take the anti-log of logarithmic parameters because that may lead to numerical instability.
This is why linear numerical methods offer an optional weighting factor for their fit. Typically, one would weight by the reciprocal of the variance of the metric. The Kalman filter weights by the covariance matrix of all its inputs, which is the brute-force approach for multivariate (MIMO) problems.
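As a concrete reference, the simplest form of that weighting is an inverse-variance weighted mean (a generic sketch, nothing WL7-specific):
CODE:
// Inverse-variance weighting: each observation contributes in
// proportion to 1/variance, so noisier measurements count for less.
static double WeightedMean(double[] values, double[] variances)
{
    double num = 0, den = 0;
    for (int i = 0; i < values.Length; i++)
    {
        double w = 1.0 / variances[i];   // the weighting factor
        num += w * values[i];
        den += w;
    }
    return num / den;
}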
If you add logarithmic parameters, you'll need to replace "adaptive" optimizers as well.
QUOTE:
You can do all these now if handled in strategy code...
Yes, of course, I can do (nearly) everything in code. Still, this is a workaround, and the "real" values are not displayed in the results windows.
If I do a manual search for good parameters, I tend to use logarithmic steps even for simple things like a period:
2, 5, 10, 20, 50, 100
If we could mark a parameter as logarithmic this would give some optimizers like "Exhaustive" or "Random Search" or "Grid Search" a much better coverage of the problem space.
Other optimizers which can't handle logarithmic steps could simply ignore the IsLogarithmic property and choose their candidates on a linear scale.
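For what it's worth, a decade series like the one above is easy to generate — a sketch using the conventional 1-2-5 mantissas:
CODE:
using System;
using System.Collections.Generic;

// Sketch: emit a 1-2-5 decade series clipped to [min, max],
// e.g. DecadeSeries(2, 100) yields 2, 5, 10, 20, 50, 100.
static IEnumerable<double> DecadeSeries(double min, double max)
{
    double[] mantissas = { 1, 2, 5 };
    for (double decade = 1; decade <= max; decade *= 10)
        foreach (double m in mantissas)
        {
            double v = m * decade;
            if (v >= min && v <= max)
                yield return v;
        }
}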
I see this feature request was implemented in WL7 Build 31. That's great. Now could someone tell me where the documentation is for how to use it? Thanks.
QUOTE:
It’s in the Parameter class documentation.
I already looked there. I was looking for a LogValue parameter datatype. Are you instead referring to the AssignExplicitValues section? (I guess that will work.)
I wonder if the finantic SMAC optimizer is compatible with AssignExplicitValues?
CODE:
public void AssignExplicitValues(IEnumerable<double> values)
public void AssignExplicitValues(params object[] vals)
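If those are the right overloads, usage presumably looks something like this sketch (untested; AddParameter and Parameters.FindName appear earlier in the thread):
CODE:
// Sketch: hand the optimizer an explicit logarithmic series
// instead of relying on Start/End/Step.
AddParameter("period", ParameterTypes.Double, 10.0, 2.0, 100.0, 1.0);
Parameters.FindName("period").AssignExplicitValues(new double[] { 2, 5, 10, 20, 50, 100 });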
SMAC uses Start and End values only. It ignores Step sizes and explicit values.
Instead it generates candidate values randomly (uniform distribution).
So a property "isLogarithmic" would help...
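A sketch of what that could mean for candidate generation — draw uniformly in log space instead of linearly (hypothetical, not the actual SMAC code):
CODE:
using System;

// Sketch: log-uniform sampling between Start and End, so every
// decade in the range is drawn with equal probability.
static double SampleLogUniform(Random rng, double min, double max)
{
    double lo = Math.Log10(min);
    double hi = Math.Log10(max);
    double u = lo + rng.NextDouble() * (hi - lo);  // uniform in log space
    return Math.Pow(10, u);
}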
QUOTE:
So a property "isLogarithmic" would help...
What I need here is an "isExponential" property, because I want the series to grow exponentially so it covers a wider range. But I do think some of these choices may create problems for optimizers that use a "regularized" (statistical and stochastic) estimation method to pick the next candidate point to evaluate.
The suggestion in Reply# 2 is good. WL could even set it up so the transforming relationship can be passed in as a delegate function. I'm just not sure how optimizers that employ regularization are going to deal with such a delegate.
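A sketch of the delegate idea (hypothetical API, not WL7): the optimizer keeps working on a linear scale, and the transform is applied only when the strategy reads the value:
CODE:
using System;

// Hypothetical sketch: the scale transform is supplied as a delegate,
// so linear, logarithmic and exponential parameters share one code path.
public class TransformedParameter
{
    public Func<double, double> Transform = x => x;   // identity = linear

    public double Resolve(double optimizerValue) => Transform(optimizerValue);
}

// Usage: the optimizer walks 1..5 linearly, the strategy sees 10..100000.
// var p = new TransformedParameter { Transform = x => Math.Pow(10, x) };
// double value = p.Resolve(3.0);   // 1000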