I will start by saying I know almost nothing about the search techniques in OpenTuner and have just been using the defaults.
The environment in which I am using OpenTuner (Java OpenJDK/HotSpot flag mining) is one with many parameters. I have about 300 parameters, more than half of them boolean and most of the rest integer ranges. Some parameters definitely have more effect than others; in fact, I suspect that many of the parameters have very little effect.
I have noticed that often a particularly bad parameter choice might "stick around" for a long time. I am wondering if there are search techniques that will do better than the default ones in this environment.
To simulate this, I have written a fake workload tuner that:
- creates a configurable number of boolean parameters; to make some parameters heavier than others, each parameter N has an external weight of N*N that determines how much it affects the run score
- creates a random goal value for each parameter
- on the run step, checks which cfg parameters match their corresponding goals and adjusts the run time by the weight of each mismatched parameter
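The scoring rule described above can be sketched as a small standalone function (names hypothetical, independent of the actual OpenTuner harness): parameter N carries weight N*N, and every mismatch against the random goal adds that weight to the run time, so lower scores are better.

```python
import random

def make_goal(n_params, seed=0):
    # Random boolean goal value for each parameter.
    rng = random.Random(seed)
    return [rng.random() < 0.5 for _ in range(n_params)]

def run_score(cfg, goal):
    # cfg and goal are lists of booleans; parameter N has weight N*N,
    # and each mismatch adds its weight to the simulated run time.
    return sum(n * n for n, (c, g) in enumerate(zip(cfg, goal)) if c != g)
```

With 300 parameters, a single wrong setting for the heaviest parameter (index 299) already costs 299*299 = 89401, which is why a few wrong high-weight parameters dominate the score.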
I have noticed that if I run this fake tuner with 300 parameters there are always a number of highly weighted parameters with the wrong setting even after 5000 generations. Is this unavoidable with this many parameters, or would there be technique settings that would help with this?
I'd recommend trying many different techniques to see which ones work best for your problem. Run with --list-techniques to see the full list; you can also create hybrid techniques or change the parameters of existing techniques with minor code changes.
For boolean parameters there are no gradients to follow, so the techniques that try to hill climb won't work well. If the parameters are largely independent, you could try the greedy techniques and tweak the mutation rate; if they are more interdependent, you could try the evolutionary techniques.
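As a rough illustration of the greedy-mutation idea (a sketch of the general technique, not OpenTuner's actual implementation): keep the best configuration seen so far, flip each boolean with a small mutation probability, and accept a candidate only if it improves the score. On independent weighted booleans like the fake tuner's, any flip that corrects a high-weight bit strictly improves the score, so heavy parameters tend to get fixed early.

```python
import random

def greedy_search(score, n_params, iters=5000, mut_rate=0.02, seed=0):
    # Greedy mutation on boolean parameters: start from a random
    # configuration, flip each bit with probability mut_rate, and
    # keep the candidate only if its score improves (lower is better).
    rng = random.Random(seed)
    best = [rng.random() < 0.5 for _ in range(n_params)]
    best_score = score(best)
    for _ in range(iters):
        cand = [b != (rng.random() < mut_rate) for b in best]
        s = score(cand)
        if s < best_score:
            best, best_score = cand, s
    return best, best_score
```

The mutation rate is the knob to tweak: with 300 parameters, a rate near 1/300 flips about one bit per candidate, which isolates each parameter's effect but explores slowly, while a higher rate explores faster at the cost of masking individual high-weight bits.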
This fake tuner is available at http://github.com/tdeneau/opentuner under examples/convtest