Automatic hyperparameter tuning #5

Open
dv516 opened this issue Jul 17, 2023 · 0 comments
Labels: enhancement (New feature or request)

Comments

dv516 (Collaborator) commented on Jul 17, 2023

Find which configuration works best for the base implementation, and with which hyperparameters. For example, should 'base' sampling or 'sampling_region' be implemented by default?

Then, set automatic hyperparameters based on the chosen configuration. These should be overridden whenever the user provides any specific input, unless there is an explicit clash. For example, good initial guesses would be (a rough sketch of such defaults follows this list):

  • Set the initial radius to cover 10-20% of the search space for the local flavour, or 50% for the global flavour
  • Set beta_red to accuracy**(1/N) or accuracy**(2/N), where accuracy would be e.g. 0.001 or 0.01; this could be related to min_radius for the TIS or TIP routines
  • Set N_min_s to, for example, a tenth of the budget, depending on the local or global flavour or on the PLS routine
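
A minimal sketch of what such a defaults helper could look like, assuming a list-of-(lb, ub) bounds format; the function name, argument names, and exact fractions are placeholders for illustration, not the package's existing API:

```python
def suggest_defaults(bounds, budget, flavour="local", accuracy=0.01):
    """Return a dict of automatic hyperparameter guesses (placeholder logic)."""
    N = len(bounds)                              # number of decision variables
    max_span = max(ub - lb for lb, ub in bounds)

    # Local flavour: trust region covers ~15% of the search space; global: ~50%.
    radius_frac = 0.15 if flavour == "local" else 0.5
    init_radius = radius_frac * max_span / 2

    # Radius-reduction factor so that roughly N reductions reach the target accuracy.
    beta_red = accuracy ** (1.0 / N)

    # Minimum number of samples per iteration: roughly a tenth of the budget.
    N_min_s = max(1, budget // 10)

    return {"init_radius": init_radius, "beta_red": beta_red, "N_min_s": N_min_s}


# e.g. a 4-dimensional problem on [-5, 5]^4 with a budget of 100 evaluations
defaults = suggest_defaults(bounds=[(-5, 5)] * 4, budget=100, flavour="local")
```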

These automatic hyperparameters should be specific to the 'flavour' or configuration that the user desires (a sketch of how presets could be merged with user input follows this list). Some ideas for configurations:

  • 'Exploration': Use 'base' sampling in combination with some exploration routine, potentially 'exploit_explore' or 'sampling_region'
  • 'Multistart': The user should be able to choose between TIS and TIP, defaulting to either
  • 'Expensive_local': Implies that the user wants to reach some local optimum as quickly as possible; use a local method and small trust regions
  • 'Expensive_global': Not sure whether this makes sense, but potentially a routine with a lot of exploration, potentially more risky
  • 'Safety': Implement feasible sampling
  • 'Structure' or 'high-dimensional': Use CUATRO_PLS or CUATRO_low once implemented; think of heuristics for n_e or n_pls
  • 'Ill-behaved': Should be similar to exploration or multistart
  • 'Well-behaved' or 'convex': Can use larger trust regions since the approximation should be quite good; not sure whether the local or global method would work better
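
A minimal sketch of how such configuration presets could be resolved against user input; the preset keys and option names below are placeholders for illustration, not the package's actual keyword arguments:

```python
# Hypothetical presets keyed by configuration name; values are guesses only.
CONFIG_PRESETS = {
    "exploration":      {"sampling": "base", "explore": "exploit_explore"},
    "multistart":       {"explore": "TIS"},            # user may switch to "TIP"
    "expensive_local":  {"method": "local", "radius_frac": 0.1},
    "safety":           {"constr_handling": "feasible_sampling"},
    "high_dimensional": {"solver": "CUATRO_PLS"},
}


def resolve_options(config, user_options=None):
    """Start from the preset for `config`, then let explicit user input
    override it, so automatic defaults never silently win over the user."""
    options = dict(CONFIG_PRESETS.get(config, {}))
    options.update(user_options or {})
    return options


# e.g. a multistart run where the user prefers TIP over the TIS default
opts = resolve_options("multistart", {"explore": "TIP"})
```

Handling the "explicit clash" case would need an extra check on top of this, e.g. raising an error when a user option contradicts a hard requirement of the chosen configuration.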
dv516 added the enhancement label on Jul 17, 2023
dv516 assigned dv516 and akshit18bansal, then unassigned dv516, on Jul 19, 2023