Bayesian Optimization: From Foundations to Advanced Topics

Many engineering and scientific applications, including automated machine learning (e.g., neural architecture search and hyper-parameter tuning), involve making design choices to optimize one or more expensive-to-evaluate objectives. Some examples include tuning the knobs of a compiler to optimize the performance and efficiency of a set of software programs; designing new materials to optimize strength, elasticity, and durability; and designing hardware to optimize performance, power, and area. Bayesian Optimization (BO) is an effective framework for solving black-box optimization problems with expensive function evaluations. The key idea behind BO is to build a cheap statistical surrogate model (e.g., a Gaussian process) from the available experimental data, and to employ it to intelligently select the sequence of experiments or function evaluations via an acquisition function, e.g., expected improvement (EI) or upper confidence bound (UCB).
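
To make this loop concrete, below is a minimal sketch of BO on a 1-D toy problem, using scikit-learn's Gaussian process as the surrogate and expected improvement as the acquisition function. The objective `f`, the candidate grid, the number of iterations, and all hyper-parameter choices are illustrative assumptions, not part of the tutorial itself.

```python
# Minimal Bayesian optimization sketch: GP surrogate + EI acquisition.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):
    """Toy stand-in for an expensive black-box objective."""
    return -np.sin(3 * x) - x**2 + 0.7 * x

def expected_improvement(X_cand, gp, y_best, xi=0.01):
    """EI: expected gain over the incumbent best value y_best."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)          # guard against zero variance
    z = (mu - y_best - xi) / sigma
    return (mu - y_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 2.0, size=(3, 1))      # small initial design
y = f(X).ravel()

for _ in range(10):                          # sequential experiment selection
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)                             # fit cheap surrogate to data so far
    X_cand = np.linspace(-1.0, 2.0, 1000).reshape(-1, 1)
    ei = expected_improvement(X_cand, gp, y.max())
    x_next = X_cand[np.argmax(ei)]           # maximize the acquisition function
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next).item())       # run the "expensive" experiment

print("best input:", X[np.argmax(y)].item(), "best value:", y.max())
```

The same structure carries over to the advanced settings below; what changes is the surrogate model (e.g., multi-output or multi-fidelity GPs) and the acquisition function.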

There is a large body of work on BO for single-objective optimization in the single-fidelity setting (i.e., experiments are expensive but accurate) over continuous input spaces. However, recent BO work has focused on more challenging problem settings, including optimization of multiple objectives; optimization with multi-fidelity function evaluations (which vary in resource cost and evaluation accuracy); optimization with black-box constraints, with applications to safety; optimization over combinatorial spaces (e.g., sequences, trees, and graphs); and optimization over hybrid spaces (mixtures of discrete and continuous input variables). The goal of this tutorial is to present a comprehensive survey of BO, from foundations to these recent advances, focusing on challenges, principles, algorithmic ideas and their connections, and important real-world applications.

The tutorial starts at 8:30 AM PST on 23rd February, 2022.

Speakers

Aryan Deshwal

Jana Doppa

Syrine Belakaria