List of references

General introduction and survey

  • Garnett, R. (2022). Bayesian Optimization. Cambridge University Press.
  • Shahriari, B., Swersky, K., Wang, Z., Adams, R. P., & De Freitas, N. (2015). Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104(1), 148-175.
  • Frazier, P. I. (2018). A tutorial on Bayesian optimization. arXiv preprint arXiv:1807.02811.
  • Greenhill, S., Rana, S., Gupta, S., Vellanki, P., & Venkatesh, S. (2020). Bayesian optimization for adaptive experimental design: A review. IEEE Access, 8, 13937-13948.
  • Brochu, E., Cora, V. M., & De Freitas, N. (2010). A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599.

Gaussian processes

  • Rasmussen, C. E., & Williams, C. K. I. (2006). Gaussian Processes for Machine Learning. Cambridge, MA: MIT Press.
  • Leibfried, F., Dutordoir, V., John, S. T., & Durrande, N. (2020). A tutorial on sparse Gaussian processes and variational inference. arXiv preprint arXiv:2012.13962.
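
Since nearly every entry in this list builds on the same GP predictive equations, a minimal NumPy sketch may help fix notation. This is illustrative only: it assumes a zero mean function and a squared-exponential kernel with hand-picked hyperparameters, and skips the hyperparameter learning that the references above cover.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel k(x, x') = v * exp(-||x - x'||^2 / (2 l^2))."""
    sqdist = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * sqdist / lengthscale ** 2)

def gp_posterior(X_train, y_train, X_test, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP at X_test,
    computed via a Cholesky factorization of the training kernel matrix."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(rbf_kernel(X_test, X_test) - v.T @ v)
    return mean, var

# Condition on three noise-free observations of sin(x) and predict at x = 1.5.
X = np.array([[0.0], [1.0], [2.0]])
y = np.sin(X).ravel()
mu, var = gp_posterior(X, y, np.array([[1.5]]))
```

Libraries such as GPflow and GPyTorch implement the same computation with stable factorizations and learned hyperparameters.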

Acquisition functions and their optimization

  • Močkus, J. (1975). On Bayesian methods for seeking the extremum. In Optimization techniques IFIP technical conference (pp. 400-404). Springer, Berlin, Heidelberg.
  • Srinivas, N., Krause, A., Kakade, S. M., & Seeger, M. (2009). Gaussian process optimization in the bandit setting: No regret and experimental design. arXiv preprint arXiv:0912.3995.
  • Frazier, P., Powell, W., & Dayanik, S. (2009). The knowledge-gradient policy for correlated normal beliefs. INFORMS Journal on Computing, 21(4), 599-613.
  • Hennig, P., & Schuler, C. J. (2012). Entropy search for information-efficient global optimization. Journal of Machine Learning Research, 13, 1809-1837.
  • Hernández-Lobato, J. M., Hoffman, M. W., & Ghahramani, Z. (2014). Predictive entropy search for efficient global optimization of black-box functions. In Advances in Neural Information Processing Systems (pp. 918-926).
  • Wang, Z., & Jegelka, S. (2017, July). Max-value entropy search for efficient Bayesian optimization. In International Conference on Machine Learning (pp. 3627-3635). PMLR.
  • Jiang, S., Jiang, D., Balandat, M., Karrer, B., Gardner, J., & Garnett, R. (2020). Efficient nonmyopic Bayesian optimization via one-shot multi-step trees. Advances in Neural Information Processing Systems, 33, 18039-18049.
  • Lam, R., Willcox, K., & Wolpert, D. H. (2016). Bayesian optimization with a finite budget: An approximate dynamic programming approach. Advances in Neural Information Processing Systems, 29.
  • González, J., Osborne, M., & Lawrence, N. (2016, May). GLASSES: Relieving the myopia of Bayesian optimisation. In Artificial Intelligence and Statistics (pp. 790-799). PMLR.
  • Balandat, M., Karrer, B., Jiang, D., Daulton, S., Letham, B., Wilson, A. G., & Bakshy, E. (2020). BoTorch: A framework for efficient Monte-Carlo Bayesian optimization. Advances in Neural Information Processing Systems, 33, 21524-21538.
  • Kim, J., & Choi, S. (2020, September). On local optimizers of acquisition functions in Bayesian optimization. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 675-690). Springer, Cham.
  • Wilson, J., Hutter, F., & Deisenroth, M. (2018). Maximizing acquisition functions for Bayesian optimization. Advances in Neural Information Processing Systems, 31.
  • Grosnit, A., Cowen-Rivers, A. I., Tutunov, R., Griffiths, R. R., Wang, J., & Bou-Ammar, H. (2021). Are we forgetting about compositional optimisers in Bayesian optimisation? Journal of Machine Learning Research, 22(160), 1-78.
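
As a concrete instance of the acquisition functions surveyed here, expected improvement (going back to Močkus) has a closed form under a Gaussian posterior. A short sketch for minimization; the exploration offset `xi` is an optional knob, not part of the original formulation:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_f, xi=0.0):
    """Closed-form EI for minimization under a Gaussian posterior N(mu, sigma^2):
    EI(x) = E[max(best_f - f(x) - xi, 0)]."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    sigma = np.maximum(sigma, 1e-12)  # guard against a zero predictive std
    z = (best_f - mu - xi) / sigma
    return (best_f - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# At a point whose predicted mean equals the incumbent, EI = sigma * pdf(0).
ei = expected_improvement(mu=0.0, sigma=1.0, best_f=0.0)
```

In practice this function is maximized over the design space with multi-start gradient methods, which is exactly the inner optimization problem studied by Wilson et al. and Kim & Choi above.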

Multi-fidelity Bayesian optimization

  • Huang, D., Allen, T. T., Notz, W. I., & Miller, R. A. (2006). Sequential kriging optimization using multiple-fidelity evaluations. Structural and Multidisciplinary Optimization, 32(5), 369-382.
  • Swersky, K., Snoek, J., & Adams, R. P. (2013). Multi-task Bayesian optimization. Advances in Neural Information Processing Systems, 26.
  • Kandasamy, K., Dasarathy, G., Schneider, J., & Póczos, B. (2017, July). Multi-fidelity Bayesian optimisation with continuous approximations. In International Conference on Machine Learning (pp. 1799-1808). PMLR.
  • Klein, A., Falkner, S., Bartels, S., Hennig, P., & Hutter, F. (2017, April). Fast Bayesian optimization of machine learning hyperparameters on large datasets. In Artificial Intelligence and Statistics (pp. 528-536). PMLR.
  • Song, J., Chen, Y., & Yue, Y. (2019, April). A general framework for multi-fidelity Bayesian optimization with Gaussian processes. In The 22nd International Conference on Artificial Intelligence and Statistics (pp. 3158-3167). PMLR.
  • Wu, J., Toscano-Palmerin, S., Frazier, P. I., & Wilson, A. G. (2020, August). Practical multi-fidelity Bayesian optimization for hyperparameter tuning. In Uncertainty in Artificial Intelligence (pp. 788-798). PMLR.
  • Takeno, S., Fukuoka, H., Tsukada, Y., Koyama, T., Shiga, M., Takeuchi, I., & Karasuyama, M. (2020, November). Multi-fidelity Bayesian optimization with max-value entropy search and its parallelization. In International Conference on Machine Learning (pp. 9334-9345). PMLR.
  • Zhang, Y., Dai, Z., & Low, B. K. H. (2020, August). Bayesian optimization with binary auxiliary information. In Uncertainty in Artificial Intelligence (pp. 1222-1232). PMLR.
  • Li, S., Xing, W., Kirby, R., & Zhe, S. (2020). Multi-fidelity Bayesian optimization via deep neural networks. Advances in Neural Information Processing Systems, 33, 8521-8531.

Combinatorial Bayesian optimization

  • Deshwal, A., Belakaria, S., Doppa, J. R., & Fern, A. (2020). Optimizing discrete spaces via expensive evaluations: A learning to search framework. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 04, pp. 3773-3780).
  • Deshwal, A., Belakaria, S., & Doppa, J. R. (2021). Mercer features for efficient combinatorial Bayesian optimization. In Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI) (pp. 7210-7218).
  • Deshwal, A., & Doppa, J. (2021). Combining Latent Space and Structured Kernels for Bayesian Optimization over Combinatorial Spaces. Advances in Neural Information Processing Systems, 34.
  • Deshwal, A., Belakaria, S., & Doppa, J. R. (2020). Scalable combinatorial Bayesian optimization with tractable statistical models. arXiv preprint arXiv:2008.08177.
  • Baptista, R., & Poloczek, M. (2018, July). Bayesian optimization of combinatorial structures. In International Conference on Machine Learning (pp. 462-471). PMLR.
  • Garrido-Merchán, E. C., & Hernández-Lobato, D. (2020). Dealing with categorical and integer-valued variables in Bayesian optimization with Gaussian processes. Neurocomputing, 380, 20-35.
  • Oh, C., Tomczak, J., Gavves, E., & Welling, M. (2019). Combinatorial Bayesian optimization using the graph Cartesian product. Advances in Neural Information Processing Systems, 32.
  • Swersky, K., Rubanova, Y., Dohan, D., & Murphy, K. (2020). Amortized Bayesian optimization over discrete spaces. In Conference on Uncertainty in Artificial Intelligence (pp. 769-778). PMLR.
  • Moss, H., Leslie, D., Beck, D., Gonzalez, J., & Rayson, P. (2020). BOSS: Bayesian optimization over string spaces. Advances in Neural Information Processing Systems, 33, 15476-15486.
  • Buathong, P., Ginsbourger, D., & Krityakierne, T. (2020). Kernels over sets of finite sets using RKHS embeddings, with application to Bayesian (combinatorial) optimization. In International Conference on Artificial Intelligence and Statistics (pp. 2731-2741). PMLR.
  • Dadkhahi, H., Shanmugam, K., Rios, J., Das, P., Hoffman, S. C., Loeffler, T. D., & Sankaranarayanan, S. (2020, August). Combinatorial black-box optimization with expert advice. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 1918-1927).
  • Kim, J., McCourt, M., You, T., Kim, S., & Choi, S. (2021). Bayesian optimization with approximate set kernels. Machine Learning, 110(5), 857-879.
  • Griffiths, R. R., & Hernández-Lobato, J. M. (2020). Constrained Bayesian optimization for automatic chemical design using variational autoencoders. Chemical Science, 11(2), 577-586.
  • Maus, N., Jones, H. T., Moore, J. S., Kusner, M. J., Bradshaw, J., & Gardner, J. R. (2022). Local latent space Bayesian optimization over structured inputs. Advances in Neural Information Processing Systems, 35.

Causal Bayesian optimization

  • Aglietti, V., Dhir, N., González, J., & Damoulas, T. (2021). Dynamic causal Bayesian optimization. Advances in Neural Information Processing Systems, 34.
  • Aglietti, V., Damoulas, T., Álvarez, M. A., & González, J. (2020). Multi-task causal learning with Gaussian processes. Advances in Neural Information Processing Systems, 33.
  • Aglietti, V., Lu, X., Paleyes, A., & González, J. (2020). Causal Bayesian optimization. In International Conference on Artificial Intelligence and Statistics (AISTATS). PMLR.
  • Aglietti, V., Bonilla, E., Damoulas, T., & Cripps, S. (2019). Structured variational inference in continuous Cox process models. Advances in Neural Information Processing Systems, 32.
  • Aglietti, V., Damoulas, T., & Bonilla, E. (2019). Efficient inference in multi-task Cox process models. In International Conference on Artificial Intelligence and Statistics (AISTATS). PMLR.

BO over hybrid spaces

  • Deshwal, A., Belakaria, S., & Doppa, J. R. (2021). Bayesian optimization over hybrid spaces. In International Conference on Machine Learning (pp. 2632-2643). PMLR.
  • Daxberger, E., Makarova, A., Turchetta, M., & Krause, A. (2019). Mixed-variable Bayesian optimization. arXiv preprint arXiv:1907.01329.
  • Oh, C., Gavves, E., & Welling, M. (2021). Mixed variable Bayesian optimization with frequency modulated kernels. In Uncertainty in Artificial Intelligence (pp. 950-960). PMLR.
  • Ru, B., Alvi, A., Nguyen, V., Osborne, M. A., & Roberts, S. (2020, November). Bayesian optimisation over multiple continuous and categorical inputs. In International Conference on Machine Learning (pp. 8276-8285). PMLR.

High-dimensional Bayesian optimization

  • Eriksson, D., Pearce, M., Gardner, J., Turner, R. D., & Poloczek, M. (2019). Scalable global optimization via local Bayesian optimization. Advances in Neural Information Processing Systems, 32.
  • Li, C., Gupta, S., Rana, S., Nguyen, V., Venkatesh, S., & Shilton, A. (2017). High Dimensional Bayesian Optimization Using Dropout. Proceedings of the 26th International Joint Conference on Artificial Intelligence, 2096–2102. Melbourne, Australia: AAAI Press.
  • Mutny, M., & Krause, A. (2018). Efficient high dimensional Bayesian optimization with additivity and quadrature Fourier features. Advances in Neural Information Processing Systems, 31.
  • Wang, Z., Gehring, C., Kohli, P., & Jegelka, S. (2018, March). Batched large-scale Bayesian optimization in high-dimensional spaces. In International Conference on Artificial Intelligence and Statistics (pp. 745-754). PMLR.
  • Oh, C., Gavves, E., & Welling, M. (2018, July). BOCK: Bayesian optimization with cylindrical kernels. In International Conference on Machine Learning (pp. 3868-3877). PMLR.
  • Rolland, P., Scarlett, J., Bogunovic, I., & Cevher, V. (2018, March). High-dimensional Bayesian optimization via additive models with overlapping groups. In International Conference on Artificial Intelligence and Statistics (pp. 298-307). PMLR.
  • Li, C. L., Kandasamy, K., Póczos, B., & Schneider, J. (2016, May). High dimensional Bayesian optimization via restricted projection pursuit models. In Artificial Intelligence and Statistics (pp. 884-892). PMLR.
  • Zhang, M., Li, H., & Su, S. (2019). High dimensional Bayesian optimization via supervised dimension reduction. arXiv preprint arXiv:1907.08953.
  • Hoang, T. N., Hoang, Q. M., Ouyang, R., & Low, K. H. (2018, April). Decentralized high-dimensional Bayesian optimization with factor graphs. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 32, No. 1).
  • Maddox, W. J., Balandat, M., Wilson, A. G., & Bakshy, E. (2021). Bayesian optimization with high-dimensional outputs. Advances in Neural Information Processing Systems, 34.
  • Wan, X., Nguyen, V., Ha, H., Ru, B., Lu, C., & Osborne, M. A. (2021). Think global and act local: Bayesian optimisation over high-dimensional categorical and mixed search spaces. arXiv preprint arXiv:2102.07188.
  • Eriksson, D., & Jankowiak, M. (2021). High-dimensional Bayesian optimization with sparse axis-aligned subspaces. In Uncertainty in Artificial Intelligence (pp. 493-503). PMLR.

Batch Bayesian optimization

  • Wu, J., & Frazier, P. (2016). The parallel knowledge gradient method for batch Bayesian optimization. Advances in Neural Information Processing Systems, 29.
  • Ginsbourger, D., Riche, R. L., & Carraro, L. (2010). Kriging is well-suited to parallelize optimization. In Computational intelligence in expensive optimization problems (pp. 131-162). Springer, Berlin, Heidelberg.
  • González, J., Dai, Z., Hennig, P., & Lawrence, N. (2016, May). Batch Bayesian optimization via local penalization. In Artificial Intelligence and Statistics (pp. 648-657). PMLR.
  • Nguyen, V., Rana, S., Gupta, S. K., Li, C., & Venkatesh, S. (2016, December). Budgeted batch Bayesian optimization. In 2016 IEEE 16th International Conference on Data Mining (ICDM) (pp. 1107-1112). IEEE.
  • Wang, Z., Hutter, F., Zoghi, M., Matheson, D., & de Freitas, N. (2016). Bayesian optimization in a billion dimensions via random embeddings. Journal of Artificial Intelligence Research, 55, 361-387.
  • Azimi, J., Fern, A., & Fern, X. (2010). Batch Bayesian optimization via simulation matching. Advances in Neural Information Processing Systems, 23.
  • Azimi, J., Jalali, A., & Fern, X. (2012). Hybrid batch Bayesian optimization. arXiv preprint arXiv:1202.5597.
  • Gong, C., Peng, J., & Liu, Q. (2019, May). Quantile Stein variational gradient descent for batch Bayesian optimization. In International Conference on Machine Learning (pp. 2347-2356). PMLR.
  • Kandasamy, K., Krishnamurthy, A., Schneider, J., & Póczos, B. (2018, March). Parallelised Bayesian optimisation via Thompson sampling. In International Conference on Artificial Intelligence and Statistics (pp. 133-142). PMLR.
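
One of the simplest batch strategies in this list, in the spirit of Kandasamy et al.'s parallelised Thompson sampling, reduces to drawing joint posterior samples and letting each sample pick a point. A toy sketch over a discrete candidate set; the posterior mean `mu` and covariance `cov` are assumed given (e.g. from a GP):

```python
import numpy as np

def thompson_batch(mu, cov, batch_size, seed=0):
    """Choose a batch of candidate indices for minimization: each of the
    batch_size joint posterior samples votes with its own argmin."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mu, cov, size=batch_size)
    return [int(np.argmin(s)) for s in samples]

# Usage: three candidates, the first clearly best under a confident posterior.
picks = thompson_batch(np.array([0.0, 10.0, 10.0]), 1e-6 * np.eye(3), batch_size=4)
```

A naive version like this can select duplicate points when the posterior is confident; practical variants condition on fantasized observations or penalize already-chosen points (cf. González et al.'s local penalization above).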

Constrained Bayesian optimization

  • Gardner, J. R., Kusner, M. J., Xu, Z. E., Weinberger, K. Q., & Cunningham, J. P. (2014, June). Bayesian optimization with inequality constraints. In International Conference on Machine Learning (pp. 937-945). PMLR.
  • Gelbart, M. A., Snoek, J., & Adams, R. P. (2014). Bayesian optimization with unknown constraints. arXiv preprint arXiv:1403.5607.
  • Hernández-Lobato, J. M., Gelbart, M. A., Hoffman, M. W., Adams, R. P., & Ghahramani, Z. (2015). Predictive entropy search for Bayesian optimization with unknown constraints. In International Conference on Machine Learning. PMLR.
  • Perrone, V., et al. (2019). Constrained Bayesian optimization with max-value entropy search. arXiv preprint arXiv:1910.07003.
  • Letham, B., Karrer, B., Ottoni, G., & Bakshy, E. (2019). Constrained Bayesian optimization with noisy experiments. Bayesian Analysis, 14(2), 495-519.
  • Eriksson, D., & Poloczek, M. (2021). Scalable constrained Bayesian optimization. In International Conference on Artificial Intelligence and Statistics. PMLR.
  • Chen, W., Liu, S., & Tang, K. (2021). A new knowledge gradient-based method for constrained Bayesian optimization. arXiv preprint arXiv:2101.08743.
  • Takeno, S., Tamura, T., Shitara, K., & Karasuyama, M. (2021). Sequential- and parallel-constrained max-value entropy search via information lower bound. arXiv preprint arXiv:2102.09788.

Multi-objective Bayesian optimization

  • Belakaria, S., Deshwal, A., & Doppa, J. R. (2019). Max-value entropy search for multi-objective Bayesian optimization. Advances in Neural Information Processing Systems, 32.
  • Belakaria, S., Deshwal, A., Jayakodi, N. K., & Doppa, J. R. (2020, April). Uncertainty-aware search framework for multi-objective Bayesian optimization. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 06, pp. 10044-10052).
  • Belakaria, S., Deshwal, A., & Doppa, J. R. (2020, April). Multi-fidelity multi-objective Bayesian optimization: An output space entropy search approach. In Proceedings of the AAAI Conference on artificial intelligence (Vol. 34, No. 06, pp. 10035-10043).
  • Belakaria, S., Deshwal, A., & Doppa, J. R. (2021). Output space entropy search framework for multi-objective Bayesian optimization. Journal of Artificial Intelligence Research, 72, 667-715.
  • Knowles, J. (2006). ParEGO: A hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems. IEEE Transactions on Evolutionary Computation, 10(1), 50-66.
  • Emmerich, M., & Klinkenberg, J. W. (2008). The computation of the expected improvement in dominated hypervolume of Pareto front approximations. Technical report, Leiden University.
  • Zuluaga, M., Sergent, G., Krause, A., & Püschel, M. (2013, February). Active learning for multi-objective optimization. In International Conference on Machine Learning (pp. 462-470). PMLR.
  • Picheny, V. (2015). Multiobjective optimization using Gaussian process emulators via stepwise uncertainty reduction. Statistics and Computing, 25(6), 1265-1280.
  • Hernández-Lobato, D., Hernández-Lobato, J. M., Shah, A., & Adams, R. (2016, June). Predictive entropy search for multi-objective Bayesian optimization. In International Conference on Machine Learning (pp. 1492-1501). PMLR.
  • Shah, A., & Ghahramani, Z. (2016, June). Pareto frontier learning with expensive correlated objectives. In International Conference on Machine Learning (pp. 1919-1927). PMLR.
  • Paria, B., Kandasamy, K., & Póczos, B. (2020, August). A flexible framework for multi-objective Bayesian optimization using random scalarizations. In Uncertainty in Artificial Intelligence (pp. 766-776). PMLR.
  • Zhang, R., & Golovin, D. (2020). Random hypervolume scalarizations for provable multi-objective black box optimization. In International Conference on Machine Learning. PMLR.
  • Daulton, S., Balandat, M., & Bakshy, E. (2020). Differentiable expected hypervolume improvement for parallel multi-objective Bayesian optimization. Advances in Neural Information Processing Systems, 33, 9851-9864.
  • Suzuki, S., Takeno, S., Tamura, T., Shitara, K., & Karasuyama, M. (2020, November). Multi-objective Bayesian optimization using Pareto-frontier entropy. In International Conference on Machine Learning (pp. 9279-9288). PMLR.
  • Konakovic Lukovic, M., Tian, Y., & Matusik, W. (2020). Diversity-guided multi-objective Bayesian optimization with batch evaluations. Advances in Neural Information Processing Systems, 33, 17708-17720.
  • Daulton, S., Balandat, M., & Bakshy, E. (2021). Parallel Bayesian Optimization of Multiple Noisy Objectives with Expected Hypervolume Improvement. Advances in Neural Information Processing Systems, 34.
  • Garrido-Merchán, E. C., & Hernández-Lobato, D. (2019). Predictive entropy search for multi-objective Bayesian optimization with constraints. Neurocomputing, 361, 50-68.
  • Fernández-Sánchez, D., Garrido-Merchán, E. C., & Hernández-Lobato, D. (2020). Improved max-value entropy search for multi-objective Bayesian optimization with constraints. arXiv preprint arXiv:2011.01150.
  • Chowdhury, S. R., & Gopalan, A. (2021, March). No-regret algorithms for multi-task Bayesian optimization. In International Conference on Artificial Intelligence and Statistics (pp. 1873-1881). PMLR.

BO over conditional/tree-structured search space

  • Jenatton, R., Archambeau, C., González, J., & Seeger, M. (2017, July). Bayesian optimization with tree-structured dependencies. In International Conference on Machine Learning (pp. 1655-1664). PMLR.
  • Swersky, K., Duvenaud, D., Snoek, J., Hutter, F., & Osborne, M. A. (2014). Raiders of the lost architecture: Kernels for Bayesian optimization in conditional parameter spaces. arXiv preprint arXiv:1409.4011.
  • Ma, X., & Blaschko, M. (2020, June). Additive tree-structured covariance function for conditional parameter spaces in Bayesian optimization. In International Conference on Artificial Intelligence and Statistics (pp. 1015-1025). PMLR.