-
Chapter and Conference Paper
Big Transfer (BiT): General Visual Representation Learning
Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervise...
-
Chapter and Conference Paper
A Statistical Learning Perspective of Genetic Programming
This paper proposes a theoretical analysis of Genetic Programming (GP) from the perspective of statistical learning theory, a well-grounded mathematical toolbox for machine learning. By computing the Vapnik-Ch...
-
Chapter
Robust Optimizers for Nonlinear Programming in Approximate Dynamic Programming
Many stochastic dynamic programming tasks in continuous action-spaces are tackled through discretization. Here we avoid discretization; approximate dynamic programming (ADP) then involves (i) many learning ta...
-
Chapter and Conference Paper
General Lower Bounds for Evolutionary Algorithms
Evolutionary optimization, which includes genetic optimization, is a general framework for optimization. It is known to be (i) easy to use, (ii) robust, (iii) derivative-free, and (iv) unfortunately slow. Recent work [8] in p...
-
Chapter and Conference Paper
On the Ultimate Convergence Rates for Isotropic Algorithms and the Best Choices Among Various Forms of Isotropy
In this paper, we show universal lower bounds for isotropic algorithms, that hold for any algorithm such that each new point is the sum of one already visited point plus one random isotropic direction multipli...
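The class of algorithms covered by these bounds is defined directly in the abstract: each new point is one already visited point plus a random isotropic direction multiplied by a step size. A minimal sketch of such an algorithm, in a (1+1)-style search loop (the function names and the step-size adaptation rule are illustrative assumptions, not the paper's method):

```python
import math
import random

def isotropic_direction(dim, rng):
    # Sample a uniformly random unit vector by normalizing a Gaussian draw.
    v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def isotropic_search(f, x0, sigma=0.5, iters=300, seed=0):
    """Sketch of an isotropic algorithm: each candidate is a visited
    point plus one random isotropic direction times a step size."""
    rng = random.Random(seed)
    best_x, best_f = list(x0), f(x0)
    for _ in range(iters):
        d = isotropic_direction(len(best_x), rng)
        cand = [xi + sigma * di for xi, di in zip(best_x, d)]
        fc = f(cand)
        if fc < best_f:
            best_x, best_f = cand, fc
        else:
            sigma *= 0.98  # shrink the step on failure (assumed rule)
    return best_x, best_f

# Usage: minimize the sphere function from an arbitrary start.
sphere = lambda x: sum(xi * xi for xi in x)
x_best, f_best = isotropic_search(sphere, [1.0, 1.0, 1.0])
```

The paper's lower bounds constrain how fast any scheme of this shape can converge, regardless of how cleverly the step size is chosen.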
-
Chapter and Conference Paper
From Factorial and Hierarchical HMM to Bayesian Network: A Representation Change Algorithm
Factorial Hierarchical Hidden Markov Models (FHHMM) provide a powerful way to endow an autonomous mobile robot with efficient map-building and map-navigation behaviors. However, the inference mechanism in FHH...