- Article: Rethinking statistical learning theory: learning using statistical invariants
  This paper introduces a new learning paradigm, called Learning Using Statistical Invariants (LUSI), which differs from the classical one. In the classical paradigm, the learning machine constructs a classif...
- Article: Knowledge transfer in SVM and neural networks
  The paper considers general machine learning models, where knowledge transfer is positioned as the main method to improve their convergence properties. Previous research was focused on mechanisms of knowledge ...
- Chapter and Conference Paper: Learning with Intelligent Teacher
  The paper considers several topics on learning with privileged information: (1) general machine learning models, where privileged information is positioned as the main mechanism to improve their convergence pr...
- Chapter and Conference Paper: Statistical Inference Problems and Their Rigorous Solutions
  This paper presents direct settings and rigorous solutions of Statistical Inference problems. It shows that rigorous solutions require solving ill-posed Fredholm integral equations of the first kind in the sit...
- Chapter and Conference Paper: Learning with Intelligent Teacher: Similarity Control and Knowledge Transfer
  This paper introduces an advanced setting of the machine learning problem in which an Intelligent Teacher is involved. During the training stage, the Intelligent Teacher provides the Student with information that contains, al...
- Article (Open Access): Falsificationism and Statistical Learning Theory: Comparing the Popper and Vapnik-Chervonenkis Dimensions
  We compare Karl Popper’s ideas concerning the falsifiability of a theory with similar notions from the part of statistical learning theory known as VC-theory. Popper’s notion of the dimension of a theory is contr...
- Article: Large margin vs. large volume in transductive learning
  We consider a large volume principle for transductive learning that prioritizes the transductive equivalence classes according to the volume they occupy in hypothesis space. We approximate volume maximization ...
- Chapter and Conference Paper: Large Margin vs. Large Volume in Transductive Learning
  We focus on distribution-free transductive learning. In this setting the learning algorithm is given a ‘full sample’ of unlabeled points. Then, a training sample is selected uniformly at random from the full samp...
- Book
- Chapter: Methods of Expected-Risk Minimization
- Chapter: Noninductive Methods of Inference: Direct Inference Instead of Generalization (2000–…)
- Chapter: Estimation of Regression Parameters
- Chapter: Realism and Instrumentalism: Classical Statistics and VC Theory (1960–1980)
- Chapter: Methods of Parametric Statistics for the Problem of Regression Estimation
- Chapter: A Method of Minimizing Empirical Risk for the Problem of Pattern Recognition
- Chapter: The Method of Structural Minimization of Risk
- Chapter: Estimation of Functional Values at Given Points
- Chapter: The Problem of Estimating Dependences from Empirical Data
- Chapter: Falsifiability and Parsimony: VC Dimension and the Number of Entities (1980–2000)
- Chapter: Methods of Parametric Statistics for the Pattern Recognition Problem