Combinatorial bounds of overfitting for threshold classifiers
DOI: https://doi.org/10.13108/2018-10-1-49
Keywords: computational learning theory, empirical risk minimization, combinatorial theory of overfitting
Abstract
Estimating the generalization ability is a fundamental objective of statistical learning theory. However, accurate and computationally efficient bounds are still unknown even for many very simple cases. In this paper, we study one-dimensional threshold decision rules. We use the combinatorial theory of overfitting, which is based on a single probabilistic assumption: all partitions of a set of objects into an observed training sample and a hidden test sample are equiprobable. We propose a polynomial algorithm for computing both the probability of overfitting and the complete cross-validation. The algorithm relies on a recurrent computation of the number of admissible paths between two prescribed points of a three-dimensional lattice under restrictions of a special form. We compare the obtained sharp estimates of the generalization ability with the known upper bounds and demonstrate that the latter are strongly overstated and cannot be applied to practical problems.
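The abstract's polynomial lattice-path algorithm is not reproduced here. As a minimal illustrative sketch only, the following Python code computes the probability of overfitting for one-dimensional threshold rules by brute-force enumeration of all equiprobable train/test partitions; the function names, the toy sample, and the parameters `ell` and `eps` are assumptions for illustration, and the exhaustive enumeration is exponential in the sample size, unlike the polynomial algorithm proposed in the paper.

```python
from itertools import combinations

def thresholds(xs):
    # Candidate thresholds for 1-D rules h_t(x) = [x >= t]:
    # midpoints between consecutive distinct sample points plus the two extremes.
    pts = sorted(set(xs))
    return [pts[0] - 1.0] + [(a + b) / 2 for a, b in zip(pts, pts[1:])] + [pts[-1] + 1.0]

def errors(t, sample):
    # Number of misclassified objects for the rule "predict 1 iff x >= t".
    return sum((x >= t) != y for x, y in sample)

def overfitting_probability(sample, ell, eps):
    """Exact probability of overfitting by brute force (illustrative, not the
    paper's polynomial algorithm): enumerate all C(L, ell) train/test partitions
    (assumed equiprobable), run empirical risk minimization over threshold rules
    on the training part, and check whether the test error rate exceeds the
    training error rate by at least eps."""
    L = len(sample)
    k = L - ell
    cands = thresholds([x for x, _ in sample])
    bad = total = 0
    for train_idx in combinations(range(L), ell):
        train_set = set(train_idx)
        train = [sample[i] for i in train_idx]
        test = [sample[i] for i in range(L) if i not in train_set]
        # ERM: pick the threshold with the fewest training errors (ties -> first found).
        t_best = min(cands, key=lambda t: errors(t, train))
        nu_train = errors(t_best, train) / ell
        nu_test = errors(t_best, test) / k
        bad += (nu_test - nu_train >= eps)
        total += 1
    return bad / total

if __name__ == "__main__":
    # Tiny hypothetical sample: (feature value, binary label).
    data = [(0.1, 0), (0.3, 0), (0.5, 1), (0.7, 0), (0.9, 1), (1.1, 1)]
    print(overfitting_probability(data, ell=3, eps=0.1))
```

The brute-force reference is useful mainly for checking a faster implementation on small samples, since it evaluates every partition explicitly.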
Published: 20.03.2018
Section: Article