The number of binary classification tasks on a finite domain (representing a set of vectors of features, measurements, or observations) grows exponentially with its size. However, for a given application area, the relevance of many such tasks may be low or negligible. The lecture will investigate a probabilistic framework modeling prior knowledge about the probabilities that the presence of certain features implies a property described by one of the classes. The impact of increasing domain size on correlations between the input-output mappings of computational models and randomly chosen classifiers will be analyzed. It will be shown that on large domains, correlations between tasks and computational units are sharply concentrated around their mean values. Probabilistic bounds on these correlations will be derived using various concentration inequalities, some of which hold even without the “naive Bayes assumption”. It will be shown that the performance of random classifiers is almost deterministic, in the sense that a given class of computational models can either approximate well almost all tasks or almost none of them. Consequences for the choice of optimal computational models will be discussed.
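As an illustration of the flavor of concentration phenomenon referred to above (a generic sketch under an independence assumption, not the specific bounds derived in the lecture): for a domain $X$ with $|X| = m$, a fixed computational unit $g : X \to \{-1,1\}$, and a classifier $f : X \to \{-1,1\}$ whose values $f(x)$ are chosen independently at random, the correlation
$$\rho(f,g) \;=\; \frac{1}{m}\sum_{x \in X} f(x)\,g(x)$$
satisfies, by Hoeffding's inequality,
$$\Pr\Bigl(\bigl|\rho(f,g) - \mathbb{E}[\rho(f,g)]\bigr| \ge \varepsilon\Bigr) \;\le\; 2\exp\!\left(-\frac{m\,\varepsilon^{2}}{2}\right),$$
so deviations of the correlation from its mean become exponentially unlikely as the domain size $m$ grows.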