Yannis Yatracos
IBD, Room 24
AMU - AMSE
5-9 boulevard Maurice Bourdet
13001 Marseille
Sullivan Hué: sullivan.hue[at]univ-amu.fr
Michel Lubrano: michel.lubrano[at]univ-amu.fr
Breiman (2001) urged statisticians to provide tools for data of the form X = s(Y) or X = s(θ, Y), but his suggestion was ignored; s is either known or a black box, θ ∈ Θ is a parameter, and Y is random, latent or not. Computer scientists, however, do work with X = s(θ, Y), calling s a learning machine. In this talk, statistical inference tools are presented for θ when X = s(θ) with Y latent:

a) the Empirical Discrimination Index (EDI), to detect a.s. θ-discrimination and identifiability;

b) matching estimates of θ, with upper bounds on the errors in probability that depend on the “massiveness” of Θ;

c) for a known stochastic model of X, Laplace’s 1774 frequentist Principle is proved without Bayes’ rule, thus obtaining a unique Fiducial distribution and showing, finally, that Laplace’s and Fisher’s intuitions were correct! Frequentists can now reclaim, at least, distributions obtained with flat improper priors. For an unknown X-model, an Approximate Fiducial distribution for θ is obtained.

These tools are used in ABC, providing F-ABC, which includes all θ* drawn from a Θ-sampler, unlike the Rubin (1984) ABC-rejection method followed until now (a toy contrast is sketched below). Thus, when X = s(θ) and a cdf F_θ is assumed for X, a risk-averse researcher can instead use the sampler s together with a)-c), since F_θ and an assumed θ-prior may be wrong. Le Cam’s Statistical Experiments, which use {F_θ*, θ* ∈ Θ}, are now extended to Data Generating Experiments that use instead {s(θ*), θ* ∈ Θ}; these allow “learning” the cdfs F_θ* with repeated “training” samples.
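To make the ABC contrast concrete, here is a minimal toy sketch in Python. Everything in it (the Gaussian simulator standing in for s, the sample-mean discrepancy, the tolerance, and the Gaussian kernel weights) is an illustrative assumption, not the F-ABC construction of the talk; it only shows how retaining all θ* draws differs from rejection.

    import numpy as np

    rng = np.random.default_rng(0)

    def s(theta, n=50):
        # Stand-in black-box sampler X = s(theta, Y), Y latent (Gaussian noise here).
        return theta + rng.normal(size=n)

    x_obs = s(1.5)                           # observed data, "true" theta = 1.5
    theta_star = rng.uniform(-5, 5, 2000)    # draws from a Theta-sampler

    # Discrepancy between simulated and observed data (sample-mean distance).
    dist = np.array([abs(s(t).mean() - x_obs.mean()) for t in theta_star])

    # Rubin (1984) ABC-rejection: keep only draws within a tolerance.
    eps = 0.1
    kept = theta_star[dist < eps]

    # F-ABC-style alternative (sketch): keep ALL draws, down-weighting
    # distant ones with a Gaussian kernel (an illustrative choice).
    w = np.exp(-0.5 * (dist / eps) ** 2)
    w /= w.sum()

    print("rejection-ABC mean:", kept.mean())
    print("all-draws weighted mean:", (w * theta_star).sum())

The design point of the contrast: rejection discards every draw outside the tolerance band, while the weighted version uses all θ* produced by the Θ-sampler, which is the feature the abstract attributes to F-ABC.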