Sunday, March 4, 2018

Random sampling (is it defined in probability theory?)


Truly random numbers are not easily obtainable (if they exist at all). The short story: I perceive that probability theory does not contemplate the concept of random sampling. Randomness is used by probability theory, but it is not implied by its axioms.

Random literally means that there is no law (expressed in equations) or algorithm (expressed in actions or in some programming code) that connects one pick in the sequence to another. The elements in the sequence can depend on the others (as described by their correlation), while this dependence does not imply causation (in the sense that one implies the other): "correlation does not imply causation".
Taking the problem from a different perspective, Judea Pearl stresses that probability is about "association", not "causality" (which is, in a sense, the reverse of randomness): "An associational concept is any relationship that can be defined in terms of a joint distribution of observed variables, and a causal concept is any relationship that cannot be defined from the distribution alone. Examples of associational concepts are: correlation, regression, dependence, conditional independence, likelihood, collapsibility, propensity score, risk ratio, odds ratio, marginalization, conditionalization, 'controlling for,' and so on. Examples of causal concepts are: randomization, influence, effect, confounding, 'holding constant,' disturbance, spurious correlation, faithfulness/stability, instrumental variables, intervention, explanation, attribution, and so on. The former can, while the latter cannot, be defined in terms of distribution functions." He also writes: "Every claim invoking causal concepts must rely on some premises that invoke such concepts; it cannot be inferred from, or even defined in terms of, statistical associations alone." [1]
Therefore Pearl, at least in the sense that the elements of a random sequence are not causally related, supports the idea that if probability is not about causality, it is not about randomness either.

Wikipedia also supports my argument: "Axiomatic probability theory deliberately avoids a definition of a random sequence [2]. Traditional probability theory does not state if a specific sequence is random, but generally proceeds to discuss the properties of random variables and stochastic sequences assuming some definition of randomness. The Bourbaki school considered the statement 'let us consider a random sequence' an abuse of language [3]."
The same Wikipedia article explains very clearly the state of the art of the concept of randomness; for the more interested reader, the educational review paper by Volchan [4] is certainly informative.

I report from Wikipedia the current state of the art for dealing with random sequences:
"Three basic paradigms for dealing with random sequences have now emerged [5]:
  •   The frequency / measure-theoretic approach. This approach started with the work of Richard von Mises and Alonzo Church. In the 1960s Per Martin-Löf noticed that the sets coding such frequency-based stochastic properties are a special kind of measure zero sets, and that a more general and smooth definition can be obtained by considering all effectively measure zero sets.
  •   The complexity / compressibility approach. This paradigm was championed by A. N. Kolmogorov, along with contributions from Leonid Levin and Gregory Chaitin. For finite random sequences, Kolmogorov defined the randomness of a string of K zeros and ones as the closeness of its entropy (its Kolmogorov complexity) to K: if the complexity of the string is close to K it is very random, while if the complexity is far below K, it is not so random.
  •   The predictability approach. This paradigm was due to Claus P. Schnorr and uses a slightly different definition of constructive martingales than the martingales used in traditional probability theory. Schnorr showed how the existence of a selective betting strategy implied the existence of a selection rule for a biased sub-sequence. If one only requires a recursive martingale to succeed on a sequence, instead of constructively succeeding on a sequence, then one gets the concept of recursive randomness. Yongge Wang showed that this recursive randomness concept is different from Schnorr's randomness concept.
"In most cases, theorems relating the three paradigms (often equivalence) have been proven."
I do not pretend to have fully understood all of the previous statements. In summary, however, things have to get quite complicated if we want to understand what randomness is. The compressibility paradigm, at least, lends itself to a concrete illustration.
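Here is a minimal sketch (in Python; the choice of language and of example data is mine, not taken from the cited works) that uses a general-purpose compressor as a crude, computable stand-in for Kolmogorov complexity, which is itself incomputable: a string that compresses well admits a description much shorter than itself, and is therefore not random in the above sense.

import os
import zlib

def compression_ratio(data: bytes) -> float:
    # Compressed size over original size: a crude upper-bound proxy
    # for Kolmogorov complexity (which is incomputable).
    return len(zlib.compress(data, 9)) / len(data)

n = 100_000
structured = bytes(i % 256 for i in range(n))  # generated by a short law
chaotic = os.urandom(n)                        # OS entropy source

print(compression_ratio(structured))  # far below 1: a short description exists
print(compression_ratio(chaotic))     # about 1: essentially incompressible

Note that a compressor only ever gives an upper bound on complexity: failure to compress is evidence of randomness, not proof of it.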
Once it is clarified what randomness is, we face the problem of assessing what a random arrangement of an arbitrary set of objects, say $\Omega$, can be. Taking as an example the algorithms used to obtain a random sequence of numbers from a given distribution, we can observe that probability itself can be used to infer a random sequence over a set from a random sequence in $[0,1]$, by inverting the probability $P$.
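As a hedged sketch of this inversion (the exponential distribution is my own choice of example; any distribution with an invertible cumulative $P$ works the same way):

import math
import random

def sample_exponential(lam: float, n: int) -> list:
    # CDF of Exp(lam): P(X <= x) = 1 - exp(-lam * x).
    # Solving u = P(X <= x) gives the inverse x = -ln(1 - u) / lam,
    # so each u drawn from [0, 1) is mapped to an exponential variate.
    return [-math.log(1.0 - random.random()) / lam for _ in range(n)]

print(sample_exponential(lam=2.0, n=5))

The construction inherits its randomness entirely from the input sequence of $u$'s: if they come from a pseudo-random generator, the output is exactly as (non-)random as that generator.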

Random sampling is significant when the domain set is subdivided into disjoint parts, i.e. a partition. Therefore:

Definition: given a set $\Omega$ (endowed with a $\sigma$-algebra), a partition of $\Omega$, denoted ${\mathcal P}(\Omega)$, is a collection of subsets of $\Omega$ such that
$$\bigcup_{x \in {\mathcal P}(\Omega)} x = \Omega \quad {\rm and} \quad \forall y, z \in {\mathcal P}(\Omega),\ y \neq z \Rightarrow y \cap z = \emptyset$$
Through a probability $P$ defined over ${\mathcal P}(\Omega)$, each element $x$ of the partition is mapped into the closed interval $[0,1]$, and it is guaranteed that $P[\bigcup_{x \in {\mathcal P}(\Omega)} x] = 1$.

There is not necessarily an ordering in the partition of $\Omega$, but we can arbitrarily arrange the set and associate each of its elements with a subset of $[0,1]$ whose Lebesgue measure (a.k.a. length) corresponds to its probability. By using the arbitrary order of the partition, we can at the same time build the (cumulative) probability. By arranging or re-arranging the numbers in $[0,1]$, we thus imply (since $P$ is bijective) a re-arrangement of the set ${\mathcal P}(\Omega)$.
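A minimal sketch of this construction (Python again; the three-cell partition and its probabilities are invented here purely for illustration): each element of the arbitrarily ordered partition receives a subinterval of $[0,1]$ whose length equals its probability, and the cumulative sums make the map invertible.

import bisect
import itertools
import random

# Hypothetical partition of Omega into three cells, arbitrarily ordered,
# with their probabilities (they must sum to 1).
partition = ["A", "B", "C"]
probs = [0.5, 0.3, 0.2]

# Cumulative probabilities delimit subintervals of [0, 1]:
# A -> [0.0, 0.5), B -> [0.5, 0.8), C -> [0.8, 1.0).
cumulative = list(itertools.accumulate(probs))

def invert_P(u: float) -> str:
    # Map u in [0, 1) back to the cell whose subinterval contains it;
    # min() guards against floating-point round-off in the last cumulative sum.
    return partition[min(bisect.bisect_right(cumulative, u), len(partition) - 1)]

# A (pseudo-)random sequence in [0, 1) induces a sequence over the partition.
print([invert_P(random.random()) for _ in range(10)])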

Definition: we call a sequence of elements of ${\mathcal P}(\Omega)$, denoted ${\mathcal S}$, a countable collection of elements of ${\mathcal P}(\Omega)$:
$${\mathcal S} := \{ x_1, x_2, \ldots \}$$
Definition: we call a sequence a random sequence if it has no description shorter than itself via a universal Turing machine (or, equivalently, we can adopt one of the other two definitions proposed above).
Theorem: a random sequence in $[0,1]$, through inverting the probability $P$, defines a random sequence on the set ${\mathcal P}(\Omega)$.
The proof is trivial: if there were a law connecting the elements in ${\mathcal P}(\Omega)$, then through the probability $P$ a describing law would also be obtained for the corresponding sequence in $[0,1]$, which would therefore no longer be random.

So the randomness of any set on which a probability is defined can be obtained from a random sequence in $[0,1]$.

References

[1] Pearl, J. (2009). Causal inference in statistics: An overview. Statistics Surveys, 3, 96-146. http://doi.org/10.1214/09-SS057
[2] Beck, J. (2009). Inevitable Randomness in Discrete Mathematics. ISBN 0-8218-4756-2, p. 44.
[3] Uspenskiĭ, V. A., Semenov, A. L. (1993). Algorithms: Main Ideas and Applications. Springer. ISBN 0-7923-2210-X, p. 166.
[4] Volchan, S. B. (2002). What is a random sequence? The American Mathematical Monthly, 109, 46-63.
[5] Downey, R. (2004). Some recent progress in algorithmic randomness. In: Fiala, J., Koubek, V. (eds.), Mathematical Foundations of Computer Science 2004. ISBN 3-540-22823-3, p. 44.
