Posts

Showing posts from November, 2018

Research 7 -- Central Limit Theorem, LLN, and most common probability distributions

Law of Large Numbers and CLT

Intuitively, everyone can be convinced that the average of many measurements of the same unknown quantity tends to give a better estimate than a single measurement. The law of large numbers (LLN) and the central limit theorem (CLT) formalise this general idea through mathematics and random variables. Suppose X_1, X_2, ..., X_n are independent random variables with the same underlying distribution. In this case, we say that the X_i are independent and identically distributed (or i.i.d.). In particular, the X_i all have the same mean μ and standard deviation σ. The average of the i.i.d. variables is defined as X̄ = (X_1 + X_2 + ... + X_n) / n. The central limit theorem states that when many successive random samples are taken from a population, the sampling distribution of the means of those samples becomes approximately normal with mean μ and standard deviation σ/√N as the sample size N becomes larger, irrespective of the sh...
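The claim about the sampling distribution can be checked empirically. The sketch below (an illustrative example, not from the original post) draws many samples of size N from a Uniform(0, 1) distribution and compares the spread of their means with the σ/√N predicted by the CLT:

```python
import random
import statistics

random.seed(42)

N = 100             # sample size
num_samples = 2000  # number of repeated samples

# Uniform(0, 1): mean mu = 0.5, standard deviation sigma = sqrt(1/12)
mu = 0.5
sigma = (1 / 12) ** 0.5

# Mean of each of the num_samples independent samples of size N
sample_means = [
    statistics.fmean(random.random() for _ in range(N))
    for _ in range(num_samples)
]

# The means cluster around mu, with spread close to sigma / sqrt(N)
print(statistics.fmean(sample_means))  # close to 0.5
print(statistics.stdev(sample_means))  # close to sigma / N**0.5 (about 0.0289)
```

Plotting a histogram of `sample_means` would also show the bell shape emerging even though the underlying uniform distribution is flat.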

Research 6 - Derivation of Chebyshev's inequality and its application to prove the (weak) LLN

Chebyshev's Inequality

In probability theory, Chebyshev's inequality guarantees that, for a wide class of probability distributions, no more than a certain fraction of values can lie farther than a certain distance from the mean. In particular, the inequality states that no more than 1/k² of a distribution's values can be more than k standard deviations away from the mean. In other words, this means that at least (1 - 1/k²) of the distribution's values lie within k standard deviations of the mean. Chebyshev's inequality can be easily derived from Markov's inequality, which gives an upper bound on the probability that a non-negative random variable is greater than (or equal to) some positive constant. Recall Markov's inequality, P(X ≥ a) ≤ E(X)/a, where a > 0 and X is a non-negative random variable. Chebyshev's inequality follows by applying it to the random variable (X - E(X))² and the constant a²...
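The 1/k² bound can be verified numerically. The following sketch (an assumed example, not taken from the post) samples an exponential distribution, which is far from normal, and checks that the observed tail fraction never exceeds Chebyshev's bound:

```python
import random
import statistics

random.seed(0)

# 100,000 draws from Exponential(rate=1): mean 1, standard deviation 1
data = [random.expovariate(1.0) for _ in range(100_000)]
mu = statistics.fmean(data)
sigma = statistics.pstdev(data)

for k in (1.5, 2, 3):
    # Fraction of values more than k standard deviations from the mean
    frac = sum(abs(x - mu) >= k * sigma for x in data) / len(data)
    bound = 1 / k ** 2
    print(f"k={k}: observed {frac:.4f} <= bound {bound:.4f}")
    assert frac <= bound  # Chebyshev's inequality must hold
```

The observed fractions are typically much smaller than 1/k²: the bound is loose precisely because it makes no assumption about the shape of the distribution.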

Insight 5 - Floating-Point Representation and Its Computational Issues

Floating-point representation

In computing, a floating-point representation is an approximation that allows modern computers to trade off range against precision when handling real numbers. Representing numbers as integers in a fixed number of bits has notable limitations: it cannot handle numbers that have a fractional part, and it is not suitable for very large numbers that do not fit into (e.g.) 32 bits. In general, a number is represented by an approximation to a fixed number of significant digits (the significand) and scaled using an exponent for some fixed base; the base for the scaling is normally two, ten, or sixteen. A number that can be represented exactly has the form significand × base^exponent, where significand, base, and exponent are all integers and the base is greater than or equal to two. As an example, consider 12345 × 10⁻⁴ = 1.2345, or 11111 × 10⁻¹ = 1111.1. The term floating point refers to the fact that a number's radix point (decimal point or...
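The significand/exponent decomposition can be inspected directly for Python floats (IEEE 754 doubles, base 2). The sketch below, an illustrative example of the idea rather than part of the original post, also shows the classic precision issue that arises because 0.1 has no finite base-2 expansion:

```python
import math

x = 1111.1

# math.frexp returns (mantissa, exponent) with x == mantissa * 2**exponent
# and 0.5 <= |mantissa| < 1, i.e. a base-2 significand/exponent pair.
mantissa, exponent = math.frexp(x)
print(mantissa, exponent)
assert x == mantissa * 2 ** exponent

# Decimal fractions like 0.1 are stored approximately in base 2, so exact
# equality comparisons on their sums are unreliable:
print(0.1 + 0.2 == 0.3)              # False
print(math.isclose(0.1 + 0.2, 0.3))  # True: compare with a tolerance instead
```

Using a tolerance-based comparison such as `math.isclose` (or working in decimal arithmetic via the `decimal` module when exactness matters) is the usual way around this issue.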