
Insight 7 -- Monte Carlo techniques

Monte Carlo methods (aka Monte Carlo experiments) are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. Their essential idea is using randomness to solve problems that might be deterministic in principle. Monte Carlo methods are typically used to solve problems which belong to the following 'classes': optimisation; numerical integration; generating draws from a probability distribution. In principle, Monte Carlo methods can be used to solve any problem having a probabilistic interpretation. In fact, since there is no known consensus on how Monte Carlo should be defined, we can say that a Monte Carlo method is a technique to solve a mathematical/statistical problem, whereas a Monte Carlo simulation is the actual experiment carried out with that technique (e.g. repeatedly sampling from a specific population in order to compute the empirical mean and/or other statistics of interest). To put it more simply, ...
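As a minimal sketch of the idea (not tied to any particular application from the course), the following Python snippet estimates π by repeated random sampling: points are drawn uniformly in the unit square, and the fraction landing inside the quarter circle approximates π/4.

```python
import random


def monte_carlo_pi(n_samples: int, seed: int = 42) -> float:
    """Estimate pi by sampling points uniformly in the unit square
    and counting the fraction that fall inside the quarter circle."""
    rng = random.Random(seed)  # seeded for reproducibility
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # area of quarter circle / area of square = pi/4
    return 4 * inside / n_samples


print(monte_carlo_pi(100_000))  # close to 3.14159, with sampling error
```

The estimate's standard error shrinks like 1/√n, so quadrupling the number of samples only halves the error; this slow but dimension-independent convergence is characteristic of Monte Carlo integration.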

Applications

Application 1, 2, 3 -- developed during lectures
Application 4
Application 5
Application 6
Application 7 -- not assigned
Application 8 -- missing

Insight 6 -- Machine Epsilon

Rounding is a procedure for choosing the representation of a real number in a floating-point number system. The machine epsilon, also called macheps or unit roundoff, is the maximum relative error of the chosen rounding procedure. Recall that a real number x can be represented as a floating-point number:

x = σ • y • b^e

where
    • σ = ±1 is the sign
    • e is an integer, the exponent
    • 1 ≤ y < b is the significand (or mantissa)

With such a floating-point system, it is important to note that the numbers we can store are not equally spaced, even though a wide range of variably-spaced numbers can be represented exactly. For instance, 2.125 = 2^1 + 2^-3 has an exact representation in binary, whereas 3.1 does not: 3.1 ≈ 2^1 + 2^0 + 2^-4 + 2^-5 + 2^-8 + ... And, of course, there are numbers (like π) that have no finite representation in either decimal or binary nu...
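Machine epsilon can be found experimentally by halving a candidate until adding it to 1.0 no longer changes the result; a small sketch in Python:

```python
import sys


def machine_epsilon() -> float:
    """Find the smallest power of two eps such that 1.0 + eps != 1.0,
    by repeatedly halving a candidate until it rounds away."""
    eps = 1.0
    while 1.0 + eps / 2 != 1.0:
        eps /= 2
    return eps


# Python floats are IEEE 754 double precision, so this yields 2**-52,
# the same value the interpreter reports in sys.float_info.epsilon.
print(machine_epsilon())
```

The loop stops exactly when eps/2 = 2^-53, because 1 + 2^-53 rounds back to 1.0 in double precision, illustrating the "maximum relative error of rounding" described above.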

Insight 1 -- Most used programming languages within VS.NET

Microsoft Visual Studio is an integrated development environment (IDE) developed by Microsoft. It can be used to develop computer programs as well as websites, web applications, web services and mobile apps. Visual Studio includes a code editor supporting IntelliSense as well as code refactoring and, furthermore, it has an integrated debugger which works both as a source-level debugger and as a machine-level debugger. Visual Studio supports 36 different programming languages and allows the code editor and debugger to support nearly any programming language. The most famous supported programming languages include C, C++, Visual Basic .NET, C#, JavaScript, XML, HTML, and CSS. Support for other programming languages (such as Python and Ruby) is available through plug-ins.

Online Translators

A source-to-source compiler (or transcompiler) is a type of compiler that takes the source code of a program written in one programming language as input and produces the equivalent source...

Research 8 -- Most common PRG

Random number generators are important in many fields, including physics, engineering, mathematics, computer science and cryptographic applications (where they play a crucial role in some situations). Random number generators can be divided into two main classes: true random number generators (TRNGs), which typically make use of an unpredictable physical source to generate numbers (e.g. atmospheric noise); and pseudorandom number generators (PRNGs), which use a deterministic mathematical algorithm, computed on an initial value (aka the seed), to generate the numbers. Even though it should be obvious, it is worth insisting on the fact that the choice of the random number generator strictly depends on what the generated numbers will be used for. For instance, not every PRNG is good for cryptographic applications. Such a PRNG should pass all the statistical tests; another required condition is that an adversary, without knowing the seed, cannot distinguish its output from a truly ...
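To make the "deterministic algorithm computed on a seed" idea concrete, here is a sketch of one of the simplest and most common PRNG families, the linear congruential generator (LCG), using the classic Park-Miller ("minstd") parameters. This is fine for illustration but, as noted above, it is NOT suitable for cryptography:

```python
class LCG:
    """Linear congruential generator: x_{n+1} = (a * x_n + c) mod m.
    Uses the Park-Miller 'minstd' parameters; seed must be nonzero."""

    def __init__(self, seed: int = 1):
        self.state = seed
        self.a = 16807          # multiplier
        self.c = 0              # increment
        self.m = 2**31 - 1      # modulus (a Mersenne prime)

    def next_int(self) -> int:
        """Advance the state and return the next integer in [1, m-1]."""
        self.state = (self.a * self.state + self.c) % self.m
        return self.state

    def next_float(self) -> float:
        """Return the next value scaled to [0, 1)."""
        return self.next_int() / self.m


# Determinism: the same seed always reproduces the same sequence.
g = LCG(seed=1)
print([g.next_int() for _ in range(3)])
```

Note how the entire output stream is fixed once the seed is chosen, which is exactly why seeded PRNGs are so useful for reproducible simulations and so dangerous when predictability matters.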

Research 7 -- Central Limit Theorem, LLN, and most common probability distributions

Law of Large Numbers and CLT

Intuitively, everyone can be convinced of the fact that the average of many measurements of the same unknown quantity tends to give a better estimate than a single measurement. The law of large numbers (LLN) and the central limit theorem (CLT) formalise this general idea through mathematics and random variables. Suppose X_1, X_2, ..., X_n are independent random variables with the same underlying distribution. In this case, we say that the X_i are independent and identically distributed (i.i.d.). In particular, the X_i all have the same mean μ and standard deviation σ. The average of the i.i.d. variables is defined as:

X̄_n = (X_1 + X_2 + ... + X_n) / n

The central limit theorem states that, when successive random samples are taken from a population, the sampling distribution of the means of those samples becomes approximately normal with mean μ and standard deviation σ/√N as the sample size N becomes larger, irrespective of the sh...
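The CLT statement can be checked empirically with a short simulation: draw many samples from a decidedly non-normal population (uniform on [0, 1], with μ = 0.5 and σ = √(1/12)) and look at the distribution of the sample means.

```python
import random
import statistics


def sample_means(n_per_sample: int, n_samples: int, seed: int = 0) -> list[float]:
    """Draw n_samples samples of size n_per_sample from a uniform(0,1)
    population and return the mean of each sample. By the CLT these
    means are approximately normal with mean 0.5 and sd sqrt(1/12)/sqrt(n)."""
    rng = random.Random(seed)
    return [
        statistics.fmean(rng.random() for _ in range(n_per_sample))
        for _ in range(n_samples)
    ]


means = sample_means(n_per_sample=100, n_samples=2000)
print(statistics.fmean(means))   # close to the population mean 0.5
print(statistics.stdev(means))   # close to sqrt(1/12)/sqrt(100) ~ 0.0289
```

Plotting a histogram of `means` would show the familiar bell shape even though the underlying population is flat, which is exactly the point of the theorem.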

Research 6 - Derivation of Chebyshev's inequality and its application to prove the (weak) LLN

Chebyshev's Inequality

In probability theory, Chebyshev's inequality guarantees that, for a wide class of probability distributions, no more than a certain fraction of values can be more distant than a certain amount from the mean. In particular, the inequality states that no more than 1/k² of the distribution's values can be more than k standard deviations away from the mean. In other words, this means that at least (1 - 1/k²) of the distribution's values are within k standard deviations of the mean. Chebyshev's inequality can be easily derived from Markov's inequality, where the latter gives an upper bound for the probability that a non-negative random variable is greater than (or equal to) some positive constant. Recall Markov's inequality:

P(X ≥ a) ≤ E(X) / a

where a > 0 and X is a non-negative random variable. The Chebyshev inequality follows by considering the random variable (X - E(X))² and the constant a²...
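Spelling out the step the derivation sketches: apply Markov's inequality to the non-negative random variable (X - E(X))² with the threshold a² = k²σ², where σ² = E[(X - E(X))²] is the variance.

```latex
% Markov's inequality applied to the non-negative variable (X - E(X))^2
% with threshold a^2 = k^2 \sigma^2:
P\big((X - E(X))^2 \ge k^2\sigma^2\big)
    \le \frac{E\big[(X - E(X))^2\big]}{k^2\sigma^2}
    = \frac{\sigma^2}{k^2\sigma^2}
    = \frac{1}{k^2}

% Since (X - E(X))^2 \ge k^2\sigma^2 holds exactly when |X - E(X)| \ge k\sigma,
% this is Chebyshev's inequality:
P\big(|X - E(X)| \ge k\sigma\big) \le \frac{1}{k^2}
```

The key observation is that squaring turns the two-sided event |X - E(X)| ≥ kσ into a one-sided event about a non-negative variable, which is the form Markov's inequality requires.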