Statlect is a free digital textbook on probability theory and mathematical statistics. Explore its main sections.
Read a rigorous yet accessible introduction to the main concepts of probability theory, such as random variables, expected value, variance, correlation, and conditional probability.
Explore this compendium of common probability distributions, including the binomial, Poisson, uniform, exponential and normal distributions.
Learn about stochastic convergence, including convergence in probability, almost sure convergence, and convergence in distribution; read about the Central Limit Theorem and the Law of Large Numbers.
This is a rigorous introduction to the basics of mathematical statistics; learn about statistical inference, point estimation, interval estimation and hypothesis testing.
Use this glossary to review the most important technical terms that are introduced in the digital textbook. Some glossary entries also contain additional explanations and examples.
Learn about mathematical concepts that are frequently used in probability theory and statistics.
This is a collection of lectures on the most important topics in matrix algebra: matrix addition and multiplication; linear combinations; linear independence, rank and span; linear systems.
Review the basics of calculus; learn about the fundamentals of combinatorial analysis, such as permutations and combinations; and discover special functions used in statistics.
Explore some popular pages on Statlect.
The Beta distribution is a continuous probability distribution having two parameters. One of its most common uses is to model one's uncertainty about the probability of success of an experiment.
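As an illustrative sketch (the density formula is standard, but the function name and test points are our own), the Beta density can be computed with nothing more than the Gamma function from the standard library:

```python
from math import gamma

def beta_pdf(x, a, b):
    """Density of the Beta(a, b) distribution at x in (0, 1)."""
    const = gamma(a + b) / (gamma(a) * gamma(b))
    return const * x ** (a - 1) * (1 - x) ** (b - 1)

# Beta(1, 1) is the uniform distribution on (0, 1), so its density is 1.
print(beta_pdf(0.3, 1, 1))
```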
The Poisson distribution is a discrete probability distribution used to model the number of occurrences of an unpredictable event within a unit of time.
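A minimal sketch of the Poisson probability mass function, using only the standard library (the function name is ours, chosen for illustration):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam): lam**k * exp(-lam) / k!."""
    return lam ** k * exp(-lam) / factorial(k)

# With an average of 2 events per unit of time, the probability of
# observing exactly 0 events is exp(-2).
print(poisson_pmf(0, 2.0))
```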
The exponential distribution is a continuous probability distribution used to model the time we need to wait before a given event occurs.
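For instance, waiting times can be simulated directly; the rate parameter below (0.5 events per unit of time) is assumed for illustration. The mean waiting time should be close to the reciprocal of the rate:

```python
import random

random.seed(0)
rate = 0.5  # events per unit of time (assumed for illustration)
# Simulate 100,000 exponential waiting times; their sample mean
# should be close to 1 / rate = 2.
waits = [random.expovariate(rate) for _ in range(100_000)]
mean_wait = sum(waits) / len(waits)
print(mean_wait)
```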
The binomial distribution is a discrete distribution used to model the number of successes obtained by repeating several times an experiment that can have two outcomes, success or failure.
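A brief sketch of the binomial probability mass function (the function name is ours), using `math.comb` for the binomial coefficient:

```python
from math import comb

def binomial_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials,
    each succeeding with probability p."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Probability of exactly 1 head in 2 fair coin flips.
print(binomial_pmf(1, 2, 0.5))
```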
Bayes' rule is a formula that allows us to compute the conditional probability of a given event, after observing a second event whose conditional and unconditional probabilities are known in advance.
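A worked numerical sketch of the rule; the diagnostic-test numbers (prevalence 0.01, sensitivity 0.99, false-positive rate 0.05) are assumed purely for illustration:

```python
def bayes(prior, likelihood, marginal):
    """Bayes' rule: P(A | B) = P(B | A) * P(A) / P(B)."""
    return likelihood * prior / marginal

prior = 0.01               # P(condition), assumed prevalence
p_pos_given_cond = 0.99    # P(positive | condition), assumed sensitivity
p_pos_given_no = 0.05      # P(positive | no condition), assumed false-positive rate
# Unconditional probability of a positive result (law of total probability).
p_pos = p_pos_given_cond * prior + p_pos_given_no * (1 - prior)
posterior = bayes(prior, p_pos_given_cond, p_pos)
print(posterior)  # probability of the condition given a positive result
```

Even with a highly sensitive test, the posterior probability is only about 1/6, because the condition is rare.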
Maximum likelihood is an estimation method that uses observed data to estimate the parameters of the probability distribution that generated the data.
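As a sketch, for an exponential sample the maximum likelihood estimator of the rate has a closed form, the reciprocal of the sample mean; the true rate below is assumed for the simulation:

```python
import random

random.seed(42)
true_rate = 2.0  # assumed data-generating parameter
data = [random.expovariate(true_rate) for _ in range(50_000)]
# MLE of the exponential rate: n / sum(x_i), i.e. 1 / sample mean.
mle_rate = len(data) / sum(data)
print(mle_rate)  # close to the true rate 2.0
```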
The moment generating function is often used to characterize the probability distribution of a random variable. Its derivatives at zero are equal to the moments of the random variable.
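A quick numerical check of this property for a Bernoulli variable (the finite-difference step is an assumption of the sketch): the first derivative of the MGF at zero recovers the first moment, E[X] = p.

```python
from math import exp

def bernoulli_mgf(t, p):
    """MGF of a Bernoulli(p) variable: E[exp(t*X)] = (1 - p) + p * exp(t)."""
    return (1 - p) + p * exp(t)

p = 0.3
h = 1e-6
# Central finite difference of the MGF at zero approximates M'(0) = E[X] = p.
first_moment = (bernoulli_mgf(h, p) - bernoulli_mgf(-h, p)) / (2 * h)
print(first_moment)
```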
The Beta function is often employed in probability theory and statistics, for example, as a normalizing constant in the density functions of the F and Student's t distributions.
The concept of convergence in probability is based on the following intuition: two random variables are "close to each other" if there is a high probability that their difference will be very small.
A Central Limit Theorem provides a set of conditions that are sufficient for the sample mean to have a normal distribution asymptotically (as the sample size increases).
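This can be seen in a small simulation (sample size and number of replications below are arbitrary choices): sample means of uniform draws cluster around the true mean 1/2 with standard deviation close to the theoretical value 1/sqrt(12n).

```python
import random
import statistics

random.seed(1)
n = 200       # sample size (assumed for illustration)
reps = 5_000  # number of simulated sample means
# Each entry is the mean of n draws from Uniform(0, 1).
means = [sum(random.random() for _ in range(n)) / n for _ in range(reps)]
center = statistics.mean(means)   # should be close to 1/2
spread = statistics.stdev(means)  # should be close to sqrt(1/12) / sqrt(n)
print(center, spread)
```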
A hypothesis about the probability distribution that generated a sample of data. It is rejected or not rejected based on the realization of a test statistic.
This glossary entry gives a definition of critical value and explains how to find critical values in one- and two-tailed tests. All the relevant cases are summarized in a table.
A gentle introduction to the concept of expected value, with an informal definition and more formal definitions based on the Stieltjes and Lebesgue integrals.
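In the discrete case the informal definition is simply a probability-weighted sum, as in this small sketch (the fair-die example is ours):

```python
# Expected value of a discrete random variable:
# sum of each possible value times its probability.
# Example (assumed): a fair six-sided die.
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6
expected = sum(v * p for v, p in zip(values, probs))
print(expected)
```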
See what's new on Statlect.
The linear regression model is a conditional model in which the output variable is linearly related to the input variables and to an error term.
This lecture discusses the conditions under which the Ordinary Least Squares (OLS) estimators of the coefficients of a linear regression are consistent and asymptotically normal.
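A minimal sketch of OLS for a single regressor with intercept, using the familiar closed-form estimates; the data-generating coefficients (intercept 2, slope 3) and noise level are assumed for the simulation:

```python
import random

random.seed(7)
# Simulated data (assumed): y = 2 + 3 * x + standard normal noise.
xs = [random.uniform(0, 10) for _ in range(20_000)]
ys = [2 + 3 * x + random.gauss(0, 1) for x in xs]

x_bar = sum(xs) / len(xs)
y_bar = sum(ys) / len(ys)
# Closed-form OLS estimates: slope = Cov(x, y) / Var(x); intercept from the means.
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) \
    / sum((x - x_bar) ** 2 for x in xs)
intercept = y_bar - slope * x_bar
print(intercept, slope)
```

Consistency shows up in the simulation: with 20,000 observations the estimates land very close to the assumed true values.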
The logit model is a classification model used to predict the realization of a binary variable on the basis of a set of regressors.
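As a sketch of the prediction step only (coefficients would normally come from maximum likelihood estimation; the ones used in the assertions are hypothetical), the logit model maps a linear index through the logistic function:

```python
from math import exp

def logit_predict(b0, b1, x):
    """Predicted probability that the binary output equals 1,
    given an intercept b0, a coefficient b1, and a regressor x."""
    return 1 / (1 + exp(-(b0 + b1 * x)))

# At the point where the linear index is zero, the predicted probability is 1/2.
print(logit_predict(0.0, 1.0, 0.0))
```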
The ridge estimator of the coefficients of a linear regression is biased but can have lower mean squared error than the OLS estimator.
If an explanatory variable in a linear regression is highly correlated with a linear combination of the other variables, then the coefficient estimates are very imprecise.
Most of the learning materials found on this website are now available in a traditional textbook format.