5.1 Point Estimation

Definition

An estimator $\widehat \Theta$ is a statistic (i.e. a function of the random sample $X_1,X_2,\cdots,X_n$) that is used to infer the value of an unknown population parameter $\theta$. The value $\hat \theta$ taken by an estimator is called a (point) estimate of the unknown parameter.

E.g.

The statistic $\bar X=\frac{X_1+X_2+\cdots+X_n}{n}$ is an estimator of the population mean $\mu$. The value $\bar x$ based on observations $x_1,x_2,\cdots,x_n$ is an estimate of $\mu$.
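As a quick numerical illustration (a minimal sketch in Python using NumPy; the normal population with $\mu = 10$, $\sigma = 2$ and the sample size $n = 50$ are arbitrary choices, not part of the original example):

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative population parameters and sample size (arbitrary choices).
mu, sigma, n = 10.0, 2.0, 50

# Draw one random sample x_1, ..., x_n and compute the point estimate of mu.
x = rng.normal(mu, sigma, size=n)
x_bar = x.mean()
print(f"point estimate of mu: {x_bar:.3f}  (true mu = {mu})")
```

Running this once yields a single estimate $\bar x$ near 10; a different sample would yield a different estimate, which is why $\bar X$ is a random variable while $\bar x$ is a fixed number.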

How to Choose an Estimator

We can have different estimators for the same unknown population parameter. To decide which one to use, we look at two properties of an estimator:

  1. Unbiasedness
  2. Variance

5.1.1 Unbiased Estimator

Definition

A statistic $\widehat \Theta$ is said to be an unbiased estimator of the parameter $\theta$ if $E[\widehat \Theta]=\theta$.

E.g.

$S^2$ is an unbiased estimator of $\sigma^2$.

$S^2 = \dfrac{\sum_{i=1}^{n} (X_i-\bar X)^2}{n-1}$, and indeed $\mathbb E[S^2] = \sigma^2$. Note, however, that $\mathbb E[S] \neq \sigma$, so $S$ itself is a biased estimator of $\sigma$.
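We can check this numerically by simulation (a hedged sketch assuming NumPy; the normal population and the values $\sigma = 2$, $n = 10$ are arbitrary illustrative choices): averaging $S^2$ over many repeated samples approximates $\mathbb E[S^2]$, and similarly for $S$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative normal population (arbitrary choices): sigma^2 = 4.
mu, sigma, n, reps = 0.0, 2.0, 10, 100_000

samples = rng.normal(mu, sigma, size=(reps, n))
s2 = samples.var(axis=1, ddof=1)  # S^2 with the (n - 1) divisor
s = np.sqrt(s2)                   # S

# The average of S^2 over many samples approximates E[S^2] = sigma^2 = 4,
# while the average of S falls noticeably short of sigma = 2.
print(f"mean of S^2: {s2.mean():.4f}  (sigma^2 = {sigma**2})")
print(f"mean of S  : {s.mean():.4f}  (sigma = {sigma})")
```

The first average settles near $\sigma^2 = 4$, while the second stays visibly below $\sigma = 2$, illustrating that $S$ is biased even though $S^2$ is not.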

Certainly we would like $\widehat \Theta$ to be an unbiased estimator of $\theta$.

5.1.2 Variance of an Estimator

If $\widehat \Theta_1$ and $\widehat \Theta_2$ are two unbiased estimators of the same population parameter $\theta$, the estimator whose sampling distribution has the smaller variance should be chosen; it is said to be the more efficient of the two.
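For instance, when the population is symmetric (e.g. normal), both the sample mean and the sample median are unbiased estimators of $\mu$, but the sample mean has the smaller sampling variance. The following simulation sketch (assuming NumPy; all parameter values are arbitrary illustrative choices) compares the two:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative symmetric (normal) population, so both estimators below
# are unbiased for mu (arbitrary parameter choices).
mu, sigma, n, reps = 5.0, 1.0, 25, 100_000

samples = rng.normal(mu, sigma, size=(reps, n))
means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

# Both averages sit near mu = 5 (unbiasedness), ...
print(f"E[mean]   ~ {means.mean():.4f}   E[median] ~ {medians.mean():.4f}")
# ... but the sample mean has the smaller sampling variance.
print(f"Var[mean] ~ {means.var():.4f}   Var[median] ~ {medians.var():.4f}")
```

For normal data the variance of the sample mean is $\sigma^2/n$, while that of the sample median is approximately $\pi\sigma^2/(2n)$, so the sample mean is the preferred estimator here.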