An estimator $\widehat \Theta$ is a statistic (i.e. a function of the random sample $X_1,X_2,\cdots,X_n$) that is used to infer the value of an unknown population parameter $\theta$. The value $\hat \theta$ taken by an estimator is called a (point) estimate of the unknown parameter.
The statistic $\bar X=\frac{X_1+X_2+\cdots+X_n}{n}$ is an estimator of the population mean $\mu$. The value $\bar x$ based on observations $x_1,x_2,\cdots,x_n$ is an estimate of $\mu$.
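A small simulation sketch of this idea (the population parameters $\mu = 10$, $\sigma = 2$ and the sample size are hypothetical choices for illustration): each random sample yields one estimate $\bar x$, and the estimates scatter around the true $\mu$.

```python
import random
import statistics

# Hypothetical population: Normal(mu = 10, sigma = 2); mu plays the role
# of the unknown parameter we are estimating.
random.seed(0)
mu, sigma, n = 10.0, 2.0, 30

# Draw many independent samples; each sample yields one estimate x-bar.
estimates = []
for _ in range(2000):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    estimates.append(statistics.fmean(sample))  # x-bar for this sample

# The estimates cluster around mu, illustrating that X-bar targets mu.
avg_estimate = statistics.fmean(estimates)
print(round(avg_estimate, 2))
```

Each run of the inner loop is one realisation of the random sample $X_1,\dots,X_n$, so each `x-bar` is one point estimate of $\mu$.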
We can have different estimators for the same unknown population parameter. To decide which one to use, we look at the properties of an estimator, such as unbiasedness and the variance of its sampling distribution.
A statistic $\widehat \Theta$ is said to be an unbiased estimator of the parameter $\theta$ if $E[\widehat \Theta]=\theta$.
$S^2$ is an unbiased estimator of $\sigma^2$:
$$S^2 = \frac{\sum_{i=1}^{n}(X_i-\bar X)^2}{n-1}, \qquad \mathbb E[S^2] = \sigma^2.$$
Note, however, that $\mathbb E[S] \neq \sigma$: taking the square root does not preserve unbiasedness, so $S$ is a biased estimator of $\sigma$.
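This contrast can be checked by simulation (the Normal population with $\sigma = 2$, so $\sigma^2 = 4$, is an assumed setup for illustration): averaging $S^2$ over many samples approximates $\mathbb E[S^2]$, which lands on $\sigma^2$, while the average of $S$ falls below $\sigma$.

```python
import random
import statistics

# Assumed population: Normal(0, sigma = 2), so sigma^2 = 4.
random.seed(1)
mu, sigma, n = 0.0, 2.0, 10

s2_values, s_values = [], []
for _ in range(20000):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    s2 = statistics.variance(sample)   # sample variance, divides by n - 1
    s2_values.append(s2)
    s_values.append(s2 ** 0.5)         # S = sqrt(S^2)

mean_s2 = statistics.fmean(s2_values)  # approximates E[S^2], close to 4
mean_s = statistics.fmean(s_values)    # approximates E[S], falls short of 2
print(round(mean_s2, 2), round(mean_s, 2))
```

The dividing-by-$(n-1)$ correction is exactly what makes the first average hit $\sigma^2$; the second average is systematically below $\sigma$, which is the bias of $S$.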
Certainly we would like $\widehat \Theta$ to be an unbiased estimator of $\theta$.
If $\widehat \Theta_1$ and $\widehat \Theta_2$ are two unbiased estimators of the same population parameter $\theta$, we should choose the one whose sampling distribution has the smaller variance; it is said to be the more efficient estimator.
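As a concrete sketch (the Normal population here is an assumed example): for a symmetric Normal population both the sample mean and the sample median are unbiased estimators of the centre $\mu$, but the mean's sampling distribution has the smaller variance, so it is preferred.

```python
import random
import statistics

# Assumed population: Normal(mu = 5, sigma = 1). Both the sample mean and
# the sample median estimate mu without bias (symmetric distribution).
random.seed(2)
mu, sigma, n = 5.0, 1.0, 25

means, medians = [], []
for _ in range(5000):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(statistics.fmean(sample))
    medians.append(statistics.median(sample))

# Both sampling distributions centre on mu, but the mean has the smaller
# variance (for a Normal, Var(median) is roughly (pi/2) * Var(mean)),
# so the sample mean is the more efficient estimator here.
var_mean = statistics.variance(means)
var_median = statistics.variance(medians)
print(var_mean < var_median)
```

For non-Normal populations the comparison can reverse (e.g. heavy-tailed distributions can favour the median), which is why the criterion is stated in terms of the sampling distributions rather than any fixed estimator.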