The solution presented in parts a) through c) is not general: it already assumes that $X_1\sim Binomial(n_1,p_1)$ and $X_2\sim Binomial(n_2,p_2)$ each consist of only 1 datapoint, which we only learn in part d).
Do not confuse the number of datapoints of $X_1$ and $X_2$ (let us call them $N_1$ and $N_2$) with the number of trials underlying each $X_i$, which the exercise gives as $n_i$ (i.e., $n_i$ is the number of $Bernoulli(p_i)$ repetitions summed into a single datapoint).
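To spell out a step the argument uses implicitly: writing the $N_i$ independent datapoints as $X_{i,1},\dots,X_{i,N_i}$, the likelihood is the product of the $Binomial(n_i,p_i)$ pmfs, $$L_{N_i}(p_i)=\prod_{j=1}^{N_i} \binom{n_i}{X_{i,j}}\, p_i^{X_{i,j}}\, (1-p_i)^{n_i-X_{i,j}}$$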
The MLE of $p_i$ from $X_i \sim Binomial(n_i,p_i)$ with $N_i$ datapoints comes from maximizing the log-likelihood $$l_{N_i}(p_i)= \log\left(L_{N_i}(p_i)\right)=\log(p_i) \sum_{j=1}^{N_i} X_{i,j} + \log(1-p_i) \sum_{j=1}^{N_i} \left(n_i-X_{i,j}\right) +\sum_{j=1}^{N_i} \log\binom{n_i}{X_{i,j}}$$
Taking the derivative of $l_{N_i}(p_i)$ with respect to $p_i$ and setting it equal to zero, $$\frac{\partial l_{N_i}}{\partial p_i}= \frac{1}{p_i}\sum_{j=1}^{N_i} X_{i,j} - \frac{1}{1-p_i}\sum_{j=1}^{N_i} \left(n_i-X_{i,j}\right)=0,$$ leads us to the estimator $$\hat{p_i}= \frac{1}{n_i N_i} \sum_{j=1}^{N_i} X_{i,j}={\overline{X_i} \over n_i}$$
By the invariance (plug-in) property of the MLE, the estimator of $\psi=p_1-p_2$ would then be $$\hat{\psi} = \hat{p_1} - \hat{p_2}= {\overline{X_1} \over n_1} - {\overline{X_2} \over n_2}$$
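As a quick numerical sanity check (my addition, not part of the original comment), here is a minimal simulation sketch; all parameter values below are arbitrary illustrative choices:

```python
import numpy as np

# Draw N_i datapoints X_{i,1},...,X_{i,N_i} from Binomial(n_i, p_i)
# and check that psi_hat = X1_bar/n1 - X2_bar/n2 is close to
# psi = p1 - p2. Numeric values are illustrative only.
rng = np.random.default_rng(0)

n1, p1, N1 = 20, 0.7, 5000  # n_i Bernoulli trials per datapoint, N_i datapoints
n2, p2, N2 = 30, 0.4, 5000

X1 = rng.binomial(n1, p1, size=N1)  # N1 independent Binomial(n1, p1) counts
X2 = rng.binomial(n2, p2, size=N2)

p1_hat = X1.mean() / n1  # MLE of p1: X1_bar / n1
p2_hat = X2.mean() / n2  # MLE of p2: X2_bar / n2
psi_hat = p1_hat - p2_hat

print(f"psi     = {p1 - p2:.4f}")
print(f"psi_hat = {psi_hat:.4f}")  # close to psi for large N_i
```

For large $N_i$, $\hat{\psi}$ concentrates around $\psi$ by the law of large numbers.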
Since part d) reveals that we only have 1 datapoint for each of $X_1$ and $X_2$, we get $N_1=1$ and $N_2=1$, which leads to the correct expression shown in this solution.
Similarly, the Fisher information matrix in this general setting would be $$I_{N_1,N_2}(p_1,p_2)=\begin{pmatrix} \dfrac{N_1 n_1}{p_1(1-p_1)} & 0 \\ 0 & \dfrac{N_2 n_2}{p_2(1-p_2)} \end{pmatrix},$$ diagonal because $X_1$ and $X_2$ are independent.
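As a check of the diagonal entries (a derivation I am adding, which follows directly from the log-likelihood above): differentiating $l_{N_i}(p_i)$ twice and taking expectations with $E[X_{i,j}]=n_i p_i$ gives $$-E\left[\frac{\partial^2 l_{N_i}}{\partial p_i^2}\right]= \frac{1}{p_i^2}\sum_{j=1}^{N_i} E[X_{i,j}] + \frac{1}{(1-p_i)^2}\sum_{j=1}^{N_i} E\left[n_i-X_{i,j}\right] = \frac{N_i n_i}{p_i}+\frac{N_i n_i}{1-p_i}=\frac{N_i n_i}{p_i(1-p_i)},$$ which again collapses to the single-datapoint case when $N_i=1$.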