Journal: Sankhya. Series A, Mathematical Statistics and Probability
Print ISSN: 0976-836X
Electronic ISSN: 0976-8378
Year: 2004
Volume: 66
Issue: 04
Publisher: Indian Statistical Institute
Abstract: We compare the performance of the restricted and unrestricted maximum likelihood estimators of the means $\mu_1$ and $\mu_2$, and the common variance $\sigma^2$, of two normal populations under LINEX (linear-exponential) loss functions, when it is known a priori that $\mu_1 \leq \mu_2$. If $\delta$ is any estimator of the real parameter $g(\theta)$, then the LINEX loss function is defined by $L(\theta, \delta) = e^{a(\delta - g(\theta))} - a(\delta - g(\theta)) - 1$, $a \neq 0$. We show that the restricted maximum likelihood estimator (MLE) $\hat{\mu}_1$ of $\mu_1$ is better than the unrestricted MLE $\bar{X}_1$ for $a \in [a_1, 0) \cup (0, \infty)$, where $a_1 < 0$ is a constant depending on the sample sizes $n_1$ and $n_2$. For $a < a_1$, the two estimators are shown to be not comparable. Similarly, for a constant $a_1^* > 0$ depending on the sample sizes, the restricted MLE $\hat{\mu}_2$ of $\mu_2$ is shown to be superior to the unrestricted MLE $\bar{X}_2$ for $a \in (-\infty, 0) \cup (0, a_1^*]$, and the two estimators are shown to be not comparable for $a > a_1^*$. Similar results are obtained for the simultaneous estimation of $(\mu_1, \mu_2)$ under the sum of LINEX loss functions. For the estimation of $\sigma^2$, we show that the restricted MLE $\hat{\sigma}^2$ is superior to the unrestricted MLE $S^2$ for $a \in (-\infty, 0) \cup (0, a_2]$, and the two estimators are shown to be not comparable for $a \in (a_2, a_3)$, where $0 < a_2 < a_3$.
Keywords: Squared error loss, LINEX loss, maximum likelihood estimation
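The abstract's two ingredients can be sketched in code: the LINEX loss $L(\theta,\delta) = e^{a(\delta - g(\theta))} - a(\delta - g(\theta)) - 1$, which penalizes over- and under-estimation asymmetrically (the sign of $a$ controls the direction), and the order-restricted MLEs of the means. The restricted-MLE form below is the standard isotonic one for $\mu_1 \leq \mu_2$ with common variance (the sample means are pooled when they violate the ordering); the paper's exact estimators and the constants $a_1$, $a_1^*$ are not given in the abstract, so this is a hedged illustration, not the paper's derivation.

```python
import numpy as np

def linex_loss(delta, g, a):
    """LINEX loss: exp(a*(delta - g)) - a*(delta - g) - 1, for a != 0.

    Nonnegative for all arguments, zero iff delta == g; for a > 0
    overestimation is penalized exponentially, underestimation linearly.
    """
    d = a * (delta - g)
    return np.exp(d) - d - 1.0

def restricted_mles(x1, x2):
    """Standard order-restricted MLEs of (mu1, mu2) under mu1 <= mu2
    (assumed form; common variance): if the sample means already satisfy
    the ordering they are the MLEs, otherwise both estimates equal the
    pooled (sample-size-weighted) mean."""
    n1, n2 = len(x1), len(x2)
    xb1, xb2 = np.mean(x1), np.mean(x2)
    if xb1 <= xb2:
        return xb1, xb2
    pooled = (n1 * xb1 + n2 * xb2) / (n1 + n2)
    return pooled, pooled
```

For example, samples with means $\bar{X}_1 = 3$ and $\bar{X}_2 = 1$ violate the order restriction, so both restricted estimates collapse to the pooled mean, which always satisfies $\hat{\mu}_1 \leq \hat{\mu}_2$; the paper's comparisons then average `linex_loss` over the sampling distributions of these estimators.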