Abstract: Cohen’s kappa is a popular descriptive statistic for summarizing agreement between the classifications of two raters on a nominal scale. With m raters there are several views in the literature on how to define agreement. The concept of g-agreement refers to the situation in which it is decided that there is agreement if g out of m raters assign an object to the same category. Given m raters we can formulate multi-rater kappas, one based on 2-agreement, one based on 3-agreement, and so on, and one based on m-agreement. It is shown that if the scale consists of only two categories, the multi-rater kappas based on 2-agreement and 3-agreement are identical.
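The abstract does not reproduce the formal definitions behind the g-agreement kappas. As a rough illustration only, the sketch below computes one common operationalization of observed g-agreement: the proportion of (object, g-subset-of-raters) combinations in which all g raters in the subset assign the object to the same category. The chance-corrected multi-rater kappas discussed in the paper are built on observed proportions of this kind; the function name observed_g_agreement, the example data, and this particular operationalization are illustrative assumptions, not the paper's notation.

```python
from itertools import combinations
from math import comb
import numpy as np

def observed_g_agreement(ratings, g):
    """Proportion of (object, g-subset-of-raters) combinations in which
    the g raters all assign the object to the same category.

    ratings : (n_objects, m) array-like of nominal category labels.
    g       : number of raters that must concur (2 <= g <= m).
    """
    ratings = np.asarray(ratings)
    n_objects, m = ratings.shape
    n_subsets = comb(m, g)          # number of g-subsets of the m raters
    total = 0
    for row in ratings:
        # count the g-subsets of raters that are unanimous on this object
        total += sum(len(set(row[list(idx)])) == 1
                     for idx in combinations(range(m), g))
    return total / (n_objects * n_subsets)


# Example: 4 objects rated by m = 3 raters on a two-category scale {0, 1}
ratings = [[0, 0, 1],
           [1, 1, 1],
           [0, 0, 0],
           [1, 0, 1]]

print(observed_g_agreement(ratings, g=2))  # 2-agreement: 8 of 12 rater pairs agree, ~0.667
print(observed_g_agreement(ratings, g=3))  # 3-agreement: 2 of 4 objects are unanimous, 0.5
```

Dividing by the number of g-subsets, comb(m, g), keeps the observed proportion in [0, 1] for every g, so the statistics for 2-agreement up to m-agreement are on a comparable scale before any chance correction is applied.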