This study aims to determine the optimal number of options for multiple-choice test items. A common item equating design was employed to compare item statistics, distracter performance indices, person statistics, and test reliabilities across three similar vocabulary test forms, A, B, and C, which differed only in the number of options per item. Forms B and C were constructed by randomly deleting one distracter from each item in Form A and Form B, respectively. Form A, Form B, and Form C, each containing 30 multiple-choice vocabulary test items with five, four, and three options per item, respectively, were randomly administered to 180 graduate and undergraduate English-major university students. The three test forms were linked by means of ten common items using concurrent common item equating. The Rasch model was applied to compare item difficulties, fit statistics, average measures, distracter-measure correlations, person response behaviors, and reliabilities across multiple-choice vocabulary test items with five, four, and three options per item. Except for the discriminating power of the distracters, which was found to be inversely related to the number of options per item, no significant differences were observed in item difficulties (p < 0.05), item fit statistics, person response behaviors, or reliabilities across the three test forms. Considering the time and effort needed to develop multiple-choice tests with more distracters, three options per item were concluded to be optimal.
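
As background, a minimal statement of the standard dichotomous Rasch model underlying these comparisons may help; the abstract itself does not spell out the formulation, so the notation below is the conventional one rather than the authors'. The model expresses the probability that person n answers item i correctly as a logistic function of the difference between person ability and item difficulty:

P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}

where \theta_n is the ability (measure) of person n and b_i is the difficulty of item i, both expressed in logits. Because persons and items are placed on this common logit scale, item difficulties, fit statistics, and person measures from the three forms can be compared directly once the forms are linked through the common items.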