Alongside a more humanistic movement in language testing, accountability to contextual variables in the design and development of any assessment enterprise has been increasingly emphasized. In writing assessment, however, the multiplicity of rating scales developed to fit diverse contexts is dominated by well-known native-speaker testing agencies. Indeed, EFL/ESL assessment contexts seem to be passive recipients of the symbolic authority of native-speaker assessment circles. Investigating the actualities of rating practice in EFL/ESL contexts would therefore provide a realistic view of how assessment is conceptualized and practiced. To this end, the present study conducted a large-scale survey of the Iranian EFL writing assessment context. Results of a questionnaire and follow-up interviews with Iranian EFL composition raters revealed that rating scales, in the conventional sense, are not in use. Instead, raters relied on their own internalized criteria, developed over long years of practice. These findings challenge the legitimacy of native-speaker authority in designing scales for EFL contexts and underscore the role of local agency in the design and development of rating scales.