Previous research has demonstrated that lexical processing is facilitated by the number of related word senses (polysemy) and inhibited by the number of unrelated word meanings (homonymy). The starting point of this research was the set of findings described by Moscoso del Prado Martín and colleagues, who offered a unified account of the processing of these two forms of lexical ambiguity. Applying the techniques they proposed, we calculated the ambiguity measures they introduced for a set of strictly polysemous Serbian nouns. From the covariance matrix of the context vectors we derived the entropy of the equivalent Gaussian distribution, and from the probability density function of the context vectors we derived the differential entropy; negentropy was calculated as the difference between the two. Based on the interpretation that the entropy of the equivalent Gaussian mirrors sense cooperation, or polysemy, whereas negentropy mirrors meaning competition, or homonymy, we predicted that the negentropy effect would disappear in a set of strictly polysemous nouns. In accordance with this prediction, the entropy of the equivalent Gaussian distribution accounted for a significant proportion of the variance in processing latencies, whereas negentropy did not affect reaction times. This finding is consistent with the hypothesis that the entropy of the equivalent Gaussian distribution, as a measure of the overall width of activation in semantic space, reflects polysemy, that is, the existence of related senses. The polysemy advantage could therefore be the result of widespread activation in semantic space and reduced competition among overlapping Gaussians.
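The three measures described above can be illustrated with a short sketch. This is not the original authors' implementation: the equivalent-Gaussian entropy follows directly from the closed-form expression for a multivariate Gaussian, but the differential entropy here is estimated with a Kozachenko–Leonenko k-nearest-neighbour estimator, which is one common choice and may differ from the estimator used in the original studies. All function names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def gaussian_entropy(vectors):
    """Entropy (in nats) of the Gaussian with the same covariance
    as the empirical context vectors: 0.5 * log((2*pi*e)^d * det(Sigma))."""
    d = vectors.shape[1]
    _, logdet = np.linalg.slogdet(np.cov(vectors, rowvar=False))
    return 0.5 * (d * np.log(2 * np.pi * np.e) + logdet)

def differential_entropy(vectors, k=3):
    """Kozachenko-Leonenko k-nearest-neighbour estimate (in nats)
    of the differential entropy of the empirical distribution."""
    n, d = vectors.shape
    # distance from each point to its k-th neighbour
    # (the first hit returned by the query is the point itself)
    r = cKDTree(vectors).query(vectors, k=k + 1)[0][:, k]
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # log volume of the unit d-ball
    return digamma(n) - digamma(k) + log_vd + (d / n) * np.sum(np.log(r))

def negentropy(vectors, k=3):
    """Negentropy: equivalent-Gaussian entropy minus differential entropy.
    It is near zero for unimodal, Gaussian-like clouds of context vectors
    and grows as the distribution departs from Gaussianity, e.g. when the
    vectors split into separated clusters."""
    return gaussian_entropy(vectors) - differential_entropy(vectors, k)
```

On this reading, a word whose context vectors form one broad cloud (related senses) yields high equivalent-Gaussian entropy but low negentropy, whereas separated clusters of contexts (unrelated meanings) push negentropy up.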