We prove a lower bound on the amount of nonuniform advice needed by black-box reductions for the Dense Model Theorem of Green, Tao, and Ziegler, and of Reingold, Trevisan, Tulsiani, and Vadhan. The latter theorem roughly says that for every distribution $D$ that is $\delta$-dense in a distribution that is $\epsilon'$-indistinguishable from uniform, there exists a ``dense model'' for $D$, that is, a distribution that is $\delta$-dense in the uniform distribution and is $\epsilon$-indistinguishable from $D$. This $\epsilon$-indistinguishability is with respect to an arbitrary small class of functions $F$. For the natural case where $\epsilon' \geq \Omega(\epsilon\delta)$ and $\epsilon \geq \delta^{O(1)}$, our lower bound implies that $\Omega\bigl(\sqrt{1/\epsilon}\,\log(1/\delta)\cdot\log|F|\bigr)$ advice bits are necessary. There is only a polynomial gap between our lower bound and the best upper bound for this case (due to Zhang), which is $O\bigl((1/\epsilon^2)\log(1/\delta)\cdot\log|F|\bigr)$. Our lower bound can be viewed as an analog of list size lower bounds for list-decoding of error-correcting codes, but for ``dense model decoding'' instead.
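For reference, the two notions invoked above admit the following standard formalizations; this is a sketch in notation of our choosing, and the exact parameterization in the body of the paper may differ slightly.

```latex
% Standard definitions (sketch; notation assumed, not verbatim from the paper).
% Density: D is \delta-dense in X if every outcome is at most a 1/\delta
% factor more likely under D than under X.
\[
  D \text{ is } \delta\text{-dense in } X
  \;\iff\;
  \Pr_{D}[x] \le \tfrac{1}{\delta}\,\Pr_{X}[x] \quad \text{for all } x.
\]
% Indistinguishability: no function in the class F tells D from M
% with advantage more than \epsilon.
\[
  D, M \text{ are } \epsilon\text{-indistinguishable w.r.t.\ } F
  \;\iff\;
  \bigl|\, \mathbb{E}_{x \sim D}[f(x)] - \mathbb{E}_{x \sim M}[f(x)] \,\bigr|
  \le \epsilon \quad \text{for all } f \in F.
\]
```

Equivalently, $\delta$-density can be phrased as a convex decomposition $X = \delta\,D + (1-\delta)\,D'$ for some distribution $D'$, which is the form most convenient in the reductions.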