We propose a novel probabilistic framework that combines information acquired from different facial features for robust face recognition. The features used are the entire face, the edginess image of the face, and the eyes. In the training stage, individual feature spaces are constructed using principal component analysis (PCA) and Fisher's linear discriminant (FLD), and the distribution of distance-in-feature-space (DIFS) values of the training images is estimated for each feature space. For a given probe image, these distributions yield confidence weights for the three facial features extracted from it. A final score is computed using a probabilistic fusion criterion, and the match with the highest score establishes the identity of the person. A new preprocessing scheme for illumination compensation is also advocated. The proposed fusion approach is more reliable than a recognition system trained on any single feature alone. The method is validated on several face datasets, including the FERET database.
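The fusion step described above can be illustrated with a minimal sketch. The snippet below assumes the DIFS values in each feature space are modelled as Gaussian and that per-feature similarity is a decreasing function of DIFS; the paper's exact distribution and similarity measure are not specified here, so the function names and choices (`fit_difs_distribution`, Gaussian likelihood weights, negative-exponential similarity) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_difs_distribution(difs_values):
    """Fit a Gaussian to the DIFS values of the training images for one
    feature space (a modelling assumption; returns mean and std)."""
    d = np.asarray(difs_values, dtype=float)
    return d.mean(), d.std() + 1e-9  # epsilon avoids division by zero

def confidence_weight(difs, mean, std):
    """Likelihood of the probe's DIFS under the fitted distribution,
    used as the confidence weight for that feature."""
    z = (difs - mean) / std
    return np.exp(-0.5 * z * z) / (std * np.sqrt(2.0 * np.pi))

def fused_score(per_feature_difs, distributions):
    """Probabilistic fusion across the three features (face, edginess
    image, eyes): weight each feature's similarity by its confidence,
    normalise the weights, and combine into a single match score."""
    weights = np.array([confidence_weight(d, m, s)
                        for d, (m, s) in zip(per_feature_difs, distributions)])
    weights = weights / weights.sum()
    similarities = np.exp(-np.asarray(per_feature_difs, dtype=float))
    return float(weights @ similarities)
```

At recognition time, `fused_score` would be evaluated against every gallery identity, and the identity with the highest score reported as the match.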