One of the major goals of software testing is to increase the reliability of the software under test. As many studies have pointed out, fault detection does not necessarily increase reliability when all failures are treated as equivalent to one another. Accordingly, testing techniques need to be evaluated on how suitable and effective they are at increasing confidence by reducing risk, that is, by detecting and isolating the faults that affect reliability most. We present a novel experiment that compares three defect detection techniques with respect to reliability. Preliminary results suggest that the techniques differ in their ability to reduce risk in the software.