Abstract

Objective
To describe the evaluation process used to assess data quality during development of an electronic case reporting application, and to describe the evaluation results.

Introduction
Electronic case reporting (eCR) is defined as the fully or semi-automated generation and electronic transmission of reportable disease case reports from an electronic health record (EHR) system to public health authorities, replacing the historically paper-based process [1]. eCR has been reported to increase the number, accuracy, completeness, and timeliness of surveillance case reports [2]. The Chicago Department of Public Health (CDPH) collaborated with the Alliance of Chicago (AOC) to develop an application that generates electronic provider reports (ePR) for chlamydia (CT) and gonorrhea (GC) cases from the EHR system managed by AOC and sends the ePR records to the Illinois National Electronic Disease Surveillance System (I-NEDSS). The application was tested against the EHR database of Health Center A in AOC's network. It is essential that ePR data be accurate, so that public health receives correct information and can act on it when needed. Therefore, an evaluation was needed to assess the data quality of ePR records.

Methods
CDPH developed a five-step evaluation plan to validate the data quality of ePR records. Step 1 was to validate the ePR file format: all I-NEDSS required fields had to be present, the required value sets had to be used, and the file format could not vary across generated files. Step 2 was to validate algorithm accuracy; chart review was conducted to ensure that ePR records did not include non-reportable cases. Step 3 was to review ePR records loaded into I-NEDSS to confirm that all values in the raw ePR files appeared correctly on the I-NEDSS front end. After the application passed steps 1 through 3, it moved to step 4, parallel validation. The first phase of parallel validation was a review of historic cases: test ePR records for CT and GC cases diagnosed by Health Center A in 2015 (n=510) were compared with the same 510 cases' closed surveillance case reports in I-NEDSS, and the completeness of treatment, race, and ethnicity was examined. The application then moved to testing the daily data feed: daily ePR records were compared with EHR charts and with paper provider reports received by CDPH to assess completeness and timeliness. Step 5 was to re-evaluate the algorithms: ePR records were validated against electronic laboratory reporting (ELR) records, which served as the gold standard for all reportable CT and GC cases, to identify missing cases.

Results
The first three steps of the evaluation occurred from January to April 2016. Test ePR files containing historic cases from Health Center A were vetted weekly; a total of 14 test ePR files were reviewed. This process identified required fields that were missing (patient address, treatment date, treatment, and race), race value sets that were not returned correctly, and additional logic statements needed to return the correct pregnancy status at the time of diagnosis. These issues were discussed with the project team, and the application was modified accordingly. The historic case review found that ePR data were more complete than the closed surveillance reports. In the ePR records, 18% (94/510) of cases had incomplete treatment information, compared with 78% (400/510) in the closed surveillance reports in I-NEDSS; 0.2% (1/510) of cases lacked race information in the ePR records, compared with 47% (240/510); and 0.7% (4/510) of cases lacked ethnicity information in the ePR records, compared with 50% (253/510).
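The completeness comparison above is essentially a per-field missing-value count over two matched record sets. A minimal Python sketch of that kind of check is given below; the record layout, field names, and sample values are hypothetical illustrations, not the application's actual data model or the evaluation team's code.

from typing import Iterable, Mapping, Optional


def pct_incomplete(records: Iterable[Mapping[str, Optional[str]]], field: str) -> float:
    """Percentage of records whose value for `field` is missing or blank."""
    records = list(records)
    missing = sum(1 for r in records if not (r.get(field) or "").strip())
    return 100.0 * missing / len(records)


if __name__ == "__main__":
    # Hypothetical matched record sets: ePR output vs. closed I-NEDSS reports.
    epr = [
        {"treatment": "azithromycin 1 g", "race": "Black or African American",
         "ethnicity": "Not Hispanic or Latino"},
        {"treatment": "", "race": "White", "ethnicity": "Hispanic or Latino"},
    ]
    inedss = [
        {"treatment": "", "race": "", "ethnicity": "Not Hispanic or Latino"},
        {"treatment": "", "race": "White", "ethnicity": ""},
    ]
    for field in ("treatment", "race", "ethnicity"):
        print(f"{field}: ePR {pct_incomplete(epr, field):.0f}% incomplete "
              f"vs. I-NEDSS {pct_incomplete(inedss, field):.0f}%")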
These preliminary evaluation results suggest that eCR improves the data quality of surveillance case reports. The evaluation of the daily data feed is ongoing, and ePR data quality will be monitored continuously.

Conclusions
Evaluation plays an integral role in developing and implementing the eCR process in Chicago. The stepwise evaluation process ensures that ePR data quality meets public health requirements, so that public health will be able to act on more complete information to improve population health.