
Article Information

  • Title: Balancing statistical accuracy and subject privacy in large social-science data sets
  • Authors: Olivia Angiuli; Joe Blitzstein; Jim Waldo
  • Journal: ACM Queue (Online): tomorrow's computing today
  • Electronic ISSN: 1542-7749
  • Publication year: 2015
  • Volume: 13
  • Issue: 8
  • Language: English
  • Publisher: Association for Computing Machinery
  • Abstract: Olivia Angiuli, Joe Blitzstein, and Jim Waldo - Harvard University. Big data is all the rage; using large data sets promises to give us new insights into questions that have been difficult or impossible to answer in the past. This is especially true in fields such as medicine and the social sciences, where large amounts of data can be gathered and mined to find insightful relationships among variables. Data in such fields involves humans, however, and thus raises issues of privacy that are not faced by fields such as physics or astronomy. Such privacy issues become more pronounced when researchers try to share their data with others. Data sharing is a core feature of big-data science, allowing others to verify research that has been done and to pursue other lines of inquiry that the original researchers may not have attempted. But sharing data about human subjects triggers a number of regulatory regimes designed to protect the privacy of those subjects. Sharing medical data, for example, requires adherence to HIPAA (Health Insurance Portability and Accountability Act); sharing educational data triggers the requirements of FERPA (Family Educational Rights and Privacy Act). These laws require that, to share data generally, the data be de-identified or anonymized (note that, for the purposes of this article, these terms are interchangeable). While FERPA and HIPAA define the notion of de-identification slightly differently, the core idea is that if a data set has certain values removed, the individuals whose data is in the set cannot be identified, and their privacy will be preserved.
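The abstract's closing point, that de-identification amounts to removing certain values from a data set so that the individuals in it cannot be identified, can be illustrated with a minimal sketch. This is not the authors' method, only a generic example of stripping direct identifiers before sharing; the field names used (name, email, student_id) are hypothetical and do not come from the article.

```python
# Minimal de-identification sketch: drop direct-identifier fields from each
# record before a data set is shared. Field names here are hypothetical.

DIRECT_IDENTIFIERS = {"name", "email", "student_id"}

def deidentify(records):
    """Return copies of the records with direct-identifier fields removed."""
    return [
        {field: value for field, value in record.items()
         if field not in DIRECT_IDENTIFIERS}
        for record in records
    ]

if __name__ == "__main__":
    sample = [
        {"name": "A. Student", "email": "a@example.edu",
         "student_id": "12345", "age": 20, "grade": "B+"},
    ]
    print(deidentify(sample))
    # [{'age': 20, 'grade': 'B+'}]
```

As the article goes on to discuss, removing direct identifiers alone is often insufficient, since combinations of the remaining attributes can still single out individuals; the sketch only shows the basic operation the regulations describe.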