Transparency of machine-learning in healthcare: The GDPR & European health law
Affiliations: 1. Centre for Health, Law and Emerging Technologies ('HeLEX'), Faculty of Law, University of Oxford, Ewert House, Oxford OX2 7DD, UK; 2. Faculty of Law, University of Copenhagen, Karen Blixens Plads 16, 2300 Copenhagen, Denmark; 3. Novo Nordisk Foundation Center for Protein Research, Faculty of Health and Medical Sciences, University of Copenhagen, Blegdamsvej 3B, 2200 Copenhagen, Denmark
Abstract:Machine-learning (‘ML’) models are powerful tools which can support personalised clinical judgments, as well as patients’ choices about their healthcare. Concern has been raised, however, as to their ‘black box’ nature, in which calculations are so complex they are difficult to understand and independently verify. In considering the use of ML in healthcare, we divide the question of transparency into three different scenarios:
  1) Solely automated decisions. We suggest these will be unusual in healthcare, as Article 22(4) of the General Data Protection Regulation presents a high bar. However, if solely automated decisions are made (e.g. for inpatient triage), data subjects will have a right to 'meaningful information' about the logic involved.
  2) Clinical decisions. These are decisions made ultimately by clinicians—such as diagnosis—and the standard of transparency under the GDPR is lower due to this human mediation.
  3) Patient decisions. Decisions about treatment are ultimately taken by the patient or their representative, albeit in dialogue with clinicians. Here, the patient will require a personalised level of medical information, depending on the severity of the risk and how much they wish to know.
In the final category, decisions made by patients, we suggest European healthcare law sets a more personalised standard of information requirement than the GDPR. Clinical information must be tailored to the individual patient according to their needs and priorities; there is no monolithic 'explanation' of risk under healthcare law. When giving advice based (even partly) on an ML model, clinicians must have a sufficient grasp of the medically relevant factors involved in the model output to offer patients this personalised level of medical information. We use the UK, Ireland, Denmark, Norway and Sweden as examples of European health law jurisdictions which require this personalised transparency to support patients' rights to make informed choices. This adds to the argument for post-hoc, rationale explanations of ML to support healthcare decisions in all three scenarios.