Approved October 19, 2022

AI and ML are a set of methods for drawing inferences from data. They differ from traditional methods in that they require few assumptions about the data, such as the nature of the underlying distribution. The price of this flexibility is their enormous appetite for data. While many of the underlying technologies, such as neural networks, were developed decades ago, ubiquitous data collection enabled by advances in computer technology, the Internet, and the Internet of Things now provides enough data to make AI and ML powerful practical tools with the potential to rapidly advance scientific knowledge. SACHRP also acknowledges that the availability of data creates pressure on scientists to use that data to learn and to build tools for public and commercial good. However, there is broad concern about the harms (e.g., bias, discrimination, creation and reinforcement of inequity) of ubiquitous data collection and the application of AI to individuals and, perhaps more importantly, to groups.

Charge to SACHRP

1. Under what conditions would collection of data for AI or AI validation activities meet the Common Rule definition of research that is “designed to develop or contribute to generalizable knowledge”?

2. When AI involves research involving private identifiable information (PII), when are those persons human subjects? Does the research capture the “about whom” part of the human subject definition? Are there other ethical considerations for these persons?

3. When would collection of data for AI or AI validation activities typically be exempt under the Common Rule?

4. For studies requiring review under the Common Rule, what human subject protections considerations are most prominent for the humans whose information is included in datasets used and shared for AI development? Do those considerations differ where the research is focused on the testing or validating of AI? Are other ethical considerations relevant for those who are not human subjects?
5. Are there existing frameworks or tools that funding agencies, investigators, HRPP staff, and IRBs can use to illuminate and mitigate ethical concerns with human-focused AI research and development?

6. Are there considerations specific to AI that impact the adequacy of disclosure of research activities in the research informed consent form?

7. What is “unique” about research that includes AI that would require the IRB to think about and determine the applicability of the Common Rule that isn’t already considered for all human subjects research?

8. What specific sections of 45 CFR 46.111 would need special attention in research with AI; i.e., privacy and confidentiality; informed consent; risks?

9. What are the specific considerations regarding AI that are pertinent to institutional/HRPP responsibilities, versus responsibilities for other studies under the purview of the IRB?

10. Is there a larger potential for bias and/or flaws in the use of AI in research, and how should IRBs think about this potential in their review? (E.g., facial recognition algorithms could be heavily based on white males, but the researchers “using the algorithm” might not be aware of this.)

1. Under what conditions would collection of data for AI or AI validation activities meet the Common Rule definition of research that is “designed to develop or contribute to generalizable knowledge”?

Where data collection is part of the explicit research proposal, such collection comfortably fits the Common Rule definition of research. But AI often uses data that are collected for another purpose, e.g., medical records or social media posts. Under the current regulatory framework, such collection is not research in itself, and the subsequent secondary use of such data is often deemed to fall under the Common Rule exemption at 45 CFR 46.104(d)(4). Finding such research exempt can leave research subjects unprotected from foreseeable harms or leave them uninformed about how information they provide will be used.
Examples of the latter include research using material collected from social media posts, which are considered “publicly available.” Knowledge that such information will be used for other specific purposes may affect individuals’ willingness to provide it. Similarly, information gleaned from “de-identified” medical records was collected during the course of medical care, and such collection is a condition of receiving care, depriving individuals of the ability to choose whether or not to participate in research. Further, the value of de-identification is questionable, given advancing technology and the ability to combine disparate sources of data, leaving subjects vulnerable to harms consequent on exposure of sensitive and identifiable personal data.

Exemptions try to balance the common good of new knowledge against limitations on the autonomy of individual subjects, and the current regulatory approach is not necessarily wrong, but it was developed before Big Data (BD) and AI were common. Current research using BD and AI is taking advantage of research exemptions that were not developed for this purpose. This limitation was explicitly recognized in the 2018 Common Rule through the commitment at 102(e)(7)(i) to revisit the concept of identifiability on a regular basis. Consequently, much AI research is compliant but not necessarily adequately protective of the rights and welfare of research participants.

It has been argued that AI validation activities (e.g., collecting data explicitly and only to train or subsequently validate an algorithm) are not “designed to develop or contribute to generalizable knowledge.” This argument is sound if the intended use of that algorithm is limited to its application to the original dataset, but if the intent is to build a tool to be applied to a broader community or to data not yet collected, the situation is directly akin to the development of a diagnostic tool (in the broadest sense).
It is the nature of research that diagnostic tools must be developed with data from a subset of the full population, hence the requirement for research participation. In this sense, development of an AI/ML tool is no different from the development of an in vitro diagnostic device, and SACHRP takes the position that it should have the same degree of regulatory oversight.

2. When AI involves research involving private identifiable information (PII), when are those persons human subjects? Does the research capture the “about whom” part of the human subject definition? Are there other ethical considerations for these persons?

The Common Rule defines a human subject at 102(e)(1) as “a living individual about whom an investigator… conducting research: (i) Obtains information or biospecimens through intervention or interaction with the individual and uses, studies, or analyzes the information or biospecimens, or (ii) Obtains, uses, studies, analyzes, or generates identifiable private information or identifiable biospecimens.” The use of Big Data in research involving AI/ML invites a nuanced interpretation of this definition. Specifically, when is information, even PII, “about” an individual? If, for example, a researcher is characterizing human faces using images scraped from websites, are those images really “about” the individuals whom they depict? More generally, is specific information collected without any intent to infer anything about the particular individuals really about them? SACHRP notes that similar arguments could be made about the prospective collection of laboratory data to establish normal ranges. Unlike scraping images from a website, such collection might involve intervention or interaction, but the “about whom” regulatory qualifier would still apply. The novelty of AI lies not in “about whom” but in the ability to use, study, analyze, or generate identifiable private information without intervention or interaction.
SACHRP believes that asking whether PII is about an individual was not the intent of the regulatory language, that requiring such a specific determination would unnecessarily complicate and burden exemption determinations, and that introducing such a burden is not justified. Regarding other ethical considerations, both the regulatory language and the website example raise the question of what information is considered private versus public. Traditional definitions of “private” and “public” should not be assumed to apply in the novel environment created by the Internet, the World Wide Web, and (in the foreseeable future) the Metaverse, where information that was traditionally private must be shared as the cost of participation, which is becoming an expected social norm.

3. When would collection of data for AI or AI validation activities typically be exempt under the Common Rule?

SACHRP believes that the exemption criteria at 45 CFR 46.104(d)(4)(i) and (ii) can be interpreted to broadly cover much research involving AI/ML, but that AI/ML raises new issues around identifiability and the distinction between public and private that create a gap between compliance and appropriate protection. The Common Rule explicitly calls for reexamination of identifiability at 45 CFR 46.102(e)(7); such a reexamination is overdue. Information can be captured so that the identity of the human subjects cannot be “readily ascertained,” but the ability to combine datasets makes it very difficult to abide by the implicit promise to research subjects that data collected in this way will not be associated with them as individuals in the future. There are technologies that can make such a promise much more robust (see the first reference in question 5), but they are complex, poorly understood, and often come at some cost in the general utility of the data. For all these reasons, it is unlikely that such technologies will be used unless there is a regulatory interpretation that requires it.
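The fragility of de-identification described above can be illustrated with a minimal "linkage attack" sketch. The records, names, and field choices below are entirely hypothetical; the point is only that a handful of quasi-identifiers (here ZIP code, birth year, and sex), none of which is a direct identifier on its own, can uniquely link a "de-identified" research record to a named individual in a publicly available dataset:

```python
# Hypothetical illustration of a linkage attack: joining a de-identified
# dataset to a public roster on quasi-identifiers re-identifies a subject.

deidentified_records = [  # direct identifiers (name, MRN) already removed
    {"zip": "20852", "birth_year": 1961, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "20852", "birth_year": 1978, "sex": "M", "diagnosis": "asthma"},
]

public_roster = [  # e.g., a voter roll or scraped social media profiles
    {"name": "A. Smith", "zip": "20852", "birth_year": 1961, "sex": "F"},
    {"name": "B. Jones", "zip": "20014", "birth_year": 1978, "sex": "M"},
]

def link(records, roster):
    """Return records whose quasi-identifiers match exactly one named person."""
    reidentified = []
    for rec in records:
        matches = [p for p in roster
                   if (p["zip"], p["birth_year"], p["sex"]) ==
                      (rec["zip"], rec["birth_year"], rec["sex"])]
        if len(matches) == 1:  # a unique match defeats the de-identification
            reidentified.append({"name": matches[0]["name"], **rec})
    return reidentified

print(link(deidentified_records, public_roster))
# The unique quasi-identifier combination links "A. Smith" to a diagnosis.
```

Real attacks of this form scale the same logic to millions of records; the defense technologies alluded to above (e.g., differential privacy) work by deliberately blurring exactly these unique combinations, at some cost to data utility.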
As noted in the recommendations that follow, while it is still possible to draw a line between “private” and “public,” it is no longer clear that such a line corresponds to an ethically meaningful distinction or properly reflects how research participants would distinguish information that should or should not be protected by the regulations.

4. For studies requiring review under the Common Rule, what human subject protections considerations are most prominent for the humans whose information is included in datasets used and shared for AI development? Do those considerations differ where the research is focused on the testing or validating of AI? Are other ethical considerations relevant for those who are not human subjects?

Under current regulatory interpretation, only a subset of studies would require review under the Common Rule. Such studies would be characterized as those in which data were considered “identifiable private information” but not subject to the exemptions for secondary use. In other words, the data would not be “publicly available,” the identity of the human subjects would have to be “readily ascertained” by the investigator, and the data could not be protected under another regulatory regime, specifically HIPAA or the Federal Privacy Act. Research that meets these criteria would probably be considered minimal risk, in that collection and use of data have become a ubiquitous reality of everyday life, and would meet the criterion for Expedited Review under category 5 (research involving materials that have been collected solely for non-research purposes). Given current practice and the minimal risk nature of the research, protections would probably be those resulting from consideration of 111(a)(3) (equitable selection of subjects) and 111(a)(7) (privacy protections).
It is very likely that AI research, even if it met all the requirements to place it under the Rule’s active oversight, would qualify for a waiver of informed consent, in that the research could not practicably be conducted without such a waiver. Note that the protections afforded by the risk/benefit calculation of 111(a)(2) would be limited, since the research would probably be deemed minimal risk, and most IRBs would interpret the prohibition against considering the long-range effects of applying knowledge gained in the research as precluding protections against group harms resulting from profiling or reinforcement of existing bias.

There is no clear difference in protections for those whose information is included in datasets used in AI development versus validation and testing, although there is at least one area where data are routinely collected for the purpose of AI development: mHealth. If the data collection itself is part of the research, as in the development of mobile devices, research participants would be expected to have the additional protection afforded by the requirement for their voluntary and informed consent to participate.

Lastly, there are ethical considerations for those who are not considered human subjects under the regulations. As noted earlier, these include the perpetuation of group harms, profiling, and potential redirection of public resources away from addressing the root causes of disease and marginalization.

5. Are there existing frameworks or tools that funding agencies, investigators, HRPP staff, and IRBs can use to illuminate and mitigate ethical concerns with human-focused AI research and development?

The following documents provide some useful tools and perspectives for understanding the evolving area of AI and the ethical concerns related to its use. The list is not exhaustive, nor does SACHRP necessarily endorse the perspectives of the authors.

Kearns, M., & Roth, A. (2020). The Ethical Algorithm: The Science of Socially Aware Algorithm Design.
Oxford University Press. A book that provides a non-technical discussion of computing approaches to building principles and values into AI/ML algorithms themselves.

Hutson, M. (2022, Feb 26). The Future of Computing. Science News, 201(4), 16-22. https://www.sciencenews.org/century/computer-ai-algorithm-moore-law-ethics A lay article that reviews the history of computing technology, the history and fundamental concepts behind AI/ML, and some of the ethical issues raised.

Wolfram U. Zero to AI in 60 Minutes. https://www.wolfram.com/wolfram-u/machine-learning-zero-to-AI-60-minutes/ (last viewed April 8, 2022). A short online course that illustrates how easy it is for anyone with access to the appropriate tools to use AI/ML without deep knowledge of software design or coding.

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. Berkman Klein Center Research Publication. https://dash.harvard.edu/bitstream/handle/1/42160420/HLS%20White%20Paper%20Final_v3.pdf?sequence=1&isAllowed=y A white paper that presents an international survey of AI governance documents.

Bernstein, M. S., et al. (2021). Ethics and society review: Ethics reflection as a precondition to research funding. Proc Natl Acad Sci U S A, 118, e2117261118. A description of one university’s approach to addressing potential harms of AI/ML research.

White House OSTP (2022). Blueprint for an AI Bill of Rights. Available at https://www.whitehouse.gov/ostp/ai-bill-of-rights/ (accessed October 19, 2022). The perspective of the White House Office of Science and Technology Policy on harms and appropriate protections, largely focused on the commercial use of AI.

Federal Trade Commission advance notice of proposed rulemaking: Trade Regulation Rule on Commercial Surveillance and Data Security. August 22, 2022.
Federal Register, 87(161), pp. 51273-51299. Proposed rules from the Federal Trade Commission that provide a landscape analysis and assessment of potential harms of the commercial use of Big Data.

6. Are there considerations specific to AI that impact the adequacy of disclosure of research activities in the research informed consent form?

Research that requires voluntary informed consent is likely to be a minority of all AI research. For AI research that does require informed consent, the nature of the risks and benefits of such research is ill-suited to the current required elements of consent. In particular, 116(b)(2) requires the disclosure of “any foreseeable risks or discomforts to the subject,” while 116(b)(3) requires disclosure of “any benefits to the subject or to others that may reasonably be expected from the research.” This asymmetric consideration of risk and benefit mirrors that in the IRB approval criteria. Risks of harm may accrue to all, just as potential benefits are expected to accrue to all, but only the latter are allowed to be considered under the current regulations. The current conduct of AI research benefits from this discrepancy, in that the most important harms impact groups, not individuals. Regulated research is a public enterprise; risks and benefits should balance both private and public interests. The current regulatory structure gives only part of that task to the IRB.

The disclosure requirement at 116(b)(9), added in the updated Common Rule, is also ill-suited to AI or BD research, in that it reflects an overly simplified concept of identifiability. Removal of identifiers no longer means that individuals cannot be identified, nor does it mean that private and sensitive information will not be disclosed and potentially connected back to the individual in the future. That risk should be explicitly disclosed.
7. What is “unique” about research that includes AI that would require the IRB to think about and determine the applicability of the Common Rule that isn’t already considered for all human subjects research?

There is little “wiggle room” in the regulations for the IRB to determine whether the Common Rule is applicable; a better question might be whether the current definitions of human subject and human subjects research allow the IRB to adequately protect both individuals and groups in the context of research that includes AI.

8. What specific sections of 45 CFR 46.111 would need special attention in research with AI; i.e., privacy and confidentiality; informed consent; risks?

Please see the responses to questions 4 and 6, above.

9. What are the specific considerations regarding AI that are pertinent to institutional/HRPP responsibilities, versus responsibilities for other studies under the purview of the IRB?

AI raises issues of group harms, most related to poorly understood limits of datasets and the possibility that the use of AI tools may obscure underlying and addressable causes of disease, marginalization, and inequity. In addition, BD raises issues of privacy and identifiability that are not well addressed in the current regulations. Insofar as institutions are responsible to the communities they serve or in which they are located, these considerations should be addressed by those institutions, possibly through their HRPPs. Many foreseeable harms, however, extend well beyond the domain of any single institution and would be better addressed at a federal level. In addition, leaving this responsibility to individual institutions risks creating a patchwork of inconsistent protections that will inevitably allow the better protected to benefit at the expense of those less well protected.

10. Is there a larger potential for bias and/or flaws in the use of AI in research, and how should IRBs think about this potential in their review?
(E.g., facial recognition algorithms could be trained heavily on white males, but the researchers “using the algorithm” might not be aware of this.)

The potential harms of AI arise from unrecognized limitations or biases in datasets, such as those arising from systemic racism and discrimination and other circumstances in which data do not represent the population to which their conclusions will be applied. Further, in most AI research, the assembly of the initial dataset is separate from the AI research itself, making it even more likely that investigators may be unaware of the limited generalizability of their conclusions.

Recommendations

Identifiability and privacy

AI/ML and BD research expose the limits of the traditional concept of identifiability that serves as the basis for privacy protections under the Common Rule. The explicit ability to identify an individual from a specific dataset was an appropriate criterion when data were analyzed in isolation, when data collection occurred primarily in the context of well-defined research studies (i.e., before the widespread use of electronic health records and ubiquitous data collection outside of healthcare), and before the routine collection and use of genomic data, which are arguably intrinsically identifying. SACHRP urges the Secretary to follow through on the Common Rule’s commitment to regularly reexamine the meaning of identifiability in response to evolving technology and research practices.

Further, the combination of large datasets makes it possible to learn or infer information about individuals that they may not have knowingly disclosed. In a sense, this is one of the goals of AI/ML, in that it uses patterns in data to infer novel or undisclosed information about such individuals. If research can essentially recreate private and sensitive information about people, even if their identity is not explicit, would individuals consider this a violation of their right to privacy?
In other words, do BD and AI/ML allow researchers to create “virtual subjects” on whom research can be conducted without the burden of abiding by regulation, but without meaningful difference from research on identifiable data? SACHRP recommends that the Secretary consider whether identifiability remains a concept that would be recognized by research participants and the general public as useful in setting limits on federally guaranteed protections.

Definition of human subject

The Common Rule definition of human subject most relevant to AI/ML is “a living individual about whom an investigator (whether professional or student) conducting research: …(ii) Obtains, uses, studies, analyzes, or generates identifiable private information or identifiable biospecimens.” The 2018 update to the regulations added the possibility of generating identifiable private information to the existing definition, appropriately recognizing that datasets and genomic information are rarely used in isolation and that the combination of datasets can identify individuals even if no single dataset is itself identifying. Nonetheless, the regulations continue to rely on the concept of “public” to exclude from their protections individuals who openly disclose information. This concern is not new; the line between public and private behavior has always been indistinct, and there has always been a tension between “public” behavior or speech and whether such behavior or speech was intended for a public audience. The internet and social media have made this concern more acute for a much broader population. Social media invites individuals to share information about themselves with the promise that its users will build communities, but with the commercial purpose of collecting data and profiling the behavior of groups.
Similarly, the use of credit and debit cards as a replacement for cash offers users convenience and flexibility in financial management but now serves the additional purpose of data collection and profiling of purchasing patterns. Indeed, modern society is characterized by the collection of data on individuals at every possible opportunity. Whether such data collection is appropriately disclosed, whether individuals really can choose not to allow such collection without severely disadvantaging themselves socially and financially, or whether this data collection is exploitative is a much larger question than federal protection of research participants, but it sits quietly in the background of AI/ML and BD considerations under the Common Rule. The Rule allows us to avoid considering these deeper questions, and specifically whether federally funded research should be held to a higher standard in these areas than commercial activity, by considering much of this information “public.” SACHRP recommends that the Secretary consider a more nuanced but explicit definition of public versus private behavior and private information that recognizes the deep changes wrought by technology since these concepts were first enshrined in regulation.

The necessity of inclusion in setting new standards

The original research regulations were written largely, if not exclusively, in response to harms that occurred in biomedical research, and their requirements disproportionately protect against physical harms that would be recognized as such by all members of society. Similarly, there is an assumed broad consensus that improving health and lessening the burden of disease is a worthwhile public good and an appropriate role for the federal government. While AI/ML and BD can be used in biomedical and healthcare research settings, many of the risks they present and the benefits they promise fall outside these domains. From the perspective of risk, many of their potential harms fall on groups.
Relying on data that can only reflect current or past practice, their application risks cementing or falsely validating inappropriate group differences and biases that are necessarily captured in such data. Addressing such harms by protecting individual members of such groups, which is arguably the approach taken by regulation in response to physical harms that fall disproportionately on groups, is not adequate when the technology of AI makes reasoning opaque and “due process” difficult. From the perspective of benefit, many of the objectives of AI research may not be of obvious and equal value to all members of society. These different valuations may be the result of a history of group exclusion from the benefits of research, the concern that AI/ML will further social marginalization, or different cultural norms.

Experience with research involving Native American tribal communities illustrates this concern. Such communities are recognized as sovereign, and their right to their own cultural valuations is therefore enshrined in law and regulation, which explicitly allows them to adapt the Common Rule to their own communities. The genomic research conducted at Arizona State University using biospecimens derived from members of the Havasupai Tribe illustrates that concerns about group harms and group norms are relevant even to research that could be described as biomedical; cultural variation in assessing the value of research is all the more likely when the goal is to learn about group characteristics. While the sovereignty of Native American tribes provides their members with some unique jurisdictional and legal protections, there are many other groups with which individuals identify that have no such recognition or protection, but whose members are likely to feel equally strongly about their shared community values.
How to include relevant voices in establishing or interpreting research regulations is a difficult problem that is unlikely to have a solution that will satisfy all, a characteristic of many issues that define the relationship between individuals, groups, and government in a pluralistic democracy. Nonetheless, this difficulty should not be an excuse not to explicitly consider the problem and seek a solution that tries to address group concerns fairly, particularly when research is publicly funded. SACHRP recommends that the Secretary consider establishing fora and mechanisms to facilitate dialogue, and ultimately regulatory guidance, about how the interests of groups predictably affected by AI research might be considered and protected, consistent with maintaining scientific integrity. Further, SACHRP recommends that, based on such opportunities for dialogue, the Secretary establish formal guidance to ensure that anticipated benefits as well as risks of harm to affected groups, particularly in research outside the biomedical domain, are considered when HHS considers funding research projects that use AI or that refine AI methods and algorithms, when such group benefits and harms may predictably be at stake.

Related SACHRP Documents

Justice

SACHRP issued a recommendation titled “Consideration of the Principle of Justice 45 CFR part 46” in July 2021. Central to this recommendation was the recognition that publicly funded research is a societal project that depends on public trust in researchers, research institutions, and government regulators. Insofar as AI/ML raises questions of group harms and benefits and can, in many cases, be conducted without individual consent, it is an area where considerations of justice and public trust are particularly relevant.
This document incorporates by reference the recommendations made in that earlier document for measures to address inequities in the burdens and benefits of research and to restore the trustworthiness of the research enterprise. These recommendations should be considered complementary to the recommendations to the Secretary on Justice; AI/ML can exacerbate those same concerns.

Risk to non-subjects

SACHRP has been discussing risks to individuals who are affected by research activities but who do not meet the regulatory definition of human subjects and are thus not explicitly protected by current regulations. Much AI/ML research would not be considered to involve human subjects, and the recommendations of the document titled “The Protection of Non-Subjects from Research Harm” should similarly be incorporated by reference.