White House Issues ‘Blueprint’ for AI Bill of Rights
(November 6, 2022) The White House released the “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People,” intended to support the development of policies and practices concerning artificial intelligence systems.
The Blueprint identifies five principles to “guide the design, use, and development of automated systems to protect the American public in the age of artificial intelligence.” The principles do not propose any changes to law but rather serve as starting points for discussion. The five principles, drafted from the consumer’s perspective, are:
- You should be protected from unsafe or ineffective systems. This principle argues that automated systems should be developed in consultation with diverse communities. “Automated systems should not be designed with an intent or reasonably foreseeable possibility of endangering your safety or the safety of your community. They should be designed to proactively protect you from harms stemming from unintended, yet foreseeable, uses or impacts of automated systems.” Automated systems should allow for independent evaluation and verification. They also should include ongoing monitoring procedures to ensure that their performance does not fall below an acceptable level over time.
- You should not face discrimination by algorithms, and systems should be used and designed in an equitable way. This principle notes there is extensive evidence that automated systems can produce inequitable outcomes and amplify existing inequity. To avoid such inequity, designers, developers, and deployers “should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination,” such as conducting proactive equity assessments, using representative data, and testing results for disparities.
- You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used. This principle notes that the United States “lacks a comprehensive statutory or regulatory framework governing the rights of the public when it comes to personal data.” You should be protected from violations of privacy by systems that include privacy protections by default. “Any consent requests should be brief, be understandable in plain language, and give you agency over data collection and the specific context of use; current hard-to-understand no-notice and choice practices for broad uses of data should be changed.” This principle also is designed to keep you “free from unchecked surveillance; surveillance technologies should be subject to heightened oversight that includes at least pre-deployment assessment of their potential harms and scope limits to protect privacy and civil liberties.”
- You should know that an automated system is being used, and understand how and why it contributes to outcomes that impact you. In order to guard against potential harms, the public needs to know when an automated system is being used. The principle states that, although notice and explanation requirements currently apply only in some situations, such practices should extend to all uses. At present, “the public is often unable to ascertain how or why an automated system has made a decision or contributed to a particular outcome. The decision-making processes of automated systems tend to be opaque, complex, and, therefore, unaccountable, whether by design or omission.” Instead, “you should know how and why an outcome impacting you was determined by an automated system, including when the automated system is not the sole input determining the outcome.” Systems should provide explanations that are not only technically correct but also meaningful and useful to the user.
- You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter. You should be able to opt out in favor of a properly trained human alternative. The principle explains that some people prefer not to use an automated system, or the system may be flawed, producing unintended outcomes, reinforcing bias, or proving inaccessible. “Yet members of the public are often presented with no alternative, or are forced to endure a cumbersome process to reach a human decision-maker once they decide they no longer want to deal exclusively with the automated system or be impacted by its results.” Further, the principle observes that there are times when an automated system fails. In those instances, the public “deserves protection via human review against these outlying or unexpected scenarios.”
The White House did not provide any steps for implementing the principles.