Policy Monitor

USA – White House Blueprint for an AI Bill of Rights

The White House Blueprint for an AI Bill of Rights is a non-binding white paper that sets out five principles to guide the development and governance of automated systems. The blueprint explains the importance of each principle, offers recommendations (framed as expectations) on how the principle should be achieved, and includes real-life examples of how the principles can apply.

What: White paper

Impact score: 4 - declaration of principles

For whom: international policymakers, AI-related companies, sector organisations

URL: https://www.whitehouse.gov/ost...

Summary

The White House Office of Science and Technology Policy (OSTP) has published a Blueprint for an AI Bill of Rights with principles to guide the design, use and deployment of automated systems. The blueprint sets out five principles for building and deploying automated systems in ways that protect the (American) public, and it applies to automated systems that have the potential to meaningfully impact individuals’ or communities’ rights, opportunities or access. For each principle, the blueprint explains its importance, formulates expectations (recommendations for its development or use) and provides real-life examples of how the principle can apply. The blueprint is explicitly a non-binding white paper and should not be considered U.S. government policy. The blueprint also provides exceptions, and suggests alternative safeguards, for law enforcement activities and national security matters.

Principle: Safe and Effective Systems

Automated systems should be safe and effective. The blueprint urges developers to proactively protect the public from harm through public consultation with diverse stakeholders, pre-deployment testing, and risk identification and mitigation for automated systems. Systems should also be subject to ongoing monitoring and organizational oversight. They should be trained on relevant, high-quality data, and the use of derived or re-used data should be tracked and limited. Finally, independent evaluation of the systems should be possible, and developers and users should report on the creation, functioning and safeguards of the automated systems.

Principle: Algorithmic Discrimination Protections

Automated systems should be designed and deployed equitably, and people should not face discrimination by algorithms. The blueprint expects developers, users and auditors of automated systems to perform proactive equity assessments as part of system design. They should ensure that the data used are representative and robust, that the system does not rely on proxies that enable algorithmic discrimination, and that the system is accessible to people with disabilities. Systems should be tested, before and during deployment, for disparities between groups and, if such a disparity is found, it should be mitigated or eliminated. Finally, automated systems should be subject to independent evaluation and reporting (in the form of algorithmic impact assessments) and should be monitored during deployment so that algorithmic discrimination is regularly assessed and remedied.

Principle: Data Privacy

People should have agency over the data used about them and be protected from abusive data practices. Here the blueprint recommends built-in protections, data minimization and use limitations to safeguard the public’s privacy. This includes privacy by design and by default in automated systems, limits on data collection and use, and risk identification and mitigation for sensitive data. Developers and users should ensure the security of the data. Surveillance systems should be subject to oversight and be limited and proportionate, while individuals should be notified where possible. Any consent sought for data collection should be specific to a narrow use context and expressed in plain language. The blueprint also argues in favour of data access and correction, consent withdrawal and data deletion, as well as the possibility for people to be supported by their own automated systems when requesting these measures. For sensitive data, the blueprint expects additional protections, such as necessity limitations, ethical review and monitoring, use prohibitions in certain cases, additional care for data quality and limited access to sensitive data by others.

Principle: Notice and Explanation

People impacted by automated systems should receive clear, timely and understandable notice that an automated system is being used, as well as explanations of the decision(s) the system takes. Such documentation should be public, easy to find and written in plain language. It should identify the accountable entity and describe how the system works. Notice to users should be timely and kept up to date, as well as brief and clear. The explanations provided should be valid for the particular decision and tailored to the specific purpose of the explanation, to its audience and to the system’s level of risk.

Principle: Human Alternatives, Consideration and Fallback

Where appropriate, people should be able to opt out of automated systems and have access to a human alternative, as well as to human oversight and fallback to remedy errors. Instructions and notices on how to opt out should be brief, clear and accessible. A fallback and escalation system with human consideration should be in place in the event of failure, error or an appeal by the people affected. These fallback mechanisms should be proportionate, accessible, convenient, equitable, effective and timely relative to the automated system. Humans handling an automated system’s fallback or oversight should be trained in the use of that system. Especially in sensitive domains (e.g. health), human oversight should keep automated systems narrowly scoped and situation-specific, while human consideration should be present in high-risk decisions. Developers of automated systems should consider giving meaningful access to systems that allow such oversight.

Conclusion

The blueprint and its principles bear strong similarities to, and overlap with, various other proposals for the regulation of AI (and associated data) elsewhere in the world, such as the EU AI Act or the Canadian AIDA, as well as with other guidance documents. However, the blueprint is at the moment still a non-binding paper that is not indicative of government policy. It is therefore unclear to what degree the blueprint can and will influence the development and use of AI in the US. While sector-specific, state-level and other narrower regulations exist in the US and are mentioned in the blueprint, it remains to be seen whether more general binding regulation of automated systems will follow.