Guardrails

The Center for Public Sector AI (CPSAI) is committed to pursuing two primary initial functions: creating a set of 'evolving' guardrails for the ethical deployment of 'Contemporary Technologies' in government, and developing a 'Project Clearinghouse' that allows government leaders to sharpen their projects through a well-designed process for evaluation and evolution.

Purpose of Guardrails

Many organizations and institutions are thinking deeply about the policy guardrails needed for government to safely implement - and oversee - the use of generative AI and other emerging technologies. Using these resources as a foundation, CPSAI is bringing together national tech, data, and human services experts to publish operational guardrails and use case guidelines that can help agencies evaluate their options and make informed decisions about deploying these technologies to support benefit delivery.

These guardrails can also help align private-sector technological development with public-sector values and needs. Guardrails are not fixed criteria; they evolve over time as the technology advances and its impacts are appropriately evaluated.

What is a Guardrail?

A guardrail is an ethical norm, value, or rule that CPSAI has identified as essential to the mission of public sector agencies: it ensures that human dignity and well-being are protected as emerging technologies are integrated into the public sector. Any technological innovation relating to AI (or similarly developing technologies) should uphold these guardrails as it is implemented and integrated into the public sector.

Though a guardrail is an ethical norm, value, or rule, it is not unchanging. As contexts and technologies change, so may guardrails — they are applied ethical rules, not timeless ones.

Example Guardrail

The following example is in draft form but provides insight into CPSAI's direction when evaluating guardrails.

Negative Determination:

Even where an AI system is given authority to act on applications for HHS benefits, the decision to make a negative determination cannot be within the AI's authority; a human employee must review all such cases. Put plainly, emerging technologies should not tell people they are ineligible for food or housing benefits. The technology can be used to streamline the application process, but final negative determinations are not in the purview of technology.
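The routing rule this guardrail describes can be sketched in code. The following is a minimal, hypothetical illustration (not CPSAI's implementation; all names are invented): the system may auto-approve an application it finds eligible, but any negative recommendation is referred to a human reviewer rather than issued as a determination.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    APPROVE = "approve"              # positive determination may be automated
    REFER_TO_HUMAN = "refer_to_human"  # negative outcomes require human review


@dataclass
class Recommendation:
    """A hypothetical AI eligibility recommendation for a benefits application."""
    eligible: bool
    reason: str


def route_determination(rec: Recommendation) -> Decision:
    """Apply the guardrail: the AI may approve, but a negative
    determination is outside its authority and must be routed to
    a human employee for review."""
    if rec.eligible:
        return Decision.APPROVE
    # Never emit a denial directly; queue the case for a person.
    return Decision.REFER_TO_HUMAN
```

For example, `route_determination(Recommendation(eligible=False, reason="over income limit"))` returns `Decision.REFER_TO_HUMAN`, so the applicant is never told "no" by the system itself. The design point is that denial is not a branch the software can reach on its own.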