CSG Law Alert: New York Artificial Intelligence Bill of Rights

As businesses continue to explore the rapidly evolving arena of artificial intelligence (“AI”), it’s critical for stakeholders to anticipate and prepare for potential legislation and regulations that could reshape and transform the AI landscape.

On October 13, 2023, Assembly Bill A8129, cited as the “New York Artificial Intelligence Bill of Rights” (the “Bill”), was introduced to the New York Assembly and referred to the New York Assembly Science and Technology Committee (the “Committee”). At the state level, New York joins California, Connecticut, Maryland, Pennsylvania, and Washington in introducing legislation addressing the design, development, and deployment of AI.

If passed and enacted, the Bill would provide New York residents with certain rights and protections to “ensure that any system making decisions without human intervention impacting their lives do so lawfully, properly, and with meaningful oversight.”

As currently drafted, the Bill would grant New York residents the following:

  • the right to safe and effective systems;1
  • protections against algorithmic discrimination;2
  • protections against abusive data practices;3
  • the right to have control over one’s data;
  • the right to know when an automated system4 is being used;
  • the right to understand how and when an automated system contributed to outcomes;5
  • the right to opt out of an automated system; and
  • the right to work with a human in place of an automated system.

To Which Businesses Is the Bill Directed?

The Bill would apply to businesses that design, develop, and deploy automated systems that may impact New York residents’:

  • civil rights, civil liberties, and privacy, such as:
    • freedom of speech
    • voting rights
    • protections from discrimination
    • protections from excessive or unjust punishment
    • protections from unlawful surveillance
  • equal opportunities, including equal access to:
    • education
    • housing
    • credit
    • employment
  • access to critical resources or services

In particular, the Bill would penalize businesses whose automated systems jeopardize or violate the well-being and security of New York residents, or impact New York residents’ access to and receipt of:

  • healthcare
  • financial services
  • safety
  • social services
  • non-deceptive information about goods and services
  • government benefits

This means that businesses that provide products or services involving the above should be especially careful in connection with the design, development, and deployment of AI.

What conduct is targeted by the Bill?

Unsafe and Ineffective Automated Systems. For a business that designs, develops, or deploys automated systems, the Bill would require, among other things:

  • pre-deployment testing
  • risk identification and mitigation
  • ongoing monitoring that demonstrates safety and effectiveness in connection with intended use
  • independent evaluation6 and reporting7 that confirms the safety and effectiveness of the automated system

If the automated system fails to demonstrate safety and effectiveness, the Bill would both prohibit the deployment of such automated system and require the removal of unsafe and ineffective automated systems that are already deployed and in use.

Algorithmic Discrimination. The Bill would require businesses that design, develop, or deploy automated systems to:

  • perform proactive equity assessments;
  • use representative data;
  • protect against proxies8 for demographic features;
  • assure accessibility for New York residents with disabilities;
  • perform pre-deployment and ongoing disparity testing and mitigation; and
  • conduct algorithmic impact assessments.

Data Privacy Practices. The Bill would require privacy by design, meaning that automated systems must:

  • contain built-in protections that protect the confidentiality, integrity, and availability of New York residents’ data
  • be able to honor choices made regarding the collection, use, access, transfer, and deletion of a New York resident’s data9
  • comply with consent in scenarios in which consent is appropriate and meaningfully given

Transparency. The Bill would require businesses to provide New York residents with timely, accurate, current, and complete documentation, describing:

  • the overall system functioning
  • the role of automation
  • a notice of system use
  • identification of the individual or organization
  • explanations of outcomes

Human Alternatives. The Bill would grant New York residents the right to opt out of automated systems, meaning that businesses must offer an alternative that involves consideration by a human.

What are the penalties under the Bill?

A business in violation of the Bill would be liable for a fine of not less than three times the damages caused by the violation. While the Bill would not allow private causes of action by New York residents, it would permit recovery of damages in an action brought by the attorney general.

What’s Next?

The Bill has been referred to the Committee for further consideration and will be the subject of public hearings to gather additional insight and further information. After such consideration, the Committee may amend the Bill, report it to the full house, or reject the Bill in full.

CSG Law will continue to track the Bill. For more information about current and pending AI legislation, resources for guidance to meet the Bill’s mandates, and AI considerations that impact your vendor risk assessments, please feel free to reach out to your CSG Law attorney.

1 On January 26, 2023, the National Institute of Standards and Technology (“NIST”) published the AI Risk Management Framework (AI RMF 1.0), a framework “intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.” For more information, please see https://www.nist.gov/itl/ai-risk-management-framework.

2 The Bill defines “algorithmic discrimination” as “circumstances where an automated system contributes to an unjustified different treatment or impact which disfavors people based on their age, color, creed, disability, domestic violence victim status, gender identity or expression, familial status, marital status, military status, national origin, predisposing genetic characteristics, pregnancy-related condition, prior arrest or conviction record, race, sex, sexual orientation, or veteran status or any other classification protected by law.”

3 Abusive practices include misappropriation and misuse of personal data acquired via use of automated systems.

4 The Bill defines “automated system” as “any system, software, or process that affects New York residents and that uses computation as a whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with New York residents or communities. Automated systems shall include, but not be limited to, systems derived from machine learning, statistics, or other data processing or artificial intelligence techniques, and shall exclude passive computing infrastructure.”

5 This right is an extension of already existing laws and rules in effect. Effective as of July 5, 2023, a regulation, adopted by the New York City Department of Consumer and Worker Protection, already governs the use of automated employment decision tools.

6 As currently drafted, the Bill does not explain who would be conducting such independent evaluations. However, NIST’s AI RMF 1.0 states, “Processes for independent review can improve the effectiveness of testing,” which may include involvement of “internal experts who did not serve as front-line developers for the system,” “independent assessors” involved in “regular assessments and updates,” “domain experts,” and “AI actors external to the team that developed or deployed the AI system.” See AI RMF 1.0, pg. 29.

7 The Bill articulates that such reporting, including reporting of steps taken to mitigate potential harms, must be performed, and the results made public, “whenever possible.”

8 This means avoiding the use of inaccurate and improper substitute or “proxy” data pertaining to demographic characteristics.

9 While the New York Stop Hacks and Improve Electronic Data Security Act (“SHIELD Act”) strengthens New York’s data security laws, New York does not (yet) have privacy laws that would afford these rights to New York residents.