Testifying in Support of SB24–205 Consumer Protections for Artificial Intelligence

Beth Rudden
Apr 25, 2024
Screen shot of those of us who were in favor of approving SB24–205 Consumer Protections for AI

Last night, I testified in front of the Colorado Senate Judiciary Committee, and it was awesome. It was fantastic to see so many people involved in our government and so many folks interested in AI regulation. The bill on the docket was SB24–205, Consumer Protections for Artificial Intelligence. The bill was amended hours before the committee met, and many of us got the strike-through version minutes before we had to testify. I find the pace of our government fascinating: passing a bill in committee only means it is on its way to becoming a law. It took me back to the Schoolhouse Rock days of “I’m just a bill, sitting on Capitol Hill”; it’s a long, long way.

The three high school students who were present at the hearing to testify in favor of this bill made my evening. Young people getting involved in politics is exactly why I say yes to sharing my knowledge. It was also incredibly lucky for me: the panel speaking in favor of this bill consisted of these high school students and me, representing Bast AI, the only business that testified in its favor. I got to be grouped with these fantastic young people, and I could tell they had some debate training as they read through their statements with pathos!

My formal statement is below, but first I wanted to bullet out why I favor this bill; in fact, it matches a lot of what Phaedra and I wrote about in our book, “AI for the Rest of Us.”

Preventing Bias and Discrimination:

  • Transparent documentation of the AI system is mandated to identify and mitigate embedded biases that could lead to discriminatory outcomes.
  • Documentation of the AI system and of the data sets used for its models is part of the developer’s responsibility to ensure the data is representative of the problem they are trying to solve. Proper documentation should happen regardless of this bill; it is part of being an accountable developer who knows it is necessary to “show their work.” A minimal sketch of what such a record might look like follows this list.
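To make “show their work” concrete, here is a minimal sketch (in Python) of the kind of record a developer might keep for a high-risk system. It is purely illustrative: the field names and example values are hypothetical, my own invention, and not a schema or requirement drawn from SB24–205.

from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    # Hypothetical "show your work" record; illustrative only,
    # not the documentation schema defined in SB24-205.
    system_name: str
    intended_use: str                   # the consequential decision the system supports
    training_data_sources: list[str]    # where the training data came from
    known_data_gaps: list[str]          # groups or cases under-represented in the data
    bias_evaluations: dict[str, float]  # fairness metric name -> measured result
    mitigations: list[str] = field(default_factory=list)  # steps taken to address identified bias

# Filling it out for a fictional hiring-screen model.
doc = ModelDocumentation(
    system_name="resume-screener-v2",
    intended_use="Rank applicants for first-round interviews",
    training_data_sources=["Internal hiring records, 2018-2023"],
    known_data_gaps=["Few applicants over age 55 in the training data"],
    bias_evaluations={"selection_rate_ratio_by_gender": 0.91},
    mitigations=["Re-weighted under-represented groups", "Human review of all automated rejections"],
)
print(doc)

Even a lightweight record like this lets a deployer, an auditor, or a regulator answer the basic questions: what data went in, what was measured, and what was done about any gaps.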

Enhancing Consumer Trust:

  • The most significant issue in achieving a return on investment (ROI) for using AI is getting folks to adopt the AI system. Transparency optimizes adoption and greatly assists with the change management necessary for humans to use AI effectively.
  • By making AI operations transparent, consumers can better understand how decisions are made. This understanding fosters trust, essential for the broad adoption and acceptance of AI technologies.

Facilitating Accountability:

  • Transparent procedures ensure that if something goes wrong, it is easier to trace the problem back to its source, thereby holding the right entities accountable. This accountability is crucial for maintaining public confidence in AI technologies.
  • “Every person involved in creating AI at any stage is accountable for considering the system’s impact on the world, as are the companies invested in its development.” Everyday Ethics — IBM Design for AI

Promoting Ethical Standards:

  • Ethics comes from the Greek word “ethos,” which I read as the atmosphere that signals what you value. Setting up the “ethos” for doing things with AI that display human values is desperately necessary. We need AI that models what we value — transparency and attribution are a great start.
  • Mandating transparency instills a culture of ethical thinking and responsibility among AI developers and deployers, ensuring that AI systems align with societal norms and values.

Enabling Informed Consent:

  • Users can make better choices, and if a high-risk AI system makes a decision the human disagrees with, this bill sets a precedent that lets the human appeal that decision.
  • Users can make more informed decisions about their participation when they understand how AI systems function and how their data is used. Making data handling transparent respects user autonomy and privacy.

Stimulating Innovation Responsibly:

  • Transparent practices encourage innovators to devise solutions that push technological boundaries and safeguard public welfare and rights. I want “Tasteful AI.”
  • Transparent practices make engineering go much faster. You can fix issues more easily when you know where they are coming from.

Supporting Business and Regulatory Compliance and Oversight:

  • Having both developers and deployers understand what a system is doing is essential for raising our level of AI literacy. Currently, developers make choices for businesses without understanding the risk they are putting those businesses in.
  • With transparent operations, regulators can more effectively monitor AI systems to ensure they comply with existing laws and regulations, adjusting oversight mechanisms as needed.

Catalyzing Improvement and Public Discourse:

  • Openness and simplification of AI systems are a necessity. We don’t have enough humans who understand that most of these systems are built without oversight or applied best practices.
  • Openness about AI capabilities and performance invites what should be “common sense.” Peer reviews, public scrutiny, and academic research all drive improvements in AI system design and functionality.

Risk Management:

  • The documentation created by the developer and the deployer will reveal the intent with which the system was developed.
  • Transparency allows for a better understanding of the risks associated with AI deployments, ensuring mitigation strategies are in place before potential harm can occur.

Equipping Industry and Governments for Better Governance:

  • With documented insight into how AI systems work, industries and policymakers can make more informed decisions about effectively governing, utilizing, and integrating these technologies into society.

Good evening, esteemed members of the committee,

My name is Beth Rudden, CEO of Bast AI, and I come before you with over twenty years of experience in designing and building analytics, AI, and information systems, including foundational work in establishing the data science profession at IBM as well as building the largest trustworthy AI Center of Excellence in the world. I served as a global technical executive at IBM for seven years before opening my own AI company in 2022. My deep involvement in the field has afforded me a clear understanding of both the potential and the challenges of AI technologies. It is from this perspective that I strongly support SB 24–205, the Consumer Protections for Artificial Intelligence bill.

This bill is a pragmatic and necessary measure to ensure that the development and deployment of high-risk artificial intelligence systems are conducted with the highest level of responsibility and care. The provisions outlined in the bill for both developers and deployers form a framework aimed at preventing algorithmic discrimination — a fundamental issue as AI becomes more embedded in our daily lives.

For developers, the bill mandates essential practices such as:

- Providing clear documentation and disclosures regarding the functionalities and risks of high-risk systems.

- Making available detailed information necessary for conducting thorough impact assessments.

- Ensuring all high-risk AI systems are accompanied by a publicly available statement that details how potential algorithmic discrimination is managed.

For deployers, the bill requires:

- Implementation of a risk management policy and program.

- Completion of impact assessments to understand the broader effects of the AI systems they utilize.

- Notification to consumers when consequential decisions are made by these systems, enhancing transparency and trust.

Moreover, the bill sets forth requirements for the management of synthetic digital content, ensuring that such content is detectable and marked, thus safeguarding against the misuse of AI in generating deceptive information.

The stipulations for both general-purpose and high-risk AI models to disclose in detail their design, training processes, and the data used for training, testing, and validation are exactly the types of oversight needed to build trust in AI technologies. Such measures are not only about compliance but about fostering a culture of accountability and ethical consideration within the AI community.

The enforcement provisions, including the opportunity for developers and deployers to rectify violations before facing legal action, show a balanced approach to regulation — one that allows for correction and improvement rather than mere punishment.

This bill is a set of common-sense measures that provide a necessary foundation for the ethical development and deployment of AI. By supporting this bill, we commit to a path that respects consumer rights, promotes transparency, and fosters trust in the technologies that are shaping our future.

Thank you for your time and consideration of this critical legislation.
