October 18, 2023

Ranking Member Lofgren Opening Statement for Hearing on AI Risk Management

WASHINGTON, D.C. – Today, the House Committee on Science, Space, and Technology's Subcommittees on Investigations & Oversight and Research & Technology are holding a joint hearing titled, "Balancing Knowledge and Governance: Foundations for Effective Risk Management of Artificial Intelligence."

Ranking Member Zoe Lofgren's (D-CA) opening statement as prepared for the record is below.

Thank you, Chairman Obernolte and Ranking Member Foushee of our Investigations and Oversight Subcommittee, and Chairman Collins and Ranking Member Stevens of our Research and Technology Subcommittee, for holding today’s hearing. I would also like to welcome our distinguished panel of witnesses.

This hearing is about what questions we need to answer and what tools we need to build to achieve meaningful governance of AI systems. Our Federal science agencies have taken significant first steps to address this challenge, most notably NIST's work on the AI Risk Management Framework. In the meantime, the technology itself continues to advance rapidly in both capabilities and applications. I believe regulation of AI will be necessary, but I am also keenly aware that we must strike a balance that allows for innovation and ensures the U.S. maintains leadership.

One critical question to answer in the near term is whether we can use a single AI governance framework or whether governance should be broken out by sector or use case. There is currently some debate about how to control the technology itself, including through licensing, but ultimately our goal should be to stop, or at least reduce, any negative outcomes of its use. The NIST Risk Management Framework recognizes that context is important for mitigating the risks of AI deployment, and I suspect regulation should follow that example.

I am also keenly interested in the intersection of AI and intellectual property. To ensure intellectual property protections and promote innovation, we must develop tools, testing, and standards to track the use of copyrighted content by AI systems. That technical foundation will necessarily underpin any policy solution agreed to by both the content creation community and AI platforms. 

Finally, I am curious about what policy levers Congress may have to incentivize good AI governance practices. As my colleague Representative Lieu has often raised, the Federal government can influence the AI ecosystem by requiring AI risk management for systems adopted by Federal agencies and contractors. However, a shift in Federal policy to mandate AI risk management practices will require mature standards and a workforce capable of implementing them.

While the contours of a regulatory framework are still being debated, it is clear we will need to lay the technical groundwork to govern AI. I thank the witnesses for joining us today for this very important discussion.

I yield back.

###