Chairwoman Stevens' Opening Statement for Hearing on Managing the Risks of Artificial Intelligence
(Washington, DC) – Today, the House Committee on Science, Space, and Technology’s Subcommittee on Research and Technology is holding a hearing titled, Trustworthy AI: Managing the Risks of Artificial Intelligence.
The opening statement of Subcommittee on Research and Technology Chairwoman Rep. Haley Stevens (D-MI), as prepared for the record, is below.
Good morning and welcome to today’s Research and Technology hearing to examine the harmful impacts associated with artificial intelligence systems, and the activities that academia, government, and industry are conducting to prevent, mitigate, and manage AI risks. I am thrilled to be joined by our distinguished panel of witnesses. It is great to be with you all in person today, and I look forward to hearing your testimony.
Artificial intelligence has the potential to benefit many aspects of our lives and support our economic and national security. The applications in our everyday lives span from merely convenient, like recommending your next movie, to transformational, like aiding doctors in earlier detection of disease. In my home state of Michigan, advances in AI by automakers are accelerating the development of autonomous vehicles that will lead to reduced traffic and increased road safety. AI systems are also increasingly used to analyze massive amounts of data, propelling research in fields ranging from cosmology, where they enhance our understanding of the universe, to synthetic biology to weather prediction.
But ill-conceived or untested applications of AI have also caused great harm. We have already seen ways AI systems can amplify, perpetuate, or exacerbate inequitable outcomes. Researchers have shown that AI systems making decisions in high-risk situations, such as credit or housing, can be biased against already disadvantaged communities.
This is why we need to encourage people developing or deploying AI systems to be thoughtful about what they are putting out into the world. We must develop the tools, methodologies, and standards to ensure that AI products and services are safe and secure, accurate, free of harmful bias, and otherwise trustworthy.
Since taking over the gavel of the Research and Technology Subcommittee, I have worked with my colleagues on both sides of the aisle to promote trustworthy AI. I was proud to secure trustworthy AI provisions in the CHIPS and Science Act, which the President signed into law last month. My Promoting Digital Privacy Technologies Act, which passed the House and awaits a vote in the Senate, supports privacy-enhanced datasets and tools for training AI systems. Additionally, this Committee led the development of the 2020 National AI Initiative Act to accelerate and coordinate Federal investments in research, standards, and education in trustworthy AI. In that Act, we also directed NIST to develop an AI risk management framework to help organizations understand and mitigate the risks associated with these technologies. I look forward to hearing about the progress of this work and the many other things NIST is doing to promote trustworthy AI in today’s discussion.
Academia and industry are also supporting ethical approaches to AI. Universities across the country are adopting principles for responsible use of AI and incorporating ethics into their computer science curricula. Industry is moving past theoretical principles into practical approaches to mitigating AI risks. But there is still much more to do.
I’m looking forward to hearing more about this work from our witnesses today and to discussing what we here in Congress can do to ensure the United States leads the world in trustworthy artificial intelligence. I’d like to again thank our witnesses for joining us today.