Biden-Harris administration announces initiatives for responsible AI innovation
In response to growing concerns surrounding artificial intelligence (AI) in such forms as chatbots and image creators, the administration of U.S. President Joe Biden and Vice President Kamala Harris has announced new actions to promote “responsible AI innovation and protect people’s rights and safety.”
In a recent press call, a senior official highlighted the administration’s commitment to ensuring that AI improves people’s lives without putting their rights and safety at risk.
Recent headlines on AI cover a broad spectrum of topics, ranging from the mundane to the ominous. The “Godfather of AI” has issued warnings about the potential dangers of AI surpassing human intelligence and even posing a threat to humanity, while AI-powered applications assist users with their diets by visually identifying ingredients in their refrigerators and suggesting recipes. In addition, Apple co-founder Steve Wozniak has expressed concerns about the risks of AI in Tesla vehicles, and the CEO of Google’s DeepMind AI research lab has suggested that human-level AI might be only a few years away.
The administration has taken various steps, including releasing a Blueprint for an AI Bill of Rights last fall to “make clear the values we must advance and the commonsense rights we must protect,” the official said. It also released the AI Risk Management Framework, developed by the National Institute of Standards and Technology (NIST), to provide guidance for companies, policymakers and technologists and to help ensure the protection of individuals’ rights, privacy and safety.
The senior official announced two new initiatives. First, an additional $140 million investment will establish seven new National AI Research Institutes, bringing the total to 25 institutes across the U.S. and $500 million in overall funding to support “responsible innovation that advances the public good.” Second, the Office of Management and Budget (OMB) will issue policy guidance on the federal government’s use of AI in the coming months, looking to “make sure that we’re responsibly leveraging AI to advance agencies’ ability to improve lives and deliver results for the American people.”
Harris and senior administration officials met with the CEOs of four prominent U.S. companies at the forefront of AI: Alphabet, Google’s parent company; Anthropic, a startup backed by Google and founded by former OpenAI employees; Microsoft, which has reportedly invested over $10 billion in OpenAI; and OpenAI itself. The purpose of the meeting was to underscore these companies’ responsibility and the importance of fostering responsible and ethical AI innovation while implementing safeguards to reduce potential harms to society, according to the White House.
“The private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products,” reads part of the statement by the vice president after meeting with CEOs. “And every company must comply with existing laws to protect the American people.”
The administration said the meeting is part of a “broader, ongoing effort to engage with advocates, companies, researchers, civil rights organizations, not-for-profit organizations, communities, international partners, and others on critical AI issues.”
Major AI companies, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI and Stability AI, have committed to participating in an independent, public evaluation of their AI systems at the AI Village at DEF CON 31, one of the world’s largest hacker conventions. The evaluation, conducted by thousands of community partners and AI experts, will assess how these AI systems align with the values outlined in the Blueprint for an AI Bill of Rights and the AI Risk Management Framework, the White House said.
During the press call, Washington Post tech reporter Cat Zakrzewski asked whether the administration trusts AI companies to responsibly check the safety of the released products “given the history that we’ve seen in Silicon Valley with other technologies like social media?”
The senior administration official emphasized the importance of responsible behavior from all parties, adding that “part of what we want to do is make sure we have a conversation about how they’re going to fulfill those pledges.”
The administration has identified several primary categories of AI risk: safety and security, civil rights, privacy, trust in democracy, and jobs and the economy. The official said the risks are diverse and must be carefully managed in order to harness the benefits of AI.
The senior administration official noted that the U.S. is working with the European Union through the U.S.-EU Trade and Technology Council, and said the administration does not see this as a race, but rather as an opportunity to advance AI in responsible, equitable and beneficial ways.
As the federal government begins to issue guidance on AI for its own agencies, the focus will remain on mitigating risks and protecting rights and safety, the White House said. The OMB will release draft policy guidance for public comment this summer, allowing input before it is finalized. The guidance will establish specific policies for federal departments and agencies to follow.
Gerald P. Kierce-Iturrioz, a boricua and son of Guaynabo, is the president of Trustible.AI, an AI governance management platform that integrates with existing AI/ML platforms to help organizations define the AI policies they need, implement and enforce responsible AI practices, and generate evidence to demonstrate compliance with emerging AI regulatory frameworks and prepare for AI audits. Get the specifics: https://bit.ly/TrustibleAILaunchesFromStealth