Google, Amazon, Microsoft, Meta and Other Tech Firms Agree To AI Safeguards Set By White House
Hundreds of scientists and technology leaders are worried about the future of artificial intelligence. Here's why.
FAQ, USA TODAY
WASHINGTON – Amazon, Google, Meta, Microsoft and other leaders in the development of AI technologies have agreed to comply with a set of AI protections established by President Joe Biden's administration.
The White House announced Friday that it had secured voluntary commitments from seven U.S. companies to ensure their AI products are safe before they go to market. Some of the commitments call for third-party oversight of how commercial AI systems operate, though they don't specify who will audit the technology or hold the companies accountable.
Dire warnings: AI poses a risk of extinction, technology leaders warn in an open letter. Here's why the alarms are sounding.
A surge of commercial investment in generative AI tools capable of writing convincingly human-like text and generating new images and other media has brought public fascination as well as concern about their potential to deceive people and spread misinformation, among other risks.
The four tech giants, along with ChatGPT maker OpenAI and the startups Anthropic and Inflection, have committed to security testing "partly conducted by independent experts" to guard against major risks such as those to biosecurity and cybersecurity, the White House said in a statement.
The companies have also promised to use methods to identify vulnerabilities in their systems and to use digital watermarks to distinguish real images from AI-generated deepfakes.
Where it's headed: Fears over AI threats grow as questions mount about whether tools like ChatGPT can be used for malicious purposes.
They have also committed to publicly reporting the flaws and risks of their technology, including effects on fairness and bias, the White House said.
The voluntary commitments are meant to be an immediate way of addressing risks ahead of a longer-term push to get Congress to pass laws regulating the technology.
Some advocates of AI regulation said Biden's move is a start, but more needs to be done to hold the companies and their products accountable.
"History shows that many technology companies are unwilling to act responsibly and comply with strict regulations," said James Steyer, founder and CEO of Common Sense Media.
Senate Majority Leader Chuck Schumer has said he will introduce legislation to regulate AI. He has held a series of briefings with government officials to educate senators on an issue that has attracted bipartisan interest.
Several technology executives have called for regulation, and many traveled to the White House in May to speak with Biden, Vice President Kamala Harris and other officials.
However, some experts and upstart competitors worry that the type of regulation being floated could benefit deep-pocketed first movers led by OpenAI, Google and Microsoft.
The software trade group BSA, which counts Microsoft as a member, said Friday that it welcomed the Biden administration's efforts to set rules for high-risk AI systems.
"Enterprise software companies look forward to working with the government and Congress to develop legislation that addresses the risks associated with artificial intelligence and promotes its benefits," the group said in a statement.
Many countries are looking at ways to regulate AI, including European Union lawmakers, who have been negotiating sweeping AI rules for the 27-nation bloc.
UN Secretary-General António Guterres recently said the United Nations is "best placed" to adopt international standards, and he has appointed an advisory body to report on options for global AI governance later this year.
The White House announced on Friday that it had consulted with several countries on the initiative.