
Big Tech Bows to White House on AI Safety Rules

In a welcome move, eight more major tech companies involved in artificial intelligence (AI) development have joined the White House's AI safety pledge. Adobe, IBM, Palantir, Nvidia, Salesforce, Stability AI, Cohere, and Scale AI have voluntarily committed to following safety, security, and transparency standards in their development and use of AI. This comes after Amazon, Google, Microsoft, and several others signed the pledge back in July.

The rapid advancement of AI has raised concerns in Washington, particularly since the release of OpenAI's ChatGPT chatbot last year. Lawmakers are scrutinizing AI for its potential to threaten jobs, spread disinformation, create deepfakes, and even develop self-awareness. As a result, debate over how to regulate and manage the technology is intensifying.

The participating tech firms have agreed to ensure the safety of their AI products before releasing them to the public, to prioritize security, and to earn the public's trust. In addition to these voluntary commitments, the Biden administration is drafting an executive order with similar goals and encouraging legislative efforts in Congress to regulate AI. White House Chief of Staff Jeff Zients said the administration is committed to harnessing the benefits of AI while managing its risks, and to moving quickly toward both goals.

The tech companies involved in the pledge have also committed to sharing information on potential dangers associated with AI and developing mechanisms to inform consumers when content is generated by AI. These commitments, according to the White House, represent an important bridge to government action and are part of the Biden-Harris Administration’s comprehensive approach to seizing the promise of AI while managing its risks.

In Congress, several bills aimed at regulating AI are pending. Sen. Chuck Schumer (D-N.Y.) is hosting an AI forum that will bring together CEOs from major tech companies, lawmakers, labor officials, and representatives of non-governmental organizations to advance discussions on AI regulation. Proposed legislation includes the Artificial Intelligence and Biosecurity Risk Assessment Act, the No Robot Bosses Act, and other bills addressing AI regulation and accountability.

The voluntary commitments made by these tech companies signal growing momentum toward self-regulation in the industry. While the pledges are not legally binding, the companies have agreed to conduct internal and external testing before releasing AI products, to label AI-generated content, and to share information about risks and vulnerabilities. Adobe, which has been at the forefront of AI responsibility efforts for the past four years, has encouraged other companies to support the FAIR Act, which protects individuals' rights to their digital likenesses.

While the tech companies' participation in shaping AI regulation is commendable, consumer advocacy groups and others worry about the influence these companies wield in those discussions. It is important that the voices of civil society not be overshadowed by the resources and reach of the tech giants.

Written by Staff Reports

