OpenAI and Anthropic are taking proactive steps on the safety of their latest AI models by granting the US government early access to the technology ahead of its public debut, according to The Verge. The goal is to scrutinize and bolster the security of advanced AI before it becomes widely available.
![](https://techgyve.com/wp-content/uploads/2024/08/3.jpg)
The two AI giants have formalized their commitment through a memorandum of understanding with the US AI Safety Institute. Under the agreement, both companies will provide their models for evaluation before and after public release. The collaboration will include efforts to assess and mitigate security risks, with additional involvement from the UK AI Safety Institute.
This development comes in the wake of California lawmakers passing SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The legislation, which awaits the signature of Governor Gavin Newsom, would mandate stringent precautions for AI companies, including the ability to swiftly deactivate AI models, safeguards against unauthorized modifications, and assessments of the risks their deployment could pose.
Despite the law’s intention to enhance AI safety, it has faced criticism from within the tech industry. Both OpenAI and Anthropic have voiced concerns that such regulations could hinder smaller firms, particularly those developing open-source AI technologies.
In parallel, the White House is encouraging AI companies to independently adopt robust security measures. Several major firms have pledged to invest in cybersecurity research, anti-discrimination efforts, and the development of watermarking systems for AI-generated content, highlighting a broader commitment to responsible AI innovation.
Source: The Verge