🔥 AI Companies Voluntarily Submit to AI Guidelines in the US

In the rapidly evolving AI sector, prominent players like OpenAI, Anthropic, Google, Inflection, Microsoft, Meta, and Amazon have voluntarily committed themselves to shared AI safety and transparency objectives ahead of a planned Executive Order from the Biden administration.
As part of their commitment, the seven tech companies and others within the industry have agreed to the following:
- Subject AI systems to security assessments before launch, from both internal and external sources, including adversarial "red teaming" by outside experts.
- Share information on AI risks and mitigation techniques, such as preventing "jailbreaking," with government, academia, and civil society.
- Prioritize investments in cybersecurity and "insider threat safeguards" to secure private model data, such as weights, and protect intellectual property; a premature wide release could create opportunities for malicious actors.
- Enable external parties to identify and report vulnerabilities, for example through a bug bounty program or domain-expert analysis.
- Develop strong watermarking or alternative methods for marking AI-generated content.
- Share details about AI systems' capabilities, limitations, and appropriate and inappropriate use cases.
- Prioritize research into societal risks, such as systematic bias and privacy concerns.
- Develop and deploy AI to help solve society's most pressing problems, such as cancer prevention and climate change. (During a press call, it was acknowledged that the carbon footprint of AI models is not being monitored.)
While these guidelines are voluntary, the prospect of the upcoming executive order, currently under development, may serve as a driver for compliance.