The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are “secure by design,” Reuters reported.
In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.
According to Reuters, the agreement is non-binding and carries mostly general recommendations such as monitoring AI systems for abuse, protecting data from tampering and vetting software suppliers.
Still, the director of the U.S. Cybersecurity and Infrastructure Security Agency, Jen Easterly, said it was important that so many countries put their names to the idea that AI systems needed to put safety first.
The Hill reported that the United States, along with 17 other countries, unveiled an international agreement that aims to keep artificial intelligence (AI) systems safe from rogue actors and urges providers to follow “secure by design principles.”
According to The Hill, the 20-page document, jointly published Sunday by the U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency and the United Kingdom’s National Cyber Security Centre, provides a set of guidelines to ensure AI systems are built to “function as intended” without leaking sensitive data to unauthorized users.
Other countries featured in the agreement include Australia, Canada, Chile, the Czech Republic, Estonia, Germany, Israel, Italy, Japan, Nigeria, Poland, and Singapore.
Last month, the Biden administration issued a sweeping executive order focused on managing the risks of AI. The order sets new safety standards and worker-protection principles, and directs federal agencies to accelerate the development of techniques for training AI systems while preserving the privacy of the training data.
iPhoneInCanada reported on the guidelines themselves, which are divided into four key areas reflecting the stages of the AI system development life cycle. The guidance is fairly broad, without anything too specific:
Secure Design: This section focuses on the design stage, covering risk understanding, threat modeling and considerations for system and model design.
Secure Development: Guidelines for the development stage include supply chain security, documentation, and management of assets and technical debt.
Secure Deployment: This stage involves protecting infrastructure and models, developing incident management processes, and ensuring responsible release.
Secure Operation and Maintenance: Post-deployment, this section provides guidance on logging and monitoring, update management, and information sharing; a rough sketch of what the logging and monitoring piece might look like in practice follows this list.
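The guidelines stop short of code-level prescriptions, but to make the “logging and monitoring” item concrete, here is a minimal, hypothetical Python sketch of operation-stage monitoring around a model endpoint. The model call, user identifiers, and rate threshold are all assumptions for illustration, not anything specified in the document:

```python
import logging
import time
from collections import deque

# Hypothetical sketch (not from the guidelines themselves): a thin
# wrapper that logs every request to an AI model and flags unusually
# frequent callers, in the spirit of the operation-stage guidance.

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("ai-monitor")

RATE_WINDOW_SECONDS = 60  # sliding window for per-user request counts
RATE_LIMIT = 30           # assumed abuse threshold; tune per deployment

_recent_calls: dict[str, deque] = {}


def fake_model(prompt: str) -> str:
    """Stand-in for a real model call; assumed for this sketch."""
    return prompt[::-1]


def monitored_inference(user_id: str, prompt: str) -> str:
    now = time.monotonic()
    calls = _recent_calls.setdefault(user_id, deque())

    # Drop timestamps that have aged out of the sliding window.
    while calls and now - calls[0] > RATE_WINDOW_SECONDS:
        calls.popleft()
    calls.append(now)

    # Log request metadata only, not the raw prompt, so the logs
    # themselves don't become a channel for leaking sensitive data.
    log.info("inference request user=%s prompt_chars=%d",
             user_id, len(prompt))

    # Flag possible abuse for human review rather than silently dropping it.
    if len(calls) > RATE_LIMIT:
        log.warning("possible abuse: user=%s exceeded %d calls in %ds",
                    user_id, RATE_LIMIT, RATE_WINDOW_SECONDS)

    return fake_model(prompt)


if __name__ == "__main__":
    print(monitored_inference("demo-user", "hello"))
```

The point of the sketch is simply that requests get logged without recording raw prompts, and unusual usage gets flagged for review, which is roughly what the guidelines mean by monitoring AI systems for abuse.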
In my opinion, it makes sense to have specific guidelines on how AI is built and used. Guidelines like these can be adopted by many countries, and they should include protections for users, above all ensuring that AI systems don't leak sensitive data to other users.