US, Britain, other countries ink agreement to make AI ‘secure by design’
The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are “secure by design.”
The agreement is non-binding and carries mostly general recommendations such as monitoring AI systems for abuse, protecting data from tampering and vetting software suppliers.
In the 20-page document, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.
The agreement is the latest in a series of initiatives – few of which carry teeth – by governments around the world to shape the development of AI, whose influence is increasingly felt in industry and society at large.
In addition to the United States and Britain, the 18 countries that signed on to the new guidelines include Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria and Singapore.