U.S., UK Release Guidelines to Ensure Security Is Baked into Artificial Intelligence
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the UK National Cyber Security Centre (NCSC) jointly released “Guidelines for Secure AI System Development,” a set of broad guidelines designed to make security an essential, primary consideration when developing any artificial intelligence (AI) framework or application.
Agencies from the following countries endorsed the guidelines: Australia, Canada, Chile, Czech Republic, Estonia, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, Poland, Singapore, and South Korea. In addition, several organizations, including Amazon, Google, IBM, Microsoft, OpenAI, and Palantir, contributed to the development of the guidelines.
The report noted that companies are developing AI tools rapidly and that in such environments security is often a tangential consideration at best. Given the unique vulnerabilities that AI presents, ranging from cyberattacks to privacy and social engineering risks, the report said, “Security must be a core requirement, not just in the development phase, but throughout the life cycle of the system.”
While they are written to address AI developers specifically, the guidelines are just as important for others to know and understand, including those who deploy and use AI technology. The report specifically called out “risk owners” as an important audience that needs to understand and look for these security principles in any AI deployments in their organizations.
The guidelines are consistent with a previous U.S. administration’s executive order on safe, secure, and trustworthy AI and with the voluntary AI safety commitments made by several companies in the sector.
The following are the guidelines with short, highlighted descriptions. (Reminder: the guidelines were written with AI developers as the primary audience. Some material has been edited slightly for length and clarity.)
1. Secure Design
Raise staff awareness of threats and risks
You provide users with guidance on the unique security risks facing AI systems (for example, as part of standard InfoSec training) and train developers in secure coding techniques and secure and responsible AI practices.
Model the threats to your system
As part of your risk management process, you apply a holistic process to assess the threats to your system, which includes understanding the potential impacts to the system, users, organizations, and wider society if an AI component is compromised or behaves unexpectedly.
Design your system for security as well as functionality and performance
You consider your threat model and associated security mitigations alongside functionality, user experience, deployment environment, performance, assurance, oversight, ethical and legal requirements, and other considerations.
Consider security benefits and trade-offs when selecting your AI model
Your decisions are informed by your threat model and are regularly reassessed as AI security research advances and understanding of the threat evolves.
2. Secure Development
Secure your supply chain
You assess and monitor the security of your AI supply chains across a system’s life cycle, and require suppliers to adhere to the same standards your own organization applies to other software.
Identify, track, and protect your assets
You know where your assets reside and have assessed and accepted any associated risks. You have processes and tools to track, authenticate, version control, and secure your assets, and can restore to a known good state in the event of compromise.
Document your data, models, and prompts
Your documentation includes security-relevant information such as the sources of training data (including fine-tuning data and human or other operational feedback), intended scope and limitations, guardrails, cryptographic hashes or signatures, retention time, suggested review frequency, and potential failure modes.
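As a minimal illustration of the hashing piece of this guidance, the Python sketch below records a model artifact’s SHA-256 digest and training-data provenance in a metadata file. The function names, fields, and review cadence are illustrative assumptions, not prescribed by the guidelines.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_model_card(model_path: Path, data_sources: list[str]) -> None:
    """Record security-relevant metadata alongside the model artifact."""
    card = {
        "artifact": model_path.name,
        "sha256": sha256_of(model_path),        # integrity check for downstream consumers
        "training_data_sources": data_sources,  # provenance of training data
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "suggested_review": "quarterly",        # illustrative review cadence
    }
    model_path.with_suffix(".card.json").write_text(json.dumps(card, indent=2))
```

A consumer can later recompute the digest and compare it against the recorded value to detect tampering or accidental substitution of the artifact.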
Manage your technical debt
As with any software system, you identify, track, and manage your ‘technical debt’ throughout an AI system’s life cycle. (Technical debt arises when engineering decisions that fall short of best practice are made to achieve short-term results, at the expense of longer-term benefits.) This should be managed from the earliest stages of development.
3. Secure Deployment
Secure your infrastructure
You apply appropriate access controls to your application programming interfaces (APIs), models, data, and to their training and processing pipelines, in research and development as well as deployment.
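As a rough sketch of this principle, the Python example below gates an inference endpoint behind an API key. The `run_model` stub and the `MODEL_API_KEY` environment variable are hypothetical; real deployments would typically sit behind a full authentication and authorization layer.

```python
import hmac
import os

# Hypothetical: the expected key is provisioned via the environment, never hard-coded.
EXPECTED_API_KEY = os.environ.get("MODEL_API_KEY", "")

def run_model(prompt: str) -> str:
    # Stand-in for the real inference call.
    return f"model output for: {prompt}"

def is_authorized(presented_key: str) -> bool:
    # Constant-time comparison avoids leaking key material via timing differences.
    return bool(EXPECTED_API_KEY) and hmac.compare_digest(presented_key, EXPECTED_API_KEY)

def handle_inference_request(api_key: str, prompt: str) -> str:
    if not is_authorized(api_key):
        raise PermissionError("request rejected: invalid or missing API key")
    return run_model(prompt)
```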
Protect your model continuously
Your approach to confidentiality risk mitigation will depend considerably on the use case and the threat model. Some applications, for example those involving very sensitive data, may require theoretical guarantees that can be difficult or expensive to apply. If appropriate, privacy-enhancing technologies (such as differential privacy or homomorphic encryption) can be used to explore or assure levels of risk associated with consumers, users, and attackers having access to models and outputs.
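By way of example, the Laplace mechanism is a standard way to achieve differential privacy for simple aggregate queries. The sketch below (a counting query with an illustrative `epsilon` parameter) is a minimal instance of the technique, not guidance from the report.

```python
import numpy as np

def dp_count(values: list[bool], epsilon: float, rng=None) -> float:
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    A counting query has L1 sensitivity 1 (adding or removing one record changes
    the count by at most 1), so noise drawn from Laplace(0, 1/epsilon) suffices.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(values)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise
```

Smaller `epsilon` values give stronger privacy at the cost of noisier answers; choosing that trade-off is exactly the kind of use-case-dependent decision the guideline describes.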
Develop incident management procedures
Responders have been trained to assess and address AI-related incidents. You provide high-quality audit logs and other security features or information to customers and users at no extra charge, to enable their incident response processes.
Release AI responsibly
You release models, applications, or systems only after subjecting them to appropriate and effective security evaluation, such as benchmarking and red teaming (as well as other tests that are beyond the scope of these guidelines, such as safety or fairness), and you are clear to your users about known limitations or potential failure modes.
Make it easy for users to do the right things
You state clearly to users which aspects of security they are responsible for, and are transparent about where (and how) their data might be used, accessed, or stored (for example, if it is used for model retraining, or reviewed by employees or partners).
4. Secure Operation and Maintenance
Monitor your system’s behavior
You can account for and identify potential intrusions and compromises, as well as natural data drift.
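One common way to operationalize drift monitoring, offered here as an illustrative sketch rather than anything prescribed by the guidelines, is a two-sample statistical test comparing recent inputs against a validated reference window:

```python
from scipy.stats import ks_2samp

def drifted(reference: list[float], recent: list[float], alpha: float = 0.01) -> bool:
    """Flag distribution drift between a reference window and recent inputs.

    Uses a two-sample Kolmogorov-Smirnov test; a small p-value suggests the
    recent feature distribution differs from what the model was validated on.
    """
    result = ks_2samp(reference, recent)
    return result.pvalue < alpha
```

A drift alert does not distinguish an attack from natural change in the input population; it is a trigger for investigation, which is why the guideline pairs drift with intrusion and compromise detection.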
Monitor your system’s inputs
In line with privacy and data protection requirements, you monitor and log inputs to your system (such as inference requests, queries, or prompts) to enable compliance obligations, audit, investigation, and remediation in the case of compromise or misuse.
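A minimal sketch of what such structured input logging could look like in Python follows. The pseudonymization and retention choices shown are illustrative assumptions; actual handling must follow your privacy and data protection obligations.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("inference_audit")

def log_inference(user_id: str, prompt: str) -> None:
    """Append a structured audit record for each inference request.

    The user identifier is pseudonymized with a one-way hash so records can
    support investigation without storing raw identities in the log stream.
    """
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": hashlib.sha256(user_id.encode()).hexdigest()[:16],
        "prompt_len": len(prompt),
        "prompt": prompt,  # retention of raw prompts depends on your data protection policy
    }
    audit_log.info(json.dumps(record))
```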
Follow a secure-by-design approach to updates
You include automated updates by default in every product and use secure, modular update procedures to distribute them. Your update processes (including testing and evaluation regimes) reflect the fact that changes to data, models, or prompts can lead to changes in system behavior (for example, you treat major updates like new versions).
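The guidelines do not prescribe a mechanism, but one common building block for secure update distribution is verifying the publisher’s signature before installing anything. A minimal sketch using Ed25519 via the `cryptography` package (the function names and the raw-key distribution model are assumptions):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_update(payload: bytes, signature: bytes, publisher_key: bytes) -> bool:
    """Return True only if the update payload was signed by the publisher's key."""
    public_key = Ed25519PublicKey.from_public_bytes(publisher_key)
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

def apply_update(payload: bytes, signature: bytes, publisher_key: bytes) -> None:
    if not verify_update(payload, signature, publisher_key):
        raise RuntimeError("update rejected: signature verification failed")
    # ...install the verified payload, then record the new model version...
```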
Collect and share lessons learned
You maintain open lines of communication for feedback regarding system security, both internally and externally to your organization, including authorizing security researchers to probe and report vulnerabilities. When needed, you escalate issues to the wider community, for example by publishing bulletins responding to vulnerability disclosures, including detailed and complete common vulnerability enumeration.