Artificial intelligence: standards and regulations
Artificial intelligence plays a growing role in many aspects of our lives. Learn how governments are regulating AI and developing frameworks to encourage responsible AI development.
When Alan Turing researched artificial intelligence (AI) in post-war England, could he have imagined the scale and complexity of AI today? Would he have imagined the AI assistants we carry in our pockets, entire songs and images created by a machine, or the enormous datasets AI can parse and process in mere moments, far faster than any human could? The potential is endless.
Organizations like Apple are researching AI and machine learning (ML) with privacy and transparency in mind. And companies with a vested interest in AI created the AI Shared Responsibility Model to guide responsible research, development, creation, adoption and application of AI. This model helps organizations determine which parts of AI they are responsible for. For instance, if they use AI developed by a Software as a Service (SaaS) vendor, the vendor is likely responsible for the model’s design and data governance, while the customer is responsible for user training and admin controls.
Beyond corporate initiatives, governments are regulating AI and ML to protect user and data privacy. In other words, a lot of entities are looking at AI. After all, AI’s complexity, power and future impact are only growing, and making sure that growth happens responsibly is a team effort.
In this blog, we’ll talk about some ways governments regulate and provide guidance around AI.
Standards and regulations
Guidelines for secure AI system development
The UK National Cyber Security Centre (NCSC) and the US Cybersecurity and Infrastructure Security Agency (CISA) provide guidance for organizations that create or build upon software that uses AI. As listed on the NCSC website, the aim is to help them build AI systems that:
- Function as intended
- Are available as needed
- Work without revealing sensitive data to unauthorized parties
These guidelines recommend a “secure by default” approach across every stage of the AI development lifecycle: design, development, deployment, and operation and maintenance. In particular, they suggest:
- Secure coding practices and data handling
- Regular security testing and risk assessments
- Transparency and accountability through clear documentation of functionality, limitations and risks (sketched below)
To support this lifecycle, organizations should prioritize data privacy and ensure compliance with regulations like the General Data Protection Regulation (GDPR) in the European Union (EU) and the Health Insurance Portability and Accountability Act (HIPAA) in the United States.
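To make the documentation point concrete, here is a minimal sketch of what a transparency record for an AI system might look like. The guidelines describe what to capture, not a specific schema, so the class name, fields and example values below are assumptions, not an official format.

```python
from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    """Hypothetical transparency record for an AI system.

    The NCSC/CISA guidelines describe what to document (functionality,
    limitations, risks); the field names here are illustrative only.
    """
    name: str
    intended_use: str                   # what the system is designed to do
    limitations: list = field(default_factory=list)     # known failure modes, out-of-scope uses
    known_risks: list = field(default_factory=list)     # e.g. prompt injection, data leakage
    data_sources: list = field(default_factory=list)    # provenance of training data
    applicable_regulations: list = field(default_factory=list)  # e.g. GDPR, HIPAA

doc = ModelDocumentation(
    name="support-ticket-classifier",
    intended_use="Route inbound IT tickets to the correct support queue",
    limitations=["English-language tickets only", "Not suitable for security triage"],
    known_risks=["Misrouting of urgent tickets", "Possible exposure of PII in logs"],
    data_sources=["Historical ticket archive (anonymized)"],
    applicable_regulations=["GDPR"],
)
print(doc.name, "-", len(doc.known_risks), "documented risks")
```

Keeping a record like this alongside the model makes it easier to review functionality, limitations and risks at each stage of the lifecycle.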
AI Risk Management Framework
The US National Institute of Standards and Technology (NIST) created the AI Risk Management Framework (AI RMF) for organizations designing, developing, deploying or using AI systems to manage risks and promote trustworthiness. The AI RMF is designed for organizations of all types, sizes and industries.
NIST acknowledges that if uncontrolled, AI can “amplify, perpetuate, or exacerbate inequitable or undesirable outcomes,” and that responsible development emphasizes “human centricity, social responsibility, and sustainability.”
To encourage responsible AI, the AI RMF:
- Defines AI risks and trustworthiness by exploring whether an AI system is:
- Valid and reliable
- Safe
- Secure and resilient
- Accountable and transparent
- Explainable and interpretable
- Privacy-enhanced
- Fair, with harmful bias managed
- Establishes the AI RMF Core, which explains how organizations should manage AI risk. The Core’s high-level functions, illustrated in the sketch after this list, are:
- Govern: Cultivating a culture of risk management
- Map: Understanding your context and its risks
- Measure: Assessing, analyzing or tracking risks
- Manage: Prioritizing and acting upon risks
- Provides supplemental material about how AI risks differ from other software risks, human-AI interaction, and more
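The AI RMF does not prescribe tooling, but one lightweight way to put the four Core functions to work is a simple risk register that tags each activity with the function it supports. The sketch below is purely illustrative: the function names come from the RMF, while the fields, owners and entries are assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class CoreFunction(Enum):
    # The four AI RMF Core functions; everything else in this sketch is illustrative.
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"

@dataclass
class RiskEntry:
    description: str
    function: CoreFunction  # which Core function the activity supports
    owner: str              # accountable team (assumed field)
    status: str             # e.g. "open", "mitigated" (assumed field)

register = [
    RiskEntry("No documented policy for third-party model use", CoreFunction.GOVERN, "Security", "open"),
    RiskEntry("Training data may contain customer PII", CoreFunction.MAP, "Data engineering", "open"),
    RiskEntry("Track false-positive rate drift month over month", CoreFunction.MEASURE, "ML ops", "open"),
    RiskEntry("Review and act on top-scored risks each quarter", CoreFunction.MANAGE, "Risk office", "open"),
]

for entry in register:
    print(f"[{entry.function.value:>7}] {entry.description} ({entry.owner}, {entry.status})")
```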
EU AI Act
The EU AI Act is the first-ever comprehensive legal framework on AI, aiming to “provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI” while reducing the burden on businesses. The EU’s high-level summary notes that:
- The AI Act classifies AI according to its risk (see the sketch at the end of this section).
- The majority of obligations fall on providers (developers) of high-risk AI systems.
- Users are natural or legal persons that deploy an AI system in a professional capacity, not affected end users.
The Act also places rules on general-purpose AI (GPAI), including:
- What support documentation GPAI providers must supply
- Risk and incident management requirements for GPAI providers
The EU provides a compliance checker for organizations to determine their AI obligations.
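For a rough sense of how the Act’s risk-based classification works in practice, the sketch below maps a few example use cases onto the Act’s four broad tiers (unacceptable, high, limited and minimal risk). The use-case tags and the lookup logic are illustrative assumptions only; real classification depends on the Act’s annexes, so treat the EU’s compliance checker as the authority.

```python
from enum import Enum

class RiskTier(Enum):
    # The AI Act's four broad risk tiers.
    UNACCEPTABLE = "Unacceptable risk: prohibited"
    HIGH = "High risk: strict obligations"
    LIMITED = "Limited risk: transparency obligations"
    MINIMAL = "Minimal risk: no new obligations"

# Illustrative tags only; real classification depends on the Act's annexes
# and should be confirmed with the EU's compliance checker or legal counsel.
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a hypothetical use-case tag; treat unknown cases as high risk until reviewed."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.HIGH)

print(classify("customer_chatbot").value)  # Limited risk: transparency obligations
```

Defaulting unknown use cases to the high-risk tier is a conservative assumption for illustration, not a requirement of the Act.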
AI at Jamf
Jamf uses machine learning to identify known and unknown cyber threats. Our threat intelligence engine, MI:RIAM:
- Discovers zero-day attacks
- Identifies numerous attack types
- Automatically blocks sophisticated attacks
- Works alone or alongside other tools to enforce security policies
- Automatically remediates endpoint threats
When developing Jamf AI technology, we hold ourselves to the highest standards for data privacy and data residency. Doing so ensures that personal data is protected and that product development follows a framework that respects all relevant AI regulations. This extends to collaborations with our strategic partners, who are leading innovators in the AI space.
Learn more about our latest contribution in the AI space.
Read our blog on Jamf’s plugin for Microsoft’s leading AI security solution, Copilot for Security.