Regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI). It is part of the broader regulation of algorithms.[1][2] The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including among international organizations without direct enforcement power, such as the IEEE and the OECD.[3]
Since 2016, numerous AI ethics guidelines have been published to maintain social control over the technology.[4] Regulation is considered necessary both to foster AI innovation and to manage the associated risks.
Furthermore, organizations that deploy AI play a central role in creating and implementing trustworthy AI, adhering to established principles, and taking accountability for mitigating risks.[5]
Regulating AI through mechanisms such as review boards can also be seen as a social means of approaching the AI control problem.[6][7]