The UK Government has launched an AI assurance platform designed to help businesses identify and mitigate risks as they develop and deploy AI systems safely.
The new platform will give organisations a one-stop shop for information on the steps they can take to identify and mitigate the potential risks and harms posed by AI.
Assurance technologies are, in essence, tools that help businesses verify, scrutinise and trust the machine learning products they use.
The platform will provide new practical resources setting out clear steps businesses can take, such as carrying out impact assessments and evaluations, and reviewing the data used in their AI systems to check for bias.
"The government is giving businesses the support and clarity they need to use AI safely and responsibly while also making the UK a true hub of AI assurance expertise," said Peter Kyle, Secretary of State for Science, Innovation and Technology.
The UK's market for ensuring the trustworthiness of AI systems is forecast to grow six-fold over the next 10 years, unlocking more than £6.5bn.
Meanwhile, the UK's AI Safety Institute has signed a memorandum of cooperation (MoC) with Singapore to deepen AI safety collaboration.
The AI Safety Institute has also launched the Systemic AI Safety Grants programme, with up to £200,000 of funding available to academics, civil society organisations, and industry experts.
The US launched its own AI safety institute, while the EU has enacted an AI Act, which is considered one of the toughest regulatory regimes for the new technology.
AI assurance systems must be backed by robust internal governance, according to Rowena Rix, head of innovation and AI at international law firm Dentons.