It's finally here: the first version of Australia's Voluntary AI Safety Standard! https://lnkd.in/gU694ScG led by the National AI Centre
Collaborating with the amazing teams at Gradient Institute and Human Technology Institute, I provided overall technical oversight of the standard, drawing on deep expertise from Data61's responsible AI science team.
The journey has just begun. As many of you know, there are various types of AI safety standards, from organisational and process-management standards focused on governance, to technical standards covering practices, measurements, and metrics, to standards that set quality thresholds. Some are also tailored to different roles across the AI supply chain, from AI model developers to AI system developers and AI deployers.
This version focuses on AI deployers and primarily addresses process and governance, with some exploration of technical practices. Future versions will dive deeper into the technical aspects and other roles.
What sets this standard apart is its close alignment with existing international standards and regulatory frameworks, adding critical "how-to" guidance that's often missing from high-level international work. It also places special emphasis on key aspects for Australia, including being SME-friendly and highlighting diversity and First Nations perspectives.
We'd love to hear your feedback! Join us for upcoming webinars to discuss the standard with industry and the community.
For more on Data61's responsible AI science, see: https://lnkd.in/gPhid9tX