🎉 It’s finally here – the first version of Australia’s Voluntary AI Safety Standard, led by the National AI Centre! 🚀 https://lnkd.in/gU694ScG
Collaborating with the amazing teams at Gradient Institute and the Human Technology Institute, I provided overall technical oversight of the standard, drawing on deep expertise from Data61’s responsible AI science team.
The journey has just begun. 🌱 As many of you know, there are various types of AI safety standards – from organisational and process-management standards focused on governance, to technical standards covering practices, measurements, metrics, and quality thresholds. Some are also tailored to different roles across the AI supply chain, from AI model developers to AI system developers and AI deployers.
This version focuses on AI deployers and primarily addresses process and governance, with some exploration of technical practices. Future versions will dive deeper into the technical aspects and cover other roles. 📊🔍
What sets this standard apart is its close alignment with existing international standards and regulatory frameworks, while adding the critical “how-to” guidance that’s often missing from high-level international work. It also places special emphasis on aspects that matter for Australia, including being SME-friendly and highlighting diversity and First Nations perspectives. 🇦🇺
We’d love to hear your feedback! 💬 Join us for upcoming webinars to discuss the standard with industry and the community.
For more on Data61’s responsible AI science, see https://lnkd.in/gPhid9tX