AJCAI panel: Making AI Safety Real

🔥 Excited to have joined the AJCAI panel on responsible and safe AI, focusing on operationalising Australia’s AI Safety Standard! As a key author of the standard, I shared some insights about the motivations behind it and how it fits with other international standards. Here are a few key points:

🛡️ AI Safety Standards:

  • Many high-level standards lack practical guides 🛠️; we need more operational approaches.
  • Often designed with big organisations in mind 🧰—lots of committees, heavy documentation.
  • Focus has been on organisational risks rather than risks to people 🤔; we need to value diverse perspectives.
  • Internationally interoperable 🌍: the aim is not to create divergence, but to provide practical guidance under each requirement, mapped to reputable international standards.
  • Supporting mandatory guardrails, with a special emphasis on improving practical guidance in version 2 🏈.

🤖 AI Safety Institute (AISI) Meeting Highlights:

  • Synthetic content: detection-based vs. provenance-based approaches to safety, and how to balance their trade-offs and security 🛠️🔍. Australia 🇦🇺 is leading in this space.
  • Evaluation: Empirical evaluation science needs realistic settings and human-performance benchmarking 📊. CSIRO is pushing the capabilities here!
  • Risk Assessment: How do we interpret risks? Materialised vs. hypothetical risks—nuances that matter ⚖️.

🇪🇺 The EU AI Office joined the conversation on the AI Code of Conduct:

  • Debate on which risks to prioritise: All risk types vs. focusing on a few sub-types 🤔.
  • Can we do something meaningful about general-purpose AI (GPAI) model/system risks, rather than only looking at use-case-based evaluations? 🧹
  • Pushing for responsible scaling policies to become mandatory 🚀.

