AJCAI panel: Making AI Safety Real

πŸ”₯ Excited to have joined the AJCAI panel on responsible and safe AI, focusing on operationalising Australia’s AI Safety Standard! As a key author of the standard, I shared some insights about the motivations behind it and how it fits with other international standards. Here are a few key points:

πŸ›‘οΈ AI Safety Standards:

  • Many high-level standards lack practical guides πŸ› οΈ; we need more operational approaches.
  • Often designed with big organisations in mind πŸ§°β€”lots of committees, heavy documentation.
  • Focus has been on organisational risks rather than human risks πŸ€”; let’s value diverse perspectives.
  • Internationally interoperable 🌍: the aim is not to create divergence but to provide practical guidance under each requirement, mapped to reputable international standards.
  • Supporting mandatory guardrails, with a special emphasis on improving practical guidance in version 2 🏈.

πŸ€– AISI Meeting Highlights:

  • Synthetic content: detection vs. provenance-based approaches for safety, balancing their trade-offs and security πŸ› οΈπŸ”. Australia πŸ‡¦πŸ‡Ί is leading in this space.
  • Evaluation: empirical science needs realistic settings and human performance benchmarking πŸ“Š. CSIRO is pushing the capabilities here!
  • Risk Assessment: How do we interpret risks? Materialised vs. hypothetical risksβ€”nuances that matter βš–οΈ.

πŸ‡ͺπŸ‡Ί EU AI Office joined the conversation on the AI Code of Conduct:

  • Debate on which risks to prioritise: all risk types vs. focusing on a few sub-types πŸ€”.
  • Can we do something meaningful about GPAI model/system risks, rather than only looking at use-case-based evaluations? 🧹
  • Pushing for responsible scaling policies to become mandatory πŸš€.


