🔥 Excited to have joined the AJCAI panel on responsible and safe AI, focusing on operationalising Australia’s AI Safety Standard! As a key author of the standard, I shared some insights into the motivations behind it and how it fits with other international standards. Here are a few key points:
🛡️ AI Safety Standards:
- Many high-level standards lack practical guides 🛠️; we need more operational approaches.
- Often designed with big organisations in mind 🧰: lots of committees, heavy documentation.
- Focus has been on organisational risks rather than human risks 🤔; let’s value diverse perspectives.
- Internationally interoperable 🌐: the aim is not to create divergence but to provide practical guidance under each requirement, mapped to reputable international standards.
- Supporting mandatory guardrails, with a special emphasis on improving practical guidance in version 2 🚀.
🤝 AISI Meeting Highlights:
- Synthetic content: detection vs. provenance-based approaches to safety. Balancing trade-offs and security 🛠️🔒. Australia 🇦🇺 is leading in this space (see the sketch after this list).
- Evaluation: empirical science needs realistic settings and human-performance benchmarking 📊. CSIRO is pushing the capabilities here!
- Risk assessment: how do we interpret risks? Materialised vs. hypothetical risks: nuances that matter ⚖️.
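
Since the detection-vs-provenance contrast came up a few times, here is a minimal Python sketch of the distinction. Nothing here is from any standard's reference implementation: the detector score, the signer key, and the HMAC-signed manifest are hypothetical stand-ins for a trained classifier and a C2PA-style certificate chain.

```python
# Minimal sketch contrasting two approaches to synthetic-content safety.
# Assumptions: detector_score comes from some trained classifier (not shown),
# and the "manifest" is an HMAC over the content, standing in for a real
# signed provenance record.
import hmac
import hashlib


def detection_approach(detector_score: float, threshold: float = 0.5) -> bool:
    """Detection: infer 'synthetic' from the content itself.

    Probabilistic by nature; accuracy degrades as generators improve.
    """
    return detector_score >= threshold


def provenance_approach(content: bytes, manifest_sig: bytes, signer_key: bytes) -> bool:
    """Provenance: trust an attached, signed record of the content's origin.

    Deterministic while the signature chain is intact, but fails when
    metadata is stripped; hence the trade-offs mentioned above.
    """
    expected = hmac.new(signer_key, content, hashlib.sha256).digest()
    return hmac.compare_digest(expected, manifest_sig)


if __name__ == "__main__":
    key = b"camera-vendor-signing-key"  # hypothetical trusted signer
    photo = b"raw image bytes..."
    manifest = hmac.new(key, photo, hashlib.sha256).digest()

    print(detection_approach(0.87))                       # True: flagged as likely synthetic
    print(provenance_approach(photo, manifest, key))      # True: origin verified
    print(provenance_approach(photo + b"!", manifest, key))  # False: content tampered
```

The trade-off in one line: detection works on unlabelled content but is never certain; provenance is certain but only for content that kept its manifest.
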
🇪🇺 EU AI Office joined the conversation on the AI Code of Conduct:
- Debate on which risks to prioritise: all risk types vs. focusing on a few sub-types 🤔.
- Can we do something meaningful about GPAI model/system risks, rather than only looking at use-case-based evaluations? 🧹
- Pushing for responsible scaling policies to become mandatory 📈.