I never imagined I would one day share a speaking session, and a speaker room chat, with the legendary Australian cricket captain Steve Waugh. Thanks to the Zoho event in Sydney, that happened. Speaking right after Hugh Watson, Australia's Ambassador for Cyber Affairs and Critical Technology, also added a valuable global perspective to the discussion.
In my fireside chat, I reflected on our journey helping develop Australia's AI safety standard: practical guidance for SMEs and AI deployers that not only fits the Australian context but also aligns closely with international frameworks.
On adoption barriers, we discussed an inconvenient truth: AI adoption is everywhere at the individual level, from personal life to employees secretly experimenting or building tools for colleagues in unsanctioned ways. Yet it struggles to bubble up to the business level.
I do believe AI assurance and testing are critical for broad adoption. But the real challenge is this: how do we harness the innovation power of individuals without being paralysed by fear?
One way forward is sensible evaluation before deployment, backed by strong post-deployment monitoring and assurance. At CSIRO's Data61, we are developing such a balanced approach that spreads risk management across DevOps pipelines rather than relying solely on large upfront risk assessments. That means resisting the urge to make systems "perfect" before release, because this wave of GenAI won't wait.
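As a rough illustration of what spreading risk management across a pipeline can look like in practice, here is a minimal sketch: a lightweight pre-deployment gate that blocks release only on clear failures, paired with a post-deployment monitor that keeps watching outcomes in production. All class names and thresholds are hypothetical, chosen for illustration only, not taken from any Data61 tooling.

```python
# Hypothetical sketch: distribute risk checks across the pipeline instead of
# one large upfront assessment. Thresholds and names are illustrative only.
from dataclasses import dataclass, field


@dataclass
class ReleaseGate:
    """Pre-deployment: require a sensible pass rate, not perfection."""
    min_pass_rate: float = 0.9  # hypothetical threshold

    def evaluate(self, results: list[bool]) -> bool:
        # Release proceeds if the evaluation pass rate clears the bar.
        return sum(results) / len(results) >= self.min_pass_rate


@dataclass
class PostDeploymentMonitor:
    """Post-deployment: record live outcomes and alert on a bad streak."""
    max_consecutive_failures: int = 3  # hypothetical threshold
    streak: int = 0
    alerts: list[str] = field(default_factory=list)

    def record(self, ok: bool) -> None:
        self.streak = 0 if ok else self.streak + 1
        if self.streak >= self.max_consecutive_failures:
            self.alerts.append(f"alert: {self.streak} consecutive failures")


# Gate the release on a small pre-deployment evaluation set...
gate = ReleaseGate()
deployed = gate.evaluate([True] * 9 + [False])  # 90% pass rate -> release

# ...then keep monitoring after deployment instead of stopping at release.
monitor = PostDeploymentMonitor()
for outcome in [True, False, False, False, True]:
    monitor.record(outcome)
```

The design point is that neither check alone carries the full burden: the gate stays deliberately modest so release is not paralysed, and the monitor picks up the residual risk once the system is live.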
Finally, I was asked how I personally use AI. The timing couldn't be better: this Friday I'll deliver the second instalment of my talk at another APS event, building on my recent (slightly controversial) reflections in "Reading in the Age of AI". This time, I'll go even further, tackling a more challenging topic, "Writing in the Age of AI", where I'll make another controversial point. Stay tuned.