🎉 Kicking off the new year with a big topic – “Europe’s Path in Artificial Intelligence”. I had the privilege of joining a stellar panel organised by Fraunhofer, with an online audience of ~500. I shared Australia’s AI regulation status, including the pivotal role of the Voluntary AI Safety Standard in accelerating AI adoption with trust and in helping explore regulatory approaches.
📣 I conveyed the following messages:
1️⃣ Safety vs. function, near- vs. long-term risk – a false dichotomy?
High-performing AI use cases and safety aren’t opposing choices. The underlying science is the same: deeply understanding AI models/systems and steering them effectively. That same understanding ensures functional accuracy and reliability, and also control over risks – whether it’s bias or deepfake concerns today, or out-of-control risks tomorrow.
2️⃣ Why so many risk assessments, measures, and mitigations?
Because our scientific understanding is still limited, we have to spread many mitigations across the lifecycle, covering both process and product. We layer measures and allow alternative approaches because we cannot say exactly how much risk reduction any single measure or mitigation achieves (a toy sketch of this layering follows after point 3). Only by advancing the science of concrete, quantified risk assessment and mitigation can we truly reduce the confusion, cost, and interpretation variations that plague high-level regulations, standards, and frameworks, which try to do the right thing by piling up many mitigations just in case.
3️⃣ Beyond models – systems-level innovation:
AI models don’t work by themselves. The real leap comes from pairing them with external tools, smarter use of inference-time compute, and external knowledge bases. This system-level innovation is within reach for everyone; there’s no need to chase endless scaling of GPUs or data. And if all models are learning the same underlying world model/understanding, they will converge on similar capabilities, so model-level competitive advantage may disappear. System-level innovation is where you truly differentiate, so start now (a second sketch below illustrates the idea).
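To make point 2 concrete, here is a minimal, illustrative sketch of layered mitigations across the lifecycle: an input filter (process), output redaction (product), and audit logging (operations). All names are hypothetical stand-ins, not any real library or the Standard’s actual guardrails; each layer exists precisely because no single one is quantifiably sufficient on its own.

```python
# Illustrative only: layered mitigations around a hypothetical
# text-generation service. Because we cannot quantify how much risk
# any single measure removes, independent checks are stacked across
# the lifecycle. None of these names refer to a real API.

def input_filter(prompt: str) -> bool:
    """Process layer: refuse clearly disallowed requests up front."""
    banned_phrases = ("build a weapon",)
    return not any(p in prompt.lower() for p in banned_phrases)

def generate(prompt: str) -> str:
    """Product layer: stand-in for the actual model call."""
    return f"[model response to: {prompt}]"

def output_filter(text: str) -> str:
    """Product layer: redact residual risky content post-generation."""
    return text.replace("password", "[redacted]")

def audit_log(prompt: str, response: str) -> None:
    """Operational layer: record interactions for later risk assessment."""
    print(f"AUDIT prompt={prompt!r} response={response!r}")

def answer(prompt: str) -> str:
    """Each layer is weak alone; together they bound the residual risk."""
    if not input_filter(prompt):
        return "Request declined."
    response = output_filter(generate(prompt))
    audit_log(prompt, response)
    return response

print(answer("Summarise the Voluntary AI Safety Standard."))
```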
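And a toy sketch for point 3: system-level innovation around a fixed model, combining retrieval from an external knowledge base, a simple external tool, and extra inference-time compute via majority voting over samples. `call_model` is a hypothetical stub standing in for any LLM API, and the knowledge-base entry is invented for illustration.

```python
# Illustrative only: "system-level" innovation around a fixed model,
# pairing it with an external knowledge base (retrieval), an external
# tool (exact arithmetic), and extra inference-time compute (majority
# voting over sampled answers). call_model is a hypothetical stub.

import random
from collections import Counter

KNOWLEDGE_BASE = {
    "voluntary ai safety standard":
        "Australian guidance for adopting AI with trust.",
}

def retrieve(query: str) -> str:
    """External knowledge: fetch context instead of relying on model memory."""
    return KNOWLEDGE_BASE.get(query.lower(), "")

def calculator(expression: str) -> str:
    """External tool: exact arithmetic instead of model guesswork (toy eval)."""
    return str(eval(expression, {"__builtins__": {}}))  # never eval untrusted input

def call_model(prompt: str) -> str:
    """Stand-in for an LLM call; randomness mimics sampling variance."""
    return random.choice(["42", "42", "41"])  # toy: mostly right, sometimes wrong

def answer_by_voting(prompt: str, n_samples: int = 5) -> str:
    """Inference-time compute: sample several answers, keep the majority."""
    votes = Counter(call_model(prompt) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

context = retrieve("voluntary ai safety standard")
print(answer_by_voting(f"{context}\nQ: What is 6 * 7?"))
print("tool check:", calculator("6 * 7"))
```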
๐ ๏ธ That’s why CSIRO’s Data61 is focusing on the science of measuring and controlling AI “systems”โfrom concrete risk assessments/mitigation to inference-time/system-level innovation for industry and government. A bottom-up, foundational science approach is key to cutting through the noise of vague mitigations and ensuring meaningful, efficient risk control alongside accelerated innovation. More here: https://lnkd.in/gPhid9tX
We are also developing version two of the Voluntary AI Safety Standard right now, in tandem with AI safety institutes around the world on model/system evaluation (assuring the system) and with ISO, NIST, and the EU AI Act’s Code of Practice for GPAI developers (assuring the processes). Stay tuned.