Panel – Europe's Path in Artificial Intelligence

🎉 Kicking off the new year with a big topic – “Europe's Path in Artificial Intelligence”. I had the privilege of joining a stellar panel organised by Fraunhofer, with an online audience of ~500. I shared the status of Australia's AI regulation, including the pivotal role of the Voluntary AI Safety Standard in accelerating AI adoption with trust and in helping explore regulatory approaches.

🚀 I conveyed the following messages:
1๏ธโƒฃ ๐’๐š๐Ÿ๐ž๐ญ๐ฒ ๐ฏ๐ฌ. ๐…๐ฎ๐ง๐œ๐ญ๐ข๐จ๐ง, ๐ง๐ž๐š๐ซ ๐ฏ๐ฌ. ๐ฅ๐จ๐ง๐ -๐ญ๐ž๐ซ๐ฆ ๐ซ๐ข๐ฌ๐ค โ€“ ๐€ ๐Ÿ๐š๐ฅ๐ฌ๐ž ๐๐ข๐œ๐ก๐จ๐ญ๐จ๐ฆ๐ฒ?
High-performing AI use cases and safety arenโ€™t hard choices. The underlying science is the same โ€“ deeply understanding AI models/systems and steering them effectively. The same understanding ensures functional accuracy and reliability, and also control over risksโ€”whether it’s bias or deepfake concerns today or out-of-control risks tomorrow.

2๏ธโƒฃ ๐–๐ก๐ฒ ๐ฌ๐จ ๐ฆ๐š๐ง๐ฒ ๐ซ๐ข๐ฌ๐ค ๐š๐ฌ๐ฌ๐ž๐ฌ๐ฌ๐ฆ๐ž๐ง๐ญ๐ฌ, ๐ฆ๐ž๐š๐ฌ๐ฎ๐ซ๐ž๐ฌ, ๐š๐ง๐ ๐ฆ๐ข๐ญ๐ข๐ ๐š๐ญ๐ข๐จ๐ง๐ฌ?
Because we lack a solid scientific understanding, we have to spread many mitigations across the lifecycle, covering both process and product. And we have to layer them up and allow alternative approaches because we are not sure exactly how much risk reduction any single measure or mitigation achieves (a toy calculation near the end of this post illustrates the point). Only by advancing the science of concrete, quantified risk assessment and mitigation can we truly reduce the confusion, cost, and interpretation variation that plague high-level regulations, standards, and frameworks, which try to do the right thing by piling up many mitigations just in case.

3๏ธโƒฃ ๐๐ž๐ฒ๐จ๐ง๐ ๐ฆ๐จ๐๐ž๐ฅ๐ฌ โ€“ ๐’๐ฒ๐ฌ๐ญ๐ž๐ฆ๐ฌ-๐ฅ๐ž๐ฏ๐ž๐ฅ ๐ข๐ง๐ง๐จ๐ฏ๐š๐ญ๐ข๐จ๐ง:
AI models don't work by themselves. The real leap comes from pairing them with external tools, smarter use of inference-time compute, and external knowledge bases. This system-level innovation is within reach for everyone; there's no need to chase endless scaling of GPUs or data. And if all models are learning the same underlying world model/understanding, they will converge on the same thing, so model-level competitive advantage may disappear. System-level innovation is where you truly differentiate, so start now (the sketch below shows the shape of the idea).
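
To make the system-level idea concrete, here is a minimal, self-contained Python sketch of the pattern: a bare model call wrapped with an external knowledge lookup and an external tool. Every name in it (call_model, retrieve, calculator, KNOWLEDGE_BASE) is hypothetical, a stand-in for whatever model endpoint, retriever, and tools a real system would use, not any specific product or API.

```python
# Toy sketch of "system-level" AI: a bare model call wrapped with
# external knowledge and an external tool. All names are hypothetical.

KNOWLEDGE_BASE = {
    "gpai": "General-Purpose AI, the model category addressed by the EU AI Act.",
}

def retrieve(query: str) -> str:
    """External knowledge base: fetch context instead of trusting model memory."""
    return " ".join(text for key, text in KNOWLEDGE_BASE.items() if key in query.lower())

def calculator(expression: str) -> str:
    """External tool: exact arithmetic the model would otherwise approximate."""
    if not set(expression) <= set("0123456789+-*/(). "):
        raise ValueError("unsupported expression")
    return str(eval(expression))  # acceptable here: input restricted to digits/operators

def call_model(prompt: str) -> str:
    """Stand-in for a real model call, so the sketch runs end to end."""
    return f"[model answer, grounded in]: {prompt}"

def answer(query: str) -> str:
    # The system-level pipeline: retrieve context, let tools do what they
    # do better, then hand the enriched prompt to the model.
    context = retrieve(query)
    return call_model(f"{query}\nContext: {context}")

print(answer("What does GPAI mean?"))
print(calculator("12 * (3 + 4)"))  # -> 84, computed exactly, not guessed
```

Note where the differentiation lives in this shape: in the retrieval, the tools, and how inference-time compute is spent, not in the model weights.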

๐Ÿ› ๏ธ That’s why CSIRO’s Data61 is focusing on the science of measuring and controlling AI “systems”โ€”from concrete risk assessments/mitigation to inference-time/system-level innovation for industry and government. A bottom-up, foundational science approach is key to cutting through the noise of vague mitigations and ensuring meaningful, efficient risk control alongside accelerated innovation. More here: https://lnkd.in/gPhid9tX

We are also developing version two of the Voluntary AI Safety Standard right now, in tandem with AI safety institutes around the world on model/system evaluation (assuring the system), and with ISO, NIST, and the EU AI Act's Code of Practice for GPAI developers (assuring the processes). Stay tuned.


