Day three, the research symposium day, was the reason I came. You do not often get to hear from and interact with Sir Demis Hassabis, Yoshua Bengio, Yann LeCun and their peers on the same day.
Sir Demis predicted AGI within five to eight years. Whether that timeline holds or not, two themes stood out.
First, learning after training. He emphasised systems that continue to improve post-training. This often does not mean modifying the model itself. More often, it means building feedback loops, memory, tools and evaluation layers so that the system, not just the model, learns from errors and new information. The practical question for any organisation is simple: is your AI learning from every mistake and every piece of feedback? If not, you are not unlocking its full capability.
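To make that concrete, here is a minimal sketch of what system-level learning can look like: the model stays frozen, while a memory layer around it accumulates corrections and replays them on future calls. Every name here (`call_model`, `FeedbackStore`) is a hypothetical illustration, not any particular product's API.

```python
# A minimal sketch of system-level learning: the model's weights never change,
# but the surrounding system records feedback and reuses it on future calls.
# All names (call_model, FeedbackStore) are hypothetical illustrations.

from dataclasses import dataclass, field


@dataclass
class FeedbackStore:
    """Memory layer: accumulates corrections the system has received."""
    records: list[str] = field(default_factory=list)

    def add(self, question: str, correction: str) -> None:
        self.records.append(f"Q: {question}\nCorrection: {correction}")

    def as_context(self, limit: int = 5) -> str:
        # Surface the most recent corrections as extra context.
        return "\n\n".join(self.records[-limit:])


def call_model(prompt: str) -> str:
    """Placeholder for a frozen model call (e.g. an LLM API)."""
    return f"[model answer to: {prompt!r}]"


def answer(question: str, store: FeedbackStore) -> str:
    # The system, not the model, learns: past feedback is injected into
    # every new prompt, so a repeated mistake can be corrected without
    # touching the model itself.
    context = store.as_context()
    prompt = f"{context}\n\nQuestion: {question}" if context else question
    return call_model(prompt)


store = FeedbackStore()
print(answer("What is our refund window?", store))
store.add("What is our refund window?", "It changed to 60 days in 2024.")
print(answer("What is our refund window?", store))  # now sees the correction
```

The design point is that the learning lives in the loop and the memory, which any organisation controls, rather than inside the model weights, which most do not.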
Second, why AI is so powerful for science. It is not only about extracting patterns from massive datasets. It is about cross-disciplinary learning and synthesis. Cross-disciplinary collaboration is hard, and few humans have deep expertise across multiple domains. AI can learn and connect insights across fields, provided those insights are validated by domain scientists. There is no better place for this than CSIRO, where Data61's AI technologies are exposed to multiple scientific domains and we continually observe surprising cross-domain gains. That gives us confidence that we can compete, even alongside well-resourced frontier labs and domain-specific R&D groups. This is where we can press for an advantage for Australian AI.
In the afternoon, I joined the panel on Safe and Trusted AI. My argument was straightforward: safety is not a model property, it is a system property. We cannot rely on models to police themselves. Control sits in the system layer, whether through sophisticated AI monitors or well-understood rule-based guardrails. Trust must be evidence-based and calibrated. That is why Australia’s AI safety guidelines began with deployers rather than model builders, and why the Australian AI Safety Institute was recently established to gather evidence to improve calibrated trust.
I reused my car analogy. You do not need a perfect engine if you have reliable brakes. With trustworthy control mechanisms, you can move faster with confidence. Early car laws, such as the Red Flag Act, required a person to walk in front of every vehicle holding a red flag. Some human-in-the-loop approaches in AI risk becoming the modern equivalent: a human scapegoat, or an impractical measure that slows AI to irrelevance. We need better brakes, not permanent red flags.
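As a rough illustration of what "reliable brakes" can mean in the system layer, here is a minimal sketch of a rule-based guardrail that wraps a model and can veto its output. The rules and function names are assumptions for illustration, not any specific framework's API.

```python
# A minimal sketch of a system-layer guardrail: the model is treated as an
# untrusted component, and a rule-based check outside it has the authority
# to block output. Patterns and names here are illustrative assumptions.

import re

BLOCKED_PATTERNS = [
    re.compile(r"\b\d{16}\b"),           # e.g. a raw 16-digit card number
    re.compile(r"(?i)ignore previous"),  # crude prompt-injection marker
]


def call_model(prompt: str) -> str:
    """Placeholder for a frozen model call."""
    return f"[model answer to: {prompt!r}]"


def guarded_call(prompt: str) -> str:
    # The brake lives here, not inside the model: any output that trips
    # a rule is withheld, regardless of how the model behaves.
    output = call_model(prompt)
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(output):
            return "Response withheld: output failed a safety rule."
    return output


print(guarded_call("Summarise today's panel on safe and trusted AI."))
```

The same wrapper pattern accommodates more sophisticated AI monitors in place of regex rules; what matters is that control, and the evidence it generates, sits outside the model.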
Later, after navigating formidable traffic, I joined the launch of the International AI Safety Science Report, a key report we contributed to over the past year to provide evidence for policymakers.
Day four will bring the official opening ceremony with Prime Minister Modi and other dignitaries. The summit is reaching its peak.