In case you missed our year-end summary of the progress we’ve made in responsible AI at CSIRO’s Data61 in 2025, you can find it here. From launching the new Engineering AI Systems book, our practical guide to designing and operating high-quality AI systems, to partnering with the Audit Office of NSW to integrate AI into government auditing, we’ve made significant strides across many fronts. We also have two major pieces of work with our partners currently under embargo, set to be released in the coming months to coincide with major events. These are milestones we are particularly proud of.
Looking ahead, here are some of the exciting directions we’re focusing on in 2026. Responsible and safe AI is never just about evaluating and understanding risks; it is about actively removing them so organisations can adopt AI faster and with greater confidence.
- Scalable Oversight: Developing systems that fully automate AI oversight, checking for correctness, safety, and compliance both during development and training and at runtime, will help organisations scale AI without relying on inefficient manual intervention, reducing bottlenecks and risks. See https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6031534
- Rethinking Human Oversight: In an age of scalable, machine-led oversight, we’re exploring how humans can exercise agency and add value: by better understanding AI outputs, applying judgment selectively, and guiding AI systems. This is closely tied to the future of human skills, learning, and jobs. See https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5501939
- Unlocking Cross-Domain Tech Economy Potential: Techniques developed to mitigate AI risks, such as adversarial elicitation of dangerous behaviours, are also proving powerful for unlocking beneficial capabilities, accelerating cross-domain learning, and enabling broader tech economy growth and productivity gains.
We’re excited about the opportunities ahead as we continue shaping the future of AI in Australia and beyond. Stay tuned for more as we advance the tech economy with responsible and safe AI.