๐‘๐ž๐ฌ๐ฉ๐จ๐ง๐ฌ๐ข๐›๐ฅ๐ž ๐€๐ˆ ๐‘๐ž๐ฌ๐ž๐š๐ซ๐œ๐ก (๐‘๐€๐ˆ๐‘) ๐‚๐ž๐ง๐ญ๐ซ๐ž Launched

🎉 Exciting News! The **Responsible AI Research (RAIR) Centre** is here! After a long journey, we're thrilled to see it come to life with our great partners at the Australian Institute for Machine Learning (AIML). Beyond the headline research topics, here are some of the key rationales behind our four research themes and how CSIRO's Data61's system-level responsible AI engineering https://lnkd.in/gPhid9tX is woven through:

1. ๐“๐š๐œ๐ค๐ฅ๐ข๐ง๐  ๐’๐ฒ๐ง๐ญ๐ก๐ž๐ญ๐ข๐œ ๐‚๐จ๐ง๐ญ๐ž๐ง๐ญ ๐‘๐ข๐ฌ๐ค๐ฌ: Focusing on synthetic content risks such as misinformation, not in single-content isolation but as part of a system-level approach. Research will dive deep into model attributions, training/inference data, and information supply chain attribution and ecosystem context, with spoofing-proof security and integrity of the underlying mechanisms in mind.

2. ๐’๐š๐Ÿ๐ž ๐€๐ˆ ๐ข๐ง ๐ญ๐ก๐ž ๐‘๐ž๐š๐ฅ ๐–๐จ๐ซ๐ฅ๐: Recent frontier AI evaluations highlight tool accessโ€”especially software toolsโ€”as critical risk factors. Whether in the virtual or physical world (both are very real), our research prioritises the safety implications of LLM-generated code interacting with software tools and robotics.

3. (๐„๐ฏ๐š๐ฅ๐ฎ๐š๐ญ๐ข๐ง๐ ) ๐ƒ๐ข๐ฏ๐ž๐ซ๐ฌ๐ž ๐€๐ˆ: Itโ€™s about evaluating probabilistic AI systems holistically. How uncertainties propagate through interconnected multiple AI models and non-AI components to reach a system-level output that informs operational decisions or risk assessments. Instead of single-stakeholder calls, this approach retains diversity and enables context-specific decisions, ensuring uncertainties are understood and trade-offs are handled at the right level and right time.

4. ๐„๐ฑ๐ฉ๐ฅ๐š๐ข๐ง๐š๐›๐ฅ๐ž/๐‚๐š๐ฎ๐ฌ๐š๐ฅ ๐€๐ˆ: This focuses on causal AI across multiple abstraction levels. Stakeholders care about higher-level system-level causality, not just interpretability and causality inside an AI model. Bridging these abstraction layers is essential for responsible AI, judiciously exposing whatโ€™s happening inside the models and systems to what decision-makers need to know.

These themes are also designed to foster international collaboration with organisations such as the AI Safety Institutes (AISIs). For example, Australia led the synthetic content risks track in the recent AISI network convening, drawing heavily on expertise from Theme 1 and, in turn, influencing that theme's formation. We're also collaborating internationally on AI system evaluation to connect model benchmarks with system-level risk assessments in diverse contexts.

Stay tuned for more updates, including job opportunities! 🚀 If you're an international research organisation working in these areas, reach out; we'd love to connect.

And yes, RAIR/RARE… because we're all about **RAIR** talent, **RAIR** opportunities, and **RAIR** breakthroughs!



About Me

Research Director, CSIRO’s Data61
Conjoint Professor, CSE UNSW

For other roles, see LinkedIn & Professional activities.

If you’d like to invite me to give a talk, please see here & email liming.zhu@data61.csiro.au
