Exciting News! The **Responsible AI Research (RAIR) Centre** is here! After a long journey, we're thrilled to see it come to life with our great partners at the Australian Institute for Machine Learning (AIML). Beyond the headline research topics, here are some of the key rationales behind our four research themes, and how CSIRO's Data61's system-level responsible AI engineering https://lnkd.in/gPhid9tX is woven through:
1. **Tackling Synthetic Content Risks**: Focusing on synthetic content risks such as misinformation, not in single-content isolation but as part of a system-level approach. Research will dive deep into model attribution, training/inference data, information supply chain attribution, and ecosystem context, with spoofing-proof security and integrity of the underlying mechanisms in mind (a toy provenance-chain sketch follows this list).
2. **Safe AI in the Real World**: Recent frontier AI evaluations highlight tool access, especially software tools, as a critical risk factor. Whether in the virtual or physical world (both are very real), our research prioritises the safety implications of LLM-generated code interacting with software tools and robotics (see the tool-gating sketch below).
3. **(Evaluating) Diverse AI**: It's about evaluating probabilistic AI systems holistically: how uncertainties propagate through multiple interconnected AI models and non-AI components to reach a system-level output that informs operational decisions or risk assessments. Instead of single-stakeholder calls, this approach retains diversity and enables context-specific decisions, ensuring uncertainties are understood and trade-offs are handled at the right level and the right time (see the uncertainty-propagation sketch below).
4. **Explainable/Causal AI**: This focuses on causal AI across multiple abstraction levels. Stakeholders care about causality at the higher system level, not just interpretability and causality inside a single AI model. Bridging these abstraction layers is essential for responsible AI, judiciously exposing what's happening inside the models and systems to what decision-makers need to know (see the intervention sketch below).
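For Theme 1, attribution is treated as a supply-chain property rather than a single-content check. Here is a minimal sketch of that end-to-end integrity idea; the pipeline steps and the plain-hash scheme are hypothetical simplifications (real provenance standards such as C2PA use cryptographic signatures):

```python
# Toy provenance chain: each step records the hash of the previous
# step's content, so the chain can be verified end to end.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical chain entries: (step name, content, claimed hash of
# the previous step's content).
content_v1 = b"original photo bytes"
content_v2 = b"original photo bytes + AI-generated caption"
chain = [
    ("capture", content_v1, None),
    ("caption", content_v2, digest(content_v1)),
]

def verify(chain) -> bool:
    prev_hash = None
    for step_name, content, claimed_prev in chain:
        if claimed_prev != prev_hash:
            return False  # the supply chain was broken or spoofed
        prev_hash = digest(content)
    return True

print("provenance intact:", verify(chain))
```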
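For Theme 2, the tool-gating sketch below shows the general system-level pattern of checking LLM-proposed actions before they reach real tools. The `Action` class, the allowlist, and `is_permitted()` are hypothetical illustrations, not a RAIR Centre API:

```python
# Gate LLM-generated actions *between* the model and the tools.
from dataclasses import dataclass

@dataclass
class Action:
    tool: str          # e.g. "shell", "http", "robot_arm"
    argument: str      # the raw argument the LLM produced

# Hypothetical policy: only pre-approved tools, no shell access.
ALLOWED_TOOLS = {"http", "calculator"}

def is_permitted(action: Action) -> bool:
    """System-level check applied before any tool call executes."""
    if action.tool not in ALLOWED_TOOLS:
        return False
    # Tool-specific checks would go here (URL allowlists, rate
    # limits, human-in-the-loop escalation, ...).
    return True

for proposed in [Action("calculator", "2+2"), Action("shell", "rm -rf /")]:
    verdict = "execute" if is_permitted(proposed) else "block"
    print(f"{proposed.tool}: {verdict}")
```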
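For Theme 3, here is a minimal Monte Carlo sketch of propagating uncertainty through a pipeline rather than reading one model's confidence in isolation. The toy `detector` model, the thresholded `risk_rule`, and the noise level are all made-up illustration values:

```python
# Estimate the *system-level* flag rate, not a single model's score.
import random

def detector(x: float) -> float:
    """Toy upstream AI model: noisy score clamped to [0, 1]."""
    return min(1.0, max(0.0, x + random.gauss(0, 0.1)))

def risk_rule(score: float) -> bool:
    """Toy downstream non-AI component: thresholded decision."""
    return score > 0.7

def system_risk(x: float, n_samples: int = 10_000) -> float:
    """Monte Carlo estimate of how often the system flags input x."""
    flags = sum(risk_rule(detector(x)) for _ in range(n_samples))
    return flags / n_samples

# A borderline input: the raw model score sits near 0.65, but the
# system-level flag rate is what actually informs the decision.
print(f"P(flag) for x=0.65: {system_risk(0.65):.2f}")
```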
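For Theme 4, a toy structural causal model illustrates why system-level causality differs from looking inside a single model: intervening on one component's output does not remove the influence of an upstream cause that also acts through another path. All variables and equations here are hypothetical:

```python
# Compare observational behaviour with an intervention do(score = v).
import random

def simulate(do_model_score=None, n=10_000):
    """Estimate P(alert), optionally under the intervention do(score=v)."""
    alerts = 0
    for _ in range(n):
        data_quality = random.random()              # exogenous upstream cause
        score = data_quality if do_model_score is None else do_model_score
        human_override = data_quality < 0.2         # same cause, second path
        alert = score > 0.5 and not human_override  # system-level decision
        alerts += alert
    return alerts / n

# Forcing the model's score high does NOT fully control the system
# outcome, because data quality still acts via the override path.
print("observational P(alert):", round(simulate(), 2))
print("P(alert | do(score=0.9)):", round(simulate(0.9), 2))
```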
These themes are also designed to foster international collaboration with bodies such as the AI Safety Institutes (AISIs). For example, Australia led the synthetic content risks track in the recent AISI network convening, drawing heavily on expertise from Theme 1 and, in turn, influencing that theme's formation. We're also collaborating internationally on AI system evaluation to connect model benchmarks with system-level risk assessments in diverse contexts.
Stay tuned for more updates, including job opportunities! If you're an international research organisation working in these areas, reach out; we'd love to connect.
And yes, RAIR/RARE… because we're all about **RAIR** talent, **RAIR** opportunities, and **RAIR** breakthroughs!