Many of you who follow my posts will know that I have recently travelled the world to strengthen connections with responsible AI and AI safety research organisations, including the new AI Safety Institutes (AISIs) and many national organisations and programs with a critical mass in responsible/safe AI.
CSIRO’s Data61 has built active collaborations on this topic with organisations in the US, UK, Canada, Japan, Singapore, India, Germany, and France, with a strong focus on engineering and assessing the trustworthiness of AI systems (going beyond AI models and organisational governance/management). Australia is effectively leading the formation of this community.
One way this community exchanges ideas and collaborates is through the workshops we organise and the organising/program committee members we invite to set their direction. We deliberately host these across different academic communities (AI/ML, software/AI engineering, and social science/law/philosophy), because we observe that these communities do not talk to each other on this topic as much as we would hope.
One ongoing effort, co-organised by Qinghua Lu, is the Responsible AI Engineering (RAIE) workshop, hosted at the premier software/AI engineering conference, ICSE. We just held our 2024 instalment (https://lnkd.in/gWFfZwQi), and stay tuned for the 2025 details.
The other is the AI Trustworthiness Assessment (AITA) symposium, hosted at the AAAI Symposium Series. We held it last year and are running it again, with an expanded community, this November in Arlington, VA, USA. If you are interested in joining, please submit papers by 2nd August (full papers and poster/short/position papers; see the call for page limits): https://lnkd.in/gVSkQgTf
More importantly, we hope to see people from a more diverse range of countries and organisations get involved in organising these events and in the community itself.
If you are part of a research organisation with national programs or a critical mass in responsible and safe AI, especially with a system-level focus, please feel free to reach out to me.