After a period of relaxation and intensive writing sessions for our upcoming book on Architecture and DevOps for AI Systems, I’m excited to resume regular updates. A lot has unfolded, and even more is on the horizon, so stay tuned. Last week, I had the pleasure of speaking at the Australian Software Engineering Summer School,…
-
AI Leadership Summit Panel – Powering Productivity with Generative AI
The biggest AI leadership day in Australia, during AI month! I participated in a panel on ‘Powering Productivity with Generative AI’. Here are my main messages: – Just as brakes enable cars to go faster and with confidence, responsible AI is crucial for accelerating AI technology adoption. Australia may not create the fastest or largest…
-
GenAI For Government Summit: Day 2 Opening Talk
Here’s a curious thing: Generative AI can do more than create text, images, and videos. It’s also adept at generating code, diagnostics, plans, recommendations, predictions, and classifications… But how does this differ from traditional predictive AI, planning AI, and AI recommenders? Or consider this: If an AI model can learn to perform all these…
-
Defence AI Symposium Talk: Beyond AI Models – Responsible AI in the Era of Frontier Models
I enjoyed the Defence AI Symposium 2023, which offered a diverse perspective on using AI in a defence context, emphasizing trustworthiness and responsibility. I delivered a talk titled “Beyond AI Models: Responsible AI in the Era of Frontier Models,” where I focused on: Selected Slides: Dropbox. Research at CSIRO’s Data61: https://lnkd.in/gyzjE4-i Book: https://lnkd.in/gsQz5swy
-
FM+SE Vision 2030 in Mexico City
I am often asked, “In this ChatGPT era, is learning to code still essential for our children?” Powerful Foundation Models (FM) not only write increasingly sophisticated code, but they also perform a growing range of functions within the trained blackbox model, where no traditional code is ever written. My answer was finally put to the…
-
AIMX Keynote – Navigating Frontier AI Safety
I thoroughly enjoyed my quick 2-day trip to Singapore. I couldn’t resist a bit of mischief by titling my keynote at Singapore’s AIMX Conference, “Navigating Frontier AI Safety: The Science of Responsible AI in Australia” 🇦🇺, given it was just a day before the UK AI Safety Summit. Setting cheekiness aside, our scientific…
-
The Hiroshima AI Process and Beyond: A Deep Dive into Japan’s AI Governance and Innovations
Just wrapped up an insightful trip to Japan! After presenting NAIC/Data61’s work on responsible AI at the UN IGF, I had the privilege to visit several top AI institutes, including Tokyo University, Waseda University, NII, RIKEN AIP, and AIST, and to delve deeper into Japan’s AI initiatives, such as JST’s Trustworthy/Trusted AI programs. Exciting collaboration opportunities lie…
-
Australia’s Responsible AI Approach at UN IGF
I am thrilled to participate in this year’s UN Internet Governance Forum (IGF) in Kyoto, where the discussion on Responsible AI and GenAI takes center stage. On the first day, I had the honour of speaking about Australia’s approach to Responsible AI and contributing to the panel titled “Shaping AI Technologies to Ensure Respect for…
-
Oct Sky Talk – AI Transformation: A Clash with Human Expertise
I had the wonderful opportunity to participate in the October Sky event organized by Chaos1, where I gave a talk on ‘AI Transformation,’ featuring a deliberately provocative subtitle: ‘A Clash with Human Expertise.’ I opened with a 1945 quote from Vannevar Bush: ‘Consider a future device…in which an individual stores all his books, records, and…
-
Why is Operationalising Responsible AI Hard?
In my new CEDA – Committee for Economic Development of Australia opinion piece, https://lnkd.in/gjbK3Uej, I tackle three pervasive challenges I’ve observed: 1. The tendency to view responsible AI as nothing more than a buzzword or truism. Is AI really special, or can we simply graft AI considerations onto existing governance frameworks? 2. The communication gap between executives, board…