• Australia’s Next Step in Responsible AI Adoption

    Every few weeks brings a new AI framework — but rarely one that genuinely helps organisations simplify. The National AI Centre’s new guidance on responsible AI adoption, launched this week, is a welcome exception. It distils earlier guidance and complex international work into six clear, balanced practices that bridge governance intent with operational reality. We…

    Read more: Australia’s Next Step in Responsible AI Adoption
  • AI & Health, Safety, and Environment (HSE)

    Cognitive overload, opaque automation, and AI surveillance are reshaping the modern workplace. I’m sharing some insights from my talk at the Safety Sphere, Australia’s leading HSE practitioner network, on “When AI Becomes a Coworker: Understanding HSE Risks and Opportunities of General-Purpose AI.” We examined how AI not only enhances productivity but also creates new health,…

    Read more: AI & Health, Safety, and Environment (HSE)
  • ICMI 2025 Keynote – Future of Human Oversight

    It was a great pleasure to deliver a keynote yesterday at the 27th ACM International Conference on Multimodal Interaction, where I explored the evolving nature and future of human oversight. Recent reports such as OpenAI’s GDPVal and METR show that AI systems can now autonomously perform complex doing/solving tasks and often surpass human experts on…

    Read more: ICMI 2025 Keynote – Future of Human Oversight
  • Representing Australia in the Global Effort for AI Safety

    It has been a great honour to represent Australia on the Expert Advisory Panel for the International AI Safety Report, led by Yoshua Bengio and backed by over 30 countries and international organisations including the EU, OECD and UN. This is the most important ongoing effort to monitor AI capability gains and safety risks of…

    Read more: Representing Australia in the Global Effort for AI Safety
  • When “Risk-Based” AI Becomes an Empty Promise

    Risk-based AI policy/regulation sounds obvious. But it rests on a fragile assumption: that we can actually measure risk. Yet risk assessment is often put forward as the first thing to answer, as if it’s the easy part. Some frameworks push “use case–based” solutions with a pre-defined list of high-risk use cases, but then smuggle in the magic…

    Read more: When “Risk-Based” AI Becomes an Empty Promise
  • From Shadow Adoption to Workflow Collapse: The Hidden Economics of AI

    What if we’ve been categorising General-Purpose AI (GPAI) the wrong way? What if it isn’t just another “general-purpose technology” like computing, but a bundle of many specific-purpose technologies out of the box? And what if calling AI a “capital investment” is also a category error, when in reality it behaves more like inexpensive labour? Yesterday at the Cost-Benefit…

    Read more: From Shadow Adoption to Workflow Collapse: The Hidden Economics of AI
  • If You Understand It, It’s Not AI: Designing Oversight for the Incomprehensible

    New paper alert (working draft)! “When you really understand what AI is doing, it’s no longer AI — it’s just boring automation.” That old AI community joke captures the paradox of human oversight. We trust calculators and complex business process engines with millions of if–else rules, running hundreds of steps, responding to environmental changes, and…

    Read more: If You Understand It, It’s Not AI: Designing Oversight for the Incomprehensible
  • Writing in the Age of AI

    After my slightly controversial talk on Reading in the Age of AI, I delivered another talk in the APS Learn series on Writing in the Age of AI. My core thesis: writing is thinking. We write first for self-understanding, then for audience understanding. AI cannot replace that understanding, but it can be a strong discussion partner.…

    Read more: Writing in the Age of AI
  • The AI-ESG Paradox: Why Assessing AI’s Impact Defies Simple Metrics

    Just sat down with the latest episode of The Institutional Edge: Real allocators. Real alpha. — and this one hits right at the crossroads of technology, investing, and sustainability. Angelo Calvello speaks with me about why AI’s ESG impact is so hard to measure and what investors can realistically do about it. We dig into…

    Read more: The AI-ESG Paradox: Why Assessing AI’s Impact Defies Simple Metrics
  • Zoho Fireside Chat on AI

    I never imagined I would one day share a speaking session — and a speaker room chat — with the legendary Australian cricket captain Steve Waugh. Thanks to the Zoho event in Sydney, that happened. Speaking right after Hugh Watson, Australia’s Ambassador for Cyber Affairs and Critical Technology, also added a valuable global perspective to…

    Read more: Zoho Fireside Chat on AI

Authored Books

About Me


About me – According to AI

Director/Head of CSIRO’s Data61
Conjoint Professor, CSE UNSW

For other roles, see LinkedIn & Professional activities.

If you’d like to invite me to give a talk, please see here & email liming.zhu@data61.csiro.au
