GenAI For Government Summit: Day 2 Opening Talk

Here’s a curious thing: Generative AI can do more than create text, images, and videos. It’s also adept at generating code, diagnostics, plans, recommendations, predictions, and classifications… 🤔 But how does this differ from traditional predictive AI, planning AI, and AI recommenders?

Or consider this: If an AI model can learn to perform all these amazing tasks, do we—or the AI—still need to write code to produce an app? Can we now ask an AI to ‘become/be’ the app (even generating UIs) just through learning? 🚀 Is coding on its way to becoming obsolete (rather than merely ‘being automated’)?

Or take AlphaGo Zero: It didn’t learn from human play, yet it easily outperforms the AIs that did. Does this suggest human data and expertise might actually be limiting AI’s potential? Do we worry too much about AI running out of data to learn from, or being limited by its training data?

And why did the OECD recently update its AI definition, removing ‘human-defined objectives’ and replacing it with ‘implicit and explicit objectives’? What’s happening to human expertise, human knowledge, and even human objectives!?

I explored these provocative topics at the GenAI for Government Summit. It’s time to move beyond the usual narrative about GenAI’s capabilities, opportunities, risks, and hallucinations… Let’s delve into the essence of GenAI and debunk some myths. :-}

Slides


About Me

Research Director, CSIRO’s Data61
Conjoint Professor, CSE UNSW

For other roles, see LinkedIn & Professional activities.

If you’d like to invite me to give a talk, please see here & email liming.zhu@data61.csiro.au
