Here’s a curious thing: Generative AI can do more than create text, images, and videos. It’s also adept at generating code, diagnostics, plans, recommendations, predictions, and classifications… But how does this differ from traditional predictive AI, planning AI, and AI recommenders?
Or consider this: If an AI model can learn to perform all these amazing tasks, do we (or the AI) still need to write code to produce an app? Can we now ask an AI to ‘become/be’ the app (even generating UIs) just through learning? Is coding on its way to becoming obsolete (rather than merely being automated)?
Or take AlphaGo Zero: It didn’t learn from human game records, yet it easily outperforms the AIs that did. Does this suggest human data and expertise might actually be limiting AI’s potential? Do we worry too much about AI running out of data to learn from, or being constrained by its training data?
And why did the OECD recently update its AI definition, removing ‘human-defined objectives’ and replacing it with ‘implicit and explicit objectives’? What’s happening to human expertise, human knowledge, and even human objectives!?
I explored these provocative topics at the GenAI for Government Summit. It’s time to move beyond the usual narrative about GenAI’s capabilities, opportunities, risks, and hallucinations… Let’s delve into the essence of GenAI and debunk some myths. :-}