What an incredible two-week trip to the US! Touching down in Silicon Valley, the centre of the storm, I could feel the excitement and apprehension surrounding foundation models/generative AI, the SVB collapse, and the tech layoffs.
At the AAAI Symposium on AI trustworthiness assessment, I had the pleasure of sitting next to the US policy lead from the Future of Life Institute, whose "pause AI" open letter was making waves in real time. I presented CSIRO Data61's approach to operationalising responsible AI in Australia to the emerging global community working on AI trustworthiness assessment, including how to address the challenges posed by foundation models developed by private companies.
Our journey continued with visits to Berkeley (BAIR), Stanford (HAI and Medicine), Google (Bard/Responsible AI team), Adobe, Nvidia, and Cisco before flying to Boston to visit Harvard (Kennedy School and Medicine) and MIT (Media Lab). We concluded our trip in Washington, DC, forging new collaboration opportunities in responsible AI and cybersecurity with NSF, NIST, the FDA, the Department of Homeland Security, and MITRE. While the impact of foundation models/ChatGPT on trust in AI, academic research, and industry jobs dominated many discussions, there was a unified voice emphasising the importance of responsible AI. The intersection of AI with cybersecurity and quantum computing also emerged as a key topic.
One highlight of the trip was an unexpected encounter with a world-renowned visual artist visiting the MIT Media Lab. He began our conversation by stating that he never wanted generative AI to be just a tool for his work. Instead, he first envisioned AI as an equal partner in art, then as a legacy AI that would continue his work after his passing, and ultimately as an AI to which human artists would relinquish all control so as not to limit its creative potential. This took me aback and added a fresh perspective to the human-centred AI discussion. We spent over an hour discussing various AI and artistic techniques for achieving this vision.