I had the pleasure of joining the IEEE Early Career Program Summit 2025 panel on "Bridging Academia and Industry in Computing." It prompted me to reflect on two lessons I wish I'd learned much earlier in my career, because the conventional wisdom I heard early on turned out to be deeply limiting.
🔻 Perceived Wisdom #1: "Don't show low-TRL research to industry. Don't mix speculative and commercial work."
This sounds practical: keep early-stage, high-risk ideas separate from commercialisation and industry collaboration to avoid falling into the two infamous "valleys of death" you have to cross:
– Valley of Death 1: The gap between basic research (TRL 1-3) and applied development (TRL 4-6), where promising concepts often fail due to lack of funding or practical validation.
– Valley of Death 2: The gap between prototype development (TRL 6-7) and commercialisation (TRL 8-9), where technologies struggle to secure investment or market adoption.
But here's the lesson I learned the hard way:
👉 The most successful projects I've seen do exactly the opposite.
They expose low-TRL ideas to industry early to develop them in context. They manage risk by embedding fallback options, investing small, and leveraging mature support technologies. In today's software and AI ecosystems, where timelines compress from years to weeks, isolating speculative work is costing you real competitive advantage.
🔻 Perceived Wisdom #2: "Standards are boring summaries of established good-enough practices."
The assumption goes: standards are slow, conservative, and reflect the lowest common denominator of what's already widely adopted. Meanwhile, real research, the state of the art, is happening far ahead in academia and research labs.
But here's the lesson I've learned through direct involvement:
👉 Standards, especially in fast-moving fields like AI, are themselves exciting research.
Defining how to test LLM robustness, watermark synthetic content, or measure fairness and reliability: these are not solved problems, yet they need to be standardised in some way to assure trust and regulatory conformance. They require new science, interdisciplinary thinking, and real-world constraints baked in.
CSIRO's Data61 is leading work on Australia's AI safety standards, watermarking and labelling guidelines, and risk-threshold setting with Australian industry, and is working with AI labs such as OpenAI and Google via the Frontier Model Forum to advance the science of evaluation and measurement for AI and agent systems. https://lnkd.in/gJvMTHWB
Thanks again to the organisers and fellow panellists for a lively and thoughtful session. I can't resist attaching a fittingly timed picture to celebrate the arrival of AGI.