It has been a great honour to write an IEEE Software point/counterpoint piece on software engineering for AI with Prof. Mary Shaw at CMU, a towering figure in software engineering. To make the debate fun and to counter Mary’s cogent arguments, I made my points a bit extreme and contrarian in places to draw responses. :-} Here is a high-level summary of my points. https://lnkd.in/gEUYwcxZ
On a fundamental level, AI solves problems autonomously, producing seemingly effective solutions that humans may not fully comprehend; it can even propose new problems to be solved. This paradigm is radically different from the one in which humans identify and solve problems via software built with approaches and tools they reasonably understand. The new generation of AI engineering methods should possess attributes that help achieve the following:
1. Define outcomes and guardrails, in contrast to the traditional approach of specifying problems, requirements, and specifications. We need new methods that engineers can use to engage with diverse users and communities to better select problems, define outcomes, and specify more inclusive goals and guardrails for AI.
2. Understand AI-generated solutions rather than set or solve problems ourselves. We need new methods and (AI) tools that software engineers can use to interpret, explain, and simplify cryptic AI-based solutions with justified trust.
3. Embed radical observability and monitorability for governance, not just for error detection and diagnosis. We need new methods that enable “meaningful” human intervention rather than treating humans as symbolic liability sponges.
Even more captivating for us is the realisation that these new AI engineering methods will also play a critical role in understanding our own intelligence. Software engineers no longer build software systems just to satisfy human needs but also to understand human intelligence.