What counts as a decision, really? The term stretches from the strict sense in administrative law, where a decision formally alters someone's rights or entitlements, to the everyday sense in which almost everything, from selecting a movie to choosing a candidate, feels like a decision. Yet across these very different contexts, the underlying anatomy looks surprisingly similar.
In CSIRO Data61's new preprint, "Oversight Design for AI-Enabled Decision Making in Government Services" (link in the comments), we unpack this shared decision stack: fact-finding, rule application, deliberation, discretion, and formalisation. Each stage represents a distinct kind of role in decision-making, and therefore a distinct opportunity (and limit) for AI to assist.
Our approach applies our recent framework on meaningful human oversight to this stack: distinguishing between doing and overseeing, between AI assistance and human accountability, and between algorithmic outputs and what the law calls the "state of mind" of a specified officer or delegate.
Even where AI can "do" most of the work under human oversight, the discretion and formalisation stages of a decision usually remain with a natural person, though AI can still perform independent double-checks.
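For readers who think in system designs, here is a minimal, purely illustrative sketch of how that doing/overseeing split could be recorded for each stage of the decision stack. It is not code from the preprint, and every name in it (DecisionStage, Actor, RoleAssignment, DECISION_STACK) is hypothetical.

```python
# Illustrative only: one way to make the "doer" and "overseer" for each
# stage of the decision stack explicit and auditable.
from dataclasses import dataclass
from enum import Enum, auto


class DecisionStage(Enum):
    FACT_FINDING = auto()
    RULE_APPLICATION = auto()
    DELIBERATION = auto()
    DISCRETION = auto()
    FORMALISATION = auto()


class Actor(Enum):
    AI = "ai"
    HUMAN = "human"


@dataclass
class RoleAssignment:
    stage: DecisionStage
    doer: Actor       # who performs the work at this stage
    overseer: Actor   # who checks it (accountability stays with the human decision-maker)


# Example configuration: AI assists with the earlier stages under human
# oversight, while discretion and formalisation stay with a natural person,
# with AI performing an independent double-check.
DECISION_STACK = [
    RoleAssignment(DecisionStage.FACT_FINDING, doer=Actor.AI, overseer=Actor.HUMAN),
    RoleAssignment(DecisionStage.RULE_APPLICATION, doer=Actor.AI, overseer=Actor.HUMAN),
    RoleAssignment(DecisionStage.DELIBERATION, doer=Actor.AI, overseer=Actor.HUMAN),
    RoleAssignment(DecisionStage.DISCRETION, doer=Actor.HUMAN, overseer=Actor.AI),
    RoleAssignment(DecisionStage.FORMALISATION, doer=Actor.HUMAN, overseer=Actor.AI),
]
```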
We're exploring how this layered view can make human-AI decision design more transparent and defensible across contexts.
If you're interested in designing and testing the effectiveness of AI's "doing" and humans' "overseeing" roles across your decision stack, we'd love to collaborate.

