While machine learning and deep learning models often produce good classifications and predictions, they are almost never perfect. Models almost always have some percentage of false positive and false negative predictions. That is often acceptable, but it matters a great deal when the stakes are high. For example, a drone weapons system that falsely identifies a school as a terrorist base could inadvertently kill innocent children and teachers unless a human operator overrides the decision to attack.
The operator needs to know why the AI classified the school as a target, and the uncertainties in that decision, before allowing