Developing a Turing test for ethical AI

Artificial intelligence developers have always had a “Wizard of Oz” air about them. Behind a magisterial curtain, they perform amazing feats that seem to bestow algorithmic brains on the computerized scarecrows of this world.

AI’s Turing test focused on the wizardry needed to trick us into thinking that scarecrows might be flesh-and-blood humans (if we ignore the stray straws bursting out of their britches). However, I agree with Rohit Prasad, Amazon’s head scientist for Alexa, who recently argued that Alan Turing’s “imitation game” framework is no longer relevant as a grand challenge for AI professionals.

Creating a new Turing test for ethical AI

Prasad points out that impersonating natural-language dialogue is no longer an unattainable objective. The Turing test was an important conceptual breakthrough in the mid-20th century, when what we now call cognitive computing and natural language processing were as futuristic as traveling to the moon. But it was never intended to be a technical benchmark; it was simply a thought experiment to illustrate how an abstract machine might emulate cognitive skills.

Prasad argues that AI’s value resides in advanced capabilities that go far beyond impersonating natural-language conversations. He points to AI’s well-established ability to query and digest vast amounts of information far faster than any human could manage unassisted. AI can process video, audio, image, sensor, and other types of data beyond text-based exchanges. It can take automated actions in line with inferred or prespecified user intentions, rather than through back-and-forth dialogues.

We can conceivably fold all of these AI faculties into a broader framework focused on ethical AI. Ethical decision-making is of keen interest to anybody concerned with how AI systems can be programmed to avoid inadvertently invading privacy or taking other actions that transgress core normative principles. Ethical AI also intrigues science-fiction aficionados, who have long debated whether Isaac Asimov’s intrinsically ethical laws of robotics can ever be programmed effectively into actual robots (physical or virtual).

If we expect AI-driven bots to be what philosophers call “moral agents,” then we need a new Turing test. An ethics-focused imitation game would hinge on how well an AI-driven device, bot, or application can convince a human that its verbal responses and other behavior might be produced by an actual moral human being in the same circumstances.
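To make that concrete, here is a minimal sketch of how such an ethics-focused imitation game might be scored, in the spirit of Turing's original setup: a judge reads two responses to a moral scenario, one from a human and one from a machine, and tries to identify the machine. Everything here (the function names, the judge interface, the scoring) is a hypothetical illustration for this article, not an established benchmark or API.

```python
# Hypothetical sketch of an ethics-focused imitation game.
# All names and interfaces here are illustrative assumptions.
import random
from typing import Callable

def ethical_imitation_game(
    scenarios: list[str],
    human_respond: Callable[[str], str],    # a human's answer to a moral scenario
    machine_respond: Callable[[str], str],  # the AI system under test
    judge: Callable[[str, str, str], int],  # returns 0 or 1: which answer it suspects is the machine
) -> float:
    """Return the fraction of rounds in which the judge fails to spot the machine."""
    fooled = 0
    for scenario in scenarios:
        # Randomize which slot the machine's answer occupies so the judge
        # cannot rely on presentation order.
        machine_slot = random.randrange(2)
        responses = [None, None]
        responses[machine_slot] = machine_respond(scenario)
        responses[1 - machine_slot] = human_respond(scenario)
        guess = judge(scenario, responses[0], responses[1])
        if guess != machine_slot:  # the judge picked the wrong respondent
            fooled += 1
    return fooled / len(scenarios)
```

As in Turing's original game, the machine would "pass" not by being certified as moral in some absolute sense, but by being indistinguishable from a moral human: a fooled rate approaching 0.5 would mean the judge is guessing at chance.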

Copyright © 2021 IDG Communications, Inc.
