Proactive AI

December 5, 2017

Some people think artificial intelligence (AI) will end civilization. These folks are not worried about today’s common forms of AI, but about tomorrow’s more decisive AI, the kind that can take action on its own.

Though their concerns are overblown, we are facing a situation where we can design technology not only to make decisions, but to act on them. Reactive AI lets Google Maps suggest a better route to avoid a traffic jam. Proactive AI will let your autonomous car take that route on its own.
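To make the distinction concrete, here is a toy sketch. Every name in it (Route, reactive_navigator, steer_to) is invented for illustration; no real navigation system works this way. Both functions reach the same decision; only the proactive one acts on it.

```python
# Toy contrast between reactive and proactive AI. All names here are
# hypothetical; neither Google Maps nor any carmaker works this way.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    eta_minutes: float

def best_route(routes):
    # The shared "intelligence": pick the fastest route.
    return min(routes, key=lambda r: r.eta_minutes)

def reactive_navigator(routes):
    # Reactive AI: computes a decision but leaves the action to a human.
    print(f"Suggestion: take {best_route(routes).name}")

def proactive_navigator(routes, steer_to):
    # Proactive AI: computes the same decision, then acts on it itself.
    steer_to(best_route(routes))

routes = [Route("I-5", 42.0), Route("Highway 99", 35.5)]
reactive_navigator(routes)
proactive_navigator(routes, steer_to=lambda r: print(f"Steering onto {r.name}"))
```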

Intelligence comes in stages

Much as a child develops, computers become “intelligent” in stages. For AI, the basic stages are learning, reacting and acting.

The first goal of intelligence, natural or artificial, is to make sense of the world around it. This is where the discipline of deep learning has excelled. The perennial example is the Google project that tasked computers with identifying cats in digital images. With a little guidance and a healthy amount of trial and error, computers got good at finding cats, and dogs, and people, and human targets on the battlefield, and other aspects of the world outside the computer.
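What that learning stage produces, in practice, is a trained network that maps raw input to class probabilities. A minimal sketch, assuming PyTorch and torchvision are installed and a local image file exists; the off-the-shelf pretrained ResNet-18 here is a stand-in, not the system Google used:

```python
# Hedged sketch of deep-learning image classification, in the spirit of
# the "find the cats" example. The model is a generic pretrained
# ResNet-18, not whatever Google actually used.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()  # inference mode: the "learning" already happened in training

img = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(img), dim=1)

top = probs.topk(5)
print(top.indices, top.values)  # five most likely ImageNet classes
```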

Interestingly, the deep learning stage of AI may be the springboard for finally eradicating cancer. Several AI-assisted cancer research projects are underway that use AI to analyze massive warehouses of patient genetic, treatment, and outcome data in search of common causes and the most effective therapies.
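The core of such a search can be pictured, very crudely, as grouping outcomes by genetic marker and treatment and ranking therapies by success rate. A toy sketch, assuming pandas and a flat table of records; the data is fabricated, and real projects use far richer models than a remission-rate average:

```python
# Toy stand-in for the kind of analysis those research projects run at
# warehouse scale. The records below are fabricated for illustration.
import pandas as pd

records = pd.DataFrame({
    "mutation":  ["BRCA1", "BRCA1", "KRAS", "KRAS", "BRCA1", "KRAS"],
    "treatment": ["A", "B", "A", "B", "B", "B"],
    "remission": [1, 0, 0, 1, 1, 1],
})

# Remission rate per (mutation, treatment) pair: a crude proxy for
# "search for the most effective therapies."
rates = records.groupby(["mutation", "treatment"])["remission"].mean()
print(rates.sort_values(ascending=False))
```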

Most AI today is racing past the learning stage and toward the reactive stage. As the moniker implies, these tools respond to environmental data, having learned something about the nature of the environment and its predictable outcomes. One promising company is AdviNow, which describes itself as “AI driven telemedicine.” AdviNow uses artificial intelligence to automate medical visits: after a brief computer-driven interview and a few patient-supplied vitals, it delivers a complete, objective diagnosis without any human interaction. What the patient’s doctor receives is a probabilistic breakdown of the likely illnesses and a recommended treatment.
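What a “probabilistic breakdown” of illnesses could look like under the hood, sketched with a generic naive Bayes classifier over invented symptom data (this is not AdviNow’s code; their actual method is proprietary):

```python
# Generic sketch of a probabilistic diagnosis from interview answers.
# Plain naive Bayes over fabricated symptom data, for illustration only.
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Columns: fever, cough, fatigue (1 = reported by the patient).
X = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 0],
              [1, 1, 1],
              [0, 0, 1],
              [0, 1, 1]])
y = np.array(["flu", "flu", "cold", "flu", "cold", "cold"])

model = BernoulliNB().fit(X, y)

patient = np.array([[1, 1, 0]])  # reports fever and cough, no fatigue
for illness, p in zip(model.classes_, model.predict_proba(patient)[0]):
    print(f"{illness}: {p:.0%}")  # the probabilistic breakdown the doctor sees
```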

This is the realm of reactive AI, which is sneaking up on proactive AI. In this scenario, as with today’s “autonomous” cars, a human still makes the critical decisions. Human intelligence has the distinct advantage of intuition, which is basically a gut instinct honed by experience. But there will come a day when the amount of raw data on a subject (cancer genetics, road and traffic conditions, medical diagnostics) is so large, and the probability that a machine-made decision is correct is so high, that we will relax the human grip and grant machines the power to take action.
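That handoff reduces to a simple rule: act autonomously only when confidence in the decision clears a bar, otherwise defer to a human. A minimal sketch, with an illustrative threshold:

```python
# Sketch of the handoff rule implied above: the machine acts on its own
# only when its confidence clears a bar; otherwise a human decides.
# The threshold and the decision text are illustrative.
AUTONOMY_THRESHOLD = 0.999

def dispatch(decision, confidence, act, escalate):
    if confidence >= AUTONOMY_THRESHOLD:
        act(decision)        # proactive: the machine takes the action
    else:
        escalate(decision)   # reactive: a human makes the final call

dispatch("reroute via Highway 99", confidence=0.9995,
         act=lambda d: print(f"Executing: {d}"),
         escalate=lambda d: print(f"Asking the driver to approve: {d}"))
dispatch("reroute via I-5", confidence=0.80,
         act=lambda d: print(f"Executing: {d}"),
         escalate=lambda d: print(f"Asking the driver to approve: {d}"))
```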

Why this is a problem, and how we will solve it

What scares people about proactive AI is mistrust. Arthur C. Clarke and Stanley Kubrick did us no favors by introducing the world to the fictitious HAL 9000 and its desire to kill people. Perhaps this is why Elon Musk is so fearful of AI.

But the solution is already in place. We humans learn to trust in degrees, through metered attempts at extending a process. We currently don’t let autonomous cars drive without a human co-pilot. We don’t let reactive AI systems make big decisions, but we do let them make many small ones. With each stage of maturity in a process, we certify success and release the next degree of autonomous action, as sketched below.
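A minimal sketch of that metered release, assuming a certified track record is the gate; the stage names and thresholds are invented:

```python
# Sketch of "metered" autonomy: each stage is unlocked only once the
# track record to date clears that stage's certification bar.
STAGES = [
    ("advise only",            0.0),     # always permitted
    ("act on small decisions", 0.99),    # needs 99% certified success
    ("act on big decisions",   0.9999),
]

def permitted_stage(successes, trials):
    rate = successes / trials if trials else 0.0
    return [name for name, bar in STAGES if rate >= bar][-1]

print(permitted_stage(successes=995, trials=1000))      # act on small decisions
print(permitted_stage(successes=99999, trials=100000))  # act on big decisions
```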

With the solution already established, the next big gold rush will be in proactive AI. The goal of automation has always been to relieve humans of repetitive drudgery. If a computer can make a perfect (or, in most cases, a near-perfect) decision, the cost of failure is low or zero. By letting the computer decide, we allow ourselves more time, more freedom, more wealth. Computers and AI should be more involved, and they will be, in small, well-managed and wise baby steps.

Why this is still a problem, and why it might not be solvable

This stepwise prescription only holds true if there is rational transparency. When there is none, there is danger.

If we look for a lack of institutional transparency, we find government. The interesting thing about government, aside from its proclivity for secrecy, is its power. A powerful yet furtive government, using AI to make and automate decisions, has huge implications. Let an AI system make a cancer misdiagnosis and you lose a patient. Let an AI system buried in a military bunker decide the responsive targeting of enemy combatants, and hundreds of civilians could perish. I won’t bother to describe the Dr. Strangelove extension of this.

For now, in the private sector, the big plays are publicly accessible big data warehouses and reactive AI. More data available for massive deep learning will lead to more radical reactive solutions. Tomorrow’s big play will be the bit-by-bit growth of trust in the conclusions AI reaches, and the white-knuckle release of our fears about letting machines take metered, tested, sane action.

Originally published at Forbes
