The Case For Regulating A.I.


There are many reasons why we shouldn’t regulate artificial intelligence, or “A.I.,” and one reason why we should.

The case against regulation was argued most recently in a New York Times op-ed that gave at least three reasons: 1) nobody can agree on what even constitutes A.I., 2) machines have been doing “smart” things for a long time, and nobody saw fit to regulate them, and 3) rather than regulating A.I. itself, if humans aren’t given visibility into what the silicon brains are doing, we should simply pass legislation that ensures such transparency.

Trusting technology is the popular position and, speaking as an avid user of all things technological, I’m a believer.

But there’s a line dividing the decisions my toaster makes about how long to burn bread from, say, an autonomous car forced into a zero-sum choice between saving the lives of its occupants and saving the pedestrians in front of it.

A.I. that decides what we know by what’s revealed on our smartphones, or what colleges or jobs we get based on a deep understanding of what we say and do, is different from a machine smart enough to direct an elevator to stop on the right floor.

The case for regulation is simple: We regulate the behavior of organic intelligence.

Why wouldn’t we do it for artificial versions, too?

In fact, you could see most of the laws by which we live as originating in a need to regulate human intelligence (or the lack thereof).

Speed limits and other road laws exist because we can’t trust that every driver will be smart enough to drive responsibly. Criminal law protects people from those who don’t understand that community and mutual responsibility make society possible. Building codes ensure that structures are constructed to withstand stiff winds, thereby thwarting a builder foolish enough to bet he could avoid bad weather. The list goes on.

Why would the behaviors of imperfect A.I. be treated any differently?

[Read the entire essay at Medium]
