Future Ethics Dilemma

I mentioned last week that I’ll be filing CS&W reports from Vietnam (445 photos here) later this spring, when my wife and I will spend the better part of a month in Saigon. It will be our sixth visit. First-timers are generally awed by the crush of traffic, but I’ve found that you can get used to just about everything. Even Saigon traffic.

Speaking of traffic, though, and just in case you were looking for something new to worry about: an article in the New Yorker makes the point that at some not-too-distant future time, driverless cars will

“drive as well as or better than humans, smoothly adapting to rapid changes in their environments, like swerving cars or stray pedestrians.”

At that point,

“This would require the vehicles to make value judgments….”

For example,

“if a car detects a sudden obstacle—say, a jackknifed truck—should it hit the truck and kill its own driver, or should it swerve onto a crowded sidewalk and kill pedestrians? A human driver might react randomly (if she has time to react at all), but the response of an autonomous vehicle would have to be programmed ahead of time.”

In other words, answers to insoluble moral dilemmas will need to be programmed in ahead of time.

“What should we tell the car to do?”

Chew on that, and do let me know what you come up with.