Let’s get alarmed about something that seems not the least bit political — except, of course, eventually it is. Something we’ve always valued is slowly overturning many things we’ve always valued more.
Call it encroaching precision.
Our first problem is that precision cannot always be equated with accuracy, just as facts do not always reveal what’s true. There’s an argument to be made for “alternative facts” — assuming they are also correct and relevant — but I promised to steer us clear of politics.
Steering will come up again, so watch for it.
If I tell you that three out of five newspaper readers want the weather map on Page One, you might believe me. But if I assert that it’s really 59.37 percent of readers, you assume I’ve studied the issue deeply, even if I haven’t. The precision of the assertion is easily mistaken for accuracy, and so for believability.
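The gap between what a small sample supports and what two decimals imply can be made concrete. A minimal Python sketch (the function name and sample sizes are my own illustration, not from the column): a five-reader survey can only ever yield a handful of round percentages, while a figure like 59.37 percent presupposes a sample in the thousands.

```python
# False precision: which two-decimal percentages can a survey of n readers even produce?
def possible_percentages(n):
    return {round(k / n * 100, 2) for k in range(n + 1)}

# A 5-reader survey can only report 0, 20, 40, 60, 80, or 100 percent.
print(sorted(possible_percentages(5)))
print(59.37 in possible_percentages(5))      # prints False
print(59.37 in possible_percentages(10000))  # prints True: 5,937 of 10,000
```

The point is rhetorical, not statistical: the extra decimals signal a depth of study the underlying data may not have.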
Very few things that are actually true are knowable to two decimal points, at least to most humans. And that’s the second problem we face with increasing precision.
Sharing has become too easy. Forget Facebook and fake news — too political. In the analog world, artists knew how to get paid for their work. If you wanted to hear or see them, you had to go where they were and pay to be there.
Fans bought recordings either to relive the moment when they attended the concert or to imagine what being there must have been like. Excepting The Grateful Dead and a few other outliers, recordings were controlled by the artists or their agents.
Nobody worried too much about bootleg cassette recordings, because they weren’t very good copies. And copies of copies were dreadful. The technology was self-limiting. But digital copies have no such limitations. Exact digital copies can spread exponentially faster than analog approximations. And the original source may have been nothing more than an iPhone in the audience.
As video distribution expands, other performers are likewise worried about protecting their livelihoods. Phones can be checked at the door, but wearable and increasingly miniaturized devices will soon slip past those limits. Once holographic virtual reality takes hold, you and I may have difficulty discerning what’s real and what’s a copy.
Virtual reality programmers are already learning that they shouldn’t put the image of a chair into an imaginary room, because VR viewers cannot resist the urge to sit down. When they tumble backwards onto the floor, their VR helmet can’t help them.
Computers have their limits, but the most frightening problem is when they don’t. We’re well informed about the progress being made toward driverless cars and their promise to greatly reduce collisions that regularly occur on our roadways.
We’re hearing much less about how these automated vehicles will choose among multiple unavoidable collisions. The Massachusetts Institute of Technology has built what it calls a Moral Machine to demonstrate the life-or-death algorithms being added to so-called smart car technology.
For instance, if debris falls from a bridge and blocks the car’s lane, should it swerve left and collide with a school bus or swerve right and hit a crowd of pedestrians? Unlike humans, these machines can tabulate the potential for loss of life for each option in a millisecond, and respond accordingly.
That sounds like a comfort, until you consider there may be a third option. Optimizing human lives sounds like a wonderful goal for MIT and car-computer programmers, but what if the optimized outcome is for the car to do nothing, thus hitting the debris and killing the vehicle’s occupants?
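The choice the column describes reduces to a grim minimization. A hypothetical sketch, assuming a pure casualty-count objective (the maneuver names and numbers are invented for illustration and are not MIT’s actual model):

```python
# Hypothetical "least harm" chooser: tally expected loss of life
# per maneuver, then pick the option with the smallest toll.
def choose_maneuver(options):
    # options: dict mapping maneuver name -> expected casualties
    return min(options, key=options.get)

crash_options = {
    "swerve_left_into_bus": 12,    # illustrative numbers only
    "swerve_right_into_crowd": 8,
    "stay_in_lane_hit_debris": 1,  # the car's own occupant
}
print(choose_maneuver(crash_options))  # prints stay_in_lane_hit_debris
```

Under that objective, the optimizer can select the very option the column warns about: sacrificing the car’s own occupant.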
That sounds right from a cost-benefit analysis, but then a different question follows. Who will accept a ride in that driverless car? If people refuse to use driverless cars, what good can the technology do?
It’s getting late to be asking such questions, but this is the ride we’re on. Will we allow algorithms to be written that could determine our individual fates? If refusing those algorithms feels inhumane, then we’re faced with a terribly precise question. What exactly are we to make of ourselves, analogically speaking?
Don Kahle (firstname.lastname@example.org) blogs.