If the last five years have taught us anything, it's that the pace of technological change has far outpaced the ability of institutions to understand it – never mind contain it. Franklin Foer makes this point in spades in World Without Mind: The Existential Threat of Big Tech.
This shouldn't surprise any of us. Only four percent of federal politicians in the United States possess backgrounds in technology. Some prominent ones don't even know Facebook's core business model. (I'm not kidding.)
Brass tacks: We're moving faster than ever, and our polarized world means that companies can often exceed the speed limit without getting caught. Uber, Google and Facebook are cases in point. The lesson: apologize only if you get caught.
It's not a stretch to say that there's a chasm today in the business world between can and should. I'm betting that most organizations will be able to use new data sources largely without any meaningful external checks and balances. This raises the question: What, if anything, will curtail their AI data-gathering and usage efforts?
Bad PR and public outrage
Remember the oft-cited Target example? The big-box retailer did nothing illegal; statistician Andrew Pole was merely using the arrows in his quiver. Still, most people felt the company's data-driven marketing went too far. At least with Target, though, a human being made decisions – not some impersonal machine.
It's hardly inconceivable that something similar will happen with AI sooner rather than later. When it does, expect a fast and harsh reaction, with social media serving as an accelerant. Recall how researchers fooled Google's image-recognition AI into classifying a 3D-printed turtle as a rifle. No, turtles are not guns.
Hacking and privacy
It's not hard to envision a scenario in which bad actors obtain data from smart devices and run with it. Think that's impossible? Consider the Roomba vacuum and its ability to collect and process intimate data on your home. I can think of many terrifying ways in which criminals could use this data.
Lack of effectiveness
While not quite as flagrant as the first two reasons, many organizations will retire or shutter certain data sources for a more prosaic reason: they simply don't work. In Analytics: The Agile Way, I describe how Google's people-operations department ignores GPA for certain positions. The tried-and-true measure didn't correlate with on-the-job performance.
Sure, increasingly powerful technologies such as neural networks can handle mind-boggling amounts of data. Odds are, though, that data without predictive power will go the way of the dodo.
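To make that concrete, here's a minimal sketch of the kind of screen an analytics team might run before retiring a data source. Everything in it is hypothetical: the synthetic data, the column meanings, and the thresholds. The point is simply that a feature failing a basic predictive-power check is a candidate for the scrap heap.

```python
# Hypothetical screen: does a candidate feature (say, GPA) actually
# predict on-the-job performance? All data here is synthetic.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)

n = 500
gpa = rng.uniform(2.0, 4.0, n)          # candidate feature (hypothetical)
performance = rng.normal(3.0, 0.8, n)   # review scores, unrelated to GPA here

r, p_value = pearsonr(gpa, performance)
print(f"correlation r = {r:.3f}, p = {p_value:.3f}")

# Hypothetical retention rule: no detectable signal, no seat at the table.
if p_value > 0.05 or abs(r) < 0.1:
    print("No meaningful predictive power -- candidate for retirement.")
else:
    print("Signal detected -- keep it and validate on fresh data.")
```

A simple correlation is a crude screen, of course; a real team would also probe nonlinear relationships and out-of-sample performance. But the principle stands.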
A legal and regulatory wake-up call
What if the EU strengthened GDPR? What if US politicians realized the obvious and finally did something about our deficient privacy laws? Pew Research found that an increasing number of Americans distrust the government to protect their data. And a SAS survey showed that US consumers think the government should do more to protect data privacy.
Industry lobbyists will invariably argue that such laws stifle innovation. They always do. Maybe some CEOs will realize that Tim Cook is right: privacy is a fundamental human right. Perhaps they'll also realize that legislation is inevitable. In that vein, a sensible set of laws beats something more draconian.
Indeed, there's increasing recognition that any AI policy requires a thoughtful data policy. From a recent New York Times piece:
"Access to data is going to be the most important thing for advancing science," said Antonio Torralba, director of the M.I.T. Quest for Intelligence project. So much data is held privately that without rules on privacy and liability, data will not be shared, and advances in fields like health care will be stymied.
Simon says
I'm hopeful that increased checks on technology, data and AI are coming – and soon. The very idea that insurance companies, for instance, can obtain personal data and use it to deny legitimate claims disturbs me. Let's hope the can-should gap closes soon.
Feedback
What say you?