There has been much discussion about the relationship between data science and artificial intelligence. It can become a complicated dance when applied data science is partnered with emerging artificial intelligence technologies. Who takes the lead? How do we keep the beat? Can we make sure neither party steps on the other's toes?
I like to think of data science, at least in part, as an application of artificial intelligence. Academics (and to some extent practitioners) create algorithms, while data scientists cull data and apply those algorithms. As the algorithms develop greater abilities to learn, machines will become more intelligent.
Learning to dance with this new partner will be a delicate balance of directing the algorithms (through informed feature selection, feedback loops, manual model parameter selection and business rule encoding) and letting them lead (through autotuning, optimization techniques and deep learning). These considerations will undoubtedly grow in importance as data science and automated decisioning expand into every corner of the organization and into our daily lives.
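The "directing versus letting them lead" balance described above can be sketched with a toy hyperparameter example: a human directs by fixing a parameter based on domain knowledge, while autotuning searches for it automatically. This is a minimal, purely illustrative sketch; the loss function and parameter names are invented for the example, not drawn from any real pipeline.

```python
# Toy illustration: manual parameter selection vs. autotuning.
# The "model" is a quadratic validation loss over one hypothetical
# hyperparameter `p`; everything here is illustrative.

def loss(p):
    """Pretend validation loss, minimized at p = 3."""
    return (p - 3) ** 2 + 1

# Directing: a human encodes domain knowledge by fixing p by hand.
manual_p = 2.5
manual_loss = loss(manual_p)

# Letting it lead: a simple grid search tunes p automatically.
grid = [i * 0.5 for i in range(0, 13)]   # candidate values 0.0 .. 6.0
auto_p = min(grid, key=loss)
auto_loss = loss(auto_p)

print(manual_p, manual_loss)   # 2.5 1.25
print(auto_p, auto_loss)       # 3.0 1.0
```

In practice the same tension plays out at larger scale: hand-set parameters and business rules keep the model interpretable and constrained, while automated search (grid search, Bayesian optimization, and the like) often finds better-performing settings the human would not have chosen.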
However, one area where we, as data scientists, most definitely need to take the lead is in developing and using ethical frameworks. Computers are getting better at simulating intelligence, but they still lag far behind in simulating human values.
Serious folks like Stephen Hawking, Bill Gates and Elon Musk are investing considerable time and energy in spreading the word about the perceived threat posed by unconstrained AI (that is, AI operating without ethical frameworks). As the chief practitioners of applied artificial intelligence, data scientists need to take the lead in setting up ethical frameworks that will keep AI on the right track.
Let’s start with a reasonable assumption: by the end of the century, machines will be a great deal more self-sufficient (indeed, self-driving cars, target-seeking military drones and smart coffee machines are already scientific fact). Self-sufficient in this sense means self-learning and self-modifying, and eventually identifying and acting in contexts for which the machine was not originally designed. If you extrapolate this progression of self-sufficiency to its natural conclusion, it could also mean that not even hard-coding back doors into such systems would be enough to completely eliminate the chance of undesirable machine behaviors.