The information derived from all the data your organization can access is a great source of power. Information is power!
By applying artificial intelligence and machine learning models, you can infer details about a person or community that may be highly sensitive, and that may or may not be accurate. You must therefore decide whether it is fair to use them. With the widespread use of these technologies, personal and group profiles are already being used to take actions that affect our daily lives, including medical diagnosis, financial risk analysis for credit, selection of candidates for admission, and strategies for policing and national security, all without most of the population realising it or having any knowledge of it.
Accountability throughout the entire chain
As AI evolves and we begin to apply it to everything in real time, its presence and influence will grow far greater. Many more decisions with decisive impacts on our lives will be taken by robots (that is, AI models and algorithms). This raises many ethical issues – already covered in my article “Is Big Data a Big Ethics Problem?” – but also the issue of responsibility and accountability at every level: those who conceived the model, those who designed and programmed the model and its decision rules, and those at the highest level of the organisation who approved its implementation and execution.
All the professionals involved carry a huge level of direct and indirect responsibility, and it will only increase. In its Data Age 2025 white paper, IDC estimates that by 2025, 20 percent of the 163 zettabytes generated in the global datasphere will be life-critical, and a similar share will be processed in real time, where the event happens. The work of these professionals may therefore have direct or indirect consequences, including social and economic exclusion or even death – and several examples of this already exist. With this in mind, several questions arise:
- Who’s responsible?
- Will there be civil or criminal liability?
- Who will be penalised?
- Do we already have a legal framework in place that applies to such situations?
A college of data science professionals is needed
From the above, it becomes clear that in the near future data scientists will play a role in our lives as important as, or even more important than, that of a doctor, lawyer or judge. They will develop not only tools that these professions will use to make decisions (or that will even produce autonomous decisions), but also tools that will decide on their own in virtually every domain.
Therefore, it is my firm belief that a College of Data Science Professionals is needed to establish a code of professional conduct that presupposes a rigorous evaluation of each professional’s capabilities. This should include an assessment of the ability to interpret and understand other codes of conduct (e.g., when developing models for medicine, public safety, or the judiciary and criminal justice systems, professionals must understand the respective codes of conduct and ethical principles), as well as regulations such as the GDPR, which imposes very stringent rules on the collection and processing of personal data. Like other professional colleges, the College should include an ethics committee as guarantor of respect for the ethical principles underlying the professional activity.