How can human beings remain relevant alongside Artificial Intelligence, when this technology appears to challenge the very nature of what it means to be human? Surprisingly, science fiction may hold a clue to this all-too-real question.
Written by Jessica Tenuta, Packback Co-founder and Chief Product Officer
The celebrated science fiction author Isaac Asimov is perhaps best known for his Three Laws of Robotics. To ensure a future in which robots continue to work for the good of humanity, and do not grow beyond the control and understanding of the humans who created them, every robot in Asimov’s stories is programmed with a set of laws that prioritizes the protection of humans, even over the robot’s own self-preservation.
While the Three Laws of Robotics are of course works of science fiction, Asimov’s approach of building “checks and balances” into the foundation of a potentially dangerous technology can teach us very real lessons about our own relationship to Artificial Intelligence (AI).
Artificial Intelligence is a broad term for any technology that performs tasks which would otherwise require human cognition. In the education sector, educators are collaborating with AI to the benefit of students nationwide. Instructors are using AI to automate administrative tasks, such as moderating online discussions, freeing their time to engage with students on a personal level through feedback and intentional instruction.
AI is also being used to predict which students are at risk of withdrawing from classes; these alerts are shared with human guidance counselors, who can then reach out with direct coaching and support, which can help boost graduation rates. While we are still far from the sentient computer systems that science fiction calls to mind, AI already has the potential to improve human quality of life in unparalleled ways. But as with any powerful tool, AI has its own strengths, weaknesses, and inherent risks.
One such risk is the use of AI in judicial sentencing, where algorithms have been used in courts to guide sentencing recommendations. The problem? The data used to train these algorithms (past sentencing data, recidivism rates) carries all the bias of our historical justice system, which leads the algorithms to perpetuate those same biases in their recommendations.
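To make the mechanism concrete, here is a toy sketch in plain Python (all data fabricated for illustration, not from any real system): a “model” that simply learns each group’s historical rate of harsh sentences will faithfully reproduce whatever disparity was baked into its training data.

```python
# Toy illustration: a model trained on biased historical data
# reproduces that bias. All records below are fabricated.

from collections import defaultdict

# Hypothetical sentencing records: (group, received_harsh_sentence).
# For similar offenses, group B was sentenced harshly far more often.
history = [("A", 0)] * 80 + [("A", 1)] * 20 + [("B", 0)] * 40 + [("B", 1)] * 60

def train(records):
    """Learn each group's historical harsh-sentence rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [harsh, total]
    for group, harsh in records:
        counts[group][0] += harsh
        counts[group][1] += 1
    return {g: harsh / total for g, (harsh, total) in counts.items()}

def recommend_harsh(model, group):
    """Recommend a harsh sentence when the learned rate exceeds 50%."""
    return model[group] >= 0.5

model = train(history)
print(model)                         # {'A': 0.2, 'B': 0.6}
print(recommend_harsh(model, "A"))   # False
print(recommend_harsh(model, "B"))   # True -- historical bias, now automated
```

Nothing in the code is malicious; the skew comes entirely from the training data, which is exactly why auditing that data matters.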
Like the Three Laws of Robotics, we need to build in safety measures that control for inherent risks and guide the conscientious design of AI, without hampering the incredible potential of this technology to support human well-being and abundance. What do those safety measures look like in practice? Developing ethical guidelines for AI.
Developing an AI ethics policy today can help organizations use AI thoughtfully, in ways that align with their own values, prioritize the overall well-being of society, and mitigate the risks of bias and harm. As an example, Packback develops our own algorithms in accordance with our published AI Ethics policy.
While each organization’s policy may differ, we would propose the following “Three Rules of Ethical AI” which organizations can use to scaffold their own policies:
- Ethical AI must prioritize human well-being: This includes considering the displacement or replacement of people’s jobs, setting benchmarks for AI accuracy appropriate to the impact of a given use case, and auditing algorithms for potential bias. It should also weigh the potential upside, or “quality of life” impact, that a specific application of AI offers society.
- Ethical AI must be transparent and explainable: AI being used to make decisions that impact human lives must be explainable in plain language, and the means of decision-making should be transparent to the person(s) impacted by the AI.
- Ethical AI must defer to human oversight and consent: Data is the raw material of AI. Ethical AI must respect the consent of individuals for the use of non-public data to inform algorithms. Ethical AI must be overseen, audited, and held accountable by human beings, with a focus on reducing bias and promoting inclusion.
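The auditing called for above can start simply. One common check is demographic parity: compare the rate of positive decisions a model makes for each group, and flag large gaps for human review. The sketch below is a minimal illustration with fabricated decisions and a hypothetical policy threshold, not a complete fairness audit.

```python
# Minimal bias-audit sketch: demographic parity check.
# Decisions and the TOLERANCE threshold are hypothetical.

def positive_rate(decisions):
    """Fraction of decisions that are positive (True)."""
    return sum(decisions) / len(decisions)

def parity_gap(decisions_by_group):
    """Largest difference in positive-decision rates between any two groups."""
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Fabricated model decisions for two groups of applicants.
decisions = {
    "group_a": [True, True, True, False, True],    # 80% positive
    "group_b": [True, False, False, False, False],  # 20% positive
}

gap = parity_gap(decisions)
print(f"parity gap: {gap:.2f}")   # parity gap: 0.60
TOLERANCE = 0.10  # hypothetical policy threshold
print("audit passed" if gap <= TOLERANCE else "flag for human review")
```

A real audit would use more than one metric and more than two groups, but the principle is the same: the check is automated, while the judgment about what to do with a flagged result stays with people.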
While efforts are being made to develop an international set of ethical standards around AI, we may be years away from any enforced regulations on AI ethics. But companies can take proactive measures to thoughtfully design the role of AI in their organization right now. That’s the most important first step toward building a future that makes the most of this promising new technology.