Black History Month: Ethical AI and the Bias Within
“To err is human, to forgive, divine.” Humans make mistakes, and biases surface with and without intent. But when it comes to AI, those unintentional biases can have devastating consequences.
In her 2017 book, The Future of Leadership, Brigette Hyacinth writes: “using AI to improve efficiency is one thing, using it to judge people isn’t something I would support. It violates the intention on the applications of AI. This seems to be social prejudice masquerading as science…”
How often have big tech companies backtracked on their facial recognition software? What are the ethical implications of moving forward and leaving AI unchecked and unregulated?
When Social Sciences and Humanities Meet AI
From 2015 to 2019, the use of AI grew by over 250 percent, and the industry is projected to generate over $100 billion in revenue by 2025. As major businesses such as Amazon and IBM cancel or suspend their facial recognition programs amid protests against racial inequality, some realize that more than regulatory change is needed.
Since 2014, algorithms have shown biases along lines of race and gender. In a recent article from Time.com, a researcher demonstrated how inaccurate facial recognition predictions are for women of color in particular.
Facial recognition algorithms classified Oprah Winfrey, Michelle Obama, and Serena Williams as male. These are three of the most recognizable faces in the world, and the algorithms still missed the mark. They rest on the same machine learning principles used to challenge humans at strategy games such as chess and Go. Where's the disconnect?
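Part of the answer is how accuracy gets measured. A system that reports one aggregate accuracy number can look excellent overall while failing badly for a specific group. The kind of disaggregated audit that exposed these disparities can be sketched in a few lines of Python; the group names and numbers below are hypothetical, chosen only to illustrate the effect, and do not come from the article or any real benchmark.

```python
from collections import defaultdict

def per_group_accuracy(records):
    """Compute classification accuracy separately for each demographic group.

    `records` is a list of (group, predicted_label, true_label) tuples,
    representing a hypothetical audit log of a classifier's outputs.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Hypothetical audit: the aggregate accuracy looks respectable,
# but one group fares far worse than the other.
audit = (
    [("lighter-skinned men", "male", "male")] * 98
    + [("lighter-skinned men", "female", "male")] * 2
    + [("darker-skinned women", "female", "female")] * 65
    + [("darker-skinned women", "male", "female")] * 35
)

overall = sum(1 for _, p, a in audit if p == a) / len(audit)
print(f"overall accuracy: {overall:.2f}")          # hides the disparity
print(per_group_accuracy(audit))                   # reveals it
```

The design point: reporting only the aggregate number (roughly 0.82 here) conceals a 33-point gap between groups. Auditing per group is what made the disparity visible in the research the article describes.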
According to one author, it may be time to create a new field of study specific to AI. Though AI was created in computer science and computer engineering labs, the complexities of humans are more often discussed in the humanities. Such a field would also extend into business schools, race and gender studies, and political science departments.
How Did We Get Here?
At first blush, human history may not seem comparable to the rise of artificial intelligence and its applications. Yet it is human history and its social constructs that explain the racial and gender biases in AI ethics.
How deep-seated are such biases? What drives the inequalities when AI-enabled algorithms pass over people of color and women in job searches and credit scoring, or reinforce the status quo in incarceration statistics?
The disparity between the rational and the relational is the cornerstone from which to begin. Once again in Hyacinth's book, The Future of Leadership, the author tells a story of her mother explaining the community that formed around the simple task of washing clothes. Washing machines now exist and free people to do other things while the clothes are washed, but there is a key element, recounted by her mother, that washing machines lack: the benefit of community.
When her mother washed clothes, she did so alongside her surrounding community. They gathered to wash, visit, and connect. A job was completed, but the experience lingered on. And with the invention of a single machine, that particular bit of community was lost.
But it is community and collaboration that remind humans of their humanity. And it is from these psychological and sociological roles that artificial intelligence should learn: create connections between those who build the systems and those who will use them.
Building AI Forward
Voices once silenced and subjugated have opened doors to move artificial intelligence forward. It is the quintessence of ‘those who don’t know their history are doomed to repeat it’, with one difference: when it comes to this technology, there is no history to repeat. And so AI must be considered from a humanitarian angle. The ability to do great things with technology is writ in books and screenplays, and so are its dangers.
While it isn’t likely that an overabundance of ‘Mr. Smiths’ will fill our world, it is important that we continue to break out of the silos of science versus social sciences. If AI is to help humanity move forward, we must ensure humanity plays a role in teaching our machine learning systems how different we are from each other, and in considering the whole person, not just their exoskeleton.
If you’re interested in Data Science, Data and Technology, Machine Learning, or Robotics, Harnham may have a role for you. Check out our current vacancies or contact one of our expert consultants to learn more.
For our West Coast Team, contact us at (415) 614-4999 or send an email to firstname.lastname@example.org.
For our Mid-West and East Coast teams contact us at (212) 796-6070 or send an email to email@example.com.