Each year, I make an effort to attend the IET / BCS Turing Talk in London, and over the past few years I've heard talks by leading minds in the fields of Artificial Intelligence (AI), Machine Learning and even Computer Vision. It is no coincidence that AI takes centre stage at this particular point in time, i.e. the dawn of what the World Economic Forum calls the fourth industrial revolution, because AI will likely have the most profound impact of all the technologies powering said revolution.
This year's edition of the Talk focused on AI bias, and on how AI mirrors and magnifies the biases of society and of the people who develop and deploy AI systems. The speaker, Krishna P. Gummadi, painted a clear picture of the resulting bias in the data, algorithms and usage of AI, as well as its negative impact on under-represented groups in society. He concluded with a three-point call to action to help address these issues:
- Implement fair learning objectives - develop algorithms that take into account the needs and presence of sub-groups within a general population. Error rates are key, especially the false positive, false negative, false omission and false discovery rates (see the sketch after this list).
- Provide unbiased learning data - address under-represented minorities in sample data. Biased labelling can lead to a self-fulfilling vicious cycle.
- Ensure unbiased representational data - address the huge gender bias in AI representation.
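To make the first point concrete, here is a minimal sketch of what measuring those four error rates per group might look like. It assumes binary labels and a single protected-group attribute; the function name and the toy data are mine, purely for illustration.

```python
import numpy as np

def group_error_rates(y_true, y_pred, groups):
    """Per-group false positive (FPR), false negative (FNR),
    false omission (FOR) and false discovery (FDR) rates
    for binary labels and predictions."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        t, p = y_true[m], y_pred[m]
        tp = np.sum((p == 1) & (t == 1))
        fp = np.sum((p == 1) & (t == 0))
        tn = np.sum((p == 0) & (t == 0))
        fn = np.sum((p == 0) & (t == 1))
        rates[g] = {
            "FPR": fp / max(fp + tn, 1),  # negatives wrongly flagged
            "FNR": fn / max(fn + tp, 1),  # positives the model misses
            "FOR": fn / max(fn + tn, 1),  # negative calls that were wrong
            "FDR": fp / max(fp + tp, 1),  # positive calls that were wrong
        }
    return rates

# Toy data: the model flags group 'B' far more often than it should.
y_true = [1, 0, 1, 0, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 1, 0, 1, 0]
groups = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']
for g, r in group_error_rates(y_true, y_pred, groups).items():
    print(g, {k: round(v, 2) for k, v in r.items()})
```

A large gap between groups on any of these four rates is exactly the kind of signal a fair learning objective would aim to drive down.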
Don't be fooled into thinking this will be an easy task. In adopting ethical or fair learning objectives, for example, one must understand and carefully navigate the dilemma inherent in minimising error rates for one group at the expense of another, or of the population as a whole. Furthermore, one may be forgiven for thinking, as the talk posited, that perhaps AI can "be engineered to help humans control (mitigate) bias or unfairness in their own decisions", but this may be dangerous, or simply lazy and wishful, thinking.
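One common way to formalise that dilemma, and this is a generic sketch rather than Dr. Gummadi's specific formulation, is to penalise the gap between group error rates alongside the overall error. The weight `lam` below is an illustrative parameter of mine that makes the trade-off explicit: zero optimises for the population alone and ignores sub-groups, while a large value pushes the groups' error rates together, usually at some cost in overall accuracy.

```python
import numpy as np

def fair_objective(y_true, y_score, groups, lam=1.0, thresh=0.5):
    """Overall error rate plus a penalty, weighted by lam, on the gap
    between the best- and worst-served groups' error rates."""
    y_true, y_score, groups = map(np.asarray, (y_true, y_score, groups))
    y_pred = (y_score >= thresh).astype(int)
    overall = np.mean(y_pred != y_true)
    per_group = [np.mean(y_pred[groups == g] != y_true[groups == g])
                 for g in np.unique(groups)]
    return overall + lam * (max(per_group) - min(per_group))

# Toy scores: group 'B' receives systematically inflated scores.
y_true  = [1, 0, 1, 0, 0, 1, 0, 0]
y_score = [0.9, 0.2, 0.8, 0.6, 0.7, 0.4, 0.8, 0.3]
groups  = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']
for lam in (0.0, 0.5, 2.0):
    print(lam, round(fair_objective(y_true, y_score, groups, lam), 2))
```

The same predictions score very differently as `lam` grows, which is the dilemma in miniature: the "best" model depends on how much weight you place on parity between groups versus raw accuracy across the whole population.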
In my opinion, AI does not have the level of maturity required at this time. It's like raising a child (with yourself as role model), scolding her or him for mirroring your worst behaviours, yet also expecting him or her to figure out where, when and how you got things wrong, then fix both it and you into the bargain! The point is that AI algorithms and the data that drive them are products of our society and cannot be expected to self-correct on the basis of the same flawed input. We need to do the heavy lifting of attempting to correct ourselves, and then let AI mirror and improve on the effort.
Finally, I think the Turing Talk organisers did well to feature Dr. Gummadi's research topic, and I, along with the rest of the audience, sat in sometimes uncomfortable silence as he described some glaringly racist, sexist and otherwise undesirable ills that plague society today, made all too concrete via AI-enabled outcomes. I say 'AI-enabled outcomes' because AI programs, algorithms and the like are not necessarily malicious in and of themselves, but can effectively become so for under-represented groups, with both intended and unintended consequences. De-biasing AI will remain a tall order unless those who develop and deploy it take the measures recommended above as a crucial first step in that journey.