Social and ethical audits are commonplace in many industries, especially manufacturing, where factories are often based in countries considered higher risk. Working hours, modern slavery and workplace safety are some of the items checked to help eradicate ethical issues from supply chains.
However, as this Harvard Business School article highlights, the spotlight should also be shone on the Big Tech companies, and in fact on any company that uses data as part of its services.
Why is that?
Well … using AI, data scientists can build algorithms that interpret massive amounts of data to present a summary decision or piece of insight. Whilst this can be amazing and transform our ability to make decisions, we have to remember:
- These decisions can have a significant impact on a company or an individual.
- These algorithms, and the data sets they are based on, may include bias, which if not monitored could propagate inaccuracies and unfairly target certain groups.
And because it's a digital service, any ethical issues these algorithms and their decisions create could scale to a huge number of people.
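To make "auditing an algorithm for bias" a little more concrete, here is a minimal sketch of one common check: comparing the rate of favourable outcomes a model gives to two groups (often called demographic parity). The function names, threshold, and loan-approval data below are all invented purely for illustration, and real audits use far richer metrics and context.

```python
# Minimal sketch of a demographic parity check.
# All names and data here are hypothetical, for illustration only.

def positive_rate(outcomes):
    """Fraction of decisions that were favourable (1)."""
    return sum(outcomes) / len(outcomes)

def parity_gap(group_a, group_b):
    """Absolute difference in favourable-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical loan-approval decisions (1 = approved) for two groups
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6/8 approved = 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3/8 approved = 0.375

gap = parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.3f}")

# A common (and debated) rule of thumb is to flag gaps above some
# threshold for human review -- the check informs judgement, it
# doesn't replace it.
THRESHOLD = 0.2
if gap > THRESHOLD:
    print("Flag for human review")
```

A gap this large would be flagged, but the key point echoes the ChatGPT quote later in this piece: the metric only surfaces a pattern; deciding whether it reflects genuine unfairness still needs human oversight.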
As a side note … I asked ChatGPT: "How could AI support the auditing of ethical issues and biases which may be built into an algorithm?"
Not surprisingly, it came up with some sensible ideas suggesting AI would be a good fit for the auditing approach.
Even better though, it suggested that "It's important to note that while AI can be a valuable tool in auditing algorithms, human oversight and expertise remain essential."
Sometimes the pace of change needs to be kept in check …