Dr. David Hardoon, UnionBank Senior Adviser for Data & Artificial Intelligence, was recently invited to discuss his views on data science and artificial intelligence (AI) at an EFMA Sustainability & Regulation Community Best Practice Forum.
The Data & AI expert believes that “regulating AI is the wrong goalpost,” arguing that the objective must instead be safety and equality first, which promotes trust in the technology. He added that there are safety nets in place to mitigate the risks associated with it.
“Data, and to an extent, the AI as a mechanism and tool which manifests possibilities out of data, is an onion,” Hardoon said. “And what you find with this onion is that it’s not just about data. It’s not just about application. It’s not just about consumer engagement. It’s also about history. It’s also about our understanding of our own current behavior. It’s essentially opening up an immense view that potentially, previously, we were completely unaware of.”
Dr. Hardoon explained that AI may be broken down into at least three buckets. First is data, which may be historically good or bad, as it is a genuine representation of issues or errors that happened in the past or may happen moving forward. Then there is the AI system itself, specifically the approach used to extract information from the available data. And finally, there is the operationalization of the information that comes out.
“When thinking about operationalizing AI governance, it is imperative to have a broad appreciation of the risk that comes from your available historical data—the potential disadvantages, or errors, or issues, or elements that may result in lack of trust that may come from that,” Dr. Hardoon said.
He emphasized that the most important thing about operationalizing AI within an organization is trust. Dr. Hardoon likened this to how individuals trust their closest friends and family members.
“Our trust in them isn’t that they always are correct or even always tell the truth, but it is in their ability to say ‘I’m sorry, I made a mistake. Allow me to correct myself.’ That is the exact same principle which we need to hold ourselves accountable for when we’re applying new technology, in making sure we’re putting in place safety nets, in assuring that we are able to validate what we’re doing and making sure that we are doing the right thing.”
Dr. Hardoon said that as part of the ‘peeling’ process, especially with a new set of technologies, it is best to still have people in the loop.
“Not that the human may be any better, but we trust humans so far a bit more right now, until we get to that stage of realizing it’s good. Or perhaps in certain areas, we must simply accept that AI should never play a role, because we want to have the ability of continuous intervention in terms of outcome,” Dr. Hardoon concluded.