Proceed With Caution: Explainability, AI, And The Risks Of Regulatory Compliance

Dr. Jans Aasman was interviewed for this AI Business article:

Another facet of explainability relates to rules that further clarify points of understanding. Such rules can fortify the meaning of explanations to satisfy customers and regulators. Once organizations have established interpretability and explainability, they can take the outputs of ML algorithms and turn them into explainable rules, explains Franz CEO Jans Aasman. “Then you can say, well, the reason I’m not giving you this loan is because of these factors.”
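As a minimal, generic sketch of the idea, a rules layer can sit on top of a model's score and emit the explicit factors behind a decision. The feature names, thresholds, and scoring model below are illustrative assumptions, not Franz's implementation:

```python
# A minimal sketch of layering explainable rules over an ML score.
# Feature names, thresholds, and the approval cutoff are hypothetical.

def explain_loan_decision(applicant, score, approval_threshold=0.6):
    """Combine a model score with explicit, auditable rules to produce reason codes."""
    reasons = []

    # Explicit rules that translate directly into customer-facing explanations.
    if applicant["debt_to_income"] > 0.45:
        reasons.append("debt-to-income ratio above 45%")
    if applicant["recent_delinquencies"] > 0:
        reasons.append("delinquencies reported in the last 12 months")
    if applicant["credit_history_years"] < 2:
        reasons.append("credit history shorter than 2 years")

    approved = score >= approval_threshold and not reasons
    return {
        "approved": approved,
        "model_score": score,
        "reasons": reasons or ["meets all rule-based criteria"],
    }


decision = explain_loan_decision(
    {"debt_to_income": 0.52, "recent_delinquencies": 1, "credit_history_years": 5},
    score=0.71,
)
print(decision["approved"], decision["reasons"])
```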

This use of rules not only aids explainability but also helps manage customer relations around the results of complex AI models. “Now, instead of just applying the formula, you can also use additional rules,” Aasman offers. “Of course you have rules for how you want to deal with customers, and you apply the rules and then you can use continuous machine learning to see if your actions were positive or negative for the bank or for your customer.”
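The feedback loop Aasman describes can be approximated with any incrementally trained model: take an action under the customer rules, observe whether the outcome was positive or negative, and fold that signal back into the model. The sketch below uses scikit-learn's SGDClassifier purely as an assumed stand-in for such a continuously updated learner; the features and outcomes are hypothetical:

```python
# A minimal sketch of continuous learning from observed outcomes.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()            # supports incremental updates via partial_fit
classes = np.array([0, 1])         # 0 = negative outcome, 1 = positive outcome

def record_outcome(features, outcome):
    """Update the model with one observed action/outcome pair."""
    X = np.asarray(features, dtype=float).reshape(1, -1)
    y = np.asarray([outcome])
    model.partial_fit(X, y, classes=classes)

# Hypothetical stream of (customer features, observed outcome) pairs.
for features, outcome in [([0.52, 1.0, 5.0], 0), ([0.20, 0.0, 10.0], 1)]:
    record_outcome(features, outcome)
```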

“YOU NEED TO FIGURE OUT IF YOU CAN PREDICT FROM THE SIGNALS IN YOUR DATA WHETHER OR NOT SOMEONE IS GOING TO BE AT FINANCIAL RISK.”

– Jans Aasman, CEO, Franz

Read the full article at AI Business.