What it All Means and Why it Matters
Transparency is a common buzzword when it comes to artificial intelligence (AI) and is often cited as one of the main principles to address the risks and harms associated with AI, including in CR’s recent policy recommendations on AI. But transparency can mean many things to different people. It can refer to disclosing the use of an algorithmic decision-making tool to consumers. It can also refer to gaining visibility into how a model, in particular a machine learning (ML) model, reaches a decision. This type of visibility can help to ensure ML models are operating in a fair and non-discriminatory manner.
Efforts to address transparency challenges are hampered by the lack of a common language or understanding around transparency and related concepts like explainability and interpretability. To get greater clarity on these issues, we sat down with Laura Kornhauser, co-founder and CEO of Stratyfy, and Shannan Herbert, Stratyfy Advisor, to hear what these terms mean to them in the context of ML models used for credit underwriting, and why transparency is so important for fair lending and financial inclusion. Stratyfy is a fintech company that equips financial institutions with inherently interpretable ML solutions to make more informed risk decisions in credit, fraud, and compliance. These solutions can be used to reduce bias, expand access, and ensure regulatory compliance.
CR: Let’s start with the bigger picture. What is the potential to leverage technology in underwriting to improve outcomes for long underserved communities? What examples are you seeing in your day-to-day work that excite you?
Stratyfy: We think there is tremendous potential to change the way that risk is quantified and evaluated through the use of technology. We have seen examples, through our own work and the work of others, that prove the value lenders can unlock by looking differently at underwriting standards they have employed for years. For example, lenders can identify where potential biases might be showing up. For many companies, doing this manually without long lapses between reviews was not previously feasible because of the time and resources required. But now, with technology, lenders can detect bias faster and proactively fix the way they are underwriting loans before negatively impacting their customers or their bottom line.
There are two approaches we’ve seen that really excite us. The first is lenders using technology, more specifically ML, to look at traditional indicators through a different lens. In other words, using similar data points to the ones they’ve always used, but in a much richer and more nuanced way. The second is lenders leveraging ML to re-evaluate their underwriting processes and move beyond traditional indicators of creditworthiness. With ML’s ability to quickly consume data and extract insights, lenders are using both approaches to get a better understanding of the true riskiness of their applicants.
Another example that we’re excited about is lenders taking a more proactive stance to uncover bias, and doing that in both the model development phase and on an ongoing basis. Bias should never be a set-it-and-forget-it issue. You can have the best intentions in the development phase, but once that model gets deployed in the real world, things change. That’s another place that technology has massive value to offer. Technology enables lenders to look critically at how they’re making decisions, robustly evaluate for potential harmful biases, and then have the insights needed to take action right away.
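One simple way to picture this kind of ongoing bias check is sketched below. It is a minimal, generic illustration rather than a description of any particular vendor's tooling, and the column names, groups, and the 0.8 threshold are illustrative assumptions.

```python
# Minimal, generic sketch of ongoing bias monitoring (not any vendor's product):
# compare approval rates across applicant groups on recent decisions using an
# adverse impact ratio. Column names, groups, and the 0.8 threshold are
# illustrative assumptions only.
import pandas as pd

def adverse_impact_ratio(decisions: pd.DataFrame,
                         group_col: str = "applicant_group",
                         approved_col: str = "approved",
                         reference_group: str = "reference") -> pd.Series:
    """Approval rate of each group divided by the reference group's rate."""
    rates = decisions.groupby(group_col)[approved_col].mean()
    return rates / rates[reference_group]

# Example: flag any group whose ratio falls below a chosen threshold.
recent = pd.DataFrame({
    "applicant_group": ["reference", "reference", "group_a", "group_a", "group_a"],
    "approved":        [1,           1,           0,         1,         0],
})
ratios = adverse_impact_ratio(recent)
print(ratios[ratios < 0.8])   # groups whose approval rate lags the reference group
```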
CR: Why is transparency so important for fair and responsible AI? What exactly needs to be explained, and why? What issues arise when there is a lack of transparency, for consumers as well as for financial institutions?
Stratyfy: Transparency can mean different things to different people. One type of transparency is understanding the ins and outs of how an AI model works. Knowing what’s happening within the model, how it is taking inputs, combining them, and reaching a prediction. In other words, knowing what’s happening inside the box. That’s the level of transparency we would recommend for high-stakes use cases that truly impact people’s lives, such as determining who gets a loan or a job.
The danger when you don’t have sufficient transparency into how your AI model works is that, especially as a financial institution, you will have to explain the model’s outputs to your regulators, and if you can’t explain how the model arrived at those outputs, that could pose major problems. You need to be able to explain these outputs to your customers as well; otherwise you face reputational risk and bad customer experiences.
CR: What challenges do ML models in particular pose to transparency?
Stratyfy: ML models take larger sets of data into the model’s calculation and often have a high level of complexity in how they combine and transform that data in order to reach a prediction. These increased levels of complexity can create what are called “black boxes”, as it becomes challenging to understand how the model arrives at a prediction.
Using these types of models can seem very attractive due to their ability to take vast amounts of input data, quickly draw complex relationships, and spit out a number, which companies think will enable them to easily make better or more informed business decisions. But this thinking involves many assumptions. The biggest assumption is that the data you are feeding into the ML model is accurate and representative, which is not always the case. That’s where you run into some major issues, particularly in credit underwriting: the data you’re feeding into ML models often includes biases from the past and present, so you can’t rely on that data alone to give you the best answer.
This gets to the real need for transparency. Transparency is about understanding how a model is working. But what users also require is the ability to change how a model works in order to address these inherent issues with biased data, or biases within a model. We think this is an under-appreciated benefit of transparency, and in particular interpretability. Having a high level of interpretability gives you the ability not only to see inside the box, but to actually make changes as needed. This has been huge for our customers in Stratyfy’s work on bias mitigation.
CR: You brought up interpretability just now. Explainability and interpretability have both been put forth as potential solutions to achieve greater transparency, but these concepts are often not consistently defined or understood, and sometimes are even used interchangeably. How would you define explainability versus interpretability?
Stratyfy: These terms are definitely batted around in very loose ways. We define these terms by borrowing from Cynthia Rudin, who was one of the first people to shine a light on these concepts. We consider interpretability to be a feature of the modeling approach that one can choose to employ when selecting models to solve a particular prediction problem. Employing interpretable modeling approaches means that transparency is baked into the model from the ground up as it is being built, so that the resulting model and its predictions can be easily understood by humans. That’s interpretability.
By contrast, explainability is when you use another modeling approach that has more of a “black box” nature, so the model is not inherently transparent. You then employ additional approaches on top of the “black box” model that you built, using what are called post hoc explainability techniques, to provide some visibility into how that model is working.
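To make the distinction concrete, here is a toy sketch in Python using scikit-learn. It is purely illustrative: the synthetic data and feature names are assumptions, not anything drawn from Stratyfy's models.

```python
# Interpretability, illustrated: a shallow decision tree whose learned rules
# can be printed and read directly. Synthetic data and feature names are
# purely illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                        # stand-ins for credit attributes
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["income", "utilization", "history_len"]))
# The printed if/then rules are the model itself: transparency is built in.
# An explainability workflow, by contrast, fits a black-box model first and
# approximates its behavior after the fact (see the SHAP sketch further below).
```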
CR: There are several relatively well-known post hoc explainability techniques that have been developed, such as SHAP and LIME. What do you consider to be the drawbacks of relying on post hoc explainability techniques to address transparency issues?
Stratyfy: There are a few key drawbacks. From a technical standpoint, post hoc explainability techniques are typically not that robust. For example, they do not capture the underlying non-linear patterns or relationships within the data that may play a major role in a model’s outcomes, such as loan approval or rejection. The other major drawback is that they often do not give you a recipe or roadmap for making changes to the model.
At a broader level, a key element of transparency that doesn’t receive enough attention is transparency for whom. For whom do you want to ensure that your models and decisions are transparent? Internal stakeholders? External parties like regulators? Your customers? We think the answer should be all of the above.
And that’s the other major drawback where a lot of explainability methodologies miss the mark: the tools typically leveraged make it hard to achieve transparency for a range of stakeholders. Plenty of explainable AI approaches produce explanations that are not accessible to individuals without the background and familiarity with ML technology needed to interpret them. Interpretable models, by their nature, can be understood by a range of stakeholders.
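For readers who want to see what a post hoc workflow looks like in practice, here is a minimal sketch using the shap package on a gradient boosted model. The data is synthetic and the point is structural: the explainer attributes each prediction to features after the fact, but it offers no direct handle for changing the model itself.

```python
# Minimal sketch of a post hoc explainability workflow (assumes the `shap`
# package is installed). Synthetic data; the point is the shape of the
# workflow, not the numbers.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] * X[:, 2] > 0).astype(int)    # synthetic target

black_box = GradientBoostingClassifier().fit(X, y)    # the model itself is opaque

explainer = shap.TreeExplainer(black_box)
attributions = explainer.shap_values(X[:5])           # per-feature attributions for 5 rows
print(attributions)
# These values describe the model's behavior locally, after the fact; they do
# not, by themselves, tell you how to repair a bias found in the data or model.
```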
CR: If that’s the case, do you think current post hoc explainability techniques are sufficient to meet existing regulatory requirements? For example, with respect to requirements to provide adverse action notices to consumers?
Stratyfy: At the moment, it’s difficult to say for sure. Regulators have not yet taken a firm stance on whether post hoc explainability techniques are sufficient for fair lending compliance, and are leaving that decision, and the gauging of the associated risks, to financial institutions. The Consumer Financial Protection Bureau (CFPB) did issue guidance discussing the use of uninterpretable or “black box” models, emphasizing that you still have to be able to provide reasonable adverse action notices even if you are using complex models. The guidance also stated that post hoc explainers still need to be validated, which may not be possible with less interpretable models.
But again, you really need to think about the question of transparency for whom. How do you make it so that the person receiving the adverse action notice knows what they need to do to improve and qualify for a loan in the future? This is another opportunity for technology to be used proactively to help under-resourced and under-represented communities. Inherently interpretable models are more conducive to producing actionable adverse action notices.
In our work, we provide lenders with the functionality to select which factors they want to use for this type of analysis, to focus on actionable recommendations for consumers. For example, it is less useful to tell someone to deal with a past bankruptcy, because that’s not something they have the power to change. Interpretable models give lenders the ability to parse the analysis, highlight the factors impacting outcomes that consumers can actually do something about, and convey those factors to potential borrowers, which we think is much more valuable for consumers.
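As a rough illustration of that idea, the sketch below filters the factors behind a denial down to ones an applicant can act on. It is hypothetical: the factor names, contribution values, and the notion of an "actionable" list are our own assumptions, not Stratyfy's functionality.

```python
# Hypothetical illustration (not Stratyfy's product) of restricting adverse
# action reasons to factors an applicant can act on. `contributions` maps each
# factor to how strongly it pushed the score toward denial; names are made up.
ACTIONABLE = {"credit_utilization", "recent_missed_payments", "debt_to_income"}

def actionable_reasons(contributions: dict, top_n: int = 2) -> list:
    """Return the most influential actionable factors behind a denial."""
    relevant = {k: v for k, v in contributions.items()
                if v > 0 and k in ACTIONABLE}          # v > 0 means "pushed toward denial"
    return sorted(relevant, key=relevant.get, reverse=True)[:top_n]

contributions = {
    "past_bankruptcy": 0.9,            # influential, but not actionable today
    "credit_utilization": 0.6,
    "recent_missed_payments": 0.4,
    "debt_to_income": 0.1,
}
print(actionable_reasons(contributions))
# ['credit_utilization', 'recent_missed_payments']
```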
CR: Ok, so it seems there are some inherent benefits to taking an interpretability approach. What would you highlight as good practices for achieving interpretability? How does Stratyfy develop interpretable ML models?
Stratyfy: We have a patent-pending methodology for building models and decisioning strategies that employs the most cutting-edge interpretable approach in the market. We use this proprietary methodology in the model development phase as well as in the ongoing monitoring and testing phases. This approach allows our customers to achieve accuracy or performance that is on par with the more “black box” ML approaches while still having the benefits of interpretability.
More specifically, our methodology learns from data automatically, but everything that is learned from that data is ultimately visible to the user. For example, we are learning which interactions between variables are predictive of our ideal outcome, such as the likelihood of repaying a loan. Which attributes, when combined, have predictive power in realizing that outcome? How are those variables interacting, and how is that interaction weighted to have that predictive power? These are things that can be learned dynamically in an automated way, but our methodology makes these learnings visible to the user and, critically, changeable should they want to adjust them.
So rather than an opaque ML process for training and fine-tuning a model, we provide the ability to see into the training process and its results. We also provide those results in readable language so they are understandable to different stakeholders, both internally and externally.
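Stratyfy's methodology is proprietary, so the sketch below is a generic stand-in only: a tiny scorecard of human-readable rules whose weights could be learned from data but remain visible, and editable, to a reviewer. The rule definitions, names, and weights are all hypothetical.

```python
# Generic illustration only (not Stratyfy's patent-pending methodology): a
# small scorecard of human-readable rules. The weights might be learned from
# data, but they stay visible and adjustable. All names and values are made up.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    description: str
    condition: Callable[[dict], bool]
    weight: float                        # learned from data, but user-adjustable

rules = [
    Rule("utilization below 30%",            lambda a: a["utilization"] < 0.30, 1.5),
    Rule("no missed payments in 24 months",  lambda a: a["missed_24m"] == 0,    2.0),
    Rule("debt-to-income below 40%",         lambda a: a["dti"] < 0.40,         1.0),
]

def score(applicant: dict) -> float:
    """Sum the weights of every rule the applicant satisfies."""
    return sum(r.weight for r in rules if r.condition(applicant))

applicant = {"utilization": 0.22, "missed_24m": 0, "dti": 0.45}
print(score(applicant))                  # 3.5 -- every contributing rule is readable

# A reviewer who finds that a rule encodes bias can inspect it, change its
# weight, or remove it outright, because the rules themselves are the model.
```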
CR: On the flip side, are there drawbacks to taking an interpretable approach to developing ML models? To what extent are you constraining the complexity of a model, and hence facing a performance tradeoff?
Stratyfy: There is a real tradeoff, but we think that tradeoff is small. You often have a small tradeoff in performance if you’re measuring by Area Under the Curve (AUC) or other commonly used measurements of performance. But we think the benefits you get are worth that small tradeoff.
In many cases, that small tradeoff is over-emphasized by market participants as a rationale for why a “black box” or extra complexity is needed. That framing assumes the data you’re using to test the performance of a model is representative, when in reality the data is often not representative or has latent problems. So we would argue that the benefits you get from using an interpretable approach to developing ML models far outweigh that small, largely hypothetical decline in performance.
It’s important to note that there are use cases where this may not hold. In some instances, we’re asking machines to help us with extraordinarily complex tasks, such as facial recognition, where thousands or millions of inputs may be required; otherwise you can’t achieve accuracy at all. In these situations, the benefits of complexity can outweigh the costs.
But in the case of credit underwriting, this is not necessary. In our experience with credit underwriting, and in research we’ve conducted on this topic, inherently interpretable models can achieve just as much as complex models. And again, we find the associated tradeoff is typically very minimal, while the benefits you gain extend beyond transparency to model stability as well.
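For readers who want to see how such a tradeoff is typically measured, here is a small sketch comparing the AUC of an interpretable logistic regression against a gradient boosted model on held-out data. The data is synthetic, so the numbers are illustrative only; real gaps depend entirely on the portfolio and the features used.

```python
# Sketch of measuring the performance tradeoff discussed above: compare AUC for
# an interpretable model and a more complex one on held-out data. Synthetic
# data only; the actual gap depends on the portfolio and features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 6))
y = (X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic regression (interpretable)": LogisticRegression(max_iter=1000),
    "gradient boosting (black box)": GradientBoostingClassifier(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```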
CR: Where do you see the greatest need for regulatory clarity regarding transparency and interpretability requirements and standards? What would be most helpful from regulators on these topics?
Stratyfy: For one, there needs to be more consistency across regulatory bodies on how they view AI and ML technology. It can become confusing for financial institutions that have multiple regulatory bodies supervising them. We don’t necessarily want an overly prescriptive blueprint from regulators stating what is and is not allowed, because that would stifle innovation. But there are plenty of different frameworks right now, and we would argue that the vast majority of these are not specific enough. For example, just defining the terms we’ve discussed today like transparency, explainability, and interpretability would be quite helpful, so there is a common understanding across the market.
Guidance around how these systems will be evaluated would also be helpful. That would provide some clarity, particularly for those institutions that are still on the fence about using ML. We talk to many leaders in the community banking space and that’s always the main concern. They see the value in tools that transparently leverage ML and understand that these tools can be transformative, but they are concerned about how they’ll be evaluated for using this technology by regulators. These unknowns can prevent community banks from leveraging ML to reach more underserved consumers.
If these barriers are addressed, more community banks can feel comfortable using new technologies, and we can alleviate some of the fear and hesitancy that exists right now. Because there is so much good that can be done with ML.
CR: Specifically with respect to transparency, how do you think companies should be evaluated? What guardrails or standards should be put in place?
Stratyfy: For high-stakes use cases that really impact people’s opportunities such as decisions about who gets a loan or a job, there has to be a transparency standard that allows users to communicate with both technical and non-technical individuals regarding the inner workings of a model. Because only then can everyone trust and rely on ML capabilities and use them responsibly without running into regulatory risks or perpetuating bias.
And we believe that you can only achieve this with interpretable approaches, as these provide you with the transparency needed to communicate with a range of stakeholders, including borrowers. Additionally, having the ability to go inside the box and make changes enables you to intervene in cases where your data or your model contains biases, which is nearly every case.
We hope this Q&A helped shed some light on the complex and still evolving topics of AI/ML transparency, explainability, and interpretability. Our main takeaways are that: (1) it’s important to consider the level of transparency required to meet the needs of multiple stakeholders, (2) an interpretable approach to developing ML models provides both transparency and the ability to adjust ML models to address inherent biases, and (3) in the case of credit underwriting, performance tradeoffs with an interpretable approach are minimal and counterbalanced by the benefits of transparency.
If you would like to help support the need for clearer standards and guardrails regarding transparency and algorithmic discrimination, please sign CR’s petition to the CFPB calling for clear guidance on the fair and responsible use of ML models in lending.