Disrupting Risk Management through Emerging Technologies

Download Slides

In financial markets, credit card companies always need to measure risk optimally and understand the performance of products before investing and making strategic decisions. At Capital One we are leveraging technologies to provide end-to-end analytical experiences for modelers and to enable self-service solutions for analysts and key stakeholders, allowing them to run loss forecasting scenarios seamlessly, perform gaming analysis, compare results of model runs, create new features, gain insights and produce outputs.

Watch more Spark + AI sessions here
or
Try Databricks for free

Video Transcript

– Okay, good morning everyone. Welcome to our session, Disrupting Risk Management Through Emerging Technologies. The presentation will be given by Badrish Davay and me, Yuri Bogdanov. We work at Capital One as part of the technology group supporting risk management in our organization. A little bit about Badrish and myself. Badrish is currently a Senior Manager of Engineering at Capital One. He has broad and long experience delivering software systems in the financial and mortgage space, is well known for building scalable applications in the big data and event streaming domains, and has implemented many complex data-driven solutions. Prior to landing at Capital One, he worked in healthcare and professional services organizations, primarily on big data ecosystems. About myself: I currently hold the role of Senior Director of Engineering at Capital One and also have significant experience building applications in big data and event streaming.

Prior to coming to Capital One, I managed programs in both the federal and telecommunications space. And as I mentioned before, we are part of the risk management, consumer credit risk management organization at Capital One.

I've been at Capital One since 2016, and I was very fortunate to participate in cloud and data transformation initiatives here. For the first couple of years of my journey with this great company, I led the technology transformation at one of the divisions of the credit card business called Upmarket, and for the last couple of years I've been in risk management, in particular this organization, consumer credit risk.

As you can see in this quote, "By 2025, risk functions in banks will need to be fundamentally different from what they are today. Unless banks start to act now, they may be overwhelmed by the new demands they will face." That's a quote from McKinsey on the future of risk management in banking. Of course, in financial markets, credit card companies always need to measure risk optimally and understand the performance of products before they can invest and make strategic decisions for particular products.

So how does this apply to credit risk? And what is credit risk? What are the systems and applications that need to be built to support the company's lending and deposit businesses? In our case that means products such as domestic and international card, auto, and consumer banking, along with the ability to provide credit oversight. Credit risk is defined as the risk arising from the borrower's failure to comply with the terms of the credit contract. This might happen when the customer is late in debt repayment, for example late on credit card payments, does not fully pay the debt amount, or fails to pay when principal and interest amounts are due. That creates significant risk and causes financial losses and difficulties in the business activities of the bank.

For the company to successfully stay in control of credit risk management, there is a process of identifying and analyzing risk factors, measuring the level of risk, and then making the right decisions to manage credit activities.

When thinking about losses and opportunities in credit risk, there are six questions, we call them the W questions, that are asked every day.

Acceleration of in-depth analysis: what, who, when, where, why and what-if

Those questions are what, who, when, where, why and what-if, and they are applied to all the data that the consumer credit organization consumes. How do we perform in-depth analysis to answer even the deepest analytical questions and cover those what-if scenarios? How do we identify portfolio-level shifts and leverage the power of analytics and machine learning to make the right decisions efficiently and accurately? Those are the questions we're constantly trying to tackle when working as part of the consumer credit organization at Capital One. So we're drilling down into individual portfolios and performing what-if analysis in addition to looking at the data on the account level. At the end of the day, we have to make information available and very clear to the company, so the right decisions in the consumer credit space can be made.

When we talk about the solutions and the systems we're building, it's about bringing together this full suite of traditional model development, execution, analysis and machine learning tools.

Bring together the full suite of traditional model development, execution, analysis and machine learning tools

This starts with the vertical and horizontal forecasts. We are also on the way to creating systems to analyze and refresh our loan-level visualizations very quickly. When consuming enterprise-level data, our technology solutions take care of data quality while processing the actuals, and then potentially create dynamic features to run competing models for early trend detection.

What-if analysis is a very critical part of the work the consumer credit organization does, with gaming scenarios at the core of it to answer common what-if questions.

So we are on the way to creating solutions that give our analysts these capabilities, enabling new feature exploration in minutes.

What are the main principles and components that are critical for our systems? First of all, it's ease of use: the ability to have those interfaces on the go so analysts, modelers, and potentially even executives can check metrics and execute what-if scenarios quickly.

It is also very important to establish what we call push-button forecasting, where we can execute competing models at the push of a button, then present high-level results and drill down as needed from on-the-go devices. How do we create that deep analysis and leverage machine learning tools to perform what-if analysis and test hypotheses? Those are the questions, and those are the systems we're trying to build. Of course, security is our first and foremost principle, as we work with data that must remain compliant. And how about also introducing some sort of chat capability where loss forecasting experts can provide meaningful information to executives on the go? So as part of this presentation, I'm going to pass it to Badrish, and we're going to demonstrate a use case where we experiment and apply those technology solutions to the problem statements just mentioned, and hopefully show you, within this particular use case, what we've done and how we can apply different technologies to make the above principles happen. – Thanks Yuri for a great introduction on the CCRM side.

Imagine if we can make these interactions…

Hello, everyone. This is Badrish, and I lead a few of the initiatives in the risk tech community and the consumer credit risk management organization at Capital One, as Yuri introduced. So here is an interesting use case I would like to share with you all. Just imagine: in today's world, we have users who need to make decisions on the capital market situation and interact by either sending out an email or a Slack message to their business counterparties, to see if there is an insight available with predictions or forecasting of the health of the credit card markets, based on new scenarios. Then, on the right side, what you see is that the business owners would start coordinating with the tech and product owners to analyze, run the models, see if the predictions are accurate and validated, generate the reports, make the reports available to the users who initially requested them, and then the user can make decisions based on the report, what they see in the presentations or some form of graphical interface.

Now, imagine if there is a system which the same user can interact with at their fingertips, and with a few clicks and messages with the bot, they can get a pretty good insight into where the trends are leading. It can generate the trends, including new features added on the fly. What really matters here is to see the trend, the percentage trend, which would drive some key decisions or help decision makers use this capability as one of the tools to contribute and make the decision right there, and this is what our use case is all about. This is what we call disruptive thinking.

Interactive Insights Bot

To support our use case, what are the critical capabilities required to make a trusted solution? Here they are: it should be interactive with the user; the visualizations and graphs should refresh in a few seconds, without having to wait too long; the ability to add dynamic features on the fly to compute the models; the ability to work with large amounts of mined data to create features from; and the ability to add or remove multiple features on the fly, to influence and clearly see the predicted graph.

Interactive Insights Bot Architecture

Here is the high-level architecture, which gives you an end-to-end glance at what components are involved in making this happen.

First, the user sends a prediction request to the Slack bot; the request is sent via a Lambda API to an SQS queue.
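
As a minimal sketch, this first hop could look like the Lambda handler below. The queue URL, environment variable, and payload field names are illustrative assumptions, not the actual Capital One implementation.

```python
# Hypothetical API-facing Lambda: receives the Slack bot payload and
# enqueues it on SQS for asynchronous processing.
import json
import os

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["REQUEST_QUEUE_URL"]  # assumed environment variable

def handler(event, context):
    body = json.loads(event["body"])  # Slack request payload (assumed shape)
    message = {
        "user": body.get("user_id"),
        "command": body.get("text", ""),   # e.g. "run w/ addFeature"
        "channel": body.get("channel_id"),
    }
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(message))
    # Acknowledge right away so Slack does not time out; results come later.
    return {"statusCode": 200,
            "body": json.dumps({"text": "Request received, working on it..."})}
```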

There is another Lambda listening for these requests; it processes each one, either running a model, generating the graph, or showing the results, depending on the request type.
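
A sketch of that worker Lambda might dispatch on the first token of the command, as below. The handler function names are placeholders for illustration only.

```python
# Hypothetical worker Lambda triggered by the SQS queue: it routes each
# request to the matching action based on its type.
import json

def run_model(request): pass      # placeholder: kick off a model run
def add_feature(request): pass    # placeholder: add a feature on the fly
def show_results(request): pass   # placeholder: render/return a graph

HANDLERS = {"run": run_model, "addFeature": add_feature, "show": show_results}

def handler(event, context):
    for record in event["Records"]:             # SQS delivers a batch
        request = json.loads(record["body"])
        action = request["command"].split()[0]  # first token picks the action
        HANDLERS.get(action, show_results)(request)
```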

The machine learning models are trained and tested using Python packages like scikit-learn with a logistic regression model, and generate the output predictions or actuals based on the request received from the bot.

The machine learning model engine then writes the output to the S3 bucket, and at last the Slack bot displays the output as soon as the graph image is ready.

Here is an interaction workflow showing, step by step, what happens as the request comes in from the user or decision maker.

Interaction Workflow

The user submits a request either to show or to run the model.

The request is sent to the Slack bot via the Slack bot message area, shown here.

That particular message is sent to the SQS queue via the Lambda as soon as it is received from the bot.

There's a Lambda waiting to listen for that message. As soon as the Lambda receives a message from the SQS queue, it processes the message and generates the dataset required by the machine learning engine, which processes the data, generates the output results, and stores them in the S3 bucket.

And now at last, the Slack bot displays the graph from the S3 bucket.
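
One way this last step could be wired up is sketched below: once the rendered graph lands in S3, the bot posts it back to the requesting channel. slack_sdk is one possible client library; the bucket, key, and token names are placeholders.

```python
# Hypothetical display step: generate a short-lived presigned S3 URL and
# post it to Slack so the image renders without making the bucket public.
import os

import boto3
from slack_sdk import WebClient

s3 = boto3.client("s3")
slack = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

def post_graph(channel, bucket, key):
    url = s3.generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=3600
    )
    slack.chat_postMessage(
        channel=channel,
        text="Your forecast graph is ready",
        blocks=[{"type": "image", "image_url": url, "alt_text": "forecast graph"}],
    )
```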

There is some analysis required in the middle which is not real time but done on an as-needed basis by our data analysts and data scientists, to look at the features and the data generated by the Lambda and the backend engine, and to examine some of the graphs and trends.

Analytics (Databricks for Inputs/Outputs)

So Databricks has been one of the key enablers for us to do most of our analysis and make the model-ready datasets available to support additional features on the fly. These are very, very large datasets, and it's not easy to generate these features on the fly; the ability to add those features at runtime has been made much, much easier using Databricks. Understanding the true platinum-level features, which are key for running these models, is made much simpler by Databricks, as is identifying the outliers and filtering them out, as you can see at the bottom of the right screen. Once the process of generating these new features is configured, it is automated using Databricks.
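
A minimal PySpark sketch of this kind of outlier filtering and on-the-fly feature generation, assuming a Databricks notebook (where a `spark` session is provided), is shown below. The table and column names (`raw_accounts`, `charge_off_rate`) are illustrative only.

```python
# Filter outliers on a raw metric, then derive a new model-ready feature.
from pyspark.sql import functions as F
from pyspark.sql.window import Window

df = spark.table("raw_accounts")  # `spark` is provided by Databricks

# Compute simple percentile bounds and drop rows outside them (outliers).
low, high = df.approxQuantile("charge_off_rate", [0.01, 0.99], 0.001)
clean = df.filter(F.col("charge_off_rate").between(low, high))

# Derive a feature on the fly, e.g. a 3-month rolling average per account.
w = Window.partitionBy("account_id").orderBy("month").rowsBetween(-2, 0)
features = clean.withColumn("co_rate_3m_avg", F.avg("charge_off_rate").over(w))

features.write.mode("overwrite").saveAsTable("model_ready_features")
```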

Once the datasets are pruned to arrive at a reduced list of features, they are consumed by the machine learning model stage. The data is split into training and testing sets. We are using a simple linear regression model to fit and predict the results for a specific timeframe. It is based on the bring-your-own-model principle: any model could be plugged in to carry out the predictions. The actuals and the predictions are plotted side by side, and metrics like root mean squared error are captured to gauge the effectiveness of the model. Finally, the results are exported to a PDF and stored in S3 for the front-end bot to display to the users. The same steps are followed after a new feature is added to or removed from the dataset, and the results are compared.
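
A sketch of this model stage under the bring-your-own-model principle is below: train/test split, a simple linear regression, RMSE, and the plot exported to PDF and stored in S3. The file, bucket, and column names are assumptions for illustration.

```python
# Hypothetical model stage: fit, predict, score, plot, and export to S3.
import boto3
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

df = pd.read_parquet("model_ready_features.parquet")  # pruned feature set
X, y = df.drop(columns=["charge_off_rate"]), df["charge_off_rate"]
# shuffle=False keeps the time ordering for a backdated forecast.
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)

model = LinearRegression().fit(X_train, y_train)   # any model plugs in here
preds = model.predict(X_test)
rmse = np.sqrt(mean_squared_error(y_test, preds))  # effectiveness metric

# Plot actuals and predictions side by side and export to PDF.
fig, ax = plt.subplots()
ax.plot(y_test.to_numpy(), color="black", label="actuals")
ax.plot(preds, color="blue", label="predicted")
ax.set_title(f"Charge off rate (RMSE={rmse:.4f})")
ax.legend()
fig.savefig("forecast.pdf")

boto3.client("s3").upload_file("forecast.pdf", "results-bucket", "forecast.pdf")
```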

So here is the help output describing the list of commands the bot supports.

@risk-ml-bot help

So here you can see the user can request to run the model or show the predictions, and so on.

@risk-ml-bot show actuals by feature

Here is the output of the actuals for the last 12 months, from May 2019 until May 2020, for the charge-off rates. As you can see in the trend line, the x-axis is the month and the y-axis is the charge-off rate in percentage, and the peak charge-off rate is somewhere around the middle.

@risk-ml-bot run w/ addFeature

Now we are asking the bot to add a new feature to our existing list of features, and we get confirmation from the bot that the feature has been added successfully.

Here the feature name, we are calling it COVID data.

It is nothing but a column that gets added to the existing dataset, alongside the hundreds of columns that already exist, to generate the predictions.
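
A hedged sketch of what this "add feature" step could do behind the scenes is below: join the new COVID-data column onto the existing wide feature table by month. The table and column names (`covid_data`, `covid_index`) are illustrative, and `spark` is assumed to be a Databricks-provided session.

```python
# Join a new column onto the existing wide feature set, then persist it.
features = spark.table("model_ready_features")  # hundreds of existing columns
covid = spark.table("covid_data")               # columns: month, covid_index

augmented = (features.join(covid, on="month", how="left")
                     .fillna(0, subset=["covid_index"]))

augmented.write.mode("overwrite").saveAsTable("model_ready_features_v2")
```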

@risk-ml-bot show predictions by feature

After the feature has been added, we can now ask the bot to predict the same metric, charge-off, for the last 12 months and do the backdated predictions, which you can see in the blue line.

Now let's see the actual demo from the Slack bot, requesting the same list of commands; I'll show it here.

So let's play the video. Here you are seeing the Slack bot; the user is now requesting the help menu to be displayed.

On the right side you see the list of commands the user can run, such as starting a model run or showing the results of a model run, and so on. Here is Databricks, where we are using the model dataset, and this is its schema.

We are using EMR at the backend to actually create the new feature on the fly.

So we are just getting it ready.

So the first command the user gives is a request to run the actuals; we are giving a metric called charge-off rate for the last 12 months, and there you go. The graph has been generated, and as you can see from the trend for the last 12 months, the peak is at a 17% charge-off rate.

Monthly Average Charge Off Rate

Now, let's try to add a feature to the existing list of features on the fly. So we are asking the bot to add a feature called COVID data.

It takes a few seconds to process it, and there you go,

we've got back the reply from the bot saying the new feature, the COVID data feature, has been added to the existing feature set.

And here is the process: as soon as the command is given from the bot, EMR adds the feature. If you look at the last column, you can see the feature has been added to the existing dataset. As soon as the feature gets added, it automatically triggers the predictions; it doesn't wait for the user to do it. And here,

now we are going to ask the bot to show the predictions for the same metric for the last 12 months.

There you go, now you can see there are two line graphs.

The black line is the actuals and the blue line is the predicted. And as you can see, as soon as we added some of the COVID-data-related features, the trend of the monthly charge-off went significantly higher. This ends our demo session. Finally, I would like to thank my team members Neil, Nag, Bradim and Josh, who were the key contributors to this effort; it would not have been possible to make this happen without them. Now, we would like to take a few questions via chat.

About Badrish Davay

Capital One

Badrish Davay is a Sr. Manager of Engineering at Capital One, building applications in the Big Data, Event Streaming and Machine Learning space. With 15 years of technology expertise implementing complex data-driven solutions, Badrish's background includes building and executing technology strategy with a focus on Big Data, Analytics, IoT and Cloud for startup organizations, as well as IT delivery in the Financial and Mortgage space. Prior to landing at Capital One, he worked in Health Care and Professional Services organizations, building complex Big Data ecosystems for generating insights.

About Yuri Bogdanov

Capital One

Yuri Bogdanov is the Sr. Director of Engineering at Capital One, building applications in the Big Data, Event Streaming and Machine Learning space. With 20 years of technology expertise implementing complex data-driven solutions, his background includes building and executing technology strategy with a focus on Big Data, Analytics, IoT and Cloud for startup organizations, as well as IT delivery for federal contractors in the DC Metro area. Prior to landing at Capital One, he managed programs as large as $25M+ in both the Federal and Telecommunications space. Yuri is passionate about solving business problems with innovative, cutting-edge solutions.