
Posts Tagged ‘Credit Risk Management’

In terms of credit risk strategy, the lending markets in America and Britain undoubtedly lead the way, while several other markets around the world are applying many of the same principles with accuracy and good results. However, for a number of reasons and in a number of ways, many more lending markets are much less sophisticated. In this article I will focus on these developing markets, discussing how credit risk strategies can be applied in them and how doing so will add value to a lender.

The fundamentals that underpin credit risk strategies are constant, but as lenders grow in sophistication the way in which those fundamentals are applied may vary. At the earliest stages of development the focus will be on automating the decisioning process; once this has been done, attention should shift to the implementation of basic scorecards and segmented strategies which will, in time, evolve from risk mitigation to profit maximisation.

Automating the Decisioning Process

The most under-developed markets tend to grant loans using a branch-based decisioning model, a legacy of the days of fully manual lending. This model is most typical of the older and larger banks in developing regions, and it is what allows newer and smaller competitors to enter the market and be more agile.

A typical branch-lending model looks something like the diagram below:

In a model like this, the credit policy is usually designed and signed off by a committee of very senior managers working in the head office. This policy is then handed over to the branches for implementation, usually by delivering training and documentation to each of the bank’s branch managers. This immediately presents an opportunity for misinterpretation to arise as branch managers try to internalise the intentions of the policy-makers.

Once the policy has been handed over, it becomes the branch manager’s responsibility to ensure that it is implemented as consistently as possible. However, since each branch manager is different, as is each member of branch staff, this is seldom possible and so policy implementation tends to vary to a greater or lesser extent across the branch network.

Even when the policy is well implemented though, the nature of a single written policy is such that it can identify the applicants considered too risky to qualify for a loan but it cannot go beyond that to segment accepted customers into risk groups. This means that the only way senior management can ensure the policy is being implemented correctly in the highest-risk situations is by using the size of the loan as an indication of risk. So, to do this, a series of triggers is set to escalate loan applications to management committees.

In this model, which is not an untypical one, there are three committees: one within the branch itself, where senior branch staff review the work of the loan officer for small-value loan applications; if the loan size exceeds the branch committee’s mandate, the application must then be escalated to a regional committee or, if sufficiently large, all the way to a head-office committee.

Although it is easy to see how such a series of committees came into being, their on-going existence adds significant costs and delays to the application process.

In developing markets where skills are short, a significant premium must usually be paid for high-quality management staff. So, to use the time of these managers to essentially remake the same decision over and over (having already decided on the policy, they must now repeatedly decide whether an application meets the agreed-upon criteria) is an inefficient way to invest a valuable resource. More important, though, are the delays that must necessarily accompany such a series of committees. Each time an application is passed on from one team, and more importantly from one location, to another, a delay is incurred. Added to this, committees need to convene before they can make a decision and usually do so on fixed dates, meaning that a loan application may have to wait a number of days until the next time the relevant committee meets.

But the costs and delays of such a model are not only incurred by the lender; the borrower too is burdened with a number of indirect costs. In order to qualify for a loan in a market where impartial third-party credit data is not widely available (i.e. where there are no strong and accurate credit bureaus), an applicant typically needs to ‘over-prove’ their creditworthiness. Where address and identification data is equally unreliable, this requirement is even more burdensome. In a typical example an applicant might need to show an established relationship with the bank (six months of salary payments, for example); provide a written undertaking from their employer to notify the bank of any change in employment status; supply the address of a reference who can be contacted when the original borrower cannot be; and often put up some degree of security, even for small-value loans.

These added costs serve to discourage lending and add to what is usually the biggest problem faced by banks with a branch-based lending model: an inability to grow quickly and profitably.

Many people might think that the biggest issue faced by lenders in developing markets is the risk of bad debt, but this is seldom the case. Lenders know that they don’t have access to all the information they need when they need it, and so they have put in place the processes I’ve just discussed to mitigate the risk of losses. However, as I pointed out, those processes are ungainly and expensive; too ungainly and too expensive, as it turns out, to facilitate growth, and growth is what most lenders want as they see more agile competitors starting to enter their markets.

A fundamental problem with growing with a branch-based lending model is that the costs of growing the system rise in line with the increase in capacity. So, to serve twice as many customers will cost almost twice as much. This is the case for a few reasons. Firstly, each branch serves only a given geographical catchment area and so, to serve customers in a new region, a new branch is likely to be needed. Unfortunately, it is almost impossible to add branches perfectly and each new branch is likely to lead to either an inefficient overlapping of catchment areas or ineffective gaps. Secondly, within the branch itself there is a fixed capacity, both in terms of the number of staff it can accommodate and in terms of the number of customers each member of staff can serve. Both of these can be adjusted, but only slightly.

Added to this, such a model does not easily accommodate new lending channels. If, for example, the bank wished to use the internet as a channel it would need to replicate much of the infrastructure from the physical branches in the virtual branch because, although no physical buildings would be required and the coverage would be universal, the decisioning process would still require multiple loan officers and all the standard committees.

To overcome this many lenders have turned to agency agreements, most typically with large private and government employers. These employers will usually handle the administration of loan applications and loan payments for their staff and in return will either expect that their staff are offered loans at a discounted rate or that they themselves are compensated with a commission.

By simply taking the current policy rules from the branch-based process and converting them into a series of automated rules in a centralised system, many of these basic problems can be overcome, even before improving those rules with advanced statistical scorecards. Firstly, the gap between policy design and policy implementation is closed, eliminating the risk of misinterpretation. Secondly, the need for committees to ensure proper policy implementation is greatly reduced, and with it the associated costs and delays. Thirdly, the risk of inconsistent application is removed, as every application, regardless of the originating branch or the staff member capturing the data, is treated in the same way. Finally, since the decisioning is automated there is almost no cost to adding a new channel onto the existing infrastructure, meaning that new technologies like internet and mobile banking can be leveraged as profitable channels for growth.
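To make this concrete, here is a rough sketch in Python of what such centralised, automated rules might look like; the rules and thresholds are invented for illustration (they echo the example criteria used later in this article):

# A minimal sketch of a centralised, automated policy engine.
# All rules and thresholds are invented for illustration.

def assess_application(applicant: dict) -> str:
    """Apply the same policy rules to every application, whatever the channel."""
    if applicant["age"] < 21:
        return "DECLINE"
    if applicant["annual_income"] < 10_000:
        return "DECLINE"
    if applicant["months_with_employer"] < 12:
        return "DECLINE"
    return "ACCEPT"

# Branch, internet and mobile channels all call the same function,
# so adding a new channel adds almost no decisioning cost.
print(assess_application({"age": 34, "annual_income": 18_000, "months_with_employer": 30}))  # ACCEPT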

The Introduction of Scoring

With the basic infrastructure in place it is time to start leveraging it to its full advantage by introducing scorecards and segmented strategies. One of the more subtle weaknesses of a manual decision is that it is very hard to use a policy to do anything other than decline an account. As soon as you try to make a more nuanced decision and categorise accepted accounts into risk groups the number of variables increases too fast to deal with comfortably.

It is easy enough to say that an application can be accepted only if the applicant is over 21 years of age, earns more than €10 000 a year and has been working for their current employer for at least a year but how do you segment all the qualifying applications into low, medium and high risk groups? A low risk customer might be one that is over 35 years old, earns more than €15 000 and has been working at their current employer for at least a year; or one that is over 21 years old but who earns more than €25 000 and has been working at their current employer for at least two years; or one that is over 40 years old, earns more than €15 000 and has been working at their current employer for at least a year, etc.

It is too difficult to manage such a policy using anything other than an automated system that uses a scorecard to identify and segment risk across all accounts. Being able to do this allows a bank to begin customising its strategies and its products to each customer segment or niche: low-risk customers can be attracted with lower prices or larger limits, high-spending customers can be offered a premium card with more features but also with higher fees, and so on.
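As a simple illustration of the underlying idea, here is a sketch of how a single score might be banded into such segments; the bands and treatments are invented for the example:

# Hypothetical score bands mapping one scorecard output to a risk segment.
SEGMENT_FLOORS = [
    (620, "LOW RISK"),      # e.g. attract with lower pricing or larger limits
    (540, "MEDIUM RISK"),   # standard terms
    (0, "HIGH RISK"),       # accepted, but on smaller limits
]

def segment(score: int) -> str:
    for floor, label in SEGMENT_FLOORS:
        if score >= floor:
            return label
    return "DECLINE"        # below every floor: declined outright

print(segment(655))  # LOW RISK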

The first step in the process would be to implement a generic scorecard; that is, a scorecard built using pooled third-party data relating to portfolios similar to the one in which it is to be implemented. These scorecards are cheap and quick to implement and, when used to inform only simple strategies, offer almost as much value as a fully bespoke scorecard would. Over time, the data needed to build a more specific scorecard can be captured so that the generic scorecard can be replaced after eighteen to twenty-four months.

But the making of a decision is not the end goal; all decisions must be monitored on an on-going basis so that strategy changes can be implemented as soon as circumstances dictate. Again this is not something that is possible to do using a manual system where each review of an account’s current performance tends to involve as much work as the original decision to lend to that customer did. Fully fledged behavioural scorecards can be complex to build for developing banks but at this stage of the credit risk evolution a series of simple triggers can be sufficient. Reviewing an account in an automated environment is virtually instantaneous and free and so strategy changes can be implemented as soon as they are needed: limits can be increased monthly to all low risk accounts that pass a certain utilisation trigger, top-up loans can be offered to all low and medium risk customers as soon as their current balances fall below a certain percentage of the original balance, etc.
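A sketch of what such simple triggers might look like in practice; the trigger levels and account fields are invented for illustration:

# Hypothetical monthly review triggers of the kind described above.

def review_account(acct: dict) -> list:
    """Return the strategy actions triggered by the account's latest data."""
    actions = []
    # Limit increase for low-risk accounts with high utilisation.
    if acct["risk"] == "LOW" and acct["utilisation"] > 0.80:
        actions.append("INCREASE LIMIT")
    # Top-up loan offer once the balance has amortised far enough.
    if acct["risk"] in ("LOW", "MEDIUM") and acct["balance"] < 0.25 * acct["original_balance"]:
        actions.append("OFFER TOP-UP LOAN")
    return actions

print(review_account({"risk": "LOW", "utilisation": 0.90,
                      "balance": 2_000, "original_balance": 10_000}))
# ['INCREASE LIMIT', 'OFFER TOP-UP LOAN']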

In so doing, a lender can optimise the distribution of their exposure, moving exposure from high-risk segments to low-risk segments or vice versa to achieve their business objectives. To ensure that this distribution remains optimised, the individual scores and strategies should be continually tested using champion/ challenger experiments. Champion/ challenger is a simple concept and can be applied to any strategy, provided the systems exist to ensure that it is implemented randomly and that its results are measurable. The more sophisticated the strategies, the more sophisticated the champion/ challenger experiments will look, but the underlying theory remains unchanged.
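The mechanics of the random split are simple; a minimal sketch, assuming a 90/10 champion/ challenger split:

import random

def assign_strategy(account_id: str, challenger_share: float = 0.10) -> str:
    """Randomly route a small, measurable share of accounts to the challenger."""
    # Seeding on the account id keeps each account's assignment stable between runs.
    return "CHALLENGER" if random.Random(account_id).random() < challenger_share else "CHAMPION"

print(assign_strategy("ACC-000123"))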

Elevating the Profile of Credit Risk

Once scorecards and risk-segmented strategies have been implemented, the credit risk team can focus on elevating its profile within the larger organisation. As credit risk strategies are first implemented they are unlikely to interest the senior managers of a lender, who will likely have come through a different career path: perhaps they have more of a financial-accounting view of risk or perhaps they have a background in something completely different, like marketing. This may make it difficult for the credit risk team to garner enough support to fund key projects in the future and so may restrict its ability to improve.

To overcome this, the credit team needs to shift its focus from risk to profit. The best result a credit risk team can achieve is not to minimise losses but to maximise profits while keeping risk within an acceptable band. I have written several articles on profit models which you can read here, here, here and here but the basic principle is that once the credit risk department is comfortable with the way in which their models can predict risk they need to understand how this risk contributes to the organisation’s overall profit.

This shift will typically happen in two ways: as a change in the messages the credit team communicates to the rest of the organisation and as a change in the underlying models themselves.

To change the messages being communicated, the credit team may need to change its recruitment strategies and bring in managers who understand both the technical aspects of credit risk and the business imperatives of a lending organisation. More importantly though, they need to always seek to translate the benefit of their work from technical credit terms – PD, LGD, etc. – into terms that can be more widely understood and appreciated by senior management – return on investment, reduced write-offs, etc. A shift in message can happen before new models are developed but will almost always lead to the development of more business-focussed models going forward.

So the final step is to actually make changes to the models themselves, and it is by the degree to which such specialised, profit-segmented models have been developed and deployed that a lender’s level of sophistication will be measured in more sophisticated markets.

Read Full Post »

*** I am in the process of updating these articles, and also of launching a podcast on the same subjects – mixing technical and strategic discussions with industry insiders around the world. This is still in the soft-launch phase, but the updated version of this article can be found at https://www.howtolendmoneytostrangers.show/articles/what-does-a-lender-look-like-on-the-inside while the first two episodes of the show can be found at https://www.howtolendmoneytostrangers.show/episodes

First things first, I am by no means a scorecard technician. I do not know how to build a scorecard myself, though I have a fair idea of how they are built; if that makes sense. As the title suggests, this article takes a simplistic view of the subject. I will delve into the underlying mathematics at only the highest of levels and only where necessary to explain another point. This article treats scorecards as just another tool in the credit risk process, albeit an important one that enables most of the other strategies discussed on this blog. I have asked a colleague to write a more specialised article covering the technical aspects and will post that as soon as it is available.

Scorecards aim to replace subjective human judgement with objective and statistically valid measures; replacing inconsistent anecdote-based decisions with consistent evidence-based ones. What they do is essentially no different from what a credit assessor would do, they just do it in a more objective and repeatable way. Although this difference may seem small, it enables a large array of new and profitable strategies.

So what is a scorecard?

A scorecard is a means of assigning importance to pieces of data so that a final decision can be made regarding the underlying account’s suitability for a particular strategy. It does this by separating the data into its individual characteristics and then assigning a score to each characteristic based on its value and the average risk represented by that value.

For example, an application for a new loan might be separated into age, income, length of relationship with the bank, credit bureau score, etc. Then each possible value of those characteristics will be assigned a score based on the degree to which it impacts risk. In this example, ages between 19 and 24 might be given a score of -100, ages between 25 and 30 a score of -75, and so on until ages 50 and upwards are given a score of +10. In this scenario young applicants are ‘punished’ while older customers benefit marginally from their age, implying that risk has been shown to be inversely related to age. The diagram below shows an extract of a possible scorecard:

The scores for each of these characteristics are then added together to reach a final score. The final score produced by the scorecard is attached to a risk measure, usually something like the probability of an account going 90 days into arrears within the next 12 months. Reviewing this score-to-risk relationship allows a risk manager to set the point at which they will decline applications (the cut-off) and to understand the relative risk of each customer segment on the book. The diagram below shows how this score-to-risk relationship can be used to set a cut-off.
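As a rough sketch of that additive logic, with the characteristic scores, base score and cut-off all invented for the example (and only the age characteristic shown):

# Invented characteristic scores, echoing the age extract above.
AGE_SCORES = [(50, 10), (31, -50), (25, -75), (19, -100)]  # (minimum age, score)

def age_score(age: int) -> int:
    for floor, score in AGE_SCORES:
        if age >= floor:
            return score
    return -100

def total_score(applicant: dict) -> int:
    # A real scorecard sums the scores of every characteristic;
    # here only age is added to an assumed base score of 600.
    return 600 + age_score(applicant["age"])

def accept(score: int, cutoff: int = 520) -> bool:
    # The cut-off is read off the score-to-risk relationship described above.
    return score >= cutoff

score = total_score({"age": 27})
print(score, accept(score))  # 525 True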

How is a scorecard built?

Basically, what the scorecard builder wants to do is identify which characteristics at one point in time are predictive of a given outcome before or at some future point in time. To do this, historic data must be structured so that one period can represent the ‘present state’ and the subsequent periods can represent the ‘future state’. In other words, if two years of data is available for analysis (the current month can be called Month 0 and the oldest month Month -24), then the most distant six months (from Month -24 to Month -18) will be used to represent the ‘current state’ or, more correctly, the observation period, while the subsequent months (Months -17 to 0) represent the known future of those first six months and are called the outcome period. The type of data used in each of these periods will vary to reflect these differences, so that application data (applicant age, applicant income, applicant bureau score, loan size requested, etc.) is important in the observation period and performance data (current balance, current days in arrears, etc.) is important in the outcome period.

With this simple step completed, the accounts in the observation period must be classified based on their performance during the outcome period. To start this process, a ‘bad definition’ and a ‘good definition’ must first be agreed upon. These are usually something like: ‘to be considered bad, an account must have gone past 90 days in delinquency at least once during the 18-month outcome period’ and ‘to be considered good, an account must never have gone past 30 days in delinquency during the same period’. Accounts that meet neither definition are classified as ‘indeterminate’.
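A sketch of that labelling step, assuming each account carries a list of the worst days-in-arrears it reached in each month of the outcome period:

def label(worst_arrears_by_month: list) -> str:
    """Label an account using the good/bad definitions above."""
    worst = max(worst_arrears_by_month)
    if worst > 90:
        return "BAD"
    if worst <= 30:
        return "GOOD"
    return "INDETERMINATE"

print(label([0, 0, 35, 60, 95, 120]))  # BAD
print(label([0, 10, 0, 0, 25, 0]))     # GOOD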

Thus separated, the unique characteristics of each group can be identified. The data that was available at the time of application for every ‘good’ and ‘bad’ account is statistically tested and those characteristics with largely similar values within one group but largely varying values across groups are valuable indicators of risk and should be considered for the scorecard. For example if younger customers were shown to have a higher tendency to go ‘bad’ than older customers, then age can be said to be predictive of risk. If on average 5% of all accounts go bad but a full 20% of customers aged between 19 and 25 go bad while only 2% of customers aged over 50 go bad then age can be said to be a strong predictor of risk. There are a number of statistical tools that will identify these key characteristics and the degree to which they influence risk more accurately than this but they won’t be covered here.
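Sticking with that example, a quick sketch of how comparing bad rates across a characteristic's values reveals its predictive power; the data is invented to match the percentages above:

# Invented accounts: 20% of young accounts go bad against 2% of older ones.
accounts = ([("19-25", "BAD")] * 20 + [("19-25", "GOOD")] * 80
            + [("50+", "BAD")] * 2 + [("50+", "GOOD")] * 98)

def bad_rate(rows, band):
    in_band = [label for b, label in rows if b == band]
    return sum(1 for label in in_band if label == "BAD") / len(in_band)

print(bad_rate(accounts, "19-25"))  # 0.2  - ten times the rate of the older group
print(bad_rate(accounts, "50+"))    # 0.02 - age separates risk strongly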

Once each characteristic that is predictive of risk has been identified along with its relative importance some cleaning-up of the model is needed to ensure that no characteristics are overly correlated. That is, that no two characteristics are in effect showing the same thing. If this is the case, only the best of the related characteristics will be kept while the other will be discarded to prevent, for want of a better term, double-counting. Many characteristics are correlated in some way, for example the older you are the more likely you are to be married, but this is fine so long as both characteristics add some new information in their own right as is usually the case with age and marital status – an older, married applicant is less risky than a younger, married applicant just as a married, older applicant is less risky than a single, older applicant. However, there are cases where the two characteristics move so closely together that the one does not add any new information and should therefore not be included.

So, once the final characteristics and their relative weightings have been selected the basic scorecard is effectively in place. The final step is to make the outputs of the scorecard useable in the context of the business. This usually involves summarising the scores into a few score bands and may also include the addition of a constant – or some other means of manipulating the scores – so that the new scores match with other existing or previous models.

How do scorecards benefit an organisation?

Scorecards benefit organisations in two major ways: by describing risk in very fine detail they allow lenders to move beyond simple yes/ no decisions and to implement a wide range of segmented strategies; and by formalising the lending decision they provide lenders with consistency and measurability.

One of the major weaknesses of a manual decisioning system is that it seldom does more than identify the applications which should be declined, leaving those that remain to be accepted and thereafter treated as being the same. This makes it very difficult to implement risk-segmented strategies. A scorecard, however, prioritises all accounts in order of risk and only then declines those deemed too risky. This means that all accepted accounts can still be segmented by risk, and this can be used as a basis for risk-based pricing, risk-based limit setting, etc.

The second major benefit comes from the standardisation of decisions. In a manual system the credit policy may well be centrally conceived, but the quality of its implementation will depend on the branch or staff member actually processing the application. By implementing a scorecard this is no longer the case, and the roll-out of a scorecard is almost always accompanied by a reduction in bad rates.

Over and above these risk benefits, the roll-out of a scorecard is also almost always accompanied by an increase in acceptance rates. This is because manual reviewers tend to be more conservative than they need to be in cases that vary in some way from the standard. The nature of a single credit policy is such that, to qualify for a loan, a customer must exceed the minimum requirement of every policy rule. For example, to get a loan the customer must be above the minimum age (say 28), must have been with the bank for more than the minimum period (say 6 months) and must have no adverse remarks on the credit bureau. A client of 26 with a five-year history with the bank and a clean credit report would be declined. With a scorecard in place, though, the relative importance of exceeding one criterion can be weighed against the relative importance of missing another and a more accurate decision can be made; almost always allowing more customers in.

Implementing scorecards

There are three levels of scorecard sophistication and, as with everything else in business, the best choice for any situation will likely involve a compromise between accuracy and cost.

The first option is to create an expert model. This is a manual approximation of a scorecard based on the experience of several experts. Ideally this exercise would be supported by some form of scenario-planning tool where the results of various adjustments could be seen for a series of dummy applications – or genuine historic applications if these exist – until the results meet the expectations of the ‘experts’. This method is better than manual decisioning since it leads to a system that looks at each customer in their entirety and because it enforces a standardised outcome. That said, since it is built upon relatively subjective judgements, it should be replaced with a statistically built scorecard as soon as enough data is available to do so.

An alternative to the expert model is a generic scorecard. These are scorecards which have been built statistically, but using a pool of similar though not customer-specific data. These scorecards are more accurate than expert models so long as the data on which they were built reasonably resembles the situation in which they are to be employed. A bureau-level scorecard is probably the purest example, though generic scorecards exist for a range of different products and for each stage of the credit life-cycle.

Ideally, they should first be fine-tuned prior to their roll-out to compensate for any customer-specific quirks that may exist. During a fine-tuning, actual data is run through the scorecard and the results are used to make small adjustments to the weightings given to each characteristic, while the structure of the scorecard itself is left unchanged. For example, assume the original scorecard assigned the following weightings: -100 for the age group 19 to 24; -75 for the age group 25 to 30; -50 for the age group 31 to 40; and 0 for the age group 41 upwards. This could be implemented as it is but, if there is enough data to do a fine-tune, it might reveal that in this particular case the weightings should actually be: -120 for the age group 19 to 24; -100 for the age group 25 to 30; -50 for the age group 31 to 40; and 10 for the age group 41 upwards. The scorecard structure, as you can see, does not change.

In a situation where there is no client-specific data and no industry-level data exists, an expert model may be best. However, where there is no client-specific data but where there is industry-level data it is better to use a generic scorecard. In a case where there is both some client-specific data and some industry-level data a fine-tuned generic scorecard will produce the best results.

The most accurate results will always come, however, from a bespoke scorecard. That is a scorecard built from scratch using the client’s own data. This process requires significant levels of good quality data and access to advanced analytical skills and tools but the benefits of a good scorecard will be felt throughout the organisation.

 

Read Full Post »

You’ve got to know when to hold ‘em, know when to fold ‘em

Know when to walk away and know when to run

I’ve always wanted to use the lines from Kenny Rogers’ famous song, The Gambler, in an article. But that is only part of the reason I decided to use the game of Texas Holdem poker as a metaphor for the credit risk strategy environment.

The basic profit model for a game of poker is very similar to that of a simple lending business. To participate in a game of Texas Holdem there is a fixed cost (the buy-in) in exchange for which there is the potential to make a profit, but also the risk of making a loss. As each card is dealt, new information is revealed and the player should adjust their strategy accordingly. Not every hand will deliver a profit and some will even incur a fairly substantial loss; however, over time and by following a good strategy, the total profit accumulated from the winning hands can be sufficient to cover both the losses from the losing hands and the fixed costs of participating, and a profit can thus be made.

Similarly in a lending business there is a fixed cost to process each potential customer, only some of whom will be accepted as actual customers who have the potential to be profitable or to result in a loss.  The lender will make an overall profit only if the accumulated profit from each profitable customer is sufficient to cover the losses from those that weren’t and the fixed processing costs.

In both scenarios, profit can be maximised by increasing exposure to risk when the odds of a profit are good and by reducing exposure when the odds of a loss are higher. A good card player therefore performs a similar role to a credit analyst: continuously calculating the odds of a win from each hand, designing strategies to maximise profit based on those odds and then adjusting those strategies as more information becomes available.

Originations

To join a game of Texas Holdem each player needs to buy into that game by placing a ‘blind’ bet before they have seen any of the cards. As this cost is incurred before any of the cards are seen, the odds of victory cannot be estimated. The blind bet is, in fact, the price to see the odds.

Thereafter, each player is dealt two private cards; cards that only they can see. Once these cards have been dealt each player must decide whether to play the game or not.

To play on, each player must enter a further bet. This decision must be made based on the size of the bet and an estimate of the probability of victory given the two known cards. If the player instead chooses not to play, they will forfeit their initial bet.

A conservative player, one who will play only when the odds are strongly in their favour, may lose fewer hands, but they will instead incur a relatively higher cost of forfeited buy-ins. Depending on the cost of the buy-in and the average odds of winning, the most profitable strategy will change, but it is unlikely to be the most conservative one.

In a lending organisation the equivalent role is played by the originations team. Every loan application that is processed incurs a cost, and so when an application is declined that cost is lost. A conservative scorecard policy will decline a large number of marginal applications, choosing, effectively, to lose a small but known processing cost rather than risk a larger but unknown credit loss. In so doing though, it also gives up the profit potential on those accounts. As with poker betting strategies, the ideal cut-off will change based on the level of processing costs and the average probability of default, but it will seldom be overly conservative.
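As a rough sketch of the underlying arithmetic, with the margins, losses and costs all invented for the example:

def expected_profit(p_default: float,
                    profit_if_good: float = 150.0,  # assumed margin on a good loan
                    loss_if_bad: float = 900.0,     # assumed average loss on a bad loan
                    processing_cost: float = 20.0) -> float:
    """Expected value of accepting one application; all figures are invented."""
    return (1 - p_default) * profit_if_good - p_default * loss_if_bad - processing_cost

# Declining loses the processing cost with certainty, so accept whenever
# accepting has a higher expected value than that known small loss.
for p in (0.02, 0.10, 0.20):
    print(f"p(default)={p:.2f}  EV={expected_profit(p):8.2f}  accept={expected_profit(p) > -20.0}")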

A card player calculates their odds of victory from the known combinations of cards possible from a standard 52-card deck. The player will hold the best five-card combination made up from their two private cards and the five community cards still to be dealt, while each other player will hold the best five-card combination made up of those same five community cards and their own two private cards, which, from the player’s perspective, could be any two of the unseen cards. With this knowledge, the odds that the two private cards will result in a winning hand can be estimated and, based on that estimate, the player can decide whether to enter a bet and, if so, of what size; or whether to fold and lose the buy-in.

The methods used to calculate odds may vary, as do the sources of potential profits, but at a conceptual level the theory on which originations is based is similar to the theory which under-pins poker betting.

As each account is processed through a scorecard the odds of it eventually rolling into default are estimated. These odds are then used to make the decision whether to offer credit and, if so, to what extent.  Where the odds of a default are very low the lender will likely offer more credit – the equivalent of placing a larger starting bet – and vice versa.

Customer Management

The reason that card games like Texas Holdem are games of skill rather than just games of chance is that the odds of victory change during the course of a game, so the player is required to adapt their betting strategy as new information is revealed: increasing their exposure to risk as the odds improve and retreating as they worsen. The same is true of a lending organisation, where customer management strategies seek to maximise organisational profit by changing exposure as new information is received.

Once the first round of betting has been completed and each player’s starting position has been determined, the dealer turns over three ‘community cards’. These are cards that all players can see and can use, along with their two private cards, to create their best possible poker hand. A significant amount of new information is revealed when those three community cards are dealt. In time two further community cards will be revealed and it will be from any combination of those seven cards that a winning hand will be constructed. So, at this point, each player knows five of the seven cards they will have access to and three of the cards their opponents can use. The number of possible hands becomes smaller and so the odds that the player’s hand will be a winner can be calculated more accurately. That is not to say the odds of a win will go up, just that the odds can be stated with more certainty.

At this stage of the game, therefore, the betting activity usually heats up as players with good hands increase their exposure through bigger bets. Players with weaker hands will try to limit their exposure by checking – that is not betting at all – or by placing the minimum bet possible. This strategy limits their potential loss but also limits their potential gain as the total size of the ‘pot’ is also kept down.

As each of the next two community cards is revealed this process repeats itself with players typically willing to place ever larger bets as the new information received allows them to calculate the odds with more certainty. Only once the final round of betting is complete are the cards revealed and a winner determined. Those players that bet until the final round but still lose will have lost significantly in this instance. However, if they continue to play the odds well they will expect to recuperate that loss – and more – over time.

The customer management team within a lending organisation works on similar principles. As an account begins to operate, new information is received which allows the lender to determine with ever more certainty the probability that the account will eventually default: with every payment that is received on time, the odds of an eventual default decrease; with every broken promise-to-pay, those odds increase; etc.

So the role of the customer management team is to design strategies that optimise the lender’s exposure to each customer based on the latest information received. Where risk appears to be dropping, exposure should be increased through limit increases, cross-selling of new products, reduced pricing, etc. while when the opposite occurs the exposure should be kept constant or even decreased through limit decreases, pre-delinquency strategies, foreclosure, etc.

Collections

As the betting activity heats up around them a player may decide that the odds no longer justify the cost required to stay in the game and, in these cases, the player will decide to fold – and accept a known small loss rather than continue betting and risk an even bigger eventual loss chasing an unlikely victory.

Collections has too many operational components to fit neatly into the poker metaphor, but it can be most closely likened to this decision of whether or not to fold. Not every hand can be a winner and even hands that initially appeared strong can be shown to be weak when the later community cards are revealed. A player who was dealt two hearts and who then saw two further hearts among the first three community cards would be in a strong position, with the odds of catching the fifth heart needed to complete a strong ‘flush’ hand sitting at roughly thirty-five percent. However, if neither of the next two cards is a heart, the probability of a winning hand drops to close to zero.
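For what it's worth, those odds can be computed directly; a quick sketch:

from math import comb

# Four hearts after the flop: 9 of the 47 unseen cards complete the flush.
p_miss = comb(38, 2) / comb(47, 2)  # both remaining community cards miss
print(f"{1 - p_miss:.1%}")          # ~35.0%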

In this situation the player needs to make a difficult decision: they have invested in a hand that has turned out to be a ‘bad’ one and they can either accept the loss or invest further in an attempt to salvage something. If there is little betting pressure from the other players, they might choose to stay in the game by matching any final bets; figuring that because the total pot was large and the extra cost of participating small it was worth investing further in an unlikely win. Money already bet, after all, is a sunk cost. If the bets in the latest round are high however, they might choose to fold instead and keep what money they have left available for investment in a future, hopefully better hand.

As I said, the scope of collections goes well beyond this, but certain key decisions a collections strategy manager must make relate closely to the question of whether or not to fold. Once an account has missed a payment and entered the collections process, the lender has two options: to invest further time and money in an attempt to collect some or all of the outstanding balance, or to cut their losses and sell the debt or even write it off.

In cases where there is strong long-term evidence that the account is a good one, the lender might decide – as a card player might when a strong hand is not helped by the fourth community card – to maintain or even increase their exposure by granting the customer some leeway in the form of a payment holiday, a re-aging of debt or even a temporary limit increase. On the other hand, in cases where the new information has forced a negative re-appraisal of the customer’s risk but the value owed by that customer is significant, it might still be preferable for the lender to invest a bit more in an attempt to make a recovery, even though they know that the odds are against them. This sort of an investment would come in the form of an intensive collections campaign or the paid involvement of specialist third party debt collectors.

As with a game of cards, the lender will not always get it exactly right and will over invest in some risky customers and under-invest in others; the goal is to get the investment right often enough in the long-term to ensure a profit overall.

It is also true that a lender who consistently shies away from investing in the collection of marginal debt – one that chooses too easily to write off debt rather than to risk an investment in its recovery – may create a reputation for themselves that is punitive in the long run. A lender that is seen as a ‘soft touch’ by the market will attract higher-risk customers and will see a shift in portfolio risk towards the high end as more and more customers decide to let their debt fall delinquent in the hopes of a painless write-off. Similarly, a card player that folds in all situations except those where the odds are completely optimal will soon be found out by their fellow players. Whenever they receive the perfect hand and bet accordingly, the rest of the table will likely fold, reducing the size of the ensuing pot which, although won, will be much smaller than it might otherwise have been. In extreme cases, this limiting of the wins gained from good hands may be so severe that the player is unable to cover the losses taken in the hands in which they folded.

Summary

The goal of credit risk strategy, like that of a poker betting strategy, is to end with the most money possible. To do this, calculated bets must be taken at various stages and with varying levels of data; risk must be re-evaluated continuously and at times it may become necessary to take a known loss rather than to risk ending up with an even greater, albeit uncertain, loss in the future.

So, in both scenarios, risk should not be avoided but should rather be converted into a series of numerical odds which can be used to inform investment strategies that seek to leverage off good odds and hedge against bad odds. In time, if accurate models are used consistently to inform logical strategies it is entirely possible to make a long-term profit.

Of course the two fields also differ quite extensively in their particulars, not least in the way money is earned and, most importantly, in the fact that financial services is not a zero-sum game. However, I hope that where similarities do exist these have been helpful in understanding how the profit levers in a lending business fit together. For a more technical look at the same issue, you can read my articles on profit modelling in general and for credit cards and banks in particular.

Read Full Post »

There are certainly analytical tools in the market that are more sophisticated than Excel, and there are certainly situations where these are needed to deliver enhanced accuracy or advanced features. However, this article will concentrate on building models to aid the decision-making of a business leader rather than a specialist statistician; here the need is for a model that is flexible and easy to use. Since Excel is so widely available and understood, it is usually the best tool for this purpose.

In this article I will assume a basic understanding of Excel and its in-built mathematical functions.  Instead, I’ll discuss how some of the more advanced functions can be used to build decision-aiding models and, in particular, how to create flexible matrices.

Spreadsheets facilitate flexibility by allowing calculations to be parameterised so that a model can be built with the logic fixed but the input values flexible. For example, the management of a bank may agree that the size of a credit limit granted to a customer should be based on that customer’s risk and income, and that VIP customers should be entitled to an extra limit extension, though they may disagree over one or more of the inputs. The limit-setting logic can be programmed into Excel as an equation that remains constant, while the definition of what constitutes each risk group, each income band, the size of each limit and the size of the VIP bonus extension can each be changed at will.

When building a model to assist with business decision-making, the key is to make sure that each profit lever is represented by a parameter that can be quickly and easily altered by decision-makers without altering the logical and established links between each of those profit levers.  Making use of Excel’s advanced functions and some simple logic, it is possible to do this in almost all situations without the resulting model becoming too complex for practical business use.

*     *     *     *     *

If I were to guess, I would say that over 80% of the functionality needed to build a flexible decision-making model can be created using Excel’s basic mathematical functions, IF clauses and LOOKUPs.

IF Clauses

IF clauses, once understood, have a multitude of uses. When building a model to aid decision-making they are usually one of the most important tools at an analyst’s disposal. Simply put, an IF clause provides a binary command: if a given event happens, do this; if not, do that. If a customer has been labelled as a VIP, add €5 000 to the proposed limit; if not, add nothing; etc.

IF( CustStatus = “VIP”, 5000, 0 )

Using this simple logic, it is possible to replicate a decision tree connecting a large number of decisions to create a single strategy.  IF clauses are very useful for categorising data, for identifying or selecting specific events, etc.

There are two important variations of the basic IF clause: SUMIF and COUNTIF. These two functions allow you to determine how often, or to what degree, a certain event has occurred. Both have the same underlying logic, though the COUNTIF function is simpler: what is the total balance across all VIP accounts, or simply how many VIP accounts are there?

SUMIF( Sheet1!$A$1:$A$200, “VIP”, Sheet1!$B$1:$B$200 ) or

COUNTIF( Sheet1!$A$1:$A$200, “VIP” )

 

Lookups

Lookups, on the other hand, are used to retrieve related data, replicating some of the basic functionality of a database.

A ‘lookup’ will take one value and retrieve a corresponding alternate value from a specified table. This is perhaps easier to understand through an example: assume there is a list showing which branch each of a bank’s customers belongs to; given a random selection of customer numbers, a lookup would take each of those customer numbers and search the larger list until it found the matching number, and would then retrieve the branch name stored next to that customer number in the table.

 
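Such a lookup might be written as follows, assuming for illustration that the customer numbers and their branch names sit in columns A and B of a sheet named Branches:

VLOOKUP( CustNum, Branches!$A$1:$B$50, 2, FALSE )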

By ending the statement with ‘FALSE’, only exact matches are permitted. Had the function ended with ‘TRUE’ instead, it would have looked for the nearest possible match to the given customer number from within the list and returned the value corresponding to that. This is not particularly useful in an example like this one, but it is a useful way to group values into logical batches, among other things. For example, if I had a list of salaries and wanted to summarise them into salary bands, I could create a table with the lowest and highest value of each band and then use a lookup ending with TRUE to find the band into which each unique salary falls.
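Assuming, for illustration, a sheet named Bands holding each band's lowest value in column A (sorted ascending), its highest value in column B and its name in column C, that lookup might be written as:

VLOOKUP( Salary, Bands!$A$1:$C$4, 3, TRUE )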

 

There are actually two types of lookups in Excel: vertical lookups (VLOOKUP) and horizontal lookups (HLOOKUP). The former looks down a list until it finds the matching value (and then moves across to find the pertinent field) while the latter looks across a list (and then moves down to find the pertinent field); other than that, the logic remains the same.

In the above example, the lookup will take a given customer number, search for it within a table on the sheet and then, once it has been found, return the value in the second column from the left of that table. Had it instead been an HLOOKUP function, the value returned would have been the one in the second row from the top.

 

Embedded Functions

The real value of IF clauses and LOOKUPs comes when they are used together, either with each other or with other Excel functions. For example: if the account is labelled “VIP” then look for the associated relationship manager in a list of all the relationship managers; if not, then look for the associated branch name – in both cases using the customer number to do the matching.

IF( CustStatus = “VIP”, VLOOKUP( CustNum, RelMans!$A$1:$B$20, 2, FALSE ), VLOOKUP( CustNum, Branches!$A$1:$B$50, 2, FALSE ))

In these cases, the results of the embedded function are used by the main function to deliver a result.

Matrices

In most cases, however, businesses need to make decisions based on more factors than can be represented in simple lists; in our example, credit limits cannot be set with reference to risk alone, since income (as a proxy for spend) also needs to be borne in mind. When building a business model, a useful tool is therefore a two-dimensional matrix from which results can be retrieved using embedded VLOOKUPs and HLOOKUPs. Creating matrices in Excel is a three-step process, at least I only know how to do it in three steps.

I will walk through the example of a limit-setting matrix. In this example I want to set a limit for each customer based on a combination of the customer’s risk and income while also keeping product restrictions in mind. I want this model to be flexible enough that I can easily change the definition of the risk and income bands as well as the limits assigned to each segment of the matrix.

The first step is to create the desired matrix, choosing the axis labels and number of segments. Within this matrix, each segment should be populated with the desired limit. The labels of the matrix will remain fixed, though the definition of each label can be changed as needed. The limit in each segment can be hard-coded (€5 000, for example) or can relate to a reference cell (the product minimum plus a certain amount, for example).

In this example I have decided on a 12-segment matrix that will cater for four income bands (Low, Moderate, High and Very High) and three risk bands (Low, Moderate and High). I’ve then populated the matrix with the limits we will use as a starting point for our discussions. Managers will want to know not just how the proposed model impacts limits at a customer level but also how it impacts limits at a portfolio level, so I have used COUNTIF and SUMIF to provide a summary of the limit distribution across the portfolio – all shown below:

 

The second step is to summarise the values of the two key variables into their respective bands, using VLOOKUPs as discussed above. In this example we want to summarise the risk grades of customers into LOW RISK, MODERATE RISK and HIGH RISK bands and the incomes into similar LOW INCOME, MODERATE INCOME, HIGH INCOME and VERY HIGH INCOME bands.

As a starting point, I have decided to make the splits as shown in the tables below.  These tables were used to label each account in the dataset using two new columns I have created, also shown below:


Then each account can be matched to a matrix segment using VLOOKUPs and HLOOKUPs, embedded to create a matrix-lookup function. What we want to do is use the VLOOKUP functionality to find the row corresponding to the risk of the customer and then move across the right number of columns to find the column corresponding to the income of the customer. The first part of the equation is relatively simple to construct so long as we ignore the column number:

VLOOKUP( Risk Band, $A$3:$E$5, ?, FALSE )

Provided we’re a little creative with it, an HLOOKUP will allow us to fill in the missing part. What we need to do is find a way to convert the ‘Salary Band’ field into a number representing the column. You might have noticed that there was a row of numbers under each of the income bands in the matrix shown above. This was done to allow an HLOOKUP to return that number so that it can be placed into the missing part of the VLOOKUP. The HLOOKUP will search for the Salary Band and then return the number from the row directly below it; in this case that number has been specifically set to equal the column’s position, remembering to add one to take into account the first column, which houses the names of the risk grades needed by the VLOOKUP.

HLOOKUP ( Salary Band, $B$1:$E$2, 2, FALSE )

In this case it will always be the second row, so we can hardcode in the ‘2’. This entire function is then substituted into the VLOOKUP to create a function that will look up both the Risk Band and the Salary Band of any given customer and return the relevant limit from the matrix.

VLOOKUP( Risk Band, $A$1:$E$5, HLOOKUP( Salary Band, $B$1:$E$2, 2, FALSE ), FALSE )

All that is now needed is to add two further fields to take into account the potential VIP bonus limit, and the model is complete; the results are shown below:

 

This version of the model can be distributed or taken into a workshop and, as each component is adjusted, so too are the individual limits granted as well as the tables summarising the distribution of limits across the portfolio. For example, the marketing team may wish to increase the limits of Low Risk, Very High Income customers to 60,000 and, at the same time, the risk team may wish to re-categorise those with a risk score of 4 as ‘High Risk’ and increase the qualifying income for ‘High Income’ to 7,000. The first change requires a simple change to the Limit Matrix while the second requires two simple changes to the reference tables, giving the new limit matrix shown below.

 

*     *     *     *     *

It is also possible to show the distribution by matrix segment. The method is based on the same logic discussed up to now, although the implementation is a bit clumsy. 

The first step is to create a dummy matrix with the same labels but populated with a segment number rather than a limit. Then you need to create a new field in the dataset called something like ‘Segment Number’ and populate this field using the same equation from above. Once this field has been populated you can create another dummy version of the matrix and, in this case, use the SUMIF or COUNTIF function to calculate the value of limits or the number of customers in each segment. With that populated, it is easy to turn those numbers into a percentage of the total, either in the same step or using one final new matrix:
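Assuming, for illustration, that the new ‘Segment Number’ field sits in column F of a sheet named Data and the proposed limits in column G, the customer count and total limit value for any given segment number could be calculated with formulas like:

COUNTIF( Data!$F$2:$F$500, SegmentNumber )

SUMIF( Data!$F$2:$F$500, SegmentNumber, Data!$G$2:$G$500 )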

Read Full Post »
