
The thing is: no one really cares about banking products. There’s no idolizing of the guys who started AmEx Cards or CapitalOne, no queue outside HSBC the night before a new card is launched. This is a problem because people only buy things they care about, or things they need and for which there is no alternative.

Banks used to keep outside competitors away with the huge capital and regulatory costs of setting up a payments system, but as more commerce moves online and as those costs drop, the barriers will fall.
The problem is cards are essentially commodities. With a few exceptions, a credit card is a credit card is a debit card, even. This is especially true as the actual plastic starts to play a smaller role in the transaction. In freeing customers from location-specific branch and ATM networks, online banking has also removed the personal relationship that may once have made a bank something more than a logo on a card.
The credit card survives – and indeed still thrives – because it is, for the moment, the most convenient way for most people to make most payments, but this is changing. With more and more online and mobile alternatives, banks will have to compete with more retail-savvy players, and to do that they need to rethink the way they design and market their products.
Traditionally, banks spent large amounts on above-the-line advertising to attract and retain customers, to whom they offered a suite of standard products; a one-size-fits-all model. Then stand-alone credit card issuers and other niche companies started to attack the banks’ market share with tailored products offered through direct marketing campaigns; altered by the in-store tailor, but still not 100% customized.
Direct marketing is no longer enough because it rests on a few key principles, each of which is being undermined:
- The contact must reach the customer at a time and place where they are open to the idea of a new card; but in a flooded market the chance of your contact arriving before a competitor’s in that window is shrinking, and you are almost always contacting them at home.
- The contact must come through a medium the customer finds relevant; both mail and email are becoming less so.
- The offer should appeal to a particular niche; but any direct marketing campaign, even a niche one, involves a degree of compromise on choice.
A new model is needed that can reach customers at a convenient time and place, through a relevant medium, to offer products tailored to their needs – cheaply. The last word is especially important because banks have long used vague pricing structures to protect themselves from commodity pricing, but new laws and competition from more transparent – and even ‘no cost’ – competitors will drive prices down, making only the most efficient banks profitable.
This article is an attempt to run with that idea, sometimes beyond the limits of practicality; hopefully in doing so I will raise some interesting questions about what is and isn’t important in the modern, mass market credit card business.

That’s where the idea for the credit card vending machine took root: it is a symbol for efficient, convenient, and ‘productized’ transactional banking. It turns the credit card marketing model around, offering customized cards to customers in convenient locations, without paperwork and at low cost.
I envisage a customer approaching a machine in a shopping mall, choosing a card design from the display, entering the relevant data, selecting product features, paying a fee based on the feature bundle, and then waiting while the machine embosses, encodes and produces their card.

The concept is simply an amalgamation of components that are all already available and automatable:
- an online application form,
- a means of automated customer verification (ID card scanning and fingerprint reading in Hong Kong, for example),
- a secure communications channel,
- a card embossing machine.

Data Capture
I hate forms, especially hand written forms. Every time someone asks me to write out my name and address I immediately assume they value bureaucracy over customer service.
Instead, the data capture process should be designed to leverage stored data, focusing on verifying data rather than capturing it. In Hong Kong I can use my government-issued identity smartcard and a scan of my thumbprint to enter and leave the country; the same tools could provide my demographic data, which could then be supplemented from bureau and internal databases, requiring me to enter only minimal new data. An ATM card and PIN code might do the same thing.
Where this is not possible, the interface would need to provide a clear and easy means of capturing data manually.
Customer Acquisition
Credit acquisition strategies should already be automated. Very little about them will change; they’ll just be implemented closer to the customer. Hosting them in a vending machine – or accessing them via secure link to the bank’s systems – is no different, just a lot of smaller machines processing the data rather than one big one. In fact, if there is anything in your processes that can’t be automated in this way, you should probably re-evaluate its cost-benefit trade-off anyway.
In terms of marketing, being located closer to the point of use also makes it easier to run short-term, co-branded campaigns.
Product Selection
Once the data has been captured and the credit and profitability scores have been calculated, a list of product features can be made available, either explicitly or as shadow limits. The obvious way to do this would be to allow a customer to add features onto a low-cost, low-feature basic card: higher limits, a reward programme, limited-edition designs, etc., each with an associated higher fee.
But I’m not threatening anyone’s job here. Any number of strategies can be implemented in the background. The product characteristics might be customer selected, but the options provided and the pricing of those options will be based on analytics-driven credit strategies.
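To make that concrete, here is a minimal sketch in Python of a score-gated feature menu with bundle pricing. Every feature name, score threshold and fee below is a made-up illustration, not a real bank’s strategy:

```python
# A minimal sketch of analytics-driven feature selection and pricing.
# All names, thresholds and fees are hypothetical illustrations.

BASE_FEE = 10.00

# Each feature carries a minimum credit score (its 'shadow limit') and a fee.
FEATURES = {
    "higher_limit":   {"min_score": 650, "fee": 15.00},
    "rewards":        {"min_score": 600, "fee": 8.00},
    "limited_design": {"min_score": 0,   "fee": 5.00},
}

def offer(credit_score: int) -> dict:
    """The feature menu this applicant is shown, with prices."""
    return {name: spec["fee"]
            for name, spec in FEATURES.items()
            if credit_score >= spec["min_score"]}

def price(selected: list) -> float:
    """Price the chosen bundle: base card plus the fee for each feature."""
    return BASE_FEE + sum(FEATURES[f]["fee"] for f in selected)

# An applicant scoring 620 sees rewards and designs, but not higher limits.
print(offer(620))                            # {'rewards': 8.0, 'limited_design': 5.0}
print(price(["rewards", "limited_design"]))  # 23.0
```

The customer only ever sees the menu and the prices; the shadow limits stay in the background, exactly as the analytics team intended.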
Target market analysis is also still important. In fact, you’ll have one more important data point: demographic data will still allow you to model risk and behaviour based on home address, but you’ll now also know where they shop, allowing you to model behaviour in more detail.
Credit card designs don’t obviously affect the standard profit levers, but that doesn’t mean they can’t be important influencers of application volumes; even so, most banks offer only two or three options in each product category.
In part this is because the major card companies want to protect their visual brand identities, but mainly it is because it is hard to advertise hundreds of different card designs to your customers without confusing them.
By filling each machine with a unique selection of generic and limited edition designs, though, you could offer a selection of designs to the market that is never overwhelming but which presents more opportunities for individualism across the market. You might even be able to offer an electronic display of all possible designs to be printed on white plastics.
Look, I started out managing fraud analytics on a card portfolio and I know my old boss will be fuming at this stage; there are risks involved in storing blank plastics and especially in housing the systems for encoding chips and magstripes. However, ATMs carry many of the same risks and I believe they are sufficiently controllable to support the idea, at least in its intended purpose here.
Invoicing
Connecting the card to a funding account could be done offline afterwards, but I would prefer a model that has the customer link the card to their savings account by inserting their ATM card and entering the PIN; the bank could do the debit order/standing order administration in the background.
Payment
Finally, payments. I would propose a single-cost model where the actual card is paid for by debiting the funding account when the invoice is created, or with cash as with any other vending machine purchase; a single cost makes the process more transparent and helps to reposition the card as a product purchased willingly.

The systems that make up the credit card vending machine could also be leveraged for other, revenue-generating purposes.
It provides a channel that could revitalize card upgrades. Instead of linking upgrades to hidden product parameters, they can become customer-initiated and feature-driven: learning from the internet’s status-badge mindset, banks could allow customers to insert a card into the machine, pay a small upgrade fee and have it replicated on a new, limited-edition plastic made available based on longevity and spend scores, for example. This could even be linked to retail brands, so that a Burberry card becomes available only once you have spent $5,000 or more in Burberry stores on a vending machine card. Multiple, smaller upgrades would create a new and different revenue stream.
The machines could also act as a channel for online application fulfillment. Customers who have applied online and been approved, or who need to replace a lost card, could have it printed at the most convenient vending machine rather than having to visit a branch.

The way I have spoken about the credit card vending machine is as a new and somewhat quirky sales channel for generic cards in a generic marketplace – a Visa Classic Card with a choice of limits, reward programmes and designs, for example. In other words, I have positioned it as a better way to make traditional credit cards relevant in a retail environment.
But it could also offer opportunities in other ways, for example in the unbanked sectors of places like South Africa, where branch networks are prohibitively expensive to roll out in low-income, rural areas and customers incur significant costs to reach a bank for even simple services. Though mobile banking is making inroads, there is still room for card-based transactional banking. A credit card vending machine would be more difficult to get right in this sort of environment, but done well it would be a cheap way for innovative lenders to expand market share.

This article is not intended to stand as a business proposal, but rather to highlight the parts of the traditional lending business that I feel are most at risk from competition and irrelevance. A review of your marketing efforts and team structures with this in mind might reveal functions that are no longer needed, product parameters that are too complex, or attitudes to customer service that need to improve.



Many lenders fail to fully appreciate the size of their fraud losses. By not actively searching for – and thus not classifying – fraud within their bad debt losses, they miss the opportunity to create better defences and so remain exposed to ever-growing losses. Any account that is written-off without ever having made a payment is likely to be fraud; any account that is in collections for their full limit within the first month or two is likely to be fraud; any recently on book account that is written-off because the account holder is untraceable is likely to be fraud, etc.

Credit scorecards do not detect application fraud very well because the link between the credit applicant and the credit payer is broken. In a ‘normal’ case the person applying for the credit is also the person that will pay the monthly instalments and so the data in the application represents the risk of the future account-holder and thus the risk of a missed payment. However, when a fraudster applies with a falsified or stolen identity there is no such link and so the data in that application no longer has any relationship to the future account-holder and so can’t represent the true risk.

 

First Person Fraud

Now that explanation assumes we are talking about third-party fraud; fraud committed by someone other than the person described on the application. That is the most clear-cut form of fraud. However, there is also the matter of first person fraud which is less clear-cut.

First person fraud is committed when a customer applies using their own identity but does so with no intention of paying back the debt, often also changing key data fields – like income – to improve their chances of a larger loan.

Some lenders will treat this as a form of bad debt while others prefer to treat it as a form of fraud. It doesn’t really matter, so long as it is treated as a specific sub-type of either definition. I would, however, recommend treating it as a sub-type of fraud unless a strong internal preference exists for treating it as bad debt. Traditional models for detecting bad debt are built on the assumption that the applicant intends to pay their debt, and so they aim to measure the ability to do so, which they then translate into a measure of risk. In these cases that assumption is not true, so there should instead be a model looking for the willingness to pay the debt rather than the ability to do so. From a recovery point of view, a criminal fraud case is also a stronger deterrent than a civil bad debt one.

 

Third Person Fraud

The rest of the fraud then, is third-party fraud. There are a number of ways fraud can happen but I’ll just cover the two most common types: false applications and identity take-overs.

False applications are applications using entirely or largely fictional data. This is the less sophisticated method and is usually the first stage of fraud in a market, so it is quickly detected when a fraud solution or fraud scorecard is implemented. Creating entirely new and believable identities on a large scale, without consciously or subconsciously reverting to a particular pattern, is difficult. There is therefore a good chance of detecting false applications using simple rules based on trends, repeated but mismatched information, etc.

A good credit bureau can also limit the impact of false applications since most lenders will then look for some history of borrowing before a loan is granted. An applicant claiming to be 35 years old and earning €5 000 a month with no borrowing history will raise suspicions, especially where there is also a sudden increase in credit enquiries.

Identity take-over is harder to detect but also harder to perpetrate, so it is a more common problem in the more sophisticated markets. In these cases a fraudster adopts the identity – and therefore the pre-existing credit history – of a genuine person with only the slightest changes made to contact information in most cases. Again a good credit bureau is the first line of defence albeit now in a reactive capacity alerting the lender to multiple credit enquiries within a short period of time.

Credit bureau alerts should be supported by a rule-based fraud system with access to historical internal and, as much as possible, external data. Such a system will typically be built using three types of rules: rules specific to the application itself; rules matching information in the application to historical internal and external known frauds; rules matching information in the application to all historical applications.

 

Application Specific Rules

Application specific rules can be built and implemented entirely within an organisation and are therefore often the first phase in the roll-out of a full application fraud solution. These rules look only at the information captured from the application in question and attempt to identify known trends and logical data mismatches.

Based on a review of historical fraud trends the lender may have identified that the majority of their frauds originated through their online channel in loans to customers aged 25 years or younger, who were foreign citizens and who had only a short history at their current address. The lender would then construct a rule to identify all applications displaying these characteristics.

Over-and-above these trends there are also suspicious data mismatches that may be a result of the data being entered by someone less familiar with the data than a real customer would be expected to be with their own information. These data mismatches would typically involve things like an unusually high salary given the applicant’s age, an inconsistency between the applicant’s stated age and date of birth, etc.

In the simplest incarnation these rules would flag applications for further, manual investigation. In more sophisticated systems though, some form of risk-indicative score would be assigned to each rule and applications would then be prioritised based on the scores they accumulated from each rule hit.
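As a rough illustration of that prioritisation, here is a minimal Python sketch. The two rules and their weights are invented for the example; a real system would hold many more of both:

```python
# A minimal sketch of rule-hit scoring. Rules and weights are hypothetical.

def young_online_foreigner(app: dict) -> bool:
    return (app["channel"] == "online" and app["age"] <= 25
            and not app["citizen"] and app["months_at_address"] < 12)

def age_income_mismatch(app: dict) -> bool:
    # An unusually high salary for a very young applicant.
    return app["age"] < 21 and app["income"] > 8000

RULES = [(young_online_foreigner, 40),  # (rule, score added when it fires)
         (age_income_mismatch, 25)]

def fraud_score(app: dict) -> int:
    return sum(points for rule, points in RULES if rule(app))

apps = [
    {"channel": "online", "age": 20, "citizen": False,
     "months_at_address": 3, "income": 9000},
    {"channel": "branch", "age": 45, "citizen": True,
     "months_at_address": 60, "income": 4000},
]

# Investigate the highest accumulated scores first.
for app in sorted(apps, key=fraud_score, reverse=True):
    print(fraud_score(app))  # 65, then 0
```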

These rules are easy to implement and need little in the way of infrastructure but they only detect those fraudulent attempts where a mistake was made by the fraudster. In order to broaden the coverage of the application fraud solution it is vital to look beyond the individual application and to consider a wider database of stored information relating to previous applications – both those known to have been fraudulent and those still considered to be good.

 

Known Fraud Data

The most obvious way to do this is to match the information in the application to the information from all previous applications that are known – or at least suspected – to have been fraudulent. The fraudster’s greatest weakness is that certain data fields need to be re-used, either to subvert the lender’s validation processes or to simplify their own.

For example, many lenders will phone applicants to confirm certain aspects of their application or to encourage early utilisation, and so the fraudster needs to supply at least one genuine contact number; in other cases lenders may automatically validate addresses, and so the fraudster needs to supply a valid address. No matter the reason, as soon as data is re-used it becomes possible to identify where that has happened and, in so doing, to identify a higher risk of fraud.

To do this, the known fraud data should be broken down into its component parts and matched field by field, so that any re-use of an individual data field – address, mobile number, employer name, etc. – can be identified even if it is used out of context. Once a match is identified, it is important to calculate its relative importance in order to prioritise alerts. Again this is best done with a scorecard, but expert judgement alone can still add value; for example, several genuine applicants might work for an employer that previously appeared on a fraudulent application, but it would be much more worrying if a new applicant were to apply using a phone number or address previously used by a fraudster.
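A minimal sketch of that field-by-field matching might look like this in Python; the field weights and the sample fraud record are made up for illustration:

```python
# A minimal sketch of field-level matching against known fraud data.
# Weights are hypothetical; in practice they would come from a scorecard.

FIELD_WEIGHTS = {"phone": 50, "address": 40, "employer": 10}

# Known-fraud records, broken into their component fields.
known_frauds = [
    {"phone": "555-0199", "address": "12 High St", "employer": "Acme Ltd"},
]

def match_score(app: dict) -> int:
    """Sum the weight of every application field seen in a known fraud."""
    score = 0
    for field, weight in FIELD_WEIGHTS.items():
        if any(app.get(field) == fraud.get(field) for fraud in known_frauds):
            score += weight
    return score

# Sharing an employer is mildly suspicious; re-using a fraudster's
# phone number is much more so.
print(match_score({"phone": "555-0123", "address": "1 Low Rd",
                   "employer": "Acme Ltd"}))   # 10
print(match_score({"phone": "555-0199", "address": "1 Low Rd",
                   "employer": "Beta plc"}))   # 50
```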

It is also common to prioritise the historical data itself based on whether it originated from a confirmed fraud or a suspected one. Fraud can usually only be confirmed if the loan was actually issued, not paid and then later shown to be fraudulent. Matches to data relating to these accounts will usually be prioritised. Data relating to applications that were stopped based on the suspicion of fraud, on the other hand, may be slightly de-prioritised.

 

Previous Applications

When screening new applications it is important to check their data not just against the known fraud data discussed above but also against all previous ‘good’ applications. This is for two reasons: firstly, not all fraudulent applications are detected; and secondly, especially in the case of identity theft, the fraudster is not always the first person to use the data, so it is possible that a genuine customer previously applied using the data that is now being used by a fraudster.

Previous application data should be matched in two steps if possible. Where the same applicant has applied for a loan before, their specific data should be matched and checked for changes and anomalies. The analysis must be able to show if, for a given social security number, there have been any changes in name, address, employer, marital status, etc. and if so, how likely those changes are to be the result of an attempted identity theft versus a simple change in circumstances. Then – or where the applicant has not previously applied for a loan – the data fields should be separated and matched to all existing data in the same way that the known fraud data was queried.
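The first step – diffing a returning applicant’s data against their previous application – might be sketched like this; the field names and records are hypothetical:

```python
# A minimal sketch of change detection for a returning applicant
# (same social security number). Field names are hypothetical.

STABLE_FIELDS = ["name", "date_of_birth", "address", "employer", "phone"]

def changed_fields(old_app: dict, new_app: dict) -> list:
    """Fields that differ from this applicant's previous application."""
    return [f for f in STABLE_FIELDS if old_app.get(f) != new_app.get(f)]

old = {"name": "A Smith", "date_of_birth": "1980-01-01",
       "address": "12 High St", "employer": "Acme Ltd", "phone": "555-0101"}
new = {"name": "A Smith", "date_of_birth": "1980-01-01",
       "address": "99 Other Rd", "employer": "Acme Ltd", "phone": "555-0999"}

# New contact details on an otherwise identical identity is the classic
# identity take-over pattern described above.
print(changed_fields(old, new))  # ['address', 'phone']
```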

As with the known fraud data it is worth prioritising these alerts. A match to known fraud data should be prioritised over a match to a previous application and within the matches a similar prioritisation should occur: again it would not be unusual for several applicants to share the same employer while it would be unusual for more than one applicant to share a mobile phone number and it would be impossible for more than one applicant to share a social security or national identity number.

 

Shared Data

When matching data in this way, the probability of detecting a fraud increases as more data becomes available for matching. That is why data sharing is such an important tool in the fight against application fraud. Each lender may only ever see a handful of fraud cases, which limits not only their ability to develop good rules but, most importantly, their ability to detect duplicated data fields.

Typically data is shared indirectly and through a trusted third-party. In this model each lender lists all their known and suspected frauds on a shared database that is used to generate alerts but cannot otherwise be accessed by lenders. Every new application is then matched first to the full list of known frauds, before being matched only to the lender’s own previous applications and finally subjected to generic and customised application-specific rules.

 


First things first, I am by no means a scorecard technician. I do not know how to build a scorecard myself, though I have a fair idea of how they are built; if that makes sense. As the title suggests, this article takes a simplistic view of the subject. I will delve into the underlying mathematics at only the highest of levels and only where necessary to explain another point. This article treats scorecards as just another tool in the credit risk process, albeit an important one that enables most of the other strategies discussed on this blog. I have asked a colleague to write a more specialised article covering the technical aspects and will post that as soon as it is available.

 

Scorecards aim to replace subjective human judgement with objective and statistically valid measures; replacing inconsistent anecdote-based decisions with consistent evidence-based ones. What they do is essentially no different from what a credit assessor would do, they just do it in a more objective and repeatable way. Although this difference may seem small, it enables a large array of new and profitable strategies.

So what is a scorecard?

A scorecard is a means of assigning importance to pieces of data so that a final decision can be made regarding the underlying account’s suitability for a particular strategy. It does this by separating the data into its individual characteristics and then assigning a score to each characteristic based on its value and the average risk represented by that value.

For example, an application for a new loan might be separated into age, income, length of relationship with the bank, credit bureau score, etc. Each possible value of those characteristics is then assigned a score based on the degree to which it impacts risk. In this example, ages between 19 and 24 might be given a score of -100, ages between 25 and 30 a score of -75, and so on until ages 50 and upwards are given a score of +10. In this scenario young applicants are ‘punished’ while older customers benefit marginally from their age, implying that risk has been shown to be inversely related to age. An extract of such a scorecard might look like this:

Characteristic     Value      Score
Age                19–24      -100
Age                25–30      -75
…                  …          …
Age                50+        +10

The score for each of these characteristics is then added to reach a final score. That final score is attached to a risk measure, usually something like the probability of the account going 90 days into arrears within the next 12 months. Reviewing this score-to-risk relationship allows a risk manager to set the point at which they will decline applications (the cut-off) and to understand the relative risk of each customer segment on the book, as in the sketch below.
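Here is a minimal Python sketch of both steps – summing characteristic scores, then applying a cut-off via the score-to-risk table. All the bins, points and bad rates are hypothetical illustrations:

```python
# A minimal sketch of scoring and cut-off setting; all figures are made up.

SCORECARD = {
    "age": [(19, 24, -100), (25, 30, -75), (31, 49, -30), (50, 120, 10)],
    "years_with_bank": [(0, 1, -40), (2, 5, 0), (6, 99, 30)],
}

def score(app: dict) -> int:
    """Sum the points earned by each characteristic's value."""
    total = 0
    for characteristic, bands in SCORECARD.items():
        for low, high, points in bands:
            if low <= app[characteristic] <= high:
                total += points
                break
    return total

# Score-to-risk table from historical data: (band low, band high,
# P(90+ days in arrears within 12 months)). Decline anything above 5%.
RISK_BY_BAND = [(-200, -50, 0.15), (-49, 0, 0.07), (1, 200, 0.03)]
RISK_APPETITE = 0.05

def decision(app: dict) -> str:
    s = score(app)
    risk = next(p for low, high, p in RISK_BY_BAND if low <= s <= high)
    return "accept" if risk <= RISK_APPETITE else "decline"

print(decision({"age": 27, "years_with_bank": 3}))   # -75 -> decline
print(decision({"age": 55, "years_with_bank": 10}))  # +40 -> accept
```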

How is a scorecard built?

Basically, what the scorecard builder wants to do is identify which characteristics at one point in time are predictive of a given outcome at some future point. To do this, historic data must be structured so that one period represents the ‘current state’ and the subsequent periods represent the ‘future state’. In other words, if two years of data is available for analysis (the current month can be called Month 0 and the most distant month Month -24), then the earliest six months (Month -24 to Month -18) are used to represent the current state or, more correctly, the observation period, while the subsequent months (Months -17 to 0) represent the known future of those first six months and are called the outcome period. The type of data used in each period reflects these differences: application data (applicant age, applicant income, applicant bureau score, loan size requested, etc.) matters in the observation period, while performance data (current balance, current days in arrears, etc.) matters in the outcome period.

With this simple step completed the accounts in the observation period must be defined and sorted based on their performance during the outcome period. To start this process a ‘bad definition’ and ‘good definition’ must first be agreed upon. This is usually something like: ‘to be considered bad, an account must have gone past 90 days in delinquency at least once during the 18 month outcome period’ and ‘to be considered good an account must never have gone past 30 days in delinquency during the same period’. Accounts that meet neither definition are classified as ‘indeterminate’.
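Applied to performance data, such a definition is mechanical to implement. A minimal sketch, assuming each account carries its month-end days-in-arrears over the outcome period:

```python
# A minimal sketch of applying a good/bad definition over the outcome period.

def label(days_in_arrears_by_month: list) -> str:
    worst = max(days_in_arrears_by_month)
    if worst > 90:
        return "bad"          # ever past 90 days delinquent
    if worst <= 30:
        return "good"         # never past 30 days delinquent
    return "indeterminate"    # in between: excluded from the build

print(label([0, 0, 35, 95, 120]))  # bad
print(label([0, 10, 0, 0, 0]))     # good
print(label([0, 45, 60, 0, 0]))    # indeterminate
```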

Thus separated, the unique characteristics of each group can be identified. The data that was available at the time of application for every ‘good’ and ‘bad’ account is statistically tested and those characteristics with largely similar values within one group but largely varying values across groups are valuable indicators of risk and should be considered for the scorecard. For example if younger customers were shown to have a higher tendency to go ‘bad’ than older customers, then age can be said to be predictive of risk. If on average 5% of all accounts go bad but a full 20% of customers aged between 19 and 25 go bad while only 2% of customers aged over 50 go bad then age can be said to be a strong predictor of risk. There are a number of statistical tools that will identify these key characteristics and the degree to which they influence risk more accurately than this but they won’t be covered here.
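The simplest version of that test is just a comparison of bin-level bad rates against the portfolio average. A minimal sketch on made-up sample data:

```python
# A minimal sketch of testing a characteristic's predictiveness by
# comparing each bin's bad rate to the overall rate. Data is made up.

accounts = [  # (age at application, outcome label)
    (21, "bad"), (22, "good"), (23, "bad"), (35, "good"),
    (41, "good"), (52, "good"), (55, "good"), (58, "bad"),
]

def bad_rate(subset) -> float:
    subset = list(subset)
    return sum(1 for _, y in subset if y == "bad") / len(subset)

overall = bad_rate(accounts)
young = bad_rate(a for a in accounts if a[0] <= 25)
older = bad_rate(a for a in accounts if a[0] >= 50)

# A bin whose bad rate sits far from the average is predictive of risk.
print(f"overall {overall:.0%}, age<=25 {young:.0%}, age>=50 {older:.0%}")
```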

Once each characteristic that is predictive of risk has been identified, along with its relative importance, some cleaning-up of the model is needed to ensure that no characteristics are overly correlated – that is, that no two characteristics are in effect showing the same thing. Where this is the case, only the best of the related characteristics is kept while the others are discarded to prevent, for want of a better term, double-counting. Many characteristics are correlated in some way – for example, the older you are the more likely you are to be married – but this is fine so long as both characteristics add some new information in their own right, as is usually the case with age and marital status: an older, married applicant is less risky than a younger, married applicant, just as a married, older applicant is less risky than a single, older applicant. However, there are cases where two characteristics move so closely together that one adds no new information and should therefore not be included.

So, once the final characteristics and their relative weightings have been selected the basic scorecard is effectively in place. The final step is to make the outputs of the scorecard useable in the context of the business. This usually involves summarising the scores into a few score bands and may also include the addition of a constant – or some other means of manipulating the scores – so that the new scores match with other existing or previous models.

 

How do scorecards benefit an organisation?

Scorecards benefit organisations in two major ways: by describing risk in very fine detail they allow lenders to move beyond simple yes/ no decisions and to implement a wide range of segmented strategies; and by formalising the lending decision they provide lenders with consistency and measurability.

One of the major weaknesses of a manual decisioning system is that it seldom does more than identify the applications which should be declined leaving those that remain to be accepted and thereafter treated as being the same. This makes it very difficult to implement risk-segmented strategies. A scorecard, however, prioritises all accounts in order of risk and then declines those deemed too risky. This means that all accepted accounts can still be segmented by risk and this can be used as a basis for risk-based pricing, risk-based limit setting, etc.

The second major benefit comes from the standardisation of decisions. In a manual system the credit policy may well be centrally conceived, but the quality of its implementation will depend on the branch or staff member actually processing the application. With a scorecard this is no longer the case, and the roll-out of a scorecard is almost always accompanied by a reduction in bad rates.

Over-and-above these risk benefits, the roll-out of a scorecard is also almost always accompanied by an increase in acceptance rates. This is because manual reviewers tend to be more conservative than they need to be in cases that vary in some way from the standard. The nature of a single credit policy is such that, to qualify for a loan, a customer must exceed the minimum requirements for every policy rule. For example, to get a loan the customer must be above the minimum age (say 28), must have been with the bank for more than the minimum period (say 6 months) and must have no adverse remarks on the credit bureau. A client of 26 with a five-year history with the bank and a clean credit report would be declined. With a scorecard in place, though, the relative importance of exceeding one criterion can be weighed against the relative importance of missing another, and a more accurate decision can be made; almost always allowing more customers in.

 

Implementing scorecards

There are three levels of scorecard sophistication and, as with everything else in business, the best choice for any situation will likely involve a compromise between accuracy and cost.

The first option is to create an expert model. This is a manual approximation of a scorecard based on the experience of several experts. Ideally this exercise would be supported by some form of scenario planning tool, where the results of various adjustments can be seen for a series of dummy applications – or genuine historic applications if these exist – until the results meet the expectations of the ‘experts’. This method is better than manual decisioning since it leads to a system that looks at each customer in their entirety and because it enforces a standardised outcome. That said, since it is built upon relatively subjective judgements, it should be replaced with a statistically built scorecard as soon as enough data is available to do so.

An alternative to the expert model is a generic scorecard. These are scorecards which have been built statistically, but using a pool of similar though not customer-specific data. These scorecards are more accurate than expert models so long as the data on which they were built reasonably resembles the situation in which they are to be employed. A bureau-level scorecard is probably the purest example, though generic scorecards exist for a range of products and for each stage of the credit life-cycle.

Ideally, they should first be fine-tuned prior to roll-out to compensate for any customer-specific quirks that may exist. During a fine-tuning, actual data is run through the scorecard and the results are used to make small adjustments to the weightings given to each characteristic, while the structure of the scorecard itself is left unchanged. For example, assume the original scorecard assigned the following weightings: -100 for the age group 19 to 24; -75 for the age group 25 to 30; -50 for the age group 31 to 40; and 0 for the age group 41 upwards. This could be implemented as it is, but if there is enough data to do a fine-tuning it might reveal that in this particular case the weightings should actually be: -120 for the age group 19 to 24; -100 for the age group 25 to 30; -50 for the age group 31 to 40; and 10 for the age group 41 upwards. The scorecard structure, as you can see, does not change.

In a situation where there is no client-specific data and no industry-level data exists, an expert model may be best. However, where there is no client-specific data but where there is industry-level data it is better to use a generic scorecard. In a case where there is both some client-specific data and some industry-level data a fine-tuned generic scorecard will produce the best results.

The most accurate results will always come, however, from a bespoke scorecard. That is a scorecard built from scratch using the client’s own data. This process requires significant levels of good quality data and access to advanced analytical skills and tools but the benefits of a good scorecard will be felt throughout the organisation.



You’ve got to know when to hold ‘em, know when to fold ‘em

Know when to walk away and know when to run

I’ve always wanted to use the lines from Kenny Rogers’ famous song, The Gambler, in an article. But that is only part of the reason I decided to use the game of Texas Holdem poker as a metaphor for the credit risk strategy environment.

The basic profit model for a game of poker is very similar to that of a simple lending business. To participate in a game of Texas Holdem there is a fixed cost (the buy-in) in exchange for which there is the potential to make a profit, but also the risk of making a loss. As each card is dealt, new information is revealed and the player should adjust their strategy accordingly. Not every hand will deliver a profit and some will even incur a fairly substantial loss; however, over time and by following a good strategy, the total profit accumulated from winning hands can be sufficient to cover both the losses from losing hands and the fixed costs of participating, and a profit can thus be made.

Similarly in a lending business there is a fixed cost to process each potential customer, only some of whom will be accepted as actual customers who have the potential to be profitable or to result in a loss.  The lender will make an overall profit only if the accumulated profit from each profitable customer is sufficient to cover the losses from those that weren’t and the fixed processing costs.

In both scenarios, the profit can be maximised by increasing exposure to risk when the odds of a profit are good and reducing exposure, on the other hand, when the odds of a loss are higher. A good card player therefore performs a similar role to a credit analyst: continuously calculating the odds of a win from each hand, designing strategies to maximise profit based on those odds and then adjusting those strategies as more information becomes available.

Originations

To join a game of Texas Holdem, each player needs to buy into that game by placing a ‘blind’ bet before they have seen any of the cards. As this cost is incurred before any of the cards are seen, the odds of victory cannot be estimated. The blind bet is, in fact, the price to see the odds.

Thereafter, each player is dealt two private cards; cards that only they can see. Once these cards have been dealt each player must decide whether to play the game or not.

To play on, each player must enter a further bet. This decision must be made based on the size of the bet and an estimate of the probability of victory given the two known cards. If the player instead chooses not to play, they will forfeit their initial bet.

A conservative player – one who will play only when the odds are strongly in their favour – may lose fewer hands, but they will instead incur a relatively higher cost in lost buy-ins. Depending on the cost of the buy-in and the average odds of winning, the most profitable strategy will change, but it is unlikely to be the most conservative one.

In a lending organisation the equivalent role is played by the originations team. Every loan application that is processed incurs a cost, and so when an application is declined that cost is lost. A conservative scorecard policy will decline a large number of marginal applications, choosing, effectively, to lose a small but known processing cost rather than risk a larger but unknown credit loss. In so doing, though, it also gives up the profit potential on those accounts. As with poker betting strategies, the ideal cut-off will change based on the level of processing costs and the average probability of default, but will seldom be overly conservative.

A card player calculates their odds of victory from the known combinations of cards possible from a standard 52 card deck. The player can create any five card combination from their two known cards and a further five random ones yet to be dealt, while each other player can create a five card combination from seven cards that cannot include the two the player himself holds. With this knowledge, the player can estimate the odds that the two private cards will result in a winning hand and, based on that estimate, decide whether to enter a bet and, if so, of what size; or whether to fold and lose the buy-in.

The methods used to calculate odds may vary, as do the sources of potential profits, but at a conceptual level the theory on which originations is based is similar to the theory which under-pins poker betting.

As each account is processed through a scorecard the odds of it eventually rolling into default are estimated. These odds are then used to make the decision whether to offer credit and, if so, to what extent.  Where the odds of a default are very low the lender will likely offer more credit – the equivalent of placing a larger starting bet – and vice versa.

Customer Management

The reason that card games like Texas Holdem are games of skill, rather than just games of chance, is that the odds of a victory change during the course of a game and so the player is required to adapt their betting strategy as new information is revealed – increasing their exposure to risk as the odds grow better and retreating as the odds worsen. The same is true of a lending organisation, where customer management strategies seek to maximise organisational profit by changing exposure as new information is received.

Once the first round of betting has been completed and each player’s starting position has been determined, the dealer turns over three ‘community cards’. These are cards that all players can see and can use, along with their two private cards, to create their best possible poker hand. A significant amount of new information is revealed when those three community cards are dealt. In time two further community cards will be revealed, and it is from those seven cards that a winning hand will be constructed. So, at this point, each player knows five of the seven cards they will have access to and three of the cards their opponents can use. The number of possible hands becomes smaller and so the odds that the player’s hand will be a winner can be calculated more accurately. That is not to say the odds of a win will go up, just that the odds can be stated with more certainty.

At this stage of the game, therefore, the betting activity usually heats up as players with good hands increase their exposure through bigger bets. Players with weaker hands will try to limit their exposure by checking – that is not betting at all – or by placing the minimum bet possible. This strategy limits their potential loss but also limits their potential gain as the total size of the ‘pot’ is also kept down.

As each of the next two community cards is revealed this process repeats itself with players typically willing to place ever larger bets as the new information received allows them to calculate the odds with more certainty. Only once the final round of betting is complete are the cards revealed and a winner determined. Those players that bet until the final round but still lose will have lost significantly in this instance. However, if they continue to play the odds well they will expect to recuperate that loss – and more – over time.

The customer management team within a lending organisation works on similar principles. As an account begins to operate, new information is received which allows the lender to determine with ever more certainty the probability that the account will eventually default: with every payment that is received on time, the odds of an eventual default decrease; with every broken promise-to-pay, those odds increase; etc.

So the role of the customer management team is to design strategies that optimise the lender’s exposure to each customer based on the latest information received. Where risk appears to be dropping, exposure should be increased through limit increases, cross-selling of new products, reduced pricing, etc. while when the opposite occurs the exposure should be kept constant or even decreased through limit decreases, pre-delinquency strategies, foreclosure, etc.

Collections

As the betting activity heats up around them a player may decide that the odds no longer justify the cost required to stay in the game and, in these cases, the player will decide to fold – and accept a known small loss rather than continue betting and risk an even bigger eventual loss chasing an unlikely victory.

Collections has too many operational components to fit neatly into the poker metaphor, but it can be most closely likened to this decision of whether or not to fold. Not every hand can be a winner, and even hands that initially appeared to be strong can be shown to be weak when the later community cards are revealed. A player who was dealt two hearts and who then saw two further hearts among the first three community cards would be in a strong position, with the odds of catching the fifth heart needed to complete a flush sitting at roughly 35 percent over the two remaining cards. However, if neither of the next two cards is a heart, the probability of a winning hand drops to close to zero.
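For the sceptical, those odds are easy to verify by simulation. A minimal sketch, assuming only that four of the thirteen hearts are already visible (two hole cards plus two in the flop), leaving nine hearts among the 47 unseen cards:

```python
# A minimal Monte Carlo estimate of the flush-draw odds described above.
import random

def estimate_flush_odds(trials: int = 100_000) -> float:
    # 9 of the 47 unseen cards are hearts.
    unseen = ["heart"] * 9 + ["other"] * 38
    hits = 0
    for _ in range(trials):
        turn, river = random.sample(unseen, 2)  # the last two community cards
        if turn == "heart" or river == "heart":
            hits += 1
    return hits / trials

print(f"{estimate_flush_odds():.1%}")  # ~35%
```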

In this situation the player needs to make a difficult decision: they have invested in a hand that has turned out to be a ‘bad’ one and they can either accept the loss or invest further in an attempt to salvage something. If there is little betting pressure from the other players, they might choose to stay in the game by matching any final bets; figuring that because the total pot was large and the extra cost of participating small it was worth investing further in an unlikely win. Money already bet, after all, is a sunk cost. If the bets in the latest round are high however, they might choose to fold instead and keep what money they have left available for investment in a future, hopefully better hand.
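That ‘large pot, small extra cost’ reasoning is just an expected-value calculation – pot odds, in poker terms. A minimal, simplified sketch that ignores raises and multiple opponents:

```python
# A minimal sketch of the pot-odds logic behind the call-or-fold decision.

def should_call(pot: float, cost_to_call: float, p_win: float) -> bool:
    """Folding loses nothing further; calling wins the pot with p_win."""
    ev_call = p_win * pot - (1 - p_win) * cost_to_call
    return ev_call > 0

# A big pot and a small final bet justify chasing even an unlikely win...
print(should_call(pot=500, cost_to_call=20, p_win=0.10))   # True
# ...but a large bet against the same odds does not.
print(should_call(pot=500, cost_to_call=200, p_win=0.10))  # False
```

The same arithmetic sits behind the collections decision below: the outstanding balance is the pot, the collections campaign is the cost to call, and the probability of recovery is the probability of a win.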

As I said, the scope of collections goes well beyond this, but certain key decisions a collections strategy manager must make relate closely to the question of whether or not to fold. Once an account has missed a payment and entered the collections processes, the lender has two options: to invest further time and money in an attempt to collect some or all of the outstanding balance, or to cut their losses and sell the debt, or even write it off.

In cases where there is strong long-term evidence that the account is a good one, the lender might decide – as a card player might when a strong hand is not helped by the fourth community card – to maintain or even increase their exposure by granting the customer some leeway in the form of a payment holiday, a re-aging of debt or even a temporary limit increase. On the other hand, in cases where the new information has forced a negative re-appraisal of the customer’s risk but the value owed by that customer is significant, it might still be preferable for the lender to invest a bit more in an attempt to make a recovery, even though they know that the odds are against them. This sort of an investment would come in the form of an intensive collections campaign or the paid involvement of specialist third party debt collectors.

As with a game of cards, the lender will not always get it exactly right and will over invest in some risky customers and under-invest in others; the goal is to get the investment right often enough in the long-term to ensure a profit overall.

It is also true that a lender who consistently shies away from investing in the collection of marginal debt – one that chooses too easily to write-off debt rather than to risk an investment in its recovery – may create a reputation for themselves that is punitive in the long-run. A lender that is seen as a ‘soft touch’ by the market will attract higher risk customers and will see portfolio risk shift towards the high end as more and more customers let their debt fall delinquent in the hope of a painless write-off. Similarly, a card player that folds in all situations except those where the odds are completely optimal will soon be found out by their fellow players. Whenever they receive the perfect hand and bet accordingly, the rest of the table will likely fold, reducing the size of the ensuing pot which, although won, will be much smaller than it might otherwise have been. In extreme cases, this limiting of the wins gained from good hands may be so severe that the player is unable to cover the losses taken in the games in which they folded.

Summary

The goal of credit risk strategy, like that of a poker betting strategy, is to end with the most money possible. To do this, calculated bets must be taken at various stages and with varying levels of data; risk must be re-evaluated continuously and at times it may become necessary to take a known loss rather than to risk ending up with an even greater, albeit uncertain, loss in the future.

So, in both scenarios, risk should not be avoided but should rather be converted into a series of numerical odds which can be used to inform investment strategies that seek to leverage off good odds and hedge against bad odds. In time, if accurate models are used consistently to inform logical strategies it is entirely possible to make a long-term profit.

Of course in their unique nuances both fields also vary quite extensively from each other, not least in the way money is earned and, most importantly, in the fact that financial services is not a zero sum game. However, I hope that where similarities do exist these have been helpful in understanding how the profit levers in a lending business fit together. For a more technical look at the same issue, you can read my articles on profit modelling in general and for credit cards and banks in particular.


Introduction

Traditional credit scoring is built on the inherent assumption that past behaviour predicts future behaviour or, in simpler terms, that if you have always repaid your loans in the past you are likely to continue to do so into the future. 

 

This method has worked so well because it is generally true that the type of person who has met their obligations in the past is also the type of person who will attempt to do so in the future.  However, behaviour is driven by the intention to act as well as the ability to do so.  Therefore, good or bad intentions are only partially responsible for actions. 

 

The traditional method falls short, though, in that it fails to take into account an individual’s ability to repay. Good intentions and money management skills may have allowed a customer to meet their existing obligations, but it is possible that the new loan, or a change in the external environment, is enough to push them over the edge.

 

Affordability checks, therefore, differ from risk checks in that they test an individual’s ability to repay debt.  Calculating the ability to repay a loan is theoretically easier than calculating the intention to repay a loan.  Simply put, a customer is able to repay a loan when they have more money available to meet their debt obligations than they need to keep those obligations up-to-date.

 

Affordability:    Available Cash ≥ Cost of Debt Obligations

 

So, a lack of affordability can originate from two events – a decrease in available cash or an increase in the cost of debt obligations.  The relative availability of cash is not something over which the bank has much control.  So, the only way in which the bank can directly impact the affordability of a customer is by changing the cost of debt obligations – usually by providing further credit but also by changing pricing. 

 

Therefore, whenever the bank is increasing the debt burden of a customer they need to calculate the implications that this strategy will have on affordability.  Increases in the debt burden can take several forms but include new loans (originations), credit limit increases (account management), debt restructuring (collections), etc.
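At its simplest, the test that should run before any of those events is a one-line comparison derived from the formula above. A minimal sketch with hypothetical figures:

```python
# A minimal sketch of an affordability test run before increasing a
# customer's debt burden. Field names and figures are hypothetical.

def affordable(available_cash: float, current_obligations: float,
               new_instalment: float) -> bool:
    """Available cash must cover existing obligations plus the new one."""
    return available_cash >= current_obligations + new_instalment

# A customer with 9,000 available and 6,500 of existing obligations can
# absorb a 1,500 instalment but not a 3,000 one.
print(affordable(9_000, 6_500, 1_500))  # True
print(affordable(9_000, 6_500, 3_000))  # False
```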

 

Affordability and Bad Debt

Although subtly different, risk and affordability have the same outcome measure – bad debt provisions or bad debt write-offs.  In fact, a lack of affordability can cause a low risk account to display very similar behaviours to a high risk account and to arrive at the same end.

 

We have seen that a lack of affordability can originate from one of two events – a decrease in available cash or an increase in the cost of debt obligations.  In turn, each of these events can be attributed to either internal or external factors and, depending on which of these is dominant in any one case, the strategies employed will change accordingly.

 

The matrix below combines the two possible causes of a negative change in affordability and provides some examples of each.

 

[Figure: affordability matrix – internal and external causes of a decrease in available cash or an increase in the cost of debt obligations]

 

When a portfolio experiences a large and widespread increase in bad debt – or the behaviour that usually precedes bad debt – it is quite possible that this has been caused by a deterioration in the external forces that affect affordability rather than by a deterioration in risk –  a large rise in interest rates for example.

 

Internal and external factors can act in conjunction, so it is important to identify which of them is to blame for the deterioration of a customer’s affordability.  Because external factors are, by definition, outside of the control of the individual, they are unlikely to be predictable using individual-level data.  Instead, external factors usually need to be predicted based on wider economic trends.  Internal factors are more closely correlated to individual behaviour and so should be predictable using individual-level data.

 

We could imagine a customer (represented by the circle in the diagram below) as falling somewhere within a range of “affordability” – from not being able to meet any of their debt obligations to being able to meet them all easily (represented by the pyramid).  A change in internal behaviour will move that customer closer towards one of those extremes.

 

The external factors act as a hurdle (represented by the line), with customers above the line being able to afford their current debt obligations and customers who fall below the line being unable to do so.  As the external environment worsens, the hurdle moves up the pyramid erasing the affordability of previously sound customers.

 

[Figure: the affordability pyramid – customers above the external hurdle line can afford their current debt obligations; those below it cannot]

 

In the illustration above it is clear how a customer who can currently afford their debt obligations (bold circle, bold line) could find themselves unable to afford their debt in the future, should their internal situation worsen (loss of commission income, loss of work, etc.) in conjunction with a worsening of external factors (increase in interest rates, new debt taken on, etc.).

 

A significant worsening in external factors can therefore make a sudden impact across a portfolio, while a worsening in internal factors is likely to be much more isolated in its impact.

 

Measuring Affordability

Unfortunately, it is not as easy to measure affordability in practice as it is in theory.  The components of the affordability calculation are exceedingly difficult to obtain indirectly.  This means that a bank – or other lending institution – is usually forced to work with an estimate of affordability which is usually arrived at using an amalgamation of customer-supplied (direct) data and third-party (indirect) data. 

 

Unfortunately, customer-supplied data is prone to intentional and unintentional manipulation while third-party data is seldom complete.  It is unlikely, even for a customer with a rich credit history, for a third-party to know all of the relevant in-flows and out-flows.  This scenario is further complicated once joint-incomes and joint-debts are considered.  The relative availability of data will vary within and across markets – affected by factors such as credit bureau sophistication, privacy laws, etc.  It is important to know the relative availability of affordability data as it is the main factor dictating the level of predictions that are possible. 

 

As it is easier to prove that a customer cannot afford a loan than it is to prove that they can afford one, the relative availability of data and systems will dictate the type of affordability decisions that can be made.  For example, where an organisation only has access to basic, customer-supplied data, it will be impossible to rule out manipulation of the data for the customer’s benefit.  However, it is reasonable to assume that whatever manipulation may have occurred would have been undertaken so as to improve the apparent level of affordability of that client.  Therefore, where the data provided is sufficient to show a lack of affordability, it can be taken at face value that the customer should not receive the loan.  However, where the data provided shows the customer to be in a position to afford the loan, it may not be sufficient to guarantee that the real situation is also as such.  So, the only decisions the organisation will be able to make are those where a lack of affordability can be clearly proven.  Should that same organisation later gain access to third-party data, it might become possible to identify customers who probably can’t afford the loan, etc.
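In code, the asymmetry becomes obvious: the check can return a confident ‘decline’ but never a confident ‘affordable’. A minimal sketch with hypothetical labels and figures:

```python
# A minimal sketch of the negatively framed affordability decision:
# with weak, customer-supplied data we can only ever prove a *lack*
# of affordability, never its presence.

def affordability_decision(stated_cash: float, obligations: float,
                           new_instalment: float) -> str:
    if stated_cash < obligations + new_instalment:
        # Self-reported data would only ever be manipulated upwards, so
        # a shortfall can be taken at face value: decline.
        return "decline: lack of affordability proven"
    # Passing the check proves nothing; it merely fails to disprove.
    return "proceed: lack of affordability not proven"

print(affordability_decision(5_000, 4_000, 2_000))  # decline
print(affordability_decision(5_000, 2_000, 1_500))  # proceed, not 'proven affordable'
```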

 

 

 

In reality, the affordability decision is always negatively framed – as with a statistical test where the hypothesis is either rejected or not rejected, never accepted.  In our example, should there have been insufficient evidence to show a lack of affordability, the bank would not have proven affordability, it would simply have failed to prove a lack thereof.  The fact that the bank may subsequently choose to proceed with the loan does not change this.  It is impossible to truly prove affordability.  The possibility of unknown data will always exist – perhaps the customer has a gambling problem, perhaps they know that they will be retrenched in a month, etc.

 

This might be a subtle point that seems little more than semantics, but it is important to the strategy setting process.    Unless the data and systems available are very sophisticated, affordability checks should always be a means of identifying otherwise good accounts that should be declined, never as a means of identifying otherwise risky accounts that should be approved.

 

Measuring Affordability in Practice

There are three important questions to answer – when should affordability tests be performed, what should they measure and how accurate should they aim to be.

 

Affordability is affected by a change in available cash or a change in the total debt burden.  As only the size of the debt burden is within the bank’s control – at least to a degree – this is the most important trigger for an affordability test.

 

So, whenever the bank plans to increase the debt burden by offering further credit, it needs to test the proposed strategy’s likely impact on the target customers’ affordability.  Such a test would need to consider the impact that the strategy will likely have on internal factors as well as the impact that any anticipated changes in the external factors could have on the same customers.  

 

Having identified what should be measured, the final step is to determine the achievable level of accuracy.  It is never possible to prove affordability beyond doubt, so an affordability test will always be looking to find sufficient evidence of a lack of affordability.  Where little information exists, only the clearest cases can be identified – those where the customer can definitely not afford the loan – while, with slightly more information, it might be possible to identify those likely, but not guaranteed, to be unable to afford the loan, etc.  This is an important step not only to manage expectations but also to ensure the results of the affordability test are used in the right context.

 

 
