
Posts Tagged ‘Analytics’

I wrote my layman’s introduction to scoring a while ago now and never delivered the promised more in-depth articles. This is the first in a series of articles correcting that oversight. The team at Scorto has very kindly provided me with a white paper on scorecard building, which I will break into sections and reproduce here. In this first article, I’ll look at reject inference, a topic that has been asked about before.

One of the inherent problems with a scorecard is that while you can easily test whether you made the right decision in accepting an application, it is much harder to know whether you made the right decision in rejecting one. In the day-to-day running of a business this might not seem like much of a problem, but it is dangerous in two ways:

· it can limit the highly profitable growth opportunities around the cut-off point by hiding any segmenting behaviour a characteristic might have; and
· it can lead to a point where the data available for creating new scorecards represents only a portion of the population likely to apply. As this portion is disproportionately ‘good’, it can cause future scorecards to under-estimate the risk present in the applicant population.

Each application provides a lender with a great deal of characteristic data: age, income, bureau score, etc. That application data is expensive to acquire, but of limited value until it is connected with behavioural data. When an application is approved, that value-adding behavioural data follows as a matter of course and comes cheaply: did the customer of age x, with income y and bureau score z, go “bad” or not? A rejected application generates no such data – unless we go out of our way to get it, and that’s where reject inference comes into play.

The general population in a market will have an average probability of bad that is influenced by various national and economic factors but is generally stable. A smaller sub-population will make up the total population of applicants for any given loan product – the average probability of bad in this applicant population will rise and fall more readily depending on marketing and product design. It is the risk of that total population of applicants that a scorecard should aim to understand. However, the data from existing customers is not a full reflection of that population: it has been filtered through the approval process and stripped of many of its “bads”. Very often the key data problem in a scorecard build is the shortage of information on “bad” outcomes, since that is exactly what we are trying to model – the probability that an application with a given set of characteristics will end up “bad”. The more conservative the scoring strategy in question, the more the data will become concentrated in the better score ranges and the weaker it will become for future scorecard builds. Clearly we need a way to bring that information back. Just because the rejected applications were too risky to approve doesn’t mean they are too risky to add value in this exercise. We do this by combining the application data of the rejected applicants with external data sources or proxies. The main difficulty with this approach is the unavailability and/or inconsistency of the data, which may make it difficult to classify an outcome as “good” or “bad”. A number of methods can be used to infer the performance of rejected applicants.

Simple Augmentation
Not all rejected applications would have gone bad. We knew this at the time we rejected them; we just knew that too few would stay good to compensate for those that would go bad. So while a segment of applications with a 15% probability of bad might be deemed too risky to approve, 85% of them would still have been good accounts. Using that knowledge we can reconsider the rejected applications in the data exercise, as sketched in the example that follows the steps below.

· A base scoring model is built using data from the borrowers whose behavior is known – the previously approved book.
· Using that model, the rejected applications are scored, the expected percentage of “bad” borrowers is estimated, and that performance is assigned at random – but in proportion – across the rejected applications.
· The cut-off point should be set in accordance with the rules of the current lending policy that define the permissible level of bad borrowers.
· Information on the rejected and approved requests is merged and the resulting set is used to build the final scoring model.
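As a concrete illustration, here is a minimal sketch of simple augmentation in Python. It is not from the Scorto paper: the synthetic data, the column names and the use of scikit-learn’s logistic regression as the base model are all assumptions made purely for illustration.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
features = ["age", "income", "bureau_score"]

def fake_book(n, bad_rate):
    # Hypothetical application data, purely for illustration.
    df = pd.DataFrame({
        "age": rng.integers(19, 65, n),
        "income": rng.integers(1000, 10000, n),
        "bureau_score": rng.integers(300, 850, n),
    })
    df["bad"] = (rng.random(n) < bad_rate).astype(int)
    return df

approved = fake_book(5000, 0.05)                        # outcomes known
rejected = fake_book(2000, 0.15).drop(columns="bad")    # outcomes unknown

# 1. Build a base model on the approved (known-outcome) book.
base_model = LogisticRegression(max_iter=1000).fit(approved[features], approved["bad"])

# 2. Score the rejects to estimate each one's probability of going bad.
p_bad = base_model.predict_proba(rejected[features])[:, 1]

# 3. Assign "bad" performance at random, but in proportion to those estimates.
rejected["bad"] = (rng.random(len(rejected)) < p_bad).astype(int)

# 4. Merge approved and rejected records and build the final scoring model.
combined = pd.concat([approved, rejected], ignore_index=True)
final_model = LogisticRegression(max_iter=1000).fit(combined[features], combined["bad"])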

Accept/ Reject Augmentation
This method corrects the weights of the base scoring model by taking into consideration the likelihood that a given request would have been approved.
· The first step is to build a model that estimates the likelihood of a request’s approval or rejection.
· The weights of the characteristics are then adjusted to take into account the likelihood of approval determined in the previous step. This is done so that the weight given to each approved record is inversely proportional to its likelihood of approval. So, for example, if the original approval rate in a certain cluster was 50%, each approved record is replicated to stand in for itself and for the one that was rejected (a sketch follows after this list).
· This method is generally preferable to simple augmentation, but it is not without its own drawbacks. Two key problems can be created by augmentation: the impact of small and unusual groups can be exaggerated (such as low-side overrides for VIP clients); and, because you have only modelled performance on approved accounts, the approval rates in each node will be either 0% or 100%.
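A minimal sketch of that re-weighting idea, reusing the approved, rejected and features objects from the earlier simple-augmentation sketch (the logistic accept/reject model and the inverse-probability weighting are my own illustrative choices, not a specification from the paper):

import pandas as pd
from sklearn.linear_model import LogisticRegression

# Label the full applicant population with its known accept/reject outcome.
approved["accepted"] = 1
rejected["accepted"] = 0
all_apps = pd.concat([approved, rejected], ignore_index=True)

# 1. Model the likelihood of approval from the application characteristics.
accept_model = LogisticRegression(max_iter=1000).fit(all_apps[features], all_apps["accepted"])

# 2. Weight each approved record by 1 / p(approval): a record with a 50%
#    approval likelihood stands in for itself and one rejected look-alike.
p_accept = accept_model.predict_proba(approved[features])[:, 1]
weights = 1.0 / p_accept

# 3. Re-fit the scoring model on the approved records only, using the weights.
final_model = LogisticRegression(max_iter=1000).fit(
    approved[features], approved["bad"], sample_weight=weights)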

Fuzzy Augmentation
The distinguishing feature of this method is that each rejected request is split and used twice, to reflect the likelihoods of both the good and the bad outcome. In other words, if a rejected application has a 15% probability of going bad, it is split so that 15% of the person is assumed to go bad and 85% is assumed to stay good; a short sketch follows after the steps below.
· Classification
– The rejected requests are scored using a base scoring model built on requests with a known status;
– The likelihood of default p(bad) and of the “good” outcome p(good) are determined based on the set cut-off point, which defines the required percentage of “bad” requests (p(bad) + p(good) = 1);
– Two records, corresponding to the likelihoods of the “good” and “bad” outcomes, are formed for each rejected request;
– The rejected requests are evaluated taking both outcomes into consideration: the record representing the “good” outcome is assigned the weight p(good), and the record representing the “bad” outcome is assigned the weight p(bad).
· Clarification
– The data on the approved requests is merged with the data on the rejected requests, and the rating of each request is adjusted to take into account the likelihood of its approval. For example, the frequency of the “good” outcome for a rejected request is evaluated as the “good” outcome multiplied by its weight coefficient.
– The final scoring model is built based on the combined data set.
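Again purely as an illustration, and reusing the objects from the first sketch, the splitting and weighting steps of fuzzy augmentation might look something like this:

import pandas as pd
from sklearn.linear_model import LogisticRegression

# Each reject is used twice: once as "good" weighted by p(good), and once as
# "bad" weighted by p(bad).
p_bad = base_model.predict_proba(rejected[features])[:, 1]

good_half = rejected[features].copy()
good_half["bad"] = 0
good_half["weight"] = 1.0 - p_bad          # p(good) share of each reject

bad_half = rejected[features].copy()
bad_half["bad"] = 1
bad_half["weight"] = p_bad                 # p(bad) share of each reject

approved_part = approved[features + ["bad"]].copy()
approved_part["weight"] = 1.0              # known outcomes count in full

combined = pd.concat([approved_part, good_half, bad_half], ignore_index=True)
final_model = LogisticRegression(max_iter=1000).fit(
    combined[features], combined["bad"], sample_weight=combined["weight"])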

Reject inference is not a silver bullet. Used inexpertly, it can lead to less accurate rather than more accurate results. Wherever possible, it is better to augment the exercise with a test-and-learn experiment to understand the true performance of small portions of key rejected segments. A new scorecard can then be built based on the data from that test segment alone, and the true bad rates from that model can be compared to, and averaged with, those from the reject inference model to get a more reliable bad rate for the rejected population.


First things first, I am by no means a scorecard technician. I do not know how to build a scorecard myself, though I have a fair idea of how they are built; if that makes sense. As the title suggests, this article takes a simplistic view of the subject. I will delve into the underlying mathematics at only the highest of levels and only where necessary to explain another point. This article treats scorecards as just another tool in the credit risk process, albeit an important one that enables most of the other strategies discussed on this blog. I have asked a colleague to write a more specialised article covering the technical aspects and will post that as soon as it is available.

 

Scorecards aim to replace subjective human judgement with objective and statistically valid measures; replacing inconsistent anecdote-based decisions with consistent evidence-based ones. What they do is essentially no different from what a credit assessor would do, they just do it in a more objective and repeatable way. Although this difference may seem small, it enables a large array of new and profitable strategies.

So what is a scorecard?

A scorecard is a means of assigning importance to pieces of data so that a final decision can be made regarding the underlying account’s suitability for a particular strategy. It does this by separating the data into its individual characteristics and then assigning a score to each characteristic based on its value and the average risk represented by that value.

For example, an application for a new loan might be separated into age, income, length of relationship with the bank, credit bureau score, etc. Each possible value of those characteristics is then assigned a score based on the degree to which it impacts risk. In this example, ages between 19 and 24 might be given a score of -100, ages between 25 and 30 a score of -75, and so on until ages 50 and upwards are given a score of +10. In this scenario young applicants are ‘punished’ while older customers benefit marginally from their age, implying that risk has been shown to be inversely related to age. The diagram below shows an extract of a possible scorecard:
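As a rough illustration of such an extract, here is a minimal sketch in Python. The age points are the example values from the paragraph above; the income and bureau-score bands and their points are invented purely for illustration:

# A toy scorecard extract: each characteristic is banded and each band
# carries a number of points (all values are illustrative only).
SCORECARD = {
    "age": [(19, 24, -100), (25, 30, -75), (31, 40, -50), (41, 49, 0), (50, 120, 10)],
    "income": [(0, 1999, -50), (2000, 4999, 0), (5000, 10**9, 25)],      # assumed bands
    "bureau_score": [(0, 579, -80), (580, 669, -20), (670, 850, 30)],    # assumed bands
}

def score_application(application: dict) -> int:
    # Sum the points of the band that each characteristic value falls into.
    total = 0
    for characteristic, bands in SCORECARD.items():
        value = application[characteristic]
        for low, high, points in bands:
            if low <= value <= high:
                total += points
                break
    return total

print(score_application({"age": 26, "income": 3500, "bureau_score": 700}))  # -75 + 0 + 30 = -45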

The score for each of these characteristics is then added to reach a final score. The final score produced by the scorecard is attached to a risk measure; usually something like the probability of an account going 90 days into arrears within the next 12 months. Reviewing this score-to-risk relationship allows a risk manager to set the point at which they will decline applications (the cut-off) and to understand the relative risk of each customer segment on the book. The diagram below shows how this score-to-risk relationship can be used to set a cut-off.

How is a scorecard built?

Basically, what the scorecard builder wants to do is identify which characteristics at one point in time are predictive of a given outcome at or before some future point in time. To do this, historic data must be structured so that one period represents the ‘present state’ and the subsequent periods represent the ‘future state’. In other words, if two years of data are available for analysis (the current month can be called Month 0 and the earliest month Month -24), then the most distant six months (Month -24 to Month -18) are used to represent the ‘current state’ or, more correctly, the observation period, while the subsequent months (Month -17 to Month 0) represent the known future of those first six months and are called the outcome period. The type of data used in each of these periods differs to reflect their roles: application data (applicant age, applicant income, applicant bureau score, loan size requested, etc.) is important in the observation period, while performance data (current balance, current days in arrears, etc.) is important in the outcome period.

With this simple step completed, the accounts in the observation period must be classified based on their performance during the outcome period. To start this process, a ‘bad definition’ and a ‘good definition’ must first be agreed upon. This is usually something like: ‘to be considered bad, an account must have gone past 90 days in delinquency at least once during the 18-month outcome period’ and ‘to be considered good, an account must never have gone past 30 days in delinquency during the same period’. Accounts that meet neither definition are classified as ‘indeterminate’.
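As a minimal sketch of applying such a definition, assume a hypothetical performance table with one row per account and a column holding the worst delinquency (in days) reached during the outcome period:

import pandas as pd

# Hypothetical outcome-period performance data, one row per account.
performance = pd.DataFrame({
    "account_id": [1, 2, 3, 4],
    "worst_days_in_arrears": [0, 25, 45, 120],
})

def classify(worst_days: int) -> str:
    # The example good/bad definition from the text.
    if worst_days > 90:
        return "bad"            # went past 90 days at least once
    if worst_days <= 30:
        return "good"           # never went past 30 days
    return "indeterminate"      # meets neither definition

performance["outcome"] = performance["worst_days_in_arrears"].apply(classify)
print(performance)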

Thus separated, the distinguishing characteristics of each group can be identified. The data that was available at the time of application for every ‘good’ and ‘bad’ account is statistically tested; characteristics whose values are largely similar within one group but vary widely across groups are valuable indicators of risk and should be considered for the scorecard. For example, if younger customers are shown to have a higher tendency to go ‘bad’ than older customers, then age can be said to be predictive of risk. If on average 5% of all accounts go bad, but a full 20% of customers aged between 19 and 25 go bad while only 2% of customers aged over 50 go bad, then age can be said to be a strong predictor of risk. There are a number of statistical tools that can identify these key characteristics, and the degree to which they influence risk, more rigorously than this, but they won’t be covered here.
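Something along these lines would produce the bad rate by age band used in the comparison above; the band edges and the tiny sample are my own illustration:

import pandas as pd

# Assumed observation-period data: applicant age plus the 0/1 "bad" flag
# derived from the outcome period.
accounts = pd.DataFrame({
    "age": [21, 23, 35, 52, 60, 28, 45, 58],
    "bad": [1, 0, 0, 0, 0, 1, 0, 0],
})

age_band = pd.cut(accounts["age"], bins=[18, 25, 30, 40, 50, 120],
                  labels=["19-25", "26-30", "31-40", "41-50", "50+"])
bad_rate_by_band = accounts.groupby(age_band, observed=True)["bad"].mean()
print(bad_rate_by_band)   # bands with rates far above or below the average are predictive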

Once each characteristic that is predictive of risk has been identified, along with its relative importance, some cleaning-up of the model is needed to ensure that no characteristics are overly correlated; that is, that no two characteristics are in effect showing the same thing. Where this is the case, only the best of the related characteristics is kept and the other is discarded to prevent, for want of a better term, double-counting. Many characteristics are correlated in some way (for example, the older you are the more likely you are to be married) but this is fine so long as both characteristics add some new information in their own right, as is usually the case with age and marital status: an older, married applicant is less risky than a younger, married applicant, just as a married, older applicant is less risky than a single, older applicant. However, there are cases where two characteristics move so closely together that one adds no new information and should therefore not be included.

So, once the final characteristics and their relative weightings have been selected, the basic scorecard is effectively in place. The final step is to make the outputs of the scorecard usable in the context of the business. This usually involves summarising the scores into a few score bands and may also include the addition of a constant, or some other means of manipulating the scores, so that the new scores match other existing or previous models.
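As a hedged sketch of that final step, the snippet below applies a simple linear adjustment so that raw scores line up with an earlier model’s scale and then summarises them into bands; the offset, factor and band edges are all invented:

# Align raw scorecard output with a previous model's scale, then band it.
OFFSET, FACTOR = 600, 1.5                     # assumed calibration constants
BAND_UPPER = [520, 580, 640, float("inf")]    # assumed band edges
BAND_NAMES = ["D", "C", "B", "A"]

def to_final_score(raw_score: float) -> float:
    return OFFSET + FACTOR * raw_score

def to_band(final_score: float) -> str:
    for upper, name in zip(BAND_UPPER, BAND_NAMES):
        if final_score <= upper:
            return name
    return BAND_NAMES[-1]

raw = -45    # e.g. the toy application scored earlier
final = to_final_score(raw)
print(final, to_band(final))                  # 532.5 falls into band "C"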

 

How do scorecards benefit an organisation?

Scorecards benefit organisations in two major ways: by describing risk in very fine detail they allow lenders to move beyond simple yes/ no decisions and to implement a wide range of segmented strategies; and by formalising the lending decision they provide lenders with consistency and measurability.

One of the major weaknesses of a manual decisioning system is that it seldom does more than identify the applications which should be declined, leaving those that remain to be accepted and thereafter treated as the same. This makes it very difficult to implement risk-segmented strategies. A scorecard, however, prioritises all accounts in order of risk and then declines those deemed too risky. This means that all accepted accounts can still be segmented by risk and this can be used as a basis for risk-based pricing, risk-based limit setting, etc.

The second major benefit comes from the standardisation of decisions. In a manual system the credit policy may well be centrally conceived, but the quality of its implementation will be dependent on the branch or staff member actually processing the application. By implementing a scorecard this is no longer the case, and the roll-out of a scorecard is almost always accompanied by a reduction in bad rates.

Over and above these risk benefits, the roll-out of a scorecard is also almost always accompanied by an increase in acceptance rates. This is because manual reviewers tend to be more conservative than they need to be in cases that vary in some way from the standard. The nature of a single credit policy is such that to qualify for a loan a customer must exceed the minimum requirements for every policy rule. For example, to get a loan the customer must be above the minimum age (say 28), must have been with the bank for more than the minimum period (say 6 months) and must have no adverse remarks on the credit bureau. A client of 26 with a five-year history with the bank and a clean credit report would be declined. With a scorecard in place, though, the relative importance of exceeding one criterion can be weighed against the relative importance of missing another and a more accurate decision can be made; almost always allowing more customers in.

 

Implementing scorecards

There are three levels of scorecard sophistication and, as with everything else in business, the best choice for any situation will likely involve a compromise between accuracy and cost.

The first option is to create an expert model. This is a manual approximation of a scorecard based on the experience of several experts. Ideally this exercise would be supported by some form of scenario planning tool where the results of various adjustments could be seen for a series of dummy applications – or genuine historic applications if these exist – until the results meet the expectations of the ‘experts’. This method is better than manual decisioning since it leads to a system that looks at each customer in their entirety and because it enforces a standardised outcome. That said, since it is built upon relatively subjective judgements, it should be replaced with a statistically built scorecard as soon as enough data is available to do so.

An alternative to the expert model is a generic scorecard. These are scorecards which have been built statistically but using a pool of similar, though not customer-specific, data. These scorecards are more accurate than expert models, so long as the data on which they were built reasonably resembles the situation in which they are to be employed. A bureau-level scorecard is probably the purest example of such a scorecard, though generic scorecards exist for a range of different products and for each stage of the credit life-cycle.

Ideally, they should first be fine-tuned prior to their roll-out to compensate for any customer-specific quirks that may exist. During a fine-tuning, actual data is run through the scorecard and the results are used to make small adjustments to the weightings given to each characteristic, while the structure of the scorecard itself is left unchanged. For example, assume the original scorecard assigned the following weightings: -100 for the age group 19 to 24; -75 for the age group 25 to 30; -50 for the age group 31 to 40; and 0 for the age group 41 upwards. This could be implemented as it is, but if there is enough data to do a fine-tune it might reveal that in this particular case the weightings should actually be as follows: -120 for the age group 19 to 24; -100 for the age group 25 to 30; -50 for the age group 31 to 40; and 10 for the age group 41 upwards. The scorecard structure, as you can see, does not change.

In a situation where there is no client-specific data and no industry-level data exists, an expert model may be best. However, where there is no client-specific data but where there is industry-level data it is better to use a generic scorecard. In a case where there is both some client-specific data and some industry-level data a fine-tuned generic scorecard will produce the best results.

The most accurate results will always come, however, from a bespoke scorecard. That is a scorecard built from scratch using the client’s own data. This process requires significant levels of good quality data and access to advanced analytical skills and tools but the benefits of a good scorecard will be felt throughout the organisation.



There are certainly analytical tools in the market that are more sophisticated than Excel, and there are certainly situations where these are needed to deliver enhanced accuracy or advanced features. However, this article concentrates on building models to aid the decision-making of a business leader rather than a specialist statistician, where the need is for a model that is flexible and easy to use.  Since Excel is so widely available and understood, it is usually the best tool for this purpose.

In this article I will assume a basic understanding of Excel and its in-built mathematical functions.  Instead, I’ll discuss how some of the more advanced functions can be used to build decision-aiding models and, in particular, how to create flexible matrices.

Spreadsheets facilitate flexibility by allowing calculations to be parameterised so that a model can be built with the logic fixed but the input values flexible.  For example, the management of a bank may agree that the size of a credit limit granted to a customer should be based on that customer’s risk and income, and that VIP customers should be entitled to an extra limit extension, though they may disagree over one or more of the inputs.  The limit-setting logic can be programmed into Excel as an equation that remains constant, while the definition of what constitutes each risk group, each income band, the size of each limit and the size of the VIP bonus extension can each be changed at will.

When building a model to assist with business decision-making, the key is to make sure that each profit lever is represented by a parameter that can be quickly and easily altered by decision-makers without altering the logical and established links between each of those profit levers.  Making use of Excel’s advanced functions and some simple logic, it is possible to do this in almost all situations without the resulting model becoming too complex for practical business use.

*     *     *     *     *

If I were to guess, I would say that over 80% of the functionality needed to build a flexible decision-making model can be created using Excel’s basic mathematical functions together with IF clauses and LOOKUPs.

IF Clauses

IF clauses, once understood, have a multitude of uses.  When building a model to aid decision-making they are usually one of the most important tools at an analyst’s disposal.  Simply put, an IF clause provides a binary command: if a given event happens do this, if not do that.  If a customer number has been labelled as VIP, add €5 000 extra to the proposed limit, if not do not add anything, etc. 

IF( CustStatus = “VIP”, 5000, 0 )

Using this simple logic, it is possible to replicate a decision tree connecting a large number of decisions to create a single strategy.  IF clauses are very useful for categorising data, for identifying or selecting specific events, etc.

There are two important variations of the basic IF clause: SUMIF and COUNTIF.  These two functions allow you to determine how often, or to what degree, a certain event has occurred.  Both functions have the same underlying logic, though the COUNTIF function is simpler: what is the total sum of balances on all VIP accounts, or simply how many VIP accounts are there?

SUMIF( Sheet1!$A$1:$A$200, “VIP”, Sheet1!$B$1:$B$200 ) or

COUNTIF( Sheet1!$A$1:$A$200, “VIP” )

 

Lookups

Look-ups, on the other hand, are used to retrieve related data; replicating some of the basic functionality of a database. 

A ‘lookup’ will take one value and retrieve a corresponding alternate value from a specified table.  This is perhaps easier to understand through an example: assume there is a list showing which branch each of a bank’s customers belongs to.  Given a random selection of customer numbers, a lookup would take each of those customer numbers, search the larger list until it found the matching number, and then retrieve the associated branch name next to that customer number in the table.
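In Excel, that lookup might look something like the formula below; the sheet name and cell references are illustrative assumptions, with customer numbers in column A and branch names in column B:

VLOOKUP( CustNum, Customers!$A$1:$B$500, 2, FALSE )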

 

By ending the statement with ‘FALSE’, only exact matches are permitted.  If I had ended the function with ‘TRUE’, it would have looked for the nearest possible match to the given customer number from within the list and returned the value corresponding to that.  This is not particularly useful in an example like this one, but it is a useful way to group values into logical batches, among other things.  For example, if I had a list of salaries and wanted to summarise them into salary bands, I could create a table with the lowest and highest value in each band and then use a lookup ending with TRUE to find the band into which each unique salary falls.
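For instance, with the lower bound of each salary band in the first column of a small table (sorted ascending) and the band name next to it, a formula along these lines would do the grouping – the sheet name and range are again illustrative:

VLOOKUP( Salary, Bands!$A$1:$B$10, 2, TRUE )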

 

There are actually two types of lookups in Excel, vertical lookups and horizontal lookups.  The former looks down a list until it finds the matching number (and then moves across to find the pertinent field) while the latter looks across a list (and then moves down to find the pertinent field); other than that the logic remains the same.

In the above example, the lookup takes a given customer number, finds it within a table on the sheet and then returns the value in the second column from the left of that table.  If it had instead been an HLOOKUP function, the value returned would have been the one in the second row from the top.

 

Embedded Functions

The real value of IF clauses and LOOKUPs comes when they are used together, either with each other or with other Excel functions.  For example: if the account is labelled “VIP”, then look for the associated relationship manager in a list of all the relationship managers; if not, then look for the associated branch name – in both cases using the customer number to do the matching.

IF( CustStatus = “VIP”, VLOOKUP( CustNum, RelMans!$A$1:$B$20, 2, FALSE), VLOOKUP (CustNum, Branches!$A$1:$B$50, 2, FALSE))

In these cases, the results of the embedded function are used by the main function to deliver a result.

Matrices

In most cases, however, businesses need to make decisions on more factors than can be represented by simple lists; in our example, credit limits cannot be set with reference to risk alone – income, as a proxy for spend, also needs to be borne in mind.  When building a business model, a useful tool is therefore a two-dimensional matrix from which results can be retrieved using embedded VLOOKUPs and HLOOKUPs.  Creating matrices in Excel is a three-step process – at least, I only know how to do it using three steps.

I will walk through the example of a limit setting matrix.  In this example I want to set a limit for each customer based on a combination of the customer’s risk and income while also keeping product restrictions in mind.  I want this model to be flexible enough so that I can easily change the definition of the risk and income bands as well as the prices assigned to each segment of the matrix.

The first step is to create the desired matrix, choosing the axis labels and number of segments.  Within this matrix, each segment should be populated with the desired limit.  The labels of the matrix will remain fixed though the definition of each label can be changed as needed.  The limit in each segment can be hard-coded in – €5 000 for example – or can relate to a reference cell – product minimum plus a certain amount for example.

In this example I have decided to use a 12-segment matrix that caters for 4 income bands (Low, Moderate, High and Very High) and 3 risk bands (Low, Moderate and High).  I’ve then populated the matrix with the limits we will use as a starting point for our discussions.  Managers will not only want to know how the proposed model impacts limits at a customer level, they will also want to see how it impacts limits at a portfolio level, so I have used COUNTIF and SUMIF to provide a summary of the limit distribution across the portfolio – all shown below:

 

The second step is to summarise the values of the two key variables into the respective bands; using VLOOKUPs as discussed above.  In this example we want to summarise the risk grades of customers into LOW RISK, MODERATE RISK and HIGH RISK and the income into similar LOW INCOME, MODERATE INCOME, HIGH INCOME and VERY HIGH INCOME.

As a starting point, I have decided to make the splits as shown in the tables below.  These tables were used to label each account in the dataset using two new columns I have created, also shown below:

 

 

 

Then each account can be matched to a matrix segment using VLOOKUPs and HLOOKUPs, embedded to create a matrix lookup function.  What we want to do is use the VLOOKUP functionality to find the right row corresponding to the risk of the customer, and then move across the right number of columns to find the one corresponding to the income of the customer.  The first part of the equation is relatively simple to construct so long as we ignore the column number:

VLOOKUP( Risk Band, $A$3:$E$5, ?, False )

Provided we’re a little creative with it, an HLOOKUP will allow us to fill in the missing part.  What we need to do is find a way to convert the ‘Salary Band’ field into a number representing the column.  You might have noticed that there was a row of numbers under each of the Income Bands in the matrix shown above.  This was done in order to allow an HLOOKUP to return that number so that it can be placed into the missing part of the VLOOKUP.  The HLOOKUP will search for the Salary Band and then return the number from the row directly below it, which in this case has been specifically set to be equal to its column number – remembering to add one to take into account the field used to house the name of the Risk Grade that is needed for the VLOOKUP.

HLOOKUP ( Salary Band, $B$1:$E$2, 2, FALSE )

In this case it will always be the second row so we can hardcode in the ‘2’.  This entire function is then substituted into the VLOOKUP to create a function that will look-up both the Risk Band and the Salary Band of any given customer and return the relative limit from the matrix.

VLOOKUP( Risk Band, $A$1:$E$5, HLOOKUP ( Salary Band, $B$1:$E$2, 2, FALSE ) , False )

All that is now needed is to add two further fields to take into account the potential VIP bonus limit and the model is complete – and the results are shown below:

 

This version of the model can be distributed or taken into a workshop and, as each component is adjusted, so too are the individual limits granted as well as the tables summarising the distribution of limits across the portfolio.  For example, the marketing team may wish to increase the limits of Low Risk, Very High Income customers to 60,000 and, at the same time, the risk team may wish to re-categorise those with a risk score of 4 as ‘High Risk’ and increase the qualifying income for ‘High Income’ to 7,000.  The first change requires a simple change in the Limit Matrix while the second requires two simple changes to the reference tables, giving the new limit matrix shown in the tables below.

 

*     *     *     *     *

It is also possible to show the distribution by matrix segment. The method is based on the same logic discussed up to now, although the implementation is a bit clumsy. 

The first step is to create a dummy matrix with the same labels but populated with a segment number rather than a limit.  Then you need to create a new field in the dataset, called something like ‘Segment Number’, and populate it using the same equation from above.  Once this field has been populated, you can create another dummy version of the matrix and, in this case, use the SUMIF or COUNTIF function to calculate the value of limits or the number of customers in each segment.  With that populated, it is easy to turn those numbers into a percentage of the total, either in the same step or using one final new matrix:
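For example, with each account’s segment number stored in column F of the data sheet, its proposed limit in column G and the dummy matrix’s segment number in cell A2, formulas along these lines would fill one cell of the summary matrices (all references are illustrative):

COUNTIF( Data!$F$2:$F$1000, $A2 ) or

SUMIF( Data!$F$2:$F$1000, $A2, Data!$G$2:$G$1000 )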

 







Probably the most common credit card business model is for customers to be charged a small annual fee in return for which they are able to make purchases using their card and to only pay for those purchases after some interest-free period – often up to 55 days.  At the end of this period, the customer can choose to pay the full amount outstanding (transactors), in which case no interest accrues, or to pay down only a portion of the amount outstanding (revolvers), in which case interest charges do accrue.  Beyond charging its customers these fees, the card issuer also earns a secondary revenue stream by charging merchants a small commission on all purchases made in their stores by the issuer’s customers.

So, although credit cards are similar to other unsecured lending products in many ways, there are enough important differences not catered for in the generic profit model for banks (described here and drawn here) to warrant an article focusing specifically on the credit card profit model.  Note: in this article I will only look at the profit model from an issuer’s point of view, not from an acquirer’s.

* * * 

We started the banking profit model by saying that profit was equal to total revenue less bad debts, less capital holding costs and less fixed costs.  This remains largely true.  What changes is the way in which we arrive at the total revenue, the way in which we calculate the cost of interest, and the addition of two new costs – loyalty programmes and fraud.  Although in reality there may also be some small changes to the calculation of bad debts and fixed costs, for the sake of simplicity I am going to assume that these are calculated in the same way as in the previous models.

 

Revenue

Unlike a traditional lender, a card issuer has the potential to earn revenue from two sources: interest from customers and commission from merchants.  The profit model must therefore be adjusted to cater for each of these revenue streams as well as annual fees. 

Total Revenue  = Fees + Interest Revenue + Commission Revenue

                                = Fees + (Revolving Balances x Interest Margin x Repayment Rate) + (Total Spend x Commission)

                                = (AF x CH) + (T x ATV) x ((RR x PR x i) + CR)

Where              AF = Annual Fee                                               CH = Number of Card Holders  

                           T = Number of Transactions                          PR = Repayment Rate

                           ATV = Average Transaction Value              i = Interest Rate

                           RR = Revolve Rate                                              CR = Commission Rate

Customers usually fall into one of two groups and so revenue strategies tend to conform to these same splits.  Revolvers are usually the more profitable of the two groups as they can generate revenue in both streams.  However, as balances increase and approach the limit the capacity to continue spending decreases.  Transactors, on the other hand, seldom carry a balance on which an issuer can earn interest but they have more freedom to spend.

Strategies aimed at each group should be carefully considered.  Balance transfers – or campaigns which encourage large, once-off purchases – create revolving balances and sometimes a large, once-off commission, while generating little on-going commission income.  Strategies that encourage frequent usage don’t usually lead to increased revolving balances, but they do have a more consistent – and often growing – long-term impact on commission revenue.

Variable Costs

There is also a significant difference between how card issuers and other lenders accrue variable costs. 

Firstly, unlike other loans, most credit cards have an interest-free period during which the card issuer must cover the cost of carrying the debt.

The high interest margin charged by card issuers aims to compensate them for this cost but it is important to model it separately as not all customers end up revolving and hence, not all customers pay that interest at a later stage.  In these cases, it is important for an issuer to understand whether the commission earnings alone are sufficient to compensate for these interest costs.

Secondly, most card issuers accrue costs for a customer loyalty programme.  It is common for card issuers to provide their customers with rewards for each Euro of spend they put on their cards.  The rate at which these rewards accrue varies by card issuer but is commonly related in some way to the commission that the issuer earns.  It is therefore possible to account for this by simply using a net commission rate.  However, since loyalty programmes are an important tool in many markets, I prefer to break it out as a specific profit lever.

Finally, credit card issuers also run the risk of incurring transactional fraud –  lost, stolen or counterfeited cards.  There are many cases in which the card issuer will need to carry the cost of fraudulent spend that has occurred on their cards.  This is not a cost common to other lenders, at least not after the application stage.

Variable Costs = (T x ATV) x ((CoC x IFP) + L + FR)

Where            T = Number of Transactions                         IFP = Interest Free Period Adjustment

                         ATV = Average Transaction Value             CoC = Cost of Capital

                         L = Loyalty Programme Costs                      FR = Fraud Rate

Shorter interest free periods and cheaper loyalty programmes will result in lower costs but will also likely result in lower response rates to marketing efforts, lower card usage and higher attrition among existing customers.

 

The Credit Card Profit Model                   

Profit is simply what is left of revenue once all costs have been paid; in this case after variable costs, bad debt costs, capital holding costs and fixed costs have been paid.

I have decided to model revenue and variable costs as functions of total spend while modelling bad debt and capital costs as a function of total balances and total limits. 

The difference between the two arises from the interaction of the interest-free period and the revolve rate over time.  When a customer first uses their card their spend increases, and so do the commission earned and the loyalty fees and interest costs accrued by the card issuer.  Once the interest-free period ends and the payment falls due, some customers (transactors) will pay their full balance outstanding and thus have a zero balance, while others will pay only the minimum due (revolve) and thus carry a balance equal to (100% less the minimum repayment percentage) of that spend.

Over time, total spend increases in both customer groups, but balances only increase among the group of customers that are revolving.  It is these longer-term balances on which capital costs accrue and which are ultimately at risk of being written off.  In reality the interaction between spend and risk is not this ‘clean’, but this captures the essence of the situation.  A numerical sketch follows after the formula and definitions below.

Profit = Revenue – Variable Costs – Bad Debt – Capital Holding Costs – Fixed Costs

= (AF x CH) + (T x ATV) x ((RR x PR x i) + CR) – (T x ATV) x (L + (CoC x IFP) + FR) – (TL x U x BR) – ((TL x U x CoC) + (TL x (1 – U) x BHR x CoC)) – FC

= (AF x CH) + (T x ATV) x ((RR x PR x i) + CR – L – (CoC x IFP) – FR) – (TL x U x BR) – ((TL x U x CoC) + (TL x (1 – U) x BHR x CoC)) – FC

Where        AF = Annual Fee                                               CH = Number of Card Holders          

                      T = Number of Transactions                         i = Interest Rate

                      ATV = Average Transaction Value               TL = Total Limits

                      RR = Revolve Rate                                                U = Av. Utilisation

                      PR = Repayment Rate                                          BR = Bad Rate

                      CR = Commission Rate                                        CoC = Cost of Capital

                      L = Loyalty Programme Costs                          BHR = Basel Holding Rate

                      IFP = Interest Free Period Adjustment        FC = Fixed Costs

                      FR = Fraud Rate
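To make the levers concrete, here is a minimal Python sketch of the profit equation above; all of the input values are invented purely to show how the levers interact and are not taken from the article:

def card_profit(T, ATV, AF, CH, RR, PR, i, CR, L, CoC, IFP, FR,
                TL, U, BR, BHR, FC):
    # Credit card profit using the levers defined above (illustrative).
    spend = T * ATV
    revenue = (AF * CH) + spend * ((RR * PR * i) + CR)
    variable_costs = spend * (L + (CoC * IFP) + FR)
    bad_debt = TL * U * BR
    capital_costs = (TL * U * CoC) + (TL * (1 - U) * BHR * CoC)
    return revenue - variable_costs - bad_debt - capital_costs - FC

# Invented example values, only to demonstrate the calculation.
print(card_profit(T=2_000_000, ATV=50.0, AF=20.0, CH=100_000,
                  RR=0.40, PR=0.90, i=0.20, CR=0.015,
                  L=0.005, CoC=0.03, IFP=55 / 365, FR=0.001,
                  TL=200_000_000, U=0.30, BR=0.03, BHR=0.20, FC=1_500_000))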

 

Visualising the Credit Card Profit Model  

As with the banking profit model, it is also possible to create a visual profit model.  This model communicates the links between key ratios and teams in a user-friendly manner, but does so at the cost of some accuracy.

The key marketing and originations ratios remain unchanged but the model starts to diverge from the banking one when spend and balances are considered in the account management and fraud management stages.   

The first new ratio is the ‘usage rate’, which is similar to a ‘utilisation rate’ except that it looks at monthly spend rather than at carried balances.  This is done to capture information for transactors, who may have a zero balance – and thus a zero utilisation – at each month end but who may nonetheless have been restricted by their limit at some stage during the month.

The next new ratio is the ‘fraud rate’.  The structure and work of a fraud function is often similar in design to that of a debt management team, with analytical, strategic and operational roles.  I have simplified it here to a simple ratio of fraud to good spend, as this is the most important from a business point of view; however, if you are interested in more detail about the fraud function you can read this article or search in this category for others.

The third new ratio is the ‘commission rate’.  The commission rate earned by an issuer will vary by merchant type and, even within merchant types, in many cases on a case-by-case basis depending on the relative power of each merchant.  Certain card brands will also attract different commission rates, usually coinciding with their various strategies.  So American Express and Diners Club, who aim to attract wealthier transactors, will charge higher commission rates to compensate for their lower revolve rates, while Visa and MasterCard will charge lower rates but appeal to a broader target market more likely to revolve.

The final new ratio is the revolve rate, which I mentioned above.  This refers to the percentage of customers who pay only the minimum balance – or at least less than their full balance – every month.  On these customers an issuer can earn both commission and interest, but it must also carry higher risk.  The ideal revolve rate will vary by market and by the issuer’s business objectives, but it should be higher when the issuer is aiming to build balances and lower when the issuer is looking to reduce risk.

 







The purpose of analytics is to guide business practices by empowering decision makers with clear and accurate insights into the problem at hand.  So even the best piece of analytics can fall short of this goal if the link between the analyst and the ultimate decision maker is ineffective.  Therefore, analysts should invest time in perfecting the art of presenting their findings, not just the science of reaching them.

A good presentation begins when the project begins, it does not begin only once the results have been calculated.  In order for a piece of analysis to effectively guide decision-making its objectives must be aligned with the project’s objectives from the very start. 

The easiest way to ensure that the analyst is working in the same direction as the decision maker is to employ the story board technique.  Much like a film maker will create a high-level story board to explain how their story will develop from scene to scene, an analyst should draw a high-level story board showing how the story underlying their analysis will develop from slide to slide.  The analysis should proceed only once the decision maker has agreed that the logical flow presented will achieve the desired end goal.  No fancy software is needed; story boarding can be done by hand or in PowerPoint.

One way to keep the flow clear is to use the headings as summaries of each slide’s message.  For example, instead of using a heading along the lines of ‘Utilisation Figures’ in the second slide above, I used ‘Utilisation is very risk biased’.  The audience immediately knows where I am going with the slide and doesn’t need to work towards this same conclusion as I speak.  This simple trick will also help you to quickly spot inconsistencies in the story flow.

The story board method works because, in many ways, a good piece of analysis is like a film in how it tells a story: like a film, it must tell a story that flows logically from one point to another culminating in a coherent and memorable message and, like a film, it must often find concise visual summaries for complex concepts. 

Using the story board approach from the start helps to put the piece of analysis in context.  By defining the scope it prevents time being invested in non value-adding activities and by confirming a logical thread it ensures a fruitful outcome. 

The analyst should follow a structured process to create a logical and value adding piece of analysis, such as the five point plan below:

(1) the problem must be fully understood;

(2) the analysis must be designed to address each key aspect of the problem;

(3) the analysis must be carried out;

(4) the results should be interpreted in terms of the problem and used to create the final presentation;

(5) actual performance of the solution should be monitored and compared to expectations.

Understanding the problem is the most important step.  Many an analyst feels that their understanding of a particular analytical technique is their key value offering.  However, the results will be sub-optimal at best and value-destroying at worst unless the problem to which that technique is applied is well understood.  Understanding a problem requires research into the business problem at hand, the key factors involved, the relationships between them and the relative priority of each.  The analyst should not be satisfied until each of these is understood and all of the inherent assumptions have been challenged and proven valid.

When the analyst has a complete understanding of the problem they will be in a position to prioritise each component part.  Once the problem has been understood and its component parts prioritised, the analysis itself can be designed along the logical lines of the story.  Here dummy graphs and tables can be added to the story boards.  Once again, before the next step is taken it is worth verifying that the proposed measures will indeed prove the point covered by each particular story board.

Once the dummy graphs and tables have been inserted the analyst should ask themselves questions like: would a table showing the relative percentage of good and bad accounts with balances over 90% of their limit, when shown together with a table of average utilisations, prove that the current credit limit policy is enabling higher levels of bad debt?  If not, alternative measures should be considered and weighed in the same way. 

It is important to note though that the intention is not to find the one graph that supports your pre-determined beliefs but rather to find a measure that will prove or disprove your key message.  The analyst should make this decision before the numbers are included to prevent this sort of intentional bias.  In the above example the decision is made before we know for sure what patterns will emerge from the data.  If the data later shows no significant difference in average balances and utilisations between each group, the analyst should be willing to accept that perhaps there is less value in the project than first imagined; they should not try to manipulate the results to hide this fact.

I said earlier that a presentation often has to use visual tools to concisely summarise complex concepts.  These visual tools can include hand drawn schematics (useful when drawn live as an interactive tool for explaining concepts but less able to communicate numerical analysis accurately), graphs (less interactive but more accurate when it comes to presenting numerical results) and tables.  When using visual tools it is important to not let the visuals distract from the message you want to communicate.  The wrong scale can, for example, make trends seem to appear where they don’t exist and disappear where they do.  Excess information, unnecessary legends, the wrong choice of graph, etc. can all work to ‘encode’ your message.  It is important that your visual message faithfully reflects the message of the underlying data, just using an easier to interpret medium.

The same logic applies to animations.  I believe that animations in presentations can add great value when used well but in many – if not most – cases they simply distract.  I tend to use animations when I wish either to create a sense of interaction or when the order in which events progress is important – as when discussing a process with multiple steps, each building on its predecessor.

Once the analysis has been designed and approved it must be delivered.  This is where the most focus has been traditionally and it is indeed a vital step.  The value that an analytical approach to problem solving brings to a business is the ability to make decisions based on a true understanding of the underlying business and its component parts.  Unless the analysis is accurate this is not possible and so great care must be taken when selecting and implementing analytical techniques.  However, this step is most valuable when it comes on top of the solid foundation created by each of the prior steps.

The results of the analysis must be substituted into the story board in place of the dummy graphs and tables.  The final touches should be applied to the presentation at this stage, as should any changes in the message necessitated by unexpected new information. 

Once the presentation is complete, it can be delivered to the decision maker in whichever format is most appropriate.  Thought should be given to the question of the delivery channel.  Presentations that are to be delivered face-to-face should include fewer and less detailed bullet points, while those that are to be sent to a large, indirect audience should contain more detailed information.

However, that is not where the process should end.  I started this article by saying that the purpose of analytics is to guide business practices and so until the extent to which business practices have actually been changed – and the impact of those changes – has been understood, the ultimate value of the analysis will not be known.  Any piece of analysis should therefore cater for a period of on-going monitoring where key project metrics can be measured and the actual results compared to expected results.  The nature of each specific piece of analysis will dictate how long this period should be and which metrics should be included.  But, in all cases, the analysis can only be considered successful once it can be shown that the business has made beneficial changes based on it.

*   *   *

To read more about presentation tips and techniques, click here

