
In terms of credit risk strategy, the lending markets in America and Britain undoubtedly lead the way, while several other markets around the world apply many of the same principles with similar rigour and results. However, for a number of reasons and in a number of ways, many more lending markets remain much less sophisticated. In this article I will focus on these developing markets, discussing how credit risk strategies can be applied in such markets and how doing so will add value to a lender.

The fundamentals that underpin credit risk strategies are constant, but as lenders grow in sophistication the way in which those fundamentals are applied will vary. At the earliest stages of development the focus will be on automating the decisioning process; once this has been done, attention should shift to the implementation of basic scorecards and segmented strategies which will, in time, evolve from focusing on risk mitigation to profit maximisation.

Automating the Decisioning Process

The most under-developed markets tend to grant loans using a branch-based decisioning model as a legacy of the days of fully manual lending. As such, it is an aspect more typical of the older and larger banks in developing regions and one that is allowing newer and smaller competitors to enter the market and be more agile.

A typical branch-lending model works something like this:

In a model like this, the credit policy is usually designed and signed-off by a committee of very senior managers working in the head-office. This policy is then handed-over to the branches for implementation; usually by delivering training and documentation to each of the bank’s branch managers. This immediately presents an opportunity for misinterpretation to arise as branch managers try to internalise the intentions of the policy-makers.

Once the policy has been handed-over, it becomes the branch manager’s responsibility to ensure that it is implemented as consistently as possible. However, since each branch manager is different, as is each member of branch staff, this is seldom possible and so policy implementation tends to vary to a greater or lesser extent across the branch network.

Even when the policy is well implemented though, the nature of a single written policy is such that it can identify the applicants considered too risky to qualify for a loan but it cannot go beyond that to segment accepted customers into risk groups. This means that the only way senior management can ensure the policy is being implemented correctly in the highest risk situations is by using the size of the loan as an indication of risk. So, to do this, a series of triggers is set to escalate loan applications to management committees.

In this model, which is not an untypical one, there are three committees: one within the branch itself where senior branch staff review the work of the loan officer for small value loan applications; if the loan size exceeds the branch committee’s mandate though it must then be escalated to a regional committee or, if sufficiently large, all the way to a head-office committee.

Although it is easy to see how such a series of committees came into being, their on-going existence adds significant costs and delays to the application process.

In developing markets where skills are short, a significant premium must usually be paid for high quality management staff. So, to use the time of these managers to essentially remake the same decision over and over (having already decided on the policy, they now need to repeatedly decide whether an application meets the agreed-upon criteria) is an inefficient way to invest a valuable resource. More important, though, are the delays that must necessarily accompany such a series of committees. As an application is passed on from one team – and more importantly from one location – to another, a delay is incurred. Added to this is the fact that committees need to convene before they can make a decision and usually do so on fixed dates, meaning that a loan application may have to wait a number of days until the next time the relevant committee meets.

But the costs and delays of such a model are not incurred only by the lender; the borrower too is burdened with a number of indirect costs. In order to qualify for a loan in a market where impartial third-party credit data is not widely available – i.e. where there are no strong and accurate credit bureaus – an applicant typically needs to ‘over-prove’ their creditworthiness. Where address and identification data is equally unreliable this requirement is even more burdensome. In a typical example an applicant might need to first show an established relationship with the bank (6 months of salary payments, for example); provide a written undertaking from their employer that they will notify the bank of any change in employment status; provide the address of a reference who can be contacted when the original borrower cannot be; and often put up some degree of security, even for small value loans.

These added costs serve to discourage lending and add to what is usually the biggest problem faced by banks with a branch-based lending model: an inability to grow quickly and profitably.

Many people might think that the biggest issue faced by lenders in developing markets is the risk of bad debt but this is seldom the case. Lenders know that they don’t have access to all the information they need when they need it, and so they have put in place the processes I’ve just discussed to mitigate the risk of losses. However, as I pointed out, those processes are ungainly and expensive – too ungainly and too expensive, as it turns out, to facilitate growth, and growth is what most lenders want as they see more agile competitors starting to enter their markets.

A fundamental problem with growing under a branch-based lending model is that the costs of growing the system rise in line with the increase in capacity. So, to serve twice as many customers will cost almost twice as much. This is the case for a few reasons. Firstly, each branch serves only a given geographical catchment area and so to serve customers in a new region, a new branch is likely to be needed. Unfortunately, it is almost impossible to add branches perfectly and each new branch is likely to lead to either an inefficient overlapping of catchment areas or ineffective gaps. Secondly, within the branch itself there is a fixed capacity both in terms of the number of staff it can accommodate and in terms of the number of customers each member of staff can serve. Both of these can be adjusted, but only slightly.

Added to this, such a model does not easily accommodate new lending channels. If, for example, the bank wished to use the internet as a channel it would need to replicate much of the infrastructure from the physical branches in the virtual branch because, although no physical buildings would be required and the coverage would be universal, the decisioning process would still require multiple loan officers and all the standard committees.

To overcome this many lenders have turned to agency agreements, most typically with large private and government employers. These employers will usually handle the administration of loan applications and loan payments for their staff and in return will either expect that their staff are offered loans at a discounted rate or that they themselves are compensated with a commission.

By simply taking the current policy rules from the branch-based process and converting them into a series of automated rules in a centralised system, many of these basic problems can be overcome, even before those rules are improved with statistical scorecards. Firstly, the gap between policy design and policy implementation disappears, and with it the risk of misinterpretation. Secondly, the need for committees to police policy implementation is greatly reduced, along with the associated costs and delays. Thirdly, the risk of inconsistent application is removed, since every application is treated in the same way regardless of the branch originating it or the staff member capturing the data. Finally, because the decisioning is automated there is almost no cost to adding a new channel onto the existing infrastructure, meaning that new technologies like internet and mobile banking can be leveraged as profitable channels for growth.
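To make this concrete, here is a minimal sketch of what such a centralised rules engine might look like; the thresholds and field names are purely illustrative assumptions, not any real lender’s policy.

```python
# A minimal, centralised rules engine. The thresholds and field names below are
# hypothetical illustrations, not a real credit policy.

def automated_policy_decision(application: dict) -> str:
    """Return 'DECLINE', 'REFER' or 'ACCEPT' using one set of rules for every channel."""
    if application["age"] < 21:
        return "DECLINE"
    if application["annual_income"] < 10_000:
        return "DECLINE"
    if application["months_with_employer"] < 12:
        return "DECLINE"
    # Very large exposures can still be referred to a (much smaller) committee.
    if application["loan_amount"] > 50_000:
        return "REFER"
    return "ACCEPT"

# The same function serves branch, internet and mobile channels alike.
print(automated_policy_decision(
    {"age": 34, "annual_income": 18_000, "months_with_employer": 30, "loan_amount": 5_000}
))  # -> ACCEPT
```

Because the decision lives in one function rather than in hundreds of branch managers’ heads, changing the policy means changing one rule set, once.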

The Introduction of Scoring

With the basic infrastructure in place it is time to start leveraging it to its full advantage by introducing scorecards and segmented strategies. One of the more subtle weaknesses of a manual decision is that it is very hard to use a policy to do anything other than decline an account. As soon as you try to make a more nuanced decision and categorise accepted accounts into risk groups the number of variables increases too fast to deal with comfortably.

It is easy enough to say that an application can be accepted only if the applicant is over 21 years of age, earns more than €10 000 a year and has been working for their current employer for at least a year but how do you segment all the qualifying applications into low, medium and high risk groups? A low risk customer might be one that is over 35 years old, earns more than €15 000 and has been working at their current employer for at least a year; or one that is over 21 years old but who earns more than €25 000 and has been working at their current employer for at least two years; or one that is over 40 years old, earns more than €15 000 and has been working at their current employer for at least a year, etc.

It is too difficult to manage such a policy using anything other than an automated system that uses a scorecard to identify and segment risk across all accounts. Being able to do this allows a bank to begin customising its strategies and its products to each customer segment or niche. Low risk customers can be attracted with lower prices or larger limits, high spending customers can be offered a premium card with more features but also with higher fees, etc.
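As a rough sketch of how a simple points-based scorecard collapses that tangle of overlapping rules into a single score and band, consider the following; the points and cut-offs are invented for illustration only.

```python
# An illustrative points-based application scorecard; points and bands are invented.

def application_score(age: int, income: float, months_with_employer: int) -> int:
    score = 0
    score += 30 if age >= 40 else 20 if age >= 35 else 10 if age >= 21 else 0
    score += 30 if income >= 25_000 else 20 if income >= 15_000 else 10 if income >= 10_000 else 0
    score += 20 if months_with_employer >= 24 else 10 if months_with_employer >= 12 else 0
    return score

def risk_band(score: int) -> str:
    # One score and one set of cut-offs replaces the combinatorial policy rules.
    if score >= 60:
        return "low risk"
    if score >= 40:
        return "medium risk"
    return "high risk"

print(risk_band(application_score(age=36, income=16_000, months_with_employer=14)))
# -> 'medium risk' with these invented cut-offs
```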

The first step in the process would be to implement a generic scorecard; that is, a scorecard built using pooled third-party data relating to portfolios similar to the one in which it is to be implemented. These scorecards are cheap and quick to implement and, when used to inform only simple strategies, offer almost as much value as a fully bespoke scorecard would. Over time the data needed to build a more specific scorecard can be captured so that the generic scorecard can be replaced after eighteen to twenty-four months.

But the making of a decision is not the end goal; all decisions must be monitored on an on-going basis so that strategy changes can be implemented as soon as circumstances dictate. Again this is not something that is possible to do using a manual system where each review of an account’s current performance tends to involve as much work as the original decision to lend to that customer did. Fully fledged behavioural scorecards can be complex to build for developing banks but at this stage of the credit risk evolution a series of simple triggers can be sufficient. Reviewing an account in an automated environment is virtually instantaneous and free and so strategy changes can be implemented as soon as they are needed: limits can be increased monthly to all low risk accounts that pass a certain utilisation trigger, top-up loans can be offered to all low and medium risk customers as soon as their current balances fall below a certain percentage of the original balance, etc.
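As a hedged illustration of such triggers, the sketch below applies two simple monthly rules; the utilisation and balance thresholds are assumptions, not recommendations.

```python
# Simple monthly account-management triggers; all thresholds are illustrative assumptions.

def monthly_actions(account: dict) -> list[str]:
    actions = []
    # Limit increases for low risk accounts that are highly utilised.
    if account["risk_band"] == "low" and account["utilisation"] >= 0.80:
        actions.append("increase limit")
    # Top-up loan offers once most of the original balance has been repaid.
    if account["risk_band"] in ("low", "medium") and \
            account["balance"] <= 0.25 * account["original_balance"]:
        actions.append("offer top-up loan")
    return actions

print(monthly_actions({"risk_band": "low", "utilisation": 0.85,
                       "balance": 900, "original_balance": 4_000}))
# -> ['increase limit', 'offer top-up loan']
```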

In so doing, a lender can optimise the distribution of their exposure, moving exposure from high risk segments to low risk segments or vice versa to achieve their business objectives. To ensure that this distribution remains optimised, the individual scores and strategies should be consistently tested using champion/challenger experiments. Champion/challenger is a simple concept and can be applied to any strategy provided the systems exist to ensure that it is implemented randomly and that its results are measurable. The more sophisticated the strategies, the more sophisticated the champion/challenger experiments will look, but the underlying theory remains unchanged.
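A minimal sketch of champion/challenger assignment follows; the 90/10 split and strategy names are assumed purely for illustration.

```python
import random

# Minimal champion/challenger assignment: 90% of accounts keep the champion
# strategy and 10% receive the challenger. The split is random so the two
# groups are comparable; the share and strategy names are illustrative.

def assign_strategy(account_id: str, challenger_share: float = 0.10) -> str:
    random.seed(account_id)              # deterministic per account, random across accounts
    return "challenger" if random.random() < challenger_share else "champion"

counts = {"champion": 0, "challenger": 0}
for n in range(1_000):
    counts[assign_strategy(f"ACC{n:05d}")] += 1
print(counts)   # roughly a 90/10 split; outcomes are then measured per group
```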

Elevating the Profile of Credit Risk

Once scorecards and risk-segmented strategies have been implemented by the credit risk team, the team can focus on elevating their profile within the larger organisation. When credit risk strategies are first implemented they are unlikely to interest the senior managers of a lender, who will often have come up through a different career path: perhaps they have more of a financial accounting view of risk or perhaps they have a background in something completely different, like marketing. This may make it difficult for the credit risk team to garner enough support to fund key projects in the future and so may restrict their ability to improve.

To overcome this, the credit team needs to shift its focus from risk to profit. The best result a credit risk team can achieve is not to minimise losses but to maximise profits while keeping risk within an acceptable band. I have written several articles on profit models which you can read here, here, here and here but the basic principle is that once the credit risk department is comfortable with the way in which their models can predict risk they need to understand how this risk contributes to the organisation’s overall profit.

This shift will typically happen in two ways: as a change in the messages the credit team communicates to the rest of the organisation and as a change in the underlying models themselves.

To change the messages being communicated by the credit team they may need to change their recruitment strategies and bring in managers who understand both the technical aspects of credit risk and the business imperatives of a lending organisation. More importantly though, they need to always seek to translate the benefit of their work from technical credit terms – PD, LGD, etc. – into terms that can be more widely understood and appreciated by senior management – return on investment, reduced write-offs, etc. A shift in message can happen before new models are developed but will almost always lead to the development of more business-focussed models going forward.

So the final step is to actually change the models themselves; and it is by the degree to which such specialised, profit-segmented models have been developed and deployed that a lender’s level of sophistication will be measured in the most sophisticated markets.


You’ve got to know when to hold ‘em, know when to fold ‘em

Know when to walk away and know when to run

I’ve always wanted to use the lines from Kenny Rogers’ famous song, The Gambler, in an article. But that is only part of the reason I decided to use the game of Texas Holdem poker as a metaphor for the credit risk strategy environment.

The basic profit model for a game of poker is very similar to that of a simple lending business. To participate in a game of Texas Holdem there is a fixed cost (the buy-in) in exchange for which there is the potential to make a profit but also the risk of making a loss. As each card is dealt, new information is revealed and the player should adjust their strategy accordingly. Not every hand will deliver a profit and some will even incur a fairly substantial loss. Over time, however, by following a good strategy, the total profit accumulated from the winning hands can be sufficient to cover both the losses from the losing hands and the fixed costs of participating, and a profit can thus be made.

Similarly in a lending business there is a fixed cost to process each potential customer, only some of whom will be accepted as actual customers who have the potential to be profitable or to result in a loss.  The lender will make an overall profit only if the accumulated profit from each profitable customer is sufficient to cover the losses from those that weren’t and the fixed processing costs.

In both scenarios, the profit can be maximised by increasing exposure to risk when the odds of a profit are good and reducing exposure when the odds of a loss are higher. A good card player therefore performs a similar role to a credit analyst: continuously calculating the odds of a win from each hand, designing strategies to maximise profit based on those odds and then adjusting those strategies as more information becomes available.

Originations

To join a game of Texas Holdem each player needs to buy into that game by placing a ‘blind’ bet before they have seen any of the cards. As this cost is incurred before any of the cards are seen, the odds of victory cannot yet be estimated. The blind bet is, in fact, the price to see the odds.

Thereafter, each player is dealt two private cards; cards that only they can see. Once these cards have been dealt each player must decide whether to play the game or not.

To play on, each player must enter a further bet. This decision must be made based on the size of the bet and an estimate of the probability of victory given the two known cards. If the player instead chooses not to play, they forfeit their initial bet.

A conservative player, one who will play only when the odds are strongly in their favour, may lose fewer hands but will instead incur a relatively higher cost in lost buy-ins. Depending on the cost of the buy-in and the average odds of winning, the most profitable strategy will change, but it is unlikely to be the most conservative one.

In a lending organisation the equivalent role is played by the originations team. Every loan application that is processed incurs a cost, and so when an application is declined that cost is lost. A conservative scorecard policy will decline a large number of marginal applications, choosing, effectively, to lose a small but known processing cost rather than risk a larger but unknown credit loss. In so doing though, it also gives up the profit potential on those accounts. As with poker betting strategies, the ideal cut-off will change based on the level of processing costs and the average probability of default but will seldom be overly conservative.
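To illustrate that trade-off, here is a small expected-value calculation for a marginal application; the processing cost, profit and loss figures are invented assumptions.

```python
# Expected value of accepting a marginal application at different probabilities
# of default (PD). All monetary figures are illustrative assumptions.

def expected_value_of_accepting(p_default: float,
                                profit_if_good: float = 400.0,
                                loss_if_bad: float = 2_000.0) -> float:
    return (1 - p_default) * profit_if_good - p_default * loss_if_bad

processing_cost = 25.0   # incurred whether the application is accepted or declined

for pd in (0.02, 0.10, 0.20):
    ev = expected_value_of_accepting(pd)
    decision = "accept" if ev > 0 else "decline"
    print(f"PD {pd:.0%}: EV of accepting = €{ev:,.0f} -> {decision} "
          f"(declining forfeits only the €{processing_cost:.0f} processing cost)")
```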

A card player calculates their odds of victory from the known combinations of cards possible from a standard 52-card deck. The player can create any five-card combination from their two known cards and the five community cards still to be dealt, while each other player will build their own five-card hand from those same community cards plus two private cards drawn from the cards the player cannot see. With this knowledge, the player can estimate the odds that their two private cards will result in a winning hand and, based on that estimate, decide whether to enter a bet and, if so, of what size; or whether to fold and lose the buy-in.

The methods used to calculate odds may vary, as do the sources of potential profits, but at a conceptual level the theory on which originations is based is similar to the theory which under-pins poker betting.

As each account is processed through a scorecard the odds of it eventually rolling into default are estimated. These odds are then used to make the decision whether to offer credit and, if so, to what extent.  Where the odds of a default are very low the lender will likely offer more credit – the equivalent of placing a larger starting bet – and vice versa.

Customer Management

The reason that card games like Texas Holdem are games of skill rather than just games of chance is that the odds of victory change during the course of a game, and so the player is required to adapt their betting strategy as new information is revealed: increasing their exposure to risk as the odds grow better or retreating as the odds worsen. The same is true of a lending organisation, where customer management strategies seek to maximise organisational profit by changing exposure as new information is received.

Once the first round of betting has been completed and each player’s starting position has been determined, the dealer turns over three ‘community cards’. These are cards that all players can see and can use, along with their two private cards, to create their best possible poker hand. A significant amount of new information is revealed when those three community cards are dealt. In time two further community cards will be revealed and it will be from any combination of those seven cards that a winning hand will be constructed. So, at this point, each player knows five of the seven cards they will have access to and three of the cards their opponents can use. The number of possible hands becomes smaller and so the odds that the player’s hand will be a winner can be calculated more accurately. That is not to say the odds of a win will go up, just that the odds can be stated with more certainty.

At this stage of the game, therefore, the betting activity usually heats up as players with good hands increase their exposure through bigger bets. Players with weaker hands will try to limit their exposure by checking – that is not betting at all – or by placing the minimum bet possible. This strategy limits their potential loss but also limits their potential gain as the total size of the ‘pot’ is also kept down.

As each of the next two community cards is revealed this process repeats itself with players typically willing to place ever larger bets as the new information received allows them to calculate the odds with more certainty. Only once the final round of betting is complete are the cards revealed and a winner determined. Those players that bet until the final round but still lose will have lost significantly in this instance. However, if they continue to play the odds well they will expect to recuperate that loss – and more – over time.

The customer management team within a lending organisation works on similar principles. As an account begins to operate, new information is received which allows the lender to determine with ever more certainty the probability that an account will eventually default: with every payment that is received on time, the odds of an eventual default decrease; with every broken promise-to-pay, those odds increase; etc.

So the role of the customer management team is to design strategies that optimise the lender’s exposure to each customer based on the latest information received. Where risk appears to be dropping, exposure should be increased through limit increases, cross-selling of new products, reduced pricing, etc. while when the opposite occurs the exposure should be kept constant or even decreased through limit decreases, pre-delinquency strategies, foreclosure, etc.

Collections

As the betting activity heats up around them a player may decide that the odds no longer justify the cost required to stay in the game and, in these cases, the player will decide to fold – and accept a known small loss rather than continue betting and risk an even bigger eventual loss chasing an unlikely victory.

Collections has too many operational components to fit neatly into the poker metaphor but it can be most closely likened to this decision of whether or not to fold. Not every hand can be a winner and even hands that initially appeared to be strong can be shown to be weak when the later community cards are revealed. A player who was dealt two hearts and who then saw two further hearts dealt in the first three community cards would have been in a strong position, with the odds of catching the fifth heart needed to complete a strong ‘flush’ hand over the final two cards sitting at roughly 35%. However, if neither of the next two cards dealt is a heart, the probability of a winning hand will drop to close to zero.
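That figure is easy to verify directly:

```python
from math import comb

# Flush draw after the flop: 4 hearts held or visible, 9 hearts left among the
# 47 unseen cards, two cards still to come.
p_miss = comb(38, 2) / comb(47, 2)   # both remaining cards are non-hearts
print(f"{1 - p_miss:.1%}")           # ~35.0%
```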

In this situation the player needs to make a difficult decision: they have invested in a hand that has turned out to be a ‘bad’ one and they can either accept the loss or invest further in an attempt to salvage something. If there is little betting pressure from the other players, they might choose to stay in the game by matching any final bets; figuring that because the total pot was large and the extra cost of participating small it was worth investing further in an unlikely win. Money already bet, after all, is a sunk cost. If the bets in the latest round are high however, they might choose to fold instead and keep what money they have left available for investment in a future, hopefully better hand.

As I said, the scope of collections goes well beyond this, but certain key decisions a collections strategy manager must make relate closely to the question of whether or not to fold. Once an account has missed a payment and entered the collections processes the lender has two options: to invest further time and money in an attempt to collect some or all of the outstanding balance, or to cut their losses and sell the debt or even write it off.

In cases where there is strong long-term evidence that the account is a good one, the lender might decide – as a card player might when a strong hand is not helped by the fourth community card – to maintain or even increase their exposure by granting the customer some leeway in the form of a payment holiday, a re-aging of debt or even a temporary limit increase. On the other hand, in cases where the new information has forced a negative re-appraisal of the customer’s risk but the value owed by that customer is significant, it might still be preferable for the lender to invest a bit more in an attempt to make a recovery, even though they know that the odds are against them. This sort of an investment would come in the form of an intensive collections campaign or the paid involvement of specialist third party debt collectors.

As with a game of cards, the lender will not always get it exactly right and will over invest in some risky customers and under-invest in others; the goal is to get the investment right often enough in the long-term to ensure a profit overall.

It is also true that a lender who consistently shies away from investing in the collection of marginal debt – one that chooses too easily to write off debt rather than to risk an investment in its recovery – may start to create a reputation for themselves that is damaging in the long run. A lender that is seen as a ‘soft touch’ by the market will attract higher risk customers and will see a shift in portfolio risk towards the high end as more and more customers decide to let their debt fall delinquent in the hopes of a painless write-off. Similarly, a card player that folds in all situations except those where the odds are completely optimal will soon be found out by their fellow players. Whenever they receive the perfect hand and bet accordingly, the rest of the table will likely fold and in so doing reduce the size of the ensuing pot which, although won, will be much smaller than it might otherwise have been. In extreme cases, this limiting of the wins gained from good hands may be so severe that the player is unable to cover the losses they have had to take in the hands in which they folded.

Summary

The goal of credit risk strategy, like that of a poker betting strategy, is to end with the most money possible. To do this, calculated bets must be taken at various stages and with varying levels of data; risk must be re-evaluated continuously and at times it may become necessary to take a known loss rather than to risk ending up with an even greater, albeit uncertain, loss in the future.

So, in both scenarios, risk should not be avoided but should rather be converted into a series of numerical odds which can be used to inform investment strategies that seek to leverage off good odds and hedge against bad odds. In time, if accurate models are used consistently to inform logical strategies it is entirely possible to make a long-term profit.

Of course, the two fields also differ from each other quite extensively, not least in the way money is earned and, most importantly, in the fact that financial services is not a zero-sum game. However, I hope that where similarities do exist these have been helpful in understanding how the profit levers in a lending business fit together. For a more technical look at the same issue, you can read my articles on profit modelling in general and for credit cards and banks in particular.


I wrote in my last article that a debt collection agency (DCA) working on a commission basis had the ability to ‘cherry pick’ the accounts that they worked, distributing their invested effort across multiple customer segments in multiple portfolios to generate significantly higher rewards.  In this article I will walk through a simple example of how a DCA could do this across three portfolios and then discuss how the same principles can be applied by primary lenders.

 

A DCA Example

A third-party DCA is collecting debts on behalf of three different clients, each of which pays the same commission rate and each of which has outsourced a portfolio of 60 000 debts. 

Half of the accounts in Portfolio A have a balance of €4 000 while the other half are split evenly between balances of €2 000 and balances of €5 500.  After running the accounts in question through a simple scorecard, the DCA was able to determine that 60% of the accounts are in the high risk group with only a 7% probability of payment, 20% are in the medium risk group with an 11% probability of payment, while the remainder are in the low risk group and have a 20% probability of payment.

Portfolio B is made up primarily of accounts with higher balances: half of the accounts carry a balance of €13 500 and the remainder are equally split between balances of €7 500 and €5 000.  Unfortunately, the risk of this portfolio is also higher and, after also putting this portfolio through the scorecard, the DCA was able to determine that 50% of the accounts were in the highest risk group with an associated probability of payment of just 2%, while 30% of the accounts were in the medium risk group with a probability of payment of 5% and 20% of the accounts were in the lowest risk group with a probability of payment of 13%.

In Portfolio C the accounts are evenly split across three balances: €4 500, €8 000 and €10 000.  After a similar scorecard exercise it was also shown that 70% of accounts are in the highest risk group with a 7.5% probability of payment, 20% of accounts are in the medium risk group with a probability of payment of 14% and the final 10% in the low risk group have an 18% probability of payment.

The DCA now has a few options when assigning work to its staff.  It could assign accounts randomly from across all three portfolios to the next available staff member, it could assign accounts from the highest balance to the lowest balance, or it could assign specific portfolios to specific teams, prioritising work within each portfolio but not across them.  Some of these approaches are better than others but none will deliver the optimal result.  To achieve optimal results, the DCA needs to break each portfolio into customer segments and then prioritise each of those segments, working the highest yielding segment first and the lowest yielding one last.

Using the balance and probability of payment information we have, it is possible to calculate a recovery yield for each of the nine segments in each portfolio; the recovery yield being simply the balance multiplied by the probability of recovering that balance.  Once the recovery yield has been determined for each of the nine segments in each of the three portfolios it is possible to prioritise them against each other as shown below.
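The sketch below works through that prioritisation using the three example portfolios, assuming balance bands and risk groups are independent so that each portfolio splits into nine segments. The top-ranked segment that emerges (Portfolio C, €10 000 balances, low risk) is the €1 800 yield referred to later on.

```python
# Recovery yield per account = balance x probability of payment.
# Figures are taken from the three portfolios described above; balance bands and
# risk groups are assumed to be independent, giving nine segments per portfolio.

portfolios = {
    "A": {"size": 60_000,
          "balances": {4_000: 0.50, 2_000: 0.25, 5_500: 0.25},
          "risk": {"high": (0.60, 0.07), "medium": (0.20, 0.11), "low": (0.20, 0.20)}},
    "B": {"size": 60_000,
          "balances": {13_500: 0.50, 7_500: 0.25, 5_000: 0.25},
          "risk": {"high": (0.50, 0.02), "medium": (0.30, 0.05), "low": (0.20, 0.13)}},
    "C": {"size": 60_000,
          "balances": {4_500: 1 / 3, 8_000: 1 / 3, 10_000: 1 / 3},
          "risk": {"high": (0.70, 0.075), "medium": (0.20, 0.14), "low": (0.10, 0.18)}},
}

segments = []
for name, p in portfolios.items():
    for balance, balance_share in p["balances"].items():
        for risk, (risk_share, prob_payment) in p["risk"].items():
            segments.append({
                "portfolio": name,
                "risk": risk,
                "balance": balance,
                "accounts": round(p["size"] * balance_share * risk_share),
                "recovery_yield": balance * prob_payment,   # expected € per account worked
            })

# Work the highest-yielding segments first.
for seg in sorted(segments, key=lambda s: s["recovery_yield"], reverse=True)[:5]:
    print(seg)
```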

With the order of priority determined, it is possible now to assign effort in the most lucrative way.  For example, if the DCA in question only had enough staff to work 50 000 accounts they would expect to collect balances of approximately €27.7 million if they worked the accounts randomly, approximately €40.4 million if they prioritised their effort based on balance, but as much as €53.3 million if they followed the recommended approach – an uplift of 92%.  As more staff become available the apparent uplift decreases, but there is still a 44% improvement in recoveries if 100 000 accounts can be worked.

If all accounts can be worked then, at least if we keep our assumptions simple, there is no uplift in recoveries to be gained by working the accounts in any particular order. 

 

Ideal Staff Numbers

However, that is not to say that the model becomes insignificant.  While the yield changes based on which segment an account is in, the cost of working each of those accounts remains the same.  Since profitability is the difference between yield and cost and since cost remains steady, a drop in yield is also a drop in profit.  So, continuing along that line of reasoning, there will be a level of yield below which a DCA is making a loss by collecting on an account.

So, it stands to reason then that a DCA working all accounts is unlikely to be making as much profit as they would if they were to use the ‘cherry picking’ model to determine their staffing needs.  New staff should be added to the team for as long as they will add more value than they will cost.  As each new member of staff will be working on lower yield accounts there are diminishing marginal returns on staff, up to the point at which an additional member of staff actually destroys value.

Assume it costs €30 000 to employ and equip one collector and that that collector can work 1 500 accounts in a year.  To be value adding then, that collector must be assigned to work only accounts with a net yield of more than €20. 
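The breakeven figure, and the marginal hiring rule it implies, is simple arithmetic:

```python
# Breakeven net yield for one additional collector, using the figures above.
cost_per_collector = 30_000        # annual cost to employ and equip
accounts_per_collector = 1_500     # accounts one collector can work in a year

print(cost_per_collector / accounts_per_collector)   # €20 per account

# Hiring rule: keep adding collectors while the next-best 1 500 accounts on the
# prioritised list are expected to generate more than €30 000 in net yield.
```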

Up to now, I have simply referred to recovery yield as the total expected recovery from a segment.  That was possible at the time as we had made the simplifying assumption that each portfolio earned the same commission and were only looking to prioritise the accounts.  However, once we start to look at the DCA’s profit, we need to look at net yield – the commission earned by the DCA from the recovery.

If we assume a 10% commission is earned on all recoveries, then the yield of €1 800 in the highest yielding segment becomes a net yield of €180.  Using that assumption we are able to see that the ideal staffing contingent for the example DCA is 104, allowing the DCA to work the 156 000 accounts in segment 24 and better.

At this level the DCA will collect approximately €96 million earning themselves €9.6 million in commission and paying out €3.1 million in staff costs in the process; this would leave them with a profit of €6.4 million.  If they lay off two members of staff and work one less segment their profit would decrease by €6 000.  If, instead, they hired 5 more members of staff and worked one more segment their profits would be reduced by nearly €40 000.

Commission Rate Changes

Having just introduced the role of commission, it makes sense to consider how changes in commission rates might impact on what we have already discussed. 

The simplest change to consider is an across-the-board change in commission rates.  This doesn’t change the order in which accounts are worked as it affects all yields equally.  It does, however, change the optimal staff levels.  In the above example an across-the-board decrease in commission from 10% to 5% would halve the net yield of each segment, meaning that to still achieve a net yield of €20 a segment would have to have a gross yield of €400.  In turn this would mean that staff numbers would need to be cut back to 59: now working 88 000 accounts and generating a total profit of €2 million.

A more common scenario is that commissions are fixed over the term of the contract but that these commissions vary from portfolio to portfolio. 

Most DCAs will charge baseline commission rates which vary with the age of the debt at the time it is taken on.  For example, a DCA may charge a client 5% of all recoveries made on accounts handed over at 60 days in arrears but 10% of all recoveries made on accounts handed over at 120 days in arrears.  This compensates the DCA for the lower recovery rates expected on older debt and encourages primary lenders to outsource more debts to the DCA.

When a DCA is operating across portfolios which each earn different commission rates it should use the net yield in the prioritisation exercise described above rather than the gross yield.  Assume that the DCA from our earlier example actually earns a commission of 5% for all recoveries made from Portfolio A, 7.5% on all recoveries made from Portfolio B and 10% on all recoveries made from Portfolio C. 

Now, the higher rewards offered in Portfolio C change the order in which accounts should be worked.  The DCA no longer concentrates on the largest recovery yield but rather the largest net yield. 
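Continuing the example, the only change to the earlier prioritisation sketch is that each segment’s gross yield is multiplied by its portfolio’s commission rate before ranking; two segments are shown below by way of illustration.

```python
# Net yield = gross recovery yield x the commission rate of the portfolio it sits in.
# Commission rates as per the example: A = 5%, B = 7.5%, C = 10%.

commission = {"A": 0.05, "B": 0.075, "C": 0.10}

# Two illustrative segments: the highest gross-yield segment in B and in C.
example_segments = [("B", 13_500, 0.13),   # gross yield €1 755
                    ("C", 10_000, 0.18)]   # gross yield €1 800

for portfolio, balance, prob_payment in example_segments:
    gross = balance * prob_payment
    net = gross * commission[portfolio]
    print(f"Portfolio {portfolio}: gross €{gross:,.0f} -> net €{net:,.2f}")

# Ranking on net rather than gross yield re-orders segments wherever a higher
# commission outweighs a lower gross yield.
```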

Primary Lenders

Of course, the concepts and models described here are not unique to the world of DCAs; primary lenders should structure their debt management efforts around similar concepts.  The only major difference during the earlier stages of the debt management cycle is that there tend to be more strategic options, more scenarios and a wider diversity of accounts.

This leads to a more complex model but one that ultimately aims to achieve the same end result: the optimal mix between cost and reward.  Again a scorecard forms the basis for the model and creates the customer segments mentioned above.  Again the size of the balance can be used as a proxy for the expected benefit.   There is of course no longer a commission but there are new complexities, including the need to cost multiple strategy paths and the need to calculate the recovery rate as the recovery rate of the strategy only – i.e. net of any recoveries that would have happened regardless.  For more on this you can read my articles on risk based collections and on self-cure models.


In the other articles I’ve written here I looked at risk-based collections strategies from a primary lender’s point of view and with a particular focus on the earlier stages of the process.  However, although the basic principles are universally applicable, there are a number of changes in thinking that need to be made when one is considering risk-based collections strategies from a third party’s point of view or when the debt in question has been delinquent for a longer period of time.  In this article I will try to highlight the most important of those changes.

 

Late Stage Collections

The most important difference between early stage collections and late stage collections is driven by changes to the risk distribution over time.  A random group of accounts in early stage collections is likely to be made up of a diverse distribution of actual account risks: a lot of low risk accounts, a lot of medium risk accounts, several high risk accounts and a few very high risk accounts.  As this group of accounts proceeds through the collections process the distribution becomes more homogeneous, with a bias towards the riskier accounts.  This is not because there are more risky accounts present per se, but rather because most of the lower risk accounts have left collections.  As accounts proceed through collections it is the lower risk accounts that leave at a faster rate and the higher risk accounts therefore begin to dominate.

 

The practical implication of this is that it becomes harder and harder to segment accounts into risk groups.  Since risk-based strategies are built upon customer segmentation, it follows that these strategies also become more difficult to design.

For this reason, specialist scorecards and strategies are recommended for late stage collections.  In most cases, a traditional behavioural scorecard starts to lose its effectiveness as a debt portfolio reaches about 60 days of delinquency from where its performance drops steadily, seldom being of significant value after 120 days.

Early stage collections scorecards may add value for longer but once a debt has passed 210 days of delinquency a specialist scorecard is almost always needed.

Since the distribution of risk has been reversed, so too should be the focus of the scorecard.  Rather than trying to predict which few of the many good accounts will eventually go bad, the scorecard now needs to predict which few of the many bad accounts will eventually cure.  The traditional ‘bad definition’ is replaced with a ‘good definition’. 

The exact ‘good definition’ will vary with business requirements but is usually related to whether an account will make a payment, a certain number of consecutive payments or payments equal to a certain percentage of balance outstanding.

Specialised late stage collections scorecards of this kind tend to focus on events that happened post delinquency rather than pre-delinquency: number of times in collections, number of previous collection payments, promise-to-pays kept, number of negative bureau remarks, number of legal claims outstanding, etc. 

Despite their technical limitations, late stage scorecards can still offer significant value.  In a recent implementation of a late stage scorecard built off very limited data I have seen a portfolio segmented and summarised into four risk groups, with the 25% of accounts in the lowest risk group four times as likely to make a payment as the 25% of accounts in the highest risk group.

 

External Collections

Managing late stage collections is typically a drawn-out and operationally intensive process.  For this reason, many lenders choose to outsource the function to third parties.

Once a debt has left the primary lender, its nature changes; most obviously because the ‘balance’ side of the lending profit model is no longer a consideration.  The third-party can no longer profit from commission or interest charges levied on new balance growth, can no longer charge annual fees and can no longer generate cross-sell opportunities.  So only the cost side of the traditional lending profit model remains. 

That is not to say that debt collection agencies don’t earn revenue of course, it’s just that they earn their revenue from the cost side of the traditional lending profit model; the side dealing with risk costs and bad debt losses.

There are two dominant business models for debt collection agencies based on whether the original debt was sold outright or merely outsourced by the primary lender.  When a debt collection agency buys a portfolio of debt outright, it tends to see that portfolio in a similar way to the primary lender.  When, on the other hand, it only collects the debts on the original lender’s behalf, the portfolio is usually viewed quite differently.

 

Purchased Debt

When a portfolio of debts is purchased by a debt collection agency they usually pay a price equal to a given percentage of the balances outstanding.  They then need to recover a high enough percentage of those balances to cover this initial price as well as all the operational costs that need to be incurred in making those recoveries.

This business model means that the buyer takes on all the risk inherent in a portfolio at the time of purchase and has little ability to adjust their level of investment thereafter.  The price paid at the start is the critical factor in overall profitability; managing the costs incurred in the collection process and using better techniques to gain an up-lift in recoveries usually make a lesser impact.

A debt collection agency interested in purchasing a new portfolio should therefore invest considerable time and resources to accurately estimate the expected recoveries from – and the expected cost of working – any new portfolio.

Unfortunately, these efforts are usually complicated by a lack of quality data.  It is uncommon for the purchaser to have access to extensive data relating to the portfolio for sale, often because this simply doesn’t exist but also sometimes due to a reluctance to share data on the part of the seller.  Therefore, it is often necessary to make some compromises.

The best way to deal with a lack of specific data is to deploy a generic model.  Generic models can either be built in-house (if the purchaser has experience collecting debts on other, similar portfolios) or they can be purchased from a specialist firm with access to pooled industry data. 

Rather than running accounts through the generic scorecard on a case-by-case basis as one would if it were deployed in its typical form, the scorecard is used to segment a random sample of accounts from the new portfolio in order to create an estimate of that portfolio’s total risk make-up.  The expected recovery rates of the model create a baseline estimate for the expected recovery rate of the portfolio.  The generic strategy paths of the model can be used to create a baseline estimate for the cost of the recovery.

This baseline can then be adjusted upwards or downwards to take into account any variance from the norm the purchaser expects to stem from their own environment: for example, the expected recovery rate would be adjusted downwards if the purchaser had never collected a debt in the market in question, or upwards if they had a track record of consistently achieving higher than average recoveries.
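A hedged sketch of that baseline estimate follows: a scored sample of the portfolio for sale, pooled recovery rates per score band and a judgmental adjustment factor. All bands, rates and figures here are assumptions for illustration.

```python
# Baseline recovery estimate for a portfolio offered for sale, using a generic
# scorecard. Score bands, pooled recovery rates and the adjustment factor are
# illustrative assumptions.

pooled_recovery_rate = {"low": 0.20, "medium": 0.11, "high": 0.05}   # by risk band

def expected_recovery(scored_sample: list[dict], adjustment: float = 1.0) -> float:
    """scored_sample: a random sample of the portfolio, each account scored into a band."""
    baseline = sum(acc["balance"] * pooled_recovery_rate[acc["band"]] for acc in scored_sample)
    return baseline * adjustment

sample = [{"band": "high", "balance": 5_000},
          {"band": "medium", "balance": 3_000},
          {"band": "low", "balance": 8_000}]

# e.g. adjust downwards (0.9) where the buyer has never collected in this market.
print(f"€{expected_recovery(sample, adjustment=0.9):,.0f} expected from the sample")
```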

A generic scorecard might be less accurate than a bespoke scorecard would be in each specific case but it is more broadly applicable than the bespoke scorecard would have been.  A generic approach is also quicker and cheaper to implement.  This will be the preferable solution, therefore, whenever there is either simply no data with which to build a bespoke model or where the total value of the portfolio does not justify a larger investment.

 

Third-Party Debt

If the debt collection agency has not bought the portfolio outright but is collecting on behalf of the primary lender or debt owner, the profit model changes somewhat.  Unlike the traditional lending model, the profit model of the third-party debt collector is not heavily influenced by bad debt write-offs.  Instead of incurring small risk costs for each account in arrears and large write-off costs, the third-party debt collector earns a fee or commission for every recovery made.  This means that if a large debt is written off there is no direct cost as such, just a lost opportunity for a commission-earning recovery.

Since the commission is usually a small percentage of the total balance outstanding the impact of balance size is diluted; but the cost to make a recovery is relatively unchanged so the focus shifts to ‘cherry picking’.  For a given operational cost it can be more profitable to contact the accounts that are most likely to pay than it is to try to prevent large balance accounts that are at risk of defaulting from doing so.

Using a combination of the expected recovery rate and the expected recovery value on a recent project, we were able to identify segments of a portfolio that generated returns over 8 times higher than the average returns for that portfolio.  Identifying and focusing on similar segments from each of its portfolios, rather than treating each portfolio as internally and externally homogenous, will allow a debt collection agency to generate significantly higher profits by ensuring all high yield opportunities are followed while all low yield opportunities are not pursued.

This ability to shift effort to where it is most profitable is the key difference between the two business models.  While the eventual profitability of a single deal is still dominated by the contracted commission rates, the debt collection agency can more easily shift their investment to other more profitable deals as soon as it becomes apparent that the actual recovery rates on a given portfolio are lower than initially expected. 

Assume a debt collection agency has signed a new contract to collect debt on a lender’s behalf in return for a commission of 10% of all recoveries.  Assume too that they signed the deal at this price because their internal calculations suggested that they would receive payments from every second debtor which, in turn, would be sufficient to cover their costs and profit requirements.  If, after a month of working the new portfolio, they were to discover that they were only receiving payments from every fourth debtor, they could reduce the number of staff they had working on the new portfolio and re-assign them to more profitable work on another portfolio; restricting their efforts on the new portfolio to only those customer segments identified as still profitable.

In reality, contracts signed between lenders and agencies try to overcome this problem so the application of the theory is seldom this ‘clean’.  In most cases a large primary lender will assign their debt to more than one collections agency to allow a comparison of the performance of each against the other.  They usually then distribute their debt in a ratio based on that performance so that, for example, the best performing agency may receive 60% of all outsourced debt, the second best agency 30% and the worst agency 10%. 

Clauses like this complicate the calculation of optimal investments but don’t change the fact that it is still possible for a third-party debt collector to adjust their level of investment in any one portfolio as more information is learned.

*   *   *   *   *

I have been asked before how a debt collection agency can optimise the timing of when they hand accounts over to lawyers.  The model I suggest for these cases is an adaptation of the self-cure model I introduced in a previous article.  Rather than dedicate a whole article to this version, I have included a discussion here.  However, since the subject matter is a bit more technical from here onwards it may be of less interest to some readers.

 

Escalating to Legal

Once an account has reached an external debt collection agency there are not many options left in terms of further escalation; if an account leaves a debt collection agency it can either move sideways to another debt collection agency, be written off or be escalated to legal recoveries.

Knowing which of these routes is best for each account is an important factor in determining the overall profitability of a third-party debt collector.  The approach I would recommend is the same one I discussed here when talking about self-cure models.

It is the nature of the debt collection business that much of the work will go unrewarded, with only a small portion of accounts generating meaningful payments.  It is important to not over-invest in accounts where the likelihood of recoveries is too low.  However, since all accounts have already demonstrated either a lack of willingness to pay or a lack of ability to pay, determining when the likelihood to recover is too low is easier said than done.

The factors involved are very similar to those at play in a self-cure model, where one is also seeking not to over-invest in accounts while at the same time not compromising the long-term recovery rate of the portfolio by failing to work accounts that consequently go bad when they might otherwise not have.  In the case of a self-cure model the goal is to avoid needlessly paying to contact debtors that will pay regardless of a contact; in the case of an escalation model the table has been turned and the goal is to avoid needlessly paying to contact a debtor that will not pay regardless of a contact.

In order to calculate the optimal time to change the treatment of an account it is important to know the direct costs and expected benefits of each strategy and, most importantly, how those change over time. 

The expected recovery from a telephone and letter based debt collection strategy decreases over time.  Once a delinquent customer has been contacted telephonically on multiple occasions the probability of the next contact making a difference is small.  The expected recovery from a legal recovery strategy decreases more slowly.  The cost of each method is also different, with the standard approach having a small but regular cost where the legal method usually has a high but largely fixed cost associated.  It is these differences that create an opportunity to profit from a change in strategy.

In summary, the cheaper method should always be employed unless the lost opportunity for recoveries from not using the more expensive method outweighs those savings.  So, in the case of a debt collection agency, an account should be retained for the next period unless the decrease in expected recoveries from a legal recoveries strategy over the same period is greater than the cost difference between the two methods over that period.
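Expressed as a simple rule, with invented figures for the costs and expected recoveries, the decision looks something like this:

```python
# Keep an account in standard (phone and letter) collections for one more period,
# or escalate to legal now? All figures below are invented for illustration.

def keep_for_another_period(cost_standard_period: float,
                            cost_legal_period: float,
                            legal_recovery_if_escalated_now: float,
                            legal_recovery_if_escalated_next_period: float) -> bool:
    cost_saving = cost_legal_period - cost_standard_period
    recovery_lost_by_waiting = (legal_recovery_if_escalated_now
                                - legal_recovery_if_escalated_next_period)
    # Retain the account unless the decay in expected legal recoveries over the
    # period exceeds the cost difference between the two methods.
    return recovery_lost_by_waiting <= cost_saving

print(keep_for_another_period(cost_standard_period=15, cost_legal_period=120,
                              legal_recovery_if_escalated_now=900,
                              legal_recovery_if_escalated_next_period=840))
# -> True: the €60 of expected recovery lost by waiting is less than the €105 saved.
```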


Probably the most common credit card business model is for customers to be charged a small annual fee in return for which they are able to make purchases using their card and to pay for those purchases only after some interest-free period – often up to 55 days.  At the end of this period, the customer can choose to pay the full amount outstanding (transactors), in which case no interest accrues, or to pay down only a portion of the amount outstanding (revolvers), in which case interest charges do accrue.  Rather than relying on usage fees from its customers, the card issuer earns a secondary revenue stream by charging merchants a small commission on all purchases made in their stores by the issuer’s customers.

So, although credit cards are similar to other unsecured lending products in many ways, there are enough important differences that are not catered for in the generic profit model for banks (described here and drawn here) to warrant an article specifically focusing on the credit card profit model. Note: in this article I will only look at the profit model from an issuer’s point of view, not from an acquirer’s.

* * * 

We started the banking profit model by saying that profit was equal to total revenue less bad debts, less capital holding costs and less fixed costs.  This remains largely true.  What changes is the way in which we arrive at the total revenue, the way in which we calculate the cost of interest, and the addition of two new costs – loyalty programmes and fraud.  Although in reality there may also be some small changes to the calculation of bad debts and of fixed costs, for the sake of simplicity I am going to assume that these are calculated in the same way as in the previous models.

 

Revenue

Unlike a traditional lender, a card issuer has the potential to earn revenue from two sources: interest from customers and commission from merchants.  The profit model must therefore be adjusted to cater for each of these revenue streams as well as annual fees. 

Total Revenue = Fees + Interest Revenue + Commission Revenue

= Fees + (Revolving Balances x Interest Margin x Repayment Rate) + (Total Spend x Commission)

= (AF x CH) + (T x ATV) x ((RR x PR x i) + CR)

Where:

AF = Annual Fee
CH = Number of Card Holders
T = Number of Transactions
ATV = Average Transaction Value
RR = Revolve Rate
PR = Repayment Rate
i = Interest Rate
CR = Commission Rate

Customers usually fall into one of two groups and so revenue strategies tend to conform to these same splits.  Revolvers are usually the more profitable of the two groups as they can generate revenue in both streams.  However, as balances increase and approach the limit the capacity to continue spending decreases.  Transactors, on the other hand, seldom carry a balance on which an issuer can earn interest but they have more freedom to spend.

Strategies aimed at each group should be carefully considered.  Balance transfers – or campaigns which encourage large, once-off purchases – create revolving balances and sometimes a large, once-off commission while generating little on-going commission income.  Strategies that encourage frequent usage don’t usually lead to increased revolving balances but do have a more consistent – and often growing – long-term impact on commission revenue.

Variable Costs

There is also a significant difference between how card issuers and other lenders accrue variable costs. 

Firstly, unlike other loans, most credit cards have an interest free period during which the card issuer must cover the costs of carrying the debt.

The high interest margin charged by card issuers aims to compensate them for this cost but it is important to model it separately as not all customers end up revolving and hence, not all customers pay that interest at a later stage.  In these cases, it is important for an issuer to understand whether the commission earnings alone are sufficient to compensate for these interest costs.

Secondly, most card issuers accrue costs for a customer loyalty programme.  It is common for card issuers to provide their customers with rewards for each Euro of spend they put on their cards.  The rate at which these rewards accrue varies by card issuer but is commonly related in some way to the commission that the issuer earns.  It is therefore possible to account for this by simply using a net commission rate.  However, since loyalty programmes are an important tool in many markets, I prefer to keep them separate as a specific profit lever.

Finally, credit card issuers also run the risk of incurring transactional fraud –  lost, stolen or counterfeited cards.  There are many cases in which the card issuer will need to carry the cost of fraudulent spend that has occurred on their cards.  This is not a cost common to other lenders, at least not after the application stage.

Variable Costs = (T x ATV) x ((CoC x IFP) + L + FR)

Where:
T = Number of Transactions
ATV = Average Transaction Value
CoC = Cost of Capital
IFP = Interest Free Period Adjustment
L = Loyalty Programme Costs
FR = Fraud Rate
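As a rough illustration of the variable cost formula, the sketch below folds the funding cost of the interest-free period, the loyalty cost and fraud losses into a single per-spend rate.  It assumes the interest-free period adjustment scales an annual cost of capital and that loyalty and fraud are expressed as rates on spend; all numbers are hypothetical.

```python
# A minimal sketch of: Variable Costs = (T x ATV) x ((CoC x IFP) + L + FR)
# All parameter values are hypothetical.

def variable_costs(transactions, avg_transaction_value,
                   cost_of_capital, interest_free_adjustment,
                   loyalty_rate, fraud_rate):
    total_spend = transactions * avg_transaction_value
    funding_rate = cost_of_capital * interest_free_adjustment  # cost of carrying spend through the interest-free period
    return total_spend * (funding_rate + loyalty_rate + fraud_rate)

# Example: 45-day average interest-free period on an annual cost of capital
print(variable_costs(transactions=2_000_000, avg_transaction_value=45,
                     cost_of_capital=0.10, interest_free_adjustment=45 / 365,
                     loyalty_rate=0.005, fraud_rate=0.001))
```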

Shorter interest free periods and cheaper loyalty programmes will result in lower costs but will also likely result in lower response rates to marketing efforts, lower card usage and higher attrition among existing customers.

 

The Credit Card Profit Model                   

Profit is simply what is left of revenue once all costs have been paid; in this case after variable costs, bad debt costs, capital holding costs and fixed costs have been paid.

I have decided to model revenue and variable costs as functions of total spend while modelling bad debt and capital costs as a function of total balances and total limits. 

The difference between the two arises from the interaction of the interest-free period and the revolve rate over time.  When a customer first uses their card, their spend increases and so do the commission earned and the loyalty and interest costs accrued by the card issuer.  Once the interest-free period ends and the payment falls due, some customers (transactors) will pay their full balance outstanding and thus carry a zero balance, while others will pay only the minimum due (revolve) and thus carry over a balance equal to their spend less the minimum repayment percentage.

Over time, total spend increases in both customer groups but balances only increase among the group of customers that are revolving.  It is these longer-term balances on which capital costs accrue and which are ultimately at risk of being written-off.  In reality, the interaction between spend and risk is not this ‘clean’ but this captures the essence of the situation.

Profit = Revenue – Variable Costs – Bad Debt – Capital Holding Costs – Fixed Costs

= (AF x CH) + (T x ATV) x ((RR x PR x i) + CR) – (T x ATV) x (L + (CoC x IFP) + FR) – (TL x U x BR) – ((TL x U x CoC) + (TL x (1 – U) x BHR x CoC)) – FC

= (AF x CH) + (T x ATV) x ((RR x PR x i) + CR – L – (CoC x IFP) – FR) – (TL x U x BR) – ((TL x U x CoC) + (TL x (1 – U) x BHR x CoC)) – FC

Where:
AF = Annual Fee
CH = Number of Card Holders
T = Number of Transactions
ATV = Average Transaction Value
RR = Revolve Rate
PR = Repayment Rate
i = Interest Rate
CR = Commission Rate
L = Loyalty Programme Costs
IFP = Interest Free Period Adjustment
FR = Fraud Rate
TL = Total Limits
U = Average Utilisation
BR = Bad Rate
BHR = Basel Holding Rate
CoC = Cost of Capital
FC = Fixed Costs
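Putting the pieces together, the following sketch implements the full equation above lever-by-lever.  It is a simplified illustration rather than a production model; every input value is hypothetical.

```python
# A minimal sketch of the credit card profit model, using the levers defined above.
# All values are hypothetical and purely illustrative.

def credit_card_profit(AF, CH, T, ATV, RR, PR, i, CR, L, CoC, IFP, FR,
                       TL, U, BR, BHR, FC):
    total_spend = T * ATV
    revenue = (AF * CH) + total_spend * ((RR * PR * i) + CR)        # fees + interest + commission
    variable_costs = total_spend * (L + (CoC * IFP) + FR)           # loyalty + funding + fraud
    bad_debt = TL * U * BR                                          # losses on drawn balances
    capital_costs = (TL * U * CoC) + (TL * (1 - U) * BHR * CoC)     # drawn + undrawn (Basel) holding costs
    return revenue - variable_costs - bad_debt - capital_costs - FC

profit = credit_card_profit(AF=20, CH=100_000, T=2_000_000, ATV=45,
                            RR=0.60, PR=0.90, i=0.18, CR=0.015,
                            L=0.005, CoC=0.10, IFP=45 / 365, FR=0.001,
                            TL=250_000_000, U=0.35, BR=0.04, BHR=0.12,
                            FC=5_000_000)
print(f"{profit:,.0f}")
```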

 

Visualising the Credit Card Profit Model  

As with the banking profit model, it is also possible to create a visual profit model.  This model communicates the links between key ratios and teams in a user-friendly manner but does so at the cost of some accuracy.

The key marketing and originations ratios remain unchanged but the model starts to diverge from the banking one when spend and balances are considered in the account management and fraud management stages.   

The first new ratio is the ‘usage rate’, which is similar to a ‘utilisation rate’ except that it looks at monthly spend rather than at carried balances.  This is done to capture information for transactors who may have a zero balance – and thus a zero utilisation – at each month end but who may nonetheless have been restricted by their limit at some stage during the month.

The next new ratio is the ‘fraud rate’.  The structure and work of a fraud function is often similar in design to that of a debt management team, with analytical, strategic and operational roles.  I have simplified it here to a simple ratio of fraud to good spend as this is the most important measure from a business point-of-view; however, if you are interested in more detail about the fraud function you can read this article or search in this category for others.

The third new ratio is the ‘commission rate’.  The commission rate earned by an issuer will vary by merchant type and, even within merchant types, often on a case-by-case basis depending on the relative power of each merchant.  Certain card brands will also attract different commission rates, usually coinciding with their respective strategies.  So American Express and Diners Club, which aim to attract wealthier transactors, will charge higher commission rates to compensate for their lower revolve rates, while Visa and MasterCard will charge lower rates but appeal to a broader target market that is more likely to revolve.

The final new ratio is the revolve rate, which I have mentioned above.  This refers to the percentage of customers who pay the minimum balance – or less than their full balance – every month.  On these customers an issuer can earn both commission and interest but must also carry higher risk.  The ideal revolve rate will vary by market and by the issuer’s business objectives, but should be higher when the issuer is aiming to build balances and lower when the issuer is looking to reduce risk.
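For illustration, the four new ratios could be derived from monthly portfolio figures along the following lines; the field names and numbers are hypothetical.

```python
# Hypothetical monthly portfolio figures used to illustrate the four new ratios.
monthly = {
    "spend": 90_000_000,           # total purchases in the month
    "total_limits": 250_000_000,   # sum of credit limits
    "fraud_losses": 90_000,        # confirmed fraudulent spend
    "commission_earned": 1_350_000,
    "active_accounts": 100_000,
    "revolving_accounts": 60_000,  # accounts not paying their full balance
}

usage_rate = monthly["spend"] / monthly["total_limits"]            # spend, not carried balances, vs. limits
fraud_rate = monthly["fraud_losses"] / monthly["spend"]            # fraud : good spend
commission_rate = monthly["commission_earned"] / monthly["spend"]  # blended across merchants
revolve_rate = monthly["revolving_accounts"] / monthly["active_accounts"]

print(usage_rate, fraud_rate, commission_rate, revolve_rate)
```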

 







One of the business world’s most repeated truisms is that you get what you measure.  So it stands to reason that, if the goal of an organisation is to maximise profit, a unit’s contribution to that profit maximising effort should be the primary measure against which it is evaluated. 

Instead, it is common to find a wide range of diverse measures being used to evaluate and direct teams.  These measures are usually assigned using a traditional top-down budgeting approach.  The CEO might start the budgeting process by aiming to deliver a specific return on equity to shareholders.  Then, together with the other directors, she might identify a number of strategies that they believe will deliver that return.  Those strategies are then broken down into goals to be achieved by each of the business units that make up the organisation.  For example, the marketing team’s goal might be to generate new loan applications.  Similarly, the product team’s goal might be to increase the average revenue-per-account.  These goals are converted into specific measures and success is defined by the team’s ability to meet and exceed those measures.

This approach seems logical but it has several important weaknesses.  In this case, the marketing team might offer loans to potential customers at a reduced interest rate in order to increase demand.  As logical – and rewarding – a move as this might be from their point-of-view, it would also be in direct conflict with the product team’s goal of increasing the average revenue-per-account.  Alternatively, a change in the prevailing market conditions might reverse the need for market share growth.

Profit Model Analytics offers a solution by improving goal alignment in two ways: it co-ordinates the activities of disparate teams with each other and with their environments.

Firstly, consider the interaction between teams.  Individual profit levers seldom reinforce one another.  In fact, the improvement of one profit lever is often only possible at the detriment of another.  So, when teams are given goals based on individual profit levers, conflict is the norm.  For as long as the full impact of a strategy is divided across two or more teams some of those teams will remain overly conservative and the others overly aggressive.

However, the profit model looks beyond a team’s narrow area of interest and considers the impact that a strategy will have on the broader organisation.  It elevates the interaction of profit levers, or the profit model, above the performance of any individual profit lever.  Thus, it forces teams to share goals and co-ordinate their activities across reporting structures.

In the scenario above, the marketing team would now be incentivised to follow a different and more profitable strategy than simply increasing the number of new applications received.  A profit model would show that the number of applications a bank receives is a cost driver, not a revenue driver.  In fact, revenue is only derived from loans that are turned into good customers, and this is a function of the bank’s approval rate, the rate at which customers take up approved loans, the customers’ attrition rate and the inherent risk of the target market.  In order to maximise profit, therefore, the marketing team might decide to concentrate its efforts on appealing to a lower-risk population whose members are more likely to be approved for loans or on encouraging customers who have been approved to take up their loans.  Neither of these strategies would have a negative impact on the other teams.  In fact, if the marketing team chose to focus its efforts on reducing the rate at which customers left the bank, both the marketing team and the product team could benefit.

Secondly, consider the interaction between a team and its changing environment.  Goals set using the top-down process usually change in gradual steps – coming into being after one summit meeting and remaining in force until the next such meeting.  The environment, however, is more dynamic.  This can lead to confusion and conflict as team goals diverge from – and are emphasised at the expense of – organisational goals.  Fortunately the single, widely-held goal of profit optimisation not only reduces inter-team conflict, it also increases goal consistency and goal relevance over time. 

When times are good, the marketing team might be encouraged to increase market share by targeting slightly riskier populations.  However, risky growth becomes unprofitable as soon as the market experiences a downturn.  It is easy to see how a conflict between the interests of the team and the interests of the organisation could have arisen if they were measured against a static goal based on the growth in the number of new accounts while the environment changed around them.  Profit, on the other hand, is a fluid goal.  In this case, as soon as changes in the environment make a conservative strategy more profitable, the marketing team could adapt quickly by, for example, abandoning their initial growth goal in favour of a risk-minimisation goal.  Although the goal remains the same – to maximise profit – the optimal means of achieving it will vary, just as surely as a mountaineer attempting to summit Everest might need to adjust their route to compensate for changes in the prevailing weather conditions.

One simple way to visualise the profit model is as a pyramid.  Each layer of the profit model pyramid expresses the layer above it in finer detail.  So, on top of the pyramid is profit which is the result of all the activities of an organisation.  Profit can, most simply, be broken into revenue, variable costs and fixed costs and so the next layer down is made-up by these major profit levers.  Each of these profit levers is, in turn, made-up of more detailed profit levers and so on.

 

[Figure: pma-pyramid – the profit model pyramid]

Regardless of the budgeting process employed, each team is likely to be given one overriding goal.  The success of each team is then measured by its ability to meet a pre-set target represented by a measurable metric that resides somewhere in the pyramid.  The further down the pyramid that a metric resides, the more likely a goal based on it is to be counterproductive across teams and to become variable over time.
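One hypothetical way to picture this is to hold the pyramid as a nested structure and ask how deep a given metric sits; the lever names below are illustrative only and follow the simple revenue / variable cost / fixed cost split described above.

```python
# A hypothetical sketch of the profit model pyramid as nested levers.
# Each layer decomposes the one above it; only the structure matters here.

profit_pyramid = {
    "profit": {                        # level one
        "revenue": {                   # level two
            "interest_revenue": {},    # level three
            "commission_revenue": {},
            "fees": {},
        },
        "variable_costs": {
            "funding_costs": {},
            "loyalty_costs": {},
            "fraud_losses": {},
        },
        "fixed_costs": {},
    }
}

def depth(lever, pyramid=profit_pyramid, level=1):
    """Return how far down the pyramid a measurable metric resides."""
    for name, children in pyramid.items():
        if name == lever:
            return level
        found = depth(lever, children, level + 1)
        if found:
            return found
    return None

print(depth("commission_revenue"))  # -> 3: deeper metrics are more likely to conflict across teams
```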

Returning to the example of the marketing team, they might be measured on the number of new applications (level three), the number of active accounts (level two) or profit (level one).  A marketing team with the ‘level three’ goal of increasing the number of new applications will be in continual conflict with a risk team measured on the opposite and competing ‘level three’ goal of minimising the number of accounts in default.  There will be less conflict if both teams are measured on ‘level two’ goals – such as number of active and up-to-date accounts and accounts in default as a percentage of all account balances – but the conflict will only be resolved entirely when both teams are measured according to the ‘level one’ goal of profit.  The unified goal of profit optimisation will alter the relationship between these two teams from adversarial to cooperative.

Making profit the unifying goal of all teams does not mean that all teams are evaluated on the overall performance of an organisation, however.  The usual rules about effective goal setting still apply and such a broad-stroke approach would make it difficult for team members to identify the link between their efforts and the fruits thereof.  Rather, it means that each team should be evaluated on its ability to maximise profit by implementing projects that positively impact those parts of the profit model over which they exert some control.

To do this, the organisation must follow a simple three-step process.  The first step is to reach agreement on the make-up of its profit model.  Teams should agree on which profit levers to include in the profit model as well as the way in which those levers interact.  The second step is to identify which teams impact which profit levers.  Most teams will have a primary impact on a small number of profit levers and a secondary impact on a few more.  (Note: a profit lever with no identified ‘owner’ points towards a weakness in organisational structure, as does a profit lever with too many owners.)  The third step, once the breadth of a team’s potential impact has been established, is to ensure that every project implemented by a team includes processes to monitor and aggregate the performance of all of the profit levers under its influence.

