Posts Tagged ‘champion-challenger’

In terms of credit risk strategy, the lending markets in America and Britain undoubtedly lead the way, and several other markets around the world are applying many of the same principles to good effect. However, for a number of reasons and in a number of ways, many more lending markets remain far less sophisticated. In this article I will focus on these developing markets, discussing how credit risk strategies can be applied in them and how doing so adds value for a lender.

The fundamentals that underpin credit risk strategies are constant but as lenders develop in terms of sophistication the way in which these fundamentals are applied may vary. At the very earliest stages of development the focus will be on automating the decisioning processes; once this has been done the focus should shift to the implementation of basic scorecards and segmented strategies which will, in time, evolve from focusing on risk mitigation to profit maximisation.

Automating the Decisioning Process

The most under-developed markets tend to grant loans using a branch-based decisioning model as a legacy of the days of fully manual lending. As such, it is an aspect more typical of the older and larger banks in developing regions and one that is allowing newer and smaller competitors to enter the market and be more agile.

A typical branch-lending model looks something like the diagram below:

In a model like this, the credit policy is usually designed and signed off by a committee of very senior managers working in the head-office. This policy is then handed over to the branches for implementation, usually by delivering training and documentation to each of the bank’s branch managers. This immediately presents an opportunity for misinterpretation to arise as branch managers try to internalise the intentions of the policy-makers.

Once the policy has been handed over, it becomes the branch manager’s responsibility to ensure that it is implemented as consistently as possible. However, since each branch manager is different, as is each member of branch staff, this is seldom fully achieved, and so policy implementation tends to vary to a greater or lesser extent across the branch network.

Even when the policy is well implemented though, the nature of a single written policy is such that it can identify the applicants that are considered too risky to qualify for a loan but it cannot go beyond that to segment accepted customers into risk groups. This means that the only way senior management can ensure the policy is being implemented correctly in the highest risk situations is by using the size of the loan as an indication of risk. So, to do this, a series of triggers is set to escalate loan applications to management committees.

In this model, which is not an untypical one, there are three committees: one within the branch itself where senior branch staff review the work of the loan officer for small value loan applications; if the loan size exceeds the branch committee’s mandate though it must then be escalated to a regional committee or, if sufficiently large, all the way to a head-office committee.

Although it is easy to see how such a series of committees came into being, their on-going existence adds significant costs and delays to the application process.

In developing markets where skills are in short supply, a significant premium must usually be paid for high-quality management staff. To use these managers’ time to remake essentially the same decision over and over (having already decided on the policy, they must now repeatedly decide whether an application meets the agreed-upon criteria) is an inefficient way to invest a valuable resource. More important, though, are the delays that necessarily accompany such a series of committees. Each time an application is passed from one team – and, more importantly, from one location – to another, a delay is incurred. Added to this, committees need to convene before they can make a decision and usually do so on fixed dates, meaning that a loan application may have to wait several days until the next time the relevant committee meets.

But the costs and delays of such a model are not only incurred by the lender; the borrower too is burdened with a number of indirect costs. In order to qualify for a loan in a market where impartial third-party credit data is not widely available – i.e. where there are no strong and accurate credit bureaus – an applicant typically needs to ‘over-prove’ their creditworthiness. Where address and identification data is equally unreliable, this requirement is even more burdensome. In a typical example an applicant might need to show an established relationship with the bank (six months of salary payments, for example); provide a written undertaking from their employer to notify the bank of any change in employment status; supply the address of a reference who can be contacted when the original borrower cannot be; and often put up some degree of security, even for small value loans.

These added costs serve to discourage lending and add to what is usually the biggest problem faced by banks with a branch-based lending model: an inability to grow quickly and profitably.

Many people might think that the biggest issue faced by lenders in developing markets is the risk of bad debt, but this is seldom the case. Lenders know that they don’t have access to all the information they need when they need it, and so they have put in place the processes I’ve just discussed to mitigate the risk of losses. However, as I pointed out, those processes are ungainly and expensive – too ungainly and too expensive, as it turns out, to facilitate growth – and this is what most lenders want to change as they see more agile competitors entering their markets.

A fundamental problem with growing a branch-based lending model is that costs rise in line with the increase in capacity: to serve twice as many customers will cost almost twice as much. This is the case for a few reasons. Firstly, each branch serves only a given geographical catchment area, so serving customers in a new region usually requires a new branch. Unfortunately, it is almost impossible to add branches perfectly, and each new branch is likely to lead to either an inefficient overlapping of catchment areas or ineffective gaps. Secondly, the branch itself has a fixed capacity, both in terms of the number of staff it can accommodate and the number of customers each member of staff can serve. Both can be adjusted, but only slightly.

Added to this, such a model does not easily accommodate new lending channels. If, for example, the bank wished to use the internet as a channel it would need to replicate much of the infrastructure from the physical branches in the virtual branch because, although no physical buildings would be required and the coverage would be universal, the decisioning process would still require multiple loan officers and all the standard committees.

To overcome this many lenders have turned to agency agreements, most typically with large private and government employers. These employers will usually handle the administration of loan applications and loan payments for their staff and in return will either expect that their staff are offered loans at a discounted rate or that they themselves are compensated with a commission.

By simply taking the current policy rules from the branch-based process and converting them into a series of automated rules in a centralised system, many of these basic problems can be overcome – even before those rules are improved with statistical scorecards. Firstly, the gap between policy design and policy implementation disappears, and with it any risk of misinterpretation. Secondly, the need for committees to police policy implementation is greatly reduced, along with the associated costs and delays. Thirdly, the risk of inconsistent application is removed, as every application is treated in the same way regardless of the branch originating it or the staff member capturing the data. Finally, since decisioning is automated, there is almost no cost to add a new channel onto the existing infrastructure, meaning that new technologies like internet and mobile banking can be leveraged as profitable channels for growth.
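The conversion of written policy rules into centralised, automated rules can be sketched in a few lines of code. The thresholds below are illustrative placeholders, not a real credit policy:

```python
def decide(application):
    """Apply centralised policy rules to a loan application.

    Returns ("ACCEPT", []) or ("DECLINE", [reasons]).
    Thresholds are illustrative, not a real credit policy.
    """
    reasons = []
    if application["age"] < 21:
        reasons.append("applicant under minimum age")
    if application["annual_income"] < 10_000:
        reasons.append("income below policy minimum")
    if application["months_with_employer"] < 12:
        reasons.append("insufficient employment history")
    return ("DECLINE", reasons) if reasons else ("ACCEPT", [])
```

Because every channel – branch, internet or mobile – calls the same function, the policy is applied identically regardless of where the application originates.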

The Introduction of Scoring

With the basic infrastructure in place it is time to start leveraging it to its full advantage by introducing scorecards and segmented strategies. One of the more subtle weaknesses of a manual decision is that it is very hard to use a policy to do anything other than decline an account. As soon as you try to make a more nuanced decision and categorise accepted accounts into risk groups the number of variables increases too fast to deal with comfortably.

It is easy enough to say that an application can be accepted only if the applicant is over 21 years of age, earns more than €10 000 a year and has been working for their current employer for at least a year but how do you segment all the qualifying applications into low, medium and high risk groups? A low risk customer might be one that is over 35 years old, earns more than €15 000 and has been working at their current employer for at least a year; or one that is over 21 years old but who earns more than €25 000 and has been working at their current employer for at least two years; or one that is over 40 years old, earns more than €15 000 and has been working at their current employer for at least a year, etc.

It is too difficult to manage such a policy using anything other than an automated system that uses a scorecard to identify and segment risk across all accounts. Being able to do this allows a bank to begin customising its strategies and its products to each customer segment or niche. Low risk customers can be attracted with lower prices or larger limits, high spending customers can be offered a premium card with more features but also with higher fees, etc.
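A simple additive scorecard of the kind described can be sketched as follows; the points and band cut-offs are invented for illustration and do not come from any real portfolio:

```python
# All points and cut-offs are invented for illustration.
AGE_POINTS = [(40, 60), (35, 50), (21, 30)]                 # (minimum age, points)
INCOME_POINTS = [(25_000, 60), (15_000, 45), (10_000, 25)]  # (minimum income, points)
TENURE_POINTS = [(24, 40), (12, 25)]                        # (months with employer, points)

def lookup(value, table):
    """Return the points for the first band the value qualifies for."""
    for threshold, points in table:
        if value >= threshold:
            return points
    return 0

def risk_band(age, annual_income, tenure_months):
    """Add up attribute points and map the total score to a risk band."""
    score = (lookup(age, AGE_POINTS)
             + lookup(annual_income, INCOME_POINTS)
             + lookup(tenure_months, TENURE_POINTS))
    if score >= 120:
        return "low"
    if score >= 80:
        return "medium"
    return "high"
```

A single score now replaces the unmanageable web of combined rules: every combination of age, income and tenure maps onto one point on a single scale, which can then be cut into as many risk groups as the strategy requires.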

The first step in the process would be to implement a generic scorecard – that is, a scorecard built using pooled third-party data from a portfolio similar to the one in which it is to be implemented. These scorecards are cheap and quick to implement and, when used to inform only simple strategies, offer almost as much value as a fully bespoke scorecard would. Over time the data needed to build a more specific scorecard can be captured so that the generic scorecard can be replaced after eighteen to twenty-four months.

But the making of a decision is not the end goal; all decisions must be monitored on an on-going basis so that strategy changes can be implemented as soon as circumstances dictate. Again this is not something that is possible to do using a manual system where each review of an account’s current performance tends to involve as much work as the original decision to lend to that customer did. Fully fledged behavioural scorecards can be complex to build for developing banks but at this stage of the credit risk evolution a series of simple triggers can be sufficient. Reviewing an account in an automated environment is virtually instantaneous and free and so strategy changes can be implemented as soon as they are needed: limits can be increased monthly to all low risk accounts that pass a certain utilisation trigger, top-up loans can be offered to all low and medium risk customers as soon as their current balances fall below a certain percentage of the original balance, etc.
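In an automated environment, such triggers are just a few conditional checks run over every account each cycle. The trigger levels and actions below are illustrative assumptions, not recommendations:

```python
def account_actions(account):
    """Suggest automated account-management actions from simple
    behavioural triggers. All thresholds are illustrative assumptions."""
    actions = []
    # Limit increases for low risk accounts using most of their limit.
    if account.get("risk") == "low" and account.get("utilisation", 0) >= 0.80:
        actions.append(("limit_increase", round(account["limit"] * 1.15)))
    # Top-up offers for low/medium risk loans that are largely repaid.
    if (account.get("risk") in ("low", "medium")
            and account.get("balance", 1) <= 0.40 * account.get("original_balance", 0)):
        actions.append(("offer_top_up",
                        account["original_balance"] - account["balance"]))
    return actions
```

Running such checks monthly costs virtually nothing, which is what makes instantaneous, account-level strategy changes possible.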

In so doing, a lender can optimise the distribution of their exposure; moving exposure from high risk segments to low risk segments or vice versa to achieve their business objectives. To ensure that this distribution remains optimised, the individual scores and strategies should be consistently tested using champion/challenger experiments. Champion/challenger is a simple concept and can be applied to any strategy, provided the systems exist to ensure that it is implemented randomly and that its results are measurable. The more sophisticated the strategies, the more sophisticated the champion/challenger experiments will look, but the underlying theory remains unchanged.
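A minimal champion/challenger assignment might look like the sketch below; the 10% challenger share and the test label are assumptions for illustration:

```python
import random

def assign_strategy(account_id, challenger_share=0.10):
    """Randomly route a fixed share of accounts to the challenger
    strategy. Seeding on the account ID makes the assignment
    deterministic, so an account keeps its group for the whole test."""
    rng = random.Random(f"cc-test-1:{account_id}")  # hypothetical test label
    return "challenger" if rng.random() < challenger_share else "champion"
```

Once the test has run its course, profit per account is compared between the two groups; if the challenger wins, it is rolled out and becomes the new champion.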

Elevating the Profile of Credit Risk

Once scorecards and risk segmented strategies have been implemented by the credit risk team, the team can focus on elevating their profile within the larger organisation. As credit risk strategies are first implemented they are unlikely to interest the senior managers of a lender who would likely have come through a different career path: perhaps they have more of a financial accounting view of risk or perhaps they have a background in something completely different like marketing. This may make it difficult for the credit risk team to garner enough support to fund key projects in the future and so may restrict their ability to improve.

To overcome this, the credit team needs to shift its focus from risk to profit. The best result a credit risk team can achieve is not to minimise losses but to maximise profits while keeping risk within an acceptable band. I have written several articles on profit models elsewhere on this blog, but the basic principle is that once the credit risk department is comfortable with the way in which its models predict risk, it needs to understand how that risk contributes to the organisation’s overall profit.

This shift will typically happen in two ways: as a change in the messages the credit team communicates to the rest of the organisation and as a change in the underlying models themselves.

To change the messages being communicated by the credit team they may need to change their recruitment strategies and bring in managers who understand both the technical aspects of credit risk and the business imperatives of a lending organisation. More importantly though, they need to always seek to translate the benefit of their work from technical credit terms – PD, LGD, etc. – into terms that can be more widely understood and appreciated by senior management – return on investment, reduced write-offs, etc. A shift in message can happen before new models are developed but will almost always lead to the development of more business-focussed models going forward.

So the final step is to actually change the models themselves, and it is by the degree to which such specialised, profit-segmented models have been developed and deployed that a lender’s level of sophistication will be measured in more sophisticated markets.



Many lenders feel that once an account has entered the debt management process it is time to start terminating the relationship. This attitude may be valid in low risk environments, where debt management tends to see only the worst accounts. However, in today’s environment lenders should not view debt management purely as an exit channel for bad customers but also as an alternative sales channel.

In other words, in the diagram below you can no longer view the procession of an account as always being from left to right but need to consider the reverse movement; turning ‘bad’ customers ‘good’ again.   

Debt Management fulfils the same role as sales and account management

In times of strong economic growth it might be possible to drive portfolio growth solely on the back of new customer acquisition. In these times the market is full of ‘good’ potential customers looking for new credit, and the customers who end up in debt management are few and of very high inherent risk. However, as the economy slows down, two significant things happen: ‘good’ customers stop borrowing, so new sales slow (and the quality of new customers tends to drop), and more customers find themselves in the debt management process (where, conversely, the quality of those customers tends to rise).

In such times then, the lender should invest more effort in identifying the best prospects from within their debt management portfolio.  Customers in debt management provide an attractive prospect pool for a few reasons: the expected ‘response rate’ to an ‘offer’ is likely to be higher than in marketing campaigns; the lender has extensive data on each customer and their habits; there is a cost of not recovering; and, in the case of revolving products, the ‘new’ customer comes with an already established balance – mimicking a traditional balance transfer.   

But not all customers are worth retaining and so it is important to understand the relative risk and value of each customer in debt management before assigning a retention strategy.  Risk segmentation is ideally done using a dedicated debt management scorecard but, at least in the earliest stages, it can also be done using a behavioural scorecard.   

Customers who are high risk are by definition likely to re-offend.  Customers like this, who are regularly in debt management, are expensive to retain and consume both operational resources and capital provisions.  Unless the balance outstanding is large or the price premium charged is very high, it may be best to expedite these customers through the process by outsourcing this debt to a third-party debt collector.    

The fact that the balance outstanding is small though should not, on its own, be used to label a customer as ‘not worth retaining’.  The most important value is not the current value but the potential future value of a customer.  The lender should consider the potential for future loans and cross-sells too.  When the relationship is sacrificed with one product, as it surely will be with an expedited outsourcing/ write-off process, it is sacrificed for all other current and future products too.    

So segmentation should only be done based on a full customer view which includes a measure of risk and reward.  As always, the way to do this is through data analysis, scorecards and test-and-learn strategies.    
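Put together, such a segmentation can be sketched as a simple risk/value matrix. The bands, value threshold and treatments below are invented for illustration; a real implementation would derive them from scorecards and test-and-learn results:

```python
def retention_strategy(risk, potential_value):
    """Assign a debt-management treatment from a simple risk/value
    matrix. Bands, threshold and treatments are illustrative."""
    high_value = potential_value >= 5_000  # assumed customer-value threshold
    if risk == "high":
        # Habitual defaulters: retain only where the relationship is valuable.
        return "standard collections" if high_value else "outsource/expedite"
    if risk == "medium":
        return "payment plan with monitoring"
    # 'Good' customers in arrears usually lack ability, not willingness.
    return "invest in retention" if high_value else "restructure and retain"
```

Note that `potential_value` should capture the full customer view – future loans and cross-sells included – not just the balance currently outstanding.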

Where good customers have been identified it is worth investing in their retention.  This investment must be made in long-term and short-term retention strategies.    

The debt management process provides lenders with a rare opportunity to spend a significant amount of time speaking to their customers on a one-to-one basis. Viewed in this light, debt management provides a wonderful opportunity for long-term relationship building. Make sure your organisation can benefit from this opportunity by having staff that are skilled in customer handling and sales techniques – not just in demanding repayment. In the long-term, investing in staff training should be a priority for every organisation. Good training in this area will include reading a customer, overcoming objections and structuring budgets/payment plans (I’d recommend speaking to Mark Smith for all of your collections staff training needs).

In the short-term, monetary investments – waived fees, discounted settlements, etc. – should be considered on a case by case basis.  These are the easiest incentives to provide though they should not be the first solution to which a lender turns.  When a ‘good’ customer falls into arrears it is, almost by definition, because they lack the ability to pay rather than the willingness to do so.  This means that the customer is usually willing to work with the lender to find a payment plan that will lead to full repayment while still accommodating their temporary financial difficulties.   

If the customer’s income source has temporarily disappeared – through the loss of a job, for example – then a payment holiday should be considered, with a term extension to cater for the increased interest repayments. Term extensions can also be used on their own, as can debt consolidation, where the problem stems simply from monthly costs exceeding monthly income – as perhaps in the case of rising interest rates or falling commission earnings. In all cases a payment plan should, of course, be accompanied by education and budgeting assistance.
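The arithmetic behind a payment holiday with a term extension can be sketched with the standard annuity formula. The three-month holiday and six-month extension below are illustrative parameters, not a recommended restructure:

```python
def instalment(balance, annual_rate, months):
    """Standard annuity instalment for a given balance and term."""
    r = annual_rate / 12
    return balance * r / (1 - (1 + r) ** -months)

def post_holiday_instalment(balance, annual_rate, months_left,
                            holiday_months=3, extension_months=6):
    """Capitalise the interest accrued over a payment holiday, then
    spread the new balance over an extended remaining term.
    Holiday and extension lengths are illustrative assumptions."""
    r = annual_rate / 12
    new_balance = balance * (1 + r) ** holiday_months  # unpaid interest capitalised
    new_term = months_left + extension_months          # term extended
    return instalment(new_balance, annual_rate, new_term)
```

Extending the term spreads the capitalised interest over more months, so the restructured instalment can come in below the original one even though the total interest paid rises.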


An effective knowledge management strategy is mandatory for any organisation wanting to succeed in today’s knowledge-based economy.  Such strategies should cover the creation of knowledge as well as the systems that allow for it to be stored, shared and used as a catalyst for the creation of further knowledge.


That said, regardless of the form that shared knowledge portals take, they remain stubbornly under-used and under-stocked. This is likely to remain the case for as long as they require staff to shift their effort away from their immediate activities to find, read and interpret other people’s work. A knowledge management strategy built around tools that sit outside the day-to-day activities of an organisation’s staff is unlikely to add real value.


But, by developing a culture of test-and-learn analytics, it is possible to entrench knowledge within an organisation’s “DNA”. In so doing, that knowledge becomes easier to store, easier to share and easier to access.



Test-and-learn is a simple but often misapplied concept. When it is ingrained within the culture of an organisation, however, it can deliver excellent financial and knowledge management results. Based on the scientific method, it first came to prominence as a business concept in the late eighties when American credit card issuers used it to dramatically grow their industry.


Test-and-learn analysis is an evidence-based technique whose starting point is always the hypothesis that a proposed new strategy will be more profitable than the incumbent strategy.  This expectation is usually based on an analysis of existing data stored in in-house databases but could also include experience “stored” within the minds of staff members and information acquired from third-parties.


But business decisions should not be made on hypotheses alone, and so these hypotheses must first be tested. It is from the results of this testing that learnings are gained. The test must compare the proposed strategy to the incumbent one in a controlled environment free from extraneous influences. The results of the test are monitored and then subjected to statistical analysis to identify the more profitable of the two strategies. Thus identified, the ‘winning’ strategy is rolled out across the board and becomes the incumbent strategy against which any future hypotheses are tested.


Consider a marketing analyst working for a retail bank who must choose between two potential marketing campaigns designed to generate applications for credit cards. Traditionally, the bank’s new customers were enticed with the offer of a year’s free membership of its loyalty programme.

Our analyst, however, hypothesises that more customers would apply for a card if the bank offered to waive the card fees for the first year.  Wanting to make the best use of her limited budget, she must first design a test to prove her hypothesis.  A portion of potential customers will each be randomly offered one of the two options.  After two months of careful analysis, she will be able to prove whether customers respond better to her “no fees” offer.  This real evidence will justify her using the bulk of her marketing budget to advertise her “no fees” offer.  The successful strategy then also becomes the standard against which all future marketing strategies are to be compared.
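The ‘careful analysis’ at the end of such a test usually amounts to a significance test on the two response rates. A two-proportion z-test can be sketched as below; the response numbers are hypothetical:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is offer B's response rate significantly
    higher than offer A's? Returns the z statistic."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)              # pooled response rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: 5,000 prospects saw each offer.
z = two_proportion_z(conv_a=150, n_a=5000, conv_b=210, n_b=5000)
significant = z > 1.645  # one-sided test at the 5% level
```

If `z` clears the chosen critical value, the analyst has real evidence that the “no fees” offer outperforms the loyalty-programme offer, rather than a difference that could be explained by chance.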


The test-and-learn approach therefore creates a circular pattern as it moves an unproven hypothesis from theory to established fact against which, in time, new hypotheses will be tested. This circular nature is what makes the test-and-learn approach a good knowledge management tool.


Creating Explicit and Collective Knowledge

The example began with tacit individual knowledge in the form of a marketing analyst’s hypothesis that a “no fees” offer would improve response rates.  By testing this hypothesis in a scientific manner, she was able to turn that tacit knowledge into explicit knowledge.  At this stage though, that explicit knowledge was only held by the analyst running the test – i.e. individual explicit knowledge.  However, as soon as the learnings from the test were used to change the marketing strategy, everyone who came into contact with the new strategy would also become explicitly aware of the new knowledge.


The test-and-learn process can therefore be simplified into three stages – “developing the hypothesis”, “testing the hypothesis” and “taking action”.  Each of these stages, in turn, represents a stage in the knowledge management process – “tacit individual knowledge” becomes “explicit individual knowledge” and then “explicit collective knowledge”. 


So, once the culture of test-and-learn analytics is fully embedded within an organisation, any member of staff need only look at test outcomes to have de facto access to all the relevant knowledge of that organisation.  Because the “free loyalty programme” campaign was replaced by the “no fees” campaign anyone with an interest in the organisation’s marketing strategies will know that the “no fees” offer is better.  Every time this knowledge is updated, for example if it is subsequently found that response rates do not drop if it is only the first six months’ worth of fees that are waived, those same stakeholders will once again have access to the new knowledge when the strategy is changed again.


Test-and-learn analytics is a conceptually simple technique that is often misunderstood in practice.  In fact, many organisations fail in their test-and-learn endeavours before they even start – by incorrectly defining the concept. 


Simply rolling out a new strategy to all customers and monitoring what happens is not “test-and-learn”.  That’s trial-and-error and it is an expensive and reckless business practice that provides little detailed information and no mitigation for the risk of failure. 


Test-and-learn analytics, on the other hand, limits the costs and risks of innovation by testing all new strategies against incumbent strategies in a controlled environment before committing to a large-scale roll-out. When properly implemented, test-and-learn analytics can change the way that an organisation – or even a whole industry – does business, and it can deliver outstanding results. Such a successful implementation is built on three pillars: the right culture, the right people and the right tools.



Once the leadership of an organisation has decided to adopt test-and-learn analytics, the most common mistake they make is to see it as a technical issue to be dealt with by the IT department. While it is true that test-and-learn analytics requires certain technical enablers – which will be discussed later – the single most important requirement for a successful implementation is the right organisational culture.


The successful adoption of test-and-learn requires the organisation’s culture to apply two distinct forces. Firstly, senior business decision-makers must demand the results of test-and-learn analysis as an input into all their important decisions. When alternative strategies are being raised at board level, the directors must demand comparative test results before giving their go-ahead. When budgetary approval is being sought for a major initiative, the finance team must demand the results of a pilot test before signing off. Secondly, test-and-learn thrives in a flat structure where ideas are evaluated on their own merits, independent of the relative seniority of the person putting them forward. Business expertise should of course lead to more astute assumptions at the start of the process but, once the process is complete and the results of the test have been analysed, those results should be allowed to speak for themselves. This allows alternative proposals to compete on a level playing field which, in turn, means that the decisions made are more likely to be the correct ones.


This move towards a meritocracy is not always a move that is easily made within large organisations but, unless senior management make a concerted effort to flatten their structures, the results of tests will become meaningless and the benefits of the technique will evaporate.



The next consideration for a test-and-learn implementation must be the people. While traditional analysts could rely on technical know-how alone, test-and-learn analysts need a broader combination of technical and business skills – as, indeed, do all other stakeholders in the process. In other words, it is no longer sufficient just to answer the questions posed; analysts need to help their organisations ask and answer the right questions.


It is therefore important to recruit staff with this mix of skills – ideally by taking candidates through a series of numerical case studies dealing with realistic business problems.  These case studies should test a candidate’s ability to identify the profit levers within a business model, construct the equations that describe the interaction of those levers and manipulate them to obtain the relevant information. This ability to translate business problems into solvable equations is far more important to a test-and-learn analyst than a deep understanding of statistical tools as the latter can be taught far more easily.



The marketplace offers a range of specialised systems that can enable and accelerate large test-and-learn roll-outs, but they are all based on the same simple system requirements.


Good quality data is the foundation on which test-and-learn analytics is built, so the most important component of the system is an accurate and easily accessed database. The database should be large enough to store all relevant data for as long as it is needed, and designed in a way that facilitates quick and reliable retrieval. Databases only hold historical data, but to create, implement and measure a test that data must first be manipulated; some form of analytical software should therefore sit on top of the database. Although Microsoft Excel will often suffice in smaller and/or newer implementations, in time it will likely become necessary to upgrade to the more sophisticated offerings from Experian, SAS, etc.


Once a test has been designed it must be run in real time as part of the organisation’s day-to-day business operations. This requires some form of real-time delivery engine. The nature of an organisation’s business and the relative level of sophistication it requires will ultimately dictate the level of investment needed in such a delivery engine. However, with careful manipulation of available systems it is often possible to run complex tests using rudimentary tools.


Finally, any insights gained from the data and proven in the test must be communicated to the relevant stakeholders. Although other products do exist, widely available programmes like Microsoft’s PowerPoint and Visio are usually sufficient – in the right hands, of course – to convey the message in a convenient and effective manner.


Test-and-learn analytics does not contradict or compete with traditional data analytics; it builds upon it. All test-and-learn programmes start with a traditional analysis of historical data. However, rather than just using this information to make an informed hypothesis about the future, the approach constructs an environment in which that hypothesis can be scientifically verified before being widely accepted or rejected.


The first benefit of this approach is that it creates a true measure of the performance differential between two or more strategies.  In an environment where only one strategy at a time exists, it is impossible to know to what degree any observed difference in performance is due to the strategy (internal and controllable) and to what degree it is due to the environment (external and uncontrollable).


Consider an internet florist who sells roses at a 10% discount in the month leading up to Valentine’s Day.  If he experiences an increase in sales, how much of that is due to the lower price and how much is due to the general increase in demand for roses at that time of year?  Because he only has one strategy each year it is impossible to determine.  This would be true even if he offered a 10% discount in one year and a 15% discount in the next.  It would once again be impossible to correctly attribute any increase in sales in the second year between the change in the discount and any changes in the larger economy.  If however, in the first week of the campaign, he were to randomly offer half of the visitors to his website a 10% discount and the other half a 15% – and this is easy to do on the internet – he could calculate the degree to which an increase in discount relates to an increase in sales.  This is test-and-learn in action.
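The florist’s split test can be sketched in a few lines of code.  The conversion rates below are invented purely for illustration; the point is the random assignment of each visitor to one of the two offers, which is what makes the comparison fair.

```python
import random

random.seed(42)

# Hypothetical conversion rates -- these figures are assumptions for
# illustration only, not real data.
RATE_AT_10_PCT = 0.04   # assumed conversion rate at a 10% discount
RATE_AT_15_PCT = 0.05   # assumed conversion rate at a 15% discount

def visitor_buys(discount):
    """Simulate one visitor's purchase decision at the given discount."""
    rate = RATE_AT_10_PCT if discount == 10 else RATE_AT_15_PCT
    return random.random() < rate

# Randomly assign each visitor to one of the two discount offers and
# record sales per group.
sales = {10: 0, 15: 0}
visitors = {10: 0, 15: 0}
for _ in range(10_000):
    discount = random.choice([10, 15])
    visitors[discount] += 1
    if visitor_buys(discount):
        sales[discount] += 1

for d in (10, 15):
    print(f"{d}% discount: {sales[d] / visitors[d]:.3%} conversion")
```

Because both groups trade through the same week, any seasonal lift in demand affects them equally and cancels out of the comparison.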


The second major benefit of the test-and-learn approach is that it controls the costs and risks that arise whenever a new strategy is implemented.  Let’s assume that our florist only experiences a very small increase in sales when the larger discount is offered.  If he had offered the 15% discount for the whole month he would have lost money because the larger discount would not have been sufficiently compensated for by increased sales.  However, by testing the two offers for a week he has only ‘lost’ money on half of a week’s sales and is able to run the more profitable strategy for the remaining three weeks of the campaign.  Innovation has therefore become cheaper and safer so more ideas can be tested which increases the odds of finding a new competitor-beating strategy.  It is possible to see how our florist, because he can now measure new strategies without risking his whole month’s sales, could try to optimise sales by changing the wording of his advertisements, by using new photos, etc.
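The arithmetic behind this cost containment can be sketched with assumed figures for price, cost and weekly sales.  Testing the weaker offer on half the traffic for one week, then running the better offer for the rest of the month, preserves most of the month’s profit compared with rolling the weaker offer out untested.

```python
# Illustrative figures only: price, cost and weekly sales are assumptions.
price, cost = 50.0, 30.0
weekly_sales_10 = 1000   # assumed weekly sales at a 10% discount
weekly_sales_15 = 1020   # only a small uplift at 15% (assumed)

def weekly_profit(discount_pct, units):
    """Profit for one week's sales at the given discount."""
    return units * (price * (1 - discount_pct / 100) - cost)

p10 = weekly_profit(10, weekly_sales_10)
p15 = weekly_profit(15, weekly_sales_15)

# Option A: roll the 15% offer out untested for the whole four-week month.
full_rollout = 4 * p15

# Option B: split-test for one week (half the traffic on each offer),
# then run the better 10% offer for the remaining three weeks.
tested = 0.5 * p10 + 0.5 * p15 + 3 * p10

print(f"10% weekly profit: {p10:,.0f}")
print(f"15% weekly profit: {p15:,.0f}")
print(f"Month if 15% rolled out untested: {full_rollout:,.0f}")
print(f"Month with a one-week split test: {tested:,.0f}")
```

Under these assumed numbers the untested rollout gives up a full month of margin, while the split test ‘loses’ only half a week’s worth.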


Thirdly, when the test-and-learn approach is fully embedded within an organisation’s DNA it naturally leads to continuous improvement.  Test-and-learn analytics is a circular process without end.  An idea, or hypothesis, is first tested against the evidence from a scientific experiment and then either implemented or rejected.  But, instead of stopping there, the process regularly repeats itself.  As soon as one idea has been implemented it must be tested against the next idea and then the next idea after that.  At the end of each cycle the strategy is either improved or an inferior alternative is cheaply discarded – both of which strengthen the organisation as a whole.


But unless test-and-learn is fully understood at all levels of an organisation it can lead to dangerous misinformation.  Therefore, the mechanics of every test must be understood before actions are taken based upon the results thereof.  The three most important questions to ask of any piece of analysis are: Is the business model fully understood? Are the test and control groups statistically identical? Are the results proven or just implied?


Most organisations consist of several interconnected profit levers – some of which compete with one another and others of which enable one another.  The profit of such an organisation is maximised when all of its profit levers act together in the optimal way.  Unless a test has considered the implication of a proposed strategy on all the relevant profit levers, it might improve one part of the business at the cost of another.  For example, a simpler loan application process may increase response rates but might also increase risk, and so both of these measures need to be included in the test.
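A rough sketch of this loan-application example, with invented response and loss rates, shows how a strategy that wins on one lever can still lose overall once both levers are measured in the same test:

```python
# Hypothetical figures: all rates and amounts below are assumptions made
# up to illustrate why both profit levers must be measured together.
applicants = 10_000
profit_per_good_loan = 200.0
loss_per_bad_loan = 1_500.0

strategies = {
    # name: (response rate, bad rate) -- the simpler form lifts
    # response but also lifts risk
    "current application form": (0.050, 0.030),
    "simplified application form": (0.065, 0.060),
}

results = {}
for name, (response_rate, bad_rate) in strategies.items():
    booked = applicants * response_rate
    # Net profit combines both levers: margin on good loans less
    # losses on bad ones.
    profit = booked * ((1 - bad_rate) * profit_per_good_loan
                       - bad_rate * loss_per_bad_loan)
    results[name] = profit
    print(f"{name}: {booked:.0f} loans booked, net profit {profit:,.0f}")
```

Measured on response alone the simplified form looks like the winner; measured on net profit, with these assumed rates, it is the loser.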


Statistically identical test and control groups are created by randomly assigning candidates to one or the other.  If this assignment is not done correctly, certain underlying trends can unknowingly be built into the test.  For example, although you can use the last digit of a credit card number to randomly assign groups, using the whole number would group similar candidates together – those who bank together, who have similar incomes, etc. 
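A minimal sketch of this assignment rule, run against dummy card numbers: the last digit varies essentially at random across customers, so splitting on it approximates random assignment, whereas the leading digits identify the issuer and product and would group similar customers together.

```python
import random

def assign_group(card_number: str) -> str:
    """Assign a candidate to test or control using the LAST digit only.
    Splitting on the whole number would cluster customers who share an
    issuer or product, building an underlying trend into the test."""
    return "test" if int(card_number[-1]) % 2 == 0 else "control"

# Generate dummy 16-digit card numbers (random digits, for illustration
# only) and check that the split comes out roughly even.
random.seed(0)
cards = ["".join(random.choice("0123456789") for _ in range(16))
         for _ in range(10_000)]

counts = {"test": 0, "control": 0}
for c in cards:
    counts[assign_group(c)] += 1
print(counts)
```

In practice any attribute used for splitting should be checked this way: if the group sizes or profiles come out skewed, the assignment rule is carrying information it should not.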


Much has been made in this article of the fact that test-and-learn analytics leads to scientifically proven results.  This is, however, only true for the aspects specifically measured in the test.  Broader implications drawn from the test results are only as accurate as the assumptions through which they were derived.  For example, a test might prove that, one month after selling a credit card to a new group of customers, those customers are no riskier than traditional customers.  It might then be assumed that this new group of customers will still be no riskier in a year’s time.  Although this assumption appears reasonable, it has not been specifically proven and so, in time, it might be shown to be inaccurate.  Long-term strategies based on short-term tests must therefore be undertaken with caution.

