
Posts Tagged ‘Test-and-Learn’

First things first, I am by no means a scorecard technician. I do not know how to build a scorecard myself, though I have a fair idea of how they are built. As the title suggests, this article takes a simplistic view of the subject: I will delve into the underlying mathematics only at the highest level and only where necessary to explain another point. This article treats scorecards as just another tool in the credit risk process, albeit an important one that enables most of the other strategies discussed on this blog. I have asked a colleague to write a more specialised article covering the technical aspects and will post it as soon as it is available.

 

Scorecards aim to replace subjective human judgement with objective and statistically valid measures, replacing inconsistent, anecdote-based decisions with consistent, evidence-based ones. What they do is essentially no different from what a credit assessor would do; they just do it in a more objective and repeatable way. Although this difference may seem small, it enables a large array of new and profitable strategies.

So what is a scorecard?

A scorecard is a means of assigning importance to pieces of data so that a final decision can be made regarding the underlying account's suitability for a particular strategy. It does this by separating the data into its individual characteristics and then assigning a score to each characteristic based on its value and the average risk represented by that value.

For example, an application for a new loan might be separated into age, income, length of relationship with the bank, credit bureau score, etc. Each possible value of those characteristics is then assigned a score based on the degree to which it impacts risk. In this example, ages between 19 and 24 might be given a score of -100, ages between 25 and 30 a score of -75, and so on until ages 50 and upwards are given a score of +10. In this scenario young applicants are 'punished' while older customers benefit marginally from their age, implying that risk has been shown to be inversely related to age. The diagram below shows an extract of a possible scorecard:

The scores for each of these characteristics are then added to reach a final score. The final score produced by the scorecard is attached to a risk measure, usually something like the probability of an account going 90 days into arrears within the next 12 months. Reviewing this score-to-risk relationship allows a risk manager to set the point at which they will decline applications (the cut-off) and to understand the relative risk of each customer segment on the book. The diagram below shows how this score-to-risk relationship can be used to set a cut-off.
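Alongside the diagram, here is a minimal Python sketch of the same calculation; every bin, point value, bad rate and the cut-off threshold below is invented for illustration and not taken from any real scorecard.

```python
# Illustrative scorecard: each characteristic maps value ranges to points.
SCORECARD = {
    "age": [((19, 24), -100), ((25, 30), -75), ((31, 49), -25), ((50, 120), 10)],
    "years_with_bank": [((0, 1), -50), ((2, 5), 0), ((6, 99), 25)],
}
BASE_SCORE = 600  # constant so final scores land in a familiar range

def score_application(applicant: dict) -> int:
    """Add up the points earned on each characteristic."""
    total = BASE_SCORE
    for characteristic, bins in SCORECARD.items():
        value = applicant[characteristic]
        for (low, high), points in bins:
            if low <= value <= high:
                total += points
                break
    return total

# Illustrative score-to-risk relationship: probability of going 90+ days into
# arrears within 12 months for accounts scoring at or above each score band.
SCORE_TO_BAD_RATE = [(450, 0.20), (500, 0.12), (550, 0.07), (600, 0.03), (650, 0.01)]

def decide(score: int, max_acceptable_bad_rate: float = 0.05) -> str:
    bad_rate = next(
        (rate for band, rate in reversed(SCORE_TO_BAD_RATE) if score >= band),
        SCORE_TO_BAD_RATE[0][1],  # scores below the lowest band get the worst rate
    )
    return "accept" if bad_rate <= max_acceptable_bad_rate else "decline"

applicant = {"age": 27, "years_with_bank": 4}
print(score_application(applicant), decide(score_application(applicant)))
```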

How is a scorecard built?

Essentially, the scorecard builder wants to identify which characteristics at one point in time are predictive of a given outcome at or before some future point in time. To do this, historic data must be structured so that one period represents the 'present state' and the subsequent periods represent the 'future state'. In other words, if two years of data is available for analysis (the current month can be called Month 0 and the oldest month Month -24), then the most distant six months (Month -24 to Month -18) are used to represent the 'current state' or, more correctly, the observation period, while the subsequent months (Month -17 to Month 0) represent the known future of those first six months and are called the outcome period. The type of data used in each of these periods differs to reflect this: application data (applicant age, applicant income, applicant bureau score, loan size requested, etc.) is important in the observation period, while performance data (current balance, current days in arrears, etc.) is important in the outcome period.
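As a rough sketch of this structuring, assuming the data sits in a pandas DataFrame of monthly account snapshots; the column names and values below are invented for illustration.

```python
import pandas as pd

# Hypothetical monthly snapshots: one row per account per month, where
# month_index runs from -24 (oldest) to 0 (current). In practice this would
# be loaded from the lender's own systems.
snapshots = pd.DataFrame({
    "account_id":      [1, 1, 1, 2, 2, 2],
    "month_index":     [-24, -20, -3, -24, -19, -2],
    "age":             [23, 23, 24, 52, 52, 53],
    "days_in_arrears": [0, 0, 120, 0, 0, 10],
})

# Observation window: the most distant six months supply application-style data.
observation = snapshots[snapshots["month_index"].between(-24, -18)]

# Outcome window: the subsequent months supply performance data for those accounts.
outcome = snapshots[snapshots["month_index"].between(-17, 0)]
outcome = outcome[outcome["account_id"].isin(observation["account_id"])]
```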

With this simple step completed, the accounts in the observation period must be classified based on their performance during the outcome period. To start this process, a 'bad definition' and a 'good definition' must first be agreed upon. These are usually something like: 'to be considered bad, an account must have gone past 90 days in delinquency at least once during the 18-month outcome period' and 'to be considered good, an account must never have gone past 30 days in delinquency during the same period'. Accounts that meet neither definition are classified as 'indeterminate'.
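Continuing the same sketch, those good and bad definitions might be applied like this:

```python
# Worst delinquency reached by each account during the outcome window.
worst_arrears = outcome.groupby("account_id")["days_in_arrears"].max()

def classify(days: int) -> str:
    if days > 90:
        return "bad"            # went past 90 days at least once
    if days <= 30:
        return "good"           # never went past 30 days
    return "indeterminate"      # meets neither definition

labels = worst_arrears.apply(classify)
```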

Thus separated, the unique characteristics of each group can be identified. The data that was available at the time of application for every 'good' and 'bad' account is statistically tested, and those characteristics with largely similar values within one group but largely varying values across groups are valuable indicators of risk and should be considered for the scorecard. For example, if younger customers were shown to have a higher tendency to go 'bad' than older customers, then age can be said to be predictive of risk. If on average 5% of all accounts go bad, but a full 20% of customers aged between 19 and 25 go bad while only 2% of customers aged over 50 do, then age can be said to be a strong predictor of risk. There are a number of statistical tools that will identify these key characteristics, and the degree to which they influence risk, more accurately than this, but they won't be covered here.
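A crude way to see this on the sketch data is to compare bad rates across bands of a candidate characteristic; real builds would use tools such as information value or chi-square tests, but the idea is the same. The age bands here are again illustrative.

```python
# One observation-period snapshot per account, joined to its good/bad label.
apps = observation.sort_values("month_index").drop_duplicates("account_id")
df = apps.merge(labels.rename("label").reset_index(), on="account_id")
df = df[df["label"] != "indeterminate"]

df["age_band"] = pd.cut(df["age"], bins=[18, 25, 35, 50, 120])
overall_bad_rate = (df["label"] == "bad").mean()
bad_rate_by_band = df.groupby("age_band", observed=True)["label"].apply(
    lambda s: (s == "bad").mean()
)
print(overall_bad_rate)
print(bad_rate_by_band)   # large differences between bands suggest a predictive characteristic
```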

Once each characteristic that is predictive of risk has been identified, along with its relative importance, some cleaning-up of the model is needed to ensure that no characteristics are overly correlated; that is, that no two characteristics are in effect showing the same thing. Where this is the case, only the best of the related characteristics is kept while the other is discarded to prevent, for want of a better term, double-counting. Many characteristics are correlated in some way – for example, the older you are the more likely you are to be married – but this is fine so long as both characteristics add some new information in their own right, as is usually the case with age and marital status: an older, married applicant is less risky than a younger, married applicant, just as a married, older applicant is less risky than a single, older applicant. However, there are cases where two characteristics move so closely together that one adds no new information and should therefore not be included.
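A standalone sketch of the idea, with invented characteristics in which two columns deliberately measure almost the same thing; in practice the weaker predictor of each flagged pair would be the one dropped.

```python
import pandas as pd

# Hypothetical application characteristics; months_with_bank and
# years_since_first_account are near-duplicates of each other.
chars = pd.DataFrame({
    "age":                       [23, 52, 31, 45, 61],
    "income":                    [18000, 29000, 52000, 35000, 41000],
    "months_with_bank":          [6, 140, 30, 84, 12],
    "years_since_first_account": [0.5, 11.7, 2.5, 7.0, 1.0],
})

corr = chars.corr().abs()
to_review = [
    (a, b, round(corr.loc[a, b], 2))
    for i, a in enumerate(corr.columns)
    for b in corr.columns[i + 1:]
    if corr.loc[a, b] > 0.9            # the threshold is a judgement call
]
print(to_review)   # flags the pair that is effectively showing the same thing
```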

So, once the final characteristics and their relative weightings have been selected, the basic scorecard is effectively in place. The final step is to make the outputs of the scorecard usable in the context of the business. This usually involves summarising the scores into a few score bands and may also include the addition of a constant – or some other means of manipulating the scores – so that the new scores match existing or previous models.
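As an illustrative sketch of that final step, continuing the earlier Python examples; the scaling constants and band edges are invented.

```python
raw_scores = pd.Series([512, 538, 571, 604, 648])   # scores from the new model

# Shift and scale so the new scores line up with the old model's range, then
# summarise them into a handful of score bands for business use.
aligned = raw_scores * 0.8 + 150
bands = pd.cut(aligned, bins=[0, 550, 580, 610, 640, 999],
               labels=["E", "D", "C", "B", "A"])
print(pd.DataFrame({"raw": raw_scores, "aligned": aligned, "band": bands}))
```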

 

How do scorecards benefit an organisation?

Scorecards benefit organisations in two major ways: by describing risk in very fine detail they allow lenders to move beyond simple yes/no decisions and to implement a wide range of segmented strategies; and by formalising the lending decision they provide lenders with consistency and measurability.

One of the major weaknesses of a manual decisioning system is that it seldom does more than identify the applications which should be declined, leaving those that remain to be accepted and thereafter treated as the same. This makes it very difficult to implement risk-segmented strategies. A scorecard, however, prioritises all accounts in order of risk and then declines those deemed too risky. This means that all accepted accounts can still be segmented by risk, and this can be used as a basis for risk-based pricing, risk-based limit setting, etc.

The second major benefit comes from the standardisation of decisions. In a manual system the credit policy may well be centrally conceived, but the quality of its implementation will depend on the branch or staff member actually processing the application. With a scorecard in place this is no longer the case, and the roll-out of a scorecard is almost always accompanied by a reduction in bad rates.

Over and above these risk benefits, the roll-out of a scorecard is also almost always accompanied by an increase in acceptance rates. This is because manual reviewers tend to be more conservative than they need to be in cases that vary in some way from the standard. The nature of a single credit policy is such that, to qualify for a loan, a customer must exceed the minimum requirements for every policy rule. For example, to get a loan the customer must be above the minimum age (say 28), must have been with the bank for more than the minimum period (say 6 months) and must have no adverse remarks on the credit bureau. A client of 26 with a five-year history with the bank and a clean credit report would be declined. With a scorecard in place, though, the relative importance of exceeding one criterion can be weighed against the relative importance of missing another and a more accurate decision can be made – almost always allowing more customers in.
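A tiny sketch of the difference, using the illustrative thresholds from this paragraph: a rule-based policy has no way to trade one strength off against another weakness, while a scorecard (such as the earlier sketch) can.

```python
def policy_decision(applicant: dict) -> str:
    """Rule-based policy: every minimum must be met, with no trade-offs allowed."""
    passes = (
        applicant["age"] >= 28
        and applicant["months_with_bank"] >= 6
        and not applicant["adverse_bureau_remarks"]
    )
    return "accept" if passes else "decline"

applicant = {"age": 26, "months_with_bank": 60, "adverse_bureau_remarks": False}
print(policy_decision(applicant))   # "decline", purely because of the age rule
# A scorecard would instead weigh the small age penalty against the points
# earned for a five-year relationship and a clean bureau record.
```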

 

Implementing scorecards

There are three levels of scorecard sophistication and, as with everything else in business, the best choice for any situation will likely involve a compromise between accuracy and cost.

The first option is to create an expert model. This is a manual approximation of a scorecard based on the experience of several experts. Ideally this exercise would be supported by some form of scenario-planning tool where the results of various adjustments could be seen for a series of dummy applications – or genuine historic applications if these exist – until the results meet the expectations of the 'experts'. This method is better than manual decisioning since it leads to a system that looks at each customer in their entirety and because it enforces a standardised outcome. That said, since it is built upon relatively subjective judgements, it should be replaced with a statistically built scorecard as soon as enough data is available to do so.

An alternative to the expert model is a generic scorecard. These are scorecards which have been built statistically, but using a pool of similar though not customer-specific data. These scorecards are more accurate than expert models, so long as the data on which they were built reasonably resembles the situation in which they are to be employed. A bureau-level scorecard is probably the purest example of such a scorecard, though generic scorecards exist for a range of different products and for each stage of the credit life-cycle.

Ideally, generic scorecards should first be fine-tuned prior to their roll-out to compensate for any customer-specific quirks that may exist. During a fine-tuning, actual data is run through the scorecard and the results are used to make small adjustments to the weightings given to each characteristic, while the structure of the scorecard itself is left unchanged. For example, assume the original scorecard assigned the following weightings: -100 for the age group 19 to 24; -75 for the age group 25 to 30; -50 for the age group 31 to 40; and 0 for the age group 41 upwards. This could be implemented as it is, but if there is enough data to do a fine-tune, it might reveal that in this particular case the weightings should actually be: -120 for the age group 19 to 24; -100 for the age group 25 to 30; -50 for the age group 31 to 40; and 10 for the age group 41 upwards. The scorecard structure, as you can see, does not change.
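Expressed as a sketch, the fine-tune changes only the point values and never the bins; the before and after values are the illustrative ones from the paragraph above (the upper bound of 120 on the last band is an assumption).

```python
# Generic weightings shipped with the scorecard (the bin structure is fixed).
generic_age_points = {(19, 24): -100, (25, 30): -75, (31, 40): -50, (41, 120): 0}

# After running the lender's own data through the scorecard, the points are
# nudged to reflect local experience; the age bands themselves do not change.
fine_tuned_age_points = {(19, 24): -120, (25, 30): -100, (31, 40): -50, (41, 120): 10}

assert generic_age_points.keys() == fine_tuned_age_points.keys()  # same structure
```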

In a situation where neither client-specific data nor industry-level data exists, an expert model may be best. Where there is no client-specific data but there is industry-level data, it is better to use a generic scorecard. Where there is both some client-specific data and some industry-level data, a fine-tuned generic scorecard will produce the best results.

The most accurate results will always come, however, from a bespoke scorecard. That is a scorecard built from scratch using the client’s own data. This process requires significant levels of good quality data and access to advanced analytical skills and tools but the benefits of a good scorecard will be felt throughout the organisation.



The Space Pen

We've surely all heard the old story of how, during the space race, America invested millions of dollars to develop a pen that could write in the zero-gravity conditions of space while the Russians achieved the same goal using the humble pencil. Over the years much of the story has been exaggerated for the sake of its telling, but its key lesson has remained the same: where there are two ways of achieving a goal, the cheapest of these methods is best.

In this story the goal was to allow astronauts to write without gravity driving the flow of ink through a traditional pen. It could have been achieved using an expensive pen with pressurised ink or, so the story implies, just as easily using a cheap pencil.

Learnings for Debt Management

If we were to apply this learning to our debt management function, doing so would surely back up the case for implementing a broadly inclusive self-cure strategy: that is, a strategy that allows debtors a period of time in which to pro-actively repay their outstanding debt before the organisation invests time and money in contacting them to make a direct request for that payment. Since the value of a collections recovery is the same regardless of how it is achieved, it makes sense that the method used to generate that recovery should be the cheapest effective method available. And, likewise, it makes sense that the cheapest method would be the one in which no costs are incurred.

However, by delving deeper into the history of the space pen we find that some caution is required before making that logical leap.

You see, the real story behind the space pen does not end at the same point that the anecdote does.  In fact, there are two pertinent points that are seldom mentioned.  Firstly, NASA had been using pencils prior to the development of the space pen and had decided they needed to be replaced.  Secondly, after the introduction of space pens at NASA, the Russians also started to use them.

Why would both teams have replaced the cheaper solution with a more expensive one if both did the same job? Well, it turns out that they had identified several indirect costs of pencil use: broken pieces of pencil lead can pose a risk in a zero-gravity environment, and the wood is flammable.

So the key lesson of the story remains true: the cheapest effective method to solve a given problem is the best method. However, the measurement of 'cheapest' must include all direct and indirect costs. This is as true for a debt management function as it is for the space programme.

When designing a comprehensive self-cure strategy, therefore, a lender must understand both its expected benefits and its direct and indirect costs before deciding who to include and for how long.

Estimating the Expected Benefits of a Self-Cure Strategy

The expected benefit of a self-cure strategy is simply the expected number of payment agreements achieved as a percentage of all customers in the strategy – or the probability of payment.

A standard risk-based collections strategy will segment customers into a number of risk groups, each of which can then be treated differently. As a natural product of this, each of these groups will have a known probability of payment based on their observed behaviour over time. But it is important to take care when using these numbers in relation to a proposed self-cure strategy.

The probabilities of payment associated with the existing risk groups inherently assume that each account will proceed through the current debt management operational strategies as before. By making that assumption invalid, you make the numbers invalid. The expected benefit of a self-cure strategy can therefore not be assumed to be equal to the currently observed probability of payment; the actual probabilities of payment will likely be significantly lower.

Therefore, early iterations of a self-cure strategy should include a number of test-and-learn experiments designed to determine the probability of payment under a self-cure strategy.  A good starting point is to allow a test group a very short self-cure period – perhaps just two or three days.  In many organisations this amounts to little more than de-prioritising these accounts so that the time taken to work through the rest of the accounts can serve as the self-cure period.  Once the basic risk assumptions have been tested, the self-cure period can be extended – though usually to not longer than fifteen days.
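A minimal sketch of how such a test group might be carved out, assuming accounts can simply be held back from the contact queue for a few days; the split and window sizes are illustrative.

```python
import random

SELF_CURE_DAYS = 3          # short initial self-cure window
TEST_FRACTION = 0.10        # share of new arrears accounts held back from contact

def assign_strategy(account_id: int) -> str:
    """Deterministic random split so an account always lands in the same group."""
    if random.Random(account_id).random() < TEST_FRACTION:
        return "self_cure_test"     # de-prioritised for SELF_CURE_DAYS before contact
    return "business_as_usual"      # worked immediately under the incumbent strategy

print([assign_strategy(i) for i in (101, 102, 103)])
# After the window, compare payment rates between the two groups to estimate
# the true probability of payment under a self-cure strategy.
```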

It is also important to note that the probability of payment must not be measured as a single, static figure.  The way it will be applied in the eventual self-cure model means that it is important to measure how the probability of payment changes over time.

Some customers in the early stages of debt management will be ‘lazy payers’, that is customers who have the will and means to meet their obligations but tend to pay late on a regular basis; their payments will likely come in the first few days after the due date.  Other customers may have been without access to their normal banking channels for whatever reason; their payments may be more widely spread across the days after due date.  Regardless of the exact reasons, in most portfolios the majority of self-cure payments will come in the first few days after due date and thereafter at an ever-slowing rate.
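Measured this way, the probability of payment becomes a curve rather than a single figure. A small sketch with invented counts:

```python
# Hypothetical self-cure payments received on each day after due date,
# out of 1,000 accounts left uncontacted.
cures_by_day = {1: 120, 2: 85, 3: 55, 4: 30, 5: 18, 6: 10, 7: 6}
accounts = 1000

cumulative = 0
for day in sorted(cures_by_day):
    cumulative += cures_by_day[day]
    print(f"day {day}: incremental {cures_by_day[day] / accounts:.1%}, "
          f"cumulative {cumulative / accounts:.1%}")
```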

Estimating the Costs of a Self-Cure Strategy

If there were direct costs involved in a self-cure model, there would be a break-even point where the dropping effectiveness and the ongoing costs of the strategy would make it inefficient to continue.  However, because a self-cure strategy has no such direct costs the problem needs to be looked at differently.

But, as I mentioned earlier, a valuable lesson can be learned by following the story of the space pen all the way to its real conclusion: the total cost of a solution is never its direct costs alone but also includes all of its indirect costs.  In the space race, the pencil’s low direct cost was nullified by its high indirect risk costs.  In debt management, a self-cure strategy’s low direct cost may also be nullified by its high indirect risk costs.

The indirect risk costs of a self-cure strategy stem from the fact that the probability of making a recovery decreases as the time to make a customer contact increases. Customers who are in arrears with one lender are likely to have other pressing financial obligations too. While one lender follows a self-cure strategy and holds off on a direct request for repayment, their debtor may re-prioritise their funds and pay another, more aggressive, lender instead. So, while waiting for a free self-cure payment to come in, a lender is also reducing their chances of making a recovery through the next best method, should it become clear at some point in the future that no such payment is forthcoming.

The cost of a self-cure strategy is therefore based on the rate at which the probability of receiving a payment from the next best strategy decreases. For every day that a self-cure strategy is in force, the next best strategy must start one day later, and this is the key cost to bear in mind. Is one week of potential cheap recoveries from the self-cure model worth one week of opportunities lost for more expensive but more certain recoveries in the phone-based collections strategy?

Building a Self-Cure Strategy

A self-cure strategy should be applied to all accounts for as long as they remain sufficiently likely to make a payment to compensate for the indirect cost of the strategy: the opportunity forgone to drive payments using the next best strategy.

As stated, the benefits of the strategy are equal to the probability of payment over a period of time and the costs are equal to the decrease in the probability of payment from the next best strategy over that same period.

If a customer is as likely to make a payment when they are called on day one as they are when called on day five, then there is no cost in a self-cure strategy for those first five days.  Therefore, no call should be made until day six regardless of how small the probability of receiving a payment from the self-cure strategy actually is.  This is because, with no costs, any recovery made is value generating and any recovery not made is value neutral. 

However, if after the first five days a customer who has not been contacted begins to become less likely to make a payment when eventually called, costs start to accrue.  The customer should remain in the self-cure strategy up to the point where the probability of payment from the self-cure strategy is expected to drop to a level lower than the associated drop in the probability of payment from the next best strategy.

The ideal time to move an account out of the self-cure strategy and into the next best strategy would be at the end of the period preceding the one in which this cross over of cost and benefit occurs.
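A small sketch of that cross-over calculation with invented daily probabilities: escalate at the end of the last day on which the expected self-cure payment for the following day still outweighs the drop in the next best strategy's success rate caused by waiting that extra day.

```python
# Probability of a spontaneous (self-cure) payment arriving on each day, and the
# probability of securing a payment if the customer is called on that day.
p_self_cure_on_day = {1: 0.06, 2: 0.04, 3: 0.03, 4: 0.02, 5: 0.015, 6: 0.010, 7: 0.006}
p_paid_if_called = {1: 0.40, 2: 0.40, 3: 0.40, 4: 0.40, 5: 0.40, 6: 0.38, 7: 0.35}

def escalation_day() -> int:
    days = sorted(p_self_cure_on_day)
    for day in days[:-1]:
        benefit = p_self_cure_on_day[day + 1]                     # expected free cure tomorrow
        cost = p_paid_if_called[day] - p_paid_if_called[day + 1]  # lost next-best effectiveness
        if benefit < cost:
            return day     # escalate at the end of this day, before the cross-over
    return days[-1]

print(escalation_day())    # 5 with these numbers: the first call happens on day six
```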

Please note that the next best strategy does actually have a direct cost.  Strictly speaking, this direct cost should be added to the benefit of the self-cure strategy at each point in time.  However, in the early collections stages the next best strategy is usually cheap (text messages, letters or phone calls, etc.) and so these costs are insignificant.  However, if the next best strategy is expensive – legal collections or outsourcing for example – these costs could become a material consideration.  For the sake of simplicity I will not include the direct cost of the next best strategy in this discussion but will in an upcoming article covering the question of when to sell a bad debt/ escalate it to legal.

Summary

The cheapest method should always be used to make a recovery in debt management but, before the cheapest method can be identified, all direct and indirect costs must be understood.

I haven't set out to discuss all the direct and indirect costs of debt management strategies here – not even all the direct and indirect costs of self-cure strategies. Rather, I have attempted to explain the most important indirect cost involved in self-cure strategies and how it can be used to identify the ideal point at which an account should be moved out of a self-cure strategy and into the first lender-driven debt management strategy.

This point will vary based on each customer’s risk profile and the effectiveness of existing debt management strategies.  The probability of payment for the next best strategy will decrease faster for higher risk customers than for lower risk customers; bringing forward the ideal point of escalation.  The probability of payment will fall slower for more intense collection techniques (such as legal collections) than for soft collections techniques (such as SMS) but costs also vary; the structure of an organisation’s debt management function will also move the ideal point of escalation.

Finally, you might find it strange that I didn’t talk about which clients should be included in a self-cure strategy.  The reason is that, in theory, every customer should first be considered for a self-cure strategy.  The important part of this statement is that I used the words ‘considered for’ not ‘included in’.  Because of the mechanics of the model proposed, higher risk customers may well have an ideal point of escalation that is equal to the day they enter debt management and so, while ‘considered’ for inclusion in the self-cure strategy they won’t actually be ‘included’.  At the same time, medium risk customers may be included and escalated after five days while the lowest risk customers may be included and escalated only on the fifteenth day.  This will all vary with your portfolio’s make-up and so it is equally possible that no customer group will be worth leaving in a self-cure strategy for more than a day or two.


Many lenders feel that once an account has entered the debt management process it is time to start terminating the relationship.  This is an attitude that may be valid in low risk environments where debt management tends to see only the worst accounts.  However in today’s environment lenders should not view debt management as purely an exit channel for bad customers but also as an alternative sales channel.    

In other words, in the diagram below you can no longer view the procession of an account as always being from left to right but need to consider the reverse movement; turning ‘bad’ customers ‘good’ again.   

Debt Management fulfils the same role as sales and account management

In times of strong economic growth it might be possible to drive portfolio growth solely on the back of new customer acquisition.  In these times the market is full of ‘good’ potential customers looking for new credit and the customers that end up in debt management are few and of very high inherent risk.  However as the economy slows down, two significant things happen: ‘good’ customers stop borrowing and so new sales slow (and the quality of new customers tends to drop) and more customers find themselves in the debt management process (and conversely the quality of those customers tends to rise).   

In such times then, the lender should invest more effort in identifying the best prospects from within their debt management portfolio.  Customers in debt management provide an attractive prospect pool for a few reasons: the expected ‘response rate’ to an ‘offer’ is likely to be higher than in marketing campaigns; the lender has extensive data on each customer and their habits; there is a cost of not recovering; and, in the case of revolving products, the ‘new’ customer comes with an already established balance – mimicking a traditional balance transfer.   

But not all customers are worth retaining and so it is important to understand the relative risk and value of each customer in debt management before assigning a retention strategy.  Risk segmentation is ideally done using a dedicated debt management scorecard but, at least in the earliest stages, it can also be done using a behavioural scorecard.   

Customers who are high risk are by definition likely to re-offend.  Customers like this, who are regularly in debt management, are expensive to retain and consume both operational resources and capital provisions.  Unless the balance outstanding is large or the price premium charged is very high, it may be best to expedite these customers through the process by outsourcing this debt to a third-party debt collector.    

The fact that the balance outstanding is small though should not, on its own, be used to label a customer as ‘not worth retaining’.  The most important value is not the current value but the potential future value of a customer.  The lender should consider the potential for future loans and cross-sells too.  When the relationship is sacrificed with one product, as it surely will be with an expedited outsourcing/ write-off process, it is sacrificed for all other current and future products too.    

So segmentation should only be done based on a full customer view which includes a measure of risk and reward.  As always, the way to do this is through data analysis, scorecards and test-and-learn strategies.    

Where good customers have been identified it is worth investing in their retention.  This investment must be made in long-term and short-term retention strategies.    

The debt management process provides lenders with a rare opportunity to spend a significant amount of time speaking to their customers on a one-to-one basis.  Viewed in this light, debt management provides a wonderful opportunity for long-term relationship building.  Make sure your organisation can benefit from this opportunity by having staff that are skilled in customer handling and sales techniques – not just in demanding repayment.  In the long term, investing in staff training should be a priority for every organisation.  Good training in this area will include references to reading a customer, overcoming objections and structuring budgets/payment plans (I'd recommend speaking to Mark Smith for all of your collections staff training needs).

In the short-term, monetary investments – waived fees, discounted settlements, etc. – should be considered on a case by case basis.  These are the easiest incentives to provide though they should not be the first solution to which a lender turns.  When a ‘good’ customer falls into arrears it is, almost by definition, because they lack the ability to pay rather than the willingness to do so.  This means that the customer is usually willing to work with the lender to find a payment plan that will lead to full repayment while still accommodating their temporary financial difficulties.   

If the customer's income source has temporarily disappeared – through a loss of job, etc. – then a payment holiday should be considered, with a term extension to cater for the increased interest repayments.  Term extensions can also be used on their own, as can debt consolidation, where the problem stems simply from monthly costs exceeding monthly income – as perhaps in the case of rising interest rates or falling commission earnings.  In all cases a payment plan should, of course, be accompanied by education and budgeting assistance.


An effective knowledge management strategy is mandatory for any organisation wanting to succeed in today’s knowledge-based economy.  Such strategies should cover the creation of knowledge as well as the systems that allow for it to be stored, shared and used as a catalyst for the creation of further knowledge.

 

That said, regardless of the form that shared knowledge portals take, they remain stubbornly under-used and under-stocked.  This is likely to remain the case for as long as they require staff to shift their effort away from their immediate activities to find, read and interpret other people's work.  A knowledge management strategy built around tools that sit outside of the day-to-day activities of an organisation's staff is unlikely to add real value.

 

But, by developing a culture of test-and-learn analytics, it is possible to entrench knowledge within the "organisational DNA".  In so doing, that knowledge becomes easier to store, easier to share and easier to access.

 

Test-and-Learn

Test-and-learn is a simple but often misapplied concept.  When it is ingrained within the culture of an organisation, however, it can deliver excellent financial and knowledge management results.  Based on the scientific method, it first came to prominence as a business concept in the late eighties, when American credit card issuers used it to dramatically grow their industry.

 

Test-and-learn analysis is an evidence-based technique whose starting point is always the hypothesis that a proposed new strategy will be more profitable than the incumbent strategy.  This expectation is usually based on an analysis of existing data stored in in-house databases but could also include experience “stored” within the minds of staff members and information acquired from third-parties.

 

But business decisions should not be made on hypotheses alone, and so these hypotheses must first be tested.  It is from the results of this testing that learnings are gained.  The test must compare the proposed strategy to the incumbent one in a controlled environment free from extraneous influences.  The results of the test are monitored and then subjected to statistical analysis to identify the more profitable of the two strategies.  Thus identified, the 'winning strategy' is rolled out across the board and becomes the incumbent strategy against which any future hypotheses are tested.

 

Consider a marketing analyst working for a retail bank who must choose between two potential marketing campaigns designed to generate applications for credit cards.  Traditionally, the bank's new customers were enticed with the offer of a year's free membership of its loyalty programme.

Our analyst, however, hypothesises that more customers would apply for a card if the bank offered to waive the card fees for the first year.  Wanting to make the best use of her limited budget, she must first design a test to prove her hypothesis.  A portion of potential customers will each be randomly offered one of the two options.  After two months of careful analysis, she will be able to prove whether customers respond better to her “no fees” offer.  This real evidence will justify her using the bulk of her marketing budget to advertise her “no fees” offer.  The successful strategy then also becomes the standard against which all future marketing strategies are to be compared.
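A hedged sketch of the statistical comparison she might run at that point, using invented response counts and a simple two-proportion z-test via statsmodels; a real test would also settle the design, sample size and significance threshold up front.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results after two months of the test.
applications = [230, 180]            # "no fees" offer, "free loyalty programme" offer
customers_contacted = [5000, 5000]   # randomly assigned to each offer

stat, p_value = proportions_ztest(applications, customers_contacted)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
# A small p-value supports rolling out the "no fees" offer as the new incumbent
# strategy against which future hypotheses will be tested.
```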

 

The test-and-learn approach therefore creates a circular pattern as it moves an unproven hypothesis from theory to established fact against which, in time, new hypotheses will be tested.  This circular nature is what makes the test-and-learn approach a good knowledge management tool.

 

Creating Explicit and Collective Knowledge

The example began with tacit individual knowledge in the form of a marketing analyst’s hypothesis that a “no fees” offer would improve response rates.  By testing this hypothesis in a scientific manner, she was able to turn that tacit knowledge into explicit knowledge.  At this stage though, that explicit knowledge was only held by the analyst running the test – i.e. individual explicit knowledge.  However, as soon as the learnings from the test were used to change the marketing strategy, everyone who came into contact with the new strategy would also become explicitly aware of the new knowledge.

 

The test-and-learn process can therefore be simplified into three stages – “developing the hypothesis”, “testing the hypothesis” and “taking action”.  Each of these stages, in turn, represents a stage in the knowledge management process – “tacit individual knowledge” becomes “explicit individual knowledge” and then “explicit collective knowledge”. 

 

So, once the culture of test-and-learn analytics is fully embedded within an organisation, any member of staff need only look at test outcomes to have de facto access to all the relevant knowledge of that organisation.  Because the “free loyalty programme” campaign was replaced by the “no fees” campaign anyone with an interest in the organisation’s marketing strategies will know that the “no fees” offer is better.  Every time this knowledge is updated, for example if it is subsequently found that response rates do not drop if it is only the first six months’ worth of fees that are waived, those same stakeholders will once again have access to the new knowledge when the strategy is changed again.


Test-and-learn analytics is a conceptually simple technique that is often misunderstood in practice.  In fact, many organisations fail in their test-and-learn endeavours before they even start – by incorrectly defining the concept. 

 

Simply rolling out a new strategy to all customers and monitoring what happens is not “test-and-learn”.  That’s trial-and-error and it is an expensive and reckless business practice that provides little detailed information and no mitigation for the risk of failure. 

 

Test-and-learn analytics, on the other hand, limits the costs and risks of innovation by testing all new strategies against incumbent strategies in a controlled environment before committing to a large-scale roll-out.  When properly implemented, test-and-learn analytics can change the way that an organisation – and even a whole industry – does business, and it can deliver outstanding results.  Such a successful implementation is built on three pillars: the right culture, the right people and the right tools.

 

Culture

Once the leadership of an organisation has decided to adopt test-and-learn analytics, the most common mistake they make is to see it as a technical issue to be dealt with by the IT department.  While it is true that test-and-learn analytics requires certain technical enablers – which will be discussed later – the single most important requirement for a successful implementation is the right organisational culture.

 

The successful adoption of test-and-learn requires the organisation's culture to apply two distinct forces.  Firstly, senior business decision-makers must demand the results of test-and-learn analysis as an input into all their important decisions.  When alternative strategies are being raised at board level, the directors must demand comparative test results before giving their go-ahead.  When budgetary approval is being sought for a major initiative, the finance team must demand the results of a pilot test before signing off.  Secondly, test-and-learn thrives in a flat structure where ideas are evaluated on their own merits, independent of the relative seniority of the person putting them forward.  Business expertise should of course lead to more astute assumptions at the start of the process but, once the process is complete and the results of the test have been analysed, those results should be allowed to speak for themselves.  This allows alternative proposals to compete on a level playing field which, in turn, means that the decisions made are more likely to be the correct ones.

 

This move towards a meritocracy is not always easily made within large organisations but, unless senior management makes a concerted effort to flatten structures, the results of tests will become meaningless and the benefits of the technique will evaporate.

 

People

The next consideration for a test-and-learn implementation must be the people.  While traditional analysts could rely on technical know-how, test-and-learn analysts need a broader combination of technical and business skills – as, indeed, do all other stakeholders in the process.  In other words, it is no longer sufficient to just answer the questions posed; analysts need to help their organisations ask and answer the right questions.

 

It is therefore important to recruit staff with this mix of skills – ideally by taking candidates through a series of numerical case studies dealing with realistic business problems.  These case studies should test a candidate’s ability to identify the profit levers within a business model, construct the equations that describe the interaction of those levers and manipulate them to obtain the relevant information. This ability to translate business problems into solvable equations is far more important to a test-and-learn analyst than a deep understanding of statistical tools as the latter can be taught far more easily.

 

Tools

The marketplace offers a range of specialised systems that can enable and accelerate large test-and-learn roll-outs, but they are all based on the same simple system requirements.

 

Good quality data is the foundation on which test-and-learn analytics is built and so the most important component of the system is an accurate and easily accessed database.  The database should be large enough to store all relevant data for as long as it is needed and designed in a way that facilitates quick and reliable data retrieval.  Databases only hold historical data, though; to create, implement and measure a test, that data must first be manipulated.  To achieve this it is important that some form of analytical software sits on top of the database.  Although Microsoft Excel will often suffice in smaller and/or newer implementations, in time it will likely become necessary to upgrade to the more sophisticated offerings from Experian, SAS, etc.

 

Once a test has been designed, it must be run in real time as part of the organisation's day-to-day business operations.  This requires some form of real-time delivery engine.  The nature of an organisation's business and the relative level of sophistication it requires will ultimately dictate the level of investment required in such a delivery engine.  However, with careful manipulation of available systems it is often possible to run complex tests using rudimentary systems.

 

Finally, any insights gained from the data and proven in the test must be communicated to the relevant stakeholders.  Although other products exist, widely available programmes like Microsoft's PowerPoint and Visio are usually sufficient – in the right hands, of course – to convey the message in a convenient and effective manner.

