
Posts Tagged ‘Reporting’

We usually assume that, in a given situation, the more conservative of two strategies will better protect the bank’s interests. So, in the sort of uncertain times we are facing now, it is common to migrate towards more conservative approaches, but this isn’t always the best choice.
In fact, a more conservative approach can sometimes encourage the sort of behaviour that it aims to prevent. Provisions are a case in point.

Typically provisions are calculated based on a bank’s experience of risk over the last 6 months – as reflected in the net roll-rates. This period is long enough to smooth out any once-off anomalies and short enough to react quickly to changing conditions.
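To make that concrete with invented numbers: suppose the net monthly roll-rates show that 5% of up-to-date balances roll into one month of arrears, 40% of those roll on to two months, 60% of those roll on to three months, and 80% of balances reaching three months are eventually written off. One common simplification is then to set the provision rate for up-to-date balances at the product of those roll-rates:

5% × 40% × 60% × 80% ≈ 1% of up-to-date balances

with the same calculation, started one bucket further along, giving the (much higher) coverage needed for each arrears bucket. As the observed roll-rates move, so does the provision.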
However, we were recently asked whether it wouldn’t be more conservative to use the worst net roll-rates over the last 10 years. While this is technically more conservative (since the worst roll-rates in 120 months are almost certainly worse than the worst roll-rates in 6 months), it could actually help to create a higher-risk portfolio. Yes, the bank would immediately be more secure, but over time two factors are likely to push risk in the wrong direction:

1) The provision rate is an important source of feedback. It tells the originations team a lot about the risk that is coming into the portfolio from internal and external forces. The sooner the provisions react to new risks, the sooner the originations strategies can be adjusted. Because a 10-year worst-case scenario is an almost static measure, unaffected by changes in risk, new risk could be entering the portfolio without triggering any warnings. A slow and unintentional slide in credit quality will result.
2) Admittedly, other metrics can alert a lender to increases in risk, but there is a second incentive at work: provisions are the cost of carrying risk, so setting the cost of risk at a static and artificially high level changes the risk-reward dynamic of the portfolio.
A low risk customer segment should have a low cost of risk, allowing you to grow a portfolio by lending to low risk/ low margin customers. However, if all customers were to carry a high cost of risk regardless, only high margin customers would be profitable; and since high margin customers are usually also higher risk, there would be an incentive to grow the portfolio in the most risky segments.

In cases where the future is expected to be significantly worse than the recent past, it is therefore better to apply a flat provision overlay: a once-off increase in provisions that increases coverage but still allows provisions to rise and fall with changing risk.
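In formula terms (the overlay percentage here is purely illustrative), the overlay sits on top of the dynamic component rather than replacing it:

Total provision = net roll-rate provision + ( 2% × total balances )

so the first term still rises and falls with the observed roll-rates while the second term provides the additional, once-off coverage.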


Read Full Post »

You will almost certainly have heard the phrase, ‘you can’t manage what you don’t measure’. This is true, but there is a corollary to that phrase which is often not considered: ‘you have to manage what you do measure’.

To manage a business you need to understand it, but more reports do not necessarily mean a deeper understanding. More reports do, however, mean more work, often exponentially more work. So while regular reporting is obviously important for the day-to-day functioning of a business, its extent should be carefully planned.
Since I started this article with one piece of trite wisdom, I’ll continue. I’m trying to write my first novel – man cannot live on tales of credit risk strategy alone – and in a writing seminar I attended, the instructor made reference to a piece of wisdom he picked up in an otherwise forgettable book on script writing: ‘if nothing has changed, nothing has happened’.
It is important to look at the regular reports generated in an organization with this philosophy in mind – do the embedded metrics enable the audience to change the business? If the audience is not going to – or is not able to – change anything based on a metric, then nothing is actually happening; and if nothing is happening, why are we spending money on it?
Don’t get me wrong, I am an ardent believer in the value of data and data analytics; I just question the value of regular reporting. Those two subjects are definitely related, but they’re not just different – at times I believe they are fundamentally opposed.

An over-reliance on reporting can damage a business in four ways:

Restricting Innovation and Creativity
Raw data – stored in a well-organized and accessible database – encourages creative and insightful problem solving; it begs for innovative relationships to be found, provides opportunities for surprising connections to be made, and encourages ‘what if’ scenario planning.
Reports, on the other hand, are tools for managing an operation. Reports come with ingrained expectations and encourage more constrained and retrospective analysis. They ask questions like ‘did what we expected to happen actually happen?’
The more an organization relies on reports the more, I believe, it will tend to become operational in nature and backward-focused in its analytics, asking and explaining what happened last month and how that was different from plan and from the month before. Yes, it is important to know how many new accounts were opened and whether that was more or fewer than planned for in the annual budget, but no one ever changed the status quo by knowing how many accounts they had opened.
The easiest way to look good as the analytics department in an organization with a heavy focus on reports is to get those reports to show stable numbers in line with the annual plan, raising as few questions as possible; and the easiest way to do that is to implement the same strategy year after year. To look good in an organization that understands the real value of data, though, an analytics department has to add business value: it has to delve into the data and come up with insightful stories about relationships that weren’t known last year, designing and implementing innovative strategies that are by their nature hard to plan accurately in an annual budgeting process, but which have the potential to change an industry.

Creating a False Sense of Control
Reports also create an often false sense of accuracy. A report, nicely formatted and with numbers showing month-on-month and year-to-date changes to the second decimal point, carries a sense of precision: if the numbers today look like the numbers did a year ago, they feel like they must be right. But if the numbers today look like the numbers did a year ago, there is also less incentive to test the underlying assumptions, and the numbers can only ever be as accurate as those assumptions: how is profit estimated, how is long-term risk accounted for, how are marketing costs accounted for, how much growth is assumed, and is all of this still valid?
Further, in a similar way to how too many credit policies can end up reducing the accountability of business leaders rather than increasing it, when too much importance is placed on reporting, managers become accountable for knowing their numbers rather than knowing their businesses. If you can say how much your numbers changed month-on-month but not why, then you’re focusing on the wrong things.

Raising Costs
Every report includes multiple individual metrics and goes to multiple stakeholders, and each of those metrics has the potential to raise a question with each of those stakeholders. This is good if the question being raised influences the actions of the business, but the work involved in answering a question is not related to the value of answering it; so, as more metrics of lesser importance are added to a business’ vocabulary, the odds of a question generating non-value-adding work increase.
Once it has been asked, it is hard to ignore a question pertaining to a report without looking like you don’t understand your business, but sometimes the opposite is true. If you really understand your business you’ll know which metrics are indicative of its overall state and which are not. While your own understanding of your business should encompass the multiple and detailed metrics impacting your business, you should only be reporting the most important of those to broader audiences.
And it is not just what you’re reporting, but to whom. Often a question asked out of interest by an uninvolved party can trigger a significant amount of work without providing any extra control or oversight. Better reports and better audiences should therefore replace old ones: metrics that are not value-adding in a given context should not be displayed in that context, or the audience needs to change until the context is right.

Compounding Errors
The biggest problem, though, that I have with a report-based approach is the potential for compounding errors. When one report is compiled based on another report there is always the risk that an error in the first will be carried into the second. This actually costs the organization in two ways: firstly, the obvious risk of incorrectly informed decisions and, secondly, the extra work needed to stay vigilant to this risk.
Numbers need to be checked and rechecked, formats need to be aligned or changed in synchronization, and reconciliations need to be carried out where constant differences exist – month-end data versus cycle end data, monthly average exchange rates versus month-end exchange rates, etc.
Time should never be spent getting the numbers to match; that changes nothing. Time should rather be spent creating a single source of data that can be accessed by multiple teams and left in its raw state, so that any customization of the data in one team remains isolated from all other teams.

Reports are important and will remain so, but their role should be understood. A few key metrics should be reported widely, and these should each add a significant and unique piece of information about an organization’s health; one level down, a similar report should break down each team’s performance. Beyond that, time and resources should be invested in the creative analysis of raw data, encouraging the creation of analytics-driven business stories.
Getting this right will involve a culture change more than anything, a move away from trusting the person who knows their numbers to trusting the person who provides the most genuine insight.
I know of a loan origination operation that charges sales people a token fee for any declined application which they ask to be manually referred, forcing them to consider the merits of the case carefully before adding to the costs. A similar approach might be helpful here: charging audiences for access to monthly reports on a per-metric basis. This could be an actual monetary fine which is saved up for an end-of-year event, or a virtual currency awarded on a quota basis.

Read Full Post »

There are certainly analytical tools in the market that are more sophisticated than Excel, and there are certainly situations where these are needed to deliver enhanced accuracy or advanced features. However, this article will concentrate on building models to aid the decision-making process of a business leader rather than a specialist statistician; the need is for a model that is flexible and easy to use. Since Excel is so widely available and understood, it is usually the best tool for this purpose.

In this article I will assume a basic understanding of Excel and its in-built mathematical functions.  Instead, I’ll discuss how some of the more advanced functions can be used to build decision-aiding models and, in particular, how to create flexible matrices.

Spreadsheets facilitate flexibility by allowing calculations to be parameterised so that a model can be built with the logic fixed but the input values flexible. For example, the management of a bank may agree that the size of a credit limit granted to a customer should be based on that customer’s risk and income and that VIP customers should be entitled to an extra limit extension, though they may disagree over one or more of the inputs. The limit-setting logic can be programmed into Excel as an equation that remains constant while the definition of what constitutes each risk group, each income band, the size of each limit and the size of the VIP bonus extension can each be changed at will.
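A minimal sketch of what that looks like in a cell (the references and the extension amount here are arbitrary): rather than hard-coding the VIP extension into the logic, the formula points at a parameter cell that can be changed in a workshop without touching the equation itself:

= ProposedLimit + IF( CustStatus = “VIP”, Params!$B$2, 0 )

where Params!$B$2 holds whatever VIP extension is currently agreed.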

When building a model to assist with business decision-making, the key is to make sure that each profit lever is represented by a parameter that can be quickly and easily altered by decision-makers without altering the logical and established links between each of those profit levers.  Making use of Excel’s advanced functions and some simple logic, it is possible to do this in almost all situations without the resulting model becoming too complex for practical business use.

*     *     *     *     *

If I were to guess, I would say that over 80% of the functionality needed to build a flexible decision-making model can be created using Excel’s basic mathematical functions, ‘IF clauses’ and ‘LOOKUPs’.

IF Clauses

IF clauses, once understood, have a multitude of uses.  When building a model to aid decision-making they are usually one of the most important tools at an analyst’s disposal.  Simply put, an IF clause provides a binary command: if a given event happens do this, if not do that.  If a customer number has been labelled as VIP, add €5 000 extra to the proposed limit, if not do not add anything, etc. 

IF( CustStatus = “VIP”, 5000, 0 )

Using this simple logic, it is possible to replicate a decision tree connecting a large number of decisions to create a single strategy.  IF clauses are very useful for categorising data, for identifying or selecting specific events, etc.
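For instance (the bands and amounts here are made up), two IF clauses can be nested to replicate a simple two-level decision tree, splitting first on risk and then on status:

IF( RiskGrade = “High”, 0, IF( CustStatus = “VIP”, 10000, 5000 ) )

which grants high-risk customers no extra amount, VIPs €10 000 and everyone else €5 000.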

There are two important variations of the basic IF clause: SUMIF and COUNTIF. These two functions allow you to determine how often, or to what degree, a certain event has occurred. Both functions have the same underlying logic, though the COUNTIF function is simpler. For example: what is the total sum of balances on all VIP accounts, or simply, how many VIP accounts are there?

SUMIF( Sheet1!$A$1:$A$200, “VIP”, Sheet1!$B$1:$B$200 ) or

COUNTIF( Sheet1!$A$1:$A$200, “VIP” )

 

Lookups

Look-ups, on the other hand, are used to retrieve related data; replicating some of the basic functionality of a database. 

A ‘lookup’ will take one value and retrieve a corresponding alternate value from a specified table. It is perhaps easier to understand through an example: assume there is a list showing which branch each of a bank’s customers belongs to. Given a random selection of customer numbers, a lookup would take each of those customer numbers, search the larger list until it found the matching number, and then retrieve the branch name associated with that customer number in the table.
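As an illustration (the sheet name and range here are assumptions, chosen to mirror the Branches list used later in this article), the formula might look like this:

VLOOKUP( CustNum, Branches!$A$1:$B$50, 2, FALSE )

which searches the first column of the Branches table for the customer number and returns the branch name stored in the second column.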

 

By ending the statement with ‘FALSE’ it means that only exact matches are permitted.  If I had ended the function with ‘TRUE’, it would have looked for the largest value in the list that does not exceed the given customer number and returned the value corresponding to that.  This is not particularly useful in an example like this one, but it is a useful way to group values into logical batches, among other things.  For example, if I had a list of salaries and wanted to summarise them into salary bands, I could create a table with the lowest value in each band, sorted in ascending order, and then use a lookup ending with TRUE to find the band into which each unique salary falls.
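A sketch of that salary-band lookup (the sheet name, range and thresholds are invented for illustration):

VLOOKUP( Salary, SalaryBands!$A$1:$B$4, 2, TRUE )

where column A of the SalaryBands table holds the lowest salary in each band, sorted ascending, and column B holds the band name; the lookup returns the band whose lower bound is the largest value not exceeding the salary.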

 

There are actually two types of lookups in Excel, vertical lookups and horizontal lookups.  The former looks down a list until it finds the matching number (and then moves across to find the pertinent field) while the latter looks across a list (and then moves down to find the pertinent field); other than that the logic remains the same.

In the above example, the lookup will take a given customer number, find it within the table on the sheet and then return the value in the second column from the left of that table.  If it had instead been an HLOOKUP function, the value returned would have been the one in the second row from the top.

 

Embedded Functions

The real value of IF clauses and LOOKUPs comes when they are used together, either with each other or with other Excel functions.  For example: if the account is labelled “VIP”, then look up the associated relationship manager in a list of all the relationship managers; if not, then look up the associated branch name – in both cases using the customer number to do the matching.

IF( CustStatus = “VIP”, VLOOKUP( CustNum, RelMans!$A$1:$B$20, 2, FALSE ), VLOOKUP( CustNum, Branches!$A$1:$B$50, 2, FALSE ) )

In these cases, the results of the embedded function are used by the main function to deliver a result.

Matrices

In most cases, however, businesses need to make decisions on more factors than can be represented simply by lists; in our example, credit limits cannot be set with reference to risk alone – income, as a proxy for spend, also needs to be borne in mind.  When building a business model, a useful tool is therefore a two-dimensional matrix from which results can be retrieved using embedded VLOOKUPs and HLOOKUPs.  Creating matrices in Excel is a three-step process – at least, I only know how to do it using three steps.

I will walk through the example of a limit-setting matrix.  In this example I want to set a limit for each customer based on a combination of the customer’s risk and income, while also keeping product restrictions in mind.  I want this model to be flexible enough that I can easily change the definition of the risk and income bands as well as the limits assigned to each segment of the matrix.

The first step is to create the desired matrix, choosing the axis labels and number of segments.  Within this matrix, each segment should be populated with the desired limit.  The labels of the matrix will remain fixed though the definition of each label can be changed as needed.  The limit in each segment can be hard-coded in – €5 000 for example – or can relate to a reference cell – product minimum plus a certain amount for example.

In this example I have decided on a 12-segment matrix that will cater for 4 income bands (Low, Moderate, High and Very High) and 3 risk bands (Low, Moderate and High).  I’ve then populated the matrix with the limits we will use as a starting point for our discussions.  Managers will not want to know just how the proposed model impacts limits at a customer level; they will also want to see how it impacts limits at a portfolio level, so I have used COUNTIF and SUMIF to provide a summary of the limit distribution across the portfolio – all shown below:
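An illustrative layout for the matrix itself (the limit amounts are placeholders rather than the original figures) occupies cells A1:E5, with the helper row of column numbers that is explained further below sitting in row 2:

       A               B          C              D           E
1                      LOW INC    MODERATE INC   HIGH INC    VERY HIGH INC
2                      2          3              4           5
3      LOW RISK        10 000     15 000         25 000      50 000
4      MODERATE RISK   5 000      8 000          12 000      20 000
5      HIGH RISK       2 000      3 000          5 000       8 000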

 

The second step is to summarise the values of the two key variables into their respective bands, using VLOOKUPs as discussed above.  In this example we want to summarise the risk grades of customers into LOW RISK, MODERATE RISK and HIGH RISK, and the incomes into similar LOW INCOME, MODERATE INCOME, HIGH INCOME and VERY HIGH INCOME bands.

As a starting point, I have decided to make the splits as shown in the tables below.  These tables were used to label each account in the dataset using two new columns I have created, also shown below:
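By way of illustration only (the actual cut-offs used are not reproduced here, so these thresholds are placeholders), each reference table lists the lowest value in each band, sorted ascending, so that a TRUE-type VLOOKUP returns the correct band:

Risk band table:
1        LOW RISK
3        MODERATE RISK
5        HIGH RISK

Income band table:
0        LOW INCOME
3 000    MODERATE INCOME
6 000    HIGH INCOME
10 000   VERY HIGH INCOME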

 

 

 

Then each account can be matched to a matrix segment using VLOOKUPs and HLOOKUPs, embedded to create a matrix lookup function.  What we want to do is use the VLOOKUP functionality to find the row corresponding to the risk of the customer and then move across the right number of columns to find the column corresponding to the income of the customer.  The first part of the equation is relatively simple to construct, so long as we ignore the column number:

VLOOKUP( Risk Band, $A$3:$E$5, ?, FALSE )

Provided we’re a little creative with it, an HLOOKUP will allow us to fill in the missing part.  What we need to do is find a way to convert the ‘Salary Band’ field into a number representing the column.  You might have noticed that there is a row of numbers under each of the Income Bands in the matrix shown above.  This was done to allow an HLOOKUP to return that number so that it can be placed into the missing part of the VLOOKUP.  The HLOOKUP will search for the Salary Band and then return the number from the row directly below it, which in this case has been specifically set to be equal to its column number – remembering to add one to take into account the column used to house the name of the Risk Band that is needed by the VLOOKUP.

HLOOKUP( Salary Band, $B$1:$E$2, 2, FALSE )

In this case it will always be the second row so we can hardcode in the ‘2’.  This entire function is then substituted into the VLOOKUP to create a function that will look-up both the Risk Band and the Salary Band of any given customer and return the relative limit from the matrix.

VLOOKUP( Risk Band, $A$1:$E$5, HLOOKUP( Salary Band, $B$1:$E$2, 2, FALSE ), FALSE )

All that is now needed is to add two further fields to take into account the potential VIP bonus limit, and the model is complete – the results are shown below:
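Those two further fields might look something like this (reusing the €5 000 VIP extension from the earlier example):

VIP Bonus = IF( CustStatus = “VIP”, 5000, 0 )
Final Limit = Matrix Limit + VIP Bonus

so that the final limit is simply the matrix limit plus the bonus wherever one applies.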

 

This version of the model can be distributed or taken into a workshop and, as each component is adjusted, so too are the individual limits granted as well as the tables summarising the distribution of limits across the portfolio.  For example, the marketing team may wish to increase the limits of Low Risk, Very High Income customers to 60 000 and, at the same time, the risk team may wish to re-categorise those with a risk score of 4 as ‘High Risk’ and increase the qualifying income for ‘High Income’ to 7 000.  The first change requires a simple change in the Limit Matrix, while the second requires two simple changes to the reference tables, giving the new matrix limits shown in the tables below.

 

*     *     *     *     *

It is also possible to show the distribution by matrix segment. The method is based on the same logic discussed up to now, although the implementation is a bit clumsy. 

The first step is to create a dummy matrix with the same labels but populated with a segment number rather than a limit.  Then you need to create a new field in the dataset, called something like ‘Segment Number’, and populate this field using the same equation as above.  Once this field has been populated you can create another dummy version of the matrix and, in this case, use the SUMIF or COUNTIF function to calculate the value of limits or the number of customers in each segment.  With that populated it is easy to turn those numbers into a percentage of the total, either in the same step or using one final new matrix:
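The formulas behind each segment of that summary matrix would be along these lines (the ranges are illustrative: column H is assumed to hold the new ‘Segment Number’ field and column F the limit granted):

COUNTIF( Data!$H$2:$H$5000, Segment Number )
SUMIF( Data!$H$2:$H$5000, Segment Number, Data!$F$2:$F$5000 )

where Segment Number refers to the segment number stored in the corresponding cell of the dummy matrix.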

 






Read Full Post »

The purpose of analytics is to guide business practices by empowering decision makers with clear and accurate insights into the problem at hand.  So even the best piece of analytics can fall short of this goal if the link between the analyst and the ultimate decision maker is ineffective.  Therefore, analysts should invest time in perfecting the art of presenting their findings, not just the science of reaching them.

A good presentation begins when the project begins; it does not begin only once the results have been calculated.  For a piece of analysis to effectively guide decision-making, its objectives must be aligned with the project’s objectives from the very start.

The easiest way to ensure that the analyst is working in the same direction as the decision maker is to employ the story board technique.  Much like a film maker will create a high-level story board to explain how their story will develop from scene to scene, an analyst should draw a high-level story board showing how the story underlying their analysis will develop from slide to slide.  The analysis should proceed only once the decision maker has agreed that the logical flow presented will achieve the desired end goal.  No fancy software is needed; story boarding can be done by hand or in PowerPoint.

One way to keep the flow clear is to use the headings as summaries of each slide’s message.  For example, instead of using a heading along the lines of ‘Utilisation Figures’ in the second slide above, I used ‘Utilisation is very risk biased’.  The audience immediately knows where I am going with the slide and doesn’t need to work towards the same conclusion as I speak.  This simple trick will also help you to quickly spot inconsistencies in the story flow.

The story board method works because, in many ways, a good piece of analysis is like a film in how it tells a story: like a film, it must tell a story that flows logically from one point to another culminating in a coherent and memorable message and, like a film, it must often find concise visual summaries for complex concepts. 

Using the story board approach from the start helps to put the piece of analysis in context.  By defining the scope it prevents time being invested in non value-adding activities and by confirming a logical thread it ensures a fruitful outcome. 

The analyst should follow a structured process to create a logical and value-adding piece of analysis, such as the five-point plan below:

(1) the problem must be fully understood;

(2) the analysis must be designed to address each key aspect of the problem;

(3) the analysis must be carried out;

(4) the results should be interpreted in terms of the problem and used to create the final presentation;

(5) actual performance of the solution should be monitored and compared to expectations.

Understanding the problem is the most important step.  Many an analyst feels that their understanding of a particular analytical technique is their key value offering.  However, the results will be sub-optimal at best and value-destroying at worst unless the problem to which that technique is to be applied is well understood.  Understanding a problem requires research into the business problem at hand, the key factors involved, the relationships between them and the relative priority of each.  The analyst should not be happy until each of these is understood and all of the inherent assumptions have been challenged and proven valid.

When the analyst has a complete understanding of the problem they will be in a position to prioritise each component part.  Once the problem has been understood and its component parts prioritised, the analysis itself can be designed along the logical lines of the story.  Here dummy graphs and tables can be added to the story boards.  Once again, before the next step is taken it is worth verifying that the proposed measures will indeed prove the point covered by each particular story board.

Once the dummy graphs and tables have been inserted the analyst should ask themselves questions like: would a table showing the relative percentage of good and bad accounts with balances over 90% of their limit, when shown together with a table of average utilisations, prove that the current credit limit policy is enabling higher levels of bad debt?  If not, alternative measures should be considered and weighed in the same way. 

It is important to note though that the intention is not to find the one graph that supports your pre-determined beliefs but rather to find a measure that will prove or disprove your key message.  The analyst should make this decision before the numbers are included to prevent this sort of intentional bias.  In the above example the decision is made before we know for sure what patterns will emerge from the data.  If the data later shows no significant difference in average balances and utilisations between each group, the analyst should be willing to accept that perhaps there is less value in the project than first imagined; they should not try to manipulate the results to hide this fact.

I said earlier that a presentation often has to use visual tools to concisely summarise complex concepts.  These visual tools can include hand drawn schematics (useful when drawn live as an interactive tool for explaining concepts but less able to communicate numerical analysis accurately), graphs (less interactive but more accurate when it comes to presenting numerical results) and tables.  When using visual tools it is important to not let the visuals distract from the message you want to communicate.  The wrong scale can, for example, make trends seem to appear where they don’t exist and disappear where they do.  Excess information, unnecessary legends, the wrong choice of graph, etc. can all work to ‘encode’ your message.  It is important that your visual message faithfully reflects the message of the underlying data, just using an easier to interpret medium.

The same logic applies to animations.  I believe that animations in presentations can add great value when used well but in many – if not most – cases they simply distract.  I tend to use animations when I wish either to create a sense of interaction or when the order in which events progress is important – as when discussing a process with multiple steps, each building on its predecessor.

Once the analysis has been designed and approved it must be delivered.  This is where the most focus has been traditionally and it is indeed a vital step.  The value that an analytical approach to problem solving brings to a business is the ability to make decisions based on a true understanding of the underlying business and its component parts.  Unless the analysis is accurate this is not possible and so great care must be taken when selecting and implementing analytical techniques.  However, this step is most valuable when it comes on top of the solid foundation created by each of the prior steps.

The results of the analysis must be substituted into the story board in place of the dummy graphs and tables.  The final touches should be applied to the presentation at this stage, as should any changes in the message necessitated by unexpected new information. 

Once the presentation is complete, it can be delivered to the decision maker in whichever format is most appropriate.  Thought should be given to the question of the delivery channel.  Presentations that are to be delivered face-to-face should include fewer and less detailed bullet points, while those that are to be sent to a large, indirect audience should contain more detailed information.

However, that is not where the process should end.  I started this article by saying that the purpose of analytics is to guide business practices and so until the extent to which business practices have actually been changed – and the impact of those changes – has been understood, the ultimate value of the analysis will not be known.  Any piece of analysis should therefore cater for a period of on-going monitoring where key project metrics can be measured and the actual results compared to expected results.  The nature of each specific piece of analysis will dictate how long this period should be and which metrics should be included.  But, in all cases, the analysis can only be considered successful once it can be shown that the business has made beneficial changes based on it.

*   *   *


Read Full Post »

Unscrupulous crooks ensure that pyramid schemes are seldom out of the news for very long; cases like the high-profile Madoff affair have cost investors billions of dollars and made headlines worldwide.  However, the principles behind them can also shed some light on more mundane issues, such as portfolio reporting.  Because, in the same way that rapid growth rates can create the illusion of sustainable results in a pyramid scheme, they can hide the true patterns in a set of data.

 

I’ll start with a simplified model of a pyramid scheme by way of illustration.  This scheme comes about when ten individuals are enticed into each investing $100 in a project with promised returns of 50% per annum.  Unbeknownst to these investors, there is no underlying business and the project is simply a screen for a pyramid scheme.

 

So, at the start of the first year the scheme has ten investors and $1 000 in capital.  At the end of that same year, $500 of the start-up capital is used to fund dividend payments which leave the initial investors blissful, but ignorant.  With a ‘proven track record’ the conman can now approach further investors: let’s assume he gets twenty-five new investors at the start of the second year.  With the resultant cash injection the scheme’s capital reserves grow to $3 000 and, despite a greater dividend burden, the scheme still manages to end the year showing a growing capital balance.  The scheme can now show two years of capital growth and two years of 50% annual pay-outs and, as news of this spreads, fifty new investors sign up, growing the capital balance to $6 250 – more than enough to once again make a 50% pay-out to all investors.

 

So long as the scheme continues to double the total number of investors each year, it will continue to produce these impressive ‘results’.  However, should something occur to restrict the influx of new investors, the true underlying performance of the scheme will become quickly and irreversibly apparent.

 

Let us assume in our example that rumours begin circulating that link other bad deals to our conman and that these begin to scare away many potential investors.  Thus, despite another year of extraordinary ‘performance’, only twenty-five new investors join the scheme in its fourth year.  Cash from these new investors has increased the capital balance to $4 500, but these new investors have also increased the pay-out burden: to $5 500.  In other words, the scheme is no longer able to make a full dividend pay-out to all of its investors.  So, after its first year of disappointing results, there is an even larger drop in investor demand and only ten new investors can be found at the start of the fifth year.  These investors inject capital totalling $1 000 while simultaneously increasing the total dividend burden to $6 000.  After a second poor year, the true nature of the scheme is revealed and, as investors rush to liquidate their investments, they find that there is no capital available to do so.  Some of the investors have done quite well out of the deal and others have lost almost everything: the first group of investors has doubled its money while the last group has lost 90% of theirs – and this is in a scenario where no money is taken out of the scheme!
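Pulling the figures from this example into one place (the year-five balance assumes the year-four shortfall exhausted whatever capital remained):

Year   New investors   Capital injected   Capital after injection   Dividends due
1      10              $1 000             $1 000                    $500
2      25              $2 500             $3 000                    $1 750
3      50              $5 000             $6 250                    $4 250
4      25              $2 500             $4 500                    $5 500
5      10              $1 000             ~$1 000                   $6 000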

 

The reason that such a scheme can continue to run undetected is that the rapid growth in the inflow of money masks the equally great, but delayed, outflow of money.

 

In a similar way, rapid growth in a portfolio of loans can mask a worsening of risk metrics and can lead to incorrect strategy decisions or a delay in the implementation of corrective measures.  I will clarify this statement by once again using an illustrative example: albeit one that requires the use of data tables.

 

Assume you have taken over the management of a previously stable portfolio of loans.  Your intention is to market these loans to a new client population and thus to grow the size of the book while keeping risk unchanged.  At the end of April you are given the figures below, against which you intend to benchmark the book’s performance under your management.

[Table: benchmark portfolio figures at the end of April]

Due to the current financial crisis, risk is the major concern and it has been decided that any increase in risk should be identified as quickly as possible.  With this in mind, you agree to run a one month pilot project at the end of which risk will be measured.  Should the risk of the book – account balances at three months delinquent as a percentage of up-to-date account balances – be seen to be increasing, the new project will be stopped immediately.

 

At the end of May, things appear good.  Up-to-date account balances have risen by 10% (from 12 734 to 14 008) while account balances at three months delinquent have risen by just 6% (from 531 to 562).  The net result of these two changes is that the key risk ratio didn’t just fail to increase, it actually fell from 4.17% to 3.84%.

[Table: portfolio figures at the end of May]

With this seen as sufficient evidence, the go-ahead is given to continue with the new strategy in an even more aggressive manner.  Over the next two months, the value of current balances grows by 15% a month and the risk metrics continue to remain within the pre-set benchmarks. 

 

However, in July a slight up-tick is seen.  As the risk metric has not actually exceeded the benchmark, the corrective action is mild, with only some of the acquisition activities being slowed: growth of the book drops to 10%.  The figures continue to worsen and, in August, all acquisition activity is stopped.

[Table: portfolio figures for June to August]

Yet, despite the cessation of all acquisitions, the risk figures continue to worsen over the next three months and ultimately the year ends with an average risk figure of more than double the pre-set benchmark.

[Table: portfolio figures to the end of the year]

So, how is it possible that such a large change in risk could happen overnight, despite corrective action being taken so swiftly?  It is possible because the change didn’t happen overnight.  In fact, the change had been happening since the first month of the new strategy.  It wasn’t visible then, however, because the risk metrics in place were ‘tricked’ by timing.

 

Risk is only realised over time.  In order for an account to become three months in arrears, for example, it needs to start as up-to-date.  By necessity, it must take a full month to become one month in arrears, another to get to two months in arrears and a third to become three months in arrears, etc.  So, in the same way that pyramid schemes use the delay between investment inflows and dividend outflows to allow new investors to fund old obligations; the delay between the acquisition of risk and of its becoming apparent, allows new growth to pay for older risk – and thus to hide worsening trends.

 

When new accounts were acquired in May, new risk was taken on and risk in the portfolio in general began to worsen.  However, no new risk was evident.  The new accounts were immediately brought into the calculation as, by definition, up-to-date.  The accounts at three months delinquent had rolled from two months delinquent in April – at a slightly faster rate than before – and were compared against this inflated base.  The net result was a significant apparent improvement in the ratio, as the impact of the large artificial increase in the denominator dwarfed the smaller – but real – increase in the numerator.

 

To counteract this, a time lag must be built into all risk metrics.  Rather than comparing the value of accounts at three months in arrears to the value of accounts that are up-to-date today, it should be compared to the value of accounts that were up-to-date three months ago: when they began their slide into arrears.  May’s increased value of up-to-date accounts will only impact the risk ratios when it is compared to the related increase or decrease in the value of account balances at three months delinquent in August.
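In a monthly reporting sheet this is a one-cell change.  Assuming an illustrative layout with one row per month, up-to-date balances in column B and balances at three months delinquent in column E, and with May sitting in row 14, the lagged ratio for May simply points three rows up for its denominator:

= E14 / B11

that is, May’s three-months-delinquent balances divided by February’s up-to-date balances, rather than = E14 / B14.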

 

The impact of this simple change is clearly evident in the figures below, where the increase in risk is already apparent by the end of May.  Now, the figure of 562 is compared to February’s figure of 12 240 (a ratio of roughly 4.6%) rather than to May’s figure of 14 008.  Had this reporting been in place at the time, it would have been possible to halt the project long before any further risk was acquired.

[Table: risk ratios recalculated with a three-month lag]

This can hardly be considered a ground-breaking revelation, but it hopefully goes some way to casting the spotlight on a simple but important principle of reporting: that in order for the numbers we report to add value to the business, they need to accurately reflect reality.  This principle is usually applied when we consider which data to include, but it should also be applied when we consider which time period to include.  Targets and benchmarks have little value until they are logically linked to the greater business which they aim to reflect.

Read Full Post »