Archive for the ‘Reporting’ Category

We usually assume that, in a given situation, the more conservative of two strategies will better protect the bank's interest. So, in the sort of uncertain times we are facing now, it is common to migrate towards more conservative approaches, but that isn't always the right call.
In fact, a more conservative approach can sometimes encourage the sort of behaviour that it aims to prevent. Provisions are a case in point.

Typically provisions are calculated based on a bank’s experience of risk over the last 6 months – as reflected in the net roll-rates. This period is long enough to smooth out any once-off anomalies and short enough to react quickly to changing conditions.
However, we were recently asked if it wouldn't be more conservative to use the worst net roll-rates seen over the last 10 years. While this is technically more conservative (since the worst roll-rates in 120 months are almost certainly worse than the worst roll-rates in 6 months), it could actually help to create a higher-risk portfolio. Yes, the bank would immediately be more secure, but over time two factors are likely to push risk in the wrong direction:

1) The provision rate is an important source of feedback. It tells the originations team a lot about the risk coming into the portfolio from internal and external forces, and the sooner provisions react to new risks, the sooner the originations strategies can be adjusted. Because a 10-year worst-case scenario is an almost static measure, unaffected by changes in risk, new risk could enter the portfolio without triggering any warnings. The result would be a slow and unintentional slide in credit quality.
2) Admittedly, other metrics can alert a lender to increases in risk, but there is another incentive at work. Provisions are the cost of carrying risk, so setting that cost at a static and artificially high level changes the risk-reward dynamic of the portfolio.
A low-risk customer segment should have a low cost of risk, allowing you to grow a portfolio by lending to low-risk, low-margin customers. However, if all customers were to carry a high cost of risk regardless, only high-margin customers would be profitable; and since high-margin customers are usually also higher risk, there would be an incentive to grow the portfolio in the riskiest segments, as the toy numbers below illustrate.
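To make that incentive concrete, here is a minimal sketch with invented margins and loss rates (none of these figures come from a real portfolio): under a flat, 'worst case' cost of risk, only the high-margin, high-risk segment still looks profitable.

```python
# Toy segment economics: interest margin earned vs cost of risk charged.
# All figures are invented for illustration.
segments = {
    #             (margin, true annual loss rate)
    "low risk":  (0.03, 0.01),
    "high risk": (0.09, 0.06),
}

flat_cost_of_risk = 0.05   # static, 'worst case' provision rate applied to everyone

for name, (margin, true_loss) in segments.items():
    risk_based_profit = margin - true_loss        # cost of risk tracks real risk
    flat_profit = margin - flat_cost_of_risk      # cost of risk fixed and high
    print(f"{name:9}: risk-based {risk_based_profit:+.1%}, flat-rate {flat_profit:+.1%}")
```

With a risk-based cost of risk both segments earn a positive return; with the flat rate, the low-risk segment appears to lose money and only the riskier one looks worth growing.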

In cases where the future is expected to be significantly worse than the recent past, it is therefore better to apply a flat provision overlay: a once-off increase in provisions that raises coverage but still allows provisions to rise and fall with changing risk.
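The difference between the three approaches can be sketched roughly as below. This is not a real provisioning model – it simply treats a monthly net roll-rate as a stand-in for the provision rate, with invented numbers – but it shows why the worst-of-120-months figure barely reacts to emerging risk while the average-plus-overlay figure still does.

```python
import random

# Not a real provisioning model: a single monthly net roll-rate stands in
# for the loss rate that drives provisions. All figures are invented.
random.seed(1)
monthly_roll_rates = [0.02 + random.uniform(-0.005, 0.005) for _ in range(120)]

# 1) Reactive: average of the last 6 months' net roll-rates.
reactive = sum(monthly_roll_rates[-6:]) / 6

# 2) 'Conservative': the worst single month seen over the last 10 years.
#    Higher, but static -- it barely moves when new risk enters the book.
worst_of_120 = max(monthly_roll_rates)

# 3) Overlay: keep the reactive base, so the rate still tracks emerging risk,
#    and add a flat once-off buffer for an expected downturn.
overlay = 0.01   # illustrative management buffer
with_overlay = reactive + overlay

print(f"6-month average     : {reactive:.2%}")
print(f"Worst of 120 months : {worst_of_120:.2%}")
print(f"Average + overlay   : {with_overlay:.2%}")
```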


You will almost certainly have heard the phrase, 'you can't manage what you don't measure'. This is true, but there is a corollary which is often not considered: 'you have to manage what you do measure'.

To manage a business you need to understand it, but more reports do not necessarily mean a deeper understanding. More reports do, however, mean more work, often exponentially more work. So while regular reporting is obviously important for the day-to-day functioning of a business, its extent should be carefully planned.
Since I started this article with one piece of trite wisdom, I'll continue. I'm trying to write my first novel – man cannot live on tales of credit risk strategy alone – and in a writing seminar I attended, the instructor quoted a piece of wisdom he had picked up in an otherwise forgettable book on scriptwriting: 'if nothing has changed, nothing has happened'.
It is important to look at the regular reports generated in an organization with this philosophy in mind – do the embedded metrics enable the audience to change the business? If the audience is not going to – or is not able to – change anything based on a metric, then nothing is actually happening; and if nothing is happening, why are we spending money on it?
Don't get me wrong: I am an ardent believer in the value of data and data analytics, I just question the value of regular reporting. The two subjects are certainly related, but they are not the same, and at times I believe they are fundamentally opposed.

An over-reliance on reporting can damage a business in four ways:

Restricting Innovation and Creativity
Raw data – stored in a well-organized and accessible database – encourages creative and insightful problem solving: it begs for innovative relationships to be found, provides opportunities for surprising connections to be made, and invites 'what if' scenario planning.
Reports, by contrast, are tools for managing an operation. They come with ingrained expectations and encourage more constrained and retrospective analysis, asking questions like 'did what we expected to happen actually happen?'
The more an organization relies on reports, the more, I believe, it tends to become operational in nature and backward-focused in its analytics, asking and explaining what happened last month and how that differed from the plan and from the month before. Yes, it is important to know how many new accounts were opened and whether that was more or less than planned for in the annual budget, but no one ever changed the status quo by knowing how many accounts they had opened.
The easiest way to look good as the analytics department in an organization with a heavy focus on reports is to get those reports to show stable numbers in line with the annual plan, raising as few questions as possible; and the easiest way to do that is to implement the same strategy year after year. To look good in an organization that understands the real value of data, though, an analytics department has to add business value: it has to delve into the data and come up with insightful stories about relationships that weren't known last year, designing and implementing innovative strategies that are by their nature hard to plan accurately in an annual budgeting process, but which have the potential to change an industry.

Creating a False Sense of Control
Reports also create an often false sense of accuracy. A report, nicely formatted and showing month-on-month and year-to-date changes to the second decimal place, carries an air of precision; if the numbers today look like the numbers did a year ago, they feel like they must be right. But if the numbers today look like the numbers did a year ago, there is also less incentive to test the underlying assumptions, and the numbers can only ever be as accurate as those assumptions: how is profit estimated, how is long-term risk accounted for, how are marketing costs treated, how much growth is assumed, and are those assumptions still valid?
Further, in a similar way to how too many credit policies can end up reducing the accountability of business leaders rather than increasing it, when too much importance is placed on reporting, managers become accountable for knowing their numbers rather than knowing their businesses. If you can say how much your numbers changed month-on-month but not why, then you're focusing on the wrong things.

Raising Costs
Every report includes multiple individual metrics and goes to multiple stakeholders, and each of those metrics has the potential to raise a question with each of those stakeholders. This is good if the question being raised influences the actions of the business, but the work involved in answering a question is not related to the value of answering it, so as more metrics of lesser importance are added to a business' vocabulary, the odds of a question generating non-value-adding work increase exponentially.
Once a question about a report has been asked, it is hard to ignore it without looking like you don't understand your business, but sometimes the opposite is true. If you really understand your business you'll know which metrics are indicative of its overall state and which are not. While your own understanding of your business should encompass the many detailed metrics affecting it, you should only report the most important of those to broader audiences.
And it is not just what you're reporting, but to whom. A question asked out of idle interest by an uninvolved party can trigger a significant amount of work without providing any extra control or oversight. Metrics that do not add value in a given context should not be displayed in that context; either the metric or the audience should change until the two are matched.

Compounding Errors
The biggest problem, though, that I have with a report-based approach is the potential for compounding errors. When one report is compiled based on another report, there is always the risk that an error in the first will be carried into the second. This costs the organization in two ways: firstly, the obvious risk of incorrectly informed decisions and, secondly, the extra work needed to stay vigilant to this risk.
Numbers need to be checked and rechecked, formats need to be aligned or changed in synchronization, and reconciliations need to be carried out where constant differences exist – month-end data versus cycle end data, monthly average exchange rates versus month-end exchange rates, etc.
Time should never be spent getting the numbers to match; that changes nothing. Time should rather be spent creating a single source of data that can be accessed by multiple teams and left in its raw state, so that any customization of the data done by one team remains isolated from all other teams, along the lines sketched below.
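A minimal sketch of that idea, assuming a pandas DataFrame stands in for the shared raw source (the column names are invented for illustration): each team derives its own view, and neither view touches the raw data.

```python
import pandas as pd

# One shared table of raw account records; column names are invented.
raw = pd.DataFrame({
    "account_id": [1, 2, 3],
    "month_end_balance": [1200.0, 560.0, 0.0],
    "cycle_end_balance": [1150.0, 600.0, 40.0],
    "days_past_due": [0, 35, 95],
})

# Finance team's view: month-end balances only.
finance_view = raw[["account_id", "month_end_balance"]]

# Risk team's view: cycle-end balances bucketed by months in arrears.
risk_view = raw.assign(arrears_bucket=raw["days_past_due"] // 30)[
    ["account_id", "cycle_end_balance", "arrears_bucket"]
]

# Neither view modifies `raw`, so a change in one team's logic cannot
# silently propagate into another team's numbers.
print(finance_view)
print(risk_view)
```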

Reports are important and will remain so, but their role should be understood. A few key metrics should be reported widely, and each should add a significant and unique piece of information about the organization's health. One level down, a similar report should break down each team's performance. Beyond that, time and resources should be invested in the creative analysis of raw data, encouraging the creation of analytics-driven business stories.
Getting this right will involve a culture change more than anything: a move away from trusting the person who knows their numbers to trusting the person who provides the most genuine insight.
I know of a loan origination operation that charges salespeople a token fee for any declined application they ask to have manually referred, forcing them to consider the merits of the case carefully before adding to the costs. A similar approach might be helpful here, charging audiences for access to monthly reports on a per-metric basis – this could be an actual monetary fine that is saved up for an end-of-year event, or a virtual currency awarded on a quota basis.


The purpose of analytics is to guide business practices by empowering decision makers with clear and accurate insights into the problem at hand.  So even the best piece of analytics can fall short of this goal if the link between the analyst and the ultimate decision maker is ineffective.  Therefore, analysts should invest time in perfecting the art of presenting their findings, not just the science of reaching them.

A good presentation begins when the project begins, not only once the results have been calculated. In order for a piece of analysis to effectively guide decision-making, its objectives must be aligned with the project's objectives from the very start.

The easiest way to ensure that the analyst is working in the same direction as the decision maker is to employ the story board technique. Much like a film maker creates a high-level story board to explain how their story will develop from scene to scene, an analyst should draw a high-level story board showing how the story underlying their analysis will develop from slide to slide. The analysis should proceed only once the decision maker has agreed that the logical flow presented will achieve the desired end goal. No fancy software is needed; story boarding can be done by hand or in PowerPoint.

One way to keep the flow clear is to use the headings as summaries of each slide's message. For example, instead of using a heading along the lines of 'Utilisation Figures' in the second slide above, I used 'Utilisation is very risk-biased'. The audience immediately knows where I am going with the slide and doesn't need to work towards that conclusion as I speak. This simple trick will also help you to quickly spot inconsistencies in the story flow.

The story board method works because, in many ways, a good piece of analysis is like a film: it must tell a story that flows logically from one point to the next, culminating in a coherent and memorable message, and it must often find concise visual summaries for complex concepts.

Using the story board approach from the start helps to put the piece of analysis in context.  By defining the scope it prevents time being invested in non value-adding activities and by confirming a logical thread it ensures a fruitful outcome. 

The analyst should follow a structured process to create a logical and value-adding piece of analysis, such as the five-point plan below:

(1) the problem must be fully understood;

(2) the analysis must be designed to address each key aspect of the problem;

(3) the analysis must be carried out;

(4) the results should be interpreted in terms of the problem and used to create the final presentation;

(5) actual performance of the solution should be monitored and compared to expectations.

Understanding the problem is the most important step. Many an analyst feels that their understanding of a particular analytical technique is their key value offering. However, the results will be sub-optimal at best and value-destroying at worst unless the problem to which that technique is applied is well understood. Understanding a problem requires research into the business problem at hand, the key factors involved, the relationships between them and the relative priority of each. The analyst should not be happy until each of these is understood and all of the inherent assumptions have been challenged and proven valid.

When the analyst has a complete understanding of the problem they will be in a position to prioritise each component part.  Once the problem has been understood and its component parts prioritised, the analysis itself can be designed along the logical lines of the story.  Here dummy graphs and tables can be added to the story boards.  Once again, before the next step is taken it is worth verifying that the proposed measures will indeed prove the point covered by each particular story board.

Once the dummy graphs and tables have been inserted the analyst should ask themselves questions like: would a table showing the relative percentage of good and bad accounts with balances over 90% of their limit, when shown together with a table of average utilisations, prove that the current credit limit policy is enabling higher levels of bad debt?  If not, alternative measures should be considered and weighed in the same way. 
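As a rough sketch of how a dummy table like that might eventually be filled – the data, column names and the 90% cut-off here are illustrative, not taken from any real portfolio:

```python
import pandas as pd

# Hypothetical account-level data; statuses, balances and the 90% cut-off
# are illustrative only.
accounts = pd.DataFrame({
    "status":  ["good", "good", "good", "bad", "bad", "bad"],
    "balance": [300, 900, 100, 980, 450, 990],
    "limit":   [1000, 1000, 1000, 1000, 500, 1000],
})
accounts["utilisation"] = accounts["balance"] / accounts["limit"]
accounts["over_90"] = accounts["utilisation"] > 0.9

# Table 1: share of good vs bad accounts sitting above 90% of their limit.
print(accounts.groupby("status")["over_90"].mean())

# Table 2: average utilisation for good vs bad accounts.
print(accounts.groupby("status")["utilisation"].mean())
```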

It is important to note though that the intention is not to find the one graph that supports your pre-determined beliefs but rather to find a measure that will prove or disprove your key message.  The analyst should make this decision before the numbers are included to prevent this sort of intentional bias.  In the above example the decision is made before we know for sure what patterns will emerge from the data.  If the data later shows no significant difference in average balances and utilisations between each group, the analyst should be willing to accept that perhaps there is less value in the project than first imagined; they should not try to manipulate the results to hide this fact.

I said earlier that a presentation often has to use visual tools to concisely summarise complex concepts. These visual tools can include hand-drawn schematics (useful when drawn live as an interactive tool for explaining concepts, but less able to communicate numerical analysis accurately), graphs (less interactive but more accurate when it comes to presenting numerical results) and tables. When using visual tools it is important not to let the visuals distract from the message you want to communicate. The wrong scale can, for example, make trends seem to appear where they don't exist and disappear where they do. Excess information, unnecessary legends, the wrong choice of graph, etc. can all work to 'encode' your message. Your visual message should faithfully reflect the message of the underlying data, just in an easier-to-interpret medium.
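To illustrate the scale point, here is a small sketch (using matplotlib, with invented numbers) that plots the same gentle drift twice – once on a tightly cropped axis and once on a zero-based one:

```python
import matplotlib.pyplot as plt

# The same gentle drift plotted twice: a tightly cropped y-axis makes it look
# dramatic, a zero-based axis keeps it in proportion. Data is invented.
months = list(range(1, 13))
values = [100 + 0.3 * m for m in months]   # roughly a 3% drift over the year

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(months, values)
ax1.set_title("Cropped axis: looks dramatic")

ax2.plot(months, values)
ax2.set_ylim(0, 120)                       # zero-based scale
ax2.set_title("Zero-based axis: in proportion")

plt.tight_layout()
plt.show()
```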

The same logic applies to animations.  I believe that animations in presentations can add great value when used well but in many – if not most – cases they simply distract.  I tend to use animations when I wish either to create a sense of interaction or when the order in which events progress is important – as when discussing a process with multiple steps, each building on its predecessor.

Once the analysis has been designed and approved it must be delivered.  This is where the most focus has been traditionally and it is indeed a vital step.  The value that an analytical approach to problem solving brings to a business is the ability to make decisions based on a true understanding of the underlying business and its component parts.  Unless the analysis is accurate this is not possible and so great care must be taken when selecting and implementing analytical techniques.  However, this step is most valuable when it comes on top of the solid foundation created by each of the prior steps.

The results of the analysis must be substituted into the story board in place of the dummy graphs and tables.  The final touches should be applied to the presentation at this stage, as should any changes in the message necessitated by unexpected new information. 

Once the presentation is complete, it can be delivered to the decision maker in whichever format is most appropriate. Thought should be given to the choice of delivery channel: presentations that are to be delivered face-to-face should include fewer and less detailed bullet points, while those that are to be sent to a large, indirect audience should contain more detailed information.

However, that is not where the process should end.  I started this article by saying that the purpose of analytics is to guide business practices and so until the extent to which business practices have actually been changed – and the impact of those changes – has been understood, the ultimate value of the analysis will not be known.  Any piece of analysis should therefore cater for a period of on-going monitoring where key project metrics can be measured and the actual results compared to expected results.  The nature of each specific piece of analysis will dictate how long this period should be and which metrics should be included.  But, in all cases, the analysis can only be considered successful once it can be shown that the business has made beneficial changes based on it.


Unscrupulous crooks ensure that pyramid schemes are seldom out of the news for very long; cases like the high-profile Madoff affair have cost investors billions of dollars and made headlines worldwide. However, the principles behind them can also shed some light on more mundane issues, such as portfolio reporting: in the same way that rapid growth can create the illusion of sustainable results in a pyramid scheme, it can hide the true patterns in a set of data.

 

I’ll start with a simplified model of a pyramid scheme by way of illustration.  This scheme comes about when ten individuals are enticed into each investing $100 in a project with promised returns of 50% per annum.  Unbeknownst to these investors, there is no underlying business and the project is simply a screen for a pyramid scheme.

 

So, at the start of the first year the scheme has ten investors and $1 000 in capital. At the end of that same year, $500 of the start-up capital is used to fund dividend payments, which leave the initial investors blissfully ignorant. With a 'proven track record' the conman can now approach further investors: let's assume he gets twenty-five new investors at the start of the second year. With the resultant cash injection the scheme's capital reserves grow to $3 000 and, despite a greater dividend burden, the scheme still manages to end the year showing a growing capital balance. The scheme can now show two years of capital growth and two years of 50% annual pay-outs and, as news of this spreads, fifty new investors sign up, growing the capital balance to $6 250 – more than enough to once again make a 50% pay-out to all investors.

 

So long as the scheme continues to double the total number of investors each year, it will continue to produce these impressive ‘results’.  However, should something occur to restrict the influx of new investors, the true underlying performance of the scheme will become quickly and irreversibly apparent.

 

Let us assume in our example that rumours begin circulating linking other bad deals to our conman, and that these scare away many potential investors. Thus, despite another year of extraordinary 'performance', only twenty-five new investors join the scheme in its fourth year. Cash from these new investors increases the capital balance to $4 500, but they also increase the pay-out burden: to $5 500. In other words, the scheme is no longer able to make a full dividend pay-out to all of its investors. So, after its first year of disappointing results, there is an even larger drop in investor demand and only ten new investors can be found at the start of the fifth year. These investors inject capital totalling $1 000 while simultaneously increasing the total dividend burden to $6 000. After a second poor year, the true nature of the scheme is revealed and, as investors rush to liquidate their investments, they find that there is no capital left to do so. Some of the investors have done quite well out of the deal and others have lost almost everything: the first group of investors has doubled its money, the last group has lost 90% of theirs – and this in a scenario where no money is even taken out of the scheme!
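The whole story can be reproduced in a few lines. This is a minimal sketch of the toy scheme described above – $100 per investor, a promised 50% annual pay-out, no underlying business, and the simplifying assumption that the scheme pays out whatever it holds when it falls short.

```python
# Toy model of the scheme described above: $100 per investor, a promised 50%
# annual 'dividend', no underlying business. If the scheme cannot afford the
# full pay-out it simply pays whatever it holds (a simplifying assumption).
new_investors_per_year = [10, 25, 50, 25, 10]
investment, dividend = 100, 50

capital, investors = 0, 0
for year, new in enumerate(new_investors_per_year, start=1):
    investors += new
    capital += new * investment          # inflow from new investors
    owed = investors * dividend          # 50% pay-out promised to everyone
    paid = min(owed, capital)
    capital -= paid
    print(f"Year {year}: investors={investors:3d}, "
          f"capital after inflow={capital + paid:5.0f}, "
          f"dividends owed={owed:5.0f}, shortfall={owed - paid:5.0f}")
```

Running it reproduces the figures in the story: capital after inflows of $1 000, $3 000 and $6 250 in the first three years, then a $4 500 balance facing a $5 500 obligation in year four and a $6 000 obligation against only $1 000 of new money in year five.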

 

The reason that such a scheme could continue to run undetected is that the rapid growth in the inflow of money masked the equally large, but delayed, outflow of money.

 

In a similar way, rapid growth in a portfolio of loans can mask a worsening of risk metrics and can lead to incorrect strategy decisions or a delay in the implementation of corrective measures.  I will clarify this statement by once again using an illustrative example: albeit one that requires the use of data tables.

 

Assume you have taken over the management of a previously stable portfolio of loans. Your intention is to market these loans to a new client population and thus to grow the size of the book while keeping risk unchanged. At the end of April you are given the figures below, against which you intend to benchmark the book's performance under your management.

[Table: Reporting01 – April benchmark figures]

Due to the current financial crisis, risk is the major concern and it has been decided that any increase in risk should be identified as quickly as possible. With this in mind, you agree to run a one-month pilot project, at the end of which risk will be measured. Should the risk of the book – account balances at three months delinquent as a percentage of up-to-date account balances – be seen to be increasing, the new project will be stopped immediately.

 

At the end of May, things appear good. Up-to-date account balances have risen by 10% (from 12 734 to 14 008) while account balances at three months delinquent have risen by just 6% (from 531 to 562). The net result of these two changes is that the key risk ratio didn't just fail to increase; it actually fell from 4.17% to 3.84%.

[Table: Reporting02 – May figures]

With this seen as sufficient evidence, the go-ahead is given to continue with the new strategy in an even more aggressive manner.  Over the next two months, the value of current balances grows by 15% a month and the risk metrics continue to remain within the pre-set benchmarks. 

 

However, in July a slight up-tick is seen. As the risk metric has not actually exceeded the benchmark, the corrective action is mild, with only some of the acquisition activities being slowed: growth of the book drops to 10%. The figures continue to worsen and, in August, all acquisition activity is stopped.

[Table: Reporting03 – June to August figures]

Yet, despite the cessation of all acquisitions, the risk figures continue to worsen over the next three months and ultimately the year ends with an average risk figure of more than double the pre-set benchmark.

[Table: Reporting04 – figures for the remainder of the year]

So, how is it possible that such a large change in risk could happen overnight, despite corrective action being taken so swiftly? It is possible because the change didn't happen overnight. In fact, the change had been happening since the first month of the new strategy. It wasn't visible then, however, because the risk metrics in place were 'tricked' by timing.

 

Risk is only realised over time. In order for an account to become three months in arrears, for example, it needs to start as up-to-date. By necessity, it must take a full month to become one month in arrears, another to reach two months in arrears and a third to reach three months in arrears. So, in the same way that pyramid schemes use the delay between investment inflows and dividend outflows to let new investors fund old obligations, the delay between risk being acquired and risk becoming apparent allows new growth to pay for older risk – and thus to hide worsening trends.

 

When new accounts were acquired in May, new risk was taken on and the risk of the portfolio as a whole began to worsen. However, no new risk was evident. The new accounts were immediately brought into the calculation as, by definition, up-to-date. The accounts at three months delinquent had rolled from two months delinquent in April – at a slightly faster rate than before – and were compared against this inflated up-to-date balance. The net result was a significant apparent improvement in the ratio, as the large artificial increase in the denominator dwarfed the smaller – but real – increase in the numerator.

 

To counteract this, a time lag must be built into all risk metrics. Rather than comparing the value of accounts at three months in arrears to the value of accounts that are up-to-date today, it should be compared to the value of accounts that were up-to-date three months ago: when they began their slide into arrears. May's increased value of up-to-date accounts will then only affect the risk ratio when it is compared to the related increase or decrease in account balances at three months delinquent in August.

 

The impact of this simple change is clearly evident in the figures below, where the increase in risk is already apparent by the end of May. Now, the figure of 562 is compared to February's figure of 12 240, not to May's figure of 14 008. Had this reporting been in place at the time, it would have been possible to halt the project long before any further risk was acquired.

[Table: Reporting05 – risk ratios with a three-month lag]
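The two ways of computing the ratio can be sketched with the handful of balances quoted above (the full tables are not reproduced here, so treat the exact outputs as illustrative):

```python
# Balances quoted in the text: up-to-date balances by month, and balances
# at three months delinquent. The full tables are not reproduced here.
up_to_date = {"Feb": 12_240, "Apr": 12_734, "May": 14_008}
three_months_delinquent = {"Apr": 531, "May": 562}

# April benchmark: delinquent balances over that month's up-to-date balances.
benchmark_apr = three_months_delinquent["Apr"] / up_to_date["Apr"]   # ~4.17%

# Naive May ratio: new, still-clean balances inflate the denominator.
naive_may = three_months_delinquent["May"] / up_to_date["May"]

# Lagged May ratio: compare against the up-to-date balances of three months
# ago, when today's three-month-delinquent accounts began their slide.
lagged_may = three_months_delinquent["May"] / up_to_date["Feb"]      # ~4.59%

print(f"April benchmark : {benchmark_apr:.2%}")
print(f"May, naive      : {naive_may:.2%}")   # looks better than the benchmark
print(f"May, lagged     : {lagged_may:.2%}")  # already worse than the benchmark
```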

This can hardly be considered a ground-breaking revelation, but it hopefully goes some way to casting the spotlight on a simple but important principle of reporting: in order for the numbers we report to add value to the business, they need to accurately reflect reality. This principle is usually applied when we consider which data to include, but it should also be applied when we consider which time period to include. Targets and benchmarks have little value until they are logically linked to the greater business which they aim to reflect.
