It’s been nearly three years since I last wrote an article for this site, so it’s perhaps cheeky of me to use it now for personal gain, but here you go: those three years have not been idle ones on the writing front; I’ve just been writing about something a little more exciting. Like shipwrecks, car chases, and upside-down shootouts.

Drachen, my debut thriller, is available for pre-order on Amazon (US: http://amzn.com/B0133U3HGC UK: http://www.amazon.co.uk/dp/B0133U3HGC).


A marine archaeologist standing up for herself. A psychopath with mother issues. A hitman who hates failure. A soldier with a point to prove. A policeman out on a limb. And a treasure that tests every allegiance.


Brett Rivera might not know exactly what’s going on, or who she can trust, but she’s in the race of her life and she knows she’s not going to give up. After three years of searching she has found the wreck of the Drachen, and it goes downhill from there: first the hold is empty, then she’s attacked, then she’s almost killed.

Why is a mother-obsessed psychopath spending so much money to catch her? Who is the British soldier really? How are the hazy amber globe and the rusted keys she recovered supposed to help her locate the Hanseatic League’s greatest lost treasure?

Brett doesn’t know, but she has two things in her favour: Patrick, her best friend, and an ancient book which just might be the missing piece. She is pursued in Finland, double-crossed in Tallinn, abducted in Lübeck, shot at in Bremen, and she’s not taking it lying down.


A shipwreck. A lost treasure. A hell of a race from one to the other.


Reject Inference

I wrote my layman’s introduction to scoring a while ago now and never delivered the more in-depth articles I promised. This is the first in a series correcting that oversight. The team at Scorto has very kindly provided me with a white paper on scorecard building, which I will break into sections and reproduce here. In this first article, I’ll look at reject inference, a topic readers have asked about before.

One of the inherent problems with a scorecard is that while you can easily test whether you made the right decision in accepting an application, it is much harder to know whether you made the right decision in rejecting one. In the day-to-day running of a business this might not seem like much of a problem, but it is dangerous in two ways:
· it can limit the highly profitable growth opportunities around the cut-off point by hiding any segmenting behaviour a characteristic might have; and
· it can lead to a point where the data available for creating new scorecards represents only a portion of the population likely to apply. As this portion is disproportionately ‘good’, it can cause future scorecards to under-estimate the risk present in a population.
Each application provides a lender with a great deal of characteristic data: age, income, bureau score, etc. That application data is expensive to acquire, but of limited value until it is connected with behavioural data. When an application is approved, that value-adding behavioural data follows as a matter of course and comes cheaply: did the customer of age x, with income y and bureau score z, go “bad” or not? A rejected application generates no such data – unless we go out of our way to get it, and that is where reject inference comes into play.

The general population in a market will have an average probability of bad that is influenced by various national and economic factors but is generally stable. A smaller sub-population makes up the total population of applicants for any given loan product – the average probability of bad in this applicant population rises and falls more easily with marketing and product design. It is the risk of that total population of applicants that a scorecard should aim to understand. However, the data from existing customers is not a full reflection of that population: it has been filtered through the approval process and stripped of a lot of its bads.
Very often the key data problem in a scorecard build is the lack of information on “bad”, since that is what we are trying to model – the probability that an application with a given set of characteristics will end up “bad”. The more conservative the scoring strategy in question, the more the data will be concentrated in the better score quadrants and the weaker it will become for future scorecard builds.
Clearly we need a way to bring back that information. Just because the rejected applications were too risky to approve doesn’t mean they’re too risky to add value in this exercise. We do this by combining the application data of the rejected applicants with external data sources or proxies. The main difficulty with this approach is the unavailability and/or inconsistency of the data, which may make it difficult to classify an outcome as “good” or “bad”. A number of methods can be used to infer the performance of rejected applicants.

Simple Augmentation
Not all rejected applications would have gone bad. We knew this at the time we rejected them; we just knew that too few would stay good to compensate for those that did go bad. So while a segment of applications with a 15% probability of bad might be deemed too risky, 85% of them would still have been good accounts. Using that knowledge, we can reintroduce the rejected applications into the data exercise.

· A base scoring model is built using data from the borrowers whose behavior is known – the previously approved book.
· Using the developed model, the rejected applications are scored, an estimate is made of the percentage of “bad” borrowers among them, and that performance is assigned at random, but in proportion, across the rejected applications.
· The cut-off point should be set in accordance with the rules of the current lending policy that define the permissible level of bad borrowers.
· Information on the rejected and approved requests is merged and the resulting set is used to build the final scoring model.
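As a rough illustration of these steps, here is a minimal sketch in Python. All scores, counts, and the 20% bad rate are invented for the example; the random assignment is the “at random but in proportion” allocation described above:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical base-model scores for six rejected applications.
reject_scores = np.array([610, 595, 640, 580, 605, 630])

# Suppose the base model (built on the approved book) estimates a
# 20% bad rate among these rejects.
estimated_bad_rate = 0.20

# Assign "bad" performance at random, but in proportion, across the rejects.
n_bad = round(len(reject_scores) * estimated_bad_rate)
inferred_bad = np.zeros(len(reject_scores), dtype=bool)
chosen = rng.choice(len(reject_scores), size=n_bad, replace=False)
inferred_bad[chosen] = True

# The rejects, with their inferred outcomes, would then be merged with
# the approved book before building the final scorecard.
print(inferred_bad.sum())  # 1
```

In a real build the bad rate would of course vary by score band rather than being applied as a single flat percentage.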

Accept/ Reject Augmentation
This method corrects the weights of the base scoring model by taking into consideration the likelihood of a request’s approval.
· The first step is to build a model that evaluates the likelihood of a request’s approval or rejection.
· The weights of the characteristics are then adjusted to take into account the likelihood of approval or rejection determined in the previous step. This is done so that the resulting weights are inversely proportional to the likelihood of the request’s approval. So, for example, if the original approval rate in a certain cluster was 50%, each approved record is replicated to stand in for itself and for the one that was rejected.
· This method is preferable to Simple Augmentation, but it is not without its own drawbacks. Augmentation can create two key problems: the impact of small and unusual groups can be exaggerated (such as low-side overrides for VIP clients); and, because you have only modelled on approved accounts, the predicted approval rates will be either 0% or 100% in each node.
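The replication step amounts to inverse-probability weighting, which can be sketched as follows (the records and approval probabilities are invented for illustration):

```python
# Each approved record is weighted by the reciprocal of its modelled
# probability of approval, so that it stands in for the rejected
# look-alikes that never generated outcome data.

approved = [
    {"score": 700, "p_approve": 0.90, "bad": 0},
    {"score": 640, "p_approve": 0.50, "bad": 1},
    {"score": 660, "p_approve": 0.75, "bad": 0},
]

for record in approved:
    record["weight"] = 1.0 / record["p_approve"]

# A record from a cluster with a 50% approval rate gets weight 2.0:
# it represents itself plus one rejected look-alike.
print([round(r["weight"], 2) for r in approved])  # [1.11, 2.0, 1.33]
```

The final scorecard is then trained on the approved records using these weights in place of simple counts.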

Fuzzy Augmentation
The distinguishing feature of this method is that each rejected request is split and used twice, to reflect the likelihood of both the good and the bad outcome. In other words, if a rejected application has a 15% probability of going bad, it is split so that 15% of the record is assumed to go bad and 85% is assumed to stay good.
· Classification
– The rejected requests are scored using a base scoring model built on requests with a known status;
– The likelihood of default p(bad) and of the “good” outcome p(good) are determined based on the set cut-off point, which defines the required percentage of “bad” requests (p(bad) + p(good) = 1);
– Two records, corresponding to the likelihoods of the “good” and “bad” outcomes, are formed for each rejected request;
– The rejected requests are then evaluated taking both likelihoods into consideration: the record representing the “good” outcome is assigned the weight p(good), and the record representing the “bad” outcome is assigned the weight p(bad).
· Clarification
– The data on the approved requests is merged with the data on the rejected requests, and the rating of each request is adjusted to take into consideration the likelihood of its approval. For example, the frequency of the “good” outcome for a rejected request is evaluated as the “good” outcome multiplied by the weight coefficient.
– The final scoring model is built based on the combined data set.
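Mechanically, the splitting step looks something like this (a sketch with invented request IDs and probabilities):

```python
# Each rejected request becomes two weighted records: a "bad" record
# with weight p(bad) and a "good" record with weight 1 - p(bad).

rejects = [
    {"id": "R1", "p_bad": 0.15},
    {"id": "R2", "p_bad": 0.40},
]

augmented = []
for rec in rejects:
    augmented.append({"id": rec["id"], "outcome": "bad", "weight": rec["p_bad"]})
    augmented.append({"id": rec["id"], "outcome": "good", "weight": 1 - rec["p_bad"]})

# Each reject contributes exactly one unit of weight in total, so the
# inferred population is neither over- nor under-counted.
total = sum(r["weight"] for r in augmented)
print(len(augmented), round(total, 6))  # 4 2.0
```

These weighted records are then appended to the approved book and the final model is fitted with the weights as frequency weights.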

Reject inference is not a silver bullet. Used inexpertly, it can lead to less accurate rather than more accurate results. Wherever possible, it is better to augment the exercise with a test-and-learn experiment to understand the true performance of small portions of key rejected segments. A new scorecard can then be built on the data from that test segment alone, and the true bad rates from that model can be compared and averaged with those from the reject inference model to get a more reliable bad rate for the rejected population.




On a completely unrelated matter, but one a lot more important than credit risk strategy in the grander scheme of things, it’s great to see Chinese celebrities raising awareness about the terrible and futile cost of the trade in rhino horn.

We usually assume that, in a given situation, the more conservative of two strategies will better protect the bank’s interests. So in the sort of uncertain times we are facing now, it is common to migrate towards more conservative approaches, but this isn’t always the best idea.
In fact, a more conservative approach can sometimes encourage the sort of behaviour that it aims to prevent. Provisions are a case in point.

Typically, provisions are calculated based on a bank’s experience of risk over the last six months, as reflected in the net roll-rates. This period is long enough to smooth out once-off anomalies and short enough to react quickly to changing conditions.
However, we were recently asked whether it wouldn’t be more conservative to use the worst net roll-rates of the last 10 years. While this is technically more conservative (since the worst roll-rates in 120 months are almost certainly worse than the worst roll-rates in 6 months), it could actually help to create a higher-risk portfolio. Yes, the bank would immediately be more secure, but over time two factors are likely to push risk in the wrong direction:

1) The provision rate is an important source of feedback. It tells the originations team a lot about the risk that is coming into the portfolio from internal and external forces; the sooner the provisions react to new risks, the sooner the originations strategies can be adjusted. Because a 10-year worst-case scenario is an almost static measure, unaffected by changes in risk, new risk could enter the portfolio without triggering any warnings, and a slow, unintentional slide in credit quality would result.
2) Admittedly, other metrics can alert a lender to increases in risk, but there is another incentive at work, because provisions are the cost of carrying risk: by setting the cost of risk at a static and artificially high level, you change the risk-reward dynamic in a portfolio.
A low-risk customer segment should have a low cost of risk, allowing you to grow a portfolio by lending to low-risk/low-margin customers. However, if all customers carry a high cost of risk regardless, only high-margin customers are profitable; and since high-margin customers are usually also higher risk, there would be an incentive to grow the portfolio in the riskiest segments.

In cases where the future is expected to be significantly worse than the recent past, it is therefore better to apply a flat provision overlay: a once-off increase in provisions that will increase coverage but still allow provisions to rise and fall with changing risk.
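To make the arithmetic concrete, here is a toy calculation contrasting a roll-rate-based provision with the same provision under a flat overlay. Every balance, roll-rate, and the 25% overlay are invented for the illustration:

```python
# Toy provision calculation from 6-month average net roll-rates.
balance_current = 1_000_000.0  # balance in the "current" bucket

roll_1 = 0.05  # avg. net roll: current -> 1 month in arrears
roll_2 = 0.40  # 1 -> 2 months in arrears
roll_3 = 0.70  # 2 -> 3 months in arrears (write-off proxy)

# Chaining the roll-rates gives the expected loss rate on current balances.
loss_rate = roll_1 * roll_2 * roll_3
base_provision = balance_current * loss_rate

# A once-off overlay raises coverage for an expected downturn while the
# underlying provision still moves with the 6-month roll-rates.
overlay = 0.25
provision = base_provision * (1 + overlay)

print(round(base_provision), round(provision))  # 14000 17500
```

If the roll-rates improve next month, the base provision (and the overlaid provision with it) falls immediately, which is exactly the feedback a 10-year worst-case rate would suppress.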

You will almost certainly have heard the phrase, ‘you can’t manage what you don’t measure’. This is true, but there is a corollary which is often not considered: ‘you have to manage what you do measure’.

To manage a business you need to understand it, but more reports do not necessarily mean a deeper understanding. More reports do, however, mean more work, often exponentially more work. So while regular reporting is obviously important for the day-to-day functioning of a business, its extent should be carefully planned.
Since I started this article with one piece of trite wisdom, I’ll continue. I’m trying to write my first novel – man cannot live on tales of credit risk strategy alone – and in a writing seminar I attended, the instructor made reference to a piece of wisdom he picked up in an otherwise forgettable book on scriptwriting: ‘if nothing has changed, nothing has happened’.
It is important to look at the regular reports generated in an organization with this philosophy in mind: do the embedded metrics enable the audience to change the business? If the audience is not going to – or is not able to – change anything based on a metric, then nothing is actually happening; and if nothing is happening, why are we spending money doing it?
Don’t get me wrong, I am an ardent believer in the value of data and data analytics; I just question the value of regular reporting. The two subjects are definitely related, but they’re not just different: at times I believe they are fundamentally opposed.

An over-reliance on reporting can damage a business in four ways:

Restricting Innovation and Creativity
Raw data – stored in a well-organized and accessible database – encourages creative and insightful problem solving, it begs for innovative relationships to be found, provides opportunities for surprising connections to be made, and encourages ‘what if’ scenario planning.
Reports are tools for managing an operation. Reports come with ingrained expectations and encourage more constrained and retrospective analysis. They ask questions like ‘did what we expected to happen actually happen?’
The more an organization relies on reports, the more, I believe, it will tend to become operational in nature and backward-focused in its analytics, asking and explaining what happened last month and how that differed from plan and from the month before. Yes, it is important to know how many new accounts were opened and whether that was more or less than planned for in the annual budget, but no one ever changed the status quo by knowing how many accounts they had opened.
The easiest way to look good as the analytics department in an organization with a heavy focus on reports is to have those reports show stable numbers in line with the annual plan, raising as few questions as possible; and the easiest way to do that is to implement the same strategy year after year. To look good in an organization that understands the real value of data, though, an analytics department has to add business value: it has to delve into the data, come up with insightful stories about relationships that weren’t known last year, and design and implement innovative strategies that are by their nature hard to plan accurately in an annual budgeting process but which have the potential to change an industry.

Creating a False Sense of Control
Reports also create an often false sense of accuracy. A report, nicely formatted and with numbers showing month-on-month and year-to-date changes to the second decimal place, carries a sense of authority: if the numbers today look like the numbers did a year ago, they feel like they must be right. But if the numbers today look like the numbers did a year ago, there is also less incentive to test the underlying assumptions, and the numbers can only ever be as accurate as those assumptions: how is profit estimated, how is long-term risk accounted for, how are marketing costs treated, how much growth is assumed, and is all of this still valid?
Further, in a similar way to how too many credit policies can end up reducing the accountability of business leaders rather than increasing it, when too much importance is placed on reporting managers become accountable for knowing their numbers, rather than knowing their businesses. If you can say how much your numbers changed month-on-month but not why, then you’re focusing on the wrong things.

Raising Costs
Every report includes multiple individual metrics and goes to multiple stakeholders, and each of those metrics has the potential to raise a question with each of those stakeholders. This is good if the question influences the actions of the business, but the work involved in answering a question is unrelated to the value of answering it, so as more metrics of lesser importance are added to a business’ vocabulary, the odds of a question generating non-value-adding work increase rapidly.
Once it has been asked, it is hard to ignore a question pertaining to a report without looking like you don’t understand your business, but sometimes the opposite is true. If you really understand your business you’ll know which metrics are indicative of its overall state and which are not. While your own understanding of your business should encompass the multiple and detailed metrics impacting your business, you should only be reporting the most important of those to broader audiences.
And it is not just what you’re reporting, but to whom. Often a question asked out of interest by an uninvolved party can trigger a significant amount of work without providing any extra control or oversight. Better reports and better audiences should therefore replace old ones; metrics that are not value-adding in a context should not be displayed in that context, or the audience should change until the context is right.

Compounding Errors
The biggest problem, though, that I have with a report-based approach is the potential for compounding errors. When one report is compiled based off another report there is always the risk that an error in the first will be included in the second. This actually costs the organization in two ways: firstly the obvious risk of incorrectly informed decisions and secondly in the extra work needed to stay vigilant to this risk.
Numbers need to be checked and rechecked, formats need to be aligned or changed in synchronization, and reconciliations need to be carried out where constant differences exist – month-end data versus cycle end data, monthly average exchange rates versus month-end exchange rates, etc.
Time should never be spent getting the numbers to match; that changes nothing. Time should rather be spent creating a single source of data that can be accessed by multiple teams and left in its raw state, so that any customization of the data by one team remains isolated from all other teams.

Reports are important and will remain so, but their role should be understood. A few key metrics should be reported widely, each adding a significant and unique piece of information about the organization’s health; one level down, a similar report should break down each team’s performance. Beyond that, time and resources should be invested in the creative analysis of raw data, encouraging the creation of analytics-driven business stories.
Getting this right will involve a culture change more than anything, a move away from trusting the person who knows their numbers to trusting the person who provides the most genuine insight.
I know of a loan origination operation that charges salespeople a token fee for any declined application they ask to be manually referred, forcing them to consider the merits of the case carefully before adding to the costs. A similar approach might be helpful here: charging audiences for access to monthly reports on a per-metric basis. This could be an actual monetary fine saved up for an end-of-year event, or a virtual currency awarded on a quota basis.

The thing is: no one really cares about banking products. There’s no idolizing of the guys who started AmEx Cards or CapitalOne, no queue outside HSBC the night before a new card is launched. This is a problem because people only buy things they care about, or things they need and for which there is no alternative.

Banks used to keep outside competitors away with the huge capital and regulatory costs of setting up a payments system, but as more commerce moves online and as those costs drop, the barriers will fall.
The problem is that cards are essentially commodities. With a few exceptions, a credit card is a credit card is a debit card, even. This is especially true as the actual plastic starts to play a smaller role in the transaction. In freeing customers from location-specific branch and ATM networks, online banking has also removed the personal relationship that may once have made a bank something more than a logo on a card.
The credit card survives – and indeed still thrives – because it is the most convenient way for most people to make most payments, at the moment, but this is changing. With more and more online and mobile alternatives, banks will have to start competing with more retail-savvy competitors and to do that they need to reconsider the way they consider and market their products.
Traditionally, banks spent large amounts on above-the-line advertising to attract and retain customers, to whom they offered a suite of standard products: a one-size-fits-all model. Then stand-alone credit card issuers and other niche companies started to attack the banks’ market share with tailored products offered through direct marketing campaigns: altered by the in-store tailor, as it were, but still not 100% customized.
Direct marketing is no longer enough because it works on a few key principles, all of which are being undermined: the contact must be made at a time and place where the customer is open to the idea of a new card, but in a flooded market the chance of your contact reaching a customer before a competitor’s in this window period is getting smaller, and you’re almost always contacting them at home; the contact must come in a medium that is relevant to the customer, but both mail and email are becoming less relevant; and the offer should appeal to a particular niche, but a direct marketing campaign, even a niche one, must involve a degree of compromise.
A new model is needed that can reach customers at a convenient time and place, through a relevant medium, to offer products tailored to their needs, cheaply. The last word is especially important because banks have long used vague pricing structures to protect themselves from commodity prices, but new laws and competition from more transparent – and even ‘no cost’ – competitors will drive prices down, making only the most efficient banks profitable.
This article is an attempt to run with that idea, sometimes beyond the limits of practicality; hopefully in doing so I will raise some interesting questions about what is and isn’t important in the modern, mass market credit card business.

That’s where the idea of the credit card vending machine took root: it is a symbol of efficient, convenient, ‘productized’ transactional banking, turning the credit card marketing model around to offer customized cards to customers in convenient locations, without paperwork and at low cost.
I envisage a customer approaching a machine in a shopping mall, choosing a card design from the display, entering the relevant data, selecting product features, paying a fee based on the feature bundle, and then waiting while the machine embosses, encodes and produces their card.

The concept is simply an amalgamation of components that are all already available and automatable:
·        an online application form,
·        a means of automated customer verification (ID card scanning and fingerprint reading in Hong Kong, for example),
·        a secure communications channel, and
·        a card embossing machine.

Data Capture
I hate forms, especially hand written forms. Every time someone asks me to write out my name and address I immediately assume they value bureaucracy over customer service.
Instead, the data capture process should be designed to leverage stored data, focusing on verifying data rather than capturing it. In Hong Kong I can use my government-issued identity smartcard and a scan of my thumbprint to enter and leave the country; the same tools could provide my demographic data, which could then be supplemented by bureau and internal databases, requiring me to enter only minimal extra information. An ATM card and PIN code might do the same thing.
Where this is not possible, the interface would need to provide a simple and intuitive means of capturing data manually.
Customer Acquisition
Credit acquisition strategies should already be automated. Very little about them will change; they’ll just be implemented closer to the customer. Hosting them in a vending machine – or running them via secure link to the bank’s systems – is also no different: just many smaller machines processing the data rather than one big one. In fact, if there is anything in your processes that can’t be automated in this way, you should probably re-evaluate its cost:benefit trade-off anyway.
In terms of marketing, being located closer to the point of use also makes it easier to run short-term, co-branded campaigns.
Product Selection
Once the data has been captured and the credit and profitability scores have been calculated, a list of product features can be made available, either explicitly or as shadow limits. The obvious way to do this would be to allow a customer to add features onto a low cost, low feature basic card: higher limits, a reward programme, limited edition designs, etc. all with an associated higher fee.
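As a sketch of how this might work behind the screen, the function below maps a score to the menu of add-ons the machine would display. The function name, score thresholds, and fees are all hypothetical:

```python
def feature_menu(credit_score: int) -> dict:
    """Return the add-on features, with fees, offered at a given score.
    The thresholds act like shadow limits: the customer only ever sees
    the options their score has unlocked, never the score itself."""
    menu = {"basic card": 0.0}  # low-cost, low-feature default
    if credit_score >= 600:
        menu["higher limit"] = 20.0
    if credit_score >= 650:
        menu["reward programme"] = 35.0
    if credit_score >= 700:
        menu["limited edition design"] = 10.0
    return menu

print(feature_menu(660))
# {'basic card': 0.0, 'higher limit': 20.0, 'reward programme': 35.0}
```

The same structure accommodates profitability scores, co-branded promotions, or per-machine pricing without the customer ever seeing the underlying strategy.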
But I’m not threatening anyone’s job here. Any number of strategies can be implemented in the background. The product characteristics might be customer selected, but the options provided and the pricing of those options will be based on analytics-driven credit strategies.
Even target market analysis is still important. In fact, you’ll have one more important data point: demographic data will still allow you to model risk and behaviour based on home address, but you’ll now also know where they shop, allowing you to model behaviour in more detail.
Just because credit card designs don’t obviously affect the standard profit levers doesn’t mean they can’t be important influencers of application volumes, yet most banks offer only two or three options in each product category.
In part this is because the major card companies want to protect their visual brand identities, but mainly it is because it is hard to advertise hundreds of different card designs to your customers without confusing them.
By filling each machine with a unique selection of generic and limited edition designs, though, you could offer a selection of designs to the market that is never overwhelming but which presents more opportunities for individualism across the market. You might even be able to offer an electronic display of all possible designs to be printed on white plastics.
Look, I started out managing fraud analytics on a card portfolio, and I know my old boss will be fuming at this stage; there are risks involved in storing blank plastics, and especially in storing the systems for encoding chips and magstripes. However, ATMs carry many of the same risks, and I believe they are sufficiently controllable to support the rest of the idea, at least in its intended purpose here.
Connecting the card to a funding account could be done offline afterwards, but I would prefer a model that had the customer link the card to their savings account by inserting their ATM card and entering the PIN; the bank could do the debit order/standing order administration in the background.
Finally, payments: I would propose a single-cost model where the actual card is paid for by debiting the funding account when the invoice is created, or by cash as with any other vending machine purchase. A single-cost model makes the process more transparent and helps to reposition the card as a product purchased willingly.

The systems that make the credit card vending machine could also be leveraged for other, revenue generating purposes.
It provides a channel that could revitalize card upgrades. Instead of linking upgrades to hidden product parameters, they can become customer-initiated and feature-driven. Learning from the internet’s status-badge mindset, banks could allow customers to insert a card into the machine, pay a small upgrade fee, and have it replicated on a new, limited-edition plastic – made available based on longevity and spend scores, for example, or even linked to retail brands, so that a Burberry Card might become available only once you have spent $5,000 or more in a Burberry store on a vending machine card. Multiple, smaller upgrades would create a new and different revenue stream.
The machines could also act as a channel for online application fulfillment. Customers who have applied online, or who need to replace a lost or damaged card, could have those cards printed at the most convenient vending machine rather than having to visit a branch.

The way I have spoken about the credit card vending machine is as a new and somewhat quirky sales channel for generic cards in a generic marketplace – a Visa Classic Card with a choice of limits, reward programmes and designs, for example. In other words, I have positioned it as a better way to make traditional credit cards relevant in a retail environment.
But it could also offer opportunities in other ways: for example, in the unbanked sectors of places like South Africa, where branch networks are prohibitively expensive to roll out in low-income, rural areas. There, customers incur significant costs to reach a bank for even simple services. Though mobile banking is making inroads, there is still room for card-based transactional banking. A credit card vending machine would be more difficult to get right in this sort of environment, but done right it would be a cheap way for innovative lenders to expand market share.

This article is not intended to stand as a business proposal, but rather to highlight the parts of the traditional lending business that I feel are most at risk from competition and irrelevance. A review of your marketing efforts and team structures with this in mind might reveal functions that are no longer needed, product parameters that are too complex, or attitudes to customer service that need to be improved.

Every lending organisation needs a good credit policy but at what point does ‘good’ policy become ‘too much’ policy?

There is of course a trade-off between risk control and operational efficiency but their relationship isn’t always as clear as you might think and it continues to evolve as the lending industry moves away from hard-coded, one-size-fits-all rules to more dynamic strategies.

So how do we know when there is too much policy? In my opinion, a credit policy is more like the army than the police force: it should establish and defend the borders of the lending decision, but problems arise when it becomes actively involved within those borders.
This manifests itself in the common complaint of policy teams spending too much time managing day-to-day policy compliance and too little time thinking about a policy’s purpose; creating a culture where people ask a lot of questions about ‘how’ something is done in a particular organization but very few about ‘why’ it is done.
This is a problem of policy process as well as policy content.

The process should not shift accountability along a sign-off chain
Credit policies tend to generate supporting processes that can easily devolve to a point where even simple change requests must pass through a complex sign-off chain. Where does the accountability reside in such a chain?
All too often, only at the top. It is of course important for the most senior approver to be accountable, but it is even more important that the original decision-maker is accountable, and the further up the chain a decision moves the less likely this is to be the case. Each new signature should not represent a new owner of the accountability but rather an additional co-owner alongside the original decision-maker.
Of course there are situations where a chain of sign-offs is a genuine safeguard, but in many more cases it serves to undermine the decision-making process by removing that key relationship between action and accountability. As a result, the person proposing an action is able to make lax decisions while the person agreeing to them is removed from the information and so more prone to oversights; bad proposals consume resources while moving up and down the sign-off chain or, worse, slip through the gaps and are approved.

The first step back in the right direction is to remove the policy team from the sign-off process. Since the policy already reflects their views, their sign-off is redundant. Instead, the business owner should be able to sign off on the fact that the proposed change is within the parameters set out in the policy, and should be held accountable for that fact.
By doing this the business can make faster decisions while simultaneously being forced to better understand the policy. But does it mean that the policy team should just sit back and assume all of the policies are being adhered to? No. The policy team still plays two important roles in the process: they provide guidance as needed and they monitor the decisions that have already been made, only now they do so outside of the sign-off process. In most cases there is sufficient time between a decision being made and it being implemented for a reactive check to still be effective.
The only cases that should require direct pre-emptive input from the policy team are those that the product team feels breach the current policy, which brings us to the second solution.

The content should not assume accountability, the person should
A credit policy that is rich in detail is also one that is likely to generate many insignificant breaches, and thus a constant stream of work for the policy team. Over time it is easy for any policy to evolve in this way, as new rules and sub-rules are added to accommodate perceived new risks or to adjust to changing circumstances; indeed, it is often in the policy team’s interest to allow it to do so. However, extra detail almost always leads to higher, not lower, risk.
Firstly, a complex policy is less likely to be understood and therefore more likely to shift accountability to the policy team through the sign-off chain, as discussed above. By increasing the volume of ‘policy exception’ cases you also reduce the time and resources available for each request, and so important projects may receive less diligence than they deserve.
But an overly complex policy can also shift accountability in another way: whenever you describe ten specific situations where a certain action is not allowed, you can be taken to imply that it is allowed in any other situation, freeing the actor from making a personal decision about its suitability to the situation at hand; the rule becomes more accountable than the person.
The first point is easily understood, so I’ll focus on the second. By filling your policy with detailed rules you imply that anything which doesn’t expressly breach the policy is allowable, and so expose the organisation to risks that haven’t yet been considered.
The most apt example I can think of to explain this point relates not to credit policy but to something much simpler – travel expenses.

I used to work in a team that travelled frequently to international offices, typically spending three to four days abroad at any one time. When I joined we were a small team with a large amount of autonomy, and my boss dictated the policy for travel-related expenses: when you’re travelling for work, eat and drink as you would at home if you were paying for it.
He told us not to feel that we should sacrifice just because the country we were in happened to be an expensive one – it was the organisation’s decision to send us there, after all – but similarly not to become extravagant just because the company was picking up the tab.
It was a very broad policy with little in the way of detail, and so it made us each accountable; it worked brilliantly, and I never heard of a colleague who abused it or felt abused by it.
That policy was inherently fair in all situations because it was flexible. However, in time our parent company bought another local company and our team was brought under their ‘more developed’ corporate structures, including their travel claim policies. These policies, like those at so many companies, tried to be fair by unwaveringly applying a single maximum value to all meal claims. In some locations this meant you could eat like a king, while in others austerity was forced upon you.
I don’t have data to back this up, but I am sure that it created a lose-lose situation: morale definitely dropped, and I’m certain the cost of travel claims increased as everyone spent up to the daily cap each day, either because they had to or simply because now they could without feeling any responsibility not to.
Of course this example doesn’t apply 100% to a credit policy, but much of the underlying truth remains: broader policy rules make people accountable, and so they needn’t increase risk; in many cases they actually decrease it.

A credit policy that says ‘we don’t lend to customer segments where the expected returns fail to compensate for their risk’ makes the decision-maker more accountable than a policy that says ‘we don’t lend to students, the self-employed or the unemployed’.
Under the former policy, if a decision-maker isn’t confident enough in their reasons for lending into a new segment, they can’t go ahead with that decision. If, on the other hand, they have solid analysis and a risk-controlled roll-out process in place, they can go ahead and, unhindered by needless policy, make a name for themselves and money for the business.
The latter policy, though, makes the decision-maker accountable only for the fact that the new customer segment was not one of those expressly prohibited, not for the fact that the decision is likely to be a profitable one.
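To make the contrast concrete, here is a minimal sketch – with entirely hypothetical segment names, function names and thresholds, not drawn from any real policy – of how the two styles behave. The detailed rule-list silently permits any segment it never anticipated, while the broad principle forces the decision-maker to supply, and stand behind, their own numbers:

```python
# Detailed policy: a fixed list of prohibited segments. Anything not
# listed is implicitly allowed -- the rule, not the person, decides.
PROHIBITED_SEGMENTS = {"student", "self-employed", "unemployed"}

def detailed_policy_allows(segment: str) -> bool:
    """Allow lending unless the segment is expressly prohibited."""
    return segment not in PROHIBITED_SEGMENTS

# Broad policy: a principle. The decision-maker must bring their own
# estimates of expected return and risk cost, and is accountable for them.
def broad_policy_allows(expected_return: float, risk_cost: float) -> bool:
    """Allow lending only where expected returns compensate for risk."""
    return expected_return > risk_cost

# A new, never-considered segment slips straight through the detailed
# policy, with no one accountable for the decision.
detailed_policy_allows("gig-worker")   # allowed by omission

# Under the broad policy the same proposal stands or falls on the
# decision-maker's own analysis.
broad_policy_allows(expected_return=0.08, risk_cost=0.11)  # not justified
```

The point of the sketch is only that the first function encodes accountability into the rule text, while the second leaves it with the person supplying the inputs.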

Of course, encouraging broader rules and more accountability presupposes that the staff in key positions are competent; if they are not, it’s not a new credit policy that you need…