
A single perspective can deceive: how to integrate risk & performance

Organizations worldwide are investing massively in financial and non-financial risk management. Processes, controls, tooling, education… the lot! Nevertheless, their executive boards invariably deal with it defensively and see it as a regulatory dissatisfier. Why is that? Surely not only to keep their license to operate? That would mean organizations spend millions of dollars on something that is addressed "just" to avoid regulatory problems and isn't actively used to achieve the goals of the organization. Does this imply we face the prospect of organizations managing their risks all right but ceasing to exist?

It is not that organizations wouldn't want to put those millions of dollars to use to achieve their goals, right? No, it is because something essential is missing: risk is not explicitly balanced with performance and therefore offers only a one-sided view of the organization and its operations.

Specialists, consultants, information managers and even knowledge systems – hardly anyone – make clear how performance, defined as achieving the goals of the organization, is impacted by changing the risk profile or appetite, nor, vice versa, which risk profile is required to realize the desired performance. Not even the faintest link is established.

Nonetheless, controlling both risks and the organization's quest to achieve its goals is no rocket science either. As organizations have all the required knowledge in place, this article not only demonstrates how a self-calibrating link between the risk run and the performance shown could easily be established (past, present and future), but also discusses two seriously impeding factors:

  • Data organization and information systems are too much of a (very expensive) mess.
  • The required effective organizational behavior of taking responsibility and being accountable has not been developed yet.

    Too often organizations and their management get away with underperformance by saying "but we managed the risks all right". So, evidently, an approach to address these two factors is presented too. Who knows, if we manage to do just that, we might be en route not only to managing risks all right but to sustaining and strengthening the existence of organizations too!
  • Part 1 Status quo risk management
  • Part 2 What is the problem?
  • Part 3 Integration is the answer
  • Part 4 And in practice?
  • Part 5 Data organization drives it
  • Part 6 Behavior drives it

Part 1…Status quo on risk management

We are not doing too badly today…


In our current society, most businesses are heavily regulated, not just nationally but in some sectors globally too. Sectors like the food supply chain or even children's day care are among the most heavily regulated, but due to the financial crisis of 2008, financial services in particular have been in the public eye.

All businesses, however, have sophisticated risk systems in place, incorporating risk classification, budgeting, measurement and reporting, drawing data from several, often unsynchronized application/system databases; they signal when risk budgets are (about to be) exceeded, prompting management to redirect. Well-known examples of such systems are Cerrix, Bwise etc. With any luck, governance and compliance are incorporated too.


But… organizations also have extensive performance management systems in place, quantifying performance KPIs and projecting performance into the (far) future. This "managerial reporting" is frequently done with tooling like Excel, Cognos or Business Objects. Power BI is getting more and more popular, also because of its modelling capabilities, making it a competitor to Matlab and its likes.


Having to comply with regulatory reporting requirements, organizations often use extended bookkeeping functionality combined with functionality for "operational P&L reporting", which is becoming more and more prominent in annual reporting. Yet the tooling for this is often based on technologies from twenty or more years ago.

Anyway, the link between all those systems and databases – risk, performance, managerial reporting, regulatory reporting etc. – is makeshift if not manual, ambiguous, unreliable and vulnerable. Moreover, regulatory attention to internal control has increased heavily in the last few years.

Organizations can no longer make do with a periodic update of the internal control system and have to prove they are actually in control; yet another model and another tool are introduced. These tools assess the effectiveness of controls and, yes, relating and synchronizing them to all those other systems is sketchy and vulnerable.

Where does this leave us?

Part 2…What is the problem?

Where does this leave us? These systems for risk, performance, bookkeeping, internal audit etc. and the (risk) modelling behind them work quite well in their own right, given the relatively low number of reported regulatory breaches of risk/audit/compliance guidelines. Nonetheless, the information landscape gets complex and very costly, as all these systems are built in different tooling and not well connected. Though intelligent interfaces and data warehouses are built on top of them, the data is not well synchronized either.


As oversight is hard to keep and interdependencies are not made explicit, information is fragmented, leading executives to redirect the execution of their strategy based on just a single angle. Including other angles would require weighing reliability, timeliness and correctness, making it too complex for a human mind to comprehend.


An example on content: hedge funds and pension funds trying to buy derivatives as protection against ever-lower interest rates. The decision is often based on a solid financial risk analysis. However, the technical complexity of the information environment means it takes days, if not weeks, to gather the information for that analysis. As buying a few days later will make you pay much more for the same derivatives, a huge operational risk is involved and performance deteriorates.


Let's take another example on content. Setting up a strategy and identifying and assessing financial and non-financial risk is mostly done properly. For financial risk, (complex) stochastic modeling is used nicely to establish how the trade-off between financial risk and performance works. Even though uncertainty remains, it suggests specific, manageable mitigating action. In the earlier example of pension funds and hedge funds, think of lowering or increasing interest rate hedges, and even of how that is best done.


For non-financial risk, no such link is established between de- or up-risking and performance. Some might say that notions like "taking more risk leads to more revenue" will do. When risk limits are hit, relying solely on that notion leads to blunt de-risking, often followed by equally blunt up-risking again.


It is easy to see that this might work adversely. Imagine a case where risk increases relatively less than performance. Then an organization might consider not de-risking but up-risking, depending on how important performance is, for instance for the continuity of the organization.

Just avoiding the crashed car might get you into an even worse accident; choose the cooperative trajectory, also considering the next avoiding vehicle.

As managing operational risk and performance is done implicitly and haphazardly, mitigating actions cannot be explained well to stakeholders. Most organizations get away with it as they are only seriously challenged on one dimension, i.e. risk (and not performance)… until now! Integration is needed… let's start with risk and performance management.

Part 3…Integration is the answer

Integration is needed… let's start with risk and performance management. Let us not take on the whole world at once and stick to a first step.


It is fine to start like any risk management set-up would: describe the processes in the organization and attribute risks to them according to any classification you feel comfortable with. Financial services mainly use FIRM or COSO. For each identified risk, estimates of probability and impact are made, often using simple 4- or 5-point scales for both.

What is important now is that not only a risk estimate is attributed to each process, but a contribution to the goals of the organization as well. Naturally this is not an exact science, but a simple attribution of a percentage will do nicely to start with. It establishes an explicit, albeit not exact, link between risk and performance. Furthermore, it allows a process to contribute to more than one of the goals of the business. Finally, it also gives an objective way to rank the "importance" of processes.
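To make this concrete, here is a minimal sketch of what such an attribution could look like, in Python. The process names, risks, 1-5 scales and percentages are purely illustrative assumptions, not figures from this article.

```python
# Sketch: each process carries risk estimates (probability and impact on a
# 1-5 scale) plus a percentage contribution to one or more organizational goals.
processes = {
    "client onboarding": {
        "risks": {"compliance":  {"probability": 3, "impact": 4},
                  "operational": {"probability": 2, "impact": 3}},
        "goal_contribution": {"grow client base": 0.30, "stay compliant": 0.50},
    },
    "portfolio rebalancing": {
        "risks": {"market": {"probability": 4, "impact": 4}},
        "goal_contribution": {"investment return": 0.60},
    },
}

def risk_score(process):
    """Crude aggregate: sum of probability x impact over the process's risks."""
    return sum(r["probability"] * r["impact"] for r in process["risks"].values())

def goal_weight(process):
    """Total contribution of the process to all organizational goals."""
    return sum(process["goal_contribution"].values())

for name, p in processes.items():
    print(f"{name}: risk score {risk_score(p)}, goal contribution {goal_weight(p):.0%}")
```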


This sounds simple… In reality, any business has higher- and lower-level processes (a hierarchy) that are not always fully disjunct either. This makes summing risk and goal contributions complex but not impossible; it is a matter of putting in time and brains, because that is what this is about: sorting things out.


Not simple, but with additional benefits… Attributing both risk and goal contribution to processes allows for a functional allocation of risk budget. Usually, risk budget is allocated based on the "as is" state of the organization and its processes, without knowing whether this is effective or not (given the goals of the organization).


Now processes contributing more to the goals of the organization can be allocated more risk budget too. The organization can also start de- or up-risking processes right away, a first exercise in maintaining and ever improving the combined risk-performance system.
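A minimal sketch of such a functional allocation, assuming the budget is simply spread in proportion to each process's total goal contribution; the weighting scheme and the numbers are hypothetical.

```python
def allocate_risk_budget(total_budget, goal_weights):
    """Allocate risk budget to processes in proportion to their goal contribution.

    `goal_weights` maps a process name to its total goal contribution (0..1).
    Any other weighting agreed with the board would work just as well.
    """
    total_weight = sum(goal_weights.values())
    return {name: total_budget * weight / total_weight
            for name, weight in goal_weights.items()}

# Hypothetical numbers: a 100-unit risk budget spread over three processes.
print(allocate_risk_budget(100, {"client onboarding": 0.8,
                                 "portfolio rebalancing": 0.6,
                                 "reporting": 0.2}))
```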


Attributing a contribution to the organization's goals to processes; isn't that just random? At first, perhaps. But there are ways to calibrate the attribution of both risk and performance to processes. After all, you can measure the actual performance and you can measure the actual risk you run. If that doesn't fit with what you estimated, adjust either the attributed risk or the contribution, or even both. See Part 5 for some cases.


But there is more: by assessing the trend in the risk you run, you can forecast performance. Matching the forecasted performance with the upcoming actual performance is not only another calibration; it also allows you to actively change the relationship between your processes to influence future risk and performance balances, for instance through innovation. This really adds another dimension to business cases.
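A minimal sketch of this self-calibrating idea: compare what was measured with what was attributed and nudge the attribution, and extrapolate a simple trend as a forecast. The update rule, the step size and the numbers are assumptions; in practice any adjustment would be a governed decision, not an automatic one.

```python
def calibrate(attributed, measured, step=0.25):
    """Nudge an attributed value (risk or goal contribution) toward the measurement."""
    return attributed + step * (measured - attributed)

def forecast_performance(history, periods_ahead=1):
    """Naive trend forecast: extrapolate the average change per period."""
    if len(history) < 2:
        return history[-1]
    avg_change = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + periods_ahead * avg_change

# Hypothetical numbers: the estimated contribution was 30%, measurement says 22%.
print(calibrate(0.30, 0.22))                 # -> 0.28
print(forecast_performance([96, 98, 101]))   # -> 103.5
```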


Sounds nice in theory… how does it work in practice?

Part 4… And in practice?

Sounds nice in theory… how does it work in practice? Let's say a risk materializes in a process. The first thing to do is determine whether this occurrence is in line with the estimated probability. If it is, the probability estimate does not need to be adjusted. Taking a look at performance is nonetheless necessary: if it is still on target, the organization is fine. If performance is deteriorating, it should be assessed whether the estimated impact should be increased or the contribution of this process to the goals of the organization should be changed.

When a risk materializes, people tend to assume their risk estimate was too low… not necessarily the case!

If the risk manifestation is not in line with the estimated probability, it might be that the controls are not as effective as expected. So not only performance (see above) but also the effectiveness of the controls has to be assessed and improved, or a change of the estimated probability is accepted. This way the risk-performance system is continuously calibrated, improved and adjusted.


Talking about risk budget… Risk manifestations should not necessarily lead to a reduction of the risk budget. After all, a manifestation could well be a good representation of the estimated probabilities. But how does that work?


Following the above, the risk budget should not be defined as an amount for a period, like a year, that is lowered with every risk occurrence. Instead, the risk budget should be defined as the most accurate risk estimate (probability and impact) given the goals of the organization, most likely after the initial de- and up-risking of several processes.

Now, when a risk manifestation in a process is in line with the risk estimate (probability and impact), the risk budget is unaltered. When the manifestation leads to the conclusion that the estimate was off, a new and better estimate should be made and the risk budget will change. If the new budget exceeds a threshold, either additional mitigating measures to reduce risk have to be taken or risk budget has to be swapped with other processes, possibly including mitigating measures there. In this way, the risk budget is not a yearly budget but something that is valid at any moment in time, adjusted when risk and/or performance require it.
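A minimal sketch of this decision flow. The tolerance for "in line with the estimate", the threshold and all numbers are hypothetical; the real criteria would be set by the organization.

```python
def handle_risk_event(observed_frequency, estimated_probability,
                      performance_on_target, risk_budget, threshold):
    """Sketch of the follow-up when a risk materializes in a process."""
    actions = []
    # Crude check: is the observed frequency roughly in line with the estimate?
    in_line = abs(observed_frequency - estimated_probability) <= 0.1 * estimated_probability

    if not in_line:
        actions.append("assess control effectiveness or accept a new probability estimate")
        risk_budget = observed_frequency      # the budget follows the better estimate

    if not performance_on_target:
        actions.append("increase estimated impact or change the goal contribution")

    if risk_budget > threshold:
        actions.append("add mitigating measures or swap risk budget with other processes")

    return risk_budget, actions

print(handle_risk_event(observed_frequency=0.20, estimated_probability=0.10,
                        performance_on_target=False, risk_budget=0.10, threshold=0.15))
```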

This all sounds rather ideal, but the tricks of the trade are still:

  • data organization (and management)
  • behavior (organizational and individual)

Part 5…Data organization drives it!

So we have the intelligence to cleverly link risk, controls and performance, which allows for continuous risk budgeting and is self-calibrating on top of that. However, this intelligence can't simply be bolted onto the current architecture and infrastructure. So let's first see…


What's the problem of many established organizations? The simple answer is… they are still taking processes as their starting point, just as they did back in the 1970s and 1980s. And for this they are still developing decentralized functionality and associated databases. Consequently, they still need supra-application functionality to gather, cleanse/scrub and enrich data to get some oversight.


Now these organizations are saddled with an unsynchronized, outdated and unreliable batch of data with different timestamps. And that scattered data clearly holds more than one truth, so which truth do they steer on? Which application and which truth is leading? Furthermore, all these cross-database interfaces are extremely vulnerable.


And for an integrated risk-control-performance framework you need unambiguous, high-quality data with (near) real-time availability. So what's the problem? In short… they are organized around applications.


As a process- or application-centric approach will invariably lead to data congestion in the organization, the million-dollar question is… what's the alternative? Starting with data instead of processes, of course.


Starting with data for an integrated risk-control-performance framework.


We simply start by setting up a set of risks; we used FIRM at the lowest level of detail. This ensures that you have a complete set and not just the ones that happen to have been identified in the organization, which is by definition incomplete. Then we set up a set of controls, mirroring the risks but also tagged with all kinds of control types. Think of typologies like preventive, detective and corrective; physical, technical and administrative; manual, IT-dependent, application and IT-general; or design controls (segregation of duties) and controls on content (reconciliation or connection checks). Initially this database is filled with the controls actually present in the organization. Blank spots might trigger the question why there is no control of that kind; this helps completeness too.

Furthermore, we set up what you might call a workflow. Not as a process, but just as a set of activities/work instructions within the organization, uniquely but randomly numbered. This avoids the complexity of layered processes, up- and downstream controls, or key and non-key controls. As we use the risks and controls to ensure that we don't miss out on anything, who needs processes?


Linking the data sets.


So we now have three sets: risks, controls and activities. These sets are going to be linked; that is what a relational database is all about. This is not done by a coder or developer but by the colleagues who perform each of these activities ("actors"). Establishing a link is just drag and drop for the actor. Now, for each activity, the actor can put down the probability and impact of each of the involved risks, gross and net, using his or her own assessment of the effectiveness of the controls. In Part 4 we explained how this effectiveness is calibrated.
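A minimal sketch of how those three sets and their links could be laid out as relational tables, using SQLite here purely for illustration; the table and column names are assumptions, not a prescribed data model.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE risk     (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE control  (id INTEGER PRIMARY KEY, name TEXT, control_type TEXT);
CREATE TABLE activity (id INTEGER PRIMARY KEY, description TEXT);

-- Links are made by the actors themselves, including their own net estimates.
CREATE TABLE activity_risk (
    activity_id INTEGER REFERENCES activity(id),
    risk_id     INTEGER REFERENCES risk(id),
    probability REAL,
    impact      REAL,        -- net, after the actor's view on control effectiveness
    PRIMARY KEY (activity_id, risk_id)
);
CREATE TABLE activity_control (
    activity_id INTEGER REFERENCES activity(id),
    control_id  INTEGER REFERENCES control(id),
    PRIMARY KEY (activity_id, control_id)
);
""")
con.execute("INSERT INTO risk VALUES (1, 'integrity')")
con.execute("INSERT INTO control VALUES (1, 'four-eyes check', 'preventive')")
con.execute("INSERT INTO activity VALUES (1, 'approve new client')")
con.execute("INSERT INTO activity_risk VALUES (1, 1, 0.05, 3.0)")
con.execute("INSERT INTO activity_control VALUES (1, 1)")
con.commit()
```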


Anyway, this allows you to sum up the risks actually run according to the latest insights of every actor in the organization and compare them to the risk budget (appetite) set by the board. Our experience is that the sum of risks actually run always undercuts the budget. So where you would expect controls to be tightened, they can often be loosened, lowering operational costs. The point is that you can now objectively optimize what being in control, in accordance with your risk appetite, will cost you. Furthermore, this allows for all kinds of effort-versus-effectiveness considerations, and one can objectively skip controls that really don't do much without having to remove them from the database; just cut the link. How is that for defining key controls?
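A minimal sketch of that comparison: sum the actors' latest net estimates and set them against the budget. The unit (expected loss as probability times impact) and all numbers are made up for illustration.

```python
# Net expected loss per activity = probability x impact, per the actors' latest estimates.
net_risk_per_activity = {
    "approve new client": 0.05 * 3.0,
    "execute payment":    0.02 * 5.0,
    "monthly close":      0.01 * 2.0,
}

risk_budget = 0.40      # appetite set by the board, expressed in the same unit
risk_run = sum(net_risk_per_activity.values())

print(f"risk run {risk_run:.2f} vs budget {risk_budget:.2f}")
if risk_run < risk_budget:
    print("headroom left: controls might be loosened, lowering operational cost")
else:
    print("over budget: tighten controls or swap risk budget between activities")
```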


Finally, we create a set of reports on the progress of performance, the risk budget and changes relative to the previous period, and we can drill down to activity level. These reports can be stored for auditing purposes. Changing the content and/or the presentation is just a matter of different functionality, often selecting different representations in standard software. If you would like future projections of performance and risk you will need some extra intelligence, but that is just functionality put on top of the data sets. Changing that functionality is easy; just put it alongside the original functionality and you can back-test how the two compare. Processing power and storage capacity are not a problem; they are so very cheap!


A few interesting aspects.


One of the other essential elements is that the timing of an activity initially triggers the execution of the control. Yet the control does not have to be executed every time the activity is performed. So the frequency, and whether or not the control has been executed in time, are attributes of that control. Attributes can always easily be added later on, so no worries if you forget one at first.


This also helps to discriminate between key and non-key controls; controls that serve many activities are obviously key. And how about duplications? As each colleague knows which controls to execute when doing the activity, many activities might use the exact same control. Then the first activity that performs that control performs it for the other activities too. If an actor judges a control to be not effective enough, he or she can amend it, tighten it and simply add it to the database (including all attributes). He or she might also check whether there is another control that is effective enough and try that one. Yes, we end up with many controls, but who cares? Storage is so cheap and processors are more than fast enough to quickly browse a relational database.


And unidentified risks occurring? These invariably result in an activity going wrong. An actor notices this and will put in the new risk, a new control and the link with the activity. Done and dusted! What if he doesn't do this and covers it up? Then you will see in the reports that performance deteriorates; a report you could choose to run daily rather than monthly or even quarterly. Actually, this is a new control… put it in the set of controls!

Summary


So the whole risk-control-performance framework consists of:

  • 4 sets: risks, controls, activities and reports
  • Linking elements of those sets to each other
  • Basic reporting functionality
  • Intelligence to project risk and performance

Quite simple, right? All because we started with setting up the data. We know a real-life example of a multi-billion asset owner. Setting up this system took only EUR 90k. Maintenance costs up to EUR 1,500 a month. No standard software can beat those numbers. Starting with data simplifies, strengthens and is way cheaper! An offer you can't refuse!

Part 6…Behavior drives it!

Why does behavior have such an impact? Imagine activities and certain controls that need to be executed to stay within the risk budget. These controls are faithfully executed at first, but when routine kicks in, box-ticking is to be expected. Invariably this is mitigated by a structural measure such as job rotation, a four-eyes principle or even additional controls. However, this only delays the slide back into box-ticking.

Ask the colleagues involved which behavior would be most effective and they will come up with many effective ideas. Just a few of many observations drawn from real life:

  • Reality is always a little different from the "model" in the risk-performance system. The people involved like to have the authority to adjust processes and controls without having to go through lengthy approval procedures. Maybe only one colleague (from risk) needs to approve.
  • A simple approach facilitates this: take a snapshot of the system each time an activity or control is changed and keep a record of all those snapshots (see the sketch after this list). Now you can trace each change to activities and/or controls, and this allows for different but better auditing. Rather than checking each activity and control change, internal audit can check whether colleagues are proposing sensible changes and whether there is a clear reason for each change. If the change process is sensible, you have a better probability that your controls are effective. Way better than labor-intensive checks & balances.
  • Allowing colleagues involved in activities and controls to make amendments will also give you a better chance of staying ahead of adverse deviations. After all, colleagues feel they can actually do something with what they observe, which is such a strong motivator!

    A large bank was faced with a fine of hundreds of millions of euros for money laundering. Of course, many colleagues had seen that a retail client had almost 200 accounts, but the system did not allow them to record it anywhere or to raise a flag. So the signal got totally lost. What if they could have amended the system?
  • When signaling an adverse deviation, easy processing through the ranks of the hierarchy should be the standard. This requires that personal interest does not prevail, not that of the colleague and certainly not that of their bosses: openness to adverse deviations, however sensitive to stakeholders or other bosses, should be applauded and not penalized.
  • Controls are often executed as a job on the side, so when times get really busy, executing them ends up between a rock and a hard place, however little time it requires. Colleagues should be skilled in organizing, prioritizing and escalating their own work, and bosses should be skilled in addressing functional escalations from their coworkers. This is not just skill, this is behavior: it requires courage and a willingness to be vulnerable in order to prioritize and escalate.
  • Assessing the effectiveness of controls requires colleagues to unleash their collective brainpower. Not only is effective communication needed for this; letting go of the notion that a professional should know everything is imperative. It is also imperative that "using the orchestra" is appreciated in appraisals. Again, not just skills, but attitude too.
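As promised in the second bullet above, a minimal sketch of the snapshot idea: keep an immutable copy of the activity/control set on every change, plus who changed it and why, so internal audit can review the change process instead of every individual change. The structure and field names are assumptions for illustration only.

```python
import copy
import datetime

snapshots = []

def amend(system_state, actor, reason, change):
    """Apply a change proposed by an actor and archive a frozen snapshot for audit."""
    change(system_state)
    snapshots.append({
        "timestamp": datetime.datetime.now().isoformat(),
        "actor": actor,
        "reason": reason,
        "state": copy.deepcopy(system_state),   # traceable copy of the whole set
    })

state = {"controls": {"four-eyes check": {"frequency": "per activity"}}}
amend(state, actor="colleague from risk", reason="routine kicked in, tighten sampling",
      change=lambda s: s["controls"]["four-eyes check"].update(frequency="daily sample"))

print(len(snapshots), snapshots[-1]["reason"])
```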

How to incorporate behavior? As stated before, setting up a risk-performance system starts with describing all activities. Why not start by describing the desired behavior in detail as well? This will make behavior observable and unambiguously adjustable. Yes, this will require time, but it will be paid back many times over as processes become more effective and fewer surprises occur. Moreover, it ensures that your risk-performance system evolves continuously, if not daily, tracking developments in the environment more closely than you would ever dream of.


However, even if you develop the most effective attitude and behavior in your organization, it won't work when your data organization and management are still fragile… and vice versa.

Let’s begin

Start creating more value by using data in your organization with our guidance.
