Risk and return are two sides of the same coin. Historically, firms have analyzed financial results with sophisticated return measures such as IRR, NPV, and ROE. More recently, and with the gentle push of the Basel accords, banks and non-bank financial institutions have begun to analyze risk with the same degree of rigor. Risk and Finance are the two functions within most banks that have an enterprise-wide demand for data with which to conduct their analysis. Data issues and enterprise-wide remediation of data flows now often originate within one of these two organizations.
However, in many institutions Risk and Finance still use separate data acquisition strategies. This causes duplication of effort, inconsistent results, and considerable time spent reconciling figures between organizational units. Leading companies are driving toward Risk and Finance data convergence, where both organizations consume data from the same source. While this approach yields tremendous efficiencies, these units face many challenges. In particular, they tend not to share the same concerns over data granularity.
The definition of analysis for risk and finance professionals has changed in the last decade. Previously, risk and finance analysts were consumers of data squirreled away in separate IT silos. They had little ownership of the data and little say in what was available. Since then, many firms have learned how to compete on analytics in the risk and finance space. As a result, the push-pull boundary of data ownership has shifted from the IT silos to the business units, where contextualized analysts understand both the data architecture and the businesses they support. This is a positive change for organizations that can adapt to this paradigm shift, and a serious blow to the competitiveness of organizations that cannot.
Enterprise-level risk analysis for banks and non-bank financial institutions has been an area of increased focus by board members and external regulators. The core tenets of enterprise risk, as laid out in Basel III, are Market Risk, Liquidity Risk, Credit Risk, and Operational Risk, and becoming facile at them is a challenge. For a firm to be adept at these competencies, there is a strong reliance on enterprise data flows that are timely, accurate, and complete. Additionally, risk analysis bears the unique requirement for transaction- and account-level data feeds. The days of using aggregate and average statistics for risk decisions have long since passed.
As a result, the emphasis for large enterprise risk management solutions often hinges on improvements in Enterprise Data Architecture, implementation of specialized tools such as Aladdin or Loan Performance, and an upgrading of talent to people with strong data and programming skills in SQL or SAS who also understand the business. Firms often struggle with one or more of these as they move up the learning curve.
Continuous monitoring is a top priority of well-functioning risk organizations. This should be achieved through a series of automated data movements that feed an automated web-based reporting tool set. While there are many ways to manage the ETL of enterprise data for monitoring, and a multitude of tool sets that can be used for reporting, top programs share some common tenets. These include clear ownership, automated data quality monitoring, clear and consistent definitions, concise labeling of output, and automated web-based reporting with email alerts. If the approach to observing and managing enterprise risk includes a number of manual data interfaces, manual running of reports, and post-report data manipulation, the cost to the organization is significantly higher than in an automated environment, and the quality of output, including its timeliness, is lower than can be achieved.
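The automated data quality monitoring named above can be sketched as a set of programmatic checks on each inbound feed. This is a minimal illustration, not a production tool: the record layout, thresholds, and rule names are assumptions invented for the example.

```python
from datetime import datetime

# Hypothetical feed record: (account_id, balance, as_of) tuples.
# The layout and the three rules below are illustrative assumptions.

def check_feed(records, expected_count, as_of_cutoff):
    """Run basic timeliness, completeness, and accuracy checks on a feed."""
    issues = []
    # Timeliness: every record should carry the expected as-of date.
    stale = [r for r in records if r[2] < as_of_cutoff]
    if stale:
        issues.append(f"timeliness: {len(stale)} stale records")
    # Completeness: compare row count against the source system's count.
    if len(records) != expected_count:
        issues.append(f"completeness: got {len(records)}, expected {expected_count}")
    # Accuracy: a simple domain rule; here, balances must be non-negative.
    bad = [r for r in records if r[1] < 0]
    if bad:
        issues.append(f"accuracy: {len(bad)} negative balances")
    return issues

cutoff = datetime(2024, 3, 31)
feed = [
    ("A1", 1500.0, datetime(2024, 3, 31)),
    ("A2", -25.0, datetime(2024, 3, 31)),
    ("A3", 900.0, datetime(2024, 2, 29)),
]
alerts = check_feed(feed, expected_count=4, as_of_cutoff=cutoff)
for a in alerts:
    print(a)  # in a full pipeline, these would drive email alerts
```

In practice the same pattern runs on a schedule against every feed, with failures routed to the owning team rather than printed.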
Traditional financial analysis has focused on vertical looks at periodic profitability and the explanations behind those results. Banks, and lending businesses in particular, face fairly predictable annuity streams, so financial analysis in those groups tends to focus more on the horizontal economics of the business. This requires a high level of forecasting sophistication and modeling to translate the cash flow streams into NPV valuations. The ability to disaggregate the NPVs into various performance buckets, and sometimes risk-related buckets, enables organizations to truly understand their performance and achieve a high degree of forward-looking earnings predictability. This requires a more granular look at data than many finance organizations have access to.
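The translation of cash flow streams into NPVs and their disaggregation into buckets can be sketched in a few lines. The segment names, cash flows, and the 8% discount rate are illustrative assumptions, not figures from any actual portfolio.

```python
# A minimal sketch: discount each segment's forecast annuity stream to an
# NPV, then read performance bucket by bucket. All inputs are invented.

def npv(rate, cash_flows):
    """Present value of cash flows received at the end of periods 1..n."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

portfolio = {
    "prime_auto":    [120.0, 120.0, 120.0],   # forecast net cash flow per period
    "subprime_auto": [90.0, 70.0, 50.0],
    "mortgage":      [200.0, 200.0, 200.0],
}

by_bucket = {seg: round(npv(0.08, cfs), 2) for seg, cfs in portfolio.items()}
total = round(sum(by_bucket.values()), 2)
print(by_bucket, total)
```

The point of the disaggregation is that the total NPV is recoverable as the sum of the buckets, so performance questions can be answered at whatever granularity the underlying data supports.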
Scenario governance is an area of increasing concern among senior bank management and regulators. Different groups may define and run, for example, a stress case scenario with different shocks. One group may assume stress is a 500bp increase in rates, another that back-end rates rise 25%, and a third may focus more on defaults than interest rate shocks. At the end of the day, all three areas contribute their results to some form of a stress case scenario result. Here we can see a number of issues. First, the scenario is not clearly defined across all economic factors. Second, it is not centrally stored and governed. Third, the output of one model may be an input to another model for the same scenario. Fourth, it may not be possible to set up a certain model to match the specified parameters due to the nature of the model. Fifth, the results are not centrally stored for uniform analysis. All of these issues lead to a lot of extra time and confusion when running financial scenarios.
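The central definition and governance called for above can be sketched as a single scenario library that every model reads from, rather than each group redefining its own shocks. The scenario name, the factor names, and the default multiplier are illustrative assumptions; the rate shock values echo the examples in the text.

```python
# A sketch of one governed source of scenario parameters. The factor names
# and values are illustrative, not a regulatory standard.

SCENARIO_LIBRARY = {
    "stress_case": {
        "parallel_rate_shock_bp": 500,   # the first group's rate assumption
        "backend_rate_shock_pct": 25,    # the second group's long-end assumption
        "default_rate_multiplier": 1.5,  # hypothetical default-stress factor
    },
}

def get_scenario(name):
    """Every model pulls shocks from one governed library, never redefines them."""
    if name not in SCENARIO_LIBRARY:
        raise KeyError(f"scenario {name!r} is not governed; define it centrally first")
    return dict(SCENARIO_LIBRARY[name])  # copy so callers cannot mutate the library

params = get_scenario("stress_case")
print(params["parallel_rate_shock_bp"])
```

Storing model results keyed by the same scenario name would address the fifth issue, central storage of outputs for uniform analysis.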
Risk and Finance Convergence
Risk and Finance are the two groups that have an enterprise purview of data. They share similar needs in terms of timeliness, accuracy, and completeness. However, generally due to differences in granularity requirements, the two groups consume data through different channels. This leads to a number of inconsistencies and a lot of time spent reconciling the two sets of numbers. Finance generally needs less granular, aggregate numbers and often makes top-side adjustments to them that may not be recorded anywhere other than the general ledger. This means that enterprise data is created by their group and not shared beyond it. The enterprise risk group often needs more granular information, such as account-level and transaction-level data, in order to run the types of sophisticated forecasts necessary to manage risk. Once again, there is a gap in the ubiquity of the data, as forecast data is created and stored in the risk function and not made available to the wider enterprise.
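The reconciliation burden described above can be sketched by rolling account-level data (risk's view) up to ledger lines (finance's view) and flagging the breaks. The ledger lines, balances, and the top-side adjustment are invented for illustration.

```python
# A minimal sketch of a risk-vs-finance reconciliation. All figures are
# hypothetical; the break comes from an unshared top-side adjustment.

accounts = [  # account-level balances tagged to a general ledger line
    {"gl_line": "consumer_loans", "balance": 400.0},
    {"gl_line": "consumer_loans", "balance": 350.0},
    {"gl_line": "commercial_loans", "balance": 900.0},
]
ledger = {  # finance's aggregates, including a top-side adjustment on consumer loans
    "consumer_loans": 760.0,
    "commercial_loans": 900.0,
}

# Roll the granular view up to the ledger's granularity.
rollup = {}
for acct in accounts:
    rollup[acct["gl_line"]] = rollup.get(acct["gl_line"], 0.0) + acct["balance"]

# Flag any ledger line where the two views disagree by more than a cent.
breaks = {line: ledger[line] - rollup.get(line, 0.0)
          for line in ledger if abs(ledger[line] - rollup.get(line, 0.0)) > 0.01}
print(breaks)  # each break must be chased down manually when data is not shared
```

With a converged data source, the aggregate view would be derived from the same granular records, and adjustments would be recorded where both groups can see them, so breaks like this would not arise.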
The trend we observe in the industry is a slow move toward risk and finance data convergence: a single data warehouse or data mart set that both groups use interchangeably for analytic purposes. We also observe a migration in the tools being used by these groups, from simple Excel-based schedules to more sophisticated tools using SQL and SAS that reduce repetitive steps and allow more focus on data visualization and business impact.