Rightsizing Your Risk Architecture
A comparison of risk management and architectures that satisfy the demands and requirements of enterprises today
By: Frank Bucalo
Sep. 20, 2010 03:30 AM
In today's evolving corporate ecosystem, managing risk has become a core function of many enterprises. Mitigating consequences such as lost revenue, unfavorable publicity, and lost market share is of utmost importance. Risk applications vary widely, and the underlying technology is equally diverse. This article will compare and contrast aspects of risk management and suggest architectures - from cloud-based solutions to high-performance solutions - that satisfy the demands and requirements of today's enterprises. We will start with a review of risk management, followed by a look at the term "architecture." Then we will review two specific use cases to illustrate the thought process involved in mapping risk management requirements to appropriate architectures.
Risk Management Reviewed
Often, the right architecture reflects the advice of Albert Einstein: "Make everything as simple as possible, but not simpler." In other words, the right architecture satisfies all of the business and contextual requirements (unless some are negotiated away) in the most efficient manner. We architects call this combination of effectiveness (it does the job) and efficiency (it uses the least resources) "elegance."
Contrast this with common expectations for risk management computer systems architecture. I am often asked to document the one right risk management reference architecture. That request reflects a certain naiveté: any single reference architecture falls into Einstein's category of "too simple." In reality, there are levels of architecture and many potential reference architectures for risk management systems. The process for determining the appropriate one, as in the case of structural architecture, involves the same steps of discovery, analysis, comparison, and collaboration, culminating in the "elegant" design choice.
The remainder of this article will illustrate this process using two specific risk management scenarios. The first use case will reflect a generalized compliance management (e.g., SOX, ISO 9000 and ISO 27001) scenario. The second use case will illustrate a scenario at the other end of the risk management spectrum - managing risk for an algorithmic trading portfolio. Each scenario will include standard functional requirements as well as illustrative contextual limiting factors to be considered. Then we will review some options with associated tradeoffs in order to arrive at an "elegant" design.
Generalized Compliance Management
Compliance management tends to be well defined and somewhat standardized. One standard framework comes from the Committee of Sponsoring Organizations (COSO). A standard library of control objectives is Control Objectives for Information and related Technology (COBIT). A third example is the Open Compliance and Ethics Group (OCEG) GRC Capability Model. Generalized compliance management processing is characterized by manual data entry, document management, workflow, and reporting. Accordingly, high-performance computing requirements are rare, and capacity requirements tend to increase slowly and predictably in a linear manner. Security requirements consist primarily of access control and auditing requirements. For the sake of illustration, we will assume that the client has minimal internal resources to build a custom system and a need to be fully operational in a short time. Finally, the information in the system is confidential, but would not be considered secret or top secret. That is, there would be minimal impact if information somehow leaked.
Given this set of requirements and context we consider a number of architectural options.
The first option is to build a homegrown application, either via coding or using an out-of-the-box platform (e.g., SharePoint). Typically, a customer with the profile above cannot consider coding a homegrown solution due to the cost and time limitations. While an out-of-the-box platform appears to be optimal on the surface, it is not uncommon to discover that many changes are required to meet specific requirements.
The second option is to consider the use of commercial-off-the-shelf (COTS) software in the customer's environment. Given that compliance management is somewhat standardized, the time frame for implementation is short, and few resources are available, this option is more attractive than building an internal application. The strength of most current COTS solutions is that they are typically highly configurable and extensible via adaptation - that is, tailoring the product through configuration rather than custom code.
The associated challenge of the COTS approach is that internal technical and business staff need to understand the configuration and adaptation capabilities so that an optimal COTS design can be defined. Fortunately, an elegant method exists to achieve the required level of understanding. That method consists of a joint workshop where customer technical and business staff collaborate with a COTS subject matter expert (SME) to model and define adaptations. We refer to this as an "enablement" workshop. I have observed that customer staff can become knowledgeable and proficient in the use of a COTS solution in about two weeks. A word of warning - it is tempting to try to bypass the initial learning curve in the name of efficiency. But customers risk wasting many months of effort only to discover that the COTS platform provides the desired functionality out-of-the-box. For example, one customer I encountered created a custom website to allow data entry of external information into their COTS system only to discover that the application provided import/export features out-of-the-box. In this case, they wasted $100,000 and many months of effort before they discovered the feature. In the case of COTS implementations, ignorance is not bliss.
Before we consider the third option, which uses the phrase "cloud computing," we need to define the context of this phrase. In the most general context, "cloud computing" could represent any architectural option that uses any Internet protocol. That could represent an internal cloud that uses dedicated, high-performance infrastructure. However, the phrase is typically used to indicate a solution that is at least partially outsourced. Outsourcing could involve Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), or Software-as-a-Service (SaaS). In this article we will assume that "cloud computing" indicates outsourcing to a service provider that is providing commodity infrastructure, platform, and software potentially on a virtual platform.
The third option to consider is "cloud computing" on dedicated, physical hardware. Given the customer's limited resources and the rapid time-to-implementation requirement, this option can be quite attractive. With this option, service provider security becomes an issue. Is their environment protected from outside access as well as access from other clients? In such a case, provider certifications might be considered. For example, are they ISO 27001 certified? Likewise, one would have to consider failover and disaster recovery requirements and provider capabilities. In general, remember that the use of a service provider does not relieve you of architectural responsibility. Due diligence is required. Therefore the "Total Architect" must formulate specific Service Level Agreements (SLAs) with the business owners and corresponding Operation Level Agreements (OLAs) with the service provider.
A fourth option consists of cloud computing hosted on a virtual machine. The use of virtual machines can reduce the cost of hardware and associated costs (e.g., cooling) by sharing the physical environment. The tradeoff is that the virtual layer typically adds latency, thus reducing performance. But in this given scenario, we have deemed that performance is not a critical requirement so this is also a viable option and perhaps the "elegant" solution design.
In summary of this business-driven scenario, notice the architectural issues could be classified as business-oriented or service-level oriented. Nowhere did we discuss bits, bytes, chips, networks or other low-level technical concerns. This illustrates that architecture is a multi-layered discipline. The next scenario will demonstrate the addition of the technical architecture layer.
Algorithmic Trading Portfolio Risk Management
Risk management in standard trading portfolio analysis usually looks at changes in the Value at Risk (VaR) for a current portfolio. That metric considers a held trading portfolio and runs it against many models to ascertain a worst-case loss amount over a window of time and at a given probability. In the case of high-frequency algorithmic trading, positions are only held for minutes to hours. Therefore, VaR is somewhat meaningless. Instead, risk management for algorithmic trading is less about the portfolio and more about the algorithm. Potential outcomes for new algorithms must be evaluated before they are enacted. Without getting too far into the specifics, which are the subject of many books, the challenge of managing risk for algorithmic trading strategies involves managing high volumes of "tick" data (i.e., market price changes) and exercising models (both historical and simulated) to estimate the risk and reward characteristics of the algorithm. Those models are often statistical, and should include both models that assume normality and models based on Extreme Value Theory (EVT).
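To make the VaR metric concrete, the following sketch estimates a one-day VaR via historical simulation. The returns are synthetic, generated purely for illustration; a real system would feed in actual portfolio returns and, as noted above, would also consider EVT-based models for the tails:

```python
import numpy as np

def historical_var(returns, confidence=0.99):
    """Worst-case loss at the given confidence level, estimated from
    historical portfolio returns (historical-simulation VaR)."""
    # The loss is the negated (1 - confidence) quantile of the returns.
    return -np.percentile(returns, 100 * (1 - confidence))

# Hypothetical daily returns: zero mean, 1% volatility (illustrative only).
rng = np.random.default_rng(seed=42)
returns = rng.normal(loc=0.0, scale=0.01, size=10_000)

var_99 = historical_var(returns, confidence=0.99)
print(f"1-day 99% VaR: {var_99:.2%} of portfolio value")
```

Because the synthetic returns are normal with 1% volatility, the estimate lands near the theoretical 2.33% value; with real, fat-tailed returns, the historical and EVT estimates would diverge in the tail.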
Consequently, the technical architecture of such risk systems deals with tools and techniques to achieve low latency, data management, and parallel processing. The remainder of this article will review those topics and potential delivery architectures. In this scenario, contextual limitations include the need for extreme secrecy around trading algorithms and the intense competition to be the fastest. Since so much money is at stake, firms are willing to invest in resources to facilitate their trading platform and strategy.
While in the previous scenario we were willing to move processing into the cloud, accept reduced performance, and reduce costs using virtualization, the right architecture in this scenario seeks to bring processing units closer together, increase performance, and accept increased cost to realize gains in trading revenue while controlling risk.
A number of tools and techniques exist to achieve low latency. (Note that while most cloud applications use virtualization, and many people therefore assume the association, the two do not have to go together - which is why the previous paragraph referenced the cloud and virtualization separately.) One technique is to co-locate an algorithmic trading firm's servers at or near exchange data centers. Using this technique, one client reduced their latency by 21 milliseconds - a huge improvement in trading scenarios where latency is currently measured in microseconds. Another technique is to replace Ethernet and standard TCP/IP processing with more efficient communications mechanisms. For example, one might use an InfiniBand switched fabric to replace a 10 Gigabit Ethernet interconnect fabric. One estimate is that this change provides 4x lower latency. Another example replaces the traditional TCP/IP socket stack with a more efficient one. Such a stack reduces latency by bypassing the kernel, using Remote Direct Memory Access (RDMA), and employing zero-copy techniques. This change can reduce latency by 80-90%. A final example replaces single-core CPUs with multi-core CPUs. In this configuration, intra-CPU communications take place over silicon rather than a bus or network. Using this tool, latency for real-time events can be reduced by an order of magnitude, say from 1-4 milliseconds to 1-3 microseconds.
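Whatever combination of techniques is chosen, the architect needs a way to quantify the gains. As a rough illustration, the sketch below measures round-trip latency of small messages over a local socket pair and reports median and 99th-percentile figures. Absolute numbers depend entirely on the machine; a real trading stack would benchmark the actual transport (InfiniBand, kernel-bypass sockets, etc.) the same way:

```python
import socket
import time

def measure_roundtrip_latency(n_messages=1000):
    """Measure round-trip latency of small messages over a local socket
    pair - a baseline against which faster transports can be compared."""
    server, client = socket.socketpair()
    payload = b"x" * 64  # small, tick-sized message
    samples = []
    for _ in range(n_messages):
        start = time.perf_counter_ns()
        client.sendall(payload)   # client -> server
        server.recv(64)
        server.sendall(payload)   # server echoes back
        client.recv(64)
        samples.append(time.perf_counter_ns() - start)
    server.close()
    client.close()
    samples.sort()
    return {
        "median_us": samples[len(samples) // 2] / 1_000,
        "p99_us": samples[int(len(samples) * 0.99)] / 1_000,
    }

stats = measure_roundtrip_latency()
print(stats)
```

Reporting percentiles rather than averages matters here: in trading, it is the tail latency (p99 and beyond) that determines how often you lose the race.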
When it comes to data, one estimate is that a single day of tick data (individual price changes for a traded instrument) is equivalent to 30 years of daily observations, depending on how frequently an instrument is traded. Again, there are architectural options for reducing such data and replicating it to various data stores where parallelized models can process the data locally. A key tool for reducing real-time data is Complex Event Processing (CEP). A number of such tools exist. They take large volumes of streaming data and produce an aggregate value over a time window. For example, a CEP tool can take hundreds of ticks and reduce them to a single metric such as the Volume-Weighted Average Price (VWAP) - the ratio of the value traded to the total volume traded over a time horizon (e.g., one minute). By reducing hundreds of events into a single event, one gains efficiency, making a risk management model tractable.
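The windowed aggregation a CEP engine performs can be sketched in a few lines. The `Tick` structure and the 60-second window below are illustrative assumptions, not any particular product's API; a real CEP tool would do this declaratively and at far higher throughput:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Tick:
    timestamp: float  # seconds since some epoch
    price: float
    volume: int

class VwapWindow:
    """Sliding-window aggregator: reduces a stream of ticks to a single
    VWAP value over the last `window_seconds` of trading."""
    def __init__(self, window_seconds=60.0):
        self.window = window_seconds
        self.ticks = deque()

    def add(self, tick):
        self.ticks.append(tick)
        # Evict ticks that have fallen out of the time window.
        while self.ticks and self.ticks[0].timestamp < tick.timestamp - self.window:
            self.ticks.popleft()

    def vwap(self):
        value = sum(t.price * t.volume for t in self.ticks)
        volume = sum(t.volume for t in self.ticks)
        return value / volume if volume else None

w = VwapWindow(window_seconds=60.0)
for ts, px, vol in [(0.0, 100.0, 200), (30.0, 101.0, 100), (75.0, 102.0, 300)]:
    w.add(Tick(ts, px, vol))
# The tick at t=0 has fallen out of the 60-second window, so VWAP is
# (101*100 + 102*300) / 400.
print(f"VWAP: {w.vwap():.3f}")  # → VWAP: 101.750
```

Downstream risk models then consume one VWAP event per window instead of hundreds of raw ticks.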
A number of possibilities exist to facilitate parallel processing - from the use of multi-core CPUs to multi-processor systems to computing grids. And these options are not mutually exclusive. For example, one can compute a number of trade scenarios in parallel across a grid of multi-core CPUs. The right parallel architecture depends on a number of dimensions. For example, "Does the computational savings gained by having local data justify the cost of data replication?" Or, "Is the cost of a multi-core CPU with specialized cores warranted compared to the use of a commodity CPU?" Or even, "Is the cost of a network communication amortized over the life of a distributed computation?"
As is always the case, the architect must understand the requirements and the context, and be able to elaborate the capabilities and limitations of each option to choose the "right" architecture. Clearly, in this case, the customer is highly concerned about secrecy, so the use of an external provider might be out of the question. Also, because virtualization adds one or more layers with corresponding latency, the use of a virtual environment might also be unacceptable. Finally, the wish to outperform competitors implies that any standardized COTS solution is out of the question.
In summary, this article has demonstrated that the "right" risk architecture is a function of the scenario. We have illustrated that there are various levels of architecture (business, context, delivery, technical, data, parallelism, etc.) to be considered depending on the scenario. But when a concrete scenario is presented, the "Total Architecture" approach can be applied and the "right" architecture becomes clear and decisions become obvious.