
Applying Analytic Approach ACAMS Today Dec Feb2010

Date post: 02-Dec-2014
Category:
Upload: walter-stevens
View: 51 times
Download: 1 times
Share this document with a friend
3

32 ACAMS TODAY | December–February 2010 www.ACAMS.org

Since the USA PATRIOT Act was passed in October 2001, financial institutions have invested heavily to comply with regulatory expectations. Initial efforts focused on automating the detection of suspicious activity. The Federal Financial Institutions Examination Council (FFIEC) manual has emphasized a risk-based approach to anti-money laundering (AML) compliance. However, in practice, most institutions monitor based on published red flags and behaviors that generally are consistent with money laundering or terrorist financing. As a result, transaction monitoring systems can generate high volumes of items that may have little relevance to legitimate investigations within your institution. Further, the cost benefits of automation are negated by the funding required to staff the investigation of low-value alerts.

More than 1.25 million Suspicious Activity Reports (SARs) were filed in 2007. While defensive filings have been criticized by the private and public sectors, institutions continue to file SARs when in doubt to avoid a potential enforcement action. Filing of questionable SARs creates a similar problem for law enforcement: too many SARs and too few resources. The crux of the problem is that institutions have deployed transaction monitoring systems to generate alerts and file SARs, yet they receive no feedback from the US Department of the Treasury's Financial Crimes Enforcement Network (FinCEN), nor do many institutions analyze investigations data to improve the quality of their detection processes. Very few institutions have applied business intelligence to justify monitoring policies and quantify the factors that improve the relevancy of alerts.

There have been numerous articles on applying statistics to reduce false positives through scenario parameter tuning. This article will describe a proven approach to data analysis — a defensible methodology that has been used in clinical trials studies, fraud detection, warranty analysis, telecommunications churn and database marketing applications. No single technique is the answer; most problems require a comparison of several techniques to find the most appropriate tool. First we will look at some of the simpler and popular techniques for tuning rules, including the advantages and disadvantages of each approach. Regardless of the preferred approach, it is important to establish consistent processes for modifying rules (i.e., how, when and why).

Scenario tuning

The tuning of detection rules to make them more efficient and effective is a crucial step in optimizing any transaction monitoring system. There have been dozens of articles written about using standard deviation or range-based outlier detection to set scenario parameters. The traditional approach to rule tuning is to iteratively change parameters one rule at a time until incremental performance improvements diminish. This approach is actually fairly risky because a change to a rule without understanding outcomes can have dangerous downstream consequences. A number of rule-based tuning approaches have been proposed; however, incremental rule tuning quickly reaches a point of diminishing returns, and one at which the risk from over-tuning scenario parameters far outweighs long-term benefits.

AML alert tuning life cycle

The tuning life cycle is a multi-phase process designed to put rigor around the AML scenario tuning process. Figure 1 shows the phases of a tuning project. The following provides some details around each phase.

Figure 1 - AML Alert Tuning Life Cycle

Phase 1 — Objective definition

What is the objective of the tuning exercise? Possible objectives range from simply documenting the process of parameter setting and maintenance, and establishing a consistent tuning methodology, to presenting higher-quality work items to analysts, workload reduction, retaining SARs and addressing various risk segments. This initial phase focuses on understanding the project objectives and requirements from compliance, business and technical perspectives. Based on the objectives and project constraints, the requirements and evaluation criteria are agreed upon.

Applying an analytic approach to AML compliance

[Figure 1 diagram: Phase 1 "Define Objective" → Phase 2 Define "Good Alert" → Phase 3 "Historical Data Collection" → Phase 4 "Apply Variety of Techniques" → Phase 5 "Integrate, Deploy and Monitor"]


PRACTICAL SOLUTIONS

Phase 2 — Define "good alert"

What constitutes an "investigation-worthy" or "relevant" alert? The definition of an investigation-worthy alert is critical to tuning and evaluation. Is a good alert simply any alert that has resulted in the filing of a SAR? Or is it any alert that has been investigated longer than 10 days? In our consulting practice, we have seen that the definition of a "good alert" varies depending on the type of business and surveillance scenarios being employed. The key to defining a "good alert" is working directly with the compliance group and having well-defined alert disposition policies and procedures.
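The labeling step above can be sketched as a small disposition-mapping function. A minimal sketch, assuming hypothetical disposition codes and a 10-day threshold — in practice these come from the institution's own disposition policies, not a standard:

```python
# Hypothetical sketch of a "good alert" labeling rule. The disposition
# codes and the 10-day threshold are illustrative assumptions; each
# institution defines these with its compliance group.

RELEVANT_DISPOSITIONS = {"SAR_FILED", "CTR_FILED", "ESCALATED"}

def is_good_alert(disposition: str, days_investigated: int) -> bool:
    """Treat an alert as investigation-worthy if it led to a filing or
    escalation, or consumed substantial analyst time."""
    return disposition in RELEVANT_DISPOSITIONS or days_investigated > 10
```

Encoding the definition as code like this forces the compliance group to make the disposition policy explicit before any tuning begins.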

Phase 3 — Historical data collection and baseline performance review

While it sounds simple, the historical data collection phase starts with the initial collection of historical alerts and follows with the gathering of supporting alert data: transactions, profiles and events. Because this process may require pulling large volumes of data from disparate systems, more time is often spent on data gathering than on the analytics themselves. The goal is to build a set of data that can be used as input into the tuning and evaluation phases. How much data is appropriate? It depends on the scenario and techniques applied. Generally a minimum of two to three months of historic alerts is needed, while some scenarios and techniques may require as much as six to 12 months of data.
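The assembly step can be sketched as a join between historical alerts and supporting account profiles. A minimal sketch under assumed field names (every field and value below is illustrative, not from the article):

```python
# Illustrative sketch: assembling a modeling dataset by joining historical
# alerts to per-account transaction profiles. All field names and values
# are assumptions for illustration.

from datetime import date

alerts = [
    {"alert_id": 1, "account": "A1", "alert_date": date(2009, 6, 1)},
    {"alert_id": 2, "account": "A2", "alert_date": date(2009, 6, 3)},
]
profiles = {
    "A1": {"avg_cash_in": 5200.0, "wire_out_count": 4},
    "A2": {"avg_cash_in": 900.0, "wire_out_count": 0},
}

def build_dataset(alerts, profiles):
    """Attach each alert's supporting profile data; alerts whose account
    profile is missing are dropped (itself a data-quality signal)."""
    rows = []
    for a in alerts:
        p = profiles.get(a["account"])
        if p is not None:
            rows.append({**a, **p})
    return rows

dataset = build_dataset(alerts, profiles)
```

In a real project the profiles would be aggregated from the transaction systems for the same lookback window as the alerts, which is where most of the data-gathering time goes.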

Phase 4 — Apply a variety of tuning techniques

There is no "one" magic technique that you can apply across the enterprise. We generally apply traditional statistical techniques like logistic regression and compare the results with more advanced data mining techniques like decision trees and neural networks. Taking this "champion/challenger" approach allows the business analyst to select the most appropriate technique based upon the business problem. Scenario tuning is a "test-and-learn" process that requires a detailed understanding of the investigation process as well as good data analysis skills. For smaller institutions, iterative "what-if" analysis may be all that is required to eliminate redundant work items, while for larger institutions with a higher-risk profile, a more sophisticated approach is expected. The keys to applying tuning techniques are having quality historic data and using rigorous back-testing to review the before and after results of the tuning techniques.
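The champion/challenger back-testing idea can be sketched in a few lines: replay each candidate rule over labeled historical alerts and compare alert volume against recall of the known good alerts. The sample data and candidate thresholds below are illustrative assumptions:

```python
# Minimal champion/challenger sketch: back-test candidate alerting rules
# against labeled historical alerts. Sample data and thresholds are
# illustrative assumptions, not the article's data.

alerts = [  # (cash_in, wire_out, is_good)
    (6500, 1200, False), (7000, 1500, False), (9000, 5000, True),
    (12000, 7000, True), (6100, 1100, False), (8000, 2000, False),
]

candidates = {
    "champion (current)": lambda c, w: c > 6000 and w > 1000,
    "challenger (tuned)": lambda c, w: c > 8500 and w > 4000,
}

def back_test(rule, history):
    """Replay a rule over history: how many alerts it fires, and what
    fraction of the known good alerts it still catches (recall)."""
    fired = [(c, w, g) for c, w, g in history if rule(c, w)]
    good_total = sum(1 for _, _, g in history if g)
    good_caught = sum(1 for _, _, g in fired if g)
    return len(fired), good_caught / good_total

for name, rule in candidates.items():
    volume, recall = back_test(rule, alerts)
    print(f"{name}: {volume} alerts, recall {recall:.0%}")
```

The same harness works whether the "rule" is a threshold pair, a regression score cutoff or a tree, which is what makes like-for-like comparison of techniques possible.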

Phase 5 — Integrate, deploy and monitor

Once the tuning method has been selected, it needs to be enacted to enhance your existing AML monitoring system. For simplistic techniques like parameter threshold changes, this can usually be done through the scenario interface, while applying an analytic approach requires a more structured process. Analytic scoring processes involve a data preparation step to accumulate the required inputs for scoring; the model execution step, which actually scores an alert; and finally, applying business rules to take advantage of the score. While this may sound involved, in practice, integrating analytic scoring processes with applications is straightforward. The scoring process can be done as a post-process to scenario execution; however, it can also be applied as a pre-processing step or even as a Web service, depending on the institution's requirements. Finally, as part of the monitoring process, analytic scoring models require periodic re-training. Normally models are retrained every 12 to 18 months or as needed when an institution's business profile changes.
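The three-step scoring post-process described above can be sketched as a small pipeline. Everything here is a hypothetical stand-in: the field names are assumptions, and `score_model` substitutes a hand-written scorer for a trained model:

```python
# Sketch of the three-step scoring post-process: (1) data preparation,
# (2) model execution, (3) business rules. All names and the scorer
# itself are illustrative stand-ins for a real trained model.

def prepare_inputs(alert, profile):
    """Step 1: accumulate the required model inputs for one alert."""
    return {"cash_in": alert["cash_in"],
            "wire_out": alert["wire_out"],
            "wire_out_count_90d": profile["wire_out_count_90d"]}

def score_model(inputs):
    """Step 2: stand-in scorer; a real deployment would execute the
    trained model (e.g., a decision tree or regression) here."""
    score = 0.0
    if inputs["cash_in"] > 6000:
        score += 0.4
    if inputs["wire_out"] > 1000:
        score += 0.3
    if inputs["wire_out_count_90d"] > 3:
        score += 0.3
    return score

def apply_business_rules(alert, score, threshold=0.6):
    """Step 3: route high-scoring alerts to the analyst work queue."""
    alert["risk_score"] = score
    alert["queue"] = "analyst" if score >= threshold else "auto_close_review"
    return alert

alert = {"id": 42, "cash_in": 9000, "wire_out": 5000}
profile = {"wire_out_count_90d": 5}
routed = apply_business_rules(alert, score_model(prepare_inputs(alert, profile)))
```

Run as a post-process to scenario execution, this pipeline leaves the existing monitoring system untouched; the same three functions could equally sit behind a Web service.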

Example tuning techniques

Here we have a simple scenario — cash deposits followed by wires out. For this example, let's assume the scenario simply creates an alert if Cash In is greater than $6,000 followed by a sum of Wires Out greater than $1,000 over a five-day time period. Based on a sample, we have 150 alerts, of which five have been determined to be "relevant" or investigation-worthy. You can see the baseline distribution of alerts in the scatter plot below (figure 2).
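One minimal reading of this scenario in code, assuming a simple transaction layout (the data structure and sample values are illustrative, not the article's):

```python
# A minimal sketch of the example scenario: flag an account when a cash
# deposit over $6,000 is followed within five days by wires out summing
# to more than $1,000. The transaction layout is an assumption.

from datetime import date, timedelta

def scenario_alert(txns, cash_min=6000, wire_min=1000, window_days=5):
    """txns: list of (date, type, amount), type 'CASH_IN' or 'WIRE_OUT'."""
    for d, t, amt in txns:
        if t == "CASH_IN" and amt > cash_min:
            window_end = d + timedelta(days=window_days)
            wires = sum(a for d2, t2, a in txns
                        if t2 == "WIRE_OUT" and d <= d2 <= window_end)
            if wires > wire_min:
                return True
    return False

txns = [
    (date(2009, 7, 1), "CASH_IN", 7500),
    (date(2009, 7, 3), "WIRE_OUT", 800),
    (date(2009, 7, 4), "WIRE_OUT", 600),
]
```

Note that the thresholds are keyword parameters, which is what makes the "what-if" tuning discussed next a matter of re-running the same rule with different values.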

Figure 2 — Baseline Alert Distribution

The traditional approach is to iteratively change parameters until we reach the desired alert reduction or start missing investigation-worthy alerts. We call this the "what-if" approach. Here we change the scenario parameters Cash In from $6,000 to $8,500 and Wire Out from $1,000 to $4,000. This change will eliminate roughly 30 percent of alerts and still not miss any of the "relevant" ones. The pros of this approach are that it is simple to understand and provides immediate results. The cons are that iterative parameter tuning addresses one scenario at a time and not the alert population as a whole. Finally, parameter tuning can be fairly risky given that surveillance rules use "hard cutoff" points, which make them easy to over-tune.

Figure 3 — Iterative “What-If” Parameter Tuning

Figure 3 shows the result of iterative "what-if" parameter tuning. The scenario's parameters are tuned to the point that they don't miss any "good alerts." The problem is that you have to be careful not to "over-tune" the scenario. For example, if we changed the parameters to the mean plus three standard deviations outlier method that many endorse, we would change our Cash In parameter to $11,000 and Wire Out parameter to $6,000. We would reduce the alert volume by 90 percent, but we would miss three of the five "relevant" alerts.
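The mean-plus-three-standard-deviations method is easy to compute, which is part of its appeal — and of its danger on skewed data. A sketch with an illustrative skewed sample (not the article's data):

```python
# Sketch of the range-based (mean + 3 standard deviations) outlier
# method discussed above. The sample values are illustrative and
# deliberately skewed to show the failure mode.

import statistics

def outlier_threshold(values):
    """Classic mean + 3*stdev cutoff; assumes (often wrongly) that the
    variable follows a normal distribution."""
    return statistics.mean(values) + 3 * statistics.pstdev(values)

cash_in = [6200, 6500, 7000, 7500, 8000, 9000, 30000]  # skewed sample
cutoff = outlier_threshold(cash_in)

# One large value inflates both the mean and the standard deviation, so
# in this sample even the $30,000 deposit falls below the cutoff and
# would never alert.
below_cutoff = [v for v in cash_in if v <= cutoff]
```

This is the normality assumption problem in miniature: on a skewed population the cutoff drifts so high that genuinely suspicious activity sits comfortably underneath it.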

Figure 4 — Range-Based Outlier Detection

[Figure 2 scatter plot: Wire_Out vs. Cash_In; legend: False Positives, Good Alerts]

[Figure 3 scatter plot: Wire_Out vs. Cash_In; legend: False Positives, Good Alerts, Eliminated Alerts]

[Figure 4 scatter plot: Wire_Out vs. Cash_In; legend: False Positives, Good Alerts, Eliminated Alerts]

Figure 4 illustrates the risk of missing "relevant" alerts through the application of a range-based outlier detection technique. In the last few years, a number of articles have been written about using simple statistical justification for scenario parameter setting. This tuning method has the advantage of being simple to implement, interpret and explain. However, this method does not take into account the interactions between variables and may eliminate behavior that is truly suspicious. The problem is that, besides ignoring the interaction between variables, parameter setting based on this type of approach assumes that numeric variables follow a normal distribution. For example, if the population of outgoing wire transactions is skewed, then the results may be questionable.

Finding a more effective approach

A more effective approach is to apply analytics to predict which alerts are investigation-worthy. For this example, we train a decision tree to identify investigation-worthy alerts based on the transaction profiles for each account. A decision tree is a common data mining technique that predicts the value of a target, in our case "good alert," based on several input variables: the transaction and event profile.

Once trained and tested, the decision tree can be appended to the alert-generation process. The decision tree will produce a risk score; the higher the score, the higher the risk. Using the original scenario parameters of $6,000 and $1,000 plus a risk score, we can reduce the false positives in this example by 60 percent and still identify all of the "relevant" alerts. Figure 5 visually highlights the results of applying a decision tree to predict "good" alerts. Alerts that exceed a prescribed risk score will be triaged into the analyst's work queue.

Figure 5 — Risk-Based Scoring Process Predicting "Relevant" Alerts

"The reason why predictive analytics is far superior to other tuning methods is that predictive analytics can encode much more information about an account or customer's activity than a scenario alone."
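A real project would train a full tree on many profile features with a library such as scikit-learn; the stdlib-only sketch below trains a one-level tree (a decision stump) on synthetic alerts, just to show how a trained split turns into a risk score. All data and thresholds here are illustrative assumptions:

```python
# Hand-rolled illustration of decision-tree risk scoring. A one-level
# tree (decision stump) is trained from scratch on synthetic alerts;
# a real project would use a library and far richer features.

def train_stump(rows, labels, features):
    """Try every feature/threshold split and keep the one with the
    largest purity gap between its two leaves.
    Returns (feature, threshold, score_below, score_above)."""
    best, best_gap = None, -1.0
    for f in features:
        for t in sorted({r[f] for r in rows})[:-1]:
            above = [lab for r, lab in zip(rows, labels) if r[f] > t]
            below = [lab for r, lab in zip(rows, labels) if r[f] <= t]
            s_above = sum(above) / len(above)  # share of good alerts above
            s_below = sum(below) / len(below)  # share of good alerts below
            gap = abs(s_above - s_below)
            if gap > best_gap:
                best, best_gap = (f, t, s_below, s_above), gap
    return best

def risk_score(stump, row):
    """Score an alert: the share of good alerts in the leaf it lands in."""
    f, t, s_below, s_above = stump
    return s_above if row[f] > t else s_below

rows = [
    {"cash_in": 6500, "wire_out": 1200}, {"cash_in": 7000, "wire_out": 1500},
    {"cash_in": 9000, "wire_out": 5000}, {"cash_in": 12000, "wire_out": 7000},
    {"cash_in": 6100, "wire_out": 1100}, {"cash_in": 8000, "wire_out": 2000},
]
labels = [0, 0, 1, 1, 0, 0]  # 1 = investigation-worthy ("good alert")

stump = train_stump(rows, labels, ["cash_in", "wire_out"])
```

Appended after scenario execution, `risk_score` plays the role described above: alerts whose score exceeds a prescribed cutoff are triaged into the analyst's queue, the rest routed for review.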

Real-world example

In 2009, a large retail financial institution enlisted us to apply advanced analytics in an effort to reduce the bank's AML false positives. The institution has more than 1,500 financial centers in 10 states and manages more than $150 billion in assets. The BSA/AML group employs a full-time staff of nearly 50 investigators, processes more than 3.5 million transactions daily and currently investigates more than 70,000 alerts annually. The institution had applied a number of pragmatic approaches to reduce irrelevant alerts, including segmenting rules by line of business, aggregating alerts and distributing commercial alerts on a monthly basis. Each of these "low-hanging" approaches was effective in reducing alert volume by 15-20 percent.

A project was launched during the summer of 2009, and initial results have been impressive. Multiple strategies were employed to address varying business objectives. For example, one strategy was enacted to maximize SAR retention rates, another to maximize workload reduction, and a third balanced SAR retention rates with workload reduction while maintaining the quality of investigations. The BSA/AML department wanted to reduce irrelevant work items with minimal loss of SARs. Based on our conservative estimate, the institution will eliminate 47 percent of redundant and irrelevant work items while retaining 99 percent of SARs filed. The institution will save approximately $1 million annually, increase the efficiency of its investigations and allocate its staff to higher-quality investigations.

Data quality and availability critical for success

One of the most important aspects of this approach is data quality and data availability. In the case study above, the institution had several years' experience with an automated monitoring system. They were confident in the results of the system and in the consistency of investigative processes practiced by their investigations team. Compliance management had a high degree of confidence that work items were being disposed of properly. If investigations data is reliable, the models will provide more accurate results.

Notice that we refrain from the term false positive in the context of AML monitoring. We prefer relevant versus irrelevant work items as the more accurate framing. It is important to structure your disposition nomenclature so that relevant work items will include more than SAR or Currency Transaction Report (CTR) filings. Often an event is unusual, meriting further review but not a SAR. The process of bucketing good and bad alerts usually involves the application of business logic to prepare the data for analysis. If your institution is in the process of implementing a case management system, the ability to analyze and learn from case histories should be a consideration during the workflow design and enactment.

Another consideration is the ability to support and maintain this sort of methodology. Where do you find people with the requisite skill sets? Most likely, your database marketing or credit risk department has business analysts or quantitative analysts with the necessary statistical background to review and modify models as appropriate. This process is also a good candidate for software as a service (SaaS), given that no personally identifiable information is exchanged and the review process is typically performed annually.

Conclusion

There have been numerous articles on applying statistics to reduce false positives through scenario parameter tuning. These approaches are a good first step in reducing irrelevant work items; however, institutions need to understand the risks associated with missing truly anomalous behavior. In the current economic climate, the emphasis on cost reduction has never been greater, yet compliance departments cannot cut costs in a manner that increases reputational risk. We know that applying predictive analytics is a defensible methodology that has been used in clinical trials studies, fraud detection, warranty analysis, telecommunications churn and database marketing applications. It is time the compliance industry adopted more sophisticated techniques to reduce the cost of compliance and mitigate risk. A logical evolution of a risk-based approach is to learn from past investigations and behaviors so that employees can be allocated optimally. The key is to apply consistent processes and methods that will stand the test of internal audit and regulatory scrutiny. The data never lies!

David Stewart, CAMS, director of the Financial Crimes Practice, SAS, Cary, NC, USA, [email protected]

Michael Ames, principal consultant, SAS Professional Services, Cary, NC, USA, [email protected]

[Figure 5 scatter plot: Wire_Out vs. Cash_In; legend: Eliminated Alerts, Good Alerts, Predicted Alerts]

