People Like VAs Like GPUs

People like variable annuities to the tune of $1.61 trillion worth of assets in these insurance products. But the sheer amount of compute power required to develop and risk manage these products is not so likeable. At least until you bring GPUs into the picture.

With their smooshing together of life insurance, mutual funds, and tax-deferred death benefits, variable annuities demand generous allocations of compute power to make them efficiently manageable. But people like to be able to combine a financial guarantee with some type of life insurance cover, and hang the computational consequences. They like death benefits, lifetime income streams, spousal continuations, and the plethora of other guarantee features available, like lifestyle funds, daily ratchets, maximum anniversary value, and so on. They like variable annuities to the tune of the $1.61 trillion worth of assets these insurance products attained in the first quarter of this year – an all-time high. The market in the USA alone represents $150 billion worth of business to the insurance industry, so, like it or not, there is no choice but to commit to feeding the beast. The only flexibility lies in how smart the choices are in attaining the requisite allocations of compute power.
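
The guarantee features listed above are easy to state but expensive to value at scale. As a deliberately simplified illustration – not code from any vendor platform – a death benefit with a ratchet/maximum anniversary value feature pays the shortfall between the highest account value seen on any policy anniversary and the account value at death, which is why valuing it means simulating whole account paths, policy by policy, scenario by scenario:

    #include <cstdio>

    // Simplified illustration (not vendor code): a guaranteed minimum death
    // benefit with a "ratchet" / maximum-anniversary-value feature. The
    // guarantee base is the highest account value observed on any policy
    // anniversary; at death the insurer tops the account up to that base.
    __host__ __device__
    double gmdb_ratchet_payoff(const double* anniversary_values, int n_years,
                               double account_value_at_death)
    {
        double guarantee_base = 0.0;
        for (int t = 0; t < n_years; ++t)
            if (anniversary_values[t] > guarantee_base)
                guarantee_base = anniversary_values[t];
        double shortfall = guarantee_base - account_value_at_death;
        return shortfall > 0.0 ? shortfall : 0.0;   // insurer pays the shortfall
    }

    int main()
    {
        // A toy path of account values on five policy anniversaries.
        double anniversaries[5] = {100.0, 112.0, 125.0, 97.0, 104.0};
        double payoff = gmdb_ratchet_payoff(anniversaries, 5, 90.0);
        printf("GMDB ratchet payoff: %.2f\n", payoff);   // 125 - 90 = 35
        return 0;
    }

The path dependence is the point: the guarantee cannot be read off the terminal account value alone, which is what pushes valuation towards large Monte Carlo runs.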

Peter Phillips is the Managing Director of the Annuities Solutions Group (ASG) at Aon Benfield Securities. ASG provides clients with hedge program advisory services, risk management consulting, and solutions. The group’s PathWise platform is a variable annuity risk management system supporting new product development, financial and regulatory reporting, enterprise risk management activities, and hedge program management and reporting. The PathWise platform is an integrated high-performance computing solution based on parallel computing technologies, such as NVIDIA Tesla GPU (graphics processing unit) hardware, and a purpose-built interactive grid computing middleware. The platform was developed to significantly accelerate simulation-based risk management, delivering real-time performance levels and support for next-generation forecasting and decision-making.

When it comes to modeling variable annuities, a number of different disciplines have to be brought to bear: besides the economic modeling used to support risk hedging, statutory modeling also comes into play for, say, the United States, whilst regulatory modeling – where the emphasis is more on calculating reserves and capital – applies in other jurisdictions. Phillips compares this level of modeling complexity to that of the residential mortgage market, suggesting that with variable annuities it may even be greater: “Like the mortgage market, they cannot be priced on a pure arbitrage standalone basis because there’s a large component of policyholder behavior that drives the mechanics of the cash flow calculations and ultimately the economic, reserve, and capital numbers you get when modeling these risks. In the mortgage market, consumer behavior drives prepayment dynamics, which have a key role to play in the valuation and risk management of these securities. For variable annuities there are some very difficult calculations to perform and we have to perform them repeatedly and across millions of policyholders.”

“In the old days of the variable annuity business,” Phillips says, reflecting on the not-so-distant past, “people were lucky to get an economic value on their book once a week: that was cutting edge. The world’s really moved on from that point, and people want to know what their book is on an intra-day basis now.” In order to do this you have to have an enormous amount of computational power at your disposal, and you have to be good at unlocking that computational power too. Phillips explains that GPUs were very attractive to ASG because the problem they deal with is an “embarrassingly parallel valuation problem in terms of finance and mathematics.” In other words, they have a problem that is well suited to the design strengths of GPUs.
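
ASG’s production kernels are proprietary, so the following is only a minimal sketch of what “embarrassingly parallel” means here, under assumptions chosen purely for brevity (a single fund following geometric Brownian motion and a return-of-premium guarantee): each CUDA thread values one (policy, scenario) pair and never communicates with any other thread, which is why throughput scales almost linearly with the number of cores and GPUs available.

    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>
    #include <curand_kernel.h>

    // Sketch only: one thread per (policy, scenario) pair. Each thread
    // simulates a single fund path and writes the discounted guarantee
    // shortfall; no thread depends on any other, hence "embarrassingly
    // parallel".
    __global__ void value_policies(const double* premium, double* payoff,
                                   int n_policies, int n_scenarios, int n_years,
                                   double rate, double vol,
                                   double guarantee_ratio, unsigned long long seed)
    {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if (idx >= n_policies * n_scenarios) return;

        int policy = idx / n_scenarios;          // which contract this thread owns
        curandState rng;
        curand_init(seed, idx, 0, &rng);         // independent stream per thread

        double s = premium[policy];              // account starts at the premium
        for (int t = 0; t < n_years; ++t) {
            double z = curand_normal_double(&rng);
            s *= exp((rate - 0.5 * vol * vol) + vol * z);   // one-year GBM step
        }
        double guarantee = guarantee_ratio * premium[policy];
        double shortfall = guarantee > s ? guarantee - s : 0.0;
        payoff[idx] = exp(-rate * n_years) * shortfall;      // discounted claim
    }

    int main()
    {
        const int n_policies = 1024, n_scenarios = 1000;
        const int n = n_policies * n_scenarios;

        std::vector<double> premium(n_policies, 100.0);   // toy in-force block
        double *d_premium, *d_payoff;
        cudaMalloc(&d_premium, n_policies * sizeof(double));
        cudaMalloc(&d_payoff, n * sizeof(double));
        cudaMemcpy(d_premium, premium.data(), n_policies * sizeof(double),
                   cudaMemcpyHostToDevice);

        int block = 256, grid = (n + block - 1) / block;
        value_policies<<<grid, block>>>(d_premium, d_payoff, n_policies,
                                        n_scenarios, 10, 0.03, 0.2, 1.0, 1234ULL);
        cudaDeviceSynchronize();
        // ...average payoff[] per policy (e.g. with a reduction) to get values.

        cudaFree(d_premium);
        cudaFree(d_payoff);
        printf("simulated %d policy-scenarios\n", n);
        return 0;
    }

A per-policy average of payoff[] then gives the guarantee value for each contract; sensitivities follow by re-running the same kernel with bumped inputs.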


Having taken this path, it was very clear to the team at Aon Benfield that there were huge computational advantages: one GPU produced speed improvements of 50 to 100 times over a high-end quad-core chip, a delightful discovery. It was another question, however, to move from “playing” with GPUs to working with them in production. The challenge for Phillips’ team was: “I know how to do this with one GPU, but how could I do it across a hundred of them? That’s a tricky thing to do successfully, it involves middleware, and so we made the investment in terms of people and time to develop our own purpose-built middleware and our own tools to work more successfully with GPUs, because they can be very difficult to work with in our business. That’s been our experience.”
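
The middleware itself is the proprietary part, but the scheduling step it has to solve can be sketched: partition the in-force block across every visible device, launch on each, and wait for them all. The sketch below is an assumption, not PathWise – it reuses the hypothetical value_policies kernel from the earlier sketch (the two would be compiled together) and omits everything that makes middleware hard: load balancing, fault tolerance, result aggregation, and streaming market data.

    #include <algorithm>
    #include <vector>
    #include <cuda_runtime.h>

    // Forward declaration of the hypothetical kernel from the earlier sketch;
    // the two files would be compiled and linked together.
    __global__ void value_policies(const double* premium, double* payoff,
                                   int n_policies, int n_scenarios, int n_years,
                                   double rate, double vol,
                                   double guarantee_ratio, unsigned long long seed);

    // Sketch of the scheduling problem: split a block of policies evenly
    // across all GPUs in the machine and launch on each device in turn.
    void value_block_multi_gpu(const std::vector<double>& premiums, int n_scenarios)
    {
        int n_devices = 0;
        cudaGetDeviceCount(&n_devices);
        if (n_devices == 0) return;
        int n_policies = static_cast<int>(premiums.size());
        int per_device = (n_policies + n_devices - 1) / n_devices;

        for (int dev = 0; dev < n_devices; ++dev) {
            int first = dev * per_device;
            int count = std::min(per_device, n_policies - first);
            if (count <= 0) break;

            cudaSetDevice(dev);                  // subsequent calls target this GPU
            double *d_prem, *d_pay;
            cudaMalloc(&d_prem, count * sizeof(double));
            cudaMalloc(&d_pay, static_cast<size_t>(count) * n_scenarios * sizeof(double));
            cudaMemcpy(d_prem, premiums.data() + first,
                       count * sizeof(double), cudaMemcpyHostToDevice);

            int n = count * n_scenarios, block = 256;
            value_policies<<<(n + block - 1) / block, block>>>(
                d_prem, d_pay, count, n_scenarios, 10, 0.03, 0.2, 1.0, 1234ULL);
            // ...reduce d_pay per policy, copy results back, and free the buffers.
        }
        for (int dev = 0; dev < n_devices; ++dev) {
            cudaSetDevice(dev);
            cudaDeviceSynchronize();             // wait for every device to finish
        }
    }

    int main()
    {
        std::vector<double> premiums(100000, 100.0);   // toy in-force block
        value_block_multi_gpu(premiums, 1000);
        return 0;
    }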

Sometimes it pays to arrive late to a party. In this context, Aon Benfield was not saddled with legacy code designed to run on CPUs. “If you want to move from CPUs to GPUs, it’s a non-trivial task,” Phillips says. “Basically you have to rewrite your code base. And if you have a bunch of installations, you could imagine the headaches that would entail. But for us, we understood and were very familiar with the workflows in the industry.” Phillips points out that one thing about variable annuities is that the model designs are always changing. The features are always changing, the regulatory reporting is always changing, and people want more analysis, and more complex analysis, so a solution has to support these difficult business needs: “This includes seeing into your models, so the closed paradigm of a black box doesn’t really work so well in our industry anymore.”

You also need to orchestrate all these complex calculations more nimbly. Old systems might be file based, which means it would be very hard to do any type of simulation calculation moving through time on a large scale in a reasonable amount of time. “For us we thought that was a very important thing to do out of the box, and we are able to say, OK, we’ll use the best technology that is available today and we are going to leverage it to fit the business model that we know and are familiar with. What does that mean? For us it was developing tools that were nimble and tied to workflows that were transparent. We did that, but it represented a very significant investment in time and effort for us to get to that point.”

With adoption by large insurers like Canada’s Standard Life, the benefits of the change are evident in the feedback Aon has received, Phillips reports. “It’s disruptive technology; the feedback we get is very positive. People are really excited about their ability to control and own their own modeling, and they are excited about all the things that they can do with the added computational power of GPUs, and with virtual private GPU clouds, which allow you to perform complex simulations to support regulatory reporting like Solvency II on complex products like variable annuities. So the pricing people are happy, the hedging people are happy, the valuation people are happy.”

In the insurance industry today these areas inside companies tend to be siloed. The hedging program will have one system, the valuation people who calculate those reserves and capital numbers have a different system, and the pricing and product development people will have another one too. “This contributes to the balkanization of internal risk management practice,” Phillips observes, “which I think has led to some trouble for some direct writers. Our approach to this is a holistic one, with one integrated system designed to do all things from day one. We want the system or platform to provide a glass box for users so that they can see every step of their calculation – the order of the calculations, with precision – and really own that process. And we are big believers in audit trails and automation too.” PathWise is able to provide live, seriatim, intraday Greeks for hedging programs. Users can now see their asset and liability values jointly, using the latest capital market information. “That’s a paradigm shift in terms of not only the computational power but in terms of orchestration too. You have live pricing feeds coming in, you have asset positions, you have trading going on during the day, and you have to be able to successfully marry all these components together, which is a non-trivial task but ultimately a worthwhile one, and the feedback we are getting is terrific, in terms of lowering operational risk, reducing overall IT costs, and in terms of improved hedge program performance. We are also getting positive feedback about the ability to leverage GPUs to do the fantastically difficult calculations that are required for Solvency II and C3 Phase II.” The industry parlance here is “stochastic-on-stochastic” calculations: one not only has to value the liability at time zero but also value the liability and assets through time, calculate all their sensitivities, and simulate the hedge program strategy.

“That’s an order of magnitude different problem in terms of complexity and computational power,” Phillips says, “but if you design the approach in an appropriate fashion, you are able to do these things very nimbly.”
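
To make “stochastic-on-stochastic” concrete: an outer set of real-world paths steps the world forward, and at every node of every outer path an inner risk-neutral Monte Carlo revalues the liability (and, by bumping, its delta) so the hedge can be rebalanced. The sketch below is an illustration only, not PathWise: the inner valuation is a tiny CPU Monte Carlo standing in for the GPU kernel, the outer dynamics are a single lognormal fund, and the hedge accounting is grossly simplified. The point is the nesting, which multiplies the work by outer paths × time steps × inner paths.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <random>

    // Stand-in for the inner risk-neutral valuation. In a PathWise-style system
    // this is where the GPU Monte Carlo runs; a tiny CPU Monte Carlo keeps the
    // sketch self-contained. Values a put-like guarantee on the fund.
    double inner_value(double fund, double strike, double rate, double vol,
                       double years, int n_paths, std::mt19937& rng)
    {
        std::normal_distribution<double> z(0.0, 1.0);
        double sum = 0.0;
        for (int i = 0; i < n_paths; ++i) {
            double st = fund * std::exp((rate - 0.5 * vol * vol) * years
                                        + vol * std::sqrt(years) * z(rng));
            sum += std::max(strike - st, 0.0);
        }
        return std::exp(-rate * years) * sum / n_paths;
    }

    int main()
    {
        const int n_outer = 100, n_steps = 10, n_inner = 2000;
        const double s0 = 100.0, strike = 100.0, rate = 0.03, vol = 0.2;
        std::mt19937 rng(42);
        std::normal_distribution<double> z(0.0, 1.0);

        double total_pnl = 0.0;
        for (int p = 0; p < n_outer; ++p) {            // outer real-world paths
            double fund = s0, pnl = 0.0, delta = 0.0;  // delta: hedge held over the step
            for (int t = 1; t <= n_steps; ++t) {       // annual time steps
                // Real-world step for the fund (placeholder dynamics).
                double fund_next = fund * std::exp(0.05 - 0.5 * vol * vol + vol * z(rng));
                pnl += delta * (fund_next - fund);     // P&L on the hedge position

                // Inner valuations at the new node: base and bumped, for delta.
                double years_left = n_steps - t;
                double v  = inner_value(fund_next,        strike, rate, vol,
                                        years_left, n_inner, rng);
                double vb = inner_value(fund_next * 1.01, strike, rate, vol,
                                        years_left, n_inner, rng);
                delta = (vb - v) / (0.01 * fund_next); // liability delta by bumping
                fund = fund_next;
            }
            total_pnl += pnl;
        }
        printf("average hedge-asset P&L per outer path: %.4f\n", total_pnl / n_outer);
        printf("inner Monte Carlo paths run: %lld\n",
               2LL * n_outer * n_steps * n_inner);
        return 0;
    }

Even at these toy sizes the inner simulations dominate, which is why this class of problem is the natural target for the GPU capacity described above.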

All the thinking and effort that went into modeling variable annuity liabilities has been extended to other assets and liabilities too. “Now we have a whole asset library based on GPUs,” says Phillips. “Why does this matter? Well, we got a call from one client that wants us to do collateral modeling for the swap book in real time. When a trader wants to add a swap position with a counterparty, we model the daily collateral over the lifetime of the deal. Using our modeling tools and GPUs we can value 500,000 swaps per second per GPU. So now people can do fantastic things in other areas and leverage all the thinking and effort that went into PathWise, without having to be a low-level C or CUDA expert.”
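
The 500,000-swaps-per-second figure is Aon’s own; the kernel below is only an assumed illustration of why swap valuation maps so naturally onto a GPU. Each thread prices one vanilla interest-rate swap independently, here against a single flat zero curve; a production asset library would of course use full curves, payment schedules, and day counts.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Assumed illustration: one thread per swap. Each swap is described by its
    // notional, fixed rate, and number of remaining annual payments; its value
    // is the fixed leg minus the floating leg, discounted on a flat zero curve.
    __global__ void value_swaps(const double* notional, const double* fixed_rate,
                                const int* n_payments, double* value,
                                int n_swaps, double zero_rate)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n_swaps) return;

        double fixed_leg = 0.0;
        for (int k = 1; k <= n_payments[i]; ++k)
            fixed_leg += fixed_rate[i] * notional[i] * exp(-zero_rate * k);

        // On a flat curve the floating leg of a standard swap equals
        // notional * (1 - discount factor at the final payment date).
        double float_leg = notional[i] * (1.0 - exp(-zero_rate * n_payments[i]));
        value[i] = fixed_leg - float_leg;   // value to the fixed-rate receiver
    }

    int main()
    {
        const int n = 4;
        double notional[n] = {1e6, 2e6, 5e5, 1e6};
        double fixed[n]    = {0.031, 0.028, 0.035, 0.030};
        int payments[n]    = {5, 10, 7, 3};

        double *d_not, *d_fix, *d_val;
        int *d_pay;
        cudaMalloc(&d_not, n * sizeof(double));
        cudaMalloc(&d_fix, n * sizeof(double));
        cudaMalloc(&d_val, n * sizeof(double));
        cudaMalloc(&d_pay, n * sizeof(int));
        cudaMemcpy(d_not, notional, n * sizeof(double), cudaMemcpyHostToDevice);
        cudaMemcpy(d_fix, fixed,    n * sizeof(double), cudaMemcpyHostToDevice);
        cudaMemcpy(d_pay, payments, n * sizeof(int),    cudaMemcpyHostToDevice);

        value_swaps<<<1, 32>>>(d_not, d_fix, d_pay, d_val, n, 0.03);

        double value[n];
        cudaMemcpy(value, d_val, n * sizeof(double), cudaMemcpyDeviceToHost);
        for (int i = 0; i < n; ++i) printf("swap %d: %.2f\n", i, value[i]);
        return 0;
    }

Because each swap is independent and the per-swap work is a short loop, throughput in this kind of kernel is limited mainly by how fast the portfolio data can be streamed to the device.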

Phillips hopes to see PathWise used both outside the VA industry and within it to tackle the most difficult problems, like Solvency II, and to leverage the ability to perform stochastic-on-stochastic calculations to test different hedging strategies and assess the impact of those strategies on capital and reserves. “We are seeing this in our business today, with clients coming to us with these difficult stochastic-on-stochastic problems. It doesn’t matter in what field – it could be banking, it could be insurance – but people should consider leveraging GPUs for their business problems.”

Jim Brackett and Chad Scheuster are Principals at Milliman, the consultancy behind MG Hedge, a widely used system in the insurance industry for hedging the risks inherent in variable annuity-type contracts, specifically the guarantees that companies sell with them.


In addition to the market-consistent risk-neutral valuation that companies do on a daily basis to dynamically hedge those liabilities, Milliman have also adapted the model through time to address regulatory calculations such as RBC C-3 Phase II capital and VACARVM, which is a reserve requirement. “We’ve also adapted the model to do things like simulate a hedge program into the future, so when companies first come into the business, one of their natural questions is: what is this going to look like, what is the hedge program going to look like going forward? That’s a natural question, and one way to answer it is to do a stochastic projection and give them a feel for how the hedge program will perform over either historical paths or, ideally, a bunch of stochastically generated real-world paths.”

The genesis of Brackett and Scheuster’s work on the models, branded under the name MG Hedge, dates back to 2002. The goal was to perform Monte Carlo simulations on a seriatim basis, i.e. contract by contract. At that point in time, given all the Monte Carlo simulations as well as the sensitivities to market conditions the team were measuring, this was an enormously computationally demanding prospect, and so, out of the gate, the first implementation of the MG Hedge software was built and configured to run on clusters of CPUs.

The team implemented the second generation of its own scheduling and clustering software around 2004, and that is still the primary technology solution for deploying MG Hedge both outside Milliman and internally. Milliman’s business is oriented in two directions: the first is licensing the MG Hedge software to clients such as life insurers to use inside their own environments; the second is an outsourcing operation that runs the software on clients’ behalf and also exercises some control over the trading activities and other elements of the dynamic hedging programs that use the models.

Since 2004 the volume of computation Milliman have had to face has grown massively. The team noted the emergence of GPUs on the horizon around 2007 and began to look at them through an R&D lens. “It looked very attractive,” recalls Brackett, “and we actually did some work moving a plain vanilla options pricer to GPU around 2007 and saw an enormously impressive pick-up – about a 100 times performance ratio from GPU to CPU – but we did walk away from that with some concerns at the time. Lack of floating-point precision was a serious concern, and another concern I took quite seriously was just the development learning curve – some of the debugging facilities around the development process for GPU then. We thought there was great promise but felt there were reservations.”
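
Milliman’s 2007 prototype is not public, but the kind of “plain vanilla options pricer” that ports almost mechanically to CUDA looks something like the generic sketch below: closed-form Black–Scholes, one thread per option. It is written in single precision deliberately, since the lack of usable double precision on pre-Fermi hardware is exactly the concern Brackett describes.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Generic illustration: closed-form Black-Scholes call prices, one thread
    // per option. Single precision (float) mirrors the pre-Fermi limitation
    // discussed in the text.
    __device__ float norm_cdf(float x)
    {
        return 0.5f * erfcf(-x * 0.70710678f);   // Phi(x) = erfc(-x / sqrt(2)) / 2
    }

    __global__ void bs_call(const float* spot, const float* strike,
                            const float* vol, const float* maturity,
                            float rate, float* price, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        float sd = vol[i] * sqrtf(maturity[i]);
        float d1 = (logf(spot[i] / strike[i])
                    + (rate + 0.5f * vol[i] * vol[i]) * maturity[i]) / sd;
        float d2 = d1 - sd;
        price[i] = spot[i] * norm_cdf(d1)
                 - strike[i] * expf(-rate * maturity[i]) * norm_cdf(d2);
    }

    int main()
    {
        const int n = 3;
        float s[n] = {100.f, 100.f, 120.f}, k[n] = {100.f, 110.f, 100.f};
        float v[n] = {0.2f, 0.25f, 0.2f},   t[n] = {1.f, 0.5f, 2.f};

        float *ds, *dk, *dv, *dt, *dp;
        cudaMalloc(&ds, n * sizeof(float)); cudaMalloc(&dk, n * sizeof(float));
        cudaMalloc(&dv, n * sizeof(float)); cudaMalloc(&dt, n * sizeof(float));
        cudaMalloc(&dp, n * sizeof(float));
        cudaMemcpy(ds, s, n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(dk, k, n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(dv, v, n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(dt, t, n * sizeof(float), cudaMemcpyHostToDevice);

        bs_call<<<1, 32>>>(ds, dk, dv, dt, 0.03f, dp, n);

        float p[n];
        cudaMemcpy(p, dp, n * sizeof(float), cudaMemcpyDeviceToHost);
        for (int i = 0; i < n; ++i) printf("call %d: %.4f\n", i, p[i]);
        return 0;
    }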

All of these concerns were addressed by the emergence of the Fermi architecture a couple of years later, when Milliman saw what Fermi offered in terms of double precision and the overall increase in computing capacity. Brackett and Scheuster also had the opportunity to attend NVIDIA’s GTC conferences. “They were absolutely phenomenal in terms of influencing our thinking and giving us insight into what the upside is and what some of the downsides might be. Attending GTC in 2009 – our first year – really opened our eyes, and we walked out of that process more committed to CUDA. That manifested itself in some R&D on our models, and we rebuilt a prototypical MG Hedge liability valuation model using CUDA.”

The team started with a new code base, with a thin layer of abstraction to facilitate compiling both for native CPU and for GPU, in order to do a direct comparison of performance. The initial performance improvements observed were in the neighborhood of three times. “I think that is pretty common for a very raw first cut,” recalls Brackett, “but for some it was disappointing after the buzz around GTC, so we continued to work on refactoring our kernels and playing, essentially, in that code base, to the point that we saw the performance ratios climb and climb and climb – 3x, 10x, 30x, 50x, 100x – until ultimately, you saw, Chad, that it stabilized between 75x and 150x.”
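
Milliman do not describe their abstraction layer in detail, but one common CUDA pattern that achieves the same effect – offered here only as an assumed sketch, not their design – is to mark the model functions __host__ __device__ and drive them either from a kernel or from an ordinary CPU loop, so the same arithmetic can be checked and timed on both sides.

    #include <cmath>
    #include <cstdio>
    #include <cuda_runtime.h>

    // The "model": a single function compiled for both host and device, so
    // identical arithmetic can be run and compared on CPU and GPU.
    __host__ __device__
    double model_value(double fund, double guarantee, double rate, int years)
    {
        double v = guarantee > fund ? guarantee - fund : 0.0;   // placeholder liability
        for (int t = 0; t < years; ++t) v *= exp(-rate);        // discount year by year
        return v;
    }

    // GPU path: one thread per contract.
    __global__ void value_on_gpu(const double* fund, double* out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = model_value(fund[i], 100.0, 0.03, 10);
    }

    // CPU path: the same model, called from an ordinary loop.
    void value_on_cpu(const double* fund, double* out, int n)
    {
        for (int i = 0; i < n; ++i) out[i] = model_value(fund[i], 100.0, 0.03, 10);
    }

    int main()
    {
        const int n = 1 << 20;
        double* fund = new double[n];
        double* cpu  = new double[n];
        double* gpu  = new double[n];
        for (int i = 0; i < n; ++i) fund[i] = 80.0 + (i % 40);

        value_on_cpu(fund, cpu, n);                 // reference result on the CPU

        double *d_fund, *d_out;
        cudaMalloc(&d_fund, n * sizeof(double));
        cudaMalloc(&d_out, n * sizeof(double));
        cudaMemcpy(d_fund, fund, n * sizeof(double), cudaMemcpyHostToDevice);
        value_on_gpu<<<(n + 255) / 256, 256>>>(d_fund, d_out, n);
        cudaMemcpy(gpu, d_out, n * sizeof(double), cudaMemcpyDeviceToHost);

        printf("contract 0: cpu = %.6f, gpu = %.6f\n", cpu[0], gpu[0]);  // should agree
        delete[] fund; delete[] cpu; delete[] gpu;
        cudaFree(d_fund); cudaFree(d_out);
        return 0;
    }

Timing value_on_cpu against value_on_gpu on the same inputs is then enough to reproduce the kind of raw CPU-versus-GPU ratios quoted above.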

The discussion with Milliman helps to illustrate why consultancies often play such an important role in the adoption of new technologies or in novel deployments of existing technology. The level of technicality involved can act as a barrier in this first phase of GPU adoption in finance and insurance, even for people who are already comfortable within a CPU infrastructure but are well aware that it is not optimal. That is an ideal opportunity for smart consultancies to come in, having studied the possibilities and the challenges, and translate them directly into specific problems at the client level.

“As we learned from experience, getting our feet wet with GPUs is probably straightforward,” says Brackett. “You can walk away with a 3x performance improvement with very little effort, and I think what’s sort of exciting about all of this is the common DNA in CUDA and the underlying hardware that is accessible to a wide audience. For some, getting a 3x performance improvement for very little engineering effort may still be a very attractive return on investment. I would like to think of Milliman at the other end of the spectrum, the end that recognizes it takes considerable investment, resources, and commitment to in some ways revisit the paradigm that we built for ourselves a decade ago. We worked very hard to decouple the software stack that is used for high-performance computing – scheduling, fault tolerance, reporting, computing, data modeling – all of that has been decoupled from our quantitative models, and we have different teams working on each. To really take maximum advantage of GPU we had to reintegrate elements of those, and I think companies that don’t have the culture to reintegrate those elements, or simply don’t have the expertise along these two traditionally very different trajectories, will not be able to reach that end of the spectrum. But nevertheless there is a spectrum, where parties at one end are happy to get the 3x and walk away, and others invest a lot of resources and get 150x performance – plus, of course, opening new doors to new models. We’re all working towards legitimizing GPU, and I hope that translates into technical performance improvements and continued support for the overall community as things move forward. That’s pretty exciting. It is accessible across a whole range of potential beneficiaries.”
