What does high performance analytics (HPA) mean to modelers?

Interpretation: subscripts on decision variables are getting cheaper!

Gains in performance boost computer-based modeling capabilities

We have witnessed a 1,000-fold improvement in peak flops (floating point operations per second) every ten years for the past three decades.  For those unfamiliar with Moore's Law: Gordon Moore, a fellow UC Berkeley grad and former CEO of Intel, predicted in his 1965 paper that the number of transistors on a computer chip would double about every two years.  Combined with faster clock speeds, that trend has produced improvements in chip performance that have taken us to the brink of exascale computing (that's ten to the eighteenth power, or a quintillion, flops) and billion-way concurrency!

My trusty Pickett slide rule, circa the '60s

As a college freshman in 1969, armed with a slide rule, I never imagined that this level of computing capacity would exist in my lifetime -- not in my wildest dreams.  Allow me to share a personal story that illustrates the impact of high performance analytics (HPA) on decision-makers and problem-solvers.  I hope it will foster a deeper appreciation of the impact this technological advancement will have on the way business leaders gain knowledge to develop and execute strategies and make key decisions.  HPA will surely help them meet or exceed their corporate goals.

 

Balance sheet analytics in the 80's

In 1985, as a balance sheet management analyst, I developed strategies to engineer a target balance sheet over an 18-month planning horizon.  A primary tool was a large-scale financial optimization system that pulled a half gigabyte of data from all of the bank's transaction systems (commercial loans; swaps, collars, and caps booked by the investment bank; Eurodollar placements and takings; treasuries; agencies; term repos; reverses; other capital markets securities; consumer certificates of deposit; jumbo and liability management CDs; financial futures; and so on).  It also accepted interest rate forecasts for all key market indices supplied by the bank's economics unit, along with risk preferences based upon executive management's risk appetite.

The objective function was to maximize net interest income (NII) plus realized capital gains/losses plus capital appreciation/depreciation.  I will not go into the constraint descriptions, but they were considerable.  Since the model was a temporal one, cash flows needed to be preserved, while purchases and sales of securities were permitted only in the first six months of the eighteen-month horizon.  There were also non-linear risk constraints that were varied to generate an efficient frontier of risk-return trade-offs.  Strategy choice was a function of the resulting pay-off matrix under different economic scenarios and the corporate risk appetite (the tangency of the A/L Management Committee's indifference curve with the efficient frontier).
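For readers who like to see the shape of such a model, here is a minimal sketch in Python using the PuLP modeling library (we actually worked in a mainframe modeling language back then).  The instrument names, coefficients, and opening position below are hypothetical placeholders, and the real system carried far more constraints; the sketch only shows the objective, the cash-flow preservation across periods, and the restriction of trading to the first six months.

# Minimal sketch only: instruments, coefficients, and opening position are hypothetical.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

instruments = ["treasury", "agency", "term_repo"]   # hypothetical categories
periods = range(18)                                 # 18-month planning horizon

# Placeholder coefficients: monthly NII rate, realized gain on sale, appreciation
nii  = {i: 0.007 for i in instruments}
gain = {i: 0.010 for i in instruments}
appr = {i: 0.002 for i in instruments}

prob = LpProblem("balance_sheet_strategy", LpMaximize)
hold = LpVariable.dicts("hold", (instruments, periods), lowBound=0)
buy  = LpVariable.dicts("buy",  (instruments, periods), lowBound=0)
sell = LpVariable.dicts("sell", (instruments, periods), lowBound=0)

# Objective: NII + realized capital gains/losses + capital appreciation/depreciation
prob += lpSum(nii[i] * hold[i][t] + gain[i] * sell[i][t] + appr[i] * hold[i][t]
              for i in instruments for t in periods)

for i in instruments:
    # hypothetical opening position of 100 for every instrument
    prob += hold[i][0] == 100 + buy[i][0] - sell[i][0]
    for t in periods:
        if t > 0:
            # cash flows preserved: positions roll forward with purchases and sales
            prob += hold[i][t] == hold[i][t - 1] + buy[i][t] - sell[i][t]
        if t >= 6:
            # purchases and sales permitted only in the first six months
            prob += buy[i][t] == 0
            prob += sell[i][t] == 0

prob.solve()   # uses PuLP's bundled CBC solver

The risk constraints that we varied to trace out the efficient frontier are omitted entirely; the point is only to show how the decision variables, objective, and temporal balance constraints fit together.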

The problem size, expressed in terms of the matrix generated from the modeling language and data inputs for the optimizer, was 30,000 rows by 15,000 columns, with a non-zero coefficient density of 0.55 percent.  It took 50 minutes to generate the matrix and 10 minutes to solve it on an IBM 3033 mainframe running the OS/MVS operating system in batch mode.  In those days, great care was taken to manage problem sizes that could otherwise chew up a lot of CPU cycles on expensive computing platforms and pose unacceptably long run-times.
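As a quick back-of-the-envelope check, here is what that density means in plain numbers, using only the figures quoted above:

rows, cols, density = 30_000, 15_000, 0.0055
cells = rows * cols                  # 450 million potential coefficients
nonzeros = cells * density           # roughly 2.5 million actual non-zero coefficients
print(f"{nonzeros:,.0f} non-zeros out of {cells:,} matrix cells")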

Due to the long processing times, we were compelled to make the models as simple as possible.  For example, an instrument was defined by the combination of security type and maturity, instead of treating the category of security as one dimension and maturity as a second.  This cut down on the number of decision variables, but it also limited our ability to interrogate the model and consider maturity structure independently of the category of security.  We made many other compromises in the problem formulation that made the application more challenging to work with in many respects.  Those spillover effects included difficulties in data management, constraint specification, infeasibility tracing, model documentation, problem modification, and verification (both of the problem specification and the optimal solution).  Despite those and many other barriers, we managed to develop some great balance sheet strategies, and the few basis points of improvement we achieved annually for a super-regional bank with $32 billion in assets more than covered our technology investment, staffing costs, and overhead by a factor of two (that's an ROI exceeding 100%).  We verified the value added by contrasting it against both a benchmark "do nothing" strategy and a naive approach based on past performance.  We always asked whether the juice was worth the squeeze, and before continuing with my balance sheet formulation story, let me digress for a short tale about the bank's trading operation.

Regarding the bank's trading book: I recall the chairman coming down to my office one afternoon.  He shut the door and told me he was wondering whether our bond trading activities were really delivering for the bank.  So he asked me to run a simulation in which we turned over the bond portfolio every two years (i.e., replaced 4 1/6 percent of the portfolio every month) over a five-year period, based on purchases at historical Fed auction prices.  He wanted the results on his desk the next morning.  I reported to the CEO the next morning, accompanied by my manager, the bank's chief economist.  The answer ratified that our traders were consistently beating the market by a statistically significant, and financially material, margin that was well worth the costs of technology and performance-based compensation.
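For the curious, a simulation along those lines can be sketched in a few lines of Python.  Everything below is a toy: the yield path and the desk's monthly returns are made-up placeholders, whereas the actual study used historical Fed auction prices and the traders' real results.

# Toy benchmark: mechanically replace 1/24 of the portfolio each month at the
# prevailing "auction" yield, then compare against the desk's actual returns.
months = 60                                                    # five-year study window
auction_yields = [0.080 + 0.0004 * m for m in range(months)]   # placeholder yield path
desk_monthly_returns = [0.0075] * months                       # placeholder desk results

turnover = 1 / 24                        # 4 1/6 percent of the portfolio per month
portfolio_yield = auction_yields[0]
benchmark_monthly_returns = []
for m in range(months):
    # blend the newly purchased slice in at that month's auction yield
    portfolio_yield = (1 - turnover) * portfolio_yield + turnover * auction_yields[m]
    benchmark_monthly_returns.append(portfolio_yield / 12)      # crude monthly return

excess = sum(d - b for d, b in zip(desk_monthly_returns, benchmark_monthly_returns))
print(f"Cumulative excess return of the desk over the benchmark: {excess:.2%}")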

High performance analytics (HPA) in our current decade

Fast forward thirty years, and we are now getting very close to achieving exascale computing.  For the problem just described, the implications are dramatic.  We can now refine the model with enough subscripts on the decision variables to bring it much closer to the business reality, defining a new problem framework with realistic ranges on the individual dimension limits.
 
Additional subscripts better capture the problem and facilitate solution analysis

You may wonder why a modeler might include the legal location of the entity holding a security as a dimension in the framework.  Well, it turns out there are different tax treatments for various securities in different (yes, even neighboring) states in the US.  If you consider cross-border holdings, then geo-political risk and foreign exchange risk come into play.  Euro-denominated securities could be put on a USD-equivalent basis, but if they are still denominated in euros when a market disruption or failure occurs, the USD cash-equivalent value may change.  Sure, you could handle the tax treatment issue at the ETL, or data input, stage by putting all securities on a pre-tax or after-tax equivalent basis.  However, you would then be unable to perform "what-if" simulations or post-optimality/parametric analysis on an optimization problem that is memory resident with billion-way concurrency; instead, you would need to reload the "big data."
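Here is a toy example of that trade-off, with made-up yields and tax rates: baking the tax adjustment in at the ETL stage freezes it before the model ever runs, whereas carrying legal location as a subscript keeps the tax rate a parameter that a what-if or parametric run can vary without reloading the data.

# Hypothetical numbers for illustration only.
muni_yield = 0.045
state_tax_rate = 0.07

# Option A: convert to a taxable-equivalent yield during ETL -- the adjustment is
# frozen into the data, so changing the tax assumption means reloading everything.
loaded_equivalent = muni_yield / (1 - state_tax_rate)
print(f"ETL-converted taxable-equivalent yield: {loaded_equivalent:.3%}")

# Option B: keep legal location as a model subscript, so tax treatment remains a
# parameter that a memory-resident "what-if" run can vary on the fly.
def taxable_equivalent(yield_, tax_rate):
    return yield_ / (1 - tax_rate)

for rate in (0.05, 0.07, 0.09):      # parametric sweep over tax assumptions
    print(f"tax rate {rate:.0%}: taxable-equivalent yield {taxable_equivalent(muni_yield, rate):.3%}")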

You may also wonder why the problem framework should include the utility dimension.  Well, that has to do with risk appetite.  With the proper formulation, this mathematical programming problem can allow course corrections on which securities to buy, sell, and hold, based on the extent to which profit plan targets have been met.  The appetite for risk would likely increase after goals have been met by a sufficient margin, say 110%, 150%, 200%, and 250% of plan.  In this case four breakpoints would be introduced into the objective function, which would in effect quadruple the problem size.  But remember, decision variable subscripts are getting cheaper!
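A minimal sketch of the breakpoint idea, with purely hypothetical appetite weights: the risk weight steps up as plan attainment crosses each breakpoint, and in the optimization each segment carries its own copy of the trading variables, which is why the problem grows as described.

# Hypothetical risk-appetite schedule keyed to profit-plan attainment.
breakpoints = [1.10, 1.50, 2.00, 2.50]   # fraction of the profit plan achieved
appetite    = [0.25, 0.50, 0.75, 1.00]   # placeholder risk weights per segment

def risk_weight(plan_attainment, baseline=0.10):
    """Return the risk weight in force at a given plan-attainment ratio."""
    weight = baseline                     # appetite before the first breakpoint is reached
    for bp, w in zip(breakpoints, appetite):
        if plan_attainment >= bp:
            weight = w
    return weight

print(risk_weight(0.95))   # 0.10 -- plan not yet met
print(risk_weight(1.30))   # 0.25 -- first breakpoint crossed
print(risk_weight(2.10))   # 0.75 -- third breakpoint crossed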
 
In the first scenario of the high performance balance sheet management formulation, I let each dimension take on its largest practical size.  The result was nearly three quadrillion decision variables.  Scenario two is a more conservative formulation, still very realistic for most mid-to-larger sized firms, which comes in at roughly nine billion decision variables.
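Counts like those come straight from multiplying the dimension sizes together.  The ranges below are illustrative stand-ins, not the actual limits from my framework, but they show how quickly the product climbs into the quadrillions in an aggressive formulation and the billions in a conservative one.

# Illustrative dimension ranges only -- not the actual limits from the framework.
from math import prod

scenario_1 = {   # aggressive: exhaust practical sizes on every dimension
    "security_category": 500,
    "maturity_bucket":   360,     # monthly buckets out to 30 years
    "legal_entity":      200,
    "currency":          140,
    "utility_segment":   5,
    "planning_month":    60,
    "economic_scenario": 1_900,
}
scenario_2 = {   # conservative, mid-to-larger sized firm
    "security_category": 100,
    "maturity_bucket":   120,
    "legal_entity":      25,
    "currency":          10,
    "utility_segment":   5,
    "planning_month":    18,
    "economic_scenario": 33,
}

for name, dims in (("scenario 1", scenario_1), ("scenario 2", scenario_2)):
    print(f"{name}: {prod(dims.values()):,} decision variables")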
 

HPA Takeaways

 
In summary, high performance analytics is not just about speed. It enables modeling that:
  • can consume massive volumes of data
  • is closer to the business reality
  • can encompass a vast array of possibilities
  • can surface whole families of solutions + associated trade-offs
  • can identify and portray the connectedness of solutions
  • fosters a far deeper understanding of the solution and its sensitivity to model assumptions and uncontrollable forces
I will have more to say from a higher-level perspective in a blog post on Friday that addresses what HPA means to CEOs.  Please stay tuned!
 
tags: HPA

