Asset Management Technology: Balancing Standardization with Innovation


We’ve all heard the buzzwords: “industry standard,” “best practice.” These terms are particularly prevalent these days as our industry gets caught in a conundrum: how do you reduce costs and operational risks through standardization while the front office is pushed to differentiate itself among peers? Differentiation, however, comes at a cost: the cost to innovate, the cost to support greater sophistication, higher ongoing technology costs, and the list goes on. So, how much standardization can you afford while still supporting the innovation and differentiation of investment strategies, product types, and asset classes required for your firm to stay competitive?

Let’s first take a step back and look at how the industry has evolved, to understand how solving this problem has changed over time.

Up to the Mid-1980s

Process

  • Processes were well defined and handled mostly manually.
  • Typewriters ruled the roost, and computers were a luxury few could afford.
  • Applications were accessed through a “green screen terminal.” Your investment accounting system ran on a mainframe chugging away in a room (or, more realistically, an entire floor) somewhere.
  • The investment process resembled an assembly line: one function was performed, and the work product was handed to the next person.

Result

  • There was tons of paper, with handwritten instructions, going everywhere (trade tickets, cash breaks, general ledgers, you name it…).
  • The business was not scalable, nor could you effectively mitigate your operational risks.
  • Mainframe systems were very rigid, offering few opportunities for configuration or customization, which heavily constrained the business.


Mid-1980s to Late-1990s

Evolution

  • Computers became mainstream, phasing out the typewriter and the “green screen terminal.”
  • Mainframe systems were still wildly popular but were now accessed through “terminal emulators” on your computer, while word processors and spreadsheets also gained in popularity.
  • This period also saw the birth of homegrown, custom-built applications (aka “side systems”) aimed directly at plugging functional gaps the mainframe systems couldn’t address quickly enough.

Result

  • Custom side systems quickly proliferated: they were easy to build, easy to maintain, and offered quick stop-gap solutions that gave the business the flexibility it needed to grow.
  • Side systems ranged from Access databases (with crudely designed user interfaces) that extended the capabilities of, say, your base accounting system, to Excel workbooks with Visual Basic macros that created files to facilitate bulk-load activities, perform quick-and-dirty recons, and handle other repetitive operational tasks. By the latter part of this era, side systems had grown in sophistication to .NET applications running against a back-end database hosted on a “sandbox” server somewhere.
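To make “quick and dirty recon” concrete, here is a minimal sketch, in Python for illustration, of the kind of position reconciliation those spreadsheet macros automated. The file layout and column names are hypothetical, not drawn from any particular system:

```python
import csv

def load_positions(path):
    """Read a CSV of positions (columns: security_id, quantity) into a dict."""
    positions = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            positions[row["security_id"]] = float(row["quantity"])
    return positions

def reconcile(accounting, custodian):
    """Compare two position maps and return the breaks (mismatched quantities)."""
    breaks = []
    for sec_id in sorted(set(accounting) | set(custodian)):
        a = accounting.get(sec_id, 0.0)
        c = custodian.get(sec_id, 0.0)
        if abs(a - c) > 1e-6:
            breaks.append((sec_id, a, c, a - c))
    return breaks
```

The output of `reconcile` is exactly the list of cash/position breaks an operations team would have chased down by hand in the earlier era.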


2000 to Mid-2000s

Evolution

  • Companies had an “oh $h*t” moment while preparing for Y2K. As they evaluated their architecture, they found a massive web of legacy applications (hobbling along) complemented by an endless list of custom-built side systems plugging various holes throughout the organization. There was no long-term vision, no product roadmap, etc., behind these side systems.

Result

  • With the uncontrolled proliferation of side systems, companies found themselves with a very complex architecture that was nearly impossible to support.


Mid-2000s to 2010ish

Evolution

  • We entered a new era for financial services technology firms (fintechs, as they’re sometimes called today). Responding to client demand and the growing popularity of server-based applications, fintechs spotted an opportunity and pounced on building more comprehensive solutions that combined all the ‘base’ functionality mainframes offered with the ‘customizable’ nature of homegrown systems.
  • Vendors hurried to make their systems as flexible and customizable as possible, but eventually the pendulum swung too far.

Result

  • Systems became increasingly difficult to implement, and post-implementation periods were often riddled with issues, outages, and other problems that would grind a business to a halt.
  • A highly flexible, highly customizable system proved extremely difficult to implement and support.


2010ish to Present Day

Evolution

  • We hit an inflection point.
  • Fintechs are still on a mission to provide a single solution that can replace multiple disparate applications. However, today we’re seeing more ‘plug and play’ modules (i.e., want to bolt on a derivatives module to your trading system? Yes, they can do that!).
  • We’re seeing the push for front-to-back solutions that extend the ‘plug and play’ capabilities further beyond traditional front, middle, and back siloes.
  • “Plug and play” takes two forms: 1) the ability to integrate different applications using out-of-the-box interfaces in a best-of-breed model, or 2) the ability for a single application to activate/deactivate modules that extend its functionality across siloes.
  • Most importantly, we’re seeing products that offer robust out-of-the-box functionality while putting the much-sought-after ‘configuration items’ at users’ fingertips, letting the business retain the agility it had with homegrown systems while gaining the stability of a mature, fully tested product.
  • We’re seeing data services cropping up that can get plugged directly into your platforms.

In short, plug and play is the name of the game. Your bespoke requirements are met through configuration rather than customization, whether that means configuring pre-built interfaces to string together best-of-breed systems or activating and configuring modules within an all-in-one solution.
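As an illustration of the second form, here is a minimal Python sketch (all class and module names are hypothetical) of how an all-in-one platform might expose functionality as modules switched on through configuration rather than code changes:

```python
class Module:
    """Base interface every plug-and-play module implements."""
    name = "base"

    def handle(self, trade):
        raise NotImplementedError

class EquitiesModule(Module):
    name = "equities"

    def handle(self, trade):
        return f"booked equity trade {trade}"

class DerivativesModule(Module):
    name = "derivatives"

    def handle(self, trade):
        return f"booked derivative trade {trade}"

class Platform:
    """Core system: which modules are active is pure configuration."""
    AVAILABLE = {m.name: m for m in (EquitiesModule, DerivativesModule)}

    def __init__(self, config):
        # config["modules"] lists the activated modules, e.g. ["equities"]
        self.active = {name: self.AVAILABLE[name]() for name in config["modules"]}

    def book(self, module_name, trade):
        if module_name not in self.active:
            raise ValueError(f"module '{module_name}' not activated")
        return self.active[module_name].handle(trade)
```

Bolting on derivatives is then a one-line change to the configuration, not a custom build.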


So back to the original question: how do we balance the need to standardize operating models and processes while discouraging a new surge in the stop-gap side systems required to support the growing complexity of the business?

Favor and leverage the trends that we’re seeing:

  • Cloud offerings where the vendor supports and runs your application, managing fixes, enhancements, and configurations in a standard and controlled manner.
  • If you prefer an all-in-one solution, look for applications that make it easy to expand functionality via modules that are simply turned “on” and configured to your needs.
  • Applications that integrate easily to data service providers.
  • If you prefer a best of breed suite, look for applications that are interoperable with other applications, either upstream or downstream.

Standardization is achieved by focusing on standardizing the “how” rather than the “what.” That is, “how” we integrate with market data is standardized, “how” we add functionality to an application via add-on modules is standardized, “how” we integrate best-of-breed applications is standardized, and so on.
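The standardized “how” maps naturally onto a shared interface. Here is a hypothetical Python sketch (vendor names and prices invented for illustration) in which every market data provider plugs in through the same contract, so swapping providers changes the “what” but never the “how”:

```python
from abc import ABC, abstractmethod

class MarketDataProvider(ABC):
    """The standardized 'how': every provider implements the same contract."""

    @abstractmethod
    def get_price(self, security_id: str) -> float:
        ...

class VendorAFeed(MarketDataProvider):
    # Hypothetical vendor; prices hard-coded for illustration.
    PRICES = {"AAPL": 190.0, "MSFT": 410.0}

    def get_price(self, security_id: str) -> float:
        return self.PRICES[security_id]

class VendorBFeed(MarketDataProvider):
    PRICES = {"AAPL": 190.05, "MSFT": 409.90}

    def get_price(self, security_id: str) -> float:
        return self.PRICES[security_id]

def value_position(provider: MarketDataProvider,
                   security_id: str, quantity: float) -> float:
    """Downstream code depends only on the interface, never the vendor."""
    return provider.get_price(security_id) * quantity
```

Because `value_position` depends only on `MarketDataProvider`, replacing one vendor feed with another requires no downstream changes, which is the essence of standardizing the “how.”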

By making your applications and architecture modular, flexible, and configurable, you build a supportable technology stack that can accommodate a variety of custom requirements, one that is easily reconfigured and expanded as business needs change over time.