Co-authored by Derek Pratt and Colleen Devine, with contributions from Jenny Mynahan
The meteoric rise of artificial intelligence (AI) may be dominating headlines, but for many operations teams, the real pressure lies in delivering interoperability across live, multi-vendor environments. As asset managers adopt increasingly hybrid technology environments and parallel operating models, the burden of making disparate systems work together in production keeps growing. With solutions optionality becoming a strategic priority, Colleen Devine and Derek Pratt explore what needs to happen for interoperability to hold up in practice.
Solutions optionality underpinned by interoperability – a new-look ecosystem prevails
Solutions optionality—put simply—is when buy-side firms prioritize solutions that integrate within an ecosystem, allowing them to choose, combine and evolve technologies. It is about maximizing choice and adaptability in the operating model, enabling organizations to respond quickly to evolving market demands and the latest transformation requirements.
“A lot of asset managers like to think that there is a single system, which can handle all of their operational processes, but unfortunately this is not the case,” says Colleen Devine, Managing Director, North America, Citisoft. Even where firms adopt front‑to‑back platforms, interoperability still depends on how that integrated core connects with complementary services, data providers, and client‑retained systems across the wider ecosystem.
In a multi-vendor model, front-to-back solutions are collections of interoperable components. Modern application architecture and data consumption patterns do not support a monolithic approach, especially in a world populated by Software-as-a-Service (SaaS) offerings and specialist data providers.
An interoperable backbone is therefore an absolute prerequisite. In large, complex organizations, this often shows up at the top of the house when different parts of the operating model rely on different core systems.
For example, firms may use a specialist risk and analytics platform in the front office while running trade processing and accounting through a separate middle‑office platform. When interoperability is not working in production, teams are forced to extract data from each system and reconcile it manually, introducing delay, inconsistency, and operational risk. A reliance on spreadsheets and ad‑hoc workarounds is often the clearest signal that the interoperability model is failing in live environments.
As more asset managers embrace solutions optionality, solution providers are also having to adapt by becoming more accommodating of clients’ interoperability demands.
“Historically, middle office outsourcing arrangements often required asset managers to adopt a provider’s core processing systems. While retaining certain systems—such as an OMS—has long been common, today’s difference is the expectation that interoperability is designed deliberately, rather than accommodated after the fact,” adds Devine.
In turn, service provider solution design teams are increasingly focused on how services can be delivered safely and consistently across system boundaries, not simply whether those boundaries can be supported.
This means service providers must design their solution and control models around interoperability itself, making explicit decisions about which services can be delivered safely across system boundaries, and where taking on additional scope would introduce operational or risk exposure.
Why the industry is doubling down on solutions optionality
Solutions optionality is not a new concept by any stretch, but there are several drivers accelerating its adoption.
Firstly, asset managers are growing in overall AUM terms—both organically and through M&A—but that growth brings operating‑model complexity, and with it, the gap between what the business needs and what a single platform can deliver out of the box tends to widen.
Transformation programs are forcing coexistence, not replacement
Increasingly, this is not just a function of growth or diversification, but of transformation programs themselves. Many firms are working towards a rationalized target state, but in practice, they are unable to fully adopt it. Core systems that are deeply embedded in the operating model—whether for investment, risk, or data—are often not going anywhere. As a result, firms are forced into hybrid states where the target architecture coexists with incumbent platforms, rather than replacing them outright.
In that context, solutions optionality becomes less about enabling new investment strategies and more about making that coexistence workable. It requires service providers to operate across system boundaries and support models where key components of the operating stack remain client‑retained. In some cases, this means providers must run services around systems they do not control, simply because those systems are the best fit for the client and cannot be displaced without introducing greater disruption.
Asset diversification amplifies operational complexity
Asset class diversification further compounds this. In recent years, firms have increasingly broadened their portfolios beyond public markets to include, for instance, private markets, esoteric products, and digital assets. While this expansion is typically additive rather than a rotation away from traditional asset classes, it introduces materially different data models, processing requirements, and lifecycle events that place new demands on the operating model, notes Derek Pratt, Director, Citisoft.
“The complexity isn’t about volume, it’s about variation. Different asset types behave differently across the investment lifecycle, and that increases the burden on systems, data flows, and operating processes that may have been designed with a narrower scope in mind," continues Pratt.
At this point, firms often find that the challenge is less about replacing platforms and more about how those platforms coexist. Supporting multi‑asset portfolios typically requires deliberate decisions about where specialized capability is needed, how data is normalized across the environment, and how responsibilities are distributed across systems—rather than relying on any single solution to adapt indefinitely.
For Pratt, this is often the point where firms realize they are no longer making a platform decision at all. “You can keep trying to force an existing system to do something it wasn’t designed for,” he notes, “but at some point, it becomes clearer that the issue isn’t which platform you’ve chosen. It’s how data moves across the environment, how it’s standardized, and whether different users can rely on it for their decisions.”
Regulation is another accelerant. As firms operate across multiple jurisdictions, differences in regulatory, tax, and reporting requirements can add pressure to the operating model—particularly where those requirements intersect with local data, accounting, or control frameworks. In practice, this often leads firms to introduce specialist capabilities alongside their core platforms, increasing the importance of interoperability across the wider ecosystem.
Making interoperability work
Achieving solutions optionality is contingent on interoperability working. The mistake many programs make is treating that as an integration exercise rather than an operating model discipline executed by the asset manager in partnership with their service providers.
Devine says clients expect transformational change programs to be delivered in a way that is timely and well risk-managed, ensures resilience, supports optionality, and keeps costs to a minimum. But for the process to be as frictionless as possible, a few things need to happen.
1. Define scope, ownership, and the "line" early
Before launching a program to enhance interoperability, firms need to be clear what business outcomes they are trying to enable, and then map out what is changing, who owns each boundary, and what “done” means in production.
For service providers, this often requires being clear about what they will not support, as much as what they will, to avoid rebuilding legacy complexity inside a new operating model, and to ensure those boundaries remain intact as client requirements evolve. This, in turn, means identifying gaps between expectations and delivery, and being explicit about which outcomes rely on system capability versus point solutions and workarounds.
Firms also need to be realistic about timeframes, particularly where interoperability is intended to support near‑term strategic initiatives.
“Timelines need to be shaped early, based on a clear understanding of scope, resourcing availability, and dependencies across the wider operating model. When providers and asset managers work closely upfront to align on what’s realistically achievable, it significantly reduces the risk of frustration and rework later on,” highlights Devine.
A practical reason is that delivery teams often discover hidden complexity in current state processes that have evolved over years, including bespoke reporting, legacy controls, and a long tail of downstream dependencies.
Transformational change projects are expensive undertakings, which may be a hard pill for some asset managers to swallow given their margins are already being squeezed by challenging market headwinds, rising operational and regulatory costs, and downward pressure on fees. Costs tend to rise fastest when scope expands through replication of current complexity rather than simplification. As Devine notes, discussions of current state processes can anchor programs to existing ways of working. Comments such as “that’s not how I do it today” can lead to complexity being rebuilt rather than reduced, driving both scope and timeline creep.
Being open from the start with stakeholders about cost impacts is non-negotiable. Optionality only creates value when firms can change components without rebuilding the surrounding operating model each time.
2. Secure executive sponsorship, then keep it active
All transformational change projects require C-suite sponsorship. A failure to get executive buy-in could result in funding drying up, or internal priorities shifting mid-program. In practice, sponsorship is what protects the roadmap when markets move, strategy changes, or competing initiatives fight for the same funding and talent.
Regular dialogue between the C-suite and the teams delivering the program is a key ingredient for success. Not all C-suite executives will be technologists by training, so domain experts need to provide high-level insights about what is happening and why. This is to ensure expectations are aligned to the real effort required to make interoperability survive production, including the skills, controls, and operational changes needed around the technology.
3. Get data integration right
Interoperability programs will count for little if data integration is not prioritized.
Irrespective of whether firms have a best-of-breed technology infrastructure in place, poor integration creates operational silos that lead to errors, inefficiencies, and operational cost overruns.
“It becomes a failure in the interoperability model if data is not joined up properly and teams are dumping data into, say, a portfolio management tool, or multiple spreadsheets, and grinding it all together to produce a report or analysis,” says Pratt. When that happens, the firm has bought optionality but not realized its benefits. Instead, it has created multiple sources of truth and pushed work onto end users.
To avoid these pitfalls, Pratt emphasizes spending time with users and understanding the data they need to access. That user lens matters because shadow IT is rarely created out of preference. It tends to emerge when users cannot access timely, trusted data through the core operating model.
Without interoperability, there is no solutions optionality. But interoperability is not complete once systems are connected: it is a living operating model that must continue to hold together in live environments, across system and service provider boundaries, and under real operational and control constraints.
A clearly defined scope, sustained executive sponsorship, and a user-led approach to data integration are what prevent optionality from slipping into manual workarounds. Together, they give firms the ability to evolve their ecosystem over time, rather than repeatedly rebuilding it.