Citisoft Blog

Rimes’ Vijay Mayadas on Data and Decision-Grade AI

Written by Chris Mills | Mar 3, 2026

We recently sat down with Vijay Mayadas, CEO of Rimes, to explore how the firm is responding to shifting data demands. Vijay shares how Rimes is delivering an environment that transforms fragmented data into a trusted backbone.

Watch the full interview

This interview is part of Citisoft's Solution Market Perspective Series, which explores how leading service providers are responding to shifting client needs across the investment management ecosystem.

At Citisoft, we work across the market with asset owners, asset managers, and asset servicers. One of the hottest topics at the moment is data and AI. Given your background as an executive leading industry innovation and as a recent grad in data science, what do you see as the most urgent innovation priority for asset management businesses today?

At Rimes, we service about 350 clients across asset management, asset owners, sovereign wealth funds, the whole gamut. We service them with data pipelines at scale that help them with their most mission-critical workflows.

We have talked to a lot of clients about the next three, five, seven years – and obviously AI is top of mind for all our clients – but the one common theme that we hear is you can't have a good AI strategy without a strong data strategy.

The data foundations have to be rock solid for you to be able to unlock the benefits of AI. A lot of our conversations really dig into the current state of data platforms and data technologies, and what it will take to evolve data foundations to be able to reap the benefits of the next level of AI automation.

Vijay, you also engage with your clients’ specific challenges that might undermine a firm's readiness for AI. Where are firms getting it wrong? But probably more importantly, where are they getting it right? 

I’ll share a bit of an anecdote from a client conversation I had recently. I was up in Boston with a long-only equity manager and we asked them, what are your top priorities? They said the only priority we have that is higher than AI is generating alpha. That was pretty profound to us.

And then we got into a dialogue around: tell us about your AI strategy. The first thing they said was, we're very focused on data because we can't have an AI strategy without data. One of the challenges they have is a tremendous amount of fragmentation across all of their data platforms, and they are very focused on consolidating and simplifying their data layer and generating data sets that are AI-ready – data that can be plugged into AI workflows so they can trust the AI models and start to trust agents as those agents take on more workloads.

So that was an example of a firm that I think is doing the right things right. Number one, they recognise that AI is a top priority. The second is they recognise that you need good data foundations to get AI right. And third, they are very focused on developing kind of what I call a trusted backbone of data.

The firms that are probably going to struggle are those who have not really internalised how important it is to fix the backbone of data, and who are making additional investments in AI ahead of understanding what it takes to fix that data backbone and ensure they have a trusted, connected, explainable layer of data.

Let's dig into that a little bit more. How do you help those firms move from experimentation into practice, into real world decisions? How does that really work in practice?

Let’s take a use case or two – for example, AI-assisted portfolio construction or exception management in the middle office. So obviously what you do is you start off by understanding how you can build a proof of concept to demonstrate the economic benefits of using AI and automation – let's say to reduce the amount of manual work involved in resolving a middle-office exception. You do that in the sandbox, asking:

  • Does the AI actually behave?
  • Does it do the right thing?
  • Do you trust it?

And then after that you ask: OK, what's it going to take for me to really roll this out into production? How much trust am I willing to place in AI making decisions, acting on a more autonomous basis?

That is the big leap that firms are trying to understand how to make. And in a highly regulated industry, there are still a lot of things that need to be understood and done in order to have the level of comfort where you can start to enable more autonomous, AI-driven workflows. But that is the direction of travel for the industry. So in terms of the reality of how things are getting done today, there is a tremendous amount of proof-of-concept work going on in the industry.

And a lot of work [is going into] trying to understand what the next step is to roll these things out into production. And that's not just a technology or a data problem; it's also a compliance and a governance problem. What are the right governance standards around how you use AI internally to drive more autonomous decision making? But at the end of it, whether it's governance, compliance, or scalable AI workflows, nothing really works unless you have a very strong data foundation.

As AI tools and agents become more accessible to a wider set of stakeholders – this is not just the technology team, this is top-to-bottom, left-to-right – firms are increasingly opening up to that wider set of stakeholders. How are clients adapting to that usage and accommodating that broader engagement across the business?

Clients want a lot of options as it relates to how users experience data. Once you have a trusted backbone of data, you need to create as much optionality as possible for the client in terms of how they want to interact with that data. It could be through a UX, obviously with a human. It could be via APIs, with a machine, or it could be via agents. The goal is to enable data to be surfaced in a way that it can be used by a wide variety of consumption mechanisms, from human to agent. The work that needs to be done to make that as easy and frictionless as possible is to ensure you don't have to change the data or the way the data is expressed.

As the different types of users change from agent to human, you want a simple, adaptable, highly flexible API that can be pointed to an agent or human or workflow the customer wants to use on that data. And that's what I mean when I say you need to be able to solve for a broad range of user experiences using trusted data sets and minimise the cost and complexity of integrating across all of those different users. 
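The pattern described here – one trusted dataset surfaced unchanged through a human UX, an API, or an agent – can be sketched in a few lines of Python. This is a hypothetical illustration only; none of the class or method names below come from Rimes' actual platform or API.

```python
from dataclasses import dataclass

# Hypothetical sketch: one trusted dataset, many consumption surfaces.
# The underlying data never changes per consumer; only the surface does.

@dataclass(frozen=True)
class Holding:
    fund: str
    asset: str
    weight: float

class TrustedBackbone:
    """Single source of truth exposed through multiple adapters."""

    def __init__(self, holdings: list[Holding]):
        self._holdings = holdings

    def as_api_json(self) -> list[dict]:
        """Surface for machine/API consumers."""
        return [vars(h) for h in self._holdings]

    def as_table(self) -> str:
        """Surface for a human-facing UX."""
        rows = [f"{h.fund:10} {h.asset:10} {h.weight:>7.2%}" for h in self._holdings]
        return "\n".join(rows)

    def as_agent_context(self) -> str:
        """Surface for an AI agent: compact, self-describing text."""
        return "; ".join(f"{h.fund} holds {h.weight:.0%} {h.asset}" for h in self._holdings)

backbone = TrustedBackbone([Holding("FundA", "AAPL", 0.25),
                            Holding("FundA", "MSFT", 0.10)])
print(backbone.as_api_json())
print(backbone.as_agent_context())
```

The design point is that adding a new consumer (a new agent, a new workflow) means adding one adapter method, never reshaping the trusted data itself.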

We know that regulators are and will be demanding ever more explainability and traceability in any and all AI decision making across the business. And with the broader user base we talked about earlier, surely this introduces greater risk and compliance challenges. How are firms building that governance, that lineage, that explainability into their AI workflows?

We are helping firms build a single explainable layer of truth across a vast number of different data sets – whether it's private asset data, ESG data, or referential pricing data – making sure that they are all grounded by a trusted backbone of data, where they can view those datasets as the single source of truth. And they can apply very granular rules around lineage, auditability, and explainability at the individual data element level.

That is the key thing you have to get right. And that's what we're focused on helping clients with. 

So if you had to bet on one thing that would separate the leaders from the rest in asset management by let's say 2030, what would it be?

Look, I think the tools are there, the ability to access AI is there – you might even want to call it the democratisation of AI: the ease of access to the most advanced models and the most advanced AI tools.

Not all firms are going to be at the cutting edge of adopting these tools, right? But the ultimate proof points around the business outcomes of those tools are alpha generation, your cost structure, and your profitability – and it's not going to be an option to not use this type of AI capability.

But I think the firms that are really going to stand out in, say, 2030 are those who have invested in the cleanest, most trusted, and most connected data backbone. Those projects take quite a bit of time, and it's like building the foundations of a building: if you don't get the bottom right and you build on top of it, it can be quite hard to reverse engineer your foundations. It's a lot easier to switch out an AI algorithm or an AI model; it's a lot more difficult to re-engineer the underlying plumbing of your data sets.

So the firms that are going to be best positioned in 2030 are the ones who, today, are deeply thinking about what that data plumbing, that data architecture, that trusted backbone looks like.

So thinking more immediately about your impact with clients and their ability to prepare for that future: what's a practical step they can take now?

One of the most common conversations we're having with our clients is: what does a trusted data backbone actually look like? So the way we help them is we say, give us one of your most complex data problems.

As an example, it might be mastering a complex fund hierarchy structure. It might be linking private asset holdings to public equity holdings and providing a total portfolio view linked to other types of data sets – for example, ESG data sets – bringing together disparate data sets and mastering them.

Give us [Rimes] that most complex data problem and we will demonstrate how to solve it in a way that's aligned with building a trusted, connected backbone. So how do you master those data sets in the right way? What platform do you move those data sets onto to ensure you have lineage, trust, explainability, and governance? How do you deal with compliance around those data sets? How do you solve an issue if, say, an anomalous data field somehow ends up in the data set and you have to audit how that anomaly arose and how you ultimately resolve it?
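The anomaly-audit question above rests on element-level lineage: every write to a data field is logged with its source, so an anomaly can be traced back to the step that introduced it. The sketch below is a minimal illustration of that idea; the names are hypothetical and do not describe Rimes' actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of element-level lineage: each change to a data
# element is recorded with the pipeline step or vendor feed that wrote it,
# so an anomalous value can be audited back to its origin.

@dataclass
class LineageEvent:
    element: str        # e.g. "FundA.AAPL.price"
    value: float
    source: str         # step or feed that wrote the value
    at: datetime

class LineageLedger:
    def __init__(self):
        self._events: list[LineageEvent] = []

    def record(self, element: str, value: float, source: str) -> None:
        self._events.append(
            LineageEvent(element, value, source, datetime.now(timezone.utc)))

    def audit(self, element: str) -> list[LineageEvent]:
        """Full history for one data element, oldest first."""
        return [e for e in self._events if e.element == element]

ledger = LineageLedger()
ledger.record("FundA.AAPL.price", 189.50, "vendor_feed")
ledger.record("FundA.AAPL.price", 1895.0, "fx_adjustment")  # anomalous 10x jump
trail = ledger.audit("FundA.AAPL.price")
# The trail shows the anomaly entered at the "fx_adjustment" step.
print([(e.value, e.source) for e in trail])
```

In practice this kind of ledger would sit alongside the data platform's governance layer, but even this toy version shows why element-level granularity matters: the audit answers "who wrote this value, and when", not just "which file changed".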

All of those, let's call it data mastering, getting data ready for AI workflows, all of that can be prototyped quite quickly using the tools we have at Rimes for some of the most complex investment data management workflows and data sets out there. So that's a very tangible next step.

It's a high-learning, low-risk next step because a client can then say: look, I understand how we got from A to B, where A is a bunch of disparate, very messy data sets and B is a highly unified, deeply connected set of data with semantic layers embedded that help AI models understand it. I understand how to go from A to B, and maybe they do some prototyping or testing of AI workflows on that data set.

So you do an end-to-end prototype and then you have conviction as a client that yes, I can take this model and scale it more into production. Let's go after the next messy data problem and run it through the same type of process.

Watch the full interview

For more information on what other solutions providers are doing in this space, read our Solutions Market Perspective Series