Asset Management and the Observer Effect


As one of my business contacts is fond of saying, “if it gets measured, we change it”, and this is very relevant to Mike Despo’s recent blog An Ounce of Prevention is Worth a Pound of Cure. Any business function should be aware of the quality of service it delivers and the cost of that delivery, and should use this information to identify areas for improvement, which may actually lie upstream rather than within the function itself.

As an illustration, some years ago I ran several business functions at an asset manager, including the team responsible for the configuration of monthly and quarterly client reports. After every month-end and quarter-end reporting cycle we went through a detailed review process:

  1. We produced a draft report showing the total number of reports produced and distributed within the client deadline, and the number that missed it. We listed every report that had to be re-run, together with our preliminary understanding of the cause of the re-run. This was circulated to all the areas involved in data production, report production, checking and distribution. Each re-run, even one that did not delay distribution to the client, represents a cost to the asset manager that should be preventable. (A minimal sketch of this tally appears after the list.)
  2. The draft report was reviewed with all of those other areas, and the actual reason for each re-run and late report was agreed.
  3. A final report was produced covering report and re-run numbers, the causes of issues, graphs of progress in eliminating re-runs, and the changes required to prevent recurrence. Progress on fixing previously identified issues was included in the discussions and in the report.
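
To make the draft report concrete, here is a minimal sketch in Python of the kind of tally it contained. The record fields, names and values are illustrative assumptions of mine, not the actual system we used.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass
class ClientReport:
    """One client report for a reporting cycle; all field names are illustrative."""
    report_id: str
    client_deadline: date
    distributed_on: date
    reruns: int = 0                 # how many times the report had to be re-run
    rerun_cause: str | None = None  # cause agreed at the review meeting

def cycle_summary(reports: list[ClientReport]) -> dict:
    """Tally on-time vs late distribution and the agreed re-run causes."""
    on_time = sum(r.distributed_on <= r.client_deadline for r in reports)
    return {
        "produced": len(reports),
        "on_time": on_time,
        "late": len(reports) - on_time,
        "reruns": sum(r.reruns for r in reports),
        "rerun_causes": Counter(
            r.rerun_cause for r in reports if r.reruns and r.rerun_cause
        ),
    }

# Illustrative cycle: one clean report, one re-run twice and distributed late.
cycle = [
    ClientReport("ACME-Q1", date(2024, 4, 5), date(2024, 4, 4)),
    ClientReport("BETA-Q1", date(2024, 4, 5), date(2024, 4, 8),
                 reruns=2, rerun_cause="back-dated dividend"),
]
print(cycle_summary(cycle))
```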

The purpose of the process was not “naming and shaming,” but to prevent recurrence. It was noticeable that in a significant number of instances the cause of a re-run or of inaccurate data was not an error or mistake by the reporting team, but the result of upstream systems, other departments or even external market conditions. Finding the right things to measure is critical: if we measure something, we change it, because it is human nature to work to improve one’s performance against a benchmark. In physics this is known as the Observer Effect, which Wikipedia summarises as “the mere observation of a phenomenon inevitably changes that phenomenon”.

As an example, at a company I worked for, the IT department had an objective to fix every PC problem within 24 hours and had to produce monthly statistics on its performance. I arrived at work one Monday to find my PC would not start up, so I called the help desk, who raised a ticket and sent someone to fix it. That person arrived with a preconception of what the problem was, and when that proved to be false, they recorded this on the ticket, closed it, and raised a new one for their next idea of what the problem might be. The next day someone else turned up and tried the second fix, which also proved not to work. They updated and closed that ticket and raised a new one for someone to address on the third day…

As their client, I had a PC that did not work for three days, yet their measurements showed they had addressed three tickets, each within 24 hours of its being raised (great results, no?). This is an extreme example of shaping the process to meet the measured objective, but a real one.
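
A few lines of Python (with made-up timestamps) show how the two measurements diverge: the per-ticket figure the department reported and the end-to-end incident duration I experienced as the client.

```python
from datetime import datetime, timedelta

# Three tickets raised and closed for the same underlying incident;
# each new ticket was opened as the previous one was closed.
tickets = [
    (datetime(2024, 5, 6, 9, 0),  datetime(2024, 5, 6, 16, 0)),   # Monday
    (datetime(2024, 5, 6, 16, 0), datetime(2024, 5, 7, 15, 0)),   # Tuesday
    (datetime(2024, 5, 7, 15, 0), datetime(2024, 5, 8, 14, 0)),   # Wednesday
]

SLA = timedelta(hours=24)

# The department's view: every individual ticket met the 24-hour target.
per_ticket_ok = all(closed - raised <= SLA for raised, closed in tickets)

# The client's view: the incident ran from the first ticket to the final fix.
incident_duration = tickets[-1][1] - tickets[0][0]

print(per_ticket_ok)            # True  -- "great results, no?"
print(incident_duration > SLA)  # True  -- the PC was down across three days
```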

The key point is that whatever the measures are, they must measure the right process and provide information that can be used to identify where effort is required to achieve improvement. In my first example there were a couple of occasions where reports had to be re-run because corporate actions had missed a dividend from the previous month. On the face of it, the corporate actions team had messed up, but when we looked into the issue we found that the issuer of the security concerned had announced the dividend this month, back-dated it into the previous month, and nobody in the market knew about it until afterwards. It turned out this was a regular occurrence in that particular market and could be addressed accordingly.

The actual cause of the “problem” was therefore a feature of the markets the account was invested in, rather than errors within corporate actions. This was communicated to the asset manager for that portfolio, who then reviewed the report production timescales with the client and agreed new deadlines. Production of the client reports was moved back to allow sufficient time for late announcements to be captured and processed, preventing the need for re-runs and repeated manual checking.
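
As a sketch of how such a market feature might be handled systematically, the check below flags announcements published after the reporting period they affect; the data structure and field names are hypothetical, purely for illustration.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DividendAnnouncement:
    """Hypothetical record; field names are not from any particular data vendor."""
    security_id: str
    announced_on: date
    effective_date: date  # the period the dividend actually belongs to

def is_back_dated(ann: DividendAnnouncement, period_end: date) -> bool:
    """True if the announcement affects a reporting period that had already closed.

    Reports for `period_end` can only be treated as final once no further
    back-dated announcements are expected, which is what drove the revised
    production deadlines agreed with the client.
    """
    return ann.effective_date <= period_end < ann.announced_on

# Example: a dividend effective in April but announced in May is flagged
# against the April reporting cycle.
ann = DividendAnnouncement("XYZ123", date(2024, 5, 10), date(2024, 4, 25))
print(is_back_dated(ann, period_end=date(2024, 4, 30)))  # True
```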

In complex asset management organizations, there are nearly limitless ways to improve operations through measurement. It’s up to the organization to determine priorities and up to operations and technology leaders to find the right way to measure against these priorities and improve. Beating the benchmark isn’t just for the front office—it should be the ethos across any competitive firm.
