Manager Research: Scoring Systems

Nov 2nd 2017 - by Hilary Wiek, CFA, CAIA

Editor's Note: We are thrilled to have guest author Hilary Wiek, CFA, CAIA, contributing. She has over twenty years of experience in the investment management arena and incredible depth in due diligence and investment research. Most recently, Hilary was Head of Global Equity at Segal Rogerscasey. Prior to that she was the Director of Public & Private Equity for the State of South Carolina Retirement Systems. Hilary graduated from the University of Puget Sound with a degree in Business Leadership and from Case Western Reserve University with an MBA in Economics and Finance. She volunteers with the CFA Institute and its local affiliates as well as the CAIA Association and its curriculum and testing activities.

Hilary is currently working with firms on a project basis relating to investment research, process & procedures, writing & editing, and leadership & ethics. She welcomes comments in reaction to her writing and inquiries about project or permanent employment.



Qualitative Reactions to Interactions

There are many aspects of asset manager research that need to be brought together in order to come up with an overall opinion that might be shared with an interested party: historical quantitative data, qualitative reactions to interactions with an investment manager, and evaluations of a wide variety of factors surrounding skill, incentives, repeatability, and more. It’s complex. It’s hard.

It is tempting to seek an easier way: something aided by computers, rankings, weightings, scoring, and the like. It’s elegant, it’s easy to explain, it allows you to compare managers on an equal footing, and it gives you what looks like a mathematically Right Answer. But it’s bogus.1

Just as a good asset manager has a robust and repeatable process to get through all of the information surrounding its stocks, a good manager research program has a variety of subject areas and underlying factors that need to be evaluated consistently. At a high level, it is the usual suspects like firm, team, and process, but under those are factors like stability, resources, and efficient trading processes.

But how to organize all of these pieces you are assessing? How to turn data and notes into information? If every manager needs to be evaluated in the same broad areas, why not create a number system to make them comparable to each other? Perhaps one should design a system that arrives at a score above which the manager is a great idea and below which people should shy away?

And the answer is...it depends.

There are a number of problems with trying to simplify and standardize the evaluation of investment managers. The first is around weighting. At the high level of firm, team, and process, is one more important than the others? Should they all be equally weighted? Always? The answer is: it depends. In a steady state (when does that ever happen?), you could possibly weight each of these major categories the same across all managers. But this ignores the fact that the way one really adds value in manager research is by applying critical thinking when things are unsteady, or have a higher risk of being that way.

For example, think of the firm going through an ownership change where the founder is leaving or has decided to take the company public. At this moment of change, what is going on with the team or the process becomes less important (though they may of course be impacted by the firm-level issues) and the weight of the evaluation really needs to rest on what’s going on at the firm level. Everything else could be going right: performance, team stability, incentives, commitment to process, etc., but a poorly executed generational transition could sink the firm and thus the strategy. A good manager research program will recognize that one element can sink a recommendation despite all else appearing to be fine.
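To make the weighting problem concrete, here is a minimal sketch (in Python, with purely hypothetical categories, weights, and scores) of how a fixed, equally weighted composite can look comfortable even when a single deal-breaking issue is present:

```python
# Minimal, purely illustrative sketch: hypothetical category scores
# (1 = poor, 5 = excellent) for a manager in a messy ownership transition.
scores = {"firm": 1, "team": 5, "process": 5}

# A fixed, equally weighted composite treats every category the same.
weights = {category: 1 / len(scores) for category in scores}
composite = sum(weights[c] * scores[c] for c in scores)
print(f"Equal-weight composite: {composite:.2f} out of 5")  # 3.67 -- looks passable

# But a single category can be disqualifying regardless of the average.
if min(scores.values()) <= 1:
    print("Red flag: this one issue alone should sink the recommendation.")
```

The point is not the particular numbers, which are invented, but that averaging lets strong scores elsewhere paper over an issue that should be decisive on its own.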

At the more granular level, there are dozens of factors that could be evaluated at a money manager: performance, risk metrics, currency management, trading efficiency, hiring practices, and so much more. But do they all need to be evaluated equally every time? Not necessarily. Think of trading: a manager that turns over a 100-stock portfolio twice a year has a lot more to gain from efficient trading practices than one running a 20-stock portfolio that trades two stocks per annum. Efficient trading is ideal for everyone, but in the first case you would want to interview the traders to understand whether they are following best practices and eking out every basis point from a trade, while in the second case, whether they use best practices or not, the effect on the portfolio will be minuscule. So weighting this factor equally across managers would overstate risks for some managers and possibly understate them for others. Manager research needs to be flexible enough to understand what is important in each situation.
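As a rough, back-of-the-envelope illustration (the five-basis-point execution saving below is an assumed figure, not a benchmark), the turnover difference alone drives how much trading efficiency can matter:

```python
# Rough, illustrative arithmetic only: the execution saving is assumed.
cost_saving_bps = 5  # suppose better execution saves 5 bps per dollar traded

# Manager A: 100-stock portfolio turned over twice a year.
turnover_a = 2.0        # ~200% of portfolio value traded annually
# Manager B: 20-stock portfolio trading roughly two names per year.
turnover_b = 2 / 20     # ~10% of portfolio value traded annually

for name, turnover in [("Manager A", turnover_a), ("Manager B", turnover_b)]:
    impact_bps = cost_saving_bps * turnover
    print(f"{name}: roughly {impact_bps:.1f} bps of annual return at stake")
```

Under these made-up numbers the high-turnover manager has roughly twenty times as much at stake, which is why the same factor deserves very different weight in the two evaluations.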

Finally, the manager research process is based on the findings of a variety of humans. Done properly, more than one person will provide input to a recommendation, but even then, calibrating a finding on an emerging markets equity manager to that of a large cap core portfolio is difficult, as the managers face different challenges and the risks a client would face can vary dramatically in importance. Clients sometimes ask for best ideas, regardless of asset class, but, especially in a large manager research department, the staff looking across asset classes will vary and the findings will have biases and inconsistencies based on who did the work and what was going on at the time.

Consistency is Key

While tailoring a recommendation to a specific investment manager and product is ideal, there are aspects of the manager research process that should be done consistently. It is up to each research program to determine what those standardized elements might be, but these may include a requirement for in-person meetings with multiple decision makers, the completion of due diligence questionnaires to form a baseline of knowledge from which to do a deeper analysis, or the running of an analysis pack from a database provider that starts to build a framework of what the manager has done in a variety of situations. These consistent building blocks to the process may then be complemented by targeted questions that arise from the initial information gathered.

While a robust and repeatable process is as important to manager researchers as it is for the asset managers they assess, it is key to know what needs to be done consistently and which aspects, in the final analysis, require an experienced eye to determine what is truly important to the specific manager in the specific time being evaluated.


  1. The October 25, 2017, Wall Street Journal article, "Morningstar Mirage," was published shortly after this article was first drafted, and it firmly supports the view that scoring systems are highly flawed when used to identify forward-looking success stories.