OPINION | At a recent member information evening in Mt Gambier, questions about how our fund had performed over the last year, and how we rated compared with our competitors, led me to explain to members how the research house racket works.

First Super’s policy is not to supply our data to any of the research houses. This is not because we have anything to hide, but because it wouldn’t be a great use of member resources. Nor would spending member funds on paying to be assessed by these companies and then paying to use their awards logos on marketing collateral.

If we participated in the racket, the results wouldn’t be at all bad.

In fact, last financial year our investment option with a 70 per cent allocation to growth assets would have rated as the top-performing balanced fund in one of the league tables, having returned 13.6 per cent.

Our 3- and 5-year returns would have us in the top 10. The story as it relates to our other investment options is essentially the same.

Pay to play

In the past, when I was working at another fund, I contacted a research company to ask why it had assessed us as being in the bottom half of funds when our net returns were in the top 25 per cent. The answer was that they hadn’t completed an onsite assessment of us.

Naively, I asked them to come and do the assessment. It was at that stage that I was told a substantial fee was attached to coming and “getting to know us”.

We paid the fee, got a ‘qualitative assessment’ and, lo and behold, got the upgrade. As a bonus for the research house, we then also paid to use their ratings logo. For them it was a gift that kept on giving.

I used to be a butcher, and this whole process looked a lot like someone was putting their thumb on the scale.

Wrong criteria

Another difficulty I’ve got with the research houses is the opacity of their assessment processes.

Sure, there are percentage weightings on some websites, but there’s no detail on the criteria. There’s too much subjectivity in the weightings or, as they put it, the ‘qualitative factors’.

At least one research house doesn’t give great weight to past performance, on the basis that it is no indication of future performance. Well, track record, amongst other things, counts when assessing manager performance, and it should count for funds too.

For most members, returns are what it’s all about, so that’s at the heart of why I think the ratings process sucks.

Then there are other things about the ratings racket that just don’t make sense.

Funds are marked down by one research house if they employ humans to answer the phones straight up, rather than leaving members wrestling with an interactive voice response system. There are plenty more examples of research houses deciding what’s good for members and their funds without any engagement with real people to find out what’s important to them.

Opaque disclosure

Further, there is little upfront disclosure on the research house websites of the process they use for assessing funds and the associated costs. All research houses should make clear to consumers which funds have paid to be assessed and that those which haven’t paid have, in effect, been marked down.

Similarly, research houses should make it clear when not all funds are included, meaning their league tables don’t provide a complete picture.

As fund trustees work out how to implement the Australian Prudential Regulation Authority’s member outcomes test, I’d love to see more funds seriously consider whether participating in the ratings racket delivers on member outcomes.
