We’ve hit year‐end and it’s time for the second annual Your Future, Your Super (YFYS) performance test. Last year, 13 funds failed the test and since then, 11 of those 13 have either merged with another fund or are well on the way to merging. At first glance, the performance test seems to be achieving just what it set out to do – removing underperforming funds.
But has it? Underperformance isn’t easy to measure. Traditional peer comparisons are problematic because each fund invests differently, with different levels of risk, so we’re never quite comparing like with like. The YFYS performance test tries to account for this by comparing eight‐year performance to a composite benchmark based on the fund’s strategic asset allocation. But even then, it ignores the value added by asset allocation itself – the main driver of performance – and measures only the implementation of that strategy against asset sector benchmarks, as this is what can be measured reasonably easily. We agree there should be a performance test, but basing the statutory assessment of a fund’s performance on a single measure over a single period is problematic and misses the forest for the trees.
The YFYS performance test
There’s been some discussion about fixing the benchmarks used in the current test. While using different benchmarks for growth and defensive alternatives would help, the main problem with the performance test is not the benchmarks, but its ‘bright line’ nature that uses one number over one period to assess the performance of each fund. That one number doesn’t measure the main driver of performance: asset allocation. It also doesn’t factor in positive changes super funds may have made to their investment models during the eight years to enhance member investment outcomes and future performance.
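To make the ‘bright line’ nature of the test concrete, the mechanics described above can be sketched in a few lines. This is an illustrative simplification, not the legislated formula: the sector names, weights, returns and the 0.50% p.a. failure threshold below are assumptions for the example.

```python
# Simplified sketch of a YFYS-style single-metric test: compare a fund's
# annualised eight-year net return with a composite benchmark built from
# its strategic asset allocation (SAA). The threshold and sector benchmark
# returns are illustrative assumptions, not the legislated values.

FAIL_THRESHOLD = -0.005  # underperform the benchmark by >0.50% p.a. -> fail


def composite_benchmark(saa_weights, sector_returns):
    """Weight each sector's benchmark return by the fund's SAA weight."""
    return sum(saa_weights[s] * sector_returns[s] for s in saa_weights)


def passes_test(fund_return, saa_weights, sector_returns):
    """One number over one period: pass/fail on a single excess return."""
    excess = fund_return - composite_benchmark(saa_weights, sector_returns)
    return excess >= FAIL_THRESHOLD


# Hypothetical fund: 60% equities, 40% fixed interest, 6.1% p.a. net return
saa = {"equities": 0.60, "fixed_interest": 0.40}
sectors = {"equities": 0.080, "fixed_interest": 0.030}  # benchmark p.a. returns
print(passes_test(0.061, saa, sectors))  # → True (benchmark is 6.0% p.a.)
```

The sketch makes the article’s point visible: everything hangs on a single excess-return number, and nothing in it captures whether the SAA itself was a good one.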
Further, while the legislated benchmarks are appropriate for many funds, they’re not appropriate for some funds as they don’t account for subtle but important differences in how funds invest. These differences include the wide range of alternative assets used by funds, ESG issues, lower fixed interest duration and various downside protection mechanisms. Some funds failed the first performance test because of poor performance or high fees but others failed because their investment strategy didn’t align well with the structure of the test. Despite all this, we expect it’s too late to change the nature of the performance test and that it needs to run its course, at least for this year, perhaps with some tweaks to benchmarks. We anticipate very few funds that passed the test last year will fall foul of it this year.
What about choice products?
The proposed performance test for diversified trustee‐directed choice products is slated to apply from 30 June 2022. We’ve outlined some of the problems with applying the performance test to MySuper products due to subtle differences in the way different funds invest. For choice products, these differences are much more prominent. Indeed, many choice products are ill‐suited to assessment using the standard benchmarks. Examples include socially responsible options, real return portfolios and conservative portfolios with low duration and defensive alternatives to reduce risk. While we could exclude some of these investments from the test, for example socially responsible options, it’s better to come up with a test that can cater for the full breadth of choice investment options.
We’ve already expressed concerns with the simple ‘bright line’ test for MySuper, but applying the test to trustee‐directed choice products is even more problematic. If there’s to be a performance test for choice products, it should include a range of different metrics that provide more information on performance. These metrics could include the current performance test metric but over various periods (say five, seven and eight years), risk‐adjusted returns over various periods, an administration fees metric and possibly others. This approach would provide different lenses on a fund’s performance and, together, a much fuller picture. We believe the MySuper performance test should also transition to this multi‐metric approach over time.
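A multi-metric assessment of this kind could be sketched as a simple majority rule. The metric names, thresholds and the majority cut-off below are illustrative assumptions, not a proposed statutory formula (a fees metric, for instance, would use a ‘lower is better’ comparison instead).

```python
# Illustrative sketch of a multi-metric assessment: a product is scored
# against several metrics (e.g. excess return over 5/7/8 years and a
# risk-adjusted return), each with its own threshold, and flagged only if
# it misses the majority. All names and numbers here are hypothetical.


def assess(metrics, thresholds):
    """Return (passed_count, total_metrics, majority_passed).

    A metric passes when its value meets or beats its threshold;
    'majority_passed' is True when more than half the metrics pass.
    """
    results = {name: metrics[name] >= thresholds[name] for name in thresholds}
    passed = sum(results.values())
    return passed, len(results), passed * 2 > len(results)


# Hypothetical product: passes 3 of 4 metrics, so no underperformance letters
metrics = {"excess_5y": 0.002, "excess_7y": -0.001, "excess_8y": 0.001,
           "risk_adjusted": 0.45}
thresholds = {"excess_5y": -0.005, "excess_7y": 0.0, "excess_8y": -0.005,
              "risk_adjusted": 0.40}
print(assess(metrics, thresholds))  # → (3, 4, True)
```

Unlike the single-metric test, a borderline result on one lens (here, the seven-year excess return) doesn’t by itself condemn the product; the overall picture decides.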
The current performance test is not fit‐for‐purpose for trustee‐directed choice products. Indeed, we believe there’s a strong case for Treasury to put a pause on the performance test for trustee‐directed choice products to provide time to formulate a methodology that’s fit‐for‐purpose.
Monitoring choice products is still important
Even if we paused the current legislated test for these products, we must retain the close scrutiny on choice products that the performance test would provide. We believe this can be done using the choice product data that funds will provide to APRA from 30 June 2022, which will form the basis of its choice heatmap to be released later this year.
This data could be used to calculate a range of metrics to better assess fund performance, rather than one number over one period. If a trustee‐directed choice product fails to meet, say, the majority of these metrics, it should be required to issue underperformance letters to members unless it can mount a compelling counter‐argument to APRA – for example, that it’s a socially responsible option that has performed well against relevant benchmarks and delivered strong risk‐adjusted returns, or that the fund has made significant changes in recent years that have led to better performance.
However, if a fund has just performed poorly or has high fees, it should fail the test and be required to issue letters to members. Importantly, the onus would be on the product provider to convince APRA of its case. Introducing this APRA review step is important to allow the wide range of choice strategies to be treated appropriately. We’ve also previously suggested that an APRA review step should be adopted for the YFYS performance test.
APRA has a key role to play
Involving APRA in the process should work well given that it’s shown itself in recent times to be tougher and more proactive than it may have been in the past. It’s also collecting much more data on super funds to give it a more comprehensive view of each fund. Treasury could clearly outline the parameters of APRA’s review to provide it with the necessary powers to conduct its review role with appropriate clarity and authority.
Another complication of applying the current performance test to trustee‐directed products is that some funds will have investment options that pass and others that fail, even though they are managed using the same investment process, just targeted at different risk profiles. Our expectation is that conservative and high growth options will be more likely to fail the test for very different reasons, while investment options in the middle of the risk spectrum are less likely to fail. Many members will be confused after receiving a letter saying that some of their options have failed the test but others have passed, even though it’s the same team managing the money.
Finally, there’s an argument that because choice products have been chosen by a client, a performance test isn’t warranted. But given that many choice products are chosen by advisers on behalf of their clients, and long‐term performance has such a big impact on member outcomes, we believe it’s important to include choice products in a performance test. However, if the test is applied to choice products, it should be expanded to include multiple metrics that allow a more robust assessment of performance, and APRA should review the test outcome at the provider’s request in cases where the various metrics may not provide a fair assessment of the product’s performance.