We may be at the early stages of a new concept: the growth of the ‘academic portfolio’. Although still confined to a relatively small group, a number of global institutional investors are turning to academic research papers for new sources of investment insight to generate alpha. So could the next great investment portfolio be sourced from the shelves of a university grad student?

Today, we are at an important confluence of external factors driving investment decisions. Years of global financial repression, challenging economic growth and low yields have led to an expansion in alternative investments. Research indicates that over the next three years 51 per cent of pension funds worldwide plan to increase exposure to funds of hedge funds, 50 per cent to real estate, 46 per cent to private equity and 41 per cent to infrastructure.

Alongside this, the high costs and underperformance by the investment industry as a whole have pushed many asset owners to make more investment decisions in-house. This is especially true here in Australia, where super funds are at the vanguard when it comes to internalising the investment function.

Under these circumstances it’s hardly surprising that good investment research is in high demand. Many institutional investors are trying to find that unique insight to give their portfolio an edge. Quality academic papers could hold the key.

But how much economic, financial and other social science research is published each year? Well, the answer is a lot. In the past 12 months alone, more than 65,000 research papers have been published on the Social Science Research Network (SSRN) website. So the real challenge is panning for the golden nuggets of insight hidden within the flow of information from the halls of academia.

When you think of using academic research to enhance the research process, or to create new strategies altogether, you typically think of quantitative or systematic investing. This would, though, be unfair to many fundamental strategies. Many of the tenets of portfolio management are rooted in academic research. Everything from the Morningstar style box to discounted cash flow models was created through a data-driven, peer-reviewed, open-discussion, academic approach.

Research at the heart of everything

Many of these papers, however, were written decades ago. It may seem that most of the ideas have already been codified and integrated into the modern world. However, with faster and greater access to large data sets, our understanding of risk and diversification – and our ability to take advantage of the underappreciated – is growing. The world, and the portfolios through which we view the world, will continue to evolve.

Research sits at the heart of what we do, not only as financial professionals but also in our daily lives. Whenever we plan to buy something, one of the first things we do is jump onto Google to do some ‘research’. The internet has given us access to a library of product reviews and peer opinion. Whether we are replacing the toaster, upgrading our golf clubs or buying a new car, the access we have to online product reviews and online discussion forums means the first step is often to check what other people think.

But in the face of the vast array of information sources available to us, how do we know we are choosing the right ones? How do we assess whether to trust the opinions we’re reading? A good decision requires an understanding of the choices available, an evaluation of those choices and, importantly, awareness of any bias on the part of the author.

The same is true for investment professionals. Investment and academic research can provide useful insight, help us allocate capital, and assist us in avoiding regret. And if we are not researchers ourselves – or if we have time constraints – then we have to select and apply the ideas of others. Here our selection criteria are key. This is especially important when the decisions we make are felt by the investors and fund members we serve.

Many of us are familiar with the seminal work by UC Berkeley’s Hal Varian (now chief economist at Google) and the late Peter Lyman on the acceleration of information. However, we don’t need a study to tell us that the rate of information we receive has far surpassed our human ability to read and understand it.

Quantity not necessarily quality

So with all this information available to us, how do we know we are choosing the right sources? Although thousands of papers are published each quarter, it is fair to say that most academic work is not suitable for use in portfolio development. Since the quantity of information says nothing about the quality of information, we need to overcome two challenges in order to build a process to use academic research.

First, we need to identify what kind of work we should be looking for when developing new ideas. Second, with the tremendous volume of research that is published each year, we need a method for identification. How do we narrow down the potential universe of papers into a manageable selection with a high likelihood of success?

Academic papers that may at first seem appropriate for development can generally be characterised into three different groups. The first we’ll call ‘academic descriptive’. Many of these ideas can be classified as event studies and may seem like an easy place to start. They describe or quantify a contemporaneous relationship between indicators. The danger in choosing these for development, however, is that they often describe correlation rather than causation.

We can characterise the second group of papers as ‘trading predictive’. This group uses robust data sets, and the methodology describes how the strategy was created and how it incorporates the real-life constraints of practical implementation. The results are equally robust, containing both in-sample and out-of-sample conclusions. Often these papers go further and add lags to the variables used to devise the strategy; they may even discuss how to implement it. This type of paper may seem ripe for harvest, but it is typically written by a practitioner who has already developed and implemented the strategy, and who is publishing the work for justification or marketing purposes.

We’ll call the third group ‘academic predictive’. Rather than being written by a practitioner or an academic who is employed by an investment house, these papers are written by professors or graduate students to assess the predictive power of a model or theory they have hypothesised. These papers are the proverbial needles in the haystack. The author may not have access to large or robust private data sets, and their analysis may be based on ex-post results that do not consider constraints and potential market impact. But the hypothesis looks promising.

By devising a method for filtering out the first two groups and identifying those with potential in the third group, investors may succeed in creating a subset of ideas with great promise.

Technology is the answer

This leaves us with our final challenge: how do we know if a paper is worth reading? No investment team, no matter how well resourced, could realistically read 65,000 papers a year. So how can we narrow down all the potential ideas into just those with the highest likelihood of success? The answer is technology.

The MIT technologist/futurist Andrew McAfee often says that a powerful future is one where humans and machines form partnerships, each specialising in what they’re good at. Herein lies our answer. Work is already underway to develop systems that harness this partnership to enable searches.

Dr Stephen Lawrence, who presented on this subject at AIST’s ASI conference on September 6, runs a fintech group at State Street called Quantextual, which is dedicated to combining machine learning and human expertise. Computers are much better than humans at recognising patterns. Humans meanwhile can train algorithms to excel at interpreting and summarising these patterns. The key is to match the two.

There are certain characteristics or patterns that we are looking for when searching for the right academic paper. Computers are not as good at answering questions that require context, for example: ‘How practical is the conclusion?’ For this part, we need human expertise. At the same time, it is impractical for humans to sift through all the research published. Furthermore, technology is not yet at a point where machines can tell us whether research is good. By combining the quantitative power of machine learning with the context of human expertise, we can create a powerful approach to identifying and developing the next great portfolio.
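To make the machine half of this partnership concrete, the triage step described above could be sketched as a simple text classifier that sorts paper abstracts into the three groups and keeps only the ‘academic predictive’ candidates for human review. This is a minimal, hypothetical illustration – the group labels come from this article, but the keyword lists and function names are illustrative assumptions, not any specific system such as Quantextual.

```python
# Hypothetical sketch: keyword-based triage of paper abstracts into the
# three groups described above. The keyword lists are illustrative
# assumptions; a real system would use trained models and human labels.

GROUP_KEYWORDS = {
    "academic descriptive": ["event study", "correlation", "contemporaneous"],
    "trading predictive": ["out-of-sample", "transaction costs", "implementation"],
    "academic predictive": ["hypothesis", "predictive power", "model"],
}


def classify(abstract: str) -> str:
    """Assign the group whose keywords appear most often in the abstract."""
    text = abstract.lower()
    scores = {
        group: sum(text.count(kw) for kw in keywords)
        for group, keywords in GROUP_KEYWORDS.items()
    }
    return max(scores, key=scores.get)


def shortlist(abstracts: list[str]) -> list[str]:
    """Keep only the 'academic predictive' papers - the needles in the
    haystack - so that a human analyst reads a far smaller set."""
    return [a for a in abstracts if classify(a) == "academic predictive"]
```

In practice the filter would be a trained model rather than keyword counts, but the division of labour is the same: the machine narrows 65,000 papers to a shortlist, and the human supplies the contextual judgement the machine lacks.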

 

Dan Gerard is head of advisory solutions for Asia Pacific at State Street.
