There are powerful forces changing quantitative communities around the world. The product of heavy competition in asset management, a desperate need to stand out in an overcrowded field, and readily available datasets and coding libraries, the new class on the block feels familiar, yet very different at the same time.

We start with the ‘AI’ start-ups taking the world by storm. The recipe is becoming standard: we take a team of nimble, fresh-faced technologists, add a splash of marketing and one snappy, fresh name (usually unrelated to the business), and bake well with plentiful cheap VC capital. We hope that a unicorn rises. The pitches ring similar: here is the market (it’s big), here is what we do to disrupt it (it’s amazing), this is how we will change the world (boom!).

But what was meant to be the beginning of the end for legacy businesses (‘a fintech threat’) morphed into a ‘fintech alliance’, as these companies pivoted to enablement rather than competition and ended up selling their technologies to incumbents. It turns out that getting paying customers was harder than building technology. For asset management, this meant intelligent automation and alternative data groups, rather than better investment decisions. Robo-advisors, for example, didn’t necessarily promise to be more insightful investors, but to make access to capital markets quicker and cheaper.

This is a marked change from the previous wave of quantitative sciences that began in the 1980s, and may be a sign of things to come.

The legacy quant

Casting our minds back a few decades, we would see the quantitative wave begin, led by the academic world in the 1980s. These early successes relied on the Nobel Prize-winning mean-variance frameworks established in the 1950s, and built data-driven forecasting models on top of them. Fast forward a decade, and Long-Term Capital Management (LTCM), a fund with a heavily academic, top-heavy pedigree, was printing money under the banner of rigorous asset price modelling and quantitative decision-making. These models supported fundamental views (cash flows and financials matter) but removed the noise and behavioural biases, focusing on data-driven analysis rather than the indomitable gut feel of a senior portfolio manager. Implementation was done carefully, with sophisticated portfolio construction and risk modelling techniques and a careful optimisation and trading process that emphasised diversification and a kind of factor-driven logic. The activity of asset management was handled from beginning to end, in direct competition with the incumbents. You’ll find a more recent incarnation of these products embedded in the notions of Smart Beta and Alternative Risk Premia (ARP), both of which rely on this kind of ‘factor-based’ empirical approach.
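
For readers less familiar with the mechanics, the sketch below is a deliberately minimal illustration of the mean-variance portfolio construction at the heart of that process. The asset names, expected returns, covariances and risk-aversion figure are all made up for the example, and the unconstrained solution is simply rescaled to sum to one rather than formally constrained.

```python
# A toy mean-variance optimisation in the spirit of the 1980s quant process.
# All inputs are hypothetical, chosen only to illustrate the mechanics.
import numpy as np

# Expected annual returns for three hypothetical assets (the forecasts)
mu = np.array([0.06, 0.08, 0.05])

# Covariance matrix of asset returns (the risk model)
sigma = np.array([
    [0.040, 0.010, 0.000],
    [0.010, 0.090, 0.020],
    [0.000, 0.020, 0.030],
])

risk_aversion = 3.0

# Unconstrained mean-variance optimum: w* = (1 / lambda) * inverse(Sigma) * mu
raw_weights = np.linalg.solve(sigma, mu) / risk_aversion

# Rescale so weights sum to one (a simplification, not a constrained optimum)
weights = raw_weights / raw_weights.sum()

print(dict(zip(["asset_a", "asset_b", "asset_c"], weights.round(3))))
```

In practice the same machinery sits inside far larger, constrained optimisers with transaction-cost and exposure controls, but the diversification and factor-driven logic described above is the same idea.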

It wasn’t all smooth sailing, of course. Quant ‘crises’ revealed how opaque, overhyped and misunderstood these strategies were. Innovation came in bursts, as practitioners from neighbouring sciences, from quantum physics to biology, took a shot at designing better mousetraps for stocks. The problem at the core was (and is) that the best forecast models are not much better than guesses, which makes them difficult to distinguish from noise. I learnt this early: joining the industry as a young quant in 2007, I watched markets and models crumble, imprinting a permanent sense of humility and a need to continually disbelieve and improve models.

The new class

However, the recent ‘class’ of data-driven analysts is different. They carry a computer science gene, with little to no social science framework. Rebranded as data scientists, their expertise lies as much in setting up information structures and the conduit algorithms that move data through models as in the models themselves. With little focus on domain expertise, they practise intelligent automation, data analytics and, in some cases, more complex machine learning methodologies. The solutions to the forecasting problems of asset pricing are not necessarily improved, but, as before, the goal posts seem to have shifted. Like the quants of the 1990s, they bring a fresh approach, in this case less of the (human) story and more of the experimental statistics, side-stepping some incumbent beliefs, from econometrics to asset pricing models. Design, accessibility, clarity and data enablement for incumbent decision makers are the practical bread and butter of this discipline. In that sense, it is not a competing discipline, at least not yet.

Unlike the previous generation, or neighbouring applications like medicine or real-time fraud detection, this discipline isn’t looking to challenge incumbents head-on, or to replace them, but rather to embed itself into organisations as intelligent automation and an efficiency booster. This is why it has primarily been embraced by the middle and back offices of organisations.

For the front office, their viewpoint is far more practical, with a noticeable absence of the economic theory and Pareto-optimal frameworks that economists brought to the foundations of quant managers in the 1980s. Data scientists focus less on personal conviction, as investors would, and more on the technological bliss of solving problems with speed and agility. They don’t fit neatly into the current narrative of investors and personal convictions, and I have yet to see them take a leading role in discussions in the mainstream media. Their conviction lies in the seas of data that feed their models, and model validation is an empirical exercise, not a conversation.

Be careful what you wish for

History doesn’t repeat, but it does rhyme. Probabilistic disciplines come along, offer a new and exciting path, create a bit of buzz, and alter the path forward in a permanent way. The best analogy I can think of comes from Yuval Harari’s bestseller ‘Sapiens’, in which he outlines the move from foraging to active farming. Farming enabled much greater populations to be sustained on efficient foods like wheat, but dramatically reduced the diversity of food consumed. Importantly, it was an irreversible development: once the larger population established itself, it could no longer go back to foraging. The path of the human race was changed.

The recent wave of data scientists, bolstered by the success of quasi-monopolistic global organisations like Google and Amazon, is confident and well resourced. Industries will likely embrace their gifts because they make us more competitive. As we do, not only do we get better at our trade, but the competitive axioms also change. Quantitative funds didn’t take over the world, in part because they competed head-on. The new breed of data science will bleed into industry under the guise of enablement, and is therefore likely to make a much more marked impact on the shape of our industry, and on ‘what matters’.

 

Michael Kollo is a former general manager of quantitative solutions and risk at HESTA