In this article, I am going to take a detour from my regular sermons – I am going to get off my high horse of fundamental valuations, no-growth earnings power value and other such jargon, and ride the horse of technical analysis/ time series analysis/ machine learning/ pattern recognition. By education and background, I am not a technical analyst. I generally keep my long-term investor hat on, as opposed to that of a seat-of-the-pants trader. Yet, in India, the market seems obsessed with technical analysis. The popular financial media spends more time talking about moving averages than valuations. Anecdotally too, at least in India, stocks seem to respond to technicals, especially after large moves and in the absence of material news. As a result, I have felt the need for a technical analysis ‘sounding board’ – some framework to offer an alternative viewpoint on the stocks I am looking at. This comes from behavioral finance: I am as affected by prospect theory as the next guy – a loss of 1% in a stock I hold causes me as much emotional pain as a gain of ~2% causes pleasure. I might like a stock fundamentally, but is there a chance of buying it cheaper? So, in the spirit of keeping myself in check, and of trying to avoid being the greater fool by buying/ selling a stock at a price which might ultimately prove prohibitive, I started to approach the problem with an engineering mindset, something I am comfortable with.

Say I am looking at a chart – what is it that I am really looking at, apart from the squiggly lines? What information does a chart encapsulate? Can this information be used to predict the future? Are there patterns which will repeat themselves? Can I throw enough charts at a machine learning system and teach it to discriminate between a potentially rising stock and a potentially falling stock – in the same way that I could teach a machine learning program to discriminate between a cat and a dog, given enough data? Think pattern recognition! This goes against my core mindset of handicapping, relative value, fundamentals guiding stock prices and so on, and into the domain of prediction. If I were able to predict accurately, I would not be writing this article – rather, I would be sipping daiquiris on my own personal yacht while my system secretly minted me money. Yet, with the advances in machine learning, I was curious to see what this hot new approach would throw up. With so much data and computing power available so cheaply, I spent a year building my system.

The first issue was data and its frequency. I certainly don’t run an HFT shop, so tick-level data was out of the question. Neither am I a professional chartist, so 15/ 30/ 60 minute timeframes were ruled out too. I am perfectly content with daily data to try to divine the future – a simple OHLCV dataset works for me. The other issue was how much data. If I use data that is too old, I could be feeding a different regime to my ML program. If I use too little, I might not be feeding it enough patterns to be statistically relevant. In the end, I settled on a few years of data (as opposed to, say, 10 years, which would include the Global Financial Crisis and its skewed, perhaps irrelevant behavior), based purely on the limitations of my computing power and the time required to process the information.
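As a rough sketch of that data decision, the snippet below trims a daily OHLCV history down to a recent window. The three-year cutoff, the column names and the synthetic price series are my illustrative assumptions, not the actual setup:

```python
import numpy as np
import pandas as pd

# Synthetic daily OHLCV data standing in for a downloaded price history.
dates = pd.bdate_range("2010-01-01", "2019-06-30")
rng = np.random.default_rng(42)
close = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, len(dates))))
df = pd.DataFrame({
    "Open": close * (1 + rng.normal(0, 0.002, len(dates))),
    "High": close * 1.01,
    "Low": close * 0.99,
    "Close": close,
    "Volume": rng.integers(100_000, 1_000_000, len(dates)),
}, index=dates)

# Keep only the most recent few years, to avoid feeding an older
# regime (e.g. the GFC era) into the model.
cutoff = df.index.max() - pd.DateOffset(years=3)
recent = df.loc[df.index > cutoff]
```

The same windowing would be applied per stock across the whole universe before any features are computed.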

So, coming back to the information in the charts – I wanted to move the information from the visual domain (where a convolutional neural network would be suited) to a more tabular domain where SVMs and decision trees could be used. So I started considering features like distances from periodic highs and lows, momentum/ trend/ volatility indicators, volume trends and metrics, as well as metrics which place the current stock’s performance in the context of the performance of the rest of the universe. This is feature engineering. The problem with this approach is that my results depend on the features I present to the ML algo. The other approach would be to use deep learning (neural networks, RNNs, CNNs etc.) with multiple layers, which would obviate the need to engineer features, since the network would configure itself to take care of this issue. The problem I faced with deep learning is that, philosophically, I believe each stock’s data tells its own story. The behavior of a large cap like RELIANCE may not be appropriate for a small cap like VENKYS, which means I would have to create as many neural nets as there are stocks in my universe (1000+). Besides, any neural network worth its salt needs data points in the millions, as opposed to the thousands I want to use. Given the enormity of the exercise and my personal limitations as a data scientist, I bypassed deep learning and stuck to shallow machine learning with ensemble approaches, with features I thought covered a wide gamut of indicators.
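A minimal sketch of what such engineered features might look like, assuming daily bars with Close and Volume columns. The window lengths (252/ 20/ 10 days) are my illustrative choices, not the actual feature set, and the relative-to-universe metrics would additionally need the full cross-section of stocks:

```python
import numpy as np
import pandas as pd

def engineer_features(bars: pd.DataFrame) -> pd.DataFrame:
    """bars: daily data with 'Close' and 'Volume' columns, DatetimeIndex."""
    feats = pd.DataFrame(index=bars.index)
    # Distance from the trailing 52-week (~252 trading day) high and low.
    feats["pct_off_52w_high"] = bars["Close"] / bars["Close"].rolling(252).max() - 1
    feats["pct_off_52w_low"] = bars["Close"] / bars["Close"].rolling(252).min() - 1
    # Momentum: 20-day rate of change.
    feats["roc_20"] = bars["Close"].pct_change(20)
    # Volatility: 20-day standard deviation of daily returns.
    feats["vol_20"] = bars["Close"].pct_change().rolling(20).std()
    # Volume trend: today's volume vs. its 10-day average.
    feats["vol_ratio_10"] = bars["Volume"] / bars["Volume"].rolling(10).mean()
    return feats

# Example on synthetic data
rng = np.random.default_rng(7)
dates = pd.bdate_range("2016-01-01", periods=400)
bars = pd.DataFrame({
    "Close": 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 400))),
    "Volume": rng.integers(100_000, 1_000_000, 400),
}, index=dates)
feats = engineer_features(bars).dropna()
```

Each row of `feats` then becomes one tabular observation that an SVM or tree-based model can consume.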

Given that I am using stock price returns as the outcomes, it is relatively easy to label the data. So data collection, while non-trivial, was not that difficult. Selecting the different algos/ packages for pushing the data through was also not that difficult. There is definitely a ‘learning curve’ in machine learning: I understand the different approaches conceptually – it is what goes on inside, and the interpretation of results, that is the biggest challenge. Also needed was wrapping my head around data normalization/ standardization, principal component analysis and the like. This exercise was inherently one of creating a statistical black box – something which predicts how a stock is going to perform given its history. And by default it is a garbage-in, garbage-out system. As long as I don’t forget that the output from the algo is due only to the data I have fed it, I am doing fine. There is no guarantee that a pattern the system saw in the data will be repeated. But what I can do, and like about it, is that it allows me to calculate the odds. I can calculate that, given the current pattern, in the past the stock has been up by 5% in 5 days, say, 60% of the time. This is a nice cushion when I look at things. As a live example, the system currently thinks that RELIANCE has a very high probability of mean-reverting to lower levels after the rare big move last week. The fundamental half of me could think that, with lower debt levels, on a sum-of-the-parts basis, the stock could rise further. Yet statistically – coolly, unemotionally, mathematically, and with the blessings of the buzzwords ML/ AI – the system reminds me that now might not be the time to buy. Emotional investing vs pure math: as long as I am aware that I make an emotional decision whenever I decide to buy or sell (I ‘like/ dislike’ a stock), and I have an unemotional system to either back or refute me, I am happy with the investment process. Welcome, Quantamental!
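To make that black box concrete, here is a hedged sketch assuming scikit-learn: label each day by whether the stock rises at least 5% over the next 5 sessions (the article's example numbers), standardize the features, reduce with PCA, and fit an ensemble classifier whose predicted probability plays the role of the historical odds. The pipeline settings and synthetic data are illustrative, not the actual configuration:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

def forward_return_label(close: pd.Series, horizon: int = 5, thresh: float = 0.05) -> pd.Series:
    """1 if the stock is up at least `thresh` over the next `horizon` days, else 0."""
    fwd_ret = close.shift(-horizon) / close - 1
    return (fwd_ret >= thresh).astype(int)

# Synthetic stand-ins: 12 engineered features per day, plus labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 12))
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)

model = Pipeline([
    ("scale", StandardScaler()),        # normalization/ standardization
    ("pca", PCA(n_components=6)),       # principal component analysis
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
model.fit(X[:800], y[:800])
# "Odds" that the +5%-in-5-days pattern plays out, per recent day.
probs = model.predict_proba(X[800:])[:, 1]
```

In practice the labels would come from `forward_return_label` applied to each stock's closes, aligned with that stock's feature rows.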

So how did my system do? Well, it definitely threw up some interesting finds in 2017, when the whole market was on adrenaline and momentum was a big driver of stock performance. Maybe the system recognized that momentum indicators were responsible for stock performance. In 2018 and 2019, there has been a big flight to quality. Here the system has not been designed to detect falling dominoes, because it receives no information on the junk hidden in the closets. Yet even in the current market it has been able to detect, without human emotion, when a stock looks oversold or overbought. This is its biggest value. Is it always successful, and has it been able to detect all such cases? No. But at the end of the day, a system is only as smart as its creator! I still have to keep my rational thinking on when the system thinks that a bankrupt JETAIRWAYS is a buy :-)

How can this system be extended? If we have enough history, we could use features which tie company fundamentals (duly normalized as ratios) to stock performance. So instead of using OHLCV and its derivative indicators, we use PEs, EV/EBITDA multiples etc. at different points in time, and see how the stock performs in the future and in the context of the rest of the market. This is again limited by the length of the available history.
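One way such fundamental ratios could be "duly normalized" is cross-sectionally: z-score each ratio across the universe at a given date, so the model sees relative cheapness rather than raw multiples. The tickers are from this article but the numbers below are invented purely for illustration:

```python
import pandas as pd

# Hypothetical point-in-time snapshot of valuation ratios across a universe.
snap = pd.DataFrame({
    "ticker": ["RELIANCE", "VENKYS", "JETAIRWAYS"],
    "pe": [20.0, 12.0, float("nan")],   # loss-making: no meaningful PE
    "ev_ebitda": [11.0, 7.5, 15.0],
})

# Cross-sectional z-scores: how rich/ cheap each stock is vs. peers that date.
for col in ["pe", "ev_ebitda"]:
    snap[col + "_z"] = (snap[col] - snap[col].mean()) / snap[col].std()
```

These z-scored columns would then be fed to the same classifier, with forward returns from each snapshot date as the labels.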

Finally, in a recent report, AQR argues that machine learning ‘will likely apply to problems involving optimizing portfolio construction, such as risk management, transaction cost analysis, and factor construction at first. That is because finance and markets are different from other areas where ML has come to offer up breakthrough research’. What this means is that markets are inherently noisy: a cat can easily morph into a dog. Besides, markets show reflexivity – the perpetuation of self-fulfilling prophecies. Markets are also adaptive. Machine learning, at least in its current avatar, cannot handle this complexity. AI is still not fully sentient – Skynet is still under construction. Overall, when painting the picture before a buy or sell decision, I still find it very useful to check the output of the system and keep emotion under control.