Artificial Intelligence – The Asymptote

Only AGI can prevent a correction. 

  • More companies and more models, all of which perform within a fairly narrow range yet consume billions of dollars in investment capital, point to a correction and a shake-out that only a few will survive.
  • The advent of generative AI powered by large language models (LLMs) has turned the technology sector, and many others, on their heads, as the prevailing view is that we are now much closer to the point at which machines reach superintelligence (AGI) and replace humans at most economic tasks.
  • My view is that while LLMs have two transformational characteristics (or superpowers), we are no closer to AGI than we were 12 years ago when machine learning came into its own, meaning that there is a mismatch between expectations and reality.
  • This is not new: it is the fourth time this has happened in AI, and the same pattern was seen in 1998-2000 with the Internet and in 2018 with autonomous driving.
  • I hold this view because there is no empirical evidence that the machines understand the causality of the tasks they are given or that they have the ability to reason.
  • I view both of these as essential characteristics for AGI, and without them, the improvements in the performance of LLMs will slow to a crawl; there are signs everywhere that this is already beginning to happen.
  • For example, Anthropic has just launched the latest version of its model, Claude 3.5 Sonnet, with great fanfare, stating that it is the best-performing model available.
  • While this is true, the margins that Anthropic cites over its competitors are small, with the widest being in coding (92.0% vs. 84.1%) and math problem-solving (71.1% vs. 57.8%).
  • The narrowest margins occur at the top end of performance and include multilingual math (91.6% vs. 87.5%) and grade-school math (96.4% vs. 90.8%) (see here).
  • These margins are significantly narrower than when the race began with GPT-3 in 2020 and point to the asymptotic nature of performance improvements over time.
  • By contrast, the models are getting orders of magnitude larger with every generation, and it is only Nvidia’s ability to deliver large cost savings through new silicon, and to split those savings with its customers, that keeps model growth within the realm of the barely affordable.
  • Even with Nvidia’s cost reductions, the costs of training and inferencing multi-trillion-parameter models are growing far more quickly than performance is improving, which is unsustainable.
  • Furthermore, another OpenAI executive has launched yet another start-up to compete with OpenAI, while OpenAI itself admits that what it has in the labs is not a huge jump over what is available today.
  • This means that there will be a large increase in the supply of models that all perform at roughly the same level, which can only lead to price competition and a correction when everyone realises that the business models are greatly overstated.
  • Generative AI has two superpowers: the ability to use natural language as a man-machine interface, and the ability to automatically ingest, store, cross-reference and retrieve unstructured data in multiple formats.
  • These superpowers will deliver substantial benefits and productivity enhancements, allowing plenty of new businesses to flourish, but they will not deliver superintelligent machines.
  • It is this misconception that has driven valuations to ludicrous levels, and at some point relatively soon, the market will realise it.
  • This is why the only thing that can prevent a correction is AGI, and there is no sign of it anywhere on the horizon.
  • This will trigger a shakeout in which OpenAI is likely to be acquired by Microsoft and Anthropic by Amazon, while many others merge or simply go out of business.
  • Nvidia will suffer the least, as it is the only company making real money from generative AI right now; even with a correction in demand for its processors, it will fare far better than those offering generative AI services for $20 per month.
  • Hence, if I were forced into a direct investment in the generative AI sector, I would choose Nvidia, but I prefer to look more laterally at inference at the edge or at nuclear power to run all of the new data centres that are springing up.
  • I have positions in both of these themes.

RICHARD WINDSOR

Richard is the founder and owner of the research company Radio Free Mobile. He has 16 years of experience in sell-side equity research. During his 11-year tenure at Nomura Securities, he focused on equity coverage of the global technology sector.