Meta Platforms Q1 24 – Bad habits

Zuck returns to a bad habit.

  • Meta reported reasonable results, but its habitual tendency to spend resurfaced and spooked the market, sending the shares down 15%. I can understand the reaction because, based on what it launched last week, Meta should be less capital-intensive than its peers.
  • Q1 24 revenues / EPS were $36.5bn / $4.71, slightly ahead of estimates of $36.1bn / $4.32, but when it came to expense guidance there was bad news.
  • Capex guidance was raised to $35bn – $40bn from the previous estimate of $30bn – $37bn to support its AI ambitions, while losses at Reality Labs will increase materially to new all-time highs.
  • Reality Labs is the Meta division tasked with creating its version of The Metaverse, but Meta is claiming that Reality Labs increasingly overlaps with its AI activities, which doesn’t quite make sense to me.
  • Mr Zuckerberg explained this stating: “so it is true that more of the Reality Labs work, like I said, is sort of focused on the AI goals as well”, which is about as woolly and vague as it gets.
  • Consequently, it is impossible to tell what is really going on here: if Reality Labs resources are being redirected to the company’s AI efforts, then the expenses for those resources should have been moved out of Reality Labs.
  • Hence, on the face of it, this looks like a badly disguised return to the old bad habit of profligate spending, dressed up as AI in the hope that the market will not care.
  • The market wasn’t fooled and sent the shares down 15% anyway.
  • On the AI front, Meta’s roughly 12% increase in capex (comparing the midpoints of the old and new guidance ranges, as sketched after this list) also makes no sense when one looks at the direction its AI efforts are taking.
  • What is remarkable about Llama 3 is its size, which at 70bn parameters is unchanged from Llama 2.
  • This is in complete contrast to everyone else, whose models seem to get bigger and bigger with every generation.
  • This is why Meta’s competitors need to keep spending on cloud computing so aggressively, but with smaller models, Meta should be much more efficient.
  • To be completely fair, Llama 3 is trained with much more data than before, so the amount of compute required for training will be much greater, but as the focus moves to inference, its greater efficiencies should bear fruit.
  • Competing models are 10x the size or larger, so when it comes to storage and inference compute, Llama 3 should be far more efficient.
  • Hence, why Meta needs to increase capex by around 12% is a bit of a mystery.
  • Instead, Meta should be focused on running its smaller model at a far lower cost than its rivals and on encouraging developers to adopt it, as their services will be much cheaper to run.
  • This is why I am somewhat nonplussed by these increases: they do not make sense and, given Meta’s habit of over-spending, need a much better explanation.
  • As a result, the 2024 consensus EPS estimate for Meta of $20.15 may now be too high and I would not be surprised to see a round of downgrades.
  • Assuming a 10% cut to estimates and factoring in the 15% fall in the share price, the shares are trading on 23x 2024 PER (a rough working is sketched after this list), which is at the bottom end of the developed market peer group, and so I do not think this is the beginning of another rout in the share price.
  • However, there are cheaper options out there in the market, meaning that I do not see this correction as an opportunity.
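
For readers who want to check the capex arithmetic, the sketch below simply compares the midpoints of the two guidance ranges quoted above; the ranges are from the text, and the midpoint comparison is my assumption about how the roughly 12% figure is derived.

```python
# Back-of-envelope check of the capex guidance change, using the ranges
# quoted above ($30bn - $37bn previously, $35bn - $40bn now).
old_low, old_high = 30.0, 37.0   # previous capex guidance, $bn
new_low, new_high = 35.0, 40.0   # updated capex guidance, $bn

old_mid = (old_low + old_high) / 2   # 33.5
new_mid = (new_low + new_high) / 2   # 37.5

increase = new_mid / old_mid - 1
print(f"Midpoint capex increase: {increase:.1%}")   # ~11.9%, i.e. roughly 12%
```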
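
The valuation arithmetic in the last two bullets can be reproduced roughly as follows. The consensus EPS of $20.15, the hypothetical 10% cut and the 15% share price fall are from the text; the pre-results share price of roughly $495 is my own assumption for illustration.

```python
# Rough reconstruction of the 23x 2024 PER figure.
consensus_eps_2024 = 20.15
cut_eps = consensus_eps_2024 * (1 - 0.10)             # ~$18.1 after a 10% downgrade

assumed_price_before = 495.0                          # assumption, not from the text
price_after_fall = assumed_price_before * (1 - 0.15)  # ~$421 after the 15% fall

per_2024 = price_after_fall / cut_eps
print(f"Implied 2024 PER: {per_2024:.0f}x")           # ~23x, consistent with the text
```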

RICHARD WINDSOR

Richard is the founder and owner of the research company Radio Free Mobile. He has 16 years of experience in sell-side equity research. During his 11-year tenure at Nomura Securities, he focused on equity coverage of the Global Technology sector.