OpenAI – Strange Action & AI Conference

OpenAI acts like someone with something to hide.

  • OpenAI’s shiny new model is already stoking controversy as OpenAI is threatening to eject users who have the temerity to try and figure out how the system works in a sign that either OpenAI is on the road to AGI and doesn’t want to lose its lead or that users that will work out o1 is not reasoning but pattern matching.
  • OpenAI released o1 (aka Strawberry) with great fanfare stating that this leap in reasoning capability was another step along the road towards superintelligent machines.
  • However, it failed to offer any concrete evidence that o1 can reason beyond performance on standardised testing and its scientific paper-style press release is not much more than a marketing product launch (see here).
  • Consequently, everyone is itching to find out how o1 works and whether it is really reasoning or not.
  • Since the arrival of ChatGPT a whole new branch of computer science has been created which is commonly referred to as prompt engineering.
  • This refers to a series of techniques to ensure that the LLM returns the best and most accurate answer possible.
  • Two subsets of this are known as jailbreaking where users attempt to circumvent controls put into the model by its creator and prompt injection where the user tricks the model into revealing its structure or its methodology.
  • No one has really worried too much about the use of these techniques until now as users who are trying to figure out how o1 works using these sorts of techniques are being sent cease and desist warnings by OpenAI threatening ejection.
  • In my mind, these are not the actions of an entity whose stated goal of to develop AGI and ensure that it is used for the benefit of all mankind.
  • Instead, this looks like a company with a lot of money at stake which has something to hide.
  • There are two possibilities.
    • First, trade secrets: where OpenAI has somehow managed to get o1 to reason from first principles and is moving to protect its trade secrets.
    • I think that this is pretty unlikely as OpenAI has provided no hard evidence of reasoning whatsoever and o1 still fails in areas where it should not if this capability had been instilled into the model (see here).
    • Furthermore, if OpenAI was true to its stated mission, it would have published all of its methods and shared its great revelation with the world.
    • Instead, the company is being increasingly secretive which implies that all is not as it seems.
    • Second, simulation: where instead of real reasoning, OpenAI has created a model that can simulate reasoning even more effectively than the models that have come before it.
    • There is more evidence for this than there is of a breakthrough in my opinion as the model represents no improvement over previous models in many tests outside the math and coding tests that were proudly presented in the press release.
    • Furthermore, the model still gets many of the simplest reasoning tasks wrong further implying that this is a model that was trained to ace the tests rather than to properly reason from first principles.
    • With a pre-money valuation of $150bn (minimum 30x 2025 revenues) at stake, it becomes clear why OpenAI doesn’t want the Internet suddenly working out that it is a simulation rather than reality.
  • Hence, I think that the debate on whether statistical systems can reason remains far from resolved.
  • The ability to pass the simple reasoning test of “if A=B then it follows that B=A on made-up data”, would be a big step towards resolving the debate.
  • However, so far, all LLMs have catastrophically failed this most simple test.
  • Only when the basics are on solid ground can one have faith that the reasoning is real rather than an increasingly sophisticated simulation.
  • Either way, this is getting a lot of chatter and excitement which will lead to more investment meaning more demand for Nvidia which remains the safest way to play the generative AI craze directly.
  • However, I continue to prefer the adjacencies of inference at the edge (Qualcomm) and nuclear power as their valuations are far more reasonable and both will still perform well even if generative AI fails to live up to expectations.

AI Conference

  • RFM business partner, Counterpoint Research is holding a conference that will examine the impact of AI from foundry to cloud and everything in between.
  • This is being held at the Freemont Marriott in Silicon Valley on October 2nd 2024.
  • Qualcomm, MediaTek and Micron are signed up as event sponsors and will be speaking at the event.
  • Anyone interested in attending can find more details here.

RICHARD WINDSOR

Richard is founder, owner of research company, Radio Free Mobile. He has 16 years of experience working in sell side equity research. During his 11 year tenure at Nomura Securities, he focused on the equity coverage of the Global Technology sector.