Artificial Intelligence – The Sledgehammer

LLM controls come at a cost to functionality.

  • OpenAI has finally launched its voice assistant, but even after months of delay, key features are missing, underlining just how difficult it is to control LLMs when they have no idea what they are doing and their creators have no idea how they work.
  • OpenAI’s voice assistant is now available, but features like computer vision, music generation and impersonations are either absent or have been limited to the point that much of the functionality and fun of using it has been curtailed.
  • This is symptomatic of what RFM refers to as The Black Box Problem that exists in all AI systems that use a form of deep learning.
  • All systems that are based on deep learning are black boxes in that their creators can see the data going in and the answers coming out but have no idea what caused the neural net to provide the answer that it did.
  • Instead, the technique of reinforcement learning is used to push the model to provide the desired answer, but even then, how the model arrives at that answer remains a mystery.
  • Furthermore, because these are statistically based systems, they have no way of telling the difference between an answer that is correct and one that is made up.
  • This makes them very difficult to control, meaning that functionality can’t really be fine-tuned and has to be controlled at a much higher level.
  • These are the filters that OpenAI and others are including in their LLMs, and instead of controlling certain aspects of an activity, the filters will often stop it from working altogether (a minimal sketch of this appears after this list).
  • This has the effect of reducing the functionality of AI available to the user and the customer, as can be seen with Apple’s slow roll-out of Apple Intelligence and OpenAI removing promised features from its assistant at launch.
  • This characteristic is being seen across the board, and many LLMs will now refuse to execute tasks, requests or inquiries that are completely innocuous but fall foul of a filter that has been put in place to prevent bad behaviour.
  • This is unlikely to stop, and the risk is that as AI regulation becomes increasingly onerous, the functionality of LLMs is degraded to ensure that AI does not do anything that the regulators will frown upon.
  • An extreme example of this is in China where all publicly available LLMs must guarantee that they will not produce content that conflicts with socialist values.
  • The penalties for non-compliance are significant, and as a result, the functionality of Chinese LLMs is not as good as that of their foreign peers.
  • The rules in the EU are also far more wide-reaching, and both Meta Platforms and Apple have declined to make their AI services available in the EU as a result.
  • The net result is that controlling LLMs is like swatting a fly on a bone-china plate with a sledgehammer, meaning that much finer control mechanisms are required.
  • This is highly problematic because LLMs are the biggest black boxes of all, and it is not until their creators understand how LLMs arrive at the answers that they produce that they will be able to control them finely.
  • This is just one factor of many that is going to make it difficult for generative AI to live up to the hype and expectations that are being set for it.
  • Hence, I continue to think that we are in a bubble and that investors in OpenAI at its current pre-money valuation will write down the investment on their books in due course.
  • With direct investments being overhyped and overpriced, I continue to prefer the adjacencies of inference at the edge (Qualcomm) and nuclear power, as their valuations are far more reasonable and both will still perform well even if generative AI fails to live up to expectations.
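
To make the sledgehammer analogy concrete, below is a minimal, hypothetical sketch in Python of how a coarse filter ends up refusing innocuous requests. Everything in it (the blocklist, the function names, the stand-in model) is illustrative rather than a reflection of how OpenAI’s systems actually work; real filters are typically classifier-based, but the structural problem is the same: because the filter sits outside the black box and cannot see why the model answers as it does, its only reliable option is to switch a capability off wholesale.

```python
# Hypothetical illustration of coarse, filter-based control of a black-box LLM.
# Nothing here reflects any vendor's real implementation.

BLOCKED_TOPICS = {"impersonation", "music generation"}  # illustrative blocklist


def run_model(prompt: str) -> str:
    """Stand-in for the black box: data goes in, an answer comes out,
    and nobody can trace why the network produced it."""
    return f"[model output for: {prompt!r}]"


def coarse_filter(prompt: str) -> str:
    """Refuse any prompt that touches a blocked topic, however innocuous.

    The filter cannot reason about intent because the model's internals
    are opaque, so it matches surface patterns and blocks the whole request.
    """
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        # The sledgehammer: the entire capability is switched off because
        # there is no mechanism for fine-tuning *how* the model responds.
        return "Sorry, I can't help with that."
    return run_model(prompt)


# A completely innocuous request falls foul of the filter anyway:
print(coarse_filter("Write an essay on the history of impersonation in theatre"))
# -> Sorry, I can't help with that.
```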

RICHARD WINDSOR

Richard is the founder and owner of the research company Radio Free Mobile. He has 16 years of experience working in sell-side equity research. During his 11-year tenure at Nomura Securities, he focused on the equity coverage of the Global Technology sector.