OpenAI – Christmas Deception

Sora is no gift and is obscuring other issues.

  • OpenAI’s 12 days of Christmas is headlined by Sora, but all the fuss and buzz is creating a smokescreen for a number of more serious issues that point to ongoing structural weaknesses at OpenAI, weaknesses which could once again endanger its existence.
  • OpenAI kicked off 12 days of launches and announcements on December 5th, and so far the biggest release is the general availability of Sora.
  • Sora is the video generation algorithm that produces incredible-looking footage but clearly demonstrates that it has no understanding of physics or of the scenes it is generating.
  • This has now been made available to paying users, but even for testers it is proving to be incredibly expensive.
  • For example, to generate a 20-second 1080p clip (the maximum duration), one needs to be a member of ChatGPT Pro, which costs $200 per month.
  • The $200 gives the user 10,000 video generation credits, but these credits will not last long, as a 16:9, 1080p, 20-second video costs 2,000 credits, meaning that the user can create just 5 per month (see the worked example after this list).
  • Lower resolutions are far cheaper, but the algorithm constantly refuses to generate anything that it thinks has even a remote possibility of getting OpenAI into trouble.
  • This is a classic example of the control problem that I have pointed out many times: because these models are almost impossible to control, they are prevented from undertaking many tasks that would make them much more useful.
  • Furthermore, although the video quality remains best in class, Sora demonstrates its lack of understanding of causality with levitating objects, strange artefacts and objects failing to perform the tasks they have been commanded to carry out.
  • At 1080p resolution, Sora remains the best, but others like Google’s Veo, Runway and Pika are becoming available, and I suspect that prices are going to fall hard and fast.
  • While Sora is generating all of the attention, other events are occurring that are being overlooked, and these, I think, underline just how unstable OpenAI is and how much scope there is for strife, infighting and a potential collapse.
  • The latest change (which critics will see as more broken promises) is OpenAI’s move to remove the AGI (artificial general intelligence) clause from the agreement that governs its relationship with Microsoft.
  • Under the current agreement, if and when OpenAI creates AGI (defined as the point at which machines can “outperform humans at most economically valuable work”), Microsoft loses access to this technology.
  • In practice, this means that all of the profits from AGI go to the non-profit for the benefit of all mankind, and Microsoft gets nothing beyond that point.
  • OpenAI has reiterated this many times, and it is also in its charter, creating the potential for yet more internal conflict and strife.
  • The problem is that OpenAI subscribes to the “bigger is better” philosophy of AI, meaning that it is an endless bonfire of resource consumption which regularly needs huge amounts of cash to keep it going.
  • However, as the provider of the fuel for the fire, Microsoft has some leverage, and so, according to the FT, there are now discussions underway to remove that clause (see here).
  • This raises the likelihood that AGI will no longer be for the public good but exploited for corporate gain.
  • One can argue the merits of this in either direction, but what matters here is the conflict that this will create and the fact that one cannot trust anything that OpenAI says.
  • Furthermore, the definition of AGI will be determined by OpenAI’s board, which creates a massive conflict of interest and is certain to trigger a bitter and contentious lawsuit with Microsoft should the board ever make that determination.
  • It is almost certain that the reason for ditching this provision would be to secure further investment from Microsoft, which would naturally be very reluctant to invest more while having its return capped.
  • The good news is that I don’t think this provision is going to be triggered any time soon, as I think we are no closer to AGI than we were 10 years ago, but there are other issues from which this slew of releases is distracting the media.
  • At the beginning of 2024, OpenAI watered down its prohibitions on the use of its technology in military and defence applications, replacing them with a prohibition on using OpenAI to “harm yourself or others”.
  • During the year, OpenAI said that it would work with the US Pentagon on cybersecurity but not weapons, and in an October blog post watered the provision down again to “help protect people”.
  • On December 4th, OpenAI announced that it is partnering with Anduril, a defence company which makes a range of hardware and software products designed for use on the battlefield.
  • Once again, the ethics and morality of these changes can be argued in both directions, but it is yet another sign that the company is railing against the non-profit shackles that prevent it from becoming a gigantic global corporation.
  • The take-home message here is that the path OpenAI is on remains completely clear: it will become a proper for-profit corporation that is likely to end up being acquired by Microsoft.
  • This acquisition is likely to take place when the AI bubble finally bursts, which could happen at any time, although there is still precious little sign of it.
  • This is why I have little desire to be anywhere near this sector and prefer the far more reliable and reasonably priced adjacencies of inference at the edge and nuclear power.
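As a quick sanity check of the credit arithmetic cited above, here is a minimal Python sketch. The 10,000-credit ChatGPT Pro allowance and the 2,000-credit cost of a 16:9, 1080p, 20-second clip are the figures quoted in the bullets; the function and constant names are illustrative assumptions of mine, not OpenAI’s actual pricing API.

    # Minimal sketch of the Sora credit arithmetic (illustrative only;
    # these names are assumptions, not OpenAI's pricing API).

    def clips_per_month(monthly_credits: int, credits_per_clip: int) -> int:
        """Number of clips a monthly credit allowance covers."""
        return monthly_credits // credits_per_clip

    PRO_MONTHLY_CREDITS = 10_000  # ChatGPT Pro allowance ($200/month)
    COST_1080P_20S = 2_000        # 16:9, 1080p, 20-second clip

    print(clips_per_month(PRO_MONTHLY_CREDITS, COST_1080P_20S))  # -> 5
    # Effective cost per maximum-length clip: $200 / 5 = $40.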

RICHARD WINDSOR

Richard is the founder and owner of the research company Radio Free Mobile. He has 16 years of experience in sell-side equity research. During his 11-year tenure at Nomura Securities, he focused on equity coverage of the Global Technology sector.
