
Are OpenAI’s Exit Documents Too Restrictive for Departing Employees?


What Is Happening at OpenAI?

With the recent announcement of GPT-4o, the company's flagship product gained the ability to speak and recognize visuals. But for close followers of the company, the hoopla around the upgrade was overshadowed by the revelation that the company's departing employees may recognize visuals but are not allowed to speak.

Soon after the debut of GPT-4o, OpenAI's co-founder and chief scientist, Ilya Sutskever, left the company. Sutskever also co-led the Superalignment team, which oversaw safety concerns around the company's AI products.

Hours after Sutskever's departure, the other co-leader of the Superalignment team, Jan Leike, also left. The exit of both safety leaders hinted that OpenAI might be changing its safety priorities, and speculation grew that the two had resigned in protest.

Altman Has Apologized for Equity Policy Missteps

Many observers credit Altman with polished public statements but criticize him for acting in ways that contradict them.

Today, Altman posted on X about how the company handles employee equity. He acknowledged that the previous exit documents contained a provision allowing the potential cancellation of equity, but said the company never actually clawed anything back and that such a clause should never have appeared in any of its documents.

Altman also said his team had already spent the past month fixing the exit paperwork, and that anyone who left after signing the old documents could contact him to have them corrected. He added that this was one of the few times he has been genuinely embarrassed while running the company.

But, as noted earlier, Altman has long been accused of contradictions between his words and his actions. For example, he has been criticized for courting billions in Saudi Arabian investment to build AI accelerators, even though the country is a monarchy whose powerful elites could use the technology to control society.

This happened at the same time he was championing AI safety and signing a letter urging the US government to slow rapid AI development to protect humanity from harm. He was also shrewd enough to turn a non-profit into a technocratic startup and then put commercialization into overdrive; that is no simple feat, and arguably requires some degree of deception to pull off.

Not Everything Is Simple at OpenAI

There is also some history here. When the company's board fired CEO Sam Altman last year, shortly before his swift return to the position, Sutskever was among the board members who made that decision.

But Sutskever quickly reversed course and came to regret his decision; he joined the employees campaigning for Altman's return and signed the letter demanding it.

After Altman's return, Sutskever's position remained unclear, and he, like several others, was removed from the board. Altman nonetheless expressed sadness over Sutskever's departure.

The former chief scientist said he is leaving to pursue something personally meaningful to him. While we don't know what that project is, it may well relate to AI safety, which is widely seen as the central focus of Sutskever's entire tenure at OpenAI.

Leike's announcement, on the other hand, was blunt: he simply said, "I resigned," and later explained his concerns in a series of posts on X. Altman says the company is correcting its exit papers, but another former employee, Daniel Kokotajlo, said he gave up his equity rather than sign the agreement. He announced that he left the company because he had lost "confidence that it would behave responsibly around the time of AGI."