December 15, 2023
OpenAI: Money v. Ethics
When ethics become a business decision, the discussion instantly becomes about money, not ethics.
We probably won't know the whole story about what happened with Sam Altman's firing and return at OpenAI until the books begin to roll out. And even then, we may not learn the truth because money has a way of changing people's stories.
Since the drama has died down, it's been reported that the Board had real concerns about Altman's honesty, integrity, and management tactics, and that his "move fast and break things" approach may not be the best when dealing with generative AI.
It's generally not a good idea to break things for the sake of innovation when it affects people's health, safety, careers, and lives.
Even when you don't break anything, technology has consequences. When IBM developed punch-card technology to manage data, one of its German subsidiary's biggest clients was the Nazi regime, which used the ability to efficiently store, search, and sort information in its campaign to murder six million Jews.
Today, we see GenAI being used to flood social media with misinformation, manipulate people and their beliefs, and create and inflame conspiracy theories—on all sides. In some ways, it's the ultimate Us v. Them accelerator. And when there's money to be made, this is a feature, not a bug.
At what cost?
It's entirely possible the Board made the right decision for the company. It reversed course in a matter of days because of an extraordinary response that was effectively dramatized and publicized. The Board was suddenly faced with a choice between the potential failure of the company and the potential danger to humanity.
When it became about the money, they chose money because they had to. Boards of Directors have a legal duty to maximize the return for investors, not to do the right thing.
How many other ethical questions are resolved this way because it's required by law? It's time to think about our priorities and maybe move slow and not break things. Or people.
For employers, the OpenAI story also illustrates how out of touch Boards can be and raises other employment law considerations.
Know your employees.
On November 17, OpenAI, the leading artificial intelligence company behind ChatGPT, announced that it had removed Sam Altman as the company’s CEO. Mr. Altman has long been a well-respected entrepreneur in the Silicon Valley tech community. The decision to remove Mr. Altman was made by the firm’s Board of Directors after they determined Mr. Altman had not been “consistently candid in his communications with the board.” Within days, Microsoft, a major investor in OpenAI, announced that it was hiring Mr. Altman and others from OpenAI to lead a “new advanced AI research team.” The news shocked many, not the least of whom were OpenAI’s employees.
In a November 20 letter to the OpenAI Board, OpenAI employees objected to Mr. Altman’s removal and requested his reinstatement. The employees argued that the Board “lack[ed] competence, judgement, and care for our mission and employees,” and threatened to resign en masse to join Mr. Altman’s new team at Microsoft. More than 700 of OpenAI’s 770 employees signed the letter. In response to the news, Salesforce CEO Marc Benioff offered a job to any OpenAI researcher who resigned.
The next day, OpenAI retreated, announcing it had reached an agreement with Mr. Altman for him to return as CEO, coupled with the departure of most of the Board members who had forced him out.
The episode provides lessons for employers.
The OpenAI situation seems to have had a happy ending. Mr. Altman returns to a job he says he loves, and the employees get back the boss they want. A crisis averted, with lessons for employers of all kinds. Now let’s hope OpenAI can return to focusing on making sure artificial intelligence doesn’t take over the world.